Where surface water supplies are insufficient, groundwater is often used for irrigation (Figure 4.2.3). Agriculture uses about 70% of the groundwater pumped for human use globally and about 53% of the groundwater pumped in the US (USGS: Groundwater use in the United States). In some parts of the world, groundwater is pumped at a faster rate than natural processes recharge the stored underground water. Groundwater use where pumping exceeds recharge is non-renewable and unsustainable.
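To see why pumping in excess of recharge cannot last, it helps to write the aquifer as a simple water balance: storage changes each year by recharge minus pumping. The sketch below is a minimal illustration of that bookkeeping; the storage, recharge, and pumping numbers are hypothetical and do not describe any real aquifer.

```python
# Minimal aquifer water-balance sketch (all values hypothetical).
# Storage changes each year by (recharge - pumping); whenever pumping
# exceeds recharge, storage declines until the aquifer is depleted.

storage_km3 = 500.0   # hypothetical stored groundwater
recharge_km3 = 2.0    # hypothetical natural recharge per year
pumping_km3 = 5.0     # hypothetical withdrawals per year

years = 0
while storage_km3 > 0:
    storage_km3 += recharge_km3 - pumping_km3
    years += 1

print(f"Pumping {pumping_km3} km^3/yr against {recharge_km3} km^3/yr of "
      f"recharge exhausts storage in about {years} years.")
```

Note that with pumping at or below recharge the loop never ends, which is the arithmetic meaning of "renewable": the balance only runs out when withdrawals exceed what natural processes return.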
Another problem that can occur in some aquifers with excessive groundwater pumping is compaction of the aquifer and subsidence of the ground surface. When water is pumped from the pore spaces in the aquifer, the pore spaces compress. The compression of millions of tiny pore spaces in hundreds of meters of aquifer material manifests at the surface as subsidence: the ground elevation actually decreases. Subsidence from groundwater pumping is irreversible and leaves the aquifer in a condition where it cannot be recharged to previous levels.
Our reliance on and depletion of groundwater resources is becoming a global concern as aquifers are being pumped at unsustainable rates in the US (Figure 4.2.4) and all over the world. Enhanced irrigation efficiencies and conservation measures are being implemented when possible to prolong the life of some aquifers. Unfortunately, groundwater is often the water resource that we turn to in times of drought or when other surface-water resources have been depleted. For example, in California during the recent drought, farmers drilled wells and used groundwater to save their crops when surface water resources were not available.
Figure 4.2.3.: Groundwater withdrawals, by State, 2005. Credit: USGS: Groundwater use in the United States
Click for a text description of the groundwater withdrawals image
This map of the U.S. shows total groundwater withdrawals by state, in millions of gallons per day. California has the highest at 20,000 - 60,000. Nebraska follows at 10,000 - 20,000. Texas and Arkansas are 5,000 - 10,000; Mississippi, Florida, Colorado, Kansas, Arizona, Oregon, and Idaho are each 2,000 - 5,000. The rest of the country is 0 - 2,000.
Figure 4.2.4.: Map of the United States (excluding Alaska) showing cumulative groundwater depletion, 1900 through 2008, in 40 assessed aquifer systems or subareas. Colors are hatched in the Dakota aquifer where the aquifer overlaps with other aquifers having different values of depletion. Credit: USGS: Groundwater depletion
Knowledge Check
Read the following article:
Rosenberg, David M., Patrick McCully, and Catherine M. Pringle. "Global-scale environmental effects of hydrological alterations: introduction." BioScience 50.9 (2000): 746-751.
Knowledge Check (flashcards)
Please take a few minutes to think about what you learned from reading the article, then consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: What is meant by hydrologic alteration?
Back: Hydrologic alteration is a human-made disruption to natural river flows, including dams and diversions. Hydrologic alteration can also include pumping from groundwater, but this article focuses on large dams.
Card 2:
Front: What are the major impacts of hydrologic alteration by dams?
Back: Dams can affect both aquatic and riparian ecosystems, block fish passage, change temperature, affect offshore marine areas, contribute to the extinction of species, and affect nutrient cycling. Some rivers, such as the Nile and the Colorado, no longer reach the sea.
Card 3:
Front: What is the connection between agricultural food production and hydrologic alteration of the world’s river systems?
Back: Globally, 70% of human water consumption is for irrigation, so agriculture is a significant contributor to the impacts of dams and diversions on our river systems.
5.2.03: Water Quality Impacts
Runoff from agricultural areas is often not captured in a pipe and discharged into a waterway; rather it reaches streams in a dispersed manner, often via sub-surface pathways, and is referred to as non-point source pollution. In other words, the pollutants do not discharge into a stream or river from a distinct point, such as from a pipe. Agricultural runoff may pick up chemicals or manure that were applied to the crop, carry away exposed soil and the associated organic matter, and leach materials from the soil, such as salts, nutrients or heavy metals like selenium. The application of irrigation water can make some agricultural pollution problems worse. In addition, runoff from animal feeding operations can also contribute to pollution from agricultural activities.
The critical water quality issues linked to agricultural activities include:
• Fertilizers – nutrients (nitrogen and phosphorus)
• Eutrophication – dead zones
• Pesticides
• Soil erosion
• Animal Feeding Operations
• Organic matter
• Nutrients
• Irrigation and return flows
• Salinity
• Selenium
Check Your Understanding
Review the following fact sheet on agricultural impacts on water quality:
Protecting Water Quality from Agricultural Runoff, 2005, EPA Fact Sheet on Agricultural Runoff
Knowledge Check (flashcards)
Please take a few minutes to think about what you just learned from the Fact Sheet, then consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: What is nonpoint source pollution?
Back: Nonpoint source pollution derives from diffuse sources, such as agricultural chemicals and fertilizers. The pollutants are picked up by water as it runs off over the surface or travels through soils and groundwater, and are carried to rivers and lakes.
Card 2:
Front: What agricultural activities contribute to nonpoint source pollution?
Back: Poorly located or managed animal feeding operations; overgrazing; plowing too often or at the wrong time; and improper, excessive, or poorly timed application of pesticides, irrigation water, and fertilizers.
Card 3:
Front: What are the major water pollutants contributed by agricultural activities?
Back: Sediments, nutrients, pathogens, pesticides, metals, and salts.
Flow Depletion and Salinity
The Colorado River in the southwestern U.S. is an excellent case study of a river that is highly utilized for irrigation and agriculture. A majority of the Colorado River’s drainage basin has an arid or semi-arid climate and receives less than 20 inches of rain per year (Figure 4.2.5), and yet the Colorado River provides water for nearly 40 million people (including the cities of Los Angeles, San Diego, Phoenix, Las Vegas, and Denver) and irrigates 2.2 million hectares (5.5 million acres) of farmland, producing 15 percent of U.S. crops and 13 percent of livestock (USBR 2012). Much of the irrigated land is not within the boundaries of the drainage basin, so the water is exported from the basin via canals and tunnels and does not return to the Colorado River (Figure 4.2.6).
The net results of all of these uses of Colorado River water (80 percent of which are agricultural) in both the U.S. and Mexico are that the Colorado River no longer reaches the sea, the delta is a dry mudflat, and the water that flows into Mexico is so salty as a result of agricultural return flows that the U.S. government spends millions of dollars per year to remove salt from the Colorado River.
Many farmers in the Colorado River basin are working to use Colorado River water more efficiently to grow our food and food for the animals that we eat. Watch the video below and answer the questions to learn more about farming in the Colorado River basin.
Figure 4.2.5.: Average annual precipitation of the Colorado River basin. Data are United States Average Annual Precipitation, 1961-1990 published by Spatial Climate Analysis Service, Oregon State University; USDA - NRCS National Water and Climate Center, Portland, Oregon; USDA - NRCS National Cartography and Geospatial Center, Fort Worth, Texas. Credit: Map by Gigi Richard
Figure 4.2.6.: Map of the Colorado River basin showing areas outside of the basin using Colorado River water Credit: USBR 2012
Check your Understanding
Watch the following video, then answer the questions below.
Video: Resilient: Soil, water and the new stewards of the American West (10:13)
Resilient: Soil, Water and the New Stewards of the American West
National Young Farmers Coalition
Click for transcript of the Resilient: Soil, water and the new stewards of the American West video
Narrator: A drop of water from a sprinkler on a quiet Los Angeles street. A shower head in a Las Vegas hotel. Agricultural land in California's Imperial Valley. Where does all this water come from? The Colorado River. In 1922, representatives from seven states gathered at Bishop's Lodge, New Mexico, to sign the Colorado River Compact, an agreement on how to allocate water in this precious river system. But that river had more water then than it does today. The Colorado River Basin touches the lives of every American. The river system runs through seven states in the US and two in Mexico, and supplies water for over 36 million people. It also irrigates over five million acres of cropland and provides eighty percent of our winter produce, all from one river. And agriculture is the first to feel the pressure. At the headwaters of the Colorado River, farmers and ranchers are creating a toolbox of resilience. They save water with efficient technology and by building healthy soil.
Brendon Rockey, Rockey Farms, Center, Colorado: My grandpa always had a philosophy on this farm that you have to take care of the soil before the soil can take care of you, and he just felt like we had gotten away from that. That's the number one thing with everybody, is yield, yield, yield. Everybody wants just big production, you know, so that's why you want to dump on the fertilizer, kill off anything that poses a threat. It's all about production. We put more of an emphasis on quality. And what's really nice is when you put the emphasis on quality, the quantity usually comes along with it.
Narrator: And he also uses less water. How? By managing his soil more efficiently and working with nature instead of against it. Brendon rotates his potato crops with green manure, or cover crops, that enhance soil health while reducing his dependence on pesticides, fertilizers, and water.
Narrator: Unhealthy soil lacks life. Often a crust forms on its surface. When a crop is watered, very little soaks into the soil. Instead, it sits on top and is left to evaporate or run off. This land often has to be watered more frequently to get water to the crops. Healthy soils teem with life and are often built when farmers plant a mix of cover crops that add nutrients to the soil. When these plants die they become organic matter which helps store water in the soil. That means farmers can irrigate less, and have more certainty in times of drought.
Brendon Rockey: The reason we got into cover cropping was a response to a drought. Now that we've brought in more diverse crops that have diverse root systems, which actually helps benefit the water use efficiency as well, we've regenerated the soil to the point now where I'm growing a potato crop on about 12 to 14 inches of irrigation water per year. We're focusing on the soil, we're investing in the soil, and we're bringing the functionality of the soil back up to its optimum range.
Mike Jensen, Homegrown Biodynamic Farm, Bayfield, Colorado: A farm after 20 years should have much better soil than when it started. The best thing I do for my land is cover cropping. It rejuvenates the soil keeps everything happy, gets all the flora and fauna in balance. It's not about production this year, it's about production for the next 30 years.
Mike Nolan, Mountain Roots Produce, Mancos, Colorado: One thing I've learned from a bunch of folks, old-timers I've worked with is, do your best to not ever have any bare ground, nothing open, no open soil. I mean even in nature, even in the desert, technically there are things covering the ground. There's things, fungi and bacteria, that are holding the ground together. So what I did about three weeks ago is I planted out this oats crop. I’m not gonna harvest this for the seed or anything, but what it's going to do, it's going to hold moisture in here.
Cynthia Houseweart, Princess Beef Company, Hotchkiss, Colorado: Right now we're in full bloom, but what I like to see is a variety of plants. I don't want to just see straight alfalfa, I want to see grass and, and clover. I don't want bare ground. If we didn't have irrigation water, we would be a desert. This would be sagebrush, cedar trees. This, the water is what creates our livelihood. We graze during the growing season. So the conventional thing is, you move your cows off your pastures, grow them and cut them for hay. What we do instead of cutting them for hay, we graze them.
Narrator: Cynthia waters, using a center pivot. As it moves across her fields, the cattle follow behind eating fresh grass.
Cynthia: And the things they trample in, and their manure, adds to the soil, feeds the soil. It breaks down, turns into humus. The soil becomes more like a sponge and can suck up water that we put on it and rain, so the soil improves, which means the plants grow better and then our cows look better.
Dan James, James Ranch, Durango, Colorado: When you build topsoil, you increase the capillary action of the soil's ability to retain water; and the less frequent you're applying your water, the more those roots have to go after that water as it recedes into the ground. And so now you have all these roots below the surface, and all of a sudden here comes your cow. She comes in and she clips that off. Now your plant's this high, and the plant sheds the same proportion of roots. Now you're adding organic material and you're growing topsoil.
Narrator: Strengthening the soil is also a concern of Steve Ela, a fruit grower in Hotchkiss, Colorado. With precise tools like micro sprinklers and permanent drip irrigation, Steve can use water precisely when and where he needs it most, and his soil is healthy enough to efficiently deliver that water to his crops.
Steve Ela, Ela Family Farms, Hotchkiss, Colorado: For us on the farm, it's been the difference between first using furrows, then micro sprinklers, and now drip. It's been a bit of an evolution of thinking. So for me, it's not that one system is really better or worse, but it's an evolution of thinking, of trying to manage our water better, trying to use the system of irrigation management and cover cropping to manage our weeds, and also just to grow better fruit and healthier trees. Yes, it's expensive on the upfront cost, but it's a system we can then use for 20 years. It's very efficient. I think it probably saves us that much, you know, in water.
Narrator: It's innovation that saves water and money, while increasing soil fertility. It's also innovation that includes technology. Water data delivered by weather satellites, GPS, and even smart sensors like those used by Randy Meaker, a Colorado wheat and corn grower. He uses cover crops to improve his soil and by monitoring soil moisture, he can more effectively use the center pivots to reduce water use.
Randy Meaker: There are huge efforts going on right now, trying to figure out how we in the western United States can solve the shortages of water due to drought conditions. There's two ways to keep water in a bucket, and one is to put more water in at the top; the other one is to take less water out at the spigot. People in the lower Basin States, where the population centers are, they're looking for us to supply them more water. But what we're looking for is a responsible use from them. What good is it for me to be restricted if I realize that we're still irrigating lawns, we're still washing cars?
Narrator: Water is the lifeblood of our Western landscape. Farmers and ranchers are as essential to it as the water itself. The water challenges these farmers face are many, but across the country they gather to share their water knowledge and provide each other with valuable support. They build community and grow good food, while stewarding both their land and their water. They are the water stewards of the Colorado River Basin.
Knowledge Check (flashcards)
Please take a few minutes to think about what you just learned, then consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: How does the Colorado River touch the lives of nearly every American?
Back: 80% of winter produce in the US is grown with Colorado River water (as of 2014).
Card 2:
Front: What practices are introduced in the film that can increase water use efficiency when growing irrigated crops?
Back: Center pivot, Microspray and drip irrigation, Cover crops, Soil health
Card 3:
Front: How can healthy soil reduce the amount of water used to grow crops?
Back: Healthy soils are more permeable and can store more water, so more water soaks into the ground and is held in the soil, where it is available to plants. This increases the soil's ability to retain water.
Card 4:
Front: How do cover crops help conserve water?
Back: Cover crops help improve soil health (see the previous answer).
Dead Zone in the Gulf of Mexico
Agricultural runoff can carry pollutants into natural waters, such as rivers, lakes, and the ocean, with serious ecological and economic consequences, including the creation of areas with low levels of dissolved oxygen, called dead zones, caused by pollution from fertilizers. Nutrients, such as nitrogen and phosphorus, are elements that are essential for plant growth and are applied to farmland as fertilizers to increase the productivity of agricultural crops. The runoff of nutrients (nitrogen and phosphorus) from fertilizers and manure applied to farmland contributes to the development of hypoxic zones or dead zones in the receiving waters through the process of eutrophication (Figure 4.2.7).
Figure 4.2.7.: Eutrophication. Credit: EPA: Mississippi River/Gulf of Mexico Hypoxia Task Force
Watch the following videos from NOAA’s National Ocean Service that show how dead zones are formed and explain the dead zone in the Gulf of Mexico:
The nutrients that make our crops grow better also fertilize phytoplankton in lakes and the ocean. Phytoplankton are microscopic organisms that photosynthesize just like our food crops. With more nitrogen and phosphorus available to them, they grow and multiply. When the phytoplankton die, decomposers eat them. The decomposers also grow and multiply, and as they eat the abundant dead phytoplankton, they use up the available oxygen in the water. The lack of oxygen forces mobile organisms to leave the area and kills the organisms that cannot leave and need oxygen. The zone of low oxygen levels is called a hypoxic zone or dead zone. Streams flowing through watersheds where agriculture is the primary land use exhibit the highest concentrations of nitrogen (Figure 4.2.8).
Figure 4.2.8.: Nitrogen concentrations in streams draining watersheds with different land uses. Credit: Dubrovsky and Hamilton 2010, The Quality of Our Nation’s Waters, Nutrients in the Nation’s Streams and Groundwater: National Findings and Implications
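The chain from nutrient load to oxygen depletion described above can be illustrated with a toy calculation. This is not a real water-quality model: the biomass and oxygen-demand coefficients and the nutrient loads below are made-up illustrative values. Only the 2 mg/L dissolved-oxygen threshold is a convention commonly used to define hypoxia.

```python
# Toy eutrophication sketch (coefficients are illustrative assumptions).
# More nutrients -> more phytoplankton biomass -> more decomposition ->
# more dissolved oxygen (DO) consumed in bottom waters.

HYPOXIA_THRESHOLD_MG_L = 2.0  # DO level commonly used to define hypoxia

def bottom_water_do(nutrient_load, do_saturation=8.0,
                    biomass_per_nutrient=0.5, o2_per_biomass=1.2):
    """Illustrative dissolved-oxygen level after decomposers consume
    oxygen in proportion to the phytoplankton biomass produced."""
    biomass = biomass_per_nutrient * nutrient_load
    return max(do_saturation - o2_per_biomass * biomass, 0.0)

for load in (2, 6, 10, 12):  # hypothetical nutrient loads
    do = bottom_water_do(load)
    status = "hypoxic" if do < HYPOXIA_THRESHOLD_MG_L else "oxygenated"
    print(f"load {load}: DO = {do:.1f} mg/L ({status})")
```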
The Mississippi River basin is the largest river basin in North America (Figure 4.2.9) and the third largest in the world, draining more than 40 percent of the land area of the conterminous U.S., 58 percent of which is very productive farmland (Goolsby and Battaglin, 2000). Nutrient concentrations in the lower Mississippi River have increased markedly since the 1950s along with increased use of nitrogen and phosphorus fertilizers (Figure 4.2.10). When the Mississippi River's nutrient-laden water reaches the Gulf of Mexico, it fertilizes the marine phytoplankton. These microscopic photosynthesizing organisms reproduce and grow vigorously. When the phytoplankton die, they decompose, and the organisms that eat the dead phytoplankton use up much of the oxygen in the Gulf's water, resulting in hypoxic conditions. The resulting region of low oxygen content is referred to as a dead zone or hypoxic zone. The dead zone in the Gulf of Mexico at the mouth of the Mississippi River has grown dramatically and in some years encompasses an area the size of the state of Connecticut (~5,500 square miles) or larger. Hypoxic waters can stress and even kill marine organisms, which in turn can affect commercial fishery harvests and the health of ecosystems.
Figure 4.2.9.: The Mississippi and Atchafalaya River Basin and the hypoxic zone in the Gulf of Mexico. Credit: USGS Factbook - Nitrogen in the Mississippi Basin-Estimating Sources and Predicting Flux to the Gulf of Mexico
Figure 4.2.10.: Nitrogen inputs and population from 1940-2010. Credit: USGS: Trends in Nutrients and Pesticides in the Nation's Rivers and Streams
What can be done to reduce the size of the dead zone?
The dead zone in the Gulf of Mexico is primarily a result of runoff of nutrients from fertilizers and manure applied to agricultural land in the Mississippi River basin. Runoff from farms carries nutrients with the water as it drains to the Mississippi River, which ultimately flows to the Gulf of Mexico. If the amount of nutrients reaching the Gulf of Mexico can be reduced, then the dead zone will begin to shrink.
Since 2008, the Hypoxia Task Force, led by the U.S. Environmental Protection Agency and consisting of five federal agencies and 12 states, has been working to implement policies and regulations with the aim of reducing the size of the dead zone in the Gulf of Mexico. Many of the strategies for reducing nutrient loading target agricultural practices, including (USEPA, The Sources and Solutions: Agriculture):
• Nutrient management: The application of fertilizers can vary in amount, timing, and method with varying impacts on water quality. Better management of nutrient application can reduce nutrient runoff to streams.
• Cover crops: Planting certain grasses, grains, or clovers, called cover crops, can recycle excess nutrients and reduce soil erosion, keeping nutrients out of surface waterways.
• Buffers: Planting trees, shrubs, and grass around fields, especially those that border water bodies, can help by absorbing or filtering out nutrients before they reach a water body.
• Conservation tillage: Reducing how often fields are tilled reduces erosion and soil compaction, builds soil organic matter, and reduces runoff.
• Managing livestock waste: Keeping animals and their waste out of streams, rivers, and lakes keeps nitrogen and phosphorus out of the water and helps restore stream banks.
• Drainage water management: Reducing nutrient loadings that drain from agricultural fields helps prevent degradation of the water in local streams and lakes.
Watch the following video from the US Department of Agriculture about strategies to reduce nutrient loading into the Mississippi River:
Video: Preventing Runoff Into The Mississippi River (1:44)
Click for a transcript of the Preventing Runoff into the Mississippi River.
A US Department of Agriculture initiative is helping Missouri farmers keep farm field runoff from reaching the Mississippi River. USDA's Natural Resources Conservation Service is working with producers through the Mississippi River Basin Healthy Watersheds Initiative, or MRBI. The focus of the MRBI project is to hopefully cut down on the sediment, nutrients, and pesticides that are moving down the Mississippi River to the Gulf of Mexico. USDA NRCS and initiative partners are working with farmers to determine which conservation practices and which conservation systems work best on their farms to keep runoff from reaching the Mississippi. What we're trying to do is identify what conservation practices will have the biggest impact in the reduction of nutrient transport to the Mississippi River, which eventually makes it to the Gulf of Mexico. One such conservation practice is terracing. Building the terraces controls erosion, which then reduces the sediment. By reducing sediment we're also going to be reducing the hypoxia issue in the Gulf. We wanted to then monitor what effects we could have on reductions by putting out monitoring stations, such as this one, so we could determine what benefits we were having with the cost-share dollars we were putting on the land. Landowners interested in learning more about how to protect their soil and reduce runoff should contact their local NRCS office. I'm Bob Ellison for the US Department of Agriculture.
EPA website about nutrient pollution and some solutions to nutrient pollution: The Sources and Solutions: Agriculture
Activate Your Learning
Review the graphs below and answer the questions that follow. Figure 4.2.11 presents the size of the hypoxic zone in the Gulf of Mexico from 1985 to 2014. The U.S. Environmental Protection Agency led a task force that in 2008 identified a goal to reduce the five-year average size of the dead zone to less than 2,000 square miles by 2015.
Figure 4.2.11.: Size of the Gulf of Mexico hypoxic zone in mid-summer. Credit: Data source: Nancy N. Rabalais, LUMCON, and R. Eugene Turner, LSU. Funding sources: NOAA Center for Sponsored Coastal Ocean Research and U.S. EPA Gulf of Mexico Program. From Gulf of Mexico Ecosystems & Hypoxia Assessment (NGOMEX).
Figure 4.2.12.: Annual Total Nitrogen loads to the Gulf of Mexico. Credit: Mississippi River Gulf of Mexico Watershed Nutrient Task Force, 2013.
Figure 4.2.13.: Annual total phosphorus loads to the Gulf of Mexico. Credit: Mississippi River Gulf of Mexico Watershed Nutrient Task Force, 2013.
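A quick way to engage with the figures is to compute the quantities in which the goals are stated. In the sketch below the yearly dead-zone areas are placeholders to be replaced with values read from Figure 4.2.11; only the 2,000-square-mile goal and the 45% load-reduction target come from the text.

```python
# Sketch of the two Hypoxia Task Force targets discussed above.
# Yearly dead-zone areas are hypothetical placeholders; substitute the
# real values read from Figure 4.2.11.

areas_sq_mi = {2010: 7700, 2011: 6800, 2012: 2900, 2013: 5800, 2014: 5000}

five_year_avg = sum(areas_sq_mi.values()) / len(areas_sq_mi)
print(f"5-year average: {five_year_avg:.0f} sq mi (goal: < 2,000 sq mi)")

# The task force also aims to cut nutrient loads 45% below the
# 1980-1996 baseline average (baseline value here is a placeholder).
baseline_load = 1.0
target_load = baseline_load * (1 - 0.45)
print(f"Target load: {target_load:.2f} x the 1980-1996 baseline")
```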
Knowledge Check (flashcards)
Consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: How close have we come to achieving the action plan goal? How many years between 1985 and 2014 had a hypoxic zone smaller than 2,000 square miles (5,000 square kilometers)?
Back: We have not come very close. The 5-year average size of the dead zone from 2010-2014 was about 5,800 square miles and the goal is 2,000 square miles. There was only one year, 2000, between 1985 and 2014 in which the dead zone was smaller than the action plan goal of 2,000 square miles.
Card 2:
Front: Relate the loading of phosphorus and nitrogen to the Gulf of Mexico (Figure 4.2.12 and Figure 4.2.13) to the size of the dead zone. What needs to happen to the annual total nitrogen and phosphorus loading in order to reduce the size of the dead zone?
Back: The annual total nitrogen and phosphorus loading to the Gulf of Mexico remained high from 1980-2011. In order to reduce the size of the dead zone, the annual nutrient loading needs to decrease. There is a goal to reduce the nutrient loading by 45% relative to the 1980-1996 baseline average.
Card 3:
Front: Propose some possible strategies that could contribute to a reduction in the size of the hypoxic zone. Whose responsibility do you think it is to strategize for reduction of the size of the hypoxic zone? Whose responsibility is it to regulate and enforce strategies for reduction? Who do you think should pay for the proposed strategies? What are some of the challenges associated with your proposed strategies?
Back: Answers will vary, but should reflect some of the strategies discussed on the EPA website and video linked above, such as cover cropping, buffers, nutrient management.
Summary
This module has introduced some important concepts that tie our food system to the Earth's water resources. Water resources are essential for food production, and food production in turn has significant global impacts on both the quantity and quality of surface water and groundwater. Growing crops relies on water from either precipitation or irrigation derived from surface water and groundwater. Virtual water is embedded in everything you eat, with the amount of water varying depending on the crop and the climate in which the crop was grown. Crops grown in hot and dry climates consume more water via transpiration because evaporation rates are higher in those climates. Also, some plants need more water than others; for example, rice uses more water to grow than corn. You explored precipitation rates in different parts of the US compared to evaporation rates and considered how much water might need to be applied to certain crops. Computation of your personal water footprint allowed you to compare your lifestyle and resulting water consumption with average water consumption in the US and globally. These computations, along with consideration of virtual water in different food products, also allowed you to draw conclusions about the impacts of different types of diets on the planet's water resources.
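The water-footprint bookkeeping described above amounts to summing, over everything in a diet, the virtual water embedded in each item. Here is a minimal sketch of that calculation; the liters-per-kilogram values and diet quantities are hypothetical placeholders rather than figures from a water footprint calculator.

```python
# Minimal virtual-water footprint sketch (all values hypothetical).
# Real virtual-water contents vary by crop and by the climate where
# the crop was grown, as discussed above.

virtual_water_l_per_kg = {"rice": 2500, "corn": 1200, "beef": 15000}
daily_diet_kg = {"rice": 0.2, "corn": 0.1, "beef": 0.15}

footprint = sum(virtual_water_l_per_kg[food] * kg
                for food, kg in daily_diet_kg.items())
print(f"Hypothetical dietary water footprint: {footprint:.0f} L/day")
```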
In this unit, we've just touched the surface of the very large issue of how agriculture impacts both the quality and quantity of our water resources. We also looked at a few examples of agricultural practices that help to minimize and reduce these impacts. The Colorado River provided an example of a river on which agricultural diversions have severely impacted the quantity of water in the river: we saw that the Colorado River no longer reaches the sea! The breadbasket of the US, the Midwest, contributes nutrient pollution to the Mississippi River, which has, in turn, created a massive dead zone in the Gulf of Mexico. You explored data on the size of the dead zone and proposed strategies to reduce the nutrient loading and thereby reduce the size of the dead zone in the future.
Reminder - Complete all of the Module 4 tasks!
You have reached the end of Module 4! Double-check the to-do list on the Module 4 Roadmap to make sure you have completed all of the activities listed there before moving on to Module 5!
References and Further Reading
• Dubrovsky, N.M. and P.A. Hamilton, 2010, Nutrients in the Nation’s Streams and Groundwater: National Findings and Implications, USGS Fact Sheet 2010-3078 (http://pubs.usgs.gov/fs/2010/3078/)
• FAO, 2011, The State of the World’s Land and Water Resources for Food and Agriculture (SOLAW) - Managing systems at risk. Food and Agricultural Organization of the United Nations, Rome and Earthscan, London.
• Goolsby, D.A., and Battaglin, W.A., 2000, Nitrogen in the Mississippi Basin--Estimating sources and predicting flux to the Gulf of Mexico: U.S. Geological Survey Fact Sheet 135-00, 6 p.
• Hoekstra, A.Y. and M.M. Mekonnen, 2012. The water footprint of humanity, Proceedings of the National Academy of Science, vol. 109, no. 9, pp. 3232-3237 (http://waterfootprint.org/media/downloads/Hoekstra-Mekonnen-2012-WaterFo...).
• Jones, J.A.A., 2010, Water Sustainability: A Global Perspective, Hodder Education, 452 pp.
• Mississippi River Gulf of Mexico Watershed Nutrient Task Force, 2013, Reassessment 2013: Assessing Progress Made Since 2008, Accessed from http://www2.epa.gov/sites/production/files/2015-03/documents/hypoxia_rea...
• U.S. Bureau of Reclamation (USBR), 2012, Colorado River Basin Water Supply and Demand Study, www.usbr.gov/lc/region/programs/crbstudy/finalreport/index.html
5.04: Summative Assessment- Kansas Farm Case Study
Water is essential to growing food, and the source of water for food production is either naturally occurring precipitation or irrigation from surface water or groundwater. The application of fertilizers and pesticides to crops can result in water pollution. We can incorporate water resources into our Coupled Human-Natural System diagram, where the climate of the natural system determines the availability of water for food production. The response in the human system is to develop irrigation systems where necessary and to implement conservation and efficiency measures in times of scarcity. The application of fertilizers and pesticides results in water pollution, which degrades water quality in the natural system.
Schematic of Coupled Human-Natural System
Click for a text description of the Coupled Human-Natural System.
Human/Natural System Diagram
Human System (Human System Internal Interactions)
• Water management, policy, and regulation
• Irrigation infrastructure = diversions, canals, sprinklers, etc.
• Conservation and efficiency measures
• Multiple competing users of water
• Application of fertilizers and pesticides leads to water pollution
*arrow pointing from human system to natural system: drivers: water use and pollution, climate change*
Natural System (Water availability for food production)
• Climate = precipitation and temperature
• Droughts and floods
• Water quality
*arrow pointing from natural system to human system: conditions: variability in water supply available for food production*
Instructions
In the summative assessment for Module 4, you'll apply what you've learned about coupled human and natural water systems to a particular farming scenario in Pawnee County, Kansas. You'll consider the precipitation in Kansas and the crops you could grow with that precipitation, and then look at crop yields for different crops using irrigation. Finally, you'll consider the impact on water resources if you were to shift the types of crops grown and the irrigation practices on a farm in Pawnee County, KS. The assignment is explained in the worksheet below.
• Download Module 4 Summative Assessment Worksheet:
• Download the Excel spreadsheet for calculations for Module 4 Summative Assessment
• The discussion portion of the worksheet is incorporated into the weekly discussion post.
Submitting your Assignment
Please submit your assignment in Module 4 Summative Assessment in Canvas.
Introduction
Interactions Between Soil Nutrients, Nutrient Cycling, and Food Production Systems
Along with water, sunlight, and the earth's atmosphere, soil is one of the key resources underlying food production by humans. In terms of the coupled human-natural systems we use as a way to understand food systems, we can say that human systems organize landscapes and manage soils, along with agricultural biodiversity and other parts of natural systems, to produce food. Soils exert an influence on this coupled system because they vary in properties such as depth and nutrient content, which alters their response to human management. Soils also have great importance as the site of many nutrient and carbon transformations within the biosphere, and they are a storehouse of soil organic matter that benefits the earth system in many ways. Also, by understanding soils and the surface and ecological processes that occur in them, human management can maintain and improve soils, as well as overcome initial limitations or past degradation.
The purpose of this module is to give you as a learner a basic grounding in the nature of soils and soil nutrients. Module 5.1 provides the foundation for understanding soils, soil nutrients, and their connection to food. We will also focus on ways that soils are vulnerable to degradation that impairs their role in food production. In module 5.2 we will deepen our understanding of how soil management can protect soils in their role of supplying nutrients to crops and protecting other valuable resources such as surface water. To accomplish this, we will focus on nitrogen (N) and phosphorus (P) as key nutrients for food production in module 5.2.
Goals
• Identify soil nutrients and soil function as key resources in need of protection for food production and food systems.
• Describe spatial and geographic variation in soil resources and soil fertility.
• Distinguish between preexisting aspects of biogeochemical cycling and human-induced processes that affect biogeochemical cycling.
• Attribute different soil fertility outcomes in food systems to the coupled natural and human factors and feedbacks that produce them.
Learning Objectives
After completing this module, students will be able to:
• Describe the basic properties of soil that distinguish it from mere "dirt".
• Explain how soil serves as a medium for plant growth.
• Explain how the five soil-forming factors interact to produce soils.
• Explain the term "biogeochemical cycling".
• Explain common limiting factors to plant growth that limit food production around the world.
• Explain how nutrient and carbon depletion from soils and soil erosion create conditions of low food productivity.
• Assess how farming practices affect soil fertility.
• Analyze modern fertilizer use as the emergence of a strong human system impact on nutrients in soils that replenishes soil nutrients but can create nutrient pollution.
• Analyze how natural/human system feedbacks operate to limit the actions of poorer food producers around the world.
• Incorporate sustainability challenges related to soil nutrient management into an analysis of food systems.
Assignments
Module 5 Assignments Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 5 Roadmap
Action: To Read
Assignment:
1. Materials on the course website.
2. Chapter 2, pp. 9-17 in Building Soils for Better Crops (USDA Sustainable Agriculture Research and Education), available as a free e-book. You can download the entire book since future modules will also use this source.
Location:
1. You are on the course website now.
2. Building Soils for Better Crops

Action: To Do
Assignment:
1. Formative Assessment: Mapping Trends in Soil Properties
2. Summative Assessment: N and P Balances
3. Participate in the Discussion
4. Take Module Quiz
Location:
1. In course content: Formative Assessment; then take quiz in Canvas
2. In course content: Summative Assessment; then take quiz in Canvas
3. In Canvas
4. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
06: Soils as a Key Resource for Food Systems
Introduction
Overview of soils and nutrients for food production
In this course, we describe food systems as a coupling between human societies and natural earth systems and environments. This coupling is especially clear in the activities of food production that rely on crop and livestock raising. Crop and livestock production (and, to a similar extent, fisheries and aquaculture) require that food producers bring together human management with soil conditions and soil nutrients (this module), water (next module), as well as sunlight for energy and adequate climate conditions (temperature, humidity, adequate growing season). To understand these human-natural interactions across the entire course, and to build your capacity to understand natural factors as part of your capstone projects and other chapters of your education, this module describes basic soil properties and the role of soils in creating adequate conditions for crops to grow, which underlies most aspects of food production. It is therefore very important that we understand soils as the "living skin of the earth": their properties and history, the global patterns of soil fertility and soil limitations, their role in supplying nutrients to plants, and how soil fertility is regenerated by the human societies and management knowledge that allow them to continue supporting food production. Our goal is not to condense an entire course in soil science, although we hope that many of you will go on to take such a course. Rather, we want to sketch out the major factors and determinants of the opportunities and limitations posed by soils to a human food production system.
6.01: Soil Basics
We may be used to referring to soil as "dirt", as in "my keys fell in the dirt somewhere" or "after planting the garden we had dirt all over our hands", but the way in which soil supports food production is far more complex than a smear of clay on our hands. One way to define this difference in perspective is to think about the biological and chemical complexity in soil, and the fact that soils are not just brown, powdery handfuls of dirt but occupy a grand scale in the natural systems that underlie food systems. Soil is the "skin of the earth": layers that ascend from bedrock and supply water and nutrients to the fields and forests that make up the terrestrial biosphere. Soils are ecosystems in their own right, within mineral layers that form part of the earth's surface. Soils can be as shallow as ten centimeters and as deep as many tens of meters.
An interesting exercise is to think of a single term or concept that describes how soils work and what they are. For example, if we were seeking an acronym to describe soil and market it as the marvelous thing that it is1, and if we lacked time to think of a catchier name, we might think up the acronym "PaBAMOM", which nevertheless is a pretty good summary of what soil is: a "Porous and Biologically Active Mineral-Organic Matrix". It's a good summary because it defines the unique properties of soils (see Figure 5.1.1 below):
1. Porous (full of open spaces or pores), at a range of pore sizes from well below a micron (10⁻⁶ m, or 0.0001 cm) to many centimeters, and therefore able to store water and transmit it to deeper earth layers, and host organisms as diverse as bacteria, plant roots, and prairie dogs. This porosity arises not only from the inherent sizes of particles in soil but also from soil organisms and roots that generate aggregation of the soil from clay and silt particles into the crumbs and clods that may be familiar from typical garden soil. This biologically generated aggregation is sometimes referred to as structure, which is seen in Figure 5.1.1 as the overall arrangement of pores and aggregates, and in the notion that soil is a matrix (point five below). The ideas of aggregation and structure will be revisited in this module and in module 7.
2. Soil is phenomenally biologically active and biologically diverse, especially in microbes, which makes it able to perform many useful functions. For example, soil microbes are able to "recycle" or decompose materials like wood, wheat straw, and bean roots into energy for themselves and other soil biota; various other types of microbes draw or fix nitrogen out of the air to feed plants, detoxify the soil of organic pollutants, or perform myriad other beneficial services -- as well as some less beneficial processes, such as diseases.
3. Soil is mineral, formed from the breakdown and chemical processing of the earth's rock crust into sand, silt, and clay, each with its own ability to store water, based on the size of pores it creates, and its own chemical roles in the further processing and breakdown of soil materials. Usually, the mineral part makes up most of the solid (non-pore) part of soil (Figure 5.1.1, top pie chart).
4. Soil is also organic, containing bits of organic (carbon-containing) remnants of plants and animals, some of which become stabilized and last hundreds or even thousands of years as part of the soil. In the current efforts to promote carbon sequestration to alleviate (mitigate) climate change, it is worth noting that the amount of carbon stored in these bits of soil organic matter globally easily exceeds the total carbon stock in all of the planet's forests.
5. Lastly, it is a matrix, which means that at least as important as the particles, aggregates, and pores of the soil are the organisms and processes that occur on and in these particles and pores (Figure 5.1.1, bottom). This matrix hosts a highly complex ecosystem that winds its way through the millions of pores, roots, fungal hyphae, insects, and other organisms in soil. And here we are referring to the complex system concept we presented in module one: soil has many interacting parts with overlapping interactions, the ability to produce unexpectedly stable or unstable outcomes, and processes that can produce positive and negative feedbacks. One important example of this type of behavior is the range of productivity "behavior" of soils over time, including the ability of some soils to sustain moderate to high levels of productivity over years or decades (they resist change through processes of negative feedback) and then collapse in terms of food production as the interlocking, complex systems fueled by organic constituents and biological processes are dismantled (positive feedbacks operate to drive them towards degradation). Soils can then be similarly resilient in terms of remaining unproductive until the complex systems can be rebuilt through soil restoration practices.
Figure 5.1.1.: Top: pie chart showing the typical physical composition of most soils used in food production. Bottom: basic cross-section, approximately 20 mm wide, of soil as a porous, biologically active mineral-organic matrix. Red arrows show non-living components while purple arrows show biological components. Large macropores resulting from good soil structure allow adequate drainage and air entry to a soil for biological activity, while smaller mesopores and micropores hold water at varying degrees of availability for plant roots. Macrofauna such as earthworms (>2 mm approximate dimension) are also very important but would occupy too much of the diagram to show. Credit: Steven Vanek, adapted from Steven Fonte.
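Because the pie chart in Figure 5.1.1 is not reproduced here, the sketch below uses the textbook-typical volumetric composition of a medium-textured soil, roughly half solids and half pore space. These fractions are a common rule of thumb assumed for illustration, not values read from the figure.

```python
# Typical volumetric composition of a medium-textured soil (common
# textbook rule of thumb, assumed here rather than read from Fig. 5.1.1).
composition = {
    "mineral particles": 0.45,
    "organic matter": 0.05,
    "water (in pores)": 0.25,
    "air (in pores)": 0.25,
}
assert abs(sum(composition.values()) - 1.0) < 1e-9  # fractions sum to 1

pore_space = composition["water (in pores)"] + composition["air (in pores)"]
print(f"Pore space: {pore_space:.0%} of soil volume")  # about half
```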
So, soil is not dirt. It is porous and complex, it covers almost every land surface on the planet (ice caps, glaciers, and bare rock are exceptions), and it is a ubiquitous, critical resource that is heavily coupled to human societies for their food production and in need of protection. It’s not dirt, it’s a PaBAMOM!
1. We don’t have to do this marketing job (phew!) because the existence and value of soils are so often taken for granted. Recently, economists have been working on estimating the implicit worth of the services performed for society by a single hectare (100 m x 100 m) of soil, and the amounts can range into tens of thousands of dollars per year depending on soils’ properties and the way they are used.
Support, Water, and Nutrients
Before examining other basic soil functions, it is helpful, and will avoid possible confusion, to understand the basics of how soils support the needs of crops, which in turn support the food needs of humans and their livestock. First, soils provide a physical means of support and attachment for crops, analogous to the foundation of a house. Second, most water used by plants is drawn up through roots from the pores in soils, which provide vital buffering of the water supply that arrives at crops either from rainstorms or applied as irrigation by humans. Third, as crops grow and build their many parts by photosynthesizing carbon out of the air (see module 6, next, for more on this), they gain most of the mineral nutrients (chemical elements) they need2 from soils, for example by taking up potassium or calcium that started out as part of primary minerals in earth's crust, or nitrogen in organic matter that came originally from fertilizer or the earth's atmosphere. The adaptation of crop plants domesticated by human farmers (and other plants) to soils, and the adaptation of the soil ecosystem to plants as its primary source of food, mean that soils usually fulfill these roles admirably well.
2 The elements needed by plants other than Carbon (from the air) and Hydrogen/Oxygen (from water) in rough order of concentration are Potassium, Nitrogen, Phosphorus, Calcium, Magnesium, Sulfur, Iron, Manganese, Zinc, Boron, Copper, Molybdenum, and Cobalt (for some plants). Other elements are taken up into plants in a passive way without being essential, such as Sodium, Silicon, or Arsenic.
6.1.03: Soil Formation and Geography
How do soils form in different places?
Soil Formation Factors
Soils around the world have different properties that affect their ability to supply nutrients and water to support food production, and these differences result from factors that vary from place to place. For example, the age of a soil -- the time over which rainfall, plants, and microbes have been able to alter rocks in the earth's crust via weathering -- varies greatly, from just a few years where soil has been recently deposited by glaciers or rivers, to millions of years in the Amazon or Congo River Basins. A soil's age plus the type of rock it is made from gives it different properties as a key resource for food systems. Knowing some basics of soil formation helps us to understand the soil resources that farmers use when they engage in food production. Below are some of the most important factors that contribute to creating a soil:
1. Climate: climate has a big influence on soils over the long term because water from rain and warm temperatures promote weathering, the dissolution of rock particles and liberation of nutrients, which proceeds in soils with the help of plant roots and microbes. Weathering requires rainfall and is initially a positive process that replenishes these solubilized nutrients in soils year after year and helps plants to access nutrients. However, over the long run (thousands to millions of years) and in rainy climates, rainwater passing through a soil (leaching) leaves acid-producing elements in the soil like aluminum and hydrogen ions, and carries away more of the nutrients that foster a neutral pH (e.g. calcium, magnesium, potassium; see the next page on soil properties for a discussion of soil pH). Old soils in rainy areas, therefore, tend to be more acidic, while dry-region soils tend to be neutral or alkaline in pH. Acid soils can make it difficult for many crops to grow. Meanwhile, dry-climate soils retain nutrients gained in weathering of rock -- a good thing -- but may lack plant cover because of dry conditions. A lack of plant cover leaves the soil unprotected from damage by soil erosion and means that dry-climate soils often lack dead plant material (residues) to enrich the soil with organic matter. Both dry- and wet-climate soils have advantages as well as challenges that must be addressed by human knowledge in managing them well so that they are protected as valuable resources.
2. Parent material: soils form through gradual modification of an original raw material like rock, ash, or river sediments. The nature of this raw material is very important. Granite rock (magma that hardened under the earth) versus shale (old, compressed seabed sediments) produce very different soils. An important example of parent material influencing soils, with consequences for human food production, is soils made from limestone or calcium and magnesium carbonates. These rocks strongly resist the process of acidification by rainfall and leaching described above. Limestone soils maintain their neutral level of acidity (or pH) even after thousands of years of weathering, and thus can better maintain their productivity. An example of this parent material influence is the Great Valley in Pennsylvania, USA, where the Amish reside. These Pennsylvania soils are considered some of the most productive soils in the U.S. even after hundreds of years of farming. Pockets of other limestone soils the world over are similarly productive over the long term. In summary, as part of learning about the food production systems of a region, it can be helpful to consider the types of rock that occur in that region, which you may want to do for your capstone regions.
3. Soil age: the time that a soil has been exposed to weathering processes from climate, and the time over which vegetation has been able to contribute dead organic material, are important influences on a soil. Very young soils are often shallow and have little organic matter. In a rainy climate, young (e.g. 1000 years) to medium aged (e.g. 100,000 years) soils may be inherently very fertile because rainfall and weathering have not yet removed their nutrients. Old soils are usually deep and may be fertile or infertile depending on the parent material and long-term climatic conditions. Soils in previously glaciated regions such as the northern U.S. and Europe are usually thought of as young because glaciers recently (~10,000 years ago) left fresh sediments made from ground up rock materials.
4. Soil slopes, relief, and soil depth: Steep slopes in mountains and hilly regions cause soils to be eroded quickly by rainfall unless soils are covered throughout the year by crops or forest. These hilly and mountain regions may also have young soils, and the combination of young soils and erosion can make for soils that are quite thin. Meanwhile, flat valley areas are where eroded soil is likely to accumulate, so soils there will be deep. Along with the water holding capacity and the nutrient content of a soil, soil depth determines how much soil "space" or soil volume a crop's roots can explore for nutrients and water. Soil depth is an important and often overlooked determinant of the crop productivity of soils. Moreover, these large-scale "mountain versus valley" differences can be mirrored within a single field, with small differences in topography creating differences in drainage, depth, and other soil properties that dramatically affect soil productivity within ten to twenty meters' distance.
A Summary of Soil Formation: The Global Soils Map
These four factors, along with the vegetation, microbes, and animals at a site, create different types of soils the world over. A basic global mapping of these soil types is given below in Figure 5.1.2. We've attached some soil taxonomic names (for soil orders, categories used by soil taxonomists) to these basic soil types for those who are familiar with some of the terminology of soil classification. We should emphasize that understanding these orders is not essential to your understanding of food production and food systems, as long as you understand how the basic processes of soil formation described above, and the properties of soils described on the next page, contribute to the overall productivity of a soil. You should think about how the soil formation processes affect crop production in the capstone regions of your final project, and you should be able to find resources on how soils were formed in any place in the United States and around the world.
Figure 5.1.2.: Simplified global soil map classified into broad categories. Credit: Steven Vanek based on USDA world soils map.
Formation and Management Affect a Soil's Productivity
Another important point is that the soil formation processes described above largely determine only the initial state of a soil as it passes into human management as part of a coupled human-natural food system. Human management can have effects on productivity as large as those of soil formation, either upgrading productivity or destroying it. The best management protects the soil from erosion, replenishes its nutrients and organic matter, and in some ways continues the process of soil formation in a positive way. We'll describe these best practices as part of a systems approach to soil management in module 7. Inadequate human management can be said to "mine" the soil, only subtracting and never re-adding nutrients, and allowing rainfall and wind to carry away layers of topsoil.
The next page adds to this description of soil formation by focusing on the basic properties of soils that affect food production, like the acidity (pH) discussed above.
Knowledge Check (fill in the blanks)
Answer the following questions on basic properties of soil and factors of soil formation.
1) Fill in the missing words
The materials that make up the solid part of soils are minerals and ______ matter (i.e. not air and not water).
2) Drag the words into the correct boxes
drainage
holding water for plants
Large pores in soil are important for ______ and medium size pores are important for ______.
3) Fill in the missing words
The amount of carbon held in soils globally is ______ (larger or smaller) than the amount of carbon in all the earth's forests.
4) Fill in the missing words
Soils with ______ (what type of rock?) tend to resist the natural process of acidification that happens to soils in many climates.
5) Fill in the missing words
In global comparative terms and considering the global soil map above, a soil in the Northeast United States would tend to be ______ (young or old).
6) multiple choice
Which of the following is likely to have a low level of organic matter and why? A dry climate soil or a wet climate soil?
• dry climate soil, because plant growth is limited and there is less dead plant material going into the soil
• wet climate soil, because plant growth is rapid and there is more dead plant material going into the soil
Nutrients, pH, Soil Water, Erosion, and Salinization
In growing crops for food, farmers around the world deal with local soil properties that we started to describe on the previous page. These properties can either be a positive resource for crop production or limitations that farmers confront using management methods. The first of these, a soil's nutrient status, is described in more detail in module 5.2. Regarding nutrients, it is only important to emphasize here that most nutrients taken up by plants (other than CO2 gas) come to plant roots from the soil, and that the supply of these nutrients often has to do with the amount of dead plant remains, manure, or other organic matter that is returned to the soil by farmers, as well as fertilizers that are put into soils to directly boost crop growth. Here are the other major soil properties that farmers pay attention to in order to sustain the production of food and forage crops:
Soil pH or Acidity: Near Neutral is Best
Most crops prefer soils that have a pH between 5 and 8, mildly acidic to mildly alkaline (to understand these pH figures, remember that water solutions can be either acidic or basic (alkaline), and that pH 7 is neutral, vinegar has a pH of about 2.5, and baking soda in water creates a pH of about 8). As discussed above under the climate and parent material sections describing soil formation, soils in rainy regions tend to become more acidic over time. Soils with too low a pH will have trouble growing abundant food or feed for animals. Farmers manage soils with low pH by adding ground-up limestone (lime) and other basic (that is, acid-neutralizing) materials like wood ash to their soils. As an alternative, farmers sometimes adapt to soil pH by choosing or even creating crops or crop varieties that are adapted to low-pH, acidic soils. For example, potatoes do well in high-elevation, acidic soils of the Andes and other areas around the world. Alfalfa for livestock does better in neutral and alkaline soils, while clovers for animal feed grow better in more acidic soils.
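For those who want the arithmetic behind these figures: pH is the negative base-10 logarithm of the hydrogen ion concentration, so each pH unit represents a tenfold change in acidity,

\mathrm{pH} = -\log_{10}\left[\mathrm{H^{+}}\right]

Vinegar at pH 2.5, for example, has a hydrogen ion concentration of 10^{-2.5} ≈ 3 × 10^{-3} mol/L, roughly 30,000 times that of a neutral solution at pH 7 (10^{-7} mol/L). This is why a drop of one or two pH units in a soil is a much bigger chemical change than it sounds.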
Soil Water Holding Capacity and Drainage: Deep, Loamy, and Loose is Best
Module 4 described the importance of water for food production and the way that humans go to great lengths to provide irrigation water to crops in some regions. Soil properties also play a role in the amount of water that can be stored in soils (for days to weeks) that is then available to crops. A soil that holds more water for crops is more valuable to a farmer than a soil that runs out of water quickly. Among the properties that create water storage in soils is soil depth or thickness: a deep soil is basically a larger water tank for plant roots to access than a thin soil. The proportions of fine particles (clay) versus coarse particles (sand) in a soil, called soil texture, also influence the water available to plants: neither pure clay nor pure sand holds much plant-available water, because clay holds the water too tightly in very small pores (less than 1 micron or 0.001 mm, smaller than most bacteria) while sand drains too rapidly because of its large pores and leaves very little water behind. Therefore an even mix of sand, clay, and medium-sized silt particles holds the maximum amount of plant-available water. This soil type is known as loamy, which for many farmers is synonymous with "productive". In addition to these soil properties, farmers try to maintain good soil structure (also called "tilth"), the aggregation of soil particles into crumb-like structures that helps to further increase the ability of soils to retain water. Soil aggregation or structure, and its multiple benefits for food production, are further described in Module 7 on soil quality.
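The "water tank" idea can be made concrete with a little arithmetic: plant-available water is roughly rooting depth multiplied by an available-water fraction that peaks for loamy textures. Below is a minimal sketch in Python; the fractions are rough illustrative assumptions for broad texture classes, not measured values for any particular soil.

# Rough estimate of the plant-available water a rooted soil profile can store.
# The available-water fractions are illustrative assumptions, not measured data.
AVAILABLE_WATER_FRACTION = {  # mm of plant-available water per mm of soil depth
    "sand": 0.05,  # large pores drain quickly and retain little water
    "loam": 0.18,  # balanced pore sizes hold the most plant-available water
    "clay": 0.10,  # very small pores hold much of their water too tightly for roots
}

def plant_available_water_mm(rooting_depth_mm, texture):
    """Water (mm) a crop's roots could potentially extract from the profile."""
    return rooting_depth_mm * AVAILABLE_WATER_FRACTION[texture]

print(plant_available_water_mm(1000, "loam"))  # deep loam: ~180 mm, a big "tank"
print(plant_available_water_mm(400, "sand"))   # thin sandy soil: ~20 mm, runs dry fast

Comparing the two printed values shows why the deep loamy soil in Fig. 5.1.4 is so much more drought-resilient than the thin soil in Fig. 5.1.3.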
Clayey soils, and soils that have been compacted by livestock or farm machinery ("tight" vs. "loose" soils), can also have problems allowing enough water to drain through them (poor drainage), which can lead to an oversupply of water and a shortage of air in soil pores (refer back to figure 5.1.1 and the roughly equal proportion of air and water in the pores of an agricultural soil). Too much water and too little air lead to low oxygen in the soil and leave roots and soil microbes unable to function in providing nutrients and water to plants. Part of good tilth, described above, is maintaining a loose structure of the soil.
Given these important soil properties for water storage, farmers seek out appropriate soils with sufficient moisture (e.g. deep and loamy, see Figs. 5.1.3 and 5.1.4) but also adequate drainage. Food producers also modify and maintain the moisture conditions of soils through irrigation, through maintaining good soil aggregation or tilth (see modules 5.2 and 7), and by avoiding soil compaction, which leads to poor drainage and makes soils effectively shallower because roots cannot push down through compacted layers to reach deeper water.
Figure 5.1.3.: The shallow soil with an oat crop is in a mountainous region that has likely suffered erosion, and features of the bedrock can be seen within 50 cm of the surface (yellow line), where the soil becomes much poorer. The total volume available to store nutrients and water in this soil is low. A pick axe head is shown for scale. Credit: Steven Vanek
Figure 5.1.4.: This loamy, deep soil is likely in a flatter region and has an organic-matter rich layer that extends to about 40 cm below the surface and water storage capability to beyond one-meter depth (numbers on tape are cm), an excellent nutrient and water resource for food production. Credit: Stan Buol, North Carolina State University Soil Science, on Flickr Creative Commons (CC BY 2.0)
Salinization and Dry Climates: Hold the salt
Dry climate soils have less rainfall to leach them of minerals. They can, therefore, be high in nutrients, but they also carry risks of harmful salts building up, since rainfall does not carry these away either. Salt-affected soils may be too salty to farm at all. Others carry a risk: if irrigation water is too high in salts, or is applied in amounts insufficient to continually "re-rinse" the soil, then salts can build up in soils until crops will not grow. The way that arid soils are managed is a key part of the human knowledge of food production in dry regions.
Relief and Erosion: Don't Let Soil Wash Down the Hill
Soil slope and relief were described on the previous page as creating higher risks of erosion (Fig. 5.1.5). To address this limitation food producers have either (a) not farmed vulnerable sloped land with annual crops, leaving it in forest, tree crops, and year-round grass cover and other vegetation that holds soils on slopes; or (b) built terraces and patterned their crops and field divisions along the contours of fields (Fig. 5.1.6). Terracing and terraced landscapes can be seen from Peru to Southeast Asia to Greece and Rwanda. Nevertheless, while sloped soils have been seen as the Achilles heel of environmental sustainability in mountain areas, the extreme elevation differences present in mountain areas can also be seen as a benefit to these food systems. The benefits arise because soils with very different elevation-determined climates and soil properties occur in close proximity, which allows for the production of a greater variety of crops. The simultaneous production in the same communities of cold- and acid-soil-tolerant bitter potatoes and of heat-loving maize and sugar cane in lower, more neutral soils in the Peruvian Andes is an example of this benefit in high-relief mountain regions.
Figure 5.1.5.: Soil erosion in a mountain landscape. Credit: Steven Vanek
Figure 5.1.6.: Terracing in a mountain landscape. Credit: Quinn Comendent, used with permission under a creative commons license.
Soil Health: Understanding Soils as an Integrated Whole for Food Production
We hope that you are beginning to appreciate that appropriate management of soils is emphatically about integrating management principles like the ones presented here as human responses, along with an understanding of the basic properties of soils and of the nutrient flows presented next in module 5.2. Soils are very much a complex system, and managing them for food production and environmental sustainability means that we must understand the multiple components and interactions of this system. The way in which this is accomplished has been summarized as the concept of Soil Health, which involves multiple components that are more fully addressed in module 7. Soil health is an aspiration of effective management and means that management has maintained or promoted properties like nutrient availability, beneficial physical structure, and diversity of functionally important and 'health-promoting' microbes and fauna in soils, along with sufficient organic matter to feed the soil ecosystem. These integrated properties then allow production to avoid soil degradation, produce sufficient food and livelihoods, and preserve biodiversity in soils as well as other significant ecosystem services like buffering of river flows and storage of carbon from the atmosphere.
3 This is not always true; molybdenum, sulfur, boron, and other micronutrients are sometimes found to limit plants, but the complexity of analyzing these is beyond the scope of this survey course.
Soil scientists have done an enormous amount of work in mapping the patterns of soil at a global level. The most current and detailed effort grew out of mapping work from the Food and Agriculture Organization of the United Nations and is now carried by an independent agency, the International Soil Reference and Information Centre (ISRIC); it is based on classifying a set of diagnostic types of topsoil layers that occur in different climates, landscape ages, and vegetation types. The details of this system5 are beyond the scope of this course, however, and to summarize the introduction to global soil fertility in this unit we present a simplified version of the United States Department of Agriculture (USDA) system that is still in wide use by soils practitioners in the United States. The USDA system lines up very well with the ISRIC system at this simplified level and allows understanding of the broad strokes of soil nutrient geography in the way we have presented it (Figure 3.8).
Figure 3.8: Simplified map of soil types in the world and associated characteristics, referred to the USDA soil classification system. Source: adapted by S. Vanek from the USDA Natural Resource Conservation Service (NRCS)
This simplified map is intended to serve as a resource for your other learning in the course on how food systems may respond to the opportunities and limitations of soils, and it also summarizes the learning in this module about how soils result from an interaction of parent material, time, climate, vegetation, and other factors. For example, you'll notice that just four very broad summarized types (see section 1 of the soils key, "Dominant global soils", in Fig. 3.8) cover the vast majority of the earth's surface, and can be organized into a rough typology of precipitation from wet to dry, along with their age and vegetation types (e.g. tropical and subtropical forests; other forest types; grasslands; and desert vegetation). Soils formed by temperate grasslands have been hugely important in recent history because once humans developed steel plows that were sufficiently strong to till prairie soils, these Mollisols could be farmed and became the breadbaskets of the modern era (e.g. the U.S. and Canadian Great Plains, Ukraine, the Argentinian pampas). There are also small pockets of soils globally that depend strongly on their original parent material. Andisols or volcanic ash soils are an excellent example of this: although their global extent is minuscule and even invisible on our map (Fig. 3.8) at this scale, they often occur in areas with high population densities such as Ecuador, Japan, and Rwanda. The high densities of population are not an accident but occur exactly because these soils have high fertility potential and have become extremely important in these local food systems. The simplified global soils map is also a way to spatially conceptualize a number of key limiting factors in soils that food producers must face: acidic, P-retaining conditions in highly weathered tropical and subtropical soils, P retention in volcanic soils, and the risk of salinization in dry climate soils.
In addition, it is worth noting that the broad swaths of soil of young to moderate age and with moderate to high fertility (light green in our map) may be the dominant type of soil in the world and also include many areas that are critical to the sustainability outcomes of human-natural systems in relation to soils. Because these tend to be "medium-everything" soils (medium age, medium fertility, medium depth, medium pH, medium moisture, etc.) they do not actively dissuade human systems from occupying them with high population densities or intensity of management and production, especially as the global population increases. However, these soils are often easily degraded, and so sustainable methods are especially important to guarantee future food production.
Finding information on soils using the soil-order suffix in a soil's name under the USDA soil taxonomy system
Soil taxonomy is an enormous classification system that can initially be confusing. But knowing the first level of classification can be very useful, just like knowing whether an animal is a whale or a beetle is extremely helpful compared to not knowing anything. To classify soils broadly as to their limitations and productive potential, we can use the soil orders of the USDA system (see the order names in parentheses, in Fig. 3.8).
The key below will help you to use the last few letters of a USDA soil name, along with the ISRIC world soil mapping resource, to query what types of soil are present around the world or specifically in your capstone regions. The categories are the same as those presented in Figure 3.3, and you can use the query function in the ISRIC world soil mapper to find out what USDA soil names are present in each area and draw conclusions about the potential fertility and properties of the soils at a broad level.
First, note that the ISRIC resource is at SoilGrids. This was also used in the formative assessment for Module 3.1.
In the ISRIC mapper you will need to click on the layers icon in the upper right and set the layer to "Soil Taxonomy: TAXOUSDA", then select "All TAXOUSDA subclasses" -- when you query the map with a right click of the mouse, you'll get a percent breakdown of the different soil orders at that location.
Key to USDA Soil Taxonomy System
Soil name ending | Meaning | Example
-Ents | Entisols: soils of recent deposition, no soil development | Orthents
-Epts | Inceptisols: the beginning of soil formation – medium to high fertility soils | (no example given)
-Alfs | Alfisols: broad class of medium age, medium to high fertility soils | Glossoboric hapludalfs
-Olls | Mollisols: prairie soils, high organic matter, generally neutral pH, fertile, deep | Dystric haplustolls
-Ids | Aridisols: dry region soils, generally high pH | Argids
-Ods | Spodosols: coniferous forest soils with acid needle litter leaching features | Orthods
-Ults | Ultisols: warm region, old, leached soils | Udults
-Ox | Oxisols: oldest tropical soils formed only of weathering remnants, metal oxides | (no example given)
-Ands | Andisols: volcanic ash soils | Vitrands
-Erts | Vertisols: highly weathered soils, often over limestone, with shrink-swell clays | Uderts
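If you enjoy automating things, the key above is simple enough to encode directly. Here is a minimal sketch in Python; the dictionary just restates the key, and the function name and structure are our own illustration, not part of any USDA or ISRIC software.

# Map the ending of a USDA soil name to its soil order (restating the key above).
SUFFIX_TO_ORDER = {
    "ents": "Entisols: recent deposition, no soil development",
    "epts": "Inceptisols: beginning of soil formation, medium to high fertility",
    "alfs": "Alfisols: medium age, medium to high fertility",
    "olls": "Mollisols: prairie soils, high organic matter, fertile, deep",
    "ids":  "Aridisols: dry region soils, generally high pH",
    "ods":  "Spodosols: coniferous forest soils with acid leaching features",
    "ults": "Ultisols: warm region, old, leached soils",
    "ox":   "Oxisols: oldest tropical soils, weathering remnants and metal oxides",
    "ands": "Andisols: volcanic ash soils",
    "erts": "Vertisols: shrink-swell clay soils",
}

def soil_order(usda_name):
    """Classify a USDA soil name, e.g. 'Glossoboric hapludalfs', by its ending."""
    name = usda_name.lower().strip()
    # Try longer endings before shorter ones so an ending is never misread.
    for suffix in sorted(SUFFIX_TO_ORDER, key=len, reverse=True):
        if name.endswith(suffix):
            return SUFFIX_TO_ORDER[suffix]
    return "unknown ending; consult the full USDA taxonomy"

print(soil_order("Dystric haplustolls"))  # Mollisols: prairie soils, ...

A lookup like this pairs naturally with the percent breakdowns the ISRIC mapper returns when you query a location.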
5 Nevertheless, you may peruse this impressive global resource and the soil horizon definitions at ISRIC.
6.1.06: Formative Assessment- Mapping
Instructions
You will complete an activity on mapping trends in soil properties using an online soil mapping resource. The emergence of tools such as this to visualize global and national soil data easily and with full public access is revolutionizing information about soils and management constraints in different regions of the world. Please download the worksheet so that you can fill it in (either on paper or preferably just by writing in your responses in MS Word).
The two web resources you will need for this worksheet are placed here so you can access them while you fill in the worksheet.
Mainly you will need the International Soil Reference and Information Centre's soil mapping resource of the world, SoilGrids. Click past the intro window that will appear in the center of the screen and then pan the map to the area of interest as identified in the worksheet.
Figure 5.1.7.: An example of a map from the SoilGrids data portal. The layer that is shown is the global map of clay content (% clay) in soils, where areas that have more purple are higher in clay content in their surface soils. Credit: The image above and in the downloadable worksheet were generated using the SoilGrids data portal, and are used with permission of the International Soil Reference and Information Centre (ISRIC) according to the terms of an open database license (ODbL). Sharing and adapting of data is permitted.
This is a mapping portal that resembles Google Earth - you have the ability to pan, zoom in, and drag the map with the cursor and mouse (Fig. 5.1.7). When you enter you should see a toolbar in the top right corner. More instructions on the portal are given on the formative assessment worksheet.
You will also briefly need this online map showing global annual total precipitation.
Files to Download
Download the Worksheet to complete your assessment.
Submitting your assignment
Please submit your assignment in Module 5 Formative Assessment in Canvas.
Introduction
Nutrient Cycling and Nutrient Management for Soils in Food Production
In module 5.2, we present a basic account of nutrient cycling and nutrient management in food production systems. When we talk about nutrients in this context, we are referring to the nutrients that are needed to grow crops and that are taken up from soils by the roots of crop plants. These include the important nutrients nitrogen (N) and phosphorus (P), which will form the focus of this module. We refer to N and P as "important" nutrients because they are needed in large quantities relative to the amounts that are readily available in many soils. In agricultural and ecological terms, we say that crops and food production are especially responsive to N and P abundance: shortage of N or P causes dramatic declines in the production of food, while sufficiency and abundance will raise yields, so that N and P supply have been a focus of human management to maintain food production. We will begin by talking about the way that N and P move around in cycles in all ecosystems, including the agroecosystems that are managed by humans to produce food. Human management systems in agriculture thus play a major role in altering the cycles of these nutrients in order to maintain, and in some cases increase, the production and supply of food from farmland (farmed soils). This management can also negatively impact water quality in watersheds, as you saw in module four. We will also examine the way that soil organic matter (SOM) relates to these two major nutrients and to soil productivity, as well as the general concepts of soil depletion and soil regeneration as these relate to strategies of soil management in food production.
6.03: Summary and Final Tasks
Summary
In this module, we have introduced the basics of soil properties and the nature of soil as a key resource for food production, upon which following modules will build to show how soils can be managed sustainably. We hope that you have understood the fundamental composition of soil, as minerals, organic matter, water, and air, and its place as an essential part of earth's natural systems. We also have tried to illustrate the way in which key properties of soil, like its pH, nutrient content, and retention of water, affect how plants grow and produce food. On the human system side, we also presented the way in which human efforts have managed soil for sustained production of food, including the addition of nitrogen and phosphorus to replenish soil stores that are removed by crop harvests, and the protection of soils from erosion losses. However, a surplus of soil nutrients generated by over-applying N and P is also a problem, as illustrated in the nutrient balances in this module's summative assessment. We will continue to deepen your knowledge of sustainable soil management, as it supports sustainable food systems, during the next modules.
Reminder - Complete all of the Module 5 tasks!
You have reached the end of Module 5! Double-check the to-do list in the Module 5 Roadmap to make sure you have completed all of the activities listed there before moving on to Module 6!
Further Reading
1. Brady, N.C. and Weil. R.R. 2016. The Nature and Properties of Soils. Columbus: Pearson. A very readable and visual textbook that gives an extremely comprehensive treatment of soil science.
2. Dybas, C.L.,2005 "Dead Zones Spreading in World Oceans" Bioscience 55(7): 552-557 - freely available article in Bioscience journal.
3. Scoones, I. (2010). Dynamics and Diversity: soil fertility and farming livelihoods in Africa, case studies from Ethiopia, Mali, and Zimbabwe. Earthscan.
6.04: Summative Assessment- N and P Balances
Introduction
The last page of module 5.2 mentions the twin issues of deficit and surplus that are principal challenges in the management of soil nutrients. The exercise in this summative assessment requires you to use real data on nutrient inputs and outputs from two systems to create nutrient balances, and then analyze the situation of nutrient balance or surplus. These systems are the Ohio River sub-basin of the Mississippi River basin and measurements of nutrient flow from hillside farming in the Bolivian Andes. You should do this activity with a partner or small group in class, and prepare to discuss your results with the class. You will use data from a table to answer questions on the assessment worksheet (download below).
In analyzing the twin issues of nutrient surplus and nutrient shortage in soils and food production systems, you'll be practicing a geoscience "habit of mind" of systems thinking. In other words, to examine the wider impacts of nutrient management or the causes of soil infertility, we need to expand our focus from a single field to a landscape or river basin and think about a web of linkages between farmers, nutrient supplies, economic factors, and watersheds, among other system components. This allows us to contemplate these challenges in the proper frame and over the right timescale.
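Before opening the worksheet, it may help to see the bare bones of the calculation you will do. A nutrient balance is bookkeeping: inputs minus outputs over a defined boundary and time period. The sketch below uses invented numbers purely for illustration; the real values come from the worksheet table.

# A nutrient balance is inputs minus outputs over a defined system boundary.
# All numbers below are invented for illustration (kg N per hectare per year).
inputs = {"fertilizer": 90, "legume_fixation": 30, "atmospheric_deposition": 10}
outputs = {"harvested_crops": 85, "leaching_and_runoff": 20, "erosion": 5}

balance = sum(inputs.values()) - sum(outputs.values())
print(f"N balance: {balance:+d} kg/ha/yr")
# A positive balance is a surplus (a risk to downstream waterways);
# a negative balance is a deficit (the soil is being "mined").

Here the invented system runs a +20 kg/ha/yr surplus; in the assessment you will decide whether each real system is in surplus or deficit and what that implies for waterways or for soil fertility.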
Download the worksheet to complete and turn in your assessment. The worksheet contains information in a table that you will need to complete the assignment.
Submitting your Assessment
Please submit your assignment in Module 5 Summative Assessment in Canvas.
Introduction
Agroecosystems are coupled Human-Nature Systems that are shaped by ecological and human socioeconomic factors.
Agricultural practices that humans use are determined by multiple agroecological factors including climate, soil, native organisms, and human socioeconomic factors. Usually, climate and soil resources are the most significant natural factors that determine the crops and livestock that humans produce. In some cases, however, to overcome climate and soil limitations, humans alter the environment with technology (e.g. irrigation or greenhouses) to expand the range of food and fiber crops that they can produce. In this module, we will explore how climate and soil influence crop plant selection; crop plant characteristics and classifications; and some socioeconomic factors that influence the crops that humans choose to grow.
Goals
• Describe key features of categories of crop plants and how they are adapted to environmental and ecological factors.
• Explain how soil and climatic features determine what crops can be produced in a location, and how humans may alter an environment for crop production.
• Classify environments as high or low resource environments and interpret how both environmental and socio-economic factors contribute to crop plant selection (coupled human-nature systems); and the pros and cons of the cultivation of various crop types.
Learning Objectives
After completing this module, students will be able to:
• Define annual and perennial crops and list some examples of annual and perennial crops.
• Distinguish and explain why annual or perennial crops are cultivated in high resource or resource-limited environments.
• Explain some ways that farmers alter the environment to produce annual or perennial crops.
• Name some major crop plant families with some example crops.
• Explain the nutrient significance of legumes.
• Describe key plant physiological processes and how climate change may influence crop plant growth and yield.
• Classify major crop plants into types including plant families, temperature adaptation, and photosynthetic pathways.
• Formulate an explanation of the advantages and disadvantages of producing annual and perennial crops.
• Interpret what environmental, ecological, and socioeconomic factors influence what crops farmers produce.
• Distinguish some environmental, ecological and socioeconomic advantages and disadvantages of producing types of crops.
Assignments
Module 6 Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
To Read
1. Materials on the course website (you are on the course website now)
2. Virginia Cooperative Extension: The Organic Way - Plant Families (online)
3. Penn State Extension: Seasonal Classification of Vegetables (online)
4. Plant & Soil Sciences eLibrary: Transpiration: Water Movement through Plants (online)
5. National Climate Assessment Report: Introduction and Section 1 Increasing Impacts on Agriculture (online)
6. USDA: Background: Corn (online)
To Do
1. Formative Assessment: NASS Geospatial Map Crop Scape Annual and Perennial Crop Analysis and Interpretation of Advantages and Disadvantages (in course content; then submit in Canvas as a discussion post)
2. Summative Assessment: Top 15 World Food Commodities (in course content; then take the quiz in Canvas)
3. Take Module Quiz (in Canvas)
4. Turn in Capstone Project Stage 2 Assignment (in Canvas)
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
07: Crops
Climate, soil resources, and the organisms in the environment influence which food and fiber crop plants humans can produce. To overcome environmental resource limitations, humans also alter the environment to produce food and fiber crops.
7.01: Crop Life Cycles and Environments
Plants need light, water, nutrients, an optimal temperature range, and carbon dioxide for growth. In a natural environment, the availability of plant resources is determined by the:
• soil fertility, soil depth, and soil drainage
• climate: the seasonal temperature and precipitation distribution
• competition with other plants, herbivory by other organisms, and pathogens
• the frequency of environmental disturbances (for example from fire, floods, and herbivory).
In some environments, nutrients, light, and water are readily available and temperatures and the length of the growing season are sufficient for most annual crops to complete their lifecycle; we will refer to these as high resource environments for crop production. High resource environments tend to have soils that are fertile, well-drained, deep, and generally level, as well as growing seasons with temperatures and precipitation that are optimal for most plant growth. In general, in environments where competition for resources among plants is low, annual plants with more rapid growth rates tend to dominate (Lambers et al, 1998). Consequently, humans tend to cultivate annual plants with high growth rates in high resource environments.
By contrast, in low-resource environments plant growth may be limited due to soil features and/or climatic conditions. Soils may be sloped, with limited fertility, depth, and drainage; and/or the growing season may be short due to extended dry seasons and/or long winters (with temperatures at or below freezing). In natural ecosystems, resources can be limited due to competition among plants, such as in a forest or grassland where established plants limit the light, water, and nutrients for new seedlings. And in these environments where resources are limited, plants with slower growth rates and perennial life cycles tend to succeed (Lambers et al, 1998), and perennials are often the primary crops that humans cultivate in resource-limited environments.
Annual plants grow, produce seeds, and die within one year. In general, annual plants evolved in environments where light, water, and nutrients were available, and they could consistently reproduce in one year or less. Where resource availability is high, plants that can germinate and grow rapidly have a competitive advantage capturing light, nutrients, and water over slower growing plants and are more likely to reproduce. To ensure the survival of their offspring, annuals allocate the majority of their growth to seeds (often contained in fruit); and they tend to produce many seeds.
Human selection of annual crop plants typically further selected for large seeds and/or fruit. Some examples of annual crop plants are corn, wheat, oats, peppers, and beans (see photos). What are some other examples of annual crop plants?
Figure 6.1.1.: Cornfield. Credit: Heather Karsten
Figure 6.1.2.: Wheatfield. Credit: Heather Karsten
Figure 6.1.3.: Corn Grain. Credit: Heather Karsten
Figure 6.1.4.: Peppers. Credit: pixabay
Figure 6.1.5.: Beans. Credit: pixabay
Annual crop plants are generally categorized by the season that falls in the middle of their growth cycle: spring, summer, or winter. For instance, summer annuals are generally planted in late spring, grow and develop through summer, and complete their lifecycle by late summer or autumn. Winter annuals are generally planted in early autumn and germinate and grow in autumn. Depending on how cold the winter is where they are cultivated, winter annuals may grow slowly in winter or become dormant until spring. In spring, they grow, flower, and produce seed by early to mid-summer (see Figure 6.1, Annual Crop Types). After an annual crop is harvested, in some regions farmers may be able to plant another crop, such as a winter annual crop after a spring annual crop; this is referred to as double-cropping (cultivating two crops in one year). If only one crop is cultivated in a season, the soil may be left exposed until the next growing season. Leaving crop residue on the soil can reduce erosion, but planting another crop with live plant roots and aboveground vegetation provides better soil protection against water and wind erosion. Alternatively, a cover crop may be planted after the harvested crop to protect the soil from erosion and provide other benefits until the next crop is planted. Cover crops are typically annual crops that can establish quickly; you will learn more about cover crops in Module 7.
Figure 6.1.6.: Annual seasonal crop types with their approximate growing seasons in Northeastern US, Credit: Heather Karsten
Click for a text description of the Annual Seasonal Crop Types image.
Spring annual (April-July): oats, spring barley, spring canola, peas, leafy greens, broccoli
Winter annual (September-July): winter wheat &barley, cereal rye, hairy vetch, crimson clover, triticale, winter canola
Summer annual (June-October): corn, soybeans, sorghum, peppers, string beans, cucumber, summer squash, eggplants
7.1.03: 3 Perennials
Biennials are plants that live and reproduce in two years, and at the other end of the life cycle spectrum are perennial plants that live for 3 or more years. Perennials evolved in environments where resources were limited, often due to competition with other plants, and their growth rates tend to be slower than those of annual plants (Lambers et al, 1998). In these resource-limited environments, plants often cannot germinate from seed and reproduce by seed within one year. Therefore, to increase their opportunities for successful reproduction, perennials evolved ways to grow and survive for multiple years to successfully produce offspring. Perennial crops are typically cultivated in environments that may also have a climatic limitation such as a short growing season or dry climate, or where plants' access to resources may be limited due to frequent disturbance such as grazing.
To survive for multiple years, perennials allocate a high proportion of their growth to vegetative plant parts that enable them to access limited resources and live longer. For instance, they often invest in extensive and deep root systems to access water and nutrients, or in tall and wide-reaching aboveground stems and shoots to compete for light, such as bush and tree trunks and branches. Perennials also store reserves to regrow after growth limiting conditions such as drought, freezing winters, or disturbance such as grazing. Carbohydrates, fat, and protein are stored in stems and roots, or modified stems such as tubers, bulbs, rhizomes, and stolons. In many plant species, these storage organs can produce root and shoot buds that can grow into independent offspring or clonal plants; this is called vegetative reproduction. Although most perennials reproduce both through seed and vegetative reproduction, in resource-limited environments where plant competition is high, the large storage organs and their reserves offer vegetative offspring plants a competitive advantage over starting from seed.
Perennial Crops
Humans have cultivated and selected perennial crop plants for their vegetative plant parts, storage organs, fruit, and seeds. For instance, the leaves and stems are the primary plant parts harvested from perennial forage crops (crops in which most of the aboveground plant material is grazed or fed to animals). Horticultural perennial crops that are harvested for stems and leaves include asparagus, rhubarb, and herbs. In some cases, a perennial crop's storage organs are harvested each year, limiting the plant's ability to complete its perennial lifecycle and effectively reducing its cultivated lifecycle to an annual. Examples of such perennial crops that are cultivated as annuals include potato, sweet potato, and taro. Tree, shrub, and vine food crops managed as perennial crops are typically cultivated for their fruit and seeds, such as apples, stone fruit (e.g. peach, plum), plantains, nuts, berries, and grapes (see photos below).
Figure 6.1.7.: Perennial alfalfa field in Pennsylvania. Credit: Heather Karsten
Figure 6.1.8.: Apple orchard in Washington. Entomologist Brad Higbee (left) with Jerry Wattman, manager of this apple orchard near West Parker Heights, Washington. Credit: Scott Bauer of USDA
Figure 6.1.9.: Potatoes in a supermarket in Lima, Peru. Credit: Heather Karsten
Figure 6.1.10.: Perennial tree crops at an open-air market in Peru. Credit: Heather Karsten
Figure 6.1.11.: Foreground: Perennial grassland grazed by Angus cattle in western Montana, where precipitation and high daytime temperatures in summer limit plant growth. In the background, perennial grasses and broadleaf plants including shrubs and trees on the foothills of the mountain range have deep roots to access soil moisture that also help reduce soil erosion. Credit: Heather Karsten
Figure 6.1.12.: Rangeland in Southwest Utah where precipitation and high temperatures limit plant growth. Credit: Heather Karsten
Figure 6.1.13.: Alfalfa roots. Credit: Kulbhushan Grover
Knowledge Check (flashcards)
Consider how you would answer the question on the card below. Click "Turn" to see the correct answer on the reverse side of the card.
Card 1:
Front: What are some other perennial crop plants?
Back: Perennial crop plants include alfalfa, perennial grasses, alfalfa roots, potatoes, raspberries, bananas, oranges.
Annual plants are typically cultivated in high-resource environments and regions with:
• climates that have sufficient precipitation and temperatures for plants to complete their life cycle each year
• soils that tend to be relatively flat and well drained, and are not prone to erosion when they are tilled or planted to an annual crop each year
• high fertility soil
Annual crops produce grain and fruit within one growing season. Grain crops are typically a concentrated source of carbohydrates, protein, and sometimes fat that can be cost-effectively stored and transported long distances, enhancing their market options and utility. Grain and oilseed annual crops are often processed for multiple uses and markets. For instance, oil is extracted from soybeans for industrial and human uses, and the remaining meal is high in protein and is used for both human food products and livestock feed.
If conditions are not ideal for annual crops, farmers sometimes use management practices or technologies to improve conditions for crop growth, such as irrigation to compensate for a lack of precipitation, or black plastic to warm the soil in environments where temperatures may limit plant growth.
Knowledge Check (flashcard)
Consider how you would answer the question on the card below. Click "Turn" to see the correct answer on the reverse side of the card.
Card 1:
Front: What are some other examples of practices or technologies that farmers might use to increase annual crop production?
Back: Examples include: soil amendments such as fertilizer, lime, and organic matter amendments (compost, manure, cover crops); season-extension structures such as solar-heated hoop houses or greenhouses; shorter-season crop varieties; and tile drains to improve soil drainage.
Regions where perennial crops dominate the landscape tend to have soil or climatic limitations such as steep or hilly slopes that are prone to erosion; shallow or poorly drained soils; soil nutrient limitations; limited precipitation and soil moisture availability; short growing seasons; or temperatures outside of optimal plant growth ranges. In these environments, farmers may produce annual crops that are adapted to the environment, such as spring or winter wheat that grow during the cooler season, or drought-tolerant annuals such as sorghum and pearl millet. Or farmers may use technologies and management practices, particularly for high-value crops, to improve conditions for crop growth, such as tile drains, irrigation, or season extension technologies.
See the illustration below comparing plant life cycles and their timing and forms of reproduction. Can you name a specific crop plant example for each type of plant life cycle?
Figure 6.1.14.: Illustration and comparison of plant lifecycles. Credit: Bellinder, R. R.; R. A. Kline, D. T. Warholic. 1963. Weed control for the Home Garden. Cornell Cooperative Extension Bulletin 216. Figure 1, Pg. 6.
7.1.05: 5 Perennials and Soil Conservation
Because perennials allocate a high proportion of their growth to vegetative structures and regrow for many years, they can: i. protect soil from erosion; ii. return organic matter (carbon-based materials that originated from living organisms) to the soil, providing multiple soil health benefits; and iii. remove carbon dioxide from the atmosphere, potentially sequestering (storing) carbon in the soil or aboveground plant biomass. Forests, for example, sequester carbon above-ground in trees and in below-ground root systems.
Perennial grasses, in particular, have dense, fibrous roots that protect soil from erosion well and are valuable plants for soil conservation. In addition, over the years, some perennial roots and aboveground plant tissues die when environmental conditions limit growth (e.g. drought, winter, grazing), and accumulate organic matter and nutrients in the soil. The majority of the most fertile and deep agricultural soils of the world were formed under natural perennial grasslands, whose deep root systems accumulated organic matter in the soil, contributing many beneficial soil properties as well as carbon sequestration. Some annual crops can also contribute to conserving soil and add organic matter to the soil if a large portion of the crop residue is left on the soil surface, such as corn stalks left on a field after the grain is harvested.
Figure 6.1.15.: Perennial grass roots, belowground rhizomes, and aboveground plant tissues provide year-round protection from soil erosion on a sloping field in Pennsylvania, while also providing forage for ruminant dairy cows during the spring, summer and autumn months. Credit: Heather Karsten
Figure 6.1.16.: Some cool-season perennial grasses with rhizomes indicated by the red arrows. Credit: Maria Carlassare
Figure 6.1.17.: Perennial grass mowed and drying for hay harvest on a steeply sloped field in Pennsylvania. Credit: Heather Karsten
Figure 6.1.18.: Perennial grass mowed and drying for hay harvest on a steeply sloped field in Pennsylvania. Credit: Heather Karsten
Knowledge Check (flashcard)
Consider how you would answer the question on the card below. Click "Turn" to see the correct answer on the reverse side of the card.
Card 1:
Front: Can you name some well-known, high-value perennial crops that are produced in the mountainous regions on the steep slopes of the following countries: Switzerland, Costa Rica, Colombia, Peru, Italy?
Back: Answers could include forage crops for animals that produce milk and meat, coffee, chocolate, cashews, avocados, bananas, plantains, oranges, potatoes, olives, grapes, etc.
In addition to their lifecycles, crop plants are characterized and classified in multiple ways that are relevant for crop production and management. Common plant features include similar morphology, growth, and reproduction, as well as environmental and climatic adaptations. This module will help you understand more about how crops are adapted to different environments and diversified to interrupt pest lifecycles.
7.02: Crop Plant Characteristic Classification and Climatic Adaptations
Plants that have similar flowers, reproductive structures, and other characteristics, and are evolutionarily related, are grouped into plant families (see Figure 2). Species in the same plant family tend to have similar growth characteristics, nutrient needs, and often the same pests (pathogens, herbivores). Planting crops from different plant families on a farm and across the landscape, and rotating crops of different plant families over time, can interrupt crop pest life cycles, particularly those of insect pests and pathogens, and reduce yield losses due to pests. Increasing plant family diversity can also provide other agrobiodiversity benefits, including diverse seasonal growth and adaptation to weather stresses such as frosts and drought; different soil nutrient needs; and the production of diverse foods that provide for human nutritional needs.
Figure 6.2.1.: The Plant Family Tree. Credit: The U. S. Botanic Garden and the National Museum of Natural History, Department of Botany, Smithsonian Institution.
Click for a text description of Figure 6.2.1.
The Plant Family Tree Ancestral Green Algae
1. Modern Green Algae - single and multicellular algae
1. Volvox, Spirogyra
2. Seedless Non-vascular - plants with no veins and no seeds
1. Liverworts
2. True Mosses
3. Seedless Vascular - plants with veins and NO seeds
1. Selaginella
2. Quillworts
3. Club Mosses
4. Whisk Ferns
5. Adder's Tongue
6. Water Ferns
7. Royal Ferns
8. Horsetails
9. Climbing Ferns
10. Tree Ferns
11. Common Fern
4. Gymnosperms - Plants with cones
1. Ephedra
2. Welwitschia
3. Firs
4. Pines
5. Ginkgo
6. Cycad
7. Yew
8. Sequoia
9. Cypress
10. Podocarpus
11. Monkey Puzzle
5. Angiosperms - Flowering Plants
1. Dicots
1. Euphorbs
2. Violets
3. Willows
4. Mustard
5. Papaya
6. Cacao
7. Mallow
8. Maples
9. sumacs
10. Citrus
11. Roses
12. Elms
13. Hops
14. Mulberries/figs
15. Beans
16. Begonias
17. Cucumbers
18. Oaks
19. Walnuts
20. Birch
21. Coffee
22. Milkweeds
23. Gentians
24. Morning Glories
25. Tomatoes
26. Scrophs/Snapdragons
27. African Violets
28. Holly
29. Olive
30. Mints/Verbena
31. Ginseng
32. Carrots
33. Sunflowers
34. Pawpaw
35. Magnolia
36. Laurel
37. Pepper
38. Poppies
39. Grapes
40. Buttercup
41. Eucalyptus
42. Evening Primrose
43. Geraniums
44. Sedum
45. Peonies
46. Currants
47. Sycamore
48. Star Anise
49. Water Lilies
50. Sundew
51. Mistletoe
52. Carnations
53. Beets
54. Cacti
55. Portulaca
56. Blueberries
57. Impatiens
2. Monocots
1. Amborella
2. Aroids
3. Yams
4. Lilies
5. Iris
6. Palms
7. Pineapple
8. Sedges
9. Dayflowers
10. Bananas
11. Gingers
12. Cannas
13. Grasses
14. Orchids
15. Asparagus
16. Agave
17. Amaryllis
18. Onions
19. Daylilies
20. Aloe
Read this summary of the major world food crop plant families and the value of knowing what family plants are in, The Organic Way - Plant Families, then consider these questions.
1. What plants are your five favorite foods produced from?
2. What plant families are they in?
3. Are they annuals or perennials?
The Fabaceae/Leguminosae, commonly called the legume plant family, is important for soil nitrogen management in agriculture and for soil, human, and animal nutrition. Legume plants can form a mutualistic, symbiotic association with Rhizobium bacteria, which inhabit legume roots in small growths or nodules (see images in the video listed below). The rhizobia bacteria have enzymes that can take up nitrogen from the atmosphere, and they share the "fixed nitrogen" with their legume host plant. Nitrogen is an important nutrient for plants and animals; it is a critical element in amino acids and proteins, genetic material, and many other important plant and animal compounds. Legume grain crops, also called pulses, such as many species of beans, lentils, peas, and peanuts, are high in protein. Most of their plant nitrogen is harvested in the grain, although there is some in crop residues that can increase soil nitrogen content. Perennial legume crops are typically grown as forage crops for their high protein content for animals. Because they allocate a large portion of their growth to vegetative plant parts and storage organs, perennial legumes also return a significant quantity of nitrogen to the soil, enhancing soil fertility for non-legume crops grown in association or in rotation with legumes.
Watch the following NRCS video about legumes and legume research.
Video: The Science of Soil Health: Understanding the Value of Legumes and Nitrogen-Fixing Microbes (2:30)
Click for a transcript of the Science of Soil Health video.
Legumes and cash and cover crops use natural symbiotic relationships with soil microbes to get nitrogen into the soil. NC State University's Dr. Julie Grossman is working to provide farmers with new insights on how to harness this resource. My work really involves looking at legumes to try to figure out how we can really make them make them the most efficient nitrogen source we possibly can, by looking at the microbial component of the legume-rhizobia symbiosis. And we work a lot with organic farmers simply because right now, that's those are the farmers who are really interested in using legumes for nitrogen supply. As nitrogen prices go up we're gonna need to turn to some of these alternative processes such as nitrogen fixation. And when that happens, we need to be able to hit the ground running. We can't say, “okay now we're gonna start doing the research.” We really want to get to know how, when you take a bacteria, a strain of bacteria, and you look at its DNA, how does it differ from other strains of bacteria. Because you can have some that are very high performers and they fix a lot of nitrogen and you can have others that don't really do a heck of a lot for the plant. In my mind, what would really help the farmers is trying to understand the tools they can use as farmers to help increase nutrient supply to their crop plants. So try to figure out how much nitrogen is supplied when they put a legume in the soil and let it decompose, how that is released when it's released, how we can get more nitrogen into the legume by enhancing the fixation ability of the microbes. So all these little pieces will help us be able to help farmers develop their own research, their own experimentation, so they don't need to rely on the recipes. They can say, “Oh, I know that if I can calculate a square meter of legume biomass and I can calculate how much I have and how much nitrogen is in that square, I can then figure out on my whole field, how much nitrogen is being added through this legume to my soil.” And so those are the kinds of things I really want to give to farmers, in terms of having them understand how they can control their own biological process, in their fields on their own, and not have to rely on recipes.
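The calculation Dr. Grossman sketches at the end of the transcript can be written out explicitly. Below is a minimal sketch with invented sample numbers; the quadrat biomass and the roughly 3% tissue nitrogen concentration are illustrative assumptions, not measurements from any study.

# Scale legume nitrogen measured in a small quadrat up to a whole field,
# as described at the end of the video. All sample numbers are invented.
biomass_g_per_m2 = 400    # dry legume biomass clipped from a 1 m^2 quadrat
tissue_n_fraction = 0.03  # assume roughly 3% N in legume tissue (illustrative)
field_area_ha = 2.0       # field size; 1 hectare = 10,000 m^2

n_g_per_m2 = biomass_g_per_m2 * tissue_n_fraction  # 12 g N per m^2
n_kg_per_ha = n_g_per_m2 * 10_000 / 1_000          # 120 kg N per hectare
total_n_kg = n_kg_per_ha * field_area_ha           # 240 kg N across the field

print(f"{n_kg_per_ha:.0f} kg N/ha, {total_n_kg:.0f} kg N on the whole field")

Note that only part of this nitrogen becomes available to the next crop as the residue decomposes; when and how much is released is exactly the question the research described above aims to answer.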
Knowledge Check (flashcard)
Consider how you would answer the question on the card below. Click "Turn" to see the correct answer on the reverse side of the card.
Card 1:
Front: What are some of the benefits of including legumes in a crop rotation?
Back: Legumes return some nitrogen to the soil, particularly perennial legumes that allocate a proportion of their resources to vegetative growth (ex. roots and shoots) and storage organs. Legumes also produce a crop that is high in protein, whether it is a high protein annual grain legume (also called a pulse) or perennial crop.
In addition to characterizing plants by their taxonomic plant family, crop plants are also classified as either cool season or warm season, referring to the range of temperatures that is optimum for their growth. Examples of cool-season agronomic crops include wheat, oats, barley, rye, and canola; many forage grasses, such as perennial ryegrass, timothy, orchardgrass, tall fescue, smooth bromegrass, and the bluegrasses, are called cool-season grasses. Warm-season agronomic crops include corn or maize, sorghum, sugarcane, millet, peanut, cotton, soybeans, and switchgrass.
Learn more about the differences in cool and warm season plants and the types of vegetable crops in these categories by reading Seasonal Classification of Vegetables.
In addition, plants are classified by the type of photosynthetic pathway that they have.
Plant Photosynthesis, Transpiration, and Response to Changing Climatic Conditions
Plants require light, water, and carbon dioxide (CO2) in their chloroplasts, where they create sugars for energy through photosynthesis. The chemical equation for photosynthesis is:
6 CO2 + 6 H2O → C6H12O6 + 6 O2
Carbon dioxide (CO2) enters plants through stomata, which are openings on the surface of the leaf that are controlled by two guard cells. The guard cells open in response to environmental cues, such as light and the presence of water in the plant.
Figure 6.2.2.: Stomate on a Tomato leaf Credit: Wikimedia Commons, Tomato leaf stomate, Public Domain
For a brief and helpful review of photosynthesis and plant anatomy such as the plant leaf structures, see Plant Physiology - Internal Functions and Growth.
Water (H2O) enters the plant from the soil through the roots, bringing with it important plant nutrients in solution.
Transpiration, or the evaporation of water from the plant, contributes to a "negative water potential." The negative water potential creates a driving force that moves water against the force of gravity, from the roots, through plant tissues in xylem cells, to leaves, where it exits through the leaf stomata. Since the concentration of water is typically higher inside the plant than outside the plant, water moves along a diffusion gradient out through the stomata. Transpiration is also an important process for cooling the plant. When water evaporates, or liquid water molecules are converted to a gas, energy is required to break the strong hydrogen bonds between water molecules; this absorption of energy cools the plant. This is similar to when your body perspires: the liquid water molecules absorb energy and evaporate, leaving your skin cooler.
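To put a rough number on this cooling effect (a back-of-the-envelope illustration, not a figure from the assigned readings): evaporating water absorbs about 2.45 MJ of energy per kilogram near 20 °C, so a crop transpiring 5 mm of water in a day (5 kg per square meter of field) dissipates on the order of

5\ \mathrm{kg\,m^{-2}} \times 2.45\ \mathrm{MJ\,kg^{-1}} \approx 12\ \mathrm{MJ\,m^{-2}}

of heat, a substantial share of the solar energy reaching a field on a clear summer day.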
Figure 6.2.3.: Picture of water molecules leaving stomata - side view. Credit: USDA National Institute of Food and Agriculture, found at Plant & Soil Sciences eLibrary
Carbon dioxide (CO2) also diffuses into the plant through the stomata, because the concentration of carbon dioxide is higher outside of the plant than inside the plant, where the carbon dioxide concentration is lower due to photosynthesis fixing the carbon dioxide into sugars. To conduct photosynthesis, plants must open their leaf stomata to allow carbon dioxide to enter, which also creates the openings for water to exit the plant. If water becomes limited, such as in drought conditions, plants generally reduce the degree of stomatal opening (also called "stomatal conductance") or close their stomata completely, limiting carbon dioxide availability in the plant.
Figure 6.2.4.: Schematic of gas exchange across plant stomata. Credit: ASU School of Life Sciences, Snacking on Sunlight
Read more about how water moves through the plant and factors that contribute to water moving into the roots and out of the plant, as well as carbon dioxide movement in Transpiration - Water Movement through Plants.
Knowledge Check (flashcard)
Check Your Understanding
After completing the above reading about transpiration and factors that contribute to water entering and leaving the plant, consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: How does transpiration influence the temperature of the plant?
Back: Transpiration or water evaporation from the plant stomata cools the plant, protecting important plant enzymes and plant processes, such as photosynthesis and pollen formation.
Card 2:
Front: In many regions, climate change projections call for warmer air temperatures, which will likely increase evapotranspiration (the loss of water as a gas from soil and plants) and contribute to soil drying. If soil water is limited, plants tend to reduce their stomatal opening or close their stomata to conserve water. How will reduced transpiration impact the plant's physiological ability to cool itself and avoid overheating?
Back: If stomatal opening and transpiration are reduced, evaporative cooling will be reduced and plant temperature will increase.
The majority of plants and crop plants are C3 plants, referring to the fact that the first carbon compound produced during photosynthesis contains three carbon atoms. Under high temperature and light, however, oxygen has a high affinity for the photosynthetic enzyme Rubisco. Oxygen can bind to Rubisco instead of carbon dioxide, and through a process called photorespiration, this reduces C3 plant photosynthetic efficiency and water use efficiency. In environments with high temperature and light, which also tend to have limited soil moisture, some plants evolved C4 photosynthesis. A unique leaf anatomy and biochemistry enable C4 plants to bind carbon dioxide when it enters the leaf and produce a 4-carbon compound that transfers and concentrates carbon dioxide in specific cells around the Rubisco enzyme, significantly improving the plant’s photosynthetic and water use efficiency. As a result, in high-light, high-temperature environments, C4 plants tend to be more productive than C3 plants. Examples of C4 plants include corn, sorghum, sugarcane, millet, and switchgrass. However, the C4 anatomical and biochemical adaptations require more plant energy and resources than C3 photosynthesis, so in cooler environments, C3 plants are typically more photosynthetically efficient and productive.
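Pulling together the classifications introduced so far, the toy lookup below encodes example crops named in this module, with season type from the earlier list and photosynthetic pathway from the paragraph above. It is only a study aid, not an exhaustive database.

```python
# Example crops from this module, tagged by season type and pathway.
CROPS = {
    "wheat":       ("cool-season", "C3"),
    "oats":        ("cool-season", "C3"),
    "barley":      ("cool-season", "C3"),
    "canola":      ("cool-season", "C3"),
    "soybean":     ("warm-season", "C3"),
    "peanut":      ("warm-season", "C3"),
    "corn":        ("warm-season", "C4"),
    "sorghum":     ("warm-season", "C4"),
    "sugarcane":   ("warm-season", "C4"),
    "millet":      ("warm-season", "C4"),
    "switchgrass": ("warm-season", "C4"),
}

def describe(crop):
    season, pathway = CROPS[crop]
    return f"{crop}: {season}, {pathway} photosynthesis"

print(describe("corn"))     # corn: warm-season, C4 photosynthesis
print(describe("soybean"))  # soybean: warm-season, C3 photosynthesis
```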
Since carbon dioxide is the gas that plants need for photosynthesis, researchers have studied how elevated CO2 concentrations affect C4 and C3 plant growth and crop yields. Although C3 plants are not as adapted to warm temperatures as C4 plants, photosynthesis of C3 plants is limited by carbon dioxide, and as one would expect, research has shown that C3 plants have benefitted from increased carbon dioxide concentrations with increased growth and yields (Taub, 2010). By contrast, because of their adaptations, C4 plants are not as limited by carbon dioxide, and under elevated carbon dioxide levels, the growth of C4 plants did not increase as much as that of C3 plants. In field studies with elevated carbon dioxide levels, yields of C4 plants were also not higher (Taub, 2010). In addition, when soil nitrogen was limited, the C3 plant response to elevated CO2 concentration was reduced, or crop plant nitrogen or protein content was reduced, compared to plants grown in high soil N conditions (Taub, 2010). These results suggest that crops will likely require higher soil nutrient availability to benefit from elevated atmospheric carbon dioxide concentrations. For optional further reading about C3 and C4 plant responses to elevated carbon dioxide concentrations, see the following summary of research that is also listed in the additional reading list, Effects of Rising Atmospheric Concentrations of Carbon Dioxide on Plants.
Other Drought-Tolerant Crop Plant Traits
Some additional plant traits that help plants tolerate drought and heat stress include deep root systems (typical of perennials) and/or thick leaves with waxes that reduce water loss and the rate of transpiration. In addition, some plants roll their leaves to reduce the surface area exposed to solar radiation and heating, and some reduce their stomatal conductance (water loss) more than others.
Temperature
Elevated temperatures projected with climate change can have multiple impacts on plant growing conditions. Climate change may lengthen growing seasons in some regions, although day lengths will not change. As planting dates shift with longer growing seasons, crops may also be exposed to high temperatures, moisture stress, and risk of frost. Elevated temperatures may also increase evaporation of water from the soil, reducing soil water availability. Higher temperatures are not necessarily ideal for yield, even when they remain below a plant's optimal temperature. At elevated temperatures, plants develop faster, which tends to, one, reduce the amount of time for photosynthesis and growth, resulting in smaller plants, and two, reduce the time for grain fill, reducing yield, particularly if nighttime temperatures are high (Hatfield et al., 2009). High temperatures can also reduce pollen viability or even be lethal to pollen. The multiple effects of high temperatures on plant physiological processes and soil moisture likely explain why research has found that grain development and yield are often reduced when temperatures are elevated (Hatfield et al., 2009).
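One standard way agronomists quantify this faster development is growing degree days (GDD), the heat units a crop accumulates each day above a crop-specific base temperature. GDD is not discussed explicitly above; the sketch below, using the simple averaging method and a common 10 °C base for corn, is only an illustration of why a warmer season compresses development.

```python
# Growing degree days (GDD), simple averaging method.
def gdd(t_max, t_min, t_base=10.0):
    """Heat units accumulated in one day; a 10 C base is common for corn."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

# In a uniformly 3 C warmer season (hypothetical temperatures), the same
# 100-day window accumulates more heat units, so the crop races through
# its growth stages and grain fill is compressed into fewer days.
print(sum(gdd(28, 16) for _ in range(100)))  # 1200 GDD
print(sum(gdd(31, 19) for _ in range(100)))  # 1500 GDD
```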
Many factors that are projected to change with climate change could influence plant growth. These include carbon dioxide concentration, temperature, precipitation and soil moisture, and ozone concentrations in the lower atmosphere.
Read the Introduction and Key Message 1 (Increasing Impacts on Agriculture) of the National Climate Assessment.
Consider how you would answer the question on the card below. Click "Turn" to see the correct answer on the reverse side of the card.
Card 1:
Front: How will the multiple factors that are projected to change together with climate change (such as temperature, carbon dioxide concentration, and soil moisture availability) likely influence crop plant growth and yields?
Back: Although an increase in carbon dioxide has the potential to increase productivity in some plants, such as C3 plants, in many cases the combination of elevated temperature and ozone and reduced soil moisture availability is likely to outweigh the increased availability of CO2 and result in reduced crop yields.
7.2.04: Agricultural Commodities
Socio-economic Factors
In addition to the climate and soil resources for crop production, many socio-economic factors influence which crops farmers choose to cultivate, including production costs; domestic and international market demand; and government policies that subsidize agricultural producers and reduce trade barriers or export costs. As discussed in Module 3, the protein, energy, fat, vitamins, and micro-nutrients that crops provide for human nutrition are one predictor of the market value of a crop. However, some food crops are highly valued and cultivated for their cultural and culinary qualities, such as flavor (ex. chilies, vanilla, coffee, wine grapes); their high economic value often reflects high production and processing costs, as well as market demand for their unique culinary and cultural properties.
Some crops are cultivated for non-human food uses such as livestock feed, biofuel, fiber, industrial oil and starch, and medicinal uses. Crop processing often creates by-products that can be used for other purposes, adding market value. For example, when oil is extracted from oilseeds such as soybean, the soybean meal by-product is high in protein and is sold for livestock feed or added to human food products. For crops that are cultivated on many acres, often with support from government policies, the consistent, abundant supply of these commodity crops has contributed to the development of multiple processing technologies, uses, and markets. To better understand the factors that contribute to the production of commodity crops, we will now examine two case studies: corn and sugarcane.
Understanding Agricultural Commodities: Two Agricultural Crop Case Studies
In the following two agricultural crop case studies, you will have the opportunity to apply your understanding of crop plant life cycles, classification systems, and crop adaptation to climatic conditions to understand how plant ecological features and human socioeconomic factors influence which crops become some of the major crops produced in the world.
Summary
After completing Module 6, you should now be able to:
• Describe key features of categories of crop plants and how they are adapted to environmental and ecological factors
• Explain how soil and climatic features determine what crops can be produced in a location, and how humans may alter an environment for crop production.
• Describe some plant physiological traits and differences that could influence how plants adapt to climate change.
• Explain how both environmental and socio-economic factors contribute to crop plant selection (coupled human-nature systems).
Reminder - Complete all of the Module 6 tasks!
You have reached the end of Module 6! Double-check the to-do list on the Module 6 Roadmap to make sure you have completed all of the activities listed there before moving on to Module 7.1!
7.04: Summative Assessment- Top 15 World Agricultural Commodities
Instructions
For this summative assessment, you need to have completed the corn and sugarcane agricultural crop case studies in this module. If you have not, go back and read the linked ERS USDA website and watch the FAO video. Then go to the FAO (Food and Agriculture Organization) FAOSTAT site and look up the top 15 agricultural commodities in the world in 2013, or the most recent year for which data are available, as well as the top agricultural commodities in 2000. Download the FAO Top 50 Commodity Changes Key Spreadsheet, which has the ranking and total production of the top 50 commodities for 2000 and 2013. In a spreadsheet, calculate the percentage change in production of the most recent year's top 15 commodities (a short script works just as well; see the sketch below), then answer the questions that follow. Analysis and critical thinking about the data are encouraged.
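If you prefer a script to a spreadsheet, a minimal pandas sketch of the percent-change calculation follows. The commodity names and production values are placeholders, not actual FAOSTAT figures; substitute the data you download.

```python
# Percent change in production, 2000 -> 2013 (placeholder values, not FAO data).
import pandas as pd

df = pd.DataFrame({
    "commodity": ["Commodity A", "Commodity B", "Commodity C"],
    "prod_2000": [500.0, 320.0, 210.0],   # production in 2000 (e.g., million tonnes)
    "prod_2013": [900.0, 400.0, 180.0],   # production in 2013 (e.g., million tonnes)
})
df["pct_change"] = (df["prod_2013"] - df["prod_2000"]) / df["prod_2000"] * 100
print(df.sort_values("pct_change", ascending=False))
```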
Answer the following questions:
1. Describe the crops that are used to produce the top 15 agricultural commodities with the classification systems you have learned in Module 6.
1. In what plant families are they?
2. Are the top agricultural commodities produced from annual or perennial plants or both?
3. Are they cool season, warm season, C3 or C4 plants?
2. Which four commodities have increased in production the most in comparison to the other top 11 commodities? By what percentage has the production (in weight, not dollars) of those top four agricultural commodities changed since 2000?
3. Why has corn production in the US and sugarcane production in Brazil increased recently? What market, agroecological, and socioeconomic factors do the case study readings and FAO video identify as having contributed to the increased production of corn in the US and sugarcane in Brazil?
4. What socioeconomic, agricultural, and environmental factors might explain the significant increase of the four commodities that increased most since 2000 on a global scale?
5. Consider how the increased production of these four commodities likely impacts the soil, nutrient cycling, pest populations, and ecology of an agroecosystem. What are the potential pros and cons of these crops for soil, nutrient cycling, greenhouse gases, and other ecological factors? What are the socio-economic impacts? Distinguish the most significant impacts, and discuss why the expansion of these top four commodities carries significant advantages or disadvantages. The pros and cons may be socioeconomic and/or environmental.
Submitting Your Assignment
Please complete the Module 6 Summative Assessment in Canvas.
08: Capstone Project Stage 2 Assignment
Water, soils, and crops
(Modules 4-6)
The diagram below summarizes the topics you will explore in Stage 2 for your assigned region. In Stage 2 of the capstone, you will engage in spatial thinking and use geographic skills to interpret spatial data (for example, annual precipitation, evapotranspiration, and soils data) and assess how multiple regional factors contribute to determining which crops are produced in your region.
Capstone Stage 2
Click for a text description of the Capstone Stage 2 Diagram.
Capstone Stage 2 (water, soils, and crops): How do water and soils influence regional crops?
• Describe precipitation and water resources
• Assess water pollution
• Describe soil types
• Describe soil impacts
• Describe regional crops
• Discuss relationship between crops, soils, and climate
What to do for Stage 2?
• Download and complete the Capstone Project Stage 2 Worksheet that contains a table summarizing the data you’ll need to collect to complete this stage. Remember, you need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research.
• Add relevant visual figures (i.e. maps, tables, graphics, diagrams) to your PowerPoint.
• Add questions and continue to research the questions in your worksheets.
• Continue building a CHNS diagram to illustrate the connections between the natural system and the human food systems of the region. You may decide that you need multiple diagrams.
Capstone Project Overview: Where do you stand?
At this stage, you should have started to investigate your assigned region and have added information, maps and data to your worksheets and PowerPoint file for Stages 1 and 2.
Upon completion of stage 2, you should have at this point:
1. Continued research and data compilation in the Stages 1 and 2 tables in the associated Stages 1 and 2 worksheets.
• Stage 1: Regional food setting, history of regional food systems, diet/nutrition
• Stage 2: Water resources, soils and crops
2. Added to your PowerPoint file containing the data that you are collecting about the food system of your assigned region. Information you may have:
• Labeled map of your region
• Soil map of your region
• Precipitation and temperature map of your region
• Major crops and crop families grown in your region
3. Continued to record citations for all references and resources you are using in your research. This is a critical step. Every figure, map, piece of data and bit of information you collect from the web, a book, a person, a journal or any other source must be attributed to the source.
4. Added to your list of questions about your region related to key course topics and initiated significant efforts to answer them.
5. Revised your CHNS diagram and/or created a new one incorporating topics from Modules 4, 5, and 6.
Introduction
There are multiple soil conservation practices that can reduce soil erosion and improve soil quality. In this module, you will explore what is meant by soil quality or soil health for agricultural production, as well as how strategic crop selection, crop sequencing, and reduced soil tillage practices in combination are most effective for improving soil quality for agriculture.
Goals
• Describe different types of cropping systems, soil tillage practices, and indicators of soil quality.
• Interpret the effect of cropping systems and soil tillage approaches on soil conservation and quality.
• Distinguish which crop and soil management practices promote soil health and enhanced agroecosystem performance.
Learning Objectives
After completing this module, students will be able to:
• Define and provide an example of some cropping system practices (ex. monoculture, double crop, rotation, cover crop, intercrops).
• Define soil quality and describe some indicators of soil quality.
• Explain some tillage systems and how tillage practices affect soil quality.
• Interpret how the integration of cropping and tillage systems can promote soil conservation and quality.
• Analyze and prescribe some cropping systems and tillage practices that promote soil quality and other agroecosystem benefits.
Assignments
Print
Module 7 Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 7 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Module 7.1: Chapter 1 (Healthy Soil) and Chapter 2 (Organic Matter: What it is and Why it’s so important?) from the book that you can download for free "Building Soils for Better Crops. Edition 3." Sustainable Agriculture Network, USDA. Beltsville, MD.
3. Module 7.2: Chapter 16 (Reducing Tillage) from the book "Building Soils for Better Crops. Edition 3." Sustainable Agriculture Network, USDA. Beltsville, MD.
1. You are on the course website now.
2. Online: Building Soils for Better Crops. Edition 3
3. Online: Building Soils for Better Crops. Edition 3
To Do
1. Formative Assessment: Soil Quality
2. Summative Assessment: Interpreting a 12 Year Summary of Crop and Soil Management Research from New York
3. Take Module Quiz
1. In course content: Formative Assessment; then take quiz in Canvas
2. Summative Assessment (Discussion) in Canvas
3. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
09: Soils and a Systems Approach to Soil Quality
Introduction
Plants and soil interact: soil provides water and nutrients to plants, while plant roots contribute organic matter to the soil, can promote soil structure, and support soil organisms. Above-ground crop residues (non-harvested plant parts such as stems and leaves) can also protect the soil from erosion and return organic matter to the soil. But soil tillage can make soil vulnerable to erosion and can alter soil physical properties and soil biological activity. In Module 7.1, you will learn what is meant by soil health for agricultural production and explore how crop types and cropping systems can impact the soil.
9.01: Cropping Systems and Soil Quality
Recall that in Module 5 we examined how soils, climate, and markets play major roles in determining which crops farmers cultivate. In many cases, farmers cultivate multiple crops with more than one life cycle because the diversity provides multiple benefits, such as soil conservation, interruption of pest life cycles, meeting diverse household nutritional requirements, and reduced market risk. In this module, we examine some ways that farmers cultivate crops in sequence and define some of the terms for this crop sequencing.
A sole crop refers to planting one crop in a field at a time. Recall from Module 5 the seasonal crop types (Figure 7.1.1) and note that different seasonal crops could be planted in succession. A monoculture refers to planting the same crop year after year in sequence (see Figure 7.1.2). By contrast, in a crop rotation, different crops are planted in sequence within a year or over a number of years, as shown in Figures 7.1.3a and 7.1.3b. When two crops are planted and harvested in one season or slightly more than one season, the system is referred to as double cropping, as illustrated in Figure 7.1.4. Where growing seasons are long and/or crop life cycles are short (ex. leafy greens), three crops may be planted in sequence within a season, as a triple crop.
Figure 7.1.1.: Crop Term: Seasonal Types and Example Crops. Credit: Heather Karsten
Figure 7.1.2.: Monoculture. Credit: Heather Karsten
Figure 7.1.3a.: Simple Summer Annual Crop Rotation. Credit: Heather Karsten
Figure 7.1.3b.: Dairy Perennial - Annual Crop Rotation. Credit: Heather Karsten
Figure 7.1.4.: Double-cropped annual crops. Credit: Heather Karsten
Crop rotations and double cropping can provide many soil conservation and soil health benefits that are discussed in the reading assignment at the end of this page and in Module 7.2. Crop rotations can provide additional pest control benefits, particularly when crops from different plant families are rotated, as different families typically are not hosts for the same insect pest species and crop pathogens (see the sketch below). Integrating crops of different seasonal types and life cycles in a crop rotation also interrupts weed life cycles by alternating the time when crops are germinating and vulnerable to weed competition. Rotating annual crops with perennial forage crops that are harvested a couple of times in a growing season also interrupts annual weed life cycles, because most annual weeds don't survive the frequent forage crop harvests.
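The pest-break logic of rotating plant families can be made concrete with a toy check: represent a rotation as an ordered list of crops, look up each crop's family, and flag back-to-back crops from the same family. The family assignments below are standard botany, but the check itself is only an illustration, not a tool from the readings.

```python
# Toy check: does a rotation avoid planting the same plant family twice in a row?
FAMILY = {
    "corn": "Poaceae", "wheat": "Poaceae", "rye": "Poaceae",
    "soybean": "Fabaceae", "alfalfa": "Fabaceae", "canola": "Brassicaceae",
}

def family_breaks(rotation):
    """Return True if consecutive crops always come from different families."""
    families = [FAMILY[crop] for crop in rotation]
    return all(a != b for a, b in zip(families, families[1:]))

print(family_breaks(["corn", "soybean", "wheat"]))  # True: families alternate
print(family_breaks(["corn", "wheat", "soybean"]))  # False: two grasses in a row
```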
When all or most of a crop is grazed or harvested as feed for ruminant livestock, such as dairy and beef cattle or sheep, the crop is referred to as a forage crop. Examples of forage crops include hay and pasture crops, as well as silage, which can be produced from perennial crops and most grain crops. For instance, silage from alfalfa, perennial grass species, corn, oat, and rye is made when most of the aboveground plant material (leaves, stems, and grain in the case of grain crops) is harvested and fermented in a silo or other airtight storage structure. To preserve the silage, air is excluded from the storage structure; microbes on the plant material initially feed on the crop tissues, deplete the oxygen in the structure, and produce acidic byproducts that decrease the pH of the forage. This acidic, oxygen-free environment prevents additional micro-organisms from growing, effectively "pickling" and preserving the forage.
Figure 7.1.5.: Airtight upright silo. Credit: Heather Karsten
Figure 7.1.6.: Bunker silos are packed tightly with heavy equipment and covered with plastic to keep out air and moisture. The bunker silo on the right is uncovered because the silage is being removed to feed to dairy cattle. Credit: Heather Karsten
9.02: Conservation Agriculture- A Systems
Tillage can incorporate soil amendments such as fertilizers; bury weed seeds and crop residues that may harbor diseases and insects; and remove residue that insulates the soil, promoting soil warming and crop seed germination and growth. However, tillage can also cause soil erosion, disrupt soil organisms and soil structure, and remove residues that would otherwise slow water run-off and evaporation and conserve soil moisture. Conservation tillage practices reduce or eliminate tillage, and the integration of perennials and cover crops can also protect soil from erosion and contribute to improving soil quality. In Module 7.2, we explore tillage and cropping practices that farmers can employ and integrate to conserve and improve their soil for long-term farm productivity.
9.03: Summary and Final Tasks
Summary
In this module, you have learned how crop and soil management can protect soil from erosion, improve soil quality, and maintain crop productivity in the long term. Recall that these crop and soil conservation management practices can also help agriculture adapt to climate change, because soil that is high in organic matter can store more carbon, nutrients, and water. In addition, diversifying cropping systems can reduce the risk of weather impacting all of the crops on a farm or in a region, and utilizing a diversity of seasonal crops and varieties can take advantage of longer or potentially different growing seasons.
Reminder - Complete all of the Module 7 tasks!
You have reached the end of Module 7. Double-check the to-do list on the Module 7 Roadmap to make sure you have completed all of the activities listed there before you begin Module 8.1.
Introduction
Agroecosystems have many beneficial species that play important roles in processes such as nutrient cycling, pollination, and pest suppression; but some species, typically called pests, reduce crop or livestock yields and/or quality. This module introduces three types of agricultural pests (insects, weeds, and pathogens) and some of the scientific research, technologies, and management approaches developed to reduce agricultural pest damage.
Goals
• Learn some benefits of insects, some characteristics of insect and weed pests, some challenges associated with insect and weed pest control, and how trophic interactions can contribute to insect pest control
• Learn what IPM is and how to apply the economic threshold concept to interpret if a pest population has reached an economic threshold
• Learn some transgenic pest management technologies and their impact
• Understand how relying on few pest control tactics can select for pest resistance, while integrated pest and weed management can contribute to long-term successful weed and pest management
Learning Objectives
After completing this module, students will be able to:
• Describe characteristics of insect pests and factors that make them successful pests, as well as beneficial characteristics of insects.
• Explain some history of agricultural pesticides.
• Describe factors that contribute to pests evolving resistance to pest control strategies.
• Discuss what IPM is and why it is effective.
• Interpret how to apply pest scouting data and distinguish if pests have reached an economic threshold (a worked sketch of the underlying economic-injury-level formula follows this list).
• Analyze pest management scenarios and describe the agroecosystem benefits of IPM.
• Describe and compare the characteristics of natural ecosystems and agroecosystems, and explain how trophic level interactions and biodiversity may contribute to pest control.
• Describe characteristics of weed pests and factors that make them successful pests.
• Describe categories of weed management tactics with example weed control practices.
• Explain what organisms and factors contribute to crop diseases.
• Explain some recent transgenic pest management technologies and analyze and interpret scientific data about transgenic technologies.
• Differentiate pest control approaches that are likely to be effective in the long term based on IPM principles, and generate or formulate IPM approaches to enhance pest control.
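Several of the objectives above rest on the economic-injury-level (EIL) concept developed in the Alston readings listed below. As a preview, here is a minimal sketch of the standard EIL formula, EIL = C / (V × I × D × K); the numbers are hypothetical, and the Alston fact sheets are the authoritative treatment.

```python
# Economic injury level (EIL): the pest density at which the cost of control
# equals the value of the damage it prevents. Standard form: C / (V * I * D * K).
def eil(cost, value, injury_per_pest, damage_per_injury, control_efficacy):
    """
    cost              : management cost per production unit (e.g., $/acre)
    value             : market value per unit of yield (e.g., $/bushel)
    injury_per_pest   : injury units caused per pest
    damage_per_injury : yield lost per injury unit (e.g., bushels)
    control_efficacy  : proportion of injury prevented by treatment (0-1)
    """
    return cost / (value * injury_per_pest * damage_per_injury * control_efficacy)

# Hypothetical numbers: $12/acre treatment, $4/bushel crop, 0.02 bushel lost
# per pest, treatment preventing 80% of injury.
print(eil(12.0, 4.0, 1.0, 0.02, 0.8))  # 187.5 pests per acre
```

The economic threshold used for scouting decisions is then set somewhat below the EIL, so that treatment happens before the pest population reaches the break-even density.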
Assignments
Print
Module 8 Assignments Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 8 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Pesticide Development: A Brief Look at the History. Taylor, R. L., A. G. Holley and M. Kirk. March 2007. Southern Regional Extension Forestry. A Regional Peer Reviewed Publication SREF-FM-010 (Also published as Texas A & M Publication 805-124).
3. “Use and Impact of Bt Maize” by: Richard L. Hellmich (USDA–ARS, Corn Insects and Crop Genetics Research Unit, and Dept of Entomology, Iowa State Univ, IA) & Kristina Allyse Hellmich (Dept. of Biology, Grinnell College, IA). 2012 Nature Education
4. The Integrated Pest Management (IPM) Concept. D. G. Alston. July 2011. IPM 014-11. Utah State University Extension and Utah Plant Pest Diagnostic Laboratory
5. IPM Pest Management Decision-Making: The Economic-Injury Level Concept. D. G. Alston. July 2011. IPM 016-11. Utah State University Extension and Utah Plant Pest Diagnostic Laboratory:
6. Plant Disease: Pathogens and Cycles. Timmerman, A., Nygren, A., VanDeWalle, B., Giesler, L., Seymour, R., Glewen, K., Shapiro, C., Jhala, A., Treptow, D. 2019. University of Nebraska-Lincoln. Extension.
1. You are on the course website now.
2. Online: Pesticide Development: A Brief Look at the History
3. Online: Use and Impact of Bt Maize
4. Online: The Integrated Pest Management
5. Online: IPM Pest Management Decision-Making: The Economic-Injury Level Concept
6. Online: Plant Disease: Pathogens and Cycles.
To Do
1. Formative Assessment Part 1: Australian Grain Crop IPM and Part 2: Determining the Economic Threshold of Potato Leafhoppers in Alfalfa
2. Summative Assessment: Herbicide Resistant Weed Interpretation
3. Take Module Quiz
1. In course content: Formative Assessment; complete worksheet then take quiz in Canvas
2. In course content: Summative Assessment, then post discussion in Canvas
3. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
10: Pests and Integrated Pest Management
Ecosystems have many trophic levels of organisms, including primary producers, herbivores, omnivores, carnivores, parasites, and decomposers. Agroecosystems are ecosystems managed for food and fiber production that have less diversity and typically fewer trophic interactions than natural ecosystems. But diverse organisms and their trophic interactions provide important functions in agroecosystems, including, for instance, decomposition and nutrient cycling, plant pollination, and pest suppression. Organisms that reduce agricultural productivity and quality are referred to as agricultural pests; these include weeds, pathogens, insects, and other herbivorous organisms. Mammals that graze or browse crops (ex. deer and rodents), and other organisms such as mites (arthropods) and slugs (mollusks), can also reduce crop yields through grazing and seed predation.
10.01: Insects and Integrated Pest Management
Pest species can be present in agroecosystems, but not cause significant crop yield loss or livestock productivity reductions. Why? What factors prevent pest populations from reducing yield? One explanation may be that the crop or livestock is resistant to the pest. For instance, a crop plant may produce compounds that fend off pathogen infection or deter insect feeding. And if environmental conditions and resources are ideal, the plant may be able to grow and recover from pest infestation. What other ecological processes and factors might contribute to agricultural resilience to pests or other stresses such as climate change?
Question 1 - Short Answer
Draw a food web pyramid and label the trophic levels as categories of organisms, with i. primary producers at the bottom, ii. herbivores next, and iii. omnivores and carnivores at the top of the pyramid. Choose a natural ecosystem and list all of the species you can think of that are found at each trophic level in the natural ecosystem. Then draw a second food web pyramid for a type of farm that you are familiar with, and list all of the species you might find at each trophic level. Describe how your natural ecosystem and agroecosystem compare. How do they differ?
Click for answer
Answer:
You should have many more species at each trophic level in the natural ecosystem. Additionally, the genetic diversity within species in the natural ecosystem is typically greater than in the agroecosystem.
Question 2 - Short Answer
The ecologist Odum (1997) summarized some of the major functional differences between natural ecosystems and agroecosystems, shown in the table below. Consider how your natural ecosystem and agroecosystem food pyramids offer examples of these ecosystem differences. How many predatory and parasitic species are there in the natural ecosystem and the agroecosystem? How might the presence of predatory and parasitic organisms impact agricultural pests? How might genetic diversity contribute to pest management and ecosystem stability?
Click for answer
Answer:
Although you may not be familiar with parasitic species such as wasps and nematodes, you likely can think of many predatory species: humans, large and small mammals, predatory birds, rodents, fish, and arthropods (ex. beetles, spiders, ants, etc.)
In natural ecosystems, there tend to be more niches and a higher diversity of species compared to most managed agroecosystems, which are simpler, have fewer predatory and parasitic species, and have less genetic diversity within a species. As the table below indicates, with fewer trophic interactions there are fewer species to reduce pest populations and prevent them from reducing agricultural yield and quality. Further, with low genetic diversity within agricultural species and across the landscape, the agricultural system is more vulnerable to pest outbreaks than natural ecosystems are.
Natural Ecosystems and Agroecosystems
Property | Natural Ecosystem | Agroecosystem
Human Control | Low | High
Net Productivity | Medium | High
Species and Genetic Diversity | High | Low
Trophic Interactions | Complex | Simple, Linear
Habitat Heterogeneity | Complex | Simple
Nutrient Cycles | Closed | Open
Stability (resilience) | High | Low
10.02: Weeds, Transgenic Crops for Pest Management
Weeds are a major crop pest that persists in agricultural ecosystems, and significant resources are allocated to studying weeds and developing technologies to control them. What characteristics make weeds such significant pests, and how can they be controlled? We will employ the plant life cycle terms that you learned in Module 6 to describe weed life cycles and identify effective weed control practices. We will also explore how the principles of integrated pest management are applied in weed management. You will learn about transgenic pest control practices that have been widely adopted for insect and weed control, as well as some plant pathogen management principles.
10.03: Summary and Final Tasks
Summary
Scientists have identified, and continue to study and develop, strategies to reduce the impact of pests in agriculture. Pest species that are subject to one or a few pest control practices over time inevitably develop resistance to the strong selective force. Multiple biological factors and ecological processes, however, influence host-pest population interactions, providing many opportunities to combine pest control tactics and identify new pest control approaches. Climate change will also pose new pest challenges. Some of these challenges are discussed in the online resource that you read parts of in Modules 4 and 5. We highly encourage you to read this short summary of some of the research on Climate Change Impacts in the United States. See the section titled Key Message 2: Weeds, Diseases, and Pests.
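The claim that repeated use of a single tactic selects for resistance can be illustrated with a one-locus selection model; this is a generic population-genetics sketch, not a model from the readings. The resistant type survives treatment at a higher rate, so its frequency ratchets upward every generation, and it ratchets faster the stronger the selection.

```python
# Toy selection model: frequency of a resistant pest type over generations.
def resistance_trajectory(p0, surv_resistant, surv_susceptible, generations):
    """Track the resistant fraction when the same treatment is applied each generation."""
    p = p0
    freqs = [p]
    for _ in range(generations):
        p = p * surv_resistant / (p * surv_resistant + (1 - p) * surv_susceptible)
        freqs.append(p)
    return freqs

# Starting from 1 resistant pest in 1,000, a tactic that kills 90% of
# susceptible pests but only 10% of resistant ones drives resistance
# to near-fixation within a handful of generations.
traj = resistance_trajectory(0.001, surv_resistant=0.9, surv_susceptible=0.1,
                             generations=6)
print([round(p, 3) for p in traj])  # [0.001, 0.009, 0.075, 0.422, 0.868, 0.983, 0.998]
```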
Reminder - Complete all of the Module 8 tasks!
You have reached the end of Module 8! Double-check the to-do list on the Module 8 Roadmap to make sure you have completed all of the activities listed.
Additional Reading:
1. FAO UN More About IPM
2. Cornell University’s Pesticide Safety Education Program (PSEP) Part of the Pesticide Management Education
3. Cullen, E., R. Proost, and D. Volenberg. 2008. Insect Resistance Management and Refuge Requirements for Bt Corn. University of Wisconsin Extension. Pest and Nutrient Management Program.
Introduction
We've seen in previous modules how crucial climate is in food production. Temperature and precipitation are critical factors in the growth of crops, choice of crops, and food production capacity of a given region. In this module, we'll first review the mechanism and projected effects of human-induced climate change. We'll also explore the role that agriculture plays in contributing to human-induced climate change. In the second half of this module, you'll explore the varied impacts that climate change may have on agricultural production. The summative assessment for this module will be an important contribution to your capstone project, as you'll be exploring the potential future climate changes in your assigned regions, and begin proposing strategies to improve the resilience of your assigned region.
Goals
• Outline the basic science behind human-induced climate change and the contribution from agriculture.
• Compare various potential impacts of climate change on our global and local food systems.
• Select strategies that enhance the resilience of food systems in the face of a changing climate.
Learning Objectives
After completing this module, students will be able to:
• Identify climate variables that affect agriculture.
• Explain possible climate change impacts on crops.
• Summarize the mechanisms of human-induced climate change.
• Explain the role of food systems in contributing to climate change.
• Discuss how climate change impacts food production and yield.
• Evaluate how farmers adapt to climate change.
• Differentiate impacts of climate change on climate variables in different regions.
Assignments
Print
Module 9 Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 9 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Climate Change: Evidence, Impacts, and Choices, answers to common questions about the science of climate change - use this document for reference for Module 9.1, and read p. 29 for Module 9.2
3. National Climate Assessment - Agriculture Sector, presents six key messages about impacts of climate change on agriculture
4. Fact sheet from Cornell University's Cooperative Extension about Farming Success in an Uncertain Climate.
5. Advancing Global Food Security in the Face of a Changing Climate, p. 18, Box 4, The Chicago Council on Global Affairs.
1. You are on the course website now.
2. Online: Climate Change Evidence, Impacts, and Choices OR Climate Change Evidence, Impacts, and Choices
3. Online: National Climate Assessment - Agriculture Sector
4. Online: Farming Success in an Uncertain Climate
5. Online: Advancing Global Food Security in the Face of a Changing Climate
To Do
1. Global Climate Change Video Assignment (not graded)
2. Summative Assessment: Climate Change Predictions in your Capstone Region
3. Take Module Quiz
4. Submit Capstone Project Stage 3 Assignment
1. In course content: Global Climate Change Video Assignment
2. In course content: Summative Assessment; then submit in Canvas
3. In Canvas
4. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
11: Food and Climate Change
We hear a lot about global climate change and global warming in the news, especially about the controversy surrounding proposed strategies to reduce carbon emissions, but how well do you understand the science behind why our climate is changing and our planet is warming? In this unit, we'll review the basic science that underpins our understanding of global warming. Agriculture is one of the human activities that contributes carbon dioxide to the atmosphere, so we'll consider those contributions and how they can be reduced. Finally, we'll start to look to the future. What are some of the projections for future temperatures? We need to know what the future projections are so that we can plan to make our food systems more resilient to expected changes.
11.01: Understanding Global Climate Change and Food Systems
Understanding the Science of Climate Change: The Basics
Module 9 focuses on how agriculture contributes to global climate and how climate change will affect global agriculture. In addition, we'll explore agricultural strategies for adapting to a changing climate. But, before we explore the connections between global climate change and food production, we want to make sure that everyone understands some of the basic science underpinning global climate change.
Have you ever thought about the difference between weather and climate? If you don't like the weather right now, what do you do? In many places, you just need to "wait five minutes"! If you don't like the climate where you live, what do you do? Move! Weather is the day-to-day fluctuation in meteorological variables including temperature, precipitation, wind, and relative humidity, whereas climate is the long-term average of those variables. If someone asked you what the climate of your hometown is like, your response might be "hot and dry" or "cold and damp". Often we describe climate by the consistent, expected temperature and precipitation pattern for the geographic region. So, when we talk about climate change, we're not talking about the day-to-day weather, which can at times be quite extreme. Instead, we're talking about changes in those long-term temperature and precipitation patterns that are otherwise quite predictable. A warming climate means that the average temperature over the long term is increasing, but there can still be cold snowy days, and even blizzards!
The two videos below are excellent introductions to the science of climate change. We'll use these videos as your introduction to the basic science behind our understanding of climate change that we'll build on as we explore the connections between climate change and food production in the rest of this module. Follow instructions from your instructor for this introductory section of Module 9.
Optional Video Climate Change: Lines of Evidence
The National Academies of Sciences, Engineering, and Medicine has prepared an excellent 20-minute sequence of videos, Climate Change: Lines of Evidence, that explains how scientists have arrived at the current state of knowledge about climate change and its causes. Use the worksheet linked below to summarize the story that the video tells about anthropogenic greenhouse gas emissions and the resulting changes in Earth's climate. The narrator speaks pretty quickly, so you'll want to pause and rewind the video when you need to make sure you understand what he's explaining. It's important to take the time to understand and answer the questions in the worksheet because you'll use this information in a future assignment.
If instructed by your instructor, download detailed questions about the Climate Change: Lines of Evidence videos:
Video: What is Climate? Climate Change, Lines of Evidence: Chapter 1 (25:59)
Click for a transcript of What is Climate? Climate Change, Lines of Evidence.
Seven consecutive videos. Video #1 What is Climate? Climate Change, Lines of Evidence: Chapter 1. The National Academy of Sciences has produced this video to help summarize what is known about climate change. What is climate? Climate is commonly thought of as the average weather conditions at a given location or region over time. People understand climate in many familiar ways. For example, we know that winter will generally be cooler than summer. We also know the climate in the Mojave Desert will be much different than the climate in Greenland. Climate is measured by statistics such as average temperatures and rainfall and frequency of droughts. Climate change refers to changes in these statistics over seasons and year-to-year changes, as well as decades - over centuries and even over thousands of years, as with how Earth moves in and out of ice ages and warm periods. This video is intended to help people understand what has been learned about climate change. Enormous inroads have been made in increasing our understanding of climate change and its causes. And a clearer picture of current and future impacts is emerging. Research is also shedding light on actions that might be taken to limit the magnitude of climate change or adapt to its impacts. We lay out the evidence that human activities, especially the burning of fossil fuels, are responsible for much of the warming and related changes being observed on Earth. The information is based on a number of National Research Council reports, each of which represents the consensus of experts who have reviewed hundreds of studies, describing many years of accumulating evidence. The overwhelming majority of climate scientists agree that human activities, especially the burning of fossil fuels, are responsible for most of the global warming being observed. But how is this conclusion reached? Climate science, like all science, is a process of collective learning that relies on the careful gathering and analysis of data, the formulation of hypotheses, and the development of computer models to help understand past and present change. It is the combined use of observations and models that help test scientific understanding, in order to help predict future change. Scientific knowledge builds over time, as new observations and data become available. Confidence in our understanding grows when independent global analyses, by scientific groups in different countries, show the same warming pattern, or if other explanations can be ruled out. In the case of climate change, scientists have understood for more than a century that emissions from the burning of fossil fuels should lead to an increase in the Earth's average surface temperature. Decades of observations and research have confirmed and extended this understanding. Video #2 Is Earth Warming? Climate Change, Lines of Evidence: Chapter 2 How do we know that Earth is warming? Scientists have been taking widespread global measurements of Earth's surface temperature for centuries. By the 1880s, there was enough data to produce reliable estimates of global average temperature. These data have steadily improved and today temperatures are recorded by thermometers at many thousands of locations, both on land and over the oceans. Different research groups, including NASA's Goddard Institute for Space Studies, Great Britain's Hadley Center, and the Japanese Meteorological Agency, have used these raw measurements to produce records of long-term surface temperature change.
Research groups work carefully to make sure the data aren't skewed by such things as changes in the instruments taking the measurements, or by other factors that affect local temperature, such as additional heat that has come from the gradual growth of cities. These analyses all show that Earth's average surface temperature has increased by more than 1.4 degrees Fahrenheit over the past 100 years, with much of this increase taking place over the past 35 years. A temperature change of one point four degrees Fahrenheit may not seem like much if you're thinking about a daily or seasonal fluctuation. However, it is a significant change when you think about a permanent increase averaged across the entire planet. For example, one point four degrees is more than the average annual temperature difference between Washington, DC and Charleston, South Carolina, which is more than 450 miles south of Washington. Think about this. On any given day, a difference of nine degrees Fahrenheit might be the difference between wearing a sweater or not. But a change of nine degrees in the global average temperature is the estimated difference between the climate of today and an ice age. In addition to surface temperature, other parts of the climate system are also being monitored carefully. For example, a variety of instruments are used to measure temperature, salinity, and currents beneath the ocean surface. Weather balloons are used to probe the temperature, humidity, and winds in the atmosphere. A key breakthrough in the ability to track global environmental changes began in the 1970s, with the dawn of the era of satellite remote sensing. Many different types of sensors, carried on many dozens of satellites, have allowed us to build a truly global picture of changes in the temperature of the atmosphere, and of the ocean and land surfaces. Satellite data are also used to study shifts in precipitation and changes in land cover. Even though satellites do not measure temperature in the same way as instruments on the surface of Earth, and any errors would be of a completely different nature, the two records agree. A number of other indicators of global warming have also been observed. For example, heat waves are becoming more frequent. Cold snaps are now shorter and milder. Snow and ice cover are decreasing in the northern hemisphere. Glaciers and ice caps around the world are melting and many plant and animal species are moving to different latitudes or higher altitudes due to changes in temperature. The picture that emerges from all of these datasets is clear and consistent. Earth is warming. Video #3 Greenhouse Gases: Climate Change, Lines of Evidence: Chapter 3 How do we know that greenhouse gases lead to warming? As early as the 1820s scientists began to appreciate the importance of certain gases in regulating the temperature of Earth. Greenhouse gases, which include water vapor, carbon dioxide, methane, and nitrous oxide, act like a blanket covering the Earth, trapping heat in the lower atmosphere, known as the troposphere. Although greenhouse gases are only a tiny fraction of Earth's atmosphere, they are critical for keeping the planet warm enough to support life as we know it. Here's how the greenhouse effect works. As the sun's energy hits Earth, some of it is reflected back to space, but most of it is absorbed by land and oceans. This absorbed energy is then radiated upward from the surface of Earth in the form of heat.
In the absence of greenhouse gases, this heat would simply escape to space and the planet's average surface temperature would be well below freezing. But greenhouse gases absorb and redirect some of this energy downward, keeping heat near the surface of Earth. As concentrations of heat-trapping greenhouse gases increase in the atmosphere, Earth's natural greenhouse effect is amplified, like having a thicker blanket, and surface temperatures slowly rise. Reducing the levels of greenhouse gases in the atmosphere would cause a decrease in surface temperature. Video #4 Increased Emissions: Climate Change, Lines of Evidence: Chapter 4 How do we know humans are causing greenhouse gas concentrations to increase? Determining the human influence on greenhouse gas concentrations was challenging, because many greenhouse gases occur naturally in Earth's atmosphere. Carbon dioxide is produced and consumed in many natural processes that are part of the carbon cycle. Once humans began digging up long buried forms of carbon, such as coal and oil, and burning them for energy, additional CO2 was released into the atmosphere, much more rapidly than in the natural carbon cycle. Other human activities, such as cement production and cutting down forests, have also added CO2 to the atmosphere. Until the 1950s, many scientists thought the oceans would absorb most of the excess CO2 released by human activities. Then a series of scientific papers were published that examined the dynamics of carbon dioxide exchange between the ocean and atmosphere, including a paper by oceanographers Roger Revelle and Hans Suess in 1957, and another by Bert Bolin and Erik Eriksson in 1959. This work led scientists to the hypothesis that the oceans could not absorb all of the CO2 being emitted. To test this hypothesis, Revelle's colleague Charles David Keeling began collecting air samples at the Mauna Loa Observatory in Hawaii, to track changes in CO2 concentrations. Today such measurements are made at many sites around the world. The data reveal a steady increase in atmospheric CO2. To determine how CO2 concentration varied prior to modern measurements, scientists have studied the composition of air bubbles trapped in ice cores extracted from Greenland and Antarctica. These data show that for at least two thousand years before the Industrial Revolution, atmospheric CO2 concentration was steady and then began to rise sharply beginning in the late 19th century. Today atmospheric CO2 concentration exceeds 390 parts per million, around 40 percent higher than pre-industrial levels. And according to ice core data, higher than any point in the past 800,000 years. Human activities have increased the atmospheric concentrations of other important greenhouse gases as well. Methane, which is produced by the burning of fossil fuels, the raising of livestock, the decay of landfill wastes, the production and transport of natural gas, and other activities, increased sharply throughout the industrial age, before starting to level off at about two and a half times its pre-industrial level. Nitrous oxide has increased by roughly fifteen percent since 1750, mainly as a result of agricultural fertilizer use, but also from fossil fuel burning and certain industrial processes. Some industrial chemicals, such as chlorofluorocarbons used in refrigerants and spray cans, act as potent greenhouse gases and are long-lived in the atmosphere. However, the concentrations of CFCs are decreasing due to the success of the 1989 Montreal Protocol, which banned their use.
Because CFCs do not have natural sources, their increases can easily be attributed to human activities. In addition to direct measurements of atmospheric CO2 concentrations, there are detailed records of how much coal, oil, and natural gas is burned each year. Through science, estimates are made of how much CO2 is being absorbed on average, by the oceans and plant life on land. These analyses show that almost half of the excess CO2 emitted from human activity remains in the atmosphere for many centuries. Just as a sink will fill up if water enters faster than it can drain, human production of CO2 is outstripping Earth's natural ability to remove it from the air. As a result, atmospheric CO2 levels are increasing. A forensic-style analysis of the CO2 in the atmosphere reveals the chemical fingerprints of natural and fossil fuel carbon. These lines of evidence prove conclusively that the increase in atmospheric CO2 is the result of human activities. Video #5 How Much Warming? Climate Change, Lines of Evidence: Chapter 5 How much are human activities heating Earth? Greenhouse gases are referred to as forcing agents because of their ability to change the planet's energy balance. A forcing agent can push Earth's temperature up or down. Greenhouse gases differ in their forcing power. For example, a single methane molecule has about 25 times the warming power of a single CO2 molecule. However, methane has a shorter lifetime in the atmosphere and is less abundant, while CO2 has a larger warming effect because it is much more abundant and stays in the atmosphere for much longer periods of time. Scientists can calculate the forcing power of greenhouse gases based on the changes in their concentrations over time, and on physically based calculations of how they transfer energy through the atmosphere. Some forcing agents push Earth's energy balance toward cooling, offsetting some of the heating associated with greenhouse gases. For example, some aerosols, which are tiny liquid or solid particles such as sea spray, or visible air pollution suspended in the atmosphere, have a cooling effect because they scatter a portion of incoming sunlight back into space. Human activities, especially the burning of fossil fuels, have increased the number of aerosol particles in the atmosphere, particularly over and around major urban and industrial areas. Changes in land use and land cover are another way that human activities are influencing Earth's climate, and deforestation is responsible for 10 to 20 percent of the excess CO2 emitted to the atmosphere. As mentioned previously, agriculture contributes nitrous oxide and methane. Changes in land use and land cover also modify the reflectivity of Earth's surface. The more reflective a surface, the more sunlight is sent back to space. Cropland is generally more reflective than undisturbed forest, while urban areas often reflect less energy than undisturbed land. Globally, human land-use changes have had a slight cooling effect. When all human agents are considered together, scientists have calculated that the net change in climate forcing, between 1750 and 2005, is pushing Earth toward warming. The extra energy is about 1.6 watts per square meter on the surface of Earth. When multiplied by the total surface area of Earth, this represents more than 800 trillion watts of energy. This energy is being added to Earth's climate system every second of every day.
That means each year we add to the climate system more than 50 times the amount of energy produced annually by all the power plants of the world combined. The total amount of warming that will occur in response to a climate forcing is determined by a variety of feedbacks, which either amplify or dampen the initial change. For example, as Earth warms, polar snow and ice melt away, allowing the darker-colored land and oceans to absorb more heat, causing Earth to become even warmer, which leads to more snow and ice melt, and so on. Another important feedback involves water vapor. The amount of water vapor in the atmosphere increases as the ocean surface and the lower atmosphere warm up: warming of 1 degree Celsius (1.8 degrees Fahrenheit) increases water vapor by about 7%. Because water vapor is also a greenhouse gas, this increase causes additional warming. Feedbacks that reinforce the initial climate forcing are referred to in the scientific community as positive or amplifying feedbacks. There is also an inherent lag time in the warming caused by a given forcing. This lag occurs because it takes time for parts of Earth's climate system, especially the massive oceans, to warm or cool. Even if by magic we could hold all human-produced forcing agents at present-day values, Earth would continue to warm well beyond the 1.4 degrees Fahrenheit already observed because of human emissions to date.

Video #6 Solar Influence: Climate Change, Lines of Evidence: Chapter 6

How do we know the current warming trend isn't caused by the Sun? Another way to test a scientific theory is to investigate alternative explanations. Because the sun's output has a strong influence on Earth's temperature, scientists have examined records of solar activity to determine whether changes in solar output might be responsible for the observed global warming trend. The most direct measurements of solar output are satellite readings, which have been available since 1979. These satellite records show that the sun's output has not shown a net increase during the past 30 years and thus cannot be responsible for the global warming during that period. Before satellites, solar output had to be estimated by more indirect methods, such as records of the number of sunspots observed each year, which is an indicator of solar activity. These indirect methods suggest that there was a slight increase in solar energy during the first half of the 20th century and a decrease in the latter half. The increase may have contributed to warming in the first half of the century, but it cannot explain the warming in the latter part of the century. Further evidence that current warming is not a result of solar changes can be found in the temperature trends in the different layers of the atmosphere. These data come from two sources: weather balloons, which have been launched twice daily from hundreds of sites around the world since the late 1950s, and satellites, which have monitored the temperature of different layers of the atmosphere since the late 1970s. Both of these datasets have been heavily scrutinized, and both show a warming trend in the lower layer of the atmosphere, the troposphere, and a cooling trend in the upper layer, the stratosphere. This is exactly the vertical pattern of temperature change expected from increased greenhouse gases, which trap energy closer to Earth's surface.
If an increase in solar output were responsible for the recent warming trend, the vertical pattern of warming would be more uniform through the layers of the atmosphere.

Video #7 Natural Cycles: Climate Change, Lines of Evidence: Chapter 7

How do we know that the current warming trend is not caused by natural cycles? Detecting human influence on climate is complicated by the fact that there are many natural variations in temperature, precipitation, and other climate variables. These natural variations are caused by many different processes that can occur across a wide range of timescales, from a particularly warm summer or snowy winter to changes over many millions of years. Among the most well-known short-term climate fluctuations are El Nino and La Nina, which are periods of natural warming and cooling in the tropical Pacific Ocean. Strong El Nino and La Nina events are associated with significant year-to-year changes in temperature and rainfall patterns across many parts of the planet, including the United States. These events have been linked to extreme conditions such as flooding in some regions and severe droughts in others. Globally, temperatures tend to be higher during El Nino periods, such as 1998, and lower during La Nina periods, such as 2008. But it is clear that these natural variations are notably smaller than the 20th-century warming trend. Major eruptions, like that of Mount Pinatubo in 1991, expel massive amounts of particles into the stratosphere that cool the Earth. However, surface temperatures typically rebound in two to five years as the particles settle out of the atmosphere. The short-term cooling effects of large volcanic eruptions can be seen in the 20th-century temperature record, as can the global temperature variations associated with strong El Nino and La Nina events. But an overall warming trend is evident. Natural climate variations can also be forced by slow orbital changes that affect how solar energy reaches Earth's climate system, as is the case with the ice age cycles. For the past 800,000 years, these longer-term natural cycles between ice ages and warm periods saw carbon dioxide fluctuating between around 180 parts per million at the coldest points and about 300 parts per million at the warmest points. Today, with carbon dioxide concentrations rising above 390 parts per million, we are overriding the natural cycle and forcing Earth's climate system into a warmer state. Attributing climate change to human activities relies on the combined assessment of observations, as well as information from climate models, to help test scientific understanding. Scientists have used these models to simulate what would have happened if humans had not modified Earth's climate during the 20th century; in other words, how global temperatures would have evolved if only natural factors, such as volcanoes, the sun, or ocean cycles, were influencing the climate system. These undisturbed-Earth simulations predict that in the absence of human activities there would have been negligible warming, or even a slight cooling, over the 20th century. When human greenhouse gas emissions and other activities are included in the models, the resulting surface temperatures more closely resemble the observed changes in temperature.
Based on a rigorous assessment of available temperature records, climate forcing estimates, and sources of natural climate variability, scientists have concluded that there is more than a 90 percent chance that most of the observed global warming trend over the past 50 to 60 years can be attributed to emissions from the burning of fossil fuels and other human activities. Understanding the causes of climate change provides valuable information to help us manage our future and to find smarter, more economical, and better ways to produce the food, energy, and technologies we need to live and thrive.
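One quantitative claim in the transcript, that each degree Celsius of warming increases atmospheric water vapor by about 7%, follows from the physics of saturation vapor pressure (the Clausius-Clapeyron relation). Below is a minimal Python sketch using the Magnus approximation; the formula and its constants are a standard empirical fit, not something given in the video:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Compare the water-holding capacity of air at 15 C and after 1 C of warming
e_before = saturation_vapor_pressure_hpa(15.0)
e_after = saturation_vapor_pressure_hpa(16.0)

print(f"Increase per 1 C of warming: {100.0 * (e_after / e_before - 1.0):.1f}%")
# Prints roughly 6-7%, consistent with the "about 7%" cited in the transcript
```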
If the video does not show up, please watch on the NAS website.
Another resource you can use to help answer the questions is the booklet that goes with this video: Climate Change: Evidence, Impacts, Choices. It is 40 pages, so you might not want to print it. Use it as an online reference.
Penn State geology professor Richard Alley's 45-minute video uses earth science to tell the story of Earth's climate history and our relationship with fossil fuels. There is no worksheet associated with this video.
Optional Video: Earth: The Operators' Manual (53:42)
Click for a transcript of Earth: The Operators' Manual video.
RICHARD ALLEY: All across the planet, nations and cities are working to reduce their dependence on fossil fuels and promote sustainable energy options.
ANNISE PARKER: Because it's the smart thing, because it makes business sense, and it's the right thing. NARRATOR: In China, Europe, and Brazil, energy innovations are changing how we live. And in the US, every branch of the military is mobilizing to cut its carbon bootprint.
DAVID TITLEY: We really believe that the climate is changing.
RICHARD ALLEY: In this program, we'll share how we know Earth is warming and why and discover what Earth science tells us about clean, green energy opportunities. I'm Richard Alley. I'm a geologist at Penn State University. But my research has taken me around the planet, from Greenland to Antarctica. I'm fascinated by how our climate has changed dramatically and often, from times with ice everywhere to no ice anywhere on the planet. Records of past climate help us learn how our Earth operates. What has happened can happen again. And I know that sometimes, things change really fast. I'm a registered Republican, play soccer on Saturdays, and go to church on Sundays. I'm a parent and a professor. I worry about jobs for my students and my daughter's future. I've been a proud member of the UN Panel on Climate Change. And I know the risks. And I've worked for an oil company and know how much we all need energy. And the best science shows we'll be better off if we address the twin stories of climate change and energy, and that the sooner we move forward, the better. Our use of fossil fuels for energy is pushing us towards a climate unlike any seen in the history of civilization. But a growing population needs more and more clean energy. But I believe science offers us an operator's manual with answers to both of these huge challenges.
[MUSIC PLAYING] NARRATOR: "Earth-- The Operator's Manual" is made possible by NSF, the National Science Foundation, where discoveries begin.
RICHARD ALLEY: Humans need energy. We always have and always will. But how we use energy is now critical for our survival. It all began with fire. Today, it's mostly fossil fuels. Now we're closing in on 7 billion of us, and the planet's population is headed toward 10 billion. Our cities and our civilization depend on vast amounts of energy. Fossil fuels-- coal, oil, and natural gas-- provide almost 80% of the energy used worldwide. Nuclear is a little less than 5%, hydropower a little under 6%, and the other renewables-- solar, wind, and geothermal-- about 1%, but growing fast. Wood and dung make up the rest. Using energy is helping many of us live better than ever before. Yet well over 1.5 billion are lagging behind, without access to electricity or clean fuels. In recent years, Brazil has brought electricity to 10 million. But in rural Ceara, some still live off the grid-- no electricity, no running water, and no refrigerators to keep food safe. Life's essentials come from their own hard labor. Education is compulsory, but studying is a challenge when evening arrives. The only light is from kerosene lamps. They're smoky, dim, and dangerous. Someday, this mother prays, the electric grid will reach her home.
TRANSLATOR: The first thing I'll do when the electricity arrives in my house will be to say a rosary and give praise to God.
RICHARD ALLEY: More than half of China's 1.3 billion citizens live in the countryside. Many rural residents still use wood or coal for cooking and heating, although most of China is already on the grid. China has used energy to fuel the development that has brought more than 500 million out of poverty. In village homes, there are flat-screen TVs and air conditioners. By 2030, it's projected that 350 million Chinese-- more than the population of the entire United States-- will move from the countryside to cities, a trend that's echoed worldwide. Development in Asia, Africa, and South America will mean 3 billion people will start using more and more energy as they escape from poverty. Suppose we make the familiar, if old-fashioned, 100-watt light bulb our unit for comparing energy use. If you're off the grid, your share of your nation's energy will be just a few hundred watts, a few light bulbs. South Americans average about 13 bulbs. For fast-developing China, it's more like 22 bulbs. Europe and Russia, 5,000 watts, 50 bulbs, and North Americans, over 10,000 watts, more than 100 bulbs. Now let's replace those light bulbs with the actual numbers. Population is shown across the bottom and energy use displayed vertically-- off the grid to the left, North America to the right. If everyone everywhere started using energy at the rate North Americans do, the world's energy consumption would more than quadruple. And using fossil fuels, that's clearly unsustainable. No doubt about it-- coal, gas, and oil have brought huge benefits. But we're burning through them approximately a million times faster than nature saved them for us, and they will run out. What's even worse-- the carbon dioxide from our energy system threatens to change the planet in ways that will make our lives much harder. So why are fossil fuels such a powerful, but ultimately problematic, source of energy? Conditions on the waterways of today's Louisiana help us understand how fossil fuels are made and why they're ultimately unsustainable. Oil, coal, and natural gas are made from things-- mostly plants-- that lived and died long ago. It's taken hundreds of millions of years for nature to create enough of the special conditions that saved the carbon and energy in plants to form the fossil fuels that we use. Here's how it works. Plants, like these tiny diatoms encased in silica shells, grow in the upper layers of lakes and oceans, using the sun's energy to turn carbon dioxide and water into more plants. When they die, if they're buried where there's little oxygen to break them down, their chemical bonds retain the energy that began as sunlight. If enough carbon-rich matter is buried deeply enough for long enough, the Earth's heat and pressure turn it into fossil fuel, concentrating the energy that once fed the growing plants. Vary what goes into Earth's pressure cooker and the temperature, and you end up with the different kinds of fossil fuel. Woody plants make coal. Slimy plants, algae, will give you oil, and both of them give rise to natural gas. The fossil fuels formed over a few hundred million years, and we're burning them over a few hundred years. And if we keep doing that, sooner or later, they must run out. But there's a bigger problem with fossil fuels. As we've seen, they're made of carbon, primarily. And when you burn them, you add oxygen, and that makes CO2 that goes in the air. We're reversing the process by which they formed. And if we keep doing this, it must change the composition of Earth's atmosphere.
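The "light bulb" accounting above converts directly back to watts. A short sketch using the transcript's own figures, with the off-grid value (~300 W) filled in as an assumption for "a few hundred watts" and the 15.7 TW world-use total taken from later in the program:

```python
# Converting the program's 100-watt "light bulb" units back to watts.
BULB_W = 100

per_person_w = {
    "off the grid": 300,      # assumed value for "a few hundred watts"
    "South America": 1_300,   # "about 13 bulbs"
    "China": 2_200,           # "more like 22 bulbs"
    "Europe and Russia": 5_000,
    "North America": 10_000,  # "over 10,000 watts"
}

for region, watts in per_person_w.items():
    print(f"{region:17s} ~{watts // BULB_W:3d} bulbs")

# "The world's energy consumption would more than quadruple":
world_use_tw = 15.7  # total human energy use quoted later in the program
population = 7e9     # "closing in on 7 billion of us"
everyone_na_tw = population * per_person_w["North America"] / 1e12
print(f"Everyone at North American rates: ~{everyone_na_tw:.0f} TW "
      f"({everyone_na_tw / world_use_tw:.1f}x current use)")
```

The last line prints roughly 70 TW, about 4.5 times current use, which matches the "more than quadruple" claim.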
What CO2 does was confirmed by basic research that had absolutely nothing to do with climate change.
REPORTER: A continuance of the Upper Air Program will provide scientific data concerning the physics of the upper atmosphere.
RICHARD ALLEY: World War II was over, but the Cold War had begun. The US Air Force needed to understand the atmosphere for communications and to design heat-seeking missiles. At certain wavelengths, carbon dioxide and water vapor block radiation, so the new missiles couldn't see very far if they used a wavelength that CO2 absorbs. Research at the Air Force Geophysics Laboratory in Hanscom, Massachusetts produced an immense database with careful measurements of atmospheric gases. Further research by others applied and extended those discoveries, clearly showing the heat-trapping influence of CO2. The Air Force hadn't set out to study global warming. They just wanted their missiles to work. But physics is physics. The atmosphere doesn't care if you're studying it for warring or warming. Adding CO2 turns up the planet's thermostat. It works the other way as well. Remove CO2, and things cool down. These are the Southern Alps of New Zealand, and their climate history shows that the physicists really got it right. These deep, thick piles of frozen water are glaciers-- slow-moving rivers of ice sitting on land. But once, when temperatures were warmer, they were liquid water stored in the sea. We're going to follow this one, the Franz Josef, from summit to ocean to see the real world impact of changing levels of CO2. It's beautiful up here on the highest snow field, but dangers lurk beneath the surface. I've spent a lot of time on the ice. It's standard practice up here to travel in pairs, roped up for safety. The glacier is fed by something like six meters of water a year-- maybe 20 meters, 60 feet of snowfall, so really seriously high snowfall. The snow and ice spread under their own weight, and it's headed downhill at something like a kilometer a year. When ice is speeding up a lot as it flows towards the coast, it can crack and open great crevasses that give you a view into the guts of the glacier. Man, this is a big one. 10, 20, 30 meters or more, 100 feet or more, heading down in here. And we can see a whole lot of the structure of the glacier right here.
MAN: So what we're going to do is just sit on the edge and then walk backwards, and then I'll lower you.
RICHARD ALLEY: Tell me when. OK. Roll her around, and down we go. Snowfall arrives in layers, each storm putting one down. Summer sun heats the snow and makes it look a little bit different than the winter snow. And so you build up a history. In these layers, there's indications of climate-- how much it snowed, what the temperature was. And all of this is being buried by more snow. And the weight of that snow squeezes what's beneath it and turns it to ice. And in doing that, it can trap bubbles. And in those bubbles are samples of old air-- a record of the composition of the Earth's atmosphere, including how much CO2 was in it, a record of the temperature on the ice sheets and how much it snowed. As we'll see, we can open those icy bottles of ancient air and study the history of Earth's atmosphere. This landscape also tells the story of the Ice Ages and the forces that have shaped Earth's climate. Over the last millions of years, the brightness of the sun doesn't seem to have changed much. But the Earth's orbit, and the tilt of its axis, have shifted in regular patterns over tens and hundreds of thousands of years. The orbit changes shape, varying how close and far the Earth gets as it orbits the sun each year. Over 41,000 years, the tilt of Earth's axis gets larger and smaller, shifting some of the sunshine from the Equator to the poles and back. And our planet has a slight wobble, like a child's top, altering which hemisphere is most directly pointed towards the sun when Earth is closest to it. Over tens of thousands of years, these natural variations shift sunlight around on the planet, and that influences climate. More than 20,000 years ago, decreasing amounts of sunshine in the Arctic allowed great ice sheets to grow across North America and Eurasia, reaching the modern sites of New York and Chicago. Sea level fell as water was locked up on land. Changing currents let the oceans absorb CO2 from the air. That cooled the Southern Hemisphere and unleashed the immense power of glaciers, such as the Franz Josef, which advanced down this wide valley, filling it with deep, thick ice. Now we're flying over today's coastline, where giant boulders are leftovers from that last ice age. A glacier is a great earth-moving machine. It's a dump truck that carries rocks that fall on top of it. It's a bulldozer that pushes rocks in front of it, and it outlines itself with those rocks, making a deposit that we call a moraine that tells us where the glacier has been. We're 20 kilometers, 12 miles, from the front of the Franz Josef glacier today. But about 20,000 years ago, the ice was depositing these rocks as it flowed past us and out to sea. The rocks we can still see today confirm where the glacier once was. Now, in a computer-generated time lapse condensing thousands of years of Earth's history, we're seeing what happened. Lower CO2, colder temperatures, more snow and ice, and the Franz Josef advanced. 20,000 years ago, 30% of today's land area was covered by great ice sheets which locked up so much water that the global sea level was almost 400 feet lower than today. And then, as Earth's orbit changed, temperatures and CO2 rose, and the glacier melted back. The orbits set the stage. But by themselves, there weren't enough. We need the warming and cooling effects of rising and falling CO2 to explain the changes we know happened. Today, atmospheric CO2 is increasing still more, temperatures are rising, and glaciers and ice sheets are melting. 
You can see this clearly on the lake formed by the shrinking Tasman Glacier across the range from the Franz Josef. This is what the end of an ice age looks like-- glaciers falling apart, new lakes, new land, icebergs coming off the front of the ice. In the early 1980s, we would have been inside New Zealand's Tasman Glacier right here. Now we're passing icebergs in a new lake from a glacier that has mostly fallen apart and ends over six kilometers, four miles away. One glacier doesn't tell us what the world is doing. But while the Tasman has been retreating, the great majority of glaciers on the planet have gotten smaller. This is the Columbia Glacier in Alaska. It's a type of glacier that makes the effects of warming easy to see. It's been retreating so fast that the Extreme Ice Survey had to reposition their time-lapse cameras to follow its motion. In Iceland, warming air temperatures have made this glacier simply melt away, leaving streams and small lakes behind. Thermometers in the air show warming. Thermometers in the air far from cities show warming. Put your thermometer in the ground, in the ocean, look down from satellites-- they show warming. The evidence is clear. The Earth's climate is warming. This frozen library, the National Ice Core Lab in Denver, Colorado, has ice from all over, kept at minus 35 degrees. The oldest core here goes back some 400,000 years. Here, really ancient ice from Greenland in the north and Antarctica in the south reveals Earth's climate history. Let's see what cores like this can tell us. First are those layers I mentioned in the New Zealand snow. They've turned to ice, and we can count them-- summer, winter, summer, winter. Like tree rings, we can date the core. Other cores tell other stories. Look at this. It's the ash of an Icelandic volcano that blew up to Greenland 50,000 years ago. Cores hold other, and even more important, secrets. Look at these bubbles. They formed as the snow turned to ice and trapped old air that's still in there. Scientists now are working with cores from Antarctica that go back even further. They tell us, with a very high degree of accuracy, how much carbon dioxide was in the air that far back. Researchers break chunks of ice in vacuum chambers and carefully analyze the gases that come off. They're able to measure, very precisely, levels of carbon dioxide in that ancient air. Looking at the cores, we see a pattern that repeats-- 280 parts per million of CO2, then 180, 280, 180, 280. By analyzing the chemistry of the oxygen atoms in the ice, you can also see the pattern of rising and falling temperature over time-- colder during the ice ages, warmer during the interglacial periods. Now put the two lines together, and you can see how closely temperature and carbon dioxide track each other. They're not exactly alike. At times, the orbits caused a little temperature change before the feedback effects of CO2 joined in. But just as we saw in New Zealand, we can't explain the large size of the changes in temperature without the effects of CO2. This is the signature of natural variation, the cycle of the ice ages driven by changes in Earth's orbit with no human involvement. But here's where we are today. In just 250 years since the Industrial Revolution, we've blown past 380 with no sign of slowing down. It's a level not seen in more than 400,000 years, 40 times longer than the oldest human civilization. 
So physics and chemistry tell us that adding carbon dioxide to the atmosphere warms things up, and Earth's climate history shows us there will be impacts, from melting ice sheets to rising sea level. But how do we know, with equal certainty, that it's not just more natural variation, that humans are the source of the increasing CO2? When we look at a landscape like this one, we know immediately that volcanoes put out all sorts of interesting things, and that includes CO2. So how do we know that the rise of CO2 in the atmosphere that we see comes from our burning of fossil fuels and not from something that the volcanoes have done? Well, the first step in the problem is just bookkeeping. We measure how much CO2 comes out of the volcanoes. We measure how much CO2 comes out of our smokestacks and tailpipes. The natural source is small. Humans are putting out 50 to 100 times more CO2 than the natural volcanic source. We can then ask the air whether our bookkeeping is right, and the air says that it is. Volcanoes make CO2 by melting rocks to release the CO2. They don't burn, and they don't use oxygen. But burning fossil fuels does use oxygen when it makes CO2. We see that the rise in CO2 goes with the fall of oxygen, which says that the rising CO2 comes from burning something. We can then ask the carbon in the rising CO2 where it came from. Carbon comes in three flavors-- the lightweight, carbon-12, which is especially common in plants, the medium weight, carbon-13, which is a little more common in the gases coming out of volcanoes, and the heavyweight, carbon-14. It's radioactive and decays almost entirely after about 50,000 years, which is why you won't find it in very old things, like dinosaur bones or fossil fuels. We see a rise in carbon-12 which comes from plants. We don't see a rise of carbon-13, so the CO2 isn't coming from the volcanoes. And we don't see a rise in carbon-14, so the CO2 can't be coming from recently living plants. And so the atmosphere says that the rising CO2 comes from burning of plants that have been dead a long time. That is fossil fuels. The CO2 is coming from our fossil fuels. It's us. So physics and chemistry show us carbon dioxide is at levels never seen in human history. And the evidence says it's all of us burning fossil fuels that's driving the increase. But what about climate change and global warming? Are they for real? Here's what those who have looked at all the data say about the future.
MAN: Climate change, energy security, and economic stability are inextricably linked. Climate change will contribute to food and water scarcity, will increase the spread of disease, and may spur or exacerbate mass migration.
RICHARD ALLEY: Who do you suppose said that? Not a pundit, not a politician. The Pentagon. These war games at Fort Irwin, California provide realistic training to keep our soldiers safe. The purpose of the Pentagon's Quadrennial Defense Review, the QDR, is to keep the nation safe. The review covers military strategies for an uncertain world. The Pentagon has to think long-term and be ready for all contingencies. The 2010 QDR was the first time that those contingencies included climate change. Rear Admiral David Titley is oceanographer of the Navy and contributed to the Defense Review.
DAVID TITLEY: Well, I think the QDR really talks about climate change in terms that really isn't for debate. And you take a look at the global temperatures. You take a look at sea level rise. You take a look at what the glaciers are doing-- not just one or two glaciers, but really glaciers worldwide. And you add all of those up together, and that's one of the reasons we really believe that the climate is changing. So the observations tell us that. Physics tells us this as well.
RICHARD ALLEY: What climate change means for key global hotspots is less clear.
DAVID TITLEY: We understand the Earth is getting warmer. We understand the oceans are getting warmer. What we do not understand is exactly how that will affect things like strong storms, rainfall rates, rainfall distribution. So yes, climate change is a certainty, but what is it going to be like in specific regions of the world, and when?
RICHARD ALLEY: One area of particular concern to the Navy is sea level rise.
DAVID TITLEY: Sea level rise is going to be a long-term and very, very significant issue for the 21st century.
RICHARD ALLEY: The QDR included an infrastructure vulnerability assessment that found that 153 Naval installations are at significant risk from climatic stresses. From Pearl Harbor, Hawaii to Norfolk, Virginia, the bases and their nearby communities will have to adapt.
DAVID TITLEY: Even with one to two meters of sea level rise, which is very, very substantial, we have time. This is not a crisis, but it is certainly going to be a strategic challenge.
RICHARD ALLEY: Globally, climate change is expected to mean more fires, floods, and famine. Nations may be destabilized. For the Pentagon, climate change is a threat multiplier. But with sound climate science, Titley believes forewarned is forearmed.
DAVID TITLEY: The good thing is the science is advanced enough in oceanography, glaciology, meteorology that we have some skill at some frames of predicting this. And if we choose to use those projections, we can, in fact, by our behavior, alter the future in our favor. RICHARD ALLEY: Titley and the Pentagon think the facts are in.
DAVID TITLEY: Climate change is happening, and there is very, very strong evidence that a large part of this is, in fact, man-made.
RICHARD ALLEY: The military is America's single largest user of energy, and it recognizes that its use of fossil fuels has to change. The Pentagon uses 300,000 barrels of oil each day. That's more than 12 million gallons. An armored Humvee gets four miles to the gallon. At full speed, an Abrams battle tank uses four gallons to the mile. And it can cost as much as $400 a gallon to get gas to some remote bases in Afghanistan. Fort Irwin is a test bed to see if the Army can operate just as effectively while using less fossil fuel and more renewables. And it's not just Fort Irwin, or just the Army. At Camp Pendleton, Marines were trained on an energy-saving experimental forward operating base that deployed with them to Afghanistan.
ROBERT HEDELUND: Before any equipment goes into theater, we want Marines to get trained on it. So what are some of the things that we can take hold of right away and make sure that we can make a difference for the warfighter down range? RICHARD ALLEY: They test out different kinds of portable solar power units. They also practice how to purify stagnant water and make it drinkable. The Army and Marines both want to minimize the number of convoys trucking in fuel and water. A report for the Army found that in five years, more than 3,000 service members had been killed or wounded in supply convoys.
ROBERT HEDELUND: And if you've got Marines guarding that convoy, and God forbid, it get hit by an IED, then what are the wounded, what are the deaths involved in that? And are we really utilizing those Marines and that capability the way we should?
RICHARD ALLEY: Generators used to keep accommodations livable and computers running are also major gas guzzlers.
ADORJAN FERENCZY: Right now, what we're doing is putting up a power shade. It has flexible solar panels on the top and gives us enough power to run small electronics, such as lighting systems and laptop computers. It also provides shade over the tent structure. Experimenting with this equipment in Africa proved that it could reduce the internal temperature of the tent seven to 10 degrees.
RICHARD ALLEY: All the LED lights in the entire tent use just 91 watts, less than one single old-fashioned incandescent bulb.
ADORJAN FERENCZY: It's a no-brainer when it comes to efficiency.
RICHARD ALLEY: Light-emitting diodes don't weigh much, but they're still rugged enough to survive a typical Marine's gentle touch.
ZACH LYMAN: When we put something into a military application and they beat it up, it's ruggedized. It's ready for the worst that the world can take. And so one thing that people say is if the military has used this thing and they trust it, then maybe it's OK for my backyard.
RICHARD ALLEY: Renewable energy will also play an important role at sea and in the air. The Navy's Makin Island is an amphibious assault ship with jump jets, helicopters, and landing craft. It's the first vessel to have both gas turbines and a hybrid electric drive, which it can use for 75% of its time at sea. This Prius of the ocean cut fuel costs by $2 million on its maiden voyage. By 2016, the Navy plans to have what it calls a Great Green Fleet, a complete carrier group running on renewable fuels with nuclear ships, hybrid electric surface vessels, and aircraft flying only biofuels. By 2020, the goal is to cut usage of fossil fuels by 50%. Once deployed in Afghanistan, the XFOB cut down on gas used in generators by over 80%. In the past, the Pentagon's innovations in computers, GPS, and radar have spun off into civilian life. In the future, the military's use of renewable energy can reduce dependence on foreign oil, increase operational security, and save lives and money.
JIM CHEVALLIER: A lot of the times, it is a culture change more than anything else. And the Department of Defense, over the years, has proved, time and time again, that they can lead the way in that culture change.
RICHARD ALLEY: If the US military is the largest user of energy in America, China is now the largest consumer on the planet. At 1.3 billion, China has a population about four times larger than the US, so average per-person energy use and CO2 emissions remain about 1/4 those of Americans. But like the US military, China is moving ahead at full speed on multiple different sustainable energy options. And it pretty much has to. Cities are congested. The air is polluted. Continued rapid growth using old technologies seems unsustainable. This meeting in Beijing brought together mayors from all over China, executives from state-owned enterprises, and international representatives. The organizer was a US-Chinese NGO headed by Peggy Liu.
PEGGY LIU: Over 20 years, we're going to have 350 million people moving into cities in China. And we're going to be building 50,000 new skyscrapers, the equivalent of 10 Manhattans, 170 new mass transit systems. It's just incredible, incredible scale.
RICHARD ALLEY: This massive, rapid growth comes with a high environmental cost.
MARTIN SCHOENBAUER: They recognize that they're spending as much as 6% of their gross domestic product on environmental issues.
RICHARD ALLEY: In 2009, China committed $35 billion, almost twice as much as the US, to energy research and incentives for wind, solar, and other clean energy technologies. It's attracted an American company to set up the world's most advanced solar power research plant. China now makes more solar panels than any other nation. But it's also promoting low-tech, low-cost solutions. Solar water heaters are seen on modest village homes. Some cities have them on almost every roof.
PEGGY LIU: China is throwing spaghetti on the wall right now in terms of over 27 different cities doing LED street lighting, or over 20, 30 different cities doing electric vehicles.
RICHARD ALLEY: But visit any city, and you can see that the coal used to generate more than 70% of China's electricity has serious consequences with visible pollution and adverse health effects. China uses more coal than any other nation on Earth, but it's also trying to find ways to burn coal more cleanly.
PEGGY LIU: In three years, 2006 to 2009, while China was building one new coal-fired power plant a week, it also shut down inefficient coal plants. So it's out with the old and in with the new. And they're really trying hard to invent new models.
RICHARD ALLEY: This pilot plant, designed for carbon capture and sequestration, was rushed to completion in time for Shanghai's 2010 World Expo. It absorbs and sells carbon dioxide and will soon scale up to capture 3 million tons a year that could be pumped back into the ground, keeping it out of the air.
MARTIN SCHOENBAUER: Here in China, they are bringing many plants online in a much shorter time span than it takes us in the US.
PEGGY LIU: China is right now the factory of the world. What we'd like to do is turn it into the clean tech laboratory of the world.
RICHARD ALLEY: If nations choose to pay the price, burning coal with carbon capture can offer the world a temporary bridge until renewables come to scale.
PEGGY LIU: China is going to come up with the clean energy solutions that are cost-effective and can be deployed at large scale-- in other words, solutions that everybody around the world wants.
RICHARD ALLEY: Can low-carbon solutions really give us enough energy to power the planet and a growing population? Let's put some numbers on how much energy we can get from non-fossil fuel renewables. Today, all humans everywhere on Earth use about 15.7 terawatts of energy. That's a big number. In watts, that's 157 followed by 11 zeros, or 157 billion of those 100-watt light bulbs we used as a reference. To show what's possible, let's see if we can get to 15.7 terawatts using only renewable energy. I'm here in the Algodones Dunes near Yuma, Arizona. The Guinness Book of Records says it's the sunniest place in the world. There's barely a cloud in the daytime sky for roughly 90% of the year. 0.01%, 1/100 of 1%-- if we could collect that much of the sun's energy reaching the Earth, it would be more than all human use today. Today's technologies have made a start. This was the world's first commercial power station to use a tower to harvest concentrated solar energy. Near Seville, Spain, 624 mirrors stretch over an area of more than 135 acres, beaming back sunlight to a tower nearly 400 feet high. Intense heat produces steam that drives the turbine, which generates electricity. When completed, this one facility will be able to power 200,000 homes, enough to supply the entire nearby city of Seville. Remember our target of 15.7 terawatts? Well, the sun delivers 173,000 terawatts to the top of Earth's atmosphere, 11,000 times current human use. No way we can capture all of that potential energy at Earth's surface. But the deserts of America's Southwest, with today's technology, have enough suitable land to supply 80% of the entire planet's current use. Of course, there's one big problem with solar power-- night. But with more efficient transmission lines, and as part of a balanced renewable energy portfolio that includes storage, the sun's potential is vast. In tropical nations like Brazil, the sun heats water, makes clouds, and unleashes rainfall that feeds some of the planet's largest rivers. Iguazu Falls is a tourist attraction, one of the most spectacular waterfalls on Earth, where you can feel the immense power of falling water. The nearby Itaipu Dam on the border of Brazil and Paraguay produces the most hydroelectric power of any generating station in the world. This one dam supplies most of the electricity used in Sao Paulo, a city of more than 11 million. Sao Paulo is 600 miles away, but Brazil made the decision to build innovative, high-voltage direct current transmission lines to minimize energy loss. The Itaipu to Sao Paulo electrical grid has been in operation since 1984 and shows that renewable energy can go the distance. Dams can't be the answer for every nation. They flood landscapes, disrupt ecosystems, and displace people. But hydropower gives Brazil, a nation larger than the continental United States, 80% of its electricity. And worldwide, hydropower could contribute 12% of human energy use, ready at a moment's notice in case the sun goes behind a cloud. Brazil is also using its unique natural environment in another way. Its tropical climate provides ideal conditions for sugarcane, one of the Earth's most efficient plants in its ability to collect the energy of sunlight. Plantations like this one harvest the cane for the production of sugar and the biofuel called ethanol. The US is actually the number one producer of ethanol in the world, mostly using corn instead of cane.
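The solar arithmetic above is easy to verify using the transcript's own numbers (15.7 TW of human use, 173,000 TW of incoming solar power, and a 0.01% capture fraction):

```python
# Checking the solar-energy arithmetic quoted in the transcript.
human_use_tw = 15.7         # total human energy use, in terawatts
solar_input_tw = 173_000.0  # solar power reaching the top of the atmosphere

print(f"Solar input is ~{solar_input_tw / human_use_tw:,.0f}x current human use")  # ~11,000x

captured_tw = solar_input_tw * 0.0001  # 0.01% = 1/100 of 1%
print(f"0.01% of solar input: {captured_tw:.1f} TW vs {human_use_tw} TW used")
# 17.3 TW > 15.7 TW, so capturing 0.01% would indeed exceed all human use today
```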
But ethanol made from sugar cane is several times more efficient at replacing fossil fuel than corn-based ethanol. Modern facilities like this one pipe back wet waste to fertilize the fields and burn the dry waste, called bagasse, to generate electricity to run the factory. For Brazil, at least, ethanol works. Today, almost all cars sold in Brazil can use flex fuels. Drivers choose gasoline blended with 25% ethanol or pure ethanol, depending on price and how far they plan to drive. Local researchers say that if all the gasoline in the world suddenly disappeared, Brazil is the only nation that could go it alone and keep its cars running. Using food for fuel raises big questions in a hungry world. As of now, sugarcane ethanol hasn't affected food prices much. But there are concerns with corn. So here in the US, government labs like NREL, the National Renewable Energy Lab, have launched programs to see if biofuels can be made from agricultural waste. It does work, and researchers are trying to bring the cost down. So with plants capturing roughly 11 times human energy use, they're a growing opportunity. New Zealand takes advantage of another kind of energy. These are the geysers and hot springs at Rotorua on the North Island. Once, they were used by the native Maori people for cooking and bathing. Now geothermal power plants harvest heat and turn it into as much as 10% of all New Zealand's electricity. Many power projects are partnerships with the Maori, benefiting the local people and avoiding the "not in my backyard" problems that often complicate energy developments. Globally, geothermal energy offers three times our current use. But we can mine geothermal, extracting the energy faster than nature supplies it, cooling the rocks deep beneath us to make power for people. This energy exists even where you don't see geysers and mud pots, so it can be extracted without harming these natural wonders. A study by MIT showed that the accessible hot rocks beneath the United States contain enough energy to run the country for 130,000 years. And like hydroelectric, geothermal can provide peaking power, ready to go at a moment's notice if the sun doesn't shine and the wind doesn't blow. Mining energy from deep, hot rocks is a relatively new technology, but people have been using windmills for centuries, and the wind blows everywhere. Here's where the United States is very lucky. Let's take a trip up the nation's wind corridor, from Texas in the South to the Canadian border. Bright purple indicates the strongest winds. All along this nearly 2,000 miles, there's the potential to turn a free, non-CO2-emitting resource into electricity. But that takes choices and actions by individuals and governments. Here's what's been happening in West Texas. It's a land of ranches and farms and, of course, oil rigs and pump jacks. But in the early '90s, this was one of the most financially depressed areas in the state. Communities like Nolan Divide fell on hard times. Schools closed. People moved away. But since 1999, the new structures towering above the flat fields aren't oil derricks, but wind turbines. The largest number-- more than 1,600-- is in Nolan County. Greg Wortham is Mayor of Sweetwater, the county seat.
GREG WORTHAM: It wasn't a philosophical or political decision. It was ranchers and farmers and truck drivers and welders and railroads and wind workers.
RICHARD ALLEY: Steve Oatman's family has been ranching the Double Heart for three generations. Steve may have doubts about the causes of climate change, but not about wind energy.
STEVE OATMAN: But it's been a blessing. It helps pay taxes. It helps pay the feed bill.
GREG WORTHAM: We talk about this being green energy because it pays money. The ranchers and the farmers call it mailbox money. They have to get up, and sweat, and work hard all day long. Things are pretty stressful. And if you can just walk to the mailbox and pick up some money because you've got turbines above the ground, that makes life a lot easier.
RICHARD ALLEY: Each windmill can generate between $5,000 and $15,000 per year. So a ranch with an average of 10 to 20 turbines can provide financial stability for people who have always lived with uncertainty.
STEVE OATMAN: I don't just believe in it because I make a living from it. It's something that's going to have to happen for the country.
RICHARD ALLEY: So now, local schools have growing enrollments and funds to pay for programs.
GREG WORTHAM: We had about $500 million in tax base in the whole county in 2000. And by the late part of that decade, in less than 10 years, it went up to $2.5 billion in tax value.
RICHARD ALLEY: By the end of 2009, the capacity of wind turbines in West Texas totaled close to 10,000 megawatts. If Texas were a country, it would rank sixth in the world in wind power. The US Department of Energy estimates that wind could supply 20% of America's electricity by 2030. New offshore wind farms would generate more than 43,000 new jobs. That translates into a $200 billion boost to the US economy. Worldwide, wind could provide almost 80 times current human usage. No form of energy is totally free of environmental concerns or hefty startup costs. Some early wind farms gave little consideration to birds and other flying critters, like migrating bats. But recent reports by Greenpeace and the Audubon Society have found that properly sited and operated turbines can minimize problems. Mayor Wortham, for one, welcomes wind turbines into his backyard.
GREG WORTHAM: We like them. Some people don't. But we're more than happy to export our energy to those states who want to buy green, but don't want to see green.
STEVE OATMAN: In the long run, I hope we have wind turbines everywhere they can produce energy. We need them. That's what America is going to have to do. That's the next stepping stone to save ourselves.
RICHARD ALLEY: The state of Texas has invested $5 billion to connect West Texas wind to big cities like Dallas and Fort Worth. Farther south is Houston, one of the most energy-hungry cities in the country. Its port is America's largest by foreign tonnage, and its refineries and chemical plants supply a good portion of the nation. But already, perhaps surprisingly, Houston is the largest municipal purchaser of renewable energy in the nation. 30% of the power city government uses comes from wind, with a target of 50%. And its mayor wants to cut energy costs and increase energy efficiency.
ANNISE PARKER: I want to go from the oil and gas capital of the world to the green and renewable energy capital of the world.
RICHARD ALLEY: Supported by federal stimulus dollars, the local utility is ahead of schedule to install smart meters. These will help consumers economize on energy use. The city has already installed 2,500 LED traffic lights using 85% less energy than traditional incandescent bulbs. That translates into savings of $3.6 million per year. City Hall thinks it can also improve air quality by changing the kinds of cars Houstonians drive.
ANNISE PARKER: If
RICHARD ALLEY: The city already operates a fleet of plug-in hybrids. Now it's encouraging the development of an infrastructure to make driving electric vehicles easy and practical. And in Houston's hot and humid environment, it helps to have an increasing number of energy-efficient, LEED-certified buildings. ANNISE PARKER: We're going to do it because it's the smart thing, because it makes business sense, and it's the right thing.
RICHARD ALLEY: Some estimates are that the US could save as much as 23% of projected demand from a more efficient use of energy.
ANNISE PARKER: Well, if you're going to tackle energy efficiency, you might as well do it in a place that is a profligate user of energy. And when you make a difference there, you can make a difference that's significant.
RICHARD ALLEY: Globally, efficiency could cut the demand for energy by 1/3 by 2030. Bottom line-- there are many ways forward, and we can hit that renewable energy target. And if next-generation nuclear is also included, one plan has the possible 2030 energy mix transformed from one relying on fossil fuels to one that looks like this, with renewables-- sun, wind, geothermal, biomass, and hydropower-- totaling 61%, fossil fuels down to 13%, and existing and new nuclear making up the balance. Another plan meets world energy needs with only wind, water, and solar. And in fact, there are many feasible paths to a sustainable energy future. Today's technologies can get us started, and a commitment to research and innovation will bring even more possibilities. We've traveled the world to see some of the sources the planet offers to meet our growing need for clean energy. There are too many options to cover all of them here. And besides, each nation, each state, each person must make their own choices as to what works best for them. But the central idea is clear. If we approach Earth as if we have an operator's manual that tells us how to keep the planet humming along at peak performance, we can do this. We can avoid climate catastrophes, improve energy security, and create millions of good jobs. For "Earth-- The Operator's Manual," I'm Richard Alley.
NARRATOR: "Earth-- The Operator's Manual" is made possible by NSF, the National Science Foundation, where discoveries begin.
[MUSIC PLAYING] For the annotated, illustrated script with links to information on climate change and sustainable energy, web-exclusive videos, educator resources, and much more, visit pbs.org. "Earth-- The Operator's Manual" is available on DVD. The companion book is also available. To order, visit shoppbs.org, or call us at 1-800-PLAY-PBS.
Optional Follow-up Questions to the Videos
If instructed by your instructor, download the follow-up questions, which can be applied to either video.
In Module 9.1, we explored the causes of global climate change, the ways that our food systems contribute to greenhouse gas emissions, and how climate variables are expected to change in different parts of the US. In this unit, we’ll consider the expected impacts of global climate change on food production.
Farmers have always had to struggle against the vagaries of the weather in their efforts to produce food for a growing population. Floods, droughts, heat waves, hailstorms, late frosts, and windstorms have plagued farmers for centuries. However, with increased levels of CO2 in the atmosphere trapping more heat energy, farmers will face more extreme weather events, greater variability, and more extreme temperatures. Unpredictable and varied weather can lead to a domino effect through the entire food system, creating shortages and food price spikes. Farmers are developing strategies for resilience in the face of a changing climate, such as more efficient irrigation, better soil health, and planting more resilient crop varieties.
Climate change can have both direct and indirect impacts on agricultural food production. Direct effects stem from changes in temperature, precipitation, and CO2 concentrations. For example, as temperatures increase, crop water demands and stresses on livestock increase. Changes in the maximum number of consecutive dry days can affect crop productivity, and increases in precipitation can increase soil erosion. Increased incidence of extreme weather events can also have direct impacts on agriculture, in the form of floods, droughts, hail, and high winds.
Indirect effects of climate change include changes in weed, disease, and insect populations and distributions, which will affect the costs of managing pests and may increase crop losses. Increased incidence of wildfire can favor the survival of invasive species. Some weeds respond well to increasing CO2 concentrations and may put greater pressure on crops.
In summary, a 2015 report on Climate Change, Global Food Security, and the U.S. Food System states that by 2050, global climate change may result in decreased crop yields, increased land area in crop production, higher food prices, and slightly reduced food production and consumption, compared to model results for 2015 with no climate change (Brown et al. 2015).
Global Effects of Climate Change
Human influences will continue to alter Earth’s climate throughout the 21st century. Current scientific understanding, supported by a large body of observational and modeling results, indicates that continued changes in atmospheric composition will result in further increases in global average temperature, changes in precipitation patterns, rising sea level, changes in weather extremes, and continued declines in snow cover, land ice, and sea ice extent, among other effects that will affect U.S. and global agricultural systems.
While climate change effects vary among regions, among annual and perennial crops, and across livestock types, all production systems will be affected to some degree by climate change. Temperature increases coupled with more variable precipitation will reduce crop productivity and increase stress on livestock production systems. Extreme climate conditions, including dry spells, sustained droughts, and heat waves will increasingly affect agricultural productivity and profitability. Climate change also exacerbates indirect biotic stresses on agricultural plants and animals. Changing pressures associated with weeds, diseases, and insect pests, together with potential changes in timing and coincidence of pollinator lifecycles, will affect growth and yields. When occurring in combination, climate change-driven effects may not simply be additive, but can also amplify the effects of other stresses on agroecosystems.
From the Expert Stakeholder Workshop for the USDA Technical Report on Global Climate Change, Food Security, and the U.S. Food System, June 25-27, 2013, Reston, VA (Brown, M., P. Backlund, R. Hauser, J. Jadin, A. Murray, P. Robinson, and M. Walsh).
Brown, M.E., J.M. Antle, P. Backlund, E.R. Carr, W.E. Easterling, M.K. Walsh, C. Ammann, W. Attavanich, C.B. Barrett, M.F. Bellemare, V. Dancheck, C. Funk, K. Grace, J.S.I. Ingram, H. Jiang, H. Maletta, T. Mata, A. Murray, M. Ngugi, D. Ojima, B. O’Neill, and C. Tebaldi. 2015. Climate Change, Global Food Security, and the U.S. Food System. 146 pages.
11.02: Food Production in a Changing Climate
In the first part of this module, we looked at observed and predicted changes in temperature and precipitation. Now, we'll consider some of the impacts that changes in temperature and precipitation may have on crops. For example, the projected increase in temperature will increase the length of the frost-free season (the period between the last frost in the spring and the first frost in the fall), which corresponds to a similar increase in growing season length. Increases in frost-free season length have already been documented in the US (Figure 9.2.1). An increase in growing season length may sound like a great thing for food production, but as we'll see, it can make plants more vulnerable to late frosts and can also allow for more generations of pests per growing season, thus increasing pest pressure. The complexity of the system makes adapting to a changing climate quite challenging, but not insurmountable.
Figure 9.2.1.: Observed Changes in the Frost-free Season in 1986-2015 compared to 1901-1960. The frost-free season length is the period between the last occurrence of 32°F in the spring and the first occurrence of 32°F in the fall. Increases in frost-free season length correspond to similar increases in growing season length. Credit: National Climate Assessment, 2014.
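The frost-free season length defined above can be computed directly from a station's daily minimum temperatures. Here is a minimal sketch in Python with pandas; the file name and column names (date, tmin_f) are assumptions for illustration, not a real dataset:

```python
import pandas as pd

# Sketch: frost-free season length per year from daily minimum temperatures.
# "daily_tmin.csv" with columns date, tmin_f is an assumed, illustrative dataset.
daily = pd.read_csv("daily_tmin.csv", parse_dates=["date"])

def frost_free_days(year_df: pd.DataFrame):
    """Days between the last spring 32 F reading and the first fall 32 F reading."""
    frosts = year_df.loc[year_df["tmin_f"] <= 32.0, "date"]
    if frosts.empty:
        return None  # no frost recorded that year
    july1 = pd.Timestamp(year=frosts.iloc[0].year, month=7, day=1)
    last_spring = frosts[frosts < july1].max()
    first_fall = frosts[frosts >= july1].min()
    if pd.isna(last_spring) or pd.isna(first_fall):
        return None  # frost occurred in only one half of the year
    return (first_fall - last_spring).days

lengths = daily.groupby(daily["date"].dt.year).apply(frost_free_days)
print(lengths)  # one frost-free season length per year, in days
```

Comparing the average of these yearly lengths across two periods (say, 1901-1960 vs. 1986-2015) reproduces the kind of change mapped in Figure 9.2.1.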
Crops, livestock, and pests are all sensitive to temperature and precipitation, so changes in temperature and precipitation patterns can affect agricultural production. As a result, it's important to consider future projections of climate variables so that farmers and ranchers can adapt to become more resilient.
Projected changes in some key climate variables that affect agricultural productivity are shown in Figure 9.2.2. The lengthening of the frost-free or growing season and reductions in the number of frost days (days with minimum temperatures below freezing), shown in the top two maps, can have both positive and negative impacts. With higher temperatures, plants grow and mature faster, but they may produce smaller fruits and grains, and nutrient value may be reduced. If farmers can adapt warmer-season crops and planting times to the changing growing season, they may be able to take advantage of it.
The bottom-left map in Figure 9.2.2 shows the expected increase in the number of consecutive days with less than 0.01 inches of precipitation, which has the greatest impact in the western and southern parts of the U.S. The bottom-right map shows that an increase in the number of nights with a minimum temperature higher than 98% of the minimum temperatures between 1971 and 2000 is expected throughout the U.S., with the highest increase expected to occur in the south and southeast. The increases in both consecutive dry days and hot nights are expected to have negative effects on both crop and animal production. Plants can be particularly vulnerable at certain stages of their development. For example, one critical period is during pollination, which is very important for the development of fruit, grain, or fiber. Increasing nighttime temperatures during the fruit, grain, or fiber production period can result in lower productivity and reduced quality. Farmers are already seeing these effects, for example in 2010 and 2012 in the US Corn Belt (Hatfield et al., 2014).
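Both of these indices are straightforward to compute from daily weather records. The sketch below is illustrative only (our own helper names, assuming simple Python lists of daily values rather than any official dataset format):

```python
import numpy as np

def max_consecutive_dry_days(daily_precip_in, threshold=0.01):
    """Longest run of days with precipitation below the threshold (inches)."""
    longest = current = 0
    for precip in daily_precip_in:
        current = current + 1 if precip < threshold else 0
        longest = max(longest, current)
    return longest

def count_hot_nights(daily_tmin_f, baseline_tmin_f, percentile=98):
    """Nights warmer than the given percentile of a baseline period
    (e.g., 1971-2000), matching the definition used in Figure 9.2.2."""
    cutoff = np.percentile(baseline_tmin_f, percentile)
    return int(np.sum(np.asarray(daily_tmin_f) > cutoff))
```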
Some perennial crops, such as fruit trees and grape vines, require exposure to a certain number of hours at cooler temperatures (32°F to 50°F), called chilling hours, in order for flowering and fruit production to occur. As temperatures increase, the number of chilling hours decreases, which may make fruit and wine production impossible in some areas. Chilling hours have already decreased in the Central Valley of California, and the decline is projected to reach up to 80% by 2100 (Figure 9.2.3). Adaptation to reduced chilling hours could involve planting different varieties and crops that have lower chilling hour requirements. For example, cherries require more than 1,000 hours, while peaches require only about 225. Shifts in the temperature regime may move production of certain crops to new regions (Hatfield et al., 2014).
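Chilling hours are typically tallied with a simple threshold count over hourly temperatures. Here is a minimal sketch, assuming the 32°F-50°F definition used above (horticulturists also use other chill models, such as the Utah model, which weight temperatures differently):

```python
def chilling_hours(hourly_temps_f, low=32.0, high=50.0):
    """Count hours spent in the chilling range [low, high] (deg F)."""
    return sum(1 for t in hourly_temps_f if low <= t <= high)

# A cherry variety needing 1,000+ chilling hours would be at risk in a
# winter that accumulates fewer hours; a peach needing ~225 would not be.
```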
To supplement our coverage of the climate variables that affect agriculture, read p. 18, Box 4 in Advancing Global Food Security in the Face of a Changing Climate, and scroll down to the Learning Checkpoint below.
Figure 9.2.2.: Projected Changes in Key Climate Variables Affecting Agricultural Productivity. Changes are shown for 2070-2099 compared to 1971-2000 and projected under an emissions scenario that assumes continued increases in greenhouse gases. Credit: National Climate Assessment, 2014.
Figure 9.2.3.: Reduced winter chilling projected for California’s Central Valley, assuming that observed climate trends continue through 2050 and 2090. Credit: National Climate Assessment, 2014.
Learning Checkpoint. (short answers)
1) What are some of the challenges that farmers will face in a changing climate?
(add text box)
Click for answer
Possible Answers:
• increased temperatures
• leads to increased ET - increased water needs for the same crop production and increased water needs for irrigation
• heat stress
• can lead to reduced crop yields
• change in timing and intensity of rainfall
• more extreme weather events – floods and droughts
• increased CO2 concentrations
• may benefit some crops and weeds
• may negatively affect the nutritional makeup of some crops
• shifting zones of crop production
• changing threats from pests, disease, and invasive species
• insects
• weeds
2) In the first part of this module, we explored some maps from the National Climate Change Viewer. Discuss how the predicted changes in climate that you saw in those maps (Module 9.1 Projected Climate Changes) will likely affect farmers.
(add text box)
Click for answer
Answer: The NCCV shows that temperatures are predicted to increase, including maximum and minimum temperatures. Growing seasons will be longer. Increased temperatures could result in heat stress for some crops and increased yields for others. Changes in temperature may shift the zones of crop production, so farmers may have to change the crops and crop varieties that they grow. Increasing temperatures will lead to increased evaporation and transpiration rates, and to reduced soil moisture and runoff. If precipitation in an area decreases, then farmers may need to find alternative irrigation water or change to lower-water-use crops. In general, a hotter and drier climate will create the need for more water-efficient farm practices and crops.
Plants, whether crops or native species, have adapted to flourish within a range of optimal temperatures for germination, growth, and reproduction. For example, plants at the poles or in alpine regions are adapted to short summers and long, cold winters, and so thrive within a certain range of colder temperatures. Temperature plays an important role in the different biological processes that are critical to plant development. The optimal temperature differs for germination, growth, and reproduction, and those optimal temperatures need to occur at certain times in the plant's life cycle, or the plant's growth and development may be impaired.
Let's consider corn as an example. Corn seed typically will not germinate in soil colder than about 50°F. The minimum air temperature for vegetative growth (i.e., the growth of stems, leaves, and branches) is about 46°F, but the optimal range of temperatures for vegetative growth of corn is 77-90°F. At temperatures outside of the optimal range, growth tends to decline rapidly. Many plants can withstand short periods of temperatures outside of the optimal range, but extended periods of high temperatures above the optimal range can reduce the quality and yield of annual crops and tree fruits. Optimal reproduction of corn occurs between 64 and 72°F, and reproduction begins to fail at temperatures above 95°F; in fact, reproductive failure for most crops begins around 95°F.
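These thresholds can be collected into a simple lookup that flags stressful days for a given growth stage. This is only an illustrative simplification (process-based crop models, such as those discussed in Hatfield et al., 2014, treat these responses in far more detail):

```python
# Approximate corn temperature thresholds (deg F) from the text above.
CORN_F = {
    "soil_germination_min": 50.0,
    "vegetative_air_min": 46.0,
    "vegetative_optimal": (77.0, 90.0),
    "reproductive_optimal": (64.0, 72.0),
    "reproductive_failure_above": 95.0,
}

def reproductive_failure_days(daily_tmax_f):
    """Days of year whose maximum exceeds the reproductive-failure threshold."""
    limit = CORN_F["reproductive_failure_above"]
    return [doy for doy, t in enumerate(daily_tmax_f, start=1) if t > limit]
```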
Water availability is a critical factor in agricultural production. We saw in Module 4 how increased temperature leads to increased transpiration rates. High rates of transpiration can also exhaust soil water supplies, resulting in drought stress. Plants respond to drought stress through a variety of mechanisms, such as wilting their leaves, but the net result of prolonged drought stress is usually reduced productivity and yield. Water deficit during certain stages of a plant's growth can result in defects, such as tougher leaves in kales, chards, and mustards. Another example, blossom-end rot in tomatoes and watermelon, is caused by water stress and results in fruit that is unmarketable (Figure 9.2.4; for more photos of blossom-end rot on different vegetables, visit Blossom end rot causes and cures in garden vegetables).
In addition to water stress and impacts on plant productivity and yield, increased temperatures can have other effects on crops. High temperatures and direct sunlight can sunburn developing fruits and vegetables. Intense heat can even scald or cook fruits and vegetables while still on the plant.
Figure 9.2.4.: Blossom-end-rot in a tomato. Credit: Scot Nelson, Creative Commons
Crop yield
A warming climate is expected to have negative impacts on crop yields. Negative impacts are already being seen in a few crops in different parts of the world. Figure 9.2.5 shows estimated impacts of climate trends on crop yields from 1980-2008, with declines exceeding 5% for corn, wheat, and soy in some parts of the world. Projections under different emissions scenarios for California's Central Valley show that wheat, cotton, and sunflower have the largest declines in yields, while rice and tomatoes are much less affected (Figure 9.2.6). Notice that there are two lines on the graphs in Figure 9.2.6 projecting crop yields into the future. The red line corresponds to temperature increases associated with a higher carbon dioxide emissions scenario. We saw in Module 9.1 that the more CO2 we emit, the more heat energy is trapped in the lower atmosphere, and therefore the warmer the temperatures. For some crops, those higher temperatures are associated with greater impacts on the crop's yield.
Why are some crops affected more by observed and projected temperature increases than others? It depends on the crop, the climate in the region where the crop is being grown, and the amount of temperature increase. Consider the Activate your learning questions below to explore this more deeply.
Why do some crops see a positive yield change with increasing temperatures, such as alfalfa in Figure 9.2.6? Generally, warmer temperatures mean increased crop productivity, as long as those temperatures remain within the optimal range for that crop. If a crop is being grown in a climate that has typical temperatures at the cooler end of the plant's optimal range, then a bit of warming could increase the crop's productivity. If the temperatures increase above the optimal range or exceed the temperature that leads to reproductive failure, then crop yields will decline.
Figure 9.2.5.: Climate change effects on crop yields. Credit: Nelson, 2014
Figure 9.2.6.: Crop Yield Response to Warming in California’s Central Valley. Changes in climate through this century will affect crops differently because individual species respond differently to warming. This figure is an example of the potential impacts on different crops within the same geographic region. Crop yield responses for eight crops in the Central Valley of California are projected under two emissions scenarios, one in which heat-trapping gas emissions are substantially reduced (B1) and another in which these emissions continue to grow (A2). This analysis assumes adequate water supplies (soil moisture) and nutrients are maintained while temperatures increase. The lines show five-year moving averages for the period from 2010 to 2094, with the yield changes shown as differences from the year 2009. Yield response varies among crops, with cotton, maize, wheat, and sunflower showing yield declines early in the period. Alfalfa and safflower showed no yield declines during the period. Rice and tomato do not show a yield response until the latter half of the period, with the higher emissions scenario resulting in a larger yield response. Credit: National Climate Assessment, 2014.
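For readers curious how curves like those in Figure 9.2.6 are constructed, the two processing steps named in the caption (a five-year moving average, expressed as a change from a 2009 base value) can be sketched as follows. This is our own minimal illustration, not the study's actual code:

```python
def moving_average(values, window=5):
    """Centered moving average; only positions with a full window are returned."""
    half = window // 2
    return [sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)]

def change_from_base(values, base_value):
    """Express each value as a difference from a base-year value (e.g., 2009)."""
    return [v - base_value for v in values]
```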
Activate your learning (short answers)
1) Inspect Figure 9.2.5 above. Which crops' yields have already been most affected by climate change, and which crops the least?
(add text box)
Click for answer
Answer: Corn and wheat have seen the largest yield impacts. Corn yields were reduced by more than 5% in China and Brazil between 1980 and 2008, and wheat yields were reduced by nearly 15% in Russia and by more than 5% globally. Rice has seen the least impact, with nearly no yield reduction globally.
2) What are some possible reasons for the difference in yield impact between corn, wheat, and rice that you see in Figure 9.2.5?
(add text box)
Click for answer
Answer: The temperature increase between 1980-2008 produced temperatures outside of the optimal range for vegetative growth and reproduction for corn and wheat, while rice has a warmer range of optimal temperatures. Also, the regions where the different crops are grown may have experienced different ranges of temperature increase between 1980 and 2008.
3) Consider the graph for Wheat in Figure 9.2.5. What is the % yield impact in Russia and the United States? What could cause differences in yield impact between regions?
(add text box)
Click for answer
Answer: Between 1980 and 2008, Russia experienced a nearly 15% yield impact on wheat, while the US experienced a slight positive impact on wheat yield. As we saw in Module 9.1, the temperature increase associated with climate change varies from place to place on the globe, with some regions warming more or less than others. It's possible that the wheat-growing regions of Russia experienced greater warming from 1980-2008 that exposed their wheat crops to temperatures outside of their optimal range. In addition, some wheat is grown in regions where the climate is already on the borderline of being optimal for that crop. Wheat grown in regions where the climate is already near the warmer end of the optimal temperature range will see declines sooner. On the other hand, climates that are near the colder end of the optimal temperature range might see an increase in yield with warming temperatures. For example, in the US, wheat is grown in North Dakota, where a warming climate could increase yields as temperatures become more optimal for more of the growing season.
Weeds, Insects and Diseases
Warming temperatures associated with climate change will not only have an effect on crop species; increasing temperature also affects weeds, insect pests, and crop diseases. Weeds already cause about 34% of crop losses, with insects causing 18% and diseases 16%. Climate change has the potential to increase the large negative impact that weeds, insects, and diseases already have on our agricultural production system. Some anticipated effects include:
• several weed species benefit more than crops from higher temperatures and increased CO2 levels
• warmer temperatures increase insect pest success by accelerating life cycles, which reduces time spent in vulnerable life stages
• warmer temperatures increase winter survival and promote the northward expansion of a range of insects, weeds, and pathogens
• longer growing seasons allow pest populations to increase because more generations of pests can be produced in a single growing season (a simple degree-day estimate of this effect is sketched after this list)
• temperature and moisture stress associated with a warming climate leaves crops more vulnerable to disease
• changes in disease prevalence and range will also affect livestock production
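One common way to quantify the "more generations per season" effect noted above is with growing degree days (GDD), a standard heat-accumulation index. The sketch below assumes a hypothetical insect with a 50°F developmental base temperature and a made-up heat requirement per generation; real pests have species-specific values:

```python
def growing_degree_days(tmax_f, tmin_f, base=50.0):
    """Daily growing degree days (deg F) via the simple averaging method."""
    return max(0.0, (tmax_f + tmin_f) / 2.0 - base)

def estimated_generations(daily_tmax_f, daily_tmin_f, base=50.0,
                          gdd_per_generation=600.0):  # hypothetical value
    """Rough number of pest generations a season's heat could support."""
    season_gdd = sum(growing_degree_days(hi, lo, base)
                     for hi, lo in zip(daily_tmax_f, daily_tmin_f))
    return season_gdd / gdd_per_generation
```

A longer, warmer season accumulates more degree days, so the estimated number of generations rises, which is exactly the mechanism behind increased pest pressure.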
Modeling and predicting the rate of change and magnitude of the impact of weeds, insects, and diseases on crops is particularly challenging because of the complexity of interactions between the different components of the system. The agricultural production system is complex and the interactions between species are dynamic. Climate change will likely complicate management of weeds, pests, and diseases as the ranges of these species change.
Effects on Soil Resources
The natural productive capacity of a farm or ranch system relies on a healthy soil ecosystem. Changing climate conditions, including extremes of temperature and precipitation, can damage soils. Climate change can interfere with healthy soil life processes and diminish the ecosystem services provided by the soil, such as water-holding capacity, soil carbon storage, and nutrient supply.
The intensity and frequency of extreme precipitation events are already increasing and are expected to continue to increase, which will increase soil erosion in the absence of conservation practices. Soil erosion occurs when rainfall exceeds the ability of the soil to absorb the water by infiltration. If the water can't infiltrate into the soil, it runs off over the surface and carries topsoil with it (Figure 9.2.7). The water and soil that run off during extreme rainfall events are no longer available to support crop growth.
Shifts in rainfall patterns associated with climate change are projected to produce more intense rainstorms more often. For example, there has been a large increase in the number of days with heavy rainfall in Iowa (Figure 9.2.8), despite the fact that total annual precipitation in Iowa has not increased. Soil erosion from intense precipitation events also results in increased off-site sediment pollution. Maintaining some cover on the soil surface, such as crop residue, mulch, or cover crops, can help mitigate soil erosion. Better soil management practices will become even more important as the intensity and frequency of extreme precipitation increase.
Figure 9.2.7.: Heavy rainfall can result in increased surface runoff and soil erosion. Credit: Hatfield et al., 2014
Figure 9.2.8.: Increasing Heavy Downpours in Iowa. Iowa is the nation’s top corn and soybean producing state. These crops are planted in the spring. Heavy rain can delay planting and create problems in obtaining a good stand of plants, both of which can reduce crop productivity. In Iowa soils with even modest slopes, rainfall of more than 1.25 inches in a single day leads to runoff that causes soil erosion and loss of nutrients and, under some circumstances, can lead to flooding. The figure shows the number of days per year during which more than 1.25 inches of rain fell in Des Moines, Iowa. Recent frequent occurrences of such events are consistent with the significant upward trend of heavy precipitation events documented in the Midwest. Credit: Hatfield et al., 2014
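The index plotted in Figure 9.2.8 is simply a per-year count of days above a rainfall threshold. A sketch, assuming daily precipitation totals (inches) grouped by year in a plain dictionary (a hypothetical data layout, not the source dataset):

```python
def heavy_rain_days_per_year(precip_by_year, threshold=1.25):
    """Count days each year with rainfall above the threshold (inches).

    precip_by_year: dict mapping year -> list of daily precipitation totals.
    """
    return {year: sum(1 for p in days if p > threshold)
            for year, days in precip_by_year.items()}
```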
Farmers have had to adapt to the conditions imposed on them by the climate of their region since the inception of agriculture, but recent human-induced climate change is throwing them some unexpected curve balls. Extreme heat, floods, droughts, hail, and windstorms are some of the direct effects. In addition, there are changes in weed species and distribution, and pest and disease pressures, on top of potentially depleted soils and water stress. Fortunately, there are many practices that farmers can adopt and changes that can be made to our agricultural production system to make the system more resilient to our changing climate.
Farmers and ranchers are already adapting to our changing climate by changing their selection of crops and the timing of their field operations. Some farmers are applying increasing amounts of pesticides to control increased pest pressure. Many of the practices typically associated with sustainable agriculture can also help increase the resilience of the agricultural system to the impacts of climate change, such as:
• diversifying crop rotations
• integrating livestock with crop production systems
• improving soil quality
• minimizing off-farm flows of nutrients and pesticides
• implementing more efficient irrigation practices
The video below introduces and discusses several strategies being adopted by New York farmers to adapt to climate change. In addition, the fact sheet Farming Success in an Uncertain Climate, produced by Cornell University's Cooperative Extension, outlines solutions to challenges associated with floods, droughts, heat stress, insect invasions, and superweeds. Also, p. 35, Box 8 in Advancing Global Food Security in the Face of a Changing Climate outlines some existing technologies that can be a starting point for adapting to climate change.
Learning Checkpoint: How can farmers adapt to climate change?
• Watch 15 min video by Cornell University about Agriculture and Adaptation about how New York farmers are adapting to climate change.
• Read the fact sheet from Cornell University's Cooperative Extension about Farming Success in an Uncertain Climate
• Read p. 35, Box 8 in Advancing Global Food Security in the Face of a Changing Climate
• Answer the questions below
Video: Climate Smart Farming Story: Adaptation and Agriculture (15:09)
Click for a transcript of the Adaptation and Agriculture video.
Dale Stein. Stein Farms, Le Roy, NY: The weather is definitely becoming more erratic and more extreme than what it had been in the past. Paul King, Six Mile Creek Vineyard, Ithaca, NY: I have that tendency, as others do that have lived a long time in the same place, to say, “Well the winters aren't as cold, we're not getting as much snow.” Rod Farrow, Lamont Fruit Farm, Waterport, NY: Certainly it's been a surprise over the last few years, how much earlier the seasons have become in general. Jessica Clark, Assistant Farm Manager, Poughkeepsie Farm Project: And I would say that it actually does seem like the season gets hotter faster. David Wolfe, Professor of Horticulture, Cornell University: We're here at one of Cornell's apple orchard research sites. New York is well known for the quality of its apples. We’re usually second or third in the US in apple production. And we got there by, farmers from over many years, really working with Cornell researchers to come up with best management practices. But of course, now we're facing, like farmers everywhere, new challenges, challenges associated with climate change. For example, I never expected when I got into this climate change research realm back in the 1990's, that one of the most important things that would come up with regards to the fruit crop growers is actually cold and frost damage in a warming world. The reason for that is that these plants can sometimes be tricked into blooming earlier with a warming winter. And we had known from looking at historical records that the apples were blooming a few days earlier than they used to. But in 2012 there was a real record breaker. The apples in the state bloomed about four weeks earlier than normal, never, never observed before. And of course, this put them into a really long period of frost risk. And sure enough, we lost close to half the crop in much of the state, millions of dollars of damage. So to deal with this sort of thing, we have to think about things like frost risk warning systems for farmers. Farmers may have to consider misting systems or wind machines for frost protection. And our apple breeders may have to think about coming up with genetic types that don't jump the gun in terms of early bloom in warm winters. So the experience of adapting to climate change may be different for each farm. But nevertheless, many of the state's leading agricultural industries, which include dairy, grapes, apples, and fresh market produce, all face new challenges, new risks, and new opportunities. When it comes to climate change and adaptation, farmers across New York all have a story to tell. Dale Stein. Stein Farms, Le Roy, New York: I'm Dale Stein, senior partner at Stein Farms in Le Roy. We milk 850 cows, work almost 3,000 acres of land. Today we've had very heavy rain all morning, they got flood watches up all over. We've seen years where a drought, where on the gravel ground you get almost no yield. We actually had two years in a row, 2011 and 2012 were too dry here, so all our forages were lower production. We feed 75 ton of feed a day, so about 4 tractor-trailer loads of feed a day. We ended up, by the end of 2012, running out of our surplus forage. We used all that up. We end up on those years buying more grain, which increases our cost of production and lowers your profit down. But we're harvesting 1500 to 2000 ton of Triticale every May, that if I didn't have, that's extra on the same ground. If I didn't have that, we would have been in a lot worse place than we were without it. 
Bill Verbeten, Cornell Extension Specialist: The forage inventory shortages that we've had from extreme weather conditions in recent years are really just a sign of things to come, unfortunately. Farmers have to deal with a change in climate each and every day. And so in Extension, we really try to help farmers manage their risk. And growing a triticale forage crop, or another small grain for forage, can really give another opportunity to protect their resources over the winter, because they're more vulnerable to extreme precipitation events and losing that soil. We can protect the soil. Notice the fibrous root system. This is why this crop can hold soil. Just see how much soil, even in this couple inches of roots, that this is holding onto. Dale Stein. Stein Farms, Le Roy, New York: My standpoint, from what I've seen on this farm, Triticale works very well for us and the palatability is phenomenal, the cows love it. Bill Verbeten, Cornell Extension Specialist: So this is an awesome combination of a profitable crop that protects the environment. Dale: Baffles me why more farmers aren't using Triticale, just baffles me. Paul King, Six Mile Creek Vineyard, Ithaca, New York: I'm Paul King. I do most of the vineyard management, and most of the winemaking, and all of the distilling, here at Six Mile Creek Vineyard, and I've been here for almost 25 years. If we talk about climate change, longer growing season and a little hotter weather will ripen the fruit more dependably. There are some varieties, and I can give you two or three examples. Pinot Noir is a little fussy, Merlot for sure, Cabernet Sauvignon, and to a lesser degree, Chardonnay. I think these are varieties that will benefit. The best management option for any individual vineyard to deal with increasingly varying weather, if we talk about climate change, would be to think carefully about the varieties that they're growing. That's really the biggest management strategy, because everything else you're doing is then a little bit of, sort of a stopgap. Wind turbines help in only very specific weather conditions, where very calm conditions are set up and there's a deep gradient between the temperatures at the surface and just a few hundred feet in the air, and mixing up that layer can help a lot. But they're pretty specific weather conditions and it's a pretty costly investment. You need to grow the varieties that you can grow well, and that's what you need to do. That is especially true at Six Mile Creek, but it's also true for any of the other vineyards. Last winter was a particularly cold one and it's really interesting. I think the minimum low temperature in Ithaca is still probably minus 23 degrees Fahrenheit, or so. We didn't really approach that, but what we did see here were lots of excursions to minus 14, minus 15, minus 16 degrees and that is a very, very critical temperature. You're going to get significant bud loss right around that threshold. What effect is that going to have on the quality and quantity of wine grapes that are grown in the region? And certainly at Six Mile Creek Vineyard, we have lost most of the Riesling, the fruit that we had here, as compared with our Seyval, a hybrid, where we have virtually a full crop. There is a lack of name recognition of some of these hybrids. Seyval Blanc, that sounds a lot like Sauvignon Blanc, but is it a Sauvignon Blanc? Well, it's not a Sauvignon Blanc, it's a completely different variety. It's my personal favorite. I get six ton per acre, even here. It's disease resistant.
It's one of the first grape varieties to ripen. It's a beautiful grape variety, it's just relatively unknown. But I think the people that I know that most enjoy wine really like trying new wines. So there's a huge, huge outlet out there for exploring some of the new hybrids, they're great varieties. It's one of the Finger Lakes fortes. In the long run that's gonna serve to help us. Rod Farrow, Lamont Fruit Farm, Waterport, NY: I'm Rod Farrow, one of the owners and operators of Lamont Fruit Farm in Waterport, New York. We operate about 500 acres of apples, grow all kinds of varieties, about 29 different ones. The major varieties would be Empire, Honeycrisp, Gala, Fujis, SweeTangos. We've certainly moved our bloom time forward, probably at least five to seven days, and then some years a lot more than that. How much of this we can attribute to climate change is still a little bit debatable to me personally, but there's certainly a sense that things are changing here, and that the climate is getting a little more unpredictable. And the risk of early season and early bloom seems to be greater and greater every year. The chances of a warm spell in March, an extended warm spell, seem to be much larger now than they were ten years ago. I would say, in general, our farm’s definitely vulnerable to extreme weather events. It always has been. We're at the mercy of Mother Nature no matter what we do. The question is, has the frequency increased and the risk? Certainly I’d say there have been a lot of extreme instances of weather over the last thirty plus years here. We've had a number of very large hail storms, but certainly the frequency of that has been greater since 1998. One of the things that drives what you do in terms of risk management is the profitability of your business. And a profitable business can afford to do things to mitigate risk, whether that be invest in frost machines or try to choose better orchard sites, or add overhead cooling or overhead irrigation, frost protection. Through the 2000s the orchard business has generally been pretty healthy. So I certainly see an uptick in investment in risk management. So anywhere we have reasonable sites, or good orchard sites, we've survived any frost that we've ever had, including 2012. And we look at it as a company strategy that investing in the highest possible fruit sites or orchard sites has just as big, if not greater, economic impact than trying to mitigate a site that's going to be at risk in years when it's cold. Certainly multi-peril insurance can help in years of distinct disaster and actually make years that could be very, very bad for you, actually years that you could not necessarily thrive in, but you can at least survive through. So we're big believers in that. The strategies that are being used at the moment to lower your risk are definitely trying to preserve the economic viability of fruit farms and businesses in general in western New York. Not all climate change is negative. So increasing the number of heat units per season has a positive impact on what we can do for fruit size, potential yield, and return bloom tree health. So there's always gains and balances with anything. We certainly have a little bit higher risk but we also possibly have a slightly higher potential in terms of yield and value. Jessica Clark, Assistant Farm Manager, Poughkeepsie Farm Project: My name is Jessica Clark. I'm the assistant farm manager at the Poughkeepsie Farm Project.
And the Poughkeepsie Farm Project is a nonprofit that has an educational mission and also a working CSA farm. We are not certified organic, but we do try to use organic practices. We notice climate change in terms of the disease susceptibility of our plants, and I've seen definitely an increase in the number of different diseases and pests that can affect us here in the Northeast. Certainly when we have very extreme weather events, and certainly when we have sort of these very strange, you know, either early summers, very late summers or very, very late falls, so that it doesn't actually get to freezing until February. You know I'm sure that that extends how strong the disease pressure can be the next year, and the pest pressure. And heat stress actually can be a big factor for a lot of our Brassicas. And in general that's something you deal with as a farmer. And the changing of the seasons, spring to summer, brassicas are always going to be a challenge, but they're even more of a challenge. And they're a good indicator in terms of crops, because they do not like a lot of variability in their weather. They pretty much like the weather to always be, you know, relatively mild, not too wet, not too dry, and pretty much the same temperature all the time and that’s really just not what you get here. So we're already dealing with a change in climate, you know, what was it two years ago when we would have 80-degree weather in early March, and then go freezing in April. Crazy things can happen in a season. It's almost like predicting for unpredictability. Having that kind of reinforces the fact that we, you know, should have diversified market areas and also diversified crops. You don't have to be as diversified as the CSA because certainly that can be a little bit overboard in some areas, but certainly to rely on one crop is, you know, like playing a game of dice, like sometimes it's just not going to come up your turn. And if, certainly, if you don't have crop insurance, and even if you do have crop insurance, you know, it can be a very risky, you know, game to play. I know people who are in the orchard business in Ulster County and even they're kind of going more into agro-tourism, they're going more into different crops, different specialty crops, just to have something on the side that they can rely on. You know it kind of makes one, as a farmer, more bold, to say like, “oh well, we'll just see how early we can get tomatoes if it's going to be warmer earlier”, or “we'll see how late we can have crops, you know, into the fall”. If it doesn't work, it doesn't work, but you never know and probably something else is going to fail in the meantime. I personally like to also make sure that our organic matter is high in our soils to begin with, so that it has that humus and organic matter that's capable of holding water, as well as, as much as possible, keeping our soil covered in a cover crop, when we can. And then, even when we're tilling in that cover crop, to try and choose moments where we're not losing too much soil. Certainly we're thinking about carbon sequestration, and being able to lock in a lot of that carbon into our soil. It’s partially because it's good for the earth and partially because it's good for our plants to have that much, you know, to have a high carbon soil. You know, you come into the idea of sustainable farming knowing that you're trying to not, you know, ruin the planet and trying to, you know, make sure that you're not, um yeah, you're not messing things up too bad.
David Wolfe, Professor of Horticulture, Cornell University: Well these are just some of the experiences and challenges that farmers throughout the Northeast are dealing with in adapting to climate change. But we have advantages in this region too, such as being relatively water rich. And with a longer growing season, this could open up new opportunities for new markets and new crops. Here at Cornell and Cornell's Institute of Climate Change in Agriculture, we are poised and ready to take on climate change challenges and work with our grower partners, stay one step ahead of the curve, and take advantage of any opportunities that might come our way.
Check Your Understanding (short answers)
1) How can frost damage increase with climate change, even if temperatures are overall warming?
(add text box)
Click for answer
Answer: If temperatures overall warm, some crops will bud earlier in the year as the winter warms making them more susceptible to frost damage in the event of a late frost. For example, in 2012 in the state of New York, apples bloomed four weeks earlier and close to half of the state's apple crop was lost to frost damage.
2) What are some ways that the risk of frost damage can be reduced in a warming climate?
(add text box)
Click for answer
Answer: Frost risk warning systems, misting systems, wind machines, and breeding varieties of crops that don't bloom too early in warming winters.
3) Why is triticale a beneficial forage crop for farmers to grow?
(add text box)
Click for answer
Answer: Extreme weather conditions, such as floods and droughts, can affect the harvest of forage crops. Triticale has a fibrous root system, so it can hold soil. It's a profitable crop that cows love and is more resilient to extreme weather conditions.
4) What is an important management strategy that farmers can use in growing grapes to work with a changing climate?
(add text box)
Click for answer
Answer: Think carefully about the varieties they are growing, to make sure that the varieties are appropriate for the climate in their region and resilient to potential future climate changes. For example, some varieties are more cold hardy and others are more heat tolerant. Wind machines can help prevent frost damage under calm conditions when there is a steep temperature gradient between the cold surface and warmer air a few hundred feet up, but they are a costly investment.
5) What climate change impacts are the farmers in the video dealing with?
(add text box)
Click for answer
Answer: As our global climate changes, growing seasons become hotter, and some crops are susceptible to heat stress. Warm spells occur earlier, provoking earlier bloom and leaving crops vulnerable to late frosts. The frequency of extreme weather events (e.g., floods, droughts, hail storms) has increased, as has the number of diseases and pests. There is less predictability in the length of the growing season, temperature, and precipitation.
6) What strategies are implemented by the farmers in the video to manage their farms in a changing climate?
(add text box)
Click for answer
Answer: Wind machines, overhead irrigation, choosing plant varieties appropriately, and siting orchards in appropriate locations. Diversified markets and diversification in crops grown increase resilience. Crop insurance decreases risk. Increase organic matter in soil and use cover crops to increase the water-holding capacity of soils and to protect soils.
We've covered quite a bit of ground in this module. We explored how human activities have led to an increase in atmospheric carbon dioxide, which in turn is increasing the surface temperature of the Earth and changing precipitation patterns. The resulting impacts on our agricultural production system are complex and potentially negative. As a result, farmers are adopting new practices and technologies to adapt to our changing climate and create more resiliency in the agricultural system.
Let's put global climate change and its interaction with our agricultural system into the Coupled Human-Natural System (CHNS) diagram that we've been using throughout the course. The development of global climate change is illustrated in the CHNS diagram in Figure 9.2.9, where the increased burning of fossil fuels within the human system results in more CO2 in the atmosphere. The response in the natural system is that more heat energy is trapped. The resulting feedback that affects the human system is that temperature increases along with all of the other climate change effects that we discuss in this module.
Figure 9.2.9.: Coupled Human-Natural System diagram illustrating the development of global climate change
Click for a text description of the Human-Natural system diagram.
This loop shows the human system with an arrow labeled drivers pointing to natural system with an arrow labeled feedbacks pointing back to human system. Those four concepts are defined as follows:
Human system (human system internal interactions): human population growth, industrialization, and increased burning of fossil fuels
Drivers: increased emissions of carbon dioxide and other greenhouse gases
Natural system (natural system processes and interactions): increased greenhouse gas concentrations trap more heat energy in atmosphere
Feedbacks: increased temperatures, extreme weather events, sea level rise and precipitation variability
What would be the next step in the diagram? Consider the feedbacks associated with the arrow at the bottom of the diagram that will affect the human system. What are the possible responses in the human system to these feedbacks? Our responses can be categorized into two broad categories: mitigation and adaptation. We've already discussed adaptation strategies that can be implemented by farmers to adapt to a changing climate. Some examples are changing the crops grown to suit higher temperatures or installing more efficient irrigation systems so that water is used more effectively.
What about mitigation? Mitigation strategies are those targeted at reducing the severity of climate change. One important mitigation strategy is to reduce the burning of fossil fuels, and our agricultural system is a significant contributor to greenhouse gas emissions. Shifting to renewable energy sources and more fuel-efficient equipment are two mitigation strategies. There are other important mitigation strategies that target other greenhouse gas emissions, such as nitrous oxide from fertilizer use and methane from ruminants and some types of irrigated agriculture.
In the next couple of modules, we'll talk more about strategies to make our agricultural systems more resilient and sustainable, and you'll see how our food production can become more resilient to climate change. In addition, you'll get the opportunity to explore the projected climate change impacts on your capstone region and to consider how those projected changes might affect the food systems of that region.
11.03: Summary and Final Tasks
Summary
In Module 9, we covered the human activities that have led to climate change and the resulting impacts on global climate. We explored some of the climate variables that will affect agriculture and then considered possible adaptation strategies that can be employed to make agriculture more resilient to climate change.
In the next two modules, we will delve deeper into the complexity of the coupled human-natural food system, continuing to employ spatial thinking. In Module 11, we will explore strategies to make food systems more resilient and sustainable. In order to do that, though, we need to understand how vulnerable those systems are to stressors like climate change, and to identify the adaptive capacity of those systems. In that final module before the capstone, many of the concepts covered in the course will come together.
Finally, your capstone data collection should be proceeding. The Summative Assessment for Module 9 required that you capture some critical information for your capstone region. The data gathered about projected temperature changes in your capstone region is integral to your final assessment of the resilience of the food systems in your capstone region.
Reminder - Complete all of the Module 9 Tasks!
You have reached the end of Module 9. Double-check the to-do list on the Module 9 Roadmap to make sure you have completed all of the activities listed there before you begin Module 10.
Summary
The summative assessment for Module 9 involves exploring the predictions of future climate variables from climate models for the US, then considering the possible impacts of increased temperature on your capstone region. Also, you will propose strategies to increase the resilience of the food systems in your capstone region to increasing temperatures.
The summative assessment for this module has two parts:
1. Exploration of the National Climate Change Viewer - view national predicted change in climate variables for the US
2. Data collection and interpretation from the National Climate Change Viewer for your capstone region
The second part requires that you work on the data collection for Stage 3 of the capstone project. Your grade for the module summative assessment will be based on your answers to the questions in the worksheet, which you will answer using the data you download and organize for the capstone.
For the capstone project, you will need to consider the resilience and vulnerabilities of the food systems in your assigned region to projected increases in temperatures. Your task now is to determine what temperature increases are projected for your assigned region as a result of human-induced climate change. Also, you'll need to start thinking about what impacts those changes may have on the food system in your region. You'll use the National Climate Change Viewer (NCCV) to explore predicted changes in climate variables for the US and to investigate the projected changes in minimum and maximum monthly temperatures in your assigned region.
Instructions
Download the worksheet linked below (choose MS Word docx or pdf) and follow the instructions.
Submitting Your Assignment
Type your answers in essay format. Submit your document to Module 9 Summative Assessment in Canvas.
Grading Information and Rubric
Your assignment will be evaluated based on the following rubric. The maximum grade for the assignment is 35 points.
Rubric
Criteria Possible Points
1. Summary of projected changes in climate demonstrates a clear understanding of the data retrieved from the NCCV. Correct units of measure are used in the discussion of climate variables. 10
2. Summary of climate change impacts on crops shows that the students understand basic connections between plants growth and climate variables. 10
3. The answer demonstrates that students considered the adaptation strategies presented in this module and identified strategies appropriate for the regions, including consideration of the region's crops, climate, and food systems. 10
Answers are typed and clearly and logically written with no spelling and grammar errors 5
12: Capstone Project Stage 3 Assignment
Soil/Crop Management, Pests, and Climate Change
(Modules 7-9)
At this stage, you should have collected quite a bit of data related to the physical environment of your region (water, soils, and climate) as well as related to the regional food system, including the history of the regional food system and which crops are grown in your region. You may also have discovered some impacts that the regional food system is having on soil and water resources in the region.
Capstone Stage 3
Click for a text description of the Capstone Stage 3 diagram.
Capstone Stage 3 (Soil/crop management, pests and climate change)
System stressors and management strategies
• describe soil conservation practices
• describe pest management strategies
• discuss agrobiodiversity of region
• discuss future climate scenarios
What to do for Stage 3?
• Download and complete the CapstoneProject_Stage3.docx worksheet that contains a table summarizing the data you’ll need to collect to complete this stage. Remember, you need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research.
• Add questions and continue to research the questions in your worksheet files.
• Keep track of all of the resources and references you use.
• Add relevant data, maps, and figures to your PowerPoint file.
• Revise your CHNS diagram and/or create a new one incorporating topics from Modules 7, 8 and 9.
• Individual Assessment:
• Write a one-page summary based on the data and information you’ve collected so far explaining what you think will be the major issues to address and focus on in your capstone presentation. To write this summary, you’ll need to look back on your worksheets from Stages 1, 2 and 3 and the PowerPoint you’ve been working on. In your summary, you need to synthesize the information you’ve collected so far and to identify the topics you think your group will want to focus on during your presentation.
• Use citations to reference the sources of material you use in your summary. Your list of references is not counted in the one-page requirement. The reference list may span to a second page.
• Please note in your summary if there are any major questions that you haven’t been able to answer about your region’s food systems.
• Submit your summary via the Capstone Stage 3 Assignment in Canvas.
Rubric for Stage 3 Individual Assessment
Criteria Possible Points
Summary submitted by deadline, one-page in length with reasonable margins and font size. 5
Summary is organized logically and arguments are presented clearly. 5
Important regional issues and topics related to climate, water resources, soil resources, nutrients, crop types, and other topics to be covered in the final presentation are identified and explained clearly and succinctly 10
References are cited properly and demonstrate that appropriate research has been accomplished. 5
Summary is written with correct grammar and spelling. 5
Total Possible Points 30
Overview
Understanding Coupling in Natural and Human Systems
Module 10 continues the theme of human-environment interactions seen at smaller scales in agroecosystems in module 8 and elaborates on the coupled human-natural systems (CHNS) concept introduced in Module 1. As learners, in Module 10.1 you will explore different scales and types of food systems, learn about barriers food producers face within food systems, and look in detail at how the CHNS framework allows us to see divergences of food systems into different types, and transitions from one type to another. In Module 10.2 you’ll learn about the impacts of food systems on natural systems, and practice a method called Life-Cycle Assessment (LCA), which is used to measure the impact of human food system components on the environment. LCAs can be applied to measure the environmental impacts of both particular products and complex human systems. The food systems typology, the CHNS framework, and the broad ideas behind LCAs in measuring impacts across a system are tools that you can use to develop your ideas for the capstone project and other learning efforts beyond this course.
As you apply the CHNS framework and the LCA method, you'll be using a geoscience habit of mind introduced in Module 1: systems thinking. Systems-oriented frameworks and methods are ways of interpreting and measuring complex systems that incorporate the scale of an entire system as well as the linkages among many interacting parts. As designers of this course, we believe that these frameworks and skills will be useful to you whether or not you go into some area of the geosciences, since systems thinking is a needed skill in today's complex world.
Goals
• Describe ways that food systems impact the earth system.
• Explain the characteristics and scale of the three major food systems coexisting in the world today and their overlap.
• Demonstrate the complexity and interconnectedness of food system types that connect society to the environment in different ways within a globalized world.
• Construct an assessment that measures the impacts of food systems on the earth system and local environments.
Learning Objectives
After completing this module, students will be able to:
• Define food systems and name the component systems, the roles played by each, and the three dominant and overlapping types of food systems in the world today.
• Name different types of impacts of the food system on earth’s natural systems.
• Define the basic elements of a coupled human-natural system.
• Describe a life cycle assessment (LCA) and state what it is used for.
• Explain examples of food systems to illustrate and compare their combined social and environmental inputs and impacts.
• Apply the concept of coupled human-natural systems to food systems and distinguish different ways that food systems develop and change because of human and natural factors.
• Apply a coupled human-natural system framework to describe how human systems affect earth’s natural systems within food systems.
• Construct life-cycle assessments using data on food production activities that compare the impacts of different types of food systems on the earth systems.
• Synthesize outputs of LCAs you have constructed to compare impacts of different food production systems.
Assignments
Print
Module 10 Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 10 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Watch the video on introduction to food systems, 2013 World Food Day, from the Food and Agriculture Organization of the United Nations.
3. NCAT/ATTRA: Life Cycle Assessment of Agricultural Systems, pp. 1-3 and figure 3 within the reading, which refers to the life-cycle analysis comparing light bulbs, on page 9.
1. You are on the course website now.
2. Online: 2013 World Food Day
3. Online: Life Cycle Assessment of Agricultural Systems
To Do
1. Summative Assessment: Life Cycle Analysis (LCA) of potato production in smallholder Andean and globalized North American Systems
2. Participate in Module 10 Discussion
3. Take Module Quiz
1. In course content: Summative Assessment; then take quiz in Canvas
2. In Canvas
3. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
13: Food Systems
What are food systems and what do they do?
This module builds on the introductory material in modules one and two defining food systems as coupled human-natural systems: human society interacting with the natural earth system. It marks the transition in the course to focusing on human food systems and land use and their impacts on the environment and earth systems.
13.01: Food Systems
In the introductory video below you will see a particular local example of a food system, presented by the Food and Agriculture Organization (FAO) of the United Nations. As you watch, look for examples of Human and Natural system components in a local food system particular to the Red River Delta in Vietnam, and the way that the food system has changed over time. Human and Natural system components were introduced in Module 1, and we have been referring to them regularly along the way in the course, as we have considered the natural system elements in agroecosystems and the way these are managed by humans. Now we will begin to take a larger, whole systems view of food systems.
Please watch the video below, celebrating World Food Day 2013, which describes the Vietnamese “Garden, Pond, Livestock Pen” (V.A.C) food system.
Video: World Food Day 2013 (6:52)
Click for transcript of the world food day 2013 video.
Narrator: In the Red River Delta area around Hanoi, peasant farmers have worked the rice fields for thousands of years. Zune started farming at 18, when he returned to his village after years of war. Zune: From 1975 to 1984 there was real hunger. At that time, we had cooperative markets and we suffered from hunger. We always had to borrow for our meals. Our life was so hard. Narrator: Now, after decades of economic reform, farmers can lease land from the government, decide what to raise, and sell what their farms produce at market. Combined with wide scale support for agriculture, what emerged was a sustainable food system that works, now known as VAC farming. Nguyen Ngoc Triu, President, VACVINA: VAC in Vietnamese is vuon, ao, chuong, which means garden, pond, livestock pen. Vietnamese families, especially in the Delta areas, normally have a pond in front or in the back of their house. They also have gardens and pig pens. This is an integrated system. Narrator: Waste products from one part of the system are recycled and used by other parts of the system. Even the nutrient-rich silt from the bottom of the pond is recycled to fertilize the garden and create new land for fruit and vegetables. Fish this year are the main business of Zune’s farm. Zune: In the pond I have pike, carp, bighead carp and mudcarp. As an example, normally we can have productivity of two and a half tons of fish. After expenses of about 700 US dollars, my net income is 1700 US dollars per year. Narrator: The pigs provide income, but they play yet another role in this complete system that links fish farming, gardening, and livestock. Biogas from the pigs powers the gas cooker in the family's kitchen. Zune: For example, with the pig pen we don't have to buy gas for cooking. The manure is used for fish production. The biogas system helps protect the environment. We also use manure for the garden. The wastewater of biogas is very good for irrigating crops. This type of irrigation water is better than nitrate. Narrator: Zune and his family make anywhere from five to seven thousand a year from all the activities on the farm. All the food they eat is produced here. With the farm's profits, Zune’s children have attended school and university, and Zune is constantly reinvesting and expanding the farm. Good nutrition is another benefit of VAC. The system is widely credited with increasing people's consumption of fish and animal proteins, and fruit and vegetables, grown according to the seasons, but all year round. Huong Nguyen Thi, National Programme Officer, FAO Viet Nam: The VAC system has been there for like 30 or 35 years. It plays quite an important role in providing nutritional intake at a small household level, especially in areas where land is very limited. So they can create a nutritional intake for their own family, and at the same time they can have some kind of surplus from agriculture production, and they can sell at the market or share with other surrounding families in the community. Narrator: Today, Vietnam is the world's number two rice exporter. But the Vietnamese have much more than just rice. Sound policy making, investment, and nutrition education, for women especially, have brought about a whole of society response that's created just one type of sustainable food system. What can we learn from it? First, all of us need to understand that food comes to us through food systems. They can be local, global, or a bit of both. Next we need to recognize that we ourselves are part of our food systems.
Only then can we begin to play a positive role by learning about nutrition, teaching our children, making good choices as consumers, and wasting less food. A food system, after all, is people and what they do, from farm to fork, to feed the body and the mind. As they say, you are what you eat.
The introductory section below is adapted from "Chapter 3: The food system and household food security” at the document website of the United Nations Food and Agriculture Organization (FAO).
This section attempts to describe the parts of a food system in basic terms, starting from the standpoint of the systems approach. It begins: "The perception underlying the systems approach is that the whole is greater than the sum of its parts. Any situation is viewed in terms of relationships and integration. A food system may thus include all activities related to the production, distribution, and consumption of food that affect human nutrition and health (see Figure 10.1.1, which is reproduced from module 1).
Food production comprises such factors as the use of land for productive purposes (land use), the distribution of land ownership within communities and regions (land tenure), soil management, crop breeding and selection, crop management, livestock breeding and management and harvesting, which have been touched on in previous modules. Food distribution involves a series of post-harvest activities including the processing, transportation, storage, packaging and marketing of food as well as activities related to household purchasing power, traditions of food use (including child feeding practices), food exchanges and gift giving and public food distribution. Activities related to food utilization and consumption include those involved in the preparation, processing, and cooking of food at both the home and community levels, as well as household decision-making regarding food, household food distribution practices, cultural and individual food choices and access to health care, sanitation, and knowledge.
Among the components of the food system, e.g. food processing, communication and education, there is substantial overlap and interlinkage. For example, household decision-making behavior with regard to food is influenced by nutrition knowledge and by cultural practices with regard to food allocation within the household as well as by purchasing power and market prices."
Figure 10.1.1.: Diagram of a food system as a "conveyer belt" of three sequential components delivering Nutrition and Health from Natural Resources and Production Environments. Technical and policy aspects central to food system activities are shown surrounding the three components of production, distribution, and consumption. Credit: Steven Vanek, adapted from Combs et al. 1996. Sustainable Food System Approaches to Improving Nutrition and Health.
Food systems are further embedded in environments and societies (thus, both natural and social/political contexts), which differ according to a variety of factors such as agroecology (the composition of the local agroecosystem; see previous modules), climate, social aspects, economics, health, and policy. The model presented in Figure 10.1.1 above is useful in conceptualizing the various activities that determine food security and nutritional well-being and the interactions within the food system.
Two important features that we want to emphasize in the passage above from the FAO are, first, that food systems involve processes at multiple scales (e.g. local agroecosystems, government policy at a national scale, international research and technology development), which eventually have many impacts at the household scale, both on the livelihoods of food producers (who gain income from the food system and also consume food) and on consumers around the world. Second, the criteria with which we should evaluate food systems are their ability to deliver nutrition and health outcomes (see e.g. module 3), and also the sustainability of natural resources and environments, which we will consider in module 10.2. We note that these criteria of environmental sustainability and health sit at opposite ends of this "conveyer belt" model of food systems: the food system "conveyer belt" can be said to deliver nutrition and health outcomes by transforming the inputs from natural resources and environments. These health and nutrition outcomes are associated with the concept of food security (sufficient access to appropriate and healthy food), which was introduced in module 3 and will be further explored in module 11. Human health and environmental sustainability correspond roughly to the positive objectives that we conceive of for the human system and natural systems, respectively: health and equitable nutrition (or food security) in the human system, and environmental sustainability of natural systems. A final observation from Fig. 10.1.1 is that food systems are ubiquitous and touch on all aspects of human societies. We are all participants in food systems, whether as producers, as consumers, in distribution, or in myriad other ways.
Activate Your Learning: Food's Journey in the Food System
Figure 10.1.2.: Supermarkets like this one are one of the most commonly accessed nodes for consumers in the modern globalized food system. Image Credit: Steven Vanek
Food systems comprise the interacting parts of human society and nature that deliver food to households and communities (see the previous page), and can be used to understand food in its relation to the earth system. To better understand food systems, in the exercise below you'll be asked to consider a familiar food of your choice, and the journey this food takes from where it is produced to the meals that we consume every day. Within the food supply chain for this food, you'll be asked to distinguish between social (human system) aspects and environmental (natural system) aspects of that product's food production and supply chain.
Activate Your Learning Activity:
Fill in the blanks below regarding the supply chain for a food product. You can also download the worksheet to fill in offline, or as part of a classroom activity. As depicted in Fig. 10.1.3, you’ll need to give the origin, some intermediate destinations, and then the final consumption point for the food product. Then you should think of some social or human system dimensions of the production, supply chain, or consumption of this product, as well as some ecological or natural system dimensions, and fill in the corresponding blanks. Do this first for a product familiar to you, whose supply chain you either know about or can research quickly (part 1). Then repeat the activity for a food product in the online introductory video from the first page of this module, about food systems in Vietnam (part 2). When you are done with each part you can click on the ‘answers’ link below each part to see how your answers match up.
Figure 10.1.3.: Illustration of source/production areas, intermediate destinations, and consumption points for typical food products from the activity below.
Fill in the blanks below. If you are doing this online, just note your answers on a piece of paper regarding the food product you have chosen, or download the worksheet. When you are done you can click on the ‘answers’ link below to see some possible examples and see if your answers match up with these answers.
1. Food product ______________________________
2. Food supply Chain:
1. Source: Where main raw material is produced, fished, hunted: ___________________________
2. Intermediate destination 1: ___________________________ (e.g. processing plant, washing, trucking, warehouse, etc.)
3. Intermediate destination 2: ___________________________ (e.g. processing plant, washing, trucking, warehouse, etc.)
4. Intermediate destination 3: ___________________________ (if needed)
5. Consumption point: ___________________________
3. Up to three social or human system dimensions of this food chain (e.g. policy, economic, or cultural factors associated with the production and consumption of this food, recall the Human System factors in a Coupled Human Natural Framework, module 1.2)
1. _______________
2. _______________
3. _______________
4. Up to three ecological or natural system dimensions of this food chain: (ecological factors would include crop and animal species, agroecosystems, climate, water, and soil influences on food production):
1. _______________
2. _______________
3. _______________
Example answers
(These may be a good deal more complete than your examples but give a sense of the range of possible answers)
Example Answer 1 - frozen, breaded fish filet from a local supermarket:
Click for answer.
Answer: Frozen, breaded fish filet (i.e., the fish part)
Food supply chain:
1. Fishing boat in Atlantic, Chinese, or Alaskan Fishery, e.g., including flash-freezing.
2. Preparation facility in Canada or U.S.
3. Cold chain shipping / Supermarket
4. Kitchen oven and dining table for food preparation and consumption.
See some fascinating details about the production process in How It's Made.
Social dimensions:
1. Financing, organization, and contracts for a fishing fleet and processing
2. Government and fishing communities’ agreements on fisheries regulation to avoid overfishing.
3. Supermarket companies
Ecological dimensions:
1. The wild fish species itself
2. Food sources for the fish (algae, other fish)
3. Changing ocean temperatures and conditions with climate change
Example Answer 2 - Bagel or bread from a local bakery or coffee shop:
Click for answer.
Answer: Baked good (i.e., the flour it is made from)
Food supply chain:
1. Farm in Midwest or Western U.S.
2. Grain elevator purchasing and storing grain
3. Flour production facility
4. Café or restaurant kitchen (may be a large centralized kitchen for a chain restaurant) and coffee shop table for consumption.
Social dimensions: (could include any of these)
1. Farm enterprise belonging to a farm family or company – organization of production labor and agroecosystem management.
2. Supply chains and companies for fertilizers, seed, and other agricultural inputs
3. Government policies regulating subsidies to farmers, tax on diesel fuel, pollution regulations etc.
4. Grain commodity markets and corporations
Ecological dimensions:
1. Domesticated wheat species (Triticum aestivum)
2. Prairie soils (Mollisols) with inherent good qualities and climate for wheat growing
3. Soil bacteria breaking down organic matter, releasing nutrients, accessing fertilizer N and releasing nitrous oxide.
4. Bread yeast and/or sourdough bacteria used in bread making.
Exercise 2:
Recall the World Food Day 2013 video about the Vietnamese “garden, pond, livestock pen” (V.A.C.) food system. You may want to quickly skim the video again and note the pathways that foods follow in these systems. Then choose either a product that appears in the video and is consumed within the household, or one that is sold outside the household (some products fit into both categories). Fill in the same set of production and transport steps for this product as you did in part 1, as well as some social and ecological aspects. You can use a piece of scrap paper or the downloaded worksheet. Note that a product consumed in this farming household may have a very short food supply chain!
Look at the worksheet below and fill in the blanks. When you are done you can click on the ‘answers’ link to see some possible examples and check how your answers match up.
1. Food product ______________________________
2. Food supply Chain:
1. Source: Where main raw material is produced, fished, hunted: ___________________________
2. Intermediate destination 1: ___________________________ (e.g. processing plant, washing, trucking, warehouse, etc.)
3. Intermediate destination 2: ___________________________ (e.g. processing plant, washing, trucking, warehouse, etc.)
4. Intermediate destination 3: ___________________________ (if needed)
5. Consumption point: ___________________________
3. Up to three social or human system dimensions of this food chain (e.g. policy, economic, or cultural factors associated with the production and consumption of this food, recall the Human System factors in a Coupled Human Natural Framework, module 1.2)
1. _______________
2. _______________
3. _______________
4. Up to three ecological or natural system dimensions of this food chain: (ecological factors would include crop and animal species, agroecosystems, climate, water, and soil influences on food production):
1. _______________
2. _______________
3. _______________
Example Answers
Example for the Vietnam VAC food system example:
Click for answer.
Answer: Food product: Fish from the fish pond in the video
Food supply chain (if sold):
1. Brought from pond
2. Transported by cart or truck to city
3. Sold in market
4. Prepared at home
Food Supply chain (if consumed at home):
1. Brought from pond
2. Prepared and eaten.
Social dimensions: (could include any of these)
1. Farm enterprise belonging to a farm family – production roles of family members
2. Government policies promoting farmers' choice of what to grow and the ability to market it (note that under the communist government there was a time when this was not allowed)
3. Government and community efforts to promote and adapt the V.A.C. food production methods.
4. The organization of local markets and food sellers that allow farmers to sell products.
Ecological dimensions:
1. Fish species e.g. carp
2. Pond / Garden agroecosystem
3. Recycling of organic wastes from fish production into soils
4. Rainy climate/river delta geography and abundant water for fish and crop production.
A good way to understand the complexity of different types of food systems is to look for organizing principles to classify them. In the introductory food supply chain exercise at the beginning of this module, if you chose a product that was produced a long distance from where you consumed it, you are aware that the global food system today handles food at an enormous spatial scale. This example leads to one way to organize our understanding of food systems, which is the hierarchy of global, regional, and local scales of food systems (Fig. 10.1.4).
Figure 10.1.4.: Spatial scale of organization in food systems. Credit: Steven Vanek
Click for a text description of the spatial scale diagram.
Scales and examples:
• Global: global grain and meat production (commodities), global fisheries
• Regional: most supermarket and restaurant foods
• Local: farmers' markets, local hunting and fishing
• Household: home gardens and subsistence agriculture
Another helpful way to classify food systems is to look for typologies of food systems. Building typologies is a somewhat subjective but often helpful process where we look for groups of systems or components that hang together in order to better understand their function, importance, or other attributes. For the typology of food systems we present here, we are thinking about classifying food systems based on how production occurs and at what scale, which portions of society are involved in production and distribution, and the rationale underlying production, distribution, and consumption. In this course, we use the scheme of three overlapping food systems that exist at global, regional, and local scales shown below in Fig. 10.1.5.
Figure 10.1.5.: Typology of Food Systems. Credit: Steven Vanek and Karl Zimmerer
Click for a text description of the Food Systems Typology diagram.
Typology of Food Systems into Smallholder, Globalized Corporate, and Alternative types. The two major types at left are the globalized corporate system, which is dominant in the industrialized world and in the trade of major commodities, and smallholder systems, which are in fact extremely important at local and regional levels in the developing world, with more than two billion smallholder farmers globally growing, trading, and selling food in these contexts. The vertical and horizontal axes in the diagram attempt to capture the variation among these systems in their integration into global markets and specialized industrial production (vertical axis), and the way in which the systems at right are responses to sustainability challenges of the modern food system (horizontal axis). The alternative types at right, which reflect current trends and movements in the modern food system towards sustainability, are divided into an alternative global or "ecologically modernized" type (see module 2 on the history of food systems) and a more local or community-based food system type, with both alternative types responding to sustainability critiques of the globalized food system in recent history. It is also important to realize that different food systems overlap and are certainly not spatially isolated: for example, smallholders in the developing world simultaneously participate in both smallholder and global ways of producing and consuming food, and urban consumers the world over may simultaneously purchase food from all four systems.
Global Corporate Food System
• High volume, minimized production costs
• Simplified farms that specialize in particular crops
• Global and regional shipping
• Unprocessed and processed foods
• Coordinated through major agribusinesses and food companies
• Goals: markets and return on investment
• Local producers participate via commodity production
Smallholder Food Systems
• Smaller-volume production on many more farms
• Complex, diverse farming systems with e.g. livestock and many crops
• Local/regional shipping and marketing
• Unprocessed foods
• Goals: generating farmer livelihoods and food for direct consumption and local markets
• Mixed production and consumption roles
• Produces a large proportion of food in developing countries
Alternative Food Systems: Globalized and Community-based
• Globalized
• "Ecological modernization" of globalized food system
• Global/national trade networks
• Goals: reform of industrialized farming practices
• Certification schemes: fair-trade, sustainable forestry, etc.
• Unprocessed and processed foods
• Mainstreaming of organic products in national/global distribution
• Community-based
• Emphasis on reintegration of local rural-urban economies
• Goals: reform of industrialized farming, local economies
• Local/regional shipping and farmers' markets
• Mainly unprocessed foods
• Organic and local criteria/certification
Consumers worldwide who enter a supermarket are largely interacting with the globalized corporate food system. Some local and regional products are provided, but food is largely sourced from major national and global production regions and can be transported long distances (hundreds to thousands of miles or kilometers), with enormous quantities of food moving through the system as a whole. There is an emphasis on modern production and processing techniques, efficiency, and lowering the immediate costs of production. Also, many of the products moving through this system are thought of as commodities: products that are generic and replaceable regardless of their origin and that carry standard global and national pricing frameworks. Examples would be corn grain for food, different grades of rice, soy and corn oils, supermarket potatoes and tomatoes, and cuts of pork for supermarket consumption.
Figure 10.1.6.: Summary of characteristics for the globalized corporate food system, taken from the previous page. Credit: Steven Vanek and Karl Zimmerer.
Click for a text description of the globalized corporate food system.
Globalized Corporate Food System
• High volume, minimized production costs
• Simplified farms that specialize in particular crops
• Global and regional shipping
• Unprocessed and processed foods
• Coordinated through major agribusinesses and food companies
• Goals: markets and return on investment
• Local producers participate via commodity production
Calling this a ‘corporate’ system may obscure the fact that production for this national/global scale system occurs most commonly not on corporate property or company farms, but in family farm enterprises like the thousands of dairy and grain farms that populate many regions of the United States. For example, family farms still constitute about 97% of farms in the United States by number, although the acreage in company-owned farms and the value earned by these company-owned farms are larger than this numerical count suggests (top pie charts within Figure 10.1.8a below). Also, in some areas of the country, ‘large’ and ‘very large’ family farms have mean farm sizes of many thousands of acres, which contradicts the traditional image of a small family enterprise and illustrates the pressure for farms to become large in modern industrialized food systems, in order to take advantage of economies of scale in farming (economies of scale refers to the idea that as the size of an enterprise goes up, the efficiency of producing a given item goes up and the cost per item goes down, e.g. baking one tray of muffins every Saturday versus opening a muffin shop making hundreds of muffins every day). A small numerical sketch of this idea follows below.
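To make the economies-of-scale idea concrete, here is a minimal sketch in Python. All of the numbers (a fixed cost for equipment and a variable cost per item) are invented for illustration and are not taken from the text or from USDA data; the point is simply that spreading a fixed cost over more units drives the average cost per unit down.

```python
# Minimal sketch of economies of scale (hypothetical numbers, not from the text).
# Average cost per unit = (fixed costs + variable cost * units) / units:
# as output grows, the fixed cost is spread over more units and per-unit cost falls.

def cost_per_unit(fixed_cost, variable_cost, units):
    """Average cost of producing one unit at a given scale of production."""
    return (fixed_cost + variable_cost * units) / units

# Hypothetical muffin example: a $200 oven (fixed) and $0.50 of ingredients per muffin.
for units in [10, 100, 1000, 10000]:
    print(f"{units:>6} muffins -> ${cost_per_unit(200, 0.50, units):.2f} per muffin")
```

The same arithmetic applies to a bushel of corn: a combine harvester is a large fixed cost, so the per-bushel cost falls as the harvested acreage grows, which is the pressure pushing farms to scale up.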
Figure 10.1.7.: This landscape in central Europe appears quite traditional but is, in fact, a production zone for commodity crops in the globalized corporate food system Credit: Steven Vanek
Nevertheless, this portion of the food system is called both ‘global’ and ‘corporate’ because most organizations that coordinate demand and organize the processing and distribution of foodstuffs in this layer of the global food system are corporations seeking benefit for their shareholders. In module 3 on nutrition we discussed the way that food has become fiscalized, i.e. it is not only a product working its way through a marketplace to consumers but also an active object of investment in the future growth potential of the business of food. These investments are managed through large-scale exchanges like the stock exchange here in the United States. These exchanges allow large swaths of relatively wealthy world citizens (including many in the middle class) to invest in the large-scale production of food and in reductions in prices, but they can create sustainability issues within the food system because return on investment, rather than food security or environmental sustainability, becomes the predominant objective of investors and corporations. Nevertheless, not only corporate entities but also government and civil society (e.g. farmer and community organizations, universities) are heavily involved in these global systems and can act to reform problems or regulate damaging or unjust practices. They act by way of advocacy and regulation, national/international food policies, and support structures such as research on food production and food processing methods.
Within this global system, then, local farmers and fishing communities often act as producers selling into commodity markets, alongside industry-owned farms, feedlots, and other production facilities. In addition to unprocessed food ingredients, the globalized corporate food system has also been largely responsible for the expansion in processed and prepared foods, which seeks to provide convenience for consumers as well as capture the added market value of more prepared foods. Processed foods have been criticized, especially by those advocating community food systems (see the description further on), because they displace the fresh and whole food components of diets that are important to good nutrition outcomes (see module 3). Processed foods often contain industrial ingredients such as corn syrup and low-quality fats, along with a lower fiber and vitamin content, which is usually not true of whole unprocessed foods.
Farms and acreage of different crops in United States agriculture:
The pie charts below demonstrate aspects of the description of the globalized food system above. For example, at least in terms of numbers, smaller, family-owned farms with an average size of approximately 240 acres (around 100 hectares, with one hectare = 100 x 100 m) dominate the count of farms in the United States (Fig. 10.1.8a). Nevertheless, large farms dominate to a greater extent when considering the total area taken up by farms of different sizes, and a large proportion of income goes to larger operations in a number of classes of farm products (Fig. 10.1.8b). These patterns vary somewhat by sector of the farming economy, and we include separate graphs for maize, vegetables, and dairy farms. You will use these graphs in the knowledge check activity below.
Figure 10.1.8a.: Charts summarizing USDA Census data from the 2012 agricultural census on numbers, size, and earnings of U.S. farms. It is notable that by numbers of farms, smaller farms dominate, but the total area taken up by larger farms is substantially greater than their small numbers would suggest. Image credit: Steven Vanek, based on the United States Department of Agriculture (USDA) 2012 agricultural census
Figure 10.1.8b.: Charts summarizing how the total revenue of farm products breaks down among farms of different size classes. Note that when all farm products are included in the estimates of revenue, there is a relatively even division of revenue among the farms of different size classes. However, the much larger numbers of farms in the small size category (88% of all farms, see figure 10.1.8a above) mean that the revenue per farm for these products is likely less. The graphs allow comparison of different products in terms of how different size classes of farms are participating in the markets for different types of products, e.g. "how do mid-size farms compare in the share of total revenue between maize and vegetables?" Credit: Steven Vanek, based on the United States Department of Agriculture (USDA) 2012 agricultural census.
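The divergence between "share of farms by number" and "share of farmland by area" that these charts show can be reproduced with a very small calculation. The Python sketch below uses invented, rounded size classes (the counts and average acreages are hypothetical, not the actual USDA census values) to show how a large majority of small farms can still account for a minority of the total acreage.

```python
# Hypothetical size classes: (label, number of farms, average acres per farm).
# These numbers are invented for illustration and are NOT the USDA census values.
size_classes = [
    ("small", 1_800_000, 240),
    ("mid-size", 250_000, 1_400),
    ("large", 60_000, 5_000),
]

total_farms = sum(n for _, n, _ in size_classes)
total_acres = sum(n * a for _, n, a in size_classes)

for label, n, acres in size_classes:
    count_share = 100 * n / total_farms
    area_share = 100 * n * acres / total_acres
    print(f"{label:>8}: {count_share:5.1f}% of farms, {area_share:5.1f}% of farmland")
```

With these made-up numbers, the "small" class is over 85% of farms by count but well under half of the farmland, the same qualitative pattern as in Figure 10.1.8a.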
Knowledge Check: Food products and farm sizes in U.S. agriculture
Choose the correct answers based on the graphs in Figure 10.1.8 above and then click on the space for the correct answers.
Question 1 - Short Answer
Which food product type (maize, vegetables, dairy) above had the largest participation of non-family owned farms, in terms of revenue from sales of that food product type?
Click for answer
Answer:
Vegetables. This means that in Maize and Dairy production, family-owned farms are more important in terms of total earnings.
Question 2 - Short Answer
Which crop type above has the largest participation of family-owned farms in the SMALL category, in terms of value sold?
Click for answer
Answer:
Maize
Question 3 - Short Answer
These charts don’t show the large variation that exists in the distribution of farm sizes around the country, which you will want to incorporate into your capstone project for your capstone region. If you have looked up this information for your capstone, does your capstone region have a smaller or larger average (or median) farm size?
Click for answer
Answer:
Here are some answers for different regions of the country that should help you estimate how farm size varies: California – in line with or larger than these figures; Colorado – in line with these figures on farm size; Pennsylvania – smaller. In particular, the Pennsylvania dairy sector is much less dominated by very large and corporate farm holdings. This is information you will be able to find in the USDA Census county-level reports, which give a breakdown of farmland areas for each county in the United States.
Question 4 - Short Answer
How do you think the “small” category of U.S. farms in land area (236 acres or just under 100 hectares) compares to the average landholdings of the “small” category for countries like Peru, Kenya, or India where smallholders (farmers on relatively small land areas) are a large part of the population?
Click for answer
Answer:
The median farm size of smallholder farmers in these countries is much smaller, often 10 acres (3-4 hectares) or less, which is less than a twentieth of the average "small" family farm in the United States.
Figure 10.1.9.: This patchwork landscape of sloped fields and hedgerows with dozens of family farms in western Kenya is typical of smallholder food systems. Credit: Steven Vanek
Approximately 500 million smallholder farms with areas of less than 2 hectares (5 acres) support the nutrition and livelihoods of approximately two billion people globally (IFAD, 2013). As such, smallholders form an important sector of the global food system, producing up to 80% of local and regional food supplies in Sub-Saharan Africa, South/Southeast Asia, and China. You saw an example of a smallholder system in the summative assessment for module 1. Livelihood strategies of households in this system attempt to overcome risk and guarantee subsistence as well as cash income. For this reason, these "semi-subsistence" farming systems are often complex, for example integrating agriculture, livestock, and agroforestry food production with off-farm livelihood activities that overlap with consumption from the globalized food system (previous page). Most food is consumed either on the farm where it is produced or locally and regionally, with transport and distribution handled by relatively short-distance networks.
Figure 10.1.10.: Smallholder systems often include livestock for purposes of economic value (meat, wool) and better use of landscapes that are not optimal for cropping, such as this rugged patchwork of rangeland and cropped fields in Bolivia where goats are being grazed. Credit: Steven Vanek
Figure 10.1.11.: Hardy, early yielding crops such as barley, pictured here in the Andes, are a good way for smallholders to produce staple grains in difficult environments. Depending on the elevation of a smallholder community in this Andean context, maize and potatoes round out a full range of options for producing carbohydrate staples in this risky mountain environment. Credit: Steven Vanek
13.1.06: Alternative Food Systems- Global and Local Variants
"Quasi-Parallel" Alternatives to Modern Food Systems
You may recall that in the concluding section of module 2, on the history of crop domestication and food systems, we presented the recent development of alternatives to the modern globalized food system as "quasi-parallel" new movements, as well as food production and distribution strategies intended to address sustainability issues. We use the word "quasi-parallel" because the global and local variants of these responses focus on different strategies and scales within the food system, and target different outcomes, even though both consider themselves to be responding to the sustainability challenges in the modern food system, sometimes using similar practices at the farm scale for managing crops and soils. Also, we acknowledge at the outset that dividing these strictly into two variants may not cover every case. The intention is to give you a sense of the range of alternatives being proposed so that you can potentially look for these types of alternative food systems in your capstone regions and integrate them into proposed sustainability strategies for the future of food in these regions. Also, it is likely that both variants have advantages and disadvantages that are pointed out and debated by proponents and critics. From this debate we can see that sustainability is a contested concept, depending on the assumptions, goals, and arguments used by different advocates: it does not have a single definition for the different camps in the debate over sustainability.
Globalized Alternatives: "Ecological Modernization" as a Reformation of Globalized Food Systems
Globalized variants of alternative food systems seek to correct issues of sustainability from within the framework of global food production and food trade networks. This has been called a case of "ecological modernization" because it seeks to reform certain aspects of globalized corporate food systems (previous pages), such as environmental impact and labor standards, while not altering the main features of the modern global system, for example, the large scale of production, long-distance distribution, and leveraging of the economic power of global investment to expand production and increase efficiency. Advocates of this approach promote strategies such as the "triple bottom line" for companies, which refers to positive environmental and social benefits from company activities being measures of company success in addition to economic profitability (thus a triple measure mirroring the "three-legged stool" of sustainability; see module 1 and the following pages in this module). Advocates also generally point out that because the globalized corporate food system currently has by far the largest impact on social equality and natural systems, reforming its activities and standards for performance is a way to have a tremendous impact on global sustainability. Detractors of "ecological modernization", including advocates of the community-based food systems below, complain that these reform efforts leave in place unsustainable features of the system, such as large-scale production that is corrosive to local communities, or marginalization of smallholder farmers within markets or in land distribution in some cases (see the "agriculture of the middle" critique and the concept of a poverty trap on the following pages).
Notwithstanding this debate, it is useful to note some main features and trends in this globalized approach. Like the community-based variant, the globalized variant has prescriptive goals for the food system in response to sustainability problems of the modern food system. It supports substitution of more sustainable methods of food production, such as integrated pest management, organic methods, reduced tillage, and protection of watersheds from pollution with improved farming techniques, some of which have been seen in previous modules. Certification schemes are promoted that hold producers and distribution networks to a higher standard, such as organic certification (which generally must conform to standards in the country where the food product is sold). As another example, fair trade certification seeks to improve the price paid to local producers in source regions, who have generally received very low prices for their products, and thus shares the approach of strengthening local economies with the community-based approaches below, even if it uses global trading networks.
Figure 10.1.12.: Characteristics of two types of alternative food systems that are responses to sustainability concerns in food systems. A more global or "ecologically modernized" variant seeks to bring sustainable and fair trade principles into the global food system, while a local and community-based variant seeks to reconnect consumers and communities to food producers at a local scale. Credit: Steven Vanek and Karl Zimmerer.
Click for a text description of the Alternative Food Systems diagram.
Alternative Food Systems: globalized and community-based
• Globalized
• "ecological modernization" of the globalized food system
• global/national trade networks
• goals: reform of industrialized farming practices, capturing market niches
• certification schemes: fair-trade, sustainable forestry, etc.
• unprocessed and processed foods
• mainstreaming of organic products in national/global distribution
• Community-based
• emphasis on reintegration of local rural-urban economies
• goals: reform of industrialized farming, local market reintegration, market niches, local/regional shipping and farmers' markets
• mainly unprocessed foods
• organic and local criteria/certification
Community-based Alternatives
Like the globalized variant, community-based alternative food systems define prescriptive goals, but they oppose many elements of the globalized corporate food system. The community food system primer (Wilkins and Eames-Sheavly 2010; see the reference below if you are interested in reading further) states that "a community food system is a food system in which food production, processing, distribution, and consumption are integrated to enhance the environmental, economic, social and nutritional health of a particular place". Three examples of these prescriptive goals within common components of community food systems are:
1. Organic agriculture as a way to reduce contamination of food with pesticides and improve the ecosystem health of farms
2. Farmers markets and community-supported agriculture schemes that allow consumers to more directly support the activities of farmers
3. An emphasis on supporting the activities of small and medium producers and resisting pressures for production and distribution enterprises to grow larger and larger
Many other examples of community food systems can be found, which also include efforts to link smallholder farmers and their food production systems (see previous page) as producers for burgeoning urban markets in developing countries, thus substituting some of the supply from the globalized corporate food system beyond the food products that are already supplied by smallholders to cities in these countries. The overall volume of food handled by these community-based food systems is generally much smaller than the globalized or smallholder types of food systems. Nevertheless, advocates point out that the potential market of urban consumers in relatively close proximity to small-scale producers around the world is potentially enormous. In fact, channels of alternative food production and distribution (e.g. organic agriculture) are among the fastest growing sectors in volume and economic value on a percentage basis, year after year [USDA-Economic Research Service, 2014].
Figure 10.1.13.: This small-scale pork and poultry farm, with direct self-service marketing to consumers, is a highly functional part of a community-based alternative food system. Credit: Karl Zimmerer
Figure 10.1.14.: These organic soybeans are being harvested at a relatively large scale (hundreds of hectares in aggregate, within a production region of the Northeastern United States) to manufacture and sell tofu within a regional alternative food system. Soybeans are also a commodity crop largely handled by the globalized food system. Credit: Steven Vanek
Additional Reading:
• Wilkins, J., and M. Eames-Sheavly. 2010. Community food systems primer. Cornell University.
Knowledge Check: Food System Typologies
For each of the following concepts, indicate which of the three types of food systems it pertains to (global corporate, smallholder, local/alternative).
1. Shareholders invest in food companies
2. Fairtrade organizations link smallholders in Kenya to consumers in the U.K.
3. A dairy farmer sells small lots of milk using weekly deliveries to a mid-size city in New Hampshire.
4. Most supermarket items
5. Very important in densely populated rural areas of the developing world, e.g. Ethiopia, Peru.
6. Buying butternut squash at the local farmers market.
7. Large wheat fields near Ciudad Obregón, Mexico for export to processors in Mexico, United States, and Colombia.
Scoring your answers to the knowledge check:
Click for answer.
Answer:
1. global corporate
2. global alternative
3. local community-based, alternative
4. global corporate
5. smallholder
6. alternative community based
7. global corporate
Figure 10.1.15.: Three-legged Stool of Sustainability. Credit: Steven Vanek, based on multiple sources and common sustainability concepts
Click for a text description of 3 Legged Stool of Sustainability.
Sustainable Food System
Environment
• reduce pollution and waste
• use renewable energy
• conservation
• restoration
Community (and social sustainability)
• good working conditions
• health care
• education
• community and culture
Economy
• employment
• profitable enterprises
• infrastructure
• fair trade
• security
Challenges to Food Producers: The issue of scale and the “three-legged stool” of sustainability
The previous page on different variants of alternative food systems stressed different ways of analyzing and critiquing the modern global food system based on issues of sustainability. In your capstone projects, you are asked to propose ideas for sustainable food systems in your capstone regions. Therefore, on this page, we repeat from module 1 the concept of sustainability as a "three-legged stool" combining aspects of environmental, social, and economic sustainability (Fig. 10.1.15, also seen in module 1). We may be most used to thinking about environmental sustainability, for example in the need to conserve energy or recycle food containers in order to reduce pollution and energy generation by fossil fuels, avoid litter, and save landfill space. As we presented in module 1, however, sustainability also contains economic or financial aspects devoted to employment, livelihood, and profitability, as well as the concept of social sustainability, which embodies goals of social equity and more harmonious societies. Therefore, we are also interested in nodes of food production such as farms because of the challenges to the economic sustainability of farm (and fishery) enterprises. It is important to think about economic sustainability because of the economic risks that food producers are exposed to. Economic risk is inherent in producing for local, regional, and global food systems because producers may not be producing high-value products and must absorb environmental risk, for example from droughts, floods, or pests (see the previous modules, and module 11, next, regarding adaptive capacity). In a drought year, for example, selling vastly reduced yields of soybean or maize crops usually means an economic loss for a farm, because the price of these crops is not very high on the global or local market.
In addition, social sustainability concerns regarding food production are an important part of debates about modern society: for example, smallholder farmers, and laborers on larger farms and within fisheries in the United States and globally, are some of the most economically and politically marginalized populations in the world. Many researchers and advocates point out that food systems cannot be truly sustainable until they embody a more just distribution of resources and power. In this short section we want to highlight two important concepts that link to these ideas of social sustainability and justice: first, the idea of "poverty traps" within smallholder farming around the world (see Carter and Barrett 2006, referenced below), and second, on the next page, the threat posed to the so-called "agriculture of the middle" in industrialized countries, where pressures on producers lead either to a small-scale, niche-market orientation (e.g. farmers markets) or an inexorable growth toward larger and larger farms that capture economies of scale in agriculture. By introducing these concepts now, we hope you will see how they fit into the analysis of vulnerability and resilience in the next module. You may be able to incorporate these challenges and potential solutions into your capstone region scenarios.
What is a poverty trap in smallholder agricultural systems?
As you’ve seen in the “garden, pond, livestock pen” (VAC) system of Vietnam in the video at the beginning of this module, agriculture practiced in smallholder food systems on small plots of land (less than 10 acres or 4 hectares, say) around the world is a hugely important and often quite sustainable enterprise. Smallholder agriculture can embody some of the most efficient use of resources in use today, whether through traditional methods, well-adapted domesticated plants, new innovations taken up by smallholder families, or labor that is efficiently allocated by a family in constant contact with its enterprise and ecosystem. However, a concern about the most impoverished smallholders is that they can fall into what is called a poverty trap, where smallholders produce food from a degraded resource base, either because they have degraded it or because they have been forced to the margins of local society, and many times both. The diversity of diets can also suffer when the least expensive food sources are local cereal and starch crops or calories coming from the globalized food system. The combination of poverty and degradation of soils and other resources does not permit these farmers sufficient income or well-being to invest in and thereby improve their soils or other aspects of local agroecosystems, and so it is likely that they, and their farm ecosystems, will remain at a low level of productivity and earnings. This is therefore called a poverty trap, and it essentially combines a lack of economic, social, and environmental sustainability for these smallholder households. It has also been linked to the concept of a downward spiral of poverty and soil degradation (Fig. 10.1.16).
Figure 10.1.16.: Spirals of soil regeneration and improved livelihoods (top) versus poverty and soil degradation (bottom). The reinforcing feedbacks between soil quality and productivity with social and economic marginalization illustrate the connections between social and environmental sustainability. Credit: Steven Vanek
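The reinforcing feedback in the lower spiral of Figure 10.1.16 can be made vivid with a toy simulation. The Python sketch below is a deliberately over-simplified illustration with invented parameters (it is not a calibrated model, and none of the coefficients come from the text): income depends on soil quality, any surplus above subsistence needs can be reinvested in the soil, and soil degrades when nothing is reinvested. A household starting above a threshold spirals upward; one starting below it spirals downward, which is the essence of a trap.

```python
# Toy model of the reinforcing feedback behind a poverty trap.
# All coefficients are invented for illustration; this is not a calibrated model.

def simulate(soil, years=6, subsistence=1.0):
    """Print income and soil quality year by year for a single household."""
    for year in range(1, years + 1):
        income = 2.0 * soil                        # income scales with soil quality
        surplus = max(0.0, income - subsistence)   # what is left after basic needs
        soil = 0.9 * soil + 0.2 * surplus          # 10%/yr degradation unless replenished
        print(f"  year {year}: income = {income:.2f}, soil quality -> {soil:.2f}")

print("Household starting above the trap threshold:")
simulate(soil=1.0)
print("Household starting below the threshold (downward spiral):")
simulate(soil=0.4)
```

In this toy model the threshold sits where reinvestment exactly offsets degradation; real poverty traps involve many more factors (prices, credit, health, climate), but the qualitative behavior of diverging trajectories is the same.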
An example of this poverty trap "downward spiral" is furnished by the Dust Bowl of the 1930s in the United States, in the case of the many families who farmed small plots of land during the Depression. The combination of overexploited soils from decades of agricultural expansion after the U.S. Civil War, a depressed economy contributing to overall hardship, and a multi-year drought led to a downward spiral in which many poor farmers were finally forced to leave their land and migrate to other areas of the country seeking employment and public assistance. In module 11, we'll examine further how combined human and natural system factors (like poverty and drought, for example) interact to create vulnerability for parts of human society, and ways that human systems have adapted to survive such shocks and perturbations as drought. We'll include an example of a Native American population that was relatively successful in weathering the Dust Bowl and was not subjected to this sort of poverty trap or downward spiral. Also, in the years after the Dust Bowl, the Soil Conservation Service of the United States Department of Agriculture (USDA) was highly active in helping farmers transition to practices that avoided soil erosion and aided the regeneration of degraded soils (see the additional reading resources below for this history of the Soil Conservation Service).
In current-day contexts where poverty traps represent a risk or chronic problem for small producers, then, both government agencies and development organizations focus on reducing barriers to practicing more sustainable agriculture. Development organizations include those called non-governmental organizations or NGOs, nonprofits, and international aid organizations, as well as organizations founded and managed by farmers themselves. For example, see the blog article below in "additional reading" regarding efforts to promote agroforestry in Haiti, where poverty and land degradation have long been intertwined. These organizations try to reduce barriers to investment in sustainable food production through the promotion of simple, low-cost strategies to conserve soils and raise crop diversity and productivity (see the previous modules for examples) that are within financial reach of smallholder producers. These technology options are optimally combined with credit and direct aid that helps farmers overcome the resource barriers to investing in the protection of the natural systems that sustain their production. This sort of knowledge and technology, and the ability of farmers to invest in the productivity of their soils, are an example of adaptive capacity, a concept that will be a major topic of module 11. In addition, governments, NGOs, and farmer organizations may also engage in political advocacy that seeks a more just distribution of land, credit, or access to markets, which can help producers avoid or move out of poverty traps.
Additional Reading:
• Carter, Michael R., and Christopher B. Barrett. "The economics of poverty traps and persistent poverty: An asset-based approach." The Journal of Development Studies 42.2 (2006): 178-199.
• Jacquet, Bruno: blog article, Agroforestry & Sustainable Land Management in Haiti, documenting efforts to promote agroforestry in Haiti with smallholder farmers.
• Natural Resources Conservation Service (USDA): web page giving a brief history of the Dust Bowl and the Soil Conservation Service, which became the Natural Resources Conservation Service.
One of the characteristics of a globalized food system is that a smaller proportion of the population is needed to produce the large amounts of food for the global system. As a result, many analysts have noted the shrinkage of the rural population of the United States over the last 100 years. Similar out-migrations from rural areas to cities have happened in Europe. Among other factors, this process has been hastened by the mechanization of agriculture (tractors, combine harvesters, mechanized crop processing, and transportation; we analyze the environmental impact of this in module 10.2). Mechanization and other factors mean that the cost of producing a bushel of corn, for example, and moving it into the global food system is cheaper when the scale of the farm and transportation infrastructure is larger. This phenomenon is referred to as economies of scale (recall also the example of dramatically scaled-up beef production in Greeley, Colorado featured in the video of module 1.2). Farm producers in the United States and other industrialized economies thus often face pressure to grow their operations larger so that they can become more profitable, accentuated by competition against larger producers with lower prices, sometimes in other countries with lower labor costs.
These twin trends towards “get big or get out” and “get small for local markets” have left out a huge sector of farms that are mid-sized, that still generate a substantial amount of farm income in the U.S. economy, and that utilize the lion’s share of cultivated soils (see figure 10.1.8, with pie charts of earnings distributions for different size farms in the United States). The analysis regarding this “Agriculture of the Middle” (Kirschenmann 2012) points out the threat posed to millions of farming households, most of whom produce for national and global commodity markets (e.g. soybeans, dairy). This analysis also points out that the families running these mid-size farms are vital, productive rural citizens who drive social organization, effective policy-making, and community values in most regions of the country. These mid-size farms are often leaders in the adoption of sustainable practices – especially when they are financially successful, illustrating potential linkages between financial and ecological sustainability. In any case, small and medium-sized farms have always played an important role in the maintenance of a rural landscape that most governments and citizens see as valuable. Agricultural landscapes and enterprises often contribute to the tourism value of a particular region, for example, the Pennsylvania Dutch region or the wider presence of dairy farms in the diverse, forested landscapes of Pennsylvania, or the wine-producing regions of California and New York State.
The role of “Agriculture of the Middle” in sustaining rural life is, according to this analysis, worth protecting, and advocates of action to support mid-size farm enterprises point out a few advantages these farms have in interacting with regional farm systems. When effective linkages can be built to regional markets, these farms usually combine production at a medium to large scale (compared to small diversified farms supplying farmers’ markets, say) with a flexible outlook that can allow them to change products and markets quickly, and to adopt sustainable production methods in a way that is visible to consumers and local communities (Kirschenmann 2012). As in the case of the poverty traps for small farms discussed above, organizations that promote agriculture of the middle seek to clear barriers for these mid-sized producers. (Note also that these “mid-sized” producers are enormous compared to farms in smallholder contexts throughout the developing world, though they are community members in a way equivalent to the role of smallholders in a rural third-world context.) Mid-size producers and the food distribution organizations that work with them may seek to promote “values-based food supply chains” where not just the commodity value of a food product is taken into account but also the value of a farm that demonstrates environmental sustainability and positive participation in rural communities. For example, many agricultural states now have state-level marketing efforts that promote state and regional agriculture, and these programs increasingly integrate ideas that help to promote mid-sized producers. Farmers, distribution network companies, and food markets have also banded together in different configurations to form networks that seek to support not only food availability for consumers but environmental, social, and economic sustainability along the entire food chain. Some examples are the Organic Valley dairy cooperative, which now operates across the entire United States, the Red Tomato regional fruit and vegetable marketing effort in New England, U.S.A. (see Fig. 10.1.17 and 10.1.18), and the Country Natural Beef producers in the Northwest United States.
Figure 10.1.17.: Photos showing on-farm production, harvest, packing, distribution, and marketing from a "values-based food chain". This example is from Red Tomato, a local/regional network linking farms to consumers in the states of Massachusetts, Connecticut, and Rhode Island in the United States. Credit: Red Tomato Non-Profit and used with permission.
Figure 10.1.18.: The Red Tomato Logo used in marketing the produce above. Credit: Images courtesy of Red Tomato Non-Profit and used with permission.
More Information on Values-Based Food Supply Chains
You can view a PowerPoint slide set introducing nine case studies on values-based food supply chains like the ones described above: "Agriculture of the Middle", a research and education effort of the University of Wisconsin.
Integration into the Capstone Project
As you consider the food system of your capstone region, you may want to incorporate references to efforts that support either smallholder farmers in avoiding poverty traps or encourage continued participation of “Agriculture of the Middle” in the regional food system. The formative assessment for this module asks you to address whether you think there are concerns about poverty traps or agriculture of the middle in your capstone region. As you develop your capstone project final scenarios for sustainability, you may want to search on the internet for resources on regional food chains and food systems, as well as local farmers' markets and other initiatives, within your region of interest.
As you saw in the introductory video about a food system in Vietnam, food systems incorporate both natural and human components. In fact, because of the ubiquitous need for food, food systems are among the most important ways that human societies interact with the physical and biological elements and processes on earth's surface. Land used in some way for food production already occupies over two-thirds of the ice-free land surface (Ellis, 2011), and the trend is for this proportion, as well as the intensity of use (roughly, the production from each unit of land area), to increase. Human fisheries and other forms of food production from oceans (for example, kelp farming) are also tending to exploit wider and wider areas. In addition, as seen in the multiple types of food systems presented above in section II of this unit, the interaction of human societies with earth's ecosystems in food production is not governed by a single human process but depends greatly on human priorities, land management and food production knowledge, rationales and prescriptive goals for food systems, and government policies that regulate and reward food system outcomes. Understanding these societal factors is key to improving the sustainability of food systems in their impact on the earth's ecosystems.
To understand the interaction of human societies with the earth's surface, a common and productive framework is that of coupled natural-human systems (Liu et al., 2007). The framework starts from a relatively simple diagram (Fig. 8.9), in which a generic human system (e.g. a community within a human society) interacts with a generic natural system (e.g. a farming-dominated landscape within a production region). The framework also recognizes that natural and human systems have many internal interactions and processes, such as biogeochemical nutrient cycling (e.g. the nitrogen cycle, see unit N.N in this course) or the policies, corporate actors, and markets determining food supply chains (a human factor).
Fig. 8.9. A generic natural-human system that can be applied to the food system in its interactions with earth system processes. Credit: National Science Foundation Coupled Natural Human Systems research grant program
Click for a text description of Natural-Human system diagram.
Generic Natural-Human system that can be applied to the food system in its interactions with earth system processes
Arrows labeled human to natural coupling and natural to human coupling form a circle between human system and natural system. Those are defined as follows:
Human system: human system internal interactions
Human to natural coupling: Human system impacts and reorganizes natural system
Natural System: natural system internal interactions
Natural to human coupling: Natural system a) presents food production conditions to the Human System, and b) responds to human management and other drivers in ways that affect the human system (feedbacks)
So, for example, in the video that you watched on the food system of the Red River delta in Vietnam, the river delta is the initial, broad natural system context that presents opportunities for farming, livestock production, and aquaculture to farming households and to national and local government policies. Human farming and aquaculture knowledge and practices, markets, and government policies are part of a human system that impacts and reorganizes the natural system over time into its current state. Over time, the natural system's internal interactions and processes may also change, for example, through increases or decreases in soil fertility, crop pests, or animal diseases. Because the system evolves in this way, it is useful to redraw the coupled natural-human system as developing over time (Fig. 8.10).
Fig. 8.10. A coupled natural-human food system developing over time. The initial coupling at time 0 is the natural system presenting itself to human society with its properties for potential food production. The Human system "responds" by reorganizing the natural system for food production (time 2), and feedbacks from the natural system ensue, with impacts on the human system. These feedbacks are either intended consequences (e.g. food production and income generation) or unintended consequences (e.g. river and estuary pollution, greenhouse gas emissions). Credit: Karl Zimmerer/Steven Vanek
Overlap and Transition in Food Systems
As a final observation about ourselves as consumers within different food system types, it is important to stress that most consumers and households participate in multiple types of food systems at once. For example, someone in a modern society might consume a breakfast energy bar with globally sourced processed ingredients, along with a fair-trade certified cup of coffee and regionally sourced milk, on the way to picking up sweet corn at the direct-marketing farm stand of a local farmer. In addition, and of special importance for thinking about sustainable scenarios for your capstone project, it is important to understand how transitions occur in food systems: from one predominant system type to the inclusion of alternatives, or in some cases a wholesale change from one food system type to another.
Human-Natural Interactions as Drivers of Food System Variation and Transitions
Recall that in module two we presented the broad strokes of the history of food systems, from prehistoric times to the modern day, including the globalized and local-regional variants of alternative food systems that were featured on the previous page of this module. In module two, we also presented the idea of drivers and feedbacks that have caused large changes in environment-food relations over human history (for example, the shift from hunting and gathering to agriculture). In this final page of module 10.1, we want to develop concepts from Coupled Natural-Human Systems thinking, or "ways of seeing," that can help to understand two different processes:
1. The way that two different and parallel food systems can exist in the same place, based on divergence from a common origin
2. The transition from one food system type to another, considering the three major types of food systems and the different variants of alternative food systems.
Both of these ways of seeing may be useful in understanding proposals for sustainable food systems, as well as the issues of resilience and vulnerability of food systems presented in the next module.
Regarding process (1) above, the fact that food systems develop over time through the interactions of human and natural systems means that different food systems can develop in the same natural environment. For example, the same environment or natural system can support either a smallholder type of food system (small plots, less mechanization, local consumption) or a global corporate food system (larger landholdings, more mechanization and industrially produced soil inputs, global distribution and consumption). It can also support a mosaic of the two types. This overlay of two types is in fact fairly common: for example, smallholder agriculture on smaller plots for mixed home and regional consumption coexists in Central America with the export agriculture of major food commodities such as bananas or vegetables, and often involves marginalization of smallholders to smaller landholdings on less productive and more difficult-to-manage soils. This mosaic of food system types is also increasingly found in Southeast Asia, where globalized agriculture producing cassava and maize for export to China as well as for domestic consumption coexists with more traditional smallholder agriculture, as portrayed in the Vietnamese "VAC" system in the introductory video for this module.

Figure 10.1.19 shows how two parallel food systems can develop in the same environment according to the Coupled Natural-Human System diagram we have seen at other points in the course. Starting at "time 0" in the middle, we can see how an initial food system type might develop via human system management (e.g., the smallholder system in the Central American or Southeast Asian case). At "time 1", meanwhile, differences in the human system (say, marginalization of smallholders to certain environments and investment in industrial farming in other environments) have created two divergent food system types. These divergent types then develop on their own through times 2 and 3 and into the future, incorporating interactions between human and natural systems that embody the issues of social, financial, and environmental sustainability. As the Central American and Southeast Asian examples show, this parallel trajectory of two different food system types is highly relevant, since it represents exactly what has happened in developing countries that seek to industrialize their agricultural sectors while retaining a large rural population practicing smallholder agriculture, often in more marginal regions with greater heterogeneity of environments (e.g. mountainous areas of Central and South America, and mosaics of large export-oriented farms alongside smallholder agriculture in South and Southeast Asia as well as Eastern and Southern Africa).
Figure 10.1.19.: Divergence of the process of coupled human-natural system development to produce two different types of food systems. Credit: Karl Zimmerer, based on the National Science Foundation Coupled Human-Natural Systems program.
Regarding the transition from one food system type to another (process 2 as listed above), the CNHS diagram (figure 10.1.20 below) can help to understand these transitions. As an example, we'll use the transition to alternative food systems from the globalized industrial food system we described in module two and again in this module. In figure 10.1.20, between the "initial coupling" point at center and "Time 1", expansion of a dominant food system begins to create strains on both the natural and the human components of the system. In the case of the globalized industrial food system that emerged in developed countries (sometimes called the global north) after World War II, these strains eventually provoked a critique responding to the unintended consequences of the expansion of this industrial and globalized model. You'll recognize these strains and this critique as the same issues of sustainability discussed in the previous modules: "diseases of affluence" from poor nutrition, food insecurity, concerns about water use and the water footprint of food, soil degradation, pest and weed resistance to pesticide and herbicide management, and others.

Between Time 1 and Time 2, human society thus receives signals from the interactions and drivers within the coupled system and then responds, in the form of a wide range of new policies and "models" of the new system that emerge during Time 3. In some cases, these responses are modest, for example, a new regulation on fertilizer or pesticide application to moderate the unintended negative consequences. In other cases, the responses are more dramatic and become the alternative food system types described in this module, both global and community-based. These different variants of the transition may increasingly take on the aspects of a "complete" food system, e.g. production, distribution, and consumption pathways in an integrated whole, compared to their initial state as outliers, regulations, or policy proposals. Through different drivers and feedbacks from the natural and human sub-systems, a transition to new food system types occurs. These new types often coexist with a more dominant food system, which is certainly the case today, with alternative food systems coexisting alongside the still-dominant global corporate food system.
Figure 10.1.20.: A model of the way that a crisis in the social and environmental sustainability of a food system can drive a transition to a new type of food system, considered as Coupled Natural-Human Systems (CNHS). See the text for a more detailed description and example of this process. Credit: Steven Vanek and Karl Zimmerer, based on the National Science Foundation Coupled Human-Natural Systems program.
Introduction
What are the impacts of food systems on the natural systems that support our food production? In module 10.2, you will learn about system-level impacts and impact assessment. You have already considered many of these impacts on the environment in earlier modules, for example, plant domestication, nutrient cycling, water use, and water pollution. Here you will learn about assessing impacts that emerge from the behavior of a whole food system and practice life cycle assessment (LCA), one method used for assessing whole-system impacts.
13.02: Assessing Food System Impacts on Natural Systems and Sustainability
Human System Impacts on the Environment Within Food Systems
Figure 10.2.1.: The collection, transport, and application to the soil of plant nutrients using animal manure (left), and the destructive effects of erosion from tillage on sloped land (right) are two human system impacts on the earth system. Credit: Steven Vanek
Figure 10.2.2.: Fertilizer production, as in this old print of a factory in Philadelphia, Pennsylvania, and in a large new plant in India, represents a substantial contribution to the energy expenditure and greenhouse gas footprint of food production via modern agroecosystems. Credits: top, Boston Public Library, used with permission via a Creative Commons license; bottom, used by permission through the Wikimedia Commons.
In modules one and two of this course, and most recently in this module, we represent food systems as coupled human-natural systems. Throughout the course, we have tried to emphasize the dramatic impacts that human food production has had and continues to have on earth's natural systems. Here are some examples from previous modules:
• Changing river basins to create irrigation systems
• Exposing soils to greater rates of erosion, and stabilizing some soils against erosion
• Domestication and development of new crop types, including transgenic engineering of new crop traits in the recent past
• Contributions of agriculture to greenhouse gas emissions that are warming planet earth and leading to climate change
Different types of food systems (global, smallholder, and alternative, as we summarized in module 10.1) may impact the earth's natural systems in different ways and to different degrees. You may recognize from the short list of examples above that these changes and the creation of agroecosystems by humans may have both positive and negative aspects. For example, irrigation and crop breeding both aim to increase the productive potential of crops, but they may carry other unforeseen consequences, such as depletion and collapse of water resources, changes in the dietary quality of food with domestication and breeding, and greater use of herbicides in the case of Roundup-ready crops. The human system actions that improve production within the food system can be seen as the initial driving arrow of a human-natural system coupling (Fig. 10.2.3), and they generally involve management, reorganization of the ecosystem, and energy and nutrient inputs (e.g. the use of fossil fuels to create fertilizers). The natural system then responds with impacts on productivity and other natural system processes, which can include both positive and negative consequences. These consequences eventually determine the level of sustainability of the food system. The massive extent of food systems and food production globally translates into a large effect, or leverage, on the sustainability of human societies. To promote the sustainability of food systems, we must understand how food systems as a whole affect measures of sustainability. In this unit, we will first survey the different human system impacts on natural systems and then allow you to practice life cycle assessment (LCA) to compare the energy use of two food production systems in the Andes and North America.
Figure 10.2.3.: Human system impacts of food production activities. The human system reorganizes and provides energy and nutrient inputs to the natural system (1), which induces changes in natural system processes such as nutrient cycling, greenhouse gas emissions, and agrobiodiversity (2a). The changes in the natural system allow the human system to produce food but may also alter ecosystem services in harmful or unintended ways (2b: e.g. water pollution, global warming, and biodiversity changes in land and ocean systems). Credit: Karl Zimmerer / Steven Vanek
Life Cycle Assessments
Life cycle assessments or life cycle analyses (LCAs) are defined as “a tool to analyze the potential environmental impacts of products at all stages in their life cycle” (International Standards Organization). Analogous to the food supply chain activity you completed in module 10.1, LCAs follow products (foods and otherwise) from production, through transport and assembly steps, to the consumption or operation of the product, and in some cases even its disposal. In contrast to the supply chain descriptions in module 10.1, at each of these stages LCAs keep a running total of the environmental costs or impacts of the product. Common impacts tracked by LCAs across product life cycles are greenhouse gas emissions, water pollution impacts, and energy use. As such, LCAs are a key tool for analyzing the impacts of human systems on natural earth surface systems within the coupled natural-human food system (Fig. 10.2.3). LCAs require some careful thinking about where to draw the boundaries of the system when considering the life cycle of a product. For example, an LCA devoted to carrots would probably include the energy required to operate the refrigerated truck used to transport the carrots, but not the energy needed to make the truck. Also, while many LCAs are “cradle to grave” and include both the impacts of all raw materials used in production and the disposal impacts of the product, some assess only certain segments of the life cycle, such as “cradle to farm-gate” or “cradle to plate” in the case of food products.
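To make the "running total" idea concrete, here is a minimal sketch in Python. All stage names and impact numbers are invented for illustration only; a real LCA would draw them from measured inventory data.

# Minimal sketch of the LCA "running total" idea.
# Each life-cycle stage contributes some amount of each tracked impact
# (all values below are hypothetical, per kg of product).
stages = {
    "production":  {"energy_kWh": 0.60, "ghg_kgCO2e": 0.20},
    "transport":   {"energy_kWh": 0.25, "ghg_kgCO2e": 0.08},
    "consumption": {"energy_kWh": 0.10, "ghg_kgCO2e": 0.03},
    "disposal":    {"energy_kWh": 0.05, "ghg_kgCO2e": 0.02},
}

def lca_totals(stages):
    """Sum each impact category across all stages (the running total)."""
    totals = {}
    for impacts in stages.values():
        for category, value in impacts.items():
            totals[category] = totals.get(category, 0.0) + value
    return totals

def hot_spot(stages, category):
    """Return the stage contributing most to one impact category."""
    return max(stages, key=lambda s: stages[s].get(category, 0.0))

print(lca_totals(stages))              # {'energy_kWh': 1.0, 'ghg_kgCO2e': 0.33} (up to float rounding)
print(hot_spot(stages, "energy_kWh"))  # 'production'

The same tally structure also makes it easy to ask which stage contributes most to a given impact, the "hot spot" idea developed later in this unit.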
Life cycle analyses are an excellent way of putting into practice the geosciences "habit of mind" of systems thinking. Because food systems are complex, we choose a way to measure their performance and then explore all the linkages in the system within that single metric or measurement parameter (see module 1.2 for a discussion of complex systems behavior). That is, we don't content ourselves with thinking about just a crop plant in a field, the entire farm field, or the highway where foods are transported; we go several levels up to measure impacts along the entire pathway or web of interacting system parts. Along the way, it is likely that we will start to think in new ways about the linkages between parts of the system, about the most important contributions to impact, or about previously hidden factors or unexpected outcomes that explain the performance of the system.
Required Reading
National Center for Appropriate Technology (NCAT): Life Cycle Assessment of Agricultural Systems, pp. 1-3 and figure 3 for light bulb LCA on page 9.
You'll notice that the presentation of compact fluorescent light bulbs is somewhat dated since there has now been a big move to LED light bulbs that are further reducing energy usage for lighting. We continue to feature this presentation of LCA from the NCAT because it is one of the better non-technical introductions to the subject and also relates LCA concepts to agriculture. See the resources below if you want to read more about LCAs, including a detailed PowerPoint comparing different types of light bulbs.
Additional Reading on LCAs (optional)
1. Colin Sage's book Environment and Food, pp. 167-172, Chapter 5, "Final foods and their consequences." You may remember that the first few pages of this book were assigned as a reading in module 1.1. This document may be available through your E-Reserve System.
2. A PowerPoint comparison of light bulbs to complement the reading from NCAT above, H. Dillon and C. Ross: "Updating the LED Life-Cycle Assessment".
3. A six-minute video crash course introducing life cycle thinking: "LCA 6 minute crash course: Life cycle thinking and sustainability in design," by Leyla Acaroglu.
13.2.03: Using Life Cycle Assessments
Life cycle assessments are often used in two important ways. The first is to compare the costs or impacts of different products or production systems in a rigorous way, as seen in Fig. 10.2.4, which compares the phosphorus water pollution resulting from three ways of producing pork in France (Basset-Mens and Van Der Werf, 2005). This “cradle to farm-gate” pork LCA includes the cropping used to produce pig feed as well as the animal-raising methods at pig farms that use three different standards. In this example, two different ways of expressing LCA results show how different messages can emerge depending on how results are presented. The graph at left shows the impacts on a per-area basis (per hectare of land used), while the graph at right expresses the impact per kg of food produced, which means that if a production method yields more on a per-area basis, its impact can be reduced compared to one with less productivity per hectare.

Organic and red-label humane methods with straw-bedded barns pollute less on a per-hectare basis, but the organic methods are not less polluting than the conventional treatments on a per-kg basis, because more area is needed both to raise the pigs and to grow the crops for feed in the organic system. Therefore, if demand for pork (in kg) remains the same while consumers switch from conventional to organic pork, total water pollution from phosphorus in pig production is unlikely to decline, at least according to this study. Rather, the red-label option seems able to shrink pollution per kg of pork consumed in the food system. Meanwhile, if you limited your viewpoint to a single watershed with a delimited area, you would say that both the red-label and organic methods reduce pollution. Despite some of the general benefits of organic management in reducing toxins in the environment and building soil quality, this study can give us some pause: we need to think about the particular system in question (e.g. organic hog production versus organic apple production) and to respect specific case analyses and the measures used in LCA analysis. When we talk about a whole food system, it may be best to employ a per-kg of food produced approach.
Figure 10.2.4.: Results of a life cycle assessment for eutrophication impacts (water pollution, see modules 3 and 4; in this case, via phosphorus (P) runoff into surface waterways) of pork production under different standard methods in France: European Union (EU) standard best conventional methods using confinement of hogs; a "red label," which denotes a defined-origin producers' standard using more humane methods and sustainable but non-organic crop practices; and organic standard methods, including more humane treatment and organically produced crops. (Data adapted from Basset-Mens and Van Der Werf, 2005.) Credit: Steven Vanek
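The per-hectare and per-kg expressions in Fig. 10.2.4 differ only by a yield factor (impact per kg = impact per hectare ÷ yield per hectare). The short sketch below uses invented numbers, not the French study's data, to show how a system can pollute less per hectare yet more per kilogram, simply because it produces less food on each hectare.

# Hypothetical illustration (numbers invented, not from the pork study):
# converting a per-hectare impact into a per-kg impact using yield.
systems = {
    "conventional": {"p_runoff_kg_per_ha": 20.0, "yield_kg_per_ha": 10000.0},
    "alternative":  {"p_runoff_kg_per_ha": 12.0, "yield_kg_per_ha": 4000.0},
}

for name, s in systems.items():
    per_kg_grams = s["p_runoff_kg_per_ha"] / s["yield_kg_per_ha"] * 1000.0
    print(f"{name}: {s['p_runoff_kg_per_ha']} kg P/ha, {per_kg_grams:.1f} g P per kg food")

# conventional: 20.0 kg P/ha, 2.0 g P per kg food
# alternative:  12.0 kg P/ha, 3.0 g P per kg food  <- lower per ha, higher per kg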
The second main way of using LCA is to assess which steps or process inputs in the production, consumption, and disposal of a product are most responsible for its negative impacts. These “hot spots” in the analysis can then become the focus for better measurement to confirm the findings of the LCA and/or for innovations that eliminate these steps or limit their impact. One type of LCA uses the common measure of external energy inputs for food production (i.e., those not related to the solar energy used by plants "for free") to analyze one aspect of the sustainability of food production. These energy inputs are visualized in Figure 10.2.5.
Figure 10.2.5.: Principal human-managed energy inputs involved in food production, in addition to resources of soil and water as well as the solar energy captured by plants. Energy expended in irrigating crops, and in the growing of seed, would be additional flows that might be especially important in other systems. These energy flows are summed up using an LCA approach, noting which ones account for most of the impact, creating "hot spots" that should be researched further and targeted to increase efficiency and reduce the impacts of food production. Credit: Steven Vanek
Dissecting an LCA
An LCA for energy use is illustrated below in Fig. 10.2.6, which compares the total energy used under different crop production practices in a long-term trial of farming methods in Switzerland. The graph shows the energy used in food production in two formats: stacked colored bars giving kilowatt-hours (kWh) of energy equivalent per land area under production (i.e. per hectare, a 100 x 100 m area), and a total in watt-hours (Wh) per kg of food produced (dark green lines and points above the stacked bars).
It's worth considering these results and the units used in more detail. First, for comparison, a typical U.S. home uses about 72 kWh per day for heating, cooling, and electricity, if we boil all these energy uses down to one energy equivalent* (calculations based on U.S. Energy Information Administration 2009 data). Some further "ballpark" calculations show that the fertilizer-based system (bar at right) uses a total of about 100 days of mean household energy** to produce food on one hectare in a year, while the organic system (bar at left) uses a little over half this amount of energy. Meanwhile, if we express this daily household energy use in terms of food produced, the 72 kWh correspond to about 94 kg of food in the fertilizer-based case at right, and 144 kg of food in the organic management case at left***. Expressing LCA results as energy per land area and as energy per kg of food produced are common approaches, analogous to the pollution impact analysis on the previous page. In the summative assessment on the next page, you will use an LCA to calculate energy inputs per kg of potato production in two systems.
* That is the amount of heat and light given off by 30 100-watt light bulbs burning for 24 hours.
** That is, about 7200 kWh (height of rightmost split bar on the left axis showing energy use per hectare), divided by about 72 kWh household use per day, which is equal to 100 days.
*** Dividing 72 kWh by the energy amounts per kg from the green point-and-line data above the stacked bars: e.g. 72 kWh / 0.77 kWh per kg ≈ 94 kg of food for the fertilizer-based case, and 72 kWh / 0.50 kWh per kg = 144 kg of food for the organic case.
Figure 10.2.6.: Results from a life-cycle analysis on energy use equivalents to compare three farming methods growing a variety of food crops in a long-term research trial (DOK Trial, Therwil, Switzerland). The left Y-axis and stacked colored bars give the energy use per land area (i.e. per hectare, a 100 x 100 m land area) and might be helpful to know if we have the total production land area of a farm or region. Meanwhile, the plotted points above each stack and the right Y-axis give the energy use per kg of dry matter of crops produced, which may be a more useful measure for a food system where we want to adjust for the productivity of different food systems. It is notable that the largest differences between the three systems derive from the energy used to manufacture chemical fertilizer (yellow bars). Figure adapted from data cited in Nemecek, T. 2005. Life Cycle Assessment of Agricultural Systems: Introduction. Credit: Adapted from data in the study cited in caption (Steven Vanek)
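The "ballpark" calculations in the text and footnotes above can be checked directly. The sketch below uses only the 72 kWh/day household figure and approximate values read from Fig. 10.2.6, as cited in the text.

# Checking the "ballpark" arithmetic from the text and footnotes above.
HOUSEHOLD_KWH_PER_DAY = 72.0    # typical U.S. home, all energy uses combined

# Approximate values read from Fig. 10.2.6:
fertilizer_kwh_per_ha = 7200.0  # energy per hectare per year, fertilizer-based system
fertilizer_kwh_per_kg = 0.77    # energy per kg of food, fertilizer-based system
organic_kwh_per_kg = 0.50       # energy per kg of food, organic system

# ** One hectare-year of the fertilizer-based system, in days of household energy:
print(fertilizer_kwh_per_ha / HOUSEHOLD_KWH_PER_DAY)  # 100.0 days

# *** One day of household energy, expressed as kg of food produced:
print(HOUSEHOLD_KWH_PER_DAY / fertilizer_kwh_per_kg)  # ~93.5, i.e. about 94 kg (fertilizer-based)
print(HOUSEHOLD_KWH_PER_DAY / organic_kwh_per_kg)     # 144.0 kg (organic)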
Fertilizer Use as a "Hot Spot" in the Analysis
Two additional observations are worth making. First, this LCA revealed large differences in energy use that are due almost entirely to the energy used to produce chemical fertilizer, especially nitrogen fertilizers like those produced in the large fertilizer plant in India shown in Fig. 10.2.2. Energy inputs to fertilizer production are especially high for nitrogen fertilizer because it takes a great deal of energy to fix inert atmospheric nitrogen (N2) into reactive forms like ammonium and nitrate that can be easily taken up by crops (see module 5 and other previous modules). Fertilizer use emerges as a "hot spot" in this analysis and might prompt managers or policymakers to work towards reducing fertilizer use by incorporating aspects of the organic and manure-based systems into the more conventional, fertilizer-based system. Many energy inputs in agriculture, such as the fertilizer inputs and tractor fuel tallied in the LCA above, are important to consider because they represent non-renewable fossil fuel energy sources that contribute to greenhouse gas emissions and anthropogenic climate change through the release of carbon dioxide. The LCA thus helps to measure natural system impacts and the sustainability of food systems.

Second, this LCA used energy as a yardstick to measure the impact of food production. As we will note for your summative assessment, an LCA using energy inputs is only ONE measure of sustainability, and may not capture other measures, like forest clearing needed to establish agroecosystems, runoff of nutrients that contribute to dead zones, pesticide effects on beneficial insects like pollinators, or whether farming practices provide sustained income and other livelihoods to farmers. As an example of using a different yardstick, consider the comparison of pork production systems on the previous page, in which the organic management system in fact had a higher potential to pollute waterways with phosphorus runoff per kg of pork produced than either the conventional or "best practices" red-label standard in the European Union. This result contrasts with the favorable result shown on this page for organic management when energy inputs were used as the yardstick.
Question 1 - Short Answer on types of impacts from food system activities
One of the skills involved in building life cycle analyses is the ability to conceptualize all the different impacts on natural systems related to the functions of production, transport, and consumption of a product. The activities given below each form a part of the functioning of food systems. Identify an impact or impacts on the natural system (e.g. soil erosion, air pollution, water pollution) that would most likely result from each activity, based on the material in this module and previous modules. Then check your answers by clicking on "click for answer" as a review.
Transporting food by ship and truck:
Click for answer
Answer:
greenhouse gas emissions from burning fuel, other air pollution from burning fuel
Applying manure to soils:
Click for answer
Answer:
water pollution from nitrogen and phosphorus drainage and runoff if manure is over-applied and no measures are in place to retain nutrients in soils (cover crops, riparian buffers that keep nutrients out of waterways). Greenhouse gases, especially N2O, will be emitted if manures are applied to waterlogged soils, and methane, another greenhouse gas, is associated with the animal production (especially cows) used to produce the manure.
Applying fertilizer to soils:
Click for answer
Answer:
greenhouse gases from fertilizer manufacturing, greenhouse gas emissions due to denitrification (N2O), and water pollution if over-applied, as in the case of manure above.
Tillage of soils:
Click for answer
Answer:
if practiced irresponsibly, soil erosion; siltation of waterways; greenhouse gas emission from mechanized tillage with tractors; emission of carbon as carbon dioxide that was once part of organic matter in soils.
Pesticide and herbicide application:
Click for answer
Answer:
toxicity to organisms in the environment (e.g. honey bees and other beneficial insects); water pollution; evolving resistance of pests and weeds; greenhouse gas emissions from manufacture, transport to farms, and application with tractors
Question 2 - Short Answer
As preparation for doing your own life-cycle analysis, make a list of all of the energy needs you can think of that go into both manufacturing and operating a car. You may want to also refer to the NCAT/ATTRA required reading to review an example life cycle analysis:
Click for answer
Answer:
mining and refining metal ore to make steel, refining oil and synthetic production of plastics and paint, energy for water supply to all manner of production processes, burning gasoline or diesel in the engine for running the car, etc.
13.2.05: Summative Assessment
Instructions
This assignment has been broken down into three steps. First, you need to understand the LCA spreadsheet; next, you will complete the spreadsheet; and finally, you will complete the Summative Assessment Worksheet using the results of the completed spreadsheet. Please read all of the instructions very carefully. They are presented on the next three pages.
Submitting Your Assignment
Please complete the Module 10 Summative Assessment in Canvas.
13.03: Summary and Final Tasks
Summary
In the first part of Module 10, you learned about some of the formal concepts around food systems, seen both as food production, transport, and consumption chains and as types of coupled natural-human systems, and you explored these concepts using real examples: food products as windows into production and transport chains, and food system examples from around the United States and the world. In the second part, module 10.2, you learned about and practiced a skill for measuring the impacts of food production activities and other human processes on the environment: life cycle analysis. These are vital tools that you can use to understand human-environmental linkages that pertain to food, one of the main goals of section III of this course.
Reminder - Complete all of the Module 10 tasks!
You have reached the end of Module 10! Double-check the to-do list on the Module 10 Roadmap to make sure you have completed all of the activities listed there before you begin Module 11.
Overview
Human-Environment Interactions: Resilience, Vulnerability, and Adaptive Capacity (RACV) of Food Systems
In Module 11, we focus on human-environment interactions in food systems under stress. Just as a human body does not persist in a constant state of perfect health, farms, fisheries, and other components of food systems face adversity, and these components must have sources of resilience and restoration to overcome such challenges. Shocks and perturbations from the natural world are a major negative coupling force from natural systems to human societies and are sometimes compounded by problems and crises within societies. Such shocks are most evident where the natural world meets human management in production areas, and so Module 11.1 focuses on the resilience and vulnerability of agriculture. As a premier example, we build on the material from module 2 and learn about the way that humans’ manipulation of seeds and plant varieties has created agrobiodiversity. Agrobiodiversity, along with crop management techniques, makes food production systems resilient or vulnerable to shocks and perturbations. In Module 11.2 we take up the theme of food access and food insecurity as a major example of vulnerability and an ongoing challenge for a significant proportion of humanity. Food insecurity also manifests as acute crises that carry the formal designation of famines. We will study these as well, since they are large-scale failures of the modern food system, which currently produces enough food for every person on earth. Just as the health sciences and medicine are ways to improve and guarantee health for all persons, our hope is that by understanding vulnerability and resilience in food systems we can address food insecurity for all people as a facet of sustainable food systems. Addressing food insecurity is a serious consideration that you will contemplate in your capstone project.
Goals
• Describe the concepts of resilience, adaptive capacity, and vulnerability (RACV) in a food system.
• Explain food access and food insecurity as a key challenge to food systems.
• Appraise the value of human seed systems and agrobiodiversity as human system components that incorporate crops as natural components and foster resilience.
• Apply concepts of RACV to understand changes in seed systems and food production in examples.
• Analyze stresses and shocks from climate change and food system failure that lead to both gradual changes in food systems and acute crises such as famines.
Learning Objectives
After completing this module, students will be able to:
• Define the concepts of perturbations and shocks, resilience, adaptive capacity, and vulnerability in the context of agri-food systems.
• Define and describe agrobiodiversity within food production systems and changes in this agrobiodiversity over time.
• Define the concepts of food access, food security, food insecurity, malnutrition, and famine.
• Give examples of resilience, adaptive capacity, and vulnerability in food systems.
• Give examples of support systems for biodiversity in land use and food systems.
• Evaluate recent examples in land use and food systems of resilience, adaptive capacity, and vulnerability (RACV).
• Analyze an example of a recent famine and understand how multiple factors of vulnerability and shocks combine to create widespread conditions of food insecurity known as famines.
• Understand scales at which resilience and vulnerability come into play, including farm, community, regional, and international scales.
• Propose principles embodying RACV for incorporation into a proposal/scenario for an example food system (capstone project).
Assignments
Print
Module 11 Roadmap
Please note that some portions of the Summative Assessment may need to be completed prior to class. Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 11 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Nabhan, G.P. p. 129-138 in Chapter 9, "Rediscovering America and Surviving the Dust Bowl: The U. S. Southwest " in Where Our Food Comes From: Retracing Nikolay Vavilov's Quest to End Famine. Washington: Island Press. (Module 11.1)
3. Bittman, Mark. "Don't Ask How to Feed the 9 Billion," NYT, Nov 12, 2014 (Module 11.2)
4. Deering, K. 2014. Stepping up to the challenge – Six issues facing global climate change and food security. CCAFS (Climate Change and Food Security Program)-UN (United Nations), 2014. (Module 11.2)
1. You are on the course website now.
2. From University Course reserves
3. Online: Don't Ask How to Feed the 9 Billion,
4. Online: Stepping up to the challenge – Six issues facing global climate change and food security
To Do
1. Summative Assessment: The Anatomy of the Somali Famine (2010-2012)
2. Take Module Quiz
3. Submit Capstone Stage 4 Assignment
1. In the course content: Summative Assessment; then take quiz in Canvas
2. In Canvas
3. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
14: Human-Environment Interactions
In this module, we consider the resilience, adaptive capacity, and vulnerability (RACV) of food systems through the lens of agrobiodiversity and seed systems. We will build on the awareness of human-natural system interactions that was explored in module 10.2. We examine the way that shocks and perturbations affect human systems, and the coping strategies human systems have developed that produce resilience within food systems. You will learn about agrobiodiversity at the crop and varietal level as an important case of adaptive capacity that provides resilience to shocks within different types of food systems (e.g. smallholder, globalized). You will apply this learning by examining RACV in a case study from the southwestern United States.
14.2.01: Introduction to Food Access and Food Insecurity
In this module, we will introduce the concepts surrounding the global challenge of food access and insecurity and the vulnerability of agri-food systems and particular populations to market and climate shocks. The concepts used in this unit build on the ideas of shocks and perturbations, resilience, adaptive capacity, and vulnerability of agri-food systems that were covered in unit 11.1. The unit, therefore, illustrates an urgent aspect of the analysis of the agri-food system as a coupled natural-human system.
14.02: Food Access and Food Insecurity
Food access is a variable condition of human consumers, and it affects all of us each and every day. If you have ever traveled through an isolated area of the country or the world and had difficulty finding food that is customary or nutritious to eat, or within reach of your travel budget, you have an inkling of what it means to have issues with food access. For those with little capacity for food self-provisioning from farms or gardens, food access is determined by factors influencing the spatial accessibility, affordability, and quality of food sellers. The consistent dependability of adequate food access helps to enable food security, whereby a person’s dietary needs and food preferences are met at levels needed to maintain a healthy and active life. Famines are conditions of extreme food shortage defined by specific characteristics (see below). Food-insecure conditions, such as acute and chronic hunger, affect many people both in the United States and in other countries.
Food Access:
Determined among consumers by the spatial accessibility and affordability of food retailers (specifically such factors as travel time to shopping, availability of healthy foods, and food prices) relative to the transportation access and socioeconomic resources of food buyers. You examined both of these in the Module 3 nutrition activity that used the United States Atlas of Food Access. Some people and places, especially those with low income, may face greater barriers in accessing healthy and affordable food retailers, which may negatively affect diet and food security. Food access among growers of food, whether full-time or part-time farmers (including many smallholders), is also influenced by their ability to produce and store enough food to complement purchased food, or to provision themselves entirely, referred to as self-provisioning capacity.
Food Security:
“when all people at all times have access to sufficient, safe, nutritious food to maintain a healthy and active life. Commonly, the concept of food security is defined as including both physical and economic access to food that meets people's dietary needs as well as their food preferences” (World Health Organization)
Components of food security: Some food programs, such as the Food and Nutrition Technical Assistance (FANTA) unit of the U.S. Agency for International Development (USAID), have found it helpful to analyze food security as composed of:
• food availability (production and/or markets that deliver sufficient amounts of food)
• food access (see above definition)
• food utilization: the ability to exercise cultural food preferences and the effective use of food within households and communities to guarantee equitable nutrition.
Famine:
Famine is generally understood as acute (versus chronic) food shortage at crisis levels across a wide area, with disastrous health and mortality outcomes. While there are various formal definitions of famine, many experts say that there must be evidence of three specific outcomes before a famine can be declared (a short code sketch after the list below illustrates how these thresholds combine):
1. At least 20 percent of households face extreme food shortages with limited ability to cope (note the explicit linkage to the reduced adaptive capacity of famine victims; see module 11.1);
2. Acute malnutrition must be generalized across the famine region, with prevalence exceeding 30 percent;
3. Death rates from hunger must exceed 2 deaths per 10,000 people per day.
(World Food Program definition, from Zero Hunger).
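As promised above, here is a short sketch of the three declaration thresholds written as a simple check. The function and variable names are ours, for illustration; actual famine declarations rest on careful field assessment of each indicator, not on a formula.

# Minimal sketch of the three World Food Program famine thresholds as a check.
def famine_thresholds_met(pct_households_extreme_shortage,
                          pct_acute_malnutrition,
                          hunger_deaths_per_10000_per_day):
    """Return True if all three declaration thresholds are met."""
    return (pct_households_extreme_shortage >= 20.0 and
            pct_acute_malnutrition > 30.0 and
            hunger_deaths_per_10000_per_day > 2.0)

# Example: a hypothetical region with 25% of households in extreme shortage,
# 35% acute malnutrition, and 2.5 hunger deaths per 10,000 people per day.
print(famine_thresholds_met(25.0, 35.0, 2.5))  # True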
Food-insecure conditions: acute vs. chronic hunger and malnutrition:
The definitions above imply concepts of acute and chronic that are broadly analogous to their definitions in the medical field. An acute food shortage is one that occurs suddenly, while chronic conditions go on month after month or year after year. Most climate and price shocks provoke acute impacts or crises, while chronic malnutrition of vulnerable or poor populations within countries can go on year after year, provoking long-term negative health and livelihood impacts. Both are considered failures of food systems. Acute food insecurity is rare in wealthier countries, but chronic under-nutrition and poor nutrition can be common, especially among the poor, and are among the current crises faced in the United States.
Smallholders:
You may already be familiar with this term and have absorbed some of the characteristics of smallholder farmers through our focus on the food systems these farmers occupy around the world (Module 10.1). In formal terms, smallholders are food producers whose households typically own less than 2-3 hectares (approximately 7 acres) of farmland. Demographically, smallholders number approximately 2.0-2.5 billion people worldwide, which makes them a major stakeholder group and "target population" for global food and agricultural policy. The socioeconomic characteristics of smallholders vary widely. Some smallholders, including some in the U.S. and Europe, are locally well-to-do “hobby farmers,” while the majority of smallholders are relatively poor, both in these countries and in the far more numerous populations of smallholders in countries such as China, India, and Brazil, as well as many other less developed and developing nations. The food access of smallholders typically combines some self-provisioning along with significant reliance on food purchases at stores and markets for staple foods such as grains, noodles, sugar, and oils.
A global overview of food insecurity can be obtained by mapping average daily calorie supply per person for each country (see Figure 11.2.1). Mapped values range from less than 2,000 calories per person (e.g., in Ethiopia and Tanzania) to 2,000-2,500 calories per person, a range that covers several countries in Africa as well as India and other countries in Asia, in addition to Latin America and the Caribbean. Calories are a reasonable way to begin to understand large-scale patterns related to the lack of food access around the world. Nevertheless, looking only at calories hides other aspects of human nutrition, such as the need for a diverse diet that satisfies human requirements for vitamins, minerals, and dietary fiber, which were described in module 3.
Figure 11.2.1.: Global Food Insecurity: calorie measures. This map is a good first approximation though it does not address vitamin or micronutrient-based malnutrition, which was described in module 3 of this course. Credit: Atlas of Population and Environment
Required Readings
The following brief readings are good ways to appreciate the analyses and debates surrounding food insecurity and the challenges of "feeding the world", especially in the emerging scenario of climate change impacts on food production. They form part of the required reading for this module and will help you to better understand the materials and the summative assessment.
1. Bittman, Mark. "Don't Ask How to Feed the 9 Billion," NYT, Nov 12, 2014.
2. Deering, K. 2014. Stepping up to the challenge – Six issues facing global climate change and food security, CCAFS (Climate Change and Food Security Program)-UN (United Nations), 2014. Read the page headings on each challenge and the brief description of the response below.
14.2.03: Food Shortages, Chronic Malnutrition, and Famine
Food Crises and Interacting Elements of the Natural and Human Systems
This section employs the framework of Coupled Natural-Human Systems (CNHS) to illustrate the interacting elements of natural and human systems that can combine to produce severe food shortages, chronic malnutrition, and famine in agri-food systems around the world. These CNHS concepts build on the diagrams and concepts in modules 10 and 11.1. You will also apply these concepts in the summative assessment on the next page.
As you read this brief description, consult figure 11.2.2 below. It depicts how interacting conditions within the human and natural systems, combined with driving forces and feedbacks, are at the core of many cases of severe food shortages, chronic malnutrition, and famine in agri-food systems.
The best place to begin interpreting Figure 11.2.2 is with the driving forces emanating from both the human and natural systems. Human system drivers often involve political and military instability and/or market failures and volatility (such as price spikes). Most cases of famine, as well as many instances of severe food shortages and chronic malnutrition, involve these human drivers. In addition, human drivers not only create vulnerability in natural systems but may act first and foremost on human systems, reducing the adaptive capacity of consumers and producers, for example by reducing the purchasing power of poor populations during price spikes.
Figure 11.2.2 also shows that drivers emanate from the natural system. Climate change and variation, such as drought and flooding, often contribute to cases of famine, as well as severe food shortages and chronic malnutrition.
These drivers, however, are only PART of the causal linkages of severe food shortages, chronic malnutrition, and famine. Similarly important are conditions of poor resilience (potentially arising as a result of weak social infrastructure), low levels of adaptive capacity, and poverty. Poverty is tragically involved as a cause of nearly all cases of severe food shortages, chronic malnutrition, and famine. For Mark Bittman, the author of the required reading on the previous page, the link between poverty and failures of food systems, rather than a failure of any other human or natural factor such as food production, food distribution, or overpopulation, is the central thesis of his short opinion piece. You may want to glance again at this reading to remind yourself why poverty is so deeply implicated in the failures of agri-food systems.
Weak or inadequate resilience (R) and adaptive capacity (AC), along with vulnerability (V), are also symptomatic of natural systems prone to severe food shortages, chronic malnutrition, and famine. For example, cropping and livestock systems unable to tolerate extreme conditions illustrate a low level of adaptive capacity (AC) that can contribute significantly to the failure of agri-food systems.
Figure 11.2.2.: Anatomy of severe food shortages, chronic malnutrition, and famine in agri-food systems based on the framework of Coupled Natural-Human Systems (CNHS). Many of the resilience, adaptive capacity, and vulnerability (RACV) concepts introduced in module 11.1 are depicted here in the negative, e.g. a lack of agroecosystem resilience. Acute food insecurity and famine often involve a "perfect storm," or coming together, of more than one of the RACV factors shown here. We note that some of the shocks are generated internal to the human system (large blue circular arrow at center left), with political and economic instability and other negative human system drivers fostering vulnerability of farmers and consumers in the human system, as well as negative impacts on the biodiversity and productivity of ecosystems. Credit: Karl Zimmerer and Steven Vanek
Instructions
Download the worksheet and follow the detailed instructions provided.
Anatomy of a Famine: multifactorial failures of adaptive capacity to climate and social shocks.
This worksheet relies heavily on the data resources presented by the Food Security and Nutrition Analysis Unit – Somalia and the Famine Early Warning System Network (FEWS Net).
This worksheet uses maps, tables, and graphs to guide you in analyzing a tragic famine in Somalia between 2010 and 2012 as a case of adaptive capacity and vulnerability (see module 11.2 for the definition of a famine). As many as 260,000 people died in this famine, half of them children under five years old (optional: see Somalia famine 'killed 260,000 people', May 2, 2013). You should read carefully through the case study presented in the worksheet (download above) and answer the questions in each section, e.g. "Question A1," and the two summary questions at the end.
Submitting Your Assignment
Upload the completed Module 11 Summative Assessment in Canvas.
14.03: Summary and Final Tasks
Summary
Module 11 analyzes the ways in which food production and food systems are vulnerable to shocks and perturbations, such as extreme weather, a changing climate, and economic and political crises like those caused by wars. However, food producers like farmers, and food systems generally, don't merely absorb or suffer these shocks. Rather, farmers and other participants in food systems exhibit adaptive capacity or capacities, part of a more general system property called resilience, which allows them to respond to and partially blunt the impacts of perturbations. In addition to forms of adaptive capacity such as migration, wage labor, and irrigated crops, which allow farmers to access food in difficult conditions, a major form of adaptive capacity we examined in module 11.1 is agrobiodiversity: the different crops and crop varieties possessed by a community or society. This range of crops helps communities and whole food systems respond over time to new and different conditions for food production and even escape extreme conditions.
However, as Module 11.2 and the summative assessment indicate, there are situations where farmers become extremely vulnerable to shocks and economic marginalization. This may take the form of food insecurity and consequent malnutrition, a topic first introduced in Module 3. There are also situations, such as the Somali famine of 2012 and earlier famines, in which a combination of climatic and political conditions becomes so extreme that widespread hunger and mortality occur. Knowing the principles of adaptive capacity and vulnerability, and the terrible consequences of vulnerability in famines, may help you act constructively to end acute hunger, as well as more chronic food insecurity, around the world.
Reminder - Complete all of the Module 11 tasks!
You have reached the end of Module 11! Double-check the to-do list on the Module 11 Roadmap to make sure you have completed all of the activities listed there before you begin module 12 where you will finalize your capstone project.
15: Capstone Project Stage 4 Assignment
Strategies for Sustainability
(Modules 10 and 11)
You’ve now completed the content modules for the Future of Food course. In Stage 4 of the capstone project, you’ll gather a bit more data about your region. In Stage 5, you will put together your final web page which will present your assessment of the current status of your assigned regional food system, projections for future scenarios of increased human population growth and increased temperatures in your region, and your proposed strategies to enhance the resilience and sustainability of your region’s food systems.
In Stage 4, you will gather data related to what you’ve learned in Modules 10 and 11. Also, you will explore population projections for your assigned region, so you can begin to assess the potential future resilience of the food systems of your assigned region.
Capstone Stage 4
Click for a text description of the Capstone Stage 4 diagram.
Capstone Stage 4 (food systems and sustainability)
describe regional food systems
assess using life-cycle concepts
discuss agrobiodiversity using RACV
discuss food insecurity/security
research population growth projections
propose strategies for sustainability
What to do for Stage 4?
• Download and complete the CapstoneProject_Stage4.docx worksheet that contains a table summarizing the data you need to collect to complete this stage. Remember, you need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research.
• Add questions and continue to research the questions in your worksheet files.
• Keep track of all of the resources and references you use.
• Add relevant data, maps, and figures to your PowerPoint file.
• Revise your CNHS diagram and/or create a new one incorporating topics from Modules 10 and 11.
Future Food Scenarios
Stage 5 is the final stage of the semester-long capstone project. There are two parts to this assignment.
1. You will write a 1500-2000 word paper (approximately 6-8 pages double-spaced). For this paper, you will use the data you've been gathering in the previous capstone stages throughout the semester, summarizing the data and information from stages 1, 2, 3, and 4 about the food systems of your assigned region. You may incorporate images, maps, and/or charts in your final paper, but keep in mind that these do not count toward your final word count. You must include proper citations and references in your paper for any work (ideas, quoted material, maps/charts/images) that is not your own. If at any time you are in doubt about needing a citation, include one or reach out to your instructor for assistance.
2. You will write a 500-word reflection (approximately 2 pages) in which you will discuss your progress on the capstone and the value of the analysis.
You will follow the instructions from your teacher to write and submit the paper. Your paper must include the following information:
Capstone Stage 5 Final Web Page
Click for a text description of Capstone Stage 5 Diagram.
Capstone Stage 5 - final web page
• Assess current status of regional food system
• Discuss future scenarios - temperature + human population growth
• Assess resilience and vulnerabilities of system to future scenarios
• Propose strategies for sustainability and resilience
What to do for Stage 5?
Grading Information and Rubric for Final Capstone Paper or Website:
Rubric
Each of the five content criteria is scored 9, 6, 3, or 1 points; overall professionalism and timing is scored 5, 3, 2, or 1 points.
Completeness of paper & all supporting documents: conforms to all instructions and guidelines
• 9 points: All specific instructions are met and exceeded; no components are omitted.
• 6 points: Most instructions are met, with only 1 to 2 minor omissions.
• 3 points: Some components are present, with the omission of several key elements.
• 1 point: Missing most components of the project; minimal conformity to guidelines.
Identification of the key food systems of the region
• 9 points: Clearly and thoroughly identifies the regional food systems with a clear application of material from Modules 1, 2, & 10.
• 6 points: Satisfactory identification of the regional food systems with some mention of material from Modules 1, 2, & 10.
• 3 points: Minimal identification of the regional food systems with little mention of material from Modules 1, 2, & 10.
• 1 point: Little to no identification of the regional food systems or mention of material from Modules 1, 2, & 10.
Assessment of the regional food system and the physical environment of the region (water resources, soils, crops, climate)
• 9 points: Thoroughly articulates specified elements with in-depth & accurate application of key concepts from Modules 4, 5, 6 & 9.
• 6 points: Satisfactory articulation of specified elements with some application of key concepts from Modules 4, 5, 6 & 9.
• 3 points: Minimal articulation of specified elements with little application of key concepts from Modules 4, 5, 6 & 9.
• 1 point: Little to no articulation or application of key concepts from Modules 4, 5, 6 & 9.
Analysis of the resilience of the regional food system based on data and facts
• 9 points: Thoughtful and thorough consideration of potential vulnerabilities using concepts from Module 11.
• 6 points: Satisfactory consideration of potential vulnerabilities using concepts from Module 11.
• 3 points: Minimal consideration of potential vulnerabilities, with little use of concepts from Module 11.
• 1 point: Little to no consideration of potential vulnerabilities or use of concepts from Module 11.
Proposes reasonable strategies for sustainability and resilience based on data and facts
• 9 points: Clearly develops viable & insightful strategies with well-supported data & research.
• 6 points: Develops viable strategies supported by some data and research.
• 3 points: Develops minimal strategies supported by limited data and research.
• 1 point: Little to no strategies provided, or strategies not supported by data and research.
Overall professionalism and timing
• 5 points: Advanced; no typos or grammatical concerns, with attention to detail and superior effort demonstrated.
• 3 points: A solid effort, with few typos or grammatical concerns; attention to detail evident with some effort demonstrated.
• 2 points: Minimal effort, with numerous typos or grammatical concerns and little attention to detail.
• 1 point: Little to no effort demonstrated, with extensive typos or grammatical concerns and little to no attention to detail.
Total Points (out of 50)
1. Summary of Current Regional Food System
• Summarize the data and information that you’ve gathered throughout the semester about your assigned regional food system(s) and the interaction between those food systems and the environment, as well as any relevant socioeconomic, cultural and policy factors.
• Provide an overview of the current status of your assigned regional food system(s). Summarize the data and information that you acquired in the previous modules to present the current status of your regional food system. Details are provided in the Stage 5 worksheet document.
2. Discussion of future scenarios
• What are projections for regional human population growth in your assigned region?
• What are the projections for temperature increases in your assigned region?
3. Analysis of the resilience of future food system
• Provide a discussion of the resilience of your food system given the potential of increasing human population growth and increasing temperatures.
• Consider possible impacts of climate change and human population growth on the regional food system and the resilience and/or vulnerability of the food system to those changes.
4. Proposed strategies for sustainability and enhanced resilience
• Propose strategies that contribute to the increased resilience of your assigned regional food systems in the face of human population growth and rising temperatures and evaporation rates.
Stage 5 Individual Assessment (Reflection)
Your final individual assessment for the course is a short (500 word) essay reflecting on your capstone project.
What to do for Stage 5 Individual Assessment:
• Write a 500-word essay reflecting on the following:
• A summary of your experience with the capstone project
• Most challenging aspect of capstone
• Most interesting aspect of capstone
• How does the capstone and/or course relate to your major (or if undeclared, your possible choice of major)? What will you take away from the course that you might use in your future studies?
• You are expected to put some thought into this essay. Your essay should be well-structured and reflect the depth of the research you performed in the capstone project.
• Submit your essay electronically via Canvas
Rubric for Stage 5 Individual Assessment (Reflection)
Rubric
• 500-word essay submitted electronically by the deadline, with reasonable font size and margins (1 point)
• Essay is organized, logical, and thoughtful (2 points)
• Concepts are presented clearly, and the presentation demonstrates a clear understanding of the capstone project and the material covered (3 points)
• Essay demonstrates a clear understanding of the connection between human food systems and natural earth systems (3 points)
• Essay is free of grammatical and spelling errors (1 point)
Total possible points: 10
Files to Download (if you haven't already done so)
Capstone Project Stage 5: Final Paper Project & Reflection
16: Capstone Project Stage 5
Capstone Stage 1: Introduction to your regional food system and the region’s soil resources.
Click for a text description of the Capstone Stage 1 image.
Capstone Stage 1 - Introduction to your regional food system, history, and diet/nutrition
• Describe physical environment
• Describe human environment
• Explore history of the food system
• Discuss diet and nutrition
Instructions
• Confirm with your instructor which region you will be studying and introduce yourself to your group members.
• Download and complete the CapstoneProject_Stage1.docx worksheet that contains a table summarizing the data you’ll need to collect to complete this stage. You need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research. Do not just write one-word answers.
• Create a PowerPoint file that you'll use to store maps, data, graphs, photos, etc. that you collect related to your assigned region. For every piece of information that you put in your PowerPoint file, you MUST include a citation that clearly explains where that piece of information came from.
• Create a Word document to list questions that you have about your region related to the key course topics covered so far. Include in this document a record of your efforts to answer the questions so far.
Water, crops, and climate
The diagram below summarizes the topics you will explore in Stage 2 for your assigned region.
Capstone Stage 2 Water, Crops and Climate.
Click for a text description of Capstone Stage 2 image.
Capstone Stage 2 - Water, soils, and crops
How do water and soils influence regional crops?
• Describe precipitation and water resources
• Assess water pollution
• Describe soil types
• Describe soil impacts
• Describe regional crops
• Discuss relationship between crops, soils, and climate
What to do for Stage 2?
• Download and complete the CapstoneProject_Stage2.docx worksheet that contains a table summarizing the data you’ll need to collect to complete this stage. Remember, you need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research.
• Add relevant data to your PowerPoint file.
• Add questions and continue to research the questions in your word file.
Capstone Project Overview: Where do you stand?
At this stage, you should have started to investigate your assigned region and have added information, maps, and data to your worksheets and PowerPoint file for Stages 1 and 2.
Upon completion of Stage 2, you should have:
1. Confirmed which region you will study for your capstone project and identified the members of your group.
2. Initiated research and data compilation in the Stages 1 and 2 sections of the Future Food Scenarios table.
• a. Stage 1: Regional food setting, history of regional food systems, soil types
• b. Stage 2: Water resources, crops, and climate
3. Created a PowerPoint file to hold the data that you are collecting about the food system of your assigned region. The information you have may include:
• a. Labeled map of your region
• b. Soil map of your region
• c. Precipitation and temperature map of your region
• d. Major crops and crop families grown in your region
4. Compiled an initial list of questions you have about your region related to key course topics and initiated significant efforts to answer.
16.03: Capstone Project Stage 3
Vulnerability and Resilience
At this stage, you should have collected quite a bit of data related to the physical environment of your region (water, soils, and climate) as well as related to the regional food system, including the history of the regional food system and which crops are grown in your region. You may also have discovered some impacts that the regional food system is having on soil and water resources in the region.
In stage 3, you will explore the vulnerabilities in your regional food system and the potential resilience of the system. The diagram below summarizes what you will cover in Stage 3.
Capstone Stage 3
Click for a text description of the Capstone Stage 3 diagram.
Capstone Stage 3 (soil/crop management, pests, and climate change)
System stressors and management strategies:
• describe soil conservation practices
• describe pest management strategies
• discuss agrobiodiversity of region
• discuss future climate scenarios
What to do for Stage 3?
• Download and complete the CapstoneProject_Stage3.docx worksheet that contains a table summarizing the data you’ll need to collect to complete this stage. Remember, you need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research.
• Add questions and continue to research the questions in your word file.
• Keep track of all of the resources and references you use.
• Add relevant data, maps, and figures to your PowerPoint file.
16.04: Capstone Project Stage 4
Strategies for Sustainability and Final Presentation
You’ve now completed the content modules for the Future of Food course. In this final stage of the capstone project, you’ll put together your final group presentation which will present your assessment of the current status of your assigned regional food system, projections for future scenarios of increased human population growth and increased temperatures in your region, and your proposed strategies to enhance the resilience and sustainability of your region’s food systems.
Stage 4 is broken down into two parts. Stage 4a is the last piece of data gathering and research that you need to do. Stage 4b is the preparation of your final presentation, which will summarize the data and information you’ve gathered in stages 1, 2, 3, and 4a.
Capstone Stage 4
Click for a text description of the Capstone Stage 4 Diagram.
Capstone Stage 4 (food system and sustainability)
• describe regional food systems
• assess using the life-cycle concepts
• discuss agrobiodiversity using RACV
• discuss food insecurity/security
• research population growth projections
• propose strategies for sustainability
What to do for Stage 4a?
• Download and complete the CapstoneProject_Stage4a.docx worksheet that contains a table summarizing the data you need to collect to complete this stage. Remember, you need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research.
• Add questions and continue to research the questions in your word file.
• Keep track of all of the resources and references you use.
• Add relevant data, maps, and figures to your PowerPoint file
Capstone Stage 4b diagram
Click for a text description of the Capstone Stage 4b diagram.
Capstone Stage 4b (final presentation)
• assess current status
• discuss future scenarios
• assess resilience and vulnerabilities to future scenarios
• propose strategies for sustainability and resilience
Stage 4b – Final Presentation
Your final presentation should address the following topics associated with your assigned region:
1. Assessment of current status of regional food system
• a. include climate, water, soils, nutrients, crops, agricultural practices, existing food systems
2. Discussion of future scenarios
• a. what are projections for regional human population growth?
• b. what are projections for temperature increases in your assigned region?
3. The resilience of the current system to future projections
• a. given the current status of the region’s food system, how resilient will the system be in the face of the projected changes in human population and temperature?
4. Strategies for sustainability and resilience
• a. propose strategies for increasing the sustainability and resilience of your assigned region’s food systems in the face of the projected human population growth and increased temperatures.
Sergio Capareda
Biological and Agricultural Engineering Department
Texas A&M University
College Station, Texas, USA
Introduction
This chapter introduces the importance of analyzing the energy balance and the economic viability of biomass conversion systems. In principle, the energy used for biomass production, conversion, and utilization should be less than the energy content of the final product. For example, one of the largest energy components for growing biomass is fertilizer (Pimentel, 2003), so this component must be included in the energy systems analyses. This chapter also introduces some biomass conversion pathways and describes the various products and co-products of conversions, with a focus on the techno-economic indicators for assessing the feasibility of a particular conversion system. Sustainability evaluation of biomass-derived fuels, materials, and co-products includes, among others, three key components: energy balance, environmental impact, and economic benefit. This chapter focuses primarily on energy balance and economic issues influencing bioenergy systems.
Concepts
The major commercial fuels used in the world today are natural gas, gasoline (petrol), aviation fuel, diesel, fuel oils, and solid fuels such as coal. These commercial fossil fuels could be replaced with biofuels and solid fuels derived from biomass by using conversion technologies. Specific biomass resources are well-suited to each conversion technology. For example, sugar crops (sugarcane and sweet sorghum) are good feedstock materials for conversion into bioethanol; oil crops (soybean and canola) are ideal feedstocks for biodiesel production; and lignocellulosic biomass (e.g., wood wastes, animal manure, or grasses) is the prime substrate for making biogas. Thermal conversion systems convert all other biomass resources into valuable products.
Replacement of these primary fuels with bio-based alternatives is one way to address energy sustainability. Heat and electrical power, needed worldwide, can also be produced from biomass through thermo-chemical conversion processes such as pyrolysis and gasification, which produce synthesis gas (also called syngas). Syngas can be combusted to generate heat, or it can be thoroughly cleaned of tar and used in an internal combustion engine to generate mechanical or electrical power. Future world requirements for other basic energy and power needs can be met using a wide range of biomass resources, including oil and sugar crops, animal manure, crop residues, municipal solid wastes (MSW), fuel wood, aquatic plants such as micro-algae, and dedicated energy crops. The three primary products of thermal conversion are solid bio-char, liquid bio-oil, and synthesis gas.
A biomass energy conversion system can produce one or more of four major products: heat, electricity, fuel, and raw materials. The goal of any conversion process is to achieve the highest conversion efficiency possible by minimizing losses. The energy conversion efficiency for any type of product can be calculated as:
$\text { Energy Conversion Efficiency }(\%)=\frac{\text { Energy Output }(\mathrm{MJ})}{\text { Energy Input }(\mathrm{MJ})} \times 100 \% \label{1}$
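Equation 1.1.1 is simple enough to script directly. The following minimal Python sketch applies it to the canola-oil figures used in Example 1 later in this chapter; the function itself is generic.

```python
def energy_conversion_efficiency(energy_output_mj, energy_input_mj):
    """Equation 1.1.1: energy conversion efficiency in percent."""
    return energy_output_mj / energy_input_mj * 100.0

# Figures from Example 1 below: 1 kg of refined canola oil (39.46 MJ/kg)
# yields 0.95 kg of biodiesel (40.45 MJ/kg).
print(energy_conversion_efficiency(40.45 * 0.95, 39.46))  # ~97.4%
```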
There are three fundamental biomass conversion pathways (Figure $1$): physico-chemical, biological, and thermo-chemical. Physico-chemical conversion is the use of chemicals or catalysts at ambient or slightly elevated temperatures. Biological conversion is the use of specific microbes or enzymes to generate valuable products. Thermo-chemical conversion occurs at elevated temperature (and sometimes elevated pressure). The products from biomass conversions can replace common fossil-resource-derived chemicals (e.g., lactic acid), fuels (e.g., diesel), and materials (e.g., gypsum). This chapter focuses on energy derived by bioconversion.
Biodiesel Production
Refined vegetable oils and fats are converted into biodiesel, which is compatible with diesel fuel, by physico-chemical conversion: a simple catalytic process using methanol (CH3OH) and sodium hydroxide (NaOH) at a slightly elevated temperature. The process is called transesterification. Vegetable oils are also called triglycerides because their chemical structure is composed of a glycerol attached to three fatty acid molecules by ester bonds. When the ester bonds are broken by a catalyst, glycerin is produced, and each fatty acid compound is converted into its methyl ester form, which is the technical term for biodiesel. The combination of methanol and sodium hydroxide produces a compound called sodium methoxide (CH3ONa), the most common commercial catalyst for biodiesel production. The basic mass balance for the process is:
100 kg vegetable oil + 10 kg catalysts → 100 kg biodiesel + 10 kg glycerin
The energy balance depends on the specific facility design. For the biodiesel product to be considered viable, the energy in the biodiesel must exceed the energy used to produce the vegetable oil used for the process. In a commercial system, the transesterification process is split into several stages (Figure $2$). Methanol and catalysts are recovered after all the stages to minimize catalyst consumption. Crude glycerin is also recovered at each stage to minimize the use of excess methanol. The remaining catalyst—the amount must be calculated accurately—is then introduced at the last stage of the process. This last stage reaction minimizes unreacted mono-glycerides (or remaining glycerol that still has a fatty acid chained to it via an ester bond). If soybean oil is used, the resulting biodiesel product is called soybean methyl ester, the most common biodiesel product in the United States. In Europe, canola (rapeseed) oil is the most common feedstock, which produces rapeseed methyl ester. The glycerin co-product is further purified to improve its commercial value.
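As a rough planning aid, the ideal 100:10 → 100:10 mass balance above can be scaled to any oil input. The sketch below is illustrative only; the 95% recovery factor is an assumed stand-in for real-plant losses (Example 1 later in the chapter uses a similar figure), not a value prescribed by the process.

```python
def transesterification_mass_balance(oil_kg, recovery=0.95):
    """Scale the ideal mass balance: 100 kg oil + 10 kg methanol/catalyst
    -> 100 kg biodiesel + 10 kg glycerin. `recovery` is an assumed factor
    accounting for real-plant losses."""
    biodiesel_kg = oil_kg * recovery      # ideal yield is 1:1 with the oil
    glycerin_kg = oil_kg * 0.10
    methanol_catalyst_kg = oil_kg * 0.10  # required input
    return biodiesel_kg, glycerin_kg, methanol_catalyst_kg

print(transesterification_mass_balance(1000.0))  # (950.0, 100.0, 100.0)
```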
Bioethanol Production
Bioethanol, which is compatible with gasoline or petrol, is produced from sugar, starchy, or lignocellulosic crops using microbes or enzymes. Sugars from crops are easily converted into ethanol using yeast (e.g., Saccharomyces cerevisiae) or other similar microbes, while starchy crops need enzymes (e.g., amylases) that convert starch to sugar, with the yeasts then acting on the sugars to produce bioethanol. Lignocellulosic crops need similar enzymes (e.g., enzymes produced by Trichoderma reesei) to break down cellulose into simple sugars. The basic mass balance for the conversion of plant sugars from biomass into ethanol (C2H6O) also yields heat:
$\mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}+\text { yeast } \rightarrow 2 \mathrm{C}_{2} \mathrm{H}_{6} \mathrm{O}+2 \mathrm{CO}_{2}(+\text { heat })$
The most common feedstock for making bioethanol in the United States is dry milled corn (maize; Zea mays). In the process (Figure $3$), dry corn kernels are milled, water is added to the powdered material, and the mixture is heated (gelatinized) to cook the starch, which is then broken down using the amylase enzyme (saccharification). This process converts starch into sugars. The resulting product (mainly glucose) is then converted into bioethanol using yeast fermentation for 3-5 days, with a mass balance of:
$2 \mathrm{C}_{6} \mathrm{H}_{10} \mathrm{O}_{5}+\mathrm{H}_{2} \mathrm{O}+\text { amylase } \rightarrow \mathrm{C}_{12} \mathrm{H}_{22} \mathrm{O}_{11}$
or
$\mathrm{C}_{12} \mathrm{H}_{22} \mathrm{O}_{11}+\mathrm{H}_{2} \mathrm{O}+\text { invertase } \rightarrow 2 \mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}$
In this representation, complex starch molecules are represented by repeating units of polymers of glucose [(C6H10O5)n] with n being any number of chains. The enzyme amylase reduces this polymer into simple compounds, such as sucrose (C12H22O11), a disaccharide having just two molecules of glucose. Alternatively, the enzyme invertase is used to break down sucrose into glucose sugar. A yeast, such as the commercial yeast Ethanol Red (distributed by Fermentis of Lesaffre, France and sold worldwide) acts on the sugar product to convert the sugar into bioethanol. The resulting product (a broth) is called beer because its alcohol content is very close to 10%. The solid portion is called distillers grain, which is usually dried and fed to animals. The beer is distilled to yield solids (known as bottoms or still bottoms) and to recover 90-95% of the bioethanol (usually 180-190 proof), which is then purified using molecular sieves. (A molecular sieve is a crystalline substance with pores of carefully selected molecular dimensions that permit the passage of, in this case, only ethanol molecules.) The final separated and purified product may then be blended with gasoline or used alone.
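The fermentation stoichiometry above fixes an upper bound on ethanol yield: each mole of glucose (180.16 g) can give at most two moles of ethanol (2 × 46.07 g). A short sketch of that calculation follows; the molar masses are standard values, and actual fermentations recover somewhat less because the yeast diverts some sugar to cell growth and by-products.

```python
M_GLUCOSE, M_ETHANOL = 180.16, 46.07  # molar masses, g/mol

def theoretical_ethanol_yield_kg(glucose_kg):
    """Stoichiometric maximum from C6H12O6 -> 2 C2H6O + 2 CO2."""
    return glucose_kg * (2 * M_ETHANOL) / M_GLUCOSE

print(theoretical_ethanol_yield_kg(1.0))  # ~0.511 kg ethanol per kg glucose
```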
Biogas Production
Biogas, which is composed primarily of methane (CH4; the main component of natural gas) and carbon dioxide (CO2), is produced from lignocellulosic biomass by microbes under anaerobic conditions. Suitable microbes are commonly found in the stomachs of ruminant animals (e.g., cows). These microbes first convert complex cellulosic materials into organic acids via hydrolysis and fermentation; these large organic acids are further broken down into simpler organic acids (e.g., acetic acid), hydrogen gas, and CO2, which methane-forming microbes finally convert into CH4 and CO2 as their respiratory gases. Biogas (CH4 + CO2) is essentially equivalent to natural gas (CH4) once the CO2 component is removed. Natural gas itself is a fossil fuel extracted from underground reservoirs, often together with crude oil.
There are various designs of high-rate anaerobic digesters for biogas production (Figure $4$), which are commonly used in wastewater treatment plants worldwide. Simpler digesters use upflow and downflow anaerobic filters, basic fluidized beds, expanded beds, and anaerobic contact processes. One popular design from the Netherlands is the upflow anaerobic sludge blanket (or UASB) (Lettinga et al., 1980). Improvements to the UASB include the anaerobic fluidized bed and expanded bed granular sludge blanket reactor designs. High-rate systems are commonly found in Europe, but there are few in the US. Most biogas plants in the US are simply covered lagoons.
Biomass Pyrolysis
Pyrolysis is a thermal conversion process at elevated temperatures in the complete absence of oxygen or an oxidant. Figure $5$ shows the outputs and applications of pyrolysis. The primary products are solid bio-char, liquid bio-oil, and gaseous synthesis gas. The ratios of these co-products depend on temperature, retention time, and the type of biomass used. The quality and quantity of products also depend on the reactor used. The simple rules of biomass pyrolysis are:
1. Solid bio-char (or charcoal) yield is maximized at the lowest pyrolysis temperature and the longest residence time.
2. Liquid yield is usually maximized at temperatures between 400°C and 600°C.
3. Synthesis gas, or syngas, yield is maximized at the highest operating temperature. The main components of syngas are carbon monoxide (CO) and hydrogen (H2). Other component gases include lower molecular weight hydrocarbons such as CH4, ethylene (C2H4), and ethane (C2H6).
Bio-char may be used as a soil amendment to provide carbon and nutrients when applied to agricultural land. A high-carbon bio-char may also be upgraded into activated carbon, a very high-value adsorbent material for water and wastewater treatment processes. The highest value for the bio-char is achieved when the carbon is purified of all inorganics to generate graphene products, which are among the strongest materials made from carbon.
The quality of the liquid product (bio-oil) is enhanced by short residence times, such as those in fluidized bed pyrolysis systems, but not by auger pyrolyzers, which usually have long residence times. Short residence times give rise to less viscous bio-oil that is easy to upgrade into biofuel (gasoline or diesel) using catalysts. Bio-oil from the pyrolysis process has a wide range of applications (Figure $5$): valuable chemicals can be extracted; the unaltered bio-oil can be upgraded via catalytic processes to generate transport fuels; or it may be co-fired in an engine to generate electricity or in a boiler to generate heat.
Syngas may simply be combusted as it is produced to generate heat. However, syngas may need to be cleaned of tar before use in an internal combustion engine. To generate electrical power, this internal combustion engine is coupled with a generator.
Biomass Gasification
Gasification is a partial thermal conversion of biomass to produce syngas. In older textbooks, this gas is also synonymously called “producer gas.” Syngas can be combusted to generate heat or cleaned of tar and used in an internal combustion engine to generate electricity. The synthesis gas may also be used as feedstock to produce bio-butanol using microbes that also produce biofuel co-products. There are numerous types and designs of gasifiers, including fixed bed systems (updraft, downdraft, or cross-draft gasifiers) and moving bed systems (fluidized bed gasification systems).
A fluidized bed gasification system is shown in Figure $6$. Biomass is continuously fed to a large biomass bin. The fluidized bed reactor contains a bed material, usually refractory sand, to carry the heat needed for the reaction. The air-to-fuel ratio is controlled so the amount of air is below the stoichiometric requirement for combustion (i.e., combustion is incomplete) to ensure production of synthesis gas instead of heat and water vapor. The solid remaining after partial thermal conversion is high carbon bio-char that is removed via a series of cyclones. The simplest application of this system is the production of heat by combusting the synthesis gas. If electrical power is needed, then the synthesis gas must be cleaned of tar to be used in an internal combustion engine to generate electricity. The conversion efficiencies of gasification systems are typically less than 20%. An average value to use for a quick estimate of output is around 15% overall conversion efficiency.
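The roughly 15% overall conversion efficiency quoted above supports a quick first estimate of electrical output. In the sketch below, the 18 MJ/kg heating value for dry biomass is an assumed round figure, not a value from this chapter:

```python
def gasifier_electric_output_kw(feed_kg_per_h, hhv_mj_per_kg=18.0, eff=0.15):
    """First-cut electrical output estimate for a biomass gasification
    power system, using the ~15% overall efficiency quoted in the text."""
    energy_in_mj_per_h = feed_kg_per_h * hhv_mj_per_kg
    return energy_in_mj_per_h * eff / 3.6  # 1 kWh = 3.6 MJ

print(gasifier_electric_output_kw(1000.0))  # ~750 kW from 1 tonne/h of feed
```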
Biomass Combustion
Direct combustion of biomass has been a traditional practice for centuries; burning wood to produce heat for cooking is an example. Combustion is the most efficient thermal conversion process for heat and power generation purposes. However, not many biomass products can be combusted because of the high ash and water content of most agricultural biomass products. The ash component can melt at higher combustion temperatures, resulting in phenomena called slagging and fouling. Melted ash forms slag that accumulates on conveying surfaces (fouls) as it cools.
Economic Evaluation of Bioenergy Systems
Commercial bioenergy facilities depreciate every year. Depreciation cannot be estimated exactly, but a potential investor may use this parameter to set aside capital each year from the proceeds of the commercial facility so that, at the end of the facility's life, the investor is prepared to invest in higher-yielding projects.
There are a number of simple methods that engineers may use for economic depreciation analyses of bioenergy facilities. A basic economic evaluation is required early in the design of the system to ascertain feasibility prior to significant capital investment. Evaluation of the economic feasibility begins with the analysis of the fixed (or capital) expenditures and variable (or operating) costs (Watts and Hertvik, 2018). Fixed expenditures include the capital cost of assets such as biomass conversion facilities, land, equipment, and vehicles, as well as depreciation of facilities and equipment, taxes, shelter, insurance, and interest on borrowed money. Variable costs are the daily or monthly operating costs for the production of a biomass product. Variable costs are associated with feedstock and chemicals, repair and maintenance, water, fuel, utilities, energy, labor, management, and waste disposal. Figure $7$ shows the relationship between these two basic economic parameters. Fixed costs do not vary with time and output while variable costs increase with time and output of product. The total project cost is the sum of fixed and variable costs. Variable costs per unit of output decrease with increased amount of output, so the profitability of a product may depend on the amount produced.
In order to evaluate the economic benefits of a bioenergy project, some other economic parameters are commonly used (Stout, 1984), including net present value, benefit cost ratio, payback period, breakeven point analysis, and internal rate of return. The analyses must take into account the relationship between time and the value of money. The basic equations for estimating the present and future value of investments are:
$\text { Present Value }=\mathrm{PV}=\mathrm{FV} \times \frac{1}{(1+R)^{n}} \label{2}$
$\text { Future Value }=\mathrm{FV}=\mathrm{PV} \times(1+R)^{n} \label{3}$
where R = rate of return or discount rate (decimal)
n = number of periods (unitless)
The internal rate of return is the discount rate that makes the net present value of all cash flows from a particular project equal to zero. The higher the internal rate of return, the more economically desirable the project. The net present value (Equation 1.1.4) is the difference between the present value of cash inflows and the present value of cash outflows; a positive net present value means that the project earnings exceed anticipated costs. The benefit cost ratio (Equation 1.1.5) is the ratio between the project benefits and costs; values greater than 1 are desirable. The payback period (Equation 1.1.6) is the length of time required to recover the cost of the investment.
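These time-value relationships translate directly into code. The sketch below implements Equations 1.1.2 and 1.1.3 and a simple net present value; the project cash flows are hypothetical numbers chosen only to illustrate the sign convention:

```python
def present_value(fv, r, n):
    """Equation 1.1.2: discount a future amount to today."""
    return fv / (1 + r) ** n

def future_value(pv, r, n):
    """Equation 1.1.3: compound a present amount forward."""
    return pv * (1 + r) ** n

def net_present_value(rate, cash_flows):
    """NPV of a cash-flow series; cash_flows[0] is the (negative) initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $1M plant, $250k net inflow/yr for 7 years, 8% discount rate
flows = [-1_000_000] + [250_000] * 7
print(round(net_present_value(0.08, flows)))  # ~301,593: positive, so attractive
```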
When estimating the fixed cost of a project, the major cost components are the depreciation and the interest on borrowed money. There are many ways to estimate the depreciation of a facility. The two most common and simple methods are straight-line depreciation (Equation 1.1.7), and the sum-of-years’ digit depreciation method (SYD) (Equation 1.1.8).
$\text { Straight Line Depreciation }(\$)=\frac{\text { Principal }-\text { Salvage Value }}{\text { Life of Unit }} \label{7}$

$\text { SYD Depreciation }(\$)=\text { Depreciation Base } \times \frac{\text { Remaining Useful Life }}{\text { Sum of Years' Digits }} \label{8}$
In Equation 1.1.8, the depreciation base is the difference between the initial capital cost ($) and the salvage value of the asset ($). The sum of the years’ digits is the sum series: 1, 2, 3, up to n, where n is the useful life of the asset in years, as shown in Equation 1.1.9:
$\text { Sum of Years' Digits }=\mathrm{SYD}=\frac{n(n+1)}{2} \label{9}$
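A short sketch comparing the two depreciation methods (Equations 1.1.7 through 1.1.9); the $500,000 facility with a $50,000 salvage value and a 10-year life is a hypothetical example:

```python
def straight_line_depreciation(principal, salvage, life_years):
    """Equation 1.1.7: constant annual depreciation charge ($/yr)."""
    return (principal - salvage) / life_years

def syd_depreciation_schedule(principal, salvage, life_years):
    """Equations 1.1.8-1.1.9: front-loaded annual charges ($/yr)."""
    base = principal - salvage
    syd = life_years * (life_years + 1) // 2
    return [base * (life_years - yr) / syd for yr in range(life_years)]

print(straight_line_depreciation(500_000, 50_000, 10))           # 45000.0 per year
print(round(syd_depreciation_schedule(500_000, 50_000, 10)[0]))  # 81818 in year 1
```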
The other large portion of capital cost is interest on borrowed money. This is usually the percentage (interest rate) charged by the bank based on the amount of the loan. The governing equation without including the salvage value (Equation 1.1.10) is similar to the amortization calculation for a loan amount:
$\text { Annuity }=A=P \times\left(\frac{r \times(1+r)^{n}}{(1+r)^{n}-1}\right) \label{10}$
where
A = annuity or payment amount per period ($)
P = initial principal or amount of loan ($)
r = interest rate per period (decimal)
n = total number of payments or period (unitless)
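Equation 1.1.10 can be checked with a few lines; the $2 million loan at 6% over 15 years is a hypothetical example:

```python
def annual_payment(principal, r, n):
    """Equation 1.1.10: amortized payment per period for a loan."""
    return principal * (r * (1 + r) ** n) / ((1 + r) ** n - 1)

print(round(annual_payment(2_000_000, 0.06, 15)))  # ~205,926 per year
```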
There are many tools for the economic evaluation of energy systems; one of the most popular is HOMER Pro (Hybrid Optimization Model for Electric Renewables), developed by Peter Lilienthal of the US Department of Energy (USDOE) starting in 1993 (Lilienthal and Lambert, 2011). The model includes systems analysis and optimization for off-grid and grid-connected power systems for remote, stand-alone, and distributed generation applications of renewables. It has three powerful tools for energy systems simulation, optimization, and economic sensitivity analyses (Capareda, 2014). The software combines the engineering and economics aspects of energy systems. This type of tool is used for planning and design of commercial systems, but its simple equations can be used first to assess the fundamental viability of a biomass conversion project.
Sustainability Issues in Biomass Energy Conversion Systems
The US Department of Energy (USDOE) and US Department of Agriculture (USDA) define sustainable biofuels as those that are “economically competitive, conserve the natural resource base, and ensure social well-being.” Conserving the resource base points to the conservation of energy as well; that is, the fuel produced must have more energy than the total energy used to produce the fuel. One of the most common indicators of sustainability for biomass utilization is energy use throughout the life cycle of production. There are two measures used for this evaluation: the net energy ratio (NER) (Equation 1.1.11) and the net energy balance (NEB) (Equation 1.1.12). NER must be greater than 1 and NEB must be positive for the system to be considered sustainable from an energy perspective.

$\mathrm{NER}=\frac{\text { Energy Content of Fuel }(\mathrm{MJ})}{\text { Energy Required to Produce the Biofuel }(\mathrm{MJ})} \label{11}$

$\mathrm{NEB}=\text { Energy Content of Fuel }(\mathrm{MJ})-\text { Energy Required to Produce the Biofuel }(\mathrm{MJ}) \label{12}$
The heating value of the biofuel is defined as the amount of heat produced by the complete combustion of the fuel measured as a unit of energy per unit of mass.
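Both sustainability measures are one-liners in code. The figures below (about 33 MJ in a liter of biodiesel against 25 MJ of life-cycle input energy) are assumed for illustration only:

```python
def net_energy_ratio(fuel_mj, input_mj):
    """Equation 1.1.11: must exceed 1 to be energy-sustainable."""
    return fuel_mj / input_mj

def net_energy_balance(fuel_mj, input_mj):
    """Equation 1.1.12: must be positive to be energy-sustainable."""
    return fuel_mj - input_mj

print(net_energy_ratio(33.0, 25.0))    # 1.32
print(net_energy_balance(33.0, 25.0))  # 8.0 MJ surplus per liter
```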
Applications
Engineers assigned to design, operate, and manage a commercial biodiesel plant must decide what working system to adopt. The cheapest and most common is the use of gravity to separate the biodiesel (usually the top layer) from the glycerin (the bottom layer). An example of such a commercial facility is the 3 million gallon per year (MGY) (11.36 ML/yr) biodiesel plant in Dayton, Texas, operated by AgriBiofuels, LLC. This facility began operation in 2006 and is still in operation. Its biodiesel recovery is slightly lower than that of plants with computer-controlled advanced separation systems using centrifuges. This facility also does not follow the ideal process flow (shown in Figure $2$) used by many other commercial facilities, so one would expect its conversion efficiency and biodiesel recovery to be lower.
Biodiesel production is an efficient biomass conversion process. The ideal mass balance equation, presented earlier, is:
100 kg vegetable oil + 10 kg catalysts → 100 kg biodiesel + 10 kg glycerin
The relationship shows that an equivalent mass of biodiesel is produced for every unit mass of vegetable oil used, but there are losses along the way, and engineers must consider these losses when designing commercial facilities. In a commercial biodiesel facility, the transesterification process is split into several reactors (e.g., Figure $2$); however, to save on capital costs, some plant managers simply divide the process into two stages. Separating glycerin from biodiesel fuel is another issue the engineer will face. Efficient separation systems that use centrifuges are expensive compared with physical separation, and this affects the overall economics of the facility. If the initial capital available is limited, investors will typically opt for cheaper gravity separation instead of centrifuges. Crown Iron Works (Blaine, MN) sells low-cost biodiesel facilities that employ gravity separation, while GEA Westfalia (Oelde, Germany) sells more expensive biodiesel facilities that use separation by centrifuge. The latter, more expensive system is more efficient at separating glycerin from biodiesel and may be beneficial in the long term, allowing the facility to sell glycerin products with minimal contamination. The engineer may compare these systems in terms of costs and efficiencies. Ultimately, Equation 1.1.2 is used for designing and sizing a commercial plant to determine the daily, monthly, or yearly vegetable oil requirement. This means the engineer must determine the agricultural land area required both for the facility and for the supply of biomass. There are standard tables of oil yields from crops; for example, the highest oil yield comes from palm, at more than 7,018 kg of oil per hectare, compared with 2,245 kg/ha for soybean (Capareda, 2014).
Designing, building, and operating a commercial bioethanol facility also requires knowledge of the type of feedstock to use. Unlike a biodiesel plant, where the manager may choose among numerous vegetable oil types without changing the design, a bioethanol plant is limited to a specific feedstock. The main choices are sugar crops, starchy crops, or lignocellulosic biomass. Designs for these three types of feedstock are not the same; using lignocellulosic biomass as feedstock is the most complex. The simplest are sugar crops, but sugary juice degrades very quickly, so the majority of commercially operating bioethanol plants in the US use starchy crops like corn. Corn grain may be dried, ground, and stored in sacks for future conversion without losing its potency. Examples of commercial bioethanol plants using lignocellulosic feedstocks are those built by POET (Sioux Falls, South Dakota) in Emmetsburg, Iowa, using corncobs (25 MGY or 94.6 ML/yr), and by DuPont (Wilmington, Delaware) in Nevada, Iowa, using corn stover (30 MGY or 113.6 ML/yr).
Bioethanol is an efficient biofuel product. Engineers must be aware of the energy and mass balances required for biofuels production even when waste materials are used in the process. Because potential bioethanol yields vary among crops, each design is specific to its feedstock. The greatest potential bioethanol yield comes from the Jerusalem artichoke (Helianthus tuberosus) (11,219 L/ha). Compare this to corn (maize, Zea mays), at a reported yield of only 2,001 L/ha (Capareda, 2014), and sorghum (Sorghum spp.), at 4,674 L/ha for cane and 1,169 L/ha for grain.
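These per-hectare yields convert directly into the cropland a plant of a given capacity would tie up. The sketch below applies the yields quoted above to a 25 MGY (about 94.6 ML/yr) plant; the plant capacity is a hypothetical choice for illustration:

```python
def land_required_ha(capacity_l_per_yr, yield_l_per_ha):
    """Dedicated cropland needed to feed a bioethanol plant of a given capacity."""
    return capacity_l_per_yr / yield_l_per_ha

for crop, y in [("corn", 2001), ("sorghum cane", 4674), ("Jerusalem artichoke", 11219)]:
    print(crop, round(land_required_ha(94.6e6, y)), "ha")
# corn ~47,276 ha; sorghum cane ~20,240 ha; Jerusalem artichoke ~8,432 ha
```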
While yields are important, the location of a project is also a significant factor in selecting the resource input for a bioethanol or biodiesel production facility. For example, the Jerusalem artichoke has the highest bioethanol yield but grows only in temperate conditions. When the bioethanol business started to boom in the US around 2013, there was an issue with the disposal of a by-product of the process, the distillers grain. In those initial periods, these co-products were simply disposed of with very minimal secondary processing (e.g., animal feed) or sent to a landfill. Options for secondary valorization (i.e., enhancing the price or value of a primary product) have since emerged, such as further energy recovery and use as a raw material for products such as films and membranes. Key issues for engineers include sizing of plants and determining the daily, weekly, and monthly resource requirements for the feedstock, which can be calculated using Equations 1.1.3, 1.1.4, and 1.1.5, modified for inefficiency in practice.
A growing number of animal facilities have taken advantage of the additional energy recovered from anaerobic digestion of manure by converting their lagoons into biogas production facilities. In the US, the covered lagoon is still the predominant biogas digester design. The operation is very simple, since the microorganisms needed for biogas production already exist in the stomachs of ruminants. Key issues for engineers are sizing (based on animal numbers), energy recovery rates, sizing of power production (engine) facilities, sludge production and the energy remaining in the sludge, and economic feasibility. There is increasing interest in designing systems that use the sludge for pyrolysis to recover as much energy as possible from the feedstock. When these additional processes are adopted, the energy recovery from the waste biomass is improved and there is less overall waste. While the sludge is an excellent source of nutrients for crops, its energy value must be judged against its fertilizer value. Financially, the energy case probably wins out, but a holistic analysis would be needed to judge the most desirable option from a sustainability perspective.
The economics of a biofuel facility are dependent on the price of the initial feedstock used. For example, about 85% of the cost of producing biodiesel fuel comes from the cost of the initial feedstock; if the price of refined vegetable oil approaches the price of diesel fuel, it is not economical to turn the vegetable oil into a biofuel. The remaining 15% is mostly the cost of the catalysts used for the conversion process (Capareda, 2014). The cost of chemicals and catalysts together usually amounts to approximately $0.06/L ($0.22/gal) of biodiesel. Chemicals and catalysts are therefore not the limiting factor in making biodiesel, and the same holds for biofuels in general: catalysts are cheap and abundant, and they will not usually run out or become too expensive as production is increased.
In the bioethanol production process, the cost of the fuel is likewise driven mainly by the price of the initial feedstock, such as corn, and by the enzymes used for the process. The process also uses significant volumes of water and natural gas, but only minimal electricity. For example, for every 3.785 L (1 gallon) of bioethanol produced, 1.98 m3 (70 ft3) of natural gas and 155.5 L (41 gal) of water are required (Capareda, 2014). The electricity usage is around 0.185 kWh/L (0.7 kWh/gal); hence, if electricity costs $0.10/kWh, one would spend only around $0.0185/L ($0.07/gal) on electricity. Natural gas is used to heat the beer and recover pure bioethanol. Because water is used so abundantly, this water input must be recycled for the process to be effective and efficient. The current industry standard for bioethanol production from grains is around 416.4 to 431.2 L/tonne (2.8–2.9 gal/bushel); newer feedstocks for bioethanol production must exceed this value.
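The per-liter electricity cost cited above follows directly from the usage figure; a minimal check, assuming the $0.10/kWh price used in the text:

```python
ELEC_KWH_PER_L = 0.185  # electricity use per liter of bioethanol (from the text)

def electricity_cost_per_liter(price_per_kwh=0.10):
    """Electricity cost contribution per liter of bioethanol produced."""
    return ELEC_KWH_PER_L * price_per_kwh

print(electricity_cost_per_liter())          # ~$0.0185 per liter
print(electricity_cost_per_liter() * 3.785)  # ~$0.07 per gallon
```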
The economics of power production via thermal conversion such as pyrolysis or gasification depend upon the sale of electrical power. If a 1-MW biomass power plant is operated continuously for a year, selling the electricity at $0.12/kWh achieves a gross revenue of about $1M (1 MW × 8,760 h/yr × $0.12/kWh ≈ $1.05M). A preliminary evaluation of the economic return of a gasification-for-power facility can therefore be completed by adjusting the selling price of the electricity. Finally, the economics of biofuels production from biomass resources also depend on the price of crude oil from commercial distributors and importers. Biodiesel and bioethanol are mixed with commercial diesel and gasoline and are priced similarly; with a crude oil price below $100/barrel, the production cost for biodiesel and bioethanol must also be kept under $100/barrel.
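The gross-revenue figure above can be verified in a few lines, and extended with a capacity factor for plants that do not run continuously; the 90% factor below is an assumed value for illustration:

```python
def annual_gross_revenue(power_mw, price_per_kwh=0.12, capacity_factor=1.0):
    """Annual gross revenue from selling electrical power."""
    return power_mw * 1000 * 8760 * capacity_factor * price_per_kwh

print(round(annual_gross_revenue(1.0)))                       # 1,051,200 -> ~$1M
print(round(annual_gross_revenue(1.0, capacity_factor=0.9)))  # 946,080
```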
The question of the sustainability of fuel production and usage must be addressed. Many biofuels produced from biomass resources in the USA are now categorized according to their potential greenhouse gas (GHG) reductions and standardized under the Renewable Fuel Standard (RFS) categories (Figure $8$). As shown, cellulosic biofuels (mainly bioethanol and biodiesel from lignocellulosic biomass, coded D3 and D7, respectively) have a reported 60% GHG reduction, compared with biomass-based diesel (coded D4), which has a 50% GHG reduction potential. Biodiesel from vegetable oils and ethanol from corn have lower GHG reduction potentials than cellulosic biofuels and biomass-based diesel. The code D6 covers renewable fuels in general: fuels produced from renewable biomass that replace a quantity of fossil fuel in transportation fuel, heating fuel, or jet fuel (e.g., corn ethanol) and that do not fall under any of the other categories. The code D5 covers advanced biofuels other than corn-starch ethanol, such as sugarcane ethanol and biogas from other waste digesters.
While the net energy ratio (NER) and net energy balance (NEB) are important, they have to be combined with estimates of CO2 emissions, and perhaps with land use, to understand the foundations of the sustainability of biomass resource use. A simple life cycle assessment (LCA) of coal and biomass for power generation (Mann and Spath, 1999) reported 1,022 g of CO2 emissions per kWh of electrical power produced by coal, compared to only 46 g CO2/kWh for biomass. Contrary to the perception that using biomass has zero net CO2 emissions, some CO2 is actually produced for every kWh of electricity generated. It is also important to recognize the competing uses of land and biomass by society (Figure $9$). On one hand, biomass is used for food and feed (the food chain), and on the other for materials and energy (the bioeconomy). All uses have to consider climate change, food security, resource depletion, and energy security. Countries around the world need to balance these uses of biomass resources to work toward a better environment. Future engineers must be able to evaluate the use of biomass resources for materials and biofuels production and relate this to climate change and energy security without depleting already limited resources.
The US Department of Energy created a hierarchy of materials and products from biomass resources (Figure $10$). At the top of the pyramid are high-value fine chemicals, such as vanillin and phenol derivatives, worth more than $6,500 per tonne. Phenol derivatives have the potential to be further converted into expensive lubricants (Maglinao et al., 2019). Next are high-value carbon fibers such as graphene, followed by phenolic substances. There are also new products such as 100% biomass-based printed integrated circuit boards developed by IBM (International Business Machines Corporation, Armonk, NY). Biofuels are in the middle of the pyramid, valued around $650 per tonne, with simple energy recovery by combustion at the bottom.
Examples
Engineers who manage biorefineries need to be aware of energy and mass balances to determine resource allocations, as well as conversion efficiencies to improve plant operations. The process of estimation includes simple conversion efficiency calculations and determining the economic feasibility of the biorefinery.
Example $1$
Example 1: Conversion efficiency calculations
Problem:
The ideal mass and energy balance is difficult to achieve. Plant managers must be able to estimate how close their operations are compared to the ideal conditions. The most common problem faced by a plant manager is to determine the conversion efficiency of refined vegetable oil into biodiesel. This example shows how the actual plant is operated and how close it is to the ideal mass balance. The energy content of refined canola oil is 39.46 MJ/kg and that of canola oil biodiesel was measured in the laboratory to be 40.45 MJ/kg. During an actual run, only about 95% biodiesel is produced from this refined canola oil input instead of the ideal 100% mass yield.
Determine the energy conversion efficiency of this facility from turning refined canola oil energy into fuel energy in biodiesel.
Solution
1. The energy of the output biodiesel product per unit weight is calculated using the 95% mass yield of biodiesel as follows:

$\text { Biodiesel Output }(\mathrm{MJ})=\frac{40.45 \mathrm{MJ}}{\mathrm{kg}} \times 0.95 \mathrm{~kg} \text { Biodiesel }=38.43 \mathrm{MJ}=36,424 \mathrm{Btu}$

2. Using Equation 1.1.1, the conversion efficiency per unit weight is:

$\text { Conversion Efficiency }(\%)=\frac{38.43 \mathrm{MJ}}{39.46 \mathrm{MJ}} \times 100 \%=97.4 \%$
Biodiesel production is perhaps one of the most efficient pathways for the conversion of vegetable oil into biofuel, having very close to 100% energy conversion efficiency.
Example $2$
Example 2: Sizing commercial biodiesel plants
Problem:
Planning to build a commercial biodiesel facility requires taking an inventory of the input resources needed. In this example, the engineer must determine the amount of soybean oil needed (L/year) to build and operate a 3.785-million liter (1-million gallon per year, MGY) biodiesel plant. The densities used are 0.88 kg/L for the biodiesel (also called soybean methyl ester) and 0.917 kg/L for the soybean oil.
Calculate the soybean oil requirements on a daily basis and a monthly basis.
Solution
1. The 3.785 million liters of biodiesel product is converted into mass units:

$\text { Biodiesel Mass Requirement }\left(\frac{\text { tonnes }}{\text { year }}\right)=\frac{3.785 \times 10^{6} \mathrm{~L}}{\text { year }} \times 0.88 \frac{\mathrm{kg}}{\mathrm{L}} \times \frac{\text { tonne }}{1000 \mathrm{~kg}}=3,330.8 \frac{\text { tonnes }}{\text { year }}\left(3,671.6 \frac{\text { US tons }}{\text { year }}\right)$

2. This biodiesel mass of 3,330.8 tonnes per year is then equivalent to the mass of soybean oil required for the plant (ideal 1:1 mass balance). This must be converted into the volumetric units used for trading vegetable oils:

$\text { Soybean Oil Volume Requirement}\left(\frac{\mathrm{L}}{\text { year }}\right)=\frac{3,330.8 \text { tonnes }}{\text { year }} \times \frac{1,000 \mathrm{~kg}}{1 \text { tonne }} \times \frac{\mathrm{L}}{0.917 \mathrm{~kg}}=3,632,279 \frac{\mathrm{L}}{\text { year }}=959,651 \frac{\text { gallons }}{\text { year }}$

3. Thus, the yearly soybean oil requirement for this biodiesel facility is more than 3.6 million liters (0.96 million gallons). The monthly and daily requirements are:

$\text { Soybean Oil Volume Requirement }\left(\frac{\mathrm{L}}{\text { month }}\right)=\frac{3,632,279 \mathrm{~L}}{\text { year }} \times \frac{1 \text { year }}{12 \text { months }}=302,690 \frac {\mathrm{L}} {\text { month }}=79,971 \frac{\text { gallons }}{\text { month }}$

$\text { Soybean Oil Volume Requirement }\left(\frac{\mathrm{L}}{\text { day }}\right)=\frac{3,632,279 \mathrm{~L}}{\text { year }} \times \frac{1 \text { year }}{365 \text { days }}=9,951 \frac{\mathrm{L}}{\text { day }}=2,629 \frac{\text { gallons }}{\text { day }}$
Further, this soybean oil requirement may also be used to estimate the required acreage for soybean production if one has data on soybean oil yield per unit area. For example, a reported soybean oil yield of around 2,245 kg/ha (2,000 lb/acre) (Capareda, 2014) results in an estimated 1,483.6 ha (3,664 ac) of dedicated soybean land needed to supply this plant year-round.
Example $3$: Energy balance in the recovery of bioethanol
Problem:
Bio-ethanol may be produced from sweet sorghum via fermentation of its sugars. The sweet sorghum juice is fermented using yeast (Saccharomyces cerevisiae). The resulting fermented product, called beer, contains about 10% bio-ethanol. A higher ethanol concentration is required for engine use and may be recovered from the fermented product through a simple distillation process: the liquid fermented product is heated until the bio-ethanol evaporates (around 80°C, the boiling point of pure ethanol), and the vapor is condensed, or liquefied, in a simple condenser. In village-level systems, fuel wood is used to heat the boiler in which the fermented material is placed.
A village-level ethanol production scheme based on sweet sorghum has the following data for a series of experiments. In the first experiment, the operator was not mindful of the amount of fuel wood used for the recovery of highly concentrated ethanol and used too much, about 20 kg of waste fuel wood for the boiler. In addition, the boiler was not insulated during this run. In the second experiment, the operator insulated the boiler and was very careful in the use of fuel wood to adjust the boiler temperature below the boiling point of pure ethanol. Only about 10 kg of fuel wood was used, about half of the initial experiment. Assume that the energy of fuel wood is 20 MJ/kg and the heating value of ethanol is around 18 MJ/L. In both experiments, 120 liters of liquid fermented material (beer) was used and 13 liters of highly concentrated ethanol was recovered. Discuss the energy balance for each experiment.
Solution
1. In the first experiment, the operator used about 400 MJ of input energy and produced 13 liters of ethanol with an energy content of 234 MJ:
$\text { Energy from the Fuel Wood }=20 \mathrm{~kg} \text { fuel wood } \times \frac{20 \mathrm{MJ}}{\mathrm{kg}}=400 \mathrm{MJ}$
$\text { Energy from the Ethanol }=13 \mathrm{~L} \times \frac{18 \mathrm{MJ}}{\mathrm{L}}=234 \mathrm{MJ}$
Clearly, the operator used more energy from the fuel wood than was recovered in the ethanol, demonstrating an unsustainable process.
2. The second experiment used only about 200 MJ of input wood energy, which is less than the 234 MJ of energy in the produced ethanol:
$\text { Energy from the Fuel Wood }=10 \mathrm{~kg} \text { fuel wood } \times \frac{20 \mathrm{MJ}}{\mathrm{kg}}=200 \mathrm{MJ}$
By careful use of fuel wood, more energy is recovered in the bioethanol than is spent on distillation, a relatively efficient recovery process.
Note that there are other energy amounts expended from planting, harvesting, and transport of the sweet sorghum feedstock and this experiment is only one portion of the life cycle of bioethanol production, recovery, and use.
Example $4$: Biogas production and use from animal manure
Problem:
Sizing a biogas facility is one task assigned to an engineer who operates a commercial biogas facility. One common calculation is to determine the electrical power that can be produced from the manure collected from a 500-head dairy facility, where electrical power is needed for 8 hours per day. The thermal conversion efficiency of an internal combustion engine is approximately 25%, with a mechanical-to-electrical conversion efficiency of 80%. The specific biogas yield was found to be 0.23 m3 biogas/kg volatile solids (Hamilton, 2012; ASABE Standard D384.2). Each mature dairy cow produces an average of 68 kg manure per head per day containing 7.5% volatile solids. The energy content of the biogas was 24.2 MJ/m3 (650 Btu/ft3).
Size the generator to use for this facility.
Solution
1. The amount of biogas produced from the 500-head facility is calculated as follows:
$\text { Biogas }\left(\frac{\mathrm{m}^{3}}{\text { day }}\right)=500 \text { head } \times \frac{68 \mathrm{~kg} \text { wet manure }}{\text { head per day }} \times \frac{0.075 \mathrm{~kg} \mathrm{VS}}{\mathrm{kg} \text { wet manure }} \times \frac{0.23 \mathrm{~m}^{3} \text { biogas }}{\mathrm{kg} \mathrm{VS}}=586.5 \frac{\mathrm{m}^{3}}{\text { day }}$
2. The theoretical power production, with the day's biogas used over 8 hours of generator operation, is calculated as follows:
$\text { Power }(k W)=\frac{586.5 \mathrm{~m}^{3}}{\text { day }} \times \frac{1 \text { day }}{8 \mathrm{hrs}} \times \frac{24,200 \mathrm{~kJ}}{\mathrm{~m}^{3}} \times \frac{1 \mathrm{hr}}{3600 \mathrm{~s}} \times \frac{\mathrm{kW}}{\mathrm{kJ} / \mathrm{s}}=492.8 \mathrm{~kW}$
3. The actual power produced, based on 25% engine efficiency and 80% mechanical-to-electrical efficiency, is calculated as follows:
$\text { Actual Power }(\mathrm{kW})=492.8 \mathrm{~kW} \times 0.25 \times 0.80=98.6 \mathrm{~kW}$
A generator with a size close to 100 kW of power output will be required.
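A short Python sketch of this sizing calculation follows; the constant names are illustrative, and all values come from the problem statement.

```python
# A minimal sketch of the Example 4 generator sizing; names illustrative.

HEAD_COUNT = 500
MANURE_KG_DAY = 68    # kg wet manure per head per day
VS_FRACTION = 0.075   # kg VS per kg wet manure
BIOGAS_YIELD = 0.23   # m^3 biogas per kg VS
BIOGAS_HV = 24.2      # MJ per m^3 biogas
RUN_HOURS = 8         # generator operating hours per day
ENGINE_EFF, GEN_EFF = 0.25, 0.80

biogas_m3_day = HEAD_COUNT * MANURE_KG_DAY * VS_FRACTION * BIOGAS_YIELD  # 586.5
thermal_kw = biogas_m3_day * BIOGAS_HV * 1000 / (RUN_HOURS * 3600)  # kJ/s = kW
electrical_kw = thermal_kw * ENGINE_EFF * GEN_EFF

print(f"Theoretical power: {thermal_kw:.1f} kW")    # ~492.8 kW
print(f"Electrical power: {electrical_kw:.1f} kW")  # ~98.6 kW -> ~100 kW generator
```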
Example $5$: Basic biomass pyrolysis energy and mass balances
Problem:
The thermal conversion of waste biomass into useful energy is a common calculation for an engineer. This simple example is the conversion of coconut shell (waste biomass) into bio-char (useable fuel). In the experiment, the engineer pyrolyzed 1 kg of coconut shell at a temperature of 300°C. The measured energy content of this high-energy-density biomass was 20.6 MJ/kg. The pyrolysis experiment produced about 0.80 kg of bio-char, whose heating value was measured to be 22 MJ/kg. Minimal liquid and gaseous products were produced in this low-temperature pyrolysis process. Determine the overall conversion efficiency (ηe) for the bio-char conversion process, and also calculate the amount of energy retained in the bio-char and the energy lost through the process.
Solution
1. Equation 1.1.1 is used directly to estimate the conversion efficiency for bio-char production:
$\text { Energy Conversion Efficiency }(\%)=\frac{\text { Energy Output }(\mathrm{MJ})}{\text { Energy Input }(\mathrm{MJ})} \times 100 \%$ (Equation $1$)
First, calculate the total energy of the bio-char per kg of material pyrolyzed:
$\text { Bio-char Energy }(\mathrm{MJ})=0.80 \mathrm{~kg} \times \frac{22 \mathrm{MJ}}{\mathrm{kg}}=17.6 \mathrm{MJ}$
2. The overall conversion efficiency (ηe) is then calculated as follows:
$\eta_{\mathrm{e}}=\frac{17.6 \mathrm{MJ}}{20.6 \mathrm{MJ}} \times 100 \%=85.4 \%$
This value also indicates the percentage of energy retained in the bio-char.
3. The energy lost through the process is simply the difference between the original energy of the biomass and the energy retained in the bio-char:
$\text { Energy } \operatorname{Loss}(\mathrm{MJ})=20.6 \mathrm{MJ}-17.6 \mathrm{MJ}=3 \mathrm{MJ}$
4. This energy loss is equivalent to 14.6%, the difference between 100% and the process conversion efficiency of 85.4%.
Notice the high yield of solid bio-char at this pyrolysis temperature, with minimal yields of liquid and gaseous products, which are considered losses at this point. However, at much higher pyrolysis temperatures, more liquid and gaseous products, including synthesis gas, are produced. Example 1.1.6 shows the uniqueness of the pyrolysis process in generating a wider range of co-products. Complete energy and mass balances of the process may also be estimated to evaluate overall conversion efficiencies.
Example $6$: Basic biomass pyrolysis energy and mass balances
Problem:
An engineer conducted an experiment to pyrolyze 1.23 kg of sorghum biomass (heating value = 18.1 MJ/kg) at a temperature of 600°C in an auger pyrolyzer. The primary purpose of the experiment was to determine the energy contained in various co-products of the process. The input energy includes that from the auger motor (5 amps, 220 V) and tube furnace (2,400 Watts). The time of testing was 12 minutes. Data gathered during the experiments and other associated parameters needed to perform complete energy and mass balances are as follows:
• Amount of bio-char produced = 0.468 kg
• Volume of bio-oil produced = 225 mL
• Density of bio-oil = 1.3 g/mL
• Volume of syngas produced = 120 L
• Heating value of bio-char = 23.99 MJ/kg
• Heating value of bio-oil = 26.23 MJ/kg
The composition and heating values of the syngas components are given in the table below.

| Primary Gases | H2 | CH4 | CO |
|---|---|---|---|
| % Yield | 20% | 10% | 15% |
| Density (kg/m3) | 0.0899 | 0.656 | 1.146 |
| HV (MJ/kg) | 142 | 55.5 | 10.112 |
Determine the energy and mass balances for this process and report how much energy was contained in each of the co-products, as well as the overall conversion efficiency.
Solution
1. Draw a schematic of the complete mass and energy balance process as in Figure $11$.
2. Calculate the energy contained in the original biomass:
$\text { Biomass Energy }(\mathrm{MJ})=1.23 \mathrm{~kg} \times \frac{18.1 \mathrm{MJ}}{\mathrm{kg}}=22.26 \mathrm{MJ}$
3. Calculate the input energy from the furnace over the 12-minute run:
$\text { Thermal Energy }(\mathrm{MJ})=2.4 \mathrm{~kW} \times \frac{12}{60} \mathrm{hr} \times \frac{3.6 \mathrm{MJ}}{1 \mathrm{kWh}}=1.728 \mathrm{MJ}$
4. Calculate the input energy from the auger:
$\text { Auger Energy }(\mathrm{MJ})=220 \mathrm{~V} \times 5 \mathrm{~A} \times \frac{\mathrm{kW}}{1,000 \mathrm{VA}} \times \frac{12}{60} \mathrm{hr} \times \frac{3.6 \mathrm{MJ}}{1 \mathrm{kWh}}=0.792 \mathrm{MJ}$
5. Calculate the energy contained in the bio-char:
$\text { Bio-char Energy }(\mathrm{MJ})=0.468 \mathrm{~kg} \times \frac{23.99 \mathrm{MJ}}{\mathrm{kg}}=11.23 \mathrm{MJ}$
6. Calculate the energy contained in the bio-oil:
$\text { Bio-oil Energy }(\mathrm{MJ})=225 \mathrm{~mL} \times \frac{1.3 \mathrm{~g}}{\mathrm{~mL}} \times \frac{\mathrm{kg}}{1000 \mathrm{~g}} \times \frac{26.23 \mathrm{MJ}}{\mathrm{kg}}=7.67 \mathrm{MJ}$
7. The total energy content of the syngas is the sum of energy in the component gases. As given, about 120 L of syngas was produced, with 20% H2 (24 L), 10% CH4 (12 L), and 15% CO (18 L). The energy content of each syngas component is calculated as:
$\mathrm{H}_{2}(\mathrm{MJ})=24 \mathrm{~L} \times \frac{0.0899 \mathrm{~kg}}{\mathrm{~m}^{3}} \times \frac{1 \mathrm{~m}^{3}}{1,000 \mathrm{~L}} \times \frac{142 \mathrm{MJ}}{\mathrm{kg}}=0.306 \mathrm{MJ}$
$\mathrm{CH}_{4}(\mathrm{MJ})=12 \mathrm{~L} \times \frac{0.656 \mathrm{~kg}}{\mathrm{~m}^{3}} \times \frac{1 \mathrm{~m}^{3}}{1,000 \mathrm{~L}} \times \frac{55.5 \mathrm{MJ}}{\mathrm{kg}}=0.437 \mathrm{MJ}$
$\mathrm{CO}(\mathrm{MJ})=18 \mathrm{~L} \times \frac{1.146 \mathrm{~kg}}{\mathrm{~m}^{3}} \times \frac{1 \mathrm{~m}^{3}}{1,000 \mathrm{~L}} \times \frac{10.112 \mathrm{MJ}}{\mathrm{kg}}=0.209 \mathrm{MJ}$
The total energy content of the syngas is:
$\text { Syngas }(\mathrm{MJ})=0.306 \mathrm{MJ}+0.437 \mathrm{MJ}+0.209 \mathrm{MJ}=0.952 \mathrm{MJ}$
Most of the energy is retained in the bio-char (11.23 MJ), followed by the bio-oil (7.67 MJ) and the syngas (0.95 MJ).
8. The energy balance is:
$\text { Input Energy }(\mathrm{MJ})=22.26 \mathrm{MJ}+1.728 \mathrm{MJ}+0.792 \mathrm{MJ}=24.78 \mathrm{MJ}$
$\text { Output Energy }(\mathrm{MJ})=11.23 \mathrm{MJ}+7.67 \mathrm{MJ}+0.95 \mathrm{MJ}=19.85 \mathrm{MJ}$
9. Calculate the conversion efficiency:
$\text { Conversion Efficiency }(\%)=\frac{\text { Output }}{\text { Input }} \times 100 \%=\frac{19.85}{24.78} \times 100 \%=80.1 \%$
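The full balance is convenient to tabulate in code. Below is a minimal Python sketch of the Example 6 accounting; the dictionary layout and names are illustrative, and the inputs are the problem's values.

```python
# A minimal sketch of the Example 6 energy balance; names illustrative.

RUN_HOURS = 12 / 60  # 12-minute run
MJ_PER_KWH = 3.6

inputs_mj = {
    "biomass": 1.23 * 18.1,                            # 22.26 MJ
    "furnace": 2.4 * RUN_HOURS * MJ_PER_KWH,           # 1.728 MJ
    "auger": 220 * 5 / 1000 * RUN_HOURS * MJ_PER_KWH,  # 0.792 MJ
}

# (volume L, density kg/m^3, heating value MJ/kg) for each syngas component
syngas = {"H2": (24, 0.0899, 142.0), "CH4": (12, 0.656, 55.5),
          "CO": (18, 1.146, 10.112)}
syngas_mj = sum(v / 1000 * rho * hv for v, rho, hv in syngas.values())  # ~0.95

outputs_mj = {
    "bio-char": 0.468 * 23.99,       # 11.23 MJ
    "bio-oil": 0.225 * 1.3 * 26.23,  # L x kg/L x MJ/kg = 7.67 MJ
    "syngas": syngas_mj,
}

total_in, total_out = sum(inputs_mj.values()), sum(outputs_mj.values())
print(f"Input {total_in:.2f} MJ, output {total_out:.2f} MJ, "
      f"efficiency {total_out / total_in * 100:.1f}%")  # ~80.1%
```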
Example $7$: Present and future value of investment in a bioenergy systems facility
Problem:
An investor deposited $1,100,000 in a bank in 2007 rather than investing it in a commercial biodiesel facility. Determine its estimated future value in 2018 using the future value equation, assuming a bank rate of return of 2.36%. Compare this to investing the money in operating a biodiesel facility with a year-11 return of $2M.
Solution
1. This is a simple future value calculation using Equation 1.1.3:
$\mathrm{FV}=\mathrm{PV} \times(1+r)^{n}$ (Equation $3$)
where FV = future value of cash flow ($)
PV = present value of cash flow ($)
r = rate of return or discount rate (decimal)
n = number of periods (unitless)
$\text { Future Value }=\$1,100,000 \times(1+0.0236)^{11}=\$1,421,758$
2. If invested in a bank, the future value after 11 years would be about $1.4M, compared to a return of $2M from investing in the biodiesel facility. In this example, investing $1.1M in a biodiesel facility generated more value than putting the money in a bank.

Example $8$: Depreciation of a biodiesel plant

Problem:

An engineer was asked to report the yearly depreciation for a biodiesel facility whose initial asset value is $1,100,000. The lifespan of the facility is 20 years and the salvage value of all equipment and assets at the end of this life is 10% of the initial capital value of the facility. Use the straight-line method and the sum-of-digits method for the depreciation calculations. Describe the yearly variations in depreciation for each method.
Solution
1. The straight-line method uses Equation 1.1.7:
$\text { Straight Line Depreciation }(\$)=\frac{\text { Principal}-\text{Salvage Value }}{\text { Life of Unit }}$ (Equation $7$)
$=\frac{\$1,100,000-\$110,000}{20}=\$49,500 / \text { year }$
The depreciation is a constant $49,500 per year over the 20-year life.
2. The sum-of-digits method first calculates the sum of the year digits:
$(1+2+3+\cdots+19+20)=210$
The depreciation factor for each year uses the years in reverse order, with the remaining life in the numerator and the sum of digits in the denominator, so year 1 has a factor of 20/210, year 2 a factor of 19/210, and so on:
$\text { Year } 1 \text { Depreciation }(\$)=\frac{20}{210} \times(\$1,100,000-\$110,000)=\$94,285$
$\text { Year } 2 \text { Depreciation }(\$)=\frac{19}{210} \times(\$990,000)=\$89,571$
$\text { Year } 3 \text { Depreciation }(\$)=\frac{18}{210} \times(\$990,000)=\$84,857$
Continue the calculations for years 4 through 19 using factors 17/210 through 2/210.
$\text { Year } 20 \text { Depreciation }(\$)=\frac{1}{210} \times(\$990,000)=\$4,714$
Note that in both methods, the end-of-project asset value is approximately equal to the given salvage value. If the data were plotted, the rapid decline in value over the first few years under the sum-of-digits method would reflect the actual depreciation pattern of many facilities.
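Both schedules are easy to generate programmatically. The following Python sketch (names illustrative) reproduces the straight-line and sum-of-digits values above.

```python
# A minimal sketch of the two Example 8 depreciation schedules.

PRINCIPAL, LIFE = 1_100_000, 20
SALVAGE = 0.10 * PRINCIPAL
base = PRINCIPAL - SALVAGE            # $990,000 depreciable over the life

straight_line = [base / LIFE] * LIFE  # $49,500 every year
digits = LIFE * (LIFE + 1) // 2       # 1 + 2 + ... + 20 = 210
sum_of_digits = [(LIFE - y) / digits * base for y in range(LIFE)]  # declining

for year in (1, 2, 3, 20):
    print(f"Year {year:2d}: straight line ${straight_line[year - 1]:,.0f}, "
          f"sum of digits ${sum_of_digits[year - 1]:,.0f}")
# Both schedules total $990,000, leaving the $110,000 salvage value.
```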
Example $9$: Calculation of net present value, benefit cost ratio, payback period, and internal rate of return for a biodiesel facility

Problem:

An engineer can be asked to evaluate a number of projects to estimate funding requirements. Comparative economic indicators can be used to compare one project proposal to another. Common indicators are net present value (NPV), benefit cost ratio (BCR), payback period (PBP), and internal rate of return (IRR). A 1,892,500 L/yr (half a million gallon per year) biodiesel facility with an initial capital cost of $1,100,000 and 10% salvage value has the following baseline data:

| Capital Cost (CC) | $1,100,000 | Repair & Maintenance Cost | 3% of CC |
|---|---|---|---|
| Biodiesel Plant Cost | $581,241/ML ($2,200,000/MG) | Interest | 7.5% |
| Tax & Ins. | 2% of CC | Conv. Eff. | 99% |
| Life | 20 years | Labor | 8 hrs/day |
| Labor Cost | $15/hr | Vegetable Oil Price | $0.13/L ($0.50/gal) |
| Operation | 365 days/yr | One Manager | $60,000/yr |
| Processing Cost | $0.13/L ($0.50/gal) | Personnel | 6 full-time |
| Selling Price | $0.53/L ($2.00/gal) | Glycerin Yield | 10% |
| Glycerin Price | $0.18/L ($0.70/gal) | Biodiesel Output | 1,873,575 L (495,000 gal) |
| Tax Credit | 28% | Discount Rate | 2.36% |
| Depreciation | Straight Line | Salvage Value | 10% of initial capital cost |
The data can be used to calculate some economic performance data:
• Average yearly gross income for the project with tax credit = $1,314,506 • Average yearly net income for the project with tax credit =$279,305
• Average discounted net benefits per year = $220,614 • Average discounted costs per year =$817,670
• Average discounted gross benefits = $1,038,284
Use these data to calculate NPV, BCR, PBP, and IRR.
Solution
1. Calculate the NPV from Equation 1.1.4,
$\mathrm{NPV}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \frac{\text { cash inflow }}{(1+\mathrm{i})^{\mathrm{n}}}-\text { cash outflow }$ (Equation $4$)
or simply take the difference between the average yearly discounted benefits and the average yearly discounted costs:
$\mathrm{NPV}=\$1,038,284-\$817,670=\$220,614$
The NPV is positive; hence, the project is economically feasible.
2. The BCR is the ratio of the discounted benefits to the discounted costs:
$\mathrm{BCR}=\frac{\text { project benefits }}{\text { project costs }}$ (Equation $5$)
$=\frac{\$1,038,284}{\$817,670}=1.27$
The BCR is greater than 1, also showing the project is feasible.
3. The PBP is the ratio of the initial capital cost to the yearly average discounted net revenue:
$\text { PBP }(\text { years })=\frac{\text { project costs }}{\text { annual cash inflows }}$ (Equation $6$)
$=\frac{\$1,100,000}{\$220,614}=5 \text { years }$
4. To calculate the internal rate of return, compare the discounted net benefits throughout the life of the project under an assumed discount rate. Manually, this is a trial-and-error method: find one assumed discount rate that results in net benefits greater than zero (positive) and one that results in net benefits less than zero (negative). The discount rate at which the net benefit is exactly zero is the internal rate of return for the project. For example, when the above data are entered in a spreadsheet with an assumed discount rate of 30%, the discounted net benefit is estimated at −$173,882, a negative value. When a discount rate of 20% is used, the discounted net benefit is $260,098, a positive value. Hence, the internal rate of return must lie between these assumed values (that is, between 20% and 30%). By linear interpolation (plotting these values in X-Y Cartesian coordinates, like a cash flow, and comparing the smaller triangle with the larger triangle, X being the discount rate increment above 20% and Y the net benefits in $), the internal rate of return is calculated as follows:
$\frac{\mathrm{X}}{\$260,098}=\frac{(30 \%-20 \%)}{(\$260,098+\$173,882)}$
$X=6 \%$
$\mathrm{IRR}=20 \%+6 \%=26 \%$
Thus the IRR is around 26%, higher than the bank interest rate of 7.5%. The project is then declared economically feasible using this parameter. (Note: When calculated by spreadsheet, the IRR value will differ slightly from this manual method.)
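A minimal Python sketch of these four indicators follows, using the averaged discounted figures given in the problem; the IRR step mirrors the manual linear interpolation between the 20% and 30% trial rates rather than a spreadsheet IRR function.

```python
# A minimal sketch of the Example 9 indicators; names illustrative.

benefits, costs = 1_038_284, 817_670  # average discounted $/yr (given)
capital = 1_100_000

npv = benefits - costs   # $220,614 > 0 -> feasible
bcr = benefits / costs   # 1.27 > 1 -> feasible
pbp = capital / npv      # ~5 years

# Linear interpolation between the two trial discount rates:
r_lo, nb_lo = 0.20, 260_098    # rate with positive net benefit
r_hi, nb_hi = 0.30, -173_882   # rate with negative net benefit
irr = r_lo + (r_hi - r_lo) * nb_lo / (nb_lo - nb_hi)

print(f"NPV ${npv:,}, BCR {bcr:.2f}, PBP {pbp:.1f} yr, IRR {irr:.0%}")
```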
Example $10$: Determining net energy ratio and net energy balance for corn ethanol with and without co-products recycling
Problem:
To assess the merit of converting biomass into fuel as recommended by USDA, one can use the net energy ratio (NER) and net energy balance (NEB) of a facility. Numerous studies conducted by USDA on corn ethanol production from wet milling and dry milling have established baseline data.
The total energy used for each process, without considering the use of co-products as sources of additional energy:
• Total energy used for the dry milling process = 19.404 MJ/L
• Total energy used for the wet milling process = 20.726 MJ/L
• Heating value of ethanol produced = 21.28 MJ/L
The total energy used for each process when all the by-products of the system are used to supply energy requirements for the facility:
• Total energy used for the dry milling process = 15.572 MJ/L
• Total energy used for the wet milling process = 16.482 MJ/L
• Heating value of ethanol produced = 21.28 MJ/L
Determine whether it is better to use wet or dry milling, and whether it is better to use co-products as a source of energy within the facility.
Solution
1. The NER is calculated using Equation 1.1.11:
$\mathrm{NER}=\frac{\text { Energy Content of Fuel }(\mathrm{MJ})}{\text { Energy Required to Produce the Biofuel }(\mathrm{MJ})}$ (Equation $11$)
For the dry mill process, $\mathrm{NER}=\frac{21.28 \mathrm{MJ} / \mathrm{L}}{19.404 \mathrm{MJ} / \mathrm{L}}=1.10$
For the wet mill process, $\mathrm{NER}=\frac{21.28 \mathrm{MJ} / \mathrm{L}}{20.726 \mathrm{MJ} / \mathrm{L}}=1.03$
The dry mill process has a higher NER than the wet mill process.
2. The NEB is calculated using Equation 1.1.12:
$\text { NEB = Biofuel Heating Value(MJ) - Energy Required to Produce the Biofuel(MJ) }$ (Equation $12$)
For the dry mill process, $\mathrm{NEB}=21.28 \frac{\mathrm{MJ}}{\mathrm{L}}-19.404 \frac{\mathrm{MJ}}{\mathrm{L}}=1.876 \frac{\mathrm{MJ}}{\mathrm{L}}$
For the wet mill process, $\mathrm{NEB}=21.28 \frac{\mathrm{MJ}}{\mathrm{L}}-20.726 \frac{\mathrm{MJ}}{\mathrm{L}}=0.554 \frac{\mathrm{MJ}}{\mathrm{L}}$
The dry milling process is better than the wet milling process according to both the NER and NEB when co-products are not used to supply energy.
3. The NER for the dry mill process with co-product energy recycling is:
$\mathrm{NER}=\frac{21.28 \mathrm{MJ} / \mathrm{L}}{15.572 \mathrm{MJ} / \mathrm{L}}=1.37$
The NER for the wet mill process when co-products are reused is:
$\mathrm{NER}=\frac{21.28 \mathrm{MJ} / \mathrm{L}}{16.482 \mathrm{MJ} / \mathrm{L}}=1.29$
4. The NEB for the dry mill process when co-products are reused is:
$\mathrm{NEB}=21.28 \frac{\mathrm{MJ}}{\mathrm{L}}-15.572 \frac{\mathrm{MJ}}{\mathrm{L}}=5.708 \frac{\mathrm{MJ}}{\mathrm{L}}$
The NEB for the wet mill process when co-products are reused is:
$\mathrm{NEB}=21.28 \frac{\mathrm{MJ}}{\mathrm{L}}-16.482 \frac{\mathrm{MJ}}{\mathrm{L}}=4.798 \frac{\mathrm{MJ}}{\mathrm{L}}$
The dry milling process remains the better option, and both the NER and NEB indicate that the co-products should be used as part of the system design.
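The four cases can be compared in one short loop. Below is a minimal Python sketch using the USDA baseline values above; the names are illustrative.

```python
# A minimal sketch comparing the four Example 10 cases.

ETHANOL_HV = 21.28  # MJ per L of ethanol
process_energy = {  # MJ used per L of ethanol produced (USDA baselines)
    "dry mill": 19.404,
    "wet mill": 20.726,
    "dry mill + co-products": 15.572,
    "wet mill + co-products": 16.482,
}

for name, e_in in process_energy.items():
    ner = ETHANOL_HV / e_in  # Equation 1.1.11
    neb = ETHANOL_HV - e_in  # Equation 1.1.12, MJ/L
    print(f"{name:24s} NER = {ner:.2f}, NEB = {neb:.3f} MJ/L")
```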
Image Credits
Figure 1. Capareda, S. (CC By 4.0). (2020). Pathways for the conversion of biomass resources into energy.
Figure 2. Capareda, S. (CC By 4.0). (2020). Schematic of the commercial process of making biodiesel fuel.
Figure 3. Capareda, S. (CC By 4.0). (2020). Schematic of the commercial process for making bioethanol via dry milling.
Figure 4. Capareda, S. (CC By 4.0). (2020). Schematic representation of various designs of high-rate biogas digesters.
Figure 5. Capareda, S. (CC By 4.0). (2020). The outputs and applications of biomass pyrolysis.
Figure 6. Capareda, S. (CC By 4.0). (2020). Schematic diagram of a fluidized bed gasifier.
Figure 7. Capareda, S. (CC By 4.0). (2020). The relationships between capital expenditure (CAPEX) and operating expenditure (OPEX) for a bioenergy project.
Figure 8. Capareda, S. (CC By 4.0). (2020). Schematic of US nested renewable fuels categories under the renewable fuels standards.
Figure 9. Capareda, S. (CC By 4.0). (2020). The role of biomass resources for a sustainable low-carbon future.
Figure 10. Capareda, S. (CC By 4.0). (2020). Hierarchy of biomass utilization from high-value, low-volume applications (top) to low-value, high-volume applications (bottom).
Figure 11. Capareda, S. (CC By 4.0). (2020). Distribution of mass and energy from all products of the pyrolysis process.
References
Capareda, S. (2014). Introduction to biomass energy conversions. Boca Raton, FL: CRC Press. https://doi.org/10.1201/b15089.
Hamilton, D. W. (2012). Organic matter content of wastewater and manure, BAE 1760. Stillwater: Oklahoma Cooperative Extension Service.
Lettinga, G., van Velsen, A. F., Hobma, S. W., de Zeeuw, W., & Klapwijk, A. (1980). Use of the upflow sludge blanket (USB) reactor concept for biological wastewater treatment, especially for anaerobic treatment. Biotech. Bioeng., 22(4), 699-734. https://doi.org/10.1002/bit.260220402.
Lilienthal, P., & Lambert, T. W. (2011). HOMER. The micropower optimization model. Getting started guide for HOMER legacy. Ver. 2.68. Boulder, CO and Golden, CO: Homer Energy and NREL USDOE.
Maglinao, R. L., Resurreccion, E. P., Kumar, S., Maglinao, A. L., Capareda, S., & Moser, B. R. (2019). Hydrodeoxygenation-alkylation pathway for the synthesis of a sustainable lubricant improver from plant oils and lignin-derived phenols. Ind. Eng. Chem. Res., 1-50. https://doi.org/10.1021/acs.iecr.8b05188.
Pimentel, D. (2003). Ethanol fuels: Energy balance, economics, and environmental impacts are negative. Natural Resour. Res., 12(2), 127-134. https://doi.org/10.1023/A:1024214812527.
Stout, B. A. (1984). Energy use and management in agriculture. Belmont, CA: Breton Publ.
Watts, S., & Hertvik, J. (2018). CapEx vs OpEx for IT & Cloud: What’s the difference? The Business of IT Blog, January 2 issue. Retrieved from https://www.bmc.com/blogs/capex-vs-opex/.
Hamed El Mashad
Biological and Agricultural Engineering Department
University of California, Davis
Davis, CA, USA
and
Agricultural Engineering Department
Mansoura University
El Mansoura, Egypt
Ruihong Zhang
Biological and Agricultural Engineering Department
University of California, Davis
Davis, CA, USA
Key Terms
| Anaerobic digesters | Suspended growth | Sizing |
|---|---|---|
| Biochemical and physical processes | Biogas cleaning | Yield estimation |
| Fixed growth | Biogas upgrading | Commercial uses |
Introduction
Fossil fuel is currently the main energy source in the world. With its limited supplies and the environmental pollution caused by its use, there is a need to increase the use of renewable energy. Sources of renewable energy include the sun, winds, tides, waves, rain, geothermal heat, and biomass. Biomass is plant or animal material that can be used to produce bioenergy as heat or fuel. The technologies for converting biomass into bioenergy can be classified as biochemical, physicochemical, and thermal-chemical technologies. The main biochemical technologies include anaerobic digestion to produce biogas and fermentation to produce alcohols such as ethanol and butanol. The main physicochemical technology is transesterification to produce biodiesel, and the main thermal-chemical technologies are combustion to produce heat, torrefaction to produce solid fuels, pyrolysis to produce oil, and gasification to produce syngas. The selection of a specific technology depends on the composition of the available biomass as well as the desired bioenergy considering economics, social implications, and environmental impact.
Biogas energy is produced by anaerobic digestion of organic matter, which is carried out by a consortium of microorganisms in the absence of oxygen. Airtight vessels called digesters or reactors are used for the process. Biogas is a mixture of methane (CH4), carbon dioxide (CO2), and traces of other gases, such as ammonia (NH3) and hydrogen sulfide (H2S). Anaerobic digestion technology can be used to treat organic materials, such as food residues and wastewater, thus reducing the amount of material to be disposed of, while generating bioenergy.
This chapter introduces biogas production using anaerobic digestion of organic waste (e.g., food scraps, animal manure, grass clippings and straws). It introduces the processes involved in anaerobic digestion, the major factors that influence these processes, the biogas produced, and common types of digesters. It also presents methods for determining biogas and methane yields.
Outcomes
After reading this chapter, you should be able to:
• Explain the microbiological, chemical, and physical processes in anaerobic digestion
• Describe the types of anaerobic digesters used for biogas production and the factors influencing their performance
• Describe some methods of cleaning biogas for energy generation
• Estimate the quantity of biogas, methane, and energy that can be produced from an organic material
• Calculate the volume of a digester to treat a certain amount of a substrate
Concepts
Anaerobic digestion is a bioconversion process that is carried out by anaerobic microorganisms including anaerobic bacteria and methanogenic archaea to break down and convert organic matter into biogas, which is mainly a mixture of CH4 and CO2.
Biochemical Processes
Anaerobic digestion involves four major biochemical processes: hydrolysis, acidogenesis, acetogenesis, and methanogenesis. Figure $1$ shows these processes for the conversion of organic substrates (such as proteins, carbohydrates, and lipids) into biogas.
Hydrolysis converts complex organic matter, using extracellular and intracellular enzymes from the microorganisms, into monomeric or dimeric components such as amino acids, single sugars, and long chain fatty acids (LCFA). During acidogenesis, the hydrolysis products are converted by acidogenic bacteria into smaller molecules such as volatile fatty acids (VFA), alcohols, hydrogen, and NH3. In acetogenesis, alcohols and VFA (other than acetate) are converted to acetic acid or to hydrogen and CO2. The acidogenic and acetogenic bacteria are a diverse group of both facultative and obligate anaerobic microbes including Clostridium, Peptococcus, Bifidobacterium, Corynebacterium, Lactobacillus, Actinomyces, Staphylococcus, Streptococcus, Desulfomonas, Pseudomonas, Selemonas, Micrococcus, and Escherichia coli (Kosaric and Blaszczyk, 1992). During methanogenesis, acetic acid and methanol (an alcohol) are converted to CH4 and CO2; in addition, CO2 and hydrogen are converted into CH4. Methanogenic archaea include a diverse group of obligate anaerobes such as Methanobacterium formicicum, Methanobrevibacter ruminantium, Methanococcus vannielli, Methanomicrobium mobile, Methanogenium cariaci, Methanospirilum hungatei, and Methanosarcina barkei (Kosaric and Blaszczyk, 1992). Examples of the conversion of selected compounds during anaerobic digestion are shown in Table $1$.
Table $1$: Examples of conversion of selected compounds during anaerobic digestion.
Sub-processes Examples
Hydrolysis
Conversion of carbohydrates and proteins:
$\text { Cellulose }+\mathrm{H}_{2} \mathrm{O} \rightarrow \text { sugars }$
$\text { Proteins }+\mathrm{H}_{2} \mathrm{O} \rightarrow \text { amino acids }$
Acidogenesis
Conversion of glucose into acetic and propionic acids:
$\mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6} \rightarrow 3 \mathrm{CH}_{3} \mathrm{COOH}$
$\mathrm{C}_{6} \mathrm{H}_{12} \mathrm{O}_{6}+2 \mathrm{H}_{2} \rightarrow 2 \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{COOH}+2 \mathrm{H}_{2} \mathrm{O}$
Acetogenesis
Conversion of propionate and butyrate into acetate and hydrogen as follows:
$\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{COO}^{-}+3 \mathrm{H}_{2} \mathrm{O} \rightarrow \mathrm{CH}_{3} \mathrm{COO}^{-}+\mathrm{HCO}_{3}^{-}+\mathrm{H}^{+}+3 \mathrm{H}_{2}$
$\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{CH}_{2} \mathrm{COO}^{-}+2 \mathrm{H}_{2} \mathrm{O} \rightarrow 2 \mathrm{CH}_{3} \mathrm{COO}^{-}+\mathrm{H}^{+}+2 \mathrm{H}_{2}$
$4 \mathrm{H}_{2}+2 \mathrm{HCO}_{3}^{-}+\mathrm{H}^{+} \rightarrow \mathrm{CH}_{3} \mathrm{COO}^{-}+4 \mathrm{H}_{2} \mathrm{O}$
Methanogenesis
Conversion of acetic acid, carbon dioxide and hydrogen, and methanol to methane:
$4 \mathrm{CH}_{3} \mathrm{COOH} \rightarrow 4 \mathrm{CO}_{2}+4 \mathrm{CH}_{4}$
$\mathrm{CO}_{2}+4 \mathrm{H}_{2} \rightarrow \mathrm{CH}_{4}+2 \mathrm{H}_{2} \mathrm{O}$
$4 \mathrm{CH}_{3} \mathrm{OH} \rightarrow 3 \mathrm{CH}_{4}+\mathrm{CO}_{2}+2 \mathrm{H}_{2} \mathrm{O}$
Types of Anaerobic Digesters
Anaerobic digesters can be categorized based on how the microorganisms inside the digester interact with the substrate. There are three attributes used: (1) how the microorganisms are grown: suspended growth or fixed growth, (2) the feeding of substrate into the vessel as a batch, a plug, or continuously and (3) the number of stages, single or multistage. Further design considerations are whether the contents are actively mixed, whether the orientation is predominantly vertical or horizontal, and whether the flow through the vessel is downwards or upwards.
Suspended Growth Anaerobic Digesters
Suspended growth digesters are usually used for substrates with a high content of suspended solids, such as municipal wastewater and diluted solid waste. They can be operated as a batch process (Figure $2$) or as plug flow (Figure $3$), where a batch of substrate moves through the vessel as a block of material, called a plug. The microorganisms are dispersed throughout the reactor when the digester contents are mixed, as in continuous stirred tank reactors (CSTR) and anaerobic contact reactors (ACR); an ACR is a CSTR with effluent solids recycled from a solids settling tank. In a CSTR, the solids retention time (SRT) equals the hydraulic retention time (HRT), which is the average time the solids and liquid remain in the bioreactor vessel. CSTR systems are operated at HRT and SRT ranging from 10 to 30 days. The ACR has a longer SRT (>50 days) than HRT (0.5–5 days) because part of the effluent solids is recycled back into the digester.
Figure $2$ shows a schematic of a suspended growth batch anaerobic digester. These are simple to design and operate. They are usually an air-tight vessel with inflow and outflow ports to supply fresh substrate and remove spent substrate, a biogas outlet port, and a port for removing solids. These systems are commonly deployed at small scale and for testing the anaerobic biodegradability of different materials. Operation starts with mixing a fixed amount of substrate with inoculum, which is active bacterial culture taken directly from a running reactor. Afterwards, anaerobic conditions are maintained for the digestion time (i.e., the retention time), which should ensure the depletion of all the available substrate.
The CSTR is typically used to treat agricultural and municipal wastes with total solid (TS) contents of 3% to 12%. They are usually operated at controlled temperatures, so the vessel, constructed either below or above ground, is equipped with a heating system and thermal insulation to maintain a constant internal temperature. Plug flow digesters (Figure $3$) are constructed as long pipes or channels, above or below ground, with a gas tight cover. The digester contents travel through the vessel where they are converted into biogas until reaching the outlet. The residence time is determined by the time elapsed between the feed of fresh substrate and discharge of the digested materials. They are used to treat relatively high TS of 12% to 16%.
Covered lagoons are commonly used to treat wastewater with low solids content (<3%), such as flushed animal manure. Manure lagoons on livestock farms can be upgraded to be anaerobic covered lagoons using a non-permeable covering to collect the biogas and double synthetic liners to prevent ground water contamination by seepage of the digester content. Covered lagoon digesters can be mixed or non-mixed (i.e., have mechanical agitation or not) and can be operated as plug flow or CSTR systems. They usually operate at ambient temperatures dictated by the local climate.
There is also a class of suspended growth systems called high rate systems, which are characterized by using longer SRT than HRT. These systems are usually used for diluted wastewater with an SRT of >20 days, which is achieved by retaining the microorganisms in the digester. The long SRT enables treatment at high organic loading rates (amount of organic material processed per unit time). HRT can range from hours to days, depending on the characteristics of the wastewater. Designs include anaerobic sludge bed reactors (ASBR) and upflow sludge blanket reactors (UASB). In the ASBR, the retention of microorganisms is achieved by solids settling in the reactor prior to effluent removal. In the UASB, microorganisms form granules and are retained in the reactor.
Fixed Growth Anaerobic Digesters
In fixed growth digesters, microorganisms are grown on solid media allowing SRT longer than HRT. These systems are also high rate systems. Fixed growth anaerobic digesters are used to treat soluble organic wastes (i.e., low suspended solids content) that do not require hydrolysis. Media, such as plastic or rocks, are usually used to support the attachment and growth of microorganisms, which form biofilms. As wastewater passes over the growth media, contaminants are absorbed and adsorbed by the biofilms and degraded. Therefore, these digesters can be operated at higher organic loading rates than the suspended growth digesters.
Anaerobic filters are a type of fixed-growth anaerobic digester (Figure $4$). In these systems, much of the sludge containing active microorganisms is retained inside the digester by being attached as a biofilm to a solid (inert) carrier material. Anaerobic filters are operated in up-flow mode, meaning the inflow is below the outlet in the digestion chamber.
Factors Affecting Anaerobic Digestion and Biogas Production
Anaerobic digestion processes are affected by many factors, including substrate composition, temperature, pH, organic loading, retention time, and mixing, which in turn affect the yield and rate of biogas production. Process stability (i.e., the consistency of the biogas production rate) depends on maintenance of the biochemical balance between the acidogenic and methanogenic microorganisms. Process stability also depends on the chemical composition and physical properties of the substrate, digester configuration, and process parameters such as temperature, pH, and NH3 concentration.
Substrate Composition and Characteristics
Substrate composition, particularly physical and chemical characteristics, is an important factor affecting design of biomass handling and digestion systems, performance of anaerobic digestion, biogas yield, and downstream processing of the digested materials. Materials with large particle sizes (e.g., crop residues and energy crops) may need to be ground before being fed into the anaerobic digester. The grinding process can aid in the conversion process because small particles can be degraded faster than large ones. Moreover, grinding can help when handling the substrate and mixing the digester contents. Mixed wastes, such as municipal solid waste, usually contain inorganic materials (e.g., metals and construction debris) and need a separation process to remove these inorganic materials. Organic matter is mainly composed of carbon (C), hydrogen (H), and oxygen (O). It also contains many nutrient elements including macronutrients (e.g., nitrogen (N), potassium, magnesium, and phosphorus) and micronutrients (zinc, manganese, cobalt, nickel, and copper). Example compositions are given in Table 1.2.2. All these nutrients are needed by microorganisms in order to break down and convert organic matter into biogas. An appropriate C: N ratio in the substrate is in the range of 20–25 C to 1 N. Most organic wastes, such as animal manure and food waste, contain enough nutrients to support the growth of microorganisms.
The organic matter content of a substrate is described in terms of volatile solids (VS), chemical oxygen demand (COD), or biochemical oxygen demand (BOD). VS are used to characterize substrates with a high solids content, while COD and BOD are used to characterize substrates that have a low solids content, such as wastewater. VS is the organic fraction of total solids (TS) or dry matter. BOD is used to describe the biodegradability of a substrate, while COD is the amount of oxygen needed to chemically oxidize the organic matter in a substrate. If the chemical composition of a substrate is known, the COD can be calculated using the chemical reaction:
$\mathrm{C}_{\mathrm{a}} \mathrm{H}_{\mathrm{b}} \mathrm{O}_{\mathrm{c}} \mathrm{N}_{\mathrm{d}}+\left(a+\frac{b}{4}-\frac{c}{2}-\frac{3 d}{4}\right) \mathrm{O}_{2} \rightarrow a \mathrm{CO}_{2}+\left(\frac{b}{2}-\frac{3 d}{2}\right) \mathrm{H}_{2} \mathrm{O}+d \mathrm{NH}_{3}$
where a, b, c, and d are number of atoms of carbon, hydrogen, oxygen, and nitrogen, respectively, and allow calculation of the amount of oxygen required for the reaction, i.e.,
$\left[\left(a+\frac{b}{4}-\frac{c}{2}-\frac{3 d}{4}\right) \mathrm{O}_{2}\right]=\mathrm{COD}$.
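This stoichiometric COD calculation is straightforward to script. Below is a minimal Python sketch; the function name is illustrative, and glucose is used only as a familiar check value (the classic result is about 1.07 g O2 per g glucose).

```python
# A minimal sketch of the theoretical COD of a substrate CaHbOcNd,
# based on the oxidation reaction above; the function name is illustrative.

def theoretical_cod(a, b, c, d):
    """Grams of O2 required to oxidize one gram of CaHbOcNd."""
    mol_o2 = a + b / 4 - c / 2 - 3 * d / 4  # mol O2 per mol substrate
    mw = 12 * a + b + 16 * c + 14 * d       # g/mol substrate
    return mol_o2 * 32 / mw                 # 32 g/mol O2

print(f"Glucose COD: {theoretical_cod(6, 12, 6, 0):.2f} g O2/g")  # ~1.07
```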
Table $2$: Composition of selected organic wastes (dry weight basis) (Zhang, 2017).

| Sample | C/N | C (%) | N (%) | P (%) | K (%) | S (%) | Ca (%) | Mg (%) | B (ppm) | Zn (ppm) | Mn (ppm) | Fe (ppm) | Cu (ppm) | Na (ppm) | Co (ppm) | Ni (ppm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tomato waste | 13.0 | 40.3 | 3.1 | 0.3 | 1.1 | 0.3 | 2.4 | 0.7 | 72.9 | 40.1 | 183.6 | 4482.8 | 23.6 | 1528.5 | 2.5 | 14.0 |
| Tomato pomace | 17.0 | 57.8 | 3.5 | 0.5 | 1.0 | 0.2 | 0.3 | 0.3 | 17.6 | 40.1 | 53.8 | 510.3 | 14.3 | 477.0 | 0.4 | 3.0 |
| Rice straw | 77.0 | 38.6 | 0.5 | 0.1 | 2.8 | 0.1 | 0.2 | 0.2 | 6.6 | 33.5 | 492.2 | 432.2 | 4.9 | 2054.0 | 1.3 | 2.0 |
| Egg liquid waste | 8.0 | 61.8 | 7.8 | 0.6 | 0.7 | 0.7 | 0.4 | 0.1 | 1.3 | 18.1 | 1.5 | 68.0 | 15.9 | 7165.0 | <0.1 | 5.0 |
| Commercial food waste | 16.0 | 43.7 | 2.7 | 0.5 | 2.4 | 0.3 | 3.5 | 0.2 | 18.7 | 170.8 | 34.1 | 443.7 | 9.1 | 3443.0 | 0.4 | 2.0 |
| Supermarket vegetable waste | 22.0 | 45.6 | 2.1 | 0.4 | 2.9 | 0.2 | 0.3 | 0.2 | 38.6 | 126.6 | 22.0 | 187.1 | 10.4 | 1669.5 | 0.2 | 15.0 |
| Cardboard | 231.0 | 46.2 | 0.2 | 0.0 | 0.0 | 0.2 | 0.4 | 0.0 | 42.4 | 18.6 | 26.3 | 255.8 | 10.3 | 1950.5 | 0.3 | 3.0 |
| Dairy manure | 18.0 | 34.0 | 1.9 | 0.8 | 2.6 | 0.5 | 1.5 | 1.5 | 70.0 | 280.0 | 210.0 | 2100 | 110.0 | 7790 | <20 | – |
| Chicken manure | 9.0 | 31.9 | 3.7 | 1.8 | 2.8 | 0.6 | 10.3 | 0.6 | 34.6 | 325.3 | 312.2 | 739.4 | 36.1 | 4162.0 | 0.5 | 12.0 |
Temperature
Temperature is an important factor affecting the performance of anaerobic digestion because it affects the kinetics of the processes. Microorganisms are usually classified by the optimum temperature and the temperature range at which they grow. The normal classification is psychrophilic (<25°C), mesophilic (25 to 45°C), and thermophilic (45 to 65°C), but in theory there is the extreme of hyperthermophilic anaerobic archaea and bacteria that can grow in geothermal environments with optimal growth temperatures of 80°C to 110°C (Stetter, 1996). Thermophilic digestion may produce biogas with a higher CO2 content than mesophilic digestion due to the low solubility of CO2 in water at high temperatures.
The growth rate of microorganisms increases with increasing temperature up to an optimum. Above the optimum temperature, growth declines due to the thermal denaturation of the cell protein. The growth will cease when the essential protein of the cell is destroyed. Figure 1.2.5 shows the relative growth rate of methanogens at different temperature ranges. Within the temperature range of one species, the growth rate exponentially increases with temperature. Thermodynamically, most biochemical reactions require less energy to proceed at high temperatures. The rate of most chemical reactions approximately doubles with a temperature increase of 10°C (Stanier et al., 1972). The energy required to heat up the substrate and to keep the digester at the desired temperature is greater at higher temperatures.
pH
The pH of a digester is affected by the interaction between the composition of the substrate, its buffering capacity, and the balance between the rates of acidification and methanogenesis. If the rate of methanogenesis is lower than acidogenesis, the pH might reach values below 6, which can cause inhibition to methanogenic archaea. The relationship between pH and methanogenic activity is a bell shaped curve (Figure $6$) with a maximum methanogenic activity at pH values between about 6.8 and 8 (Speece, 1996; Khanal, 2008). An optimum pH near neutrality should be maintained in the anaerobic digester for biogas production.
Organic Loading
Organic loading (or initial loading) is a measure of the amount of organic matter, expressed in terms of the amount of VS or COD, that enters a batch digester at the beginning of a process cycle. It is an important parameter that affects digester sizing because it determines the concentration of functional microbial biomass per unit mass of substrate. For continuously fed digesters, the organic loading rate (OLR), usually defined as the amount of organic matter fed per unit volume of the digester per day, depends on the biodegradation kinetics of the substrate, the digester design, and the operating conditions. For example, a CSTR treating animal manure with a TS content of 1–6% is usually operated at an OLR of 1.6 to 4.8 kg m−3 day−1 and an HRT of 15 to 30 days.
Retention Time
Retention time is the time for the substrate to remain in the digester to be processed by the microorganisms. The appropriate retention time depends on the chemical and physical characteristics of the substrate and the rate of microbial metabolism. Complex substrates, such as agricultural wastes (e.g., animal manure), usually have low biodegradation rates and so need longer retention times (20–30 days), while highly biodegradable materials, such as food waste, may need shorter retention times (<15 days) to convert the biodegradable organic matter into biogas.
Mixing
Mixing affects the performance of anaerobic digesters by homogenizing the reactor contents, breaking up substrate particles, and exposing a large surface area of the substrate to the microorganisms. Adequate mixing prevents stratification inside the digester, which could create unfavorable micro-environments for the methanogens, such as regions rich in toxic compounds or with low pH. Mixing also helps maintain a uniform temperature in the digester and prevents the formation of a scum layer. Achieving proper mixing depends on the digester shape, the type of mixing system, and the solids content inside the digester. For example, a rectangular tank is more difficult to mix than a cylindrical or egg-shaped reactor because it is difficult to mix into the corners. Digesters can be mixed with mechanical mixers, recirculation of biogas, or recirculation of reactor contents. The selection of the mixing system depends on the density of the substrate (i.e., solids concentration), the required mixing intensity and homogeneity, the availability and cost of mixing equipment, and maintenance and energy consumption costs.
Process Configuration
Anaerobic digestion processes can be carried out in single stage digesters or in multistage digesters. Single stage digesters are usually used for materials that have balanced degradation rates of hydrolysis, acidogenesis, and methanogenesis and have enough buffer capacity to maintain the pH of the digester around neutral. However, for highly biodegradable materials, such as food waste, multiple stage (mostly two stages) digestion systems are usually used. In these systems, hydrolysis and acidogenesis are the predominant processes in the first stage, with low pH (4–6) due to the high concentrations of VFA. The biogas produced from the first stage contains high contents of CO2 and hydrogen and low content of CH4. In the second stage, methanogenesis predominates when VFA are consumed by the methanogenic archaea and the pH is in the range of 6.8–8. The biogas produced from the second stage has high CH4 content (50–70%).
Ammonia Concentration
The anaerobic digestion of protein-rich substrates may produce high NH3 concentrations that can inhibit, or even be toxic to, anaerobic microorganisms. Microorganisms need N for cell synthesis; approximately 6.5% of the influent N is used for cell production. Fermentative bacteria can usually utilize both amino acids and NH3, but methanogenic bacteria only use NH3 for the synthesis of bacterial cells (Hobson and Richardson, 1983). High NH3 concentrations can cause inhibition, or even toxicity, to methanogenic microorganisms. Inhibition is indicated by a decrease in CH4 production and increasing VFA concentrations. When there is a total cessation of methanogenic activity, free NH3 is usually the main cause, because microbial cells are more permeable to free NH3 than to ammonium ions. The concentration of free NH3 depends on the total NH3 concentration, temperature, and pH.
Estimation of Biogas and Methane Yields
Theoretical Estimation of Yield
Biogas and CH4 yields can be estimated theoretically from the chemical composition of the substrate or measured using batch digestion experiments. Biogas and CH4 yield from a completely biodegradable organic substrate with the composition (CaHbOcNd) can be determined using Buswell’s equation (Buswell and Mueller, 1952):
$\mathrm{C}_{\mathrm{a}} \mathrm{H}_{\mathrm{b}} \mathrm{O}_{\mathrm{c}} \mathrm{N}_{\mathrm{d}}+\left(\frac{4 a-b-2 c+3 d}{4}\right) \mathrm{H}_{2} \mathrm{O} \rightarrow\left(\frac{4 a+b-2 c-3 d}{8}\right) \mathrm{CH}_{4}+\left(\frac{4 a-b+2 c+3 d}{8}\right) \mathrm{CO}_{2}+d \mathrm{NH}_{3}$
This equation does not consider the needs of organic matter for cell maintenance and anabolism. From Buswell’s equation, the total amount of biogas produced from one mole of the biodegradable organic substrate can be calculated as a summation of CH4 and CO2, i.e.:
$\left[\left(\frac{4 a+b-2 c-3 d}{8}\right)+\left(\frac{4 a-b+2 c+3 d}{8}\right)\right]$
The methane yield per gram of substrate (L g−1 [VS]) can be calculated using the molar volume of an ideal gas, 22.4 L at standard temperature and pressure:
$M_{\mathrm{y}}=\frac{\left(\frac{4 a+b-2 c-3 d}{8}\right) \times 22.4}{12 a+b+16 c+14 d}$
where My = methane yield of the substrate, L g−1 [VS].
Assuming that biogas is composed mainly of methane and carbon dioxide and ammonia production is insignificant, methane content in the biogas can be calculated as follows:
$M_{\mathrm{C}}=\frac{\left(\frac{4 a+b-2 c-3 d}{8}\right) \times 100}{\left(\frac{4 a+b-2 c-3 d}{8}\right)+\left(\frac{4 a-b+2 c+3 d}{8}\right)}$
where Mc = methane content in the biogas, % (mole/mole or v/v).
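Both quantities follow mechanically from the Buswell coefficients. Below is a minimal Python sketch (function name illustrative) that reproduces the carbohydrate entries of Table 1.2.3.

```python
# A minimal sketch of the Buswell calculation for a substrate CaHbOcNd.

def buswell(a, b, c, d):
    """Theoretical CH4 yield (L per g substrate) and CH4 content (%)."""
    ch4 = (4 * a + b - 2 * c - 3 * d) / 8  # mol CH4 per mol substrate
    co2 = (4 * a - b + 2 * c + 3 * d) / 8  # mol CO2 per mol substrate
    mw = 12 * a + b + 16 * c + 14 * d      # g/mol substrate
    return ch4 * 22.4 / mw, ch4 / (ch4 + co2) * 100  # 22.4 L/mol at STP

y, pct = buswell(6, 10, 5, 0)  # carbohydrate unit, C6H10O5
print(f"CH4 yield: {y:.3f} L/g, CH4 content: {pct:.0f}%")  # 0.415 L/g, 50%
```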
The CH4 production after a long degradation time is called the methane potential. Methane yield can be expressed as the volume of gas produced per unit mass of the substrate (L [CH4]/kg [substrate]), VS (L [CH4]/kg [VS]), or COD (L [CH4]/kg [COD]). Theoretical CH4 contents of selected substrates computed using the Buswell equation (Table 1.2.3) usually underestimate the measured methane content of the biogas because CO2 is more soluble in water than CH4. In anaerobic digesters, the CH4 content of biogas ranges from 55% to 70%, depending on the substrate and the operating conditions of the digester (Table 1.2.3). Substrates rich in lipids produce biogas rich in methane.
Table $3$: Composition and theoretical methane yield and methane content of selected substrates (e.g., Angelidaki & Sanders, 2004).

| Substrate Type | Formula | CH4 Yield[a] (L g−1 [VS]) | CO2 Yield[a] (L g−1 [VS]) | NH3 Yield[a] (L g−1 [VS]) | Methane Content[b] (%) |
|---|---|---|---|---|---|
| Carbohydrate | (C6H10O5)n | 0.415 | 0.415 | 0.000 | 50.0 |
| Protein | C5H7NO2 | 0.496 | 0.496 | 0.198 | 50.0 |
| Lipid | C57H104O6 | 1.014 | 0.431 | 0.000 | 70.2 |
| Acetate | C2H4O2 | 0.374 | 0.374 | 0.000 | 50.0 |
| Ethanol | C2H6O | 0.731 | 0.244 | 0.000 | 75.0 |
| Propionate | C3H6O2 | 0.530 | 0.379 | 0.000 | 58.3 |

[a] Yields are at standard temperature and pressure (see text).
[b] Assuming the biogas is composed of methane and carbon dioxide.
Modeling the Anaerobic Digestion Process to Estimate Yield
There are mechanistic models that describe the anaerobic digestion process, which can be used to predict the performance of anaerobic digesters. One of the most used is the Anaerobic Digestion Model No. 1 (ADM1), developed by the International Water Association Task Group for Mathematical Modelling of Anaerobic Digestion Process (Batstone et al., 2002). ADM1 is structured around biochemical sub-processes, including hydrolysis, acidogenesis, acetogenesis, and methanogenesis. While a mechanistic modelling approach is necessary for advanced design, a simple first-order kinetic model can be used to calculate the methane yield from different substrates, such as food waste, animal manure, and crop residues, and be used for preliminary design. The first-order kinetics for a batch digester can be written as:
$\frac{d S}{\mathrm{dt}}=-k S$
where t = digestion time (days)
k = first-order degradation kinetic rate constant (day−1)
S = concentration of the biodegradable organic matter (expressed as VS, COD, or BOD) in the digester (kg m−3)
With the concentration of the biodegradable substrate at the beginning of the digestion time designated as S0 (kg m−3), the equation can be expressed as:
$S=S_{0} \mathrm{e}^{-\mathrm{kt}}$
Equation 1.2.4 can be used to predict the remaining substrate concentration (S) in the digester after a period of digestion time (t) if the initial substrate concentration (S0) and degradation kinetic rate constant are known. The amount of degraded organic matter that is converted into methane, and the amount of methane produced can be calculated as:
$S_{\mathrm{deg}}=V_{\mathrm{w}}\left(S_{0}-S\right)$
$M_{\mathrm{p}}=M_{\mathrm{y}} S_{\mathrm{deg}}$
where Sdeg = degraded organic matter in the digester (kg)
Vw = working volume of digester (i.e., volume of liquid inside the digester) (m3)
Mp = amount of methane produced (m3)
My = methane yield (m3 kg−1)
Equations 1.2.4, 1.2.5 and 1.2.6 can be used to fit experimental data describing the substrate concentration at time steps throughout the process to determine the first-order degradation kinetic rate constant. They can also be used to predict degraded organic matter in the digester and methane yield at different digestion times if the first-order degradation kinetic rate constant is known from the literature or from experiments.
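A minimal Python sketch of this first-order model follows; note that the numerical inputs in the example call (k, S0, working volume, methane yield) are illustrative values, not from the text.

```python
# A minimal sketch of Equations 1.2.4-1.2.6 for a batch digester.

import math

def methane_produced(s0, k, t, v_w, m_y):
    """Methane volume (m^3) produced after t days of batch digestion."""
    s = s0 * math.exp(-k * t)  # remaining substrate, kg/m^3 (Eq. 1.2.4)
    s_deg = v_w * (s0 - s)     # degraded organic matter, kg (Eq. 1.2.5)
    return m_y * s_deg         # methane produced, m^3      (Eq. 1.2.6)

# e.g., 30 kg VS/m^3 in a 100 m^3 working volume, k = 0.1/day,
# methane yield 0.3 m^3/kg -> ~778 m^3 CH4 after 20 days
print(f"{methane_produced(30, 0.1, 20, 100, 0.3):.0f} m^3 CH4")
```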
Estimation of Energy Production from a Substrate
The amount of energy contained in a fuel (e.g., biogas) is expressed using the higher heating value (HHV) or lower heating value (LHV). The HHV is the total heat produced by the complete combustion of a unit (usually 1 m3) of the gas at constant pressure, with all the water formed by the combustion reaction condensed to the liquid state. The LHV is the net calorific value produced by the combustion of a unit amount of the fuel, with all the water formed during the combustion reaction remaining in the vapor state. Methane is used to calculate the amount of energy contained in biogas because it is the main combustible gas. At standard temperature and pressure, methane has an LHV of approximately 36 MJ m−3. Therefore, the LHV of a biogas containing 65% methane is approximately 23.4 MJ m−3, calculated by multiplying the LHV of methane by the methane content of the biogas.
The amount of energy that is produced from an anaerobic digester can be estimated using the amount of organic matter that is treated in a certain period of time (e.g., day), biogas yield of the substrate, and methane content of the biogas. Based on the TS and VS content of the substrate, the amount of organic matter to be treated can be calculated as:
$\phi_{\mathrm{om}}=Q \times T_{\mathrm{sc}} \times V_{\mathrm{sc}}$
where ϕom = amount of organic matter to be treated per day, kg [VS] day−1
Q = amount of feedstock to be treated (kg day−1)
Tsc = total solids contents, %, wet basis
Vsc = volatile solids contents, % of Tsc
The daily biogas and methane production can be calculated as:
$B_{\mathrm{dp}}=\phi_{\mathrm{om}} B_{\mathrm{y}}$
$M_{\mathrm{dp}}=B_{\mathrm{dp}} M_{\mathrm{C}}$
where Bdp = daily biogas production, m3 day−1
By = biogas yield production, m3 kg−1 [VS]
Mdp = daily methane production, m3 day−1
Mc = methane content in the biogas, % vol vol−1
The daily energy production from biogas can be calculated as:
$E_{\mathrm{dp}}=B_{\mathrm{dp}} \times C_{\mathrm{vb}}$
or
$E_{\mathrm{dp}}=M_{\mathrm{dp}} \times C_{\mathrm{vm}}$
where Edp = daily energy production, MJ day−1
Cvb = calorific value of biogas, MJ m−3
Cvm = calorific value of methane, MJ m−3
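These four relations chain naturally in code. Below is a minimal Python sketch with illustrative feedstock numbers (not from the text).

```python
# A minimal sketch chaining the organic-matter, biogas, methane, and
# energy relations above; the feedstock numbers are illustrative.

Q = 10_000  # kg feedstock per day
TSC = 0.10  # total solids content, wet basis
VSC = 0.85  # volatile solids, fraction of TS
BY = 0.40   # m^3 biogas per kg VS
MC = 0.65   # methane fraction of the biogas
CVM = 36    # MJ per m^3 methane (LHV)

phi_om = Q * TSC * VSC  # kg VS per day to be treated
b_dp = phi_om * BY      # daily biogas production, m^3/day
m_dp = b_dp * MC        # daily methane production, m^3/day
e_dp = m_dp * CVM       # daily energy production, MJ/day

print(f"{phi_om:.0f} kg VS -> {b_dp:.0f} m^3 biogas -> "
      f"{m_dp:.0f} m^3 CH4 -> {e_dp:,.0f} MJ/day")
```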
Sizing Anaerobic Digesters
Anaerobic digester performance is controlled by the number of active microorganisms that are in contact with the substrate. Therefore, increasing the number of active bacteria can increase the conversion rate and, consequently, higher organic loading rates can be used. The total volume (Vt) of a digester is calculated from the working volume (Vw) and head space volume (Vh) as:
$V_{\mathrm{t}}=V_{\mathrm{w}}+V_{\mathrm{h}}$
The head space volume is the gas volume above the liquid that is sometimes used for gas storage. The head space volume is usually about 10% of the working volume. The required working volume of a continuously fed anaerobic digester can be determined from the amount of organic matter (expressed as VS or COD) to be treated per day and the OLR:
$V_{\mathrm{w}}=\frac{\phi_{\mathrm{om}}}{\mathrm{OLR}}$
where Vw = working volume of digester, m3
OLR = organic loading rate, kg [VS or COD] m−3 day−1
The working volume can also be determined from the volume of waste to be treated per day and the hydraulic retention time of the digester:
$V_{\mathrm{w}}=V_{\mathrm{df}} \times \mathrm{HRT}$
where Vdf = volumetric feed to the digester, m3 day−1
HRT = hydraulic retention time, days
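A minimal Python sketch of both sizing routes follows, with the roughly 10% head space allowance noted above; the OLR, HRT, and feed values are illustrative.

```python
# A minimal sketch of the two digester sizing routes described above.

phi_om = 850  # kg VS/day to be treated (e.g., the value from the
              # energy-production sketch earlier)
OLR = 3.0     # kg VS m^-3 day^-1, illustrative

v_w = phi_om / OLR  # working volume from the organic loading rate
v_t = 1.10 * v_w    # total volume with ~10% head space
print(f"Working volume {v_w:.0f} m^3, total volume {v_t:.0f} m^3")

# Alternatively, from the volumetric feed and hydraulic retention time:
v_df, hrt = 10.0, 25  # m^3/day feed, days
print(f"Working volume via HRT: {v_df * hrt:.0f} m^3")
```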
Biogas Cleaning and Upgrading
Biogas cleaning and upgrading processes remove harmful and undesired compounds and increase the quality of the biogas as a fuel. Biogas cleaning is the removal of impurities such as hydrogen sulfide and organic compounds; upgrading is the removal of CO2 and water vapor, resulting in relatively pure methane (biomethane) that can be used as automobile fuel or injected into a natural gas pipeline.
Table 1.2.4 shows a typical composition of biogas from agricultural waste digestion and municipal solid waste landfills.
Table $4$: Typical composition of biogas from different materials (Coombs, 1990).

| Component | Agricultural Wastes | Municipal Solid Waste Landfills |
|---|---|---|
| Methane | 50–80% | 45–65% |
| Carbon dioxide | 30–50% | 34–55% |
| Water vapor | Saturated | Saturated |
| Hydrogen sulfide | 100–7,000 ppm | 0.5–100 ppm |
| Hydrogen | 0–2% | 0–1% |
| Ammonia | 50–100 ppm | Trace |
| Carbon monoxide | 0–1% | Trace |
| Nitrogen | 0–1% | 0–20% |
| Oxygen | 0–1% | 0–5% |
| Organic volatile compounds | Trace | 5–100 ppm |

Biogas Cleaning

Removing hydrogen sulfide is important prior to using biogas because it is corrosive and toxic. In the presence of water vapor, hydrogen sulfide forms sulfuric acid, which can cause serious corrosion of the metallic components of the digester and biogas handling equipment. Hydrogen sulfide can be removed by chemical precipitation, by adding metal ions (usually ferric ions) to the digester vessel, or by chemical absorption, by passing the biogas through a ferric solution (e.g., ferric chloride, known as an iron sponge):

$3 \mathrm{H}_{2} \mathrm{~S}+2 \mathrm{FeCl}_{3} \rightarrow \mathrm{Fe}_{2} \mathrm{~S}_{3} \downarrow+6 \mathrm{H}^{+}+6 \mathrm{Cl}^{-}$
In addition, hydrogen sulfide can be removed using biological oxidation by chemotrophic bacteria such as Thiobacillus thioparus. However, commercial application of biological oxidation is limited.
Siloxanes are volatile organic compounds that are usually found in the biogas produced from landfills. During combustion, they are converted into silicon dioxide (SiO2) and microcrystalline quartz that deposit on engine parts, causing problems such as abrasive wear. Activated carbon or silica gel is commonly used as an adsorbent to remove these organic compounds from the biogas.
Biogas Upgrading
Removal of CO2 is important to increase the energy content of biogas, to reduce the required volumes for biogas storage, and to achieve the quality needed for compliance with the specifications of natural gas for distribution with fossil gas and the specifications for compressed natural gas engines. Moreover, the presence of CO2 can cause corrosion to equipment and pipelines if it mixes with water to form carbonic acid. Carbon dioxide can be removed from biogas with water or chemical scrubbing systems, in which water or chemical solvents (e.g., sodium hydroxide and amine) react with CO2:
CO2 + H2O ↔ H2CO3
CO2 + NaOH → NaHCO3
Carbon dioxide can also be removed from biogas by using membranes and pressure swing adsorption (PSA) systems. Membranes have selective permeability; they allow different compounds (e.g., gases) to move across the membrane at different rates. When biogas is pumped under pressure (up to 4,000 kPa) through a membrane made of polymers, carbon dioxide is separated from methane. In a pressure swing adsorption system, biogas flows under pressure (up to 1,000 kPa) through a porous material that allows methane to pass through while adsorbing carbon dioxide. The adsorbent materials in commercial systems include carbon molecular sieves, activated carbon, silica gel, and zeolites. Before the adsorbent material is completely saturated with carbon dioxide, it must be regenerated for reuse. Regeneration is carried out by reducing the pressure in the vessel to near ambient and then to a vacuum.
Some adsorbent materials used for carbon dioxide can also adsorb hydrogen sulfide, oxygen, and nitrogen. However, the adsorption of hydrogen sulfide on these materials is not reversible.
Biogas collected from digesters is saturated with water vapor. The water content of biogas depends on the operating temperature of the digester; at lower temperatures there is less water vapor in the biogas. Water vapor is removed to protect pipelines and equipment from corrosion through the formation of acids (e.g., sulfuric and carbonic acids). Water vapor can be removed by condensation or by chemical drying (e.g., adsorption). Condensation can be forced by reducing the dew point using a cooling system such as a chiller and heat exchanger: a fluid is cooled in the chiller and pumped through one side of the heat exchanger to reduce the temperature of the biogas flowing on the other side. In chemical drying, agents such as silica gel, magnesium oxide, aluminum oxide, or activated carbon are used to adsorb the water vapor. After saturation, the drying agents are regenerated by heating to around 200°C. To maintain continuous operation, two columns filled with drying agent are used, so that an unsaturated column dries the gas while the saturated one is regenerated.
Applications
Experimentation to Determine Digestion Properties
The biogas and methane yields can be determined by using batch anaerobic digestion experimental set-ups ranging from the very simple (Figure $7$) to a sophisticated automated methane potential test system (AMPTS) (Figure 1.2.8). Anaerobic batch digestion tests can be carried out at small scale (0.1–1 liter) to determine biogas and biomethane yields and biodegradability of a substrate. The simple batch method can be conducted using affordable laboratory equipment; an AMPTS is more expensive but can be automated and is more accurate. The AMPTS allows measurement of biogas production through time.
A simple anaerobic batch digestion system (Figure $7$) is composed of a vessel, normally a bottle sealed with a cap, with an opening to let the biogas out. Based on the composition (TS and VS) of the substrate, an amount of substrate that provides 3 g VS is used to start the digestion. The substrate is put in the vessel and inoculum is added. The inoculum is a seed material taken from an active anaerobic digester. The pH of the digester should be approximately 7. The digester is flushed with an inert gas, such as helium or argon, for approximately two minutes to ensure anaerobic conditions by removing oxygen from both the liquid and the head space. The digester is sealed with a rubber stopper and connected to a gas bag (called a Tedlar bag) to collect the biogas. The digester is incubated at a constant temperature (35°–50°C) for up to 25 days. During the incubation time, the contents are mixed intermittently using a stirrer or by manual shaking for about one minute, without breaking the seal of the bottle. Each treatment should be replicated, and a control containing only inoculum is used to estimate the biogas produced by the inoculum alone. The collected biogas can be measured using liquid displacement or a gas-tight syringe. The pH is measured at the end of the digestion time. Biogas yield (L g−1 VS) is determined by dividing the cumulative biogas by the amount of VS in the digester at the beginning of the digestion. The methane yield is calculated by multiplying the biogas yield by the methane content of the biogas, which can be measured using a gas chromatograph.
An AMPTS (Figure $8$) is composed of three parts: a water bath with a temperature control, a CO2 fixation unit, and a gas tip meter. The vessels are incubated in the water bath at a constant temperature. All the vessels are continuously mixed using mechanical mixers. The CO2 fixation unit is used to remove CO2 from the biogas. The gas measuring unit (tip meter) can determine the amount of methane production from each individual digester. The tip meter is connected to a data logger that continuously records the methane production. All procedures for preparing simple anaerobic batch digesters are also applied in the AMPTS.
Figure $9$ shows daily biogas production and cumulative biogas yield determined from a batch anaerobic digester, with a capacity of 1 L, treating cafeteria food waste at an initial VS loading of 4 g L−1 and a temperature of 50°C. The biogas production rates are high at the beginning of the batch digestion and then decline until reaching almost zero. This is due to the reduction of the organic matter contained in the substrate over the digestion time until all the available organic matter is consumed by microorganisms.
The data of methane production and remaining substrate concentration in batch digestion tests can be used to determine the first-order degradation kinetic rate constant using Equations 1.2.4, 1.2.5, and 1.2.6. Methane production is calculated by multiplying biogas production by methane content of the biogas (which is usually measured using gas chromatography). Figure 1.2.10 shows methane yields of various organic wastes after a digestion time of 25 days. As can be seen, the substrate composition affects the methane yield. The experimental data from batch digestion tests could be used to determine the proper HRT and vessel size for pilot and full-scale systems to treat a specific amount of substrate. For example, the digestion time required to convert all or part of the biodegradable organic matter in a certain substrate to biogas could be used as a basis for determining the proper HRT to convert the substrate into biogas in a continuously fed digester at the same temperature. Once the HRT is determined, the effective volume can be determined using Equation 1.2.14.
Commercial Uses of Biogas
In addition to utilizing biogas for electricity generation using generators and fuel cells, and for heating purposes, biogas can be upgraded to biomethane (also known as renewable natural gas, RNG). Biomethane is very similar to natural gas, therefore, most equipment used for natural gas can be operated with biomethane. Biomethane can be used as a transportation fuel in the form of renewable compressed natural gas (CNG) or liquefied natural gas (LNG). The U.S. Environmental Protection Agency defined renewable CNG and LNG as biogas or biogas-derived “pipeline quality gas” that is liquefied or compressed for transportation purposes. For these uses, biogas must be cleaned and upgraded, either onsite adjacent to the digester or pumped to a central facility that processes biogas from multiple digesters in the vicinity. Biomethane could also be sold to utility companies by injection into natural gas pipelines. Biomethane must meet high quality standards for injection in the pipelines (Table $5$).
Table $5$: Quality specification of biomethane to be injected into the California gas pipeline (Coke, 2018).
Quality Parameter | Value
Water content (kg per 1000 m3 at 55.15 bar) | 0.11
Hydrogen sulfide (ppm) | 4
Total sulfur (ppm) | 17
Carbon dioxide (%) | 1
Hydrogen (%) | 0.1
Examples
Example $1$: Theoretical methane production
Problem:
A cafeteria wants to manage waste food by using it as the feedstock for an anaerobic digestor. What is the theoretical methane production at standard temperature and pressure from 1,000 kg of organic food waste with the chemical formula C3.7H6.4O1.8N0.2? What is the expected methane content of the biogas assuming it consists of only methane and carbon dioxide?
Solution
Applying Buswell’s equation:
$\mathrm{C}_{3.7} \mathrm{H}_{6.4} \mathrm{O}_{1.8} \mathrm{~N}_{0.2}+\left(\frac{4(3.7)-6.4-2(1.8)+3(0.2)}{4}\right) \mathrm{H}_{2} \mathrm{O} \rightarrow\left(\frac{4(3.7)+6.4-2(1.8)-3(0.2)}{8}\right) \mathrm{CH}_{4}+\left(\frac{4(3.7)-6.4+2(1.8)+3(0.2)}{8}\right) \mathrm{CO}_{2}+0.2 \mathrm{NH}_{3}$
$\mathrm{C}_{3.7} \mathrm{H}_{6.4} \mathrm{O}_{1.8} \mathrm{~N}_{0.2}+1.35 \mathrm{H}_{2} \mathrm{O} \rightarrow 2.125 \mathrm{CH}_{4}+1.575 \mathrm{CO}_{2}+0.2 \mathrm{NH}_{3}$
This means that 1 mole (82.4 g) of the organic food waste produces 2.125 mole of CH4 and 1.575 mole of CO2.
Calculate methane yield using Equation 1.2.1:
$M_{\mathrm{y}}=\frac{\left(\frac{4 a+b-2 c-3 d}{8}\right) \times 22.4}{12 a+b+16 c+14 d}$ (Equation $1$)
$M_{\mathrm{y}}=\frac{2.125 \times 22.4}{82.4}=0.577 \mathrm{~L} \mathrm{~g}^{-1}[\mathrm{VS}]$
$\text { Amount of methane production from } 1,000 \mathrm{~kg}=0.577 \times 1,000 \times 1,000=577,000 \mathrm{~L}=577 \mathrm{~m}^{3}$
Calculate the methane content using Equation 1.2.2:
$M_{\mathrm{C}}=\frac{\left(\frac{4 a+b-2 c-3 d}{8}\right) \times 100}{\left(\frac{4 a+b-2 c-3 d}{8}\right)+\left(\frac{4 a-b+2 c+3 d}{8}\right)}$ (Equation $2$)
$M_{\mathrm{C}}=\frac{2.125 \times 100}{2.125+1.575}=57.4 \%$
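Readers who want to check these numbers can script Buswell's formula directly. The following minimal Python sketch implements Equations 1.2.1 and 1.2.2 as used above; the function name is arbitrary.

```python
# Sketch of Buswell's equation for a substrate CaHbOcNd (Equations 1.2.1 and 1.2.2).

def buswell(a, b, c, d):
    """Return (methane yield, L per g VS at STP; methane content, % by volume)."""
    mw = 12*a + b + 16*c + 14*d            # molecular weight of substrate, g/mol
    n_ch4 = (4*a + b - 2*c - 3*d) / 8      # mol CH4 per mol substrate
    n_co2 = (4*a - b + 2*c + 3*d) / 8      # mol CO2 per mol substrate
    My = n_ch4 * 22.4 / mw                 # methane yield at STP
    Mc = 100 * n_ch4 / (n_ch4 + n_co2)     # methane content of the biogas
    return My, Mc

# Food waste from Example 1, C3.7 H6.4 O1.8 N0.2:
print(buswell(3.7, 6.4, 1.8, 0.2))   # ~(0.577 L/g VS, 57.4%)
```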
Example $2$: Design of an anaerobic digester for dairy manure
Problem:
A dairy farmer wants to build an anaerobic digester to treat the manure produced from 1,000 cows. Each cow produces 68 kg of manure per day. The volatile solid (VS) of the manure is 11% (wet basis). The digester is to be operated at an organic loading rate of 2 kg [VS] m−3 day−1 and a temperature of 35°C. The gas headspace volume is 10% of the working volume. Biogas yield from manure is 288 L kg−1 [VS] and the methane content is 65%. Assume all the manure produced on the dairy will be treated in the digester. Calculate:
(a) the volume of the digester required,
(b) the daily biogas and methane production, and
(c) the daily energy production from biogas if the biogas has a calorific value of 23 MJ m−3.
Solution
The amount of organic matter to be treated per day (ϕom) can be calculated using the number of cows, the amount of manure produced from each cow per day, and the volatile solids contents of manure as follows:
$\phi_{\mathrm{om}}=1,000 \times 68 \times\left(\frac{11}{100}\right)=7,480 \mathrm{~kg}[\mathrm{VS}] \text{ day}^{-1}$
Calculate the working volume of the digester using Equation 1.2.13:
$V_{\mathrm{w}}=\frac{\phi_{\mathrm{om}}}{\mathrm{OLR}}=\frac{7,480}{2}=3,740 \mathrm{~m}^{3}$
Calculate the total volume (Vt) of the digester using Equation 1.2.12:
$V_{\mathrm{t}}=V_{\mathrm{w}}+V_{\mathrm{h}}$
$V_{\mathrm{t}}=3,740+\left(\frac{10}{100}\right)(3,740)=4,114 \mathrm{~m}^{3}$
Calculate the daily biogas production using Equation 1.2.8:
$B_{\mathrm{dp}}=\phi_{\mathrm{om}} B_{\mathrm{y}}$
$B_{dp}=7,480 \times \frac{288}{1,000} = 2,154.2 \text{m}^{3} \text {day}^{-1}$
Calculate the methane production using Equation 1.2.9:
$M_{dp}=B_{dp} \times M_{C}$
$M_{dp} = 2,154.2 \times \frac{65}{100} = 1,400.2 \text{m}^3 \text{day}^{-1}$
Calculate the energy production using Equation 1.2.10:
$E_{dp} = B_{dp} \times C_{vb}$
$E_{dp} = 2,154.2 \times 23 = 49,546.6 \ MJ \ \text{day}^{-1}$
Example $3$: Modeling and kinetics
Problem:
A batch digester with a volume of 5 L treats an organic substrate at an initial loading of 5 g [VS] L−1 for 25 days. The substrate has an ultimate methane yield of 350 mL g−1 [VS] degraded. Determine the concentration of the biodegradable substrate in the effluent and total amount of methane produced over 25 days if the first-order degradation kinetic rate constant is 0.12 day−1.
Solution
The concentration of the biodegradable VS in the digester effluent can be calculated using Equation 1.2.4:
$S=S_{0}e^{-kt}$ (Equation $4$)
After one day of digestion, the VS concentration is:
$S=5[e^{-0.12(1)}]=4.434 \ \text{g} \ \text{L}^{-1}$
This calculation can be repeated for every day over the digestion time (25 days). The results of these calculations are plotted in Figure $11$.
The amounts of the degraded organic matter and methane produced can be predicted using Equations 1.2.5 and 1.2.6. After one day of the digestion, these amounts can be calculated as:
$S_{deg}=V_{w}(S_{0}-S)$ (Equation $5$)
$S_{deg}=5(5-4.434)=2.83 \ \text{g}$
$M_{p}=M_{y}S_{deg}$ (Equation $6$)
$M_{p}=350 \times 2.83 = 989.45 \ \text{mL} =0.9894 \ \text{L}$
These calculations can be repeated for every day over the digestion time (25 days). The results of these calculations are plotted in Figure $12$.
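The day-by-day calculation described above is easily automated. The short Python sketch below reproduces the first-order batch model of this example; printing every fifth day is an arbitrary choice.

```python
# Sketch of the first-order batch model in Example 3 (Equations 1.2.4-1.2.6).
import math

S0 = 5.0    # initial biodegradable VS, g/L
k = 0.12    # first-order rate constant, 1/day
Vw = 5.0    # digester volume, L
My = 350.0  # ultimate methane yield, mL per g VS degraded

for t in range(0, 26, 5):
    S = S0 * math.exp(-k * t)       # remaining biodegradable VS, g/L (Eq. 1.2.4)
    S_deg = Vw * (S0 - S)           # VS degraded so far, g (Eq. 1.2.5)
    M_p = My * S_deg / 1000         # cumulative methane, L (Eq. 1.2.6)
    print(f"day {t:2d}: S = {S:.3f} g/L, cumulative methane = {M_p:.2f} L")
```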
Image Credits
Figure 1. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). The steps of anaerobic digestion of complex organic matter into biogas (derived from Pavlostathis and Giraldo-Gomez, 1991 and El Mashad, 2003).
Figure 2. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). A schematic of suspended growth anaerobic digester.
Figure 3. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). A schematic of a plug flow digester.
Figure 4. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). A schematic of an anaerobic filter.
Figure 5. El Mashad and Zhang. (CC By 4.0). (2020). Relative growth rate of methanogens under psychrophilic, mesophilic and thermophilic conditions (derived from Lettinga, et al., 2001. https://www.sciencedirect.com/science/article/pii/S0167779901017012#FIG1)
Figure 6. El Mashad, H. and Zhang, R. (CC By 4.0). (2020). Relative activity of methanogenic archaea at different pH (derived from Speece, 1996; Khanal, 2008).
Figure 7. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). A schematic of an experimental set-up of a batch digester system.
Figure 8. Bioprocess Control. (2020). Experimental set-up of Automated Methane Potential Test System (AMPTS). Adapted from https://www.bioprocesscontrol.com/. [Fair Use].
Figure 9. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). Daily biogas production and cumulative biogas yield of cafeteria food waste.
Figure 10. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). Methane yield of selected organic wastes.
Figure 11. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). Concentration of biodegradable VS in the digester.
Figure 12. El Mashad, H. & Zhang, R. (CC By 4.0). (2020). Total cumulative methane production.
References
Angelidaki, I., & Sanders, W. (2004). Assessment of the anaerobic biodegradability of macropollutants. Rev. Environ. Sci. Biotechnol., 3(2), 117-129.
Batstone, D. J., Keller, J., Angelidaki, I., Kalyuzhnyi, S. V., Pavlostathis, S. G., Rozzi, A., . . . Vavilin, V. A. (2002). Anaerobic Digestion Model No. 1. IWA task group for mathematical modelling of anaerobic digestion processes. London: IWA Publishing.
Buswell, A. M., & Mueller, H. F. (1952). Mechanism of methane fermentation. Ind. Eng. Chem., 44(3), 550-552. https://doi.org/10.1021/ie50507a033.
Coke, C. (2018). Pipeline injection of biomethane in California. BioCycle, 59(3), 32. Retrieved from https://www.biocycle.net/2018/03/12/pipeline-injection-biomethane-california/.
Coombs, J. (1990). The present and future of anaerobic digestion. In A. Wheatley (Ed.), Anaerobic digestion: A waste treatment technology. Critical Rep. Appl. Chem., 31, 93-138.
El Mashad, H. M. (2003). Solar thermophilic anaerobic reactor (STAR) for renewable energy production. PhD thesis. The Netherlands: Wageningen University. Retrieved from edepot.wur.nl/121501.
Hobson, P. N., & Richardson, A. J. (1983). The microbiology of anaerobic digestion. In B. F. Pain & R. Q. Hepherd (Eds.), Anaerobic digestion of farm waste. Technical Bulletin 7. Reading, UK: The National Institute for Research in Dairying.
Khanal, S. K. (2008). Anaerobic biotechnology for bioenergy production: Principles and applications. Ames, IA: Wiley-Blackwell.
Kosaric, N., & Blaszczyk, R. (1992). Industrial effluent processing. In: J. Lederberg (Ed.), Encyclopedia of microbiology (Vol. 2, pp. 473–491). New York, NY: Academic Press.
Lettinga, G., Rebac, S., & Zeeman, G. (2001). Challenge of psychrophilic anaerobic wastewater treatment. Trends in Biotechnol. 19(9), 363-370. https://doi.org/10.1016/S0167-7799(01)01701-2.
Pavlostathis, S. G., & Giraldo-Gomez, E. (1991). Kinetics of anaerobic treatment: A critical review. Crit. Rev. Environ. Control, 21(5/6), 411-490. https://doi.org/10.1080/10643389109388424.
Speece, R. E. (1996). Anaerobic biotechnology for industrial wastewater treatments. Nashville, TN: Archae Press.
Stanier, R. Y., Doudoroff, M., & Adelberg, E. A. (1972). General microbiology. Englewood Cliffs, NJ: Prentice-Hall.
Stetter, K. O. (1996). Hyperthermophilic procaryotes. FEMS Microbiol. Rev., 18, 149-158. https://doi.org/10.1111/j.1574-6976.1996.tb00233.x.
Zhang, M. (2017). Energy and nutrient recovery from organic wastes through anaerobic digestion and digestate treatment. PhD diss. Davis, CA: University of California, Davis.
B. Brian He
Biological Engineering
University of Idaho
Moscow, ID, USA
Scott W. Pryor
Agricultural and Biosystems Engineering and College of Engineering
North Dakota State University, Fargo, ND, USA
Key Terms
Feedstocks, Conversion process, Properties, Chemistry, Process configuration, Storage and handling
Introduction
Biodiesel is the term given to a diesel-like fuel made from biologically derived lipid feedstocks, such as vegetable oils, animal fats, and their used derivatives such as waste cooking oils. Biodiesel is a renewable fuel that can be made from a diverse array of domestic feedstocks, has low safety concerns for use and handling, and can have relatively low environmental impact from production and use.
Biodiesel has several properties that make it a safer fuel than conventional petroleum-based diesel. While conventional diesel is categorized as a flammable fuel, biodiesel is rated as combustible, which means it has a low vapor pressure, is resistant to static sparks, and is much less likely to self-ignite during storage. During transportation, tankers carrying pure biodiesel are not required to display warning signs in the United States.
Biodiesel is especially of interest to farmers because of the potential for on-farm production using harvested crops. Oil can be extracted from oilseeds relatively easily, and this oil can then be used to make biodiesel to run farm machinery. It provides farmers an additional resource for economic welfare and an additional choice for managing cropland. In addition, using biodiesel from domestically grown feedstocks can decrease a country’s dependence on imported oil, thus enhancing national energy security. On the other hand, concerns are sometimes raised about converting oils and fats, which could serve as food resources, into fuels (Prasad and Ingle, 2019).
Biodiesel is typically considered an environmentally friendly fuel. Production and combustion of biodiesel results in less air pollution than using conventional diesel. According to a study sponsored by the U.S. Department of Agriculture and the Department of Energy, using biodiesel in urban buses can reduce total particulate matter (PM), carbon monoxide (CO) and sulfur oxides (SOx) by 32%, 35% and 8%, respectively (Sheehan et al., 1998).
The diesel engine is named for Rudolf Diesel, who invented it in the 1890s. Diesel’s engines could run on various fuels including vegetable oils. At the Paris Exposition in 1900, Diesel demonstrated his engines running on peanut oil and made this famous statement:
The use of vegetable oils for engine fuels may seem insignificant today. But such oils may become in course of time as important as petroleum and the coal tar products of the present time.
Diesel’s vision was valid in that vegetable oils can still be used directly as a fuel for diesel engines. However, raw vegetable oils without pre-processing are not an ideal fuel for modern diesel engines due to their high viscosity and other chemical properties. Burning raw vegetable oils in today’s diesel engines results in heavy carbon deposits in the cylinders, which can stall the engine in a short period of time.
To overcome this problem, research was conducted starting in the late 1930s to chemically process vegetable oils into a mixture of short-chained alkyl fatty acid esters. This fuel has a much lower viscosity and is thus better suited for use in diesel engines. During the petroleum crisis in the 1970s, the use of alkyl fatty acid esters as a fuel for diesel engines became more popular. Two decades later, in the 1990s, the name “biodiesel” was coined and gained popularity.
In the early 1980s, Mittelbach and his team at the Technical University of Graz in Austria were the first to research biodiesel as a diesel fuel. The commercialization of biodiesel started with a pilot biodiesel production facility by an Austrian company, Gaskoks, in 1987. The European Biodiesel Board (EBB), a non-profit organization promoting the use of biodiesel in Europe, was founded in 1997.
Biodiesel research and utilization in the U.S. started around the same time as in Europe. Dr. Charles Peterson and his research team at the University of Idaho conducted a series of research projects on the use of vegetable oil as tractor fuel. The team worked on biodiesel production, engine testing, emission assessment, and field utilization. The National Biodiesel Board (NBB) was founded in the U.S. in 1992 and has conducted health and environmental assessments on biodiesel utilization. The NBB also registered biodiesel with the U.S. Environmental Protection Agency (USEPA) as a substitute fuel for diesel engines. Supported by the NBB and the biodiesel research community, biodiesel was established as an industry sector. Total biodiesel production reached approximately 7.2 billion L in the USA in 2018 with an additional 39.4 billion L produced globally.
Although biodiesel can be used as a pure diesel-replacement fuel called B100, it is typically available as a diesel/ biodiesel blend at retail pumps. Biodiesel blends are designated to indicate a volumetric mixture such as B5 or B20 for 5% or 20% biodiesel, respectively, in conventional diesel.
Outcomes
After reading this chapter, you should be able to:
• Describe the advantages and limitations of using biodiesel in diesel-powered engines
• Describe biodiesel production processes
• Explain how biodiesel is similar to and different from conventional petroleum-based diesel
• Describe how feedstock composition and properties affect biodiesel properties
• Explain the important unit operations commonly used for producing biodiesel
• Calculate proportions of vegetable oil, methanol, and catalyst needed to make a given quantity of biodiesel, and the size of the reactor required for conversion
Concepts
Biodiesel Chemistry
To qualify as biodiesel in the U.S., a fuel must strictly comply with the ASTM definition of a “fuel comprised of mono-alkyl esters of long chain fatty acids derived from vegetable oils or animal fats, designated B100” (ASTM, 2015). It must also meet all of the quality parameters identified in that standard. In Europe, the definition of biodiesel is covered by the European standard EN 14214 (CEN, 2013). The generic name for vegetable oils (more generally plant oils) or animal fats is simply fat or lipid. The primary distinguishing factor between a fat and an oil is that a fat is solid at room temperature while an oil is liquid. The primary compounds in both oils and fats are a group of chemicals called triglycerides (Figure $\PageIndex{1a}$).
Glycerol (Figure $\PageIndex{1b}$), also known as glycerin, is a poly-hydric alcohol with three alcoholic hydroxyl groups (-OH). Pure glycerol is colorless, odorless, and hygroscopic. Fatty acids (Figure $\PageIndex{1c}$) are a family of carboxylic acids with relatively long carbon chains.
Triglycerides, also called triacylglycerols, are the glycerol esters of fatty acids, in which three fatty acids attach chemically to a glycerol carbon backbone where the hydroxyl (OH) groups are attached. Triglycerides in oils and fats may contain fatty acid chains of 10 to 24 carbons (C10-C24) but are most commonly 16 to 18 carbons (C16-C18) in length. The three fatty acids attached to the glycerol molecule can be the same or different. The alkyl chain length of fatty acids, the presence and number of double bonds contained in the fatty acid chains, and the position and orientation of the double bonds collectively determine the chemical and physical properties of the triglyceride. Some examples are provided in Table $1$.
Table $1$: Fatty acids commonly seen in oils and fats.
Abbreviation | Common Name | Formula | Chemical Structure | MW[1]
C12:0[2] | lauric acid | C12H24O2 | CH3(CH2)10COOH | 200.3
C14:0 | myristic acid | C14H28O2 | CH3(CH2)12COOH | 228.4
C16:0 | palmitic acid | C16H32O2 | CH3(CH2)14COOH | 256.5
C18:0 | stearic acid | C18H36O2 | CH3(CH2)16COOH | 284.5
C18:1 | oleic acid | C18H34O2 | CH3(CH2)7CH:CH(CH2)7COOH | 282.5
C18:2 | linoleic acid | C18H32O2 | CH3(CH2)3(CH2CH:CH)2(CH2)7COOH | 280.5
C18:3 | linolenic acid | C18H30O2 | CH3(CH2CH:CH)3(CH2)7COOH | 278.5
C20:0 | arachidic acid | C20H40O2 | CH3(CH2)18COOH | 312.6
C20:1 | eicosenoic acid | C20H38O2 | CH3(CH2)7CH:CH(CH2)9COOH | 310.5
C20:5 | eicosapentaenoic acid | C20H30O2 | CH3(CH2CH:CH)5(CH2)3COOH | 302.5
C22:1 | erucic acid | C22H42O2 | CH3(CH2)7CH:CH(CH2)11COOH | 338.6
[1] MW = molecular weight, g/mol
[2] Cx:y stands for a chain of x carbon atoms with y double bonds in that chain.
Biodiesel Properties
Biodiesel is a commercialized biofuel used by consumers around the globe. Several international standards have been developed and approved to assure engine manufacturers and diesel engine customers that biodiesel meets specified fuel quality requirements. As a commercial product, biodiesel must comply with the specifications defined by the ASTM Standard D6751 (ASTM, 2015) in North America or EN14214 (CEN, 2013) in Europe. Several other countries have also developed their own standards; in many cases, they are based on the ASTM and EN standards. Table $2$ summarizes the specifications for biodiesel fuel according to these two standards.
Biodiesel properties are affected by both the feedstock and the conversion process. Meeting the specification for all parameters in the relevant standard must be documented before a fuel can be marketed. However, some fuel properties are more critical than others in terms of use. In the USA, biodiesel sulfur content must be no more than 15 ppm for Grade S15, which qualifies as an ultra-low sulfur fuel, or 500 ppm for Grade S500. If virgin vegetable oils are used as the feedstock, the sulfur content of the biodiesel is typically very low. However, if used cooking oils or animal fats are used, the sulfur content of the biodiesel must be carefully monitored to meet the required specification.
Table $2$: Major specifications for biodiesel (B100).
Property | Units | ASTM D6751[a] | EN14214
Sulfur (15 ppm or lower level) (maximum) | ppm | 15 | [b]
Cold soak filterability (maximum) | sec | 200 (Grade 1B); 360 (Grade 2B) | [b]
Mono-glyceride (maximum) | % mass | 0.40 (Grade 1B); [b] (Grade 2B) | 0.8
Calcium & magnesium combined (maximum) | ppm (μg/g) | 5 | 5
Flash point (closed cup) (minimum) | °C | 93 | 101
Alcohol control (one of the following shall be met):
a) Methanol content (maximum) | % mass | 0.2 | 0.2
b) Flash point (minimum temperature) | °C | 130 | [b]
Water and sediment (maximum) | % volume | 0.050 | 0.005
Kinematic viscosity (40°C) | mm2/s | 1.9–6.0 | 3.5–5.0
Sulfated ash (maximum) | % mass | 0.02 | 0.02
Copper strip corrosion | | No. 3 | No. 1
Cetane number (minimum) | | 47 | 51
Cloud point | °C | Must be reported | [b]
Carbon residue (maximum) | % mass | 0.05 | 0.03
Acid number (maximum) | mg KOH/g | 0.50 | 0.5
Free glycerol (maximum) | % mass | 0.02 | 0.02
Total glycerol (maximum) | % mass | 0.24 | 0.25
Phosphorus content (maximum) | % mass | 0.001 | 0.001
Distillation temperature (90%) (maximum) | °C | 360 | [b]
Sodium and potassium combined (maximum) | ppm (μg/g) | 5 | 5
Oxidation stability (minimum) | hours | 3 | 6
[a] Grade refers to specification for monoglycerides and cold soak filterability. S15 indicates maximum sulphur content of 15 ppm.
[b] Not specified in the standard
A liquid fuel’s flash point refers to the lowest temperature at which its vapor will be combustible. Biodiesel has a high flash point, making it safe for handling and storage. The flash point, however, may drop if the residual alcohol from the biodiesel production process is inadequately removed. To maintain a high flash point, biodiesel alcohol content cannot be more than 0.2%. Cloud point and cold soak filterability are both properties relating to flowability at cold temperatures and are important for biodiesel use in relatively low temperature environments. Cloud point refers to the temperature at which dissolved solids begin to precipitate and reduce clarity. Cold soak filterability refers to how well biodiesel flows through a filter at a specified temperature (4.4°C). Biodiesel is limited in its use in colder climates because it typically has a much higher cloud point (−6°C to 0°C for rapeseed and soybean based biodiesel and up to 14°C for palm oil based biodiesel) than conventional No. 2 diesel (−28°C to −7°C). Generally, methyl esters of long-chain, saturated fatty acids have high cloud points, especially in comparison to conventional diesel fuel. Although there are commercial additives available for improving biodiesel cold flow properties, their effectiveness is limited. Cold flow properties can be a limiting factor related to the biodiesel blend used (e.g., B2 vs. B10 or B20) in colder climates or at colder times of the year.
The presence of monoglycerides in biodiesel is an indicator of incomplete feedstock conversion and can adversely affect fuel combustion in an engine. Monoglycerides also contribute to measurements of both total glycerine and free glycerol. Total glycerol should be 0.24% or lower to avoid injector deposits and fuel filter clogging problems in engine systems.
Biodiesel viscosity is significantly lower than that of vegetable oil but is higher than conventional diesel in most cases. Biodiesel viscosity varies based primarily on the fatty acid carbon chain length and level of saturation in the feedstock. Although specified biodiesel viscosity levels range from 2.8 to 6.1 mm2/s at 40°C, typical values are greater than 4 mm2/s at that temperature (Canakci and Sanli, 2008), while No. 2 conventional diesel has a specified viscosity range of 1.9–4.1 mm2/s at 40°C with typical values less than 3.0 mm2/s (ASTM, 2019).
Most biodiesel fuels have a higher cetane number than conventional diesel. Cetane number measures the ability of a fuel to ignite under pressure, and a high cetane number is generally advantageous for combustion in diesel engines. Typical values are approximately 45–55 for soybean-based biodiesel and 49–62 for rapeseed-based biodiesel. The higher cetane number of biodiesel is largely attributed to the long, unbranched carbon chains of the fatty acid esters; a higher degree of saturation in those chains raises the cetane number further. The acid number of biodiesel is an indication of free fatty acid content, which affects the oxidative and thermal stabilities of the fuel. To ensure biodiesel meets the acid number specification, feedstocks with high free fatty acid content must be thoroughly treated and the finished product adequately washed.
Mineral ash contents of combined calcium and magnesium, combined sodium and potassium, and carbon residue harm biodiesel quality by leading to abrasive engine deposits. Phosphorus content is also regulated closely because of its adverse impact on the catalytic converter. Good quality control practices are vital in controlling residual mineral content in biodiesel. Biodiesel stability can also be negatively affected by excess water and sediment resulting from inadequate refining, or from contamination during transport or storage. Biodiesel tends to absorb moisture from the air, making it susceptible to such contamination; it can absorb 15–25 times more moisture than conventional petroleum-based diesel (He et al., 2007). Excess water can be controlled by adequately drying the biodiesel after water washing, and through proper handling and storage of the fuel.
Biodiesel Feedstocks
The primary feedstocks for making biodiesel are vegetable oils and animal fats. Typical properties are given in Table $3$. The feedstocks for biodiesel production can be any form of triglycerides. The most commonly used feedstocks include soybean oil, rapeseed/canola oil, and animal fats. Used cooking oils and/or yellow/trap greases can also be used but may be better as supplements to a feedstock supply with more consistent quality and quantity. Feedstock choice for biodiesel production is generally based on local availability and price. Vegetable oils and/or animal fats all have existing uses and markets. The availability of each type of feedstock varies widely depending on current market conditions, and changes almost on a yearly basis. Before a biodiesel production facility is constructed, securing adequate feedstock supply is always the number one priority. Based on their availability, soybean oil and corn oil are the major feedstocks in the U.S., while rapeseed/canola oil is the most common feedstock used in Europe. Other major producing countries include Brazil and Indonesia which rely on soybean oil and palm oil, respectively.
Table $3$: Typical fatty acid composition of common oils and fats.[1]
Oils and Fats | Fatty Acid Profiles (% m/m), reported across the columns C12:0, C14:0, C16:0, C18:0, C18:1, C18:2, C18:3, C20:1
Plant Oils
Algae oil: 12–15 | 10–20 | 4–19 | 1–2 | 5–8 | 35–48[2]
Camelina: 12–15 | 15–20 | 30–40 | 12–15
Canola, general: 1–3 | 2–3 | 50–60 | 15–25 | 8–12
Canola, high oleic: 1–3 | 2–3 | 70–80 | 12–15 | 1–3
Coconut oil: 45–53 | 16–21 | 7–10 | 2–4 | 5–10 | 1–2.5
Corn: 1–2 | 8–16 | 1–3 | 20–45 | 34–65 | 1–2
Cottonseed: 0–2 | 20–25 | 1–2 | 23–35 | 40–50
Grape seed oil: 5–11 | 3–6 | 12–28 | 58–78
Jatropha: 11–16 | 6–15 | 34–45 | 30–50 | 3–5[4]
Flax (linseed) oil: 4–7 | 2–4 | 25–40 | 35–40 | 25–60
Mustard seed oil: 1–2 | 8–23 | 10–24 | 6–18 | 5–13 & 20–50[3]
Olive: 9–10 | 2–3 | 72–85 | 10–12 | 0–1
Palm oil: 0.5–2 | 39–48 | 3–6 | 36–44 | 9–12
Palm kernel oil: 45–55 | 14–18 | 6–10 | 1–3 | 12–19
Peanut: 8–9 | 2–3 | 50–65 | 20–30
Rapeseed (high erucic/oriental): 1–3 | 0–1 | 10–15 | 12–15 | 8–12 | 45–60[3] & 7–10[4]
Rapeseed (high oleic/canola): 1–5 | 1–2 | 60–80 | 16–23 | 10–15
Safflower (high linoleic): 3–6 | 1–3 | 7–10 | 80–85
Safflower (high oleic): 1–5 | 1–2 | 70–75 | 12–18 | 0–1
Sesame oil: 8–12 | 4–7 | 35–45 | 37–48
Soybean oil: 6–10 | 2–5 | 20–30 | 50–60 | 5–11
Soybean (high oleic): 2–3 | 2–3 | 80–85 | 3–4 | 3–5
Sunflower: 5–8 | 2–6 | 15–40 | 30–70
Sunflower (high oleic): 0–3 | 1–3 | 80–85 | 8–10 | 0–1
Tung oil: 3–4 | 0–1 | 4–15 | 75–90
Animal Fats
Butter: 7–10 | 24–26 | 10–13 | 28–31 | 1–3 | 0–1
Chicken fat: —
Lard: 1–2 | 25–30 | 10–20 | 40–50 | 6–12 | 0–1
Tallow: 3–6 | 22–32 | 10–25 | 35–45 | 1–3
[1] Compiled from various sources: Peterson et al., 1983; Peterson, 1986; Goodrum and Geller 2005; Dubois et al., 2007; Kostik et al., 2013; Knothe et al., 2015.
[2] C20:5
[3] C22:1
[4] C20:0
Compared to other oilseeds, soybeans have a relatively low oil content, typically 10–20% of the seed mass. However, soybean yields are relatively high, typically 2,500–4,000 kg/ha (2,200–3,600 lb/acre), and the U.S. and Brazil are the two largest soybean producers in the world. Due to the large production and trade of soybeans, approximately 11 million metric tons (24.6 billion lbs) of soybean oil were on the market in the 2016–2017 season; of that, 2.8 million metric tons (6.2 billion lbs) were used for biodiesel production (USDA ERS, 2018a).
In recent years, corn oil has been used increasingly and has become the second largest feedstock for making biodiesel in the U.S. Corn planted in the U.S. is mainly used for animal feed, corn starch or sweeteners, and for ethanol production. Corn oil can be extracted in a facility producing corn starch or sweeteners and is also increasingly being extracted from different byproducts of the ethanol industry. The total supply of corn oil in the U.S. was approximately 2.63 million metric tons (5.795 billion lbs) in 2017 (USDA ERS, 2018b). The quantity of corn oil used for biodiesel production was approximately 717,000 metric tons (1.579 billion lb), or approximately 10% of the total biodiesel market. Canola oil is the third largest feedstock with a use of approximately 659,000 metric tons (1.452 billion lbs) in 2017 (USDA EIA, 2018).
Rapeseed belongs to the Brassica family of oilseed crops. Original rapeseed, including the cultivars planted in China and India, contains very high contents of erucic acid and glucosinolates, chemicals undesirable in animal feed. Canola is a cultivar of rapeseed developed in Canada with very low erucic acid and glucosinolates contents. While the oilseed crop planted in Europe is still called rapeseed there, it is essentially the same plant called canola in North America. The yield of rapeseed in Europe is high, in the range of 2,000–3,500 kg/ha (1,800–3,100 lb/acre) and is planted almost exclusively for biodiesel production.
Other plant oils, including palm and coconut oil, can also be used for producing biodiesel and are especially popular in tropical nations due to very high oil yields per acre. Plant species with high oil yields, requiring low agricultural inputs and with the ability to grow on marginal lands, such as camelina and jatropha, are of particular interest and have been researched for biodiesel production. Oils from safflower, sunflower, and flaxseed can be used for making biodiesel, but their high value in the food industry makes them uneconomical for biodiesel production.
Some strains of microalgae have a high lipid content and are also widely researched and used to produce algal oil as a biodiesel feedstock. They are considered a promising feedstock because of their potential to be produced in an industrial facility rather than on agricultural land. Microalgae can be cultivated in open ponds, but high-oil strains may be better suited to production in closed photo-bioreactors. The potential yield of microalgal oil per unit land can be as high as 6,000 L/ha/y (approximately 640 gal/ac/y), more than 10 times that of canola or soybeans. Currently, however, microalgal lipids are not used for industrial biodiesel production because of their high production cost.
Like plant oils, animal fats contain similar chemical components and can be used directly for biodiesel production. In 2017, approximately 1.2 million metric tons (2.6 billion lbs) of used cooking oils and animal fats were used for biodiesel production in the U.S., accounting for 23% of the total used cooking oils and animal fats in the U.S. market (Swisher, 2018) and less than 20% of U.S. biodiesel production.
Conversion Process
Biodiesel is made by reacting triglycerides (the chemicals in oils and fats) with an alcohol. The chemical reaction is known as transesterification. In transesterification of oils and/or fats, which are the glycerol esters of fatty acids (Figure $2$), the glycerol needs to be transesterified by another alcohol, most commonly methanol. The three fatty acids (R1, R2, and R3) react with the alkyl groups of the alcohol to produce fatty acid esters, or biodiesel. Those fatty acids from the triglyceride are replaced by the hydroxyl groups from the alcohol to produce glycerol, a by-product. The glycerol can be separated from the biodiesel by gravity, but the process is typically accelerated through a centrifugation step. If methanol (CH3–OH) is used as the alcohol for the transesterification reaction, methyl groups attach to the liberated triglyceride fatty acids (Rx–CH3), as illustrated in Figure $2$. The resulting mixture after glycerol separation is referred to as fatty acid methyl esters (or FAME as commonly called in Europe), and biodiesel after further refining. Without the glycerol skeleton, the mixture of FAME is much less viscous than the original vegetable oil or animal fat, and its fuel properties are suitable for powering diesel engines.
The transesterification of oils and fats involves a series of three consecutive reactions. Each fatty acid group is separated from the glycerol skeleton and transesterified individually. The intermediate products are diglycerides (when two fatty acid groups remain on the glycerol backbone) and monoglycerides (when one fatty acid group remains on the glycerol backbone). Transesterification reactions are also reversible. The diglyceride and monoglyceride intermediate products can react with a free fatty acid and reform triglycerides and diglycerides, respectively, under certain conditions. The degree of reverse reaction depends on the chemical kinetics of transesterification and the reaction conditions. In practical application, approximately twice the stoichiometric methanol requirement is added in order to drive the forward reactions and to ensure more complete conversion of oils and fats into biodiesel. The excess methanol can be recovered and purified for reuse in the system.
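As a rough planning aid, the methanol dose can be estimated from this stoichiometry: three moles of methanol per mole of triglyceride, doubled to provide approximately 100% excess as described above. The Python sketch below is illustrative; the 1,000 kg oil mass is an assumption, and the oil molecular weight of 873.7 kg/kmol anticipates the soybean oil value computed in Example 2.

```python
# Sketch of the methanol requirement for transesterification:
# 3 mol methanol per mol triglyceride, with a user-specified excess.

MW_METHANOL = 32.04   # kg/kmol

def methanol_needed(oil_kg, mw_oil, excess=1.0):
    """Mass of methanol (kg) for a given oil mass; excess=1.0 means 100% excess."""
    kmol_oil = oil_kg / mw_oil                  # kmol of triglyceride
    kmol_meoh = 3 * kmol_oil * (1 + excess)     # kmol of methanol including excess
    return kmol_meoh * MW_METHANOL

# Assumed batch: 1,000 kg of soybean oil (MW ~873.7 kg/kmol, see Example 2)
print(methanol_needed(oil_kg=1_000, mw_oil=873.7))   # ~220 kg of methanol
```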
The density of vegetable oil at 25°C is in the range of 903–918 kg/m3 (7.53–7.65 lb/gal) depending on the specific feedstock (Forma et al., 1979). The density of biodiesel is approximately 870–880 kg/m3 (7.25–7.34 lb/gal) (Pratas et al., 2011). Comparison reveals that vegetable oil is approximately 4% heavier than biodiesel. While planning for biodiesel production, it is an acceptable assumption that each volume of biodiesel produced requires an equal volume of vegetable oil.
To calculate the exact volume of chemicals (i.e., reactant methanol and catalyst) needed for the transesterification, the molecular weight of the vegetable oil is needed. However, as seen from Table $3$, vegetable oils vary in fatty acid composition depending on oil source and even on the specific plant cultivar. There is no defined molecular weight for all vegetable oil, but an average molecular weight is used for calculations. Based on the hydrolysis of fatty acid esters of glycerol, the molecular weight of vegetable oil (a mixture of fatty acid glycerol esters), MWave, can be calculated as:
$MW_{ave}=MW_{gly}\ -\ 3MW_{water} \ +\ 3MW_{ave,FA}$
where MWgly = molecular weight of glycerol = 92.09 kg/kmol
MWwater = molecular weight of water = 18.02 kg/kmol
MWave,FA = average molecular weight of fatty acids in the oil
The water is subtracted in the equation because the three fatty acids join the glycerol molecule through condensation reactions that release three water molecules. The opposite reaction, hydrolysis, splits the fatty acids from the glycerol by incorporating the elements of water into the products. The overall average molecular weight of vegetable oil fatty acids is calculated as:
$\frac{1}{MW_{ave,FA}}=\sum \frac{C_{i,FA}}{MW_{i,FA}}$
where Ci,FA = mass fraction of a particular fatty acid
MWi,FA = molecular weight of that particular fatty acid
The difference between the weight of the methyl group (–CH3; 15 kg/kmol) and that of the hydrogen atom (–H; 1 kg/kmol) on the carboxyl group of fatty acids is 14 atomic mass units. To find the average molecular weight of fatty acid methyl esters (FAME) or biodiesel, MWave,FAME, the following formula can be used:
$MW_{ave,FAME}=MW_{ave,FA} \ + \ 14$
Use of a Catalyst
The transesterification reaction will occur even at room temperature if a vegetable oil is mixed with methanol, but would take an extraordinarily long time to approach equilibrium conditions. A catalyst and elevated temperatures are typically used to help the reaction move forward and dramatically reduce the reaction time. The catalysts suitable for transesterification of oils and fats are either strong acids or strong bases; the latter are most commonly used, especially for virgin vegetable oils. Sodium hydroxide (NaOH) and potassium hydroxide (KOH) are inexpensive choices for use as base catalysts; they are typically available commercially as solid flakes or pellets. Before being used as a catalyst for transesterification, the solid form of NaOH or KOH needs to be prepared by reacting with methanol to form a homogenous solution. This dissolving process is a chemical reaction to form soluble methoxide (–OCH3), as shown in Figure $3$.
The methoxide is the active catalytic species in the system. Therefore, solutions of sodium methoxide (NaOCH3) or potassium methoxide (KOCH3) in methanol are the preferred form of the catalyst for large continuous-flow biodiesel production. Solutions of NaOCH3 or KOCH3 in methanol are commercially available in 25–30% concentrations.
Other Factors Affecting Conversion
Note in Figure $3$ that one mole of water is formed per mole of KOH reacted. Water in the transesterification of oils and/or fats is undesirable because it potentially leads to the hydrolysis of triglycerides to free fatty acids, which in turn react with the base catalyst, either KOH or KOCH3, to form soap. This soap-making process is called saponification (Figure $4$). Soap in the system will cause the reaction mixture to form a uniform emulsion, making the separation of biodiesel from its by-product glycerol impossible. Therefore, special attention is needed to avoid significant soap formation. Thus, prepared methoxide is preferred to hydroxide as the catalyst for use in biodiesel production, so water can be minimized in the system.
Transesterification of oils and/or fats requires a catalyst for realistic conversion rates, but the reaction will still take up to eight hours to complete if it is carried out at room temperature. Therefore, the process temperature also plays a very important role in the reaction rate, and higher reaction temperatures reduce the required reaction time. When the reaction temperature is maintained at 40°C (104°F), the time for complete transesterification can be shortened to 2–4 hours. If the reaction temperature is at 60°C (140°F), the time can be reduced even further to 1–2 hours for a batch reactor. The highest reaction temperature that can be applied under atmospheric pressure is limited by the boiling temperature of methanol, 64.5°C (148°F). Typical reaction temperatures for transesterification of oils and fats in large batch operations are in the range of 55–60°C (130–140°F). Higher temperatures can be used but require a closed system under pressure.
There are situations in which high amounts of free fatty acids (higher than 3% on a mass basis) exist naturally in feedstocks, such as used vegetable oils and microalgal lipids. To transesterify feedstocks with high free fatty acid content, direct application of base catalysts, either as hydroxide (–OH) or methoxide (–OCH3), is not recommended because of the increased likelihood of soap formation. Instead, a more complicated two-step transesterification process is used. In the first step, a strong acid, such as sulfuric acid (H2SO4), is used as a catalyst to convert most of the free fatty acids to biodiesel via a chemical process called esterification (Figure $5$). In the second step, a base catalyst is used to convert the remaining feedstock (mainly triglycerides) to biodiesel.
Safe Handling of Chemicals in Biodiesel Production
Conversion of oils and/or fats to biodiesel is a chemical reaction, so a good understanding of the process chemistry, safe chemical processing practices, and all regulations is necessary to ensure safe and efficient biodiesel production. First aid stations must be in place in biodiesel laboratories and production facilities. Although biodiesel itself is a safe product to handle, some of the components involved in production can be hazardous. The chemicals in biodiesel production can include methanol, sodium or potassium hydroxide, and sulfuric acid, all of which have safety concerns related to storage and use. Extreme caution must be practiced in handling these chemicals during the whole process of biodiesel production. The appropriate Material Safety Data Sheets for all chemicals used should be reviewed and followed to maintain personal and environmental safety.
Applications
Biodiesel Production Systems
The fundamental unit operations for transesterification of a feedstock with low free fatty acid content, such as virgin soybean or canola oil, using KOH as catalyst are illustrated in Figure $6$. The catalyst solution is prepared by reacting it with methanol, in the case of hydroxide flakes, or by mixing it with a measured amount of methanol, in the case of methoxide solution, in a mixer. The prepared catalyst/methanol solution is added to the vegetable oil/fat in the reactor under gentle agitation. The reactor may be an individual or a series of stirred tanks, or some other reactor type. As discussed above, the transesterification reaction typically takes place in 1–2 hours at 55–60°C (130–140°F).
Crude glycerol is the term used for the glycerol fraction after initial separation. It contains residual methanol, catalyst, and a variety of other chemical impurities carried over from the triglyceride feedstock. Crude glycerol is either refined on site or sold to a market for further processing. Although there are many uses of glycerol in industries from food to cosmetics to pharmaceuticals, the economics of refining severely limits its use. The grey water from biodiesel washing is a waste product containing small quantities of methanol, glycerol, and catalyst. It needs adequate treatment before it can be discharged to a municipal wastewater system.
Process Configuration
Biodiesel can be produced in a batch, semi-continuous, or continuous process. The economics of process configuration are largely dependent on production capacity. Batch processes require less capital investment and are easier to build. A major advantage of batch processing is the flexibility to accommodate variations in types and quantities of feedstock. Challenges of batch processing include lower productivity, higher labor needs, and inconsistent fuel quality. Continuous-flow biodiesel production processes can be scaled more easily and are preferred by larger producers. In continuous-flow processes, fuel quality is typically very consistent. The higher initial capital costs, including costs for complicated process control and process monitoring, are mitigated in large operations by greater throughput and higher quality product. As a result, the net capital and operating costs per unit product is less than that of batch processes. The types of reactors for transesterification can be simple stirred tanks for batch processes and continuously stirred tank reactors (CSTR) for continuous-flow processes.
Upon completion of the reaction, the product mixture passes to a separator, which can be a decanter for a batch process or a centrifuge for a continuous-flow system. The crude glycerol, which is denser than the biodiesel layer, is removed. Any residual catalyst in the biodiesel layer is then neutralized by a controlled addition of an acid solution. In the same unit, most of the excess methanol and some residual glycerol are concentrated in the aqueous acid layer and withdrawn to a methanol recovery unit, where the methanol is concentrated, purified, and recirculated for reuse.
The neutralized biodiesel layer is washed by gentle contact with softened water to further remove residual methanol and glycerol. The washed biodiesel layer is dried by heating to approximately 105°C (220°F) until all moisture is volatilized. The finished biodiesel is tested for quality before being transferred to storage tanks for end use or distribution.
Biodiesel Storage and Utilization
Biodiesel has relatively low thermal and oxidative stabilities. This is due to the unsaturated double bonds contained in the oil and fat feedstocks. Therefore, biodiesel should be stored in cool, light-proof containers, preferably in underground storage facilities. The storage containers should be semi-sealed to minimize air exchange with the environment, reducing the possibility of oxidation and moisture absorption of biodiesel. Where permitted, the headspace of the storage containers can be filled with nitrogen to prevent biodiesel from coming into contact with oxygen. If biodiesel will be stored for longer than six months before use, adding a biocide and a stability additive is necessary to avoid microbial activity in the biodiesel. Biodiesel storage and transportation containers should not be made of aluminum, bronze, copper, lead, tin, or zinc because contact with these types of metals will accelerate degradation. Containers made of steel, fiberglass, fluorinated polyethylene, or Teflon can be used.
Biodiesel is a much stronger solvent than conventional diesel. Storage tanks for conventional diesel may have organic sludge build-up in them. If using such tanks for biodiesel storage, they should be thoroughly cleaned and dried to prevent the sludge from being dissolved by biodiesel and potentially causing problems to fuel lines and fuel filters. Similar problems can occur when using biodiesel in older engines with petroleum residues in fuel tanks or transfer lines. For more information on handling and storing biodiesel, readers are recommended to consult “Biodiesel Handling and Use Guide” (5th ed.) prepared by the National Renewable Energy Laboratory of the U.S. Department of Energy (Alleman et al., 2016).
Examples
Example $1$: Volumes of soybean oil for biodiesel production
Problem:
Last year, a farmer used a total of 13,250 L of diesel fuel to run the farm’s machinery and trucks. After attending a workshop on using biodiesel on farms for both economic and environmental benefits, the farmer has decided to use a B20 blend of biodiesel in all the farm’s vehicles. The average annual yield of soybeans on the farm is 2,800 kg/ha. The soybeans contain 18.5% oil on a mass basis, and the efficiency of soybean oil extraction through mechanical pressing is approximately 80%. The density of soybean oil is 916 kg/m3.
Answer the following questions to help the farmer develop the details needed:
(a) How much pure biodiesel (B100) is needed to run the farm’s vehicles using a B20 blend (i.e., a mixture of 20% biodiesel and 80% conventional diesel on a volume basis)?
(b) How much soybean oil is needed to produce sufficient B100 to blend with conventional diesel?
(c) What field area will yield enough soybeans for the needed quantity of oil?
Solution
(a) Given that the farmer uses 13,250 L of diesel fuel yearly, if 20% of the quantity is replaced by biodiesel, the quantity of pure biodiesel must be:
$13,250 \ \text{L} \times0.20=2,650 \ \text{L}$
The farmer will still need to purchase conventional diesel fuel, which is 80% of the total consumption:
$13,250 \ \text{L} \times0.80=10,600 \ \text{L}$
Therefore, 2,650 L of pure biodiesel (B100) is needed to blend with 10,600 L of conventional diesel to make a total of 13,250 L of a B20 blend for the farm’s vehicles.
(b) As an estimate of how much soybean oil (in kg) is needed, each volume of biodiesel requires approximately one volume of soybean oil (or other oil) to produce it, as noted in the Conversion Process section. Therefore, the initial estimate for the quantity of soybean oil is the same as the required quantity of pure biodiesel, i.e., 2,650 L of soybean oil.
Calculate the mass quantity of soybean oil by multiplying the volume of soybean oil by the density of soybean oil (916 kg/m3 or 0.916 kg/L):
$2,650 \ \text{L} \times0.916 \frac{\text{kg}}{\text{L}}=2,427 \ \text{kg}$
(c) The given soybean yield is 2,800 kg/ha, the oil content of soybean is 18.5%, and the oil extraction efficiency is 80%. Therefore, each ha planted in soybean will yield:
$(2,800 \ \text{kg})(0.185)(0.80) = 414.4 \ \text{kg of soybean oil}$
The area of soybean field needed to produce the 2,427 kg of soybean oil is:
$2,427 \ \text{kg} \div 414.4 \ \text{kg/ha} = 5.86 \ \text{ha}$
In summary, the farmer needs to plant at least 5.86 ha of soybeans to have enough soybean oil to produce the biodiesel needed to run the farm’s vehicles.
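The chain of calculations in this example is easy to script. Below is a minimal C++ sketch of the same arithmetic (all inputs are from the problem statement; the variable names are illustrative only):

```cpp
// Sketch of the Example 1 chain: annual diesel use -> B100 requirement
// -> soybean oil mass -> field area. Inputs are from the problem statement.
#include <cstdio>

int main() {
    const double annualFuel = 13250.0;   // L of fuel used per year
    const double blendFraction = 0.20;   // B20: 20% biodiesel by volume
    const double oilDensity = 0.916;     // kg/L
    const double yield = 2800.0;         // kg soybeans per ha
    const double oilContent = 0.185;     // oil mass fraction of soybeans
    const double extraction = 0.80;      // mechanical-press extraction efficiency

    double b100 = annualFuel * blendFraction;          // 2,650 L of pure biodiesel
    double oilMass = b100 * oilDensity;                // ~2,427 kg (1 L oil ~ 1 L biodiesel)
    double oilPerHa = yield * oilContent * extraction; // 414.4 kg oil per ha
    double area = oilMass / oilPerHa;                  // ~5.86 ha

    printf("B100: %.0f L, oil: %.0f kg, area: %.2f ha\n", b100, oilMass, area);
    return 0;
}
```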
Example $2$
Example 2: Average molecular weight of soybean oil
Problem:
The farmer had the farm’s soybean oil analyzed by a commercial laboratory via a gas chromatographic analysis and obtained the following fatty acid profile on a mass basis:
| | Palmitic (C16:0) | Stearic (C18:0) | Oleic (C18:1) | Linoleic (C18:2) | Linolenic (C18:3) |
|---|---|---|---|---|---|
| Profile | 9% | 4% | 22% | 59% | 6% |
| MWi,FA (kg/kmol) | 256.5 | 284.5 | 282.5 | 280.5 | 278.5 |
(a) What is the average molecular weight of the soybean oil?
(b) What is the average molecular weight of biodiesel from this soybean oil?
Solution
(a) First, calculate the average molecular weight of fatty acids (MWave,FA) in the soybean oil using Equation 1.3.2:
$\frac{1}{MW_{ave,FA}} = \sum\frac{C_{i,FA}}{MW_{i,FA}}$ (Equation $2$)
$\frac{1}{MW_{ave,FA}} = \frac{9 \%}{256.5}+\frac{4 \%}{284.5}+\frac{22 \%}{282.5}+\frac{59 \%}{280.5}+\frac{6 \%}{278.5}$
$= \frac{0.09}{256.5}+\frac{0.04}{284.5}+\frac{0.22}{282.5}+\frac{0.59}{280.5}+\frac{0.06}{278.5}$
$= 0.003589 \ \text{kmol/kg}$
Therefore, MWave,FA = 1 / 0.003589 = 278.6 kg/kmol.
Next, calculate the average molecular weight of the soybean oil using Equation 1.3.1:
$MW_{ave}=MW_{gly}-3MW_{water}+3MW_{ave,FA}$ (Equation $1$)
$= [92.09 - (3 \times18.02)+(3 \times278.6)] \ \frac{\text{kg}}{\text{kmol}}$
Therefore, MWave = 873.7 kg/kmol.
(b) Calculate the average molecular weight of biodiesel using Equation 1.3.3:
$MW_{ave,FAME}=MW_{ave,FA}+14$ (Equation $3$)
$=(278.6+14) \ \frac{\text{kg}}{\text{kmol}}$
Therefore, MWave,FAME = 292.6 kg/kmol.
In summary, the average molecular weights of the soybean oil and biodiesel are 873.7 and 292.6 kg/kmol, respectively.
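Equations 1.3.1 through 1.3.3 translate directly into a few lines of code. The following C++ sketch reproduces this example (the fatty acid profile is from the problem statement; printed values agree with the worked results within rounding):

```cpp
// Average molecular weights of soybean oil and its biodiesel from a
// fatty acid profile (Equations 1.3.2, 1.3.1, and 1.3.3, in that order).
#include <cstdio>

int main() {
    // Mass fractions and molecular weights (kg/kmol): C16:0, C18:0, C18:1, C18:2, C18:3
    const double fraction[] = {0.09, 0.04, 0.22, 0.59, 0.06};
    const double mw[]       = {256.5, 284.5, 282.5, 280.5, 278.5};

    // Equation 1.3.2: 1/MW_ave,FA = sum(C_i,FA / MW_i,FA)
    double inverseMw = 0.0;
    for (int i = 0; i < 5; ++i)
        inverseMw += fraction[i] / mw[i];
    double mwFa = 1.0 / inverseMw;                      // ~278.6 kg/kmol

    // Equation 1.3.1: MW_ave = MW_glycerol - 3*MW_water + 3*MW_ave,FA
    double mwOil = 92.09 - 3.0 * 18.02 + 3.0 * mwFa;    // ~873.7-873.9, rounding dependent

    // Equation 1.3.3: MW_ave,FAME = MW_ave,FA + 14
    double mwFame = mwFa + 14.0;                        // ~292.6 kg/kmol

    printf("MW fatty acids: %.1f kg/kmol\n", mwFa);
    printf("MW soybean oil: %.1f kg/kmol\n", mwOil);
    printf("MW biodiesel:   %.1f kg/kmol\n", mwFame);
    return 0;
}
```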
Example $3$
Example 3: Chemicals in converting soybean oil to biodiesel
Problem:
As determined in Example 1.3.1, the farmer needs to produce 2,650 L of pure biodiesel (B100; density = 880 kg/m3) to run the farm’s vehicles and machinery with B20 blends. In converting soybean oil into biodiesel, the methanol (CH3OH, molecular weight = 32.04 kg/kmol) application rate needs to be 100% more than the stoichiometrically required rate to ensure a complete reaction. The application rate of the potassium hydroxide catalyst (KOH, molecular weight = 56.11 kg/kmol) is 1% of the soybean oil on a mass basis. How much methanol and potassium hydroxide, in kg, are needed to produce the required biodiesel? The average molecular weights of the soybean oil and biodiesel are 873.7 and 292.6 kg/kmol, respectively.
Solution
First, write out the transesterification of soybean oil to biodiesel with known molecular weights (MW) (similar to Figure 1.3.2):
Next, convert the quantity of biodiesel from volume to mass by the biodiesel density, 880 kg/m3 = 0.880 kg/L:
$2,650 \text{ L} \times 0.880 \frac{\text kg}{\text L} = 2,332 \text{ kg}$
Next, calculate the amount of methanol from the stoichiometric ratio of the transesterification reaction.
methanol : biodiesel
• The stoichiometric mass ratio 3 × 32.04 : 3 × 292.6
• The unknown mass ratio (kg) M : 2,332
• Or (3 × 292.6) × M = (3 × 32.04) × 2,332
Therefore, the quantity of methanol is
$M = (3 \times 32.04)\times2,332\text{kg} / (3 \times 292.6) =255.5 \text{kg}$
Next, calculate the total amount of methanol with 100% excess, as required:
$M’ = 2M = 2 \times 255.5 = 511 \text{ kg}$
Finally, calculate the quantity of catalyst KOH needed. Since the application rate of the catalyst KOH is 1% of the soybean oil, before the quantity of KOH can be calculated, the quantity of soybean oil must be obtained from the stoichiometric ratio of the transesterification reaction.
soybean oil : biodiesel
• The stoichiometric mass ratio 873.7 : 3 × 292.6
• The unknown mass ratio (kg) S : 2,332
• Or (3 × 292.6) × S = 873.7 × 2,332 kg
The quantity of soybean oil is, then:
$S = 873.7 \times 2,332 \text{ kg} / (3 \times 292.6) = 2,321 \text{ kg}$
Therefore, the quantity of catalyst KOH is calculated as 1% of the oil:
$2,321 \text{ kg} \times 0.01 = 23.2 \text{ kg}$
In summary, the quantities of methanol and potassium hydroxide are 511 kg and 23.2 kg, respectively.
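The stoichiometric bookkeeping in this example can also be checked with a short C++ sketch (all inputs are carried over from Examples 1.3.1 and 1.3.2):

```cpp
// Methanol and KOH requirements for transesterifying soybean oil.
// Stoichiometry: 1 mol oil + 3 mol methanol -> 3 mol FAME + 1 mol glycerol.
#include <cstdio>

int main() {
    const double biodieselVolume = 2650.0;   // L (from Example 1.3.1)
    const double biodieselDensity = 0.880;   // kg/L
    const double mwMethanol = 32.04;         // kg/kmol
    const double mwFame = 292.6;             // kg/kmol (from Example 1.3.2)
    const double mwOil = 873.7;              // kg/kmol (from Example 1.3.2)

    double biodieselMass = biodieselVolume * biodieselDensity;   // 2,332 kg

    double methanol = (3.0 * mwMethanol) * biodieselMass / (3.0 * mwFame);
    double methanolWithExcess = 2.0 * methanol;                  // 100% excess -> ~511 kg

    double oilMass = mwOil * biodieselMass / (3.0 * mwFame);     // ~2,321 kg
    double koh = 0.01 * oilMass;                                 // 1% of oil mass -> ~23.2 kg

    printf("Biodiesel: %.0f kg, methanol (2x): %.0f kg, KOH: %.1f kg\n",
           biodieselMass, methanolWithExcess, koh);
    return 0;
}
```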
Image Credits
Figure 1. He, B. (CC By 4.0). (2020). Chemical structure of triglycerides, glycerol, and fatty acids. R, R1, R2, and R3 represent alkyl groups typically with carbon chain lengths of 15–17 atoms.
Figure 2. He, B. (CC By 4.0). (2020). Transesterification of triglycerides with methanol. R1, R2, and R3 are alkyl groups in chain lengths of, most commonly, 15–17 carbons.
Figure 3. He, B. (CC By 4.0). (2020). Chemical reaction between methanol and potassium hydroxide to form potassium methoxide.
Figure 4. He, B. (CC By 4.0). (2020). Saponification between potassium hydroxide and a fatty acid.
Figure 5. He, B. (CC By 4.0). (2020). Esterification of a fatty acid reacting with methanol (in the presence of an acid catalyst) to yield a methyl ester and water.
Figure 6. He, B. (CC By 4.0). (2020). Schematic illustration of a biodiesel production system.
The chemical formula in Example 3. He, B. (CC By 4.0). (2020).
References
Alleman, T. L., McCormick, R. L., Christensen, E. D., Fioroni, G., Moriarty, K., & Yanowitz, J. (2016). Biodiesel handling and use guide (5th ed.). Washington, DC: National Renewable Energy Laboratory, U.S. Department of Energy. DOE/GO-102016-4875. https://doi.org/10.2172/1332064.
ASTM. (2015). D6751-15ce1: Standard specification for biodiesel fuel blend stock (B100) for middle distillate fuels. West Conshohocken, PA: ASTM Int. https://doi.org/10.1520/D6751-15CE01.
ASTM, D975. (2019). D975-19c: Standard specification for diesel fuel. West Conshohocken, PA: ASTM Int. https://doi.org/10.1520/D0975-19C.
Canakci, M., & Sanli, H. (2008). Biodiesel production from various feedstocks and their effects on the fuel properties. J. Ind. Microbiol. Biotechnol., 35(5), 431-441. https://doi.org/10.1007/s10295-008-0337-6.
CEN. (2013). EN14214+A1 Liquid petroleum products—Fatty acid methyl esters (FAME) for use in diesel engines and heating applications—Requirements and test methods. Brussels, Belgium: European Committee for Standardization.
Dubois, V., Breton, S., Linder, M., Fanni, J., & Parmentier, M. (2007). Fatty acid profiles of 80 vegetable oils with regard to their nutritional potential. European J. Lipid Sci. Technol., 109(7), 710-732. https://doi.org/10.1002/ejlt.200700040.
Forma, M. W., Jungemann, E., Norris, F. A., & Sonntag, N. O. (1979). Bailey’s industrial oil and fat products. D. Swern (Ed.), (4th ed., Vol. 1, pp 186-189). New York, NY: John Wiley & Sons.
Goodrum, J. W., & Geller, D. P. (2005). Influence of fatty acid methyl esters from hydroxylated vegetable oils on diesel fuel lubricity. Bioresour. Technol., 96(7), 851-855. https://doi.org/10.1016/j.biortech.2004.07.006.
He, B. B., Thompson, J. C., Routt, D. W., & Van Gerpen, J. H. (2007). Moisture absorption in biodiesel and its petro-diesel blends. Appl. Eng. Agric., 23(1), 71-76. https://doi.org/10.13031/2013.22320.
Knothe, G., Krahl, J., & Van Gerpen, J. (2015). The biodiesel handbook (2nd ed.). AOCS Publ.
Kostik, V., Memeti, S., & Bauer, B. (2013). Fatty acid composition of edible oils and fats. J. Hygienic Eng. Design, 4, 112-116. Retrieved from http://eprints.ugd.edu.mk/11460/1/06.%20Full%20paper%20-%20Vesna%20Kostik%202.pdf.
Peterson, C. L. (1986). Vegetable oil as a diesel fuel: Status and research priorities. Trans. ASAE, 29(5), 1413-1422. https://doi.org/10.13031/2013.30330.
Peterson, C. L., Wagner, G. L., & Auld, D. L. (1983). Vegetable oil substitutes for diesel fuel. Trans. ASAE, 26(2), 322-327. https://doi.org/10.13031/2013.33929.
Prasad, S., & Ingle, A. P. (2019). Chapter 12. Impacts of sustainable biofuels production from biomass. In M. Rai, & A. P. Ingle (Eds.), Sustainable bioenergy—Advances and impacts (pp. 327-346). Cambridge, MA: Elsevier. https://doi.org/10.1016/B978-0-12-817654-2.00012-5.
Pratas, M. J., Freitas, S. V., Oliveira, M. B., Monteiro, S. l. C., Lima, A. l., & Coutinho, J. A. (2011). Biodiesel density: Experimental measurements and prediction models. Energy Fuels, 25(5), 2333-2340. https://doi.org/10.1021/ef2002124.
Sheehan, J., Camobreco, V., Duffield, J., Graboski, M., & Shapouri, H. (1998). Life cycle inventory of biodiesel and petroleum diesel for use in an urban bus. NREL/SR-580-24089. Golden, CO: National Renewable Energy Laboratory. https://doi.org/10.2172/658310.
Swisher, K. (2018). U.S. market report: Fat usage up but protein demand down. Render Magazine. Retrieved from http://www.rendermagazine.com/articles/.
USDA EIA. (2018). Monthly biodiesel production report. Table 3, U.S. inputs to biodiesel production. Washington, DC: USDA EIA. Retrieved from www.eia.gov/biofuels/biodiesel/production/.
USDA ERS. (2018a). U.S. bioenergy statistics. Table 6, Soybean oil supply, disappearance and share of biodiesel use. Washington, DC: USDA ERS. Retrieved from https://www.ers.usda.gov/data-products/us-bioenergy-statistics/.
USDA ERS. (2018b). U.S. bioenergy statistics. Table 7, Oils and fats supply and prices, marketing year. Washington, DC: USDA ERS. Retrieved from https://www.ers.usda.gov/data-products/us-bioenergy-statistics/.
Shahab Sokhansanj, Ph.D., P.Eng.
University of Saskatchewan
Saskatoon, Saskatchewan, Canada
Key Terms
Field capacity, Material capacity, Bale density, Bale storage, Round baling, Square baling
Introduction
An important issue in a biomass-based bioenergy system is the transportation of feedstock from the field to the processing facility. Baling, the dense packing of biomass into a manageable form, is important because it is an energy-consuming process that determines the efficiency of the bioenergy system. Bale density is the most important factor influencing the logistics (number of vehicles, storage volume, duration of use) and cost (labor and energy) of harvesting and delivering biomass to a biorefinery. Unless the biomass is packed to sufficient density, the energy required for transport may exceed the energy released by the bioconversion processes. This chapter discusses two types of balers, round and square; the relationship between biomass density and the energy required to make bales; and the pros and cons of the different bale types. The chapter also discusses proper methods for handling bales in order to minimize dry matter losses during storage.
Concepts
The contribution of energy and power to the quality of life is indispensable. Countries that enjoy a high quality of life are the ones that consume the most energy per capita. Fuels generate power to run factories, to mobilize motorized transport, and to heat and cool buildings. Until the sixteenth century, most energy came either directly from the sun or indirectly by burning biomass, mostly wood and other plant material. The introduction of coal brought a new era in industrial development. By the nineteenth century, oil and gas revolutionized industrial development.
The development of agriculture happened in parallel with the exploitation of new sources of energy. Farmers abandoned back-breaking farming practices and adopted powered equipment like tractors and cultivators. Farmers who had used hand tools to cut and stack a crop in the field started to use machines that were able to do these tasks more efficiently. Large land preparation equipment, fertilizer applicators, and crop protection equipment, along with new harvest, handling, and transport equipment, were developed. This was possible because of fossil fuel products like gasoline and diesel.
Fossil fuels have powered mechanization to produce ample food and clothing for humankind to this day. Unfortunately, the use of fossil fuels resulted in some unexpected consequences. The additional carbon dioxide (CO2) and other gases released from fossil fuel combustion increased the concentration of greenhouse gases in the atmosphere, contributing to climate change. With time, fossil fuels will become expensive for farmers because of limited availability and policy-based penalties for unwanted polluting emissions. Focus has therefore increased on renewable energy sources that can displace fossil fuels. Biomass, for example, can be used much more efficiently than by conventional burning, as a feedstock for producing biofuels.
The farm equipment manufacturing industry has developed a number of machines for harvesting and post-harvest handling of grains, fruits, and vegetables. Residues such as straws and leaves have traditionally had little financial value, so the industry had not developed many machines to exploit whole crops or residues, instead focusing on extracting only the valuable parts of crops, like grain and fruit. The remaining parts of the crop, such as straw, leaves, and branches, were left on the field mostly unused.
Since the late twentieth century, there has been a demand for equipment to collect and package straws, grasses, and whole plants, which coincided with other developments such as restrictions on burning residues (because of air quality) and the operation of conservation tillage systems. The farm equipment industry has developed equipment, such as balers, to gather whole plants, straws, and grasses into round or square packages of much higher density than can be achieved by passive stacking of the material. The dense bales take less space for storing and transporting biomass.
Densification
Baling is the most used method for on-farm densifying and packaging of biomass (Figure 1.4.1). Density is the mass of biomass in the bale divided by the volume of the bale:
$\rho=\frac{M}{V}$
where ρ = density (kg/m3)
M = mass of bale (kg)
V = volume of bale (m3)
The density of bales typically varies from 140 to 240 kg/m3 depending on the type of biomass and the pressure used on the biomass when forming the bale. Bale density influences the cost of baling and delivering biomass to the point of its use. Harvesting, storage, transportation, and processing can contribute up to 50% of bioenergy feedstock cost (Shinners and Friede, 2018), so this is an important consideration when operating the system. Transport equipment has a maximum volume and mass (payload) per trailer, so optimizing bale density minimizes transport costs. Creating a dense bale requires power to form the bale and power to transport it during the operation. Bales can be stacked at a location on the farm prior to transport to a bioenergy facility. For energy applications, the dense bales are typically transported to a pelleting facility where the bales are broken and re-compacted into denser pellets.
A range of biomass crops are baled, such as alfalfa (Medicago sativa), timothy (Phleum pratense), grasses in general (Poaceae), wheat (Triticum spp.), maize/corn (Zea mays) and soybean (Glycine max). Biomass crops may be harvested as whole plants (mowed) or separated in the field using a combine harvester that splits the grain from the other plant material. Regardless of the crop, when cut in the field, the material that will be baled is left as a windrow (a low-density linear pile of material parallel to the direction of machine travel). Materials are usually left in the windrow to dry. The ideal moisture content for safe baling and storage depends on the crop but is typically less than 30%. There can be losses due to shattering if the field moisture content is too low, and the equipment uses energy needlessly if the material is too wet. Wet biomass may spoil due to fungal and bacterial growth during storage, which can interfere with refining processes to make biofuels. Depending on the weather, the length of time the plant remains in the field to dry ranges from a few hours to a few days. When ready, a baler picks up the material from the windrow to form bales. Modern balers are mobile, i.e., the equipment moves around the field.
A number of factors determine the choice between round or square bales. Round bales are preferred in wetter regions as they can shed the rain. Square bales are preferred in dry regions as they stack better. In North America, smaller farms tend to use round balers and larger farms tend to use large square balers. Table 1.4.1 lists some characteristics of the balers operating on North American farms. In some countries, such as Ireland, small farmers tend to favor small square bales as they are easier to handle once made.
Table $1$: Typical values for bales.
| Bale Category | Dimensions (width × depth × length for square; diameter × depth for round) | Mass (kg) | Farm Size | Productivity | Typical Cost (\$/h) |
|---|---|---|---|---|---|
| Small square | 356 × 457 × 914 mm | 24 | Small farms | Low | 120 |
| Large square | 914 × 1219 × 2438 mm | 435 | Large farms | High | 250 |
| Small round | 1219 × 1219 mm | 228 | Small farms | Medium | 130 |
| Large round | 1829 × 1829 mm | 769 | Large farms | High | 150 |
Square Baling
A square baler (Figure 1.4.2) consists of a pick-up mechanism (1) that lifts the biomass from the windrow and delivers it into feed rolls (2). A set of knives on a rotating wheel (3) cuts the biomass to a set length. A pitman arm (5) connects a flywheel (4) eccentrically (off-center) to a plunger (6). This arrangement converts the rotation of the flywheel into a reciprocating motion, moving the plunger back and forth in the bale chamber.
The power needed to form the bale comes from the tractor power takeoff (PTO). Each rotation of the flywheel plunges the biomass as it enters the baling chamber. The reciprocating plunger compresses the loose material to form a bale. The process of feeding hay into the bale chamber and compressing it is repeated until the bale is formed. The density of the bale is determined by adjusting spring-loaded or hydraulic upper and lower tension bars on the bale chamber. A bale-measuring wheel (8) rotates as the bale moves through the bale chamber.
The bale length is controlled by adjusting the number of rotations of the measuring wheel. The tying mechanism (9) is synchronized with the plunger movement. When the plunger is at its rear position and the biomass is fully compressed, a set of needles delivers the twine to the tying mechanism. As the twine is grasped by the tying mechanism, the needles retract, and the bale is tied. Once compressed and tied, the bale is ejected from the bale chamber. Square bales are usually produced in several sizes (Table 1.4.1) and the weight depends upon the baler design, type of biomass, and moisture content, but typically ranges from 24 to 908 kg.
Figure 1.4.3 shows a plot of instantaneous power requirements for a square baler (PAMI, 1990). The peak power requirements are a result of the plunger action. In a typical alfalfa crop, average power input varied from 23 to 30 kW while the instantaneous peak power input was 110 kW. Average drawbar power requirements for towing the baler in the field varied from 5 to 8 kW and peaked at 20 kW in soft or hilly fields. To fully utilize baler capacity, PAMI (1990) recommends a tractor with a minimum PTO rating of 68 kW (90 hp). A tractor with an 83 kW (110 hp) PTO rating would be required in hilly conditions.
Round Baling
A round baler (Figure 1.4.4) forms the biomass into a cylindrical bale. The round baler collects the biomass from the windrow using finger-like teeth, made from spring steel or a strong polyethylene material, and rolls the biomass inside the bale chamber using wide rollers or belts.
A round baler comes in two types. Those with a fixed-size chamber use fixed position rollers (Figure 1.4.5a), and those with a variable chamber use flexible belts (Figure 1.4.5b). A fixed chamber makes bales with a soft core. A variable chamber makes bales with a hard core. A soft-core bale is “breathable,” meaning the porosity is sufficient for the bale to continue drying when left in the field. The bale size remains fixed by the size of the chamber. In a variable chamber, a series of springs and retractable levers ensures a tight bale is formed from core to circumference. The operator sets the diameter of the bale and a target mass to achieve a required density. Following the formation of the bale, the forward motion of the machine and the inflow of biomass are stopped. Twine or a net is wrapped around the circumference of the bale using a moveable arm. Once the net has encircled the bale enough times to maintain shape and sufficiently contain the material, the arms return to the start position and the twine or net strands are cut. The net wrap covers more of the surface area of the bale, preventing material loss and easily maintaining the shape of the bale.
Once the bale is formed and wrapped, it is ejected from the bale chamber. Some round balers have hydraulic “kickers,” while others are spring loaded or rely on the spinning of the bale to roll the bale out of the chamber. Once the bale has been ejected from the baler, the back door to the chamber is closed, and the machine starts moving forward, taking in biomass until the next bale is ready. Variable-chamber, large round balers typically produce bales from 1.2 to 1.8 m diameter and up to 1.5 m width, weighing from 500 to 1000 kg, depending upon size, material, and moisture content. A typical round bale density ranges from 140 kg/m3 to 180 kg/m3.
(a) Fixed chamber configuration
(b) Variable chamber configuration
Figure $5$: The two types of round balers, (a) fixed chamber and (b) variable chamber (Freeland and Bledsoe, 1988).
Figure 1.4.6 shows typical PTO and drawbar power requirements for the John Deere 535 round baler at a material capacity of 16.1 t/h (PAMI, 1992). The instantaneous power recorded by the tractor is plotted against the bale weight to show the increase in power input while each bale is formed. The curves are an average of the highly fluctuating measured PTO data, which varied from 5 to 8 kW at no load to a maximum of 32 kW in alfalfa for full-sized bales. PTO power input is highly dependent on material capacity (t/h). Drawbar power requirements at 11.5 km/h were about 8 kW when the bale reached full size. Although maximum power requirements did not exceed 38 kW, additional power was required to suit other field conditions such as soft and hilly fields. The manufacturer suggested a 56 kW (75 hp) tractor to fully utilize baler capacity.
Assessing Baling Performance
ASABE Standards EP496 and D497 (ASABE Standards, 2015a,b) define the performance of field equipment in terms of field capacity and material capacity.
Field Capacity
Field capacity quantifies the rate of land processed (area per unit time) as:
$C_{a}=\frac{SWE_{f}}{10}$
where Ca = field area covered per unit of time (ha/h)
S = average field speed of the equipment while harvesting (km/h)
W = effective width (m)
Ef = field efficiency (decimal) (Table 1.4.2)
Field speed, S, can range from 4 to 13 km/h (Table 1.4.2). This range represents the variability in field conditions that affects the travelling speed of the equipment.
Effective width, W, is the width over which the machine works. It may be wider or narrower than the measured width of the machine depending on design, how the machine is used in the field with other equipment, and operator experience and skill. The effective width might be determined by the cut width of a mower ahead of the baler, when a wheel rake gathers the mowed crop into a swath for the baler to pick up.
Field efficiency, Ef (Table 1.4.2), is the ratio of the productivity of a machine under field conditions and the theoretical maximum productivity. Field efficiency accounts for failure to utilize the theoretical operating width of the machine, time lost because of operator’s lack of skill, frequent stoppages, and field characteristics that cause interruptions in regular operation. Travel to and from a field, major repairs, preventive maintenance, and daily service activities are not included in field time or field efficiency calculations.
Field efficiency is not a constant for a particular machine but varies with the size and shape of the field, pattern of field operation, crop yield, crop moisture, and other conditions. The majority of time lost in the field is due to turning and idle travel, material handling time, cleaning of clogged equipment, machine adjustment, lubrication, and refueling. Round balers have a lower efficiency than square balers because the shape of the round bale makes handling, transportation, and storage of the bale inefficient compared to handling a square bale (Kemmerer and Liu, 2011).
Table $2$: Range and typical values for biomass harvest equipment including balers (ASABE Standards, 2015a).
| Biomass Harvest Equipment | Field Efficiency Range (%) | Field Efficiency Typical (%) | Field Speed Range (km/h) | Field Speed Typical (km/h) | Remarks |
|---|---|---|---|---|---|
| Small square baler | 60–85 | 75 | 4.0–10.0 | 6.5 | Small to mid-size bales |
| Large square baler | 70–90 | 80 | 6.5–13.0 | 8.0 | Mid-size to large bales |
| Large round baler | 55–75 | 65 | 5.0–13.0 | 8.0 | Commercial round bales |
Material Capacity
Material capacity is the mass of crop baled per hour, and is calculated using the field capacity (Ca) and the field yield:
$C_{m}=C_{a}Y$
where Cm = material capacity (t/h)
Y = average yield of the field (t/ha); it is the amount of biomass that is cut and placed in the swath ready for baling, not the total above-ground biomass in the field.
For crops grown for energy supply purposes, typically no more than 50% of the above ground biomass is cut and baled. In practice, yield (Y) may be as low as 25–30% of the total above ground biomass. The remaining 70–75% of the biomass is left in the field for soil conservation purposes. Removal of a higher percentage may also pick up undesired dirt and foreign material along with the biomass.
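To make the two capacity equations concrete, here is a minimal C++ sketch (the numeric inputs are the typical large square baler values from Table 1.4.2 and are illustrative only):

```cpp
// Field capacity (ha/h) and material capacity (t/h) helpers.
#include <cstdio>

// Ca = S*W*Ef/10, with S in km/h, W in m, and Ef as a decimal
double fieldCapacity(double speedKmh, double widthM, double efficiency) {
    return speedKmh * widthM * efficiency / 10.0;
}

// Cm = Ca*Y, with Y in t/ha
double materialCapacity(double caHaH, double yieldTHa) {
    return caHaH * yieldTHa;
}

int main() {
    // Large square baler on a 5 m swath, typical values from Table 1.4.2
    double ca = fieldCapacity(8.0, 5.0, 0.80);   // 3.20 ha/h
    double cm = materialCapacity(ca, 7.0);       // 22.40 t/h
    printf("Ca = %.2f ha/h, Cm = %.2f t/h\n", ca, cm);
    return 0;
}
```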
Energy Requirements
The bale density that can be achieved is dependent on the specifications of the machine (its dimensions and efficiency) and the mechanical energy that can be supplied to the baler.
Energy Requirements for Square Bales
We start by defining pressure and density in order to calculate the energy and power input to make a square bale.
Pressure, P (kPa), is force over area:
$P = F/A$
where F = force (kN)
A = area on which the force is exerted (m2)
Force is derived from the equivalent mass (M, kg) pressing on that area:
$F= \frac{Mg}{1000}$
where g = acceleration due to gravity (9.81 m/s2).
The power requirement is related to bale density. The relationship is determined by first relating pressure to density, then calculating energy from force vs. displacement, and finally estimating power from the time rate of energy.
For the first step, a commonly used equation to relate pressure and density is (Van Pelt, 2003; Afzalinia and Roberge, 2013):
$P = (\frac{1}{k} \rho)^{1/n}, \ \ k,n>0$
where P = the pressure exerted by the plunger (kPa)
k and n = positive constants
$\rho = \text{density (kg/}m^{3})$
Hofstetter and Liu (2011) suggested values for k and n for several crops (Table 1.4.3).
Table $3$: Coefficients k and n of the pressure-density relationship (Hofstetter and Liu, 2011).

| Biomass Crop | k | n |
|---|---|---|
| Stover | 29.48 | 0.330 |
| Wheat straw | 38.79 | 0.293 |
| Switchgrass | 100.99 | 0.137 |
During bale formation, the initial density is zero (empty chamber), and steadily increases to the maximum density possible given the plunger pressure.
The next step is to calculate energy from force and displacement. The total energy input required to make a bale is calculated by integrating the area under the pressure-displacement curve from 0 to Pmax. This integration yields energy input per unit mass (E) for a single stroke of the plunger to form what is known as a wafer. Equation 1.4.7 represents integration of force vs. displacement:
$E= \int_{0}^{P_{max}} (\frac{1}{\rho})dP$
where P = pressure (kN/m2)
E = energy input per unit mass (kJ/kg)
Substituting ρ from Equation 1.4.6 and integrating yields:
$E= \frac{1}{(1-n)k}P_{max}^{(-n+1)}$
Replacing Pmax with ρmax allows an estimate of specific energy, E (kJ/kg):
$E=\frac{1}{(1-n)k}\left(\frac{\rho_{max}}{k}\right)^{\frac{1-n}{n}}$
When making a square bale, each stroke of the plunger makes a wafer of around 51 mm thickness. It would require around 19 strokes to make a 915 mm bale. For a complete bale the energy required, (Eop, kJ), can be calculated from E multiplied by the final mass of the bale,
$E_{op} = E \times M$
For the last step, the power (energy per unit time) required is calculated by multiplying the specific energy (E) by the material capacity (Cm):
$P_{opt}=\frac{C_{m}E}{3.6e}$
where Popt = theoretical power to operate the square baler (kW)
Cm = material capacity (t/h)
E = specific energy (kJ/kg)
e = efficiency factor that accounts for inefficiency of transmission of power from the PTO to the baler
In practice, ASABE Standard D497 (ASABE Standards, 2015a) suggests that about 4 kW is needed for a baler to run empty so this power overhead must be added to Popt.
Energy Requirements for Round Bales
For a round baler, ASABE Engineering Practice EP 496 (ASABE Standards, 2015b) recommends estimating the operating power for balers and other rotating machines using:
$P_{op}=a+bW+cF_{w}$
where Pop = power-takeoff required to operate the round baler (kW)
W = working width of the baler (m)
Fw = material feed rate, wet mass (t/h)
a, b, and c = machine-specific parameters (Table 1.4.4)
Table $4$: Typical parameter values for Equation 1.4.12 for balers from ASAE D497 (ASABE Standards, 2015a).
| Baler Type | a (kW) | b (kW/m)* | c (kWh/t) |
|---|---|---|---|
| Small square | 2.0 | 0 | 1.0 |
| Large square | 4.0 | 0 | 1.3 |
| Large round, variable chamber | 4.0 | 0 | 1.1 |
| Large round, fixed chamber | 2.5 | 0 | 1.8 |
* Non-zero values are reported for machinery such as mowers and rakes.
Comparing the power requirements, Tremblay et al. (1997) found that the variable chamber baler required an average PTO power of 10.2 kW compared to a fixed chamber baler that required an average PTO power of 13.3 kW. Also, the peak PTO power required was considerably less for the variable chamber (14.5 kW) compared to the fixed chamber (37.5 kW). This means a much larger tractor would normally be required to operate a fixed chamber baler. For flexible operation in terms of the tractor required and the size and density of bales, a variable chamber round baler is perhaps the best option.
Energy Requirements for Pulling a Baler
The power required to drive the tractor and tow the baler is determined from the draft force (D, kN):
$D=r\ m\ g\ /\ 1000$
where r = ratio, resistance to travel
m = mass of pulled equipment and its load (kg)
g = gravitational acceleration constant = 9.81 m/s2
Resistance to travel is an additional draft force that must be included in computing power requirements. Values of resistance to travel depend on transport wheel dimensions, tire pressure, soil type, and soil moisture. Motion resistance ratios are defined in ASAE S296 (ASABE Standards, 2018). The value of r can be estimated using (ASABE Standards, 2015a):
$r=\frac{1}{B_{n}}+0.04+\frac{0.5sl}{\sqrt{B_{n}}}$
where Bn = soil index factor (Table 1.4.5)
sl = decimal value representing tractor wheel slippage (Table 1.4.5)
Table $5$: Values of soil index factor Bn, slippage sl, and draft coefficient Xd for various surfaces on which equipment is towed (ASABE Standards, 2015a).

| Surface Condition | Bn | sl | Drawbar Xd[a] |
|---|---|---|---|
| Hard—concrete | 80 | 0.04–0.08 | 0.88 |
| Firm soil | 55 | 0.08–0.10 | 0.77 |
| Tilled soil | 40 | 0.11–0.13 | 0.75 |
| Soft soil | 20 | 0.14–0.16 | 0.70 |

[a] Xd represents the ratio of draft power to PTO power. The listed values are for 4-wheel drive tractors.
Given the speed and draft force (kN), draft power is calculated by:
$P_{d}=\frac{DS}{3.6}$
where Pd = the tractor draft (pull) power (kW)
S = the average forward speed of the baler (km/h)
Applications
Handling and Storing Bales
Bale Stacking
Once the bales are made, they must be removed from the field before the land can be prepared for the next crop. Tractors and loaders equipped with grabbing devices pick up and load the bales onto a trailer for transport out of the field. The bales are then stacked either next to the field or in a central storage site by using a tractor or a loader. HSE (2012) recommends building stacks on firm, dry, level, freely draining ground, which should be open and well ventilated, away from overhead electric poles. Use of stones or crushed rock on the ground beneath a stack to make it level and to stop water rising into the stack is recommended. The site should be away from any potential fire hazards and sources of ignition with good road access so bales can be transported to and from the stack safely. There must be sufficient space to allow tractors, trailers and other vehicles adequate room to maneuver.
Figure 1.4.7 shows the correct configuration of stacking square bales and round bales, with a wide base that narrows as the stack gets higher. The maximum height of the stack should not be greater than 1.5 times the width of the base. Generally, a stack of no more than 10 bales on hard surfaces and 8 bales on soft surfaces is recommended. Square bales must be laid with each row offset from the row below, such that there is no continuous gap between them. Round bales are stacked in a pyramid with fewer bales in each direction than in the layer below. The outside round bales in the lowest layer each need a chock to prevent them from rolling out (Figure 1.4.7). As with square bales, round bales should be laid to cover the gap between the two bales underneath.
Once a stack is formed, the weight of each bale becomes an issue for the stability of the pile. The weight of a large bale may range from 300 kg to more than 500 kg. The bales at the top press onto the lower bales, causing their slow deformation. The degree of deformation depends upon bale density and moisture content, and the length of time the bales remain in the stack. A lower-density, higher-moisture bale tends to deform more than a higher-density, drier bale.
Dry Matter Loss
Moisture content at the time of baling plays an important role in the amount of dry matter loss that may happen during baling and later during storage. For leafy biomass like alfalfa, the recommended moisture content for baling is less than 30% and for storage less than 15% to 20%; however, for longer storage, a lower moisture content of 10% to 12% is preferred. Square bales tend to lose less moisture than round bales, but regardless of shape, it is important to make bales as near to the target moisture as possible.
Losses can be mechanical and microbial. Mechanical losses mostly occur during bale handling, such as building the stack or removing the bales from the stack. Some physical removal of biomass (known as leaching) may also take place due to rain wash. Also, the carbohydrates in freshly cut green biomass can decay to CO2, water, and heat.
The most prevalent dry matter loss is due to microbial activity, which causes the deterioration of the plant material and loss of dry matter. The growth of microbes on the biomass is directly related to the moisture content. Dry biomass adsorbs moisture from rain when exposed and becomes a host for mold to develop. Cover and duration of storage both influence dry matter loss (Table 1.4.6). For example, the dry matter loss from an uncovered bale on the ground may range from 5% to 20% within 9 months of storage. If storage time increases to 12 to 18 months, dry matter loss can increase to 20% to 35% of the mass of the bales. Storing bales under a roof will limit losses to 2% to 5%. Research shows there is not much difference between dry matter loss for round bales vs. square bales when stored in similar conditions (Wilcke et al., 2018).
Table $6$: Percent dry matter loss for different methods of storing biomass (Lemus, 2009).
| Storage Method | Storage Period 0 to 9 Months (%) | Storage Period 12 to 18 Months (%) |
|---|---|---|
| Ground, covered with a tarp | 5–9 | 10–15 |
| Ground, exposed | 5–20 | 20–35 |
| Elevated, covered with a tarp | 2–4 | 5–10 |
| Elevated, exposed | 3–15 | 12–35 |
| Barn, enclosed | ~2 | 2–5 |
| Barn, open sides | 2–5 | 3–10 |
The range of dry matter loss (Table 1.4.6) stems from differences in climate, crop type, and initial moisture content of the biomass. Nevertheless, these numbers are useful for deciding on the kind of storage system to choose for bales. In terms of capital expenditure, storing on the ground is the least expensive and storing in an enclosed barn is the most expensive.
Decision Factors for Square vs. Round Bales
The selection of round or square bales depends on several factors including crop species to be baled, regional climate conditions, volume of crop to be harvested, types of storage available, tractor power, and ancillary services available. Key advantages and disadvantages for round and square bales are listed in Table 1.4.7.
Table $7$: Advantages and disadvantages of square bales and round bales.

| | Square Bale | Round Bale |
|---|---|---|
| Advantages | • More efficiently uses space in transport and storage • Better shape retention during storage • Easier to stack | • Greater availability of balers and handling equipment • Lower price for balers • Greater ability to shed water if bales are stored uncovered |
| Disadvantages | • Greater moisture absorption by bales stored without cover | • Less efficient use of space in hauling and storing bales • A tendency for bales to lose their shape during storage |

Examples
Example $1$
Example 1: Field and material capacity
Problem:
A field of hay is cut by using a disk mower cutting 5 m swaths. Following a few days of drying, a rotary rake is used to windrow the hay for baling. Calculate the field capacity and material capacity of three balers: small square, large square, and round for a yield of 7 t/ha of hay. Which machine would you choose?
Solution
The effective width is 5 m as this is the swath width of the mower. Calculate field capacity using Equation 1.4.2 and material capacity using Equation 1.4.3:
$C_{a}=\frac{SWE_{f}}{10}$ (Equation $2$)
$C_{m}=C_{a}Y$ (Equation $3$)
where Ca = field area covered per unit of time (ha/h)
S = average field speed of the equipment while harvesting (km/h)
W = effective width (m)
Ef = field efficiency (decimal)
Cm = material capacity (t/h)
Y = average yield of the field (t/ha)
Use typical values from Table 1.4.2 for speed and efficiency of each type of baler. Table 1.4.8 lists the input values and calculation results for field capacity and material capacity. The large square baler can process the largest area per hour, therefore it can also process the greatest mass per hour. Thus, with typical values for speed and efficiency, the large square baler would be selected if the only criteria were field and material capacity.
Table $8$: Input values and calculation results for Example 1.4.1.
| Baler | Width of Cut, W (m) | Field Speed, S (km/h) | Field Efficiency, Ef (%) | Field Capacity, Ca (ha/h) | Yield, Y (t/ha) | Material Capacity, Cm (t/h) |
|---|---|---|---|---|---|---|
| Small square baler | 5 | 6.5 | 75 | 2.44 | 7 | 17.06 |
| Large square baler | 5 | 8.0 | 80 | 3.20 | 7 | 22.40 |
| Round baler | 5 | 8.0 | 65 | 2.60 | 7 | 18.20 |
Example $2$
Example 2: Maximum bale density and mass
Problem:
A farmer is making square bales of cornstalk at 35% moisture content (wet mass basis). The compressed bale dimensions are 914 mm × 1219 mm × 2438 mm. Determine the maximum density and mass of each bale, given that the force exerted on the bale cross section (914 mm × 1219 mm) is equivalent to a mass of 20 tonnes (t).
Solution
The maximum density is a function of the maximum pressure exerted on the bale cross section. First, calculate the force on the cross section of the bale (Equation 1.4.5) using the given mass equivalent of 20 t, which is 20,000 kg, and acceleration due to gravity as 9.8 m/s2:
$F= M\ (\text{kg/1000)} \times g\ (\text{m}/s^{2})$ (Equation $5$)
$F= 20000 \times 9.8/1000 = 196 \text{ kN}$
Calculate the pressure exerted on the bale cross section using Equation 1.4.4:
$P=F/A$ (Equation $4$)
$P= 196 \text{ kN}/ (0.914 \times 1.219 \ m^{2}) = 175.92 \text{ kPa}$
Calculate bale density by solving Equation 1.4.6 for ρ, using values of k and n from Table 1.4.3:
$P_{max} = (\frac{1}{k}\rho)^{1/n}$ (Equation $6$)
$\rho=kP^{n}=29.48(175.92)^{0.33}=162.1 \text{kg/}m^{3}$
The mass of the bale can be calculated from density and the dimensions of the bale:
$M = \rho V = 162.1 \text{ kg/}m^{3} \times (0.914 \times 1.219 \times 2.438 m^{3}) = 440.32 \text{kg}$
Example $3$
Example 3: Specific and operating energy
Problem:
For the baler specified in Example 1.4.2, calculate specific energy of the baler and, from this, the operating energy required to make one bale.
Solution
Calculate specific energy using Equation 1.4.9:
$E = \frac{1}{(1-n)k}\left(\frac{\rho_{max}}{k}\right)^\frac{1-n}{n}$ (Equation $9$)
$= \frac{1}{(1-0.33)29.48}\left(\frac{162.36}{29.48}\right)^\frac{1-0.33}{0.33} = 1.62 \text{ kJ/kg}$
Now, calculate the operating energy using Equation 1.4.10:
$E_{op}=E\times M$ (Equation $10$)
$E_{op}= 1.62 \text{ kJ/kg} \times 440.32 \text{ kg} = 713.32 \text{ kJ}$
Example $4$
Example 4: Operating power
Problem:
For the baler in Examples 1.4.2 and 1.4.3, power transmission from the tractor PTO to the baler will not be 100% efficient. Assuming 50% transmission efficiency of power from the tractor to the baler, estimate the operating power that must be supplied to the baler.
Solution
Estimate the theoretical operating power, Popt, using Equation 1.4.11, with e = 0.50:
$P_{opt}=\frac{C_{m}E}{3.6e}$ (Equation $11$)
$P_{opt}=\frac{(22,400 \text{ kg/h}) \times 1.62 \text{ kJ/kg}}{(3600 \text{ s/h})(0.50)} = 20.16 \text{ kW}$
Applying the ASABE D497 assertion that about 4 kW is needed for the machine to run when empty, the Popt is:
$P_{opt}= 20.16+4=24.16 \text{ kW}$
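Examples 1.4.2 through 1.4.4 form a single computation chain, which can be scripted end to end. A C++ sketch of that chain follows (inputs from Example 1.4.2; the ~4 kW no-load overhead is included; printed values agree with the worked results within rounding):

```cpp
// Plunger force -> pressure -> maximum density -> bale mass -> specific
// energy -> operating energy -> operating power (Eqs. 1.4.4-1.4.6, 1.4.9-1.4.11).
#include <cmath>
#include <cstdio>

int main() {
    const double k = 29.48, n = 0.330;             // cornstalk/stover (Table 1.4.3)
    const double g = 9.8;                          // m/s^2, as used in Example 1.4.2
    const double massEquiv = 20000.0;              // kg-equivalent force on the cross section
    const double w = 0.914, d = 1.219, L = 2.438;  // bale dimensions, m
    const double cm = 22400.0;                     // material capacity, kg/h
    const double e = 0.50;                         // PTO-to-baler transmission efficiency

    double F = massEquiv * g / 1000.0;                          // 196 kN
    double P = F / (w * d);                                     // ~176 kPa
    double rho = k * pow(P, n);                                 // ~162 kg/m^3
    double M = rho * w * d * L;                                 // ~440 kg
    double E = 1.0/((1.0 - n)*k) * pow(rho/k, (1.0 - n)/n);     // ~1.62 kJ/kg
    double Eop = E * M;                                         // ~713 kJ
    double Popt = cm * E / (3600.0 * e) + 4.0;                  // ~24 kW

    printf("P=%.1f kPa, rho=%.1f kg/m3, M=%.1f kg\n", P, rho, M);
    printf("E=%.2f kJ/kg, Eop=%.0f kJ, Popt=%.1f kW\n", E, Eop, Popt);
    return 0;
}
```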
Example $5$
Example 5: Power requirements of a round baler
Problem:
A farmer has the option of using a round baler with a fixed chamber, an operating width of 2 m, a feed intake of 18.2 t/h, and a mass of 15,800 kg, that produces bales of 1.83 m diameter, 1.83 m width or depth, and 180 kg/m3 density. Calculate (a) the power requirement for the fixed chamber round baler, (b) the draft force of the machine, and (c) the draft power of the tractor required to pull the machine through the field.
Solution
(a) Equation 1.4.12 can be used to estimate the power requirement. A bale almost 2 m wide would be regarded as a large bale (Table 1.4.1), so the parameters for Equation 1.4.12 can be taken from Table 1.4.4 accordingly:
$P_{op}= a+bW+cF_{w}$ (Equation $12$)
$P_{op}= 2.5+(0 \times 2)+(1.8 \times18.2)= 35.26 \text{ kW}$
(b) The draft force of the machine can be calculated using Equation 1.4.13:
$D = r\ m\ g / 1000$ (Equation $13$)
First, calculate the motion resistance, r, using Equation 1.4.14 with values from Table 1.4.5. Assume the machine is working on a soft soil surface and with average slippage. Thus, from Table 1.4.5, Bn = 20, sl = 0.15 (average of 0.14 and 0.16):
$r=\frac{1}{B_{n}}+0.04+\frac{0.5sl}{\sqrt{B_{n}}}$ (Equation $14$)
$r=\frac{1}{20}+0.04+\frac{0.5(0.15)}{\sqrt{20}} = 0.1068$
Next, calculate the mass of bale plus baler:
Bale volume: V = π r2 L = 3.14 (0.915 m)2 (1.83 m) = 4.814 m3
Bale mass: M = V ρ = 4.814 m3 × 180 kg/m3 = 866.5 kg
Mass of bale plus baler: m = 866.5 + 15,800 = 16,666.5 kg
Substituting values in Equation 1.4.13 yields the draft force of the baler:
$D = r\ m\ g\ / 1000 = (0.1068 \times 16,666.5 \times 9.81) /1000 = 17.46 \text{ kN}$
(c) From the draft force, calculate the draft power, Pd, using Equation 1.4.15 (assume a typical field speed of 8 km/h from Table 1.4.2):
$P_{d}=\frac{D(\text{kN}) \ S(\frac{km}{h})}{3.6}$ (Equation $15$)
$P_{d}=\frac{17.46 \times 8}{3.6} = 38.8 \text{ kW}$
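The chain in this example can be verified with a short C++ sketch (soft-soil parameters from Table 1.4.5; the 8 km/h field speed is the assumed typical value; printed values agree with the results above within rounding):

```cpp
// Round baler PTO power (Eq. 1.4.12), motion resistance and draft force
// (Eqs. 1.4.14 and 1.4.13), and draft power (Eq. 1.4.15).
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159;

    // Eq. 1.4.12 with fixed-chamber parameters from Table 1.4.4
    double Pop = 2.5 + 0.0 * 2.0 + 1.8 * 18.2;          // ~35.3 kW

    // Eq. 1.4.14 with Bn = 20, sl = 0.15 (soft soil, Table 1.4.5)
    double Bn = 20.0, sl = 0.15;
    double r = 1.0 / Bn + 0.04 + 0.5 * sl / sqrt(Bn);   // ~0.1068

    // Bale mass from volume and density, plus the baler mass
    double baleMass = pi * 0.915 * 0.915 * 1.83 * 180.0;  // ~866 kg
    double m = baleMass + 15800.0;                        // kg, bale plus baler

    double D = r * m * 9.81 / 1000.0;                   // ~17.5 kN draft force
    double Pd = D * 8.0 / 3.6;                          // ~38.8 kW draft power

    printf("Pop=%.1f kW, r=%.4f, D=%.1f kN, Pd=%.1f kW\n", Pop, r, D, Pd);
    return 0;
}
```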
Example $6$
Example 6: Dry matter loss
Problem:
A stack of round bales from Example 1.4.5 is to be stored with an average moisture content of 15% (wet mass basis). Estimate the dry matter loss from the bales when covered with a tarp and stored on the ground for 9 months and 18 months.
Solution
The bale wet mass is 866.5 kg (calculated in Example 1.4.5). Calculate the bale dry mass using the given average moisture content of 15% (wet mass basis):
$\text{Bale dry mass} = 866.5 \times (1-0.15)=736.53 \text{ kg}$
Assume a midrange dry matter loss from Table 1.4.6, or percent dry mass of 7.5% for 9 months and 12.5% for 18 months. Use the values of percent dry mass loss to calculate the dry matter loss:
$\text{Dry matter loss after 9 months} = 736.53 \times (0.075)=55.2 \text{ kg}$
$\text{Dry matter loss after 18 months} = 736.53 \times (0.125)=92.1 \text{ kg}$
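A minimal C++ sketch of this arithmetic (the 7.5% and 12.5% loss fractions are the midrange assumptions used above):

```cpp
// Wet mass -> dry mass -> dry matter loss for two storage durations
// (Table 1.4.6, ground storage under a tarp).
#include <cstdio>

int main() {
    const double wetMass = 866.5;    // kg per bale (from Example 1.4.5)
    const double moisture = 0.15;    // moisture content, wet mass basis

    double dryMass = wetMass * (1.0 - moisture);   // ~736.5 kg

    double loss9  = dryMass * 0.075;   // assumed 7.5% loss after 9 months
    double loss18 = dryMass * 0.125;   // assumed 12.5% loss after 18 months

    printf("Dry mass: %.1f kg, loss at 9 mo: %.1f kg, at 18 mo: %.1f kg\n",
           dryMass, loss9, loss18);
    return 0;
}
```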
Image Credits
Figure 1. Krone. (CC by 4.0). (2020). Illustration of a square baler processing straw. Used with written permission. Retrieved from https://www.krone-northamerica.com/.
Figure 2. Krone. (CC by 4.0). (2020). Inline square baler operation. Used with written permission. Retrieved from https://www.krone-northamerica.com/.
Figure 3. Sokhansanj, S. (CC by 4.0). (2020). An experimental plot of power in a square baler.
Figure 4. Krone. (CC by 4.0). (2020). The round baler makes a cylindrical bale. Used with written permission. Retrieved from https://www.krone-northamerica.com/.
Figure 5. Freeland and Bledsoe. (CC By 1.0). (1988). The two types of round balers. Retrieved from ASABE publication Transactions.
Figure 6. Sokhansanj, S. (CC By 4.0). (2020). Measured power to form round bales.
Figure 7. Examples of stacking large square bales and round bales. Square bale photo adapted from background removed: Courtesy of Ryley Schmidt, Barr-Ag Inc. Alberta. Round bale picture credit: Evelyn Simak / A stack of straw bales / CC BY-SA 2.0. (details of licence can be found here: https://commons.wikimedia.org/wiki/File:A_stack_of_straw_bales_-_geograph.org.uk_-_1501535.jpg)
References
Afzalinia, S., & Roberge, M. (2013). Modeling of the pressure-density relationship in a large cubic baler. J. Agric. Sci. Technol., 15(1), 35-44.
ASABE Standards. (2015a). ASAE D497.7 MAR2011 (R2015): Agricultural machinery management data. St. Joseph, MI: ASABE. http://elibrary.asabe.org
ASABE Standards. (2015b). ASAE EP496.3 FEB2006 (R2015) Cor.1: Agricultural machinery management. St. Joseph, MI: ASABE. http://elibrary.asabe.org
ASABE Standards. (2018). ANSI/ASAE S296.5 DEC2003 (R2018): General terminology for traction of agricultural traction and transport devices and vehicles. St. Joseph, MI: ASABE. http://elibrary.asabe.org
Freeland, R. S., & Bledsoe, B. L. (1988). Energy required to form large round hay bales—Effect of operational procedure and baler chamber type. Trans. ASAE, 31(1), 63-67. http://dx.doi.org/10.13031/2013.30666.
Hofstetter, D. W., & Liu, J. (2011). Power requirement and energy consumption of bale compression. ASABE Paper No. 1111266, St. Joseph, MI: ASABE.
HSE (2012). Safe working with bales in agriculture. The Health and Safety Executive 05/12 INDG125(rev3). 10 pages. Retrieved from https://www.hse.gov.uk/pubns/indg125.pdf.
Kemmerer, B., & Liu, J. (2011). Large square baling and bale handling efficiency—A case study. Agric. Sci., 3(2), 178-183. http://dx.doi.org/10.4236/as.2012.32020.
Lemus, R. (2009). Hay storage: Dry matter losses and quality changes. Retrieved from http://pss.uvm.edu/pdpforage/Materials/CuttingMgt/Hay_Storage_DM_Losses_MissSt.pdf.
PAMI (1992). Evaluation report 677. John Deere 535 round baler. Retrieved from http://pami.ca/pdfs/reports_research_updates/(4a)%20Balers%20and%20Baler%20Attachments/677.PDF.
PAMI (1990). Evaluation report 628. Vicon MP800 square baler. Retrieved from http://pami.ca/pdfs/reports_research_updates/(4a)%20Balers%20and%20Baler%20Attachments/628.PDF.
Shinners, K., & Friede, J. (2018). Energy requirements for biomass harvest and densification. Energies, 11(4), 780. http://doi.org/10.3390/en11040780.
Tremblay, D., Savoie, P., & Lepha, Q. (1997). Power requirements and bale characteristics for a fixed and a variable chamber baler. Canadian Agric. Eng., 39(1), 73-75. Retrieved from https://pdfs.semanticscholar.org/cb81/3812beeb7dcee3ecd34e5dbf39617869b8a6.pdf.
Van Pelt, T. (2003). Maize, soybean, and alfalfa biomass densification. Agric. Eng. Intl. Manuscript EE 03 002. Retrieved from https://pdfs.semanticscholar.org/8d9f/46c0431869b9f2b8edbedb4fcc5e657b7ac2.pdf.
Wilcke, W., Cuomo, G., Martinson, K., & Fox, C. (2018). Preserving the value of dry stored hay. Retrieved from https://extension.umn.edu/forage-harvest-and-storage/preserving-value-dry-stored-hay.
Yeyin Shi
Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
Guangjun Qiu
College of Engineering, South China Agricultural University, Guangzhou, Guangdong, China
Ning Wang
Department of Biosystems and Agricultural Engineering, Oklahoma State University, Stillwater, Oklahoma, USA
Introduction
Measurement and control systems are widely used in biosystems engineering. They are ubiquitous and indispensable in the digital age, being used to collect data (measure) and to automate actions (control). For example, weather stations measure temperature, precipitation, wind, and other environmental parameters. The data can be manually interpreted for better farm management decisions, such as flow rate and pressure regulation for field irrigation. Measurement and control systems are also part of the foundation of the latest internet of things (IoT) technology, in which devices can be remotely monitored and controlled over the internet.
A key component of a measurement and control system is the microcontroller. All biosystems engineers are required to have a basic understanding of what microcontrollers are, how they work, and how to use them for measurement and control. This chapter introduces the concepts and applications of microcontrollers illustrated with a simple project.
Concepts
Measurement and Control Systems
Let’s talk about measurement and control systems first. As shown in Figure 2.1.1, signals can be generated by mechanical actuators and measured by sensors, for example, the voltage signal from a flow rate sensor. The signal is then input to a central control unit, such as a microcontroller, for signal processing, analysis, and decision making, for example, to determine whether the flow rate is in the desired range. Finally, the microcontroller outputs a signal to control the actuator, e.g., to adjust the valve opening, and/or at the same time displays the system status to users. Then the actuator is measured again. This forms an endless loop that runs continuously until interrupted by the user or by a timeout. Viewed from the signal’s point of view, the signals generated by the actuators and measured by the sensors are usually analog signals, which are continuous and infinite. They are often pre-processed to be amplified, filtered, or converted to a discrete and finite digital format so that they can be processed by the central control unit. If the actuator only accepts analog signals, the output signal from the central control unit needs to be converted back to analog format. As you can tell, the central control unit plays a critical role in the measurement and control loop. The microcontroller is one of the most commonly used central control units, and we focus on microcontrollers in the rest of the chapter.
Microcontrollers
A microcontroller is a type of computer. A computer is usually thought of as a general-purpose device configured as a desktop computer (personal computer; PC or workstation), laptop, or server. The “invisible” type of computer that is widely used in industry and our daily life is the microcontroller. A microcontroller is a miniature computer, usually built as a single integrated circuit (IC) with limited memory and processing capability. They can be embedded in larger systems to realize complex tasks. For example, an ordinary car can have 25 to 40 electronic control units (ECUs), which are built around microcontrollers. A modern tractor can have a similar number of ECUs with microcontrollers handling power, traction, and implement controls. Environmental control in greenhouses and animal houses, and process control in food plants all rely on microcontrollers. Each microcontroller for these applications has a specific task to measure and control, such as air flow (ventilation, temperature) or internal pressure, or to perform higher-level control of a series of microcontrollers. Understanding the basic components of a microcontroller and how it works will allow us to design a measurement and control system.
A microcontroller mainly consists of a central processing unit (CPU), memory units, and input/output (I/O) hardware (Figure 2.1.2). Different components interact with each other and with external devices through signal paths called buses. Each of these parts will be discussed below.
The CPU is also called a microprocessor. It is the brain of the microcontroller, in charge of the primary computation and system internal control. There are three types of information that the CPU handles: (1) the data, which are the digital values to be computed or sent out; (2) the instructions, which indicate which data are required, what calculations to impose, and where the results are to be stored; and (3) the addresses, which indicate where a data or an instruction comes from or is sent to. An arithmetic logic unit (ALU) within the CPU executes mathematical functions on the data structured as groups of binary digits, or “bits.” The value of a bit is either 0 or 1. The more bits a microcontroller CPU can handle at a time, the faster the CPU can compute. Microcontroller CPUs can often handle 8, 16, or 32 bits at a time.
A memory unit (often simply called memory) stores data, addresses, and instructions, which can be retrieved by the CPU during processing. There are generally three types of memory: (1) random-access memory (RAM), which is a volatile memory used to hold the data and programs being executed that can be read from or written to at any time as long as the power is maintained; (2) read-only memory (ROM), which is used for permanent storage of system instructions even when the microcontroller is powered down. Those instructions or data cannot be easily modified after manufacture and are rarely changed during the life of the microcontroller; and (3) erasable-programmable read only memory (EPROM), which is semi-permanent memory that can store instructions that need to be changed occasionally, such as the instructions that implement the specific use of the microcontroller. Firmware is a program usually permanently stored in the ROM or EPROM, which provides for control of the hardware and a standardized operating environment for more complex software programmed by users. The firmware remains unchanged until a system update is required to fix bugs or add features. Originally, EPROMS were erased using ultraviolet light, but more recently the flash memory (electrically erasable programmable read-only memory; EEPROM) has become the norm. The amount of RAM (described in bytes, kilobytes, megabytes, or gigabytes) determines the speed of operation, the amount of data that can be processed and the complexity of the programs that can be implemented.
Digital input and output (I/O) ports connect the microcontroller with external devices using digital signals only. The high and low voltages in the signal correspond to on and off states. Each digital port can be configured as an input port or an output port. The input port is used to read in the status of the external device and the output port is used to send a control instruction to an external device. Most microcontrollers operate over 0 to +5V with limited current because the voltage signal is not used directly, only the binary status. If the voltage and current are to be used to directly drive a device, a relay or a digital-to-analog converter is required between the port and the device. Usually digital I/O ports communicate or “talk” with external devices through standard communication protocols, such as serial communication protocols. For example, a microcontroller can use digital I/O pins to form serial communication ports to talk to a general-purpose computer, external memory, or another microcontroller. Common protocols for serial communication are UART (universal asynchronous receiver-transmitter), USB (universal serial bus), I2C (inter-integrated circuit), and SPI (serial peripheral interface). Analog input and output (analog I/O) ports allow analog devices to be connected directly to the microcontroller. Many sensors (e.g., temperature, pressure, strain, rotation) output analog signals and many actuators require an analog signal. The analog ports integrate either an analog-to-digital (A/D) converter or a digital-to-analog (D/A) converter.
The CPU, memory, and I/O ports are connected through electrical signal conductors known as buses. They serve as the central nervous system of the computer allowing data, addresses, and control signals to be shared among all system components. Each component has its own bus controller. There are three types of buses: the data bus, the address bus, and the control bus. The data bus transfers data to and from the data registers of various system components. The address bus carries the address of a system component that a CPU would like to communicate with or a specific data location in memory that a CPU would like to access. The control bus transmits the operational signal between the CPU and system components such as the read and write signals, system clock signal, and system interrupts.
Finally, clock/counter/timer signals are used in a microcontroller to synchronize operations among components. A clock signal is typically a pulse sequence with a known constant frequency generated by a quartz crystal oscillator. For example, a CPU clock is a high frequency pulse signal used to time and coordinate various activities in the CPU. A system clock can be used to synchronize many system operations such as the input and output data transfer, sampling, or A/D and D/A processes.
Microcontroller Software and Programming
The specific functions of a microcontroller depend on its software, or how it is programmed. The programs are stored in the memory. Recall that the CPU can only execute binary code, or machine code, and performs low-level operations such as adding a number to a register or moving a register’s value to a memory location. However, it is very difficult to write a program in machine code. Hence, programming languages were developed over the years to make programming convenient. Low-level programming languages, such as assembly language, are the most similar to machine code. They are typically hardware-specific and not interchangeable among different types of microcontrollers. High-level programming languages, such as BASIC, C, or C++, tend to be more generic and can be deployed among different types of microcontrollers with minor modifications.
The programming languages available for a specific microcontroller are determined by the microcontroller manufacturer. High-level programming languages dominate in today’s microcontrollers since they are much easier to learn, interpret, implement, and debug. Programming a microcontroller often requires references to manuals, tutorials, and application notes from manufacturers. Online digital courses and online community-based learning are often good resources as well.
The example presented later in this chapter is a hands-on project using a microcontroller board called Arduino UNO. Arduino is a family of open-source hardware and software, single-board microcontrollers. They are popular and there are many online resources available to help new users develop applications. The microcontrollers are easy to understand and easy to use in real world applications with sensors and actuators (Arduino, 2019). The programming language of the Arduino microcontrollers is based on a language called Processing, which is similar to C or C++ but much simpler (https://processing.org/). The code can be adapted for other microcontrollers. In order to convert codes from a high-level language to the machine code to be executed by a specific CPU, or from one language to another language, a computer program called a compiler is necessary.
Programs can be developed by users in an integrated development environment (IDE), software that runs on a PC or laptop and allows the microcontroller code to be programmed and simulated on the PC or laptop. Most programming errors can be identified and corrected during the simulation. An IDE typically consists of the following components:
• An editor to program the microcontroller using a relevant high-level programming language such as C, C++, BASIC, or Python.
• A compiler to convert the high-level language program into low-level assembly language specific to a particular microcontroller.
• An assembler to convert the assembly language into machine code in binary bit (0 or 1) format.
• A debugger to error check (also called “debug”) the code, and to test whether the code does what it was intended to do. The debugger typically finds syntax errors, which are statements that cannot be understood and cannot be compiled, and redundant code, which are lines of the program that do nothing. The line number or location of the error is shown by the debugger to help fix problems. The programmer can also add error testing components when writing the code to use the debugger to help confirm the program does what was originally intended.
• A software emulator to test the program on the PC or laptop before testing on hardware.
Not all components listed above are always presented to the user in an IDE, but they always exist. For the development of some systems, a hardware emulator might also be available. This will consist of a printed circuit board connected to the PC or laptop by ribbon cable joining I/O ports. The emulator can be used to load and run a program for testing before the microcontroller is embedded on a live measurement or control system.
Designing a Microcontroller-Based Measurement and Control System
The following workflow can help us design and build a microcontroller-based measurement and control system.
Step 1. Understand the problem and develop design objectives of the measurement and control system with the end-users. Useful questions to ask include:
• What should be the functions of the system? For example, a system is needed to regulate the room temperature of a confined animal housing facility within an optimal range.
• Where or in what environment does the measurement or control occur? For example, is it an indoor or outdoor application? Is the operation in a very high or low temperature, or a very dusty, muddy, or noisy environment? Is there anything special to be considered for that application?
• Are there already sensors or actuators existing as parts of the system, or do appropriate ones need to be identified? For example, are there already thermistors installed to measure the room temperature, or are there fans or heaters installed?
• How frequently and how fast should things be measured or controlled? For example, it may be fine to check and regulate a room temperature every 10 seconds for a greenhouse; however, the flow rate and pressure of a variable-rate sprayer running at 5 meters per second (about 11 miles per hour) in the field need to be monitored and controlled at least every second.
• How much precision does the measurement and control need? For example, is a precision of one degree Celsius enough, or does the application need sub-degree precision?
Step 2. Identify the appropriate sensors and/or actuators if needed for the desired objectives developed in the previous step.
Step 3. Understand the input and output signals for the sensors and actuators by reading their specifications.
• How many inputs and outputs are necessary for the system functions?
• For each signal, is it a voltage or current signal? Is it a digital or analog signal?
• What is the range of each signal?
• What is the frequency of each signal?
Step 4. Select a microcontroller according to the desired system objective, the output signals from the sensors, and the input signals required by the actuators. Read the technical specifications of the microcontroller carefully. Be sure that:
• the number and types of I/O ports are compatible with the output and input signals of the sensors and actuators;
• the CPU speed and memory size are sufficient for the desired objectives;
• no components, such as converters or adapters, are missing between the microcontroller and the sensors and actuators, and if any are needed, they are identified; and
• the programming language(s) of the microcontroller is appropriate for the users.
Step 5. Build a prototype of the system with the selected sensors, actuators, and microcontroller. This step typically includes the physical wiring of the hardware components. If preferred, a virtual system can be built and tested in an emulator software to debug problems before building and testing with the physical hardware to avoid unnecessary hardware damage.
Step 6. Program the microcontroller. Develop a program with all required functions. Load it to the microcontroller and debug with the system. All code should be properly commented to make the program readable by other users later.
Step 7. Deploy and debug the system under the targeted working environment with permanent hardware connections until everything works as expected.
Step 8. Document the system including, for example, specifications, a wiring diagram, and a user’s manual.
Applications
Microcontroller-based measurement and control systems are commonly used in agricultural and biological applications. For example, a field tractor has many microcontrollers, each working with different mechanical modules to realize specific functions such as monitoring and maintaining engine temperature and speed, receiving GPS signals for navigation and precise control of implements for planting, spraying, and tillage. A linear or center pivot irrigation system uses microcontrollers to ensure flow rate, nozzle pressure, and spray pattern are all correct to optimize water use efficiency. Animal logging systems use microcontrollers to manage the reading of ear tags when the animals pass a weighing station or need to be presented with feed. A food processing plant uses microcontroller systems to monitor and regulate processes requiring specific throughput, pressure, temperature, speed, and other environmental factors. A greenhouse control system for vegetable production will be used to illustrate a practical application of microcontrollers.
Modern greenhouse systems are designed to provide an optimal environment to efficiently grow plants with minimal human intervention. With advanced electronic, computer, automation, and networking technologies, modern greenhouse systems provide real-time monitoring as well as automatic and remote control by implementing a combination of PC communication, data handling, and storage, with microcontrollers each used to manage a specific task (Figure 2.1.3). The specific tasks address the plants’ need for correct air composition (oxygen and carbon dioxide), water (to ensure transpiration is optimized to drive nutrient uptake and heat dispersion), nutrients (to maximize yield), light (to drive photosynthesis), temperature (photosynthesis is maximized at a specific temperature for each type of plant, usually around 25°C) and, in some cases, humidity (to help regulate pests and diseases as well as photosynthesis). In a modern greenhouse, photosynthesis, nutrient and water supplies, and temperature are closely monitored and controlled using multiple sensors and microcontrollers.
As shown in Figure 2.1.3, the overall control of the greenhouse environment is divided into two levels. The upper-level control system (Figure 2.1.4) integrates an array of lower-level microcontrollers, each responsible for specific tasks in specific parts of the greenhouse; e.g., there may be multiple microcontrollers regulating light and shade in a very large greenhouse.
At the lower level, microcontrollers may work in sub-systems or independently. Each microcontroller has its own suite of sensors providing inputs, actuators controlled by outputs, an SD (secure digital) card as a local data storage unit, and a CPU to run a program to deliver functionality. Each program implements its rules or decisions independently but communicates with the upper-level control system to receive time-specific commands and to transmit data and status updates. Several of these sub-systems are examined in more detail below.
The ventilation sub-system is designed to maintain the temperature and humidity required for optimal plant growth inside the greenhouse. A schematic of a typical example (Figure 2.1.5) shows the sub-system structure. Multiple temperature and humidity sensors are installed at various locations in the greenhouse and connected to the inputs of a microcontroller. Target temperature and humidity values can be input using a keypad connected to the microcontroller (Figure 2.1.6) or set by the upper-level control system. Target values are also called “control set points” or simply “set points.” They are the values the program is designed to maintain for the greenhouse. The microcontroller’s function is to compare the measured temperature and humidity with the set point values to make a decision and adjust internal temperature. If a change is needed, the microcontroller controls actuators to turn on a heating device to raise the temperature (if temperature is below set point) or a cooling system fan (if temperature is above set point) to bring the greenhouse to the desired temperature and humidity.
The control panel in a typical ventilation system is shown in Figure 2.1.6. Here a green light indicates that the heating unit is running, while the red lights indicate that both the cooling unit and exhaust fans are off. The LCD displays the measured temperature and relative humidity inside the greenhouse (first line of text), the set point temperature and humidity values (second line of text), the active components (third line of text) and system status (fourth line of text). As the measured temperature is cooler than the set point, the heating unit has been turned on to increase the temperature from 22°C to 25°C. When the measured temperature reaches 25°C, the heating unit will be switched off. It is also possible to program alarms to alert an operator when any of the measured values exceed critical set points.
The nutrient and water supply sub-system (Figure 2.1.7) provides plants with water and nutrients at the right time and the right amounts. It is possible to program a preset schedule and preset values or to respond to sensors in the growing medium (soil, peat, etc.). As in the temperature and humidity sub-system, the user can manually input set point values, or the values can be received from the upper-level system. Ideally, multiple sensors are used to measure soil moisture and nutrient levels in the root zone at various locations in the greenhouse. The readings of the sensors are interpreted by the microcontroller. When measured water or nutrient availability drops below a threshold, the microcontroller controls an actuator to release more water and/or nutrients.
The lighting sub-system (Figure 2.1.8) is designed to replace or supplement solar radiation provided to the plants for photosynthesis. Solar radiation and light sensors are installed in the greenhouse. The microcontroller reads data from these sensors and compares them with set points. If the measured value is too high, the microcontroller actuates a shading mechanism to cover the roof area. If the measured value is too low, the microcontroller activates the shading mechanism to remove all shading and, if necessary, turns on supplemental light units.
The upper-level control system is usually built on a PC or a server, which provides overall control through an integration of the subsystems. All of the sub-systems are connected to the central control computer through serial or wireless communication, such as an RS-232 port, Bluetooth, or Ethernet. The central control computer collects the data from all of the subsystems for processing, analysis, and record keeping. The upper-level control system can make optimal control decisions based on the data from all subsystems. It also provides an interface for the operator to manage the whole system, if needed. The data collected from all sensors and actuators populate a database representing the control history of the greenhouse, which can be used to understand failures and, once sufficient data are collected, to implement machine learning algorithms, if required.
This greenhouse application is a simplified example of a practical complex control system. Animal housing and other environmental control problems are of similar complexity. Modern agricultural machinery and food processing plants can be significantly more complex to understand and control. However, the principle of designing a hierarchical system with local automation managed by a central controller is very similar. Machine learning and artificial intelligence are now being used to achieve precise and accurate controls in many applications. Their control algorithms and strategies can be implemented on the upper-level control system, and the control decisions can be sent to the lower-level subsystems to implement the control functions.
Example
Example $1$: Low-Cost Temperature Measurement and Control System
Problem:
A farmer wants to develop a low-cost measurement and control system to help address heat and cold stresses in confined livestock production. Specifically, the farmer wants to maintain the optimal indoor temperature of 18° to 20°C for a growing-finishing pig barn. A heating/cooling system needs to be activated if the temperature is lower or higher than the optimal range. The aim is to make a simple indicator to alert the stock handlers when the temperature is out of the target range, so that they can take action. (Automatic heating and cooling control is not required here.) Design and build a microcontroller-based measurement and control system to meet the specified requirements.
Solution
Complete the recommended steps discussed above.
Step 1. Understand the problem.
• Functions—We need a system to monitor the ambient temperature and make alerts when the temperature is out of the 18° to 20°C range. The alert needs to indicate whether it is too cold or too hot, and the size of the deviation from that range.
• Environment—As a growing-finishing pig barn can be noisy, we will use a visual indicator as an alert rather than a sound alert.
• Existing sensors or actuators—For this example, assume that heating and cooling mechanisms have been installed in the barn. We just need to automate the temperature monitoring and decision-making process.
• Frequency—The temperature in a growing-finishing pig barn usually does not change rapidly. In this example, let’s assume the caretakers require the temperature to be monitored every second.
• Precision—In this project, let’s set the requirements for the precision at one degree Celsius for the temperature control.
Step 2. Identify the appropriate sensors and/or actuators.
The sensor that will be used in this example to measure temperature is the Texas Instruments LM35. It is one of the most widely used, low-cost temperature sensors in industrial measurement and control systems. Its output voltage is linearly proportional to temperature, so the relationship between the sensor output and the temperature is straightforward.
We will use an RGB LED to light in different colors and blink at different rates to indicate the temperature and make alerts. This type of LED is a combination of a red LED, a green LED, and a blue LED in one package. By adjusting the intensity of each LED, a series of colors can be made. In this example, we will light the LED in blue when a temperature is lower than the optimal range, in green when the temperature is within the optimal range, and in red when the temperature is higher than the optimal range. In addition, the further the temperature has deviated from the optimal range, the faster the LED will blink. In this way, we alert the caretakers that a heating or cooling action needs to be taken and how urgent the situation is.
Step 3. Understand the input and output signals.
The LM35 series are precision integrated-circuit temperature sensors with an output voltage linearly proportional to the Celsius (°C) temperature (LM35 datasheet; http://www.ti.com/lit/ds/symlink/lm35.pdf). There are three pins in the LP package of the sensors, as shown in Figure 2.1.9. A package is the metal, plastic, glass, or ceramic casing in which a block of semiconductors is encapsulated.
• The +VS pin is the positive power supply pin with voltage between 4V and 20V (in this project, we use +5V);
• The VOUT pin is the temperature sensor analog output of no more than 6V (5V for this project);
• The GND pin is the device ground pin to be connected to the power supply negative terminal.
The accuracy specifications of the LM35 temperature sensor are given with respect to a simple linear transfer function:
$V_{out} = 10\ \text{mV/}^{\circ}\text{C} \times T$
where $V_{out}$ is the temperature sensor output voltage in millivolts (mV) and $T$ is the temperature in °C.
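For example, at a temperature of 25°C, the sensor output would be $V_{out} = 10\ \text{mV/}^{\circ}\text{C} \times 25^{\circ}\text{C} = 250$ mV, or 0.25 V.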
In an RGB LED, each of the three single-color LEDs has two leads, the anode (or positive pin) where the current flows in and the cathode (or negative pin) where the current flows out. There are two types of RGB LEDs: common anode and common cathode. Assume we use the common cathode RGB LED as shown in Figure 2.1.10, but the other type would also work. The common cathode (−) pin 2 will connect to the ground. The anode (+) pins 1, 3, and 4 will connect to the digital output pins of the microcontroller.
Step 4. Select a microcontroller.
There are many general-purpose microcontrollers available commercially, such as the Microchip PIC, Parallax BASIC Stamp 2, ARM, and Arduino (Arduino, 2019). In this example, we will select an Arduino UNO microcontroller board based on the ATmega328P microcontroller (https://store.arduino.cc/usa/arduino-uno-rev3) (Figure 2.1.11). The microcontroller has three types of memory: 2KB of RAM where the program creates and manipulates variables as it runs; 1KB of EEPROM for long-term information that must persist between power cycles; and 32KB of flash memory that holds the bootloader firmware and the programs you develop. The flash memory and EEPROM are non-volatile, which means the information persists after the power is turned off. The RAM is volatile, and the information will be lost when the power is removed. There are 14 digital I/O pins and 6 analog input pins on the Arduino UNO board, and a 16 MHz quartz crystal oscillator. ATmega-based boards, including the Arduino UNO, take about 100 microseconds (0.0001 s) to read an analog input, so the maximum reading rate is about 10,000 times a second, far more than our desired sampling frequency of once per second. The board runs at 5 V. It can be powered by a USB cable, an AC-to-DC adapter, or a battery. If a USB cable is used, it also serves for loading, running, and debugging the program developed in the Arduino IDE. The Arduino UNO microcontroller is compatible with the LM35 temperature sensor and the desired control objectives of this project.
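As a quick check on the precision requirement from Step 1: the Arduino UNO’s analog inputs use a 10-bit A/D converter, so a 0–5 V signal is resolved into 1,024 levels, or about 4.9 mV per level. With the LM35 transfer function of 10 mV/°C, this corresponds to a temperature resolution of roughly 0.5°C, comfortably within the required precision of one degree Celsius.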
Step 5. Build a prototype.
The materials you need to build the system are:
• Arduino UNO board × 1
• Breadboard × 1
• Temperature sensor LM35 × 1
• RGB LED × 1
• 220 Ω resistor × 3
• Jumper wires
Figure 2.1.12 shows the hardware wiring.
• Pin 1 of the temperature sensor goes to the +5V power supply on the Arduino UNO board;
• Pin 2 of the temperature sensor goes to the analog pin A0 on the Arduino UNO board;
• Pin 3 of the temperature sensor goes to one of the ground pins (GND) on the Arduino UNO board;
• Digital I/O pin 2 on the Arduino UNO board connects with pin 4 (the blue LED) of the RGB LED through a 220 Ω resistor;
• Digital I/O pin 3 on the Arduino UNO board connects with pin 3 (the green LED) of the RGB LED through a 220 Ω resistor;
• Digital I/O pin 4 on the Arduino UNO board connects with pin 1 (the red LED) of the RGB LED through a 220 Ω resistor; and
• Pin 2 (the cathode) of the RGB LED connects to a ground pin (GND) on the Arduino UNO board.
An electronics breadboard (Figure 2.1.13) is used to create a prototyping circuit without soldering. This is a great way to test a circuit. Each plastic hole on the breadboard has a metal clip where the bare end of a jumper wire can be secured. Columns of clips are marked as +, −, and a to j; and rows of clips are marked as 1 to 30. All clips on each one of the four power rails on the sides are connected. There are typically five connected clips on each terminal strip.
Figure $13$: A breadboard: (a) front view (b) back view with the adhesive back removed to expose the bottom of the four vertical power rails on the sides (indicated with arrows) and the terminal strips in the middle. (Picture from Sparkfun, https://learn.sparkfun.com/tutorials/how-to-use-a-breadboard/all).
Step 6. Program the microcontroller.
The next step is to develop a program that runs on the microcontroller. As mentioned earlier, programs are developed in an IDE, which runs on a PC, a laptop, or a cloud-based online platform. Arduino has its own IDE, and there are two ways to access it. The Arduino Web Editor (https://create.arduino.cc/editor/) is the online version that enables developers to write code, access tutorials, configure boards, and share projects. It works within a web browser, so there is no need to install the IDE locally; however, a reliable internet connection is required. The more conventional way is to download and install the Arduino IDE locally on a computer (https://www.arduino.cc/en/main/software). It has versions that run on Windows, Mac OS X, and Linux operating systems. For this project, we will use the conventional IDE installed on a PC running Windows. The setup and operation of the conventional and web-based IDEs are similar; you are encouraged to try both and find the one that works best for you.
Follow the steps on the link https://www.arduino.cc/en/Main/Software#download to download and install the Arduino IDE with the right version for your operating system. Open the IDE. It contains a few major components as shown in Figure 2.1.14: a Code Editor to write text code, a Message Area and Debug Console to show compile information and error messages, a Toolbar Ribbon with buttons for common functions, and a series of menus.
Make sure that all wires and pins are disconnected the first time you power on the Arduino board, whether through a USB cable or the DC power port. It is a good habit to never connect or disconnect any wires or pins while the board is powered on. Connect the Arduino UNO board and your PC or laptop using the USB cable. Under “Tools” in the main menu (Figure 2.1.15) of the Arduino IDE, select the right board from the drop-down menu of “Board:” and the right COM port from the drop-down menu of “Port:” (the communication port the USB connection is using). Then disconnect the USB cable from the Arduino UNO board.
Now let’s start coding in the Code Editor of the IDE. An Arduino board runs with a programming language called Processing, which is similar to C or C++ but much simpler (https://processing.org/). We will not cover the details about the programming syntax here; however, we will explain some of them along with the programming structure and logic. At the same time, you are encouraged to go to the websites of Arduino and the Processing language to learn more details about the syntax of Arduino programming.
Arduino programs have a minimum of 2 blocks—a setup block and an execution loop block. Each block has a set of statements enclosed in a pair of curly braces:
/*
Setup Block
*/
void setup() { // Opening brace here
Statements 1; // Semicolon after every statement
Statements 2;
...
Statements n;
} // Closing brace here
/*
Execution Loop Block
*/
void loop() { // Opening brace here
Statements 1; // Semicolon after every statement
Statements 2;
...
Statements n;
} // Closing brace here
There must be a semicolon (;) after every statement to indicate the end of the statement; otherwise, the IDE will return an error during compiling. Text after “//” on a line, or multiple lines of text between a pair of “/*” and “*/” markers, are comments. Comments are not compiled or executed, but they are important to help readers understand the code.
The program logic flowchart is shown in Figure 2.1.16. To better understand the code, we will separate it into a few parts according to the logic flowchart. Each part will have its associated code shown in a grey box with explanations. You can copy and paste them into the Code Editor in the Arduino IDE. When writing code, be sure to save it frequently.
Program Part 1—Introductory Comments
Here we use multiple lines of statements to summarize the general purpose and function of the code.
/*
This program works with an Arduino UNO board, a temperature sensor and an RGB LED to measure and indicate the ambient temperature.
If the temperature measured is between 18 and 20 degrees Celsius, it is considered the optimal temperature and the LED is lit in green.
If the temperature measured is lower than 18 degrees Celsius, it is considered cold and the LED is lit in blue and blinks. The colder the temperature, the faster the LED blinks.
If the temperature measured is higher than 20 degrees Celsius, it is considered hot and the LED is lit in red and blinks. The hotter the temperature, the faster the LED blinks.
*/
Program Part 2—Declarations of Global Variables and Constants
In this part of the program, we define a few variables and constants that will be used throughout the program, including the upper and lower thresholds of the optimal temperature range and the numbers of the digital pins for the red, green, and blue LEDs inside the RGB LED. For example, the first statement, “const int hot = 20”, means that a constant (“const”) integer (“int”) called “hot” is created and assigned the value 20, which is the upper limit of the optimal temperature range. The third statement, “const int BluePin = 2”, means that a constant integer called “BluePin” is created and assigned the value 2, which will be used later in the setup block of the program to set digital pin 2 as the output pin that controls the blue LED.
const int hot = 20; // Set a threshold for hot temperature in Celsius
const int cold = 18; // Set a threshold for cold temperature in Celsius
const int BluePin = 2; // Set digital I/O 2 to control the blue LED in the RGB LED
const int GreenPin = 3; // Set digital I/O 3 to control the green LED in the RGB LED
const int RedPin = 4; // Set digital I/O 4 to control the red LED in the RGB LED
Program Part 3—Setup Block
As mentioned earlier, the setup block must exist even if there are no statements to execute. It is executed only once before the microcontroller executes the loop block repeatedly. Usually the setup block includes the initialization of the pin modes and the setup and start of serial communication between the microcontroller and the PC or laptop where the IDE runs. In this example, we set the analog pin A0 as the input of the temperature sensor measurements, digital pins defined earlier in Part 2 of the code as output pins to control the RGB LED, and start the serial communication with a typical communication speed (9600 bits per second) so that everything is ready for the microcontroller to execute the loop block.
void setup() {
pinMode(A0, INPUT); // Temperature sensor analog input pin
pinMode(BluePin, OUTPUT); // Blue LED digital output pin
pinMode(GreenPin, OUTPUT); // Green LED digital output pin
pinMode(RedPin, OUTPUT); // Red LED digital output pin
Serial.begin(9600); // Set up baud rate as 9600 bits per second
}
Program Part 4—Execution Loop Block
The loop part of the program is what the microcontroller runs repeatedly until the power of the microcontroller is turned off.
Program Part 4.1—Start the loop and read in the analog input from the temperature sensor:
void loop() {
int sensor = analogRead(A0); // Read in the value from the analog pin connected to
// the temperature sensor
float voltage = (sensor / 1023.0) * 5.0; // Convert the value to voltage
float tempC = voltage * 100; // Convert the voltage to temperature using the
/* LM35 transfer function of 10 mV per degree Celsius given above; note that the 0.5 V offset seen in some example code applies to other sensors (e.g., the TMP36), not the LM35 */
Serial.print("Temperature: ");
Serial.print(tempC); // Print the temperature on the Arduino IDE output console
Here you see two types of variables, the integer (“int”) and the float (“float”). For an Arduino UNO, an “int” is 16 bits long and can represent a number ranging from −32,768 to 32,767 (−2^15 to (2^15) − 1). A “float” in Arduino UNO is 32 bits long and can represent a number that has a decimal point, ranging from −3.4028235E+38 to 3.4028235E+38. Here, we define the variable for the temperature measured from the LM35 sensor as a float type so that it can represent a decimal number and is more accurate.
Program Part 4.2—Check if the temperature is lower than the optimal temperature range. If yes, turn on the LED in blue and blink it according to how much the temperature deviated from the optimal range:
if (tempC < cold) { // If the temperature is colder than the optimal temperature range
Serial.println("It's cold.");
int temp_dif = ceil(cold - tempC); // Calculate how much the temperature deviated from
// the optimal range (rounded up so the deviation is at least 1)
if (temp_dif <= 10) {
int LED_blink_interval = 4000 / temp_dif;
/* Calculate LED blink interval in milliseconds based on the temperature deviation from the optimal range; the further the deviation, the faster the LED blinks until turning into a solid blue */
// Blink the LED in blue:
digitalWrite(BluePin, HIGH); // Turn on the blue LED
digitalWrite(GreenPin, LOW); // Turn off the green LED
digitalWrite(RedPin, LOW); // Turn off the red LED
delay(LED_blink_interval); // Keep this status for a certain amount of time in milliseconds
digitalWrite(BluePin, LOW); // Turn off the blue LED
delay(LED_blink_interval); // Keep this status for a certain amount of time in milliseconds
}
else {
digitalWrite(BluePin, HIGH); // Turn on the blue LED as a solid light
digitalWrite(GreenPin, LOW); // Turn off the green LED
digitalWrite(RedPin, LOW); // Turn off the red LED
}
}
Here, we define an integer variable called “LED_blink_interval” that is inversely proportional to the deviation of the temperature from the optimal range, “temp_dif.” A coefficient of 4000 is used so that the interval is on the order of 1000; for example, a deviation of 4°C gives an interval of 4000/4 = 1000. Arduino measures time durations in milliseconds, so delay(1000) means a delay of 1000 milliseconds, or 1 second.
Program Part 4.3—Check if the temperature is higher than the optimal temperature range. If yes, turn on the LED in red and blink it according to how much the temperature deviated from the optimal range:
else if (tempC > hot) {
// If the temperature is hotter than the optimal temperature range
Serial.println("It's hot.");
int temp_dif = ceil(tempC - hot); // Calculate how much the temperature deviated from
// the optimal range (rounded up so the deviation is at least 1)
if (temp_dif <= 10) {
int LED_blink_interval = 4000 / temp_dif;
/* Calculate LED blink interval in milliseconds based on the temperature
deviation from the optimal range; the further the deviation, the faster the LED blinks until turning into a solid red */
// Blink the LED in red:
digitalWrite(BluePin, LOW); // Turn off the blue LED
digitalWrite(GreenPin, LOW); // Turn off the green LED
digitalWrite(RedPin, HIGH); // Turn on the red LED
delay(LED_blink_interval); // Keep this status for certain time in ms
digitalWrite(RedPin, LOW); // Turn off the red LED
delay(LED_blink_interval); // Keep this status for certain time in ms
}
else {
digitalWrite(BluePin, LOW); // Turn off the blue LED
digitalWrite(GreenPin, LOW); // Turn off the green LED
digitalWrite(RedPin, HIGH); // Turn on the red LED as a solid light
}
}
Program Part 4.4—If the temperature is within the optimal range, turn on the LED in green:
else { // Otherwise the temperature should be fine; turn the LED on in solid green
Serial.println("The temperature is fine.");
digitalWrite(BluePin, LOW); // Turn off the blue LED
digitalWrite(GreenPin, HIGH); // Turn on the green LED
digitalWrite(RedPin, LOW); // Turn off the red LED
}
delay(10);
}
After the program is written, use the “verify” button in the IDE to compile the code and debug any errors. If the code has been transcribed accurately, there should be no syntax errors or bugs. If the IDE indicates errors, it is necessary to work through each line of code to make sure the program is correct. Be aware that the real error is sometimes in the lines just before or after the location reported by the debugger. Common errors include missing variable definitions, missing braces, misspelled function names, and incorrect capitalization. Some other errors, such as choosing the wrong variable type, often cannot be caught at the compile stage, but we can use the “Serial.print” function to print results or intermediate results on the serial monitor to check whether they look reasonable.
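For instance, a temporary debug statement such as the following (a hypothetical addition inside the loop block, to be removed once the program is verified) prints the raw A/D reading so it can be checked against the expected 0 to 1023 range:

Serial.print("Raw sensor value: "); // Temporary debug output
Serial.println(sensor); // A value stuck at 0 or 1023 suggests a wiring problem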
Once the program code has no errors, connect the PC or laptop to the Arduino UNO board with the USB cable, with no wires or pins plugged in. Check that the board and port selections under “Tools” in the main menu are still correct. Use the “upload” button in the IDE to upload the program code to the Arduino board. Disconnect the USB cable from the board, and now plug in all the wires and pins. Re-connect the board and open the “Serial Monitor” from the IDE. The current ambient temperature should display in the serial monitor, and the LED should light and blink accordingly. If any further errors occur, they will show in the message area at the bottom of the IDE window; go back to debugging if this happens. If there are no errors and everything runs correctly, test how the measurement system works by changing the temperature around the sensor and observing the corresponding response of the LED color and blinking frequency. This can be done by breathing over the sensor or placing it close to a cup of iced water or in a fridge for a short time. When the room temperature is in the set point range (about 18°C to 20°C), the green LED should be lit. Once the temperature is too high, only the red LED should be lit. When the temperature is too low, only the blue LED should be lit. If this does not work, verify that you have actually created different temperatures by using a laboratory thermometer, and then check the program code.
Step 7. Deploy and debug.
Deploy and debug the system under the targeted working environment with permanent hardware connections until everything works as expected.
We leave this step of making the permanent hardware connections for you to complete if interested. In practice, the packaging of the overall system will be designed to accommodate the working environment. The completed final product will be tested extensively for durability and reliability.
Step 8. Document the system.
Write documentation such as system specifications, wiring diagram, and user’s manual for the end users. At this stage, an instruction and safety manual would be written, and, if necessary, the product can be sent for local certification. Now the system you developed is ready to be signed off and handed over to the end users!
Image Credits
Figure 1. Alciatore, D.G., and Histand, M.B. (CC By 4.0). (2012). Main components in a measurement and control system. Introduction to mechatronics and measurement systems. Fourth edition. McGraw Hill.
Figure 2. Alciatore, D.G., and Histand, M.B. (2013). Microcontroller architecture. Adapted from Introduction to mechatronics and measurement systems. Fourth edition. McGraw Hill.
Figure 3. Qiu, G. (CC By 4.0). (2020). A diagram of a modern greenhouse system.
Figure 4. Qiu, G. (CC By 4.0). (2020). The core structure of a greenhouse control system.
Figure 5. Qiu, G. (CC By 4.0). (2020). The schematic of ventilation system.
Figure 6. Qiu, G. (CC By 4.0). (2020). The schematic of the control panel in ventilation system.
Figure 7. Qiu, G. (CC By 4.0). (2020). The nutrient and water supply system.
Figure 8. Qiu, G. (CC By 4.0). (2020). The schematic of lighting system.
Figure 9. Texas Instruments. (2020). Texas Instruments LM35 precision centigrade temperature sensor in LP package (a) and its pin configuration and functions (b). Retrieved from http://www.ti.com/lit/ds/symlink/lm35.pdf
Figure 10. Amazon. (2020). A 5 mm common cathode RGB LED and its pinout. Retrieved from www.amazon.com/Tricolor-Diffused-Multicolor-Electronics-Components/dp/B01C3ZZT8W/ref=sr_1_36?keywords=rgb+led&qid=1574202466&sr=8-36
Figure 11. Arduino. (2020). An Arduino UNO board and some major components. Retrieved from https://store.arduino.cc/usa/arduino-uno-rev3
Figure 12. Shi, Y. (CC By 4.0). (2020). Wiring diagram for setting up the test platform.
Figure 13. Sparkfun. (CC By 4.0). (2020). A breadboard: (a) front view (b) back view with the adhesive back removed to expose the bottom of the four vertical power rails on the sides (indicated with arrows) and the terminal strips in the middle Retrieved from https://learn.sparkfun.com/tutorials/how-to-use-a-breadboard/all
Figure 14. Shi, Y. (CC By 4.0). (2020). The interface and anatomy of Arduino IDE.
Figure 15. Shi, Y. (CC By 4.0). (2020). Select the right board and COM port in Arduino IDE.
Figure 16. Shi, Y. (CC By 4.0). (2020). Program logic flowchart.
References
Alciatore, D. G., and Histand, M. B. 2012. Introduction to mechatronics and measurement systems. 4th ed. McGraw Hill.
Arduino, 2019. https://www.arduino.cc/ Accessed on March 15, 2019.
Bolton, W. 2015. Mechatronics, electronic control systems in mechanical and electrical engineering. 6th ed. Pearson Education Limited.
Carryer, J. E., Ohline, R. M., and Kenny, T.W. 2011. Introduction to mechatronic design. Prentice Hall.
de Silva, C. W. 2010. Mechatronics—A foundation course. CRC Press.
University of Florida. 2019. What makes plants grow? edis.ifas.ufl.edu/pdffiles/4h/4H36000.pdf.
Nathalie Gorretta
University of Montpellier, INRAe and SupAgro
Montpellier, France
Aoife A. Gowen
UCD School of Biosystems and Food Engineering
University College Dublin, Ireland
Introduction
Optical sensors are a broad class of devices for detecting light intensity. This can be a simple component for notifying when ambient light intensity rises above or falls below a prescribed level, or a highly sensitive device with the capacity to detect and quantify various properties of light such as intensity, frequency, wavelength, or polarization. Among these sensors, optical spectroscopic sensors, where light interaction with a sample is measured at many different wavelengths, are popular tools for the characterization of biological resources, since they facilitate comprehensive, non-invasive, and non-destructive monitoring. Optical sensors are widely used in the control and characterization of various biological environments, including food processing, agriculture, organic waste sorting, and digestate control.
The theory of spectroscopy began in the 17th century. In 1666, Isaac Newton demonstrated that white light from the sun could be dispersed into a continuous series of colors (Thomas, 1991), coining the word spectrum to describe this phenomenon. Many other researchers then contributed to the development of this technique by showing, for example, that the sun’s radiation was not limited to the visible portion of the electromagnetic spectrum. William Herschel (1800) and Johann Wilhelm Ritter (1801) showed that the sun’s radiation extended into the infrared and ultraviolet, respectively. A major contribution by Joseph Fraunhofer in 1814 laid the foundations for quantitative spectrometry. He extended Newton’s discovery by observing that the sun’s spectrum was crossed by a large number of fine dark lines now known as Fraunhofer lines. He also developed an essential element of future spectrum measurement tools (spectrometers) known as the diffraction grating, an array of slits that disperses light. Despite these major advances, Fraunhofer could not give an explanation as to the origin of the spectral lines he had observed. It was only later, in the 1850s, that Gustav Kirchhoff and Robert Bunsen showed that each atom and molecule has its own characteristic spectrum. Their achievements established spectroscopy as a scientific tool for probing atomic and molecular structure (Thomas, 1991; Bursey, 2017).
Many terms are used to describe the measurement of electromagnetic energy at different wavelengths, such as spectroscopy, spectrometry, and spectrophotometry. The word spectroscopy originates from the combination of spectro (from the Latin word specere, meaning “to look at”) with scopy (from the Greek word skopia, meaning “to see”). Following the achievements of Newton, the term spectroscopy was first applied to describe the study of visible light dispersed by a prism as a function of its wavelength. The concept of spectroscopy was extended, during a lecture by Arthur Schuster in 1881 at the Royal Institution, to incorporate any interaction with radiative energy according to its wavelength or frequency (Schuster, 1911). Spectroscopy, then, can be summarized as the scientific study of the electromagnetic radiation emitted, absorbed, reflected, or scattered by atoms or molecules. Spectrometry or spectrophotometry is the quantitative measurement of the electromagnetic energy emitted, reflected, absorbed, or scattered by a material as a function of wavelength. The suffix “-photo” (originating from the Greek term phôs, meaning “light”) refers to visual observation, for example, printing on photographic film, projection on a screen, or the use of an observation scope, while the suffix “-metry” (from the Greek term metria, meaning the process of measuring) refers to the recording of a signal by a device (plotter or electronic recording).
Spectroscopic data are typically represented by a spectrum, a plot of the response of interest (e.g. reflectance, transmittance) as a function of wavelength or frequency. The instrument used to obtain a spectrum is called a spectrometer or a spectrophotometer. The spectrum, representing the interaction of electromagnetic radiation with matter, can be analyzed to gain information on the identity, structure, and energy levels of atoms and molecules in a sample.
Two major types of spectroscopy have been defined, atomic and molecular. Atomic spectroscopy refers to the study of electromagnetic radiation absorbed or emitted by atoms, whereas molecular spectroscopy refers to the study of the light absorbed or emitted by molecules. Molecular spectroscopy provides information about chemical functions and structure of matter while atomic spectroscopy gives information about elemental composition of a sample. This chapter focuses on molecular spectroscopy, particularly in the visible-near infrared wavelength region due to its relevance in biosystems engineering.
Concepts
Light and Matter Interaction
Spectroscopy is based on the way electromagnetic energy interacts with matter. All light is classified as electromagnetic radiation consisting of alternating electric and magnetic fields and is described classically by a continuous sinusoidal wave-like motion of the electric and magnetic fields propagating transversally in space and time. Wave motion can be described by its wavelength $\lambda$ (nm), the distance between successive maxima or minima, or by its frequency $\nu$ (Hz), the number of oscillations of the field per second (Figure 2.2.1). Wavelength is related to the frequency via the speed of light $c$ ($3 \times 10^{8}$ m s$^{-1}$) according to the relationship given in Equation 2.2.1.
$\lambda = \frac{c}{\nu}$
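For example, green light with a wavelength of 500 nm has a frequency of $\nu = \frac{c}{\lambda} = \frac{3 \times 10^{8}\ \text{m s}^{-1}}{500 \times 10^{-9}\ \text{m}} = 6 \times 10^{14}$ Hz.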
Sometimes it is convenient to describe light in terms of units called “wavenumbers,” where the wavenumber is the number of waves in one centimeter. Thus, wavenumbers are frequently used to characterize infrared radiation. The wavenumber, $\bar{\nu}$ is formally defined as the inverse of the wavelength, $\lambda$ expressed in centimeters:
$\bar{\nu}=\frac{1}{\lambda}$
The wavenumber is therefore directly proportional to frequency, ν:
$\nu = c\bar{\nu}$
leading to the following conversion relationships:
$\bar{\nu}(\text{cm}^{-1}) = \frac{10^{7}}{\lambda(\text{nm})}$

$\lambda(\text{nm}) = \frac{10^{7}}{\bar{\nu}(\text{cm}^{-1})}$
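For example, NIR radiation with a wavelength of 2,500 nm has a wavenumber of $\bar{\nu} = \frac{10^{7}}{2500} = 4{,}000$ cm$^{-1}$.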
The propagation of light is described by the theory of electromagnetic waves proposed by Christiaan Huygens in 1678 (Huygens, 1912). However, the interaction of light with matter (emission or absorption) is explained by the particle nature of light, proposed by Planck and Einstein in the early 1900s. In this theory, light is considered to consist of particles called photons, moving at the speed c. Photons are “packets” of elementary energy, or quanta, that are exchanged during the absorption or emission of light by matter.
Table $1$: Conversion relationships between $\lambda$ and $\bar{\nu}$.
• When $\lambda$ is in cm and $\bar{\nu}$ is in cm$^{-1}$: $\bar{\nu} = \frac{1}{\lambda}$
• When $\lambda$ is in nm and $\bar{\nu}$ is in cm$^{-1}$: $\bar{\nu} = \frac{10^{7}}{\lambda}$
The energy of a photon is directly proportional to its frequency, as described by the fundamental Planck relation (Equation 2.2.6). Thus, high energy radiation (such as X-rays) has high frequencies and short wavelengths and, inversely, low energy radiation (such as radio waves) has low frequencies and long wavelengths.
$E =h\nu=\frac{hc}{\lambda}=hc\bar{\nu}$
where E = energy of photons of light (J)
h = Planck’s constant = 6.62607004 × 10$^{-34}$ J·s
ν = frequency (Hz)
c = speed of light ($3 \times 10^{8}$ m s$^{-1}$)
$\lambda$ = wavelength (m)
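For example, a photon of green light ($\lambda$ = 500 nm) carries an energy of $E = \frac{hc}{\lambda} = \frac{6.626 \times 10^{-34}\ \text{J s} \times 3 \times 10^{8}\ \text{m s}^{-1}}{500 \times 10^{-9}\ \text{m}} \approx 4 \times 10^{-19}$ J.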
The electromagnetic spectrum is the division of electromagnetic radiation according to its different components in terms of frequency, photon energy, or associated wavelengths, as shown in Figure 2.2.2. The highest energy radiation corresponds to the γ-ray region of the spectrum. At the other end of the electromagnetic spectrum, radio frequencies have very low energy (Pavia et al., 2008). The visible region only makes up a small part of the electromagnetic spectrum and ranges from 400 to about 750 nm. The infrared (IR) spectral region is adjacent to the visible spectral region and extends from about 750 nm to about $5 \times 10^{6}$ nm. It can be further subdivided into the near-infrared region (NIR) from about 750 nm to 2,500 nm, which contains the short wave-infrared (SWIR) from 1,100–2,500 nm; the mid-infrared (MIR) region from 2,500 nm to $5 \times 10^{4}$ nm; and the far-infrared (FIR) region from $5 \times 10^{4}$ nm to $5 \times 10^{6}$ nm (Osborne et al., 1993).
When electromagnetic radiation collides with a molecule, the molecule’s electronic configuration is modified. This modification is related to the wavelength of the radiation and consequently to its energy. The interaction of a wave with matter, whatever its energy, is governed by the Bohr atomic model and derivative laws established by Bohr, Einstein, Planck, and De Broglie (Bohr, 1913; De Broglie, 1925). Atoms and molecules can only exist in certain quantified energy states. The energy exchanges between matter and radiation can, therefore, only be done by specific amounts of energy or quanta $\Delta{E} =h\nu$. These energy exchanges can be carried out in three main ways (Figure 2.2.3): absorption, emission, or diffusion.
In absorption spectroscopy, a photon is absorbed by a molecule, which undergoes a transition from a lower-energy state Ei to a higher energy or excited state Ej such that Ej – Ei = hν. In emission spectroscopy, a photon can be emitted by a molecule that undergoes a transition from a higher energy state Ej to a lower energy state Ei such that Ej – Ei = hν. In diffusion or scattering spectroscopy, a part of the radiation interacting with matter is scattered in many directions by the particles of the sample. If, after an interaction, the photon energy is not modified, the interaction is known as elastic. This corresponds to Rayleigh or elastic scattering, which maintains the frequency of the incident wave. When the photon takes or gives energy to the matter and undergoes a change in energy, the interaction is called inelastic, corresponding, respectively, to Stokes or anti-Stokes Raman scattering. Transitions between energy states are referred to as absorption or emission lines for absorption and emission spectroscopy, respectively.
Absorption Spectrometry
In absorption spectrometry, transitions between energy states are referred to as absorption lines. These absorption lines are typically classified by the nature of the electronic configuration change induced in the molecule (Sun, 2009):
• Rotation lines occur when the rotational state of a molecule is changed. They are typically found in the microwave spectral region ranging between 100 μm and 1 cm.
• Vibrational lines occur when the vibrational state of the molecule is changed. They are typically found in the IR, i.e., in the spectral range between 780 and 25,000 nm. Overtones and combinations of the fundamental vibrations in the IR are found in the NIR range (Figure 2.2.2).
• Electronic lines correspond to a change in the electronic state of a molecule (transitions between energy levels of valence orbitals). They are typically found in the ultraviolet (approx. 200–400 nm) and visible (approx. 400–750 nm) regions. In the visible region, molecules such as carotenoids and chlorophylls absorb light due to their molecular structure. This visible spectral range is also used to evaluate color (for instance, of food or vegetation). In the ultraviolet spectral range, fluorescence and phosphorescence can be observed. While fluorescence and phosphorescence are both spontaneous emission of electromagnetic radiation, they differ in the way the excited molecule loses its energy after it has been irradiated. The glow of fluorescence stops right after the source of excitatory radiation is switched off, whereas for phosphorescence, an afterglow can last from fractions of a second to hours.
The spectral ranges selected for measurement and analysis depend on the application and the materials to be characterized. Absorption spectroscopy in the visible and NIR ranges is commonly used for the characterization of biological systems due to the many advantages associated with this wavelength range, including rapidity, non-invasive and non-destructive measurement, and significant incident wave penetration. Moreover, the NIR range enables probing of molecules containing C-H, N-H, S-H, and O-H bonds, which are of particular interest for characterization of biological samples (Pasquini, 2018; 2003). In addition to the chemical characterization of materials, it is possible to quantify the concentration of certain molecules using the Beer-Lambert law, described in detail below.
Beer-Lambert Law
Incident radiation passing through a medium undergoes several changes, the extent of which depends on the physical and chemical properties of the medium. Typically, part of the incident beam is reflected, another part is absorbed and transformed into heat by interaction with the material, and the rest passes through the medium. Transmittance is defined as the ratio of the transmitted light intensity to the incident light intensity (Equation 2.2.7). Absorbance is defined as the logarithm of the inverse of the transmittance (Equation 2.2.8). Absorbance is a positive value, without units. Due to the inverse relationship between them, absorbance is greater when the transmitted light is low.
$T= \frac{I}{I_{0}}$
$A=\log(\frac{1}{T})=\log(\frac{I_{0}}{I})$
where T = transmittance
I = transmitted light intensity
I0 = incident light intensity
A = absorbance (unitless)
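For example, if a sample transmits 10% of the incident light, then T = 0.10 and A = log(1/0.10) = 1; if it transmits only 1%, A = 2. Each unit increase in absorbance thus corresponds to a tenfold decrease in transmitted light.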
The Beer-Lambert law (Equation 2.2.9) describes the linear relationship between absorbance and concentration of an absorbing species. At a given wavelength λ, absorbance A of a solution is directly proportional to its concentration (C) and to the length of the optical path (b), i.e., the distance over which light passes through the solution (Figure 2.2.4, Equation 2.2.9). When the concentration is expressed in moles per liter (mol L−1), the length of the optical path in centimeters (cm), the molar absorptivity or the molar extinction coefficient ε is expressed in L mol−1 cm−1.
Molar absorptivity is a measure of the probability of the electronic transition and depends on the wavelength but also on the solute responsible for absorption, the temperature and, to a lesser extent, the pressure.
$A=\epsilon bC$
where A = absorbance (unitless)
ε = molar absorptivity or molar extinction coefficient = Beer-Lambert proportionality constant (L mol−1 cm−1)
b = path length of the sample (cm)
C = concentration (mol L−1)
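For example, for a hypothetical analyte with ε = 6,220 L mol$^{-1}$ cm$^{-1}$ measured in a cuvette with a 1 cm path length, an absorbance of A = 0.311 corresponds to a concentration of $C = \frac{A}{\epsilon b} = \frac{0.311}{6220 \times 1} = 5.0 \times 10^{-5}$ mol L$^{-1}$.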
Beer-Lambert Law Limitations
Under certain circumstances, the linear relationship between the absorbance, the concentration, and the path length of light can break down due to chemical and instrumental factors. Causes of nonlinearity include the following:
• Deviation of absorptivity coefficient: The Beer-Lambert law describes the behavior of solutions containing low concentrations of an analyte. When the analyte concentration is too high (typically >10 mM), electrostatic interactions between molecules close to each other cause deviations in the absorptivity coefficient.
• High analyte concentrations can also alter the refractive index of the solution, which in turn can affect the measured absorbance.
• Scattering: Particulates in the sample can induce scattering of light.
• Fluorescence or phosphorescence of the sample.
• Non-monochromatic radiation due to the instrumentation used.
Nonlinearity can be detected as deviations from a straight line when absorbance is plotted as a function of concentration (see Example 1). It can usually be overcome by reducing the analyte concentration through sample dilution.
Spectroscopic Measurements
Spectrometers are optical instruments that detect and measure the intensity of light at different wavelengths. Different measurement modes are available, including transmission, reflection, and diffuse reflection (Figure 2.2.5). In transmission mode, the spectrometer captures the light transmitted through a sample, while in reflectance mode, the spectrometer captures the light reflected by the sample. In some situations, e.g., for light-diffusing samples such as powders, reflected light does not come solely from the front surface of the object; radiation that penetrates the material can reappear after scattering or reflection within the sample. This radiation is called diffuse reflection.
Spectrometers share several common basic components, including a source of light energy, a means for isolating a narrow range of wavelengths (typically a dispersive element), and a detector. The dispersive element must allow light of different wavelengths to be separated (Figure 2.2.6).
The light source is arguably the most important component of any spectrophotometer. The ideal source is a continuous one that contains radiation of uniform intensity over a large range of wavelengths. Other desirable properties are stability over time, long service life, and low cost. Quartz-tungsten halogen lamps are commonly used as light sources for the visible (Vis) and NIR regions, and deuterium lamps or high-powered light emitting diodes may be used for the ultraviolet region.
The light produced by the light source is then focused and directed to the monochromator through an entrance slit. A diffraction grating is then used to split the white light from the lamp into its component wavelengths. The distance between the lines on the grating (“grating pitch”) is of the same order of magnitude as the wavelength of the light to be analyzed. The separated wavelengths then propagate towards the sample compartment through the exit slit.
Depending on the technology used for the detector, the sample can be positioned before or after the monochromator. For simplicity, this chapter describes a configuration with the sample after the monochromator; the operation described above is valid regardless of the positioning of the sample.
In some spectrometers, an interferometer (e.g., Fabry-Pérot or Fourier-transform interferometer for the UV and IR spectral ranges, respectively) is used instead of a diffraction grating to obtain spectral measurements. In this case, the initial light beam is split into two beams with different optical paths by using mirror arrangements. These two beams are then recombined before arriving at the detector. If the optical path lengths of the two beams do not differ too greatly, an interference pattern is produced. A mathematical operation (the Fourier transform) is then applied to the resulting interference pattern (interferogram) to produce a spectrum.
Once the light beams have passed through the sample, they continue to the detector, or photodetector. A photodetector absorbs the optical energy and converts it into electrical energy. A photodetector may be a multichannel detector, such as a photodiode array, a charge coupled device (CCD), or a complementary metal oxide semiconductor (CMOS) sensor. While photodetectors can be characterized in many different ways, the most important differentiator is the detector material. The two most common semiconductor materials used in Vis-NIR spectrometers are silicon (Si) and indium gallium arsenide (InGaAs).
Spectral Imaging
Spectral imaging is a technique that integrates conventional imaging and spectroscopy to obtain both spatial and spectral information from an object. Multispectral imaging usually refers to spectral images in which <10 spectral bands are collected, while hyperspectral imaging is the term used when >100 contiguous spectral bands are collected. The term spectral imaging is more general. Spectral images can be represented as three-dimensional blocks of data, comprising two spatial and one wavelength dimension.
Two sensing modes are commonly used to acquire hyperspectral images, i.e., reflectance and transmission modes (Figure 2.2.7). The choice of mode depends on the objects to be characterized (e.g., transparent or opaque) and the properties to be determined (e.g., size, shape, chemical composition, presence of defects). In reflectance mode, the hyperspectral sensor and the light source are located on the same side of the object and the imaging system acquires the light reflected by the object. In this mode, the lighting system should be designed to avoid any specular reflection. Specular reflection occurs when a light source can be seen as a direct reflection on the surface of an object; it is characterized by an angle of reflection equal to the angle of incidence of the incoming light on the sample. Specular reflection appears as bright saturated spots that degrade the quality of acquired images. In transmittance mode, the detector is located on the opposite side of the sample from the light source and captures the light transmitted through the sample.
Applications
Vegetation Monitoring in Agriculture
The propagation of light through plant leaves is governed primarily by absorption and scattering interactions and is related to the chemical and structural composition of the leaves. Spectral characteristics of radiation reflected, transmitted, or absorbed by leaves can thus provide a more thorough understanding of physiological responses to growth conditions and plant adaptations to the environment. Indeed, the biochemical components and physical structure of vegetation are related to its state of growth and health. For example, foliar pigments including chlorophyll a and b, carotenoids, and anthocyanins are strong absorbers in the Vis region and are abundant in healthy vegetation, causing plant reflectance spectra to be low in the Vis relative to the NIR wavelength range (Asner, 1998; Ollinger, 2011) (Figure 2.2.8). Chlorophyll pigments absorb violet-blue and red light for photosynthesis, the process by which plants use sunlight to synthesize organic matter. Green light is poorly absorbed, so reflectance spectra of green vegetation in the visible range reach a maximum around 550 nm; this is why healthy leaves appear green. The red edge refers to the region of sudden increase in the reflectance of green vegetation between 670 and 780 nm. The reflectance in the NIR plateau (800–1100 nm) is a region where biochemical absorptions are limited and is affected by the scattering of light within the leaf, the extent of which is related to the leaf’s internal structure. Reflectance in the short wave-IR (1100–2500 nm) is characterized by strong water absorption and minor absorptions of other foliar biochemical contents such as lignin, cellulose, starch, and protein.
Stress conditions on plants, such as drought and pathogens, will induce changes in reflectance in the Vis and NIR spectral domain due to degradation of the leaf structure and the change of the chemical composition of certain tissues. Consequently, by measuring crop reflectance in the Vis and NIR regions of the spectrum, spectrometric sensors are able to monitor and estimate crop yield and crop water requirements and to detect biotic or abiotic stresses on vegetation. Vegetation indices (VI), which are combinations of reflectance images at two or more wavelengths designed to highlight a particular property of vegetation, can then be calculated over these images to monitor vegetation changes or properties at different spatial scales.
The normalized difference vegetation index (NDVI) (Rouse et al., 1974) is the ratio of the difference between NIR and red reflectance, divided by the sum of the two:
$NDVI = \frac{R_{NIR}-R_{R}}{R_{NIR}+R_{R}}$
where RNIR = reflectance in the NIR spectral region (one wavelength selected over the 750–870 nm spectral range) and RR = reflectance in the red spectral region (one wavelength selected over the 580–650 nm spectral range). Dividing by the sum of the two bands reduces variations in illumination over the field of view of the image. Thus, NDVI maintains a relatively constant value regardless of the overall illumination, unlike the simple difference, which is very sensitive to changes in illumination. NDVI values can range between −1 and +1, with negative values corresponding to surfaces other than plant cover, such as snow or water, for which the red reflectance is higher than that in the NIR. For bare soils, where red and NIR reflectance are of the same order of magnitude, NDVI values are close to 0. Vegetation canopies have positive NDVI values, generally in the range of 0.1 to 0.7, with the highest values corresponding to the densest vegetation coverage.
NDVI can be correlated with many plant properties. It has been, and still is, used to characterize plant health status, identify phenological changes, estimate green biomass and yields, and in many other applications. However, NDVI also has some weaknesses. Atmospheric conditions and thin cloud layers can influence the calculation of NDVI from satellite data. When vegetation cover is low, everything under the canopy influences the recorded reflectance signal: bare soil, plant litter, or other vegetation, each with its own spectral signature different from that of the vegetation being studied. Other indices have been proposed to correct these NDVI defects or to estimate other vegetation parameters, such as the normalized difference water index, or NDWI (Gao, 1996), which uses two wavelengths located in the NIR and SWIR regions (750–2500 nm), respectively, to track changes in plant moisture content and water stress (Equation 2.2.11). Both wavelengths are located in a high reflectance plateau (Figure 2.2.8) where the vegetation scattering properties are expected to be about the same. The SWIR reflectance is affected by the water content of the vegetation. The combination of the NIR and SWIR wavelengths is thus not sensitive to the internal structure of the leaf but is affected by vegetation water content. The normalized difference water index is:
$NDWI=\frac{R_{NIR}-R_{SWIR}}{R_{NIR}+R_{SWIR}}$
where RNIR is the reflectance in the NIR spectral region (one wavelength selected over the 750–870 nm spectral range) and RSWIR is the reflectance in the SWIR spectral region around 1240 nm (water absorption band). Gao (1996) proposed using RNIR equal to reflectance at 860 nm and RSWIR at 1240 nm.
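Both indices are straightforward to compute once the relevant band reflectances are available. The following minimal Python sketch implements the two formulas; the reflectance values used are hypothetical, chosen to represent a healthy canopy:

```python
import numpy as np

def ndvi(r_nir, r_red):
    """Normalized difference vegetation index."""
    return (r_nir - r_red) / (r_nir + r_red)

def ndwi(r_nir, r_swir):
    """Normalized difference water index (Equation 2.2.11)."""
    return (r_nir - r_swir) / (r_nir + r_swir)

# Hypothetical reflectances: NIR at 860 nm, red at 650 nm, SWIR at 1240 nm
print(ndvi(0.45, 0.08))   # ~0.70, dense green vegetation
print(ndwi(0.45, 0.20))   # ~0.38
```

The same functions apply unchanged to NumPy arrays of per-pixel reflectances, which is how index images are produced in practice.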
Absorption spectroscopy is widely used for monitoring and characterizing vegetation at different spatial, spectral, and temporal scales. Sensors are available mainly for broad-band multispectral or narrow-band hyperspectral data acquisition. Platforms are space-borne for satellite-based sensors, airborne for sensors on manned and unmanned airplanes, and ground-based for field and laboratory-based sensors.
Satellites have been used for remote sensing imagery in agriculture since the early 1970s (Bauer and Cipra, 1973; Doraiswamy et al., 2003), when Landsat 1 (originally known as Earth Resources Technology Satellite 1) was launched. Equipped with a multispectral scanner with four wavelength channels (one green, one red, and two IR bands), this satellite was able to acquire multispectral images with 80 m spatial resolution and an 18-day revisit time (Mulla, 2013). Today, numerous multispectral satellite sensors are available and provide observations useful for assessing vegetation properties far better than Landsat 1. Landsat 8, for example, launched in 2013, offers nine spectral bands in the Vis to short-wave IR spectral range (i.e., 400–2500 nm) with a spatial resolution of 15–30 m and a 16-day revisit time. The Sentinel-2A and Sentinel-2B sensors, launched in 2015 and 2017, respectively, have 13 spectral bands (400–2500 nm) and offer 10–30 m multi-spectral global coverage and a revisit time of less than 10 days. Hyperspectral sensors, however, are still scarce on satellites due to their cost and their relatively short operating life. Among them, Hyperion (EO-1 platform) has 220 spectral bands over the 400–2500 nm spectral range, a spatial resolution of 30 m, and a spectral resolution of 10 nm. The next generation, such as PRISMA (PRecursore IperSpettrale della Missione Applicativa) with a 30 m spatial resolution and a wavelength range of 400–2505 nm and EnMAP (Environmental Mapping and Analysis Program) with a 30 m spatial resolution and a wavelength range of 400–2500 nm (Transon et al., 2018), indicate the future for this technology.
Some companies now use satellite images to provide a service to help farmers manage agricultural plots. Farmstar (www.myfarmstar.com/web/en) and Oenoview (https://www.icv.fr/en/viticulture-oenology-consulting/oenoview), for example, support management of inputs and husbandry in cereal and vine crops, respectively. However, satellite-based sensors often have inadequate spatial resolution for precision agriculture applications. Some farm management decisions, such as weed detection and management, require images with a spatial resolution on the order of one centimeter and, for urgent situations (such as monitoring nutrient stress and disease), a temporal resolution of less than 24 hours (Zhang and Kovacs, 2012).
Airborne platforms today carry both multispectral and hyperspectral sensors, with wavelengths ranging from the Vis to the MIR, spatial resolutions ranging from sub-meter to kilometers, and temporal frequencies ranging from 30 min to weeks or months. Significant advancements in unmanned aerial vehicle (UAV) technology as well as in hyperspectral and multispectral sensors (in terms of both weight and image acquisition modes) allow these tools to be combined and used routinely for precision agricultural applications. The flexibility of these sensors, their availability, and the high achievable spatial resolutions (cm) make them an alternative to satellite sensors. Multispectral sensors embedded on UAV platforms have been used in various agricultural studies, for example, to detect diseases in citrus trees (Garcia-Ruiz et al., 2013), to estimate grain yield in rice (Zhou et al., 2017), and to map vineyard vigor (Primicerio et al., 2012). UAV systems with multispectral imaging capability are used routinely by companies to estimate the nitrogen needs of plants. This information, given in near real-time to farmers, helps them make management decisions. Information extracted from airborne images is also used in precision farming to enhance planning of agricultural interventions and management of agricultural production at the scale of farm fields.
Ground-based spectroscopic sensors have also been developed for agricultural purposes. They collect reflectance data from short distances and can be mounted on tractors or held by hand. For example, the Dualex Force A hand-tool leaf clip (https://www.force-a.com/fr/produits/dualex) is adapted to determine the optical absorbance of the epidermis of a leaf in the ultraviolet (UV) optical range through the differential measurement of the fluorescence of chlorophyll as well as the chlorophyll content of the leaf using different wavelengths in the red and NIR ranges. Using internal model calibration, this tool calculates leaf chlorophyll content, epidermal UV-absorbance and a nitrogen balance index (NBI). This information could then be used to obtain valuable indicators of nitrogen fertilization, plant senescence, or pathogen susceptibility. Other examples are the nitrogen sensors developed by Yara (https://www.yara.fr/fertilisation/outils-et-services/n-sensor/) that enable adjustment of the nitrogen application rate in real time and at any point of the field, according to the crop’s needs.
Food-Related Applications
Conventional, non-imaging, spectroscopic methods are widely used for routine analysis and process control in the agri-food industry. For example, NIR spectroscopy is commonly used in the prediction of protein, moisture, and fat content in a wide range of raw materials and processed products, such as liquids, gels, and powders (Porep et al., 2015). Ultraviolet-Vis (UV-Vis) spectroscopy is a valuable tool in monitoring bioprocesses, such as the development of colored phenolic compounds during fermentation of grapes in the process of winemaking (Aleixandre-Tudo et al., 2017). The Beer-Lambert law (Equation 2.2.9) can be used to predict the concentration of a given compound given its absorbance at a specific wavelength.
While conventional spectroscopic methods are useful for characterizing homogeneous products, the lack of spatial resolution leads to an incomplete assessment of heterogeneous products, such as many foodstuffs. This is particularly problematic in the case of surface contamination, where information on the location, extent, and distribution of contaminants over a food sample is required. Applications of Vis-NIR spectral imaging for food quality and safety are widespread in the scientific literature and are emerging in the commercial food industry. The heightened interest in this technique is driven mainly by the non-destructive and rapid nature of spectral imaging, and the potential to replace current labor- and time-intensive analytical methods in the production process.
This section provides a brief overview of the range and scope of such applications. For a more comprehensive description of these and related applications, several informative reviews have been published describing advances in hyperspectral imaging for contaminant detection (Vejarano et al., 2017), food authentication (Roberts et al., 2018), and food quality control (Gowen et al. 2007; Baiano, 2017).
Contaminant Detection
The ability of spectral imaging to detect spatial variations over a field of view, combined with chemical sensitivity, makes it a promising tool for contaminant detection. The main contaminants that can be detected in the food chain using Vis-NIR include polymers, paper, insects, soil, bones, stones, and fecal matter. Diffuse reflectance is by far the most common mode of spectral imaging utilized for this purpose, meaning that primarily only surface or peripheral contamination can be detected. Of concern in the food industry is the growth of spoilage and pathogenic microorganisms at both pre-harvest and post-harvest processing stages, since these result in economic losses and potential risks to human health. Vis-NIR spectral imaging methods have been demonstrated for pre-harvest detection of viral infection and fungal growth on plants, such as corn (maize) and wheat. For instance, decreases in the absorption of light in wavebands related to chlorophyll were found to be related to the destruction of chloroplasts in corn ears due to Fusarium infection (Bauriegel et al., 2011). Fecal contamination acts as a favorable environment for microbial growth, so many studies have focused on the detection of such contamination over a wide variety of foods, including fresh produce, meat, and poultry surfaces. For example, both fluorescence and reflectance modalities have been shown to be capable of detecting fecal contamination on apples with high accuracy levels (Kim et al., 2007). Recent studies have utilized transmittance-mode spectral imaging for insect detection within fruits and vegetables, resulting in high detection levels (>80% correct classification) (Vejarano et al., 2017).
Food Authentication
Food ingredient authentication is necessary for the ever expanding global supply chain to ensure compliance with labeling, legislation, and consumer demand. Due to the sensitivity of vibrational spectroscopy to molecular structure and the development of advanced multivariate data analysis techniques such as chemometrics, NIR and MIR spectroscopy have been used successfully in authentication of the purity and geographical origin of many foodstuffs, including honey, wine, cheese, and olive oil. Spectral imaging, having the added spatial dimension, has been used to analyze non-homogeneous samples, where spatial variation could improve information on the authentication or prior processing of the food product, for example, in the detection of fresh and frozen-thawed meat or in adulteration of flours (Roberts et al., 2018).
Food Quality Control
Vis-NIR spectral imaging has been applied in a wide range of food quality control issues, such as bruise detection in mushrooms, apples, and strawberries, and in the prediction of the distribution of water, protein, or fat content in heterogeneous products such as meat, fish, cheese, and bread (Liu et al., 2017). The dominant feature in the NIR spectrum of high moisture foods is the oxygen-hydrogen (OH) bond-related peak centered around 1450 nm. The shape and intensity of this peak is sensitive to the local environment of the food matrix, and can provide information on changes in the water present in food products. This is useful since many deteriorative biochemical processes, such as microbial growth and non-enzymatic browning, rely on the availability of free water in foods. Vis-NIR spectral imaging has also been applied to quality assessment of semi-solid foods, as reviewed by Baiano (2017). For instance, transmittance spectral imaging has been used to non-destructively assess the interior quality of eggs (Zhang et al., 2015), while diffuse reflectance spectral imaging has been used to study the microstructure of yogurt (Skytte et al., 2015) and milk products (Abildgaard et al., 2015).
Examples
Example $1$
Example 1: Using the Beer-Lambert law to predict the concentration of an unknown solution
Problem:
Data were obtained from a UV-Vis optical absorption instrument, as shown in Table 2.2.2. Light absorbance was measured at 520 nm for different concentrations of a compound that has a red color. The path length was 1 cm. The goal is to use the Beer-Lambert law to calculate the molar absorptivity coefficient and determine the concentration of an unknown solution that has an absorbance of 1.52.
Table $2$: Concentration (mol L−1) and corresponding absorbance at 520 nm for a red colored compound.
Concentration (mol L−1)    Absorbance at 520 nm
0.001    0.21
0.002    0.39
0.005    1.01
0.01    2.02
Solution
The first step required in calculating the molar absorptivity coefficient is to plot a graph of absorbance as a function of concentration, as shown in Figure 2.2.9. The data follow a linear trend, indicating that the assumptions of the Beer-Lambert law are satisfied.
To calculate the molar absorptivity coefficient, it is first necessary to calculate the line of best linear fit to the data. This is achieved here using the “add trendline” function in Excel. The resultant line of best fit is shown in Figure 2.2.10. The equation of this line is y = 201.85x.
Compare this equation to the Beer-Lambert law (Equation 2.2.9):
$A=\epsilon bC$ (Equation 2.2.9)
where A = absorbance (unitless)
ε = molar absorptivity or molar extinction coefficient = Beer-Lambert proportionality constant (L mol−1 cm−1)
b = path length of the sample (cm)
C = concentration (mol L−1)
In this example, ε b = 201.85, where b is the path length, defined in the problem as 1 cm. Consequently, ε = 201.85 (L mol−1 cm−1). To calculate the concentration of the unknown solution, substitute the absorbance of the unknown solution (1.52) into the equation of best linear fit, resulting in a concentration of 0.0075 mol L−1.
This type of calculation can be used for process or quality control in the food industry or for environmental monitoring such as water quality assessment.
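For readers who prefer code to a spreadsheet, the calculation in this example can be reproduced with NumPy. The sketch below assumes, consistent with the fitted equation y = 201.85x, that the trendline is forced through the origin, and then inverts the fit for the unknown solution:

```python
import numpy as np

conc = np.array([0.001, 0.002, 0.005, 0.01])  # mol L-1 (Table 2.2.2)
absb = np.array([0.21, 0.39, 1.01, 2.02])     # absorbance at 520 nm

# Least-squares slope of a line through the origin, A = (eps * b) * C
slope = np.sum(conc * absb) / np.sum(conc ** 2)  # ~201.85
eps = slope / 1.0                                 # path length b = 1 cm

c_unknown = 1.52 / slope                          # ~0.0075 mol L-1
print(f"eps = {eps:.2f} L mol-1 cm-1, C = {c_unknown:.4f} mol L-1")
```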
Example $2$
Example 2: Calculation of vegetation indices from a spectral image
Problem:
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) developed by the National Aeronautics and Space Administration (NASA) is one of the foremost spectral imaging instruments for Earth remote sensing (NASA, n. d.). An agricultural scene was gathered by flying over the Indian Pines test site in northwestern Indiana (U.S.) and consists of 145 × 145 pixels and 224 spectral reflectance bands in the wavelength range 400–2500 nm. The Indian Pines scene (freely available at https://doi.org/10.4231/R7RX991C; Baumgardner et al., 2015) contains two-thirds agricultural land and one-third forest or other natural perennial vegetation. There are also two major dual lane highways and a rail line, as well as some low-density housing, other structures, and smaller roads present in the scene. The ground truth image shows the designation of various plots and regions in the scene, and is designated into sixteen classes, as shown in Figure 2.2.11. The average radiance spectrum of four classes of land cover in the scene is plotted in Figure 2.2.12. Table 2.2.3 shows the data corresponding to the plots shown in Figure 2.2.11. Using the mean radiance values, calculate the NDVI and NDWI for each class of land cover. Please note: In this example, the mean radiance values are being used for illustration purposes. This simplification is based on the assumption that the radiation receipt is constant across all wavebands so radiance is assumed to be linearly proportional to reflectance (ratio of reflected to total incoming energy). Typically, vegetation indices are calculated from pixel-level reflectance spectra.
Solution

Land Cover Class    NDVI    NDWI
Grass-Pasture    0.38    0.5
Grass-Trees    0.24    0.38
Grass-Pasture-Mowed    0.03    0.45
Hay-Windrowed    0.09    0.35
Stone-Steel Towers    −0.25    0.35
By applying the calculation to each pixel spectrum in the image, it is possible to create images of the NDVI and NDWI, as shown in Figure 2.2.13. The NDVI highlights regions of vegetation in red, regions of crop growth and soil in light green-blue, and regions of stone in darker blue. The NDWI, sensitive to changes in water content of vegetation canopies, shows regions of high water content in red, irregularly distributed in the wooded regions.
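As a sketch of how such index images might be computed from a hyperspectral cube (the variable names and the band-selection helper below are hypothetical, not part of the AVIRIS toolchain):

```python
import numpy as np

def index_map(cube, wavelengths, wl_a, wl_b):
    """Per-pixel normalized difference of the two bands nearest wl_a and wl_b.
    cube: array of shape (rows, cols, bands); wavelengths: 1-D array in nm."""
    band_a = cube[:, :, np.argmin(np.abs(wavelengths - wl_a))]
    band_b = cube[:, :, np.argmin(np.abs(wavelengths - wl_b))]
    return (band_a - band_b) / (band_a + band_b)

# e.g., NDVI from NIR ~860 nm and red ~650 nm; NDWI from ~860 nm and ~1240 nm:
# ndvi_img = index_map(cube, wavelengths, 860, 650)
# ndwi_img = index_map(cube, wavelengths, 860, 1240)
```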
Image Credits
Figure 1. Gorretta, N. (CC BY 4.0). (2020). Schematic of a sinusoidal wave described by its wavelength.
Figure 2. Gorretta, N. (CC BY 4.0). (2020). Electromagnetic spectrum.
Figure 3. Gorretta, N. (CC BY 4.0). (2020). Simplified energy diagram showing (a) absorption, (b) emission of a photon by a molecule, (c) diffusion process.
Figure 4. Gorretta, N. (CC BY 4.0). (2020). Absorption of light by a sample.
Figure 5. Gorretta, N. (CC BY 4.0). (2020). Schematic diagram showing the path of light for different modes of light measurement, i.e., (a) transmission, (b) reflection, and (c) diffuse reflection.
Figure 6. Gorretta, N. (CC BY 4.0). (2020). Spectrometer configuration: transmission diffraction grating.
Figure 7. Gorretta, N. (CC BY 4.0). (2020). Hyperspectral imaging sensing mode: (a) reflectance mode, (b) transmission mode.
Figure 8. Gorretta, N. (CC BY 4.0). (2020). A green vegetation spectrum.
Figure 9. Gowen, A. A. (CC BY 4.0). (2020). Plot of absorbance at 520 nm as a function of concentration.
Figure 10. Gowen, A. A. (CC BY 4.0). (2020). Plot of absorbance at 520 nm as a function of concentration showing line and equation of best linear fit to the data.
Figure 11. Gowen, A. A. (CC BY 3.0). (2015). Indian Pines ground truth image showing various plots and regions in the scene, designated into sixteen classes. Adapted from Baumgardner, M. F., Biehl, L. L., & Landgrebe, D. A. (2015). 220 band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine Test Site 3. Purdue University Research Repository. https://doi.org/10.4231/R7RX991C. Licensed CC BY 3.0.
Figure 12. Gowen, A. A. (CC BY 4.0). (2020). Indian Pines average reflectance spectrum of four classes of land cover in the scene shown in Figure 11.
Figure 13. Gowen, A. A. (CC BY 4.0). (2020). NDVI and NDWI calculation of Indian Pines images.
References
Abildgaard, O. H., Kamran, F., Dahl, A. B., Skytte, J. L., Nielsen, F. D., Thomsen, C. L., . . . Frisvad, J. R. (2015). Non-invasive assessment of dairy products using spatially resolved diffuse reflectance spectroscopy. Appl. Spectrosc., 69(9), 1096–1105. https://doi.org/10.1366/14-07529.
Aleixandre-Tudo, J. L., Buica, A., Nieuwoudt, H., Aleixandre, J. L., & du Toit, W. (2017). Spectrophotometric analysis of phenolic compounds in grapes and wines. J. Agric. Food Chem., 65(20), 4009-4026. https://doi.org/10.1021/acs.jafc.7b01724.
Asner, G. P. (1998). Biophysical and biochemical sources of variability in canopy reflectance. Remote Sensing Environ., 64(3), 234-253. https://doi.org/10.1016/S0034-4257(98)00014-5.
Baiano, A. (2017). Applications of hyperspectral imaging for quality assessment of liquid based and semi-liquid food products: A review. J. Food Eng., 214, 10-15. https://doi.org/10.1016/j.jfoodeng.2017.06.012.
Bauer, M. E., & Cipra, J. E. (1973). Identification of agricultural crops by computer processing of ERTS MSS Data. Proc. Symp. on Significant Results Obtained from the Earth Resources Technology Satellite. Retrieved from http://agris.fao.org/agris-search/search.do?recordID=US201302721443.
Baumgardner, M. F., Biehl, L. L., & Landgrebe, D. A. (2015). 220 Band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine Test Site 3. Purdue University Research Repository. https://doi.org/10.4231/R7RX991C.
Bauriegel, E., Giebel, A., & Herppich, W. B. (2011). Hyperspectral and chlorophyll fluorescence imaging to analyse the impact of Fusarium culmorum on the photosynthetic integrity of infected wheat ears. Sensors, 11(4), 3765-3779. https://doi.org/10.3390/s110403765.
Bohr, N. (1913). I. On the constitution of atoms and molecules. London Edinburgh Dublin Philosophical Magazine J. Sci., 26(151), 1-25. https://doi.org/10.1080/14786441308634955.
Bursey, M. M. (2017). A brief history of spectroscopy. Access Science. https://doi.org/10.1036/1097-8542.BR0213171.
De Broglie, L. V. (1925). On the theory of quanta. Paris, France.
Doraiswamy, P. C., Moulin, S., Cook, P. W., & Stern, A. (2003). Crop yield assessment from remote sensing. Photogrammetric Eng. Remote Sensing, 69(6), 665-674. https://doi.org/10.14358/PERS.69.6.665.
Farmstar. (n. d.). Farmstar: Have everything you need to manage your crops! Retrieved from www.myfarmstar.com/web/en.
Force A. (n. d.). Dualex scientific. Retrieved from https://www.force-a.com/fr/produits/dualex.
Gao, B.-c. (1996). NDWI: A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sensing Environ., 58(3), 257-266. https://doi.org/10.1016/S0034-4257(96)00067-3.
Garcia-Ruiz, F., Sankaran, S., Maja, J. M., Lee, W. S., Rasmussen, J., & Ehsani, R. (2013). Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees. Comput. Electron. Agric., 91, 106-115. https://doi.org/10.1016/j.compag.2012.12.002.
Gowen, A. A., O’Donnell, C. P., Cullen, P. J., Downey, G., & Frias, J. M. (2007). Hyperspectral imaging—An emerging process analytical tool for food quality and safety control. Trends Food Sci. Technol., 18(12), 590-598. https://doi.org/10.1016/j.tifs.2007.06.001.
Huygens, C. (1912). Treatise on light. Macmillan. Retrieved from http://archive.org/details/treatiseonlight031310mbp.
Kim, M. S., Chen, Y.-R., Cho, B.-K., Chao, K., Yang, C.-C., Lefcourt, A. M., & Chan, D. (2007). Hyperspectral reflectance and fluorescence line-scan imaging for online defect and fecal contamination inspection of apples. Sensing Instrumentation Food Qual. Saf., 1(3), 151. https://doi.org/10.1007/s11694-007-9017-x.
Liu, Y., Pu, H., & Sun, D.-W. (2017). Hyperspectral imaging technique for evaluating food quality and safety during various processes: A review of recent applications. Trends Food Sci. Technol., 69, 25-35. https://doi.org/10.1016/j.tifs.2017.08.013.
Mulla, D. J. (2013). Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng., 114(4), 358-371. https://doi.org/10.1016/j.biosystemseng.2012.08.009.
NASA (n. d.). Airborne visible/infrared imaging spectrometer: AVIRIS overview. NASA Jet Propulsion Laboratory, California Institute of Technology. https://www.jpl.nasa.gov/missions/airborne-visible-infrared-imaging-spectrometer-aviris/.
Ollinger, S. V. (2011). Sources of variability in canopy reflectance and the convergent properties of plants. New Phytol., 189(2), 375-394. https://doi.org/10.1111/j.1469-8137.2010.03536.x.
Osborne, B. G., Fearn, T., Hindle, P. H., & Osborne, B. G. (1993). Practical NIR spectroscopy with applications in food and beverage analysis (Vol. 2). Longman Scientific & Technical.
Pasquini, C. (2003). Near infrared spectroscopy: Fundamentals, practical aspects and analytical applications. J. Brazilian Chem. Soc., 14(2), 198-219. https://doi.org/10.1590/S0103-50532003000200006.
Pasquini, C. (2018). Near infrared spectroscopy: A mature analytical technique with new perspectives: A review. Anal. Chim. Acta, 1026, 8-36. https://doi.org/10.1016/j.aca.2018.04.004.
Pavia, D. L., Lampman, G. M., Kriz, G. S., & Vyvyan, J. A. (2008). Introduction to spectroscopy. Cengage Learning.
Porep, J. U., Kammerer, D. R., & Carle, R. (2015). On-line application of near infrared (NIR) spectroscopy in food production. Trends Food Sci. Technol., 46(2, Part A), 211-230. https://doi.org/10.1016/j.tifs.2015.10.002.
Primicerio, J., Di Gennaro, S. F., Fiorillo, E., Genesio, L., Lugato, E., Matese, A., & Vaccari, F. P. (2012). A flexible unmanned aerial vehicle for precision agriculture. Precision Agric., 13(4), 517-523. https://doi.org/10.1007/s11119-012-9257-6.
Roberts, J., Power, A., Chapman, J., Chandra, S., & Cozzolino, D. (2018). A short update on the advantages, applications and limitations of hyperspectral and chemical imaging in food authentication. Appl. Sci., 8(4), 505. https://doi.org/10.3390/app8040505.
Rouse Jr., J. W., Haas, R. H., Schell, J. A., & Deering, D. (1974). Monitoring vegetation systems in the Great Plains with ERTS. NASA Special Publ. 351.
Schuster, A. (1911). Encyclopedia Britannica, 2:477.
Skytte, J., Moller, F., Abildgaard, O., Dahl, A., & Larsen, R. (2015). Discriminating yogurt microstructure using diffuse reflectance images. Proc. Scandinavian Conf. on Image Analysis (pp. 192-203). Springer. https://doi.org/10.1007/978-3-319-19665-7_16.
Sun, D.-W. (2009). Infrared spectroscopy for food quality analysis and control. Academic Press.
Thomas, N. C. (1991). The early history of spectroscopy. J. Chem. Education, 68(8), 631. https://doi.org/10.1021/ed068p631.
Transon, J., d’Andrimont, R., Maugnard, A., & Defourny, P. (2018). Survey of hyperspectral Earth observation applications from space in the Sentinel-2 context. Remote Sensing, 10(2). https://doi.org/10.3390/rs10020157.
Vejarano, R., Siche, R., & Tesfaye, W. (2017). Evaluation of biological contaminants in foods by hyperspectral imaging: A review. Int. J. Food Properties, 20(sup2), 1264-1297. https://doi.org/10.1080/10942912.2017.1338729.
Zhang, C., & Kovacs, J. M. (2012). The application of small unmanned aerial systems for precision agriculture: A review. Precision Agric., 13(6), 693-712. https://doi.org/10.1007/s11119-012-9274-5.
Zhang, W., Pan, L., Tu, S., Zhan, G., & Tu, K. (2015). Non-destructive internal quality assessment of eggs using a synthesis of hyperspectral imaging and multivariate analysis. J. Food Eng., 157, 41-48. https://doi.org/10.1016/j.jfoodeng.2015.02.013.
Zhou, X., Zheng, H. B., Xu, X. Q., He, J. Y., Ge, X. K., Yao, X., . . . Tian, Y. C. (2017). Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogrammetry Remote Sensing, 130, 246-255. https://doi.org/10.1016/j.isprsjprs.2017.05.003.
Yao Ze Feng
College of Engineering, Huazhong Agricultural University and Key Laboratory of Agricultural Equipment in Mid-lower Yangtze River, Ministry of Agriculture and Rural Affairs Wuhan, Hubei, China
Introduction
Novel sensing technologies and data processing play a very important role in a wide variety of biosystems engineering applications, such as environmental control and monitoring, food processing and safety control, agricultural machinery design and automation, and biomass and bioenergy production, particularly in the big data era. For instance, to achieve automatic, non-destructive grading of agricultural products according to their physical and chemical properties, raw data from different types of sensors must be acquired and carefully processed to accurately describe the samples so that the products can be classified into the correct categories (Gowen et al., 2007; Feng et al., 2013; O’Donnell et al., 2014; Baietto and Wilson, 2015; Park and Lu, 2016). For the environmental control of greenhouses, temperature, humidity, and the concentration of particular gases are determined by processing the raw data acquired from thermistors, hygrometers, and electronic noses or optical sensors (Bai et al., 2018). Successful use of measurements relies heavily on data processing that converts the raw data into meaningful information for easier interpretation and understanding of the targets of interest.
The purpose of data processing is to turn raw data into useful information that can help understand the nature of objects or a process. To make this whole procedure successful, particular attention should be paid to ensure the quality of raw data. However, the raw data obtained from biological systems are always affected by environmental factors and the status of samples. For example, the optical profiles of meat are vulnerable to temperature variation, light conditions, breeds, age and sex of animals, type of feeds, and geographical origins, among other factors. To ensure the best quality of raw data, data pretreatment is essential.
In this chapter, data pretreatment methods, including smoothing, derivatives, and normalization, are introduced. With good quality data, a modeling process correlating the raw data with features of the object or process of interest can be developed. This can be realized by employing different modeling methods. After validation, the established model can then be used for real applications.
Outcomes
After reading this chapter, you should be able to:
• Describe the principles of various data processing methods
• Determine appropriate data processing methods for model development
• Evaluate the performance of established models
• List examples of the application of data processing
Concepts
Data Pretreatment
Data Smoothing
To understand the features of biological objects, different sensors or instruments can be employed to acquire signals representing their properties. For example, a near-infrared (NIR) spectrometer can be used to collect the optical properties of a food or agricultural product across different wavelengths, called the spectrum. However, during signal (i.e., spectrum) acquisition, random noise will inevitably be introduced, which can deteriorate signal quality. For example, short-term fluctuations may be present in signals due to environmental effects, such as the dark current response and the readout noise of the instrument. Dark current is composed of electrons produced by thermal energy variations, and readout noise arises from the imperfect operation of electronic devices. Neither contributes to the understanding of the objects under investigation. To decrease such effects, data smoothing is usually applied. Two popular data smoothing methods are the moving average and Savitzky-Golay (S-G) smoothing.
The idea of the moving average is to apply a “sliding window” that smooths out random noise in each segment of the signal by calculating the average value within the segment, so that random noise across the whole signal is reduced. Given a window with an odd number of data points at a certain position, the average of the original data within the window is calculated and used as the smoothed value for the central point of the window. This procedure is repeated until the end of the original signal is reached. For the data points at the two edges of the signal that cannot be covered by a complete window, one can still assume the window is applied but calculate the average only over the data available in the window. The window width is a key factor that should be determined carefully: the signal-to-noise ratio does not always increase with window width, since too large a window tends to smooth out the useful signal as well. Moreover, since a simple average is calculated for each window, all data points in the window contribute equally to the result, which can sometimes distort the signal. To avoid this problem, S-G smoothing can be introduced.
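The moving average described above can be coded directly. A minimal Python sketch, using a synthetic signal since no dataset accompanies this section, is:

```python
import numpy as np

def moving_average(x, width):
    """Centered moving average; width should be odd so each window has a
    central point. Edge points are averaged over whatever part of the
    window overlaps the signal, as described above."""
    x = np.asarray(x, dtype=float)
    r = width // 2
    smoothed = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - r), min(len(x), i + r + 1)
        smoothed[i] = x[lo:hi].mean()
    return smoothed

# Synthetic noisy signal for illustration
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + rng.normal(0, 0.1, 200)
smooth5 = moving_average(signal, 5)
```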
Instead of using a simple average in the moving average process, Savitzky and Golay (1964) proposed assigning weights to different data in the window. Given an original signal X, the smoothed signal XS can be obtained as:
$XS_{i}=\frac{\sum^{r}_{j=-r} X_{i+j}W_{j}}{\sum^{r}_{j=-r}W_{j}}$
where 2r + 1 is the window width and Wj is the weight for the jth data point in the window. The weights W are obtained by fitting the data points in the window to a polynomial, following the least squares principle to minimize the errors between the original signal X and the smoothed signal XS, and evaluating the polynomial at the central point of the window. In applying S-G smoothing, the number of smoothing points and the order of the polynomial should be decided first. Once these two parameters are determined, the weight coefficients can be applied to the data points in the window to calculate the value of the central point using Equation 2.3.1.
Figure 2.3.1 shows the effect of applying S-G smoothing to a spectrum of a beef sample (Figure 2.3.1b-d). After S-G smoothing with a window width of 3, the random noise in the original signal (Figure 2.3.1a) is greatly suppressed (Figure 2.3.1b). An even better result is achieved when the window width increases to 5 and 7, where the curve becomes smoother (Figure 2.3.1d) and the short-term fluctuations are barely visible.
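In practice, S-G smoothing is rarely coded by hand; SciPy, for example, provides an implementation. A minimal sketch on a synthetic noisy peak (standing in for the beef spectrum of Figure 2.3.1) is:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy absorption peak standing in for a measured spectrum
rng = np.random.default_rng(0)
axis = np.linspace(-3, 3, 300)
spectrum = np.exp(-axis ** 2) + rng.normal(0, 0.02, 300)

# S-G smoothing with window widths 3, 5, and 7 (second-order polynomial),
# mirroring the settings discussed for Figure 2.3.1
smooth3 = savgol_filter(spectrum, window_length=3, polyorder=2)
smooth5 = savgol_filter(spectrum, window_length=5, polyorder=2)
smooth7 = savgol_filter(spectrum, window_length=7, polyorder=2)
```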
Derivatives
Derivatives are methods for recovering useful information from data while removing slowly varying components (low-frequency signals) that are often uninformative in determining the properties of biological samples. For example, for a spectrum defined as a function y = f(x), the first and second derivatives can be approximated as:
$\frac{dy}{dx} = \frac{f(x+\Delta x)-f(x)}{\Delta x}$
$\frac{d^{2}y}{dx^{2}} = \frac{f(x+\Delta x)-2f(x)+f(x-\Delta x)}{\Delta x^{2}}$
From Equations 2.3.2 and 2.3.3, it can be seen that the offset (e.g., a constant shift of the signal) is eliminated by the first derivative, while both offset and slope are removed by the second derivative. Specifically, the first derivative eliminates constant values (the offset) through the difference operation in the numerator of Equation 2.3.2; a spectral curve with a constant slope is converted by the first derivative into a new constant offset, which can then be eliminated by the second derivative. Since offset and slope variations often reflect environmental effects and other factors irrelevant to the properties of interest, applying derivatives helps reduce such noise. Moreover, processing signals with derivatives offers an efficient way to enhance the resolution of signals by uncovering overlapping peaks, particularly in spectral analysis.
For biological samples with complicated chemical compositions, the spectra are normally combinations of absorbance peaks arising from many components. Such superimposed peaks can be well separated in second derivative spectra. Nevertheless, it should be noted that the signal-to-noise ratio deteriorates as the derivative order increases, since the noise is also enhanced substantially, particularly for higher order derivatives, even though high order derivatives are sometimes useful for understanding the detailed properties of the objects. To limit noise enhancement, an S-G derivative can be introduced, in which the signal derivatives are obtained by computing the derivatives of a fitted polynomial. Specifically, the data points in a sliding window are fitted to a polynomial of a certain order following the S-G smoothing procedure; within the window, derivatives of the fitted polynomial are then calculated to produce the value for the central point. When the sliding window reaches the end of the signal, the derivative of the entire signal has been obtained.
Figure 2.3.2 shows absorbance and derivative spectra of bacterial suspensions (Feng et al., 2015). After an S-G derivative operation with 5 smoothing points and a polynomial order of 2, the constant offset and the linear baseline shift in the original spectrum (Figure 2.3.2a) are effectively removed in the first (Figure 2.3.2b) and second (Figure 2.3.2c) derivative spectra, respectively. In particular, the second derivative is also a useful tool for separating overlapped peaks: here, a peak at ~1450 nm is resolved into two peaks at 1412 and 1462 nm.
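The same SciPy function computes S-G derivatives directly via its deriv argument. The sketch below uses a synthetic spectrum with an added offset and linear baseline to mimic the effects removed in Figure 2.3.2:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic peak with a constant offset and a linear baseline added
x = np.linspace(0, 1, 300)
spectrum = np.exp(-((x - 0.5) / 0.05) ** 2) + 0.3 + 0.5 * x

# S-G derivatives with 5 smoothing points and polynomial order 2,
# the parameters quoted for Figure 2.3.2
d1 = savgol_filter(spectrum, window_length=5, polyorder=2, deriv=1)  # offset removed
d2 = savgol_filter(spectrum, window_length=5, polyorder=2, deriv=2)  # offset and slope removed
```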
Normalization
The purpose of data normalization is to equalize the magnitude of sample signals so that all variables for a sample can be treated equally for further analysis. For example, the surface temperature of pigs and environmental factors (temperature, humidity, and air velocity) can be combined to detect the rectal temperature of sows. Since the values for pig surface temperature can be around 39°C while the air velocity is mostly below 2 m/s, if these values are used directly for further data analysis, the surface temperature will intrinsically play a more dominant role than air velocity does simply due to its larger values. This may lead to biased interpretation of the importance of variables. Data normalization is also helpful when signals from different sensors are combined as variables (i.e., data fusion) to characterize biological samples that are complex in composition and easily affected by environmental conditions. However, since data normalization removes the average as well as the standard deviation of the sample variables, it might give confusing information about the samples if variabilities of variables in different units are important in characterizing sample properties.
Standard normal variate (SNV), or standardization, is one of the most popular methods used to normalize sample data (Dhanoa et al., 1994). Given a sample data X, the normalized Xnor can be obtained as:
$X_{nor}=\frac{X-mean(X)}{SD(X)}$
where mean(X) and SD(X) are the mean and standard deviation of X, respectively.
After SNV transformation, a new signal with a mean value of 0 and unit standard deviation is produced. Therefore, SNV is useful in eliminating dimensional variance among variables since all variables are compared at the same level. In addition, as shown in Figure 2.3.3, SNV is capable of correcting the scattering effect of samples due to physical structure of samples during light-matter interactions (Feng and Sun, 2013). Specifically, the large variations in visible NIR (vis-NIR) spectra of beef samples (Figure 2.3.3a) are substantially suppressed as shown in Figure 2.3.3b.
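SNV is simple to implement. A minimal sketch applying Equation 2.3.4 to a single spectrum is shown below; for a matrix of spectra (one per row), the function would be applied to each row separately:

```python
import numpy as np

def snv(spectrum):
    """Standard normal variate (Equation 2.3.4): subtract the mean of the
    spectrum and divide by its standard deviation."""
    spectrum = np.asarray(spectrum, dtype=float)
    return (spectrum - spectrum.mean()) / spectrum.std()

# For a matrix X of spectra (one spectrum per row):
# X_nor = np.apply_along_axis(snv, 1, X)
```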
Modeling Methods
The purpose of modeling in data processing is mainly to establish the relationship between independent variables and dependent variables. Independent variables are defined as stand-alone factors that can be used to determine the values of other variables. Since the values of other variables depend on the independent variables, they are called dependent variables. For example, if size, weight, and color are used to classify apples into different grades, the variables of size, weight, and color are the independent variables and the grade of apples is the dependent variable. The dependent variables are calculated based on measured independent variables. During model development, if only one independent variable is used, the resultant model is a univariate model, while two or more independent variables are involved in multivariate models. If dependent variables are used during model calibration or training, the methods applied in model development are termed supervised. Otherwise, an unsupervised method is employed. The dataset used for model development is called the calibration set (or training set) and a new dataset where the model is applied for validation is the validation set (or prediction set).
The developed models can be used for different purposes. Basically, if the model is used to predict a discrete class (categorical), it is a classification model; and if it aims to predict a continuous quantity, it is a regression model. For instance, if spectra of samples are used to identify the geographical origins of beef, the spectra (optical properties at different wavelengths) are the independent variables and the geographical origins are the dependent variables. The established multivariate model describing the relationship between spectra and geographical origins is a classification model. In a classification model, the dependent variables are dummy variables (or labels) where different arbitrary numbers are used to represent different classes but with no physical meaning. On the other hand, if spectra of samples are used to determine the water content of beef, the developed model is then a regression model. The dependent variables are meaningful numbers indicating the actual water content. Simply, a classification model tries to answer the question of “What is it?” and a regression model tries to determine “How much is there?” There is a wide range of methods for regression or classification models. Some are described below.
Linear Regression
Linear regression is an analytical method that explores the linear relationship between independent variables (X) and dependent variables (Y). Simple linear regression is used to establish the simplest model that can be used to illustrate the relationship between one independent variable X and one dependent variable Y. The model can be described as:
$Y = \beta_{0}+\beta_{1}X+E$
where X is the independent variable; Y is the dependent variable; $\beta_{0}$, $\beta_{1}$, are the regression coefficients; and E is the residual vector.
Simple linear regression is used when only one independent variable is to be correlated with the dependent variable. In the model, the two important coefficients, $\beta_{0}$ and $\beta_{1}$, can be determined by finding the best fit line through the scatter plot of X and Y via the least squares method. The best fit line requires minimization of the errors between the real Y and the predicted $\hat{Y}$. Since the errors could be either positive or negative, it is more appropriate to use the sum of squared errors. Based on this, $\beta_{0}$ and $\beta_{1}$ can be calculated as:
$\beta_{1}=\frac{\sum^{n}_{i=1}(X_{i}-\bar{X})(Y_{i}-\bar{Y})}{\sum^{n}_{i=1}(X_{i}-\bar{X})^{2}}$
$\beta_{0}=\bar{Y}-\beta_{1}\bar{X}$
where $\bar{X}$ and $\bar{Y}$ are mean values of X and Y, respectively, and n is the number of samples.
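Equations 2.3.6 and 2.3.7 translate directly into code; a minimal NumPy sketch is:

```python
import numpy as np

def simple_linear_regression(x, y):
    """Least-squares estimates of beta0 and beta1 (Equations 2.3.6 and 2.3.7)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    beta0 = y.mean() - beta1 * x.mean()
    return beta0, beta1
```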
Multiple linear regression (MLR) is a linear analysis method for regression in which the corresponding model is established between multiple independent variables and one dependent variable (Ganesh, 2010):
$Y=\beta_{0}+\sum^{n}_{j=1}\beta_{j}X_{j}+E$
where $X_{j}$ is the $j^{th}$ independent variable; Y is the dependent variable; $\beta_{0}$ is the intercept; $\beta_{1}$, $\beta_{2}$, . . . , $\beta_{n}$ are regression coefficients, and E is the residual matrix.
Although MLR tends to give better results than simple linear regression since more variables are utilized, it is only suitable for situations where the number of variables is less than the number of samples. If the number of variables exceeds the number of samples, Equation 2.3.8 becomes underdetermined and infinitely many solutions can minimize the residuals. Therefore, when the number of variables is larger than the number of samples, multiple linear regression is generally applied to selected important feature variables (such as important wavelengths in spectral analysis) instead of all variables.
Similar to simple linear regression, the determination of regression coefficients also relies on the minimization of prediction residuals (i.e., the sum of squared residuals between true Y values and predicted $\hat{Y}$). Specific procedures can be found elsewhere (Friedman et al., 2001).
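As a sketch, the MLR coefficients can be estimated with a standard least-squares solver once a column of ones is prepended to carry the intercept (the function name is illustrative):

```python
import numpy as np

def mlr_fit(X, y):
    """Least-squares MLR coefficients for Equation 2.3.8, intercept first.
    X: (n_samples, n_variables) with n_samples > n_variables; y: (n_samples,)."""
    X1 = np.column_stack([np.ones(len(X)), X])  # column of 1s carries beta0
    coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coeffs
```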
Principal Component Analysis (PCA)
Due to the complicated nature of biological samples, data acquired to characterize samples usually involve many variables. For example, spectral responses at hundreds to thousands of wavelengths may be used to characterize the physical and chemical components of samples. Such great dimensionality inevitably brings difficulties in data interpretation. With the original multivariate data, each independent variable or variable combinations can be used to draw one-, two-, or three-dimensional plots to understand the distribution of samples. However, this process requires a huge workload and is unrealistic if more than three variables are involved.
Principal component analysis (PCA) is a powerful tool to compress data and provides a much more efficient way of visualizing data structure. The idea of PCA is to find a set of new variables that are uncorrelated with each other and that concentrate most of the data’s information in the first few variables (Hotelling, 1933). PCA first finds the coordinate axis that represents the most variation in the original data and records it as PC1. Subsequent PCs are extracted to cover the greatest variations remaining in the data. The established PCA model can be expressed as:
$X=TP^{T}+E$
where X is the independent variable matrix, T is the score matrix, PT is the loading matrix, and E is the residual matrix. The score matrix can be used to visualize the relationship between samples and the loadings can be used to express the relations between variables.
After PCA, the data can be represented by a few PCs (usually less than 10). These PCs are sorted according to their contribution to the explanation of data variance. Specifically, an accumulated contribution rate, defined as explained variance from the first few PCs over the total variance of the data, is usually employed to evaluate how many new variables (PCs) should be used to represent the data. Nevertheless, by applying PCA, the number of variables required for characterizing data variance is substantially reduced. After projecting the original data into the new PC spaces, data structure can be easily seen, if it exists.
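A compact way to sketch PCA is through the singular value decomposition (SVD) of the mean-centered data matrix, which yields the scores T and loadings P of Equation 2.3.9 directly:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centered data, X = T P^T + E (Equation 2.3.9)."""
    Xc = X - X.mean(axis=0)                        # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :n_components] * S[:n_components]     # scores
    P = Vt[:n_components].T                        # loadings
    explained = (S ** 2) / np.sum(S ** 2)          # variance contribution of each PC
    return T, P, explained[:n_components]
```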
Partial Least Squares Regression (PLSR)
As illustrated above, MLR requires that the number of samples be greater than the number of variables. However, biological data normally contain far more variables than samples, and some of these variables may be correlated with each other, providing redundant information. To cope with this dilemma, partial least squares regression (PLSR) can be used to reduce the number of variables in the original data while retaining the majority of its information and eliminating redundant variations (Mevik et al., 2011). In PLSR, both X and Y are projected into new spaces, in which the directions in X are determined that best account for the variance in Y. In other words, PLSR decomposes both the predictors X and the dependent variable Y into combinations of new variables (scores) while ensuring maximum correlation between X and Y (Geladi and Kowalski, 1986). Specifically, the score matrix T of X is correlated with Y using the following formulas:
$Y= XB+ E=XW^{*}_{a}C+E=TC+E$
$W^{*}_{a}=W_{a}(P^{T}W_{a})^{-1}$
where B is the matrix of regression coefficients for the established PLSR model; E is the residual matrix; Wa represents the PLS weights; a is the desired number of new variables adopted; and P and C are the loadings for X and Y, respectively. The new variables adopted are usually termed latent variables (LVs) since they are not the observed independent variables but are inferred from them.
The most important parameter in PLS regression is the determination of the number of LVs. Based on the PLSR models established with different LVs, a method named leave-one-out cross validation is commonly utilized to validate the models. That is, for the model with a certain number of LVs, one sample from the data set is left out with the remaining samples used to build a new model. The new model is then applied to the sample that is left out for prediction. This procedure is repeated until every sample has been left out once. Finally, every sample would have two values, i.e., the true value and the predicted value. These two types of values can then be used to calculate root mean squared errors (RMSEs; Equation 2.3.13 in the Model Evaluation section below) for different numbers of LVs. Usually, the optimal number of LVs is determined either at the minimum value of RMSEs or the one after which the RMSEs are not significantly different from the minimum RMSE. In Figure 2.3.4 for instance, using 6 latent variables would produce a very similar RMSE value to the minimum RMSE that is attained with 11 LVs; therefore, 6 latent variables would be more suitable for simpler model development.
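A sketch of this LV-selection procedure in Python follows (using scikit-learn; the data X, y and the 2% tolerance used to pick a "not significantly different" number of LVs are assumptions for illustration, not part of any standard):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def rmsecv_per_lv(X, y, max_lv=15):
    """Leave-one-out cross-validation RMSE for models with 1..max_lv latent variables."""
    rmses = []
    for lv in range(1, max_lv + 1):
        y_cv = cross_val_predict(PLSRegression(n_components=lv), X, y, cv=LeaveOneOut())
        rmses.append(np.sqrt(np.mean((np.ravel(y) - np.ravel(y_cv)) ** 2)))
    return np.array(rmses)

# Choose the smallest number of LVs whose RMSECV is within 2% of the minimum:
# rmses = rmsecv_per_lv(X, y)
# n_lv = int(np.argmax(rmses <= 1.02 * rmses.min())) + 1
```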
In addition to the methods introduced above, many more algorithms are available for model development. With the fast growth of computer science and information technologies, modern machine learning methods, including artificial neural networks, deep learning, decision trees, and support vector machines, are widely used in biosystems engineering (LeCun et al., 2015; Maione and Barbosa, 2019; Pham et al., 2019, Zhao et al., 2019).
The model development methods described above can be used for both regression and classification problems. For regression, the final outputs are the results produced when the independent variables are input into the established models. For classification, a further operation is required to attain the final numbers for categorical representation. Normally, a rounding operation is adopted. For instance, a direct output of 1.1 from the model tends to be rounded down to 1 as the final result, which can be a label for a certain class. After such modification, the name of the regression method can be changed from PLSR to partial least squares discriminant analysis (PLS-DA), as an example. However, these numbers do not have actual physical meanings, and therefore they are often termed dummy variables.
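A minimal sketch of this rounding step in Python (the clipping to the 0–1 range is an added safeguard for outputs outside that range, not part of the method as described):

```python
import numpy as np

def dummy_labels(y_continuous):
    """Round continuous PLS-DA outputs to 0/1 dummy class labels."""
    return np.clip(np.rint(np.asarray(y_continuous, dtype=float)), 0, 1).astype(int)

print(dummy_labels([0.12, 0.94, 1.10, -0.05]))   # [0 1 1 0]
```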
Since a model can be established using different modeling methods, some of which are outlined above, the choice of method is task-specific. If the objective is a stable model with high precision, the method that leads to the best model performance should be employed. However, if the main concerns are simplicity and easy interpretation for practical application, a linear method will often be the best choice. In cases where a linear model fails to capture the correlation between X and Y, nonlinear models such as artificial neural networks or support vector machines can be applied.
Model Evaluation
The full process of model development includes the calibration, validation, and evaluation of models. Model calibration applies modeling methods to the training data to find the parameters that best represent the samples. For example, if PLSR is applied to NIR spectral data to quantify beef adulteration with pork, the important parameters, including the number of LVs and the regression coefficients, are determined so that when the spectra are input to the model, the predicted adulteration levels can be calculated. This process works on the training data alone, so the resulting model best explains those particular samples. However, since the modeling process is data specific, apparently good model performance can sometimes be due to the modeling of noise, and such models will fail on new, independent data. This problem is known as over-fitting and should always be avoided during modeling. Therefore, it is of crucial importance to validate the performance of models using independent data, i.e., data that are not included in the calibration set and that are totally unknown to the established model.
Model validation is a process to verify whether model performance similar to that of calibration can be attained. There are basically two ways to conduct model validation. One is cross-validation, used when there are not enough samples available; it is implemented on the training set, often with a leave-one-out approach (Klanke and Ritter, 2006). During leave-one-out cross-validation, one sample is left out from the calibration set and a calibration model is developed based on the remaining data. The left-out sample is then input to the model developed from the other samples. This procedure terminates when all samples have been left out once. Finally, all samples will have been predicted for comparison with the measured values. However, this method should be used with caution since it may lead to over-optimistic evaluation or model overfitting. The other approach, called external validation, is to introduce an independent prediction set that is not included in the calibration set and apply the model to this new, independent dataset. External validation is always preferred for model evaluation. Where feasible, it is recommended to apply both cross-validation and external validation to evaluate the performance of models. This is particularly important in biosystems engineering because biological samples are very complex and their properties can change with time and environment. For meat samples, the chemical components vary with species, geographical origin, breeding pattern, and even different body portions of the same type of animal. The packaging atmosphere and temperature also have great influence on the quality variations of meat. Ideally, with a good and stable model, the results from cross-validation and external validation should be similar.
Model evaluation is an indispensable part of model development, which aims to determine the best performance of a model as well as to verify its validity for future applications by calculating and comparing some statistics (Gauch et al., 2003). For regression problems, two common parameters, coefficient of determination (R2), and root mean squared error (RMSE), are calculated to express the performance of a model. They are defined as follows:
$R^{2} = 1- \frac{\sum^{n}_{i=1}(Y_{i,meas}-Y_{i,pre})^{2}}{\sum^{n}_{i=1}(Y_{i,meas}-\bar{Y})^{2}}$
$\text{RMSE} = \sqrt{\frac{1}{n} \sum^{n}_{i=1}(Y_{i,meas}-Y_{i,pre})^{2}}$
where Yi,pre and Yi,meas, respectively, represent the predicted value and the measured value of the target for sample i; $\bar{Y}$ is the mean target value for all samples. An R2 of 1 and RMSE of 0 for all data sets would indicate a “perfect” model. Thus, the goal is to have R2 as close to 1 as possible and RMSE as close to 0 as possible. In addition, a stable model has similar R2 and RMSE values for calibration and validation. It should be noted that R, the square root of R2, also called the correlation coefficient, is also frequently used to express the linear relationship between the predicted and measured values. Moreover, since different data sets may be used during model development, the above parameters can be modified accordingly. For example, R2C, R2CV, and R2P can be used to represent the coefficients of determination for calibration, cross-validation, and prediction, respectively. Root mean squared errors for calibration, cross-validation, and prediction are denoted as RMSEC, RMSECV, and RMSEP, respectively.
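The two statistics are straightforward to compute; a minimal Python sketch (the function and variable names are illustrative) is:

```python
import numpy as np

def r2_rmse(y_meas, y_pred):
    """Coefficient of determination and root mean squared error (equations above)."""
    y_meas = np.asarray(y_meas, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_meas - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_meas - y_meas.mean()) ** 2)
    return r2, rmse
```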
For classification problems, a model’s overall correct classification rate (OCCR) is an important index used to evaluate the classification performance:
$\text{OCCR} = \frac{\text{Number of correctly classified samples}}{\text{Total number of samples}}$
The number of correctly classified samples is determined by comparing the predicted classification with the known classification. To investigate the detailed classification performance, a confusion matrix can be utilized (Townsend, 1971). A confusion matrix for binary classifications is shown in Table 2.3.1. In the confusion matrix, true positives and true negatives indicate samples that are predicted correctly. A false positive occurs when a negative sample is wrongly classified as positive (a Type I error), and a false negative occurs when a positive sample is wrongly classified as negative (a Type II error). Based on the confusion matrix, parameters can be computed to evaluate the classification model, including the sensitivity, specificity, and prevalence, among others:
Table 2.3.1: Confusion matrix for binary classification.

                   | Condition Positive             | Condition Negative
Predicted Positive | True positive (Power)          | False positive (Type I error)
Predicted Negative | False negative (Type II error) | True negative
$\text{Sensitivity} = \frac{\sum \text{True positive}}{\sum \text{Condition positive}}$
$\text{Specificity} = \frac{\sum \text{True negative}}{\sum \text{Condition negative}}$
$\text{Prevalence} = \frac{\sum \text{Condition positive}}{\text{Total population}}$
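These quantities can be computed directly from binary label vectors, as in this sketch (the 1 = positive, 0 = negative encoding is an assumption for illustration):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Confusion-matrix counts and the derived ratios for a binary classifier."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # Type I errors
    fn = np.sum((y_pred == 0) & (y_true == 1))   # Type II errors
    return {
        "sensitivity": tp / (tp + fn),           # of all condition-positive samples
        "specificity": tn / (tn + fp),           # of all condition-negative samples
        "prevalence": (tp + fn) / y_true.size,   # condition positive / total population
        "OCCR": (tp + tn) / y_true.size,
    }
```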
Applications
Beef Adulteration Detection
Food adulteration undermines trust in the food industry, leading to food waste through product recalls and to loss of consumer confidence. Therefore, it is crucial to use modern technologies to detect deliberate adulteration or accidental contamination. For example, a handheld spectrometer can be used to obtain spectra from beef samples, and the raw spectra can be processed to quantify the level, if any, of adulteration of each beef sample. To properly process the raw spectra, purposeful contamination experiments can be used to determine the appropriate pretreatment (or preprocessing) method(s) for the raw data. For example, Figure 2.3.5a shows spectra corresponding to different adulteration levels. Adulteration concentration in such an experiment should range from 0% to 100%, with 0% being pure fresh beef and 100% being pure spoiled beef. The experiment should include a calibration dataset to develop the predictive relationship from spectra and an independent dataset to test the validity of the prediction. The following process can be used to determine the best preprocessing method for quantification of beef adulteration.
The raw spectral data (Figure 2.3.5a) probably contain random noise along with the signal, particularly at the lower wavelengths (400–500 nm): there are variations in spectral magnitude among the samples that do not change systematically with adulteration concentration. These variations (noise in this application) may be due to differences in the chemical components of the samples (spoiled meat is very different from fresh meat, so when the two are mixed in different proportions a clear signal should be visible), or to small differences in the physical structure of the samples that cause variation in light scattering between them. Note also that there are only a few peaks and an evident offset in the raw spectra. Therefore, different preprocessing methods, including S-G smoothing, SNV, and the first and second derivatives, can be applied to the raw spectra (Figure 2.3.5) and their performance in improving the detection of beef adulteration compared.
Table 2.3.2 shows the performance of different preprocessing methods combined with PLSR in determining the adulteration concentration. All the preprocessing methods applied lead to better models with smaller RMSEs, although the improvement is modest. The optimal model was attained using SNV as the preprocessing method, with coefficients of determination of 0.93, 0.92, and 0.88 and RMSEs of 7.30%, 8.35%, and 7.90% for calibration, cross-validation, and prediction, respectively. Although the second derivative spectra gave better prediction precision (7.37%), the corresponding model yielded larger RMSEs for both calibration and cross-validation. Therefore, the best preprocessing method in this case is SNV. This method can be embedded in a handheld spectrometer, where each raw spectrum of an adulterated beef sample is normalized by subtracting its mean and then dividing by its standard deviation. The prediction model can then be applied to the SNV-preprocessed data to estimate levels of beef adulteration and to provide insights into the authenticity of the beef product.
Table 2.3.2: Comparison of different data preprocessing methods combined with PLSR for predicting beef adulteration.

Methods        | RMSEC (%) | RMSECV (%) | RMSEP (%) | R2C  | R2CV | R2P  | LV
None           | 8.35      | 9.34       | 7.99      | 0.91 | 0.90 | 0.88 | 4
1st Derivative | 8.05      | 8.78       | 7.92      | 0.92 | 0.91 | 0.88 | 3
2nd Derivative | 7.92      | 10.03      | 7.37      | 0.92 | 0.88 | 0.90 | 4
SNV            | 7.30      | 8.35       | 7.90      | 0.93 | 0.92 | 0.88 | 4
S-G            | 7.78      | 8.90       | 7.91      | 0.93 | 0.91 | 0.88 | 5

C = calibration; CV = cross-validation; P = prediction; LV = latent variables
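As a reference for how the SNV step described above might look in code, here is a minimal Python sketch (the row-wise layout, one spectrum per row, is an assumption):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) individually."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```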
Bacterial Classification
Identification and classification of bacteria are important for food safety, for the design of processes such as thermal treatment, and to help identify the causes of illness when bacterial contamination has occurred. This example outlines how a classification system can be developed (Feng et al., 2015). A spectral matrix was derived by scanning a total of 196 bacterial suspensions of various concentrations using a near infrared spectrometer over two wavelength ranges, i.e., 400–1100 nm and 1100–2498 nm. A column vector recording the label for each bacterium (i.e., its name or classification) was also constructed. This dataset was used to classify different bacteria, including three Escherichia coli strains and four Listeria innocua strains. Since the dataset contained a large number (>1000) of variables, it was useful to visualize the structure of the data to investigate potential sample clustering. By using appropriate modeling methods, it was possible to establish a model for classifying bacteria at the species level.
PCA can be used to understand the structure of data. Since the scores of a PCA model elucidate the distribution of samples, it is useful to draw a score plot such as Figure 2.3.6. The first two columns of the score matrix T contain the scores for the first two PCs; the score plot is generated by using the first as the x-axis and the second as the y-axis. The loading plots in Figure 2.3.6 can be created by plotting the first two columns of the loading matrix against the variable names (wavelengths in this case).
The first and second PCs cover 58.34% and 35.04% of the total variance of the spectral data set, respectively, so 93.38% of the information is explained. The score plot shows clearly that the two bacterial species are well separated along the first PC, although a few samples overlap. By investigating loading 1, it is found that five main wavelengths (1392, 1450, 1888, 1950, and 2230 nm) are the important variables contributing to the separation of the two bacterial species. It is also interesting that two clusters appear within each of the two bacterial species, and this separation can be explained by the four major wavelengths indicated in loading 2 (Figure 2.3.6c).
The next target is to establish a classification model in the 400–1100 nm region for the classification of these bacterial species. To achieve this, PLS-DA was employed where the spectral data and the bacterial labels are used as independent and dependent variables, respectively. Figure 2.3.7 shows the performance of the established model. The optimized model takes four latent variables to produce OCCRs of 99.25% and 96.83% for calibration and prediction, respectively. To calculate OCCRs, the predicted values of individual samples are first rounded to get values of 1 or 0 and these predicted labels are then compared with the true labels, following which Equation 2.3.14 is employed.
A confusion matrix showing the classification details for prediction is shown in Table 2.3.3. It shows that the true positives for detecting E. coli and L. innocua are 25 and 36, respectively. Accordingly, the sensitivities for detecting the E. coli and L. innocua species are 0.93 (25/27) and 1 (36/36), respectively. All the above parameters for both calibration and prediction demonstrate that the two bacterial species can be well classified.
Table 2.3.3: Confusion matrix for bacterial species classification.

Actual Class | Predicted E. coli | Predicted L. innocua | Total
E. coli      | 25                | 2                    | 27
L. innocua   | 0                 | 36                   | 36
Total        | 25                | 38                   | 63
In microbial safety inspection of food products, it is important to identify the pathogens responsible for foodborne diseases. To achieve this, bacteria on food surfaces can be sampled, cultured, isolated, and suspended, and the model can be applied to the spectra of the bacterial suspensions to determine which of the two species is present in the food product.
Examples
Example $1$
Example 1: Moving average calculation
Problem:
Fruit variety and ripeness of fruit can be determined by non-destructive methods such as NIR spectroscopy. A reflectance spectrum of a peach sample was acquired; part of the spectral data in the wavelength range of 640–690 nm is shown in Table 2.3.4. Though the spectrometer is carefully configured, there still might be noise present in the spectra due to environmental conditions. Apply the moving average method to smooth the spectrum and to reduce potential noise.
Solution
Various software, including Microsoft Excel, MATLAB, and commercial chemometric software (the Unscrambler, PLS Toolbox, etc.), is available for implementing the moving average. Taking Microsoft Excel as an example, the “average” function is required. Given a spectrum presented column-wise (for example, column B), the value for the smoothed spectrum at cell B10 can be obtained as average(B9:B11) if the window size is 3, and average(B8:B12) or average(B7:B13) if the window size is 5 or 7, respectively. For both ends of the spectrum, only the average of the values present in the window of a particular size is calculated. For instance, the spectral value at 639.8 nm after moving average smoothing with a window size of 3 can be obtained as the mean of the original spectrum at 639.8, 641.1, and 642.4 nm, that is, (0.4728 + 0.4745 + 0.4751)/3 = 0.4741.
Figure 2.3.8 shows the smoothed spectra resulting from the moving average method. Note that the spectra are shifted by 0.01, 0.02, and 0.03 units for the Win = 3, Win = 5, and Win = 7 spectra, respectively, to separate the curves for visual presentation. It is clear that the slight fluctuation present in the original data is diminished after moving average smoothing.
Table 2.3.4: Spectral data of a peach sample in the 640–690 nm range.

Wavelength (nm) | Reflectance | Wavelength (nm) | Reflectance
639.8           | 0.4728      | 665.2           | 0.4755
641.1           | 0.4745      | 666.5           | 0.4743
642.4           | 0.4751      | 667.7           | 0.4721
643.6           | 0.4758      | 669.0           | 0.4701
644.9           | 0.4766      | 670.3           | 0.4680
646.2           | 0.4777      | 671.5           | 0.4673
647.4           | 0.4791      | 672.8           | 0.4664
648.7           | 0.4807      | 674.1           | 0.4661
650.0           | 0.4829      | 675.3           | 0.4672
651.2           | 0.4850      | 676.6           | 0.4689
652.5           | 0.4854      | 677.9           | 0.4715
653.8           | 0.4854      | 679.2           | 0.4747
655.0           | 0.4851      | 680.4           | 0.4796
656.3           | 0.4838      | 681.7           | 0.4862
657.6           | 0.4826      | 683.0           | 0.4932
658.8           | 0.4814      | 684.3           | 0.5010
660.1           | 0.4801      | 685.5           | 0.5093
661.4           | 0.4789      | 686.8           | 0.5182
662.7           | 0.4782      | 688.1           | 0.5269
663.9           | 0.4765      | 689.3           | 0.5360
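The same smoothing can be scripted instead of done in a spreadsheet. Below is a minimal Python sketch; the end treatment (shifting the window inward so it always holds the full number of points) follows the worked number in the solution above:

```python
import numpy as np

def moving_average(y, win=3):
    """Moving average; at the ends the window is shifted inward so it always
    holds `win` points, matching the end treatment in the worked example."""
    y = np.asarray(y, dtype=float)
    half = win // 2
    out = np.empty_like(y)
    for i in range(y.size):
        start = min(max(0, i - half), y.size - win)
        out[i] = y[start:start + win].mean()
    return out

spectrum = np.array([0.4728, 0.4745, 0.4751, 0.4758, 0.4766])  # first rows of Table 2.3.4
print(round(moving_average(spectrum, win=3)[0], 4))            # 0.4741, as computed above
```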
Example $2$
Example 2: Evaluation of model performance
Problem:
As pigs cannot sweat, it is important to be able to rapidly confirm that conditions in a pig house are not causing them stress. Rectal temperature is the best indicator of heat stress in an animal, but it can be difficult to measure. A pig’s surface temperature, however, can be measured easily using non-contact sensors. Table 2.3.5 shows the performance of two PLSR models used to predict the rectal temperature of pigs by using variables including surface temperature and several environmental conditions. Model 1 is a many-variable model and Model 2 is a simplified model that utilizes an optimized subset of variables. Determine which model is better. The performance of models is presented by R and RMSEs for calibration, cross-validation, and prediction.
Solution
The first step is to check whether R is close to 1 and RMSE to 0. Correlation coefficients range from 0.66 to 0.87 (Table 2.3.5), showing obvious correlation between the predicted rectal temperature and the real rectal temperature. By investigating the RMSEs, it is found that these errors are relatively small (0.25°–0.38°C) compared with the measured range (37.8°–40.2°C). Therefore, both models are useful for predicting the rectal temperature of pigs.
The second step is to check the stability of the established models by evaluating the differences among the Rs or RMSEs for calibration, cross-validation, and prediction. In this example, although the best correlation coefficient for calibration (RC) and root mean squared error for calibration (RMSEC) were attained by the many-variable model, its performance in cross-validation and prediction was inferior to that of the simplified model. Most importantly, the biggest difference among the Rs of the many-variable model was 0.21, while the corresponding difference for the simplified model was only about a tenth of that (0.02). A similar trend was observed for the RMSEs, where the maximum differences were 0.13°C for the many-variable model and 0.05°C for the simplified model. These results strongly demonstrate that the simplified model is much more stable than the many-variable model.
Table 2.3.5: Comparison of the performance of two models, many-variable Model 1 and simplified Model 2 (Feng et al., 2019). RC, RCV, and RP are correlation coefficients for calibration, cross-validation, and prediction, respectively.

Model   | RC   | RCV  | RP   | RMSEC (°C) | RMSECV (°C) | RMSEP (°C) | LV
Model 1 | 0.87 | 0.66 | 0.76 | 0.25       | 0.38        | 0.37       | 4
Model 2 | 0.80 | 0.78 | 0.80 | 0.30       | 0.32        | 0.35       | 2
The third step is to evaluate the simplicity of the model. In this example, four latent variables were employed to establish the many-variable model, while only two were needed for the simplified model. Above all, the simplified model showed better prediction ability, particularly for cross-validation and prediction, with fewer latent variables. Therefore, it is considered the better model.
Image Credits
Figure 1. Feng, Y. (CC By 4.0). (2020). S-G smoothing of a spectral signal.
Figure 2. Feng, Y. (CC By 4.0). (2020). NIR derivative spectra of bacterial suspensions.
Figure 3. Feng, Y. (CC By 4.0). (2020). SNV processing of vis-NIR spectra of beef samples adulterated with chicken meat.
Figure 4. Feng, Y. (CC By 4.0). (2020). Plot of root mean squared error (RMSE) as a function of number of latent variables (LV) for a PLSR model.
Figure 5. Feng, Y. (CC By 4.0). (2020). Preprocessing of beef spectra.
Figure 7. Feng, Y. (CC By 4.0). (2020). PLS-DA classification model performance in the visible-SWNIR range (400–1000 nm).
Figure 8. Feng, Y. (CC By 4.0). (2020). Example of moving average smoothing of a peach spectrum.
Acknowledgement
Many thanks to Mr. Hai Tao Zhao for his help in preparing this chapter.
References
Bai, X., Wang, Z., Zou, L., & Alsaadi, F. E. (2018). Collaborative fusion estimation over wireless sensor networks for monitoring CO2 concentration in a greenhouse. Information Fusion, 42, 119-126. https://doi.org/10.1016/j.inffus.2017.11.001.
Baietto, M., & Wilson, A. D. (2015). Electronic-nose applications for fruit identification, ripeness and quality grading. Sensors, 15(1), 899-931. https://doi.org/10.3390/s150100899.
Dhanoa, M. S., Lister, S. J., Sanderson, R., & Barnes, R. J. (1994). The link between multiplicative scatter correction (MSC) and standard normal variate (SNV) transformations of NIR spectra. J. Near Infrared Spectroscopy, 2(1), 43-47. https://doi.org/10.1255/jnirs.30.
Feng, Y.-Z., & Sun, D.-W. (2013). Near-infrared hyperspectral imaging in tandem with partial least squares regression and genetic algorithm for non-destructive determination and visualization of Pseudomonas loads in chicken fillets. Talanta, 109, 74-83. https://doi.org/10.1016/j.talanta.2013.01.057.
Feng, Y.-Z., Downey, G., Sun, D.-W., Walsh, D., & Xu, J.-L. (2015). Towards improvement in classification of Escherichia coli, Listeria innocua and their strains in isolated systems based on chemometric analysis of visible and near-infrared spectroscopic data. J. Food Eng., 149, 87-96. https://doi.org/10.1016/j.jfoodeng.2014.09.016.
Feng, Y.-Z., ElMasry, G., Sun, D.-W., Scannell, A. G., Walsh, D., & Morcy, N. (2013). Near-infrared hyperspectral imaging and partial least squares regression for rapid and reagentless determination of Enterobacteriaceae on chicken fillets. Food Chem., 138(2), 1829-1836. https://doi.org/10.1016/j.foodchem.2012.11.040.
Feng, Y.-Z., Zhao, H.-T., Jia, G.-F., Ojukwu, C., & Tan, H.-Q. (2019). Establishment of validated models for non-invasive prediction of rectal temperature of sows using infrared thermography and chemometrics. Int. J. Biometeorol., 63(10), 1405-1415. https://doi.org/10.1007/s00484-019-01758-2.
Friedman, J., Hastie, T., & Tibshirani, R. (2001). The elements of statistical learning. No. 10. New York, NY: Springer.
Ganesh, S. (2010). Multivariate linear regression. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (pp. 324-331). Oxford: Elsevier. https://doi.org/10.1016/B978-0-08-044894-7.01350-6.
Gauch, H. G., Hwang, J. T., & Fick, G. W. (2003). Model evaluation by comparison of model-based predictions and measured values. Agron. J., 95(6), 1442-1446. https://doi.org/10.2134/agronj2003.1442.
Geladi, P., & Kowalski, B. R. (1986). Partial least-squares regression: A tutorial. Anal. Chim. Acta, 185, 1-17. https://doi.org/10.1016/0003-2670(86)80028-9.
Gowen, A. A., O’Donnell, C. P., Cullen, P. J., Downey, G., & Frias, J. M. (2007). Hyperspectral imaging: An emerging process analytical tool for food quality and safety control. Trends Food Sci. Technol., 18(12), 590-598. https://doi.org/10.1016/j.tifs.2007.06.001.
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. J. Ed. Psychol., 24, 417-441. https://doi.org/10.1037/h0071325.
Klanke, S., & Ritter, H. (2006). A leave-k-out cross-validation scheme for unsupervised kernel regression. In S. Kollias, A. Stafylopatis, W. Duch, & E. Oja (Eds.), Proc. Int. Conf. Artificial Neural Networks. 4132, pp. 427-436. Springer. https://doi.org/10.1007/11840930_44.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. https://doi.org/10.1038/nature14539.
Maione, C., & Barbosa, R. M. (2019). Recent applications of multivariate data analysis methods in the authentication of rice and the most analyzed parameters: A review. Critical Rev. Food Sci. Nutrition, 59(12), 1868-1879. https://doi.org/10.1080/10408398.2018.1431763.
Mevik, B.-H., Wehrens, R., & Liland, K. H. (2011). PLS: Partial least squares and principal component regression. R package ver. 2(3). Retrieved from https://cran.r-project.org/web/packages/pls/pls.pdf.
O’Donnell, C. P., Fagan, C., & Cullen, P. J. (2014). Process analytical technology for the food industry. New York, NY: Springer. https://doi.org/10.1007/978-1-4939-0311-5.
Park, B., & Lu, R. (2015). Hyperspectral imaging technology in food and agriculture. New York, NY: Springer. https://doi.org/10.1007/978-1-4939-2836-1.
Pham, B. T., Jaafari, A., Prakash, I., & Bui, D. T. (2019). A novel hybrid intelligent model of support vector machines and the MultiBoost ensemble for landslide susceptibility modeling. Bull. Eng. Geol. Environ., 78(4), 2865-2886. https://doi.org/10.1007/s10064-018-1281-y.
Savitzky, A., & Golay, M. J. (1964). Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem., 36(8), 1627-1639. https://doi.org/10.1021/ac60214a047.
Townsend, J. T. (1971). Theoretical analysis of an alphabetic confusion matrix. Perception Psychophysics, 9(1), 40-50. https://doi.org/10.3758/BF03213026.
Zhao, H.-T., Feng, Y.-Z., Chen, W., & Jia, G.-F. (2019). Application of invasive weed optimization and least square support vector machine for prediction of beef adulteration with spoiled beef based on visible near-infrared (Vis-NIR) hyperspectral imaging. Meat Sci., 151, 75-81. https://doi.org/10.1016/j.meatsci.2019.01.010.
Traction
Daniel M. Queiroz
Department of Agricultural Engineering
Universidade Federal de Viçosa
Viçosa, Minas Gerais, Brazil
John K. Schueller
Department of Mechanical and Aerospace Engineering
University of Florida
Gainesville, Florida, USA
Key Terms
Mechanics of traction
Engine power
Tractive force
Traction devices
Transport devices
Tractors
Pulled implements
Introduction
Tractors were created to reduce human and animal labor inputs and increase efficiency and productivity in crop production activities (Schueller, 2000). The main use of tractors is to pull implements such as tillage tools, planters, cultivators, and harvesters in the field and, to some extent, on the road (Renius, 2020). To pull implements efficiently, a tractor needs to generate traction between the tires and the soil surface. Traction is the way a vehicle uses force to move over a surface.
Quite early in tractor development, the direct transfer of power from tractors to implements was made possible by power take-offs (PTOs), which transfer rotary power to implements and machines, and by hydraulic systems, which lift and lower implements and move parts of attached machines. Pulling implements is still the most common use of tractor power. The demand for greater field capacity of agricultural machines, i.e., the field area that can be covered per unit time, has driven the development and use of bigger implements. The increased sizes require greater traction from the pulling tractor, so more efficient traction systems are necessary to provide the large forces needed to pull those implements.
The efficiency of how tractors convert the power generated by an engine to the power required to pull the implements depends on many variables associated with the tractor and the soil conditions. Traction is especially important in agriculture as field soils are not as firm as the roads used by cars and trucks. This chapter presents the basic principles of traction applied to agricultural machinery.
Outcomes
After reading this chapter, you should be able to:
• Explain how tractors develop tractive force
• Describe the effect of some important variables on the tractive force
• Calculate how much power a tractor can develop when pulling an implement
• Calculate the power requirements to match tractors to implements
Concepts
Traction and Transport Devices
According to the American Society of Agricultural and Biological Engineers (ASABE Standards, 2018), there are two types of surface contact devices associated with the motion of a vehicle: traction devices and transport devices. A traction device receives power from an engine and uses the reactions of forces from the supporting surface to propel the vehicle, while a transport device does not receive power, but is needed to support the vehicle on a surface while the vehicle is moving over that surface. Wheels, tires, and tracks can be traction devices if they are connected to an engine or other power source; if not connected, they are transport devices. The main components of an agricultural tractor are presented in Figure 3.1.1. In this example, the tractor is 2-wheel drive, so the large rear wheels, which receive power from the engine, are the traction devices, and the small front wheels are the transport devices. All wheels would be traction devices if the tractor were 4-wheel drive. The engine is connected to the traction device by the drive train, often consisting of a clutch, transmission, differential, axles, and other components. (The drive train is not discussed in this chapter.) The drawbar is an attachment point through which the tractor can apply pulling force to an implement.
Mechanics of Traction
The simplest way of analyzing the traction produced by a traction device, such as a wheel or track, is to consider the friction forces that act at the contact between the traction device and the surface when the system is in equilibrium. For simplification, the machine is assumed to be moving at a constant velocity on a uniform surface (Figure 3.1.2). A traction device (hereafter simplified to the most common implementation as a “wheel”) has two main functions: to support the load acting on the wheel axle (W) and to produce a net tractive force (H). The force W is generally called the dynamic load acting on the wheel. The dynamic load depends on how the weight of the tractor at that point in time is distributed to each wheel. If the system is in equilibrium, the surface reacts to W by applying a vertical reaction force (R) to the wheel. In the contact between the surface and the wheel, a friction force (Ff) is generated. To keep equilibrium in the horizontal direction, the magnitude of the net tractive force H is equal to the magnitude of the friction force Ff. The friction force, and hence the net tractive force H, is generated by applying a torque (T) to the wheel axle. This torque is proportional to the torque produced by the tractor engine according to the drive train, including the current transmission ratio.
When moving, the wheel (Figure 3.1.2) rotates with a constant angular velocity (ω), and this angular speed is proportional to the engine rotation speed, depending on the gearing ratio in the drive train. The wheel has an actual velocity va, which is equal to the angular velocity multiplied by the wheel’s rolling radius reduced by the slip (as discussed below). In an equilibrium situation, ω and va are constants. The power transferred to the wheel axle (Pw) can be calculated as the product of the torque (T) and the angular velocity (ω), as shown in Equation 3.1.1. The tractive power developed by the wheel (Pt) is the product of the net tractive force (H) and the actual velocity (va), as shown in Equation 3.1.2. The tractive efficiency of the wheel (TE) can be calculated as the ratio between tractive power and the wheel axle power, as shown in Equation 3.1.3.
$P_{W}=T\omega$
$P_{t}=H\nu_{a}$
$T_{E}=\frac{P_{t}}{P_{W}}$
where Pw = power transferred to the wheel axle (W)
T = torque transferred to the wheel axle (N m)
ω = angular velocity of the wheel (rad s−1)
Pt = tractive power developed by the wheel (W)
H = net tractive force (N)
va = actual velocity of the wheel (m s−1)
TE = tractive efficiency of the wheel (dimensionless)
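Equations 3.1.1 through 3.1.3 translate directly into code; the following Python sketch computes the axle power, tractive power, and tractive efficiency (the function and argument names, and the numbers in the usage line, are purely illustrative):

```python
def wheel_powers(torque, omega, net_tractive_force, actual_velocity):
    """Axle power Pw, tractive power Pt, and tractive efficiency TE
    (Equations 3.1.1-3.1.3); SI units: N m, rad/s, N, m/s -> W."""
    p_w = torque * omega                        # power transferred to the wheel axle
    p_t = net_tractive_force * actual_velocity  # tractive power developed by the wheel
    return p_w, p_t, p_t / p_w

# hypothetical numbers: 5 kN m of torque at 4 rad/s while developing 6 kN at 2 m/s
p_w, p_t, te = wheel_powers(5000, 4.0, 6000, 2.0)   # 20 kW, 12 kW, TE = 0.6
```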
The friction force (Ff in Figure 3.1.2) is generated by the interaction between the wheel and the surface. The friction force can be calculated by multiplying the reaction force (R) by the equivalent friction coefficient (μ). Table 3.1.1 presents some typical values. Because R is equal to the dynamic load acting on the wheel axle (W) and the net tractive force is equal to the friction force, the tractive force can be calculated as the product of the equivalent coefficient of friction and the dynamic load, as:
Table 3.1.1: Equivalent coefficient of friction for a tractor wheel working on different surfaces.

Surface type | Equivalent coefficient of friction (μ)[a]
Soft soil    | 0.26–0.31
Medium soil  | 0.40–0.46
Firm soil    | 0.43–0.53
Concrete     | 0.91–0.98

[a] These values were estimated based on data presented by Kolator and Białobrzewski (2011).
$H= \mu W$
where μ = coefficient of friction (dimensionless).
The theoretical velocity (vt) is the wheel’s rotational velocity (ω) times the rolling radius (r), as shown in Equation 3.1.5, but the actual wheel velocity (va) is less due to the relative motion at the interface between the wheel and the surface. This relative motion is quantified by the travel reduction ratio, commonly called slip, defined as the ratio of the loss of wheel velocity to the theoretical velocity, that is, the velocity the wheel would have if there were no loss. Equation 3.1.6 shows how the travel reduction ratio can be estimated:
$\nu_{t} = \omega r$
$s= \frac{\nu_{t}-\nu_{a}}{\nu_{t}}$
where vt = theoretical velocity of the wheel (m s−1)
r = wheel rolling radius (m)
s = travel reduction ratio, or slip (dimensionless)
The travel reduction ratio is an important variable for wheel tractive force analysis. The travel reduction ratio of a wheel can vary from 0 to 1 depending on wheel and surface conditions. When the travel reduction ratio is equal to 0, there would be no relative motion between the periphery of the wheel and the surface. The wheel rotation causes a perfect translational motion relative to the surface. However, experience has shown that for a wheel to develop a tractive force, there must be relative motion (slip) between the wheel and the surface. Therefore, a wheel generating tractive force needs to have a travel reduction ratio greater than zero. When a wheel generates more tractive force, the travel reduction ratio increases, and the actual wheel velocity reduces. When the travel reduction ratio is equal to 1, the wheel does not move forward when it rotates. The models used to calculate the tractive force generally use the travel reduction ratio as one of the variables.
Another important concept when analyzing the traction of a moving wheel is the motion resistance force (Fr) (Figure 3.1.3). When a wheel moves, the wheel and the surface deform, and energy is spent to produce this deformation. The resistance produced by the wheel and surface deformations must be overcome for the wheel to move. Because of the motion resistance, the friction force generated at the wheel-surface contact must be greater than the motion resistance force for a net tractive force to be produced. This friction force is now termed the gross tractive force (denoted by F). Thus, the gross tractive force would equal the net tractive force generated by the wheel if there were no motion resistance. Adding the concepts of motion resistance and gross tractive forces to Figure 3.1.2 results in Figure 3.1.3, which is an improved representation of the forces acting on a wheel.
If the wheel represented in Figure 3.1.3 has no motion in the vertical direction (z axis), the wheel is in static equilibrium in this direction. In this condition, the summation of forces in the z (vertical) direction is zero. Therefore,
$\sum F_{z} = 0$
$R-W=0$
$R=W$
where Fz = any force applied to the wheel in z direction (N)
R = vertical reaction force of the wheel (N)
If the actual speed of the wheel represented in Figure 3.1.3 is constant, the horizontal forces are in static equilibrium in this direction and the sum of the horizontal forces is zero. Therefore,
$\sum F_{x}=0$

$F-H-F_{r}=0$

$H=F-F_{r}$
where Fx = any force applied to the wheel in x direction (N)
F = gross tractive force (N)
Fr = motion resistance force (N)
Based on Equation 3.1.12, the gross tractive force (F) must be the net tractive force (H) plus the motion resistance force (Fr). If both sides of Equation 3.1.12 are divided by the dynamic load (W) acting on the wheel, resulting in Equation 3.1.13, three dimensionless numbers, i.e., μn, μg, and ρ, are created, as shown in Equations 3.1.15, 3.1.16, and 3.1.17. The first is the net traction ratio (μn), defined as the net tractive force divided by the dynamic load. The second is the gross traction ratio (μg), defined as the gross tractive force divided by the dynamic load. The third is the motion resistance ratio (ρ), defined as the motion resistance force divided by the dynamic load.
$\frac{H}{W}=\frac{F}{W}-\frac{F_{r}}{W}$

$\mu_{n}=\mu_{g}-\rho$

$\mu_{n}=\frac{H}{W}$

$\mu_{g}=\frac{F}{W}$

$\rho=\frac{F_{r}}{W}$
Equation 3.1.14 shows that μn, μg, and ρ are not independent. By using a technique called dimensional analysis, functions were developed to predict how μg and ρ change as a function of the wheel variables and soil resistance. This analysis is presented by Goering et al. (2003) and is beyond the scope of this chapter. If μg, ρ, and W are known, the tractive force generated by the wheel can be predicted using Equation 3.1.18:
$H= (\mu_{g}-\rho)W$
The R, F, and Fr forces (Figure 3.1.3) act on a point called the wheel resistance center. This point is not aligned with the direction of the dynamic load W but is slightly ahead of it; this horizontal distance is called the horizontal offset (e). The static analysis of a towed wheel (Figure 3.1.4) shows that the wheel resistance center is not aligned with the direction of the dynamic wheel load. In a towed wheel, there is no torque applied to the axle. The soil reaction (G) at the resistance center is the resultant of the R and Fr forces, and its direction passes through the wheel center. To move the towed wheel at a constant actual velocity (va), a net tractive force (H) equal to the motion resistance force (Fr) needs to be applied to the wheel. For the wheel to keep a constant angular velocity, the sum of the moments about the center of the wheel must equal zero. Goering et al. (2003) showed that the horizontal offset can be calculated with Equations 3.1.19–3.1.21.
$Re-F_{r}r = 0$

$e=\frac{F_{r}r}{R}$
where e = horizontal offset (m).
By using Equation 3.1.18, the tractive force can be predicted. The other important step in the wheel traction analysis is to predict how much torque must be transferred to the wheel axle to generate the tractive force (H). In Equation 3.1.5, the wheel radius is used to convert the rotational angular velocity to the theoretical wheel velocity. The wheel radius can also be used to calculate the torque necessary to produce the wheel tractive force. The torque (T) necessary to keep the angular velocity of the wheel constant and produce the net tractive force is the product of the gross tractive force and the torque radius of the wheel, as given by:

$T=Fr_{t}$
where rt = torque radius of the wheel (m).
The wheel radius defined in Equation 3.1.5 is different from the torque radius of the wheel defined in Equation 3.1.22 because of the interaction of the wheel and the surface, which varies on a soft soil surface. Generally, a rolling radius based on the distance from the center of the wheel axle to a hard surface is used. Therefore, Equation 3.1.23 can be used to estimate the torque acting at wheel axle:
$T=Fr$
Engine Power Needed to Produce a Tractive Force
ASABE Standards (2015) presented a diagram (Figure 3.1.5) of the approximate typical power relationship for agricultural tractors. Tractors can be specified by their engine gross flywheel rated power (Pe). One of the standards used to define the engine gross flywheel rated power is SAE J1995 (SAE, 1995). The rated power defined by this standard is the mechanical power produced by the engine without some of its accessories (such as the alternator, the radiator fan, and the water pump). Therefore, the engine gross flywheel rated power is greater than the net power produced by the engine. The approximate engine net flywheel power can be estimated by multiplying the gross flywheel power by 0.92. The power at the tractor PTO is about equal to the engine gross flywheel power multiplied by 0.83 or the engine net flywheel power multiplied by 0.90.
The power that the tractor can generate to pull implements, often termed drawbar power because many implements are attached to the tractor’s drawbar, depends on the tractor type, i.e., 2-wheel drive (2WD), mechanical front wheel drive (MFWD), 4-wheel drive (4WD), or tracked. The surface condition where the tractor is used has an even greater effect. Using these two pieces of information, coefficients that estimate the relationship between drawbar power and PTO power are given in Figure 3.1.5.
The drawbar power required to pull an implement is:
$P_{DB} = F_{i} \nu_{i}$
where PDB = drawbar power (W)
Fi = force required to pull an implement (N)
vi = implement velocity (m s−1)
The force required to pull an implement depends on the implement. For example, the force required to pull a planter Fp is the force required per row times the number of rows:
$F_{p} = f_{r}n_{r}$
where fr = force required per row of planter (N row−1)
nr = number of rows
Once the required drawbar power is determined, the values in Figure 3.1.5 can be used to calculate the estimated needed gross flywheel rated power of a tractor to pull the implement.
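Putting the chain of coefficients together, a short Python sketch of this matching calculation follows (the two ratios must be read from Figure 3.1.5 for the actual tractor type and soil condition; the defaults below are the MFWD/tilled-soil values used later in Example 4, and the 20% power reserve is likewise an assumption):

```python
def required_gross_flywheel_power(draft_force, speed, drawbar_to_pto=0.72,
                                  pto_to_gross=0.83, reserve=0.20):
    """Estimated engine gross flywheel power (W) needed to pull an implement.
    draft_force in N, speed in m/s; the two ratios come from Figure 3.1.5."""
    p_db = draft_force * speed          # drawbar power, Equation 3.1.24
    p_pto = p_db / drawbar_to_pto       # PTO power
    p_gross = p_pto / pto_to_gross      # gross flywheel power
    return p_gross * (1.0 + reserve)    # add a power reserve for overloads

# 30-row planter at 900 N per row and 2.25 m/s -> about 122 kW (cf. Example 4 below)
print(required_gross_flywheel_power(900 * 30, 2.25) / 1000)
```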
Applications
The concepts of traction and tractor power are necessary for properly matching the tractor to an implement. Agricultural operations cannot be performed if the tractor cannot develop enough power or traction to pull the implement. As implements have increased in size over the years, it is necessary that the tractors have enough power and enough traction for the tasks they have to perform. Choosing a tractor that is too large will negatively impact agricultural profitability because larger tractors cost more than smaller tractors. An oversize tractor may also increase fuel consumption and exhaust emissions. This is significant because even the most efficient tractors get less than 4 kWh of work per liter of diesel fuel.
Tractors range greatly in size (e.g., Figure 3.1.6). For example, one large contemporary manufacturer sells tractors from 17 to 477 kW. The weight of the tractor must be enough to generate sufficient traction force, as shown in Equation 3.1.18. However, besides the cost of adding weight, additional weight may increase soil compaction and depress crop yields. It is therefore necessary to understand these concepts to design tractors and implements. The capabilities of the tractor’s engine, power transmission elements, and wheels need to be appropriately scaled. There needs to be a trade-off between making them large and powerful with making them compact and inexpensive. The above analyses can be used to guide tractor choice and design.
The concepts are also applied to other types of agricultural machinery, such as self-propelled harvesters and sprayers. For these machines to be able to complete their tasks, they need to be able to move across agricultural soils. The same calculations can be used to determine if there is enough power and to design the various components on those machines. The wheels, axles, and power transmission components must be able to withstand the forces, torques, and power during the machines’ use.
Examples
Example $1$
Example 1: Tractive force
Problem:
Calculate the tractive force produced by a tractor wheel that works on a firm soil with a dynamic load of 5 kN. The wheel velocity is 2 m s−1. If the tractive efficiency is 0.73, what is the power that needs to be transferred to the wheel axle?
Solution
Assume an equivalent coefficient of friction of 0.48, the mean value for firm soil presented in Table 3.1.1. Calculate the tractive force using Equation 3.1.4:
$H= \mu W=0.48 \times 5 = 2.4 \text{ kN}$
Now, calculate the tractive power for the tractor wheel using Equation 3.1.2:
$P_{t} = H\nu_{a} = 2.4 \times 2 = 4.8 \text{ kW}$
Calculate the power that needs to be transferred to the wheel axle using Equation 3.1.3 with the given tractive efficiency of 0.73:
$P_{W} = \frac{P_{t}}{T_{E}} = \frac{4.8}{0.73} = 6.58 \text{ kW}$
This value of needed power can be used to design the various power transmission components. The power consumption can also be used to calculate the power demanded of the ultimate power source, probably an engine, to calculate fuel consumption and, thereby, costs of a particular field operation.
Example $2$
Example 2: Torque and travel reduction ratio, or slip
Problem:
A wheel on another tractor receives 40 kW from the tractor powertrain. The wheel rotates at 25 rpm, which is an angular velocity, ω, of 2.62 rad s−1. (Note: 2π rad rev−1 × 25 rev min−1 / 60 s min−1 = 2.62 rad s−1.) If the rolling radius of the wheel is 0.81 m and the speed of the tractor is 2 m s−1, calculate the torque acting on the wheel and the travel reduction ratio (commonly known as slip).
Solution
Calculate the torque acting on the wheel T for a power Pw of 40 kW using Equation 3.1.1:
$T=\frac{P_{W}}{\omega} = \frac{40}{2.62} = 15.27 \text{ kN m}$
Calculate the theoretical velocity of the wheel vt for a rolling radius r of 0.81 m using Equation 3.1.5:
$\nu_{t} = \omega r = 2.62 \times 0.81 = 2.12 \text{ m}s^{-1}$
Since the actual velocity of the wheel is 2 m s−1, which is less than the theoretical velocity of the wheel, calculate the travel reduction ratio s using Equation 3.1.6:
$s= \frac{\nu_{t}-\nu_{a}}{\nu_{t}} = \frac{2.12-2.00}{2.12} = 0.057, \text{or} \ 5.7\%$
In addition to providing guidance to the design of the agricultural machine and its power consumption, calculation of the slip is useful to determine how fast the operation will be performed. Excessive slip can also have adverse effects on the soil’s structure and inhibit plant growth.
Example $3$
Example 3: Tractive force and power
Problem:
Consider a wheel that works with a dynamic load of 10 kN, a motion resistance ratio of 0.08, and a gross traction ratio of 0.72. Find the tractive force that the wheel can develop. If this wheel rotates at 40 rpm and the rolling radius of the wheel is 0.71 m, how much power is necessary to move this wheel?
Solution
Calculate the gross tractive force developed by the wheel F using Equation 3.1.16:
$F= \mu_{g} W = 0.72 \times 10 = 7.2 \text{ kN}$
Calculate the motion resistance Fr of this wheel using Equation 3.1.17:
$F_{r} = \rho W = 0.08 \times 10 = 0.80 \text{ kN}$
The tractive force H developed by the wheel, according to Equation 3.1.12, is the difference between the gross tractive force and the motion resistance:
$H = F-F_{r} = 7.2 - 0.8 = 6.4 \text{ kN}$
Calculate the torque necessary to move this wheel using Equation 3.1.23:
$T = Fr = 7.2 \times 0.71 = 5.11 \text{ kN m}$
Calculate the power Pw necessary to turn the wheel using Equation 3.1.1:
$P_{W} = T \omega = T \frac{2\pi N}{60} = 5.11 \times \frac{2 \times \pi \times 40}{60} = 21.4 \text{ kW}$
Example $4$
Example 4: Engine gross flywheel power
Problem:
Calculate the necessary power of a MFWD tractor to pull a 30-row planter. According to ASABE Standards (2015), a force of 900 N per row is required to pull a drawn row crop planter if it is only performing the seeding operation. The speed of the tractor will be 8.1 km h−1 (2.25 m s−1). The soil is in the tilled condition. Consider that the tractor should have a power reserve of 20% to overcome unexpected overloads.
Solution
Calculate the drawbar force needed to pull the planter using Equation 3.1.25:
$F_{p} = f_{r}n_{r} = 900 \times 30 = 27,000 \text{ N}$
Calculate the drawbar power PDB needed to pull the planter using Equation 3.1.24:
$P_{DB} = F_{p} \nu_{p} = 27,000 \times 2.25 = 60,750 \text{ W}$
Therefore, the tractor needs to produce a drawbar power of 60.75 kW. From Figure 3.1.5, find that the coefficient that relates the drawbar power to the PTO power of the tractor for a MFWD tractor working on tilled soil condition is 0.72. Thus, the tractor PTO power PPTO should be:
$P_{PTO} = \frac{P_{DB}}{0.72} = \frac{60.75}{0.72} = 84.4 \text{ kW}$
Considering that the coefficient that relates the PTO power to the engine gross flywheel power is 0.83 (Figure 3.1.5), the engine gross flywheel power Pe is:
$P_{e} = \frac{P_{PTO}}{0.83} = \frac{84.4}{0.83} = 102 \text{ kW}$
Considering a reserve of power of 20% to overcome unexpected overloads, the tractor selected should have an engine gross flywheel power at least 20% greater than that needed to pull the 30-row planter, or 1.2 × 102 kW = 122 kW.
These calculations will help the farm manager select the proper tractor for the operation.
Image Credits
Figure 1. Queiroz, D. (CC By 4.0). (2020). Schematic view of a two-wheel drive agricultural tractor.
Figure 2. Queiroz, D. (CC By 4.0). (2020). Simplified diagram of the variables related to a wheel developing a net tractive force.
Figure 3. Queiroz, D. (CC By 4.0). (2020). Diagram of the variables related to a wheel developing a net tractive force (H) including the gross tractive force (F) and the motion resistance force (Fr).
Figure 4. Queiroz, D. (CC By 4.0). (2020). Diagram of forces acting on a towed wheel.
Figure 5. ASABE Standard ASAE D497.7 (CC By 4.0). (2020). Diagram of the approximate power relationships in agricultural tractors (types are defined in the main text) and soil conditions.
Figure 6. Schueller, J. (CC By 4.0). (2020). Typical contemporary (a) small and (b) large tractors.
References
ASABE Standards. (2018). ANSI/ASAE S296.5 DEC2003 (R2018): General terminology for traction of agricultural traction and transport devices and vehicles. St. Joseph, MI: ASABE.
ASABE Standards. (2015). ASAE D497.7 MAR2011 (R2015): Agricultural machinery management data. St. Joseph, MI: ASABE.
Goering, C. E., Stone, M. L., Smith, D. W., & Turnquist, P. K. (2003). Traction and transport devices. In Off-road vehicle engineering principles (pp. 351-382). St. Joseph, MI: ASAE.
Kolator, B., & Białobrzewski, I. (2011). A simulation model of 2WD tractor performance. Comput. Electron. Agric. 76(2): 231-239.
Renius, K. T. (2020). Fundamentals of tractor design. Cham, Switzerland: Springer Nature.
SAE. (1995). SAE J1995_199506: Engine power test code—Spark ignition and compression ignition—Gross power rating. Troy, MI: SAE.
Schueller, J. K. (2000). In the service of abundance: Agricultural mechanization provided the nourishment for the 20th century’s extraordinary growth. Mech. Eng. 122(8):58-65.
Crop Establishment and Protection
Roberto Oberti
Department of Agricultural and Environmental Science, University of Milano, Milano, Italy
Peter Schulze Lammers
Department of Agricultural Engineering, University of Bonn, Bonn, Germany
Key Terms
Tillage
Planting
Spreading
Spraying
Field performance parameters
Application rate and quality
Introduction
Field crops are most often grown to provide food for humans and for animals. Growing field crops requires a sequence of operations (Figure 3.2.1) that usually starts with land preparation followed by planting. These two stages are known as crop establishment. Crop growth requires a supply of nutrients through application of fertilizers as well as protection against weeds, diseases, and pest insects using biological, chemical, and/or physical treatments. Finally, the crop is harvested and transported to processing locations. This general sequence of operations can be more complex or specifically modified for a particular crop or cropping system. For example, crop establishment is only required once, while crop protection and fertilization may be repeated multiple times annually.
Engineering is integral to maximize the productivity and efficiency of these operations. This chapter introduces some of the engineering concepts and equipment used for crop establishment and crop protection in arable agriculture.
Outcomes
After reading this chapter, you should be able to:
• Describe the fundamental principles of agricultural mechanization
• Apply physics concepts to some aspects of crop establishment and protection equipment
• Calculate field performance for land preparation, planting, fertilizing, and plant protection based on operating parameters
Concepts
Field Performance Parameters
Regardless of the specific operation, the work of a field machine is evaluated through some fundamental parameters: the operating width and the field capacity of the machine.
Operating Width
The operating, or working, width w of a machine is the width of the portion of field worked by each pass of the machine. In field work, especially with large equipment, the effective operating width can be less than the theoretical width due to unwanted partial overlaps between passes.
Field Capacity of a Machine
An important parameter to be considered when selecting a machine for an operation is the field capacity, which represents the machine’s work rate in terms of area of land or crop processed per hour. The theoretical field capacity of the machine (also called the area capacity) can be computed as:
$C_{t} = ws$
where Ct = theoretical field capacity (m2 h−1)
w = operating width (m)
s = field speed (m h−1)
Ct is typically expressed in ha h−1. Figure 3.2.2 illustrates the field area worked during a time interval, t.
In actual working conditions, this theoretical capacity is reduced by idle times (e.g., turnings, refills, transfers, or breaks) and possible reductions in working width or in nominal field speed due to operational considerations, resulting in an actual field capacity:
$C_{a} = e_{f}C_{t}$
where ef is the field efficiency (decimal), whose value depends largely on the operation and can be estimated for given operations and working conditions.
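These two relationships are simple enough to script. The short Python sketch below is illustrative only (the function names and demonstration values are assumptions, not from the text); it takes w in m and s in km h−1 and applies the 0.1 conversion factor so that capacities come out in ha h−1, the form used later in this chapter.

```python
# Minimal sketch of Equations 3.2.1 and 3.2.2 (illustrative; the function
# names and demonstration values are assumptions, not from the text).

def theoretical_field_capacity(w_m, s_kmh):
    """Theoretical field capacity Ct (ha/h) for operating width w (m)
    and field speed s (km/h); the 0.1 factor converts m*km/h to ha/h."""
    return 0.1 * w_m * s_kmh

def actual_field_capacity(w_m, s_kmh, e_f):
    """Actual field capacity Ca (ha/h) with field efficiency e_f (decimal)."""
    return e_f * theoretical_field_capacity(w_m, s_kmh)

# Illustrative values: a 4 m implement at 8 km/h, 70% field efficiency
print(round(theoretical_field_capacity(4, 8), 2))   # 3.2 ha/h
print(round(actual_field_capacity(4, 8, 0.7), 2))   # 2.24 ha/h
```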
Tillage
Definition of Tillage
Soil preparation by mechanical interventions is called tillage. The major function of tillage is loosening the soil to create pores so they can contain air and water to enable the growth of roots. Other main tasks of tillage are crushing soil aggregates to required sizes, reduction or elimination of weeds, and admixing of plant residues. Tillage needs to be adapted to the soil type and condition (such as soil water or plant residue content) and conducted at a proper time.
Soil Mechanics
Soil is classified by grain sizes into sand, silt, and clay categories. Loam is a mixture of these soil types. Soil is subjected to shear stress and reacts by strain when it is tilled. The tillage tool moving through the soil exerts a force that produces stress between adjacent soil grains, which leads to a deformation, or strain, of the soil. Sandy soil is characterized by low shear strength and high friction, while clay is characterized by high cohesion and, after cracking, by low friction. Tillage tools usually act as wedges. They engage with the soil and cause relative movement in a shear plane where some of the soil moves with the tool while the adjacent soil stays in place. Energy is expended in shearing and lifting the soil and in overcoming friction on the tool.
Primary Tillage
Primary tillage tools or implements are designed for loosening the soil and mixing or incorporating crop residues left on the field surface after harvest. Subsequent soil treatment to prepare a seedbed is secondary tillage. A typical implement for primary tillage is the plow (spelled “plough” in some countries), which is used for deep soil cultivation.
The three most common kinds of plow are moldboard, chisel, and disc (Figure 3.2.3). The moldboard plow body and its action are shown in Figure 3.2.3a. A plow share cuts the soil horizontally and the attached moldboard upturns the soil strip and turns it almost upside down in the furrow made by the previous plow body. The heel makes sure the plow follows the proper path. These parts are connected by a supporting part (breast) which is connected by a leg to the frame of the plow. Chisel plows do not invert all the soil, but they mix the top soil layer, including residues, into deeper portions of the soil. Chisel plows use heavy tines with shares on the bottom of the tines (Figure 3.2.3b). A disc plow uses concave round discs (Figure 3.2.3c) to cut the soil on the furrow bottom and turns the soil with the rotating motion of the disc.
The draft forces to pull a plow are provided by a tractor to which it is attached. Equation 3.2.3 is one way used to calculate the draft force needed to pull a moldboard plow. This calculation follows Gorjachkin (1968):
$F_{z} = nF_{v}\rho_{r} + ikw_{f}d + i\epsilon w_{f}dv^{2}$
where Fz = draft force (N)
n = number of gauge wheels
Fv = vertical force (N)
ρr = rolling resistance (decimal)
i = number of moldboards or shares
k = static factor (N cm−2) ranging from 2 to 14 depending on soil type
wf = width of furrow (cm)
d = depth of furrow (cm)
ε = dynamic factor (N s2 m−2 cm−2), ranging from 0.15 to 0.36 depending on soil type and moldboard design
v = traveling speed (m s−1)
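A minimal sketch of Equation 3.2.3 follows; it is illustrative, and the demonstration values (chosen to match Example 2 below) are assumptions. Note that the function expects v in m s−1, which is a common source of error when field speeds are quoted in km h−1.

```python
# Minimal sketch of the Gorjachkin draft-force relation (Equation 3.2.3).
# The demonstration values below are assumptions chosen to match Example 2.

def plow_draft_force(n, F_v, rho_r, i, k, w_f, d, eps, v):
    """Draft force Fz (N).

    n: gauge wheels; F_v: vertical force (N); rho_r: rolling resistance;
    i: number of bodies; k: static factor (N/cm^2); w_f, d: furrow width
    and depth (cm); eps: dynamic factor (N s^2 m^-2 cm^-2);
    v: travel speed in m/s (not km/h).
    """
    return n * F_v * rho_r + i * k * w_f * d + i * eps * w_f * d * v**2

# 4-body plow, 40 cm x 30 cm furrows, 7 km/h converted to m/s
print(round(plow_draft_force(1, 5000, 0.15, 4, 5, 40, 30, 0.21, 7 / 3.6)))  # ~28,560 N
```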
About half the energy consumed in plowing, the effective energy, does the work of cutting (13–20%), elevating and accelerating the soil (13–14%), and deformation (14–15%). The remaining energy is spent on noneffective losses (e.g., friction), which do not contribute to tillage effectiveness.
Secondary Tillage (Seedbed Preparation)
Secondary tillage prepares the seedbed after primary tillage. Implements for secondary tillage are numerous and of many different designs. A harrow is the archetype of secondary tillage; it consists of tines fixed in a frame. Cultivators are heavier, with longer tines formed as chisels in a rigid frame or with a flexible suspension. Soil is opened by shares and the effect is characterized by the tine spacing, depth of furrow, and speed.
Figure $3$: (a) moldboard plow body; (b) chisel (tine) of a cultivator; (c) disc plow.
Most tillage implements are pulled by tractors and limited by traction, as discussed in another chapter. (Power tillers exist as rotary harrows with vertical axles or rotary cultivators with horizontal axles; these are discussed below.) The mechanical connection of the tractor power take-off (PTO) to the tillage implement provides the power to drive the implement’s axles, which are equipped with blades, knives, or shares. Each blade cuts a piece of soil (Schilling, 1962) and the length of that bite is determined as a function of the tractor travel speed and axle rotation speed according to:
$B= \frac{10,000\ v}{6\ r\ z}$
where B = bite length (cm)
v = travel speed (km h−1)
r = rotary speed (min−1)
z = number of blades per tool assembly
The cut soil is often flung against a hood or cover which helps crush soil agglomerations, with the axle rotation speed affecting the impact force.
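The bite-length relation (Equation 3.2.4) can be checked quickly in code. The following sketch is illustrative; the demonstration values mirror Example 3 later in the chapter.

```python
# Minimal sketch of the bite-length relation (Equation 3.2.4);
# demonstration values mirror Example 3.

def bite_length_cm(v_kmh, r_rpm, z_blades):
    """Bite length B (cm) = 10,000 v / (6 r z), with v in km/h,
    r in rev/min, and z blades per tool assembly."""
    return 10_000 * v_kmh / (6 * r_rpm * z_blades)

print(round(bite_length_cm(5, 240, 4), 2))  # 8.68 cm
```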
Planting
Crops are sown (planted) by placing seeds in the soil (or in some cases, discussed below, by transplanting). Basic requirements are:
• equal distribution of seeds in the field,
• placing the seeds at the proper depth of soil, and
• covering the seeds.
The seed bed should be prepared by tilling the soil into aggregates about the size of the seed grain, and gently compacting the soil at the placement depth of the seed, e.g., 2–4 cm for wheat and barley. Germination is triggered by soil temperature (e.g., 2–4°C for wheat) and soil water content. The seeding rate of cereal grains is in the range of 200 to 400 grains per m2, resulting in 500 to 900 heads per m2, as a single plant may produce multiple heads. As the metering of seeds is mass-based, commercial seed is indexed by the mass of one thousand grains (Table 3.2.1).
The appropriate distribution of seeds is a fundamental condition for a successful crop yield. Seeds can be broadcasted, which means they are scattered randomly (Figure 3.2.4). This is commonly done for crops such as grasses and alfalfa. But the seeds for most crops are deposited in rows. Common distances between rows for cereals are 12 to 15 cm. For crops commonly called row crops (e.g., soybean or maize), the row spacing is 45 to 90 cm. If the rows are at a suitable distance, the wheels of farm machines can avoid driving on the plants as they grow from the seeds.
Table $1$: Characteristics of seeds.
Crop | Thousand Grain Mass (g) | Bulk Density (kg L−1) | Seed Rate (kg ha−1) | Area per Grain (cm2)
Wheat (Triticum aestivum) | 25–50 | 0.76 | 100–250 | 22–37
Barley (Hordeum vulgare) | 24–48 | 0.64 | 100–180 | 27–48
Maize (Zea mays) | 100–450 | 0.7 | 50–80 | –
Peas (Pisum sativum) | 78–560 | 0.79 | 120–280 | –
Rapeseed (Brassica napus) | 3.5–7 | 0.65 | 6–12 | –
Within rows, seeds have a random spacing if seed drills are used, or have a fixed distance between seeds if precision seeders (discussed below) are used. Seed drills are commonly used for small grains and consist of components that:
• hold the seeds to be planted,
• meter (singulate) the seed,
• open a row furrow in the soil,
• transport the seeds to the soil,
• place the seeds in the soil, and then
• cover the open furrow with soil.
To extend the area per seed when sowing in rows, a wider opening of the furrow allows band seeding (Figure 3.2.4).
Figure $6$: (a) Seed deposition in a row, drilled; (b) frequency of seed distances, drilled.
Regular seed drills often meter the seeds with a studded roller as seen in Figure 3.2.5. The ideal is to have a uniform distribution, but there will be variation, including infrequent longer distances, as shown by Figure 3.2.6 (Heege, 1993).
A second very common metering device is a cell wheel, which is used for central metering in pneumatic seed drills. A rotating cell wheel is filled by the seeds in the hopper and empties into an air stream via a venturi jet (Figure 3.2.7). The grains are entrained by air and collide with a plate. A relatively uniform distribution of grains occurs along the circumference of the plate where pipes for transporting the grains to the coulters are arranged.
The frequency of the seed distances (Figure 3.2.6) as metered by the feed cells or studded rollers corresponds to an exponential function (Heege, 1993):
$p_{z} = \frac{1}{\bar{x}}e^{-\frac{x_{i}}{\bar{x}}}$
where pz = seed spacing frequency
xi = seed spacing (cm)
$\bar{x}$ = mean seed spacing (cm)
The accuracy of the longitudinal seed distribution is indicated by the coefficient of variation (Müeller et al., 1994):
$CV= \frac{\sqrt{\frac{\sum{(x_{i} - \bar{x})^{2}}}{N-1}}}{\bar{x}} \times 100\%$
where CV = coefficient of variation (%)
N = number of measured samples
xi = spacing (cm)
$\bar{x}$ = mean spacing (cm)
A CV smaller than 10% is considered good, while above 15% is unsatisfactory.
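The CV calculation maps directly onto a sample standard deviation divided by the mean. The sketch below uses Python's statistics module; the seed-spacing values are hypothetical numbers chosen only to illustrate the computation.

```python
# Minimal sketch of the CV calculation (Equation 3.2.6). The spacing
# values are hypothetical, chosen only to illustrate the computation.
import statistics

def coefficient_of_variation(samples):
    """CV (%) = sample standard deviation (N-1 denominator) / mean * 100."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100

spacings_cm = [2.1, 2.3, 2.0, 2.2, 2.2, 2.4, 2.1, 2.2]  # hypothetical data
cv = coefficient_of_variation(spacings_cm)
print(f"CV = {cv:.1f}%")  # CV = 5.7%, below the 10% threshold, so good
```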
Precision Seed Drills
Crops such as maize, soybeans, sugar beets, and cotton have higher yields if seeded as single plants. Precision seed drills singulate the seeds and place them at constant separation distances in the row. Cell wheels single the seeds out of the bulk, move them at constant rotational speed to the coulter, and drop them in the seed furrow with a target distance between each seed. Filling of the cells is a crucial function of precision seed drills. Every cell needs to be filled with one, and only one, seed. The cell wheels rotate through the bulk of seeds in the hopper and are filled by gravity, or the filling is accomplished by an air stream sucking the grains to the holes of the cell wheels (Figure 3.2.8). Then the seeds are released from the cell wheels and drop onto the bottom of the seed furrow. The trajectory of a single seed grain is affected by gravity (including the time for the seed to drop). To avoid rolling of the seeds in the furrow, the backward speed of the seed relative to the drill should match the forward speed of the seed drill.
The time required for the seed to drop is:
$t= \sqrt{\frac{2h}{g}}$
where t = time of dropping (s)
h = height of cell above seed furrow (m)
g = acceleration due to gravity (9.8 m s−2)
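As a quick check of the drop-time equation, the sketch below computes t for an assumed cell height; the 0.20 m height is an illustrative value, not one from the text.

```python
# Minimal sketch of the drop-time equation; the 0.20 m cell height is an
# illustrative assumption, not a value from the text.
import math

def drop_time_s(h_m, g=9.8):
    """Free-fall time t (s) for a seed released h metres above the furrow."""
    return math.sqrt(2 * h_m / g)

print(round(drop_time_s(0.20), 3))  # 0.202 s
```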
Figure $8$: Precision seeder singling seed grains for seed placement with definite spacing: (a) mechanical singling by cell wheel, (b) pneumatic singling device with cell wheel.
Transplanting
In the case of short vegetation periods or intensive agriculture (e.g., crops that require production in a short time), some crops are not direct-seeded into fields but instead are transplanted.
Seeds may be germinated under controlled conditions, such as in greenhouses (glasshouses). The small plant seedlings may be grown on trays or in pots then transplanted into fields where they will produce a harvestable crop. In the case of rice, which is often not direct-seeded, the plants are grown in trays and then transplanted. When transplanting rice, an articulated mechanical device punches the root portion together with the upper parts of the plants out of the tray and presses it into the soil, while keeping the plants fully saturated with water.
Other plants are propagated by vegetative methods (cloning or cuttings). Special techniques are required for potatoes. “Seed” tubers are put in the hopper of the planter and planted in ridges. Row distances are commonly 60–90 cm, which produces plant populations of 40,000–50,000 per ha.
Compared to seeds, small plants (whether seedlings, tubers, cuttings, etc.) are easily damaged. The fundamental requirements of transplanting, both manual (which is labor-intensive) and mechanized, are:
• no damage to the seedlings,
• upright positioning of the seedlings in the soil at a target depth,
• correct spacing between plants in a row, and
• close contact of the soil with the roots.
Fertilization
Crop yield is strictly related to the availability of nutrients that are absorbed by the plant during growth. As crops are harvested, nutrients are removed from the soils. The major nutrients (nitrogen (N), phosphorus (P), and potassium (K)) usually have to be replaced by the application of fertilizers in order to maintain the soil’s productivity. Minor nutrients are also sometimes needed.
Organic fertilizers are produced within the farming process, and their use can be seen as a way of recycling and conserving nutrients. The concentration of nutrient elements in organic fertilizers is low (e.g., 1 kg of livestock slurry can contain 5 g of N or less), requiring very large structures for long-term storage, suitable protection from nutrient volatilization or dilution, large machines for field distribution, and the application of large amounts for sufficient fertilization of crops. Based on their solid content, organic fertilizers are classified as slurries (solid content less than 14%), which can be pumped and managed as fluids, or as manures (solid content at least 14%), which are managed as solids with scrapers and forks.
Mineral fertilizers are produced by industrial processes and are characterized by a high concentration of nutrient chemicals, prompt availability for plant uptake, ease of storage and handling, and stability over time. The most used form of mineral fertilizers worldwide is solid granules (e.g., urea prills, calcium ammonium nitrate, potassium chloride, and N-P-K compound fertilizers). Other techniques rely on the distribution of liquid solutions or suspensions of mineral fertilizers or on soil injection of anhydrous ammonia.
Application Rate
During distribution operations (including fertilization as well as distribution of other inputs such as pesticides), the application rate is the amount of material distributed per unit of surface area, i.e., for solids by mass:
$AR=M/A$
and for liquids by volume:
$AR=V/A$
where AR = application rate (kg ha−1 or L ha−1)
M = mass of material distributed (kg)
V = volume of material distributed (L)
A = field area receiving the material (ha)
Dose of Application
The dose of application, D, refers to the amount of active compound (e.g., chemical nutrient, pesticide ingredient) distributed per unit of surface area:
$D=c_{AC}AR$
where D = dose of application (kgAC ha−1)
cAC = content of active compound in the raw material or solution distributed at the application rate (kgAC kg−1 or kgAC L−1)
Longitudinal and Lateral Uniformity of Distribution
The uniformity of the application rate during distribution is of fundamental importance for the agronomic success of the operation. The machine must be able to guarantee a suitable uniformity along both the traveling (longitudinal uniformity) and the transversal (lateral uniformity) directions.
The longitudinal uniformity of the distribution to the field is obtained by appropriate metering of the mass (or volume) flow out of the machine, by means of control devices such as adjustable discharge gates or valves. The rate of material flow to be set depends on the desired application rate, the traveling speed, and the working width of the distributing machine. This can be seen by dividing by time both the numerator and denominator of Equation 3.2.8 (and similarly for 3.2.9), which leads to:
$AR = (M/t)/(A/t)$
The numerator of the right side of the equation is the mass outflow Q (typically expressed in kg min−1), and the denominator is the theoretical field capacity of the machine Ct, or 0.1 w s (see Equation 3.2.1).
Then, it follows that, for AR in units of kg ha−1:
$AR = \frac{(Q \text{ kg min}^{-1}) (60 \text{ min h}^{-1})}{0.1 ws} = \frac{600Q}{ws}$
Rearranging:
$Q= \frac{AR \ ws}{600}$
where Q is the value of material flow (kg min−1) to be set in order to obtain a desired application rate AR (kg ha−1), when the distributing machine works at a speed s (km h−1) and with a working width w (m).
Similarly, for liquid material distributed, the volume rate q (L min−1) is calculated as follows:
$q= \frac{AR \ ws}{600}$
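Equations 3.2.11 and 3.2.12 share the same form, so a single helper covers both. The sketch below is illustrative; the demonstration values are assumptions.

```python
# Minimal sketch of Equations 3.2.11 and 3.2.12 (same form for mass and
# volume flow); the demonstration values are illustrative assumptions.

def required_flow(AR, w_m, s_kmh):
    """Flow to meter: Q (kg/min) for AR in kg/ha, or q (L/min) for AR in
    L/ha, given working width w (m) and field speed s (km/h)."""
    return AR * w_m * s_kmh / 600

# e.g., 250 kg/ha with an 18 m spreader at 10 km/h
print(required_flow(250, 18, 10))  # 75.0 kg/min
```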
The lateral uniformity of distribution along the working width is obtained by ensuring two conditions: a controlled distribution pattern and an appropriate overlapping of swath distance between the adjacent passes of machine. A properly operating distribution system can maintain the regular shape of the distribution pattern, which can be triangular, trapezoidal, or rectangular, depending on the distribution system. The overall lateral distribution is obtained by the proper overlapping of the individual pattern produced by each pass of the equipment (the working width). The uniformity of distribution can be tested by travelling past trays placed on the ground and measuring the amount of fertilizer deposited in the individual trays. Coefficient of variation analyses similar to that discussed above for seeding (Equation 3.2.6) can be performed to evaluate the uniformity.
Fertilizer Spreader Types and Functional Components
A fertilizer spreader is a machine that carries, meters, and applies fertilizer to the field. There are many types of fertilizer spreaders with different characteristics, depending on the fertilizer material and local farming needs.
Slurry tankers are often used for spreading organic fertilizers that can be pumped, while manure spreaders are used for drier materials with higher solids content, often including straw or plant residues in addition to animal waste. Granular mineral fertilizers are distributed by centrifugal spreaders or by pneumatic or auger spreaders. Liquid fertilizers are usually distributed by boom sprayers or by micro-irrigation systems, and anhydrous ammonia by pressure injectors.
All fertilizer spreaders include three main functional components: the hopper or tank, the metering system, and the distributor. A hopper (for solid materials) or a tank (for liquids and slurries) is the container where the fertilizer is loaded. In tractor-mounted spreaders, the hopper capacity is generally below 1000–1500 kg, while for trailed equipment the capacity can reach 5000 kg. The load capacity of slurry tankers and manure spreaders is much higher (from 3 m3 to more than 25 m3), since the application rates for organic fertilizers are very high to compensate for their low concentration of nutrients. Hoppers and tanks are treated to be corrosion resistant, while slurry tankers are typically made of stainless steel for similar reasons.
The fertilizers are fed from the hopper or tank either by gravity (centrifugal spreader), a mechanical conveyor (pneumatic spreader or manure spreader), or by pressure (slurry tanker), through the metering system toward the distribution system. The mass outflow Q (kg min−1) in fertilizer spreaders is often metered by an adjustable gate, which can change the outlet’s opening area to set the fertilizer application rate (Equation 3.2.11). Since the flow characteristics of granular material through a given opening depend on particle size, shape, density, friction, etc., a calibration procedure is necessary to establish the relationship between gate opening and mass flow Q for a specific fertilizer. This is generally carried out by disabling the distributor system, setting the metering gate in a defined position, collecting the fertilizer discharged during a given time (e.g., 30 s) with a bucket, and finally computing the mass flow obtained. This procedure may be repeated for multiple metering gate positions, although the manufacturer usually provides instructions to extrapolate from a single measurement point (calibration factor) to a full relationship between gate opening and flow.
In a manure spreader, mass outflow is metered by varying the speed of a floor conveyor or of a hydraulic push-gate in the case of very large machines. Flow control in pressurized slurry tankers is made through a metering valve or by varying the pump speed for machines with direct slurry pumping.
The metered flow is then spread by the distributor across the distribution width. In centrifugal spreaders, the distribution is produced by two (occasionally one in small machines) rotating discs powered by the tractor PTO or by hydraulic or electric motors. On each disc, two or more radial vanes impart a centrifugal acceleration to the fertilizer granules, which are propelled away with velocities ranging from 15 m s−1 to more than 50 m s−1, within a direction angle determined by the combination of the tangential and radial components of the velocity. The granules then follow an almost parabolic trajectory in the air (aerodynamic drag decelerates the particles), giving a very large distribution width.
In addition to the rotational speed of the discs, a crucial parameter in defining the spreading pattern in a centrifugal spreader is the feeding position, i.e., the dropping position of the granules on the disc that, in turn, defines the time during which each particle is accelerated by the vanes and hence its launching velocity. By changing the feeding position, together with the metering gate opening, the distribution pattern and width can be kept uniform for different fertilizer granules or it can be used to obtain specific distribution patterns, such as for spreading near field borders.
In pneumatic spreaders, the fertilizer granules are fed into a stream of carrier air generated by a fan. The air stream transports the fertilizer through pipes mounted on a horizontal boom and the fertilizer is finally distributed by hitting deflector plates. The spacing between plates is about 1–2 m producing a small overlapping of spreading, which results in uniform transversal distribution across the whole working width.
Manure spreaders usually have two or more rotors mounted on the back of the spreader. The rotors are equipped with sharp paddle assemblies that shred and spread manure particles over a distributing width of 5–8 m. Slurry is spread in similar widths by a pressurized flow into a deflector plate or by means of soil applicators that deposit the slurry directly on or into the ground.
Crop Protection
The development and productivity of crops require protection against the competition by undesired plants (weeds), against infestations by diseases (fungi, viruses, and bacteria), and damage from pest insects. This can be obtained through the integration of one or more different approaches, including rotation of crops and selection of resistant varieties, crop management techniques, distribution of beneficial organisms, and application of physical (e.g., mechanical or thermal) or chemical treatments.
The current primary method of crop protection is the use of chemical protection products, commonly pesticides, which play a vital role in securing worldwide food and feed production. Pesticide formulations are sometimes distributed by fumigation, as powders, or as solid granules (e.g., applied during seeding). But the technique most used is liquid application, after dilution in water, by means of a pressurized liquid sprayer.
Droplet Size
To optimize the biological efficacy of pesticides, the liquid is atomized into a spray of droplets. The number of droplets and their size affect the spray’s ability to cover a larger surface, to hit small targets, and to penetrate within foliage. Each spray provides a range, or distribution, of droplet sizes. The droplet size is usually represented as a volume median diameter (VMD or DV0.5) in μm and is classified as in Table 3.2.2. Crop protection applications mostly use droplets ranging from fine to very coarse diameters.
Droplet Drift vs. Adhesion and Coverage
The effect of drag and buoyancy forces increases as droplet size decreases. This makes finer sprays more prone to drift, i.e., to be transported out of the target zone by air convection. Moreover, in dry air, evaporation of water reduces the droplet size during transport, especially of small droplets, further amplifying drift risks. Besides the reduced crop protection efficacy, spray drift is a major concern for pesticide deposition on unintended targets, contamination of surface water and surrounding air, and risks due to over-exposure for operators and other people.
On the other hand, coarse droplets cover less target area with the same liquid volume (Figure 3.2.9), and their adhesion on target surfaces after impact can be problematic. If the kinetic energy at impact overcomes capillary forces, the droplet shatters or bounces, resulting in runoff instead of adhering to the surface as liquid.
Table $2$: Droplet size classifications in accordance with ANSI/ASABE S572.1 (ASABE Standards, 2017), and typical use in crop protection applications. CP and SP refer to the action mode of the product: CP = contact product; SP = systemic product.
Droplet Size Category | Symbol | VMD (μm) | Typical Use
Very fine | VF | <140 | Greenhouse fogging
Fine | F | 140–210 | CP on tree crops
Medium | M | 210–320 | CP on arable crops
Coarse | C | 320–380 | SP on crops; CP on soil
Very coarse | VC | 380–460 | CP on soil; anti-drift applications
Extremely coarse | EC | 460–620 | Anti-drift applications; liquid fertilizers
Ultra coarse | UC | >620 | Liquid fertilizers
As a consequence, the optimal droplet size distribution is a matter of careful optimization: while a fine spray can take advantage of air turbulence and be beneficial for improving coverage in a dense canopy, a medium-coarse spray is preferred to decrease drift risks and the associated product losses to air, water, and soil. Coarse to very coarse sprays need to be used when wind velocity is above the optimal range (1–3 m s−1) and treatments cannot be postponed.
Action Mode and Application Parameters
Spray characteristics have to be adapted to the features of the target and crop and to the pesticide action mode. There are mainly two broad groups of pesticide modes of action: contact pesticides, with a protection efficacy restricted to the areas directly reached by the chemical in a sufficient amount; and systemic pesticides, with a protection efficacy depending on the overall absorption by the plant of a sufficient amount of chemical and its internal translocation to the site of action.
Contact products generally require high deposit densities (75–150 droplets cm−2) for a dense coverage of the target surface, as obtained with closely spaced droplets of finer sprays. On the other hand, for systemic products, coverage of the surface is less important provided that a sufficient dose of pesticide is delivered to, and absorbed by, the plant. Hence, lower deposit densities (20–40 droplets cm−2) are used, associated with coarser sprays.
Application Rate
By combining the droplet size and deposit density chosen for a pesticide treatment, the application rate AR, i.e., the liquid volume per unit of sprayed area, can be computed as:
$AR = \frac{\text{liquid volume}}{\text{sprayed area}} = (\text{mean drop volume})(\frac{\text{number of drops}}{\text{sprayed area}})$
The determination of the sprayed area should take into account that, for soil treatments, the target surface is the field area, whereas for plant treatments, it is the total vegetation surface of the plant. The relationship between the two is usually expressed as leaf area index, LAI, which is the ratio between the leaf surfaces of the target and the surface of the field in which it is growing. At early stages of growth, an LAI of about 1 is usually assumed (as for soil), while with further development LAI increases to 5 or more, depending on the crop. The previous expression can then be rewritten as:
$AR = \frac{4}{3}\pi (\frac{VMD}{2})^{3} \times n_{d} \times LAI = \frac{\pi}{6} VMD^{3}(\mu m^{3})(10^{-15}\ L\ \mu m^{-3}) \times n_{d}(cm^{-2})(10^{8}\ cm^{2}\ ha^{-1}) \times LAI$

that is,

$AR = 10^{-7} \times \frac{\pi}{6} VMD^{3} \times n_{d}\times LAI$
where AR = application rate (L ha−1)
VMD = volume median diameter of the spray (μm)
nd = the deposit density on the target surface (number of droplets cm−2)
LAI = leaf area index of sprayed plants (decimal; = 1 for soil and early growth stages)
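The application-rate relationship above reduces to a one-line formula. The following sketch is illustrative; the droplet size, deposit density, and LAI chosen for the demonstration are assumptions, not values from the text.

```python
# Minimal sketch of the application-rate relationship above; the droplet
# size, deposit density, and LAI below are illustrative assumptions.
import math

def spray_application_rate(vmd_um, n_d_per_cm2, lai=1.0):
    """AR (L/ha) from droplet VMD (um), deposit density (droplets/cm^2),
    and leaf area index (1 for soil or early growth stages)."""
    return 1e-7 * math.pi / 6 * vmd_um**3 * n_d_per_cm2 * lai

# Medium spray (VMD = 250 um) at 100 droplets/cm^2, LAI = 1
print(round(spray_application_rate(250, 100), 1))  # 81.8 L/ha
```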
Functional Components of the Sprayer
The sprayer is the machine that carries, meters, atomizes, and applies the spray material to the target. The main functional components of a sprayer are shown in Figure 3.2.10.
The tank contains the water-pesticide mixture to be applied, with capacities that vary from 10 L for human-carried knapsack models to more than 5 m3 for large self-propelled sprayers. Tanks are made of corrosion-resistant and tough material, commonly polyethylene plastic, suitably shaped for easy filling and cleaning. To keep uniform mixing of the liquid in the tank, suitable agitation is provided by the return of a part of the pumped flow or, more rarely, by a mechanical mixer.
The pump produces the liquid flow in the circuit, working against the resistance generated by the components of the system (valves, filters, nozzles, etc.) and by viscous friction. The higher the resistance that the pump must overcome, the greater the pressure of liquid in the circuit.
Diaphragm pumps (Figure 3.2.11) are the most common type used in sprayers, because they are lightweight, low cost, and can handle abrasive and corrosive chemicals. The pumping chamber is sealed by a flexible membrane (diaphragm) connected to a moving piston. When the piston moves to increase the chamber volume (Figure 3.2.11, left), liquid enters by suction through the inlet valve. As the piston returns, the diaphragm reduces the chamber volume (Figure 3.2.11, right), propelling the liquid through the outlet valve.
As for any positive displacement pump, diaphragm pumps deliver a constant flow for each revolution of the pump shaft, regardless of changes of pressure (within the working range):
$q_{p} = 10^{-3}V_{p}n_{p}$
where qp = flow rate delivered by the pump (L min−1)
Vp = pump displacement (cm3)
np = rotational speed of the pump shaft (min−1)
In tractor-coupled sprayers, the pump is actuated by the tractor PTO shaft to provide the spraying liquid hydraulic power necessary to operate the circuit. The hydraulic power is:
$P_{hyd} = pq_{p}/ 60000$
where Phyd = hydraulic power (kW)

p = pressure of the circuit (kPa)
qp = flow rate produced by the pump (L min−1)
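The pump-flow and hydraulic-power relations can be combined in a short sketch. The pump displacement, shaft speed, and circuit pressure used below are illustrative assumptions; the power comes out in kW given p in kPa and q in L min−1.

```python
# Minimal sketch of the pump-flow and hydraulic-power relations; the
# displacement, shaft speed, and pressure below are illustrative assumptions.

def pump_flow_L_min(V_p_cm3, n_p_rpm):
    """Positive-displacement pump flow qp (L/min) from displacement
    V_p (cm^3 per revolution) and shaft speed n_p (rev/min)."""
    return 1e-3 * V_p_cm3 * n_p_rpm

def hydraulic_power_kW(p_kPa, q_L_min):
    """Hydraulic power (kW) = p (kPa) * q (L/min) / 60,000."""
    return p_kPa * q_L_min / 60_000

q = pump_flow_L_min(180, 540)                 # e.g., 180 cm^3 pump at 540 rpm PTO
print(round(q, 1))                            # 97.2 L/min
print(round(hydraulic_power_kW(500, q), 2))   # 0.81 kW at 500 kPa
```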
Some sprayers use centrifugal pumps. In these cases, the flow is not fixed per revolution of the shaft; it depends on the pressure against which the pump must work.
Control valves in the circuit enable the desired functioning of the sprayer, by controlling flow direction and volume in the different sections, and by maintaining a desired liquid pressure that, in turn, defines the spray characteristics and the distributed volume.
Since pressure is a fundamental parameter for spray distribution, a pressure gauge with appropriate accuracy and measurement range (e.g., two times the expected maximum pressure) is always installed in the sprayer circuit.
Nozzles are the core component of the sprayer that atomizes the pesticide-water mixture into droplets, producing a spray with a specific pattern to cover the target. The most common atomizing technology in sprayers is the hydraulic nozzle (Figure 3.2.12), which breaks up the stream of liquid as it emerges by pressure from a tiny orifice into spray droplets.
For a given liquid (i.e., for a given density and surface tension), the operating pressure and the orifice area directly determine the size of the droplets of the produced spray. In particular, by increasing the pressure with a specific nozzle, the size of the droplets decreases. Conversely, for a given pressure the size of the droplets increases with the area of the nozzle orifice.
Flow Rate Metering by Pressure Control
The discharge flow rate through a nozzle with a given orifice size can be metered by setting the liquid pressure in the circuit before the nozzle. The Bernoulli equation, which describes the conservation of energy in a flowing liquid, can be applied to the liquid flow at two points of the nozzle body: one in the nozzle chamber before entering the nozzle orifice (point 1 in Figure 3.2.12) and the other at the outlet of the orifice (point 2 in Figure 3.2.12). Neglecting the energy losses due to viscous friction, the Bernoulli equation gives:
$p_{1} + \frac{1}{2}\rho \nu_{1}^{2} + \rho gz_{1} = p_{2}+ \frac{1}{2} \rho \nu_{2}^{2} + \rho gz_{2}$
where p1 = absolute pressure of the liquid in the circuit
p2 = atmospheric pressure
ρ = density of the liquid (kg m−3)
v1 and v2 = mean velocities of the liquid before entering the orifice and just after it
g = acceleration due to gravity
z1 and z2 = vertical positions of the two considered points
From flow continuity, it is also obvious that:
$q_{n}= A_{1}\nu_{1} = A_{2}\nu_{2}$
where qn = flow rate through the nozzle
A1 and A2 = area of sections of the nozzle chamber and orifice, respectively
Due to the tiny diameter of the orifice, the fluid velocity v2 in the orifice is much larger than the velocity v1 in the chamber, so v1 can be neglected in the equation. Moreover, due to the small distance between the two points, we can consider z1 ≈ z2. The Bernoulli equation for the nozzle then simplifies to:

$p \simeq \frac{1}{2} \rho (\frac{q_{n}}{A_{2}})^{2}$
that can be rearranged, leading to the nozzle equation:
$q_{n} = 1.9\ c_{d}A_{o} \sqrt{\frac{2p}{\rho}}$

where qn = flow rate discharged by the nozzle (L min−1)

1.9 = a constant resulting from units adjustment

cd = discharge coefficient, <1 (decimal), that accounts for the losses due to viscous friction through the orifice (these losses are typically proportional to the square of v2)

Ao = area of the nozzle orifice (mm2)

ρ = density of the liquid (kg m−3)

p = operating pressure of the circuit (kPa), i.e., p = p1 − p2, the differential pressure relative to the atmosphere
In practical applications, Equation 3.2.16 is used in the form:
$q_{n} = k_{n} \sqrt{p}$
where kn is a nozzle-specific efflux coefficient that incorporates its construction characteristics and viscous losses. The value of kn (commonly in the range of 0.03 to 0.2 L min−1 kPa−1/2) can be derived from flow-pressure tables provided by the nozzle manufacturer.
Equation 3.2.17 shows that the discharged volume rate of pesticide-water mixture can be varied by adjusting the circuit pressure p. Increasing the pressure will increase the flow rate and decrease the spray droplet size simultaneously. However, there is usually a limited working range of pressure (depending on nozzle type, this can be from 150 kPa up to 800 kPa, rarely above) because outside that range, the spray droplets will be either too large or too small. In this working range of pressure, the flow rate increases proportionally to the square root of pressure; if larger changes in discharge rate are needed, a nozzle with a different orifice area (i.e., different kn) has to be selected.
Sprayer Application Rate Metering
For a required application rate AR (corresponding to a defined dose of pesticide), the sprayer volume rate, q, to be discharged in the field has to be set to a value computed by applying Equation 3.2.12, which includes the operating speed and width of the machine. By dividing the total outflow rate q by the number of nozzles equipping the sprayer, the nozzle flow rate qn is obtained.
Once an appropriate nozzle is chosen (i.e., a nozzle able to deliver qn within the usual working range of pressures), the circuit pressure has to be fine tuned, by means of the control valve, until the liquid pressure value (read on the pressure gauge) is the one obtained by solving Equation 3.2.17 for p and using the kn value from the nozzle manufacturer, i.e.:
$p = (\frac{q_{n}}{k_{n}})^{2}$
This relationship is also used in sprayer electronic controllers to achieve a constant application rate as the sprayer speed varies, or to adjust the application rate for different areas in the fields.
Applications
The concepts and calculations discussed above are widely used to design crop production equipment, and also for the adjustment and management of equipment to suit local conditions on individual farms.
Tillage Equipment
Plows are used for deep tillage operations and are unique in soil movement as they invert soil to be almost upside down, as shown in Figure 3.2.13. Disc plows cultivate the soil in shallow layers aiming at weed elimination, loosening the soil and uprooting crop plants remaining after harvest.
In contrast, PTO-driven implements (such as shown in Figure 3.2.14) cultivate the soil more intensively, breaking it into smaller pieces. The intensity is controlled by the axle rotation speed and the tractor speed, resulting in a bite length as given by Equation 3.2.4. Powered implements use the engine power of the tractor more efficiently because wheel slip due to non-optimal traction conditions in the field is avoided. They are smaller than primary tillage machines in length and weight and are therefore appropriate for combining with other tillage tools (e.g., rollers) or seed drills.
Some other implements used on farms for primary and secondary tillage are illustrated in Figure 3.2.15.
Figure $15$: (a) Tine cultivator with tines line spacing and tine spacing; (b) disc cultivator in A-type formation to compensate lateral forces.
Planting Equipment
The most common technique in sowing seeds is drilling with metering by wheels for each row as shown in Figure 3.2.16. Each row needs a metering wheel and a share. The hopper supplies all rows and extends over the entire working width. The metering wheels grasp seeds from the hopper bottom and transport them over a bottom flap to drop them via the seed tubes into the share and from there into the soil. As a result, the seeds are not distributed in the row with constant distances but are placed randomly. The function of the marker (Figure 3.2.16a) is to guide the tractor in the subsequent path.
Centralized hoppers can increase capacity of seeding machines. The centralized hopper has only one metering wheel, under the conical hopper bottom. An air stream conveys the grain via a distributor to the shares using flexible pipes.
Figure $16$: (a) Regular seed drill; (b) pneumatic seed drill.
Seeds can be sown in well prepared soil (after secondary tillage), which is the regular case (Figure 3.2.17), or under other soil conditions when minimum tillage or no till is applied. Minimum tillage cultivates the soil without deep intervention (e.g., plowing) and no tillage means seeding without any tillage manipulation of the soil. A typical part of the seeding machine is the hopper for fertilizer (between tractor and seeder; Figure 3.2.17b). This combination offers fertilization and seeding in the same pass.
Machines that plant potatoes, transplant seedlings, and so on, have mechanisms that are quite varied, depending upon what is to be planted into the soil. One example for potatoes is displayed in Figure 3.2.18a. A chain with catch cups passes through a pile of potatoes in the hopper and picks up potatoes. If there is more than one potato on a cup, the excess potatoes fall off in the horizontal section. The potatoes are then transported down and placed at a constant separation distance in the open furrow.
A rice transplanter is displayed in Figure 3.2.18b. The seedlings are kept in trays gliding down to the transplanting mechanism. A crank arm for each row will pick out a single seedling and place it in the soil.
Figure $17$: (a) Tractor-mounted seed drill working in a well prepared soil with small wheels for recompaction of the cover soil after embedding the seeds (wheat); (b) precision seeder sowing maize (corn) in well prepared soil with larger recompaction wheels.
Fertilizer Distribution Equipment
The most common fertilization equipment is the centrifugal spreader (Figure 3.2.19). This machine is often powered by a tractor through the PTO shaft and often mounted on the tractor’s three-point hitch. Large units (hopper capacity above 1500 kg) can have their own wheels and be pulled by a tractor, or be mounted on trucks. The fertilizer granules flow by gravity, with the aid of an agitator, from two outlets on the hopper bottom. The outlet area is adjustable through a sliding gate, which meters the mass outflow Q (kg min−1) to control the rate of fertilizer application to the field.
Figure $18$: (a) Potato transplanter with device to complete the filling of cups; (b) rice transplanter with device moving the seedlings from the tray in the soil.
Under the outlets, the metered fertilizer drops on rotating discs (30 to 50 cm in diameter) that impart a centrifugal force on the fertilizer granules, thus distributing them at distances that can reach 50 m. Nevertheless, the working width of centrifugal spreaders is typically 18–24 m and rarely higher than 30 m. Centrifugal spreaders do not uniformly deposit the fertilizer across the working width, but rather with a triangular pattern that requires a partial overlap between two adjacent passes to obtain a uniform transversal distribution within the field. Field speeds typically range from 8 to 12 km h−1, but smooth ground conditions can enable applications faster than 15 km h−1.
Liquid Fertilizer Distributors
Use of liquid mineral fertilizers is rather limited in Europe, except in vegetable crops where the nutrient solution is distributed by a sprayer (see next section) or, much more frequently, in association with irrigation through micro-irrigation systems (fertigation). On the other hand, in North America, fertilization with liquid anhydrous ammonia is very common due to its high nitrogen content (82%) and low cost. Non-refrigerated anhydrous ammonia is applied from high pressure vessels, and it has to be handled with care to prevent hazardous situations. Equipment for application (Figure 3.2.20) includes injectors mounted on soil-cutting knives spaced 20–50 cm apart that reach a depth of 15–25 cm in the soil. When delivered in the soil, the ammonia turns from liquid into gas that reacts with water and rapidly converts to ammonium, which is available to plant roots. Ammonium strongly adheres to mineral and organic matter particles in the soil, helping to prevent gaseous or leaching losses.
Slurry Tankers
A slurry tanker (Figure 3.2.21) is commonly used for distributing organic fertilizer in areas with livestock. The tanker is a trailed, massive piece of equipment mounted on a single or double axle frame (or three axles for tank capacities above 20 m3) equipped with wide wheels (up to 800 mm) to reduce soil compaction. In vacuum tankers, the stainless steel tank is pressurized to 150–250 kPa with compressed air to drive the slurry out for spreading. During tank filling, the pump produces a negative pressure difference (vacuum) relative to the atmosphere, enabling the slurry to be sucked into the tank through a flexible pipe. Slurry flow can also be obtained with direct slurry pumping by a multiple-lobe pump.
Traditional distribution from a slurry tanker involves a deflector or splash plate mounted on the back of the tank. The slurry impacts the plate and thus is spread over an umbrella pattern covering a width of 4–8 m. Splash plates have been banned by legislation in some countries due to odor emission and nutrient losses (e.g., by ammonia volatilization), so they have been replaced by soil applicators. Soil applicators have multiple hoses mounted on a horizontal boom ending with trailed openings, spaced about 20–30 cm apart, that deposit the slurry flowing through the hose directly on the soil. The soil applicator can also be an injector, made of a tine or a vertical disc tool, that makes a groove in the ground where the slurry is injected at depths ranging from 5 cm (for meadows) to 15–20 cm (tilled soil).
To obtain a uniform distribution of slurry flow among the multiple hose lines, soil applicators require the adoption of a homogenizer. This is a hydraulically-driven shredding unit that processes the slurry with rotating blades to cut fibers and clogs to ensure the regular and even feeding of all the hoses connected to the injectors.
Sprayers
In addition to the common functional parts of sprayers (tank, pump, valves, boom, nozzles), sprayers are manufactured in a wide variety of types for specific crops, various application techniques, environmental regulations, purchase costs, etc. CIGR (1999) provides information about various types of sprayers.
Boom sprayers (Figure 3.2.22) are the main type used for protection treatments on field crops (e.g., cereals, vegetables, and leguminous crops). They are named for the wide horizontal boom where nozzles are mounted. Booms often range from 8 to 36 m (and sometimes more) in width, with a height from the soil adjustable from 30 cm to more than 150 cm to ensure a good spray pattern at the level of the target. The boom is generally self-leveling to reduce travel undulation and provide more uniform spray application.
Nozzles are mounted on the booms with a typical spacing of 50 cm, although the spacing may range from 20–150 cm depending upon the specific application and the type of nozzle. The most used nozzle on boom sprayers is the fan type that can produce a wide spectrum of droplet size, from medium-fine to coarse spray, at low pressure (150–500 kPa), meeting most field crop spraying requirements.
A boom sprayer is commonly mounted on a tractor by the three-point hitch, or in the case of sprayers with large capacity tanks (above 1 m3), may be a trailed unit pulled by a tractor or self-propelled. Operating speed can vary, largely with field conditions and type of treatment, but during accurate protection treatments a speed range of 7–10 km h−1 is typical.
Examples
Example $1$
Example 1: Work rate and timeliness of row-crop planting
Problem:
A farmer has a six-row planter for planting maize with a row spacing of 75 cm. The farmer wants to know the field capacity of the planter and whether it can successfully plant 130 ha within five working days. If not, what size planter could do this task?
Assumptions:
• Forward speed, s, = 9 km h−1. This is a typical value that depends on the seedbed (firmness, levelness, residue, etc.) and the characteristics of the equipment.
• Field efficiency, ef, = 0.65. This typical value allows for non-planting times, such as filling the planter with seeds and turning at the end of rows.
• Five working days. This is given, but is very dependent upon the weather.
• Eight hours per day of effective field time. This is the time that the planter is available for field work and does not include time for machine preparation, transfer to fields, operator breaks, and other non-planting activities.
Solution
The first step is to calculate the field capacity, Ca, using Equation 3.2.2:
$C_{a} = 0.1 e_{f}\ w \ s$ (Equation $2$)
We are given ef and s. The planter’s operating width, w, can be calculated as:
$\text{number of rows} \times \text{width per row} = 6 \text{ rows} \times 75 \text{ cm row}^{-1} \times (m/ 100 \ cm) = 4.5 m$
Substituting the values into Equation 3.2.2:
$C_{a} = 0.1 \times 0.65 \times 4.5m \times 9 \text{ km h}^{-1} = 2.63 \text{ ha h}^{-1}$
Therefore, the planter is capable of planting 2.63 ha every hour. If the planter is used to plant on five days for eight hours on each day, the area planted in that time is:
$A = 2.63 \text{ ha h}^{-1} \times 5 \text{ days} \times 8 \text{ h day}^{-1} = 105.2 \text{ ha}$
That is less than the required 130 ha. Perhaps the farm staff will have to work more hours, but one option for the farmer would be to get a larger planter, which may require a larger tractor. The following calculations help the farm manager select equipment and manage its use.
The field capacity of the new planter needs to be:
$C_{a} \geq (130 \text{ ha}) / (40 \ h) = 3.25 \text{ ha h}^{-1}$
Then, by rearranging Equation 3.2.2 the minimum operating width can be computed:
$w \geq (3.25 \text{ ha h}^{-1}) (10)/ ( 0.65 \times 9 \text{ km h}^{-1}) = 5.56 \text{ m}$
This width corresponds to a number of rows:
$Nr \geq (5.56 \text{ m}) / (0.75 \text{ m row}^{-1}) = 7.41 \text{ rows}$
Therefore, the farmer should get an 8-row planter (i.e., the next market size ≥ 7.41) to accomplish the planting of 130 ha within 40 hours of work.
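The sizing logic of this example can be captured in a few lines. The sketch below mirrors the worked numbers; the assumption that planters come in even row counts (rows_per_step = 2) is an illustrative simplification, not a rule from the text.

```python
# Minimal sketch of the sizing logic in Example 1; values mirror the worked
# example. The even-row rounding (rows_per_step = 2) is an assumption about
# available market sizes.
import math

def rows_needed(area_ha, hours, e_f, s_kmh, row_m, rows_per_step=2):
    ca_required = area_ha / hours                  # required capacity, ha/h
    w_required = ca_required * 10 / (e_f * s_kmh)  # minimum width, m
    n_rows = w_required / row_m
    return math.ceil(n_rows / rows_per_step) * rows_per_step

print(rows_needed(130, 40, 0.65, 9, 0.75))  # 8 rows
```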
Example $2$
Example 2: Draft force while plowing
Problem:
When designing the frame and hitch of a plow, an engineer needs to know the draft force to ensure that the frame and hitch have enough strength. The draft force also affects tractor selection, since the draft force and speed determine the required pulling power. Determine the draft force needed to pull the plow at a speed of 7 km h−1 given the following information about the plow:
• 4-share plow
• 1 gauge wheel
• 5000 N weight on gauge wheel
• 0.15 gauge wheel rolling resistance factor
• 40 cm furrow width
• 30 cm furrow depth
• 5 N cm−2 static factor
• 0.21 N s2 m−2 cm−2 dynamic factor
Solution
Calculate the draft force using Equation 3.2.3:
$F_{z} =nF_{v}\rho _{r} + ikw_{f}d+i\epsilon w_{f}dv^{2}$ (Equation $3$)
where Fz = draft force (N)
n = number of gauge wheels = 1
Fv = vertical force = 5000 N
ρr = rolling resistance = 0.15
i = number of moldboards or shares = 4
k = static factor = 5 N cm−2
wf = width of furrow = 40 cm
d = depth of furrow = 30 cm
ε = dynamic factor = 0.21 N s2 m−2 cm−2
v = traveling speed = 7 km h−1 = 1.944 m s−1 (the equation requires v in m s−1)

Draft force $F_{z}=1 \times 5000 \times 0.15 + 4 \times 5 \times 40 \times 30 + 4 \times 0.21 \times 40 \times 30 \times 1.944^{2} \approx 28,560 \text{ N}$
Example $3$
Example 3: Length of a rotovator (rotary tiller) bite
Problem:
Determine the bite taken by each blade on the rotary tiller with these characteristics:
• rotary tiller turning at 240 revolutions per minute
• travelling at 5 km h−1
• 4 blades on each tool assembly
Solution
Use Equation 3.2.4:
$B = \frac{10,000 \ \nu}{6 \ r \ z}$ (Equation $4$)

where B = bite length (cm)

v = travel speed = 5 km h−1

r = rotary speed = 240 min−1

z = number of blades per tool assembly = 4

Bite length, $B= 10,000 \times 5 / (6 \times 240 \times 4) = 8.68 \text{ cm}$
Each blade takes an 8.68 cm bite. The size of this bite will affect the properties of the tilled soil.
Example $4$
Example 4: Nitrogen fertilization with a centrifugal spreader
Problem:
A test was conducted to determine if nitrogen fertilizer was being applied uniformly at the target application rate. The situation is described by the following:
• centrifugal spreader with working width of 18 m
• travel speed of 9 km h−1
• desired nitrogen dose of 70 kgN ha−1
• calcium ammonium nitrate is 27% nitrogen
• spreader hopper holds 1000 kg of calcium ammonium nitrate
• spreader tested with 50 cm by 50 cm trays collecting applied fertilizer
• figure below shows the amount of fertilizer that was collected in each tray while testing the spreader
Analyze the collected data to determine the following:
(a) spreader flow rate (kg min−1) of calcium ammonium nitrate to achieve the desired nitrogen dose
(b) time between fillings of the hopper
(c) average application rate and coefficient of variation from the test
Solution
(a) By applying Equation 3.2.10, the amount of calcium ammonium nitrate needed to achieve 70 kgN ha−1 is:

$D = c_{AC} AR$ (Equation $10$)

where D = dose of application = 70 kgAC ha−1

cAC = content of active compound in the raw material = 0.27 kgN kg−1

AR = application rate

Rearranging and using the given information,

$AR= (70 \text{ kg}_{N}\text{ ha}^{-1})/ (0.27 \text{ kg}_{N}\text{ kg}^{-1}) = 259.3\text{ kg ha}^{-1}$

This corresponds to a flow rate of the fertilizer (Equation 3.2.11):

$Q = AR \ w \ s / 600$ (Equation $11$)

$Q = (259.3 \text{ kg ha}^{-1}) \times 18 \text{ m} \times 9 \text{ km h}^{-1} / 600 = 70 \text{ kg min}^{-1}$

Therefore, the flow out of the spreader must be adjusted to 70 kg min−1.

(b) The time between fillings is the time it takes to spread all the fertilizer from the spreader:

$t = 1000 \text{ kg} / (70 \text{ kg min}^{-1}) = 14.3 \text{ min}$

The hopper must be refilled every 14.3 minutes.
(c) The average amount applied is found by summing the amounts of fertilizer in the trays and dividing by the number of trays:

$\bar{x} = (6.95+7.25+6.30+ …+6.80)/11=6.62\text{ g}$

The mean application rate can be found by dividing that amount by the surface area of a tray:

$\text{Mean AR} = \bar{x}/(\text{area of tray})$

$= (6.62 \text{ g}) \times (\text{kg}/1000 \text{ g}) / [(0.5 \text{ m}) \times (0.5 \text{ m}) \times (\text{ha} /10,000 \text{ m}^{2})]=264.8 \text{ kg ha}^{-1}$

That is close to the desired rate, but represents an error of:

$[(264.8 - 259.3)/259.3] \times 100\text{%}=2.1 \text{% error}$
The uniformity of distribution is quantified by the coefficient of variation (CV) of the collected material as shown by Equation 3.2.6:
$CV = \frac{\sqrt{\frac{\sum{(x_{i}-\bar{x})^{2}}}{N-1}}}{\bar{x}} \times 100\text{%}$ (Equation $6$)
where CV = coefficient of variation (%)
N = number of measured samples
xi = amount of fertilizer collected in each tray (g)
$\bar{x}$= mean amount (g)
That is, by using the appropriate values given in the fertilizer test figure:
$CV = \sqrt{\frac{(6.95-6.62)^{2} + (7.25-6.62)^{2} + (6.30 - 6.62)^{2}+…+(6.80-6.62)^{2}}{11-1}}\times \frac{100 \text{%}}{6.62} = 6.2 \text{%}$

Since a coefficient of variation under 10% is considered good, the field test shows that the spreader is performing satisfactorily.
Example $5$
Example 5: Sprayer pressure setting
Problem:
A fungicide treatment has to be sprayed to a crop at an application rate, AR = 250 L ha−1. For this kind of treatment the farmer mounts nozzles that deliver a flow qn = 1.95 L min−1 at a circuit pressure of 400 kPa. (This information is provided by the nozzle manufacturer.) Determine the proper pressure to be set in the sprayer circuit in order to distribute the fungicide at the desired application rate.
Assumptions:
• boom width, w, = 24 m with a typical nozzle spacing, d, = 50 cm
• forward speed, s, = 8 km h−1, usual for fungicide treatments (depends on wind conditions)
Solution
The first step is to calculate the volume rate q (L min−1) required to distribute the application rate by applying Equation 3.2.12:
$q = AR \ w \ s / 600$ (Equation $12$)
Substituting the given values into the equation:
$q = 250 \text{ L ha}^{-1} \times 24 \text{ m} \times 8 \text{ km h}^{-1} / 600 = 80 \text{ L min}^{-1}$
The number of nozzles equipping the boom is:
$(24 \text{ m}) / (0.50 \text{ m nozzle}^{-1}) = 48 \text{ nozzles}$
The required flow rate per nozzle qn is:
$q_{n}=(80 \text{ L min}^{-1}) / (48 \text{ nozzles}) = 1.67 \text{ L min}^{-1}$
In order to choose the pressure setting to obtain the desired flow of 1.67 L min−1 we need to calculate, for these nozzles, the discharge coefficient kn of Equation 3.2.17:
$q_{n}= k_{n} \sqrt{p}$ (Equation $17$)
By substituting the values provided by the nozzle manufacturer (qn = 1.95 L min−1, p = 400 kPa) we find:
$k_{n} = \frac{1.95 \text{ L min}^{-1}}{\sqrt{400 \text{ kPa}}} = 0.0975 \text{ L min}^{-1}\text{ kPa}^{-1/2}$
Then, by Equation 3.2.18 we compute the set value of the sprayer circuit pressure:
$p = (\frac{q_{n}}{k_{n}})^{2}$ (Equation $18$)
$p = (\frac{1.67}{0.0975})^{2} = 293 \text{ kPa}$
Thus, the metering valve has to be adjusted until the circuit pressure reads 293 kPa (2.93 bar) on the pressure gauge.
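The whole chain of this example, from application rate to circuit pressure, fits in one short function. The sketch below mirrors the worked numbers; note that it keeps full precision for qn, so it returns about 292 kPa where the example, after rounding qn to 1.67 L min−1, reports 293 kPa.

```python
# Minimal sketch chaining Equations 3.2.12, 3.2.17, and 3.2.18 as in
# Example 5; all inputs mirror the worked example.

def sprayer_pressure_kPa(AR, w_m, s_kmh, n_nozzles, q_ref, p_ref):
    """Circuit pressure (kPa) for application rate AR (L/ha), given a
    nozzle rated at q_ref (L/min) at reference pressure p_ref (kPa)."""
    q_total = AR * w_m * s_kmh / 600   # Eq. 3.2.12: boom flow, L/min
    q_n = q_total / n_nozzles          # per-nozzle flow, L/min
    k_n = q_ref / p_ref**0.5           # Eq. 3.2.17 solved for k_n
    return (q_n / k_n) ** 2            # Eq. 3.2.18

print(round(sprayer_pressure_kPa(250, 24, 8, 48, 1.95, 400)))  # ~292 kPa
```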
Image Credits
Figure 1. Oberti, R. (CC By 4.0). (2020). Typical operations involved in growing field crops.
Figure 2. Oberti, R. (CC By 4.0). (2020). The field capacity of a machine.
Figure 3. Schulze Lammers, P. (CC By 4.0). (2020). (a) Moldboard plow body. (b) Chisel tine of a cultivator. (c) Disc plough.
Figure 4. Schulze Lammers, P. (CC By 4.0). (2020). Seed distribution (a) drilled seed, (b) band seed, (c) broadcasted seed, (d) precision seed.
Figure 5. Schulze Lammers, P. (CC By 4.0). (2020). Studded seed wheel for metering seeds, with bottom flap for adjustment to seed size.
Figure 6. Schulze Lammers, P. (CC By 4.0). (2020). (a) Seed deposition, drilled. (b) Frequency of seed distances, drilled.
Figure 7. Schulze Lammers, P. (CC By 4.0). (2020). Fluted pipe with plate for distribution of seeds along the circumference into the seed tubes, and cell wheel in detail.
Figure 8. Schulze Lammers, P. (CC By 4.0). (2020). Precision seeder singling seed grains for seed placement with definite spacing, (a) mechanical singling by cell wheel. (b) pneumatic singling device with cell wheel.
Figure 9. Oberti, R. (CC By 4.0). (2020). Reducing droplet size.
Figure 10. Oberti, R. (CC By 4.0). (2020). Schematic diagram of a sprayer.
Figure 11. Mancastroppa, S. (CC By 4.0). (2020). Diaphragm pump.
Figure 12. Oberti, R. (CC By 4.0). (2020). Hydraulic nozzle operation.
Figure 13. Schulze Lammers, P. (CC By 4.0). (2020). Tractor mounted moldboard plow working in the field.
Figure 14. Schulze Lammers, P. (CC By 4.0). (2020). Rotary tiller as an example of a PTO-driven tillage implement.
Figure 15. Schulze Lammers, P. (CC By 4.0). (2020). (a) Tine cultivator with tine line spacing and tine spacing. (b) Disc cultivator in A-type formation to compensate lateral forces.
Figure 16. Schulze Lammers, P. (CC By 4.0). (2020). (a) Regular seed drill. (b) Pneumatic seed drill.
Figure 17. Schulze Lammers, P. (CC By 4.0). (2020). (a) Tractor mounted seed drill working in a well prepared with small wheels for recompaction of cover soil after embedding the seeds (wheat). (b) Precision seeder sowing maize in well prepared soil with larger recompaction wheels.
Figure 18. Schulze Lammers, P. (CC By 4.0). (2020). (a) Potato transplanter with device to complete the filling of cup grippers. (b) Rice transplanter with device moving seedlings from the tray in the soil.
Figure 19. Mancastroppa, S. (CC By 4.0). (2020). Centrifugal fertilizer spreader.
Figure 20. Mancastroppa, S. (CC By 4.0). (2020). Equipment for anhydrous ammonia.
Figure 21. Mancastroppa, S. (CC By 4.0). (2020). Slurry tanker.
Figure 22. Mancastroppa, S. (CC By 4.0). (2020). Boom sprayer.
Example 4. Oberti, R. (CC By 4.0). (2020).
Tim Stombaugh
Biosystems and Agricultural Engineering
University of Kentucky
Lexington, Kentucky, USA
Key Terms
• Performance
• Productivity
• Quality
• Efficiency
• Functional processes
• Engagement
• Dissociation
• Separation
• Transport
Variables
Note about units: In this list of variables, dimensions of variables are given. In the text, variable definitions include dimensions as well as example SI units for illustration.
Introduction
One unique skill that biosystems engineers must develop is the ability to understand how mechanical systems interact with biological systems. This interaction is very prevalent in the design of machinery and systems for harvesting grains such as corn (maize, Zea mays), soybean (Glycine max), wheat (Triticum), or canola (Brassica napus). The machines must traverse through a field on a biologically active soil to engage the plants growing in that field. The variability in plant and soil properties (e.g., maturity, moisture content, and structural integrity) within a field can be extensive. This variability presents a challenge to design engineers to conceive machines that can accommodate this variability and provide the machine operator with the flexibility needed to properly engage the plants. The goal of this chapter is to lay the engineering foundation needed to design machinery systems for harvesting grain crops.
Concepts
One key to becoming a great engineer is the ability to identify and understand the core problem to be solved. Too often, engineers focus on improving current solutions to problems rather than looking for better solutions. In the case of grain harvest, the engineer might be tempted to focus on ways to improve the grain combine (Figure 3.3.1), which is the machine most commonly used to harvest grain. The most unique and creative engineering solutions will often come only when the engineer focuses on identifying the fundamental problem to be solved.
With grain harvest, the core challenge is to recover a certain fraction or fractions of the plants in a grain crop that is grown in large fields. The fraction that is to be retrieved may vary by plant and by situation. In corn (maize, Zea mays) harvest, for example, the most commonly harvested plant fraction is the kernel, which is used in a variety of products including food, sugar, and biofuel production. For fresh market sweet corn harvest, the whole ear is recovered with the husks intact. In some animal production systems, the whole ear without husks is recovered for animal feed; in other animal production systems, the entire plant is harvested and ensiled for feed. In these examples, the maturity and moisture content of the plant material may be drastically different if the corn is being recovered for sugar production, animal feed, or human consumption. A further challenge may exist where multiple plant fractions are harvested for different purposes. In industrial hemp (cannabis) production, for example, the seeds of the plant might be recovered for oil or food production, and the plant stems might be recovered separately for fiber production. These multiple production streams are not independent of each other and must be considered together in the design of the mechanization solution. This chapter focuses on systems where only the grain is recovered.
Performance
The performance of a grain harvesting machine or system can be measured using three general metrics: productivity, quality, and efficiency.
Productivity
The productivity of a harvesting machine or operation is a measure of how much useful work is accomplished. As described in ASABE Standard EP496.3 (2015a), it can be quantified using two primary metrics. First, it can be measured on an area basis indicating how much of the area of a field was covered per unit time. This metric is expressed as the effective field capacity (Ca) and can be calculated as:
$C_{a}=swE_{f}$
where Ca = effective field capacity in area per unit time (m2 h−1)
s = field speed in distance per unit time (m h−1)
w = machine width (m)
Ef = field efficiency (decimal)
Second, productivity can be measured on a material basis indicating how much of the grain is recovered per unit time. Material capacity (Cm) is related to the area capacity by:
$C_{m}=C_{a}y_{a}$
where Cm = material capacity in weight or volume per unit time (m3 h−1)
ya = recovered (harvested) crop yield in weight or volume per unit area (m3 m−2)
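To make the productivity relationships concrete, here is a minimal Python sketch that evaluates both capacity equations; the speed, width, efficiency, and yield values are illustrative assumptions, not data from any standard.

```python
def effective_field_capacity(s_mh, w, Ef):
    """Effective field capacity Ca (m^2/h): s in m/h, w in m, Ef decimal."""
    return s_mh * w * Ef

def material_capacity(Ca, ya):
    """Material capacity Cm = Ca * ya; ya is recovered yield per unit area."""
    return Ca * ya

# Hypothetical combine: 5 km/h, 6 m header, 70% field efficiency,
# harvesting wheat yielding 0.335 kg/m^2.
Ca = effective_field_capacity(s_mh=5000, w=6.0, Ef=0.70)   # m^2/h
Cm = material_capacity(Ca, ya=0.335)                       # kg/h
print(f"Ca = {Ca/10000:.2f} ha/h, Cm = {Cm/1000:.1f} t/h")
# Prints Ca = 2.10 ha/h, Cm = 7.0 t/h.
```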
Table $1$: Bushel weight of common grain crops at standardized moisture content.

| Commodity | Moisture Content (%) | Weight (lb/bushel) | Weight (kg/bushel) |
|---|---|---|---|
| Corn (Zea mays) | 15.5 | 56 | 25.40 |
| Soybean (Glycine max) | 13 | 60 | 27.22 |
| Sunflower (Helianthus annuus) | 10 | 100 | 45.36 |
| Wheat (Triticum aestivum) | 13.5 | 60 | 27.22 |
Material capacity can be reported on either a volume or mass basis with appropriate density conversion. In international trade, grain quantity is typically reported in metric tons. In U.S. grain production, grain quantity is commonly measured using units of bushels. While a bushel is technically a volume measurement equaling 35.239 L, in grain production it is a unit that reflects a standardized weight of grain at a particular moisture content specified for that grain. The standardized weights of a grain bushel for some common crops are listed in Table 3.3.1.
Quality
The second measure of performance of a mechanized grain harvest system is product quality. Ideally, the product (grain) that is recovered is free from any foreign matter and damage, but this is rarely the case. Small pieces of plant material and other foreign matter are often captured with the grain. The machinery can also cause physical damage to the grains as they pass through the different mechanisms. Machine design, as well as crop and operating conditions, can have an effect on foreign matter and damage, which are often referred to collectively as dockage. The term dockage is used because producers generally incur a financial penalty (docked some amount) from the market value of the grain by the buyer if the grain is damaged, contains excessive foreign matter, or is not at the proper moisture content.
Efficiency
The third measure of performance, efficiency, can be quantified in two ways. The first is a time-based field efficiency (Ef) that relates the actual time required to complete a field operation to the theoretical completion time had there been no delays, such as turning around at the ends of the field, machine repair, and operator breaks. Field efficiency is calculated as:
$E_{f} = \frac{T_{t}}{T_{a}}$
where Tt = theoretical completion time
Ta = actual completion time
Efficiency can also be measured based on the completeness of the harvest operation. This harvest efficiency (Eh) is a measure of the amount of desirable product that is actually recovered relative to the amount of product that was originally available to the harvesting machine. It is calculated as:
$E_{h} = \frac{y_{a}}{y_{p}}$
where ya = actual yield of grain recovered measured in weight per unit area (kg m−2)
yp = potential yield of grain in the field measured in weight per unit area (kg m−2)
The antithesis of harvest efficiency is harvest loss (Lh), which is the amount of grain lost by the harvesting machine per unit area expressed as a fraction of the potential yield (multiplied by 100 to give a percentage). It can be calculated as:
$L_{h} = 1-E_{h}$
When focusing on the harvesting operation, the potential yield of the crop is considered to be the harvestable grain that is still attached to the plants. Potential yield does not consider the grains that have fallen from the plants before the machine engages them. If harvest is delayed after the optimum time, potential yield will often decrease due to natural forces causing seeds to fall from the plants. This pre-harvest yield loss (yph) is the amount of grain that is lost before harvest expressed on a per area basis. It is the difference between the total yield that the plants actually produced (yt) and the potential harvestable yield:
$y_{ph} = y_{t}-y_{p}$
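These definitions chain together naturally, as the short Python sketch below shows. The three yield measurements are illustrative values chosen to anticipate Example 3.3.1.

```python
def harvest_metrics(y_t, y_p, y_a):
    """Compute the loss/efficiency measures defined above.

    y_t: total yield the plants produced, y_p: potential (harvestable)
    yield, y_a: actually recovered yield -- all in the same per-area units.
    """
    y_ph = y_t - y_p        # pre-harvest yield loss
    E_h = y_a / y_p         # harvest efficiency
    L_h = 1 - E_h           # harvest loss (fraction of potential yield)
    return y_ph, E_h, L_h

# Hypothetical wheat field: 3500 kg/ha produced, 3350 kg/ha still on the
# plants at harvest, 2710 kg/ha recovered by the combine.
y_ph, E_h, L_h = harvest_metrics(y_t=3500, y_p=3350, y_a=2710)
print(f"pre-harvest loss = {y_ph} kg/ha, Eh = {E_h:.2f}, Lh = {L_h:.1%}")
```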
There are often strong interrelationships between productivity, grain quality, and efficiency of grain harvest. For example, an increase in productivity could be realized by an increase in speed or field efficiency; however, grain quality and harvest efficiency may be compromised. Finding the fiscally optimum operation point is a challenge to be addressed by both the design engineer and the machine operator. The engineer must understand the needs of the operator and incorporate the appropriate flexibility of control into the design of the machine.
Functional Processes
When considering the design of any mechanization system, the engineer should first carefully consider the potential processes that will have to be undertaken to complete the task. Srivastava et al. (2006) expand on a number of functional processes that could occur in grain harvest. These can be simplified to four main processes:
• Engage the crop to establish mechanical control of the grain.
• Dissociate or break the connection between the individual grains and the plant.
• Separate the grain from all of the other plant material.
• Transport the grain to the proper receiving facility.
Depending on the specific system, these functions could take place in varying order, and some processes may be repeated multiple times. Historically, before mobile grain combines were developed, a harvesting process involved gathering the whole plant from a field and transporting it to a central location where the grains were separated (threshed) from the plant material either by hand or with a stationary machine. With modern harvesting machines, the grain dissociation and separation is accomplished as the machine moves through the field, and the only material that is transported away from the field is the grain. Likewise, some functions may be accomplished multiple times. For instance, there are often several separation stages in a single machine, and the product might be transported multiple times between different mechanisms, temporary storage units, and transportation vehicles before reaching the final destination.
Engagement
The process of engaging the crop can occur in many different ways depending on the particular crop and machine configuration. Often there is some type of mechanism that will grasp or pull the standing crop toward the machine as it moves forward. The grasping mechanism could be mechanical, such as a rotating arm or chain, or it could involve other forces such as pneumatics or gravity. Engagement may include a cutting action that severs the part of the plant containing the grain from the rest of the plant. The grasping and cutting actions usually result in the material being caught on some surface where it can then be moved into the machine.
Dissociation
The dissociation function of a grain harvest process involves physically breaking the connection between the desired particle and the plant. The term threshing is often used to describe the dissociation function, but in some contexts, threshing could also include some separation and cleaning functions. The dissociation performance of a harvesting machine is quantified by the threshing loss, Lth, which is the amount of grain that remains connected to the plants expressed as a percentage of the total amount of grain that was presented to the threshing mechanism.
Engineers designing mechanisms to dissociate grain need to understand the basic principles of tensile and bending failure. Most grains are attached to the plant by some kind of stem and/or connective tissue, and the connection can often be modeled as a cylindrical bundle of connective tissue. Most dissociation mechanisms apply a tensile force, a bending force, or a combination of the two on the seed relative to the plant (Figure 3.3.2). The goal of the force application is to cause a failure of the connective tissue between the grain and the stem or plant. Failure will occur when the stress in the connective tissue exceeds its ultimate stress, which is the stress at which the material will break. Obviously, it is desirable for the dissociation failure to occur as close to the individual grain as possible so that there is no stem or other plant material captured with the grain.
Tensile failure is the mode where the grain is pulled straight away from the stem until the connection fails. The linear tensile force induces stress in the connective tissue that can be calculated by the following equation:
$\sigma_{t} = \frac{F_{t}}{A}$
where σt = tensile stress in the connective tissue measured in force per unit area (N m−2)
Ft = tensile force acting on connective tissue (N)
A = cross-sectional area of the connective tissue (m2)
Bending failure occurs when the grain is rotated relative to the stem inducing bending stress in the connective tissue. Bending force or moment (M) can also be thought of as a rotational torque applied to the grain. Bending stress is described by the following equation:
$\sigma_{b} = \frac{M y}{I}$
where σb = bending stress in the connective tissue measured in force per unit area (N m−2)
M = bending force acting on the connective tissue measured in force × distance (N m)
y = perpendicular distance to neutral axis (m)
I = area moment of inertia of the connective tissue cross section in units of length to the fourth power (m4)
The moment of inertia is a quantity that is based on the shape of the cross section of the member and is used to characterize the member’s ability to resist bending.
Bending stress is a bit more complicated than tensile stress because the stress in the member is not constant across its cross section. Fibers farther from the neutral axis of bending will experience higher stress, which can actually enhance dissociation.
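The two failure modes can be compared numerically. The following Python sketch evaluates both stress equations for a hypothetical cylindrical connective-tissue bundle; the diameter and loads are assumed for illustration only.

```python
import math

def tensile_stress(F_t, diameter):
    """Tensile stress (Pa) in a cylindrical connective-tissue bundle."""
    A = math.pi * (diameter / 2) ** 2       # cross-sectional area, m^2
    return F_t / A

def max_bending_stress(M, diameter):
    """Peak bending stress (Pa) at the outer fiber of a circular section."""
    r = diameter / 2
    I = math.pi * r ** 4 / 4                # area moment of inertia, m^4
    return M * r / I                        # sigma = M*y/I with y = r

# Hypothetical 2 mm diameter stem connection:
d = 0.002
print(f"tensile: {tensile_stress(5.0, d)/1e6:.1f} MPa under a 5 N pull")
print(f"bending: {max_bending_stress(0.01, d)/1e6:.1f} MPa under 0.01 N m")
# Prints roughly 1.6 MPa and 12.7 MPa: a small bending moment can induce
# much higher peak stress than a comparable tensile pull.
```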
One of the challenges with describing failure of plant materials mathematically is the wide variability that can occur. The strength of the connective tissue is affected by three main factors based on biological properties of the plant. The first factor is plant size. Some plants within a single crop may grow larger than others and may have more connective tissue between the stem and grain. This would correspond to larger area and moment of inertia in Equations 3.3.7 and 3.3.8, which would require more force to reach the ultimate stress.
The second factor affecting connective tissue strength is plant maturity. Most plants will lose their grains naturally when they reach maturity as a mechanism for propagation. Mathematically, this natural dissociation is described by a reduction in the yield strength of the connective tissue. Quite often, this natural maturity state corresponds with the optimum time for grain harvest; however, it is not always possible to harvest at that exact time. Therefore, the failure strength could vary significantly based on actual harvest time.
The third factor affecting the strength of the connective tissue is moisture content. The strength properties of plant material vary greatly with moisture content. Engineers need to understand a number of biological properties of the plant as they are affected by moisture content. For instance, the turgor pressure in a plant is the pressure exerted on the walls of the cells within the plant by the moisture in the cells. As turgor pressure decreases, which is caused by a reduction in moisture content, plants become less rigid. In some plants, this might weaken the plant structure, making dissociation easier, but in others it could make it more difficult to dissociate a grain because the plant material would be more elastic. Depending on weather conditions and solar intensity, turgor pressure can vary greatly throughout a single day, affecting the dissociation of the crop.
Further complicating the mathematical representation of plant strength is the fact that there can be significant variability of plant size, maturity, and moisture content between different regions of a field, between different plants, and even between different grains on a single plant. Engineers need to develop mechanisms that accommodate this variability and give machine operators the flexibility to adapt the machine to the various conditions.
Separation
Once the grains are dissociated from the plant, they must be separated from the rest of the plant material and any other undesirable material. This undesirable material is called material other than grain (MOG). The separation performance of a harvesting machine is quantified by the separation loss (Ls), which is the amount of free grain that cannot be separated from the MOG expressed as a percentage of the total amount of grain that was dissociated from the plants by the machine.
There are two main principles that are typically used to separate grain from MOG. The first is mechanical separation through sieving. A sieve is simply a barrier with holes of a correct size that allows the desired particles to pass through while preventing larger particles from passing, or vice versa, allow smaller undesirable particles to pass through while retaining the desired particles. Some grain sieving mechanisms rely on gravitational forces on the particles to cause them to pass downward through the sieve openings; others utilize centripetal forces of rotating mechanisms to force particles outward through the sieves. Most gravitational sieving mechanisms induce a shaking or bouncing motion on the material to enhance the separation process by facilitating particle motion downward through the mat of material as well as causing motion of the material across the sieve.
Consider the sieve plate connected to the parallel rotating bars as shown in Figure 3.3.3. This is a classic four-bar linkage mechanism. The sieve plate moves in a circular pattern while maintaining its horizontal orientation. If the length of the rotating arms and their rotational speed are designed correctly, the material is bounced laterally across the plate. As it bounces, the grains move downward through the mat of material, then through appropriately-sized holes in the plate. The MOG travels across the plate and is deposited off the end of the sieve.
The velocity of the sieve plate (v) can be calculated from the following equation:
$v = r\omega$
where v = velocity of the sieve plate (m s−1)
r = length of the rotating support, or connecting, arms (m)
ω = rotational speed of the arms (radians s−1)
The velocity of the sieve plate is actually the tangential velocity of the rotating support arms. The direction of this velocity changes sinusoidally as the bars rotate. The vertical (vv) and horizontal (vh) components of the velocity can be described with the following equations:
$v_{v} = v\ \cos\theta$
$v_{h} = v\ \sin\theta$
where θ = the angular position of the arms.
The bouncing motion of a particle is analyzed by considering the momentum of the particle in relation to the upward moving, but decelerating, plate to determine if and when the particle will leave the plate.
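A short Python sketch of the sieve-plate kinematics follows; the arm length and speed are assumed values. The lift-off check in the last line applies the momentum reasoning above: for circular motion, a particle can bounce off the plate only if the plate's peak downward acceleration, rω², exceeds g.

```python
import math

def sieve_velocity(r, omega, theta):
    """Velocity components of a four-bar sieve plate.

    r: arm length (m), omega: arm speed (rad/s), theta: arm angle (rad).
    Returns (vertical, horizontal) components of the plate velocity.
    """
    v = r * omega                  # tangential speed of the arm tips
    return v * math.cos(theta), v * math.sin(theta)

# Hypothetical sieve: 5 cm arms turning at 300 rpm.
r, omega = 0.05, 300 * 2 * math.pi / 60
for deg in (0, 45, 90, 135, 180):
    vv, vh = sieve_velocity(r, omega, math.radians(deg))
    print(f"theta = {deg:3d} deg: v_vert = {vv:+.2f} m/s, v_horiz = {vh:+.2f} m/s")

# Bouncing requires the plate's centripetal acceleration to exceed g:
print("bouncing possible:", r * omega ** 2 > 9.81)
```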
The second principle that is used to separate grains from MOG is aerodynamic separation. Quite often, the grain particles are denser and have significantly different aerodynamic properties than the MOG, especially the lighter leaf and hull particles. These differences are exploited to separate the MOG from the grain.
A particle that is moving through any fluid, including air, is subjected to gravity and drag forces (Figure 3.3.4). Gravity acts downward on the particle and produces the force represented by:
$F_{g} = mg$
where Fg = gravitational force (N, or m kg s−2)
m = mass of the particle (kg)
g = gravitational acceleration in units of length per unit time squared (9.81 m s−2)
The drag force acts in the opposite direction of the particle’s motion relative to the air. The drag force is calculated by:
$F_{d} = 0.5\rho_{air}v^{2}_{part}C_{d}A_{part}$
where Fd = drag force (N, or m kg s−2)
ρair = density of the air in units of mass per unit of volume (kg m−3)
vpart = velocity of the particle relative to the air in units of length per time (m s−1)
Cd = unitless drag coefficient of the particle
Apart = characteristic area of the particle (m2).
The motion of the particle is determined by the vector sum of the two forces and the fundamental motion equation:
$a = \frac{F_{r}}{m}$
where a = acceleration of the particle in the direction of the resultant force (m s−2)
Fr = resultant force on the particle (N, or m kg s−2)
The particle trajectory can be described mathematically by integrating the acceleration equation once to get the velocity equation, then a second time to find position as a function of time.
The drag coefficient is a function of many particle factors including its shape and surface texture. Many MOG particles, such as seed hulls and stem particles, have a more irregular shape and surface texture than the grains and, thus, will have a higher drag coefficient. Aerodynamic separation occurs by capitalizing on these drag differences as well as differences in mass between the grain and MOG particles. Particles can be separated if an air stream is directed through the mat of grain and MOG in such a manner that the MOG is directed in a different trajectory than the grains.
Consider an air stream created by a fan blowing straight upward at a falling particle. If the air speed is increased to the point that the drag force equals the gravity force, the particle will be suspended in the air stream. The velocity of the air at this point is, by definition, the terminal velocity of the particle. If the air speed is increased, the particle will move upward; if the air speed is decreased, the particle will move downward.
Consider a mixture of grain and MOG particles being dropped through a directed air stream as illustrated in Figure 3.3.4. If the air velocity is set slightly below the terminal velocity of the grains, their trajectory will be altered to the right somewhat, but they will continue to move downward. MOG particles that have a much higher drag force and correspondingly lower terminal velocity will be carried more upward and to the right by the air stream moving them out of the grain flow.
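These force balances can be integrated numerically. The sketch below drops two hypothetical particles, a dense grain and a light chaff flake, through a horizontal air stream and compares their terminal velocities and trajectories; all particle properties and the air speed are assumptions chosen only to illustrate the contrast.

```python
import math

RHO_AIR = 1.2   # air density, kg/m^3

def terminal_velocity(m, Cd, A):
    """Air speed at which drag balances gravity for a falling particle."""
    return math.sqrt(2 * m * 9.81 / (RHO_AIR * Cd * A))

def trajectory(m, Cd, A, v_air, dt=0.001, t_end=0.3):
    """Integrate particle motion in a horizontal air stream of speed
    v_air (m/s) with simple Euler steps; returns final (x, y) in m."""
    x = y = vx = vy = 0.0
    for _ in range(int(t_end / dt)):
        rvx, rvy = vx - v_air, vy                 # velocity relative to air
        rel = math.hypot(rvx, rvy)
        Fd = 0.5 * RHO_AIR * rel ** 2 * Cd * A    # drag magnitude (N)
        ax = -Fd * rvx / (rel * m) if rel else 0.0
        ay = -9.81 - (Fd * rvy / (rel * m) if rel else 0.0)
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, y

# Hypothetical particles: a dense wheat grain vs. a light chaff flake.
grain = dict(m=35e-6, Cd=0.5, A=8e-6)    # 35 mg, small frontal area
chaff = dict(m=5e-6, Cd=1.2, A=60e-6)    # 5 mg, large irregular area
print("terminal velocity: grain %.1f m/s, chaff %.1f m/s"
      % (terminal_velocity(**grain), terminal_velocity(**chaff)))
print("position after 0.3 s in a 7 m/s cross-flow:")
print("  grain: x = %.2f m, y = %.2f m" % trajectory(v_air=7, **grain))
print("  chaff: x = %.2f m, y = %.2f m" % trajectory(v_air=7, **chaff))
# The grain falls nearly straight down while the chaff is carried
# sideways with the air stream, which is the basis of the separation.
```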
Transport
Once the grain is dissociated and separated from the MOG, it must be transported to a receiving station. This is usually accomplished in several steps or stages using a variety of mechanisms. Various types of conveyors are used to move the grain from one part of a machine to another or from one machine to another. At different stages of the process, the grain might be stored or carried in various bulk containers.
The principles involved with designing or analyzing bulk storage or transportation containers are primarily strength of materials. The designer first needs to determine what forces will be produced on the structure by the grain. Free body diagrams are analyzed to determine the magnitude and direction of all forces. One challenge in designing grain harvesting machinery is that the machines are often mobile. As the machines move across the rough terrain typically encountered in agricultural fields, dynamic forces are induced as the grain load bounces. Designers typically utilize a variety of techniques to predict the maximum dynamic loads that could be induced on a structure.
Once the forces are known, the designer then determines what stresses are induced in each structural member by the grain load. Stress (σ) describes the amount of force (F) being carried per unit area (A) of a given structural member:
$\sigma = \frac{F}{A}$
where σ = stress in units of force per unit area (N m−2)
F = force in the member (N)
A = characteristic area of the member (m2)
The stress in any part of a structural member cannot exceed the yield stress of the material or permanent damage (deformation) will be incurred. But even if permanent deformation is not induced in a structural member, engineers still need to be concerned about how much a structural member flexes or deflects. The deflection in a member is calculated from strain, which is:
$\epsilon = \frac{dL}{L} = \frac{\sigma}{E}$
where ε = strain (dimensionless)
dL = change in length of the member (m)
L = length of the member (m)
E = modulus of elasticity reported in force per unit area (N m−2)
Stress and strain are related by the modulus of elasticity, also known as Young’s modulus. The lower the modulus of elasticity, the more deflection a given force will cause in a member. Some deflection can be good in a structure, especially when dynamic forces are involved, because it helps to absorb energy without causing high peak loads. In the case of a machine moving across a rough field, for example, some deflection in the structure can absorb some of the energy caused by uneven terrain and prevent structural failure.
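A two-line calculation shows how stress and strain combine to give a deflection; the member dimensions, load, and modulus (a typical value for steel) are assumed for illustration.

```python
def elongation(F, A, L, E):
    """Axial deflection dL = strain * L, with strain = sigma / E."""
    sigma = F / A          # stress, Pa
    eps = sigma / E        # dimensionless strain
    return eps * L         # change in length, m

# Hypothetical steel tie rod in a grain tank frame:
# 10 kN load, 2 cm^2 cross section, 1.5 m long, E = 200 GPa.
dL = elongation(F=10e3, A=2e-4, L=1.5, E=200e9)
print(f"stress = {10e3/2e-4/1e6:.0f} MPa, elongation = {dL*1000:.2f} mm")
# Prints 50 MPa and 0.38 mm.
```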
For shorter distance transportation, several different conveying devices can be employed. When designing conveying devices, the designer is primarily concerned about the capacity of the device and the power required to convey the material. Some of the simplest conveying devices utilize paddles or buckets connected to chains (Figure 3.3.5) to drag or convey the grain. The capacity of paddle conveyors, which is the flow rate of material through the conveyor, is calculated simply by the amount of material carried by each paddle and the number of paddles that pass a point in a given amount of time:
$Q_{a} = Vn$
where Qa = actual flowrate in volume per unit time (m3 s−1)
V = volume of material carried by a single paddle (m3)
n = number of paddles that are discharged per unit time (s−1)
The volume of material that can be carried by the paddles is affected by a number of parameters. Grain properties such as particle shape, size, surface friction and moisture content affect the shape of the pile of grain on each paddle. The slope of the conveyor limits the size of the piles before the grain runs over the top of the paddle and back down the conveyor.
Another conveying device commonly used in grain harvest and handling is a screw conveyor, commonly known as an auger (Figure 3.3.6). Screw conveyors utilize a continuous helicoid plate, called flighting, attached to a rotating shaft. The capacity of a horizontal screw conveyor that is completely full of grain is the volume displaced by a single rotation of the shaft times the number of rotations in a given unit of time, which can be calculated by:
$Q_{t}=\frac{\pi}{4} (D^{2}-d^{2})P\omega$
where Qt = theoretical flowrate in volume per unit time (m3 s−1)
D = outside diameter of the flighting (m)
d = diameter of shaft (m)
P = pitch length of the flighting (m)
ω = rotational speed of the shaft in revolutions per unit time (rev s−1)
When the conveyor is inclined, the flighting will no longer be full as the grain will tend to slide down around the flighting. The actual volumetric flowrate (Qa) can be calculated by:
$Q_{a}= Q_{t} \eta_{v}$
where ηv is the volumetric efficiency of the conveyor. Predicting the volumetric efficiency can be very challenging because it is affected by numerous factors, including conveyor slope, rotational speed, grain moisture content, particle size, particle-to-conveyor friction, and particle-to-particle friction. Because of this complexity, mathematical prediction is usually accomplished with empirical relationships.
The power required to convey the material is affected by gravitational and friction forces. If the grain is lifted any vertical distance, the conveyor must overcome the gravitational force opposing that lift. Power is defined as a force applied over a given distance in a given amount of time (force × distance/time). The force and time components of the gravitational power calculation come from the flow rate of the grain through the conveyor expressed in units of weight per unit of time. The density of the grain can be used to convert volumetric flow rate into a weight flow rate. The distance component of power is simply the vertical distance that the grain is lifted. The gravimetric component of power is:
$P_{g} = Q_{a}\rho_{grain} g h$
where Pg = power required to overcome gravity (W or J/s)
Qa = actual volumetric flow rate of grain (m3 s−1)
ρgrain = density of the grain (kg m−3)
g = gravitational acceleration (9.81 m s−2)
h = height that the material is lifted (m).
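The capacity and gravimetric power equations can be chained into a short design calculation. In this Python sketch the auger dimensions, speed, volumetric efficiency, and grain density are assumed values, not manufacturer data.

```python
import math

def screw_capacity(D, d, P, n_rev, eta_v=1.0):
    """Actual volumetric flow (m^3/s) of a screw conveyor.

    D, d: flighting and shaft diameters (m); P: pitch (m);
    n_rev: shaft speed (rev/s); eta_v: volumetric efficiency."""
    Qt = math.pi / 4 * (D**2 - d**2) * P * n_rev   # theoretical flow
    return Qt * eta_v

def lift_power(Qa, rho, h, g=9.81):
    """Power (W) to lift a grain stream against gravity: Qa*rho*g*h."""
    return Qa * rho * g * h

# Hypothetical unloading auger: 0.30 m flighting, 0.10 m shaft,
# 0.25 m pitch, 8 rev/s, 60% volumetric efficiency when inclined.
Qa = screw_capacity(D=0.30, d=0.10, P=0.25, n_rev=8, eta_v=0.60)
print(f"actual capacity: {Qa:.3f} m^3/s ({Qa*3600:.0f} m^3/h)")
# Lifting corn (approx. 720 kg/m^3) through 4 m:
print(f"gravity power:   {lift_power(Qa, rho=720, h=4.0)/1000:.1f} kW")
```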
The friction component of power can be more complicated to compute. In the case of a paddle conveyor, the grain must be slid along the bottom of the conveyor surface. This friction force can sometimes be predicted quite well from the coefficient of friction between the grains and the surface of the conveyor. If that coefficient of friction gets too large, the forces due to friction on the grains at the interface between the grain pile and the conveyor surface will cause the grains within the pile to begin to move relative to each other. At this point, it becomes more difficult to mathematically describe the energy necessary to overcome these internal friction forces as well as the surface friction.
Frictional forces in screw conveyors are similar. The grain in a full horizontal conveyor slides along the outside wall of the conveyor tube as well as the flighting but does not move as much within the grain mass. As the conveyor is inclined and it is no longer completely full, the amount of motion within the grain mass increases and becomes more critical to the evaluation.
Applications
The most common machine used for grain harvest is the modern grain combine (Figure 3.3.1). Combines typically have an interchangeable attachment on the front called a header that engages the crop and passes certain fractions of that plant into the combine. The material then passes through a threshing mechanism that dissociates the grains from the plant stems and also performs some separation of grain from MOG. The grain and MOG are then passed through various separating and cleaning mechanisms. The MOG is generally passed longitudinally through the machine and expelled from the back. The clean grain generally moves downward through the machine to a catch reservoir on the bottom. From there, it is moved upward with paddle and/or screw conveyors to a holding tank on the top of the machine. A large screw conveyor is then used to periodically empty the contents of the holding tank into a truck or other vehicle, which transports the grain to a receiving station.
Engagement
Header attachments are used to engage the crop. The two most common types of header attachments on grain combines are the grain table and the corn or row head. Grain tables (Figure 3.3.1) are typically used in small grain crops such as wheat and soybean. They generally include a large gathering reel to engage the crop and pull it into the header. A cutting mechanism, typically a sickle bar, cuts the plant as it is pulled into the header to gain mechanical control of the grain. Since grain tables can be 12 m wide or wider, cross conveyors move the crop material from the ends of the header to the center where it is fed into the combine.
The height of the cut depends on the crop and the cultural practice of the operation. The threshing and separation processes in the combine are most efficient when the MOG entering the combine is minimized. In soybean, for example, the seed pods can grow very low on the plant stem; therefore, the crop must be cut near the ground to prevent losses. In crops like wheat where the grains grow in a head on the top of the plant stem, the cut height could be just low enough to capture the entire head but minimize the amount of MOG passed into the machine. In some production systems, though, the MOG might be used for animal bedding or bioenergy. In these cases, the header is operated lower so that more MOG is gathered, passed through the combine and deposited in a narrow line, called a windrow, behind the combine. The windrow can easily be gathered by another biomass harvesting machine in a separate operation. Occasionally, the MOG is gathered by another machine, such as a baler, attached directly to the combine (Figure 3.3.7).
In corn (maize), the grains are produced toward the middle of the plant. Cutting the plant to capture the ears for threshing would mean the introduction of large amounts of MOG into the harvest stream, hampering threshing and separation performance. Since corn is typically grown in rows spaced 0.5–0.75 m, corn heads are constructed with fingers that pass between the rows so that each row of corn can be engaged individually (Figure 3.3.8). Long parallel rollers on each side of the row grab the plant stems below the ear and pull them downward as the machine moves forward. Stripper plates above each roller are spaced such that the plants pass down between them, but the corn ears do not. As the plants are pulled downward, the ears are stripped off the plants. Ideally, all of the plant material is pulled down through the header and does not pass into the combine. Depending on stalk condition, some stalk breakage and leaf removal will occur, and that MOG will have to be separated in the combine. Chains with fingers above each stripper plate move the ears and MOG back into the header. Cross conveyors then move material from the edge of the header into the center where it is fed into the combine.
One of the performance measures of any header is its effectiveness in gathering all of the grain from the field into the combine. This engagement process is complicated by the fact that the grains tend to naturally fall off of the plants more easily when the crop is in its optimum harvest condition. Losses by the header are called shatter losses (Lsh). They are quantified as a percentage of the potential yield (yp), i.e., the available yield of the plants.
Shatter losses are affected by crop conditions, including maturity and moisture content. They are also affected by the design and operation of the header. For example, the speed of the gathering reel on grain tables must be matched to the forward speed of the machine. If the reel speed is too high, the plants are beaten aggressively, causing grains to fall off the plants before they can be caught on the header platform. If the reel speed is too slow, the plants could be pushed forward, again knocking grains onto the ground before they enter the header. Depending on crop conditions, the tangential speed of the reel is typically 25–50% faster than the forward speed of the combine to pull the plants into the header. Some machines utilize sensors and electronic controls to automatically adjust the reel speed to match machine forward speed.
Dissociation
The dissociation function in combines is generally accomplished by rotating cylinders called threshing cylinders. There are two basic configurations of threshing cylinders, distinguished by the direction the material moves through the cylinder. Some cylinders are mounted with their axis of rotation horizontal and perpendicular to the longitudinal axis of the machine. The material enters from one side of the cylinder and exits the other (Figure 3.3.9). Bars oriented along the outside of the cylinder rub the plant material against the stationary housing around the outside of the cylinder, which is called the concave. The rubbing action accomplishes the dissociation of the grain from the plants. Holes in the concave facilitate a sieving action to separate some of the grain from the longer plant material.
The other common threshing configuration has the rotating threshing cylinder mounted parallel to the longitudinal axis of the machine. Material enters the end of the cylinder and moves in a helical pattern around and past the cylinder (Figure 3.3.10). Similar concave structures around the cylinder provide resistance to the flow to induce the dissociation and separation functions.
Threshing effectiveness is measured by the percentage of grains that are dissociated from the plants, the percentage of grains that are damaged during the threshing process, and the amount of MOG break-up. Excess amounts of small MOG particles can hamper separation efficiency since they can be indistinguishable from grains in the separation process. Threshing efficiency and grain damage are affected by plant properties, the design of the cylinder and concaves, and operational adjustments. Machine operators often have real-time control of the cylinder speed as well as the clearance between the cylinder and concave.
Separation
There are two different types of separation systems used in grain combines that are generally associated with the two types of threshing devices. Laterally oriented threshing cylinders generally feed the material stream onto a vibrating separator platform commonly called a straw walker. The oscillating plate is essentially a sieve allowing the smaller particles, including the grains, to fall through the sieve as the MOG is moved back through the machine.
On combines with axially oriented threshing cylinders, the latter portion of the cylinder and concave accomplish initial separation. These rotary separators utilize centripetal forces to separate grains outward through concave openings.
Regardless of the initial separator configuration, most combines pass the grain stream captured from the threshing unit and initial separation unit through an additional multi-stage cleaning sieve. Pneumatic separation is also applied in these sieves to enhance separation of grain from MOG.
Transport
The cleaned grain stream is conveyed from the bottom of the combine to a holding tank on the top of the machine using a combination of paddle and screw conveyors. The holding tanks vary in size with the size of the machine. Depending on the crop and operating conditions, the combine tank could be filled in as little as 3–4 minutes. In some operations, the combine is driven to the edge of the field when the tank is full so that it can be emptied into a truck for transport to a receiving station. This is often considered an inefficient use of a very expensive harvesting machine. Productivity of the harvesting operation is maximized if the combine can be operated as close to continuously as possible.
Combines can be unloaded while they are harvesting if a receiving vehicle can be driven alongside the combine. Over-the-road trucks are not typically used for this operation because of their relatively small tires. Traction in potentially soft soil conditions limits their mobility. Also, there is a concern regarding compaction of the soil in the field. Heavy loads on small tires will compact the soil under the tires causing damage that will affect the performance of future crops in the field.
In-field transport of grain is often accomplished with a grain cart (Figure 3.3.11). Grain carts are large transport tanks typically pulled by large agricultural tractors. Both the cart and tractor will be equipped with large tires or tracks to reduce the pressure on the soil.
With the use of grain carts, a logistical challenge arises around the best way to get the grain away from the combine to keep it harvesting. Many operations use multiple combines in a field simultaneously. Managers must decide how many grain carts are needed, how big those carts need to be, and how many trucks are needed to get the grain away from the field. Operationally, vehicle scheduling is a challenge: managers must anticipate which combines in a multi-combine fleet need to be emptied so that they do not fill up and become unproductive.
Examples
Example $1$
Example 1: Combine harvest efficiency
Problem:
One way to evaluate the harvest efficiency of a combine is to measure losses that occur as a combine moves through the field. This can be done by physically gathering and counting or weighing the grains found at different locations in the combine’s operating space.
Consider the combine in Figure 3.3.12 that was stopped while harvesting a very uniform crop of wheat. Field measurements were taken at three different locations as shown. At each point, a 1 m square area was selected as a representative test area. At point A in front of the combine, all of the standing plants in the test area were carefully cut and hand harvested to determine how much grain was available in the field. After that, the grains that were lying on the ground in that test area were gathered and weighed. At point B, all of the grains found within the test area were gathered and weighed. At point C, which is beyond the discharge trajectory of material being expelled from the back of the combine when it was stopped, all of the grains were collected, and those still attached to the plants were weighed separately from those that were free. The following are the data collected at each location.
• Point A:
• 335 g unharvested grain
• 15 g free grains (grains laying on the ground)
• Point B:
• 40 g free grain
• Point C:
• 63 g free grain
• 14 g grain attached to plant
Determine the gathering, threshing, and separating efficiencies of this harvest operation.
Solution
The theoretical or potential yield, yp, of the crop is the harvestable grain that is still attached to the plants when the combine engages it. In this example, the potential yield is based on the unharvested seed at point A.
$y_{p} = \frac{0.335 \text{ kg}}{\text{m}^{2}} \cdot \frac{10000 \text{ m}^{2}}{ \text{ha}} = \frac{3350 \text{ kg}}{\text{ha}} = \frac{3.35 \ T}{\text{ha}}$
A simple unit conversion can be performed to convert the metric yield into common U.S. yield units of bu/acre as:
$y_{p} = \frac{3350 \text{ kg}}{\text{ha}} \cdot \frac{1 \text{ bu}}{27.22 \text{ kg}} \cdot \frac{1 \text{ ha}}{ 2.47\text{ acre}} = \frac{49.8 \text{ bu}}{\text{acre}}$
Note that the potential yield calculation does not consider the grains that had fallen from the plants before the machine engaged them. In this example, the pre-harvest yield loss, yph, was:
$y_{ph} = \frac{0.015 \text{ kg}}{\text{m}^{2}} \cdot \frac{10000 \text{ m}^{2}}{\text{ ha}} = \frac{150 \text{ kg}}{\text{ha}}$
As a percentage of the total available grain, the pre-harvest yield loss was:
$L_{ph} = \frac{150}{3350 + 150} \cdot 100 = 4.3 \text{%}$
The grain that was collected at point B under the combine includes the shatter losses as well as the pre-harvest losses. The pre-harvest losses are subtracted from the total grain at point B to determine grain lost as the header engaged the crop. The shatter loss, Lsh, is calculated as a percentage of the theoretical yield as follows:
$L_{sh} = \frac{(40 \text{ g}-15 \text{ g})}{335 \text{ g}} \cdot 100 = 7.5 \text{%}$
Threshing loss is a quantification of the grains that did not get dissociated from the plant. These grains are found at point C still attached to plant material. The threshing loss percentage is based on the total grain that actually enters the combine. In this example, the shatter loss is removed from the total available grain in calculating the threshing loss, Lth, as follows:
$L_{th} = \frac{14 \text{ g}}{(335 \text{ g}-25 \text{ g})} \cdot 100 = 4.5 \text{%}$
The separation loss is threshed grain that is not removed from the MOG stream and is lost out the back of the combine. The free grain collected at point C includes the separation loss as well as the shatter and pre-harvest losses. Therefore, the loss due only to separation, Ls, is:
$L_{s} = \frac{(63 \text{ g} - 15 \text{ g} - 25 \text{ g})}{(335 \text{ g} - 25 \text{ g})} \cdot 100=7.4 \text{%}$
The total harvest loss, Lh, is based on all the grain lost by the combine, which would be:
$L_{h} = \frac{63 \text{ g} + 14 \text{ g} - 15 \text{ g}}{335 \text{ g}} \cdot 100 = 19\text{%}$
The actual harvested yield, ya, then, is:
$y_{a} = \frac{3.35 \ T}{\text{ha}}- \frac{0.19(3.35 \ T)}{\text{ha}} = \frac{2.71 \ T}{\text{ha}}$
The harvest efficiency is (Equation 3.3.6):
$E_{h} = 1 -L_{h} = 1-0.19 = 0.81$
This can be verified by Equation 3.3.4:
$E_{h} = \frac{y_{a}}{y_{p}} = \frac{2.71 \ T}{\text{ha}} \cdot \frac{\text{ha}}{3.35 \ T} = 0.81$
The prudent manager would scrutinize these harvest efficiency numbers to determine if improvements are merited. For wheat harvest, these losses would probably be considered quite large. The manager may consider adjustment and/or operational changes to the combine that might reduce harvest losses.
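Because this loss bookkeeping is easy to get wrong by hand, a small Python sketch that mirrors the measurement scheme of Figure 3.3.12 is a useful check; the function is illustrative, not a standard procedure.

```python
def combine_losses(A_attached, A_free, B_free, C_free, C_attached):
    """Loss percentages from the grain masses (g per m^2) gathered at
    points A, B, and C in Figure 3.3.12."""
    y_p = A_attached                             # potential yield, g
    L_ph = A_free / (A_attached + A_free) * 100  # pre-harvest loss, %
    shatter = B_free - A_free                    # header (shatter) loss, g
    L_sh = shatter / y_p * 100
    into_combine = y_p - shatter                 # grain actually entering
    L_th = C_attached / into_combine * 100       # threshing loss, %
    L_s = (C_free - A_free - shatter) / into_combine * 100
    L_h = (C_free + C_attached - A_free) / y_p * 100
    return L_ph, L_sh, L_th, L_s, L_h

L_ph, L_sh, L_th, L_s, L_h = combine_losses(335, 15, 40, 63, 14)
print(f"pre-harvest {L_ph:.1f}%  shatter {L_sh:.1f}%  threshing {L_th:.1f}%")
print(f"separation {L_s:.1f}%  total harvest loss {L_h:.1f}%")
# Reproduces the 4.3%, 7.5%, 4.5%, 7.4%, and 19% values computed above.
```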
Example $2$
Example 2: Reel speed
Problem:
One of the causes of shatter loss with grain tables is improper speed of the reel. The designers of a grain table need to provide ample adjustability in the rotational speed of the reel so that the operator can compensate for crop conditions and forward speed. Specifically, the designer needs to determine the range of speeds that the design must be able to achieve. Consider the grain table in Figure 3.3.13 that has a 1.3 m diameter reel. Determine the range of reel speeds that the design must be able to achieve.
Solution
As mentioned earlier, the tangential speed of the engaging devices on the end of the reel should typically be 25–50% greater than the combine forward speed, vf. ASAE Standard D497.7 is a great resource for operating parameters of common agricultural machinery. Table 3.3.3 of that standard (reprinted in part as Table 3.3.2 of this chapter) indicates that the typical forward speed of a self-propelled combine ranges from 3.0 to 6.5 km/hr. The minimum rotational speed of the reel would occur with the reel tangential speed 25% greater than the slowest forward speed of 3.0 km/hr. Conversely, the maximum speed would occur at 150% of 6.5 km/hr. The tangential speed, vt, is calculated using Equation 3.3.9:
Table $2$: Field efficiency and field speed for common harvesting machinery (excerpt from Table 3.3.3 in ASABE Standard D497.7, 2015b).

| Harvesting Machine | Field Efficiency, Range (%) | Field Efficiency, Typical (%) | Field Speed, Range (mph) | Field Speed, Typical (mph) | Field Speed, Range (km/h) | Field Speed, Typical (km/h) |
|---|---|---|---|---|---|---|
| Corn picker sheller | 60–75 | 65 | 2.0–4.0 | 2.5 | 3.0–6.5 | 4.0 |
| Combine | 60–75 | 65 | 2.0–5.0 | 3.0 | 3.0–6.5 | 5.0 |
| Combine (SP) | 65–80 | 70 | 2.0–5.0 | 3.0 | 3.0–6.5 | 5.0 |
| Mower | 75–85 | 80 | 3.0–6.0 | 5.0 | 5.0–10.0 | 8.0 |
| Mower (rotary) | 75–90 | 80 | 5.0–12.0 | 7.0 | 8.0–19.0 | 11.0 |
| Mower-conditioner | 75–85 | 80 | 3.0–6.0 | 5.0 | 5.0–10.0 | 8.0 |
| Mower-conditioner (rotary) | 75–90 | 80 | 5.0–12.0 | 7.0 | 8.0–19.0 | 11.0 |
| Windrower (SP) | 70–85 | 80 | 3.0–8.0 | 5.0 | 5.0–13.0 | 8.0 |
| Side delivery rake | 70–90 | 80 | 4.0–8.0 | 6.0 | 6.5–13.0 | 10.0 |
| Rectangular baler | 60–85 | 75 | 2.5–6.0 | 4.0 | 4.0–10.0 | 6.5 |
| Large rectangular baler | 70–90 | 80 | 4.0–8.0 | 5.0 | 6.5–13.0 | 8.0 |
| Large round baler | 55–75 | 65 | 3.0–8.0 | 5.0 | 5.0–13.0 | 8.0 |
| Forage harvester | 60–85 | 70 | 1.5–5.0 | 3.0 | 2.5–8.0 | 5.0 |
| Forage harvester (SP) | 60–85 | 70 | 1.5–6.0 | 3.5 | 2.5–10.0 | 5.5 |
| Sugar beet harvester | 50–70 | 60 | 4.0–6.0 | 5.0 | 6.5–10.0 | 8.0 |
| Potato harvester | 55–70 | 60 | 1.5–4.0 | 2.5 | 2.5–6.5 | 4.0 |
| Cotton picker (SP) | 60–75 | 70 | 2.0–4.0 | 3.0 | 3.0–6.0 | 4.5 |
$v_{t} = r \omega$
At the minimum rotational speed, the tangential speed should be:
$v_{t} = v_{f}(1.25)$
Combining the equations, the minimum rotational speed is:
$\omega = \frac{v_{t}}{r} = \frac{v_{f}(1.25)}{r} = \frac{3.0 \text{ km}}{\text{hr}} \cdot \frac{1.25}{1} \cdot \frac{2}{1.3 \text{ m}} \cdot \frac{1000 \text{ m}}{\text{km}} \cdot \frac{1 \text{ hr}}{60 \text{ min}} \cdot \frac{1 \text{ rev}}{2\pi} = 15.3 \text{ rpm}$
It follows, then, that the maximum rotational speed is:
$\omega =\frac{v_{t}}{r}=\frac{v_{f}(1.50)}{r} = \frac{6.5 \text{ km}}{\text{hr}}\cdot \frac{1.5}{1} \cdot \frac{2}{1.3 \text{ m}} \cdot \frac{1000 \text{ m}}{\text{km}}\cdot \frac{1 \text{ hr}}{60 \text{ min}} \cdot \frac{1 \text{ rev}}{2\pi} = 39.8 \text{ rpm}$
The revolution units are added to the calculations by noting that radians are considered unitless, and there are 2π radians in one complete revolution. The conclusion is that the drive system for the reel on that grain table must be able to achieve speeds varying from 15.3 to 39.8 rpm, so the drive mechanism for the reel should be designed accordingly.
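A few lines of Python reproduce the speed-range calculation and make it easy to re-run for other reel diameters or speed ranges; the inputs come from Table 3.3.2 and the 25–50% rule of thumb above.

```python
import math

def reel_rpm(v_kmh, factor, reel_diameter_m):
    """Reel speed (rpm) giving a tip speed of factor * forward speed."""
    v_tip = v_kmh * factor * 1000 / 60          # tip speed, m/min
    omega = v_tip / (reel_diameter_m / 2)       # angular speed, rad/min
    return omega / (2 * math.pi)                # rev/min

# Design range: 3.0-6.5 km/h forward speed, tip speed 25-50% above
# forward speed, 1.3 m diameter reel.
print(f"min: {reel_rpm(3.0, 1.25, 1.3):.1f} rpm")   # 15.3 rpm
print(f"max: {reel_rpm(6.5, 1.50, 1.3):.1f} rpm")   # 39.8 rpm
```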
Example $3$
Example 3: Axle loads
Problem:
The design of the structure of a vehicle relies heavily on understanding the effects of all the forces on the machine. Consider a two-wheeled grain cart pulled by a tractor as shown in Figure 3.3.14. The task is to calculate the required size (diameter) of the cylindrical axles to support the cart wheel assembly. Assume that the grain load is evenly distributed in the tank of the cart and that the tank is laterally symmetrical, which means that the loads are evenly distributed between the left and right wheels of the cart. Besides the dimensions shown in Figure 3.3.15, the following data are given by a manufacturer for a very similar cart:
• Cart capacity: 850 bushels of corn (maize)
• Empty cart weight: 54 kN
• Tongue weight of empty cart: 11 kN
Solution
Because of the left/right symmetry of the cart, the free body analysis can be conducted in two dimensions looking at the side of the machine (Figure 3.3.15). The two cart wheels will have identical loads. Since the tractor supports 11 kN of the empty cart weight from the tongue at the hitch point (Fct), the rest of the empty cart weight, which is 54 kN – 11 kN = 43 kN, must be supported by the cart wheels (Fcw). Given the symmetry and uniform loading assumptions, the center of gravity of the grain load will be at the geometric center of the bin on the cart. The distance from the hitch point to the grain center of gravity, xg, is:
$x_{g}= 7 - \frac{5.5}{2}=4.25 \ m$
The weight of the grain is:
$F_{g}=\frac{850 \text{ bu}}{1} \cdot \frac{25.4 \text{ kg}}{\text{bu}} \cdot \frac{9.81 \text{ N}}{\text{ kg}} = 212 \text{ kN}$
The cart and grain must be supported by the cart wheels and by the tractor at the hitch point. These forces are represented as reaction forces Rw and Rt (Figure 3.3.15). Rw is the total weight that the cart wheels and axles must support, which can be calculated by summing the moments about the hitch point between the cart and tractor. If counter-clockwise rotation is positive, that moment equation is:
$R_{w}(4.6) - F_{cw}(4.6) - F_{g}(4.25) + F_{ct}(0)-R_{t}(0) = 0$
Note that because the tongue load and reaction force both pass through the hitch point, their moment arm distances are zero and they fall out of the equation. The moment equation is now solved for Rw:
$R_{w} = \frac{F_{cw}(4.6 \text{ m}) + F_{g}(4.25 \text{ m})}{4.6 \text{ m}} = \frac{43 \text{ kN}(4.6 \text{ m}) + 212 \text{ kN}(4.25 \text{ m})}{4.6 \text{ m}} = 240 \text{ kN}$
Since there are two wheels, each wheel must support 120 kN.
With the load known, it is possible to calculate the diameter of the cylindrical axle required to support the wheel. The axle (Figure 3.3.16) is a cantilever configuration since it is rigidly fixed to the frame on one end. The simplified configuration of the axle (Figure 3.3.16) shows that the reaction force from the wheel is applied 30 cm out from the base of the axle. The downward force of the cart and grain at the base of the axle and the upward reaction force from the tire will cause bending stress in the axle. The bending moment is:
$M=Fd=120{,}000 \text{ N} \cdot 0.3 \text{ m}= 36{,}000 \text{ N m}$
The maximum stress in the axle cannot exceed the yield stress of the material, which would cause permanent deformation in the axle, compromising its functionality and strength. The designer needs to know what material will be used to manufacture the axle and then determine the yield stress for that particular material. A number of material engineering handbooks and other resources can be consulted to find the yield stress of different materials. For this example, assume that mild steel would be used. The yield stress for mild steel (σy) can be found from a number of resources to be 250 MPa. Note that a Pa is defined as a N/m2. The bending stress in the axle is (Equation 3.3.8):
$\sigma_{b} = \frac{My}{I}$
Note that y is the distance from the neutral axis, which is the center of the circular shaft. The maximum stress will occur at the top and bottom of the axle. The equations for moment of inertia for different cross-sectional shapes can be found in a number of engineering handbooks or strength of materials text books. For a circular cross section, the moment of inertia is:
$I = \frac{1}{4} \pi r^{4}$
Substituting Equation 3.3.21 into Equation 3.3.8, the bending stress equation becomes:
$\sigma_{b} = \frac{4My}{\pi r^{4}}$
Since the critical failure point will be at the outermost fibers of the circular cross section, the stress will be calculated at y = r. Also, the stress at those outermost fibers should not exceed the yield stress; therefore,
$\sigma_{y} = \frac{4M}{\pi r^{3}}$
Now solve for r:
$r^{3}=\frac{4M}{\pi \sigma_{y}} = \frac{4 \times 36{,}000 \text{ N m}}{\pi \times 250 \times 10^{6} \text{ N m}^{-2}} = 1.83 \times 10^{-4} \text{ m}^{3}$
The calculated minimum radius is 5.7 cm.
Any calculated number or computer output should always be scrutinized to make sure that it represents a reasonable conclusion. In this case, an experienced engineer should question whether a 5.7-cm radius axle is adequate for a large grain cart, because several factors were not considered in the analysis. First, the load on the axle was the static weight of the cart and grain. There was no consideration for peak dynamic loads that would be induced as the vehicle moved across the terrain of a farm field. The dynamic analysis would also need to consider fatigue stress in the material due to repeated loading. There was no safety factor considered to compensate for inconsistencies in material properties of the axle or overloading of the cart by the operator. Depending on the method of attachment of the axle to the frame, there could be significant stress concentrations at sharp corners or weldments. These stress concentrations are usually identified with a finite element analysis of the structure. But even if catastrophic failure did not occur in the mechanism, the engineer should consider the effects of the elasticity of the axle. In this case, excessive elastic deflection in the axle could cause the tire to become misaligned, which could cause adverse tracking of the cart or unacceptable wear of the tire. All these factors would need to be addressed to achieve a final design that prevents failure and assures proper operation.
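As a check, the static sizing calculation can be scripted so that other load cases are easy to explore. In this sketch the safety factor in the last line is illustrative only; a real design would also address the dynamic, fatigue, and deflection concerns discussed above.

```python
import math

def min_axle_radius(wheel_load_N, overhang_m, yield_stress_Pa):
    """Smallest solid circular axle radius (m) that keeps the peak
    bending stress at or below the yield stress (static load only)."""
    M = wheel_load_N * overhang_m                   # bending moment, N m
    return (4 * M / (math.pi * yield_stress_Pa)) ** (1 / 3)

r = min_axle_radius(wheel_load_N=120_000, overhang_m=0.30,
                    yield_stress_Pa=250e6)          # mild steel
print(f"minimum static radius: {r*100:.1f} cm")     # 5.7 cm
# Applying an illustrative safety factor of 2 on stress:
r_sf = min_axle_radius(120_000, 0.30, 250e6 / 2)
print(f"with safety factor 2:  {r_sf*100:.1f} cm")  # 7.2 cm
```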
Image Credits
Figure 1. Stombaugh, T. (CC By 4.0). (2020). Typical grain combine.
Figure 2. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Forces acting on grain.
Figure 3. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Simple sieve mechanism.
Figure 4. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Drag forces.
Figure 5. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Paddle conveyor.
Figure 6. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Screw conveyor.
Figure 7. Stombaugh, T. (CC By 4.0). (2020). Combine and baler.
Figure 8. Stombaugh, T. (CC By 4.0). (2020). Row crop head.
Figure 9. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Conventional threshing cylinder.
Figure 10. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Rotary threshing cylinder.
Figure 11. Stombaugh, T. (CC By 4.0). (2020). Combine and grain cart.
Figure 12. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Combine test locations.
Figure 13. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Gathering reel.
Figure 14. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Tractor and grain cart.
Figure 15. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Forces on grain cart.
Figure 16. Stamper, D. & Stombaugh, T. (CC By 4.0). (2020). Axle configuration.
References
ASABE Standards. (2015a). ASAE EP496.3 FEB2006 (R2015): Agricultural machinery management. St. Joseph, MI: ASABE.
ASABE Standards. (2015b). ASAE D497.7 MAR2011 (R2015): Agricultural machinery management data. St. Joseph, MI: ASABE.
Srivastava, A. K., Goering, C. E., Rohrbach, R. P., & Buckmaster, D. R. (2006). Engineering principles of agricultural machines (2nd ed.). St. Joseph, MI: ASABE.
Francisco Rovira-Más
Agricultural Robotics Laboratory, Universitat Politècnica de València, Valencia, Spain
Qin Zhang
Center for Precision & Automated Agricultural Systems, Washington State University, Prosser, Washington
Verónica Saiz-Rubio
Agricultural Robotics Laboratory, Universitat Politècnica de València, Valencia, Spain
Key Terms
Control systems Analog and digital data Auto-guided tractors
Actuators Positioning Variable-rate application
Sensors Vision and imaging Intelligent machinery
Introduction
Visitors to local farm fairs have a good chance of seeing old tractors. Curious visitors will notice that the oldest ones, say, those made in the first three decades of the 20th century, are purely mechanical. As visitors observe newer tractors, they may find that electronic and fluid-powered components appeared in those machines. Now, agricultural machinery such as tractors and combines is so sophisticated that it comes fully equipped with electronic controls and even fancy flat screens. These controls and screens are the driver's interface to the electromechanical components integrated into modern tractors.
The term mechatronics is used to refer to systems that combine computer controls, electrical components, and mechanical parts. A mechatronics solution is not just the addition of sensors and electronics to an already existing machine; rather, it is the balanced integration of all of them in such a way that each individual component enhances the performance of the others. This outcome is achieved only by considering all subsystems simultaneously at the earliest stages of design (Bolton, 1999). Thus, mechatronics unifies the technologies that underlie sensors, automatic control systems, computing processors, and the transmission of power through mechanisms including fluid power actuators.
During the 20th century, agricultural mechanization greatly reduced the drudgery of farm work while increasing productivity (more land farmed by fewer people), efficiency (less time and resources invested to farm the same amount of land), and work quality (reduced losses at harvesting, more precise chemical applications, uniform tillage). The Green Revolution, led by Norman Borlaug, increased productivity by introducing region-adapted crop varieties and effective fertilizers, which often resulted in yields doubling, especially in developing countries. With such improvements initiated by the Green Revolution, current levels of productivity, efficiency, and food quality may be sufficient to support a growing world population projected to surpass 9.5 billion by 2050, but the actual challenge is to do so sustainably by means of regenerative agriculture (Myklevy et al., 2016). This challenge is further complicated by the continuing decline of the farm workforce globally.
Current agricultural machinery, such as large tractors, sprayers, and combine harvesters, faces practical limits: these machines can be too big to travel rural roads comfortably, their powerful diesel engines are subject to increasingly restrictive emissions regulations, they are difficult to automate for liability reasons, and their weight degrades farm soil through wheel compaction. These challenges, and many others, may be overcome through the adoption of mechatronic technologies and intelligent systems on modern agricultural machinery. Mechanized farming has been adopting increasing levels of automation and intelligence to improve management and increase productivity in field operations. For example, farmers today can use auto-steered agricultural vehicles for many different field operations including tilling, planting, chemical applications, and harvesting. Intelligent machinery for automated thinning or precise weeding in vegetables and other crops has recently been introduced to farmers.
This chapter introduces the basic concepts of mechatronics and intelligent systems used in modern agricultural machinery, including farming robots. In particular, it briefly introduces a number of core technologies, key components, and typical challenges found in agricultural scenarios. The material presented in this chapter provides a basic introduction to mechatronics and intelligent technologies available today for field production applications, and a sense of the vast potential that these approaches have for improving worldwide mechanization of agriculture in the next decades.
Concepts
The term mechatronics applies to engineering systems that combine computers, electronic components, and mechanical parts. The concept of mechatronics is the seamless integration of these three subsystems; their embodiment in a single system leads to a mechatronic system. When a mechatronic system is endowed with techniques of artificial intelligence, it is further classified as an intelligent system, which is the basis of robots and intelligent farm machinery.
Automatic Control Systems
Machinery based on mechatronics needs to have control systems to implement the automated functions that accomplish the designated tasks. Mechatronic systems consist of electromechanical hardware and control software encoding the algorithm or model that automate an operation. An automatic control system obtains relevant information from the surrounding environment to manage (or regulate) the behavior of a device performing desired operations. A good example is a home air conditioner (AC) controller that uses a thermostat to determine the deviation of room temperature from a preset value and turn the AC on and off to maintain the home at the preset temperature. An example in agricultural machinery is auto-steering. Assume a small utility tractor has been modified to steer automatically between grapevine rows in a vineyard. It may use a camera looking ahead to detect the position of vine rows, such that deviations of the tractor from the centerline between vine rows are related to the proper steering angle for guiding the tractor in the vineyard without hitting a grapevine. From those two examples, it can be seen that a control system, in general, consists of sensors to obtain information, a controller to make decisions, and an actuator to perform the actions that automate an operation.
Actuation that relies on the continuous tracking of the variable under control (such as temperature or wheel angle) is called closed-loop control and provides a stable performance for automation. Closed-loop control allows the real-time estimation of the error (which is defined as the difference between the desired output of the controlled variable and the actual value measured by a feedback sensor), and calculates a correction command with a control function—the controller—for reducing the error. This command is sent to the actuator (discussed in the next section) for automatically implementing the correction. This controller function can be a simple proportion of the error (proportional controller, P), a measure of the sensitivity of change (derivative controller, D), a function dependent on accumulated (past) errors (integral controller, I), or a combination of two or three of the functions mentioned above (PD, PI, PID). There are alternative techniques for implementing automated controls, such as intelligent systems that use artificial intelligence (AI) methods like neural networks, fuzzy logic, genetic algorithms, and machine learning to help make more human-like control decisions.
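To make the idea concrete, the following short Python sketch implements a discrete PID loop of the kind described above. It is a minimal illustration, not the controller of any particular machine; the gains, the 10 Hz update rate, and the one-line plant response are all assumed values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated (past) errors -> I term
        self.prev_error = 0.0    # previous error -> D term

    def update(self, setpoint, measurement):
        error = setpoint - measurement            # desired minus measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # P + I + D correction command sent to the actuator
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative use: steer a 5-degree deviation back toward the centerline
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)        # assumed gains, 10 Hz
angle = 5.0                                        # initial deviation (deg)
for _ in range(5):
    command = pid.update(setpoint=0.0, measurement=angle)
    angle += 0.5 * command                         # toy plant response
    print(round(angle, 2))

Each pass computes the error, updates the integral and derivative terms, and issues a correction command; with ki = kd = 0 the code reduces to a plain proportional (P) controller.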
Actuators
An electromechanical component is an integrated part that receives an electrical signal to create a physical movement to drive a mechanical device performing a certain action. Examples of electromechanical components include electrical motors that convert input electrical current into the rotation of a shaft, and pulse-width modulation (PWM) valves, such as variable rate nozzles and proportional solenoid drivers, which receive an electrical signal to push the spool of a hydraulic control valve to adjust the valve opening that controls the amount of fluid passing through. Because hydraulic implement systems are widely used on agricultural machinery, it is common to see many more electrohydraulic components (such as proportional solenoid drivers and servo drivers) than electrical motors on farm machines. However, as robotic solutions become increasingly more available in agriculture, applications of electrical motors on modern agricultural machinery will probably increase, especially on intelligent and robotic versions. The use of mechatronic components lays the foundation for adopting automation technologies to agricultural machinery, including the conversion of traditional machines into robotic ones capable of performing field work autonomously.
Intelligent Agricultural Machinery and Agricultural Robots
For intelligent agricultural machinery to be capable of performing automated field operations, machines must have the abilities of: (1) becoming aware of actual operating conditions; (2) determining adaptive corrections suitable for continuously changing conditions; and (3) implementing such corrections during field operations, with the support of a proper mechanical system. The core for achieving such a capability often rests on the models that govern intelligent machinery, ranging from simple logic rules controlling basic tasks all the way to sophisticated AI algorithms for carrying out complex operations. These high-level algorithms may be developed using popular techniques such as artificial neural networks, fuzzy logic, probabilistic reasoning, and genetic algorithms (Russell and Norvig, 2003). As many of these intelligent machines can perform some field tasks autonomously, much as a human worker could, such machinery can also be referred to as robotic machinery. For example, when an autonomous lawn mower (Figure 3.4.1a) roams within a courtyard, it is typically endowed with basic navigation and path-planning skills that make it fit well into the category of robotic machinery, and therefore it is reasonable to consider it a field robot. Though these robotic machines are not presently replacing human workers in field operations, the widespread introduction of robotics in agriculture is only a matter of time. Figure 3.4.1b shows an autonomous rice transplanter (better called a rice transplanting robot) developed by the National Agriculture and Food Research Organization (NARO) of Japan.
(a)
(b)
Figure $1$: (a) Autonomous mower (courtesy of John Deere); (b) GPS-based autonomous rice transplanter (courtesy of NARO, Japan).
Many financial publications forecast rapid growth in the market for service robots over the next two decades, and those within agricultural applications will play a significant role. Figure 3.4.2 shows the expected growth of the U.S. market for agricultural robots by product type. Although robots for milking and dairy management have dominated the agricultural robot market in the last decade, crop production robots are expected to increase their commercial presence and lead the market in the coming years, particularly for specialty crop production (e.g., tree fruit, grapes, melons, nuts, and vegetables). This transformation of the 21st century farmer from laborer to digital-age manager may be instrumental in attracting younger generations to careers in agricultural production.
Sensors in Mechatronic Systems
Sensors are a class of devices that measure significant parameters by using a variety of physical phenomena. They are important components in a mechatronic system because they provide the information needed for supporting automated operations. While the data to be measured can be in many forms, sensors output the measured data either in analog or digital formats (described in the next section). In modern agricultural machinery, sensor outputs are eventually transformed to digital format and thus can be displayed on an LCD screen or fed to a computer. This high connectivity between sensors and computers has accelerated the expansion of machinery automation. An intelligent machine can assist human workers in conducting more efficient operations: in some cases, it will simply entail retrieving clearer or better information; in other cases, it will include the automation of physical functions. In almost all situations, the contribution of reliable sensors is needed for machines to interact with the surrounding environment. Figure 3.4.3 shows the architecture of an intelligent tractor, which includes the typical sensors onboard intelligent agricultural machinery.
Even though sensors collect the data required to execute a particular action, that may not be enough because the environment of agricultural production is often complicated by many factors. For example, illumination changes throughout the day, adverse weather conditions may impair the performance of sensors, and open fields are rife with uncertainty where other machines, animals, tools, and even workers may appear unexpectedly in the near vicinity. Sensed data alone may be insufficient to support a safe, reliable, and efficient automated operation; therefore, data processing techniques are necessary to extract information comprehensive enough to support automated operations. As a rule of thumb, there is no sensor that provides all the needed information, and there is no sensor that never fails. Depending on specific needs, engineers often use either redundancy or sensor fusion to solve such a problem. The former acquires the same information through independent sources in case one of them fails or decreases in reliability; the latter combines information from several complementary sources. Once the sensed information has been processed using either method, the actuation command can be calculated and then executed to complete the task.
Analog and Digital Data
As mentioned above, mechatronic systems often use sensors to obtain information to support automated operations. Sensors provide measurements of physical magnitudes (e.g., temperature, velocity, pressure, distance, and light intensity) represented by a quantity of electrical variables (such as voltages and currents). These quantities are often referred to as analog data and normally expressed in base 10, the decimal numbering system. In contrast, electronic devices such as controllers represent numbers in base 2 (the binary numbering system or simply “binary”) by adopting the on-off feature of electronics, with a numerical value of 1 assigned to the “on” state and 0 assigned to the “off” state.
A binary system uses a series of digits, limited to zeros and ones, to represent any decimal number. Each of these digits is called a bit (short for binary digit). The leftmost 1 in the binary number 1001 is called the most significant bit (MSB), and the rightmost 1 is the least significant bit (LSB). It is common practice in computer science to break long binary numbers into segments of 8 bits, known as bytes. There is a one-to-one correspondence between binary numbers and decimal numbers. For instance, a 4-bit binary number can represent all the positive decimal integers from 0 (represented by 0000) to 15 (represented by 1111). Signal digitization consists of finding that particular correspondence.
The process of transforming binary numbers to decimal numbers and vice versa is straightforward for positive decimal integers. However, negative and floating-point numbers require special techniques. While the transformation of data between the two formats is normally done automatically, it is important to know the underlying concept for a better understanding of how information can be corrected, processed, distributed, and utilized in intelligent machinery systems. The resolution of digital data depends on the number of bits, such that more bits mean more precision in the digitized measurement. Equation 3.4.1 yields the relationship between the number of bits (n) and the resulting number of digital levels available to code the signal (L). For example, using 4 bits leads to $2^{4}$ = 16 levels, which implies that an analog signal between 0 V and 2 V will have a resolution of 2/15 = 0.133 V; as a result, changes smaller than 133 mV will not be detected using 4-bit numbers. If more accuracy is necessary, digitization will have to use numbers with more bits. Note that Equation 3.4.1 is an exponential relationship rather than a linear one, so the number of quantization levels grows quickly with the number of bits: 4 bits produce 16 levels, but 8 bits give 256 levels, not 32 (32 levels would correspond to only 5 bits).
$L=2^{n}$
where L = number of digital levels in the quantization process
n = number of bits
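A two-line computation reproduces the numerical example above. This Python sketch simply evaluates Equation 3.4.1 and the resulting voltage resolution; the 0-2 V span is the one used in the text.

def quantization(n_bits, v_min, v_max):
    levels = 2 ** n_bits                          # L = 2^n (Equation 3.4.1)
    resolution = (v_max - v_min) / (levels - 1)   # volts per step
    return levels, resolution

for n in (4, 8, 12):
    levels, dv = quantization(n, 0.0, 2.0)
    print(n, "bits:", levels, "levels,", round(dv * 1000, 1), "mV per step")

For 4 bits the output confirms 16 levels and a 133.3 mV step, and it shows how quickly the resolution improves as bits are added.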
Position Sensing
One basic requirement for agricultural robots and intelligent machinery to work properly, reliably, and effectively is to know their location in relation to the surrounding environment. Thus, positioning capabilities are essential.
Global Navigation Satellite Systems (GNSS)
Global Navigation Satellite System (GNSS) is a general term describing any satellite constellation that provides positioning, navigation, and timing (PNT) services on a global or regional basis. While the USA Global Positioning System (GPS) is the most prevalent GNSS, other nations are fielding, or have fielded, their own systems to provide complementary, independent PNT capability. Other systems include Galileo (Europe), GLONASS (Russia), BeiDou (China), IRNSS/NavIC (India), and QZSS (Japan).
When the U.S. Department of Defense released the GPS technology for civilian use in 2000, it triggered the growth of satellite-based navigation for off-road vehicles, including robotic agricultural machinery. At present, most leading manufacturers of agricultural machinery include navigation assistance systems among their advanced products. As of 2019, only GPS (USA) was fully operational, but the latest generation of receivers can already expand the GPS constellation with other GNSS satellites.
GPS receivers output data through a serial port by sending a number of bytes encoded in a standard format that has gained general acceptance: NMEA 0183. The NMEA 0183 interface standard was created by the U.S. National Marine Electronics Association (NMEA), and consists of GPS messages in text (ASCII) format that include information about time, position in geodetic coordinates (i.e., latitude (λ), longitude (φ), and altitude (h)), velocity, and signal precision. The World Geodetic System 1984 (WGS 84), developed by the U.S. Department of Defense, defines an ellipsoid of revolution that models the shape of the earth, and upon which the geodetic coordinates are defined. Additionally, the WGS 84 defines a Cartesian coordinate system fixed to the earth and with its origin at the center of mass of the earth. This system is the earth-centered earth-fixed (ECEF) coordinate system, and it provides an alternative way to locate a point on the earth surface with the conventional three Cartesian coordinates X, Y, and Z, where the Z-axis coincides with the earth’s rotational axis and therefore crosses the earth’s poles.
The majority of the applications developed for agricultural machinery, however, do not require covering large surfaces in a short period of time. Therefore, the curvature of the earth has a negligible effect, and most farm fields can be considered flat for practical purposes. A local tangent plane coordinate system (LTP), also known as NED coordinates, is often used to facilitate such small-scale operations with intuitive global coordinates north (N), east (E), and down (D). These coordinates are defined along three orthogonal axes in a Cartesian configuration generated by fitting a tangent plane to the surface of the earth at an arbitrary point selected by the user and set as the LTP origin. Given that standard receivers provide geodetic coordinates (λ, φ, h) but practical field operations require a local frame such as LTP, a fundamental operation for mapping applications in agriculture is the real-time transformation between the two coordinate systems (Rovira-Más et al., 2010). Equations 3.4.2 to 3.4.8 provide the step by step procedure for achieving this transformation.
$a = 6378137$
$e = 0.0818$
$N_{0}(\lambda)=\frac{a}{\sqrt{1-e^{2} \cdot sin^{2}\lambda}}$
$X=(N_{0}+h) \cdot cos\lambda \cdot cos\phi$
$Y=(N_{0}+h) \cdot cos\lambda \cdot sin\phi$
$Z=[h+N_{0} \cdot (1-e^{2})] \cdot sin\lambda$
$\begin{bmatrix} N \\ E \\ D \end{bmatrix} = \begin{bmatrix} -sin\lambda \cdot cos\phi & -sin\lambda \cdot sin\phi & cos\lambda \\ -sin\phi & cos\phi & 0 \\ -cos\lambda \cdot cos\phi & -cos\lambda \cdot sin\phi & -sin\lambda \end{bmatrix} \cdot \begin{bmatrix} X-X_{0} \\ Y-Y_{0} \\ Z-Z_{0} \end{bmatrix}$
where a = semi-major axis of WGS 84 reference ellipsoid (m)
e = eccentricity of WGS 84 reference ellipsoid
N0 = length of the normal (m)
Geodetic coordinates:
λ= latitude (°)
φ = longitude (°)
h = altitude (m)
(X, Y, Z) = ECEF coordinates (m)
(X0, Y0, Z0) = user-defined origin of coordinates in ECEF format (m)
(N, E, D) = LTP coordinates north, east, down (m)
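For readers who prefer code to matrix notation, the following Python sketch is one possible direct translation of Equations 3.4.2 through 3.4.8; the test origin and point are illustrative coordinates in the neighborhood of those used later in Table 3.4.2.

import math

A_WGS84 = 6378137.0          # semi-major axis a (m), Equation 3.4.2
E_WGS84 = 0.0818             # eccentricity e, Equation 3.4.3

def geodetic_to_ecef(lat_deg, lon_deg, h):
    # Equations 3.4.4 to 3.4.7: geodetic (deg, deg, m) to ECEF (m)
    lam, phi = math.radians(lat_deg), math.radians(lon_deg)
    n0 = A_WGS84 / math.sqrt(1.0 - (E_WGS84 * math.sin(lam)) ** 2)
    x = (n0 + h) * math.cos(lam) * math.cos(phi)
    y = (n0 + h) * math.cos(lam) * math.sin(phi)
    z = (h + n0 * (1.0 - E_WGS84 ** 2)) * math.sin(lam)
    return x, y, z

def geodetic_to_ltp(point, origin):
    # Equation 3.4.8: rotate the ECEF offset into north, east, down (m)
    x, y, z = geodetic_to_ecef(*point)
    x0, y0, z0 = geodetic_to_ecef(*origin)
    lam, phi = math.radians(origin[0]), math.radians(origin[1])
    dx, dy, dz = x - x0, y - y0, z - z0
    n = -math.sin(lam) * math.cos(phi) * dx - math.sin(lam) * math.sin(phi) * dy + math.cos(lam) * dz
    e = -math.sin(phi) * dx + math.cos(phi) * dy
    d = -math.cos(lam) * math.cos(phi) * dx - math.cos(lam) * math.sin(phi) * dy - math.sin(lam) * dz
    return n, e, d

# Illustrative check: a point 0.009 deg (roughly 1 km) north of the origin
origin = (39.483, -0.338, 4.2)
print(geodetic_to_ltp((39.492, -0.338, 4.2), origin))   # roughly (1000, 0, 0)

Note that, following the chapter's notation, λ is the latitude and φ the longitude.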
Despite the high accessibility of GPS information, satellite-based positioning is affected by a variety of errors, some of which cannot be totally eliminated. Fortunately, a number of important errors may be compensated by using a technique known as differential correction, lowering errors from more than 10 m to about 3 m. Furthermore, the special case of real-time-kinematic (RTK) differential corrections may further lower error to just centimeter level.
Sonar Sensors
In addition to locating machines in the field, another essential positioning need for agricultural robots is finding the position of surrounding objects during farming operations, such as target plants or potential obstacles. Ultrasonic rangefinders are sensing devices used successfully for this purpose. Because they measure the distance of target objects in terms of the speed of sound, these sensors are also known as sonar sensors.
The underlying principle of sonars is that the speed of sound is known (343 m s−1 at 20°C), and measuring the time that the wave needs to hit an obstacle and return to the sensor—the echo—allows the estimation of an object’s distance. The speed of sound through air, V, depends on the ambient temperature, T, as:
$V(m\ s^{-1})=331.3 + 0.606 \times T(^\circ C)$
The continuously changing ambient temperature in agricultural fields is one of many challenges to sonar sensors. Another challenge is the diversity of target objects. In practice, sonar sensors must send out sound waves that hit an object and then return to the sensor receiver. This receiver must then capture the signal to measure the elapsed time for the waves to complete the round trip. Understanding the limitations posed by the reflective properties of target objects is essential to obtain reliable results. The distance to materials that absorb sound waves, such as stuffed toys, will be measured poorly, whereas solid and dense targets will allow the system to perform well. When the target object is uneven, such as crop canopies, the measurements may become noisy. Also, sound waves do not behave as linear beams, but propagate in irregular cones that expand in coverage with distance. When objects are outside the cone, they may be undetected. Errors will often vary with ranges such that farther ranges lead to larger errors.
An important design feature to consider is the distance between adjacent ultrasonic sensors, as echo interference is another source of unstable behavior. Overall, sonar rangefinders are helpful to estimate short distances cost-efficiently when accuracy and reliability are not critical, as when detecting distances to the canopy of trees for automated pesticide spraying.
Light Detection and Ranging (Lidar) Sensors
Another common position-detecting sensor is lidar, which stands for light detection and ranging. Lidars are optical devices that detect the distance to target objects with precision. Although different light sources can be used to estimate ranges, most lidar devices use laser pulses because their beam density and coherency result in high accuracy.
Lidars possess specific features that make them favorable for field robotic applications: sunlight does not affect them unless it strikes the emitter directly, and they perform well under poor illumination.
Machine Vision and Imaging Sensors
One important element of human intelligence is vision, which gives farmers the capability of visual perception. A basic requirement for intelligent agricultural machinery (or agricultural robots) is to have a comparable awareness of the surroundings. Machine vision is the computer version of the farmer's sight; the cameras function as eyes and the computers as the brain. The output data of vision systems are digital images. A digital image consists of little squares called pixels (picture elements) that carry information on their level of light intensity. Most of the digital cameras used on agricultural robots are CCDs (charge-coupled devices), which are composed of a small rectangular sensor made of a grid of tiny light-sensitive cells, each of them producing the information of its corresponding pixel in the image. If the image is in black and white (technically called monochrome), the intensity level is represented on a gray scale between a minimum value (0) and a maximum value (imax). The number of levels in the gray scale depends on the number of bits in which the image is coded. Most of the images used in agriculture are 8 bits, which means that the image can distinguish 256 gray levels ($2^{8}$), where the minimum value 0 represents complete black and the maximum value 255 represents pure white. In practical terms, human eyes cannot distinguish so many levels, and 8 bits are usually more than enough. When digital images reproduce a scene in color, pixels carry information of intensity levels for the three channels of red (R), green (G), and blue (B), leading to RGB images. The processing of RGB images is more complicated than that of monochrome images and falls outside the scope of this chapter.
Monocular cameras (which have one lens) constitute simple vision systems, yet the information they retrieve is powerful. When selecting a camera, engineers must choose important technical parameters such as the focal length of the lens, the size of the sensor, and optical filters when there are spectral ranges (colors) that need to be blocked from the image. The focal length (f) is related to the scope of scene that fits into the image, and is defined in Equation 3.4.10. The geometrical relationship described by Figure 3.4.4 and Equation 3.4.11 determines the resulting field of view (FOV) of any given scene. The design of a machine vision system, therefore, must include the right camera and lens parameters to assure that the necessary FOV is covered and the target objects are in focus in the images.
$\frac{1}{f} = \frac{1}{d_{1}}+\frac{1}{d_{2}}$
$\frac{d_{1}}{d_{2}} = \frac{A}{FOV}$
where f = lens focal length (mm)
d1 = distance between the imaging sensor and the optical center of the lens (mm)
d2 = distance between the optical center of the lens and the target object (mm)
A = horizontal dimension of the imaging sensor (mm)
FOV = horizontal field of view covered in the images (mm)
After taking the images, the first step of the process (image acquisition) is complete. The second step, analysis of the images, begins with image processing, which involves the delicate task of extracting the useful information from each image for later use. Figure 3.4.5 reproduces the results of a color-based segmentation algorithm to find the position of mandarin oranges in a citrus tree.
Even though digital images reproduce scenes with great detail, the representation is flat, that is, in two dimensions (2D). However, real scenes are in three dimensions (3D), with the third dimension being the depth, or distance between the camera and the objects of interest in the scene. In the image shown in Figure 3.4.5, for instance, a specific orange can be located with precision along the horizontal and vertical axes, but how far it is from the sensor cannot be known. This information would be essential, for example, to program a robotic arm to retrieve the oranges. Stereo cameras (cameras with at least two lenses that meet the principles of stereoscopy) allow the acquisition of two (or more) images in a known relative position to which the principles of stereoscopic vision apply. These principles mimic how human vision works: the images captured by the human eyes in the retinas are slightly offset, and this offset (known as disparity) is what allows the brain to estimate depth.
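Although stereo processing itself is beyond this chapter, the basic geometry is compact. Under the standard pinhole stereo model (a general result, not derived here), depth is inversely proportional to disparity: Z = f·B/d, with the focal length f expressed in pixels, the baseline B (separation between the two lenses) in meters, and the disparity d in pixels. A minimal sketch with illustrative numbers:

def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Pinhole stereo relation: Z = f * B / d
    return f_px * baseline_m / disparity_px

for d in (10, 20, 40):                # disparity in pixels (illustrative)
    print(d, "px ->", round(depth_from_disparity(700.0, 0.12, d), 2), "m")

The output shows that nearby objects produce large disparities and distant objects small ones, which is why stereo depth resolution degrades with range.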
Estimation of Vehicle Dynamic States
The parameters that help understand a vehicle's dynamic behavior are known as the vehicle states, and typically include velocity, acceleration, sideslip, and the angular rates yaw, pitch, and roll. The sensors needed for such measurements are commonly assembled in a compact motion sensor called an inertial measurement unit (IMU), created from a combination of accelerometers and gyroscopes. The accelerometers of the IMU detect acceleration as the change in velocity of the vehicle over time. Once the acceleration is known, its mathematical integration gives an estimate of the velocity, and integrating again gives an estimate of the position. Equation 3.4.12 allows the calculation of instantaneous velocities from the acceleration measurements of an IMU or any individual accelerometer. Notice that for finite increments of time ∆t, the integral function is replaced by a summation. Similarly, gyroscopes can detect the angular rates of the turning vehicle; integrating these values leads to roll, pitch, and yaw angles, as specified by Equation 3.4.13. A typical IMU is composed of three accelerometers and three gyroscopes assembled along three perpendicular axes that reproduce a Cartesian coordinate system. With this physical configuration, it is possible to calculate the three components of acceleration and speed in Cartesian coordinates as well as the Euler angles roll, pitch, and yaw. Current IMUs on the market are small and inexpensive, favoring the accurate estimation of vehicle states with small devices such as microelectromechanical systems (MEMS).

$V_{t}=V_{t-1}+a_{t} \cdot \Delta t$

$\Theta_{t}=\Theta_{t-1}+\dot{\Theta}_{t} \cdot \Delta t$
where Vt = velocity of a vehicle at time t (m s−1)
at = linear acceleration recorded by an accelerometer (or IMU) at time t (m s−2)
∆t = time interval between two consecutive measurements (s)
$\Theta_{t}$ = angle at time t (rad)
$\dot{\Theta}$ = angular rate at time t measured by a gyroscope (rad s−1)
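The summations of Equations 3.4.12 and 3.4.13 translate into a short loop. The Python sketch below integrates a list of accelerometer readings into velocities; the readings and the 1 s time step are illustrative, and the same function works for gyroscope rates integrated into angles.

def integrate(samples, dt, initial=0.0):
    # V_t = V_(t-1) + a_t * dt (or the equivalent update for angles)
    value, history = initial, []
    for s in samples:
        value += s * dt
        history.append(value)
    return history

accel = [0.18, 0.09, 0.09, 0.08, -0.09]       # m/s^2, illustrative readings
print([round(v, 2) for v in integrate(accel, dt=1.0)])
# [0.18, 0.27, 0.36, 0.44, 0.35] -- the vehicle speeds up, then slows down

Because each step adds a small measurement error, integrated estimates drift over time; this is one reason vehicle states are usually estimated by fusing an IMU with other sensors such as GNSS.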
Applications
Mechatronic Systems in Auto-Guided Tractors
As mentioned earlier in this chapter, mechatronic systems are now playing an essential role in modern agricultural machinery, especially on intelligent and robotic vehicles. For example, the first auto-guided tractors hit the market at the turn of the 21st century; from a navigation standpoint, farm equipment manufacturers have been about two decades ahead of the automotive industry. Such auto-guided tractors would not be possible without state-of-the-art mechatronic systems, which include sensing, controlling, and electromechanical (or electrohydraulic) actuating elements. One of the most representative components, never seen as an integrated element on conventional mechanical tractors, is the high-precision GPS receiver, which gives tractors the capability to locate themselves and follow designated paths.
Early navigation solutions that were commercially available did not actually control the tractor steering system; rather, they provided tractor drivers with lateral corrections in real time, such that by following these corrections the vehicle easily tracked a predefined trajectory. This approach is easy to learn and execute, as drivers only need to follow a light-bar indicator, where the number of lights turned on is proportional to the sideways correction needed to keep the vehicle on track. In addition to its simplicity of use, this system works for any agricultural machine, including older ones. Figure 3.4.6a shows a light-bar system mounted on an orchard tractor, where the red light signals the user to make an immediate correction to remain within the trajectory shown on the LCD screen.
Another essential improvement of modern tractors based on mechatronics technology is the electrohydraulic system that allows tractors to be maneuvered by wire. This means that an operation of the tractor, such as steering or lowering the implement installed on the three-point hitch, can be accomplished by an electronically controlled electrohydraulic actuating system in response to control signals generated by a computer-based controller. An electrohydraulic steering system allows a tractor to be guided automatically, by executing navigation commands calculated by an onboard computer based on received GPS positioning signals. One popular auto-steering application is known as parallel tracking, which allows a tractor being driven automatically to follow desired pathways parallel to a reference line between two points, say an A-B line, recorded in a field by the onboard GPS system. These reference lines can even include curved sectors. Figure 3.4.6b displays the control screen of a commercial auto-guidance system implemented in a wheel-type tractor. Notice that the magnitude of the tractor deviation (the off-track error) from the predefined trajectory is shown in the top bar, in a fashion similar to the corrections conveyed through light-bars. The implementation of automatic guidance has reduced pass-to-pass overlaps, especially with large equipment, resulting in significant savings in seeds, fertilizer, and phytosanitary chemicals as well as reduced operator fatigue. Farmers are seeing returns on investment in just a few years.
(a)
(b)
Figure $6$: Auto-guidance systems: (a) Light-bar kit; (b) Parallel tracking control screen, where A is the path accuracy indicator, B is the off-track error, C represents the guidance icon, D provides the steering sensitivity, E mandates steer on/off, F locates the shift track buttons, G is the GPS status indicator, H is the A-B (0) track button, and I shows the track number.
Automatic Control of Variable-Rate Applications
The idea of variable rate application (VRA) is to apply the right amount of input, i.e., seeds, fertilizers, and pesticides, at the right time and at sub-plot precision, moving away from average rates per plot that result in economic losses and environmental threats. Mechatronics enables the practical implementation of VRA for precision agriculture (PA). Generally speaking, state-of-the-art VRA equipment requires three key mechatronic components: (1) sensors, (2) controllers, and (3) actuators.
Sub-plot precision is feasible with GPS receivers that provide the instantaneous position of farm equipment at a specific location within a field. In addition, vehicles require the support of an automated application controller to deliver the exact amount of product. The specific quantity of product to be applied at each location is commonly provided by either a prescription map preloaded to the vehicle’s computer, or alternatively, estimated in real time using onboard crop health sensors.
There are specific sensors that must be part of VRA machines. For example, for intelligent sprayers to be capable of automatically adapting the rate of pesticide to the properties of trees, global and local positioning in the field or related to crops is required. Fertilizers, on the other hand, may benefit from maps of soil parameters (moisture, organic matter, nutrients), as well as vegetation (vigor, stress, weeds, temperature). In many modern sprayers, pressure and flow of applied resources (either liquid or gaseous) must be tracked to support automatic control and eventually achieve a precise application rate. Controllers are the devices that calculate the optimal application rate on the fly and provide intelligence to the mechatronics system. They often consist of microcontrollers reading sensor measurements or loaded maps to calculate the instantaneous rate of product application based on internal algorithms. This rate is continuously sent to actuators for the physical application of product. Controllers may include small monitoring displays or switches for manual actuation from the operator cabin, if needed. Actuators are electromechanical or electrohydraulic devices that receive electrical signals from the controllers to regulate the amount of product to apply. This regulation is usually achieved by varying the rotational speed of a pump, modifying the flow coming from a tank, or changing the settings of a valve to adjust the pressure or flow of the product. Changing the pressure of sprayed liquids, however, results in a change of the droplet size, which is not desirable for pest control. In these cases, the use of smart nozzles that are controlled through PWM signals is recommended.
As VRA technology is progressing quickly, intelligent applicators are becoming available commercially, mainly for commodity crops. An intelligent system can automatically adjust the amount of inputs dispersed in response to needs, even permitting the simultaneous use of several kinds of treatments, resulting in new ways of managing agricultural production. For example, an intelligent VRA seeder has the ability to change the number of seeds planted in the soil according to soil potential, either provided by prescription maps or detected using onboard sensors. Control of the seeding rate is achieved by actuating the opening of the distributing device to allow the desired number of seeds to go through.
In many cases, a feedback control system is required to achieve accurate control of the application rate. For example, in applying liquid chemicals, the application rate may be affected by changes in the moving speed of the vehicle as well as by environmental conditions. Some smart sprayers are programmed to accurately control the amount of liquid chemical by adjusting the nozzles in response to changes of sprayer forward speed, measured, for instance, with a GPS receiver. This is normally accomplished using electronically controlled nozzle valves commanded from the onboard processor. Such a mechatronic system could additionally monitor the pressure and flow in the distribution circuit, and even compensate for changes in the amount of liquid exiting the nozzles resulting from pressure or flow pattern changes in the circuit.
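As a concrete illustration of speed compensation, the following sketch uses a common sprayer relation (not taken from this chapter): the nozzle flow in L/min equals the application rate in L/ha times the ground speed in km/h times the boom width in m, divided by 600. The rate, boom width, and speeds below are assumed values.

def target_flow(rate_l_ha, speed_kmh, boom_width_m):
    # Q (L/min) = rate (L/ha) * speed (km/h) * width (m) / 600
    return rate_l_ha * speed_kmh * boom_width_m / 600.0

# A controller loop would re-read the ground speed (e.g., from GPS) and
# steer the valve or PWM nozzles toward the new target flow.
for speed in (6.0, 8.0, 10.0):                 # km/h, illustrative
    print(speed, "km/h ->", target_flow(200.0, speed, 18.0), "L/min")

At 200 L/ha over an 18 m boom, the target flow rises from 36 to 60 L/min as the sprayer speeds up from 6 to 10 km/h, which is exactly the adjustment the feedback loop must deliver.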
Redesigning a Tractor Steering System with Electrohydraulic Components
Implementing auto-guidance capabilities in a tractor requires that the steering system can be controlled electrically for automated turning of the front wheels. Therefore, it is necessary to replace a traditional hydraulic steering system with an electrohydraulic system. This could be accomplished simply by replacing a conventional manually actuated steering control valve (Figure 3.4.7a) with an electrohydraulic control system. Such a system (Figure 3.4.7b) consists of a rotary potentiometer to track the motion of the steering wheel, an electronic controller to convert the steering signal into a control signal, and a solenoid-driven electrohydraulic control valve to implement the delivered control signal.
The upgraded electrohydraulic steering system can receive control signals from a computer controller that creates appropriate steering commands from the outputs of an auto-guidance system, making autonomous navigation possible without input from a human driver. As the major components of an electrohydraulic system are connected by wires, such an operation is also called "actuation by wire."
Use of Ultrasonic Sensors for Measuring Ranges
Agricultural machinery often needs to be “aware” of the position of objects in the vicinity of farming operations, as well as the position of the machinery. Ultrasonic sensors are often used to perform such measurements.
In order to use an ultrasonic (or sonar) sensor, a microprocessor is often needed to convert the analog signals (which are in the range of 0–5 V) from the ultrasonic sensor to digital signals, so that the recorded data can be further used by other components of automated or robotic machinery. As an example, consider the HC-SR04, which consists of a sound emitter and an echo receiver, and measures the time elapsed between a sound wave being sent by the emitter and its return from the targeted object. The speed of sound is approximately 330 m·s−1, which means that sound needs about 3 s to travel 1,000 m. The HC-SR04 sensor can measure ranges up to 4.0 m; hence, the time measurements are on the order of milliseconds, or microseconds for very short ranges. The sound must travel through the air, and the speed of sound depends on environmental conditions, mainly the ambient temperature. If this sensor is used on a hot summer day with an average temperature of 35°C, for example, using Equation 3.4.9, the corrected sound speed will be slightly higher, at 352 m·s−1.
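A minimal range computation based on Equation 3.4.9 looks as follows in Python; the 5.8 ms echo time is a hypothetical reading, and the division by two accounts for the round trip of the sound wave.

def sound_speed(temp_c):
    return 331.3 + 0.606 * temp_c                    # Equation 3.4.9, m/s

def echo_to_range(echo_time_s, temp_c=20.0):
    return sound_speed(temp_c) * echo_time_s / 2.0   # one-way distance (m)

print(round(echo_to_range(5.8e-3, 35.0), 2))   # about 1.02 m on a 35 degC day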
Figure 3.4.8 shows how the sensor was connected to and powered by a commercial product (an Arduino Uno microcontroller board, for illustration purposes) in a laboratory setup (also for illustration). After completing all the wiring of the system as shown in Figure 3.4.8, it is necessary to select an unused USB port and any of the default baud rates in the interfacing computer. If the baud rate and serial port are properly set in a computer with a display console, and the measured ranges have been set via software at an updating frequency of 1 Hz, the system will perform one measurement per second. After the system has been set up, it is important to check its accuracy and robustness by moving the target object in the space ahead of the sensor.
Examples
Example $1$
Example 1: Digitization of analog signals
Problem:
Mechatronic systems require sensors to monitor the performance of automated operations. Analog sensors are commonly used for such tasks. A mechatronics-based steering mechanism uses a linear potentiometer to estimate the steering angle of an auto-guided tractor, outputting an analog signal in volts as the front wheels rotate. To make the acquired data usable by a computerized system to automate steering, it is necessary to convert the analog data to digital format.
Given the analog signal coming from a steering potentiometer, digitize the signal using 4 bits of resolution by following these steps.
1. Calculate the number of levels coded by the 4-bit signal, taking into account that the minimum voltage output by the potentiometer is 1.2 V and the maximum voltage is limited to 4.7 V, i.e., any reading coming from the potentiometer will belong to the interval 1.2 V–4.7 V. How many steps comprise this digital signal?
2. Establish a correspondence between the analog readings within the interval and each digital level from 0000 to 1111, drafting a table to reflect the correlation between signals.
3. Plot both signals overlaid to graphically depict the effect of digitizing a signal and the loss of accuracy behind the process. According to the plot, what would be the digital value corresponding to a potentiometer reading of 4.1 V?
Solution
The linear potentiometer has a rod whose position varies from retraction (1.2 V) to full extension (4.7 V). Any rod position between both extremes will correspond to a voltage in the range 1.2 V–4.7 V. The number of levels L encoded in the signal for n = 4 bits is calculated using Equation 3.4.1:
$L=2^{n}=2^{4} = \textbf{16 levels}$
Thus, the number of steps between the lowest digital number 0000 and the highest 1111 is 15 intervals. Table 3.4.1 specifies each digital value coded by the 4-bit signal, taking into account that the size of each interval ∆V is set by:
Table $1$: Digitization of an analog signal with 4 bits between 1.2 V and 4.7 V.
4-Bit Digital Signal Analog Equivalence (V)
1111 4.70000
1110 4.46666
1101 4.23333
1100 4.00000
1011 3.76666
1010 3.53333
1001 3.30000
1000 3.06666
0111 2.83333
0110 2.60000
0101 2.36666
0100 2.13333
0011 1.90000
0010 1.66666
0001 1.43333
0000 1.20000
$\Delta V = (4.7 - 1.2)/15 = 3.5/15 = 0.233 V$
A potentiometer reading of 4.1 V belongs to the interval between [4.000, 4.233], that is, greater or equal to 4 V and less than 4.233 V, which according to Table 3.4.1 corresponds to 1101. Differences below 233 mV will not be registered with a 4-bit signal. However, by increasing the number of bits, the error will be diminished and the “stairway” profile of Figure 3.4.9 will get closer and closer to the straight line joining 1.2 V and 4.7 V.
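The assignment of a reading to its digital code can also be scripted. The sketch below follows the convention of this example, in which a reading inside an interval is assigned to the code at the interval's upper end (so 4.1 V maps to 1101).

import math

def digitize(v, v_min=1.2, v_max=4.7, bits=4):
    steps = 2 ** bits - 1                      # 15 intervals between 16 levels
    dv = (v_max - v_min) / steps               # 0.233 V per interval
    level = min(math.ceil((v - v_min) / dv), steps)
    return format(level, "0{}b".format(bits))

print(digitize(4.1))                           # '1101', matching Table 3.4.1
print(digitize(4.7))                           # '1111', the maximum code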
Example $2$
Example 2: Transformation of GPS coordinates
Problem:
A soil-surveying robot uses a GPS receiver to locate sampling points forming a grid in a field. Those points constitute the reference base for several precision farming applications related to the spatial distribution of soil properties such as compactness, pH, and moisture content. The location data (Table 3.4.2) provided by the GPS receiver is in a standard NMEA code format. Transform the data (i.e., the geodetic coordinates provided by a handheld GPS receiver) to the local tangent plane (LTP) frame to be more directly useful to farmers.
Solution
The first step in the transformation process requires the selection of a reference ellipsoid. Choose the WGS 84 reference ellipsoid because it is widely used for agricultural applications. Use Equations 3.4.2 to 3.4.7 and apply the transform function (Equation 3.4.8) to the 23 points given in geodetic coordinates (Table 3.4.2) to convert them into LTP coordinates. For that reference ellipsoid,
$a = \text{semi-major axis of WGS 84 reference ellipsoid} = 6378137 \text{ m}$
$e = \text{eccentricity of WGS 84 reference ellipsoid} = 0.0818$
$N_{0}(\lambda) = \frac{a}{\sqrt{1-e^{2} \cdot sin^{2}\lambda}}$ (Equation $4$)
$X=(N_{0}+h) \cdot cos\lambda \cdot cos\phi$ (Equation $5$)
$Y= (N_{0}+h) \cdot cos\lambda \cdot sin\phi$ (Equation $6$)
$Z = [h+N_{0} \cdot (1-e^{2})] \cdot sin\lambda$ (Equation $7$)
$\begin{bmatrix} N \\ E \\ D \end{bmatrix} = \begin{bmatrix} -sin\lambda \cdot cos\phi & -sin\lambda \cdot sin\phi & cos\lambda \\ -sin\phi & cos\phi & 0 \\ -cos\lambda \cdot cos\phi & -cos\lambda \cdot sin\phi & -sin\lambda \end{bmatrix} \cdot \begin{bmatrix} X-X_{0} \\ Y-Y_{0} \\ Z-Z_{0} \end{bmatrix}$ (Equation $8$)
The length of the normal N0 is the distance from the surface of the ellipsoid of reference to its intersection with the rotation axis and [λ, φ, h] is a point in geodetic coordinates recorded by the GPS receiver; [X, Y, Z] is the point transformed to ECEF coordinates (m), with [X0, Y0, Z0] being the user-defined origin of coordinates in ECEF; and [N, E, D] is the point being converted in LTP coordinates (m).
Table $2$: GPS geodetic coordinates of field points.
Point Latitude (°) Latitude (min) Longitude (°) Longitude (min) Altitude (m)
Origin 39 28.9761 0 −20.2647 4.2
1 39 28.9744 0 −20.2539 5.1
2 39 28.9788 0 −20.2508 5.3
3 39 28.9827 0 −20.2475 5.9
4 39 28.9873 0 −20.2431 5.6
5 39 28.9929 0 −20.2384 4.8
6 39 28.9973 0 −20.2450 5.0
7 39 28.9924 0 −20.2500 5.2
8 39 28.9878 0 −20.2557 5.2
9 39 28.9832 0 −20.2593 5.4
10 39 28.9792 0 −20.2626 5.2
11 39 28.9814 0 −20.2672 4.8
12 39 28.9856 0 −20.2638 5.5
13 39 28.9897 0 −20.2596 5.5
14 39 28.9941 0 −20.2542 5.0
15 39 28.9993 0 −20.2491 5.0
16 39 29.0024 0 −20.2534 5.1
17 39 28.9976 0 −20.2590 4.9
18 39 28.9929 0 −20.2643 4.9
19 39 28.9883 0 −20.2695 4.9
20 39 28.9846 0 −20.2738 4.8
21 39 28.9819 0 −20.2770 4.7
22 39 28.9700 0 −20.2519 4.5
MATLAB® can provide a convenient programming environment to transform the geodetic coordinates to a flat frame, and save them in a text file. Table 3.4.3 summarizes the results as they would appear in a MATLAB® (.m) file.
These 23 survey points can be plotted in a Cartesian frame East-North (namely in LTP coordinates) to see their spatial distribution in the field, with East and North axes oriented as shown in Figure 3.4.10.
A crucial advantage of using flat coordinates like LTP is that Euclidean geometry can be extensively used to calculate distances, areas, and volumes. For example, to calculate the total area covered by the surveyed grid, split the resulting trapezoid into two irregular triangles (Figure 3.4.11), one defined by the points A-B-C, and the other by the three points A-B-D. Apply Euclidean geometry to calculate the area of an irregular triangle from the measurement of its three sides using the equation:
Table $3$: LTP coordinates for the field surveyed with a GPS receiver.
Point East (m) North (m) Down (m)
Origin 0 0 0
1 15.5 −3.1 −0.9
2 19.9 5.0 −1.1
3 24.7 12.2 −1.7
4 31.0 20.7 −1.4
5 37.7 31.1 −0.6
6 28.2 39.2 −0.8
7 21.1 30.2 −1.0
8 12.9 21.6 −1.0
9 7.7 13.1 −1.2
10 3.0 5.7 −1.0
11 −3.6 9.8 −0.6
12 1.3 17.6 −1.3
13 7.3 25.2 −1.3
14 15.1 33.3 −0.8
15 22.4 42.9 −0.8
16 16.2 48.7 −0.9
17 8.2 39.8 −0.7
18 0.6 31.1 −0.7
19 −6.9 22.6 −0.7
20 −13.0 15.7 −0.6
21 −17.6 10.7 −0.5
22 18.3 −11.3 −0.3
$\text{Area} = \sqrt{K \cdot (K-a) \cdot (K-b)\cdot (K-c)}$ (Equation $14$)
where a, b, and c are the lengths of the three sides of the triangle, and $K=\frac{a+b+c}{2}$.
The distance between two points A and B can also be determined by the following equation:
$L_{A-B} = \sqrt{(E_{A}-E_{B})^{2} +(N_{A}-N_{B})^{2}}$ (Equation $15$)
where LA-B = Euclidean distance (straight line) between points A and B (m)
[EA, NA] = the LTP coordinates east and north of point A (m)
[EB, NB] = the LTP coordinates east and north of point B (m), calculated in Table 3.4.3.
Using the area equation, the areas of the two triangles presented in Figure 3.4.11 are determined as 627 m2 for the yellow triangle (ADB) and 1,054 m2 for the green triangle (ABC), with a total area of 1,681 m2. The corresponding Euclidean distances are 50.9 m, 42.1 m, 60.0 m, 27.8 m, and 46.6 m, respectively, for LA-C, LC-B, LA-B, LA-D, and LD-B, as:
$L_{A-B} = \sqrt{(E_{A}-E_{B})^{2} +(N_{A}-N_{B})^{2}}=\sqrt{(16.2-18.3)^{2} +(48.7-(-11.3))^{2}} = 60.0 \text{ m}$
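These calculations can be scripted in a few lines of Python. The corner coordinates below are taken from Table 3.4.3; matching them against the distances quoted above indicates that A, B, C, and D correspond to points 16, 22, 21, and 5, respectively (an inference from the numbers, since Figure 3.4.11 is not reproduced here).

import math

def dist(p, q):
    # Equation 3.4.15 in the East-North plane (m)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def heron(a, b, c):
    # Equation 3.4.14: triangle area from its three side lengths
    k = (a + b + c) / 2.0
    return math.sqrt(k * (k - a) * (k - b) * (k - c))

A, B = (16.2, 48.7), (18.3, -11.3)     # points 16 and 22 of Table 3.4.3
C, D = (-17.6, 10.7), (37.7, 31.1)     # points 21 and 5 of Table 3.4.3
ab = dist(A, B)                        # 60.0 m, the side shared by both triangles
area = heron(dist(A, D), dist(D, B), ab) + heron(dist(A, C), dist(C, B), ab)
print(round(ab, 1), "m;", round(area), "m2")
# 60.0 m; 1680 m2 (the text's 627 + 1054 = 1,681 m2 reflects rounded side lengths)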
We have not said anything about the Z direction of the field, but the Altitude column in Table 3.4.2 and the Down column in Table 3.4.3 both suggest that the field is quite flat, as the elevation of the points over the ground does not vary by much along the 22 points.
Figure 3.4.12 shows the sampled points of Figure 3.4.10 overlaid with a satellite image that allows users to know additional details of the field such as crop type, lanes, surrounding buildings (affecting GPS performance), and other relevant information.
Example $3$
Example 3: Configuration of a machine vision system for detecting cherry tomatoes on an intelligent harvester
Problem:
Assume you are involved in designing an in-field quality system for the on-the-fly inspection of produce on board an intelligent cherry tomato harvester. Your specific assignment is the design of a machine vision system to detect blemishes in cherry tomatoes being transported by a conveyor belt on the harvester, as illustrated in Figure 3.4.13. You are required to use an existing camera that carries a CCD sensor of dimensions 6.4 mm × 4.8 mm. The space allowed to mount the camera (camera height h) is about 40 cm above the belt. However, you can buy any lens to assure a horizontal FOV of 54 cm to cover the entire width of the conveyor belt. Determine the required focal length of the lens.
Solution
The first step in the design of this sensing system is to calculate the focal length (f) of the lens needed to cover the requested FOV. Normally, the calculation of the focal length requires knowing two main parameters of lens geometry: the distance between the CCD sensor and the optical center of the lens, d1, and the distance between the optical center of the lens and the conveyor belt, d2. We know d2 = 400 mm, FOV = 540 mm, and A, the horizontal dimension of the imaging sensor, is 6.4 mm, so d1 can be easily determined according to Equations 3.4.10 and 3.4.11:
$\frac{d_{1}}{d_{2}} = \frac{A}{FOV}$ (Equation $11$)
Thus,
$d_{1} = \frac{A \cdot d_{2}}{FOV} = \frac{6.4 \cdot 400}{540} = 4.74 \ mm$
The focal length, f, can then be determined using Equation 3.4.10:
$\frac{1}{f} = \frac{1}{d_{1}} + \frac{1}{d_{2}}$ (Equation $10$)
Thus,
$f= \frac{d_{1} \cdot d_{2}}{d_{1} + d_{2}} = \frac{4.74 \cdot 400}{4.74 + 400} = 4.68 \ mm$
No lens manufacturer is likely to offer a lens with a focal length of exactly 4.68 mm; therefore, you must choose the closest one from what is commercially available. The lenses commercially available for this camera have the following focal lengths: 2.8 mm, 4 mm, 6 mm, 8 mm, 12 mm, and 16 mm. A proper approach is to choose the best lens for this application and readjust the distance between the camera and the belt in order to keep the requested FOV covered. Out of the list offered above, the best option is a lens with f = 4 mm. That choice will change the original parameters slightly, and you will have to readjust some of the initial conditions in order to maintain the same FOV, which is the main condition to meet. The easiest modification will be lowering the position of the camera to a distance of 34 cm from the conveyor (d2 = 340 mm from the focal length equation). If the camera is fixed and d2 has to remain at the initial 40 cm, the resulting field of view would be larger than the necessary 54 cm, and applying image processing techniques would be necessary to remove useless sections of the images.
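A short sketch reproduces the numbers of this example (within rounding) and solves for the adjusted camera height in one step: substituting d1 = A·d2/FOV into Equation 3.4.10 gives d2 = f·(FOV/A + 1).

A, FOV, d2 = 6.4, 540.0, 400.0      # sensor width, required FOV, camera height (mm)
d1 = A * d2 / FOV                   # Equation 3.4.11 -> 4.74 mm
f_ideal = d1 * d2 / (d1 + d2)       # Equation 3.4.10 -> about 4.7 mm
f = 4.0                             # closest commercially available lens (mm)
d2_new = f * (FOV / A + 1.0)        # height that preserves the FOV -> 341.5 mm
print(round(d1, 2), "mm;", round(f_ideal, 2), "mm;", round(d2_new, 1), "mm")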
Example $4$
Example 4: Estimation of a robot velocity using an accelerometer
Problem:
Table $4$: Acceleration of a vehicle recorded with the accelerometer of Figure 3.4.14a.
Data point Time (s) Acceleration (g)
1 7.088 0.005
2 8.025 0.018
3 9.025 0.009
4 10.025 0.009
5 11.025 0.008
6 12.025 0.009
7 13.025 0.009
8 14.025 0.009
9 15.025 0.008
10 16.025 0.008
11 17.025 0.009
12 18.025 0.009
13 19.025 0.008
14 20.088 0.009
15 21.088 −0.009
16 21.963 −0.019
17 23.025 −0.001
The accelerometer of Figure 3.4.14a was installed in the agricultural robot of Figure 3.4.14c. When moving along vineyard rows, the output measurements from the accelerometer were recorded in Table 3.4.4, including the time of each measurement and its corresponding linear acceleration in the forward direction given in g, the gravitational acceleration.
1. Calculate the instantaneous acceleration of each point in m·s−2, taking into account that one g is equivalent to 9.8 m·s−2.
2. Calculate the time elapsed between consecutive measurements ∆t in s.
3. Estimate the average sample rate (Hz) at which the accelerometer was working.
4. Calculate the corresponding velocity for every measurement with Equation 3.4.12, taking into account that the vehicle started from a resting position (V0 = 0 m s−1) and always moved forward.
5. Plot the robot's acceleration (m s−2) and velocity (km h−1) for the duration of the testing run.
(a)
(b)
(c)
Figure $14$: (a) Accelerometer Gulf Coast X2-2; (b) sensor mounting; (c) in an agricultural robot.
Solution
Table 3.4.4 can be completed by multiplying the acceleration expressed in g by 9.8, and by applying the expression ∆t = tk − tk−1 to every point of the table, except for the first point t1, which has no preceding measurement.
Data point Time (s) Acceleration (g) Acceleration (m s−2) ∆t (s)
1 7.088 0.005 0.050 0
2 8.025 0.018 0.179 0.938
3 9.025 0.009 0.091 1.000
4 10.025 0.009 0.085 1.000
5 11.025 0.008 0.083 1.000
6 12.025 0.009 0.088 1.000
7 13.025 0.009 0.085 1.000
8 14.025 0.009 0.084 1.000
9 15.025 0.008 0.080 1.000
10 16.025 0.008 0.081 1.000
11 17.025 0.009 0.086 1.000
12 18.025 0.009 0.084 1.000
13 19.025 0.008 0.083 1.000
14 20.088 0.009 0.089 1.063
15 21.088 −0.009 −0.092 1.000
16 21.963 −0.019 −0.187 0.875
17 23.025 −0.001 −0.009 1.063
Average ∆t 0.996
According to the previous results, the average time elapsed between two consecutive measurements ∆t is 0.996 s, which corresponds to approximately one measurement per second, or 1 Hz. The velocity of a vehicle can be estimated from its acceleration with Equation 3.4.12. Table 3.4.5 specifies the calculation at each specific measurement.
Figure 3.4.15 plots the measured acceleration and the calculated velocity for a given time interval of 16 seconds. Notice that there are data points with a negative acceleration (deceleration) but the velocity is never negative because the vehicle always moved forward or stayed at rest. Accelerometers suffer from noisy estimates, and as a result, the final velocity calculated in Table 3.4.5 may not be very accurate. Consequently, it is a good practice to estimate vehicle speeds redundantly with at least two independent sensors working under different principles. In this example, for instance, forward velocity was also estimated with an onboard GPS receiver.
Table $5$: Velocity of a robot estimated with an accelerometer.
Data point | Acceleration (m s−2) | ∆t (s) | Velocity (m s−1) | V (km h−1)
1 | 0.050 | 0 | V1 = V0 + a1 · ∆t = 0 + 0.050 · 0 = 0 | 0.0
2 | 0.179 | 0.938 | V2 = V1 + a2 · ∆t = 0 + 0.179 · 0.938 = 0.17 | 0.6
3 | 0.091 | 1.000 | V3 = V2 + a3 · ∆t = 0.17 + 0.091 · 1 = 0.26 | 0.9
4 | 0.085 | 1.000 | 0.34 | 1.2
5 | 0.083 | 1.000 | 0.43 | 1.5
6 | 0.088 | 1.000 | 0.51 | 1.9
7 | 0.085 | 1.000 | 0.60 | 2.2
8 | 0.084 | 1.000 | 0.68 | 2.5
9 | 0.080 | 1.000 | 0.76 | 2.7
10 | 0.081 | 1.000 | 0.84 | 3.0
11 | 0.086 | 1.000 | 0.93 | 3.3
12 | 0.084 | 1.000 | 1.01 | 3.7
13 | 0.083 | 1.000 | 1.10 | 3.9
14 | 0.089 | 1.063 | 1.19 | 4.3
15 | −0.092 | 1.000 | 1.10 | 4.0
16 | −0.187 | 0.875 | 0.94 | 3.4
17 | −0.009 | 1.063 | 0.93 | 3.3
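The dead-reckoning integration of Table 3.4.5 is easy to reproduce in a few lines of code. The following Python sketch (not part of the original text) applies Equation 3.4.12 to the data of Table 3.4.4; because the tabulated accelerations in g are rounded to three decimals, the printed velocities may differ from Table 3.4.5 in the last digit.

```python
# Dead-reckoning velocity estimate (Equation 3.4.12) from the
# accelerometer samples of Table 3.4.4.
times = [7.088, 8.025, 9.025, 10.025, 11.025, 12.025, 13.025, 14.025,
         15.025, 16.025, 17.025, 18.025, 19.025, 20.088, 21.088, 21.963,
         23.025]                      # s
acc_g = [0.005, 0.018, 0.009, 0.009, 0.008, 0.009, 0.009, 0.009,
         0.008, 0.008, 0.009, 0.009, 0.008, 0.009, -0.009, -0.019,
         -0.001]                      # g

G = 9.8                               # m s-2 per g
v = 0.0                               # V0 = 0 m s-1 (vehicle starts at rest)
for k in range(1, len(times)):
    dt = times[k] - times[k - 1]      # elapsed time between samples (s)
    v = max(v + acc_g[k] * G * dt, 0.0)  # Vk = Vk-1 + ak*dt, forward only
    print(f"t = {times[k]:6.3f} s   v = {v:4.2f} m/s = {3.6 * v:3.1f} km/h")
```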
Image Credits
Figure 1a. John Deere. (2020). Autonomous mower. Retrieved from www.deere.es/es/campaigns/ag-turf/tango/. [Fair Use].
Figure 1b. Rovira-Más, F. (CC BY 4.0). (2020). (b) GPS-based autonomous rice transplanter.
Figure 2. Verified Market Research (2018). (CC BY 4.0). (2020). Expected growth of agricultural robot market.
Figure 3. Rovira-Más, F. (CC BY 4.0). (2020). Sensor architecture for intelligent agricultural vehicles.
Figure 4. Rovira-Más, F. (CC BY 4.0). (2020). Geometrical relationship between an imaging sensor, lens, and FOV.
Figure 5. Rovira-Más, F. (CC BY 4.0). (2020). Color-based segmentation of mandarin oranges.
Figure 6. Rovira-Más, F. (CC BY 4.0). (2020). Auto-guidance systems: (a) Light-bar kit; (b) Parallel tracking control screen.
Figure 7. Zhang, Q. (CC BY 4.0). (2020). Tractor steering systems: (a) traditional hydraulic steering system; and (b) electrohydraulic steering system.
Figure 8. Adapted from T. Karvinen, K. Karvinen, V. Valtokari (Make: Sensors, Maker Media, 2014). (2020). Assembly of an ultrasonic rangefinder HC-SR04 with an Arduino processor. [Fair Use].
Figure 9. Rovira-Más, F. (CC BY 4.0). (2020). Digitalization of an analog signal with 4 bits between 1.2 V and 4.7 V.
Figure 10. Rovira-Más, F. (CC BY 4.0). (2020). Planar representation of the 23 points sampled in the field with a local origin.
Figure 11. Rovira-Más, F. (CC BY 4.0). (2020). Estimation of the area covered by the sampled points in the surveyed field.
Figure 12. Saiz-Rubio, V. (CC BY 4.0). (2020). Sampled points over a satellite image of the surveyed plot (origin in number 23).
Figure 13. Rovira-Más, F. (CC BY 4.0). (2020). Geometrical requirements of a vision-based inspection system for a conveyor belt on a harvester.
Figure 14a. Gulf Coast Data Concepts. (2020). Accelerometer Gulf Coast X2-2. Retrieved from http://www.gcdataconcepts.com/x2-1.html. [Fair Use].
Figure 14b & 14c. Saiz-Rubio, V. (CC BY 4.0). (2020). (b) sensor mounting; (c) accelerometer mounted in an agricultural robot.
Figure 15. Rovira-Más, F. (CC BY 4.0). (2020). Acceleration and velocity of a farm robot estimated with an accelerometer.
References
Bolton, W. (1999). Mechatronics (2nd ed). New York: Addison Wesley Longman Publishing.
Myklevy, M., Doherty, P., & Makower, J. (2016). The new grand strategy. New York: St. Martin’s Press.
Rovira-Más, F., Zhang, Q., & Hansen, A. C. (2010). Mechatronics and intelligent systems for off-road vehicles. London: Springer-Verlag.
Russell, S., & Norvig, P. (2003). Artificial Intelligence: a modern approach (2nd ed). Upper Saddle River, NJ: Prentice Hall.
Verified Market Research. (2018). Global agriculture robots market size by type (driverless tractors, automated harvesting machine, others), by application (field farming, dairy management, indoor farming, others), by geography scope and forecast. Report ID: 3426. Verified Market Research Inc.: Boonton, NJ, USA, pp. 78.
Stacy L. Hutchinson
Kansas State University
Biological and Agricultural Engineering
Manhattan, Kansas, USA
Key Terms
Hydrologic cycle | Infiltration and runoff | Water balance
Soil water relationships | Evapotranspiration | Agricultural water management
Precipitation | Water storage | Urban stormwater management
Introduction
Water is central to many discussions regionally, nationally, and globally—be it the lack of water, the overabundance of water, or poor water quality—pushing us to seek answers on how to ensure we can maintain a safe, reliable, adequate water supply for human and environmental well-being. The United Nations (2013) defines water security as
. . . the capacity of a population to safeguard sustainable access to adequate quantities of acceptable quality water for sustaining livelihoods, human well-being, and socio-economic development, for ensuring protection against water-borne pollution and water-related disasters, and for preserving ecosystems in a climate of peace and political stability . . .
This highlights the need to understand the complex relationships associated with water and the need to research, develop, and implement engineered systems that assist with enhancing water security across the nation and world. This chapter focuses on understanding the system water balance, which is the fundamental basis for all water management decisions.
Outcomes
After reading this chapter, you should be able to:
• Describe the basic components of a water balance, including precipitation, infiltration, evaporation, evapotranspiration, and runoff
• Calculate a water budget
• Use a water budget for the design and implementation of a simple water management system for irrigation or sustainable stormwater management
Concepts
Hydrologic Cycle
Hydrology is the study of how water moves around the Earth in continuous motion, cycling through liquid, gaseous, and solid phases. This cycle is called the hydrologic cycle or water cycle (Figure 4.1.1). At the global scale, the hydrologic cycle can be thought of as a closed system that obeys the conservation law; a closed system has no external interactions. The vast majority of water in the system simply continues to cycle through the three states of matter.
Key processes in the hydrologic cycle are:
• Precipitation (P), which is the primary input into a water budget. Precipitation describes all forms of water (liquid and solid) that fall or condense from the atmosphere and reach the earth’s surface (Huffman et al., 2013), including rainfall, drizzle, snow, hail, and dew.
• Infiltration (In), which is the movement of water into the soil. Infiltrated water from precipitation and irrigation is the primary source of water for plant growth.
• Evaporation (E), which is the conversion of liquid or solid water into water vapor (gaseous water).
• Transpiration (T), which is the process through which plants use water. Water is absorbed from the soil, moved through the plant, and evaporated from the leaves.
• Evapotranspiration (ET), which is the combination of evaporation and transpiration to describe the water use, or output, from vegetated surfaces.
• Runoff (R), which is the precipitation water that does not infiltrate into the soil. Runoff is generally an output, or loss of water, from the system or area of interest, but it can also be an input if water runs onto the area of interest from elsewhere.
• Deep seepage (DS), which is water that infiltrates below the root zone (the depth of interest for the water budget).
Soil Water Relationships
The design of water management systems by biosystems engineers involves water moving through or being held in the soil. Thus, soil-water dynamics are a critical factor in the design process.
A volume of soil is comprised of solids and voids, or pore space. Porosity is the volume of voids as compared to the total volume of the soil. The proportions of solids and voids depend on the soil particles (sand, silt, clay, organic matter) and structure (known as peds), with coarse-textured soils (i.e., dominated by sand) having approximately 30% voids and finer-textured soils (i.e., containing more silt or clay) having as much as 50% void space. Water that infiltrates into the soil profile is stored in the soil voids. When all the void space is filled with water, the soil is at saturation water content.
Gravitational forces remove water up to 33 kPa (1/3 bar) of tension; this is drainage or gravitational water. The soil water content after gravitational drainage for approximately 24 hrs is called field capacity (FC) and is the maximum water available for plant growth. At this point, water is held in the soil in the smaller pore spaces by capillary action and surface tension, and can be removed from the soil profile by plant roots extracting the water from those pores. The water tension of the soil at field capacity (the suction pressure required to extract water from the pore spaces) is around 10 kPa (0.1 bars) for sands to 30 kPa (0.3 bars) for heavier soils containing more silt or clay. Plants are able to extract water from the soil profile with up to 1,500 kPa (15 bars) of tension; the water content at 1,500 kPa (15 bars) of tension is called the permanent wilt point (PWP). The total plant available water (AW) for a given soil profile is the difference between FC and PWP:
$AW = FC - PWP$
Plant water use is the primary means of removing water from the soil profile. Once all of the available water has been taken up by plants, some water remains in the soil, in very small pore spaces where very high suction pressure would be needed to remove that water. This film of water is bound to soil particles too tightly to be extracted by plants and is called hygroscopic water. These relationships are shown in Figure 4.1.2.
The amount of water in the soil is called the soil water content or soil moisture content, and is often indicated using the symbol θ. When soil water content is expressed on a mass basis, i.e., mass of water in the soil compared to total mass of soil, it is called gravimetric water content (θg). Gravimetric water content can be calculated by expressing mass of water as a proportion of the total wet mass of soil, known as wet weight water content, or as a proportion of the total dry mass of soil, known as dry weight water content (Equation 4.1.2). When expressed on a volume basis, i.e., volume of water as a proportion of the total volume, with units cm3 water cm−3 soil, the value is called volumetric water content (θv) (Equation 4.1.3). Gravimetric (dry weight) and volumetric water content are related through the bulk density of the soil (ρb), which is the mass of soil particles in a given volume of soil, expressed in g cm−3.
$\text{gravimetric water content} (\Theta_{g}) = \frac{\text{mass water}}{\text{mass dry soil}} = \frac{\text{mass total soil - mass dry soil}}{\text{mass dry soil}}$
$\text{volumetric water content} (\Theta_{v}) = \frac{\text{volume water}}{\text{total volume soil}} = \Theta_{g} \frac{\text{soil bulk density } (\rho_{b})}{\text{density of water }(\rho_{w})}$
As collection of a known volume of soil is more difficult than collecting a simple grab sample of soil, it is much easier to determine gravimetric water content by mass. However, volumetric soil moisture is much easier to use in calculations because it can be expressed as an equivalent depth of rainfall (mm) over a given area and, thus, be directly related to rainfall, which is most commonly reported in units of depth (such as mm) and never in mass. The volume of water input to the water budget can be calculated by multiplying the depth of rainfall by the area receiving rain.
Water Budget Calculation
A water budget, or water balance, is a measure of all water flowing into and out of an area of interest, along with the change of water storage in the area. This could be an irrigated field, a lake or pond, or green infrastructure such as a stormwater management system like a bioretention cell or a rain garden (a vegetated area to absorb and store stormwater runoff). At smaller scales, the hydrologic cycle is characterized using a water budget, which is the primary tool used for designing and managing water resources systems, including stormwater runoff management and irrigation systems. Water budgets are calculated for a defined system or area (e.g., field, pond) over a specified time period (e.g., rainfall event, growing season, month, year).
The water budget is calculated by quantifying the inputs, outputs, and change in water storage (∆S) of the system or project (Equation 4.1.4; Figure 4.1.3). While precipitation is the primary input to the water budget, others include runoff into the system (Rin) and water added through irrigation (Ir). Outputs from the system include runoff (Rout), deep seepage (DS), and evapotranspiration (ET). The change in system storage (∆S) may be positive, such as an increase in pond water level after a rainfall event, or negative, such as the decline in soil moisture from plant water use.
$P+Ir \ \pm R - ET - DS = \Delta S$
where P = precipitation
Ir = irrigation
R = net runoff (Rin − Rout)
ET = evapotranspiration
DS = deep seepage
∆S = change in soil water storage
The first step in developing a water budget for water resources management is to collect input, output, and storage data. There are many sources of data for this, including the National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) (NOAA, 2019), which includes data from around the world. Within the U.S., the U.S. Geological Survey (USGS) real-time water data (USGS, 2020b) and the U.S. Department of Agriculture (USDA) Natural Resources Conservation Service (NRCS) Web Soil Survey (USDA-NRCS, 2019a) are also available. Each of these sources has extensive sets of data available to assist with management of water resources. Local-level data from state climatologists, research farms, and project gages should also be considered when available. High quality precipitation data are particularly valuable, and necessary, when designing and implementing water management systems including irrigation systems and green infrastructure.
Precipitation
Precipitation is the primary input into a water budget. The most important form of precipitation for agricultural and biological engineering applications in most parts of the world is rainfall. Precipitation varies significantly across regions (Figure 4.1.4) and throughout the year, but is relatively consistent over long time periods at a given location. This means long-term precipitation records can be used to calculate a water budget and for planning water resource management systems.
In addition to data available from NOAA NCEI and local sources mentioned above, long-term design storm information is available for the U.S. from the NOAA Atlas 14 point precipitation frequency estimates (HDSC, 2019). Statistical analysis of historic precipitation data is used to determine the magnitude and annual probability for various rainfall events. These values are used to size stormwater systems to deal with the majority of events, but not the most extreme.
Infiltration
Precipitation starts moving into the soil profile, i.e., infiltrating, as soon as it reaches the ground surface. Initial infiltration rates depend on the initial soil water content and can be very high if the soil is drier than field capacity, as water fills depressions on the soil surface and begins moving into the soil. As the surface depressions fill and the surface soil becomes saturated, infiltration slows to a steady state and excess precipitation begins to run off the surface. Depending on the type of precipitation, initial soil water content, duration, and intensity (depth per time), precipitation may infiltrate into the soil profile and become soil water, or it may not infiltrate and will become runoff. The infiltration rate is related to soil bulk density, porosity, pore size distribution, and pore connectivity.
Runoff
Water that does not infiltrate into the soil is an output from the water budget (a loss from the water management system) unless the system is designed to collect it. The amount of runoff depends on land cover type, soil type, initial soil water content, and rainfall intensity; these four factors all interact and are not independent of each other. Land cover plays a significant role in determining the amount of runoff. Natural vegetated land usually has the highest infiltration rates. Infiltration rates decrease as natural land is converted to production land, due to increased compaction and changes in soil structure, and to developed land, due to impervious surfaces such as buildings and roads. Soil type also influences whether water can infiltrate. Soils with greater connected pore space will tend to generate less runoff. Initial soil water content is important because, if the soil is at or near saturation water content, the infiltration rate is more likely to be less than the rainfall rate. If rainfall rate is very high, runoff can occur even if the soil is quite dry. One of the most common methods for estimating likely surface runoff is the curve number method (USDA-NRCS, 2004). More information on the curve number method can be found in Chapter 10, Part 630, of the USDA-NRCS National Engineering Handbook (USDA-NRCS, 2004).
Evapotranspiration
Evapotranspiration (ET) is the primary means for removing water from the top of the soil system when plants are present. If there are no plants this loss will be confined to evaporation only. The amount of ET depends on the type and growth stage of the vegetation, the current weather, and soil water content at the location. The most accurate ET calculations involve an energy balance associated with incoming solar radiation and the mass transfer of water from moist vegetation surfaces to a drier atmosphere (Allen et al., 2005). ET is driven by solar radiation, but as the soil becomes drier, ET decreases. The relationship depends on the pore size distribution of the soil, so is regulated by the interaction of soil texture (sand, silt, and clay content) and structure (presence and type of soil peds).
Storage
Soils can store a significant amount of water and play an important role in controlling the rainfall-runoff processes, stream flow, and ET (Dobriyal et al., 2012; Meng et al., 2017). Available soil water influences productivity of natural and agricultural systems (Dobriyal et al., 2012). Understanding the available water stored in surface soils allows effective design and implementation of irrigation and stormwater management systems. The maximum storage is determined by the total porosity of the soil, which will be almost the same as the saturated water content. The ability of the soil to store more water at any given time is determined by the difference in current water content and saturated water content. These factors are influenced by soil texture, structure, porosity, and bulk density. More information about soils can be found from sources such as the FAO Soils Portal of the Food and Agriculture Organization of the United Nations (FAO, 2020), the European Soil Data Centre (https://esdac.jrc.ec.europa.eu/), and on the Web Soil Survey maintained by the USDA-NRCS (2019b).
Applications
Water is essential for life and critical for both food and energy production. With a finite supply of available fresh water and increasing global population demanding access to clean water for drinking, energy, and food production, proper water management is of the utmost importance. It is imperative to research, develop, and implement engineered systems that enhance local, regional, national, and global water security. Water balances are used in the design of systems to manage excess stormwater and to manage scarce available water in low-rainfall areas. Excess water management systems include components to retain and regulate runoff safely, while limited water management systems include components to provide irrigation water during dry seasons.
Urban Stormwater Management
As urbanization and land development occurs, the addition of impervious structures (roads, sidewalks, buildings) dramatically changes the hydrology of an area. Runoff volume linearly increases with increases in the impervious surface area. Hydrologic models and long-term stream flow monitoring show that, compared to pre-development, developed suburban areas have 2 to 4 times more annual runoff, and high-density areas have 15 times more runoff (Sahu et al., 2012; Suresh Babu and Mishra, 2012; Christianson et al., 2015). As runoff travels over pavement, rooftops, and fertilized lawns that make up much of the urban landscape, it picks up contaminants such as pathogens, heavy metals, sediments, and excess nutrients (Davis et al., 2001), creating both water quantity and water quality concerns.
Increases in the total volume of runoff from urban areas are caused not only by impervious structures. Even in low-density suburban areas where individual lots have large lawns and large public parks are created, current methods of construction and development greatly reduce the infiltration capacity of soils. Development typically involves stripping native vegetation from large areas of land, accelerating soil erosion rates 40,000 times (Gaffield et al., 2003). During this process, construction equipment compacts soil, reducing its ability to absorb runoff. Developed lawns can generate 90% as much runoff as pavement (Gaffield et al., 2003). Urban runoff may also include dry-weather flows from the irrigation of lawns and public parks. The final result of urban development is a significant increase in runoff that transports pollutants from dense urban centers to receiving water bodies, and small changes in land use can relate to large increases in flood potential and pollutant loading.
Over the past three decades, urban and urbanizing areas have started to increase the use of green infrastructure, or natural-based systems, for stormwater management. Green infrastructure works to reduce stormwater runoff and increase stormwater treatment on-site for floodwater and nonpoint, or diffuse, source pollution control using infiltration and biologically based treatment in the root zone. Green infrastructure is very different from traditional grey stormwater management systems, such as storm sewers. The more traditional grey systems use a centralized approach to water management, designed to quickly move runoff off the land and into nearby surface water with little to no storage or treatment. Green infrastructure is more resilient and can offer additional benefits, such as habitat, in developed and developing areas. For more information about green infrastructure see the U.S. Environmental Protection Agency (USEPA) green infrastructure webpage (USEPA, 2020).
Bioretention cells are one of the most effective green infrastructure systems. Bioretention cells are designed to infiltrate, store, and treat runoff water from impervious surfaces such as parking lots and roadways. Bioretention cells can be lined to prevent contaminants from moving into groundwater or unlined when there is no concern of groundwater contamination. Ideally, bioretention cells are designed to infiltrate and store the “first flush” rainfall event. (The first flush rainfall is the first 13-25 mm of rain that removes the majority of accumulated pollution from surfaces.) However, in many cases, there is not enough space to install such a large bioretention cell and the system is designed to fit the space in order to treat as much stormwater as possible.
Initial design and assessment of bioretention cell function is completed using a system water balance. Runoff from the impervious parking is directed toward stormwater management practices, like bioretention cells, where water infiltrates and is stored until removed from the soil (or growing media) through evapotranspiration.
Agricultural Water Management
Changing climate conditions, population growth, and urbanization present challenges for food supply. Agricultural intensification impacts local resources, particularly usable freshwater. This vulnerability is amplified by a changing climate, in which drought and variability in precipitation are becoming increasingly common. As a result, 52% of the world’s population is projected to live in regions under water stress by 2050 (Schlosser et al., 2014). Concurrently, varying water availability, along with limiting nutrients, will constrain future food and energy production.
Agricultural water management is key to optimizing crop production. The amount of water needed for crop growth depends on the crop type, location, and time of year, but averages roughly 6.5 mm day−1 during the growing season (Brouwer and Heibloem, 1986). Depending on location, installation of runoff management systems such as terraces (ASAE S268.6 MAR2017) and grassed waterways (ASABE EP 464.1 FEB2016) (ASABE Standards, 2016, 2017) may be needed to manage excess rainfall and reduce flooding and erosion within fields. Design standards with specific design criteria are available from the ASABE Standards (https://asabe.org/Publications-Standards/Standards-Development/National-Standards/Published-Standards).
In addition to managing runoff from rainfall events, supplementing natural rainfall with irrigation water for crop production may also be required (sometimes in the same places). Maintaining available water for crop growth and development has a significant impact on crop yields. Irrigation system design and management rely on the use of detailed soil water balances to minimize water losses and optimize crop production. A daily water budget to account for all inputs and outputs from the system can be calculated.
Examples
Example $1$
Example 1: Calculating soil water content
Problem:
The following information is provided to determine the amount of water stored in a soil profile. A soil sample collected in the field has a wet weight of 238 g. After drying at 105°C for 24 hrs, the soil sample dry weight is 209 g. Careful measurement determines that the volume of the soil core is 135 cm3. (a) What was the original water content of the soil on a gravimetric basis (mass of water per mass of dry soil) and (b) on a volumetric basis (volume of water per volume of bulk soil)?
Solution
(a) Calculate the gravimetric water content using Equation 4.1.2:
$\text{gravimetric water content } (\Theta_{g}) = \frac{\text{mass water}}{\text{mass dry soil}} = \frac{\text{total mass soil} - \text{dry soil mass}}{\text{dry soil mass}}$
$\text {mass water (g)} = 238 \text{ g total} - 209 \text{ g dry} = 29 \text{ g water}$
$\text {gravimetric water content} (\Theta_{g}) = \frac{29 \text{ g water}}{209 \text{ g dry}} = 0.139 \frac{\text{g water}}{\text{g dry}} = 13.9 \% \text{ water by mass}$
(b) Calculate the volumetric water content using Equation 4.1.3:
$\text{volumetric water content } (\Theta_{v}) = \frac{\text{volume water}}{\text{total volume}}$
$\text{volume of water} = \frac{\text{mass water}}{\text{density of water}} = \frac{29 \text{ g water}}{1 \text{ g per cm}^{3}} = 29 \text{ cm}^{3} \text{ water}$
$\text{volumetric water content } (\Theta_{v}) = \frac{29 \text{ cm}^{3} \text{ water}}{135 \text{ cm}^{3} \text{ soil}} = 0.215 \frac{ \text{cm}^{3} \text{ water}}{\text{cm}^{3} \text{ soil}} = 21.5 \% \text{ water by volume}$
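These two calculations, and the bulk-density link between them (Equation 4.1.3), can be verified with a short script. This is a minimal sketch using the sample values from this example; variable names are illustrative.

```python
# Soil water content from a core sample (Example 1 values).
wet_mass = 238.0     # g, field-moist sample
dry_mass = 209.0     # g, after oven drying at 105 C for 24 h
core_volume = 135.0  # cm^3, volume of the soil core
rho_w = 1.0          # g cm^-3, density of water

mass_water = wet_mass - dry_mass       # 29 g of water
theta_g = mass_water / dry_mass        # gravimetric (dry-weight) basis
volume_water = mass_water / rho_w      # 29 cm^3 of water
theta_v = volume_water / core_volume   # volumetric basis

rho_b = dry_mass / core_volume         # bulk density links theta_g and theta_v
print(f"theta_g = {theta_g:.3f} ({theta_g:.1%} by mass)")
print(f"theta_v = {theta_v:.3f} ({theta_v:.1%} by volume)")
print(f"check: theta_g * rho_b / rho_w = {theta_g * rho_b / rho_w:.3f}")
```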
Example $2$
Example 2: Determining plant available water
Problem:
Plant available water is one factor that helps to determine the need for irrigation as well as the available water storage in bioretention cells. A field has an established grass cover. The grass has an effective root zone depth of 0.90 m. The soil is a fine sandy loam with FC = 23% (vol) and PWP = 10% (vol), as shown in the water balance diagram.
(a) If the soil is at field capacity, how much available water (cm) is in the effective root zone? (b) If the field water content averages 18% (vol) in the root zone, what is the available water storage depth for rainfall?
Solution
(a) Calculate the available water using Equation 4.1.1:
$AW = FC - PWP = 23 \% \text{(vol)} - 10 \% \text{(vol)} = 13 \% \text{(vol)}$
Thus, 13% of the soil volume is available water for plant use. When the volumetric water content is considered per unit area of soil, e.g., (cm3 water cm−2 soil)/(cm3 soil cm−2 soil), the units become depth of water per depth of soil, e.g., cm water cm−1 soil profile. Thus, consider a unit area of soil and calculate the depth of available water:
$\text{available water} = \frac{13 \text{ cm water}}{100 \text{ cm soil profile}} \times 90 \text{ cm} = 11.7 \text{ cm water available in root zone}$
(b) If the soil water content is 18% (vol), calculate the available storage as the difference between FC and volumetric water content:
$\text{available storage} = FC - \Theta_{v} = 23 \% - 18 \% = 5 \%$
And the depth of available storage in the root zone is:
$\text{depth of available storage} = 0.05 \times 90 \text{ cm} = 4.5 \text{ cm}$
Thus, the soil profile would be able to store 4.5 cm in the 90 cm root zone.
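The same arithmetic in script form, as a minimal sketch with the values from this example:

```python
# Plant available water and remaining rainfall storage (Example 2 values).
fc, pwp = 0.23, 0.10    # field capacity, permanent wilt point (vol. fraction)
root_zone_cm = 90.0     # effective root zone depth
theta_now = 0.18        # current volumetric water content

aw_depth = (fc - pwp) * root_zone_cm             # 11.7 cm available at FC
storage_depth = (fc - theta_now) * root_zone_cm  # 4.5 cm storage for rainfall
print(f"available water = {aw_depth:.1f} cm; storage = {storage_depth:.1f} cm")
```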
Example $3$
Example 3: Using a water balance to design a simple pond to store runoff
Problem:
A developer is planning the layout of a small housing development on 16.2 ha (40 ac) of land near Manhattan, KS, USA. According to local ordinance, the developer must retain any increased runoff due to development from the 2-yr, 24-hr rainfall event (86 mm) (HDSC, 2019). Prior to development, the area was able to infiltrate and store approximately 50 mm of this rainfall event. With the increase of impervious land cover (e.g. houses and roads), it is expected that infiltration and storage will be reduced to 30 mm. Determine the pond volume required to store the difference in expected runoff.
Solution
Apply Equation 4.1.4 to the 16.2-ha site to determine the expected increase in runoff from the site due to development:
Water balance equation:
$\text{inputs} - \text{outputs} = \text{change in storage}$
$P+Ir \pm R - ET-DS=\Delta S$
Assumptions:
Pond is dry prior to rain
Ir = 0
ET = 0 for short duration events
DS = 0
Therefore, P − R = ∆S (runoff R leaves the site)
Pre-development:
P = 86 mm
R = ?
∆S = 50 mm
R = 86 mm − 50 mm = 36 mm of runoff
Post-development:
P = 86 mm
R = ?
∆S = 30 mm
R = 86 mm − 30 mm = 56 mm of runoff
Change in runoff:
∆R = 56 mm − 36 mm = 20 mm of additional runoff
Volume = 20 mm × 16.2 ha = 0.02 m × 162,000 m2 = 3,240 m3
The pond must be designed to detain, or slow down, 20 mm of runoff from the developed land. This equates to 3,240 m3 of runoff water from the entire development of 16.2 ha.
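A minimal sketch of this pond-sizing water balance, using the values from this example (variable names are illustrative):

```python
# Water balance for pond sizing (Example 3): retain the increase in runoff.
area_ha = 16.2
p_mm = 86.0                                     # 2-yr, 24-hr design storm
storage_pre_mm, storage_post_mm = 50.0, 30.0    # infiltrated + stored on site

runoff_pre = p_mm - storage_pre_mm              # 36 mm
runoff_post = p_mm - storage_post_mm            # 56 mm
extra_runoff_m = (runoff_post - runoff_pre) / 1000.0   # 0.020 m
pond_volume_m3 = extra_runoff_m * area_ha * 10_000     # 1 ha = 10,000 m^2
print(f"required pond volume = {pond_volume_m3:,.0f} m^3")   # 3,240 m^3
```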
Example $4$
Example 4: Estimate the amount of storage available in a bioretention cell
Problem:
Consider a bioretention cell located in the center of a parking lot. The parking lot, an area of 26 m by 12 m, is sloped to direct runoff into the bioretention cell. The cell contains an engineered growing media that is 60% sand and 40% organic compost, with a porosity of 45% by volume, planted with native grasses and forbs. The cell is 2.0 m wide, 1.2 m deep, and 12 m long.
(a) What is the maximum water storage volume of the bioretention cell?
(b) What is the largest storm (maximum precipitation depth) the cell can infiltrate if all storage is available?
Solution
(a) Calculate the total volume of the bioretention cell:
$\text{volume of cell} = \text{length} \times \text{width} \times \text{depth} = 12 \text{ m} \times 2 \text{ m} \times 1.2 \text{ m} = 28.8 \text{ m}^{3}$
The maximum storage is equal to the total void space (porosity):
$\text{void space} = \text{volume of cell} \times \text{porosity} = 28.8 \text{ m}^{3} \times 0.45 = 12.96 \text{ m}^{3}$
(b) Assuming all rainfall runs off the parking lot, the maximum storm depth that can be stored is:
$\text{rainfall} = \frac{\text{volume of cell void space}}{\text{area of parking lot}} = \frac{12.96 \text{ m}^{3}}{12 \text{ m} \times 26 \text{ m}} = 0.042 \text{ m} = 42 \text{ mm}$
Between storms, the water stored in the cell is removed from the growing media through evapotranspiration, preparing the cell to store the next rainfall event.
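The storage calculation can be scripted as a quick check; a minimal sketch with the dimensions from this example:

```python
# Bioretention cell storage (Example 4): how big a storm can it absorb?
cell_l, cell_w, cell_d = 12.0, 2.0, 1.2   # m
porosity = 0.45                            # engineered media, by volume
lot_area = 26.0 * 12.0                     # m^2 of contributing parking lot

void_volume = cell_l * cell_w * cell_d * porosity   # 12.96 m^3
max_storm_mm = void_volume / lot_area * 1000.0      # ~42 mm
print(f"storable storm depth = {max_storm_mm:.0f} mm")
```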
Example $5$
Example 5: Development of an irrigation schedule for corn in a water limited area
Problem:
Given the following information, determine the daily changes in the soil water content. How much irrigation water should be added on the 10th day to raise the water content of the root zone back to the initial water content? The root zone is 1 m and the initial soil moisture content is 20% by volume. Assume all seepage passes through the root zone and is not stored.
Solution
Convert the initial soil water content at the beginning of day 1, θv = 20% of the 1,000 mm root zone, to an equivalent depth of water in the root zone:
$\theta_{vi} = 0.2 \times 1000 \ mm = 200 \ mm\text{ of water}$
Apply Equation 4.1.4 to the root zone for each day:
$\text{Daily water balance} = P+Ir \pm R-ET -DS=\Delta S$
$\text{Day 1 inputs} = P+Ir+R_{in} = 0+0+0=0\ mm$
$\text{Day 1 outputs} = R_{out}+ET+DS = 0+7+0 = 7 \ mm$
$\text{Day 1 } \Delta S = \text{inputs} - \text{outputs} = 0\ mm - 7 \ mm = -7 \ mm$
$\text{Day 1 } \theta_{v1} = \theta_{vi} + \Delta S = 200 \ mm + (-7 \ mm) = 193 \ mm$
There is 193 mm of water in the soil profile at the end of day 1.
It is easiest to complete the rest of these calculations by setting up a spreadsheet that lists the daily water budget components, the summed inputs and outputs as calculated above, and the resulting soil water depth (mm) in the right-most column. The initial water content is 200 mm, and at the end of day 9 the water content is 183 mm. Thus, to return the profile to the initial soil water content of 200 mm, 17 mm of irrigation would need to be added on day 10.
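A spreadsheet-style loop like the following sketch tracks the daily balance. The daily P and ET values here are hypothetical placeholders (the chapter's 10-day table is not reproduced), chosen only so that the totals match the worked answer of 183 mm at the end of day 9.

```python
# Daily root-zone water budget (Equation 4.1.4 applied day by day).
# NOTE: the (P, ET) pairs below are HYPOTHETICAL placeholders; they are
# chosen only so the running total matches the worked answer (183 mm at
# the end of day 9, hence 17 mm of irrigation needed on day 10).
daily = [(0, 7), (0, 6), (12, 6), (0, 5), (0, 4),
         (3, 6), (4, 6), (14, 6), (2, 6)]   # (P, ET) in mm; Ir = R = DS = 0
storage = 200.0   # mm of water in the 1 m root zone (theta_v = 20%)
for day, (p, et) in enumerate(daily, start=1):
    storage += p - et                        # dS = inputs - outputs
    print(f"day {day}: storage = {storage:.0f} mm")
print(f"irrigation needed on day 10: {200.0 - storage:.0f} mm")   # 17 mm
```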
Image Credits
Figure 1. USGS. (CC BY 1.0). (2020). The water cycle. Retrieved from https://www.usgs.gov/media/images/water-cycle-natural-water-cycle
Figure 2. Hutchinson, S. L. (CC BY 4.0). (2020). Soil water.
Figure 3. Hutchinson, S. L. (CC BY 4.0). (2020). Water balance.
Figure 4. Ruane et al. (2018). Annual precipitation. Retrieved from https://data.giss.nasa.gov/impacts/agmipcf/
Figure 5. Hutchinson, S. L. (CC BY 4.0). (2020). Example 2.
Figure 6. Hutchinson, S. L. (CC BY 4.0). (2020). Example 4.
Figure 7. Hutchinson, S. L. (CC BY 4.0). (2020). Example 5.
References
Allen, R. G., Walter, I. A., Elliott, R. L., Howell, T. A., Itenfisu, D., Jensen, M. E., & Snyder, R. L. (2005). The ASCE standardized reference evapotranspiration equation. Reston, VA: American Society of Civil Engineers.
ASABE Standards. (2016). EP 464.1: Grassed waterway for runoff control. St. Joseph, MI: ASABE.
ASABE Standards. (2017). S268.6: Terrace systems. St. Joseph, MI: ASABE.
Brouwer, C., & Heibloem, M. (1986). Irrigation water management: Irrigation water needs. Food and Agriculture Organization of the United Nations. Retrieved from http://www.fao.org/3/s2022e/s2022e00.htm#Contents
Christianson, R., Hutchinson, S., & Brown, G. (2015). Curve number estimation accuracy on disturbed and undisturbed soils. J. Hydrol. Eng., 21(2). http://doi.org/10.1061/(ASCE)HE.1943-5584.0001274.
Davis, A. P., Shokouhian, M., Sharma, H., & Minami, C. (2001). Laboratory study of biological retention for urban stormwater management. Water Environ. Res., 73(1), 5-14. doi.org/10.2175/106143001x138624.
Dobriyal, P., Qureshi, A., Badola, R., & Hussain, S. A. (2012). A review of the methods available for estimating soil moisture and its implications for water resource management. J. Hydrol., 458-459, 110-117. https://doi.org/10.1016/j.jhydrol.2012.06.021.
FAO. (2020). FAO soils portal. Food and Agriculture Organization of the United Nations. Retrieved from http://www.fao.org/soils-portal/en/.
Gaffield, S. J., Goo, R. L., Richards, L. A., & Jackson, R. J. (2003). Public health effects of inadequately managed stormwater runoff. Am. J. Public Health, 93(9), 1527-1533. https://doi.org/10.2105/ajph.93.9.1527.
HDSC. (2019). NOAA atlas 14 point precipitation frequency estimates. Hydrometeorological Design Studies Center. Retrieved from hdsc.nws.noaa.gov/hdsc/pfds/pfds_map_cont.html.
Huffman, R. L., Fangmeier, D. D., Elliot, W. J., Workman, S. R., & Schwab, G. O. (2013). Soil and water conservation engineering (7th ed.). St. Joseph, MI: ASABE.
Meng, S., Xie, X., & Liang, S. (2017). Assimilation of soil moisture and streamflow observations to improve flood forecasting with considering runoff routing lags. J. Hydrol., 550, 568-579. http://doi.org/10.1016/j.jhydrol.2017.05.024.
NOAA. (2019). Climate data online. National Oceanic and Atmospheric Administration, National Centers for Environmental Information. Retrieved from https://www.ncdc.noaa.gov/cdo-web/.
Ruane, A. C., Goldberg, R., & Chryssanthacopoulos, J. (2015) AgMIP climate forcing datasets for agricultural modeling: Merged products for gap-filling and historical climate series estimation, Agr. Forest Meteorol., 200, 233-248, http://doi.org/10.1016/j.agrformet.2014.09.016.
Sahu, R. K., Mishra, S., & Eldho, T. (2012). Improved storm duration and antecedent moisture condition coupled SCS-CN concept-based model. J. Hydrol. Eng. 17(11). https://doi.org/10.1061/(ASCE)HE.1943-5584.0000443.
Schlosser, C. A., Strzepek, K., Gao, X., Fant, C., Blanc, E., Paltsev, S., . . . Gueneau, A. (2014). The future of global water stress: An integrated assessment. Earth’s Future, 2(8), 341-361. doi.org/10.1002/2014EF000238.
Suresh Babu, P., & Mishra, S. (2012). Improved SCS-CN–inspired model. J. Hydrol. Eng. 17(11). https://doi.org/10.1061/(ASCE)HE.1943-5584.0000435
United Nations. (2013). Water security and the global water agenda. A UN-Water Analytical Brief. Retrieved from https://www.unwater.org/publications/water-security-global-water-agenda/.
USDA-NRCS. (2004). Chapter 10: Estimation of direct runoff from storm rainfall. In National engineering handbook, part 630 hydrology. U.S. Department of Agriculture Natural Resources Conservation Service. Retrieved from https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/national/water/manage/hydrology/?cid=STELPRDB1043063.
USDA-NRCS. (2019a). Web soil survey. USDA Natural Resources Conservation Service. Retrieved from https://websoilsurvey.sc.egov.usda.gov/App/HomePage.htm.
USDA-NRCS. (2019b). Soils. U.S. Department of Agriculture Natural Resources Conservation Service. Retrieved from https://www.nrcs.usda.gov/wps/portal/nrcs/main/soils/survey/.
USEPA (2020). Green infrastructure. U.S. Environmental Protection Agency. Retrieved from https://www.epa.gov/green-infrastructure.
USGS (2020a) The water science school. U.S. Geological Survey. Retrieved from https://water.usgs.gov/edu/watercycle.html.
USGS (2020b). National Water Information System: Web interface. U.S. Geological Survey. Retrieved from https://waterdata.usgs.gov/nwis.
Leigh-Anne H. Krometis
Biological Systems Engineering
Virginia Tech
Blacksburg, Virginia, USA
Key Terms
Water pollution | Source control | Water quality standards
Ecological and ecosystem services | Delivery control | Nutrient management
Pollutant budget | Assimilative capacity | Urban stormwater planning
Introduction
Water is critical to all known forms of life, human and non-human. Poor management of water resources can result in risks to human health through the spread of toxic chemicals and pathogenic microorganisms; reduction in species diversity through changes in water chemistry and/or habitat loss; economic hardship due to a failure to meet industrial, agricultural, and energy needs; and political conflict or instability as neighboring states or nations struggle to equitably distribute water to their people.
Globally, 70% of freshwater withdrawals are used by the agricultural sector (World Bank, 2017). It is important to recognize, however, that these consumptive values can vary considerably by nation or global region depending on the local population size, ecology, climate, and primary industries present. In the United States, the U.S. Geological Survey (USGS) estimated that in 2011, 41% of consumptive water use (water that is not returned quickly to the same source from which it was taken) was devoted to thermoelectric power generation, 40% was used to support various forms of agriculture (aquaculture, livestock, crop irrigation), 13% supported domestic household use, and the remaining 6% was used for industrial purposes or in extractive industries (e.g., mining) (USGS, 2018). In contrast, the United Nations Food and Agriculture Organization (UNFAO) estimated in 2015 that over 64% of water in China and nearly 80% of water in Egypt supported agriculture (UNFAO, 2019). Although per capita water use has declined in recent years, the human population and its attendant need for clean water, affordable energy, and nutritious food continues to increase. Concurrently, non-human species diversity continues to decline as forest, soil, and water resources are increasingly exploited (MEA, 2005; Raudsepp-Hearne et al., 2010). More explicit consideration of the intricate feedback of food-energy-water systems within human populations and their impacts on other ecological services is needed to ensure sustainability.
This chapter introduces basic concepts related to water management and provides examples of best management practices that can be used to preserve and improve water quality. Here, we define ecosystem health as the capacity of a natural system to support human and non-human needs. In this chapter particular focus is on chemical, microbial, and physical constituents in water as drivers of ecosystem health.
Outcomes
After reading this chapter, you should be able to:
• • Define pollution in terms of assimilative capacity and use of water body
• • Explain the concept of ecological, or ecosystem, services and their relationship to water quality
• • Describe strategies for water pollution control, including pollutant budgets and stormwater best management practices
• • Calculate a range of water quality impairment and cost parameters
Concepts
Definition and Description of Water Pollution
The U.S. Environmental Protection Agency defines water pollution as “human-made or human-induced alteration of chemical, physical, biological, and radiological integrity of water” (USEPA 2018a). These alterations include the addition of specific pollutants (e.g., chemicals, microorganisms, sediment) to an aquatic system or the alteration of natural conditions, such as pH or temperature. In this context, the “integrity” of the water refers to the ability of the water to continue to perform appropriate human or ecological functions. These functions are spelled out explicitly by the European Environment Agency’s definition of pollution as “the introduction of substances or energy into the environment, resulting in deleterious effects of such a nature as to endanger human health, harm living resources and ecosystems, and impair or interfere with amenities and other legitimate uses of the environment” (EEA, 2019). While the two definitions are similar, it is critical to note that the EEA does not specify that pollution needs to be human-made.
A place where pollutants directly enter a receiving water such as a stream, river, or lake through an identifiable pipe or culvert (e.g., industrial outfall or wastewater treatment plant effluent) is referred to as a point source (PS) of pollution. Point sources are generally reasonably constant in flow and concentration (i.e., the pattern, type, and amount of pollution being discharged is consistent), because they tend to be governed by predictable or controlled processes. Places from which pollutants are transported to receiving waters via stormwater runoff (e.g., eroded sediment from construction sites and leachate from septic drainfields), or are more diffuse and less predictable in nature, are referred to as nonpoint sources (NPS) of pollution. NPS pollution is sometimes referred to as diffuse pollution, as the sources are distributed throughout the catchment, rather than originating from a distinct location. NPS discharges are generally highly variable and much more severe following significant weather events such as rainfall or seasonal events such as snowmelt. Consequently, NPS pollution is often more serious during high flows when greater quantities of pollutants are being transported to receiving waters, while PS pollution is more serious during low flows when there is less dilution of constant discharges (Novotny, 2003).
Although any changes to water through PS or NPS contributions may meet the technical definition of pollution, pollution is only considered a concern if it exceeds the receiving water’s waste assimilative capacity so that the water no longer supports its human or ecological purpose. Waste assimilative capacity is defined as the natural ability of a water body to absorb pollutants without harmful effects. Receiving waters can naturally process some level of pollution through dilution, photodegradation, and bioremediation. For example, native aquatic plants use nutrients including nitrogen and phosphorus to grow; however, very high nutrient contributions from anthropogenic sources can stimulate algal overgrowth, leading to harmful blooms, eutrophication and aquatic ecosystem collapse (Withers et al., 2014).
Ecological Services and Water Quality Decisions
Historically, human and ecological uses of water resources were sometimes regarded as separate or even competing aims. There is an increasing effort to acknowledge the inherent linkages and interdependence of human and ecological well-being through the concept of ecological, or ecosystem, services. Rather than promoting conservation of habitat and non-human species diversity solely for the sake of nature, the concept of ecological services recognizes that preservation and restoration of natural ecosystems also protects functions that ensure the sustainability of human communities. Ecological or ecosystem services are classified into four categories: regulating services (climate, waste, disease, buffering); provisioning services (food, fresh water, raw materials, genetic resources); cultural services (inspiration, spiritual, recreational, educational, scientific); and supporting services (nutrient cycling, habitats, primary production). Making ecological services (such as supporting fish populations, carbon sequestration, nutrient cycling, and flood mitigation) explicit allows for their quantification and inclusion in cost-benefit analyses associated with future land use planning and the allocation of funds for water quality improvements (Keeler et al., 2012; APHA, 2013; Hartig et al., 2014). Continuing research aims to uncover and quantify additional linkages between human health and well-being and ecosystem integrity, including promotion of mental health and community cohesiveness (Sandifer et al., 2015). This is in keeping with the mission of the American Society of Agricultural and Biological Engineers, whose members “ensure that we have the necessities of life: safe and plentiful food to eat, pure water to drink, clean fuel and energy sources, and a safe, healthy environment in which to live” (ASABE, 2020).
Designated Uses and Water Quality Standards
Water quality standards vary around the world. In some jurisdictions (e.g., countries, regions), minimum water quality standards are set for all water bodies regardless of use; in other jurisdictions, appropriate levels of different water quality constituents are generally determined based on chosen, intended, or planned uses of a water body. For example, in the U.S., states, tribes, and territories assign “designated” uses to surface waters to protect human health following water contact (e.g., drinking water reservoirs, recreation, fishing), to preserve ecological integrity (e.g., trout stream, biological integrity), and for economic or industrial use (e.g., navigation, sufficient flow for hydroelectric power). Acceptable levels of critical pollutants are then established to ensure the water body can continue to meet these designated uses. For instance, a water body used only for irrigation may have concentrations of nitrate (NO3), a soluble form of an important plant nutrient, that exceed safe levels for drinking water, without interfering with its use as irrigation water. Basing water quality standards on the designated use of the water body allows for these differences in quality requirements by use category to be taken into account.
An Example of Water Pollution Regulation: U.S.A.
The U.S. Clean Water Act, introduced in 1972, remains the primary regulatory mechanism to ensure surface waters in the U.S. continue to meet the designated uses while protecting human and ecological health. At its most basic, the Clean Water Act regulates point sources through the National Pollutant Discharge Elimination System (NPDES), which requires permits for discrete discharges to ensure implementation of best practicable technology and appropriate monitoring.
NPS pollution is primarily regulated through the Total Maximum Daily Load (TMDL) program, which requires states to monitor surface water and compile lists of waters that do not meet standards applicable for their designated uses, which are then classified as impaired and require TMDL development (Keller and Cavallaro, 2008; USEPA, 2019). The acronym TMDL has two distinct definitions: (1) the mathematical quantity of a targeted pollutant that a receiving water can absorb without harmful effects (Equation 4.2.1); and (2) the restoration process developed to bring that water body back into compliance with water quality standards (Freedman et al., 2004). Through this restoration process, acceptable levels of pollutant discharges are identified that will not exceed the waste assimilative capacity of the water body so that it can maintain pollutant levels appropriate to its designated use. Mathematically, TMDL is defined as:
$TMDL = PS + NPS + MOS$
where TMDL = maximum permissible total quantity of targeted pollutant that can be added to the receiving water each day (mass day−1)
PS = all point source contributions of the targeted pollutant (mass day−1), regulated through the NPDES process
NPS = all nonpoint source contributions of the targeted pollutant (mass day−1)
MOS = a margin of safety (mass day−1)
TMDL is calculated on a load (mass) basis, e.g., mg day-1, and for each individual pollutant that is compromising the use of the water body in question. Margins of safety are included to account for future land development, changes in climate, and uncertainties in measurements and modeling used in TMDL development. Once a total maximum daily load is determined for a water body to meet the relevant water quality standards (including how much pollutant can be allowed from PS and NPS), then treatment systems and land use changes can be designed to meet that maximum daily load.
Determining the allowable total maximum daily load combines mass balance and concentration information, which are described in more detail later in this chapter. While this calculation is simple, the most important part of solving the problem is keeping track of units and identifying the data and information needed to complete the task. Occasionally there may be an abundance of data, but not all of it is useful; it is essential for an engineer to identify exactly what data are needed to complete the calculation.
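In script form, the allocation arithmetic of Equation 4.2.1 is a one-liner. The following minimal sketch uses hypothetical loads, since the chapter does not give numeric values here:

```python
# Allowable nonpoint-source load from a TMDL allocation (Equation 4.2.1).
# All values are hypothetical, for illustration only.
tmdl = 150.0   # kg day^-1, waste assimilative capacity for the pollutant
ps = 40.0      # kg day^-1, summed permitted point-source discharges
mos = 15.0     # kg day^-1, margin of safety (10% of the TMDL here)

nps_allocation = tmdl - ps - mos   # what nonpoint sources may contribute
print(f"NPS load allocation = {nps_allocation:.0f} kg/day")
```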
Engineering Strategies for Water Quality Protection
Strategies to preserve surface water integrity from degradation or to address water quality impairments are often referred to collectively as best management practices (BMPs). The USEPA defines a BMP as “a practice or combination of practices determined by an authority to be the most effective means for preventing or reducing pollution to a level compatible with water quality goals” (USEPA, 2018b). This term is more broadly encompassing than the National Academies’ Stormwater Control Measure (SCM), which primarily refers to structural practices implemented in urban areas to intercept stormwater (NRC, 2009). In addition to structural practices, the term BMP can be used to describe non-structural efforts to protect water quality, including public participation, community education, and pollutant budgeting, and is used to describe these efforts in a variety of land-water environments, including urban, agricultural, and industrial (e.g., mining, forestry) landscapes.
Water quality protection strategies can be broadly categorized as implementing either source control or delivery control. The function of many strategies for water quality protection can be described very simply using a mass balance:
$M_{in} = \Delta S + M_{out}$
where Min = mass of pollutant entering the system of interest (e.g., field, structure) (kg)
∆S = mass of pollutant retained or treated by the system of interest (kg)
Mout = mass of pollutant leaving the system of interest (kg)
This simple relationship is the foundation for the design, performance evaluation, and costing of BMPs. Application of Equation 4.2.2 to different types of strategies is described in the following sections.
Source Control
Source control refers to efforts to reduce the presence or availability of the pollutant in the land-water system (e.g., eliminating pesticide use) or to prevent transport of the pollutant from its original source (e.g., discouraging erosion by managing tillage in a field). Widespread use of chemical fertilizers has facilitated a more than doubling of cereal grain production globally over the past 50 years, allowing for the feeding of an ever-increasing population (Tilman et al., 2002). However, overuse of fertilizers can lead to nutrient losses via runoff to surface water and/or leaching to groundwater following precipitation events if soil amendments are not applied via an appropriate method at the time of year best suited to promote plant growth. Excessive nutrient loadings can result in eutrophication and aquatic biology impairments, as well as difficulty in meeting municipal drinking water needs. Use of fertilizers in excess of crop needs also represents an unnecessary and nontrivial expenditure for the producer. When applied to a source control practice such as nutrient management, the variables in Equation 4.2.2 are defined as:
Min = mass of nutrient applied to the crop
∆S = mass of nutrient taken up by the crop + mass of nutrient adsorbed by the soil
Mout = mass of nutrient leaving the field in runoff, lateral flow through the soil, and in deep seepage.
Delivery Control
Delivery control refers to efforts to reduce pollutant movement to source waters after pollutants are moved from their point of origin. Often, delivery control efforts involve interception, treatment, and/or storage of pollutants in water (e.g., riparian buffer, detention basin) prior to their discharge into a receiving water. When applied to a delivery control practice such as a detention basin, the variables in Equation 4.2.2 are defined as:
Min = mass of pollutant in inflow
∆S = mass of pollutant treated or retained
Mout = mass of pollutant in outflow
An example of delivery control is a detention basin in which runoff water is collected and the particles are allowed to settle out before the water flows out of the basin. A detention basin can be placed at the outlet of a watershed in which soil erosion is occurring (the NPS) to reduce the mass load of sediment flowing out of the watershed as part of a plan to meet the TMDL. Theoretical particle removal by size class in the detention basin can be calculated by assuming a theoretical stormwater basin of depth D, width W, and length L (Figure 4.2.1):
Assuming a constant inflow rate of Q, the average vertical (“fill”) velocity is approximated as flow divided by cross-sectional area of the basin, or:
$V_{y} = \frac{Q}{W \times L} \text{ for depth of water in the basin <D}$
where Vy = average vertical (“fill”) velocity (m h−1)
Q = inflow rate (m3 h−1)
W = width (m)
L = length (m)
When the basin is full of water and inflow continues, water will flow out of the basin, and:
$Q_{R} = \frac{Q}{W \times L} \text{ when depth of water in the basin = D}$
where QR = overflow rate (m h−1)
Under laminar flow conditions (smooth flow at low velocity), the theoretical rate at which a particulate will settle from the runoff water is governed by Stokes’ law:
$V_{s} = \frac{gd^{2}(\rho_{p} - \rho_{w})}{18\mu}$
where Vs = settling velocity (m s−1)
ρp = density of particle (kg m−3)
ρw = density of the fluid (water) (kg m−3)
g = acceleration due to gravity (m s−2)
d = particle diameter (m)
μ = viscosity of water, 10−3 N s m−2 at 20°C
Equations 4.2.3 and 4.2.5 can be used together to determine dimensions of the basin that will allow even small and light particles, such as fine silts, to settle to the bottom of the basin for a given inflow rate. Notice that both Equations 4.2.3 and 4.2.5 represent a velocity in the vertical direction; Equation 4.2.3 describes the change in water depth over time as the inflow fills the basin, and Equation 4.2.5 describes the vertical velocity of the particles in the water. When these are equal to each other, the dimensions of the basin are sufficient to allow the sediment particles to settle to the bottom of the basin before the now-clean water begins to leave the basin from the top (Figure 4.2.1). Detention basins can be sized to completely remove particles of a minimum size and density by setting overflow rate (Equation 4.2.4) equal to the theoretical settling velocity for that design size particle, e.g.:
$Q_{R} = \frac{Q}{W \times L} = \frac{gd^{2}(\rho_{p}-\rho_{w})}{18\mu}$
Given a design inflow rate Q, and size and density of the small particles carried in with the flow, combinations of W and L (the dimensions of the basin) can be found that will accomplish the objective of settling the small particles to the bottom. This reduces the load carried in the outflow, reducing the consequences of sediment pollution downstream and helping to meet targets for maximum allowable load.
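As an illustration of this sizing procedure, the following minimal Python sketch (our own code, with a hypothetical design particle and inflow) combines the overflow rate (Equation 4.2.4) with Stokes' law (Equation 4.2.5):

```python
# Size a detention basin so that particles of a chosen design diameter
# settle out: set the overflow rate Q/(W*L) equal to the Stokes
# settling velocity and solve for the surface area W*L.

G = 9.81        # acceleration due to gravity (m s^-2)
MU = 1.0e-3     # viscosity of water at 20 C (N s m^-2)
RHO_W = 1000.0  # density of water (kg m^-3)

def stokes_settling_velocity(d_m, rho_p):
    """Settling velocity (m s^-1) from Equation 4.2.5."""
    return G * d_m**2 * (rho_p - RHO_W) / (18.0 * MU)

def required_surface_area(q_m3_s, d_m, rho_p):
    """Basin surface area W*L (m^2) so the design particle settles."""
    return q_m3_s / stokes_settling_velocity(d_m, rho_p)

# Hypothetical design: fine silt of 0.02 mm diameter, density
# 2,650 kg/m3, steady inflow of 0.5 m3/s.
d = 0.02e-3
print(f"settling velocity: {stokes_settling_velocity(d, 2650.0):.2e} m/s")
print(f"required area: {required_surface_area(0.5, d, 2650.0):,.0f} m^2")
```

Any combination of W and L giving at least this surface area meets the design objective; the choice between a long, narrow basin and a short, wide one is then driven by the site.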
Cost-Benefit Analysis

Cost-benefit analysis (CBA) at its simplest divides an estimate of the monetary value of the benefits of a project (b, any currency, e.g., $ or €) by the costs (c, in the same currency as b):

$BCR = \frac{b}{c}$

where BCR is the benefit-to-cost ratio (unitless). In practice, CBA requires more detailed considerations, particularly concerning when the costs are incurred and when the benefits will accrue, because the value of a unit of money changes through time. To ensure BCR is meaningful, all costs and benefits should be adjusted to a common reference time period, using inflation data for these adjustments.

At its simplest, to calculate BCR for an engineered structure for water quality protection, it is necessary to know who has a vested interest (the stakeholders) and what benefits they want to prioritize. Once this is done, the production costs can be estimated; the benefits, expressed as monetary values, can be estimated; all costs and benefits converted to the same currency; and all adjusted to represent the same time period. In general, it is relatively straightforward to estimate the cost of a project because a design can be converted into a bill of materials and a construction schedule, and the operating cost can be estimated from current practice. The benefits can be much more difficult to monetize, but could be estimated from the medical costs of human illness or the willingness of people to pay for a cleaner environment. Putting a price on non-human ecosystem services that can be damaged by poor water quality requires ingenuity. For example, the cost of eutrophication (a result of excess nutrient loadings) on local ecology, recreation, and aesthetics could be quantified through losses in fish yields, fishing permits, local home prices, or tourism revenue, but quantifying the cost of losing a species that most people are not even aware of is much more difficult.

Applications

Designated Uses and Water Quality Standards

As noted earlier, the concept of basing water quality standards on the designated use of water bodies allows differences in water quality requirements by use category to be taken into account. Establishing water quality standards based on designated uses presents challenges to policymakers. They must consider common designated uses for surface freshwaters and decide which uses should have more stringent standards. For example, in comparing standards for drinking water reservoirs vs. fishing, drinking water might generally be expected to require more stringent standards, as this implies direct human contact. However, it is worth noting that water from a drinking water reservoir is directed into a treatment plant, which may remove pollutants of concern (though at some expense); and for some contaminant and species interactions, human drinking water standards are insufficient to provide protection. For example, the selenium drinking water standard set by the USEPA is 50 ppb, but there is research suggesting selenium levels over 5 ppb may be toxic to some freshwater fish due to bioaccumulation. In comparing irrigation vs. navigation, irrigation involves potential application to plants that could then be consumed by humans, so this water would likely need to be of higher quality. However, it is also useful to consider water quality issues that might impede navigation, for example, extreme eutrophication: filamentous algae can tangle motors and docks (and irrigation intake pumps).
One particularly difficult use comparison is habitat maintenance vs. recreation. Full-body-contact recreation by humans could involve ingestion of and/or submersion in water, while habitat maintenance could involve water chemistry and habitat not compatible with human submersion.

Source Control: Nutrient Management

Nutrient management, the science and practice of managing the application of fertilizers, manure, amendments, and organic materials to agricultural landscapes as a source of plant nutrients, is a source-control BMP designed to simultaneously support water quality protection and agroeconomic goals. This pollution control strategy sets a nutrient budget whereby primary growth-limiting nutrients (generally nitrogen, phosphorus, and potassium, or N-P-K) are applied only in amounts needed to meet crop growth needs. Fertilizer application is intentionally timed to coincide with times of maximum crop need (e.g., prior to or just after germination) and to avoid high-risk transport periods (e.g., prior to large rainfall events or when the ground is frozen). By minimizing the amount of fertilizer applied, the risk of loss to the environment and the cost of production are also minimized.

In its simplest form, nutrient management planning can be thought of in terms of a mass balance (Equation 4.2.2). Using a mass balance approach also requires deciding on the appropriate scale of analysis; it may be appropriate to consider inputs and outputs on a per-unit-land-area basis, and/or it may be appropriate to consider a whole farm. The latter can be especially useful for managing nutrients in a combined animal and plant production system, where the animals generate waste that contains a concentration of nutrients, and where the animal waste (manure) can be applied to an area of land to meet the nutrient demand of the plants. Nutrient concentration information can be converted to nutrient mass information for use in a mass-balance approach by multiplying the concentration by the relevant total area or volume:

$\text{mass (in an area or volume)} = \text{concentration (per unit area or volume)} \times \text{total area or volume}$

Units in Equation 4.2.8 will vary depending on the specific application, so it is important to keep track of the units and convert them as needed. Common units for concentration on a per unit volume basis are mg L−1 and g cm−3; on a per unit area basis, a common unit is kg ha−1.

It is relatively straightforward to estimate nutrient application rates. For example, if the N demand of a crop is known, and the available N in a wastewater or manure is known, it is possible to calculate whether field application of the wastewater or manure is likely to exceed crop demand and thus cause pollution. Nitrogen needed for a field (kg) can be calculated as:

$\text{nitrogen needed by the crop} = \text{area} \times \text{crop nitrogen demand per unit area}$

where area (ha) can be determined from maps or farm records and crop demand (kg ha−1) can be taken from advisory/extension service or agronomy guidelines. If the N content of a wastewater is known, Equation 4.2.8 can be used to calculate the available supply of nitrogen. The difference between the amount spread and the amount needed indicates whether polluting losses are likely. While nutrient management is relatively simple conceptually and practically in terms of chemical fertilizers, the practice becomes much more complicated when animal wastes such as manure are used as a source of crop nutrients and soil organic matter.
The use of manure as a fertilizer and soil conditioner has proven a successful agricultural strategy since the Neolithic Revolution and continues to be recommended as a sustainable means of recycling nutrients in agricultural systems today. However, because manures are quite heterogeneous in composition, matching manure nutrient content with crop needs can be quite complex. Other complicating factors in nutrient management plans, particularly those reliant on manure, include the impacts of historical land uses on soil nutrient levels and additional potentially harmful components of animal wastes. Years of fertilization with manure have resulted in P saturation of many agricultural soils (Sims et al., 1998). Given that P is generally the growth-limiting nutrient for freshwater systems (i.e., additional P is likely to result in eutrophication), many agricultural nutrient management guidelines are P-based and so do not permit addition of fertilizer beyond crop P needs. This can make disposal of manures difficult if surrounding croplands have P-saturated soils. Manures and agricultural wastes can also contain additional contaminants of human health concern, including pathogenic microorganisms and antibiotics. Consequently, crops for human consumption cannot be fertilized with animal manures unless there is considerable oversight and pre-treatment (e.g., composting) (USFDA, 2018). While the previous examples have focused primarily on agricultural landscapes, nutrient management is also widely applied in urban landscapes to minimize nutrient loss following fertilization of ornamental plants, lawns, golf courses, etc. (e.g., Chesapeake Stormwater Network, 2019).

Delivery Control: Detention Basins and Wetlands

One common pollutant in water systems is excess sediment, which arrives in the water body with surface runoff carrying particles eroded from the soil over which the water has moved. Human activities, including agriculture, urban development, and resource extraction, have been estimated to move up to 4.0 to 4.5 × 1013 kg yr−1 of soil globally (Hooke, 1994, 2000). Given the sheer magnitude of earth-moving activities involved, it is perhaps inevitable that these activities accelerate erosion, i.e., the wearing away and loss of local soils. Erosion represents a significant concern as it results in the degradation of soil quality and the contamination of local receiving waters. Eroded sediments alone can threaten aquatic ecology through sedimentation of habitat, physical injury to aquatic animals, and disruption of macroinvertebrate biological processes (Govenor et al., 2017). In addition, these sediments can carry with them adsorbed pollutants, including bacteria (Characklis et al., 2005), metals (Herngren et al., 2005), nutrients (Vaze and Chiew, 2004), and some emerging organic contaminants (Zgheib et al., 2011). Eroded sediments can also compromise the storage capacity of lakes and reservoirs. Detention, or “settling,” basins (also called ponds) are a popular BMP in the USA and beyond, implemented in a variety of landscapes to prevent eroded soils from contaminating local waterways. In recent decades, low impact development (LID) practices have started to emerge as BMPs. LID “refers to systems and practices that use or mimic natural processes that result in the infiltration, evapotranspiration or use of stormwater in order to protect water quality and associated aquatic habitat” (USEPA, 2018c).
LID is a design approach to managing stormwater runoff in urban and suburban environments, both in new developments and in retrofits of older developments. Although the term LID was first coined in the U.S., this paradigm is now widely practiced elsewhere (Saraswat et al., 2016; Hager et al., 2019). Specific BMPs used to support LID include wetlands, which rely on both physical (e.g., settling) and biological (e.g., denitrification) processes to remove water quality pollutants, and bioretention cells, which use infiltration through a bioactive media to remove contaminants and decrease peak flows (Figure 4.2.2). Selection of an appropriate BMP requires knowledge of the specific target pollutants requiring treatment, available land and land cost, and stakeholder preferences and capacity for continuing maintenance. LID approaches also consider broader ecological impacts beyond the reduction of a target pollutant by the BMPs employed, including habitat restoration and carbon/nutrient cycling. The advent of these strategies to manage stormwater has contributed to the creation of a new subdiscipline of agricultural and biological engineering during the past few decades, known as ecological engineering. Ecological engineering is defined as “the design of sustainable ecosystems that integrate human society with its natural environment for the benefit of both” (Mitsch, 2012). As with any emerging discipline, there is substantial current research codifying ecological engineering design guidelines and quantifying expected outcomes of relevant BMP implementation (Hager et al., 2019).

Urban Stormwater Planning

An important aspect of urban planning is effective stormwater control. The selection of appropriate BMPs for each urban setting depends on the specifics of the situation. For example, consider an urban community that is particularly concerned about maintaining a small downstream reservoir for aquatic recreation. Samples from this reservoir must occasionally be tested for levels of fecal coliform bacteria. The presence of fecal coliform indicates that the water has been contaminated with human or other animal fecal material and that other pathogenic organisms may be present. To ensure fecal coliform values remain below the recommended levels, specific BMPs can be implemented. Implementing source control practices, such as dog waste collection stations, could be part of the solution. In addition, one or more delivery control practices, such as bioretention cells, detention basins, or a wetland basin, would be required to remove fecal coliforms from stormwater flows. The design of these urban features to reduce coliform transport to local streams and the reservoir requires knowledge of the local climate, specifically rainfall patterns, and some idea of the loading that might be expected, specifically, the number and magnitude of sources of coliforms. To evaluate which BMP is most appropriate and to obtain design guidelines, a tool such as the International Stormwater BMP Database (Clary et al., 2017) can be used. The database includes data and statistical analyses from over 700 BMP studies, performance analysis results, and other information (International Stormwater BMP Database, 2020).
Interpretation of the results of the statistical analyses has to consider issues such as whether the magnitude of the average decrease or the reliability of the BMP is most important, whether the BMP might actually export bacteria, how location-specific the data might be, and how useful a particular BMP might be for related pollutants, in this case something like E. coli. Ultimately, size and cost calculations need to be used to select a specific design.

Examples

Example $1$

Example 1: Quantifying ecosystem services

Problem:

Presently in the U.S. Midwest there is concern that the use of fertilizer on agricultural lands to maximize crop production may result in downstream concentrations of nitrate that render the water more difficult and costly to treat for human consumption. The current maximum permissible concentration of nitrate in drinking water is 10 mg L−1. Assume that the average nitrate concentration at a drinking water treatment plant intake is 12.3 mg L−1. The plant must treat and distribute 1.5 × 108 L of water per day to meet consumer demand. Treating water to remove nitrate costs $2 kg−1. What is the minimum cost of nitrate treatment per year?
Solution
The cost of nitrate treatment is expressed in units of $ kg−1 of nitrate. Thus, to determine the total cost, determine the mass of nitrate treated using Equation 4.2.2:

$\text{mass nitrate in inflow} = \text{mass nitrate treated} + \text{mass nitrate in outflow}$

In this case, the concentration of nitrate in the inflow is 12.3 mg L−1 and the concentration of nitrate in the outflow should not exceed 10 mg L−1. The difference can be used to estimate the minimum amount of nitrate that must be treated:

$\text{mass nitrate treated} = \text{(concentration in inflow} - \text{concentration in outflow)} \times \text{volume}$

$= (12.3\ \text{mg L}^{-1} - 10.0\ \text{mg L}^{-1}) \times 1.5 \times 10^{8}\ \text{L day}^{-1}$

$= 3.45 \times 10^{8}\ \text{mg day}^{-1} \times (1\ \text{kg} / 10^{6}\ \text{mg}) = 345\ \text{kg day}^{-1}$

The annual cost of treatment can then be calculated as:

$\$2\ \text{kg}^{-1} \times 345\ \text{kg day}^{-1} \times 365\ \text{days year}^{-1} = \$251,850\ \text{year}^{-1}$

This calculation provides no contingency for inefficiency in the plant. If a safety margin of 1 mg L−1 were included, the outflow concentration would be 9 mg L−1, and the calculation would be:

$\text{mass nitrate treated} = (12.3\ \text{mg L}^{-1} - 9.0\ \text{mg L}^{-1}) \times 1.5 \times 10^{8}\ \text{L day}^{-1}$

$= 4.95 \times 10^{8}\ \text{mg day}^{-1} \times (1\ \text{kg}/10^{6}\ \text{mg}) = 495\ \text{kg day}^{-1}$

and

$\$2\ \text{kg}^{-1} \times 495\ \text{kg day}^{-1} \times 365\ \text{days year}^{-1} = \$361,350\ \text{year}^{-1}$

A cost-benefit analysis would have to be used to decide whether it is worth paying $109,500 more per year for what might be seen as greater certainty that the outflow water quality will be better than the permissible limit.
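This arithmetic is easy to script; the following minimal Python version of the example (our own sketch, with the example's values hard-coded) reproduces both results:

```python
# Minimum annual nitrate treatment cost (Example 1).
INTAKE = 12.3   # nitrate concentration at plant intake (mg/L)
FLOW = 1.5e8    # treated volume (L/day)
COST = 2.0      # treatment cost ($ per kg nitrate removed)

def annual_cost(target_mg_L):
    """Annual treatment cost ($) to reach target_mg_L in the outflow."""
    mass_mg_day = (INTAKE - target_mg_L) * FLOW  # mg nitrate removed per day
    mass_kg_day = mass_mg_day / 1e6              # mg -> kg
    return COST * mass_kg_day * 365

print(annual_cost(10.0))  # no safety margin      -> 251850.0
print(annual_cost(9.0))   # 1 mg/L safety margin  -> 361350.0
```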
Example $2$
Example 2: Calculating a TMDL
Problem:
You are a water quality manager tasked with ensuring that a stream within a small, rapidly urbanizing watershed remains in compliance with applicable state standards. At present, water quality monitoring indicates that nitrate-nitrogen (NO3-N) levels (mg 100 mL−1) in grab samples are just below the state standard. Knowing that future development will likely increase nutrient discharges, you decide to calculate a current TMDL value for future reference based on a current inventory of loadings to the stream. An inventory of local NPDES permits provides the loadings in Table 4.2.1; water quality models estimate that nonpoint sources contribute roughly 2.3 × 109 g month−1 of NO3-N. Prior experience indicates that the margin of safety should be equivalent to 35% of total current nonpoint and point source loadings in order to account for errors, growth, and missing data. What TMDL value (in Mg day−1) do you report for this stream under the current conditions?
Table $1$: Average daily discharge and NPDES permitted loading from local point sources.
Source | Average Daily Discharge (L day−1) | Permitted Loading (per day)
Wastewater treatment plant | 6.4 × 106 | 5.6 × 106 E. coli; 0.7 Mg NO3-N
Mid-sized concentrated animal feeding operation (CAFO) | 1.0 × 104 | 4.4 × 105 E. coli; 0.2 Mg sediment
City storm sewer 1 | 5.3 × 105 | 10.4 Mg sediment
City storm sewer 2 | 0.13 × 105 | 3.2 × 107 Mg NO3-N
Solution
Calculate the TMDL using Equation 4.2.1; specifically, sum the point (PS) and nonpoint (NPS) source loads of NO3-N and add a margin of safety (MOS):
$TMDL = PS+NPS+MOS$
Point sources of NO3-N, based on the inventory of local NPDES permits, are a wastewater treatment plant and city storm sewer #2. The total PS loadings per day are:
$PS = 0.7 Mg\ + 3.2 \times 10^{7} Mg = 3.2 \times 10^{7} Mg\ NO_{3}-N$
The loading from the wastewater treatment plant is negligible compared to that of the city storm sewer.
Nonpoint sources of NO3-N are 2.3 × 109 g month−1. Assuming 30 days per month yields the NPS loading per day:
$NPS = 2.3 \times 10^{9} \text{ g month}^{-1} / (30 \text{ days month}^{-1}) = 77\ Mg\ NO_{3}-N$
Since the specified margin of safety is 35% of the total PS and NPS loadings, the TMDL is:
$TMDL = PS+NPS+0.35 (PS+NPS)=1.35\times (PS +NPS)=1.35 \times [(3.2 \times 10^{7})+ 77]$
$=4.32 \times 10^{7}\ Mg\ NO_{3}-N \text{ day}^{-1}$
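For reference, a minimal Python sketch of this TMDL bookkeeping (our own illustration, using the example's values) is:

```python
# TMDL = PS + NPS + MOS (Equation 4.2.1), with MOS taken here as 35%
# of the combined point and nonpoint source loads.
ps_loads_Mg_day = [0.7, 3.2e7]  # WWTP and city storm sewer 2 (Mg NO3-N/day)
nps_g_month = 2.3e9             # modeled nonpoint load (g NO3-N/month)

ps = sum(ps_loads_Mg_day)
nps = nps_g_month / 30 / 1e6    # g/month -> Mg/day (30-day month; 1 Mg = 1e6 g)
tmdl = 1.35 * (ps + nps)        # 1.35 = 1 + 0.35 margin of safety
print(f"PS = {ps:.3g} Mg/day, NPS = {nps:.3g} Mg/day, TMDL = {tmdl:.3g} Mg/day")
```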
Example $3$
Example 3: Nutrient management to meet crop needs
Problem:
You are advising a producer who is managing 30.3 ha in continuous cultivation for corn (maize; Zea mays) silage. You have determined from agronomic advice that for the soil type and cultivar the crop needs 326 kg ha−1 of nitrogen after initial planting. An adjacent dairy has a slurry (mixture of manure and milking parlor wastewater) that could be used as a source of nitrogen. Laboratory analyses indicate that the slurry contains 15.6 kg available nitrogen per 1000 L of slurry.
(a) How much slurry would be required to completely fertilize the field to meet crop needs?
(b) Assuming the available slurry spreader can spread no less than 47,000 L ha−1, what is the minimum quantity of slurry that can be applied?
(c) Is the application of slurry to the field likely to cause pollution?
Solution
(a) To calculate the total amount of slurry needed to provide the needed amount of nitrogen to the cropped area, first calculate the total amount of nitrogen needed in the field:

$N \text{ needed in the field} = 30.3\text{ ha} \times 326 \text{ kg N ha}^{-1} = 9,877.8 \text{ kg N}$

Then, calculate the amount of slurry needed to provide the needed N, based on the N content of the slurry:

$\text{slurry needed in the field} = 9,877.8 \text{ kg N} \times (1,000\ L/15.6 \text{ kg N}) = 633,192\ L$

(b) The machine can apply a minimum of 47,000 L ha−1. Using the available slurry, the amount of nitrogen that would be applied at this rate is:

$15.6 \text{ kg N} / 1,000\text{ L} \times 47,000 \text{ L ha}^{-1} \times 30.3 \text{ ha} = 22,216 \text{ kg N in the field}$

(c) As the minimum application rate would result in 22,216 kg N applied to the field, and the crop only needs 9,877.8 kg N, there would be an excess of 12,338.2 kg N applied to the field, so pollution is likely. The producer could consider several options: dilute the available slurry; find another source of slurry with a lower concentration of available nitrogen; or find a slurry spreader with a lower minimum spreading rate.
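A minimal Python sketch of this bookkeeping (our own illustration, values from the example) makes the comparison explicit:

```python
# Slurry nutrient management check (Example 3).
AREA_HA = 30.3          # field area (ha)
N_DEMAND = 326.0        # crop N demand (kg N/ha)
SLURRY_N = 15.6 / 1000  # available N in slurry (kg N per L)
MIN_RATE = 47000.0      # minimum spreader application rate (L/ha)

n_needed = AREA_HA * N_DEMAND                  # kg N needed by the crop
slurry_needed = n_needed / SLURRY_N            # L of slurry to meet demand
n_applied_min = SLURRY_N * MIN_RATE * AREA_HA  # kg N at the minimum rate

print(f"N needed: {n_needed:,.1f} kg")                # 9,877.8 kg
print(f"slurry needed: {slurry_needed:,.0f} L")       # ~633,192 L
print(f"N at minimum rate: {n_applied_min:,.0f} kg")  # ~22,216 kg
print("pollution likely" if n_applied_min > n_needed else "ok")
```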
Example $4$
Example 4: Calculating theoretical detention basin removals by particle size class
Problem:
Assuming theoretical conditions as described above, what is the surface area of a detention basin required to remove 100% of particulates greater than 0.1 mm in size and with a density of 2.6 g cm−3? Given the size of the watershed and typical design storm, the basin will need to be designed to treat 10 × 106 m3 of water over 24 hours.
Solution
Detention basins can be sized to completely remove particles of a minimum size and density by setting the overflow rate equal to the theoretical settling velocity for that design size particle.
Calculate overflow rate, QR, as expressed by Equation 4.2.4:
$Q_{R} = \frac{Q}{W \times L}$ (Equation 4.2.4)
$Q_{R} = \frac{Q}{W \times L} = \frac{\frac{(10 \times 10^{6}\ m^{3})}{24\ hr(3600\ s\ hr^{-1})}}{W \times L} = \frac{115.74\ m^{3}s^{-1}}{W \times L}$
Calculate the settling velocity (Equation 4.2.5):
$V_{s} = \frac{gd^{2}(\rho_{p}-\rho_{w})}{18\mu}$ (Equation 4.2.5)
where Vs = settling velocity (m s−1)
ρp = density of particle = 2.6 g cm−3 = 2,600 kg m−3
ρw = density of the fluid (water) = 1,000 kg m−3
g = acceleration due to gravity = 9.81 m s−2
d = particle diameter = 0.1 mm = 0.0001 m
μ = viscosity of water, 10−3 N s m−2 at 20°C
$V_{s} = \frac{(9.81 m\ s^{-2})(0.0001\ m)^{2}(2,600\ kg\ m^{-3} - 1,000\ kg\ m^{-3})}{18(10^{-3}\ N\ s\ m^{-2})} = 0.00872\ m\ s^{-1}$
Set overflow rate equal to settling velocity and solve for the required surface area, or W × L, of the detention basin:
$\frac{115.74\ m^{3}s^{-1}}{W \times L} = 0.00872\ m\ s^{-1}$

$W \times L = \frac{115.74\ m^{3}s^{-1}}{0.00872\ m\ s^{-1}} = 13,273\ m^{2}$

The required surface area of the detention basin is approximately 13,273 m2 (about 1.3 ha).
Image Credits
Figure 1. Krometis, Leigh-Anne. H. (CC BY 4.0). (2020). Theoretical stormwater basin dimensions.
Figure 2. Krometis, Leigh-Anne. H. (CC BY 4.0). (2020). Bioretention cells for urban stormwater control in Brazil (top) and the USA (bottom). These cells are designed to temporarily store water, allowing sediments to settle, and using plants for nutrient uptake. Note the use of local native vegetation.
References
APHA. (2013). Improving health and wellness through access to nature. APHA Policy Statement 20137. American Public Health Association. Retrieved from https://www.apha.org/policies-and-advocacy/public-health-policy-statements/policy-database/2014/07/08/09/18/improving-health-and-wellness-through-access-to-nature
ASABE (2020). About the profession. https://asabe.org/About-Us/About-the-Profession
Characklis, G. W., Dilts, M. J., Simmons, III, O. D., Likirdopulos, C. A., Krometis, L. A., & Sobsey, M. D. (2005). Microbial partitioning to settleable solids in stormwater. Water Res., 39(9), 1773-1782.
Chesapeake Stormwater Network. (2019). Chesapeake Stormwater Network’s urban nutrient management guidelines. Retrieved from https://chesapeakestormwater.net/bmp-resources/urban-nutrient-management/
Clary, J., Strecker, E., Leisenring, M., & Jones, J. (2017). International stormwater BMP database: New tools for a long-term resource. Proc. Water Environment Federation WEFTEC 2017, Session 210–219, pp. 737-746.
Freedman, P. L., Nemura, A. D., & Dilks, D. W. (2004). Viewing total maximum daily loads as a process, not a singular value: Adaptive watershed management. J. Environ. Eng., 130, 695-702. https://doi.org/10.1061/(ASCE)0733-9372(2004)130:6(695)
Govenor, H., Krometis, L., & Hession, W. C. (2017). Invertebrate-based water quality impairments and associated stressors identified through the US Clean Water Act. Environ. Manag. 60(4), 598-614.
Hager, J., Hu, G., Hewage, K., & Sadiq, R. (2019). Performance of low-impact development best management practices: A critical review. Environ. Rev., 27(1), 17-42. https://doi.org/10.1007/s00267-017-0907-3.
Hartig, T., Mitchell, R., de Vries, S., & Frumkin, H. (2014). Nature and public health. Ann. Rev. Public Health, 35: 207-228. https://doi.org/10.13140/RG.2.2.15647.61600.
Herngren, L., Goonetilleke, A., & Ayoko, G. A. (2005). Understanding heavy metal and suspended solids relationships in urban stormwater using simulated rainfall. J. Environ. Manag., 76(2), 149-158. https://doi.org/10.1016/j.jenvman.2005.01.013.
Hooke, R. L. (1994). On the efficacy of humans as geomorphic agents. GSA Today 4. Retrieved from https://www.geosociety.org/gsatoday/archive/4/9/pdf/i1052-5173-4-9-sci.pdf.
Hooke, R. L. (2000). On the history of humans as geomorphic agents. Geol., 28, 843-846.
International Stormwater BMP Database (2020). http://www.bmpdatabase.org/.
Keeler, B. L., Polasky, S., Brauman, K. A., Johnson, K. A., Finlay, J. C., O’Neill, A., . . . Dalzell, B. (2012). Linking water quality and well-being for improved assessment and valuation of ecosystem services. Proc. Natl. Acad. Sci. USA 109: 18619-18624. http://doi.org/10.1073/pnas.1215991109.
Keller, A. A., & Cavallaro, L. (2008). Assessing the US Clean Water Act 303(d) listing process for determining impairment of a waterbody. J. Environ. Manag., 86, 699-711. http://doi.org/10.1016/j.jenvman.2006.12.013.
MEA. (2005). Ecosystems and human well-being: Biodiversity synthesis. Millennium Ecosystem Assessment. Washington, DC: World Resources Institute. Retrieved from https://www.millenniumassessment.org/documents/document.354.aspx.pdf.
Mitsch, W. (2012). What is ecological engineering? Ecol. Eng., 45, 5-12. https://doi.org/10.1016/j.ecoleng.2012.04.013.
Novotny, V. (2003). Water quality: Diffuse pollution and watershed management. New York, NY: J. Wiley & Sons.
NRC. (2009). Urban stormwater management in the United States. National Research Council. Washington, DC: The National Academies Press. https://doi.org/10.17226/12465.
Raudsepp-Hearne, C., Peterson, G. D., Tengö, M., Bennett, E. M., Holland, T., Benessaiah, K., . . . Pfeifer, L. (2010). Untangling the environmentalist’s paradox: Why is human well-being increasing as ecosystem services degrade? BioSci. 60, 576-589. https://doi.org/10.1525/bio.2010.60.8.4.
Sandifer, P. A., Sutton-Grier, A. E., & Ward, B. P. (2015). Exploring connections among nature, biodiversity, ecosystem services, and human health and well-being: Opportunities to enhance health and biodiversity conservation. Ecosyst. Services 12, 1-15. https://doi.org/10.1016/j.ecoser.2014.12.007.
Saraswat, C., Kumar, P., & Mishra, B. (2016). Assessment of stormwater runoff management practices and governance under climate change and urbanization: An analysis of Bangkok, Hanoi and Tokyo. Environ. Sci. Policy 64, 101-117. https://doi.org/10.1016/j.envsci.2016.06.018.
Sims, J. T., Simard, R. R., & Joern, B. C. (1998). Phosphorus loss in agricultural drainage: Historical perspective and current research. J. Environ. Qual., 27(2), 277-293. https://doi.org/10.2134/jeq1998.00472425002700020006x.
Tilman, D., Cassman, K. G., Matson, P. A., Naylor, R., & Polasky, S. (2002). Agricultural sustainability and intensive production practices. Nature 418, 671-677. https://doi.org/10.1038/nature01014.
UNFAO. (2019). United Nations Food and Agriculture Organization Aquastat database. Retrieved from http://www.fao.org/nr/water/aquastat/water_use/index.stm.
USEPA. (2018a.) Section 404 of the Clean Water Act. U.S. Environmental Protection Agency. Retrieved from https://www.epa.gov/cwa-404/clean-water-act-section-502-general-definitions.
USEPA. (2018b). Terms and acronyms. U.S. Environmental Protection Agency. Retrieved from https://iaspub.epa.gov/sor_internet/registry/termreg/searchandretrieve/termsandacronyms/search.do.
USEPA. (2018c). Urban runoff: Low impact development. U.S. Environmental Protection Agency. Retrieved from https://www.epa.gov/nps/urban-runoff-low-impact-development.
USEPA. (2019). US EPA’s national summary webpage on water quality impairments and TMDL development. U.S. Environmental Protection Agency. Retrieved from https://ofmpub.epa.gov/waters10/attains_index.home.
USFDA. (2018). Food Safety Modernization Act. U.S. Food and Drug Administration. Retrieved from https://www.fda.gov/food/guidanceregulation/fsma/.
USGS. (2018). Water use in the United States. U.S. Geological Survey. Retrieved from https://water.usgs.gov/watuse/.
Vaze, J., & Chiew, F. H. S. (2004). Nutrient loads associated with different sediment sizes in urban stormwater and surface pollutants. J. Environ. Eng., 130(4), 391-396. https://doi.org/10.1061/(ASCE)0733-9372(2004)130:4(391).
Withers, P., Neal, C., Jarvie, H., & Doody, D. (2014). Agriculture and eutrophication: Where do we go from here? Sustainability 6(9), 5853-5875. https://doi.org/10.3390/su6095853.
World Bank. (2017). Globally, 70% of freshwater is used for agriculture. Retrieved from https://blogs.worldbank.org/opendata/chart-globally-70-freshwater-used-agriculture.
Zgheib, S., Moilleron, R., Saad, M., & Chebbo, G. (2011). Partition of pollution between dissolved and particulate phases: What about emerging substances in urban stormwater catchments? Water Res., 45(2), 913-925. http://doi.org/10.1016/j.watres.2010.09.032.
Jaana Uusi-Kämppä
Water Quality Impacts, Natural Resources
Natural Resources Institute Finland (Luke)
Key Terms
• Detachment
• Transport
• Deposition
• Water erosion
• Wind erosion
• Tillage erosion
• Universal Soil Loss Equation
• Measurement
• Monitoring
Introduction
Soil is a major natural resource for food production, and it is therefore important to manage soil in a sustainable manner. In cropland areas, topsoil is degraded by depletion of available nutrients and by removal of soil material from the soil surface via erosion caused by water or wind. Erosion usually occurs more rapidly when the soil is disturbed by human activity or during extreme weather conditions such as high precipitation or drought. Soil loss from a field decreases soil fertility, and hence crop yield, through depletion of nutrients, reduction in soil organic carbon, and weakening of soil physical properties (Zhang and Wang, 2006). Recent global estimates suggest that soil erosion removes between 36 and 75 billion tonnes of fertile soil every year (Borelli et al., 2017; Fulajtar et al., 2017), causing adverse impacts to agricultural land and the environment.
In addition to the loss of fertile soil from cropland, erosion processes cause burying of crops and many environmental problems, such as siltation and pollution of receiving watercourses and degradation of air quality. Agrichemicals such as phosphorus and some pesticides adsorbed to eroded soil particles may be transported from croplands. In receiving water bodies, the chemicals may desorb and cause algal blooms or damage the local ecosystems. Due to the many harmful effects caused by soil erosion, it is important to understand erosion processes and how to monitor and prevent them, as well as how to reduce harmful environmental impacts both in the source and impacted (or target) areas. These topics are explored in more detail in this chapter.
Outcomes
After reading this chapter, you should be able to
• Define soil erosion and explain erosion mechanics and transport mechanisms
• Describe measurement and monitoring methods for quantifying erosion
• Explain and apply the Universal Soil Loss Equation (USLE) to estimate soil loss by water
• Calculate average annual soil loss and the effect of different tillage practices on erosion rates
Concepts
What is Soil Erosion?
Soil erosion is a natural geomorphological process by which surface soil is loosened and carried away by an erosive agent such as water or wind. Other agents, such as freezing and thawing, gravity, tillage, and biological activity, also cause soil movement. Human activity has accelerated erosion for many years, with changes in land use making soil so prone to erosion that loss is more rapid than replenishment. Tillage, and especially plowing, generally leaves the soil surface bare during winter. Bare soil is prone to erosion, whereas permanent grass or winter plant cover (i.e., cover crop or stubble) on the soil surface protects soils from erosion. Soil erosion is a local, national, and global problem. In the future, erosion processes may be intensified by the increase in extreme weather events predicted with climate change. New erosion areas also appear due to deforestation, clearing of land for cultivation, and global warming.
Soil Erosion Processes
The process of soil erosion consists of three different parts: detachment, transport, and deposition. First, soil particles are detached by the energy of falling raindrops, running water, or wind. Soil particles with the least cohesion are the easiest to loosen. The detached soil particles are then transported by surface runoff (also known as overland flow) or wind. Finally, the soil particles settle out, or deposit, when the velocity of the overland flow or wind, and hence its sediment transport capacity, decreases. Deposited particles are called sediment. Heavier particles, such as gravel and sand, deposit first, whereas fine silt and clay particles can generally be carried for a longer distance and time before deposition. Although particles of fine sand are more easily detached than those of a clay soil, clay particles are more easily transported in water than the sand particles (Hudson, 1971).
In addition to the energy of water or wind used in both detachment and transport of soil particles, gravity may impact erosion either directly, i.e., soil moving downhill without water (e.g., slump mass-movement), or indirectly (e.g., pulling rain to the Earth or drawing floodwaters downward). Bioturbation, which is reworking of soils and sediments by animals or plants, may also play an important role in sediment transport. For example, uprooted trees, invertebrates living underground and moving through the soil (e.g., earthworms), and many mammals burrowing into soil (e.g., moles) can cause soil transport downslope (Gabet et al., 2003).
In some other erosion processes, cycles of freezing and thawing or wetting and drying of clay soils weaken or break down soil aggregates and make the soil more susceptible to erosion. In boreal areas (i.e., northern areas with long winters and short, cool to mild summers), soil erosion may be high during snowmelt periods as a result of soils saturated by water, limited vegetation cover, and high overland flow (Puustinen et al., 2007). Soil erodibility is high in recently thawed soils, since high water content decreases the cohesive strength of soil aggregates (Van Klaveren and McCool, 1998).
Tillage Erosion
While soil erosion caused by water and wind has moved the Earth's surface for millions of years, soil erosion caused by tillage has become more important with the development of mechanized agriculture. Tillage erosion has intensified with increased tillage speed, depth, and size of tillage tools, and with the tillage of steeper and more undulating lands (Lindstrom et al., 2001). The amount of soil moved by tillage can exceed that moved by interrill and rill erosion (Lindstrom et al., 2001). In agricultural areas, tillage is the main contributor to accelerated erosion rates. In certain areas, e.g., the U.S. and Belgium, tillage erosion has created soil banks several meters high near field borders (Lindstrom et al., 2001). The net soil movement by tillage is generally presented in units of volume, mass, or depth per unit of tillage width (e.g., liter m−1, kg m−1, or cm m−1, respectively).
Types of Soil Erosion Caused by Water on Cropland
Soil erosion caused by water can be classified into several forms including splash, sheet, interrill, rill, gully and bank (Toy et al., 2002). Splash erosion is caused by raindrop impact (Fernández-Raga et al., 2017). Small soil particles are broken off of the aggregate material by the energy of falling drops and are splashed into the air (Figure 4.3.1). Particles may deposit on the soil surface nearby or on flowing water.
Sheet erosion occurs when a thin layer of soil is evenly removed from a large area by raindrop splash and runoff water moving as a thin layer of overland flow. It occurs generally on uniform slopes. Sheet erosion is assumed to be the first phase of the erosion process, and the soil losses are assumed to be rather small (Toy et al., 2002).
Rills are small channels, less than 5 cm deep. They form when overland flow (or surface runoff) begins to concentrate in several small rivulets of water on the soil surface. Detachment of soil particles is caused by the surface runoff itself (Toy et al., 2002). In general, if a small channel can be obliterated by normal farming operations, it is a rill rather than a gully. After obliteration, rills tend to form in a new location.
The areas between rills are called interrill areas, and the erosion there is defined as interrill erosion (Toy et al., 2002). Interrill erosion is a type of sheet erosion because it is uniform over the interrill area. Detachment occurs by raindrop impact, and both surface runoff and detached soil particles tend to flow into adjacent rills.
Gullies are large, wide channels carved by running water (Figure $2$). Ephemeral gullies may occur on croplands and can be filled with soil during tillage operations (Toy et al., 2002); however, because the macrotopography of the surface persists, ephemeral gullies tend to re-form in the same locations after refilling by tillage. Gullies may sometimes be large enough to prevent soil cultivation. These are called permanent, or classic, gullies. Gully erosion causes severe damage to a field and delivers high sediment loads to receiving waters.
Bank erosion is direct removal of soil particles from a streambank by flowing water. Bank erosion is the progressive undercutting, scouring and slumping of the sides of natural stream channels and constructed drainage channels (OMAFRA, 2012b).
Types of Wind Erosion
Suspension, saltation, and surface creep are the three types of soil movement during wind erosion (Figure $3$). The dominant mode of erosion depends principally on soil type and particle size. Pure sand moves by surface creep and saltation. Soils with high clay content move under saltation. The sediment moved by creep and saltation may deposit very near the source area, along a fence, or in a nearby ditch or field (Toy et al., 2002). In suspension, fine particles (diameter less than 0.1 mm) are lifted into the atmosphere by strong winds or through impact with other particles. They can be carried extremely long distances before returning to earth via rainfall or when winds subside. In saltation, bouncing soil particles (diameter 0.1–0.5 mm) move near the soil surface; a major fraction of soil moved by wind is through the saltation process. In surface creep, large soil particles (diameter 0.5–1 mm), which are too heavy to be lifted into the air, roll and slide along the soil surface. These particles can be trapped by a furrow or a vegetated area.
Factors Influencing Water and Wind Erosion
Soil erosion is affected by several factors such as climate, rainfall, runoff, slope, topography, wind speed and direction, soil characteristics, soil cover like vegetation or mulch, and farming techniques. For example, in arid climates with steep slopes without good plant cover, during heavy rains the soil erosion is much higher than in level fields with robust plant cover in a mild climate. As another example, soils with high organic matter are naturally more cohesive and, thus, less susceptible to detachment than soils with low organic matter.
Water erosion occurs in areas where rainfall intensity, duration, and frequency are high enough to cause runoff. Wind erosion is most common in arid and semi-arid areas where dry and windy conditions occur. When rainfall exceeds infiltration (i.e., permeation of water into soil by filtration) at the soil surface, runoff starts to occur. Infiltration capacity depends on soil type. For example, water infiltrates more rapidly into sandy soils than into clay soils; however, water infiltration can be improved in clay-textured soil by aggregate formation. The aggregates, consisting of fine sand, silt, and clay, are typically bound together by a mixed adhesive including organic matter, clays, iron (Fe) and aluminum (Al) oxides, and lime. At first, runoff carries light materials (i.e., silt, organic matter, and fine sand particles), whereas during heavy rainfall larger particles are also carried by the runoff water. Topography (i.e., slope length and gradient) is also an important factor for water erosion, with longer or steeper slopes being associated with greater erosion rates.
Soil surfaces covered by dense vegetation or mulches are less prone to water erosion due to their protection against the erosive power of raindrops and runoff water. Plants also use water, and their roots bind soil particles. Wind erosion can be counteracted by vegetation, which provides shelter from wind, intercepts wind-borne sediment, and keeps the soil surface moist.
Mechanical disturbance (e.g., soil tillage) buries vegetation or residues that would ordinarily serve as protection from erosion. Anthropogenic, i.e., human-induced, influences, such as changes in land management (animal production vs. crop production) and crop pattern (crop rotation vs. monoculture), use of heavier agricultural machinery, and soil compaction, increase the water and wind erosion potential of soils. Reduced tillage and no-till practices on croplands have been successful in reducing erosion. Globally, intensive deforestation causes soil erosion in new agricultural areas, increasing the net erosion rate.
Estimation and Modeling of Soil Erosion
The average annual erosion rate can be estimated using mathematical models. One of the most widely used models for estimating soil loss by water erosion is the Universal Soil Loss Equation, USLE (Wischmeier and Smith, 1978), and its updates, the Revised Universal Soil Loss Equation (RUSLE) and the Modified Universal Soil Loss Equation (MUSLE). According to the USLE, the major factors affecting erosion are local climate, soil, topography (slope length and steepness of the cropland), cover management, and conservation practices.
The standard erosion plot, the experimental basis for the development of the empirical USLE model, is 22.13 m long and 4.05 m wide, with a uniform 9% slope in continuous fallow, tilled up and down the slope (Wischmeier and Smith, 1978). The USLE evaluates soil loss as:
$A = R\ K\ LS\ C\ P$
where A = computed average annual soil loss (Mg ha−1 yr−1) from sheet and rill erosion
R = rainfall erosivity factor (MJ mm ha−1 h−1 yr−1)
K = soil erodibility factor (Mg ha h ha−1 MJ−1 mm−1)
LS = topographic factor (combines the slope length and the steepness factors L and S) (dimensionless)
C = crop management factor (dimensionless, ranging between 0 and 1)
P = conservation practice factor (dimensionless, ranging between 0 and 1; the high value, 1, is assigned to areas with no conservation practices)
Each value on the right can be estimated from figures or tables. To minimize soil loss (A), any one value on the right needs to decrease. The units of R and K in Equation 4.3.1 are a result of adapting the USLE to use in SI units. The USLE was derived using customary U.S. units (e.g., tons, inches, acres). With international application of USLE, adoption of SI units was important. Several authors (e.g., Foster et al., 1981) have described approaches for use of the USLE in SI units.
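To make the multiplication concrete, here is a minimal Python sketch of Equation 4.3.1 (our own illustration; the factor values below are hypothetical placeholders of the kind read from local charts and tables, not recommendations):

```python
# Universal Soil Loss Equation (Equation 4.3.1): A = R * K * LS * C * P.

def usle(r, k, ls, c, p):
    """Average annual soil loss A (Mg ha^-1 yr^-1) from the five factors."""
    return r * k * ls * c * p

# Hypothetical factor values:
A = usle(r=1500.0,     # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
         k=0.03,       # soil erodibility (Mg ha h ha^-1 MJ^-1 mm^-1)
         ls=1.94,      # topographic factor (e.g., a 61 m slope at 10%)
         c=0.40 * 1.0, # cover (grain corn) x tillage (fall plow) subfactors
         p=1.0)        # no conservation practices
print(f"A = {A:.1f} Mg/ha/yr")  # -> 34.9
```

Because the factors simply multiply, halving any one of them (e.g., switching to a tillage method with a lower C subfactor) halves the estimated soil loss.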
Rainfall Erosivity Factor (R)
The rainfall and runoff factor (R), is related to the energy intensity of annual rainfall, plus a factor for runoff from snowmelt or applied water (irrigation) (Wischmeier and Smith, 1978). Rainfall erosivity defines the potential ability of the rain to produce erosion. Erosivity depends solely on rainfall properties (e.g., drop velocity, drop diameter, rainfall rate and duration) and frequency of a rainstorm. The greatest erosion occurs when rainfall with high intensity beats a bare soil surface without any plant cover. Plants or stubble are good cover against rainfall erosivity.
The National Soil Erosion Research Laboratory has presented a figure of the aerial erosion index for different areas of the U.S. varying from <200 to 10,000 (Foster et al., 1981). Several regional and global rainfall erosivity maps (e.g., ESDAC, 2017) are available. Erosivity also varies according to the season (Toy et al., 2002), being highest during winter and early spring in boreal areas.
Soil Erodibility Factor (K)
The soil erodibility factor is the soil loss rate per erosion index unit for a specified soil as measured on a standard erosion plot. It is based on the soil texture, soil structure, percent organic matter, and profile-permeability class (Wischmeier and Smith, 1978; Foster et al., 1981) and reflects the susceptibility of a soil type to erosion. Soils high in clay content have low K factor values because the clay soils are highly resistant to detachment of soil particles. In general, there is little control over the K factor since it is largely influenced by soil genesis. However, some management choices can result in small changes to the K factor. For example, by increasing the percent of organic carbon in soil, the K factor can be decreased, since organic matter increases soil cohesion.
The K factor in SI units (Mg ha h ha−1 MJ−1 mm−1) can be estimated using a regression equation that considers soil texture, organic matter content, structure, and permeability (Mohtar, n.d.):
$K = 2.8 \times 10^{-7} M^{1.14} (12 - a) + 4.3 \times 10^{-3} (b-2) + 3.3 \times 10^{-3} (c - 3)$
where M = particle size parameter = (% silt + % very fine sand) × (100 – % clay)
a = organic matter content (%)
b = soil structure code (very fine granular = 1; fine granular = 2; medium or coarse granular = 3; blocky, platy, or massive = 4)
c = soil profile permeability class (rapid = 1; moderate to rapid = 2; moderate = 3; slow to moderate = 4; slow = 5; very slow = 6)
The K factor can also be read from nomographs, e.g., Foster et al. (1981) provided a nomograph in SI units.
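Equation 4.3.2 translates directly into code; the following sketch is our own illustration with hypothetical soil properties:

```python
# Soil erodibility factor K (SI units) from Equation 4.3.2.

def k_factor(silt_pct, vfs_pct, clay_pct, om_pct, structure_code, perm_class):
    """K (Mg ha h ha^-1 MJ^-1 mm^-1) from texture, organic matter (%),
    structure code b (1-4), and permeability class c (1-6)."""
    m = (silt_pct + vfs_pct) * (100.0 - clay_pct)  # particle size parameter
    return (2.8e-7 * m**1.14 * (12.0 - om_pct)
            + 4.3e-3 * (structure_code - 2)
            + 3.3e-3 * (perm_class - 3))

# Hypothetical soil: 40% silt, 10% very fine sand, 20% clay, 2.5% organic
# matter, fine granular structure (b = 2), moderate permeability (c = 3).
print(f"K = {k_factor(40, 10, 20, 2.5, 2, 3):.3f}")  # -> K = 0.034
```

Note how the organic matter term (12 − a) captures the point made above: raising the percent organic carbon in the soil lowers K.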
In reality, soil erodibility is more complicated than Equation 4.3.2 suggests. How erodible a soil is depends not only on the physical characteristics of the soil but also on its treatment, which affects how cohesive the soil aggregates are. Some variations of the USLE, such as the Second Revised USLE (RUSLE2), use a more complicated and dynamic K factor to account for management effects.
Topographic Factor (LS)
The topographic factor (also called the slope length factor) describes the combined effect of slope length and slope gradient. This factor represents the ratio of soil loss under given conditions to that on the standard plot with a 9% slope. Thus, LS = 1 for a slope steepness of 9% and slope length of 22.13 m (Wischmeier and Smith, 1978); LS > 1 for steeper, longer slopes, and LS < 1 for gentler, shorter slopes. For example, LS factor values for a 61 m long slope with steepness of 5%, 10%, 14%, and 20% are 0.758, 1.94, 3.25, and 5.77, respectively. The LS factors for 122 m and 244 m long slopes with a constant steepness of 10% are 2.74 and 3.87, respectively. The steeper and longer the slope, the higher the erosion risk. The LS factor can be determined from a chart or tables in standard references (Wischmeier and Smith, 1978), or from equations in which both slope length and steepness are taken into consideration, e.g., Wischmeier and Smith (1978):
$LS = (\frac{\lambda}{22.13})^{m} (65.41\ sin^{2}\theta+4.56\ sin\theta+0.065)$
where λ = slope length (m)
θ = angle of slope
m = 0.5 if the slope is 5% or more, 0.4 on slopes of 3.5 to 4.5%, 0.3 on slopes of 1 to 3%, and 0.2 on uniform gradients of less than 1%
Equations such as Equation 4.3.3 were derived for specific conditions, so care must be taken in using the appropriate equation for the given situation. These equations can be found in various USLE references. There is limited ability to change the LS factor, except for, notably, breaking a long slope into shorter slope lengths through the installation of terraces.
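The following Python sketch (our own) implements Equation 4.3.3 and reproduces, to within rounding, the LS values quoted above; the handling of slope gradients between the published m ranges is our simplification:

```python
import math

# Topographic factor LS from Equation 4.3.3 (Wischmeier and Smith, 1978).

def ls_factor(slope_length_m, slope_pct):
    """LS (dimensionless) for a given slope length (m) and gradient (%)."""
    theta = math.atan(slope_pct / 100.0)  # slope angle (radians)
    if slope_pct >= 5.0:
        m = 0.5
    elif slope_pct >= 3.5:
        m = 0.4
    elif slope_pct >= 1.0:
        m = 0.3
    else:
        m = 0.2
    return ((slope_length_m / 22.13) ** m
            * (65.41 * math.sin(theta) ** 2 + 4.56 * math.sin(theta) + 0.065))

print(f"{ls_factor(61.0, 5.0):.3f}")   # ~0.757 (0.758 in published tables)
print(f"{ls_factor(61.0, 10.0):.2f}")  # ~1.94, matching the text
```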
Cover Management Factor (C)
The cover management factor is a ratio that compares the soil loss from an area with specified cover and management to that from an identical area in tilled continuous fallow. The value of C on a certain field is determined by several variables, such as crop canopy, residue mulch, incorporated residues, tillage, and land use residuals (Wischmeier and Smith, 1978).
The factor may roughly be determined by selecting the cover type and tillage method that correspond to the field and multiplying these subfactors together (OMAFRA, 2012a). The height and density of a canopy reduce the rainfall energy. Residue mulch near the soil surface is more effective at reducing soil loss than equivalent percentages of canopy cover (Wischmeier and Smith, 1978). For example, incorporating plant residue at the soil surface by shallow tillage offers a greater residual effect than moldboard plowing. The C factor for crop type varies from 0.02 (hay and pasture) to 0.40 (grain corn). The C factor for tillage method varies from 0.25 (no-till or zone tillage) to 1.0 (fall plow). However, local investigation of the C factor is highly recommended because of varying cultivation practices, and because the interaction between the timing of crop cover development and the timing of rainfall energy varies from place to place. Selection of crops and tillage systems can have a large impact on the C factor.
Conservation Practice Factor (P)
The conservation practice (also called support practice or erosion control) factor reflects the effects of practices that reduce the amount and rate of water runoff and, thus, reduce the erosion rate (Wischmeier and Smith, 1978). The most commonly used supporting cropland practices are cross-slope cultivation, contour farming, and strip cropping. The highest P factor value, 1, is assigned when no conservation practices are in place. In the factsheet of the Ontario Ministry of Agriculture, Food and Rural Affairs (OMAFRA, 2012a), the value of 1 is also given to “up and down slope” cultivation, while “strip cropping, contour” receives the lowest value of 0.25.
Measurement and Monitoring
Scientific research and erosion measurements are needed to understand erosion processes. Erosion is measured for three principal reasons: (1) erosion inventories, (2) scientific erosion research, and (3) development and evaluation of erosion control practices (Toy et al., 2002). Measurements are also needed for the development of erosion prediction technology; the implementation of conservation resources; and the development of conservation regulations, policies, and programs (Stroosnijder, 2005). Erosion measurements are used for the development, calibration, and validation of methods of erosion prediction.
Temporal and Spatial Measurements
Erosion measurements are made at various temporal and spatial scales (Toy et al., 2002). For example, sampling duration can vary from a single rainstorm or windstorm to several years.
Spatially, water erosion measurements can range from interrill and rill sediment sources on hillslope or experimental plots to sediment discharge from watersheds. The presence of rills gives evidence of the possible erosion problems on the field. Sediment discharge from watersheds is used in reservoir design. Wind erosion measurements range also from small plots to agricultural fields and to entire regions.
Erosion Inventories
In planning erosion inventory measurements, the following issues should be included (Toy et al., 2002): selection of measurement site(s), measurement frequency and duration at the sites, and suitable measurement techniques. The selection of sites is made according to a sampling strategy. The measurement duration should be long enough to capture the temporal variability of erosion processes. The measurement technique is selected according to erosion type and study question.
How to Measure?
Erosion research is possible in the field (outdoor) or in the laboratory (Toy et al., 2002). Stroosnijder (2005) presents the following five fundamental ways to measure erosion: (1) sediment collection from erosion plots and watersheds, (2) change in surface elevation, (3) change in channel cross section dimensions, (4) change in weight, and (5) the radionuclide method. Both direct measurements and erosion prediction technology are used in erosion inventories. Commonly used erosion measurement techniques are cheap and fast but not very accurate. More accurate methods are costly and beyond the budget of many projects.
Experimental Fields and Catchments
In outdoor research settings, experimental plots, cropland fields, or catchments are used, and runoff may be caused by natural or artificial rainfall. Surface runoff (overland flow: water moving exclusively over the soil surface, downslope, during heavy rain) and subsurface discharge (drainage flow) from these sites can be measured and the water sampled for sediment analyses. Sampling can be done automatically according to water volume or time. For indoor studies, soil blocks under a rainfall simulator (e.g., a stationary drip-type rainfall simulator) can be used (Uusitalo and Aura, 2005). In both cases, representative water samples are collected for sediment concentration analyses in the laboratory.
To predict the sediment load for a certain study area and time period, the sediment concentration of the analyzed water samples is multiplied by the water volume of the sampling period. Water flow (L s−1) can be measured in a stream with a flow meter or V-notch weir, and on croplands with tipping buckets. The erosion amount (kg ha−1) is estimated by multiplying the water flow (L s−1) by the sampling time (s) and the sediment concentration (g L−1), converting grams to kilograms, and dividing by the size of the study area (ha), as sketched below.
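A minimal Python sketch of this load calculation, with hypothetical event values, is:

```python
# Erosion amount (kg/ha) from flow, sampling time, sediment
# concentration, and contributing area, as described in the text.

def erosion_kg_per_ha(flow_L_s, duration_s, sediment_g_L, area_ha):
    """Sediment load per unit area (kg/ha)."""
    load_g = flow_L_s * duration_s * sediment_g_L  # total sediment (g)
    return load_g / 1000.0 / area_ha               # g -> kg, per hectare

# Hypothetical event: 5 L/s for 6 hours at 0.8 g/L from a 2-ha plot.
print(f"{erosion_kg_per_ha(5.0, 6 * 3600, 0.8, 2.0):.0f} kg/ha")  # -> 43
```

In practice the flow and concentration vary through an event, so the same calculation is applied to each sampling interval and the interval loads are summed.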
Also, continuously operating sensors for turbidity measurements from water can be used for measuring erosion from a study area. Turbidity is the degree to which water loses transparency due to suspended particles like sediment; the murkier the water, the more turbid it is. Turbidity sensors need good calibration and control water samples to evaluate sediment content. They must also be equipped with an automatic cleaning mechanism.
Change in Surface Elevation (Hillslope Scale)
This approach is based on the principle that erosion and deposition by water or wind change the elevation of the land surface (Toy et al., 2002). The surface elevation is measured at the beginning and end of a time interval; the difference between the two measurements indicates the net effect of erosion and deposition during that interval. A lower elevation at the end of the interval indicates erosion, and a higher elevation indicates deposition.
One approach to measuring the change in elevation is to implant stakes or pins that remain in place in the soil for the duration of the study. The distance from the top of the stake or pin to the ground is measured at set time intervals. A decrease in distance corresponds to sedimentation, whereas an increase means erosion (Stroosnijder, 2005). By multiplying the change in elevation by the soil bulk density, it is possible to convert the measurement to a mass of soil (Toy et al., 2002), as in the sketch below. In Figure 4.3.4, a soil roughness meter is used to measure changes in the soil surface. The soil roughness meter has a set of pins that rest on the surface, so that soil surface position measurements can be made relative to the top of the frame of the roughness meter. By making repeated measurements at the same location, small changes in the surface elevation can be measured. The meter may also be used to determine soil erosion in rills.
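A minimal Python sketch of this conversion (hypothetical numbers; the bulk density would be measured on site) is:

```python
# Convert a measured change in surface elevation to a soil mass per
# unit area by multiplying by the soil bulk density (Toy et al., 2002).

def soil_loss_kg_m2(elevation_change_m, bulk_density_kg_m3):
    """Soil mass change (kg/m^2); positive = erosion (surface lowered)."""
    return elevation_change_m * bulk_density_kg_m3

# Hypothetical: surface lowered by 2 mm; bulk density 1,300 kg/m3.
loss = soil_loss_kg_m2(0.002, 1300.0)
print(f"{loss:.1f} kg/m^2 (= {loss * 10:.0f} Mg/ha)")  # 2.6 kg/m^2 = 26 Mg/ha
```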
Change in Channel Cross Section
Channel erosion can be estimated by measuring cross sections at spaced intervals, repeating this after some time and comparing and determining the change in volume of soil. The measurement can be done either manually or using airborne laser scanners (Stroosnijder, 2005). This technique is also suitable for estimating rill or gully erosion on croplands.
Change in Weight (Collected by Splash Cups and Funnels)
This method is based on the principle that the erosion process removes material from the source area (Toy et al., 2002). Test soil, packed in a cup or funnel placed in the soil, is weighed before and after an erosion event; the loss of weight is the erosion measurement. This technique is used in studies of soil detachment and transport by raindrop impact (Stroosnijder, 2005). While the method is affordable and accurate at a small scale, its results represent only a very small area and may not scale well to the field level.
Radionuclide Method
Environmental radionuclides can be used as tracers to estimate soil erosion rates (Stroosnijder, 2005). A human-made radionuclide of cesium (Cs137) was released into the atmosphere during nuclear weapons tests in the 1950s and 1960s; it spread through the stratosphere and was gradually deposited on the land surface. Studies using this method need an undisturbed reference site on which no or minimal erosion or sedimentation occurs (Fulajtar et al., 2017). The Cs137 concentration in the study soil is compared to the concentration at the reference site. If the study site contains less Cs137 than the reference site, erosion has occurred there; if it contains more, sedimentation (deposition of soil particles) has occurred. In radionuclide studies, the time scale is usually much longer than in agronomic or environmental studies (Stroosnijder, 2005).
Wind Erosion Measurements
Wind erosion measurements require different measurement plans and equipment than water erosion measurements (Toy et al., 2002). While water erosion follows topography and water flow paths, windblown sediment cannot be collected at a single point (Stroosnijder, 2005). Soil gains and losses due to wind erosion require a number of measurement points, followed by geostatistical analyses. Since wind blows from various directions during the year and during a storm, sediment samplers must rotate with changing wind directions. Measurements must be made at various heights to determine the vertical distribution of the sediment load (Toy et al., 2002).
Impacts of Soil Erosion In-Field and Downstream
Soil erosion has impacts both on croplands where the erosion process starts (detachment) and in the place where it ends up (deposition, sedimentation) (Figure 4.3.5).
Impacts in Fields
In fields, fertile topsoil can be lost to erosion. The finest particles are generally transported away from areas on convex slopes, making those areas less productive. The loss of the finest particles further degrades the physical structure and fertility of soils (Hudson, 1971). Removal of fine particles or entire layers of soil or organic matter can weaken the structure and even change the texture, which in turn can reduce the water-holding capacity of the soil, making it more susceptible to extreme conditions such as drought (OMAFRA, 2012b). Erosion of fertile topsoil results in lower yields and higher production costs.
Deposited sediment may either increase soil fertility or impair productivity. For example, in Egypt, the fields along the Nile River are very productive because of nutrient-rich sediments carried by the river water. In other cases, sediment deposited on croplands may inhibit or delay the emergence of seeds, or bury small seedlings (OMAFRA, 2012b). Dredging of open ditches, sedimentation ponds, and waterways, in which sediment is mechanically removed, is becoming more common. However, it is questionable whether dredged sediment can be recycled back to agricultural fields (Laakso, 2017), because it may contain substances that are harmful to crops (herbicides) or that decrease soil fertility (e.g., aluminum and iron hydroxides).
Impacts Downstream and in Air
In streams and watercourses, sediment can impede water flow, fill in water reservoirs, damage fish habitats, and degrade downstream water quality. Because eroded soil particles are enriched in nutrients, pesticides, salts, trace elements, pathogens, and other toxic substances, soil erosion contaminates downstream water sources, wetlands, and lakes (OMAFRA, 2012b; Zhang and Wang, 2006). These potential harmful impacts make in-field control of soil erosion important. Siltation of watercourses and water storages also decreases the storage capacity of water reservoirs.
In addition, fine particles (<0.1 mm) transported by wind may also cause visibility problems on roads. They may also penetrate into respiratory ducts causing health problems.
Applications
For best results, erosion control should begin at the source area, by preventing detachment of soil particles. One of the most effective ways to prevent erosion is through crop and soil management. Detached particles can be trapped by different tools on cropped fields, at field edges, and outside the fields.
Decreasing the Effects of Erosivity (R) and Erodibility (K)
Erosivity is rather difficult to decrease since there are no tools to affect rainfall. Soil erodibility can be decreased by increasing soil organic matter in soil, e.g., by adding manure or other carbon sources to soil. Practices that reduce or mitigate loss of soil carbon in cropped land can also decrease erodibility. These methods include managing residue to return carbon to the soil and minimizing tillage to reduce the conversion of soil carbon to carbon dioxide gas. Decreasing soil erosion caused by water on highly erodible soils requires additional methods such as permanent grass cover or zero tillage.
The addition of manure, compost, or organic sludge into soil increases aggregate stability, porosity, and water-holding capacity (Zhang, 2006). Both inorganic (stone, gravel, and sand) and organic mulches (crop residue, straw, hay, leaves, compost, wood chips, and saw dust) are used to absorb the destructive forces of raindrops and wind. All these materials also obstruct overland flow and increase infiltration (Zhang, 2006). Mulch reduces erosion until the seedlings mature to provide their own protective cover. In addition, soils treated with amendments like gypsum or structure lime are more durable against erosion than untreated soils (Zhang, 2006). The effect of these soil amendments lasts for a certain period depending on soil and environmental conditions. To maintain the effect, the amendment must be reapplied at intervals.
Soil moisture can prevent erosion. A moist soil is more stable than a dry one, since the soil water keeps the soil particles together. Soil moisture is higher in untilled soils due to a higher percent of organic carbon and minimal evaporation from the soil covered by plant residues. For example, wind erosion can be controlled by wetting the soil.
Reducing the Effect of Topography
Long slopes can be shortened by establishing terraces, but it is difficult to make steep slopes gentler. Reducing the field width (e.g., by windbreaks) protects cropped land against the effects of wind (Figure 4.3.6).
Increasing the Effect of Cover and Management
Plants are excellent in the protection of soil. They keep the soil in place with their roots, intercept rainfall, provide cover from wind and runoff, increase water infiltration into soil, increase soil aggregation, and provide surface roughness that reduces the speed of water or air movement across the surface. Dense perennial grasses are the most effective erosion controlling plants.
Soil management techniques that disrupt the soil surface as little as possible are excellent at maintaining soil cover and structure. For example, eliminating tillage (called no-till, e.g., direct drilling) keeps the soil surface covered all year round (Figure 4.3.7). This method, where seed is placed without any prior soil tillage in the stubble, has become common in many dry growing regions to decrease erosion potential. In winter, the stubble remaining after harvest effectively reduces soil erosion compared to bare fields (e.g., plowed in fall). Reduced, or conservation, tillage is also a better choice than fall plowing that leaves the soil surface uncovered. Tillage decreases the organic matter in soils and, thus, has a negative effect on the aggregate stability of clay soils (Soinne et al., 2016). Tillage also disturbs soil structure and, thus, reduces infiltration capacity.
Controlled grazing causes less erosion than tilled cropland; however, the number of grazing animals must be kept low enough to prevent erosion caused by over-grazing. Crop rotation and the use of cover crops also maintain soil fertility and, thus, help control erosion. Cover management practices affect soil erosion in increasing order of erosion risk: meadows < grass and legume catch crops turned under in spring < residue mulch on soil surface < small grain or vetch on fall-plowed seedbed turned at spring planting time < row crop canopy < shallow tillage < moldboard plow < burning/removing residues < short-period rough fallow in rotation < continuous fallow.
Increasing the Effect of Support Practices
On steep slopes, erosion can be controlled by support practices such as contour tillage (Figure 4.3.8), strip cropping on the contour, and terrace systems (Wischmeier and Smith, 1978). Strip cropping protects against surface runoff on sloping fields and decreases the sediment transport capacity of the runoff.
Tillage and planting on the contour is generally effective in reducing erosion. Contouring appears to be most effective on slopes in the 3–8% range (Wischmeier and Smith, 1978). On steeper slopes, more intervention is usually needed. Contour strip cropping (Figure 4.3.9) is a practice in which contoured strips of dense vegetation, e.g., grasses, legumes, or corn with alfalfa hay, are alternated with equal-width strips of row crops (e.g., soybeans, cotton, sugar beets), or small grain (Wischmeier and Smith, 1978). In erodible areas, grass strips usually 2 to 4 m wide are placed at distances of 10 to 20 m (Figure 4.3.10). They can be placed on critical areas of the field and the main purpose of these strips is to protect the land from soil erosion. Terracing can be combined with contour farming and other conservation practices making them more effective in erosion control (Wischmeier and Smith, 1978).
In terrace farming, plants may be grown on flat areas of terraces built on steep slopes of hills and mountains. Terracing can reduce surface runoff and erosion by slowing rainwater to a non-erosive velocity. Every step (terrace) has an outlet that channels water to the next step.
If soil detachment and transport have taken place, the next consideration is to control deposition before the runoff enters a receiving watercourse. Narrow (1 to 5 m wide) buffer strips under perennial grasses, and wider buffer zones under perennial grasses and trees (Figure 4.3.11), have been established along rivers to prevent sediment transport to watercourses (Haddaway et al., 2018; Uusi-Kämppä et al., 2000). Grassed waterways (Figure 4.3.10) are established where flow concentrates in fields to slow the water and, thus, reduce channel erosion.
Sediment basins, ponds and wetlands are also used to trap sediment (Uusi-Kämppä et al., 2000). Large particles and aggregates settle over short transport distances, while small clay and silt particles can be carried over long distances in water before their sedimentation.
Country-Specific Perspectives on Soil Erosion
Because of climatic factors (R), soil characteristics (K), landscape features (LS), and cropping practices (C), soil erosion varies geographically. Soil erosion by water is highest in agricultural areas with high rainfall intensity (R factor). In the U.S., the erosion index is high (1,200–10,000 MJ mm ha−1 h−1 yr−1) in eastern, southern, and central parts, where tropical storms and hurricanes occur. In Europe, the R factor is highest in the coastal area of the Mediterranean, from 900 to >1,300 MJ mm ha−1 h−1 yr−1 (Panagos et al., 2015). In addition to climate, changes in cropping systems (C factor) influence the amount of erosion.
In northern Europe, the most erodible agricultural areas are in southeast Norway (silty clay loam and silty clay soils), southern and central Sweden, and southwestern Finland (clay soils), owing to the K factor. In these boreal areas, erosion risk is highest during late fall, winter, and spring, due to surface runoff over frozen soil. The soil was previously covered by snow in winter; however, with climate change, these areas have been subject to more frequent winter melting and runoff in recent decades (R factor).
In the 1900s, global cropland area increased, causing a corresponding reduction in grassland area (C factor). In Norway, this change in land use doubled soil erosion by water. In addition, extensive land levelling and the piping of brooks increased the agricultural area in the same region in the 1970s and led to a two- to three-fold increase in erosion (Lundekvam et al., 2003), because levelling, i.e., creating smooth slopes instead of undulating ones, tended to increase the LS factor. Intensive erosion research started in the 1980s, and since then Norwegian farmers have received national payments to implement erosion-reducing methods, e.g., zero tillage and growing cover crops in fall (C factor), or establishing grassed waterways, buffer strips, and sedimentation ponds (P factor). Re-opening piped brooks (decreasing the L factor) and converting fall-tilled fields with high erosion risk into permanent grassland (C factor) have also been subsidized.
In Finland, the typical soil erosion processes in fields are sheet erosion, rill erosion, and tillage erosion. Although the mean arable soil loss rate is low (460 kg ha−1 yr−1) according to estimates from the RUSLE2015 model (Lilja et al., 2017), there are areas where the erosion risk is higher. These high-risk areas, with steep slopes and a high share of land in crop production, are in the southwestern parts of the country. In Finland, erosion is mitigated to decrease losses of phosphorus, which can desorb from detached soil particles into receiving water bodies, where it may cause eutrophication and harmful algal blooms. To decrease soil erosion, some agri-environmental measures are subsidized by the European Union and Finland. For example, fall plowing has been replaced by conservation tillage practices, e.g., no-tillage and direct drilling (C factor), or fields may be left under green cover crops for the winter (C factor). Grass buffer zones, erosion ponds, or wetlands may be installed and maintained between fields and water bodies to trap soil particles rich in phosphorus (P factor).
Examples
Example $1$
Example 1: Calculate average annual soil loss
Problem:
Use the USLE model to calculate the annual soil loss from a Finnish experimental site (slope steepness 6%, length 61 m, 60°48′N and 23°28′E). Annual rainfall is 660 mm, and erosivity is 311 MJ mm ha−1 h−1 yr−1 (Lilja et al., 2017). The site is plowed (up and down slope) in the fall and sown with spring wheat. Particle distribution: clay (<0.002 mm) 30%, silt (0.002–0.02 mm) 40%, very fine sand (0.02–0.1 mm) 25%, and sand (>0.1 mm) 5%. Organic matter in the soil is 2.8%. Soil structure is fine granular, and permeability is slow to moderate.
Solution
Determine the value of each factor in Equation 4.3.1:
$A = R\ K\ LS\ C\ P$ (Equation $1$)
R = rainfall erosivity factor; given in problem statement = 311 MJ mm ha−1 h−1 yr−1
K = soil erodibility factor; calculate using Equation 4.3.2:
$K = 2.8 \times 10^{-7} \times M^{1.14} (12-a) + 4.3 \times 10^{-3} (b -2) + 3.3 \times 10^{-3} (c-3)$ (Equation $2$)
where M = particle size parameter
$= (\text{% silt} + \text{% very fine sand}) \times (100 - \text{% clay}) = (40 + 25) \times (100 - 30) = 65 \times 70 = 4550$
a = organic matter content (%) = 2.8
b = soil structure code = 2 (fine granular)
c = soil profile permeability class = 4 (slow to moderate)
Thus, substituting values in Equation 4.3.2 yields:
$K = 0.041\text{ Mg ha h ha}^{-1} MJ^{-1} mm^{-1}$
LS = topographic factor; find from a published table, e.g., table 3 (Wischmeier and Smith, 1978) or the following excerpt from Factsheet table 3A (OMAFRA, 2012a):
| Slope Length (m) | Slope (%) | LS Factor |
|---|---|---|
| 61 | 10 | 1.95 |
| 61 | 8 | 1.41 |
| 61 | 6 | 0.95 |
| 61 | 5 | 0.76 |
| 61 | 4 | 0.53 |
For a slope length of 61 m and a slope steepness of 6%, LS = 0.95, or calculate LS using Equation 4.3.3:
$LS = (\frac{\lambda}{22.13})^{m} (65.41\ \sin^{2}\theta+4.56\ \sin\theta+0.065)$ (Equation $3$)
where λ = slope length (m), θ = slope angle, and m = a slope-length exponent (0.5 for slopes of 5% or more). Here θ = arctan(0.06) ≈ 3.43°, so:
$LS = (\frac{61}{22.13})^{0.5} (65.41\ \sin^{2}\theta+4.56\ \sin\theta + 0.065) = 0.95$
C = crop management factor = 0.35 for cereals
P = conservation practice factor = 1.0 for fall plowing up and down slope (OMAFRA, 2012a).
Substitute the values for each factor in Equation 4.3.1:
$A = R\ K\ LS\ C\ P$ (Equation $1$)
$= 311 \times 0.041 \times 0.95 \times 0.35 \times 1 \text{ Mg ha}^{-1} \text{ yr}^{-1} = 4.24 \text{ Mg ha}^{-1} \text{ yr}^{-1}$
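As a cross-check of this worked example, here is a short Python sketch of Equations 4.3.2 and 4.3.3 (the function names are ours; the small difference from 4.24 comes from carrying the unrounded K):

```python
import math

def usle_k(silt_plus_vfs_pct, clay_pct, om_pct, structure_code, permeability_class):
    """Soil erodibility K (Mg ha h ha^-1 MJ^-1 mm^-1), Equation 4.3.2."""
    M = silt_plus_vfs_pct * (100 - clay_pct)  # particle size parameter
    return (2.8e-7 * M**1.14 * (12 - om_pct)
            + 4.3e-3 * (structure_code - 2)
            + 3.3e-3 * (permeability_class - 3))

def usle_ls(slope_length_m, slope_pct, m=0.5):
    """Topographic factor LS, Equation 4.3.3 (m = 0.5 for slopes of 5% or more)."""
    theta = math.atan(slope_pct / 100)  # slope angle from the slope gradient
    s = math.sin(theta)
    return (slope_length_m / 22.13)**m * (65.41 * s**2 + 4.56 * s + 0.065)

K = usle_k(40 + 25, 30, 2.8, 2, 4)   # ~0.041
LS = usle_ls(61, 6)                  # ~0.95
A = 311 * K * LS * 0.35 * 1.0        # ~4.3 Mg/ha/yr (4.24 with the rounded K = 0.041)
print(round(K, 3), round(LS, 2), round(A, 2))
```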
Example $2$
Example 2: Effect of different tillage practices on erosion rates
Problem:
Use the USLE model to evaluate the change in erosion rate in the field runoff of the previous example when fall plowing (up and down slope) is changed (a) to spring plowing (cross slope) or (b) to no-till (up and down slope).
Solution
(a) Using Equation 4.3.1 with:
$R = 311 \text{ MJ mm ha}^{-1} h^{-1} yr^{-1}$
$K = 0.041 \text{ Mg ha h ha}^{-1} MJ^{-1} mm^{-1}$
LS = 0.95
$C = 0.35\ (\text{cereals}) \times 0.9\ (\text{spring plowing}) = 0.315$
$P = 0.75\ (\text{cross slope})$
$A = R\ K\ LS\ C\ P = 2.86 \text{ Mg ha}^{-1}yr^{-1}$
The erosion rate is about 32% lower with cross-slope plowing in spring than with up-and-down plowing in fall.
(b) Using Equation 4.3.1 with:
$R = 311 \text{ MJ mm ha}^{-1} h^{-1} yr^{-1}$
$K = 0.041 \text{ Mg ha h ha}^{-1} MJ^{-1} mm^{-1}$
LS = 0.95
$C = 0.35\ (\text{cereals}) \times 0.25\ (\text{no-till}) = 0.0875$
$P = 1.0\ (\text{up and down slope})$
$A = R\ K\ LS\ C\ P = 1.06 \text{ Mg ha}^{-1}yr^{-1}$
The erosion rate is 75% lower with direct drilling (no-till) than with up-and-down plowing in fall.
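The three management scenarios of Examples 1 and 2 can be compared in a few lines of Python; R, K, and LS are held at the Example 1 values, and the C and P multipliers follow the examples above:

```python
R, K, LS = 311, 0.041, 0.95  # Example 1 values

scenarios = {
    "fall plow, up-and-down slope": (0.35,        1.00),  # (C, P)
    "spring plow, cross slope":     (0.35 * 0.9,  0.75),
    "no-till, up-and-down slope":   (0.35 * 0.25, 1.00),
}

base = R * K * LS * 0.35 * 1.00  # baseline: fall plowing up and down slope
for name, (C, P) in scenarios.items():
    A = R * K * LS * C * P
    print(f"{name}: A = {A:.2f} Mg/ha/yr ({(1 - A / base) * 100:.0f}% reduction)")
```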
Image Credits
Figure 1. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Splash erosion. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 2. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Gully erosion. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 3. USDA ARS. (CC By 1.0). (2020). Wind erosion process. Retrieved from https://infosys.ars.usda.gov/WindErosion/weps/wepshome.html
Figure 4. Risto Seppälä / Luke. (CC By 4.0). (2020). Soil roughness meter.
Figure 5. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Sediment chokes. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 6. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Field windbreaks. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 7. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). No-till drilling. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 8. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Contoured field. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 9. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Alternating strips. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 10. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Grass helps. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
Figure 11. USDA Natural Resources Conservation Service. (CC By 1.0). (2020). Multiple rows of trees. Retrieved from https://photogallery.sc.egov.usda.gov/photogallery/#/
References
Borrelli, P., Robinson, D. A., Fleischer, L. R., Lugato, E., Ballabio, C., Alewell, C., . . . Panagos, P. (2017). An assessment of the global impact of 21st century land use change on soil erosion. Nature Commun., 8(1), 1-13. https://doi.org/10.1038/s41467-017-02142-7.
ESDAC. (2017). European soil data centre, Global rainfall erosivity. Retrieved from https://esdac.jrc.ec.europa.eu/content/global-rainfall-erosivity.
Fernandez-Raga, M., Palencia, C., Keesstra, S., Jordan, A., Fraile, R., Angulo-Martinez, M., & Cerda, A. (2017). Splash erosion: A review with unanswered questions. Earth-Sci. Rev., 171, 463-477. https://doi.org/10.1016/j.earscirev.2017.06.009.
Foster, G. R., McCool, D. K., Renard, K. G., & Moldenhauer, W. C. (1981). Conversion of the universal soil loss equation to SI metric units. JSWC, 36(6), 355-359.
Fulajtar, E., Mabit, L., Renschler, C. S., & Lee Zhi Yi, A. (2017). Use of 137Cs for soil erosion assessment. Rome, Italy: FAO/IAEA.
Gabet, E. J., Reichman, O. J., & Seabloom, E. W. (2003). The effects of bioturbation on soil processes and sediment transport. Ann. Rev. Earth Planetary Sci., 31(1), 249-273. https://doi.org/10.1146/annurev.earth.31.100901.141314.
Haddaway, N. R., Brown, C., Eales, J., Eggers, S., Josefsson, J., Kronvang, B., . . . Uusi-Kämppä, J. (2018). The multifunctional roles of vegetated strips around and within agricultural fields. Environ. Evidence, 7(14), 1-43. https://doi.org/10.1186/s13750-018-0126-2.
Hudson, N. (1971). Soil conservation. London, U.K.: B. T. Batsford.
Laakso, J. (2017). Phosphorus in the sediments of agricultural constructed wetlands. Doctoral thesis in Environmental Science. Helsinki, Finland: University of Helsinki, Department of Food and Environmental Sciences. Retrieved from https://helda.helsinki.fi/bitstream/handle/10138/224575/phosphor.pdf?sequence=1&isAllowed=y.
Lilja, H., Hyväluoma, J., Puustinen, M., Uusi-Kämppä, J., & Turtola, E. (2017). Evaluation of RUSLE2015 erosion model for boreal conditions. Geoderma Regional, 10, 77-84. https://doi.org/10.1016/j.geodrs.2017.05.003.
Lindstrom, M. J., Lobb, D. A., & Schumacher, T. E. (2001). Tillage erosion: An overview. Ann. Arid Zone, 40(3), 337-349.
Lundekvam, H. E., Romstad, E., & Øygarden, L. (2003). Agricultural policies in Norway and effects on soil erosion. Environ. Sci. Policy, 6(1), 57-67. https://doi.org/10.1016/S1462-9011(02)00118-1.
Mohtar, R. H. (no date). Estimating soil loss by water erosion. https://engineering.purdue.edu/~abe325/week.8/erosion.pdf.
OMAFRA. (2012a). Universal soil loss equation (USLE). Factsheet Order No. 12-051. Ontario Ministry of Agriculture, Food and Rural Affairs. Retrieved from http://www.omafra.gov.on.ca/english/engineer/facts/12-051.htm#1.
OMAFRA. (2012b). Soil erosion—Causes and effects. Factsheet Order No. 12-053. Ontario Ministry of Agriculture, Food and Rural Affairs. Retrieved from http://www.omafra.gov.on.ca/english/engineer/facts/12-053.htm.
Panagos, P., Ballabio, C., Borrelli, P., Meusburger, K., Klik, A., Rousseva, S., . . . Alewell, C. (2015). Rainfall erosivity in Europe. Sci. Total Environ, 511, 801–814. https://doi.org/10.1016/j.scitotenv.2015.01.008.
Puustinen, M., Tattari, S., Koskiaho, J., & Linjama, J. (2007). Influence of seasonal and annual hydrological variations on erosion and phosphorus transport from arable areas in Finland. Soil Tillage Res., 93(1), 44-55. https://doi.org/10.1016/j.still.2006.03.011.
Soinne, H., Hyväluoma, J., Ketoja, E., & Turtola, E. (2016). Relative importance of organic carbon, land use and moisture conditions for the aggregate stability of post-glacial clay soils. Soil Tillage Res., 158, 1-9. https://doi.org/10.1016/j.still.2015.10.014.
Stroosnijder, L. (2005). Measurement of erosion: Is it possible? CATENA, 64(2), 162-173. https://doi.org/10.1016/j.catena.2005.08.004.
Toy, T. J., Foster, G. R., & Renard, K. G. (2002). Soil erosion: Processes, prediction, measurement, and control. Hoboken, NJ: John Wiley & Sons.
USDA ARS. (2020). United States Department of Agriculture, Agricultural Research Service. Washington, DC: USDA ARS. Retrieved from https://infosys.ars.usda.gov/WindErosion/weps/wepshome.html.
Uusi-Kämppä, J., Braskerud, B., Jansson, H., Syversen, N., & Uusitalo, R. (2000). Buffer zones and constructed wetlands as filters for agricultural phosphorus. JEQ, 29(1), 151-158. https://doi.org/10.2134/jeq2000.00472425002900010019x.
Uusitalo, R., & Aura, E. (2005). A rainfall simulation study on the relationships between soil test P versus dissolved and potentially bioavailable particulate phosphorus forms in runoff. Agric. Food Sci., 14(4), 335-345. https://doi.org/10.2137/145960605775897713.
Van Klaveren, R. W., & McCool, D. K. (1998). Erodibility and critical shear of a previously frozen soil. Trans. ASAE, 41(5), 1315-1321. https://doi.org/10.13031/2013.17304.
Wischmeier, W. H., & Smith, D. D. (1978). Predicting rainfall erosion losses. A guide to conservation planning. Agriculture Handbook No. 537. Supersedes Agriculture Handbook No. 282. Washington, DC: USDA. Retrieved from https://naldc.nal.usda.gov/download/CAT79706928/PDF.
Zhang, T., & Wang, X. (2006). Erosion and global change. In R. Lal (Ed.), Encyclopedia of soil science (2nd. ed., Vol. 1, pp. 536-539). New York, NY: Taylor & Francis.
Zhang, X.-C. (2006). Erosion and sedimentation control: Amendment techniques. In R. Lal (Ed.), Encyclopedia of soil science (2nd ed., Vol. 1, pp. 544-552). New York, NY: Taylor & Francis.
Robert Bedoić
University of Zagreb, Faculty of Mechanical Engineering and Naval Architecture
Zagreb, Croatia
Boris Ćosić
University of Zagreb, Faculty of Mechanical Engineering and Naval Architecture
Zagreb, Croatia
Tomislav Pukšec
University of Zagreb, Faculty of Mechanical Engineering and Naval Architecture
Zagreb, Croatia
Neven Duić
University of Zagreb, Faculty of Mechanical Engineering and Naval Architecture
Zagreb, Croatia
Key Terms
Substrates and feedstocks Biogas chemistry Biogas utilization
Pretreatment of feedstock Inhibition parameters Products
Operating modes Biogas yield Digestate management
Introduction
Anaerobic digestion is a set of biochemical processes where complex organic matter is decomposed by the activity of bacteria in an oxygen-free atmosphere into biogas and digestate. Understanding the basic principles of anaerobic digestion (AD), and its role in the production of renewable energy sources, requires familiarity with the chemical composition of substrates, degradation stages in the process, and use of the products, both biogas and digestate. Agri-food by-products are recognized as a sustainable source of biomass for AD. Post-harvesting residues, food industry by-products and decomposed food can be utilized for AD to achieve environmental benefits (including a reduction of landfilling and greenhouse gas emissions) with added production of green energy. Biogas is a mixture of gaseous compounds, with the highest portion being methane (about 50–70% by volume), followed by carbon dioxide (30–50%). In a large-scale operation, biogas is usually utilized as a fuel to run a gas engine in the combined production of heat and electricity (CHP), or to produce biomethane (a gas similar in its characteristics to natural gas), through biogas upgrade processes. Digestate, another product of anaerobic digestion, is usually nutrient-rich, non-degraded organic material that can be used as a soil conditioner and replacement for conventional synthetic organic fertilizers.
Outcomes
After reading this chapter, you should be able to:
• Describe the chemistry of the AD (anaerobic digestion) process
• Calculate biogas production from different substrates based on their elemental composition
• Describe the factors influencing the AD process, including process inhibition
• Identify substrates suitable for direct use in the AD process and substrates that need pre-treatment before feeding the digester
• Describe methods of biogas and digestate utilization in the production of renewable energy
Concepts
Anaerobic Digestion Pathway
The concepts that underlie the AD process can be presented as a multi-step process (Lauwers et al., 2013), usually consisting of four main stages: hydrolysis, acidogenesis, acetogenesis, and methanogenesis.
Hydrolysis is the first stage of the AD process. In hydrolysis, large polymers (complex organic matter) are decomposed in the presence of hydrolytic enzymes into basic monomers: monosaccharides, amino acids, and long chain fatty acids. Hydrolysis can be represented using the following simplified chemical reaction:
C6H10O4 + 2H2O → C6H12O6 + H2
The intensity of the hydrolysis process can be monitored through hydrogen production in the gas phase. Hydrolysis occurs at low rates because polymer molecules are not easily degradable into basic monomer compounds. Usually, this stage is also the rate-limiting stage of the overall AD process. The lower the rate of hydrolysis, the lower the production of biogas.
The second stage of AD, acidogenesis, includes conversion of monosaccharides, amino acids, and long chain fatty acids resulting from hydrolysis into carbon dioxide, alcohols, and volatile fatty acids (VFAs). The following two reactions illustrate the breakdown of monomers into ethanol and propionic acid:
C6H12O6 ⇌ 2CH3CH2OH + 2CO2
C6H12O6 + 2H2 ⇌ 2CH3CH2COOH + 2H2O
Accumulation of VFAs caused by the acidogenesis process leads to a decrease in pH value. This phenomenon can contribute to significant problems in the operation of the AD process since it affects bacteria responsible for biogas production.
Acetogenesis is the third stage of the AD process, characterized by the production of hydrogen and acetic acid from basic monomers and VFAs. The reactions that describe the chemical processes occurring during acetogenesis are:
CH3CH2COO− + 3H2O ⇌ CH3COO− + HCO3− + H+ + 3H2
C6H12O6 + 2H2O ⇌ 2CH3COOH + 4H2 + 2CO2
CH3CH2OH + H2O ⇌ CH3COO− + H+ + 2H2
Acetogenesis and acidogenesis occur simultaneously; there is no time delay between the two processes. Hydrogen formed in acetogenesis can inhibit the metabolic activity of acetogenic bacteria and slow the reaction. On the other hand, the hydrogen formed in acetogenesis becomes a reactant for the last stage of AD.
Methanogenesis is the fourth stage of the AD process. In general, methanogenic bacteria can form methane from acetic acid, alcohols, hydrogen, and carbon dioxide, according to Bochmann and Montgomery (2013):
CH3COOH ⇌ CH4+ CO2
CH3OH + H2 ⇌ CH4 + H2O
CO2 + 4H2 ⇌ CH4+ 2H2O
Biogas production is usually expressed in terms of a biogas yield—the amount of biogas produced per unit mass of substrate. Al Seadi et al. (2008) and Frigon and Guiot (2010) determined that each main compound class of biomass can be characterized by its theoretical biogas composition and theoretical biogas yield, as presented in Table 4.4.1.
Data in Table 4.4.1 show that fats produce more biogas than proteins and carbohydrates, and that proteins and fats produce biogas with a higher methane content than carbohydrates. The share of methane is relevant, since the efficiency of biogas utilization is based on the share of methane in the biogas. A gas analyzer is used to determine the composition of biogas—not only its methane content, but also its content of other components such as carbon dioxide, water, hydrogen sulphide, etc. The volume of biogas produced can be measured by several methods; the water displacement method is the most common one (Bedoić et al., 2019a). The recorded biogas volume is then usually adjusted to 0°C and 101,325 Pa pressure so that it can be compared to other reported values.
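Normalizing a measured gas volume to 0°C and 101,325 Pa is an ideal-gas correction. A minimal sketch, with hypothetical measurement values (a rigorous correction would also subtract the water vapor pressure of the collected gas):

```python
def normalize_volume(v_measured_L, temp_C, pressure_Pa):
    """Convert a measured gas volume to normal litres (0 deg C, 101,325 Pa), ideal-gas law."""
    return v_measured_L * (pressure_Pa / 101_325) * (273.15 / (273.15 + temp_C))

# 1.00 L of biogas collected at 25 deg C and an ambient pressure of 100,000 Pa:
print(normalize_volume(1.00, 25.0, 100_000))  # ~0.90 NL
```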
Table $1$: Elemental formula, theoretical gas yields, and share of main compounds in biogas from different substrates (Al Seadi et al., 2008; Frigon and Guiot, 2010).
| Polymers | Elemental Formula | Theoretical Biogas Yield (Nm3/kg TS)[a] | CH4 in Biogas (%) | CO2 in Biogas (%) |
|---|---|---|---|---|
| Proteins | C106H168O34N28S | 0.700 | 70–71 | 29–30 |
| Fats | C8H15O | 1.200–1.250 | 67–68 | 32–33 |
| Carbohydrates | (CH2O)n | 0.790–0.800 | 50 | 50 |

[a] Nm3 = normal cubic meter; TS = total solids or dry weight.
In batch AD tests, biogas production can also be presented as a daily or cumulative profile (Figure 4.4.1). The biogas generation rate is highest at the start of the process and decreases after a certain time. When the cumulative biogas production remains constant over several days, it indicates that the biodegradation of organic material has stopped.
The mass of the substrate can be expressed as total solids (TS) or volatile solids (VS). TS is the mass of solids in the substrate with water totally excluded (determined at 105°C), while VS is the mass of solids that is lost on ignition of TS at 550°C. TS and VS are used more commonly when anaerobic digestion is performed on a laboratory or pilot scale. For a large-scale operation the mass of fresh matter (FM; raw mass inserted into a digester, including water) is more commonly used.
Important Parameters for AD
Significant parameters for AD include the composition of the substrate entering the reactor, pH, and temperature. The elements carbon (C), hydrogen (H), oxygen (O), nitrogen (N), and sulfur (S) are the building blocks of the organic matter that makes up polymeric carbohydrate, protein, and lipid molecules. From the elemental composition of a substrate it is possible to estimate its biodegradability. One of the most common indicators of substrate degradability in AD is the carbon to nitrogen ratio (C:N). The optimum C:N of a substrate is between 25 and 30. Lower C:N values indicate a high nitrogen content, which can lead to ammonia generation and inhibition of the process. Higher C:N values indicate high levels of carbohydrates in the substrate, which make it harder to disintegrate and convert to biogas.
The pH range in the reactor depends on the feedstock used and its chemical properties. Liu et al. (2008) found that the optimum pH range inside the reactor is between 6.8 and 7.2, while the AD process can tolerate a range of 6.5 up to 8.0.
The anaerobic digestion process is highly sensitive to temperature changes. Van Lier et al. (1997) have studied the impact of temperature on the growth rate of bacteria in methanogenesis. In general, psychrophilic (2°C to 20°C) digestion is not used for commercial purposes, due to a high retention time of substrates. Feedstock co-digestion is usually performed in mesophilic (35°C to 38°C) or thermophilic (50°C to 70°C) conditions. Mesophilic anaerobic digestion is the most common system. It has a more stable operation than thermophilic anaerobic digestion, but a lower biogas production rate. Thermophilic anaerobic digestion shows advantages in terms of pathogen reduction during the process. Increasing the temperature of the process increases the organic acids inside the fermenter, but at the same time makes the process more unstable. Degradation of the feedstock under thermophilic conditions requires an additional heat supply to achieve and maintain such conditions in the digester.
Estimation of Biogas Yield of the Substrate Based on the Elemental Composition
The theoretical production of biogas can be estimated from the elemental composition of the substrate. Gerike (1984) described an approach for finding a molecular formula that represents the composition of the substrate in the form CaHbOcNd, where a, b, c, and d are the numbers of carbon, hydrogen, oxygen, and nitrogen atoms, respectively, estimated from the elemental composition of the substrate on a dry basis (TS). To represent the entire AD process in one stage, the following chemical reaction is used:
$C_{a}H_{b}O_{c}N_{d} + \frac{4a-b-2c+3d}{4}H_{2}O \rightarrow \frac{4a+b-2c-3d}{8}CH_{4}+ \frac{4a-b+2c+3d}{8}CO_{2}+dNH_{3}$
The reaction estimates that the entire organic compound, CaHbOcNd, is decomposed during the AD process into three products: methane, carbon dioxide and ammonia.
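The stoichiometric coefficients of this one-stage (Buswell-type) reaction are easy to evaluate in code. A minimal sketch (the function name is ours), checked against the 50:50 CH4:CO2 split that Table 4.4.1 gives for carbohydrates:

```python
def ad_stoichiometry(a, b, c, d):
    """Moles of H2O consumed and CH4, CO2, NH3 produced per mole of CaHbOcNd."""
    water   = (4*a - b - 2*c + 3*d) / 4
    methane = (4*a + b - 2*c - 3*d) / 8
    co2     = (4*a - b + 2*c + 3*d) / 8
    return water, methane, co2, d  # d mol of NH3

# Glucose, C6H12O6 (a carbohydrate):
w, ch4, co2, nh3 = ad_stoichiometry(6, 12, 6, 0)
print(ch4, co2)                          # 3.0 mol CH4 and 3.0 mol CO2
print(100 * ch4 / (ch4 + co2), "% CH4")  # 50% methane, as in Table 4.4.1
```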
Substrates in AD are usually characterized through the value of oxygen demand. There are several types of oxygen demand; the ones usually used are the following.
• Biochemical oxygen demand (BOD) is the measure of the oxygen equivalent of the organic substrate that can be oxidized biochemically using aerobic biological organisms.
• Chemical oxygen demand (COD) is the measure of the oxygen equivalent of the organic substrate that can be oxidized chemically using dichromate in an acid solution. The amount of methane produced in anaerobic digestion per unit COD is around 0.40 Nm3/kg O2.
• Theoretical oxygen demand (ThOD) is the oxygen required to oxidize the organics to end products, based on the elemental composition of the substrate. ThOD is used to estimate the oxygen demand of different substrates.
Koch et al. (2010) established a formula to estimate the ThOD of the substrate based on the molecular formula CaHbOcNd:
$ThOD = \frac{16 \times(2a+0.5(b-3d)-c)}{12a+b+16c+14d} (\frac{kg_{O_{2}}}{kgTS_{C_{a}H_{b}O_{c}N_{d}}})$
Note that ThOD does not account for the fact that the organic substrate is never 100% degradable in practice.
Li et al. (2013) showed a way to estimate the theoretical biochemical methane potential (TBMP) of the organic substrate based on the elemental composition of the substrate, as:
$TBMP = \frac{22.4 \times (\frac{a}{2}+\frac{b}{8}-\frac{c}{4}-\frac{3d}{8})}{12a+b+16c+14d} (\frac{Nm^{3}_{CH_{4}}}{kgVS_{C_{a}H_{b}O_{c}N_{d}}})$
Since the organic matter cannot always be fully degraded during the AD process, it is valuable to conduct measurements (usually lab-scale) on the production of biogas, in order to investigate the actual degradability of the sample. For those purposes, biochemical gas potential (BGP) and biochemical methane potential (BMP) tests are used.
The BGP laboratory test is used in assessing the potential yield of a substrate in terms of biogas production and process stability (based upon pH and concentration of ammonia):
$BGP= V_{N,biogas} / m_{substrate}$
where VN,biogas = normalized cumulative volume of biogas (Nm3)
msubstrate = mass of substrate put in a reactor (kg FM or kg TS or kg VS).
The BMP test is usually used in assessing the feedstock potential, but in terms of biomethane production:
$BMP = V_{N,CH_{4}}/m_{substrate}$
where VN,CH4 = normalized cumulative volume of biomethane (Nm3)
msubstrate = mass of substrate put in a reactor (kg FM or kg TS or kg VS).
Both tests contribute to the evaluation of the use of prepared feedstock in the AD process. The ratio of BMP and BGP represents the share of the most important compound of biogas, methane:
$CH_{4} \text{ in biogas} = BMP/BGP$
Degradation of a substrate is expressed as the ratio of the actual BMP to the TBMP, which depends on the chemical constitution of the substrate, as:
$\text{Degradation (%)} = \frac{BMP}{TBMP} \times100$
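The formulas above chain together naturally: ThOD and TBMP follow from the elemental formula, and degradation compares a measured BMP against TBMP. A minimal sketch, using glucose and a hypothetical measured BMP value:

```python
def thod(a, b, c, d):
    """Theoretical oxygen demand, kg O2 per kg TS of CaHbOcNd."""
    return 16 * (2*a + 0.5*(b - 3*d) - c) / (12*a + b + 16*c + 14*d)

def tbmp(a, b, c, d):
    """Theoretical biochemical methane potential, Nm3 CH4 per kg VS."""
    return 22.4 * (a/2 + b/8 - c/4 - 3*d/8) / (12*a + b + 16*c + 14*d)

# Glucose, C6H12O6:
print(round(thod(6, 12, 6, 0), 3))   # ~1.067 kg O2/kg TS
print(round(tbmp(6, 12, 6, 0), 3))   # ~0.373 Nm3 CH4/kg VS

# Degradation (%) for a hypothetical measured BMP of 0.30 Nm3 CH4/kg VS:
print(round(100 * 0.30 / tbmp(6, 12, 6, 0)))  # ~80%
```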
Large-Scale AD
Two parameters that must be considered during design, and controlled during operation, to keep large-scale biogas production at a satisfactory level are the organic load rate and the hydraulic retention time.
Organic load rate (OLR) represents the quantity of organic material fed into the digester. It is usually higher for co-digestion compared to mono-digestion. If OLR is too high, foaming and instability of the process can occur due to higher levels of acidic components in the digester. OLR can be calculated based on the input volume of feedstock:
$OLR = Q/V$
where Q = raw feedstock input in the digester per day (m3 d−1)
V = digester volume (m3)
A more common way to present the organic load rate is through the chemical oxygen demand:
$OLR_{COD} = OLR \times COD$
where COD = chemical oxygen demand of feedstock per unit volume of feedstock (kg O2 m−3).
Interpretation of the input feedstock through COD includes taking into consideration the chemical properties of substrates in the feedstock.
Hydraulic retention time (HRT) represents the time (days) that a certain quantity of feedstock remains in the digester:
$HRT = V/Q$
The required HRT depends on parameters such as feedstock composition, the operating conditions in the digester, and digester configuration. In order to avoid instability in the process usually caused by VFA accumulation, HRT should be >20 days.
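OLR and HRT are reciprocal design quantities (HRT = V/Q, volumetric OLR = Q/V). A quick sketch with hypothetical plant values:

```python
V = 2000.0   # digester volume, m^3 (hypothetical)
Q = 80.0     # raw feedstock input, m^3 per day (hypothetical)
COD = 60.0   # feedstock chemical oxygen demand, kg O2 per m^3 (hypothetical)

OLR = Q / V           # 0.04 d^-1, volumetric organic load rate
OLR_COD = OLR * COD   # 2.4 kg O2 m^-3 d^-1, COD-based load rate
HRT = V / Q           # 25 days (> 20 days, helping to avoid VFA accumulation)
print(OLR, OLR_COD, HRT)
```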
Products of Anaerobic Digestion
Biogas is a mixture of gases, mainly composed of methane and carbon dioxide, while compounds like water, oxygen, ammonia, hydrogen, hydrogen sulfide, and nitrogen can be found in traces. Average data on the detailed composition of biogas is shown in Table 4.4.2.
Methane is the most important compound of biogas since it is an energy-rich fuel. Therefore, the uses of biogas are primarily related to extracting benefits from methane recovery in terms of producing renewable energy.
Apart from biogas, digestate is the second product of anaerobic digestion. It can be described as a macronutrient-rich, indigestible material that can be used to improve the quality of soil. Pognani et al. (2009) studied the breakdown of total solids (TS, g kg−1), macronutrients, and the indigestible compounds lignin, hemicellulose, and cellulose in digestate from different feedstock types, as shown in Table 4.4.3.
Digestate of agri-food by-products typically contains a very low quantity of total solids, around 3.5%; the remainder is water. Total nitrogen is usually the most abundant macronutrient, at about 11% of total solids. Phosphorus is present in much lower quantities in total solids, with about a 1% share. The main indigestible compound in digestate is lignin, at almost 30%.
Lignin is difficult to biodegrade during the AD process; hence, it can be found in the digestate. Mulat et al. (2018) successfully applied pretreatment methods (steam explosion and enzymatic saccharification) to increase the biodegradability of lignin. The efficiency of pretreatment can be defined as the increase in BMP (or BGP) relative to the BMP (or BGP) of the non-pretreated substrate:
$\text{Pretreatment efficiency (%)} = \frac{BMP_{pretreated} - BMP_{untreated}}{BMP_{untreated}} \times 100$
Table $2$: Detailed composition of biogas.
| Compound | Chemical Symbol | Content (%) |
|---|---|---|
| Methane | CH4 | 50–75 |
| Carbon dioxide | CO2 | 25–45 |
| Water vapor | H2O | 2 (20°C) – 7 (40°C) |
| Oxygen | O2 | <2 |
| Nitrogen | N2 | <2 |
| Ammonia | NH3 | <1 |
| Hydrogen | H2 | <1 |
| Hydrogen sulfide | H2S | <1 |
Table $3$: Breakdown of macronutrients and indigestible compounds in digestate (Pognani et al., 2009).

| Feedstock | TS (g kg−1) | Total N (g kg−1 TS) | NH4-N (g L−1) | Total P (g kg−1 TS) | Lignin (g kg−1 TS) | Hemicelluloses (g kg−1 TS) | Celluloses (g kg−1 TS) |
|---|---|---|---|---|---|---|---|
| Energy crops, cow manure slurry, and agro-industrial waste | 35 | 105 | 2.499 | 10.92 | 280 | 42 | 68 |
| Energy crops, cow manure slurry, agro-industrial waste, and OFMSW (organic fraction of municipal solid waste) | 36 | 110 | 2.427 | 11.79 | 243 | 54 | 79 |
Inhibition Parameters in AD
Many factors can inhibit the AD process, i.e., reduce biogas production, but the most frequent causes are inadequate substrates and excessive substrate loads. According to Xie et al. (2016), inhibition of the AD process results from the accumulation of several intermediates:
• free ammonia (FA), (NH3)
• volatile fatty acids (VFAs)
• long-chain fatty acids (LCFAs)
• heavy metals (HMs).
Ammonia is generated by the biological degradation of organic matter that contains nitrogen, primarily proteins. Typical protein-rich substrates for AD are slaughterhouse by-products (blood, rumen, stomach and intestinal content) and decomposed food (milk, whey, etc.). Process instability due to ammonia accumulation usually coincides with accumulation of VFAs and a corresponding decrease in pH (Sung and Liu, 2003). The critical ammonia toxicity range depends on the type of feedstock used in biogas production, but it runs from 3 to 5 g(NH4-N) L−1 and higher.
Inhibition by VFAs relates to the conversion of VFAs into acetic acid before methane is formed, during acetogenesis. Butyric acid is more readily converted into acetic acid than propionic acid. The ratio of propionic to acetic acid concentrations in the digester is a valuable indicator of VFA inhibition: if the ratio exceeds 1.4, inhibition is present in the system.
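A minimal sketch of this indicator (the concentrations are hypothetical digester measurements):

```python
def vfa_inhibition(propionic_g_L, acetic_g_L, threshold=1.4):
    """True if the propionic:acetic acid ratio signals VFA inhibition."""
    return propionic_g_L / acetic_g_L > threshold

print(vfa_inhibition(2.1, 1.2))  # ratio = 1.75 > 1.4 -> True, inhibition indicated
```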
Formation of LCFAs is more intense if the substrate contains more lipids. Examples of lipid-rich substrates are domestic sewage, oil-processing effluents, and slaughterhouse by-products. Ma et al. (2015) reported that a high concentration of LCFAs results in the accumulation of VFAs and a lower methane yield. LCFAs can cause biochemical inhibition by damaging microorganisms, and Zonta et al. (2013) found that LCFAs can also cause physical inhibition through adsorption of LCFAs onto the surface of the microorganisms.
Heavy metals (HMs) are non-biodegradable inorganic compounds that can be found in the feedstock; municipal sewage and sludge are usually the most dominant HM-rich substrates. During the AD process, HMs remain in the bulk volume of the digester, so they can accumulate to potentially dangerous concentrations that cause failure of anaerobic digester operation. Among the HMs that can cause inhibition in AD are Hg (mercury), Cd (cadmium), Cr (chromium), Cu (copper), As (arsenic), Zn (zinc), and Pb (lead).
Applications
Operating Modes of Anaerobic Digestion
Based on the number of substrates used in AD, there are two operating modes, mono-digestion and co-digestion. Mono-digestion is related to the use of only one substrate in AD, while co-digestion reflects the use of two or more substrates in preparing feedstock.
Usually, mono-digestion is applicable only at the farm level, where a single type of agricultural by-product, such as animal manure, is available. Digestion of animal manure is usually performed in a small-scale digester that replaces inefficient manure storage and contributes to the mitigation of greenhouse gas (GHG) emissions.
Co-digestion involves mixing substrates in different ratios to keep the properties of the mixture within an optimum range for running the process. Co-digestion is a more advantageous method of energy recovery from organic material due to several benefits: a better C:N ratio of the mixed feedstock, efficient pH and moisture content regulation, and higher biodegradability, and thus higher biogas production (Das and Mondal, 2016). Patil and Deshmukh (2015) found that other variables important for adequate running of AD, including moisture content and pH, can easily be adjusted through co-digestion. Compared to mono-digestion, co-digestion has a higher biogas yield, which is associated with the synergistic effects of the microorganisms present in the substrates.
AD processes are also classified by the moisture content of the feedstock into wet AD and dry AD. Wet AD is characterized by feedstock that can be mixed and pumped as liquid slurries, due to a low solids content (3% to 15%). Dry AD (sometimes called high-solids AD) is performed in a pile, with the feedstock in stackable form. Tanks for large-scale AD are usually built of concrete with a corrosion-protective layer applied to the inner tank wall, to ensure longer durability in the gas/water interface zone.
AD processes can operate as large-scale continuous processes and as lab-scale batch processes, as well as intermediate fed-batch and semi-continuous processes. Large-scale biogas production processes (digester volume > 1,000 m3) are performed in biogas plants. Energy production in biogas plants is directly linked to the efficiency of biological conversion of feedstock in a digester. OLR and feedstock properties are controlled to maintain a stable and efficient process. Biogas produced can be utilized in various ways, to produce heat, electricity, or natural-gas like biomethane.
Laboratory (lab-scale; digester volume <1 L) AD is usually performed in a batch mode with a goal to investigate the BMP of substrates for the purposes of AD. The basic principle of batch AD is to put feedstock in a small reactor, add inoculum (colony of bacteria), seal it well, deaerate it to remove oxygen from the digester atmosphere, and monitor the production and the quality of biogas over time by certain laboratory methods (generally water displacement or eudiometer with a pressure gauge). Bedoić et al. (2019a) studied co-digestion of residue grass and maize silage with animal manure in a 250 mL reactor with biogas measurement by a water displacement method. A heated bath was used to maintain the constant temperature in the reactor, since AD is a temperature-sensitive process. As the AD process ran, the generated biogas left the reactor through the outlet hose and entered the upside-down graduated measuring jug filled with water. The volume of the water ejected from the measuring jug represented the volume of the biogas generated in the AD.
In addition to the continuous process in large-scale digesters and the batch process usually performed at laboratory scale, Ruffino et al. (2015) described two intermediate operating modes for AD: fed-batch and semi-continuous. The fed-batch process is usually run in a semi-pilot setup, where the digester is started as a batch; after a certain period of time, products are withdrawn from the reactor, a portion of new substrate is added, and the process continues. If repeated several times, this operating mode is known as repeated fed-batch. The semi-continuous process is considered a pilot setup, in which the process is driven in continuous mode but operates with a lower-volume digester. Semi-continuous AD has shown many advantages over batch operation when investigating AD, mainly because its dynamic component reflects the behavior of continuous large-scale operation.
AD of Agri-Food By-Products
Anaerobic digestion of agri-food by-products can be framed by the agricultural waste, co-products, and by-products (AWCB) value chain, in which the generation of biodegradable material is presented as three major steps (Bedoić et al., 2019b): cultivation/harvesting/farming, processing, and consumption. In each step of the AWCB value chain, organic matter is generated that can be used for AD. Five commodities were selected to represent agri-food by-product sources in the AWCB value chain; more detailed information about agri-food sources for these commodities is shown in Table 4.4.4.
During the first stage of the AWCB value chain (cultivation and harvesting), a certain amount of the commodity is eaten or destroyed by animals (e.g., birds, rabbits, deer, wasps) or lost to bad weather conditions and cannot be used as food (Bedoić et al., 2019b). By-products from this first stage of the AWCB value chain are mainly lignocellulosic matter, except for manure. Since lignocellulosic matter contains an indigestible compound (lignin), intensive pretreatment methods are needed to enhance its degradation. On the other hand, manure has a lower potential for biogas production than lignocellulosic matter, but it is a valuable source of nutrients.
The second stage of the AWCB value chain is the processing of commodities, where additional residues are generated. Since there are many ways to process a commodity, AWCB products in this stage require special consideration for AD. The most interesting, but also the most challenging, AWCBs are slaughterhouse remains, which are characterized by high oxygen demand and an unfavorably low C:N of 6–14 that often causes ammonia inhibition during AD (Moukazis et al., 2018). Co-digestion of olive pomace and apple co-products with cow slurry has been demonstrated to be feasible and economically attractive; semi-continuous anaerobic co-digestion at different OLRs has shown that mixtures of these substrates have an energy potential similar to mixtures of some energy crops and livestock residues. Aboudi et al. (2016) studied mono-digestion of sugar beet cossettes and co-digestion with cow manure under mesophilic conditions in a semi-continuous anaerobic system; co-digestion produced more methane and showed no inhibition, unlike mono-digestion of sugar beet cossettes. Industrial crop by-products have also shown potential to produce biogas through dry AD with appropriate substrate pretreatment technologies.
Table $4$: Agri-food residues suitable for AD, showing the sources in the AWCB value chain.

| Commodity | Geographic Area | Cultivation/Harvesting/Farming | Processing | Consumption |
|---|---|---|---|---|
| Cattle, dairy cows | India, USA, China | manure | blood, fatty tissue, skin, feet, tail, brain, bones, whey | decayed beef, milk, butter, cheese |
| Rice | China, India, Indonesia | straw | bran, hull | decayed rice |
| Apple | China, E.U., USA | pruning residues and leaves | apple pomace (peel, core, seed, calyx, stems), sludge | decayed apples |
| Sugar beet | Russia, France, USA | sugar beet leaves | molasses, sugar beet pulp, wash water, factory lime, sugar beet tops and tails | wasted sugar |
| Olives | Spain, Italy, Greece, Northern Africa | twigs and leaves, woody branches | mill wastewater, olive pomace | wasted olive oil, decayed olives |
The third stage of the AWCB value chain is consumption, which includes materials such as food waste or spoiled food, mainly generated in households. It is difficult to estimate the composition of decayed food because of the variety of substrates present. In general, however, agri-food by-products are an ever-present, nutrient-rich, sustainable energy source for AD, and their nutrient-rich composition makes the resulting digestate a valuable soil conditioner. On the other hand, pretreatment techniques are often required to increase the relatively low biodegradability of food waste feedstock.
Pretreatment of Agri-Food By-Products to Enhance Biogas Production
Some organic compounds show low degradability if they enter the digester in their raw form. Ariunbaatar et al. (2014) presented several groups of pretreatment techniques that can be applied to increase the biodegradability of those substrates:
• mechanical — disintegration and grinding of solid parts of the substrates, which releases cell compounds and increases the specific surface area for degradation
• thermal — used for pathogen removal, improving dewatering performance, and reducing the viscosity of the digestate; the most studied pretreatment method, applied at industrial scale
• chemical — used to break down organic compounds by means of strong acids, alkalis, or oxidants
• biological — includes both anaerobic and aerobic methods, along with the addition of specific enzymes such as peptidase, carbohydrolase, and lipase
Pretreatments may be combined for further enhancement of biogas production and faster kinetics of AD. Usually, the applied combined pretreatment techniques are thermo-chemical and thermo-mechanical.
The influence of different pretreatment methods applied on substrates in terms of increased biogas production is shown in Table 4.4.5. The effectiveness of the pretreatment method (increased biogas production) depends on the applied pretreatment technique and substrate type. Significant effectiveness of pretreatment methods has been reported for slaughterhouse waste; since this material is not easily degradable, any process for biogas enhancement would be beneficial.
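Using the pretreatment-efficiency definition given earlier, the reported increases in Table 4.4.5 can be reproduced directly; a minimal sketch, checked against the table's first row (rotary-drum pretreatment of OFMSW):

```python
def pretreatment_efficiency(yield_after, yield_before):
    """Increase in BMP (or BGP) relative to the untreated substrate, in percent."""
    return (yield_after - yield_before) / yield_before * 100

# OFMSW, rotary drum (Table 4.4.5): 346 -> 557 mL CH4/g VS
print(round(pretreatment_efficiency(557, 346)))  # ~61%
```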
Biogas Utilization
Biogas generated from anaerobic digestion is an environmentally friendly, clean, renewable fuel. There are two basic end uses for biogas: production of heat and electricity (combined heat and power generation, or CHP), and replacement of natural gas in transportation and the gas grid. Raw biogas contains impurities such as water, hydrogen sulphide, ammonia, etc., which must be removed to make it usable in some applications.
CHP is usually done on-site at the biogas power plant. Internal combustion engines are most commonly used in CHP applications. A flow diagram of feedstock preparation, process operation, and the production of usable forms of energy in the CHP unit is shown in Figure 4.4.2. Depending on the type of raw substrate used for the AD process, the application of pretreatment technologies is optional. Substrates that generally show lower biodegradability, such as lignocellulosic biomass and rotten food, are ground and homogenized in a mixing tank. After a certain retention time in the mixing tank, the feedstock is pumped into a digester, where the production of biogas takes place. Generated biogas flows through a gasometer so that its production (and quality, if available) can be monitored; for instance, reduced biogas production caused by inhibition would be detected as a lower flow rate at the gasometer. As a precaution, a biogas plant is also equipped with a gas flare, where biogas can be burned if it is not acceptable for use as fuel in the internal combustion engine. Some of the heat and electricity produced is used by the biogas plant itself to cover internal energy needs (electric motors for pumps and mixers, temperature control in the digester, etc.); the rest is distributed to final consumers.
Replacement of natural gas in transportation and the gas grid by biomethane is a relatively new approach in handling biogas from anaerobic digestion. The basic idea is to remove impurities in biogas, such as carbon dioxide, ammonia, and hydrogen sulphide, and produce biomethane, which can then be used as a replacement for natural gas in the gas grid or as a transportation fuel, either as CNG (compressed natural gas) or LNG (liquefied natural gas). There are several technological solutions for removing non-methane components from biogas; the main options are listed below, following Table 4.4.5.
Table $5$: Influence of pretreatment techniques on biogas yield for different substrates.

| Substrate | Pretreatment Technique | AD Operating Mode | Yield Before Pretreatment | Yield After Pretreatment | Increased Production | Reference |
|---|---|---|---|---|---|---|
| OFMSW | rotary drum | thermophilic batch | 346 mL CH4/g VS | 557 mL CH4/g VS | 61% | Zhu et al., 2009 |
| OFMSW | thermophilic pre-hydrolysis | thermophilic (continuous 2-stage) | 223 mmol CH4/(L reactor·d) | 441.6 mmol CH4/(L reactor·d) | 98% | Ueno et al., 2007 |
| Food waste | size reduction by beads mill | mesophilic batch | 375 mL biogas/g COD | 503 mL biogas/g COD | 34% | Izumi et al., 2010 |
| Food waste | thermal at 120°C (1 bar) | thermophilic batch | 6.5 L biogas/L reactor | 7.2 L biogas/L reactor | 11% | Ma et al., 2011 |
| Food waste | 400 pulses with electroporation | mesophilic continuous | 222 L CH4/g TS | 338 L CH4/g TS | 53% | Carlsson et al., 2008 |
| Slaughterhouse waste | pasteurization (70°C, 1 h) | mesophilic fed-batch | 0.31 L biogas/g VS | 1.14 L biogas/g VS | 268% | Ware and Power, 2016 |
| Slaughterhouse waste | chemical pretreatment with NaOH | mesophilic batch | 8.55 L biogas/kg FM | 22.8 L biogas/kg FM | 167% | Flores-Juarez et al., 2014 |
| Lignocellulosic agro-industrial waste | enzymatic pretreatment of sugar beet residues | mesophilic fed-batch | 163 mL biogas/d | 183 mL biogas/d | 12% | Ziemiński et al., 2012 |
| Lignocellulosic agro-industrial waste | hydrothermal NaOH pretreated rice straw | mesophilic batch | 140 L biogas/kg VS | 185 L biogas/kg VS | 32% | Chandra et al., 2012 |
• In pressure swing adsorption (PSA), carbon dioxide is removed from biogas by alternating pressure levels and its adsorption/desorption on zeolites or activated carbon.
• In chemical solvent scrubbing (CSS), carbon dioxide is trapped in dissolved compounds or liquid chemicals, i.e., alkaline salt solutions and amine solutions.
• In pressurized water scrubbing (PWS), removal of carbon dioxide and hydrogen sulphide is based on their higher solubility in water compared to methane.
• In physical solvent scrubbing (PSS), instead of trapping carbon dioxide and hydrogen sulphide in water, organic solvents such as glycols are used.
• Membrane separation is based on the different permeation rates of biogas compounds when the gas is forced at high pressure through a nano-porous material (membrane), causing separation of the gas compounds.
• Cryogenic distillation uses the condensing and freezing of carbon dioxide at low temperatures, at which methane remains in the gas phase.
• Supersonic separation uses a specific nozzle to expand the saturated gas to supersonic velocities; the resulting low temperature and pressure cause a change of state (condensation) and separation of compounds.
• The industrial (ecological) “lung” uses the enzyme carbonic anhydrase to pull carbon dioxide into an aqueous phase, where it is absorbed.
Because of their low investment cost, high removal efficiencies, high reliability, and wide range of removable contaminants, the most commonly applied upgrading technologies are water scrubbing, PSA, and chemical scrubbing. A combination of technologies is often used to process larger quantities of biogas into biomethane. However, upgrading technologies are generally expensive to purchase and can be costly to operate and maintain.
Digestate Management
Digestate is composed of two fractions, liquid and solid. After separating the digestate into these fractions, different utilization methods can be applied, as studied by Drosg et al. (2015). The liquid fraction usually contains high concentrations of nitrogen and can therefore be applied directly as a liquid soil fertilizer, without any processing required. The liquid fraction can also be re-fed to the digester and recirculated in the AD process. The solid fraction generally consists of non-degraded material (primarily lignin), which can cause odor emissions. To prevent this outcome, the solid fraction of digestate can be used as a feedstock for a composting process. The resulting compost is a biofertilizer that slowly releases nutrients and improves soil characteristics. The other option for utilizing the solid digestate fraction is to remove the remaining moisture by drying and produce a solid-state fuel (pellets); this approach is less satisfactory because the valuable nutrients present in the solid digestate are lost. So far, using digestate as a biofertilizer seems to be the most sustainable option.
Examples
Example $1$
Example 1: Theoretical oxygen demand and theoretical biochemical methane potential
Problem:
A lignocellulosic substrate was analyzed for its elemental composition (Table 4.4.6). Calculate the (a) theoretical oxygen demand and (b) theoretical biochemical methane potential of this substrate.
Solution
(a) To calculate the theoretical oxygen demand, first estimate the elemental formula of the substrate (CaHbOcNd) based on the elements in the dry matter, since water (the remaining material) is not degradable during the AD process. Divide the mass share of each element by its relative atomic mass:

$\frac{47.2}{12} : \frac{5.8}{1} : \frac{44.2}{16} : \frac{2.8}{14}$
That results in the following values:
$3.933:5.800:2.763:0.200$
Then, it is necessary to divide all numbers by the lowest presented value, in this case 0.200:
$(3.933 : 5.800:2.763:0.200)/0.200$
The result of the applied action (a : b : c : d) is:
$19.7:29:13.8:1$
Which indicates the chemical formula of the lignocellulosic substrate as: C19.7H29O13.8N.
Then estimate the theoretical oxygen demand using Equation 4.4.1:

$ThOD = \frac{16 \times (2a+0.5(b-3d)-c)}{12a+b+16c+14d}\ (\frac{kg_{O_{2}}}{kgTS_{C_{a}H_{b}O_{c}N_{d}}})$ (Equation $1$)

$ThOD = \frac{16 \times (2\times 19.7+0.5(29-3\times 1)-13.8)}{12\times 19.7+29+16\times 13.8+14\times 1} = 1.235\ \frac{kg_{O_{2}}}{kgTS_{C_{19.7}H_{29}O_{13.8}N}}$
(b) If the entire lignocellulosic substrate is degraded during the AD process, TBMP can be estimated using Equation 4.4.2:
$TBMP = \frac{22.4 \times (\frac{a}{2}+\frac{b}{8}-\frac{c}{4}-\frac{3d}{8})}{12a+b+16c+14d} (\frac{Nm^{3}_{CH_{4}}}{kgVS_{C_{a}H_{b}O_{c}N_{d}}})$ (Equation $2$)
$TBMP = \frac{22.4 \times (\frac{19.7}{2}+\frac{29}{8}-\frac{13.8}{4}-\frac{3\times 1}{8})}{12\times 19.7+29+16\times 13.8+14 \times 1} = 0.432\ \frac{Nm^{3}_{CH_{4}}}{kgVS_{C_{19.7}H_{29}O_{13.8}N}}$
Table $6$: Elemental composition of the lignocellulosic substrate.

| Element | Based on Fresh Matter [%] | Based on Dry Matter [%] |
|---|---|---|
| Carbon | 8.9 | 47.2 |
| Hydrogen | 1.1 | 5.8 |
| Oxygen | 8.5 | 44.2 |
| Nitrogen | 0.53 | 2.8 |
TBMP is 0.432 Nm3 of biomethane per kg of substrate VS.
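The calculation chain of this example is compact enough to script. The following is a minimal Python sketch (the helper function and variable names are ours, not from the text); small differences from the worked values above come from rounding the molar ratios to 19.7, 29, 13.8, and 1.

```python
# Sketch of Example 1: elemental formula, ThOD, and TBMP of a substrate.
# Inputs are element shares in % of dry matter (Table 4.4.6).

def thod_and_tbmp(c_pct, h_pct, o_pct, n_pct):
    """Return (ThOD in kgO2/kgTS, TBMP in Nm3 CH4/kgVS) for C_a H_b O_c N_d."""
    # Molar ratios from mass shares divided by relative atomic masses,
    # normalized so that d (nitrogen) = 1
    d0 = n_pct / 14
    a, b, c, d = c_pct / 12 / d0, h_pct / 1 / d0, o_pct / 16 / d0, 1.0
    mw = 12 * a + b + 16 * c + 14 * d                        # formula mass
    thod = 16 * (2 * a + 0.5 * (b - 3 * d) - c) / mw         # Equation 4.4.1
    tbmp = 22.4 * (a / 2 + b / 8 - c / 4 - 3 * d / 8) / mw   # Equation 4.4.2
    return thod, tbmp

thod, tbmp = thod_and_tbmp(47.2, 5.8, 44.2, 2.8)
print(f"ThOD = {thod:.3f} kgO2/kgTS")     # ~1.233 (text: 1.235 after rounding)
print(f"TBMP = {tbmp:.3f} Nm3 CH4/kgVS")  # ~0.431 (text: 0.432 after rounding)
```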
Example $2$
Example 2: Degradation calculation
Problem:
The BMP tests of the lignocellulosic substrate in Example 4.4.1 determined that the substrate has a BMP of 0.222 Nm3 kgVS−1. Determine the degradation of the substrate.
Solution
Calculate degradation using Equation 4.4.6:
$\text{Degradation (%)} = \frac{BMP}{TBMP} \times100$ (Equation $6$)
$\text{Degradation (%)} = \frac{0.222\ Nm^{3}\ kgVS^{-1}}{0.432\ Nm^{3}\ kgVS^{-1}} \times100 = 51.4\%$

The result shows that during the AD tests performed on the lignocellulosic matter, 51.4% of the substrate was degraded and converted to biomethane/biogas.
Example $3$
Example 3: Pretreatment efficiency determination
Problem:
Lignocellulosic substrate from Example 4.4.1 has undergone a thermo-chemical pretreatment before entering the BMP test. Reported BMP of the pretreated substrate was 0.389 Nm3 kgVS−1. Calculate the increase in biomethane production by applying the thermo-chemical pretreatment method, i.e., what is the efficiency of the pretreatment method?
Solution
Calculate the efficiency of the pretreatment method (increase in biomethane production) using Equation 4.4.10:
$\text{Efficiency (%)} = \frac{\text{BMP(after pretreatment) - BMP(without pretreatment)}}{\text{BMP(without pretreatment)}}\times100$ (Equation $10$)
$\text{Efficiency (%)} = \frac{0.389\ Nm^{3}kgVS^{-1} - 0.222\ Nm^{3}kgVS^{-1}}{0.222\ Nm^{3}kgVS^{-1}} \times100=75\%$
This case shows that the efficiency of the applied pretreatment technique is 75%.
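Examples 2 and 3 are single-ratio calculations; a minimal Python sketch (variable names ours) reproduces both:

```python
# Degradation (Example 2) and pretreatment efficiency (Example 3)
TBMP = 0.432     # Nm3 CH4 / kg VS, from Example 1
BMP_raw = 0.222  # BMP without pretreatment
BMP_pre = 0.389  # BMP after thermo-chemical pretreatment

degradation = BMP_raw / TBMP * 100                   # Equation 4.4.6
efficiency = (BMP_pre - BMP_raw) / BMP_raw * 100     # Equation 4.4.10
print(f"degradation = {degradation:.1f}%")           # ~51.4%
print(f"pretreatment efficiency = {efficiency:.1f}%")  # ~75.2%
```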
Example $4$
Example 4: BGP test on anaerobic digestion of rotten food
Problem:
BGP tests have been conducted on the anaerobic digestion of a rotten food mixture with an average C:N ratio of 12. The working volume of the laboratory reactor is 250 mL. The mass of raw feedstock put in the reactor was 100 g, with an average dry matter content of 5%. Inoculum and feedstock were mixed in the ratio of 1:1 based on the total solids content. The reactor operated under mesophilic conditions, with a temperature of 38°C. Biogas production was measured by the water displacement method each day over a 40-day period. Table 4.4.7 presents the recorded volume of biogas during the AD operation (normalized to 0°C and 1 atm). Calculate and graph the daily and cumulative biogas production over the test period. If the average share of methane in biogas was recorded as 55%, calculate the BMP of the rotten food.
Solution
Calculate the daily production of biogas in the studied example by dividing the volume of biogas produced each day by the reactor volume:
$V\text{(digester)} = 250\ mL$
$\text{Daily production of biogas} = \frac{V_{N,biogas}}{V_{digester}}$
The computed daily biogas production values are plotted in Figure 4.4.3 in SI units Nm3/(m3∙d).
Cumulative production of biogas is determined as the sequential sum of biogas volume produced each day, expressed over the mass of total solids of feedstock put in the reactor (Figure 4.4.4).
The final value of cumulative biogas production (40th day), about 0.221 Nm3 kg−1 TS, is the BGP of the rotten food sample. Determine the value of the BMP of the analyzed feedstock using Equation 4.4.5 in the following form:
$BMP=\text{share of methane} \times BGP$
Insert reported values in the equation:
$BMP = 0.55 \times 0.221\ Nm^{3}kgTS^{-1} = 0.121\ Nm^{3}kgTS^{-1}$
BMP of the analyzed feedstock is calculated to be 0.121 Nm3 kgTS−1.
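The bookkeeping in this example is easy to verify in a few lines of Python (a sketch with our own variable names, using the daily volumes from Table 4.4.7):

```python
# Example 4: daily rate, cumulative BGP, and BMP from daily biogas volumes
daily_nml = [0, 3, 9, 17, 25, 32, 39, 45, 50, 56, 62, 67, 72, 74, 69, 61,
             53, 50, 45, 35, 29, 24, 25, 23, 20, 18, 16, 14, 12, 11, 9, 9,
             8, 6, 4, 3, 3, 3, 2, 1]        # NmL/day, days 1-40 (Table 4.4.7)

reactor_volume_l = 0.250                    # working volume: 250 mL = 0.25 L
ts_mass_kg = 0.100 * 0.05                   # 100 g feedstock at 5% dry matter

# Daily volumetric rate: NL per L of reactor = Nm3 per m3 of reactor, per day
daily_rate = [v / 1000 / reactor_volume_l for v in daily_nml]

bgp = sum(daily_nml) / 1e6 / ts_mass_kg     # NmL -> Nm3, per kg TS; ~0.221
bmp = 0.55 * bgp                            # 55% methane share; ~0.121
print(f"BGP = {bgp:.3f} Nm3/kgTS, BMP = {bmp:.3f} Nm3/kgTS")
```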
Example $5$
Example 5: Biogas plant
Problem:
A biogas facility operates under mesophilic conditions (38°C) and produces biogas from food processing by-products. The digester volume is 3,750 m3 while the average hydraulic retention time is 50 days. Average COD of inlet stream processed at the biogas plant is 75 kgO2 m−3.
(a) Determine the OLR expressed in terms of the quantity of the input stream and its chemical oxygen demand. Also, calculate the daily production of biogas in the digester, if the average methane share in biogas is about 65%.

(b) Assume the biogas facility starts to operate with a different inlet stream, characterized by a 40% higher COD value compared to the inlet stream in (a). To keep the same organic loading rate in terms of COD value, find the new HRT for the changed feedstock and the new production of methane.
Table $7$: Normalized biogas volume in the operation of batch AD of rotten food.

| Time (day) | Produced Biogas per Day (NmL) | Time (day) | Produced Biogas per Day (NmL) |
|---|---|---|---|
| 1 | 0 | 21 | 29 |
| 2 | 3 | 22 | 24 |
| 3 | 9 | 23 | 25 |
| 4 | 17 | 24 | 23 |
| 5 | 25 | 25 | 20 |
| 6 | 32 | 26 | 18 |
| 7 | 39 | 27 | 16 |
| 8 | 45 | 28 | 14 |
| 9 | 50 | 29 | 12 |
| 10 | 56 | 30 | 11 |
| 11 | 62 | 31 | 9 |
| 12 | 67 | 32 | 9 |
| 13 | 72 | 33 | 8 |
| 14 | 74 | 34 | 6 |
| 15 | 69 | 35 | 4 |
| 16 | 61 | 36 | 3 |
| 17 | 53 | 37 | 3 |
| 18 | 50 | 38 | 3 |
| 19 | 45 | 39 | 2 |
| 20 | 35 | 40 | 1 |
Solution
(a) To determine the required variables Q, Qbiogas, and OLR, first calculate the input volume of feedstock per day using Equation 4.4.9:
$HRT = V/Q$ (Equation $9$)
$Q= V/HRT = 3,750\ m^{3}/50\ d=75\ m^{3}\ d^{-1}$
The calculated flow rate of feedstock is 75 m3 d−1.
Input COD value of the feedstock per day is calculated as the product of volume flow rate (75 m3 d−1) and COD (75 kgO2 m−3)
$COD_{input} = Q \times COD=75\ m^{3}/d \times 75\ kgO_{2}/m^{3} = 5,625\ kgO_{2}\ d^{-1}$
This results in input COD of 5,625 kgO2 d−1.
As stated above, 1 kg of input COD in the AD can produce 0.40 Nm3 CH4, so the flow rate of feedstock of 5,625 kgO2 d−1 can produce:
$Q_{N,CH_{4}} = \frac{5,625\ kgO_{2}}{d} \times \frac{0.40\ Nm^{3}}{kgO_{2}}$
This results in 2,250 Nm3 of CH4 per day in the biogas production unit.
To find the production of methane in the digester during the process at a temperature of 38°C, it is necessary to apply the following relation:
$Q_{38^{\circ}C,CH_{4}} = Q_{N,CH_{4}} \times (\frac{273+38}{273})$
This results in a methane production rate in the digester of 2,563 m3 d−1 at a temperature of 38°C.
Furthermore, to determine the production of biogas in the digester, divide the quantity of produced methane by the share of methane in biogas:
$Q_{38^{\circ}C, biogas} = \frac{Q_{38^{\circ}C,CH_{4}}}{0.65}$
This results in the daily biogas production rate of 3,943 m3 in a digester at 38°C.
To determine the organic load rate based on the input volume, OLR, use Equation 4.4.7:
$OLR = Q/V$ (Equation $7$)
$OLR = \frac{75\ m^{3}/d}{3,750\ m^{3}} = 0.02\ m^{3}\ d^{-1} \text{ feedstock} \ m^{-3} \text{ digester}$
Then use Equation 4.4.8 to express OLR in terms of COD value:
$OLR_{COD} = OLR \times COD$ (Equation $8$)
$OLR_{COD} = \frac{0.02\ m^{3}}{d\times m^{3}} \times \frac{75\ kgO_{2}}{m^{3}}= 1.5\ kgO_{2}\ d^{-1}\ m^{-3} \text{ digester}$
(b) To determine the values of Q, HRT, and Q(CH4) when new organic material (new feedstock) is entering the AD plant, first note that:
$V=3,750\ m^{3}$
$COD_{new} = COD_{old}+0.40\times COD_{old}= 1.4\times COD_{old}$
Therefore, the COD value of a new feedstock is assumed to be 105 kgO2 m−3.
Calculate the flow rate of the new feedstock using Equation 4.4.8 in the following modified form:
$OLR_{new} = \frac{OLR_{COD}}{COD_{new}}$
$OLR_{new} = \frac{1.5\ kgO_{2}\ d^{-1}\ m^{-3} \text{ digester}}{105\ kgO_{2}\ m^{-3} \text{ feedstock}}$
The new value of OLR is estimated to be 0.0143 m3 feedstock m−3 digester d−1.
Calculate the input flow rate of the new feedstock using Equation 4.4.7:
$Q_{new} = OLR_{new} \times V$
$Q_{new} = \frac{0.0143\ m^{3}}{d \times m^{3}} \times 3,750\ m^{3}$
The flow rate of the new feedstock is 53.63 m3 d−1. As expected, the input flow rate of the feedstock with higher COD is lower than the one in part (a) to maintain the same OLRCOD.
Since the input COD remains the same as in part (a), the production of methane is unchanged at 2,563 m3 d−1 at 38°C.
HRT for the new feedstock is calculated with Equation 4.4.9 and found to be 70 days. Since the new feedstock has a lower flow rate compared to the one in part (a), it is necessary to prolong the period of feedstock retention to achieve the same OLRCOD.
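A minimal Python sketch of the part (a) and (b) calculations (variable names ours; the 0.40 Nm3 CH4 per kg COD yield factor is the value used in the text):

```python
# Example 5: biogas plant flows, OLR, and the adjusted HRT for a new feedstock
V = 3750.0          # digester volume, m3
HRT = 50.0          # hydraulic retention time, d
COD = 75.0          # inlet COD, kgO2/m3
CH4_PER_COD = 0.40  # Nm3 CH4 per kg input COD (value used in the text)
CH4_SHARE = 0.65    # methane fraction of biogas
T = 38.0            # digester temperature, degrees C

Q = V / HRT                            # 75 m3/d feedstock
q_ch4_norm = Q * COD * CH4_PER_COD     # 2250 Nm3 CH4/d at 0 C, 1 atm
q_ch4 = q_ch4_norm * (273 + T) / 273   # ~2563 m3 CH4/d at 38 C
q_biogas = q_ch4 / CH4_SHARE           # ~3943 m3 biogas/d at 38 C
olr_cod = (Q / V) * COD                # 1.5 kgO2 per m3 digester per day

# Part (b): 40% higher COD, same OLR in COD terms
cod_new = 1.4 * COD                    # 105 kgO2/m3
q_new = olr_cod / cod_new * V          # ~53.6 m3/d
hrt_new = V / q_new                    # ~70 d; methane production unchanged
print(round(q_biogas), round(q_new, 1), round(hrt_new))
```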
Image Credits
Figure 1. Mähnert, P. (2020). (CC BY 4.0). Theoretical biogas yield profiles for a batch test. Retrieved from https://opus4.kobv.de/opus4-slbp/files/1301/biogas03.pdf
Figure 2. Clarke Energy. (2020). Flow diagram of biogas CHP cogeneration. ICE = internal combustion engine. Retrieved from https://www.clarke-energy.com/biogas/. [Fair Use].
Figure 3. Bedoić, R. (2020). (CC BY 4.0). Daily biogas production for Example 4.
Figure 4. Bedoić, R. (2020). (CC BY 4.0). Cumulative biogas production for Example 4.
References
Aboudi, K., Álvarez-Gallego, C. J., & Romero-García, L. I. (2016). Biomethanization of sugar beet byproduct by semi-continuous single digestion and co-digestion with cow manure. Bioresour. Technol., 200, 311-319. https://doi.org/10.1016/j.biortech.2015.10.051.
Al Seadi, T., Rutz, D., Prassl, H., Köttner, M., Finsterwalder, T., Volk, S., & Janssen, R. (2008). Biogas handbook. Esbjerg, Denmark: University of Southern Denmark Esbjerg.
Ariunbaatar, J., Panico, A., Esposito, G., Pirozzi, F., & Lens, P. N. L. (2014). Pretreatment methods to enhance anaerobic digestion of organic solid waste. Appl. Energy, 123, 143-156. https://doi.org/10.1016/j.apenergy.2014.02.035.
Bedoić, R., Čuček, L., Ćosić, B., Krajnc, D., Smoljanić, G., Kravanja, Z., . . . Duić, N. (2019a). Green biomass to biogas—A study on anaerobic digestion of residue grass. J. Cleaner Prod., 213, 700-709. https://doi.org/https://doi.org/10.1016/j.jclepro.2018.12.224.
Bedoić, R., Ćosić, B., & Duić, N. (2019b). Technical potential and geographic distribution of agricultural residues, co-products and by-products in the European Union. Sci. Total Environ., 686, 568-579. https://doi.org/10.1016/j.scitotenv.2019.05.219.
Bochmann, G., & Montgomery, L. F. R. (2013). Storage and pre-treatment of substrates for biogas production. In A. Wellinger, J. Murphy, & D. Baxter (Eds.), The biogas handbook: Science, production and applications (pp. 85-103). Woodhead Publishing. https://doi.org/10.1533/9780857097415.1.85.
Carlsson, M., Lagerkvist, A., & Ecke, H. (2008). Electroporation for enhanced methane yield from municipal solid waste. ORBIT 2008: Moving Organic Waste Recycling Towards Resource Management and Biobased Economy, 6, 1-8.
Chandra, R., Takeuchi, H., & Hasegawa, T. (2012). Hydrothermal pretreatment of rice straw biomass: A potential and promising method for enhanced methane production. Appl. Energy, 94, 129-140. https://doi.org/10.1016/j.apenergy.2012.01.027.
Clarke Energy (2020). Biogas. Retrieved from https://www.clarke-energy.com/biogas/.
Das, A., & Mondal, C. (2016). Biogas production from co-digestion of substrates: A review. Int. Res. J. Environ. Sci., 5(1), 49-57.
Drosg, B., Fuchs, W., Al Seadi, T., Madsen, M., & Linke, B. (2015). Nutrient recovery by biogas digestate processing. IEA Bioenergy. Retrieved from http://www.iea-biogas.net.
Flores-Juarez, C. R., Rodríguez-García, A., Cárdenas-Mijangos, J., Montoya-Herrera, L., Godinez Mora-Tovar, L. A., Bustos-Bustos, E., . . . Manríquez-Rocha, J. (2014). Chemically pretreating slaughterhouse solid waste to increase the efficiency of anaerobic digestion. J. Biosci. Bioeng., 118(4), 415-419. https://doi.org/10.1016/j.jbiosc.2014.03.013.
Frigon, J. C., & Guiot, S. R. (2010). Biomethane production from starch and lignocellulosic crops: A comparative review. Biofuels, Bioprod. Biorefin., 4(4), 447-458. https://doi.org/10.1002/bbb.229.
Gerike, P. (1984). The biodegradability testing of poorly water soluble compounds. Chemosphere, 13(1), 169-190. https://doi.org/10.1016/0045-6535(84)90018-3.
Izumi, K., Okishio, Y., Nagao, N., Niwa, C., Yamamoto, S., & Toda, T. (2010). Effects of particle size on anaerobic digestion of food waste. Int. Biodeterioration and Biodegradation, 64(7), 601-608. https://doi.org/10.1016/j.ibiod.2010.06.013.
Karak, N. (2016). Biopolymers for paints and surface coatings. In F. Pacheco-Torgal, V. Ivanov, & H. Jonkers (Eds.). Biopolymers and biotech admixtures for eco-efficient construction materials (pp. 333-368). Woodhead Publishing. https://doi.org/10.1016/B978-0-08-100214-8.00015-4.
Koch, K., Lübken, M., Gehring, T., Wichern, M., & Horn, H. (2010). Biogas from grass silage—Measurements and modeling with ADM1. Bioresour. Technol., 101(21), 8158-8165. https://doi.org/10.1016/j.biortech.2010.06.009.
Lauwers, J., Appels, L., Thompson, I. P., Degrève, J., Van Impe, J. F., & Dewil, R. (2013). Mathematical modelling of anaerobic digestion of biomass and waste: Power and limitations. Prog. Energy Combust. Sci., 39, 383-402. https://doi.org/10.1016/j.pecs.2013.03.003.
Li, Y., Zhang, R., Chen, C., Liu, G., He, Y., & Liu, X. (2013). Biogas production from co-digestion of corn stover and chicken manure under anaerobic wet, hemi-solid, and solid state conditions. Bioresour. Technol., 149, 406-412. https://doi.org/10.1016/j.biortech.2013.09.091.
Liu, C., Yuan, X., Zeng, G., Li, W., & Li, J. (2008). Prediction of methane yield at optimum pH for anaerobic digestion of organic fraction of municipal solid waste. Bioresour. Technol., 99, 882-888. https://doi.org/10.1016/j.biortech.2007.01.013.
Ma, J., Duong, T. H., Smits, M., Verstraete, W., & Carballa, M. (2011). Enhanced biomethanation of kitchen waste by different pre-treatments. Bioresour. Technol., 102(2), 592-599. https://doi.org/10.1016/j.biortech.2010.07.122.
Ma, J., Zhao, Q. B., Laurens, L. L. M., Jarvis, E. E., Nagle, N. J., Chen, S., & Frear, C. S. (2015). Mechanism, kinetics and microbiology of inhibition caused by long-chain fatty acids in anaerobic digestion of algal biomass. Biotechnol. Biofuels, 8(1). https://doi.org/10.1186/s13068-015-0322-z.
Mähnert, P. (2006). Grundlagen und verfahren der biogasgewinnung. Leitfaden Biogas (FNR), 13-25. Retrieved from https://opus4.kobv.de/opus4-slbp/files/1301/biogas03.pdf.
Moukazis, I., Pellera, F. M., & Gidarakos, E. (2018). Slaughterhouse by-products treatment using anaerobic digestion. Waste Manage., 71, 652-662. https://doi.org/10.1016/j.wasman.2017.07.009.
Mulat, D. G., Dibdiakova, J., & Horn, S. J. (2018). Microbial biogas production from hydrolysis lignin: Insight into lignin structural changes. Biotechnol. Biofuels, 11(61). https://doi.org/10.1186/s13068-018-1054-7.
Patil, V. S., & Deshmukh, H. V. (2015). A review on co-digestion of vegetable waste with organic wastes for energy generation. Int. J. Biol. Sci., 4(6), 83-86.
Pognani, M., D’Imporzano, G., Scaglia, B., & Adani, F. (2009). Substituting energy crops with organic fraction of municipal solid waste for biogas production at farm level: A full-scale plant study. Process Biochem., 44(8), 817-821. https://doi.org/10.1016/j.procbio.2009.03.014.
Ruffino, B., Fiore, S., Roati, C., Campo, G., Novarino, D., & Zanetti, M. (2015). Scale effect of anaerobic digestion tests in fed-batch and semi-continuous mode for the technical and economic feasibility of a full scale digester. Bioresour. Technol., 182, 302-313. https://doi.org/10.1016/j.biortech.2015.02.021.
Sung, S., & Liu, T. (2003). Ammonia inhibition on thermophilic anaerobic digestion. Chemosphere, 53(1), 43-52. https://doi.org/10.1016/S0045-6535(03)00434-X.
Ueno, Y., Tatara, M., Fukui, H., Makiuchi, T., Goto, M., & Sode, K. (2007). Production of hydrogen and methane from organic solid wastes by phase-separation of anaerobic process. Bioresour. Technol., 98(9), 1861-1865. https://doi.org/10.1016/j.biortech.2006.06.017.
Van Lier, J. B., Rebac, S., & Lettinga, G. (1997). High-rate anaerobic wastewater treatment under psychrophilic and thermophilic conditions. Water Sci. Technol., 35, 199-206. https://doi.org/10.1016/S0273-1223(97)00202-3.
Ware, A., & Power, N. (2016). What is the effect of mandatory pasteurisation on the biogas transformation of solid slaughterhouse wastes? Waste Manag., 48, 503–512. https://doi.org/10.1016/j.wasman.2015.10.013.
Xie, S., Hai, F. I., Zhan, X., Guo, W., Ngo, H. H., Price, W. E., & Nghiem, L. D. (2016). Anaerobic co-digestion: A critical review of mathematical modelling for performance optimization. Bioresour. Technol., 222, 498-512. https://doi.org/10.1016/j.biortech.2016.10.015.
Zhu, B., Gikas, P., Zhang, R., Lord, J., Jenkins, B., & Li, X. (2009). Characteristics and biogas production potential of municipal solid wastes pretreated with a rotary drum reactor. Bioresour. Technol., 100(3), 1122-1129. https://doi.org/10.1016/j.biortech.2008.08.024.
Ziemiński, K., Romanowska, I., & Kowalska, M. (2012). Enzymatic pretreatment of lignocellulosic wastes to improve biogas production. Waste Manag., 32(6), 1131-1137. https://doi.org/10.1016/j.wasman.2012.01.016.
Zonta, Ž., Alves, M. M., Flotats, X., & Palatsi, J. (2013). Modelling inhibitory effects of long chain fatty acids in the anaerobic digestion process. Water Res., 47(3), 1369-1380. https://doi.org/10.1016/j.watres.2012.12.007.
Mélynda Hassouna
UMR Sol, Agro et hydrosystèmes et Spatialisation, INRAe, Agrocampus, France
Salvador Calvet
Institute of Animal Science and Technology, Universitat Politècnica de València, Valencia, Spain
Richard S. Gates
Agricultural & Biosystems Engineering, and Animal Sciences, Iowa State University, Ames, Iowa, USA
Enda Hayes
Air Quality Management Resource Centre, Department of Geography and Environmental Management, University of the West of England, Bristol, United Kingdom
Sabine Schrade
Ruminants Research Unit, Federal Department of Economic Affairs, Education and Research EAER, Agroscope, Ettenhausen, Switzerland
Key Terms
Emission processes, Measurement techniques, Sampling, Mass balance, Validation, Ventilation, Ammonia, Greenhouse gases
Introduction
Animal housing and manure storage facilities are two principal on-farm sources of gaseous emissions to the atmosphere. The most important pollutants emitted are ammonia (NH3), methane (CH4), and nitrous oxide (N2O).
Ammonia (NH3) is a colorless gas with a pungent smell that can have impacts on environmental and human health (Figure 4.5.1). Ammonia is emitted by many agricultural activities, including crop production as well as animal production. Ammonia plays a key role in the formation of secondary particulate matter (PM) by reacting with acidic species such as sulfur dioxide (SO2) and nitrogen oxides (NOx) to form fine aerosols, and is thus called a particulate precursor. The PM created by the reaction of NH3 and acidic species in the atmosphere contributes to poor air quality including regional haze. These particles have an aerodynamic diameter of less than 2.5 microns and are generally referred to as “PM-fine.” They are readily inhaled, and populations exposed to PM-fine have greater respiratory and cardiovascular health risks such as asthma, bronchitis, cardiac arrhythmia and arrest, and premature death. Some emitted NH3 is subsequently deposited on land and water downwind of facilities, and can acidify soils and freshwater. The addition of available nitrogen (N) to low-nutrient ecosystems disturbs their balance and can alter the relative growth and abundance of plant species.
Nitrous oxide and CH4 are potent greenhouse gases (GHGs) which contribute to global warming. The global warming potential (GWP) is a factor specific to each GHG and allows comparisons of the relative global warming impacts between different GHGs. This factor indicates how much heat a given gas traps over a certain time horizon (usually 100 years), compared with an equal mass of carbon dioxide (CO2). Nitrous oxide GWP for a 100-year time horizon is 265 with a lifetime in the atmosphere of 114 years. Methane GWP for a 100-year time horizon is 28 with a lifetime in the atmosphere of 12 years.
Nitrous oxide, also known as “laughing gas,” is colorless and odorless, and contributes to the destruction of the atmospheric ozone layer. In agriculture, the main source of N2O emissions is soil, from crop fertilizer use, soil cultivation, and spreading of urine and manure. Other sources include industrial processes, and natural processes involving soils and oceans.
Methane is a volatile organic compound, odorless and flammable. In agriculture, the main sources are enteric fermentation (fermentation that takes place in the digestive systems of animals) and the degradation of manure. Methane contributes to ozone formation in the lower atmosphere, and to ozone layer depletion in the upper atmosphere.
Researchers and engineers have developed different approaches to reliably measure and quantify emissions of NH3, N2O, and CH4 from animal production facilities. The implementation of these methods helps to understand the production processes, to identify the influencing factors, and to develop mitigation techniques or practices. The specific characteristics of animal housing and the variability of the houses and animal production systems make the development and implementation of the different methods a real challenge.
Concepts
Animal Houses
Animal housing is designed to provide shelter and protection with control of feed consumption, diseases, parasites, and the interior thermal environment. An animal house is designed to take into account animal heat and moisture production, building characteristics (e.g., insulation and volume), and outdoor climate. Inside the house, animals produce the following critical components that affect emissions:
• sensible heat that is transferred to the interior air by means of convection, conduction, and radiation, and causes an increase of air temperature;
• latent heat that is generated through the evaporation of moisture from the lungs, skin, urine, and fecal material, and which increases air humidity;
• mixtures of feces and urine, which become a source of gaseous emissions (NH3, N2O, CO2, water vapor, CH4) and heat; and
• CO2 from animal respiration.
The control of temperature, moisture, gas concentrations and dust concentrations inside the house is essential to achieve optimal conditions for animal growth and production. The optimal conditions vary as a function of animal age, species, and breed, and rely on the implementation of ventilation systems. The ventilation system partly controls the rate and total emissions from the building. There are two types of ventilation systems: mechanical and natural. The RAMIRAN European network (RAMIRAN, 2011) defines the two systems as:
• mechanical ventilation, which is ventilation of a building, usually for pigs, poultry, or calves, through the use of electrically powered fans in the walls or roof that are normally controlled by the temperature in the building; and
• natural ventilation, which is ventilation of a building, e.g., for cattle, by openings or gaps designed into the roof and/or sides of the building.
NH3, N2O, and CH4 Emissions Processes in Animal Housing and Influencing Factors
Ammonia is volatilized as a gas during manure management. It is mainly derived from the urea excreted in urine (or uric acid in the feces, in the case of poultry). The process of forming NH3 from urea is relatively fast and is outlined in Figure 4.5.2. Once excreted, urea is decomposed within a few hours to a few days into ammonium (NH4+) by means of the enzyme urease, which is widely present in feces and soils. Ammonium is in equilibrium with dissolved NH3, and the equilibrium shifts toward NH3 at high pH values. A second physical equilibrium exists between dissolved and free NH3 in the manure matrix. Finally, the free NH3 can be released to the atmosphere. This is a mass transfer process affected by air velocity, diffusion from beneath the surface, and exposure to air at the surface of the manure.
This is a continuous process that starts in the animal housing itself and continues during manure management and land application. Several factors are involved in the amount of NH3 emitted to the atmosphere:
• manure composition; the most relevant factors are the amount of urea excreted by the animals, the pH of the manure, and its moisture content;
• the environmental conditions, particularly temperature and wind speed above the emitting surface;
• the facilities for animal housing and manure management; and
• management practices, particularly those altering the contact of manure with air, and of urine with feces, by reducing the time of exposure or the contact surface.
On farms, N2O originates from the management of manure and its application to land as fertilizer. Emission of N2O occurs from the successive nitrification and denitrification of NH4. A first aerobic phase is required for the nitrification, while anoxic conditions are required for denitrification (Figure 4.5.3). These conditions are characteristic of the following situations:
• composting with alternative wetting, mixing, and drying periods;
• aerobic treatment of slurry;
• air cleaners at the air exhaust of the animal house based on biological scrubbing of air; and
• application of manure to soil and subsequent drying-wetting events.
Methane is produced during the anaerobic decomposition of organic matter. This occurs mainly during digestion in ruminants and decomposition of manure. In this process, the microbial breakdown of organic matter occurs in different stages, from more complex molecules to the simplest. The main mechanism is presented in Figure 4.5.4. Apart from the presence of organic matter and anoxic conditions, time (a few weeks) is needed to complete the process, and the process may be inhibited due to certain conditions, such as NH3 accumulation. In contrast to NH3, CH4 has very low solubility in water and, once produced, is released to the atmosphere through a characteristic bubbling, in the case of slurries.
For enteric fermentation in ruminants, key factors are feed composition and genetics. More digestible feeds reduce the amount of CH4. Feed constituents such as lipids or essential oils may reduce CH4 production through inhibition. Genetics also influence the amount of CH4 produced and can be modified through animal genetic selection.
CH4 emission during manure management is due to the presence of organic matter subjected to anaerobic conditions for sufficient time (about one month, at least) for methanogenic bacteria to develop. The amount and composition of organic matter determines the maximum potential for CH4 formation. Manure management practices that interrupt anaerobic conditions, reduce the load of organic matter, or feature biogas capture are potentially effective for mitigating emissions.
Measuring Emissions from Animal Housing
The most common approach used to determine gas emission rates from animal houses is based on quantifying ventilation rates and inlet and outlet concentrations of the gas (Figure 4.5.5). The mass flow rate of emitted gas (emission rate or ER) is proportional to the ventilation rate and the concentration difference between exhaust and outside air. Several different techniques are available to measure gas concentrations inside and outside the house and the ventilation rate.
Gas Concentration Measurement Techniques
The techniques used most often to measure NH3, CH4, and N2O concentrations of animal houses are either physical (optical, gas chromatography, chemiluminescence) or chemical (acid traps, active colorimetric tubes).
Optical Techniques
Optical techniques are based on the Beer-Lambert absorption law, which indicates that the quantity of light of a given wavelength absorbed is related to the number of gas molecules in the light’s path that are able to absorb it.
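For reference (this formulation is added here, not taken verbatim from the chapter), the Beer-Lambert law is commonly written as

$I = I_{0}\ e^{-\varepsilon c l}$

where $I_{0}$ is the intensity of the incident light, $I$ the intensity after passing through the sample, $\varepsilon$ the absorption coefficient of the target gas at the chosen wavelength, $c$ its concentration, and $l$ the optical path length. Measuring the ratio $I/I_{0}$ at a wavelength the target gas absorbs therefore yields its concentration.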
Optical techniques rely on the use of a light source, a chamber to contain the air sample during measurement, and a detector to quantify the target gas absorption. The main techniques used in animal houses are infrared (IR) spectroscopy (photoacoustic or Fourier transform), tunable diode laser absorption spectroscopy (TDLAS), off-axis integrated cavity output spectroscopy (OA-ICOS), and cavity ring-down spectroscopy (CRDS). Differences between these techniques include the detection principle of absorption and the type and wavelength of light sources (quantum cascade laser, tunable diode laser, or IR source). Techniques using lasers (a monochromatic source with a narrow band of wavelengths) are more selective, accurate, and stable than techniques with a polychromatic IR source (i.e., large band of wavelengths) because selection of the absorption of a specific wavelength from a polychromatic IR source is difficult to achieve.
One main advantage of optical techniques is that they make monitoring of concentration dynamics in near real time possible, including monitoring several gases with different concentrations at the same time (Powers and Capelari, 2016). Advantages of optical instruments include linear responses over a wide range of concentrations and the ability to measure concentrations both inside (where there could be a high concentration level) and outside (low concentration level) the animal house with the same instruments. Most optical instruments have response times adapted to measurement in animal houses. They are portable and can be used on site. Nevertheless, they can be expensive, must be calibrated, and still require accurate estimation of ventilation rate.
Gas Chromatography
A gas chromatograph separates components in the sample and measures their concentrations. The equipment has four basic elements: an injector, a column, an oven surrounding the column, and a detector. The sample is vaporized in the injector and swept by the carrier gas through a heated column. The column separates each compound according to its polarity and boiling point. The detector identifies and quantifies the compounds separated. The detectors include a flame ionization detector for CH4 and CO2 and an electron capture detector for N2O. This technique is accurate if the detector has been calibrated for the range of concentrations measured. It requires use of a carrier gas and regular calibration, which makes on-site implementation and continuous measurement difficult. It is often used to measure previously collected samples.
Chemiluminescence
Chemiluminescence is used to measure NH3 concentration. NH3 in the sample is first converted to nitric oxide (NO) in a high-temperature catalytic converter; the NO then reacts with ozone to form nitrogen dioxide in an elevated energy state. As the molecules return to a lower energy state, they release electromagnetic radiation at a specific wavelength, which is measured and quantified.
Acid Traps
An acid trap is a standard reference technique for measuring NH3. A known volume of air is pumped through an acid solution and recorded (Figure 4.5.6). The acid solution is later analyzed in the laboratory with a colorimetric or photometric method (Hassouna et al., 2016) to estimate the amount of NH3 trapped in the solution, as:
$NH_{3,trapped} = [N-NH_{4}^{+}]_{acid\ solution} \times m_{acid\ solution}$
where NH3,trapped = amount of NH3 trapped in the solution (kg)
[N-NH4+]acid solution = concentration of ammonium in the acid solution (kg kg−1)
macid solution = mass of the solution (kg)
From this, the NH3 concentration in the air sample ($C_{N-NH_{3},air}$ in kg m−3) can be calculated as:

$C_{N-NH_{3},air} = \frac{NH_{3, trapped}}{V_{sample}}$
where Vsample (m3) is the volume of air that passed through the solution.
Acid solutions such as boric acid, orthophosphoric acid, nitric acid, and sulfuric acid are used. The trap can be used for a few hours or a few days depending on the NH3 concentration of the incoming air, the acid concentration, and the volume of acid solution in the vials. Sampling time and acid concentration should be determined before the experiment as a function of the expected NH3 concentrations. Two vials with acid solution are used in series to avoid saturating a single solution. This technique provides a mean NH3 concentration over the sampling period and thus is not suitable for studies that require monitoring the dynamics of NH3 concentrations in a house. Nevertheless, because it is inexpensive and not too time-consuming, it can be used to check the consistency of measurements made, for instance, with optical techniques.
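A minimal numeric sketch of the two acid-trap formulas above (the values are hypothetical, chosen only to illustrate the unit handling):

```python
# Acid trap: NH3 trapped in solution, then NH3 concentration in sampled air
nh4_in_solution = 2.0e-5   # kg N-NH4+ per kg of acid solution (lab analysis)
solution_mass = 0.10       # kg of acid solution in the vials
air_volume = 1.2           # m3 of air pumped through the trap (recorded)

nh3_trapped = nh4_in_solution * solution_mass   # kg of N trapped
c_nh3_air = nh3_trapped / air_volume            # kg N-NH3 per m3 of air
print(f"{c_nh3_air * 1e6:.2f} mg/m3")           # ~1.67 mg/m3
```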
Active Colorimetric Tubes
Active colorimetric tubes are a manual technique that can be used to estimate NH3, NO, volatile organic compounds (VOC), and CO2 concentrations. Tubes are manufactured to react to a specific range of concentrations of a specific target gas. Before measurement, both ends of a sealed test tube are cut open. The tube is connected tightly to a hand pump, which draws air through the tube. If present in the air, the target gas reacts with reagents in the tube. The strength of the reaction is proportional to the concentration of the gas in the air. A graduated scale is used to read the degree of color change in the tube, which indicates the concentration of the target gas. Many of the reactions used are based on pH indicators, such as bromophenol blue to measure NH3 concentrations. Gas concentration is expressed in ppm or mL m−3. This technique is reliable and simple to use. It can be used to estimate concentrations, but not to measure them continuously or accurately.
Ventilation Rate
In mechanically ventilated houses with modern ventilation control systems, the ventilation rate may be one of many variables recorded continuously; in that case, the data are readily available for emission calculations. For other houses, or if the time step of recording is not suitable or the recorded data are not reliable, the ventilation rate must be measured or estimated. Different methods to estimate the ventilation rate have been evaluated and described in the literature (Ogink et al., 2013; Wang et al., 2016). The method chosen depends on the type of ventilation (natural or mechanical), the accessibility of the exhaust for physical measurements, the level of the ventilation rate, and the desired degree of accuracy. Some techniques are indirect (tracer gas, heat balance), while others are direct (fan wheel anemometer, specialized instruments).
Tracer Gas Techniques Using Artificial Tracer Gases
Tracer gas techniques are commonly used to quantify the ventilation rate in many kinds of houses, but mainly those with natural ventilation. An external tracer gas should be safe, inert, measurable, not produced in the house, and inexpensive (Phillips et al., 2001; Sherman, 1990). The most common tracer gas used in animal houses is sulfur hexafluoride (SF6) (Mohn et al., 2018). A critical requirement is that of near-perfect air mixing inside the animal house to ensure that the tracer gas and the targeted gas (for emission calculations) being measured both disperse in a similar way. Air can be mixed artificially using a purpose-built ventilation duct (Figure 4.5.7). Tracer gases can be dosed automatically using a mass flow controller and critical orifices (Figure 4.5.8).
The basic principle for tracer gas techniques is conservation of mass (of both target gas and tracer gas). By monitoring the dosed mass flow and concentration at the sampling points of the tracer gas, the ventilation rate can be determined (Figure 4.5.9). A tracer gas release technique is chosen based on the ventilation rate, the detection limit of the device used to monitor tracer gas concentration, and the ability to control and monitor the dosed mass flow accurately. According to Ogink et al. (2013), three tracer gas release techniques can be distinguished:
• constant injection method: tracer gas is injected at a constant rate, and its concentration is measured over a period of time and used to estimate the ventilation rate;
• decay method: tracer gas is injected until its concentration stabilizes, then injection is stopped and the decay in concentration is used to calculate the ventilation rate; and
• concentration method: tracer gas is dosed so that its concentration in the house air is held constant, and the dosing rate required to maintain that concentration is used to estimate the ventilation rate.

Figure $8$: (a) Tracer-gas dosing by steel tubes with critical orifices protected by steel elements next to the floors in a dairy housing; (b) gas bottles with mass-flow controller.

Only the constant injection method and the decay method are common for measurements in animal houses.
To calculate the emission or mass flow of the target gas (e.g., NH3, CH4), a background correction of the concentration (Cx) must first be calculated for the target gases and the tracer gases:
$C_{x} = C_{x, id} - C_{x, bgd}$
where x = T (tracer gas) or G (target gas)
Cx,id = indoor gas concentration (μg m−3)
Cx,bgd = background gas concentration (μg m−3)
The ratio of the background-corrected concentrations of the emitted (target) gas, CG, and the tracer gas, CT, then corresponds to the ratio of their mass flow rates ($\dot{m}$, g d−1):
$\frac{\dot{m}_{G}}{\dot{m}_{T}} = \frac{C_{G}}{C_{T}}$
and thus
$\dot{m}_{G} = \frac{\dot{m}_{T}C_{G}}{C_{T}}$
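The two equations above translate directly into code. A minimal sketch (the measurement values are hypothetical; the only requirement is that target and tracer concentrations are expressed in the same units before taking the ratio):

```python
# Tracer-ratio method: target-gas mass flow from tracer dosing
def emission_rate(c_target_in, c_target_bgd, c_tracer_in, c_tracer_bgd,
                  tracer_dose_g_d):
    """Background-correct both gases, then scale the tracer dose by the
    concentration ratio (both concentrations in the same units)."""
    c_g = c_target_in - c_target_bgd   # corrected target-gas concentration
    c_t = c_tracer_in - c_tracer_bgd   # corrected tracer concentration
    return tracer_dose_g_d * c_g / c_t

# CH4: 25.0 indoors vs 2.0 background mg/m3, converted to ug/m3 to match
# the SF6 units (0.80 vs 0.05 ug/m3); SF6 dosed at 2.0 g/d
m_ch4 = emission_rate(25.0e3, 2.0e3, 0.80, 0.05, tracer_dose_g_d=2.0)
print(f"{m_ch4 / 1000:.1f} kg CH4/d")  # ~61.3 kg/d for these made-up values
```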
Carbon Dioxide (CO2) Mass Balance or Tracer Gas Methods Using an Internal Tracer
The CO2 mass balance method is sometimes considered a tracer gas technique in which CO2 is used as an internal tracer, that is, not dosed but produced by animal respiration and manure. It can be used in naturally or mechanically ventilated houses. It is based on the hypothesis that ventilation rate determines the relationship between CO2 production in the house and the difference in CO2 concentrations between the inside and outside of the house. This method has been widely described (Blanes and Pedersen, 2005; Estellés et al., 2011; Samer et al., 2012) and is more accurate in buildings with no litter and no gas heating system.
The ventilation rate for the house can be calculated as:
$VR = \frac{\text{total heat per house}\times \text{ventilation flow per hpu}}{1000}$
where the total heat produced for the entire house is expressed in heat production units (hpu; 1 hpu is 1 kW of total animal metabolic heat production at 20°C) and the ventilation flow per hpu is in m3 h−1 hpu−1.
The International Commission of Agricultural Engineering provides a method to calculate total heat production (sensible plus latent) for different animal categories (Pedersen and Sällvik, 2002). For instance, for fattening pigs, the total heat produced for the entire house is calculated by multiplying the total heat per animal (in W animal−1) by the number of animals and converting to heat production units as:
$\text{total heat per house} = \text{total heat per animal} \times \text{number of animals}$
$\text{total heat per animal} = 5.09\times m^{0.75} +[1-(0.47+0.003 \times m)] \times [(n \times 5.09 \times m^{0.75}) - (5.09\times m^{0.75})]$
where m = animal mass (kg)
n = ratio of feed energy intake to the heat dissipation due to maintenance (dimensionless)
Ventilation flow per hpu varies as a function of animal activity at different times of the day and difference between indoor and outdoor CO2 concentrations:
$\text{Ventilation flow per hpu} = \frac{c \times (\text{relative animal activity})}{(CO_{2,indoors}-CO_{2,outdoors})\times 10^{-6}}$
where c = CO2 production (m3 h−1 hpu−1); varies as a function of animal type (Pedersen and Sällvik, 2002; Pedersen et al., 2008).
$CO_{2, indoors} \text{ and } CO_{2, outdoors} = \text{measured indoor and outdoor } CO_{2} \text{ concentrations at time h (mL m}^{-3})$
Relative animal activity is calculated as:
$\text{Relative animal activity} = 1-a\ \sin[\frac{2\pi}{24} \times (h+6-h_{min})]$
where a = constant expressing amplitude with respect to the constant value 1, which is a scaling factor based on empirical observation and which varies depending on the animal type (Pedersen et al., 2008)
h = time at sampling (this should be a decimal number 0 ≤ h ≤ 24), e.g., (2:10 = 2.2)
hmin = activity factor that relates to the time of day with minimum activity (hours after midnight) (Pedersen and Sällvik, 2002)
Use of Sensors
Fan wheel or hot wire anemometers can be used to quantify ventilation rate in mechanically ventilated houses that draw outlet air through ducts or exhaust fans. One important requirement is having access to exhaust flow where the measurements are to be made, which is not possible in many animal houses.
The anemometer measures air velocity, and ventilation rate (VR) is calculated as follows:
$VR=sA$
where s = mean airspeed (m h−1)
A = cross-sectional area of the ventilation duct or air stream (m2)
Proper methods must be utilized to obtain representative mean air velocity over the flow area, for example by selecting a sufficient number of measurement points and applying either log-linear or log-Tchebycheff rules (ISO 3966, 2008) for measurement points spacing.
Use of anemometers is not recommended in naturally ventilated houses because of the rapid changes in air fluxes and the large size of the openings, which would require many sensors to obtain a representative estimate of the ventilation rate.
In mechanically ventilated houses, continuous monitoring of the static pressure differential and the operating status (on-off) of each fan can be used to estimate the fan’s ventilation rate based on its theoretical or measured performance characteristics. Ideally, the in situ performance of each fan is determined first, and the house ventilation rate can be estimated by summing all operating fan flow rates. For example, Gates et al. (2004, 2005) developed and improved a fan assessment numeration system (FANS) to measure the in situ performance curve of ventilation fans operating in a negative pressure mechanically ventilated animal house (Figure 4.5.10). This unit is placed either against a fan on the inside of the house, or at the fan exterior with appropriate flexible ducting (Morello et al., 2014) to direct all airflow through the unit. A series of anemometers traverse the entire flow area to obtain a single mean air velocity, which is multiplied by the calibrated unit cross-sectional area. A series of these measurements taken at different building static pressures provides an empirical fan performance curve, obtained, for example, from the regression equation of measured flow on building static pressure. Then, measurements of fan run-time and concurrent static pressure can be used to determine reasonably accurate airflow rates for each fan, and their sum is the building ventilation rate. Previous work has clearly shown that estimating ER without direct measurement of building ventilation results in a substantial loss of accuracy, because of the variation among fans.
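A minimal sketch of the fan-curve approach described above (the calibration points and fan count are hypothetical; in practice a FANS unit would supply the measured flow-pressure pairs):

```python
import numpy as np

# In-situ calibration of one exhaust fan: airflow vs. building static pressure
static_pa = np.array([10.0, 20.0, 30.0, 40.0, 50.0])          # Pa
airflow = np.array([21500., 20800., 19900., 18700., 17200.])  # m3/h

coeffs = np.polyfit(static_pa, airflow, 2)  # empirical fan performance curve

def fan_flow(sp_pa):
    """Airflow of this fan (m3/h) at the measured static pressure (Pa)."""
    return np.polyval(coeffs, sp_pa)

# Building ventilation rate = sum of flows of fans currently running
# (fan run-time logged by the monitoring system; identical fans assumed here)
running_fans = 6
building_vr = running_fans * fan_flow(35.0)  # m3/h at 35 Pa
```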
The Mass Balance Approach for a Global Estimation of N and C Emissions and Emissions Measurement Validation
A mass balance approach estimates emissions from the difference between nutrient inputs to and outputs from the livestock system over time, without the need to measure emissions directly (Figure 4.5.11). The approach estimates total N or C emissions rather than emissions of specific gases (e.g., N-NH3, N-N2O, N2, C-CH4, C-CO2) or emission dynamics. The accuracy of mass balance calculations depends on the technical and livestock management data available, the characterization of the manure and feed, and, in certain cases, the length of the period considered. To test the validity of the data used to calculate an N or C mass balance, balances of non-volatile elements such as phosphorus (P) and potassium (K) should also be calculated. As these elements are non-volatile, their mass balance deficits (difference between inputs and outputs) should be zero, although the data used in the calculations will have uncertainty, especially under commercial conditions. If the mass balance deficit for P and K is too high (e.g., > 15%), then the estimates of total N and C emissions must be reconsidered.
The estimation of N, C, or water emissions (X emissions, where X is for N, C, or water) over the production period can be calculated according to the following equation:
$X_{inputs}-X_{outputs} = X_{emissions}$
Xinputs and Xoutputs are the quantity of X in all inputs and outputs. The estimation of these quantities requires careful data collection (quantities, chemical compositions) concerning animals, feed, eggs or milk (a function of animal production), litter, manure, and animal mortality. Models should be used to estimate the quantity of X in animals as a function of their weight.
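A minimal sketch of the balance and its plausibility check (all quantities hypothetical, in kg over the production period):

```python
def deficit(inputs_kg, outputs_kg):
    """Mass balance deficit: inputs minus outputs."""
    return sum(inputs_kg) - sum(outputs_kg)

# Nitrogen: the deficit is attributed to gaseous emissions (NH3, N2O, N2)
n_emissions = deficit([5200.0, 300.0],         # feed N, litter N
                      [2100.0, 2400.0, 60.0])  # animals, manure, mortality

# Phosphorus is non-volatile, so its relative deficit should be near zero
p_in, p_out = [900.0], [880.0]
p_rel_deficit = abs(deficit(p_in, p_out)) / sum(p_in)
if p_rel_deficit > 0.15:
    print("P deficit > 15%: reconsider the N and C emission estimates")
```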
Applications
Implementing the different emissions measurement methods in an animal house requires the development of a protocol based on the objectives of the project, the specifics of the animal house, the interior environment, and outdoor weather. Three important points that should be considered in a protocol concern the sampling, the data acquisition, and the validity check of the measurements.
Sampling and Sensor Locations
Evaluation of gas emissions requires measurement of inlet and outlet gas concentration. Sampling at air inlets or outlets is recommended if they can be identified, if their locations are fixed over the measurement period, and if they can be easily reached. When these conditions cannot be fulfilled, multiple locations inside the house are usually selected to provide a mean indoor concentration to accommodate spatial variability of indoor concentration. The same should be done for outdoor concentration.
The presence of animals inside the animal area makes installation of gas sensors or sampling tubes more complicated. Ideally, they should be installed when no animals are in the house, such as during an outdoor or vacancy period. They should be located where animals cannot bite, bump, or move them, and should be carefully protected from animals (Figure 4.5.12). Successful sensor placement requires a trade-off between minimizing animal disturbance and maximizing the representativeness of measurements.
The environment inside animal housing is generally harsh for sensors and the air sampling system. Direct exposure to the combination of humidity, NH3, and suspended particulate matter can damage the sensors (Figure 4.5.13). Furthermore, indoor air is generally warmer and more humid than outdoor air because of animal heat production or the use of a heating or cooling system inside the house. These differences should be considered when sampling indoor air; for example, sampling tubes should be heated and insulated if air samples are analyzed in a cooler place and condensation within sampling lines might be expected.
Data Acquisition
During the production period, emissions can vary greatly during a 24-hour cycle and over longer time intervals. Variability is due to the same parameters that affect spatial variability, changes in animal behavior, their excretion patterns and quantity, and whether or not they have outdoor access. For instance, fattening pigs and poultry will excrete more total ammonia nitrogen (TAN) each day as they grow, yielding ever higher potential for NH3 emissions. These two kinds of temporal dynamics (daily and production period) must be considered when measuring gas emissions.
Information concerning the production conditions (number of animals, feed and water consumption, animal mortality) and outdoor climate are also required for validation of measurements and comparison with emissions data already published. All operations made by the farmers or operators (for instance, feeding changes, litter supply, or cooling system implementation) and specific events (for instance, electric power shutdown) during the measurement period should be noted because they will be helpful for data analysis and interpretation.
Validation of Measurements
In order to achieve good quality measurements, data validation steps are necessary at several levels:
• • Validation measurements for parts or the whole measuring setup should be carried out in advance, especially if the setup, single components, and/or the measurement objective (e.g., housing system) was not measured in this configuration before.
• • Calibration of analytical devices and sensors has to be performed according to their specifications. For some analytical instruments, measurements with a reference method (e.g., acid traps for NH3) are recommended.
• • Frequent checks of operational mode and measurement values as well as housing and management conditions are necessary.
• • Plausibility checks of raw data and emission values (e.g., comparison of courses of gas concentrations and wind speed, data check in view of a predetermined plausible range based upon user scientific and technological knowledge) help to find outliers, non-logical values, etc. These incorrect values have to be eliminated according to predefined criteria.
• • Redundancy in measurements can enhance the reliability of the values. For instance, CO2 concentrations could be measured with both a gas chromatograph and an optical gas analyzer during startup or periodically over the project.
• • Comparison of the cumulative emissions for N and carbon (C) with N and C mass balance deficits over the measuring period.
Examples
Example $1$
Example 1: Calculate ammonia (NH3) emission rate from a mechanically ventilated pig house using the carbon dioxide mass balance approach
Problem:
The following case study is based on a project undertaken on behalf of the Irish Environmental Protection Agency to test the suitability of existing NH3 emission factors currently being used for different pig life stages and to explore the potential impact on Natura 2000 sites (i.e., Special Areas of Conservation and Special Protection Areas that may be sensitive to N). The monitoring equipment is described in detail by Kelleghan et al. (2016).
The house used mechanical ventilation, with air inlets along the side of the house and ceiling exhaust fans, but no access to the exhaust stream for direct measurement of ventilation rate. Gas concentrations were measured at 10 a.m. in a house with 406 fattening pigs of 81.25 kg animal−1 reared on a fully slatted floor. Indoor NH3 concentrations in the house were measured using a Los Gatos Research ultraportable ammonia analyzer (UAA) in combination with an eight-inlet multiport unit allowing for multiple sampling points. Outdoor concentrations were not measured; for the purposes of this study and the calculations, they were assumed to be zero. Indoor and outdoor CO2 was measured in the sample gas drawn by the UAA using a K30 CO2 sensor (Senseair, Sweden). Measured gas concentrations are presented in Table 4.5.1. Additional parameters adapted for finishing pigs are included in Table 4.5.2. Calculate the building ventilation rate and the NH3 emission rate.
Solution
Calculate the building ventilation rate using the CO2 mass balance approach expressed by Equation 4.5.6:
Table $1$: Concentration measurement values.
CO2, indoors = 915 ppm (mL m−3)
CO2, outdoors = 403 ppm (mL m−3)
NH3, indoors = 3.73 mg m−3
NH3, outdoors = 0 mg m−3
$VR=\frac{\text{total heat per house} \times \text{ventilation flow per hpu}}{1000}$ (Equation $6$)
To calculate the total heat per house, first, calculate the relative animal activity with Equation 4.5.10 using the given values:
Table $2$: Specific parameters used for the calculations.
a (Equation 4.5.10) = 0.53
hmin (Equation 4.5.10) = 1:40 a.m. (1.7 in decimal hours)
n (Equation 4.5.8) = 3.38 (dimensionless)
c (Equation 4.5.9) = 0.185 m3 h−1 hpu−1
$\text{Relative animal activity} = 1-a \times \sin[(\frac{2\pi}{24})\times(h+6-h_{min})]$ (Equation $10$)

$\text{Relative animal activity} = 1-0.53\times \sin[(\frac{2\pi}{24})\times(10+6-1.7)] = 1.2994$
Next, calculate the total heat production per animal with Equation 4.5.8:
$\text{Total heat per animal} = 5.09 \times m^{0.75} + [1-(0.47+0.003\times m)]\times [(n \times 5.09\times m^{0.75})-(5.09\times m^{0.75})]$ (Equation $8$)
$\text{Total heat per animal} = 5.09 \times 81.25^{0.75} + [1-(0.47+0.003\times 81.25)]\times [(3.38 \times 5.09\times 81.25^{0.75})-(5.09\times 81.25^{0.75})] = 231.6\ W \text{ animal}^{-1}$
Thus, with 406 pigs, the total heat production for the house (carrying the unrounded heat production per animal through the calculation) is:
$406 \times 231.6 \approx 94,026.5\ W$
Next, calculate ventilation flow per heat producing unit (hpu) with Equation 4.5.9:
$\text{Ventilation flow per hpu} = \frac{c \times (\text{relative animal activity})}{(CO_{2,indoors}-CO_{2,outdoors}) \times 10^{-6}}$ (Equation $9$)
$\text{Ventilation flow per hpu} = \frac{0.185 \times (1.2994)}{(915-403) \times 10^{-6}} = 469.5$
Ventilation flow per hpu will equal 469.5 m3 h−1 hpu−1.
Finally, substitute the computed values into Equation 4.5.6 to calculate the building ventilation rate:
$VR = \frac{\text{total heat per house} \times \text{ventilation flow per hpu}}{1000}$ (Equation $6$)
$VR = \frac{94026.5 \times 469.5}{1000} = 44,145.2$
The building ventilation rate is 44,145.2 m3 h−1.
To calculate the NH3 emission rate, note that the NH3 emission rate for the house is the difference between indoor and outdoor NH3 concentrations multiplied by the ventilation rate (converted here from hours to seconds):
$\frac{(3.73-0)\times 44,145.2}{3600} = 45.7$
The NH3 emission rate for this example is 45.7 mg s−1, which equates to 9.7 g day−1 animal−1.
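The arithmetic in this example is easy to script for reuse with other measurement periods. The following is a minimal Python sketch, not part of the original chapter; the variable names are illustrative, and the input values come from Tables 4.5.1 and 4.5.2 above.

```python
import math

# Sketch of the Example 1 calculation (CO2 mass balance, Equations 4.5.6-4.5.10).
a = 0.53            # activity amplitude (-)
h_min = 1.7         # time of minimum activity, decimal hours (1:40 a.m.)
h = 10.0            # hour of measurement (10 a.m.)
n = 3.38            # feed energy intake as a multiple of maintenance (-)
c = 0.185           # m3 h-1 hpu-1 per unit CO2 concentration difference
m = 81.25           # animal mass, kg
n_pigs = 406
co2_in, co2_out = 915.0, 403.0      # ppm
nh3_in, nh3_out = 3.73, 0.0         # mg m-3

# Equation 4.5.10: relative animal activity
activity = 1 - a * math.sin((2 * math.pi / 24) * (h + 6 - h_min))

# Equation 4.5.8: total heat production per animal (W)
phi_maint = 5.09 * m ** 0.75
heat_per_animal = phi_maint + (1 - (0.47 + 0.003 * m)) * (n * phi_maint - phi_maint)

total_heat = n_pigs * heat_per_animal                    # W per house

# Equation 4.5.9: ventilation flow per heat-producing unit (m3 h-1 hpu-1)
flow_per_hpu = c * activity / ((co2_in - co2_out) * 1e-6)

# Equation 4.5.6: building ventilation rate (m3 h-1); 1 hpu = 1000 W
vr = total_heat * flow_per_hpu / 1000

# Emission rate: concentration difference x ventilation rate
er_mg_per_s = (nh3_in - nh3_out) * vr / 3600             # mg s-1
er_g_day_animal = er_mg_per_s * 86400 / 1000 / n_pigs    # g day-1 animal-1

print(f"activity = {activity:.4f}")                      # ~1.2994
print(f"heat per animal = {heat_per_animal:.1f} W")      # ~231.6
print(f"VR = {vr:,.0f} m3 h-1")                          # ~44,145
print(f"ER = {er_mg_per_s:.1f} mg s-1, "
      f"{er_g_day_animal:.1f} g day-1 animal-1")         # ~45.7, ~9.7
```

Because the script carries unrounded intermediate values through the calculation, its results match the hand calculation to within rounding.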
Example $2$: Calculate methane (CH4) emissions from a naturally ventilated dairy cattle house using tracer gas (SF6) measurement data
Problem:
This example is based on investigations carried out in experimental dairy housing for emission measurements (Mohn et al., 2018). The housing consists of two experimental compartments, each for 20 dairy cows, and a central section for milking, technical installations, an office, and analytics. The experimental compartments are naturally ventilated without thermal insulation and with flexible curtains as facades.
The diluted tracer gas, SF6, was dosed continuously through steel tubes with critical capillaries (every third meter) next to the aisles to mimic the emission sources. The stainless steel tubes and critical orifices were protected with metal profiles from damage by animals and from contamination with excrement. To keep the analyzed tracer gas concentration in an optimal range (0.05–1.5 μg m−3 SF6), the tracer gas flow was set by mass flow controllers according to meteorological and ventilation conditions (e.g., curtains open/closed). Integrative air sampling at a height of 2.5 m, using a piping system of Teflon tubes with critical glass orifices (every second meter), provided representative samples; Teflon filters protected the critical orifices from dust and insects. Flow rates of the individual orifices of the dosing and sampling systems were monitored before and after every measuring period using mass flow meters. The analytical instrumentation for CH4 (cavity ring-down spectrometer, CRDS, Picarro Inc., Santa Clara, CA, USA) and SF6 (GC-ECD, Agilent, Santa Clara, CA, USA) was located in an air-conditioned trailer in the central section. The two compartments were sampled alternately for 10 min each, and once per hour the background air (approximately 25 m from the housing and unaffected by it) was sampled, so at least two 10-min samples per compartment were obtained every hour.
To describe the measurement situation, relevant accompanying parameters such as housing and outdoor climate, animal parameters (e.g., live weight, milk yield, milk composition, milk urea content, urine urea content), and feed (quality and quantity, amount of trough residue) were recorded.
Calculate the CH4 emissions during one 10-min measurement taken on 23 September at 12:40 p.m. in a compartment with perforated floors holding 20 cows. The measured gas concentrations are presented in Table 4.5.3.
Solution
Calculate the background correction (Cx) according to Equation 4.5.3 using measured concentrations of SF6 $(C_{SF6})$ and CH4 $(C_{CH4})$:
Table $3$: Concentrations of CH4 and SF6 at 12:40 p.m., 23 September.
SF6 mass flow = 2.879 g d−1
SF6, background = 0.052 μg m−3
SF6, housing sampling points = 1.820 μg m−3
CH4, background = 1384.2 μg m−3
CH4, housing sampling points = 9118.6 μg m−3
$C_{x}=C_{x,sp}-C_{x,bgd}$ (Equation $3$)
$C_{CH4}=9118.6-1384.2=7734.4\ \mu g \ m^{-3}$
$C_{SF6}=1.820-0.052=1.768\ \mu g \ m^{-3}$
Calculate emission or mass flow calculation of CH4 using Equation 4.5.5:
$\dot{m}_{G} = \frac{\dot{m}_{T} \times C_{G}}{C_{T}}$ (Equation $5$)
$\dot{m}_{CH4} = \frac{2.879\ g \ d^{-1} \times 7734.4\ \mu g\ m^{-3}}{1.768\ \mu g\ m^{-3}} = 12,597.9\ g\ d^{-1}$
The emission rate (ER) per cow (20 cows per compartment) is therefore 629.7 g d−1 cow−1.
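The tracer-ratio arithmetic reduces to two background corrections and one proportionality. Below is a minimal Python sketch, not part of the original chapter; names are illustrative and the values come from Table 4.5.3.

```python
# Sketch of the Example 2 tracer-ratio calculation (Equations 4.5.3 and 4.5.5).
m_dot_sf6 = 2.879                    # tracer dosing rate, g d-1
sf6_bgd, sf6_sp = 0.052, 1.820       # ug m-3
ch4_bgd, ch4_sp = 1384.2, 9118.6     # ug m-3
n_cows = 20

# Equation 4.5.3: background-corrected concentrations
c_sf6 = sf6_sp - sf6_bgd             # 1.768 ug m-3
c_ch4 = ch4_sp - ch4_bgd             # 7734.4 ug m-3

# Equation 4.5.5: emitted mass flow scales with the tracer dosing rate
m_dot_ch4 = m_dot_sf6 * c_ch4 / c_sf6            # g d-1 per compartment
print(f"CH4 emission: {m_dot_ch4:,.0f} g d-1 "
      f"({m_dot_ch4 / n_cows:.0f} g d-1 cow-1)")  # ~12,600 and ~630
```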
Image Credits
Figure 1. Department for Environment Food & Rural Affairs. (CC BY 4.0). (2018). Environmental impacts of ammonia emissions. Retrieved from https://www.gov.uk/government/publications/code-of-good-agricultural-practice-for-reducing-ammonia-emissions/code-of-good-agricultural-practice-cogap-for-reducing-ammonia-emissions
Figure 2. Calvet, S. (CC BY 4.0). (2014). Process leading to ammonia emission and contributing factors. Adapted from Snoek et al.
Figure 3. Calvet, S. (CC BY 4.0). (2001). Reactions leading to N2O emissions: nitrification (in green) and denitrification (in blue). Adapted from Wrage et al.
Figure 4. Calvet, S. (CC BY 4.0). (2020). Process and microorganisms involved in methane formation.
Figure 5. Calvet, S. (CC BY 4.0). (2020). Direct emission measurement in an animal house. The difference in concentration between indoors and outdoors and the ventilation rate must be measured (adapted from Calvet et al., 2013).
Figure 6. Hassouna, M. (CC BY 4.0). (2020). Acid trap configuration.
Figure 7. Hassouna, M. (CC BY 4.0). (2020). Duct for dispersing a tracer gas within an animal house.
Figure 8. Agroscope, Schrade, S. (CC BY 4.0). (2020). (a) Tracer-gas dosing by steel tubes with critical orifices protected by steel elements next to the floors in a dairy housing; (b) gas bottles with mass-flow controller. Photos adapted from Agroscope. [Fair Use].
Figure 9. Agroscope, Schrade, S. (CC BY 4.0). (2020). Principle of the tracer gas method.
Figure 10. Gates, R. (CC BY 4.0). (2020). Fan Assessment Numeration System (FANS) developed by Gates et al. (2005) to describe the performance curve of a ventilation fan in an animal house. Note the unit can be used on either the inlet or exhaust side of the ventilation fan.
Figure 11. Hassouna, M. (CC BY 4.0). (2016). The mass balance approach in animal houses.
Figure 12, right. Agroscope, Schrade, S. (CC BY 4.0). (2020). Sensors must be installed beyond the reach of animals.
Figure 12, left. Hassouna, M. (CC BY 4.0). (2020). Sensors must be installed beyond the reach of animals.
Figure 13. Hassouna, M. (CC BY 4.0). (2020). Clogging of a sampling tube and its dust filter after three months of measurement in a dairy cattle house.
References
Blanes, V., & Pedersen, S. (2005). Ventilation flow in pig houses measured and calculated by carbon dioxide, moisture and heat balance equations. Biosyst. Eng., 92(4), 483-493. https://doi.org/10.1016/j.biosystemseng.2005.09.002.
Calvet, S., Gates, R. S., Zhang, G., Estelles, F., Ogink, N. W. M., Pedersen, S., & Berckmans, D. (2013). Measuring gas emissions from livestock buildings: A review on uncertainty analysis and error sources. Biosyst. Eng., 116(3), 221-231. https://doi.org/10.1016/j.biosystemseng.2012.11.004.
Estelles, F., Fernandez, N., Torres, A. G., & Calvet, S. (2011). Use of CO2 balances to determine ventilation rates in a fattening rabbit house. Spanish J. Agric. Res., 9(3), 8. https://doi.org/10.5424/sjar/20110903-368-10.
Gates, R. S., Casey, K. D., Xin, H., Wheeler, E. F., & Simmons, J. D. (2004). Fan assessment numeration system (FANS) design and calibration specifications. Trans. ASAE, 47(5), 1709-1715. https://doi.org/10.13031/2013.17613.
Gates, R. S., Xin, H., Casey, K. D., Liang, Y., & Wheeler, E. F. (2005). Method for measuring ammonia emissions from poultry houses. J. Appl. Poultry Res., 14(3), 622-634. https://doi.org/10.1093/japr/14.3.622.
Hassouna, M., Eglin, T., Cellier, P., Colomb, V., Cohan, J.-P., Decuq, C., . . . Ponchant, P. (2016). Measuring emissions from livestock farming: Greenhouse gases, ammonia and nitrogen oxides. France: INRA-ADEME.
ISO 3966. (2008). Measurement of fluid flow in closed conduits. Velocity area method for regular flows using Pitot static tubes.
Kelleghan, D. B., Hayes, E. T., & Curran, T. P. (2016). Profile of ammonia and water vapor in an Irish broiler production house. ASABE Paper No. 162461252. St. Joseph, MI: ASABE. https://doi.org/10.13031/aim.20162461252.
Mohn, J., Zeyer, K., Keck, M., Keller, M., Zahner, M., Poteko, J., . . . Schrade, S. (2018). A dual tracer ratio method for comparative emission measurements in an experimental dairy housing. Atmospheric Environ., 179, 12-22. https://doi.org/10.1016/j.atmosenv.2018.01.057.
Morello, G. M., Overhults, D. G., Day, G. B., Gates, R. S., Lopes, I. M., & Earnest Jr., J. (2014). Using the fan assessment numeration system (FANS) in situ: A procedure for minimizing errors during fan tests. Trans. ASABE, 57(1), 199-209. https://doi.org/10.13031/trans.57.10190.
Ogink, N. W. M., Mosquera, J., Calvet, S., & Zhang, G. (2013). Methods for measuring gas emissions from naturally ventilated livestock buildings: Developments over the last decade and perspectives for improvement. Biosyst. Eng., 116(3), 297-308. https://doi.org/10.1016/j.biosystemseng.2012.10.005.
Pedersen, S., & Sallvik, K. (2002). 4th Report of working group on climatization of animal houses. Heat and moisture production at animal and house levels. Research Centre Bygholm, Danish Institute of Agricultural Sciences.
Pedersen, S., Blanes-Vidal, V., Jorgensen, H., Chwalibog, A., Haeussermann, A., Heetkamp, M. J., & Aarnink, A. J. (2008). Carbon dioxide production in animal houses: A literature review. Agric. Eng. Int.: CIGR J.
Phillips, V. R., Lee, D. S., Scholtens, R., Garland, J. A., & Sneath, R. W. (2001). A review of methods for measuring emission rates of ammonia from livestock buildings and slurry or manure stores, Part 2: Monitoring flux rates, concentrations and airflow rates. J. Agric. Eng. Res., 78(1), 1-14. https://doi.org/10.1006/jaer.2000.0618.
Powers, W., & Capelari, M. (2016). Analytical methods for quantifying greenhouse gas flux in animal production systems. J. Animal Sci., 94(8), 3139-3146. https://doi.org/10.2527/jas.2015-0017.
RAMIRAN. (2011). Glossary of terms on livestock and manure management. Retrieved from http://ramiran.uvlf.sk/doc11/RAMIRAN%20Glossary_2011.pdf.
Samer, M., Ammon, C., Loebsin, C., Fiedler, M., Berg, W., Sanftleben, P., & Brunsch, R. (2012). Moisture balance and tracer gas technique for ventilation rates measurement and greenhouse gases and ammonia emissions quantification in naturally ventilated buildings. Building Environ., 50, 10-20. https://doi.org/10.1016/j.buildenv.2011.10.008.
Sherman, M. H. (1990). Tracer-gas techniques for measuring ventilation in a single zone. Building Environ., 25(4), 365-374. https://doi.org/10.1016/0360-1323(90)90010-O.
Snoek, D. J. W., Stigter, J. D., Ogink, N. W. M., & Groot Koerkamp, P. W. G. (2014). Sensitivity analysis of mechanistic models for estimating ammonia emission from dairy cow urine puddles. Biosyst. Eng., 121, 12-24. https://doi.org/10.1016/j.biosystemseng.2014.02.003.
Wang, X., Ndegwa, P. M., Joo, H., Neerackal, G. M., Stockle, C. O., Liu, H., & Harrison, J. H. (2016). Indirect method versus direct method for measuring ventilation rates in naturally ventilated dairy houses. Biosyst. Eng., 144, 13-25. https://doi.org/10.1016/j.biosystemseng.2016.01.010.
Wrage, N., Velthof, G. L., van Beusichem, M. L., & Oenema, O. (2001). Role of nitrifier denitrification in the production of nitrous oxide. Soil Biol. Biochem., 33(12), 1723-1732. https://doi.org/10.1016/S0038-0717(01)00096-7.
Timothy J. Shelford
Department of Biological and Environmental Engineering
Cornell University
Ithaca, NY, USA
A. J. Both
Department of Environmental Sciences
Rutgers University
New Brunswick, NJ, USA
Key Terms
• Psychrometric chart
• Heating
• Lighting
• Shading
• Mechanical cooling
• Evaporative cooling
• Ventilation
• Installation cost
• Operating cost
Introduction
Controlled environment crop production involves the use of structures and technologies to minimize or eliminate the potentially negative impact of the weather on plant growth and development. Common structures include greenhouses (which can be equipped with a range of technologies depending on economics, crops grown and grower preferences), and indoor growing facilities (e.g., growth chambers, plant factories, shipping containers, and vertical farms in high-rise buildings). While each type of growing facility has unique challenges, many of the processes, principles, and technology solutions are similar. This chapter describes approaches to environmental control in plant production facilities with a focus on technologies used for crop production and light control.
Outcomes
After reading this chapter, you should be able to:
• List and explain the critical environmental control challenges for plant production in controlled environments
• Perform design calculations for systems used for plant production in controlled environments
• Calculate the installation and operating cost estimates of lighting systems for plant production in controlled environments
Concepts
Greenhouses were developed to extend the growing season in colder climates and to allow the production of perennial plants that would not naturally survive cold winter months. In providing an optimal environment for a crop, whether in a greenhouse or indoor growing facility, the air temperature is a critical factor that impacts plant growth and development. An equally important and related factor is the moisture content of the air (expressed as relative humidity). Plant growth depends on transpiration, a process by which water and nutrients from the roots are drawn up through the plant, culminating in evaporation of the water through the stomates located in the leaves. (Stomates are small openings that allow for gas exchange. They are actively controlled by the plant.) The transpiration of water through the stomates also results in cooling. Under high relative humidity conditions, the plant is unable to transpire effectively, resulting in reduced growth and, in some cases, physiological damage. Growers seek to create ideal growing environments in greenhouses and other indoor growing facilities by controlling heating, venting, and cooling (Both et al., 2015).
Psychrometric Chart
Knowledge of the relationship between temperature and relative humidity is critical in the design of heating, cooling, and venting systems to maintain the desired environmental conditions inside plant production facilities. The psychrometric chart (Figure 5.1.1) is a convenient tool to help determine the properties of moist air. With values of only two parameters (e.g., dry-bulb temperature and relative humidity, or dry-bulb and wet-bulb temperatures), other air properties can be read from the chart (some interpolation may be necessary). The fundamental physical properties of air used in the psychrometric chart are described below.
• Dry-bulb temperature (Tdb, °C) is air temperature measured with a regular thermometer. In a psychrometric chart (Figure 5.1.1), the dry-bulb temperature is read from the horizontal axis.
• Wet-bulb temperature (Twb, °C) is the air temperature measured when air is cooled to saturation (i.e., 100% relative humidity) by evaporating water into it. The energy (latent heat) required to evaporate the water comes from the air itself. The wet-bulb temperature can be measured by keeping the sensing tip of a thermometer moist (e.g., by surrounding it with a wick connected to a water reservoir) while the thermometer is moved through the air rapidly, or by blowing air past the moist (and stationary) sensing tip. In a psychrometric chart (Figure 5.1.1), the wet-bulb temperature is read from the horizontal axis by following the line of constant enthalpy from the initial condition (e.g., the intersection of the dry-bulb temperature and relative humidity combination) to the saturation line (100% relative humidity).
• Dewpoint temperature (Td, °C) is the air temperature at which condensation occurs when moist air is cooled. In a psychrometric chart (Figure 5.1.1), the dewpoint temperature is read from the horizontal axis after a horizontal line of constant humidity ratio is extended from the initial condition (e.g., the intersection of the dry-bulb temperature and relative humidity combination) to the saturation line (100% relative humidity).
• Relative humidity (RH, %) is the level of air saturation with water vapor. In a psychrometric chart (Figure 5.1.1), the curved lines are lines of constant relative humidity.
• Humidity ratio (kg kg−1) is the mass of water vapor contained in a unit mass of dry air. In a psychrometric chart (Figure 5.1.1), the humidity ratio is read from the vertical axis.
• Enthalpy (kJ kg−1) is the energy content of a unit mass of dry air, including any contained water vapor. The psychrometric chart (Figure 5.1.1) typically presents lines of constant enthalpy.
• Specific volume (m3 kg−1) is the volume of a unit mass of dry air; it is the inverse of the air density. The psychrometric chart (Figure 5.1.1) presents lines of constant specific volume. Several of these relationships can also be approximated numerically, as sketched below.
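When a printed chart is not at hand, some of these properties can be approximated numerically. The following Python sketch is a supplement to the chapter; it assumes the Magnus approximation for saturation vapor pressure and sea-level barometric pressure (101.325 kPa), and its function names and coefficients are illustrative, not from this text.

```python
import math

P = 101.325  # barometric pressure, kPa (sea level assumed)

def p_sat(t_c):
    """Saturation vapor pressure (kPa), Magnus approximation (assumed form)."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(t_db, rh):
    """Humidity ratio W (kg water per kg dry air) from Tdb (C) and RH (%)."""
    p_v = rh / 100 * p_sat(t_db)          # partial vapor pressure, kPa
    return 0.622 * p_v / (P - p_v)

def dewpoint(t_db, rh):
    """Dewpoint (C): invert the Magnus formula at the actual vapor pressure."""
    p_v = rh / 100 * p_sat(t_db)
    x = math.log(p_v / 0.6112)
    return 243.12 * x / (17.62 - x)

# Example: greenhouse air at 25 C and 70% RH
print(f"W  = {humidity_ratio(25, 70):.4f} kg/kg")   # ~0.0139
print(f"Td = {dewpoint(25, 70):.1f} C")             # ~19.1
```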
Heating
A major expense of operating a greenhouse year-round in cold climates is the cost of heating. It is, therefore, important to understand the major modes of heat loss when designing or operating a greenhouse. Heat loss occurs from the structure directly through conduction, convection, and radiation. Depending on location, when estimating heat losses, it may be necessary to include heat loss around the outside perimeter, as well as the impact of high outside wind speeds and/or large temperature differences between the inside and outside of the greenhouse (Aldrich and Bartok, 1994).
Estimating Heat Needs
Estimating the heat losses due to conduction, convection, radiation, and infiltration, requires both the inside and outside air temperatures. The inside air temperature is usually based on the nighttime set point required by the crop. In the absence of specific crop requirements, typically 16°C can be used as a minimum. If the greenhouse is to be used year-round, typically the 99% winter design dry-bulb temperature is used for the outside temperature. The 99% winter design dry-bulb temperature is the outdoor temperature that is only exceeded 1% of the time (based on 30 years of data for the months December, January, and February collected at or near the greenhouse location). The term “exceeded” in the previous sentence means “colder than.” Such values for many locations throughout the world are published by ASHRAE (2013).
Calculating the exchange of heat (by conduction, convection, and radiation) is a complex process that usually involves making many simplifying assumptions. Solutions often require iterative calculations that are tedious without the help of computing tools. Computing software such as EnergyPlus™ (Crawley, 2001) and Virtual Grower (USDA-ARS, 2019) are available for heat loss calculations. However, even software packages developed for heat loss calculations may not necessarily provide accurate results.
Other methods that greatly simplify performing heat loss calculations using heat transfer coefficients are available. Heat transfer coefficients combine the effects of conduction, convection, and radiation in a single coefficient. Since these processes depend on many factors other than the temperature differential, their accuracy is not high, especially when conditions are extreme, or outside of typical operating ranges. However, for quick estimates that are not computationally intensive, coefficient-based calculations may be useful to a designer or operator. Equation 5.1.1 provides a means to solve for the conductive, convective, and radiative heat losses:
$q_{ccr} = UA_{c}(t_{i} - t_{o})$
where qccr = heat loss by conduction, convection, and radiation (W)
U = overall heat transfer coefficient (W m−2 °C−1)
Ac = area of the greenhouse surface (walls and roof) (m2)
ti = greenhouse (inside) air temperature (°C)
to = ambient (outside) air temperature (°C); the 99% design temperature is commonly used for this parameter (see text)
The overall heat transfer coefficients for typical greenhouse materials are listed in Table 5.1.1.
Equation 5.1.2 is for solving the heat loss due to infiltration:
$q_{i} = \rho_{i} NV[c_{pi}(t_{i}-t_{o})+h_{fg}(W_{i}-W_{o})]$
where qi = heat loss by infiltration (W)
ρi = density of the greenhouse air (kg m−3)
N = infiltration rate (s−1)
V = volume of the greenhouse (m3)
cpi = specific heat of the greenhouse air (J kg−1 °C−1)
ti = greenhouse (inside) air temperature (°C)
to = outside air temperature (°C)
hfg = latent heat of vaporization of water at ti (J kg−1)
Wi = humidity ratio of the greenhouse air (kgwater kgair−1)
Wo= humidity ratio of the outside air (kgwater kgair−1)
Table $1$: Approximate overall heat transfer coefficients (U-values) for select greenhouse glazing methods and materials (ASAE Standards, 2003).

Greenhouse Covering: U-value (W m−2 °C−1)
Single glass, sealed: 6.2
Single glass, low emissivity: 5.4
Double glass, sealed: 3.7
Single plastic: 6.2
Single polycarbonate, corrugated: 6.2–6.8
Single fiberglass, corrugated: 5.7
Double polyethylene: 4
Double polyethylene, IR inhibited: 2.8
Rigid acrylic, double-wall: 3.2
Rigid polycarbonate, double-wall[1]: 3.2–3.6
Rigid acrylic, w/ polystyrene pellets[2]: 0.57
Double polyethylene over glass: 2.8
Single glass and thermal curtain[3]: 4
Double polyethylene and thermal curtain[3]: 2.5

[1] Depending upon the spacing between walls.
[2] 32 mm rigid acrylic panels filled with polystyrene pellets.
[3] Only when the curtain is closed and well-sealed.
Select heat transfer coefficients (U-values; Table 5.1.1) and infiltration rates (Table 5.1.2) with caution when performing heat loss calculations. Infiltration rates depend highly on the magnitude and direction of the wind, among other factors.
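To illustrate how Equations 5.1.1 and 5.1.2 combine into a design-load estimate, the following Python sketch uses illustrative inputs: a double-polyethylene house (Table 5.1.1), a new-construction infiltration rate (Table 5.1.2), and assumed geometry, air properties, and humidity ratios.

```python
# Combined heat-loss estimate (Equations 5.1.1 and 5.1.2), illustrative inputs.
U = 4.0            # W m-2 C-1 (double polyethylene, Table 5.1.1)
A_c = 520.0        # glazing surface area, m2 (assumed geometry)
V = 940.0          # interior air volume, m3 (assumed geometry)
N = 3.0e-4         # infiltration rate, s-1 (new double film, Table 5.1.2)
t_i, t_o = 16.0, -15.0     # inside set point and 99% design temperature, C

rho = 1.2          # greenhouse air density, kg m-3 (assumed)
c_p = 1006.0       # specific heat of air, J kg-1 C-1 (assumed)
h_fg = 2.46e6      # latent heat of vaporization, J kg-1 (assumed, ~16 C)
W_i, W_o = 0.008, 0.001    # humidity ratios, kg/kg (assumed)

# Equation 5.1.1: conduction, convection, and radiation losses
q_ccr = U * A_c * (t_i - t_o)                                   # W

# Equation 5.1.2: infiltration losses (sensible + latent)
q_inf = rho * N * V * (c_p * (t_i - t_o) + h_fg * (W_i - W_o))  # W

print(f"q_ccr = {q_ccr/1000:.1f} kW, q_inf = {q_inf/1000:.1f} kW, "
      f"total = {(q_ccr + q_inf)/1000:.1f} kW")
```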
Cooling and Cooling Methods
During warmer periods of the year, the temperature inside the growing area of a plant production facility could be much higher than the outside temperature (as occurs inside a closed car on a sunny day). High temperatures inside greenhouses can depress plant growth and, in extreme cases, kill a crop. Cooling systems are essential for plant production facilities that are used year-round.
Table $2$: Estimated infiltration rates for greenhouses by type and age of construction (ASAE Standards, 2003).

New construction:
Double plastic film: 2.13 × 10−4–4.13 × 10−4 s−1 (0.75–1.5 h−1)
Glass or fiberglass: 1.43 × 10−4–2.83 × 10−4 s−1 (0.50–1.0 h−1)
Old construction:
Glass, good maintenance: 2.83 × 10−4–5.63 × 10−4 s−1 (1.0–2.0 h−1)
Glass, poor maintenance: 5.63 × 10−4–11.13 × 10−4 s−1 (2.0–4.0 h−1)

Note: Values are internal air volume exchanges per unit time (N, in s−1 or h−1). High winds or direct exposure to wind will increase infiltration rates; conversely, low winds or protection from wind will reduce them.
Mechanical Cooling (Air Conditioning)
Although air conditioning of greenhouses is technically feasible, the installation and operating costs can be very high, particularly during the summer months. The most economical time to use air conditioners in greenhouses is during the spring and autumn when the heat load is relatively low and the crop may benefit from CO2 enrichment. Air conditioning is an alternative to using ventilation to manage humidity and control temperature. By definition, air conditioning is a thermodynamic process that removes heat and moisture from an interior space (e.g., the interior of a controlled environment plant production facility) to improve its conditions. It involves a mechanical refrigeration cycle that forces a refrigerant through a repeating sequence of compression and expansion, causing evaporation and condensation of the refrigerant and thereby extracting heat (and moisture) from the plant growing area.
Mechanical cooling may be necessary for indoor growing facilities. Typically, indoor growing facilities operate with minimal exchange rates with the outside air, and so air conditioning becomes one of the ways to remove the humidity generated by plants during transpiration. It is essential to insulate and construct the building properly to minimize solar heat gain in indoor facilities that may add to the heat load. Additionally, it is crucial to know the heat load from electric lamps providing the energy needed for photosynthesis to size the air conditioner adequately.
Evaporative Cooling
Sometimes during the warm summer months, regular ventilation and shading (e.g., whitewash or movable curtains) are not able to keep the greenhouse temperature at the desired set point, so additional cooling is needed. Growers typically use evaporative cooling as a simple and relatively inexpensive cooling method. The process of evaporation requires heat; this heat (energy) comes from the surrounding air, thereby causing the air temperature to drop. Simultaneously, the humidity of the air increases as the evaporated water becomes part of the surrounding air mass. The maximum amount of cooling possible with evaporative cooling systems depends on the initial properties of the outside air: the relative humidity (the drier the air, the more water it can absorb, and the lower the final air temperature will be) and the air temperature (warmer air can carry more water vapor than colder air). Two evaporative cooling systems are used to manage greenhouse air temperatures during periods when ventilation with outside air alone cannot maintain the set point temperature: the pad-and-fan system and the fog system.
Pad-and-Fan System
Pad-and-fan systems include an evaporative cooling pad installed as a segment of the greenhouse wall, typically on the wall opposite the exhaust fans. Correctly installed pads allow all incoming ventilation air to pass through them before entering the greenhouse environment (Figure 5.1.2). The pads are made from corrugated material (impregnated paper or plastic) glued together in a way that allows maximum contact between the air and the wet pad material. Water is introduced at the top of the pad and released through small holes spaced uniformly along the entire length of the supply pipe to provide even wetting. Excess water is collected at the bottom of the pad and returned to a sump tank for reuse. The sump tank is fitted with a float valve to manage make-up water, which compensates for the portion of the recirculating water lost through evaporation and dilutes the salt concentration that would otherwise increase in the remaining water over time. It is common practice to continuously bleed off approximately 10% of the returning water to a designated drain to prevent excessive salt build-up (crystals) on the pad material that may reduce pad efficiency. During summer operation, it is common to "run the pads dry," i.e., to stop the flow of water while keeping the ventilation fans running at night, to prevent algae build-up that can also reduce pad efficiency. The cooled (and humidified) air exits the pad and moves through the greenhouse, picking up heat from the greenhouse interior. In general, pad-and-fan systems used in greenhouses experience a temperature gradient between the inlet (pad) and the outlet (exhaust fan). In properly designed systems, this temperature gradient is kept low (up to 4°–6°C is possible) to provide a uniform environment for all the plants.
The required evaporative pad area depends on the pad thickness and can be calculated by:
$A_{pad} = \frac{\text{total greenhouse ventilation fan capacity}}{\text{recommended air velocity through pad}}$
For example, for 10 cm thick pads, the fan capacity (in m3 s−1) should be divided by the recommended air velocity through the pad, 1.3 m s−1 (ASAE Standards, 2003). For 15 cm thick pads, the fan capacity should be divided by the recommended air velocity through the pad, 1.8 m s−1. The recommended minimum pump capacity is 26 and 42 L min−1 per linear meter of pad, and the minimum sump tank capacity is 33 and 41 L per m2 of pad area, for the 10 and 15 cm pads, respectively. For evaporative cooling pads, the estimated maximum water usage can be as high as 17–20 L h−1 per m2 of pad area.
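A quick Python sketch of this sizing rule follows; the fan capacity and pad height are assumed inputs for illustration.

```python
# Evaporative pad sizing (Equation 5.1.3) for a 10 cm thick pad.
fan_capacity = 20.0    # total greenhouse ventilation fan capacity, m3 s-1 (assumed)
face_velocity = 1.3    # recommended air velocity, m s-1 (1.8 for a 15 cm pad)

pad_area = fan_capacity / face_velocity          # m2
print(f"required pad area: {pad_area:.1f} m2")   # ~15.4 m2

# For an assumed 2 m tall pad, the required pad length follows directly:
pad_height = 2.0                                 # m (assumed)
print(f"pad length: {pad_area / pad_height:.1f} m")
```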
High-Pressure Fog System
The other evaporative cooling system commonly used is the fog system. This system is typically used in greenhouses with natural ventilation systems because natural ventilation does not have the force to overcome the additional resistance to airflow resulting from an evaporative cooling pad. The nozzles of a fog system are typically installed throughout the greenhouse to provide a more uniform cooling pattern compared to the pad-and-fan system. The recommended spacing is approximately one nozzle for every 5–10 m2 of growing area. The water pressure used in greenhouse fog systems is relatively high (≥3,450 kPa) and enough to produce very fine droplets that evaporate before reaching plant surfaces. The water usage per nozzle is small, approximately 3.8–4.5 L h−1. Water for fogging systems should be free of any impurities to prevent clogging of the nozzle openings. Therefore, fog systems require water treatment (filtration and purification) and a high-pressure pump. Thus, fog systems can be more expensive to install compared to pad-and-fan systems, but the resulting cooling is more uniform.
Ventilation
To maintain optimum growing conditions, warm and humid indoor air needs to be replaced with cooler and drier outside air. Plant production facilities use either mechanical or natural ventilation to accomplish this. Mechanical ventilation requires inlet openings, exhaust fans, and electric power to operate the fans. When appropriately designed, mechanical ventilation can provide adequate cooling and dehumidification under a wide range of weather conditions throughout many locations with temperate climates. The typical design specification for maximum mechanical ventilation capacity is 0.05 or 0.06 m3 s−1 per m2 of floor area for greenhouses with or without a shade curtain, respectively. When deliberate obstructions to the air intake are present (such as insect exclusion screens and an evaporative cooling pad), the inlet area should be carefully sized to overcome the increased resistance to airflow that would result in a reduction in the total air exchange rate relative to fully opened and unobstructed inlets. In that case, ventilation fans should be able to overcome the additional airflow resistance created by the screen or evaporative cooling pad. Multiple and staged fans can provide different ventilation rates based on environmental conditions. Variable-speed fan motors allow for more precise control of the ventilation rate and can reduce overall electricity consumption.
Natural ventilation works on two physical phenomena: thermal buoyancy (warm air is less dense and rises), and the wind effect (wind blowing outside a structure creates small pressure differences between the windward and leeward sides of the structure causing air to move towards the leeward side). All that is needed are carefully placed inlet and outlet openings, vent window motors, and electricity to operate the motors. In some naturally ventilated greenhouses, the vent window positions are managed manually (e.g., in a low-tech plastic tunnel production system), eliminating the need for motors and electricity, but this increases the amount of labor, especially where frequent adjustments are necessary. Electrically operated natural ventilation systems use much less power than mechanical (fan) ventilation systems. When using a natural ventilation system, additional cooling can be provided by a fog system, for example, provided the humidity of the air is not too high. Unfortunately, natural ventilation does not work very well on warm days when the wind velocity is low (less than 1 m s−1) or when the facility uses a shade system that obstructs airflow. When using natural or forced ventilation alone, the indoor temperature cannot be lowered below the outdoor temperature without additional cooling capabilities (typically evaporative cooling).
For most freestanding greenhouses, mechanical ventilation systems usually move the air along the length of the greenhouse (i.e., the exhaust fans and inlet openings are installed in opposite end walls). To avoid excessive airspeed within the greenhouse, the inlet to fan distances are generally limited to 70 to 80 m, provided local climates are not too hot. Natural ventilation systems for freestanding greenhouses usually provide cross ventilation using sidewall windows and roof vents.
In gutter-connected greenhouses (Figure 5.1.3), mechanical ventilation system inlets and outlets can be installed in the side or end walls, while natural ventilation systems usually consist of only roof vents. Sidewall vents have limited influence on the ventilation of interior sections in larger greenhouses. The ultimate natural ventilation system is the open-roof greenhouse design, in which the indoor temperature seldom exceeds the outdoor temperature. This kind of performance is not attainable with mechanically ventilated greenhouses because of the substantial amounts of air that such systems would have to move through the greenhouse to accomplish the same result.
Whatever the ventilation system used, uniform air distribution inside the greenhouse is essential because uniformity in crop production is only possible when all plants experience the same environmental conditions. Therefore, the use of horizontal airflow fans is common to ensure proper air mixing. The recommended horizontal airflow fan capacity is approximately 0.015 m3 s−1 per m2 of the growing area.
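As a quick check of the design figures in this section, the following sketch sizes the maximum mechanical ventilation and horizontal airflow fan capacities for an assumed 1,000 m2 greenhouse (treating floor area and growing area as equal, which is an assumption for this illustration).

```python
floor_area = 1000.0   # greenhouse floor area, m2 (assumed)

# Maximum mechanical ventilation: 0.05 m3 s-1 per m2 with a shade curtain,
# 0.06 m3 s-1 per m2 without one.
vent_with_curtain = 0.05 * floor_area       # m3 s-1
vent_without_curtain = 0.06 * floor_area    # m3 s-1

# Horizontal airflow (HAF) fans: ~0.015 m3 s-1 per m2 of growing area.
haf_capacity = 0.015 * floor_area           # m3 s-1

print(f"max ventilation: {vent_with_curtain:.0f}-{vent_without_curtain:.0f} m3 s-1")
print(f"HAF capacity: {haf_capacity:.0f} m3 s-1")
```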
Lighting and Shading
Since light is the driving force for photosynthesis and plant growth, managing the light environment of a growing facility is of prime importance. For many crops, plant growth is proportional to the amount of light the crop receives over the entire growing period. Both the instantaneous light intensity and the daily light integral are important parameters to growers. Plant scientists define light in the 400–700 nm waveband as photosynthetically active radiation (PAR). PAR represents the (instantaneous) light intensity and has the units μmol m−2 s−1 (ASABE Standards, 2017). When referring to the amount of light a crop receives over some time, such as an hour or a day, the sum of the instantaneous PAR intensities is calculated, and the resulting values are often called light integrals. Usually, growers measure light integrals over an entire day (sunrise to sunrise), resulting in the daily light integral (DLI), with the unit mol m−2 d−1. Instantaneous measures of PAR may be used to trigger control actions such as turning supplemental lighting on or off. Some growers deploy movable shade curtains to manage the light intensity. Daily light integrals (DLIs) can be used by growers to ensure a consistent level of crop growth by maintaining a consistent integral from day to day (whether from natural light, supplemental lighting, or a mix), or to track the accumulated radiation input that serves as the energy source for photosynthesis. The total DLI received by a plant canopy is the sum of the amount of sunlight received plus any contribution from the supplemental lighting system (for greenhouse production). Equation 5.1.4 determines the instantaneous PAR intensity (μmol m−2 s−1) necessary to meet a DLI target (mol m−2 d−1) over a specific number of hours:
$\text{intensity}(\frac{\mu mol}{m^{2}s}) = \frac{DLI}{\text{h per day}} \times \frac{1\ h}{3,600\ s} \times \frac{1\times10^{6} \mu mol}{1\ mol}$
For example, using Equation 5.1.4, an intensity of 197 μmol m−2 s−1 is needed to deliver a target DLI of 17 mol m−2 d−1 over 24 h (one day).
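Equation 5.1.4 is convenient to wrap in a small helper, sketched below; it reproduces the 197 μmol m−2 s−1 example and shows the effect of a shorter photoperiod.

```python
def required_ppfd(dli, photoperiod_h):
    """Instantaneous PAR intensity (umol m-2 s-1) needed to reach a DLI
    target (mol m-2 d-1) over a given photoperiod (h), per Equation 5.1.4."""
    return dli / photoperiod_h / 3600 * 1e6

print(f"{required_ppfd(17, 24):.0f} umol m-2 s-1")  # ~197 over 24 h
print(f"{required_ppfd(17, 16):.0f} umol m-2 s-1")  # ~295 over a 16-h photoperiod
```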
Plant Sensitivity to Light
Human eyes have a different sensitivity to (natural) light (or radiation) compared to how plants respond to light (Figure 5.1.4). Human eyes are most sensitive to green wavelengths (peak at 555 nm), while most plants exhibit peak sensitivities in the blue (peaking at 430 nm) and orange-red part (peaking at 610 nm) of the visible light spectrum. This difference in sensitivity means the human eye is not a very useful “sensor” in terms of assessing whether a particular light environment is suitable for plant growth and development. While PAR is light across the 400–700 nm waveband, as shown in Figure 5.1.4, plants are also sensitive to UV (280–400 nm) and far-red (700–800 nm) radiation. Therefore, it is best to use specially designed sensors (PAR sensors and spectroradiometers) to evaluate the light characteristics in environments used for plant production.
Natural and Electric Lighting
Natural light from the sun is an essential aspect of greenhouse production, both in terms of plant growth and development, but also in terms of energy balance (greenhouse heating and cooling). In indoor growing facilities, light is solely provided by electric lighting, though the amount of natural light striking the external surface of the building containing an indoor growing facility can also substantially affect the energy balance of the facility.
Direct and Diffuse Sunlight
The earth’s atmosphere contains many particles (gas molecules, water vapor, and particulate matter) that can change the direction of the light from the sun. On a clear day, there are fewer particles in the atmosphere, and sunlight travels unimpeded before reaching the ground. This type of sunlight is called direct light or direct radiation. On cloudy days, the atmosphere contains more particles (mainly water vapor), and the interaction of sunlight with all those particles causes directional changes that are mostly random. As a result, on cloudy days, sunlight comes from many directions. This type of sunlight is called diffuse light or diffuse radiation. These frequent light-particle interactions will also result in a reduction in light intensity compared to direct radiation.
Depending on the make-up of the atmosphere (cloudiness), sunlight will reach the surface as direct radiation, diffuse radiation, or a combination of the two. Direct radiation does not reach the lower canopy layers shaded by plant tissues (mostly leaves); however, because diffuse radiation is omnidirectional, it can penetrate deeper into a plant canopy (particularly in a multi-layered, taller canopy). Therefore, though the amount of diffuse radiation may appear small, it can boost plant production because it reaches more of the plant surfaces involved in photosynthesis. Some greenhouse glazing materials (e.g., polyethylene film) diffuse incoming solar radiation more than others (Table 5.1.3), and while the overall light intensity is often lower in greenhouses covered with a diffusing glazing material, crop growth and development is not necessarily reduced proportionally because more of the canopy surfaces are receiving adequate light for photosynthesis.
The amounts of diffuse radiation are measured with a light sensor placed behind a disc that casts a precise shadow over the sensor, so it blocks all direct radiation. The amount of direct radiation is determined by using a second sensor that measures total (direct plus diffuse) radiation (direct radiation = total radiation – diffuse radiation).
Table $3$: Characteristics of glazing materials.

Glazing Material: Direct PAR Transmittance (%) / Infrared (heat) Transmittance[a] (%) / Ultraviolet Transmittance[b] (%) / Life Expectancy (years)
Glass: 90 / 0 / 60–70 / 30
Acrylic[c]: 89 / 0 / 44 / 10–15
Polycarbonate[c]: 80 / 0 / 18 / 10–15
Polyethylene[d]: 90 / 45 / 80 / 3–4
PE, IR & AC[d][e]: 90 / 30 / 80 / 3–4

[a] For wavelengths above 3,000 nm.
[b] For wavelengths between 300 and 400 nm.
[c] Twin wall.
[d] Single layer.
[e] Polyethylene film with an infrared barrier and an anti-condensate surface treatment.
As sunlight reaches the external surfaces of the greenhouse structure, the light can be reflected, absorbed, or transmitted, and these processes often occur simultaneously. The quantities of reflected, absorbed, or transmitted light depend on the (glazing) materials involved, the time of day, the time of year, and whether the grower uses any control strategies (e.g., whitewash or shade curtains). Also, overhead equipment can block light and reduce the total amount of sunlight that reaches the plant canopy. It is not uncommon, even in modern greenhouses, for the plants to receive on average only around 50–60% of the sunlight available outside the greenhouse structure. Since every percent of additional light received by the plant canopy counts, it is essential to design greenhouses carefully with optimum light transmission in mind.
Effect of Greenhouse Orientation
Another consideration, particularly at higher latitudes, is the orientation of the greenhouse. At latitudes above 40 degrees, orienting the gutters of a greenhouse along an east-west direction helps capture the most light during the winter months, when the sun is low in the sky and the total amount of sunlight is small. However, with such an orientation, the shadow bands created by structural components and overhead equipment move more slowly across the crop. This can be a particularly challenging issue when the crop is grown in the greenhouse for only a short amount of time (e.g., leafy greens); in that case, it is preferable to orient the greenhouse north-south. Aside from any shadows, the intensity of sunlight can be considered uniform throughout the growing area.
Shading
During bright sunny days, there is the risk of greenhouse crops being exposed to too much light, thus requiring the use of shade curtains to help reduce plant stress from high light intensities. On variably cloudy days, the light conditions inside a greenhouse can fluctuate rapidly from low light to high light conditions. Such swings in light conditions can negatively impact plant growth and development, so growers may have to deploy both the supplemental lighting system and the shade curtains to provide more stable growing conditions. Managing the supplemental lighting system often involves controlling the shade curtains.
Proper shading is essential for some crops. For example, lettuce grown in a greenhouse is subject to tipburn (Figure 5.1.5) if light, temperature, and humidity conditions are not kept within specific ranges.
One strategy is to apply a whitewash treatment to the greenhouse during peak solar radiation months and to wash it off at the end of the natural growing season when light conditions diminish. Drawbacks include increased labor costs and additional requirements for supplemental lighting. Movable shade curtains are another effective strategy for managing tipburn, if properly designed and used. Deploying shade curtains too late during the day can cause tipburn in lettuce (too much light increases the growth rate beyond the point where the transport of calcium can keep up), and deploying them too early can result in extra hours of supplemental lighting. Movable shade curtains, depending on the design, can also reduce heat loss during the night, but this dual use is often a compromise between optimum shading capabilities and maximum energy retention. A more comprehensive solution is two different curtains, each optimized for its purpose, but such a dual curtain system doubles the installation cost.
Common Types of Artificial Lighting
The two most common types of greenhouse lighting are gas discharge and light-emitting diode (LED) lamps (Figure 5.1.6). Gas discharge lamps, such as fluorescent (FL), metal halide (MH), and high-pressure sodium (HPS) lamps, produce light by passing a current through an ionized gas. The spectrum of light produced is a function of the gas used and the composition of the electrodes. MH lamps provide a more white-colored light, while HPS light is more yellowish orange (similar to traditional streetlamp light).
LED lamps use semiconductors that release energy in the form of photons when sufficient current is passed through them. The wavelength of light emitted is determined by the bandgap of the semiconductor and any phosphors used to convert the monochromatic LED light. Unlike gas discharge lamps, LEDs without phosphors produce light within a relatively narrow waveband. To get a broad-spectrum output, such as white light, manufacturers often use high-efficiency blue LEDs and convert the output to white light using yellow phosphors. Some plants benefit from small amounts of UV radiation (280–400 nm), but people working in these environments should wear special eye and skin protection to minimize the harmful effects presented by UV radiation.
Lighting Efficacy
At the time of this writing (early 2020), the most energy-efficient lamps available for supplemental lighting are LED-based fixtures (Mitchell et al., 2012; Wallace and Both, 2016). However, not all LED fixtures are designed for plant growth applications. When comparing the efficiency of lamps, the wall-plug energy use of the entire fixture must be considered. Some LED fixtures rely on active cooling using ventilation fans (in some cases water cooling) to prevent overheating that can shorten their lifespans. Active cooling requires additional energy, which must be accounted for along with other losses, such as those from transformers and drivers. Ideally, manufacturers publish an efficacy measurement, i.e., light output divided by energy input, or μmol s−1 of PAR (light) output per W (electricity) input (μmol J−1), for their fixtures (Both et al., 2017). Efficacies for lamps used in plant growth applications are shown in Table 5.1.4. Fixture efficacies continue to increase, with several LED fixtures now approaching 3 μmol J−1. Higher efficacy fixtures use less electrical energy to produce the same amount of light.
A Note on Lighting Units
In the horticulture industry, it is still common to use light units of lux, lumens, or foot-candles. However, this is not particularly useful since lux, lumens, and foot candles are based on the sensitivity of the human eye, which is most sensitive to the green part of the visible light spectrum (Figure 5.1.4). Ideally, the total light output of supplemental lighting devices should be reported in integrated PAR units (μmol s−1). Note that this unit is not the same as the unit used for instantaneous PAR intensity (μmol m−2 s−1). Users should be aware of this when purchasing lighting fixtures and make sure that the proper instruments were used to assess the light output.
Table $4$: Selected fixture efficacies for several different lamp types used for horticultural applications (CMH = ceramic metal halide, HPS = high-pressure sodium, LED = light emitting diode).

Lamp Type: Power Consumption (W) / Efficacy (μmol J−1)
Incandescent (Edison bulb): 102.4 / 0.32
Compact fluorescent (large bulb): 61.4 / 0.89
CMH (mogul base): 339 / 1.58
HPS (mogul base): 700 / 1.56
HPS (double ended): 1077 / 1.59
LED (bar, passively cooled): 214 / 2.39
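Efficacy values translate directly into the electrical energy needed per mole of PAR. The short sketch below compares two rows of Table 5.1.4; the unit conversion is exact, while the choice of rows is illustrative.

```python
# Electrical energy needed to deliver 1 mol of PAR, from fixture efficacy.
fixtures = {
    "HPS (double ended)": 1.59,          # umol J-1 (Table 5.1.4)
    "LED (bar, passively cooled)": 2.39,  # umol J-1 (Table 5.1.4)
}

for name, efficacy in fixtures.items():
    # 1 mol = 1e6 umol; energy (J) = photons / efficacy; 3.6e6 J per kWh
    kwh_per_mol = 1e6 / efficacy / 3.6e6
    print(f"{name}: {kwh_per_mol:.3f} kWh per mol of PAR")
# HPS: ~0.175 kWh/mol; LED: ~0.116 kWh/mol (~33% less energy for the same light)
```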
Advantages and Disadvantages of Lighting Systems
HPS Lighting System
HPS lamps have long been the preferred lamp type for supplemental lighting applications (Both et al., 1997).
Advantages
• Both lamps and fixtures (including the ballasts and reflectors) are relatively inexpensive and easy to maintain (e.g., bulb replacement and reflector cleaning).
• Before LEDs became available, HPS lamps had the highest conversion efficiency (efficacy), and they produced a sufficiently broad spectrum that was acceptable for a wide range of plant species. The recent introduction of double-ended HPS lamps somewhat increased their efficacy.
Disadvantages
• A major drawback of HPS lamps is the production of a substantial amount of radiant energy, necessitating adequate distance between the lamps and the plant surfaces exposed to this radiation.
• They require a warm-up cycle before they reach maximum output and, once turned off, need a cool-down period before they can be used again.
• As with all lamps, the light output of HPS lamps depreciates over time, requiring bulb replacement every 10,000–15,000 hours of operation.
Since HPS lamps have been in use for several decades, researchers and growers have learned how best to produce their crops with this light source. For example, while the radiant heat production can be considered a drawback, it can also be used as a management tool to help maintain a desirable canopy temperature, and this radiant heat can help reduce the amount of heat energy (provided by the heating system) needed to keep the set point temperature.
LED Lighting Systems
LED lamps (often consisting of arrays of multiple individual LEDs) are a relatively new technology for horticultural applications, and their performance capabilities are still evolving (Mitchell et al., 2015).
Advantages
• The efficacy of carefully designed LED lamps has surpassed the efficacy of HPS lamps, and the heat they produce can be removed more easily by either natural or forced convection.
• The resulting convective heat (warm air emanating from the lamp/fixture) is easier to handle in controlled environment facilities than the radiant heat produced by HPS lamps because air handling is a common process while blocking radiant heat is not.
• LED lamps can be switched on and off rapidly and require a much shorter warm-up period than HPS lamps.
• It is possible to modulate the light intensity produced by LED lamps, either by adjusting the supply voltage or by a process called pulse width modulation (PWM; rapid on/off cycling during adjustable time intervals). By combining (and controlling) LEDs with different color outputs in a single fixture, growers have much more control over the spectrum that these lamps produce, opening up new strategies for growing their crops. This benefit in particular will require substantial additional research to be fully understood and realized.
• LED lamps typically have a longer operating life (up to 50,000 h), but more testing is needed in plant production facilities to validate this estimate.
Disadvantages
• LED fixtures are more expensive than HPS fixtures with similar output characteristics.
• LED lamps typically come as a packaged unit (including LEDs, housing, and electronic driver), making the replacement of failed components almost impossible.
• Because plants are most sensitive to blue and red light in terms of photosynthesis, growers often use LED fixtures that produce a combination of red and blue (magenta) light. The magenta light (Figure 5.1.6) makes it much more challenging to observe the actual color of plant tissue (which is essential for spotting potential abnormalities resulting from pest and/or disease issues) and can make working in such an environment more challenging (it has been reported to make some people uncomfortable).
• Some LED lamps have imperceptible flicker rates that can have health effects on humans with specific sensitivities (e.g., people with epilepsy).
Applications
Heating Systems in Greenhouses
Greenhouses can be heated using a variety of methods and equipment to manage heat losses during the cold season. Typically, fuel is combusted to heat either air or water (steam in older greenhouses) which is circulated through the greenhouse environment. Some greenhouses use infrared heating systems that radiate heat energy to exposed surfaces of the plant canopy. The use of electric (resistance) heating is minimal because of the high operating cost. However, as the costs of fossil fuels rise, electric heating could become competitive even for extensive greenhouse operations in various locations.
Unit Heaters and Furnaces
Typical air heating systems include unit heaters and furnaces (Figure 5.1.7). Either the heat generated by the combustion process is transferred to the greenhouse air through a heat exchanger, or the air from the greenhouse is used as the oxygen source for the combustion process and then released back into the greenhouse. Using heat exchangers allows the combustion air to remain separate from the greenhouse environment (separated combustion), thus minimizing the risk of releasing small amounts of potentially harmful gases (e.g., ethylene, carbon monoxide) into the greenhouse environment. A heat exchanger also increases the air temperature directly without introducing additional moisture.
Using greenhouse air as a source of oxygen for combustion requires properly maintained combustion equipment and complete fuel combustion to ensure that only water vapor and carbon dioxide (CO2) are released into the greenhouse environment. An intermediate approach is to use greenhouse air for combustion and vent the combustion gases outdoors.
Fans are usually incorporated in air heating systems to move and distribute the warm air and ensure even heating of the growing environment. Some greenhouses use inflatable polyethylene ducts (the poly-tube system) placed overhead or under the benches or crop rows to distribute the air; others use strategically placed horizontal or vertical airflow fans. Air heating systems are relatively easy to install at a modest cost, but they provide less uniform heat distribution than hot water heating systems.
Hot Water Heating Systems
Water-based heating systems consist of a boiler and a water circulation system (pumps, mixing valves, and plumbing) (Figure 5.1.8). The boiler generates the heat to raise the temperature of the circulating water. The heated water is pumped to heat the greenhouse through a pipe network or tube distribution system. Usually, the heating pipes are installed on the support posts, around the perimeter, and overhead (sometimes near gutters to enhance snowmelt using the released heat, and spaced evenly between more widely spaced gutters to provide uniform heat delivery). Some greenhouses have floor or bench heating with additional heating tubes installed in the floor or on/near the benches for root-zone heating. These root-zone heating systems have the advantage of providing independent control of root-zone temperatures and delivering uniform heat very close to the plant canopy. However, root-zone heating systems are typically not able to provide sufficient heating capacity during the coldest times of the year, necessitating the use of additional heating in the form of perimeter and overhead heating pipes. A significant benefit of water-based heating systems is the ability to “store” heat in large insulated water tanks. Boilers can be used during the day to produce CO2 for plant consumption, with any surplus heat stored for use during colder periods such as the night, when CO2 supplementation is not required.
Infrared Heating Systems
Infrared heating systems have the advantage of immediate heat delivery once turned on, but only exposed (in terms of line-of-sight) plant canopy surfaces receive the radiant heat. Infrared heating can therefore provide non-uniform heating, especially in crops with a multi-layered canopy. Also, infrared heating systems are typically designed as line sources and require some distance between the source and the radiated canopy surfaces to accomplish uniform distribution. Finally, like hot air systems, infrared heating systems store little heat during operation, so in case of an emergency shutdown, little residual heat is available to extend the heating time before the temperature drops below critical levels.
Alternative Energy Sources and Energy Conservation
The volatility in fossil fuel prices experienced during the last decades has put a greater emphasis on energy conservation and alternative energy sources. Energy conservation measures employed include relatively simple measures such as sealing unintended cracks and openings in the greenhouse glazing, improved insulation of structural components and heat transportation systems, and timely equipment maintenance, as well as more advanced measures such as movable insulation/shade curtains, new heating equipment with higher efficiencies (e.g., condensing boilers, heat pumps, combined heat and power systems), and novel control strategies (e.g., temperature integration, where growers are more concerned with the average temperature a crop receives, within set boundaries, rather than tightly maintaining a specific set point temperature). Some growers delay crop production to periods when the weather is warmer, while others use lower set point temperatures (often requiring more extended production periods and with potential impacts on plant physiology).
Alternative energy heating sources (i.e., non-fossil fuels) used for greenhouse applications include solar electric, solar thermal, wind, hydropower, biomass, and geothermal (co-generation and ground-source, shallow or deep). Many alternative energy installations are viable only under specific climatic conditions and may require significant investments that may require (local or national) financial incentives. Developing energy conservation and alternative energy strategies for greenhouse operations remains challenging because of the considerable differences in size, scope, and local circumstances. Selecting an alternative energy system includes considering economic viability for the greenhouse operation as well as protection of the environment.
Evaporative Cooling Systems
Growers or greenhouse managers often use evaporative cooling as the most affordable way of reducing the air temperature beyond what the ventilation system can achieve by air movement only. The maximum amount of cooling provided by evaporative cooling systems depends on the initial temperature and humidity of the ambient (i.e., outdoor) air. These parameters can be measured relatively easily with a standard thermometer and a relative humidity sensor. With these measurements, the psychrometric chart can be used to determine the corresponding properties of the air, such as the wet-bulb temperature, humidity ratio, enthalpy, etc. With the known wet-bulb temperature, the wet-bulb depression can be calculated to determine the theoretical temperature drop possible by evaporative cooling. Since few engineered systems are 100% efficient, the actual temperature drop achieved by the evaporative cooling system is likely to be 80–90% of the theoretical wet-bulb depression.
Lighting System Design
The concepts described earlier can be used to quantify the instantaneous and integrated light intensities needed to assess the light conditions in plant growth facilities. This information can be used to determine the parameters needed to select fixtures to modify the light environment, e.g., switching the supplemental lighting system on or off, opening or closing a shade curtain (in greenhouses), and, when multi-spectral LEDs are used, changing the light spectrum and/or overall intensity.
Light Requirements
In designing a lighting system for a greenhouse or indoor growing facility, the first step is to determine the light requirements of a particular crop. Research articles or grower handbooks for the crop of interest can provide information about the recommended light intensity and/or the optimum daily light integral (see, for example, Lopez and Runkle, 2017). For crops such as leafy greens grown in a greenhouse, the minimum daily target integral may be as low as 8 to 14 mol m−2 d−1, or as high as 17 mol m−2 d−1 (the maximum daily integral for leaf lettuce before physiological damage occurs as a result of too much light). For vine crops, such as tomatoes, a minimum of 15 mol m−2 d−1 is typically required, while the optimum target can exceed 30 mol m−2 d−1. Generally, as a rule of thumb, for vegetable crops, a 1% increase in the DLI results in a 1% increase in growth (up to a point; Marcelis et al., 2006). Considering the high cost of providing the optimum growing environment, it usually makes economic sense to optimize plant growth whenever possible (Kubota et al., 2016).
Once the DLI for the crop has been determined, the next step is to determine how much supplemental lighting is required to make up any shortfall in natural light. In an indoor growing facility, all light must be supplied by electric lamps, while in a greenhouse, natural lighting typically provides the bulk of the DLI throughout the year. Even in relatively gloomy regions the sun can provide over 70% of the required light for a year-round greenhouse lettuce crop.
Supplemental lighting for greenhouse production is mostly used during the dark winter months when the sun is low and the days are short. Typically, greenhouse lighting systems are designed such that they can provide enough light during the darkest months of the year. To estimate the amount of light available for crop production at a particular location, ideally one would average several years of data so that an atypical year would have a minor impact on the overall trends. In the U.S., a useful resource is the National Solar Radiation Database maintained by the National Renewable Energy Laboratory (NREL) in Golden, Colorado (https://nsrdb.nrel.gov/).
The solar radiation data (i.e., shortwave radiation covering the waveband of approximately 300–3,000 nm) available from NREL are not reported specifically for plant production and are usually expressed in units of J m−2 per unit of time (e.g., an hour or a day). To convert this to a form more useful for planning supplemental lighting systems, the following multiplier can be used (Ting and Giacomelli, 1987):
$1\ \frac{\text{MJ}}{\text{m}^{2}\ \text{day}}\ \text{shortwave radiation}=2.0804\ \frac{\text{mol}}{\text{m}^{2}\ \text{day}}\ \text{PAR}$
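As a minimal sketch, this conversion can be scripted; the example value of 4.5 MJ m−2 d−1 is hypothetical:

```python
# Convert a daily outdoor shortwave total (MJ m-2 d-1) to a PAR integral
# (mol m-2 d-1) using the Ting and Giacomelli (1987) multiplier.
MJ_TO_MOL_PAR = 2.0804  # mol PAR per MJ of shortwave radiation

def shortwave_to_par(daily_shortwave_mj):
    """Outdoor PAR integral (mol m-2 d-1) for a daily shortwave total."""
    return MJ_TO_MOL_PAR * daily_shortwave_mj

# Example: a dark winter day delivering 4.5 MJ m-2 d-1 of shortwave radiation
print(shortwave_to_par(4.5))  # about 9.4 mol m-2 d-1 of PAR, outdoors
```

Note that this gives the PAR integral outside the greenhouse; the fraction transmitted through the glazing must still be accounted for.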
The NREL database covers several locations outside of the USA. For more specific location data, other weather databases maintained by national governments or local weather stations (e.g., radio or TV stations, airports) may have historic solar radiation data available from which average natural daily light integrals can be calculated.
For greenhouse production, the DLI does not have to be exactly the same each day to maximize production. During the seedling stage, many crops can tolerate DLIs much higher than during later stages of growth. For example, greenhouse lettuce typically is limited to 17 mol m−2 d−1 after the canopy has closed, to avoid damage from tipburn (Albright et al., 2000). However, seedlings can be provided with 20 mol m−2 d−1 and some varieties may even benefit from up to 30 mol m−2 d−1. Generally, for hydroponic lettuce, deviating no more than 3 mol m−2 d−1 from the target DLI is acceptable, provided any surplus (or deficit) is compensated for over the following two days.
Once the amount of supplemental lighting necessary has been determined (whether 100% of the DLI for an indoor growing facility or some other fraction of the DLI for a greenhouse), the next step is to determine what intensity of light is required. For indoor facilities, determining the required crop light level is straightforward. For a crop such as lettuce, where there is no requirement for a night break, 24 hours of light per day can be applied. For a greenhouse, the calculation is the same, except that a portion of the DLI will be supplied by natural light. It comes down to a judgement call by the designer with respect to how to size the lighting system, and whether to over- or under-size the lighting capacity to account for extremely dark days when the supplemental lighting system would need to provide nearly all of the light in a greenhouse. Most commercial greenhouse supplemental lighting systems provide an instantaneous intensity between 50 and 200 μmol m−2 s−1 at crop level.
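The relationship between an instantaneous intensity and its contribution to the DLI is a straightforward unit conversion. A minimal sketch (function names are illustrative):

```python
def dli_from_ppfd(ppfd, hours_per_day):
    """DLI contribution (mol m-2 d-1) of light supplied at a constant
    intensity ppfd (umol m-2 s-1) for hours_per_day hours."""
    return ppfd * hours_per_day * 3600 / 1e6

def ppfd_for_dli(target_dli, hours_per_day):
    """Constant intensity (umol m-2 s-1) needed to deliver target_dli
    (mol m-2 d-1) in hours_per_day hours of operation."""
    return target_dli * 1e6 / (hours_per_day * 3600)

print(dli_from_ppfd(150, 18))  # 18 h at 150 umol m-2 s-1 adds ~9.7 mol m-2 d-1
print(ppfd_for_dli(17, 24))    # ~197 umol m-2 s-1 for 17 mol m-2 d-1 indoors
```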
Figure 5.1.9 shows the increase in DLI that can be realized by adding supplemental lighting at three different intensities (50, 100, and 150 μmol m−2 s−1), while operating the lamps for 18 hours per day during November, December, January, and February, for 12 hours per day during October, for 11 hours per day during March, and 2 hours per day during September and April for a total of 2,993 hours per year. As shown in Figure 5.1.9, using this lighting schedule and an intensity of 150 μmol m−2 s−1 results in a more constant light integral over the course of a year.
A significant factor affecting the hours per day that supplemental lighting should be supplied is electricity pricing. Many utilities offer incentives to encourage off-peak usage of electricity, to even out the demand for electricity to all of their customers. It varies by utility providers, but savings as high as 40% on the supply charges of electricity are common for purchasing off-peak power. Typical off-peak periods correspond with nighttime and early morning, for example from 9:00 pm to 7:00 am (10 hours). In addition to saving on the supply price of electricity, it may be possible to avoid demand charges as well. In commercial operations that use a lot of power, electric utilities often collect demand charges based on the largest 15-minute on-peak consumption (kW) during a monthly billing cycle. The demand charge can easily add thousands of dollars to the monthly cost of a grower’s electricity bill. During winter months, it may be unavoidable to light during peak use hours, but during the shoulder months when supplemental lighting is still necessary but is not used as much, it may be worthwhile to disable lighting during on-peak hours and make up any daily deficit the next day during off-peak hours.
Number of Fixtures to Achieve a Target Intensity
The number of fixtures needed to provide the desired intensity depends on the light output of each fixture and the mounting height. In addition, the characteristics of any reflector will affect both the uniformity and intensity of light delivered to the crop (Ciolkosz et al., 2001; Both et al., 2002). The mounting height is defined as the distance between the bottom of the lamp and the top of the plant canopy.
Ideally, the lighting manufacturer will have available an IES (Illuminating Engineering Society) file that contains data on the specific light output pattern of the fixture. Using the IES file and commercially available software, it is possible to design a layout to achieve a target light intensity at a specified mounting height. An additional consideration is the uniformity of the light. Ideally, the light should be as uniform as possible to produce consistent growth throughout the growing area. Keep in mind that, although the light intensity does not change much once the lamp density is determined (Table 5.1.5), light uniformity significantly improves with increasing mounting height. For example, a 0.4 ha greenhouse (assuming an available mounting height of 2.44 m) would need approximately 380 HPS lamps (400 W each, not including the power drawn by the ballast) for a uniform light intensity of 49 μmol m−2 s−1 and 780 lamps for an intensity of 100 μmol m−2 s−1. Additional mounting patterns and resulting average light intensities are shown in Table 5.1.5.
Table 5.1.5: Estimated average light intensities at the top of the plant canopy (in μmol m−2 s−1) throughout a 0.4 ha greenhouse (10 gutter-connected bays of 7.3 m wide by 54.9 m long) for four different mounting heights and 400-watt HPS lamps. Note that these average light intensities are estimates without including edge effects (i.e., a drop in light intensity toward the outside walls) and these light intensities are estimates only; always consult with a trained lighting designer for an accurate calculation of expected light intensities in greenhouses.
| Number of Lamps per Bay (lamps per row, staggered from row to row) | Floor Area per Lamp (m2) | Mounting Height 2.44 m | Mounting Height 2.13 m | Mounting Height 1.83 m | Mounting Height 1.52 m |
|---|---|---|---|---|---|
| 38 (13-12-13) | 10.6 | 49 | 50 | 51 | 52 |
| 58 (15-14-15-14) | 6.9 | 75 | 77 | 79 | 80 |
| 78 (16-15-16-15-16) | 5.15 | 100 | 103 | 105 | 107 |
| 123 (21-20-21-20-21-20) | 3.26 | 149 | 154 | 158 | 162 |
| 158 (23-22-23-22-23-22-23) | 2.54 | 202 | 206 | 210 | 213 |
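For a first estimate of fixture count, the floor area per lamp from Table 5.1.5 can be applied directly; the staggered layouts in the table round to slightly different totals, and an IES-file-based design should always be preferred. A rough sketch:

```python
import math

def lamps_needed(floor_area_m2, area_per_lamp_m2):
    """First-cut fixture count from the floor area served per lamp,
    ignoring edge effects (see Table 5.1.5)."""
    return math.ceil(floor_area_m2 / area_per_lamp_m2)

area = 10 * 7.3 * 54.9  # 0.4 ha greenhouse: 10 bays of 7.3 m x 54.9 m
print(lamps_needed(area, 10.6))   # ~379 lamps for ~49 umol m-2 s-1
print(lamps_needed(area, 5.15))   # ~779 lamps for ~100 umol m-2 s-1
```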
An additional consideration in greenhouses is that increasing the number of fixtures results in additional blockage of the natural light. Furthermore, power supply wires, ballasts, and reflectors can all block the transmission of natural light, and the greenhouse may require additional superstructure to provide a place to mount the fixtures and help support their weight.
Examples
Example 1: Greenhouse heating
Problem:
Determine the required heating system capacity for a greenhouse with the following characteristics:
• greenhouse dimensions: 330 by 330 ft (approximately 100 by 100 m)
• greenhouse surface area (roof plus sidewalls): 12,700 m2
• greenhouse volume: 50,110 m3
• outdoor humidity level: 45%
• nighttime temperature set point: 17°C
• indoor humidity level: 75%
• 99% design temperature (location specific): −15°C
• greenhouse U-value: 6.2 W m−2 °C−1
Solution
The required heating system capacity is a function of the structural heat loss (conduction, convection, and radiation), the infiltration heat loss, and the conversion efficiency of the fuel source for the heating system.
First, calculate the structural heat loss using Equation 5.1.1:
$q_{ccr}=UA_{c}(t_{i}-t_{o})$ (Equation 5.1.1)
= 6.2 × 12,700 × [17 − (−15)] = 2,519,680 W = 2,519.68 kW
Next, determine the heat loss by infiltration using Equation 5.1.2:
$q_{i}=\rho _{i}NV[c_{pi}(t_{i}-t_{o})+h_{fg}(W_{i}-W_{o})]$ (Equation 5.1.2)
Some assumptions are required to solve Equation 5.1.2. It is reasonable to assume that the air density of the greenhouse air is 1.2 kg m−3. The infiltration rate N can be estimated from data included in Table 5.1.2: a value of 0.0004 s−1 was selected (an older, glass-covered greenhouse with good maintenance). In order to determine the humidity ratios for the inside and outside air, we need to use the relative humidity of the inside and outside air. Using the psychrometric chart (Figure 5.1.1), the humidity ratios for the inside and outside air are 0.0091 and 0.0005 kg kg−1, respectively. The specific heat of greenhouse air at 17°C is 1.006 kJ kg−1 K−1 and the latent heat of vaporization of water at that temperature is approximately 2,460 kJ kg−1. These values were determined from online calculators (Engineering ToolBox, 2004, 2010), but can also commonly be found in engineering textbooks regarding heat and mass transfer. Entering these values in Equation 5.1.2:
$q_{i}=\rho _{i}NV[c_{pi}(t_{i}-t_{o})+h_{fg}(W_{i}-W_{o})]$ (Equation 5.1.2)
= 1.2 × 0.0004 × 50,110 {1.006[17 – (−15)] + 2,460(0.0091–0.0005)}
= 1,283,169 W = 1,283.17 kW
Thus, the total heat loss is the sum of the structural heat loss (conduction, convection, and radiation) and the infiltration heat loss: 2,519.68 + 1,283.17 ≈ 3,803 kW.
The heating system capacity is the total heat loss divided by the conversion efficiency of the fuel source. For natural gas with a conversion efficiency of 85%, the required overall heating system capacity is 3803/0.85 = 4,474 kW.
Note that if these calculations are done in a spreadsheet, it is easy to adjust the assumptions that were made so that the sensitivity of the final answer to the magnitude of the assumptions can be assessed. Also, in colder climates, additional heat can be lost around the perimeter of a greenhouse where cold and wet soil is in direct contact with the perimeter walkway inside the greenhouse. To prevent this perimeter heat loss, vertically placed insulation boards can be installed extending from ground level to a depth of 50–60 cm.
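As suggested above, the calculation is easy to script so the assumptions can be varied; a minimal sketch using the values assumed in this example:

```python
# Greenhouse heating load, Equations 5.1.1 and 5.1.2 (Example 1 inputs).
U = 6.2            # W m-2 C-1, greenhouse U-value
A = 12_700         # m2, exposed surface area (roof plus sidewalls)
V = 50_110         # m3, greenhouse volume
t_i, t_o = 17.0, -15.0     # C, set point and 99% design temperature
rho = 1.2          # kg m-3, air density
N = 0.0004         # s-1, infiltration rate (older glass house, well maintained)
c_p = 1.006        # kJ kg-1 K-1, specific heat of air
h_fg = 2_460       # kJ kg-1, latent heat of vaporization near 17 C
W_i, W_o = 0.0091, 0.0005  # kg kg-1, indoor/outdoor humidity ratios
eff = 0.85         # conversion efficiency (natural gas)

q_ccr = U * A * (t_i - t_o) / 1000                              # kW
q_inf = rho * N * V * (c_p * (t_i - t_o) + h_fg * (W_i - W_o))  # kW
print(round(q_ccr), round(q_inf), round((q_ccr + q_inf) / eff))
# ~2520 kW structural, ~1283 kW infiltration, ~4474 kW heater capacity
```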
Example 2: Evaporative Cooling Pad
Problem:
Find the expected temperature drop of the air passing through the evaporative cooling pad given the following information:
• the ambient (outside) air is at 25°C dry-bulb temperature and 50% relative humidity
• the evaporative cooling pad efficiency is 80%
Solution
Using the psychrometric chart (Figure 5.1.10) and the initial conditions of the outside air of 25°C dry-bulb temperature and 50% relative humidity, start at the intersection of the curved 50% RH line with the vertical line for a dry-bulb temperature of 25°C. At this intersection, determine the following environmental parameters:
• wet-bulb temperature = 17.8°C (from the starting point, follow the constant enthalpy line, 50.3 kJ kg−1 in this case, until it intersects with the 100% relative humidity curve)
• dew point temperature = 13.7°C
• humidity ratio = 0.0099 kg kg−1
• enthalpy = 50.3 kJ kg−1
• specific volume = 0.858 m3 kg−1
Thus, the wet-bulb depression for this example equals 25 – 17.8 = 7.2°C. Using an overall evaporative cooling system efficiency of 80% results in a practical temperature drop of approximately 5.8°C (7.2°C × 0.8). As the air continues to travel through the greenhouse on its way to the exhaust fans, the exiting air will be warmed, and moisture from crop transpiration will be added, so the exiting air will have a higher energy content and specific humidity than the air moving through the evaporative cooling pad.
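A minimal sketch of this calculation; the wet-bulb temperature must still be read from the psychrometric chart (or computed from psychrometric relations):

```python
def pad_outlet_temperature(t_db, t_wb, efficiency):
    """Dry-bulb temperature (C) of air leaving an evaporative pad, from
    the ambient dry- and wet-bulb temperatures (C) and the pad's
    saturation efficiency (0-1)."""
    return t_db - efficiency * (t_db - t_wb)

# Example 2: 25 C dry-bulb, 17.8 C wet-bulb (from the chart), 80% efficiency
print(pad_outlet_temperature(25.0, 17.8, 0.80))  # 19.24 C, a ~5.8 C drop
```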
Example 3: Purchase and operating costs of crop lighting systems
Problem:
As mentioned previously, the performance of lamps in terms of their efficacy can vary significantly even when comparing the same type of lamp. For example, we measured HPS fixture efficacy values ranging from 0.94 to 1.7 μmol J−1. Along with the efficacy, the unit cost of purchasing lamps is also an important consideration when deciding on a lighting system. In this example, we look at the cost of purchasing and operating two types of lighting systems, in both a greenhouse and an indoor growing facility.
Find the yearly cost savings of operating an LED vs. HPS system, and how long the systems should be operated to justify (payback) the higher purchase price of the LED lighting system, in both a greenhouse and indoor (no natural light) production system, given the following:
• HPS lighting system: 123 fixtures, each 400 W (plus 60 W for each ballast), cost of $300 per fixture (excluding installation cost), efficacy of 0.94 μmol J−1
• LED lighting system: 55 fixtures, each 400 W (plus 20 W for each driver), cost of $1,200 per fixture (excluding installation cost), efficacy of 2.1 μmol J−1 (these LED fixtures are intended as a direct replacement for the HPS lighting system, meaning they deliver the same PAR intensity and distribution at crop level)
• Greenhouse: 2,200 hours of supplemental lighting per year (600 h during on-peak electricity rates and 1,600 h during off-peak electricity rates)
• Indoor (no natural light) growing facility: 8,760 hours of lighting per year (5,100 h on-peak and 3,660 h off-peak)
• Electricity prices of $0.14 per kWh on-peak and $0.09 per kWh off-peak
Solution
We can now compare the cost to purchase and operate the fixtures. The purchase price of the two systems is simply the unit cost multiplied by the number of units:
$\text{HPS purchase cost}=\frac{\$300}{\text{fixture}}\times 123 \text{ fixtures} = \$36,900$
$\text{LED purchase cost}=\frac{\$1,200}{\text{fixture}}\times 55 \text{ fixtures} = \$66,000$
For the greenhouse case, the electricity cost of the two lighting systems can be determined for both on-peak and off-peak use:
$\text{HPS on-peak cost}= 123 \text{ fixtures} \times \frac{460\ W}{\text{fixture}}\times \frac{1\ kW}{1000\ W}\times \frac{600 \text{ h on-peak}}{\text{year}} \times \frac{\$0.14}{kWh} =\frac{\$4,753}{\text{year}}$
$\text{HPS off-peak cost}= 123 \text{ fixtures} \times \frac{460\ W}{\text{fixture}}\times \frac{1\ kW}{1000\ W}\times \frac{1,600 \text{ h off-peak}}{\text{year}} \times \frac{\$0.09}{kWh} =\frac{\$8,148}{\text{year}}$
Adding these costs results in an annual electricity cost for HPS of $12,901 per year (excluding any potential demand charges).
$\text{LED on-peak cost}= 55 \text{ fixtures} \times \frac{420\ W}{\text{fixture}}\times \frac{1\ kW}{1000\ W}\times \frac{600 \text{ h on-peak}}{\text{year}} \times \frac{\$0.14}{kWh} =\frac{\$1,940}{\text{year}}$
$\text{LED off-peak cost}= 55 \text{ fixtures} \times \frac{420\ W}{\text{fixture}}\times \frac{1\ kW}{1000\ W}\times \frac{1,600 \text{ h off-peak}}{\text{year}} \times \frac{\$0.09}{kWh} =\frac{\$3,326}{\text{year}}$
Adding these costs results in an annual electricity cost for LED of $5,266 per year (excluding any potential demand charges).
The annual cost savings for electricity consumption by using the LED instead of the HPS fixtures amounts to $12,901 − $5,266 = $7,635. The premium for purchasing LED instead of the HPS fixtures is $29,100 ($66,000 − $36,900). Therefore, it would take $\frac{\$29,100}{\$7,635\text{ per year}} = 3.8\text{ years}$ of operation to recover (pay back) the higher purchase price of the LED fixtures in the greenhouse situation.
For the case of an indoor growing facility, where all of the light must be supplied by the lamp fixtures, and assuming the lights need to operate 24 hours a day to meet the target light integral, the annual cost savings for electricity consumption by using the LED instead of the HPS fixtures amounts to $34,933 ($59,035 − $24,102). Therefore, it would take $\frac{\$29,100}{\$34,933\text{ per year}} = 0.83\text{ years}$ of operation to recover (pay back) the higher purchase price of the LED fixtures.
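The cost comparison lends itself to a small script; all prices and hours are the example's assumptions, and demand charges are excluded:

```python
def annual_cost(n_fix, watts, on_h, off_h, on_price=0.14, off_price=0.09):
    """Annual electricity cost (USD) of a bank of fixtures, split into
    on-peak and off-peak operating hours."""
    kw = n_fix * watts / 1000
    return kw * (on_h * on_price + off_h * off_price)

hps = annual_cost(123, 460, on_h=600, off_h=1600)  # greenhouse, HPS
led = annual_cost(55, 420, on_h=600, off_h=1600)   # greenhouse, LED
premium = 55 * 1200 - 123 * 300                    # LED purchase premium
print(round(hps), round(led))                      # ~12,900 and ~5,267 USD/yr
print(round(premium / (hps - led), 1))             # ~3.8 yr greenhouse payback

hps_i = annual_cost(123, 460, on_h=5100, off_h=3660)  # indoor facility
led_i = annual_cost(55, 420, on_h=5100, off_h=3660)
print(round(premium / (hps_i - led_i), 2))            # ~0.83 yr payback
```

Small differences from the figures quoted above come from rounding the on-peak and off-peak components separately.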
Image Credits
Figure 1. Both, A. J. (CC By 4.0). (2020). Psychrometric chart.
Figure 2. Both, A. J. (CC By 4.0). (2020). Evaporative cooling system (pad and fan).
Figure 3. Both, A. J. (CC By 4.0). (2020). Gutter-connected greenhouses.
Figure 4. Both, A. J. (CC By 4.0). (2020). Sensitivity of human eye and plant.
Figure 5. Left. Both, A. J. (2020). Lettuce plants.
Figure 5. Right. Cornell University. (CC By 4.0). Lettuce Plants. Retrieved from https://urbanagnews.com/blog/prevent-tipburn-on-greenhouse-lettuce/. [Fair Use].
Figure 6. Both, A. J. (CC By 4.0). (2020). HPS and LED fixtures.
Figure 7. Both, A. J. (CC By 4.0). (2020). Unit heater.
Figure 8. Both, A. J. (CC By 4.0). (2020). Hot water heating system.
Figure 9. Both, A. J. (CC By 4.0). (2020). Solar radiation integrals.
Figure 10. Both, A. J. (CC By 4.0). (2020). Simplified psychrometric chart.
References
Albright, L. D., Both, A. J., & Chiu, A. J. (2000). Controlling greenhouse light to a consistent daily integral. Trans. ASAE, 43(2), 421–431. https://doi.org/10.13031/2013.2721.
Aldrich, R. A., & Bartok, J. W. (1994). Greenhouse engineering. NRAES Publ. No. 33. Retrieved from https://vdocuments.site/fair-use-of-this-pdf-file-of-greenhouse-engineering-nraes-33-by-.html.
ASABE Standards. (2017). ANSI/ASABE S640: Quantities and units of electromagnetic radiation for plants (photosynthetic organisms). St. Joseph, MI: ASABE.
ASAE Standards. (2003). ANSI/ASAE EP406.4: Heating, ventilating, and cooling greenhouses. Note: This is a withdrawn and archived standard. St. Joseph, MI: ASAE.
ASHRAE. (2013). ASHRAE Standard 169-2013: Weather data for building design standards. Atlanta, GA: ASHRAE.
Both, A. J., Albright, L. D., Langhans, R. W., Vinzant, B. G., & Walker, P. N. (1997). Electric energy consumption and PPF output of nine 400 Watt high pressure sodium luminaires and a greenhouse application of the results. Acta Hortic., 418, 195–202.
Both, A. J., Benjamin, L., Franklin, J., Holroyd, G., Incoll, L. D., Lefsrud, M. G., & Pitkin, G. (2015). Guidelines for measuring and reporting environmental parameters for experiments in greenhouses. Plant Methods, 11(43). https://doi.org/10.1186/s13007-015-0083-5.
Both, A. J., Bugbee, B., Kubota, C., Lopez, R. G., Mitchell, C., Runkle, E. S., & Wallace, C. (2017). Proposed product label for electric lamps used in the plant sciences. HortTechnol., 27(4), 544–549. https://doi.org/10.21273/horttech03648-16.
Both, A. J., Ciolkosz, D. E., & Albright, L. D. (2002). Evaluation of light uniformity underneath supplemental lighting systems. Acta Hortic., 580, 183–190.
Ciolkosz, D. E., Both, A. J., & Albright, L. D. (2001). Selection and placement of greenhouse luminaires for uniformity. Appl. Eng. Agric., 17(6), 875–882. https://doi.org/10.13031/2013.6842.
Commission Internationale de l’Eclairage. (1931). Proc. Eighth Session. Cambridge, UK: Cambridge University Press.
Crawley, D. B., Lawrie, L. K., Winkelmann, F. C., Buhl, W. F., Huang, Y. J., Pedersen, C.O., . . . Glazer, J. (2001). EnergyPlus: Creating a new-generation building energy simulation program. Energy Build., 33(4), 319–331. http://dx.doi.org/10.1016/S0378-7788(00)00114-6.
Engineering ToolBox. (2004). Air—Specific heat at constant pressure and varying temperature. Retrieved from https://www.engineeringtoolbox.com/air-specific-heat-capacity-d_705.html.
Engineering ToolBox. (2010). Water—Heat of vaporization. Retrieved from https://www.engineeringtoolbox.com/water-properties-d_1573.html.
Kubota, C., Kroggel, M., Both, A. J., Burr, J. F., & Whalen, M. (2016). Does supplemental lighting make sense for my crop?—Empirical evaluations. Acta Hortic., 1134, 403–411. http://dx.doi.org/10.17660/ActaHortic.2016.1134.52.
Lopez, R., & Runkle, E. S. (Eds.). (2017). Light management in controlled environments. Willoughby, OH: Meister Media Worldwide.
Marcelis, L. F. M., Broekhuijsen, A. G. M., Meinen, E., Nijs, E. M. F. M., & Raaphorst, M. G. M. (2006). Quantification of the growth response to light quantity of greenhouse grown crops. Acta Hortic., 711, 97–103. https://doi.org/10.17660/ActaHortic.2006.711.9.
Mitchell, C. A., Both, A. J., Bourget, C. M., Burr, J. F., Kubota, C., Lopez, R. G., . . . Runkle, E. S. (2012). LEDs: The future of greenhouse lighting! Chron. Hortic., 52(1), 6–12.
Mitchell, C. A., Dzakovich, M. P., Gomez, C., Lopez, R., Burr, J. F., Hernandez, R., . . . Both, A. J. (2015). Light-emitting diodes in horticulture. Hortic. Rev., 43, 1–87.
Sager, J. C., Smith, W. O., Edwards, J. L., & Cyr, K. L. (1988). Photosynthetic efficiency and phytochrome photoequilibria determination using spectral data. Trans. ASAE, 31(6), 1882–1889. https://doi.org/10.13031/2013.30952.
Ting, K. C., & Giacomelli, G. A. (1987). Availability of solar photosynthetically active radiation. Trans. ASAE, 30(5), 1453–1457. https://doi.org/10.13031/2013.30585.
USDA-ARS. (2019). Virtual grower 3.0. Washington, DC: U.S. Department of Agriculture. https://www.ars.usda.gov/research/software/download/?softwareid=309.
Wallace, C. & Both, A. J. (2016). Evaluating operating characteristics of light sources for horticultural applications. Acta Hortic., 1134, 435–443. https://doi.org/10.17660/ActaHortic.2016.1134.55.
Andrea Costantino
TEBE Research Group
Department of Energy
Politecnico di Torino
Torino, Italy
and
Institute of Animal Science and Technology
Universitat Politècnica de València
València, Spain
Enrico Fabrizio
TEBE Research Group
Department of Energy
Politecnico di Torino
Torino, Italy
Key Terms
Energy balance, Mass balance, Energy management, Supplemental heating, Heat recovery, Cooling systems, Climate control, Ventilation
Introduction
Energy usage on farms is considered direct when used to operate machinery and climate control systems, or indirect when it is used to manufacture feed and agro-chemicals. Direct on-farm energy consumption was estimated to be 6 EJ yr−1, representing about 1.2% of total world energy consumption (OECD, 2008). If indirect energy is included, total farm energy consumption could be as much as 15 EJ yr−1, representing about 3.1% of global energy consumption. Housed livestock require adequate indoor climate conditions to maximize both production and welfare, particularly avoiding thermal stress. The task of the engineer is to improve the energy use efficiency of livestock housing and to minimize energy consumption. This can be achieved by improving the energy performance of the equipment used for climate control and the design of the building.
The focus of this chapter is on building design for efficient energy management in livestock housing. Improving building design requires understanding the mass and energy balance of the system to specify the materials, dimensions, and equipment needed to maintain safe operating conditions. The importance of understanding the energy needs of buildings is illustrated by the report of St-Pierre et al. (2003), who estimated the economic losses of the U.S. dairy industry at $1.69 to $2.36 billion annually due to heat stress. Understanding and being able to use fundamental concepts for animal housing design provides the foundation for housing that supports animal welfare and more efficient production.
Outcomes
After reading this chapter, you should be able to:
• Describe the energy needs for livestock housing
• Explain the energy management requirements of a livestock house
• Describe the main climate control systems used for livestock housing and the features that affect energy management
• Calculate energy balances for livestock houses
Concepts
Energy and Mass Balance of a Livestock House
Thermodynamically, a livestock house is an open system that exchanges energy and mass (such as air, moisture, and contaminants) between the indoor and outdoor environments and the animals that occupy the internal volume (the enclosure). The law of conservation of energy and mass is the basic principle for the mass balance. The building walls, floor, and roof represent the control surfaces and enclose the control volume of the thermodynamic system represented by the livestock house and its internal surfaces, such as animals, interior walls, and equipment. Energy and mass balance equations allow the analysis of the thermal behavior of a livestock house, but calculating these balances is challenging because many factors affect the thermal behavior of these buildings. It is essential to understand which terms to consider, and what to assume as negligible.
Energy Balance
Sensible heat is the amount of heat exchanged by a body and the surrounding thermodynamic system that involves a temperature change. Latent heat is the heat absorbed or released by a substance during a phase change without a change in temperature. These two forms of heat can be illustrated using an example of heating a pot of water on a stove. Initially, the water is at room temperature (say, 25°C), and as the water is heated, its temperature increases. The heat causing the temperature increase is sensible heat; the specific heat capacity of water is 4.186 kJ kg−1 K−1, so heating 1 kg of water from 25°C to 100°C requires about 4.186 × 75 ≈ 314 kJ. When the water temperature reaches 100°C (the boiling point of water at atmospheric pressure), the water changes phase from liquid to gas (steam). The heat provided during the phase change breaks the molecular bonds of the liquid water to transition to the gas phase, but the temperature does not change. The heat supplied to effect the phase change is latent heat. The latent heat of vaporization for a unit mass of water is approximately 2,257 kJ kg−1 at 100°C and atmospheric pressure.
The energy balance of a livestock house, considering only the sensible heat, can be written as follows (Panagakis and Axaopoulos, 2008):
$\phi_{a}+\phi_{tr}+\phi_{sol}+\phi_{f}+\phi_{v}+\phi_{m}+(\gamma_{fog}\cdot \phi_{fog})+(\gamma_{H}\cdot \phi_{H})=\sum^{n}_{k=1}(M_{el,k}\cdot C_{el,k})\cdot \frac{dT_{air,i}}{dt}$ (Equation 5.2.1)
where $\phi_{a}$ = sensible heat flow from the animals inside the enclosure (W)
$\phi_{tr}$ = sensible heat flow due to transmission through the control surfaces but excluding the floor (W)
$\phi_{sol}$ = sensible heat flow due to solar radiation through both opaque and glazed building elements (W)
$\phi_{f}$ = sensible heat flow due to transmission through the floor (W)
$\phi_{v}$ = sensible heat flow due to ventilation (W)
$\phi_{m}$= sensible heat flow from internal sources, such as motors and lights (W)
$\gamma_{fog}$= Boolean variable for the presence ($\gamma_{fog}$=1) or not ($\gamma_{fog}$=0) of a fogging system inside the livestock house
$\phi_{fog}$= sensible heat flow due to fogging system (W)
$\gamma_{H}$= Boolean variable for the presence ($\gamma_{H}$=1) or not ($\gamma_{H}$=0) of a supplemental heating system inside the livestock house
$\phi_{H}$= sensible heat flow due to supplemental heating system (W)
$M_{el,k}$ = mass of the kth building element (kg)
$C_{el,k}$ = specific heat capacity of the kth building element (kJ kg−1 K−1)
$\frac{dT_{air,i}}{dt}$ = variation of the indoor air temperature $T_{air,i}$ with time t
When using Equation 5.2.1 for calculations and sizing, pay attention to the heat flows because each term could be positive or negative depending on the physical context. Usually, heat flows coming into the control volume (the animal house) are positive, and the ones flowing out are negative. For example, in Equation 5.2.1, the terms $\phi_{a}$ and $\phi_{sol}$ are always positive or zero, since they represent incoming heat flow from animals and solar radiation, respectively, while the values of $\phi_{tr}$ and $\phi_{v}$ could be positive or negative, depending on the difference in temperature inside and outside the animal house. The term $\phi_{f}$ depends on the floor construction. Although $\phi_{tr}$ and $\phi_{f}$ are both transmission heat flows through the control surface, they are kept separate because estimating the heat transfer through the ground is very challenging (Albright, 1990; Panagakis and Axaopoulos, 2008; Costantino et al., 2017), for example, in pig houses with ventilated pits for manure storage. To simplify the energy balance, a single transmission term is often used for the sum of $\phi_{tr}$ and $\phi_{f}$, with a corrective coefficient applied when the floor contribution is calculated.
The term $\phi_{fog}$ is always negative because it represents the sensible heat removed by water droplets of a fogging system. A fogging system provides cooling inside the animal house by putting a haze of tiny water droplets in the air to provide evaporative cooling for the animals. The term $\phi_{H}$ is always positive. The parameters $\gamma_{fog}$ and $\gamma_{H}$ should not have a value of 1 at the same time but can both be 0.
Sensible heat from the animals, $\phi_{a}$, depends on species, body mass, and ambient temperature. Sensible and latent heat values can be found in the literature, for example from ASABE (2012), Hellickson and Walker (1983), or Lindley and Whitaker (1996); more detailed data are available in Pedersen and Sällvik (2002), who express sensible and latent heat from animals as a function of animal weight, indoor air temperature, and animal activity. In complex animal houses, the term for sensible heat flow from internal sources such as motors (fans and automatic feeding systems) and lights ($\phi_{m}$) can be included (Albright, 1990), but in many calculations it is excluded because it is very small compared with $\phi_{a}$ (Midwest Plan Service, 1987). That exclusion is further justified when energy-efficient technologies such as LED/gas-discharge lamps and brushless motors are used.
The product $M_{el}\cdot C_{el}$ is the lumped effective heat capacity of a building element, expressed in kJ K−1. For each building element (walls and roof), both the mass of material and its specific heat capacity (the heat needed to raise the temperature of a unit mass of the material by one degree Celsius) must be known. The term $\frac{dT_{air,i}}{dt}$ represents the variation of the indoor air temperature with time. The right-hand side of the equation therefore represents the heat stored in (or released by) the building fabric as the indoor temperature changes.
It is possible to include additional terms to Equation 5.2.1 (Albright, 1990; Esmay and Dixon, 1986) such as the sensible heat flow to evaporate the water inside the control volume from structures such as water troughs and a slurry store ($\phi_{e}$). Some authors consider it important (Hamilton et al., 2016), while others do not (Midwest Plan Service, 1987). Liberati and Zappavigna (2005) consider sensible heat exchange between manure (especially when collected in pits) and the air inside the enclosure ($\phi_{man}$) to be important in large-scale houses equipped with storage pits and manure when it is not removed frequently. A Boolean variable $\gamma_{man}$ may generalize Equation 5.2.1 further.
Equation 5.2.1 is a dynamic energy balance. If a large time step (perhaps a week or more) is assumed it can be written for steady-state conditions, meaning that the state variables that describe the system can be considered constant with time, and the terms of the balance represent the average values for the system. For large time steps or in steady-state conditions with constant indoor and outdoor air temperature, heat accumulation by the building itself can be considered to be zero, so Equation 5.2.1 becomes:
$\phi_{a}+\phi_{tr}+\phi_{sol}+\phi_{v}+(\gamma_{fog}\cdot \phi_{fog})=0$ (Equation 5.2.2)
To obtain the energy balance of a livestock house in cold conditions requiring supplemental heating, the energy balance becomes:
$\phi_{a}+\phi_{tr}+\phi_{H}+\phi_{sol}+\phi_{v}=0$ (Equation 5.2.3)
Figure 5.2.1 presents an illustration of the sensible heat balance of Equation 5.2.3 for simple dairy cow housing. Equation 5.2.3 can be used to design a basic livestock house. Undoubtedly, the presented formulation is a simplification, and in literature, other terms are introduced in the energy balance. The calculation of each term of the energy balance of Equation 5.2.3 is provided in greater detail later in this chapter.
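As a minimal sketch, Equation 5.2.3 can be rearranged to yield the supplemental heating term once the other heat flows are known; the numerical values below are purely illustrative:

```python
def supplemental_heating(phi_a, phi_tr, phi_sol, phi_v):
    """Solve the steady-state sensible balance (Equation 5.2.3) for the
    supplemental heating term: phi_H = -(phi_a + phi_tr + phi_sol + phi_v).
    Heat flows into the enclosure are positive, flows out are negative (W).
    A result <= 0 means no supplemental heat is needed."""
    return -(phi_a + phi_tr + phi_sol + phi_v)

# Illustrative winter night: animals add heat, transmission and
# ventilation remove it, and there is no solar gain.
print(supplemental_heating(phi_a=60_000, phi_tr=-85_000,
                           phi_sol=0, phi_v=-40_000))  # 65,000 W
```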
Mass Balance
Mass balances are necessary to plan the management of contaminants, such as carbon dioxide (CO2), hydrogen sulfide (H2S), and ammonia (NH3), produced by the animals (Esmay and Dixon, 1986) and to regulate the indoor environment temperature, moisture content, and relative humidity. Along with temperature and relative humidity, the indoor air quality (IAQ) must be controlled by ventilation to avoid animal health problems. Calculating ventilation requirements for contaminant control is a mass balance problem. With low indoor air temperatures, a minimum ventilation flow rate (base ventilation) is used to dilute contaminants such as H2S and NH3. The minimum ventilation flow rate can be increased to reduce the moisture content. When the indoor air temperature is higher than the cooling setpoint temperature used to maintain animal comfort, the ventilation flow rate must be increased to cool the animals (Esmay and Dixon, 1986). The maximum ventilation flow rate must avoid high airspeeds that hurt animal welfare. If cooling cannot be achieved using mass flow, a fogging system can be used. The ventilation airflow can be expressed in m3s−1, m3h−1 or as ach (air changes per hour), which indicates how many times the volume of air inside the house is changed in one hour.
To estimate the ventilation flow rate for moisture control in a simple livestock house, Equation 5.2.4 (Panagakis and Axaopoulos, 2008) can be used:
$\dot{V}_{air}\cdot \rho_{air}\cdot(x_{air,o}-x_{air,i})+\dot{m}_{a}+(\gamma_{fog} \cdot \dot{m}_{fog})=\rho_{air,i} \cdot V_{air,i} \cdot \frac{dx_{air,i}}{dt}$ (Equation 5.2.4)
where $\dot{V}_{air}$ = ventilation air flow rate (m3 s−1)
$\rho_{air}$ = volumetric mass density of air (kg m−3)
$x_{air,o}$ = specific humidity of the outdoor air (kgvapor kgair−1)
$x_{air,i}$ = specific humidity of the indoor air (kgvapor kgair−1)
$\dot{m}_{a}$= animal water vapor production (kgvapor s−1)
$\gamma_{fog}$ = Boolean variable that indicates the presence of fogging system
$\dot{m}_{fog}$ = water added through fogging (kgvapor s−1)
$\rho_{air,i}$ = volumetric mass density of inside air (kg m−3)
$V_{air,i}$ = volume of the inside air (m3)
$\frac{dx_{air,i}}{dt}$ = variation of $x_{air,i}$ in time t
In steady-state conditions and not considering the presence of fogging systems, the mass balance (Figure 5.2.2) can be simplified as:
$\dot{m}_{air} \cdot x_{air,o} - \dot{m}_{air} \cdot x_{air,i} + \dot{m}_{a} = 0$ (Equation 5.2.5)
where $\dot{m}_{air} = \rho_{air} \cdot \dot{V}_{air}$ is the ventilation air mass flow rate (kg s−1).
Equation 5.2.5 is the basic formulation of the moisture mass balance in steady-state conditions for livestock houses.
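A minimal sketch of Equation 5.2.5 rearranged for the ventilation rate that removes the animals' moisture production; the input values are illustrative:

```python
def ventilation_for_moisture(m_a, x_in, x_out, rho_air=1.2):
    """Minimum ventilation rate (m3 s-1) that removes the vapor production
    m_a (kg s-1) at steady state (Equation 5.2.5), given indoor and
    outdoor specific humidities (kg kg-1) and an air density (kg m-3)."""
    if x_in <= x_out:
        raise ValueError("outdoor air must be drier than indoor air")
    m_air = m_a / (x_in - x_out)   # required dry-air mass flow, kg s-1
    return m_air / rho_air

# e.g., 0.005 kg s-1 of vapor, indoor 0.0091 and outdoor 0.0050 kg kg-1
print(round(ventilation_for_moisture(0.005, 0.0091, 0.0050), 2))  # ~1.02 m3 s-1
```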
Energy Management Calculations
Equations 5.2.3 and 5.2.5 are the bases for the sensible energy and mass balance calculations for livestock housing, respectively. In the following sections, the determination of each term of the energy balance (Equation 5.2.3) is presented. A similar approach could be used for Equation 5.2.5.
Heat Flow from the Reared Animals ($\phi_{a}$)
The animals themselves produce heat and contribute considerably to the heat flow in their housing. In cool conditions, this heat flow can help warm the building and decrease the need for supplemental heat. In warm conditions, this heat flow should be removed to avoid overheating and causing animal heat stress. Animals need to emit heat (both sensible and latent) to regulate their body temperature and maintain their body functions. As an animal grows (usually the desired outcome of a meat production system, but not for dairy cows and laying hens), it produces more heat. The amount of heat also depends on indoor air temperature, production targets (such as the mass of eggs, milk, or meat), and the energy concentration of the feedstuff. Estimating heat production is also essential for calculating ventilation requirements.
Standard values for heat production are available (CIGR, 1999; ASABE, 2012), but a specific calculation is possible (Pedersen and Sällvik, 2002). First the total heat produced, $\phi_{a,tot}$ (sum of the sensible and latent heat), for the animal house is calculated for an indoor air temperature of 20°C. The formulation of the equation depends on animal species and production:
Broilers: $\phi_{a,tot} =10.62\cdot w_{a}^{0.75} \cdot n_{a}$ (Equation 5.2.6)
Laying hens: $\phi_{a,tot} =(6.28\cdot w_{a}^{0.75} + 25\cdot Y_{eggs}) \cdot n_{a}$ (Equation 5.2.7)
Fattening pigs: $\phi_{a,tot} =(5.09 \cdot w_{a}^{0.75}+[1-(0.47+0.003 \cdot w_{a})] \cdot [5.09 \cdot w_{a}^{0.75} \cdot (Y_{feed}-1)]) \cdot n_{a}$ (Equation 5.2.8)
Dairy cows: $\phi_{a,tot} =(5.6\cdot w_{a}^{0.75} + 22\cdot Y_{milk} +1.6\cdot 10^{-5}\cdot Y_{pregnancy}^{3} )\cdot n_{a}$ (Equation 5.2.9)
where $w_{a}$ = average animal live weight (kg)
$n_{a}$ = number of animals inside the livestock house (animals)
$Y_{eggs}$ = egg production (kg day−1), usually between 0.04 (brooding production) and 0.05 kg day−1 (consumer eggs)
$Y_{feed}$ = dimensionless coefficient related to the daily feed energy intake by the pigs (values of $Y_{feed}$ are presented in Table 5.2.1)
$Y_{milk}$ = milk production (kg day−1)
$Y_{pregnancy}$ = number of days of pregnancy (days)
Next, the sensible heat produced ($\phi_{a}$) at a given indoor air temperature is calculated. If the indoor air temperature is in the thermoneutral zone, that is, a temperature range where the animal heat dissipation is constant (Pedersen and Sällvik, 2002) and the energy fraction used by animals for maintaining their homeothermy is at a minimum, at the house level $\phi_{a}$ can be calculated as:
Table 5.2.1: Values of $Y_{feed}$ for fattening pigs (Pedersen and Sällvik, 2002).

| Pig Body Mass (kg) | $Y_{feed}$, Rate of Gain 700 g day−1 | $Y_{feed}$, Rate of Gain 800 g day−1 | $Y_{feed}$, Rate of Gain 900 g day−1 |
|---|---|---|---|
| 20 | 3.03 | 3.39 | 3.39 |
| 30 | 2.79 | 3.25 | 3.25 |
| 40 | 2.60 | 3.22 | 3.43 |
| 50 | 2.73 | 3.16 | 3.41 |
| 60 | 2.78 | 3.16 | 3.40 |
| 70 | 2.84 | 3.12 | 3.40 |
| 80 | 2.83 | 3.04 | 3.38 |
| 90 | 2.74 | 2.79 | 3.18 |
| 100 | 2.64 | 2.57 | 2.98 |
| 110 | 2.52 | 2.40 | 2.78 |
| 120 | 2.36 | 2.25 | 2.60 |
Broiler house: $\phi_{a}=\{0.61 \cdot[1000+20 \cdot (20-T_{air,i})]-0.228 \cdot T_{air,i}^{2} \} \cdot n_{hpu}$ (Equation 5.2.10)
Laying hen house: $\phi_{a}=\{0.67 \cdot[1000+20 \cdot (20-T_{air,i})]-9.8\cdot 10^{-8} \cdot T_{air,i}^{6} \} \cdot n_{hpu}$ (Equation 5.2.11)
Fattening pig house: $\phi_{a}=\{0.62 \cdot[1000+12 \cdot (20-T_{air,i})]-1.15\cdot 10^{-7} \cdot T_{air,i}^{6} \} \cdot n_{hpu}$ (Equation 5.2.12)
Dairy cow house: $\phi_{a}=\{0.71 \cdot[1000+4 \cdot (20-T_{air,i})]-0.408 \cdot T_{air,i}^{2} \} \cdot n_{hpu}$ (Equation 5.2.13)
where $T_{air,i}$ = indoor air temperature (°C)
$n_{hpu}$ = the number of heat-producing units (hpu) that are present inside the livestock house
One hpu is defined as the number of animals that produces 1000 W of total heat (sum of sensible and latent heat) at an indoor air temperature of 20°C and can be calculated as:
$n_{hpu} = \frac{\phi_{a,tot}}{1000}$ (Equation 5.2.14)
where $\phi_{a,tot}$ is calculated using Equation 5.2.6, 5.2.7, 5.2.8, or 5.2.9, depending on species and production system. Outside the thermoneutral zone, no clear relationship can be found between indoor air temperature and total heat production, but values can be calculated using the formulations presented in Pedersen and Sällvik (2002).
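A minimal sketch of Equations 5.2.6, 5.2.10, and 5.2.14 for a broiler house; the flock size, bird weight, and temperature are illustrative:

```python
def broiler_total_heat(w_a, n_a):
    """Total (sensible + latent) heat (W) at 20 C, Equation 5.2.6."""
    return 10.62 * w_a ** 0.75 * n_a

def broiler_sensible_heat(w_a, n_a, t_air_i):
    """House-level sensible heat (W) within the thermoneutral zone,
    Equations 5.2.10 and 5.2.14."""
    n_hpu = broiler_total_heat(w_a, n_a) / 1000  # heat-producing units
    return (0.61 * (1000 + 20 * (20 - t_air_i)) - 0.228 * t_air_i ** 2) * n_hpu

# 10,000 broilers of 1.5 kg each at an indoor temperature of 22 C
print(round(broiler_total_heat(1.5, 10_000)))         # ~143,900 W total at 20 C
print(round(broiler_sensible_heat(1.5, 10_000, 22)))  # ~68,400 W sensible at 22 C
```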
Transmission Heat Flow through the Building Envelope ($\phi_{tr}$)
The term $\phi_{tr}$ is taken here to represent the heat flow through the walls, roof, windows, doors, and floor. It is calculated as (European Committee for Standardization, 2007):
$\phi_{tr}=[\sum^{n}_{j=1}(b_{tr,j} \cdot U_{j} \cdot A_{j})] \cdot (T_{air,o}-T_{air,i})$ (Equation 5.2.15)
where $b_{tr,j}$ = dimensionless correction factor between 0 and 1
$U_{j}$ = thermal transmittance of the jth building element (W m−2 K−1)
$A_{j}$ = total area of the jth building element (m2)
$T_{air,o}$ = outdoor air temperature (°C)
The factor $b_{tr}$ is used to correct the heat flow when the forcing temperature difference is not the difference between the indoor and outdoor air, for example, when the heat flow occurs toward unconditioned spaces (e.g., material storage and climate control rooms) or through the ground. In these cases, the air temperature difference between inside and outside can still be used, but the heat flow is decreased using $b_{tr}$. This coefficient can be computed in two cases: (1) if the adjacent space temperature is fixed and known, or (2) if all the heat transfer coefficients between the considered spaces can be numerically estimated. In most situations, $b_{tr}$ (unitless) is obtained from standards (e.g., Table 5.2.2).
Table 5.2.2: Values of $b_{tr}$ for different types of unconditioned spaces and floors (from EN 12831, European Committee for Standardisation, 2009).

| Type of Unconditioned Space | $b_{tr}$ |
|---|---|
| Space with 1 wall facing the outdoor environment | 0.40 |
| Space with 2 walls facing the outdoor environment (no doors) | 0.50 |
| Space with 2 walls facing the outdoor environment (with doors) | 0.60 |
| Space with 3 walls facing the outdoor environment (with doors) | 0.80 |
| Floor in direct contact with the ground | 0.45 |
| Ventilated floor (e.g., pits and under-floor cavity) | 0.80 |
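A minimal sketch of Equation 5.2.15; the envelope elements and their U-values are illustrative, with $b_{tr}$ values taken from Table 5.2.2:

```python
def transmission_heat_flow(elements, t_out, t_in):
    """Transmission heat flow (W) through the envelope, Equation 5.2.15.
    Each element is a tuple (b_tr, U in W m-2 K-1, A in m2); a negative
    result means heat is flowing out of the enclosure."""
    ua = sum(b * u * a for b, u, a in elements)
    return ua * (t_out - t_in)

envelope = [
    (1.00, 0.45, 600.0),  # insulated roof facing outdoors
    (1.00, 0.60, 280.0),  # walls facing outdoors
    (0.45, 1.20, 550.0),  # floor in direct contact with the ground
]
print(round(transmission_heat_flow(envelope, t_out=-5.0, t_in=18.0)))
# ~ -16,900 W lost in winter conditions
```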
Heat Flow Due to a Supplemental Heating System ($\phi_{H}$)
In most cases, $\phi_{H}$ is the unknown of the problem, and the energy balance is solved to find its value. A typical example is solving the energy balance of Equation 5.2.3 for $\phi_{H}$ in order to size the heating capacity of the supplemental heating system. In other cases, $\phi_{H}$ is zero and the unknown is the ventilation flow rate needed to maintain a certain indoor air temperature and cool the reared animals. Rarely does $\phi_{H}$ have to be estimated; one example is when the energy balance is solved to evaluate the indoor air temperature under given boundary conditions. An easy way to estimate $\phi_{H}$ is to use the heating capacity reported in the technical datasheet of the supplemental heating equipment.
More details about the supplemental heating systems are described below, in the Application section.
Heat Flow from Solar Radiation ($\phi_{sol}$)
The heat flow due to solar radiation is dependent on the season, the farm location, and features of the building. In general terms, the solar heat flow can be split into two terms as follows (International Organization for Standardization, 2017):
$\phi_{sol} = \sum_{q}\phi_{sol,op,q} + \sum_{k} \phi_{sol,gl,k}$ (Equation 5.2.16)
where $\phi_{sol,op,q}$ = heat flow through the qth opaque surface (e.g., walls and roof) (W)
$\phi_{sol,gl,k}$ = heat flow through the kth glazed surface (windows) (W)
For a generic opaque surface q, $\phi_{sol,op,q}$ is calculated as:
$\phi_{sol,op,q} = A_{q} \cdot U_{q} \cdot \alpha_{q} \cdot R_{ex} \cdot I_{sol,q} \cdot F_{sh,q}$ (Equation 5.2.17)
where $\alpha_{q}$ = solar absorption coefficient of the considered surface depending on the surface color (0.3 for light colors, 0.9 for dark colors)
$R_{ex}$ = external surface heat resistance (m2 K−1 W−1), generally assumed equal to 0.04 m2 K−1 W−1
$I_{sol,q}$ = solar irradiance incident on the considered surface (W m−2)
Fsh,q = shading correction factor
For a generic glazed surface k, $\phi_{sol,gl,k}$ is calculated as:
$\phi_{sol,gl,k} = A_{k} \cdot g_{gl} \cdot I_{sol,k} \cdot (1-F_{fr}) \cdot F_{sh,k} \cdot F_{sh,gl,k}$ (Equation 5.2.18)
where $g_{gl}$ = total solar energy transmittance of the transparent surface
$F_{fr}$ = frame area fraction
$F_{sh,gl,k}$ = shading reduction factor for movable shading provisions
The shading factors for both opaque and glazed components can be excluded for most livestock housing because they increase the complexity of the calculation, but they do not greatly affect the results.
Heat Flow Due to the Ventilation System ($\phi_{v}$)
The heat load due to the ventilation system can be expressed as
$\phi_{v} = \rho_{air} \cdot c_{air} \cdot \dot{V} \cdot (T_{air,sup}-T_{air,i})$ (Equation 5.2.19)
where $\rho_{air}$ = air volumetric mass density (kg m−3)
$c_{air}$ = air specific heat capacity (W h kg−1 K−1)
$\dot{V}$ = ventilation flow rate (m3 h−1)
$T_{air,sup}$ = supply air temperature (°C)
In the cool season, $T_{air,sup}$ usually equals $T_{air,o}$, since the ventilation uses outdoor air. In the warm season, $T_{air,sup}$ can be lower than $T_{air,o}$, since outdoor air is cooled before entering the building. The value of $T_{air,sup}$ can be estimated using the direct saturation effectiveness $\varepsilon$ (%) of an evaporative pad system, calculated as (ASHRAE, 2012):
$\varepsilon = 100 \cdot \frac{T_{air,o,db}-T_{air,sup,db}}{T_{air,o,db}-T_{air,o,wb}}$ (Equation 5.2.20)
where $T_{air,o,db}$ = dry-bulb outdoor air temperature (°C)
$T_{air,sup,db}$ = dry-bulb temperature of the supply air leaving the cooling pad (°C)
$T_{air,o,wb}$ = wet-bulb temperature of the outdoor air entering the pad (°C)
Equation 5.2.20 can be rearranged to estimate the supply air temperature ($T_{air,sup,db}$) in the presence of evaporative pads for use in Equation 5.2.19.
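A minimal sketch combining Equations 5.2.19 and 5.2.20; the flow rate, temperatures, and pad effectiveness below are illustrative:

```python
def supply_temperature(t_db_out, t_wb_out, effectiveness):
    """Supply air temperature (C) behind an evaporative pad, from the
    direct saturation effectiveness in percent (Equation 5.2.20)."""
    return t_db_out - (effectiveness / 100) * (t_db_out - t_wb_out)

def ventilation_heat_flow(v_dot, t_sup, t_in, rho=1.2, c_air=0.278):
    """Sensible heat flow (W) due to ventilation, Equation 5.2.19;
    v_dot in m3 h-1, c_air ~0.278 W h kg-1 K-1 (about 1.0 kJ kg-1 K-1)."""
    return rho * c_air * v_dot * (t_sup - t_in)

t_sup = supply_temperature(32.0, 24.0, 85)  # 85% effective pad
print(round(t_sup, 1))                                    # 25.2 C
print(round(ventilation_heat_flow(50_000, t_sup, 28.0)))  # ~ -46,700 W (cooling)
```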
Applications
The concepts described above form the basis for calculating the energy balance of a simple animal house. These are usually quite straightforward structures built to standard designs, which differ around the world but serve the similar function of making animal production more efficient for the farmer. The calculation for the design of the animal house (the control structure) necessarily assumes a typical or average environment. In reality, weather and production management mean that some components of the system have to be dynamic and respond to external conditions. In this section, some of the technology required to help maintain a safe and efficient living environment for the animals is discussed.
Heating Animal Houses
Supplemental Heating Systems
In cold weather, a supplemental heat source may be needed to reach the air set point temperature and guarantee adequate living conditions for the livestock. This is common at the beginning of the production cycle, when animal heat production is small, and in cold seasons of the year. This energy consumption represents a major fraction of the total direct energy consumption of the farm (Table 5.2.3) and can be calculated using Equation 5.2.3.
Supplemental heating systems can be classified into localized heating and space heating systems. Localized heating systems create temperature variations in the zones where animals are reared. This allows young animals to move to a zone for optimum thermal comfort. To design a localized heating system, the term $\phi_{m}$ (as used in Equation 5.2.1) would have to be factored into the calculation to account for heat flow between the internal zones. Localized heating usually uses radiant heat, such as infrared lamps (for piglets) or infrared gas catalytic radiant heaters (for broilers). These systems emit 70% of their heat by radiation and the remaining 30% by convection; the radiation component directly heats the animals and floor while the convection component heats the air.
Space heating systems create a more uniform thermal environment. They are easier to design, manage, and control than localized heating systems, but they tend to have higher energy consumption. Space heating is usually based on a convection system using warm air. Heat is produced in boilers or furnaces and then is transferred into the building when needed.
An alternative is to use direct air heating in the house. Direct heating can be cheaper to install, but requires more maintenance to deal with contaminants, dust, and moisture (Lindley and Whitaker, 1996). Also, there is a need to vent exhaust fumes and CO2, so ventilation flow rates have to be increased, requiring more energy consumption (Costantino et al., 2020). In other agricultural buildings, such as greenhouses, the warm air is recirculated to decrease energy consumption. In livestock houses this practice is strongly discouraged, since recirculation would concentrate the contaminants produced in the enclosure and make the IAQ even worse.
Table 5.2.3: List of energy uses and their percentages of the total energy consumption of different types of livestock houses in Italy (Rossi et al., 2013).

| Livestock House Operation | Electrical Energy (% of total) | Thermal Energy (% of total) |
|---|---|---|
| Broiler Houses | | |
| ventilation | 39% | - |
| supplemental heating | 27% | 96% |
| lighting | 9% | - |
| feeding distribution | 20% | - |
| litter distribution and manure removal | - | 3% |
| manure transportation and disposal | - | 1% |
| product collecting and package | 5% | - |
| Laying Hen Houses | | |
| ventilation | 44% | - |
| supplemental heating | - | - |
| lighting | 15% | - |
| feeding distribution | 5% | - |
| litter distribution and manure removal | 2% | 33% |
| manure treatment | 27% | - |
| manure transportation and disposal | - | 67% |
| product collecting and package | 7% | - |
| Pig Houses | | |
| ventilation and supplemental heating | 48% | 69% |
| lighting | 2% | - |
| feeding preparation | 11% | - |
| feeding distribution | 19% | - |
| litter care and manure removal | 4% | 1% |
| manure treatment | 4% | - |
| manure transportation and disposal | 12% | 30% |
| Dairy Cow Houses | | |
| ventilation | 20% | - |
| lighting | 8% | - |
| feeding | 17% | 52% |
| milking | 16% | 6% |
| milk cooling | 12% | - |
| litter care | - | 7% |
| manure removal | 8% | 5% |
| manure treatment | 18% | 4% |
| manure transportation and disposal | 1% | 26% |
Localized and space heating systems can be used together or coupled with floor heaters to improve the control of the indoor climate conditions. Floor heating is usually through hot water pipes or electric resistance cables buried directly in the floor, but this can cause greater evaporation and a rise in the air moisture content.
The most common energy sources for heating are electricity, natural gas, propane, and biomass. Solar energy represents an interesting solution for providing supplemental heating, but peak availability is during warm seasons and the daytime when heat demand is lowest.
Heat Recovery Systems
To maintain IAQ, indoor air is replaced by fresh outdoor air to dilute contaminants and decrease moisture content. During heating periods, every cubic meter of fresh air that is introduced inside the livestock house is heated to reach the indoor air set point temperature. The heat of the exhausted air is lost. When the outdoor air is cold, heating the fresh air requires considerable energy; ventilation accounts for 70% to 90% of the heat losses in typical livestock houses during the winter season (ASHRAE, 2011).
To improve energy performance especially in cold climates, heat recovery can be used. In livestock houses, air-to-air heat recovery systems are used to transfer sensible heat from an airstream at a high temperature (exhaust air) to an airstream at a low temperature (fresh supply air) (ASHRAE, 2012). The heat transfer happens through a heat exchange surface (a series of plates or tubes) that separates the two airstreams, avoiding the cross-contamination of fresh supply air with the contaminants in the exhaust air. The most common type of heat exchanger used in livestock houses is cross-flow (Figure 5.2.3). The recovered heat directly increases the temperature of the fresh supply air, decreasing the supplemental heat that is needed to reach the indoor air set point temperature. Heat recovery systems mainly transfer sensible heat but, under certain psychrometric conditions, even part of the latent heat of the exhaust air can be recovered. For example, when the outdoor air is very cool, the water vapor contained in the exhaust air condenses and releases the latent heat of condensation increasing the temperature of the fresh air.
In practice, heat exchanger effectiveness is the ratio between the actual transfer of energy and the maximum possible transfer between the airstreams (ASHRAE, 1991). In livestock houses this is usually between 60% and 80%, because of freezing and dust accumulation on the heat-exchanging surfaces (ASHRAE, 2011). A buildup of dust reduces the heat transfer between the airstreams and reduces the flow rate. In addition, gases and moisture in exhaust air can damage the heat-exchanging surface. Filtration, automatic washing, insulation, and defrost controls can be used to avoid problems with heat exchange.
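As a rough numerical illustration, the following minimal Python sketch applies the usual sensible effectiveness relation, where the supply air leaves the exchanger at Tout + ε(Texh − Tout), to estimate the supply air temperature and the heat recovered. The function names and the operating values (outdoor air at −5°C, exhaust air at 20°C, ε = 0.7) are assumptions for illustration, not data from this chapter.

```python
# Sketch: sensible heat recovery from exhaust air, assuming the common
# effectiveness definition eps = (T_sup_out - T_out) / (T_exh_in - T_out).
# All names and numbers are illustrative assumptions.

def supply_temp_after_recovery(t_out, t_exh, eps):
    """Temperature of the fresh air leaving the exchanger (deg C)."""
    return t_out + eps * (t_exh - t_out)

def recovered_heat_kw(flow_m3_h, t_out, t_sup, rho=1.2, cp=1010.0):
    """Sensible heat recovered (kW) for a given fresh-air flow rate."""
    flow_m3_s = flow_m3_h / 3600.0          # m3/h -> m3/s
    return rho * cp * flow_m3_s * (t_sup - t_out) / 1000.0

t_sup = supply_temp_after_recovery(t_out=-5.0, t_exh=20.0, eps=0.7)   # 12.5 deg C
print(round(t_sup, 1), round(recovered_heat_kw(10000.0, -5.0, t_sup), 1))
```

For these assumed conditions, about 59 kW of supplemental heating would be avoided for a 10,000 m3 h−1 fresh-air flow, which is why heat recovery is attractive in cold climates despite the fouling and frosting problems noted above.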
Cooling Animal Houses
Cooling Systems
In warmer conditions, cooling is required to reduce the indoor air temperature and to alleviate animal heat stress. Air flow driven by fans is used to remove the heat generated by animals and from solar radiation. At high indoor air temperatures and in heat stress situations, greater air velocities around the animals are preferred because the increased convective heat exchange reduces the skin temperature of the animals.
When the difference between outdoor air and indoor air temperatures is small, cooling ventilation is less effective because the needed air flow rates require air velocities too great for animal comfort. To overcome this problem, water cooling and evaporative cooling can be used (Lindley and Whitaker, 1996). Water cooling consists of sprinkling or dripping water directly on the animals to remove heat from their bodies through evaporation. Evaporative cooling uses heat from the indoor air to vaporize water and thus decrease indoor air temperature with either a fogging system or evaporative pads. Foggers release a mist of tiny water droplets directly inside the enclosure. Evaporative pads are used in livestock houses with exhaust ventilation systems (Figure 5.2.4). In these systems, exhaust fans force out the indoor air, creating a negative pressure difference between inside and outside the house. This pressure difference pulls the fresh outdoor air inside the house through the evaporative pads, decreasing its temperature by several degrees depending on the direct saturation effectiveness, $\varepsilon$ (ASHRAE, 2012) (Equation 5.2.20). From a technical point of view, $\varepsilon$ is the key performance parameter of an evaporative pad, and it ranges between 70% and 95% for commercially available evaporative pads. This value is directly proportional to the pad thickness (from 0.1 to 0.3 m) (ASHRAE, 2012) and inversely proportional to the air velocity through the pad. The highest efficiencies are with air velocity between 1.0 and 1.4 m s−1 (ASHRAE, 2011). The value of $\varepsilon$ is also influenced by the age and the maintenance of the pad; $\varepsilon$ can decrease to 30% in old and poorly maintained pads (Costantino et al., 2018).
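A minimal sketch of how $\varepsilon$ is applied in practice follows. It rearranges the usual definition of direct saturation effectiveness, ε = (Tdb,in − Tdb,out)/(Tdb,in − Twb,in), to estimate the air temperature leaving the pad; the inlet conditions are illustrative assumptions, not values from this chapter.

```python
# Sketch of the direct saturation effectiveness relation for an evaporative
# pad, rearranged for the dry-bulb temperature of the air leaving the pad.
# Inlet conditions below are assumed for illustration.

def pad_outlet_temp(t_db, t_wb, eps):
    """Dry-bulb temperature of air leaving the pad (deg C)."""
    return t_db - eps * (t_db - t_wb)

# 35 deg C outdoor air at 24 deg C wet bulb through a pad with eps = 0.80:
print(round(pad_outlet_temp(35.0, 24.0, 0.80), 1))  # -> 26.2 deg C
```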
Evaporative pads affect energy consumption in two ways. On the one hand, they decrease the temperature of the air that is used to ventilate the house, which means a reduction in the ventilation flow rate needed to maintain the indoor air setpoint temperature. On the other hand, they increase the pressure difference between the inside and outside the house, so for the same air flow rate, the fans in a livestock house equipped with evaporative pads require higher electricity consumption. Finally, the use of evaporative pads requires extra electrical energy due to the circulation pumps used to move the water from storage for wetting the top of the pads.
Ventilation Systems
The effectiveness of ventilation inside a livestock house depends on the selection, installation, and operation of the ventilation equipment, such as air inlets, outlets, control systems, and fans.
Fans are classified as centrifugal or axial, according to the direction of the airflow through the impeller (ASHRAE, 2012). Axial fans draw air parallel to the shaft axis (around which the blades rotate) and exhaust it in the same direction. Centrifugal fans exhaust air by deflection and centrifugal force. In centrifugal fans air enters next to the shaft due to the rotation of the impeller and then moves perpendicularly from the shaft to the opening where it is exhausted. Axial fans are usually used in livestock housing because the primary goal is to provide a high airflow rate and not to create a high-pressure difference across the fan. Fans cause considerable energy consumption in livestock houses (Costantino et al., 2016), as shown in Table 5.2.3, but are typically bought based on purchase cost, not operating costs. When fans are installed in the livestock houses, a reduction in efficiency has to be expected due to the wear of the mechanical connections (ASHRAE, 2012).
Examples
Example $1$
Example 1: Heat flow through a building envelope
Problem:
Determine the total steady-state transmission heat flow through the building envelope of the gable roof broiler house presented in Figure 5.2.5. The thermophysical properties of the envelope elements are shown in Table 5.2.4. For the calculation, assume the indoor air temperature is 23°C and the outdoor air temperature is 20°C.
Solution
The total transmission heat flow through the envelope should be calculated through Equation 5.2.15. In the summation, all the envelope elements of the broiler house must be considered. In this broiler house, the various products $(b_{tr,j} \cdot U_{j} \cdot A_{j})$ of the summation of Equation 5.2.15 are:
Table $4$: Boundary conditions of the example broiler house.

| Building Element | Area (m2) | U (W m−2 K−1) | btr (-) |
|---|---|---|---|
| North wall | 195 | 0.81 | 1 |
| South wall | 195 | 0.81 | 1 |
| East wall | 18 | 0.81 | 1 |
| West wall | 33 | 0.81 | 1 |
| Roof | 1320 | 1.17 | 1 |
| Floor | 1200 | 0.94 | 0.45 |
| Door (east) | 15 | 1.51 | 1 |
| North windows | 57 | 3.60 | 1 |
| South windows | 57 | 3.60 | 1 |
$\phi_{tr} = [\sum^{n}_{j=1}(b_{tr,j} \cdot U_{j} \cdot A_{j})] \cdot (T_{air,o} - T_{air,i})$ (Equation $15$)
$b_{tr,walls} \cdot U_{walls} \cdot A_{walls} = 1 \cdot 0.81 \frac{W}{m^{2} \cdot K} \cdot 441 m^{2} = 357.2 \frac{W}{K}$
$b_{tr,roof} \cdot U_{roof} \cdot A_{roof} = 1 \cdot 1.17 \frac{W}{m^{2} \cdot K} \cdot 1320 m^{2} = 1544.4 \frac{W}{K}$
$b_{tr, doors} \cdot U_{doors} \cdot A_{doors} = 1 \cdot 1.51 \frac{W}{m^{2} \cdot K} \cdot 15 m^{2} = 22.7 \frac{W}{K}$
$b_{tr, windows} \cdot U_{windows} \cdot A_{windows} = 1 \cdot 3.60 \frac{W}{m^{2} \cdot K} \cdot 114 m^{2} = 410.4 \frac{W}{K}$
The U-value of the floor of the broiler house is 0.94 W m−2 K−1. This value was calculated considering that the floor was made by a reinforced concrete screed and a waterproofing sheet directly in contact with the ground. In the transmission heat flow via ground, the $b_{tr}$ coefficient has to be considered. Considering that the floor of the broiler house is in direct contact with the ground, $b_{tr,floor}$ can be assumed equal to 0.45 (value from Table 5.2.2). The calculation is:
$b_{tr, floor} \cdot U_{floor} \cdot A_{floor} = 0.45 \cdot 0.94 \frac{W}{m^{2} \cdot K} \cdot 1200 m^{2} = 507.6 \frac{W}{K}$
Considering the previously calculated values, the sum is:
$\sum^{n}_{j=1} (b_{tr,j} \cdot U_{j} \cdot A_{j}) = 2842.3 \frac{W}{K}$
Finally, the heat flow can be calculated considering the temperature difference between inside and outside as:

$\phi_{tr} = 2842.3 \frac{W}{K} \cdot (20^\circ C - 23^\circ C) = -8526.9\ W$

The heat flow is negative because the indoor air is warmer than the outdoor air, so heat is lost through the envelope.
Example $2$
Example 2: Sensible heat flow in a broiler house
Problem:
Determine the sensible heat flow produced at the house level by a flock of 14,000 broilers at an indoor air temperature of 23°C. The average weight of the broilers is 1.3 kg.
Solution
The total heat production $\phi_{a,tot}$ from a broiler flock at an indoor air temperature of 20°C is defined by Equation 5.2.6 that reads
$\phi_{a,tot}=10.62 \cdot w_{a}^{0.75} \cdot n_{a}$ (Equation $6$)
Considering the given boundary conditions, Equation 5.2.6 becomes:
$\phi_{a,tot}=10.62 \cdot 1.3^{0.75} \cdot 14,000 = 181,013.1\ W$
Before calculating $\phi_{a}$, nhpu has to be calculated according to Equation 5.2.14:
$n_{hpu} = \frac{\phi_{a,tot}}{1000}$ (Equation $14$)
$n_{hpu} = \frac{181,013.1\ W}{1000 \frac{W}{hpu}} = 181.01\ hpu$
Finally, $\phi_{a}$ calculated at 23°C of $T_{air,i}$ is (from Equation 5.2.10):
$\phi_{a} = \{0.61 \cdot [1000+20 \cdot(20-T_{air,i})] -0.228 \cdot T_{air,i}^{2} \} \cdot n_{hpu}$ (Equation $10$)
$\phi_{a} = \{0.61 \cdot [1000+20 \cdot(20-23^\circ C)] -0.228 \cdot (23^\circ C)^{2} \} \cdot 181.01 = 81,959.2\ W$
The broiler flock in this example produces around 82 kW of sensible heat.
Example $3$
Example 3: Solar based heat flow
Problem:
Determine the value of $\phi_{sol}$ considering the boundary conditions shown in Table 5.2.5 and using the same broiler house as in Examples 5.2.1 and 5.2.2.
Solution
The first step for determining $\phi_{sol}$ is to calculate $\phi_{sol,op}$ for each opaque building element according to Equation 5.2.17, as:
Table $5$: Boundary conditions of the example broiler house.

| Building Element | Area (m2) | U (W m−2 K−1) | α (-) | ggl (-) | Isol (W m−2) |
|---|---|---|---|---|---|
| North wall | 195 | 0.81 | 0.3 | - | 142 |
| South wall | 195 | 0.81 | 0.3 | - | 559 |
| East wall | 18 | 0.81 | 0.3 | - | 277 |
| West wall | 33 | 0.81 | 0.3 | - | 142 |
| Roof | 1320 | 1.17 | 0.9 | - | 721 |
| Floor | 1200 | 0.94 | - | - | - |
| Door (east) | 15 | 1.51 | 0.9 | - | 277 |
| North windows | 57 | 3.60 | - | 0.6 | 142 |
| South windows | 57 | 3.60 | - | 0.6 | 559 |
$\phi_{sol,op,q} = A_{q} \cdot U_{q} \cdot \alpha_{q} \cdot R_{ex} \cdot I_{sol,q} \cdot F_{sh,q}$ (Equation $17$)
$\phi_{sol,op,wall, N} = 195 m^{2} \cdot 0.81 \frac{W}{m^{2}\cdot K} \cdot 0.3 \cdot 0.04 \frac{m^{2} \cdot K}{W} \cdot 142 \frac{W}{m^{2}} = 269.1\ W$
$\phi_{sol,op,wall, S} = 195 m^{2} \cdot 0.81 \frac{W}{m^{2}\cdot K} \cdot 0.3 \cdot 0.04 \frac{m^{2} \cdot K}{W} \cdot 559 \frac{W}{m^{2}} = 1059.5\ W$
$\phi_{sol,op,wall, E} = 18 m^{2} \cdot 0.81 \frac{W}{m^{2}\cdot K} \cdot 0.3 \cdot 0.04 \frac{m^{2} \cdot K}{W} \cdot 277 \frac{W}{m^{2}} = 48.5\ W$
$\phi_{sol,op,wall, W} = 33 m^{2} \cdot 0.81 \frac{W}{m^{2}\cdot K} \cdot 0.3 \cdot 0.04 \frac{m^{2} \cdot K}{W} \cdot 142 \frac{W}{m^{2}} = 45.5\ W$
$\phi_{sol,op,roof} = 1320 m^{2} \cdot 1.17 \frac{W}{m^{2}\cdot K} \cdot 0.9 \cdot 0.04 \frac{m^{2} \cdot K}{W} \cdot 721 \frac{W}{m^{2}} = 40,086.4\ W$
$\phi_{sol,op,door} = 15 m^{2} \cdot 1.51 \frac{W}{m^{2}\cdot K} \cdot 0.9 \cdot 0.04 \frac{m^{2} \cdot K}{W} \cdot 277 \frac{W}{m^{2}} = 225.9\ W$
The sum of the calculated $\phi_{sol,op,q}$ values is:
$\sum^{q}_{n=1}\phi_{sol,op,q} = 41,734.9\ W$
The solar heat loads on glazed components can be estimated using Equation 5.2.18:
$\phi_{s,gl,k}=A_{k} \cdot g_{gl} \cdot I_{sol,k} \cdot (1-F_{fr}) \cdot F_{sh,k} \cdot F_{sh,gl,k}$ (Equation $18$)
Considering the given boundary conditions, and taking $F_{fr}=0.2$ with the shading factors $F_{sh,k}$ and $F_{sh,gl,k}$ equal to 1, $\phi_{sol,gl}$ for the glazed elements can be computed as:
$\phi_{sol,gl,win,N} = 57\ m^{2} \cdot 0.6 \cdot 142 \frac{W}{m^{2}} \cdot (1-0.2) = 3885.1\ W$
$\phi_{sol,gl,win,S} = 57\ m^{2} \cdot 0.6 \cdot 559 \frac{W}{m^{2}} \cdot (1-0.2) = 15,294.2\ W$
The sum of the calculated $\phi_{sol,gl,k}$ values is:
$\sum^{k}_{n=1} \phi_{sol,gl,k} = 19,179.3\ W$
Finally, the total solar heat load is:
$\phi_{sol} = 41,734.9\ W + 19,179.3\ W = 60,914.2\ W$
Example $4$
Example 4: Ventilation flow rate for temperature control
Problem:
Determine the volumetric ventilation flow rate (m3 h−1) that has to be provided by the exhaust fans of the broiler house to maintain the indoor air temperature at 23°C. For the calculation, consider the absence of supplemental heating flow ($\phi_{H}=0\ W$) and the heat flows calculated in Example 5.2.1 ($\phi_{tr}$), Example 5.2.2 ($\phi_{a}$), and Example 5.2.3 ($\phi_{sol}$). The supply air temperature is the same as the outdoor air temperature (20°C, as in Example 5.2.1).
Solution
In the previous examples the following heat flows were calculated:
$\phi_{tr} = -8,526.9\ W = - 8.5\ kW$
$\phi_{a} = 81,959.2\ W = 82.0\ kW$
$\phi_{sol} = 60,914.2\ W = 60.9\ kW$
The text of the problem states that no supplemental heating flow is present, therefore:
$\phi_{H} = 0\ kW$
Considering the given boundary conditions, the energy balance of Equation 5.2.3 can be written as:
$82.0\ kW - 8.5\ kW + 0\ kW + 60.9\ kW + \phi_{v} = 0$
That becomes:
$\phi_{v} = -134.4\ kW$
Equation 5.2.19 can be rearranged to express $\dot{V}$ (the unknown of the problem, in m3 h−1) as:
$\dot{V} = \frac{\phi_{v}}{\rho_{air} \cdot c_{air} \cdot (T_{air,sup} - T_{air, i})}$
The value of $\rho_{air}$ is assumed equal to 1.2 kg m−3 and $c_{air}$ equal to 2.8 × 10−4 kWh kg−1 K−1 (1010 J kg−1 K−1), even though for more detailed calculation $\rho_{air}$ should be evaluated at the given indoor air temperature and atmospheric pressure. The ventilation air flow is provided with outdoor air, therefore, $T_{air,sup}$ is equal to $T_{air,o}$. Inputting the previously calculated value of $\phi_{v}$, the previous equation reads:
$\dot{V} = \frac{-134.4\ kW}{1.2 \frac{kg}{m^{3}} \cdot 2.8 \cdot 10^{-4} \frac{kWh}{kg \cdot K} \cdot (20 ^\circ C - 23 ^\circ C)} = 133,333\frac{m^{3}}{h}$
To maintain the required indoor air temperature inside the livestock house, around 133,000 m3 h−1 of fresh outdoor air should be provided by the ventilation system.
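The same balance can be scripted. The minimal sketch below reproduces the arithmetic of Examples 5.2.1 through 5.2.4; the function and variable names are ours, not from the cited standards, and the constants follow the assumptions stated in the example.

```python
# Sketch reproducing the Example 4 balance: phi_a + phi_tr + phi_H + phi_sol
# + phi_v = 0, then V_dot = phi_v / (rho * c * (T_sup - T_in)). Units follow
# the example (kW; kWh kg^-1 K^-1); only the function names are invented here.

RHO_AIR = 1.2    # kg m^-3, assumed as in the example
C_AIR = 2.8e-4   # kWh kg^-1 K^-1 (~1010 J kg^-1 K^-1)

def ventilation_flow(phi_a, phi_tr, phi_h, phi_sol, t_sup, t_in):
    """Volumetric flow rate (m^3 h^-1) that closes the sensible balance."""
    phi_v = -(phi_a + phi_tr + phi_h + phi_sol)         # kW
    return phi_v / (RHO_AIR * C_AIR * (t_sup - t_in))   # m^3 h^-1

print(round(ventilation_flow(82.0, -8.5, 0.0, 60.9, t_sup=20.0, t_in=23.0)))
# -> 133333 (about 133,000 m^3 h^-1, matching the example)
```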
Image Credits
Figure 1. Fabrizio, E. (CC By 4.0). (2020). The sensible heat balance of equation 3 applied to a generic livestock house.
Figure 2. Fabrizio, E. (CC By 4.0). (2020). The vapor mass balance of equation 5 applied to a generic livestock house.
Figure 3. Costantino, A. (CC By 4.0). (2020). Diagram of the heat exchange surface in an air-to-air heat recovery system.
Figure 4. Costantino, A. (CC By 4.0). (2020). Diagram of a broiler house equipped with evaporative pads.
Figure 5. Costantino, A. (CC By 4.0). (2020). Diagram of the example broiler house with the main geometrical dimensions.
References
Albright, L. (1990). Environmental control for animals and plants. St. Joseph, MI: ASAE.
ASABE Standards. (2012). ASAE EP270.5 DEC1986: Design of ventilation systems for poultry and livestock shelters. St. Joseph, MI: ASABE.
ASHRAE. (2012). 2012 ASHRAE handbook: HVAC systems and equipment. Atlanta, GA: ASHRAE.
ASHRAE. (2011). 2011 ASHRAE handbook: HVAC applications. Atlanta, GA: ASHRAE.
ASHRAE. (1991). ANSI/ ASHRAE Standard 84-1991: Method of testing air-to-air heat exchangers. Atlanta, GA: ASHRAE.
CIGR. (1999). CIGR handbook of agricultural engineering (Vol. II). St. Joseph, MI: ASAE.
Costantino, A., Ballarini, I., & Fabrizio, E. (2017). Comparison between simplified and detailed methods for the calculation of heating and cooling energy needs of livestock housing: A case study. In Building Simulation Applications (pp. 193-200). Bolzano, Italy: Free University of Bozen-Bolzano.
Costantino, A., Fabrizio, E., Biglia, A., Cornale, P., & Battaglini, L. (2016). Energy use for climate control of animal houses: The state of the art in Europe. Energy Proc. 101, 184-191. https://doi.org/10.1016/j.egypro.2016.11.024
Costantino, A., Fabrizio, E., Ghiggini, A., & Bariani, M. (2018). Climate control in broiler houses: A thermal model for the calculation of the energy use and indoor environmental conditions. Energy Build. 169, 110-126. https://doi.org/10.1016/j.enbuild.2018.03.056
Costantino, A., Fabrizio, E., Villagrá, A., Estellés, F., & Calvet, S. (2020). The reduction of gas concentrations in broiler houses through ventilation: Assessment of the thermal and electrical energy consumption. Biosyst. Eng. https://doi.org/10.1016/j.biosystemseng.2020.01.002.
Esmay, M. E., & Dixon, J. E. (1986). Environmental control for agricultural buildings. Westport, CT: AVI.
European Committee for Standardisation. (2009). EN 12831: Heating systems in buildings—Method for calculation of the design heat load. Brussels, Belgium: CEN.
European Committee for Standardization. (2007). EN 13789: Thermal performance of buildings—Transmission and ventilation heat transfer coefficients—Calculation method. Brussels, Belgium: CEN.
Hamilton, J., Negnevitsky, M., & Wang, X. (2016). Thermal analysis of a single-storey livestock barn. Adv. Mech. Eng. 8(4). https://doi.org/10.1177/1687814016643456.
Hellickson, M. A., & Walker, J. N. (1983). Ventilation of agricultural structures. St. Joseph, MI: ASAE.
ISO. (2017). ISO 52016-1:2017: Energy performance of buildings—Energy needs for heating and cooling, internal temperatures and sensible and latent heat loads—Part 1: Calculation procedures. International Organization for Standardization.
Liberati, P., & Zappavigna, P. (2005). A computer model for optimisation of the internal climate in animal housing design. In Livestock Environment VII, Proc. Int. Symp. St. Joseph, MI: ASABE.
Lindley, J. A., & Whitaker, J. H. (1996). Agricultural buildings and structures. St. Joseph, MI: ASAE.
Midwest Plan Service. (1987). Structures and environment handbook (11th ed.). Ames, IA: Midwest Plan Service.
OECD. (2008). Environmental performance of agriculture in OECD countries since 1990. Paris, France: OECD.
Panagakis, P., & Axaopoulos, P. (2008). Comparing fogging strategies for pig rearing using simulations to determine apparent heat-stress indices. Biosyst. Eng. 99(1), 112-118. https://doi.org/10.1016/j.biosystemseng.2007.10.007.
Pedersen, S., & Sällvik, K. (2002). 4th report of working group on climatization of animal houses—Heat and moisture production at animal and house levels. Horsens, Denmark: Danish Institute of Agricultural Sciences.
Rossi, P., Gastaldo, A., Riva, G., & de Carolis, C. (2013). Progetto re sole—Linee guida per il risparmio energetico e per la produzione di energia da fonte solare negli allevamenti zootecnici (in Italian). Reggio Emilia, Italy: CRPA.
St-Pierre, N. R., Cobanov, B., & Schnitkey, G. (2003). Economic losses from heat stress by US livestock industries. J. Dairy Sci. 86, E52-E77. https://doi.org/10.3168/jds.s0022-0302(03)74040-5.
M. Elena Castell-Perez
Department of Biological and Agricultural Engineering
Texas A&M University, College Station, TX, USA
List of Key Terms
Sensible heat, Specific heat, Latent heat, Freezing point, Conduction, Convection, Cooling load, Freezing rate and time, Freeze drying
Introduction
Freezing is one of the oldest and most common unit operations that apply heat and mass transfer principles to food. Engineers must know these principles to analyze and design a suitable freezing process and to select proper equipment by establishing system capacity requirements.
Freezing is a common process for long-term preservation of foods. The fundamental principle is the crystallization of most of the water—and some of the solutes—into ice by reducing the temperature of the food to −18 ± 3°C or lower (a standard commercial freezing target temperature) using the concepts of sensible and latent heat. These principles also apply to freezing of other types of materials that contain water.
If done properly, freezing is the best way to preserve foods without adding preservatives. Freezing aids preservation by reducing the rate of physical, chemical, biochemical, and microbiological reactions in the food. The liquid water-to-ice phase change reduces the availability of the water in the food to participate in any of these reactions. Therefore, a frozen food is more stable and can maintain its quality attributes throughout transportation and storage.
Freezing is commonly used to extend the shelf life of a wide variety of foods, such as fruits and vegetables, meats, fish, dairy, and prepared foods (e.g., ice cream, microwavable meals, pizzas) (George, 1993; James and James, 2014). The great demand for frozen food creates the need for proper knowledge of the mechanics of freezing and material thermophysical properties (Filip et al., 2010).
Outcomes
After reading this chapter, you will be able to:
• Describe the engineering principles of freezing of foods
• Describe how food product properties, such as freezing point temperature, size, shape, and composition, as well as packaging, affect the freezing process
• Describe how process factors, such as freezing medium temperature and convective heat transfer coefficient, affect the freezing process
• Calculate values of food properties and other factors required to design a freezing process
• Calculate freezing times
• Select a freezer for a specific application
Concepts
Process of Freezing
Freezing is a physical process by which the temperature of a material is reduced below its freezing point temperature. Two heat energy principles are involved: sensible heat and latent heat. When the material is at a temperature above its freezing point, first the sensible heat is removed until the material reaches its freezing point; second, the latent heat of crystallization (fusion) is removed, and finally, more sensible heat is removed until the material reaches the target temperature below its freezing point.
Sensible heat is the amount of heat energy that must be added or removed from a specific mass of material to change its temperature to a target value. It is referred to as “sensible” because one can usually sense the temperature surrounding the material during a heating or cooling process. Latent heat is the amount of energy that must be removed in order to change the phase of water in the material. During the phase change, there is no change in the temperature of the material because all the energy is used in the phase change. In the case of freezing, this is the latent heat of fusion. Heat is given off as the product crystallizes at constant temperature.
For pure water, the latent heat of fusion is a constant with a value of ~334 kJ per kg of water. For food products, the latent heat of fusion can be estimated as
$\lambda = M_{water} \times \lambda_{w}$
where λ = latent heat of fusion of food product (kJ/kg)
Mwater = amount of water in product, or water content (decimal)
λw = latent heat of fusion of pure water (~334 kJ/kg)
The sensible and latent heat energy in the freezing of foods are quantified by Equations 6.1.2 through 6.1.9. Table 6.1.1 presents values of the latent heat of several foods with specific moisture contents.
Sensible Heat and Specific Heat
The sensible heat to change the temperature of a food is related to the specific heat of the food, its mass, and its temperature:
$Q_{s} = mC_{p}(T_{2}-T_{1})$
where QS = sensible heat to change temperature of a food (kJ)
m = mass of the food (kg)
Cp = specific heat of the food (kJ/kg°C or kJ/kgK)
T1 = initial temperature of food (°C)
T2 = final temperature of food (°C)
The specific heat (also called specific heat capacity), Cp, of liquid water (above freezing) is 4.186 kJ/kg°C or 1 calorie/g°C. In foods, specific heat is a property that changes with the food’s water (moisture) content. Usually, the higher the moisture or water content, the larger the value of Cp, and vice versa. As the water in the food reaches its freezing point temperature, the water begins to crystallize and turn into ice. When almost all of the water is frozen, the specific heat of the food decreases by about half (Cp of ice = 2.108 kJ/kg°C). Therefore, one must be careful when using Equation 6.1.2 to use the correct value of Cp (above or below freezing; see Example 6.1.1 and Equations 6.1.5-6.1.7).
Values of Cp of a wide range of foods at a particular moisture content, above and below freezing, are available (Mohsenin, 1980; Choi and Okos, 1986; ASHRAE, 2018; The Engineering Toolbox, 2019; see Table 6.1.1 for some examples). When values of Cp or λ of the target foods are unavailable, they can be determined using several methods, ranging from standard calorimetry to differential scanning, ultrasound, and electrical methods (Mohsenin, 1980; Chen, 1985; Klinbun and Rattanadecho, 2017).
When these properties cannot be measured (e.g., because the sample is too small or heterogeneous, or equipment is unavailable), a wide range of models have been developed to predict the properties of food and agricultural materials as a function of temperature and composition. For instance, if detailed product composition data are not available, Equation 6.1.3 can be used to approximate Cp for temperatures above freezing:
Table $1$: Specific heat (Cp) and latent heat of fusion (λ) of selected foods estimated based on composition (ASHRAE, 2018).

| Food | Moisture Content (%) | Cp Above Freezing (kJ/kg°C) | Cp Below Freezing (kJ/kg°C) | λ (kJ/kg) | Tf, Initial Freezing Temperature (°C)[a] |
|---|---|---|---|---|---|
| Carrots | 87.79 | 3.92 | 2.00 | 293 | −1.39 |
| Green peas | 78.86 | 3.75 | 1.98 | 263 | −0.61 |
| Honeydew melon | 89.66 | 3.92 | 1.86 | 299 | −0.89 |
| Strawberries | 91.57 | 4.00 | 1.84 | 306 | −0.78 |
| Cod (whole) | 81.22 | 3.78 | 2.14 | 271 | −2.22 |
| Chicken | 65.99 | 4.34 | 3.62 | 220 | −2.78 |

[a] Temperature at which water in food begins to freeze; freezing point temperature.
$C_{p,unfrozen} = C_{pw}X_{w} + C_{ps}X_{s}$ (Equation $3$)
where Cpw = specific heat of the water component (kJ/kg°C)
Xw = mass fraction of the water component (decimal)
Cps = specific heat of the solids component (kJ/kg°C)
Xs = mass fraction of the solids component (decimal)
As a material balance, Xw = 1 – Xs. This method approximates the food as a binary system composed of only water and solids. When the main solids component is known, the Cp of the solids (Cps) can be estimated from published data (e.g., Table 6.1.2). For instance, if the food is mostly water and carbohydrates (e.g., a fruit), Cps can be approximated as 1.5488 kJ/kgK (from Table 6.1.2). If the target food is composed mostly of protein, then Cps can be approximated as 2.0082 kJ/kg°C.
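As a quick illustration of this binary approximation, the sketch below computes Cp for a hypothetical fruit that is 90% water with carbohydrate-dominated solids; the input values are assumptions chosen only to exercise Equation 6.1.3.

```python
# Sketch of the binary (water + solids) approximation of Equation 3,
# Cp = Cpw*Xw + Cps*Xs, for an assumed fruit that is 90% water whose
# solids are mostly carbohydrate (Cps ~ 1.5488 kJ/kg K, Table 2).

def cp_binary(x_w, cp_w=4.19, cp_s=1.5488):
    """Specific heat above freezing (kJ/kg K) from the water mass fraction."""
    return cp_w * x_w + cp_s * (1.0 - x_w)   # Xs = 1 - Xw (material balance)

print(round(cp_binary(0.90), 2))  # -> ~3.93 kJ/kg K
```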
Table $2$: Specific heat, Cp, of food components for −40°C to 150°C.

| Food Component | Specific Heat (kJ/kg°C) |
|---|---|
| Protein | Cp = 2.0082 + 1.2089 × 10−3T − 1.3129 × 10−6T2 |
| Fat | Cp = 1.9842 + 1.4733 × 10−3T − 4.8008 × 10−6T2 |
| Carbohydrate | Cp = 1.5488 + 1.9625 × 10−3T − 5.9399 × 10−6T2 |
| Fiber | Cp = 1.8459 + 1.8306 × 10−3T − 4.6509 × 10−6T2 |
| Ash | Cp = 1.0926 + 1.8896 × 10−3T − 3.6817 × 10−6T2 |

From Choi and Okos, 1986; ASHRAE, 2018. T in °C. An extensive database on food products composition is available in USDA (2019).
The specific heat of the food above its freezing point can be calculated based on its composition and the mass average specific heats of the different components as:
$C_{p} = \sum^{n}_{i=1} X_{i}C_{pi}$
where Xi = mass fraction of component i (decimal, not percentage). For example, for water, Xwater = Mwater /M where M = total mass of product
i = component (water, protein, fat, carbohydrate, fiber, ash)
Cpi = specific heat of component i estimated at a particular temperature value (kJ/kgK) (from Table 6.1.2)
In the case of water, separate equations are available for liquid water at temperatures below (Equation 6.1.5) and above (Equation 6.1.6) freezing, while one equation applies for ice at temperatures below freezing (Equation 6.1.7) (Choi and Okos, 1986):
For water −40°C to 0°C:
$C_{p} = 4.1289-5.3062\times10^{-3}T+9.9516\times 10^{-4} T^{2}$
For water 0°C to 150°C:
$C_{p} = 4.1289-9.0864\times10^{-5}T+5.4731\times 10^{-6} T^{2}$
For ice −40°C to 0°C:
$C_{p} = 2.0623+6.0769\times10^{-3}T$
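These three correlations are easy to implement directly. The following minimal sketch codes Equations 6.1.5 through 6.1.7 as small helper functions; the function names are ours.

```python
# Sketch of Equations 5-7 (Choi and Okos, 1986); T in deg C, Cp in kJ/kg K.

def cp_water_below_0(t):
    """Liquid water, -40 to 0 deg C (Eq. 5)."""
    return 4.1289 - 5.3062e-3 * t + 9.9516e-4 * t**2

def cp_water_above_0(t):
    """Liquid water, 0 to 150 deg C (Eq. 6)."""
    return 4.1289 - 9.0864e-5 * t + 5.4731e-6 * t**2

def cp_ice(t):
    """Ice, -40 to 0 deg C (Eq. 7)."""
    return 2.0623 + 6.0769e-3 * t

print(round(cp_water_above_0(20.0), 3), round(cp_ice(-20.0), 3))
# -> 4.129 (liquid water at 20 C) and 1.941 (ice at -20 C)
```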
Many predictive models have been developed for the calculation of specific heat of various food products. Some examples are presented in Table 6.1.3. An excellent description of these and other predictive models is presented in Mohsenin (1980).
Table $3$: Examples of predictive models for calculation of specific heat of foods.

| Model, Source | Equation (Cp in kJ/kgK) |
|---|---|
| Siebel (1892), above freezing[a] | Cp = 0.837 + 3.348Xw |
| Siebel (1892), below freezing | Cp = 0.837 + 1.256Xw |
| Chen (1985), above freezing[b] | Cp = 4.19 − 2.30Xs − 0.628Xs3 |
| Chen (1985), below freezing[c] | Cp = 1.55 + 1.26Xs + Xs[RT02/(MsT2)] |
| Choi and Okos (1986) | Cp = 4.180Xw + 1.711Xprotein + 1.928Xfat + 1.547Xcarbohydrates + 0.908Xash |

[a] Xw = moisture content, decimal; [b] Xs = mass fraction of solids, decimal; [c] R = universal gas constant, 8.314 kJ/kmol K; T0 = freezing point temperature of water, K; Ms = relative molecular mass of soluble solids in food; T = temperature, K.
Several models, such as a modified version of the model by Chen (1985), are available for simple calculation of the specific heat of a frozen food:
$C_{p,frozen} = 1.55+1.26X_{s}+\frac{(X_{w0}-X_{b})\lambda_{w}T_{f}}{T^{2}}$
where Cp,frozen = apparent specific heat of frozen food (kJ/kgK)
Xs = mass fraction of solids (decimal)
Xw0 = mass fraction of water in the unfrozen food (decimal)
Xb = bound water (decimal); this parameter can be approximated with great accuracy as Xb ~ 0.4Xp (Schwartzberg, 1976) with Xp = mass fraction of protein (decimal)
λw = latent heat of fusion of water (~334 kJ/kg)
Tf = freezing point of water = 0.01°C (can be approximated to 0.00°C)
T = food temperature (°C)
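A minimal sketch of Equation 6.1.8 follows, using the variable definitions above; the function name and the usage values (a high-moisture fruit at −20°C) are illustrative assumptions, and the printed result is not taken from Table 6.1.1.

```python
# Sketch of the modified Chen model (Equation 8). Inputs follow the variable
# list above; the function name and example values are ours, not the source's.

def cp_frozen(x_s, x_w0, x_protein, t, t_f=0.01, lam_w=334.0):
    """Apparent specific heat of a frozen food (kJ/kg K) at t (deg C)."""
    x_b = 0.4 * x_protein   # bound water, Xb ~ 0.4*Xp (Schwartzberg, 1976)
    return 1.55 + 1.26 * x_s + (x_w0 - x_b) * lam_w * t_f / t**2

# e.g., an assumed fruit (89.66% water, 0.46% protein) at -20 deg C:
print(round(cp_frozen(x_s=0.1034, x_w0=0.8966, x_protein=0.0046, t=-20.0), 2))
```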
Latent Heat
The latent heat of a food product is:
$Q_{L}=m\lambda$
where QL = heat energy removed to freeze the product at its freezing point; also known as latent heat energy (kJ)
m = mass of product (kg)
λ = latent heat of fusion of product (kJ/kg)
For water, λ is approximated as 334 kJ/kg. Latent heat values for many food materials are also available (ASHRAE, 2018; Table 6.1.1).
Freezing Point Temperature and Freezing Point Depression
The freezing point temperature, or initial freezing point, of a food product is defined as the temperature at which ice crystals begin to form. Knowledge of this property of foods is essential for proper design of frozen storage and freezing processes because it affects the amount of energy required to reduce the food’s temperature to a specific value below freezing.
Although most foods contain water that turns into ice during freezing, the initial freezing point of most foods ranges from −0.5°C to −2.2°C (Pham, 1987; ASHRAE, 2018); values given in tables are usually average freezing temperatures. Foods freeze at temperatures lower than the freezing point of pure water (0.01°C, although most calculations assume 0.0°C) because the water in foods is not pure: as heat energy is removed from the food, the solute concentration in the remaining unfrozen water increases and depresses (lowers) the freezing point. Therefore, the food begins to freeze at temperatures lower than 0 to 0.01°C (Table 6.1.1). This is called the freezing point depression (Figure 6.1.1).
In general, 1 g-mol of soluble matter will decrease the freezing point of the product by approximately 1°C (Singh and Heldman, 2013). Consequently, the engineer should estimate the freezing point of the specific product and not assume that the food product will freeze at 0°C.
Unfrozen or Bound Water
Water that is bound to the solids in food cannot be frozen. The percent of unfrozen (bound) water at −40°C, a temperature at which most of the water is frozen, ranges from 3% to 46%. This quantity is necessary to determine the heat content of a food (i.e., enthalpy) when exposed to temperatures that cause a phase change; in other words, its latent heat of fusion, λ.
A freezing point depression equation allows for prediction of the relationship between the unfrozen water fraction within the food (XA) and temperature in a binary solution (i.e., water and solids mixture) over the range from −40°C to 40°C (Heldman, 1974; Chen, 1985; Pham, 1987):
$\ln X_{A}=\frac{\lambda}{R}\left(\frac{1}{T_{0}}-\frac{1}{T_{f}}\right)$
where XA = molar fraction of liquid (water) in product A (decimal). (The molar fraction is the number of moles of the liquid divided by the total number of moles of the mixture.)
λ = molar latent heat of fusion of water (6,003 J/mol)
R = universal gas constant (8.314 J/mol K)
T0 = freezing point of pure water (K)
Tf = freezing point of food (K)
XA is calculated as
$X_{A} = \frac{\frac{m_{A}}{M_{A}}}{\frac{m_{A}}{M_{A}}+\frac{m_{s}}{M_{s}}}$
where mA = mass of water in food, i.e., moisture content (decimal)
MA = molecular weight of water (18 g/mol)
ms = mass of solute in product (decimal)
Ms = molecular weight of solute (g/mol)
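Equations 6.1.10 and 6.1.11 can be combined to estimate the initial freezing point from composition. The sketch below solves Equation 6.1.10 for Tf; treating the solute as sucrose (342 g/mol) is our assumption, used only to exercise the formula.

```python
import math

# Sketch of Equations 10-11: estimate the initial freezing point of a food
# from its water and solute content. Constants follow the variable list above.

LAM = 6003.0   # molar latent heat of fusion of water (J/mol)
R = 8.314      # universal gas constant (J/mol K)
T0 = 273.16    # freezing point of pure water (K)

def freezing_point_c(m_water, m_solute, mw_solute, mw_water=18.0):
    """Initial freezing point (deg C) from Eq. 10 solved for Tf."""
    x_a = (m_water / mw_water) / (m_water / mw_water + m_solute / mw_solute)
    t_f = 1.0 / (1.0 / T0 - (R / LAM) * math.log(x_a))
    return t_f - 273.15

# e.g., 88% water, 12% solute treated as sucrose (342 g/mol, our assumption):
print(round(freezing_point_c(0.88, 0.12, 342.0), 2))  # -> about -0.73 deg C
```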
Physics of Freezing: Heat Transfer Modes
During freezing of a material, heat is removed within the food by conduction and at its surface by convection, radiation, and evaporation. In practice, these four modes of heat transfer occur simultaneously but with different levels of significance (James and James, 2014). The contributions to heat transfer by radiation and evaporation are much smaller than for the other modes and, therefore, are assumed negligible (Cleland, 2003).
Heat transfer problems can be defined as steady- or unsteady-state situations. During a steady-state process, the temperature within a system (e.g., the food) only changes with location. Hence, temperature does not change with time at that particular location. This is the equilibrium state of a system. One example would be the temperature inside an oven once it has reached the target heating temperature after the food is placed inside the oven. On the other hand, an unsteady-state process (also known as a transient heat transfer problem) is one in which the temperature within the system (e.g., the food) changes with both time and location (the surface, the center, or any distance within the food). Freezing of a product until its center reaches the target frozen storage temperature is a typical unsteady-state problem while storage of a frozen product is a steady-state situation.
Heat Transfer by Conduction
In general, the rate of heat transfer within the food is dominated by conduction and calculated as
$Q_{conduction} = kA \Delta T/\Delta x$
where Qconduction = rate of heat energy transferred through a solid by conduction (W)
k = thermal conductivity of the food (W/m°C)
A = surface area of the food (m2)
ΔT = temperature difference within the food (°C)
Δx = thickness of the food (m)
Equation 6.1.12 is valid for one-dimensional heat transfer through a rectangular object of thickness Δx under steady-state conditions (i.e., equilibrium). Variations of Equation 6.1.12 have been developed for other geometries and are available in heat transfer textbooks.
Heat Transfer by Convection
Convection controls the rate of heat transfer between the food and its surroundings and is expressed as
$Q_{convection} = hA \Delta T$
where Qconvection = rate of heat energy transferred by convection (W) from the warmer surface of a solid food to a colder moving fluid (air, water, etc.) during cooling
h = convective heat transfer coefficient (W/m2°C)
A = surface area of the solid food (m2)
ΔT = temperature difference between the surface of the solid food and the surrounding medium (air, water) = Tsurface − Tmedium (°C)
The convective heat transfer coefficient, h, is a function of the type of freezing equipment and not of the type of material being frozen. The greater the value of h, the greater the transfer of heat energy from the food’s surface to the cooling medium and the faster the cooling/freezing process at the surface of the food. Measurement and calculation of h values is a function of many factors (James and James, 2014; Pham, 2014). In the case of freezing, the convective heat transfer coefficient varies with selected air temperature and velocity. Table 6.1.4 shows values of h for different types of equipment commonly used in the food industry.
Table $4$: Values of convective heat transfer coefficient, h, and operating temperature for different types of equipment used in food freezing operations.

| Freezing Equipment | h (W/m2K) | Operating (ambient) Freezing Temperature, Ta (°C) |
|---|---|---|
| Still air (batch) | 5 to 20 | −35 to −37 |
| Air blast | 10 to 200 | −20 to −40 |
| Impingement | 50 to 200 | −40 |
| Spiral belt | 25 to 50 | −40 |
| Fluidized bed | 90 to 140 | −40 |
| Plate | 100 to 500 | −40 |
| Immersion | 100 to 500 | −50 to −70 |
| Cryogenic | 1,500 | −50 to −196 |
Design Parameters: Cooling Load, Freezing Rate, and Freezing Time
The engineer in charge of selecting a cooler or a freezer for a specific type of food needs to know two parameters: the cooling load and the freezing rate, which is related to freezing time.
Cooling Load
The cooling load, also called refrigeration load requirement, is the amount of heat energy that must be removed from the food or the frozen storage space. Here we assume that the rate of heat removed from the product (amount of heat energy per unit time) accounts for the majority of the refrigeration load requirement and that other refrigeration loads, such as those due to lights, machinery, and people in the refrigerated space can be neglected (James and James, 2014). Therefore, the rate of heat transfer between the food and the surrounding cooling medium at any time can be expressed as:
$\dot{Q}_{p} = \dot{m}_{p}Q_{p}$
where $\dot{Q}_{p}$ = rate of heat removed from the food, i.e., cooling load (kJ/s or kW)
$\dot{m}_{p}$ = mass flow rate of product (kg/s)
Qp = heat energy removed per unit mass of product (kJ/kg)
The computed cooling load is then used to select the proper motor size to carry out the freezing process.
Freezing Rate
The other critical design parameter is the product freezing rate, which relates to the freezing time. Basically, the freezing rate is the rate of change in temperature during the freezing process. A standard definition of the freezing rate of a food is the ratio between the minimal distance from the product surface to the thermal center of the food (basically the geometric center), d, and the time, t, elapsed between the surface reaching 0°C and the thermal center reaching 10°C colder than the initial freezing point temperature, Tf (IIR, 2006) (Figure 6.1.2). The freezing rate is commonly given as °C/h or in terms of penetration depth measured as cm/h.
Freezing rate impacts the freezing operation in several ways: food quality, rate of throughput or the amount of food frozen, and equipment and refrigeration costs (Singh and Heldman, 2013). The freezing rate affects the quality of the frozen food because it dictates the amount of water frozen into ice and the size of the ice crystals. Slower rates result in a larger amount of frozen water and larger ice crystals, which may result in undesirable product quality attributes such as a grainy texture in ice cream, ruptured muscle structure in meats and fish, and softer vegetables. Faster freezing produces a larger amount of smaller ice crystals, thus yielding products of superior quality. However, the engineer must take into account the economic viability of selecting a fast freezing process for certain applications (Barbosa-Canovas et al., 2005). Different freezing methods produce different freezing rates.
Freezing Time
Freezing rate and, therefore, freezing time, is the most critical information needed by an engineer to select and design a freezing process because freezing rate (or time) affects product quality, stability and safety, processing requirements, and economic aspects. In other words, the starting point in the design of any freezing system is the calculation of freezing time (Pham, 2014).
Freezing time is defined as the time required to reduce the initial product temperature to some established final temperature at the slowest cooling location, which is also called the thermal center (Singh and Heldman, 2013). The freezing time estimates the residence time of the product in the system and it helps calculate the process throughput (Pham, 2014).
Calculation of freezing time depends on the characteristics of the food being frozen (including composition, homogeneity, size, and shape), the temperature difference between the food and the freezing medium, the insulating effect of the boundary film of air surrounding the material (e.g., the package; this boundary is considered negligible in unpackaged foods), the convective heat transfer coefficient, h, of the system, and the distance that the heat must travel through the food (IIR, 2006).
While there are numerous methods to calculate freezing times, the method of Plank (1913) is presented here. Although this method was developed for freezing of water, its simplicity and applicability to foods make it well-liked by engineers. One modification is presented below.
In Plank’s method, freezing time is calculated as:
$t_{f} = \frac{\lambda \rho_{f}}{T_{f}-T_{a}}\left(\frac{Pa}{h}+\frac{Ra^{2}}{k_{f}}\right)$
where tf = freezing time (s)
λ = latent heat of fusion of the food (J/kg, i.e., kJ/kg values such as those from Equation 6.1.1 or Table 6.1.1 multiplied by 1,000, so that tf is in seconds); if this value is unknown, it can be estimated using Equation 6.1.1
ρf = density of the frozen food (kg/m3) (ASHRAE tables)
Tf = freezing point temperature (°C) (ASHRAE tables or Equation 6.1.10)
Ta = freezing medium temperature (°C) (manufacturer specifications; Table 6.1.4 for examples)
a = the thickness of an infinite slab, the diameter of a sphere or an infinite cylinder, or the smallest dimension of a rectangular brick or cube (m)
P and R = shape factor parameters determined by the shape of the food being frozen (Table 6.1.5).
h = convective heat transfer coefficient (W/m2°C) (Equation 6.1.13 or Table 6.1.4)
kf = thermal conductivity of the frozen food (W/m°C) (ASHRAE tables)
Table $5$: Shape factors for use in Equations 6.1.15 and 6.1.16 (Lopez-Leiva and Hallstrom, 2003).

| Shape | P | R |
|---|---|---|
| Infinite plate[a] | 1/2 | 1/8 |
| Infinite cylinder[b] | 1/4 | 1/16 |
| Cylinder[c] | 1/6 | 1/24 |
| Sphere | 1/6 | 1/24 |
| Cube | 1/6 | 1/24 |

[a] A plate whose length and width are large compared with the thickness. [b] A cylinder with length much larger than the radius (i.e., a very long cylinder). [c] A cylinder with length equal to its radius.
When the dimensions of the food are not infinite or spherical (for example, a brick-shaped product or a box), charts are available to determine the shape factors P and R (Cleland and Earle, 1982).
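A minimal sketch of Plank's method (Equation 6.1.15) follows; λ is entered in J/kg so the result comes out in seconds, and all the input values are illustrative assumptions rather than data from this chapter.

```python
# Sketch of Plank's equation (Equation 15) for an unpackaged product.
# lam in J/kg gives seconds; the usage values below are assumptions.

SHAPE = {"infinite plate": (1/2, 1/8),
         "infinite cylinder": (1/4, 1/16),
         "sphere": (1/6, 1/24),
         "cube": (1/6, 1/24)}   # P, R from Table 5

def plank_time_h(lam, rho_f, t_f, t_a, a, h, k_f, shape="infinite plate"):
    """Freezing time (hours) by Plank's method."""
    p, r = SHAPE[shape]
    t_s = (lam * rho_f / (t_f - t_a)) * (p * a / h + r * a**2 / k_f)
    return t_s / 3600.0

# Assumed 5 cm slab: lam = 300 kJ/kg, rho_f = 1000 kg/m3, Tf = -1 C,
# Ta = -30 C (air blast), h = 30 W/m2 K, k_f = 1.8 W/m K:
print(round(plank_time_h(300e3, 1000.0, -1.0, -30.0, 0.05, 30.0, 1.8), 1))
# -> about 2.9 h
```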
There are four common assumptions for using Plank’s method to calculate freezing times of food products. Here freezing time is defined as the time to freeze the geometrical center of the product.
• The first assumption is that freezing starts with all water in the food unfrozen but at its freezing point, Tf, and loss of sensible heat is ignored. In other words, the initial freezing temperature is constant at Tf and the unfrozen center is also at Tf. The food product is not at temperatures above its initial freezing point and the temperature within the food is uniform.
• The second assumption is that heat transfer takes place sufficiently slowly for steady-state conditions to operate. This means that the food product is at equilibrium conditions and temperature is constant at a specified location (e.g., center or surface of the product). Furthermore, the heat given off is removed by conduction through the inside of the food product and convection at the outside surface, described by combining Equations 6.1.12 and 6.1.13.
• The third assumption is that the food product is homogeneous and its thermal and physical properties are constant when unfrozen and then change to a different constant value when it is frozen.
This assumption addresses the fact that thermal conductivity, k, a thermal property of the product that determines its ability to conduct heat energy, is a function of temperature, especially below freezing. For instance, a piece of aluminum conducts heat very well and has a large value of k. On the other hand, plastics are poor heat conductors and have low values of k. Relative to other liquids, water is a good conductor of heat, with a k value of 0.6 W/mK. In the case of foods, k depends on product composition, temperature, and pressure, with water content playing a significant role, similar to specific heat. One distinction is that k is affected by the porosity of the material and the direction of heat flow (this is called anisotropy). Thus, the higher the moisture content in the food, the closer the k value is to that of water. Equations to calculate this thermal property as a function of temperature and composition are provided by Choi and Okos (1986) and ASHRAE (2018). In the case of a frozen food, kfrozen food is almost four times larger than the value for unfrozen food since kice is approximately four times kliquid water (kice = 2.4 W/m°C, kliquid water = 0.6 W/m°C).
This third assumption also reminds us that the density, ρ, of food materials is affected by temperature (mostly below freezing), moisture content, and porosity. Equations to calculate density as a function of temperature and composition of foods are also provided by Choi and Okos (1986) and ASHRAE (2018). In the case of a frozen food, ρ frozen food is lower than the value of unfrozen food since ρ ice is lower than ρ liquid water (e.g., ice floats in water).
• The fourth assumption is that the geometry of the food can be considered as one dimensional, i.e., heat transfers only in the direction of the radius of a cylinder or sphere or through the thickness of a plate, and heat transfer in other directions is negligible.
Despite its simplifying assumptions, Plank’s method gives good results as long as the food’s initial freezing temperature, thermal conductivity, and density of the frozen food are known. Modifications of Equation 6.1.15 provide some improvement but still have limitations (Cleland and Earle, 1982; Pham, 1987). Nevertheless, Plank’s method is widely used for a variety of foods.
One modified version of Plank’s method (Equation 6.1.15) that is commonly used was developed to calculate freezing times of packaged foods (Singh and Heldman, 2013):
$t_{f}=\frac{\lambda \rho_{f}}{T_{f}-T_{a}} \left[Pa\left(\frac{1}{h} +\frac{x}{k_{2}}\right)+\frac{Ra^{2}}{k_{1}}\right]$
where a = thickness of the food (m); assume the food fills the package
x = thickness of packaging material (m)
k1 = thermal conductivity of the frozen food (W/m°C)
k2 = thermal conductivity of the packaging material (W/m°C)
with other variables as defined in Equation 6.1.15.
The term $\frac{1}{(\frac{1}{h}+ \frac{x}{k_{2}})}$ is known as the overall convective heat transfer coefficient. It includes both the convective (1/h) and the conductive (x/k2) resistance to heat transfer through the packaging material.
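As a rough illustration of how the packaging term changes the surface coefficient, the following minimal sketch computes the overall coefficient; the numbers (an air-blast h and a thin cardboard carton) are assumptions for illustration, not data from this chapter.

```python
# Sketch of the overall surface coefficient: the package adds a conductive
# resistance x/k2 in series with the convective resistance 1/h.
# The usage values below are illustrative assumptions.

def overall_h(h, x_pkg, k_pkg):
    """Overall convective heat transfer coefficient (W/m2 K) with packaging."""
    return 1.0 / (1.0 / h + x_pkg / k_pkg)

# Assumed h = 30 W/m2 K, 1.5 mm of cardboard with k ~ 0.06 W/m K:
print(round(overall_h(h=30.0, x_pkg=0.0015, k_pkg=0.06), 1))  # -> 17.1
```

Even a thin carton can cut the effective surface coefficient roughly in half in this sketch, which is why packaging noticeably lengthens freezing time.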
Applications
Engineers use the concepts described in the previous section to analyze and design freezing processes and to select proper equipment by establishing system capacity requirements. Proper design of a freezing process requires knowledge of food properties including specific heat, thermal conductivity, density, latent heat of fusion, and initial freezing point, as well as the size and shape of the food, its packaging requirements, the cooling load, and the freezing rate and time (Heldman and Singh, 2013). All of these parameters can be calculated using the information described in this chapter.
When the freezing process is not properly designed, it might induce changes in texture and organoleptic (determined using the senses) properties of the foods and loss of shelf life (Singh and Heldman, 2013). Other disadvantages of freezing include the following:
• product weight losses often range between 4% and 10%;
• freezing injury of unpackaged foods in slow freezing processes causes cell-wall rupture due to the formation of large ice crystals;
• frozen products require frozen shipping and storage, which can be expensive;
• losses of nutrients such as vitamins B and C have been reported; and
• frozen foods should not be stored for longer than a year to avoid quality losses due to freezer burn (i.e., the food surface gets dry and brown).
Because of the potential disadvantages of freezing, proper design of a freezing process also requires the following considerations:
• • the parts of the equipment that will be in contact with the food (e.g., stainless steel) should not impart any flavor or odor to the food;
• • the conditions in the processing plant should be sanitary and allow for easy cleaning;
• • the equipment should be easy to operate;
• • the packaging should be chosen to prevent freezer burn and other quality losses; and
• • the properties of the food that is frozen rapidly may be different from when the food is being frozen slowly.
There is a wide variety of equipment available for freezing of food (Table 6.1.6). The choice of freezing equipment depends upon the rate of freezing required as well as the size, shape, and packaging requirements of the food.
Table $6$: Common types of freezers used in the food industry.

| Type of Freezer | Freezing Rate Range |
|---|---|
| Slow (still-air, cold store) | 1°C to 10°C/h (0.2 to 0.5 cm/h) |
| Quick (air-blast, plate, tunnel) | 10°C to 50°C/h (0.5 to 3 cm/h) |
| Rapid (fluidized-bed, immersion) | Above 50°C/h (5 to 10 cm/h) |

Sources: George (1993), Singh (2003), Sudheer and Indira (2007), Pham (2014).
Traditional Freezing Systems
Slow freezers are commonly used for the freezing and storage of frozen foods and are common practice in developing countries (Barbosa-Canovas et al., 2005). Examples of “still” freezers are ice boxes and chest freezers, batch-type stationary freezers that use air between −20°C and −30°C. Air is usually circulated by fans (~1.8 m/s). This freezing method is low cost and requires little labor, but product quality is low because it may take 3 to 72 h to freeze a 65-kg meat carcass (Pham, 2014).
Quick freezers are more common within the food industry because they are very flexible, easy to operate, and cost-effective for large-throughput operations (George, 1993). Air is forced over the food at 2 to 6 m/s, for an increased rate of heat transfer compared to slow freezers. Blast freezers are examples of this category and are available in batch or continuous mode (in the form of tunnels, spiral, and plate). These quick and blast freezers are relatively economical and provide flexibility to the food processor in terms of type and shape of foods. It takes 10 to 15 minutes to freeze products such as hamburger patties or ice cream (Sudheer and Indira, 2007). Throughput ranges from 350 to 5500 kg/hr.
Rapid freezers are well-suited for individual quick-frozen (IQF) products, such as peas and diced foods, because the very efficient transfer of heat through small-sized products induces the rapid formation of ice throughout the product and, consequently, greater product quality (George, 1993). Fluidized bed freezers are the most common type of freezer used for IQF processes. It usually takes three to four minutes to freeze unpacked peas (Sudheer and Indira, 2007). Throughput ranges from 250 to 3000 kg/hr.
Immersion freezers provide extremely rapid freezing of individual food portions by immersing the product into either a cryogen (a substance that produces very low temperatures, e.g., liquid nitrogen) or a fluid refrigerant with very low freezing temperatures (e.g., carbon dioxide). Immersion freezers also provide uniform temperature distribution throughout the product, which helps maintain product quality. It takes 10 to 15 minutes to freeze many food types (Singh, 2003).
Ultra-rapid freezers (e.g., cryogenic freezers) are suitable for high product throughput rates (over 1500 kg/h), require very little floor space, and are very flexible because they can be used with many types of food products, such as fish fillets, shellfish, pastries, burgers, meat slices, sausages, pizzas, and extruded products (George, 1993). It takes between one-half and one minute to freeze a variety of food items.
Freeze Drying
Freeze drying is a specific type of freezing process commonly used in the food industry (McHugh, 2018). The process combines drying and freezing operations. In brief, the product is dried (i.e., moisture is removed) using the principle of sublimation of ice to water vapor. Hence, the product is dried at a temperature and pressure below the triple point of water. (The triple point is the temperature and pressure at which water exists in equilibrium in its three phases, gas, liquid, and solid. At the triple point, T = 0.01°C and P = 611.2 Pa; see Figure 6.1.3.) The A-B line in Figure 6.1.3 represents the saturation (vaporization) line where water transitions between liquid and gas; the A-C line represents the line where water transitions from solid to liquid (fusion or melting) or from liquid to solid (solidification or freezing); and the A-D line represents the sublimation line where water transitions directly from solid to gas (as in freeze drying) or from gas to solid (deposition). This phase change of water occurs at very low pressures. Freeze drying is popular for the manufacture of rehydrating foods, such as coffee, fruits and vegetables, meat, eggs, and dairy, due to the minimal changes to the products’ physical and chemical properties (Luo and Shu, 2017).
New Freezing Processes
Alternatives to traditional freezing methods are being evaluated to make freezing suitable for all types of foods, optimize energy use, and reduce environmental impact. Processes such as impingement freezing and hydrofluidization (HF), an immersion type freezer that uses ice slurries, provide higher surface heat-transfer rates and increased freezing rates, which has tremendous potential to improve the quality of products such as hamburgers or fish fillets (James and James, 2014). These methods use very high velocity air or refrigerant jets that enable very fast freezing of the product. Studies on their applications to foods and other biological materials are in progress. Operating conditions and feasibility of the techniques must be assessed before implementation.
Other promising technologies include high-pressure freezing (also called pressure-shifting freezing) (Otero and Sanz, 2012) and ultrasound-assisted freezing (Delgado and Sun, 2012), which facilitate formation of smaller ice crystals. Magnetic resonance and microwave-assisted freezing, cryofixation, and osmodehydrofreezing are other new freezing technologies. Another trend is “smart freezing” technology, which combines the mechanical aspects of freezing with sensor technologies to track food quality throughout the cold chain. Smart freezing uses computer vision and wireless sensor networks (WSN), real-time diagnosis tools to optimize the process, ultrasonic monitoring of the freezing process, gas sensors to predict crystal size in ice cream, and temperature-tracking sensors to predict freezing times and product quality (Xu et al., 2017).
Examples
Examples 6.1.1 through 6.1.6 show a few of the many options an engineer could consider when selecting the best type of freezing equipment and operational parameters to freeze a food product. Another critical aspect of design of freezing processes for foods is that many of the products are packaged and the packaging material offers resistance to the transfer of heat, thus increasing freezing time (Yanniotis, 2008). Example 6.1.7 illustrates this point.
Example $1$
Example 1: Calculation of refrigeration requirement to freeze a food product
Problem:
Calculate the refrigeration requirement when freezing 2,000 kg of strawberries (91.6% moisture) from an initial temperature of 20°C to −20°C. The initial freezing point of strawberries is −0.78°C (Table 6.1.1).
Solution
The solution involves three steps: (1) identify the type of heat process(es) involved and set up the energy balance; (2) calculate how much heat energy must be removed from the strawberries to carry out the freezing process; and (3) calculate the refrigeration requirement (in kW) for the freezing process.
The following assumptions are commonly made in this type of calculation:
• Conservation of mass during the freezing process. Thus, mstrawberries = 2,000 kg remains constant because the fruits do not lose or gain moisture (or the changes in mass are negligible).
• The freezing point temperature is known.
• The temperature of the freezing medium (ambient temperature) and storage remains constant (i.e., a steady-state situation).
Step 1 Identify the type of heat processes and set up the energy balance:
• Sensible, to decrease the temperature of the strawberries from 20°C to the point at which they begin to crystallize, −0.78°C
• Latent, to change liquid water in strawberries to ice at −0.78°C
• Sensible, to further cool the strawberries to −20°C (using Equation 6.1.2)
Thus, for the given freezing process, the heat energy balance is the sum of the three heat processes listed above.
Step 2 Calculate the total amount of energy removed in the freezing process, Q:
Sensible, from 20°C to −0.78°C, using Equation 6.1.2, where Qs = Q1:
$Q_{1}=mC_{p}(T_{2}-T_{1})$ (Equation $2$)
where m = 2,000 kg
T1 = 20°C
T2 = −0.78°C
Cp of unfrozen strawberries (at 91.6 % moisture) = 4.00 kJ/kg°C (Table 6.1.1). See Example 6.1.2 for calculation of the specific heat of a food product above freezing.
Thus,
Q1 = (2,000 kg)(4.00 kJ/kg°C)(−0.78 – 20°C) = −166,240 kJ
Note that this value is negative because heat is released from the product.
Latent, using Equation 6.1.9:
$Q_{2}=m\lambda \text{ at } T = -0.78 ^\circ C$ (Equation $9$)
where m = 2,000 kg
λ = latent heat of fusion of strawberries at given moisture content = 306 kJ/kg (from Table 6.1.1).
Thus,
Q2 = −(2,000 kg)(306 kJ/kg) = −612,000 kJ
Note that this value is negative because heat is being released from the product.
Sensible, to further cool to −20°C, again using Equation 6.1.2:
$Q_{3}=mC_{p,frozen}(T_{2}-T_{1})$ (Equation $2$)
where m = 2,000 kg
T1 = −0.78°C
T2 = −20°C
Cp of frozen strawberries (at 91.6 % moisture) = 1.84 kJ/kg°C (Table 6.1.1). See Example 6.1.3 for calculation of the specific heat of a frozen food product.
Thus,
Q3 = (2,000 kg)(1.84 kJ/kg°C)(−20 + 0.78°C) = −70,729.6 kJ
Adding all energy terms:
Q = Q1 + Q2 + Q3 = −166,240 kJ – 612,000 kJ – 70,729.6 kJ
= −848,969.6 kJ = Qproduct
The heat energy removed per kg of strawberries:
Qproduct per kg of fruit = −848,969.6 kJ/2,000 kg = −424.48 kJ/kg
Thus, 848,969.6 kJ of heat must be removed from the 2,000 kg of strawberries (424.48 kJ/kg) initially held at 20°C to freeze them to the target storage temperature of −20°C.
Step 3 Calculate the refrigeration requirement, or cooling load (in kW), for the freezing process. The cooling load $\dot{Q}_{p}$ (also called refrigeration requirement) to freeze 2,000 kg/h of strawberries from 20°C to −20°C is calculated with Equation 6.1.14:
$\dot{Q}_{p} = \dot{m}_{p}Q_{p}$ (Equation $14$)
where $\dot{m}_{p}$ = mass flow rate of product (kg/s)
Qp = heat energy removed to freeze the product to the target temperature = −424.48 kJ/kg
$\dot{Q}_{p}$ = (2,000 kg/h × −424.48 kJ/kg)/(3,600 s/h) = −235.82 kJ/s, or −235.82 kW
Note that 1 kJ/s = 1 kilowatt = 1 kW.
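For readers who want to script this energy balance, the following Python sketch reproduces the three heat terms of this example (the function name and structure are illustrative, not part of the original text):

```python
# A minimal sketch of this energy balance (function name illustrative; values from
# the example). Sign convention: negative Q = heat removed from the product.
def freezing_heat_load(mass_kg, cp_unfrozen, cp_frozen, latent_heat,
                       t_initial, t_freeze, t_final):
    """Total heat removed (kJ) to freeze a product from t_initial to t_final."""
    q1 = mass_kg * cp_unfrozen * (t_freeze - t_initial)  # sensible, above freezing
    q2 = -mass_kg * latent_heat                          # latent heat of fusion
    q3 = mass_kg * cp_frozen * (t_final - t_freeze)      # sensible, below freezing
    return q1 + q2 + q3

q = freezing_heat_load(2000, 4.00, 1.84, 306, 20, -0.78, -20)
print(f"Q = {q:,.1f} kJ")                  # ≈ -848,969.6 kJ
print(f"per kg: {q / 2000:.2f} kJ/kg")     # ≈ -424.48 kJ/kg
print(f"cooling load: {q / 3600:.2f} kW")  # 2,000 kg/h -> ≈ -235.82 kW
```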
Example $2$
Example 2: Determine the initial freezing point (i.e., temperature at which water in food begins to freeze) and the latent heat of fusion of a food product
Problem:
Determine the initial freezing point and latent heat of fusion of green peas with 79% moisture.
Solution
Solution by use of tables:
From Table 6.1.1, the Tf of green peas at the given moisture content is −0.61°C and the latent heat of fusion λ is 263 kJ/kg.
Solution by calculation:
If tabulated λ values are not available, λ of the product can be estimated using Equation 6.1.1.
$\lambda = M_{water} \times \lambda_{w}$ (Equation $1$)
λ = (0.79) × (334 kJ/kg) = 263.86 kJ/kg
Example $3$
Example 3: Estimation of specific heat of a food product based on composition
Sometimes the engineer will not have access to measured or tabulated values of the specific heat of the food product and will have to estimate it in order to calculate cooling loads. This example provides some insight into how to estimate specific heat of a food product.
Problem:
Calculate the specific heat of honeydew melon at 20°C and at −20°C. Composition data for the melon is available as 89.66% water, 0.46% protein, 0.1% fat, 9.18% total carbohydrates (includes fiber), and 0.6% ash (USDA, 2019). Give answers in the SI units of kJ/kgK.
Solution
Use Equations 6.1.5-6.1.7 to account for the effect of product composition and temperature on specific heat. The specific heat of honeydew at T = 20°C is calculated using Equation 6.1.4:
$C_{p}=\sum^{n}_{i=1}X_{i}C_{pi}$ (Equation $4$)
Thus,
$C_{p,honeydew}=X_{w}C_{pw}+X_{p}C_{pp}+X_{f}C_{pf}+X_{c}C_{pc}+X_{a}C_{pa}$
with subscripts w, p, f, c, and a representing water, protein, fat, carbohydrates, and ash, respectively, and Cp in kJ/kg°C.
Step 1 Calculate the Cp of water (Cpw) at 20°C using Equation 6.1.6:
$C_{pw} = 4.1289 - 9.0864\times10^{-5}T + 5.4731\times10^{-6}T^{2}$ (Equation $6$)
Thus,
$C_{pw} = 4.1289 - (9.0864\times10^{-5})(20) + (5.4731\times10^{-6})(20)^{2}$
Cpw = 4.127 kJ/kg°C
Step 2 Calculate the specific heat of the different components at T = 20°C using the equations given in Table 6.1.2.
Cp of each food component at T = 20°C (kJ/kg·K):
Water: 4.127
Protein: 2.032
Fat: 2.012
Carbohydrate: 1.586
Fiber: NA
Ash: 1.129
Step 3 Calculate the specific heat of honeydew at 20°C using Equation 6.1.4:
$C_{p,honeydew}=(0.8966)(4.127)+(0.0046)(2.032)+(0.001)(2.012)+(0.0918)(1.586)+(0.006)(1.129)$
Cp,honeydew at 20°C = 3.86 kJ/kg°C
Step 4 Calculate the specific heat of honeydew at −20°C using Equation 6.1.8:
$C_{p,frozen} = 1.55 + 1.26X_{s}-\frac{(X_{w0}-X_{b})L_{0}T_{f}}{T^{2}}$ (Equation $8$)
From the given composition of honeydew:
Xs = mass fraction of solids = 1 – 0.8966 = 0.1034
Xw0 = mass fraction of water in the unfrozen food = 0.8966
Xb = bound water = 0.4Xp = 0.4(0.0046) = 0.00184
Tf = freezing point of food to be frozen = −0.89°C (from Table 6.1.1)
T = food target (or freezing process) temperature = −20°C
Substituting the numbers into Equation 6.1.8:
$C_{p,frozen} = 1.55 + 1.26(0.1034)-\frac{(0.8966-0.00184)(\frac{334 \ kJ}{kg})(-0.89^\circ C)}{(-20^\circ C)^{2}}$
Cp,frozen = 2.35 kJ/kg°C
The calculated Cp values can then be used to calculate cooling load as shown in Example 6.1.1.
Observations:
• As expected, the specific heat of the frozen honeydew is lower than the value for the fruit above freezing.
• When values of the product’s specific heat and initial freezing point are not available from tables, engineers should be able to estimate them using available prediction models and composition data.
• The Cp of the frozen product was calculated at −20°C, the freezing process temperature. Tabulated values are usually given for the fully frozen food at a reference temperature of −40°C (ASHRAE, 2018). If we use −40°C in Equation 6.1.8, then
$C_{p,frozen} = 1.55 + 1.26(0.1034)-\frac{(0.8966-0.00184)(\frac{334 \ kJ}{kg})(-0.89^\circ C)}{(-40^\circ C)^{2}}$
Cp,frozen = 1.85 kJ/kg°C.
This value is closer to the tabulated values. While the change in Cp as a function of temperature can be important in research studies, it does not influence the selection of freezing equipment.
• Many mathematical models are available for prediction of specific heat and other properties of foods (Mohsenin, 1980; Choi and Okos, 1986; ASHRAE, 2018). The engineer must choose the value that is most suitable for the specific application using available composition and temperature data.
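A short Python sketch of this composition-based estimate, using the component Cp values tabulated above and Equation 6.1.8 with the minus sign as used in the worked numbers (names are illustrative):

```python
# A sketch of the composition-based estimate (Equation 6.1.4 above freezing and
# Equation 6.1.8 below freezing). Component Cp values at 20°C are those tabulated
# above; the minus sign of Equation 6.1.8 follows the corrected form used here.
composition = {"water": 0.8966, "protein": 0.0046, "fat": 0.001,
               "carbohydrate": 0.0918, "ash": 0.006}    # mass fractions
cp_20C = {"water": 4.127, "protein": 2.032, "fat": 2.012,
          "carbohydrate": 1.586, "ash": 1.129}          # kJ/(kg K) at 20°C

cp_unfrozen = sum(x * cp_20C[c] for c, x in composition.items())
print(f"Cp at 20°C ≈ {cp_unfrozen:.2f} kJ/(kg K)")      # ≈ 3.86

def cp_frozen(x_solids, x_w0, x_bound, t_f, t, L0=334.0):
    """Equation 6.1.8, kJ/(kg K); t_f and t in °C, t below freezing."""
    return 1.55 + 1.26 * x_solids - (x_w0 - x_bound) * L0 * t_f / t**2

x_s = 1 - composition["water"]       # mass fraction of solids
x_b = 0.4 * composition["protein"]   # bound water estimate
print(f"Cp frozen at -40°C ≈ {cp_frozen(x_s, 0.8966, x_b, -0.89, -40):.2f}")  # ≈ 1.85
```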
Example $4$
Example 4: Calculation of initial freezing point temperature of a food product
Problem:
Consider the strawberries in Example 6.1.1 and calculate the depression of the initial freezing point of the fruit assuming the main solid present in the strawberries is fructose (a sugar), with molecular weight of 180.16 g/mol.
Solution
Calculation of the initial freezing point temperature requires a series of steps.
Step 1 Collect all necessary data. From Example 6.1.1, strawberries contain 91.6% water (mA) and the rest is fructose (100 – 91.6 = 8.4% solids = ms). Other information provided is Ms = Mfructose = 180.16 g/mol, λ = 6,003 J/mol, MA = 18 g/mol, R = 8.314 J/mol K, and T0 = 273.15 K.
Step 2 Calculate XA, the molar fraction of liquid (water) in the strawberries (decimal) using Equation 6.1.11:
$X_{A} = \frac{\frac{0.916}{18}}{\frac{0.916}{18}+\frac{0.084}{180.16}} = 0.9909$
Step 3 Calculate Tf of strawberries using Equation 6.1.10:
$\ln X_{A}=\frac{\lambda}{R}(\frac{1}{T_{0}}-\frac{1}{T_{f}})$ (Equation $10$)
Rearranged:
$T_{f} = (\frac{1}{T_{0}}-\frac{R}{\lambda}\ln X_{A})^{-1}$
$T_{f} = (\frac{1}{273.15\text{ K}}-\frac{8.314\text{ J/mol K}}{6003\text{ J/mol}}\ln(0.9909))^{-1}$
$T_{f} = 272.21\ K = -0.94^\circ C$
Observation:
The presence of fructose in the strawberries results in an initial freezing point temperature lower than that for pure water.
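The same calculation can be scripted; this minimal Python sketch uses the values collected in Step 1 (variable names are illustrative):

```python
import math

# A sketch of the freezing point depression calculation (Equations 6.1.10-6.1.11),
# with the corrected molecular weight of fructose (180.16 g/mol).
R, LAM, T0 = 8.314, 6003.0, 273.15      # J/(mol K), J/mol, K
m_water, m_solids = 0.916, 0.084        # mass fractions
M_water, M_fructose = 18.0, 180.16      # g/mol

n_w = m_water / M_water                 # relative moles of water
n_s = m_solids / M_fructose             # relative moles of fructose
x_a = n_w / (n_w + n_s)                 # mole fraction of liquid water
t_f = 1.0 / (1.0 / T0 - (R / LAM) * math.log(x_a))  # Equation 6.1.10 solved for Tf
print(f"x_A = {x_a:.4f}, Tf = {t_f:.2f} K = {t_f - 273.15:.2f} °C")
# x_A ≈ 0.9909, Tf ≈ 272.21 K ≈ -0.94 °C
```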
Example $5$
Example 5: Calculation of freezing time of an unpackaged food product
An air-blast freezer is used to freeze cod fillets (81.22% moisture, freezing point temperature = −2.2°C, initial temperature = 5°C, mass of fish = 1 kg). Assume that each cod fillet is an infinite plate with a thickness of 6 cm. Freezing process parameters for the air-blast freezer are: freezing medium temperature of −20°C and convective heat transfer coefficient, h, of 50 W/m²°C (Table 6.1.4); the density and thermal conductivity of the frozen fish are 992 kg/m³ and 1.9 W/m°C, respectively (ASHRAE, 2018). The target freezing time is less than 2 hours.
Problem:
Calculate the time required to freeze a fish fillet (freezing time, tf), using Plank’s method (Equation 6.1.15):
$t_{f}=\frac{\lambda \rho_{f}}{T_{f}-T_{a}}(\frac{Pa}{h}+\frac{Ra^{2}}{k_{f}})$ (Equation $15$)
Solution
Step 1 Determine the required food and process parameters:
λ = latent heat of fusion of cod fillet = 271.27 kJ/kg (from tables, ASHRAE, or calculated using Equation 6.1.1, λ = (0.8122)(334 kJ/kg) = 271.27 kJ/kg = 271.27 × 103 J/kg)
ρf = density of the frozen food, 992 kg/m3 (from ASHRAE, 2018)
Tf = freezing point temperature, −2.2°C (available in ASHRAE, 2018, or calculated using composition and Equations 6.1.10 and 6.1.11)
Ta = freezing medium temperature, −20°C
a = thickness of the plate = 6 cm = 0.06 m
P and R = shape factor parameters, 1/2 and 1/8 (from Table 6.1.6)
h = convective heat transfer coefficient, 50 W/m2°C (given)
kf = thermal conductivity of the frozen food, 1.9 W/m°C (from ASHRAE)
Step 2 Calculate the freezing time, tf, from Equation 6.1.15 as:
$t_{f}=\frac{(271.27\times10^{3}\frac{J}{kg})(\frac{992\ kg}{m^{3}})}{[(-2.2)-(-20)]^\circ C}[\frac{(0.06\ m)}{2(\frac{50\ W}{m^{2}\ ^\circ C})}+\frac{(0.06\ m)^{2}}{8(\frac{1.9\ W}{m\ ^\circ C})}]$
tf = 12,651.35 s ÷ 3,600 s/h ≈ 3.5 h. The freezing time target of less than 2 hours would not be met.
Reminder:
Plank’s method calculates the time required to remove the latent heat to freeze the fish. It does not take into account the time required to remove the sensible heat from the initial temperature of 5°C to the initial freezing point. This means that use of Equation 6.1.15 might underestimate freezing times.
As shown in Example 6.1.1, QS, the sensible heat removed to decrease the temperature of the fish from 5°C to just when it begins to crystallize at −2.2°C is calculated using Equation 6.1.2:
$Q_{s}=mC_{p}(T_{2}-T_{1})$ (Equation $2$)
where m = mass of the food = 1 kg
Cp = specific heat of the unfrozen cod (at 81.22 % moisture) = 3.78 kJ/kg°C (from tables, ASHRAE, 2018).
T1 = 5°C
T2 = −2.2°C
Thus, QS = (1 kg)(3.78 kJ/kg°C)(−2.2 – 5°C) = −27.216 kJ = −27,216 J of heat energy removed per kg of fish. The quantity is negative because heat is released from the product when it is cooled. Also, although not negligible, this amount is much lower than the latent heat removal.
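Plank’s method is easy to wrap in a small function, which also makes the what-if analysis of the next example straightforward. A minimal Python sketch, assuming the standard Plank shape factors (plate 1/2, 1/8; infinite cylinder 1/4, 1/16; sphere 1/6, 1/24); the function name is illustrative:

```python
# A minimal sketch of Plank's equation (Equation 6.1.15).
def plank_time(lam, rho, t_f, t_a, a, h, k, P, R):
    """Freezing time in seconds (latent-heat portion only; sensible heat ignored)."""
    return (lam * rho / (t_f - t_a)) * (P * a / h + R * a**2 / k)

t = plank_time(lam=271.27e3, rho=992, t_f=-2.2, t_a=-20,
               a=0.06, h=50, k=1.9, P=1/2, R=1/8)
print(f"tf = {t:,.0f} s = {t / 3600:.2f} h")  # ≈ 12,651 s ≈ 3.5 h: target not met
```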
Example $6$
Example 6: Find ways to decrease freezing time of an unpackaged food product
Problem:
Find a way to decrease freezing time of the cod fillets in Example 6.1.5 to less than 2 hours.
Solution
Evaluate the effect (if any) of some process and product variables on the calculated freezing time, tf, using Plank’s method and determine which parameters decrease freezing time. Equation 6.1.15:
$t_{f}=\frac{\lambda \rho_{f}}{T_{f} - T_{a}}(\frac{Pa}{h}+\frac{Ra^{2}}{k_{f}})$ (Equation $15$)
Freezing process variables:
• Freezing time decreases when the freezing medium temperature, Ta, decreases (colder medium):
$t_{f} \propto \frac{1}{T_{f}-T_{a}}$
• Freezing time decreases when the convective heat transfer coefficient, h, increases (faster removal of heat energy and thus a faster freezing process):
$t_{f} \propto \frac{1}{h}$
Product variables:
• Freezing time decreases when the thickness, a, of the product decreases (smaller product):
$t_{f} \propto \frac{a}{h}+\frac{a^{2}}{k_{f}}$
• Freezing time decreases when the product shape changes from a plate to a cylinder or a sphere (greater surface-area-to-volume ratio), i.e., P decreases from 1/2 to 1/4 (cylinder) or 1/6 (sphere), and R decreases from 1/8 to 1/16 (cylinder) or 1/24 (sphere):
$t_{f} \propto \frac{Pa}{h}+\frac{Ra^{2}}{k_{f}}$
• Freezing time decreases with a lower latent heat of fusion of the food, λ, a lower density of the frozen food, ρf, and a higher thermal conductivity of the frozen food, kf. This highlights the need for accurate values for these variables when using this method to calculate freezing times:
$t_{f} \propto \lambda \rho_{f}(\frac{1}{k_{f}})$
• The effect of the initial freezing point, Tf, is less significant because it varies over only a small range among a wide variety of food products:
$t_{f} \propto \frac{1}{T_{f}-T_{a}}$
Changing freezing process variables. Do the calculation assuming a freezing medium temperature of −40°C (Table 6.1.4) instead of the Ta = −20°C used in Example 6.1.5, while holding everything else constant:
$t_{f}=\frac{(271.27\times10^{3}\frac{J}{kg})(\frac{992\ kg}{m^{3}})}{[(-2.2)-(-40)]^\circ C}[\frac{(0.06\ m)}{2(\frac{50\ W}{m^{2}\ ^\circ C})}+\frac{(0.06\ m)^{2}}{8(\frac{1.9\ W}{m\ ^\circ C})}]$
tf = 5,957.51 seconds ≈ 1.65 h < 2 h. The freezing time target would be met.
This result makes sense because the lower the temperature of the freezing medium (air, in an air-blast freezer), the shorter the freezing time.
Next, consider increasing the convective heat transfer coefficient, h, for the air-blast freezer. Based on Table 6.1.4, this variable can go as high as 200 W/m²°C for this type of freezer. While holding everything else constant,
$t_{f}=\frac{(271.27\times10^{3}\frac{J}{kg})(\frac{992\ kg}{m^{3}})}{[(-2.2)-(-20)]^\circ C}[\frac{(0.06\ m)}{2(\frac{200\ W}{m^{2}\ ^\circ C})}+\frac{(0.06\ m)^{2}}{8(\frac{1.9\ W}{m\ ^\circ C})}]$
tf = 5,848.27 seconds ≈ 1.62 h < 2 h. The freezing time target would be met.
This result also makes sense because the faster the freezing rate (due to higher h value), the shorter the freezing time.
Achieving the target freezing time of less than 2 hours would require a change in the freezing process parameters of the air-blast freezer, either the convective heat transfer coefficient h or the operating conditions (the freezing medium temperature, Ta).
Changing product variables. Try changing the thickness, a. Assume the fish is frozen as fillets that are 3 cm thick (half the thickness of the original design). Holding everything else constant except now a = 0.03 m,
$t_{f}=\frac{(271.27\times10^{3}\frac{J}{kg})(\frac{992\ kg}{m^{3}})}{[(-2.2)-(-20)]^\circ C}[\frac{(0.03\ m)}{2(\frac{50\ W}{m^{2}\ ^\circ C})}+\frac{(0.03\ m)^{2}}{8(\frac{1.9\ W}{m\ ^\circ C})}]$
tf = 5430.53 seconds = 1.5 h < 2 h. The freezing time target would be met.
In this case, there is no need to change the operating conditions of the air-blast freezer.
Next, change the shape of the product. Fillets can be shaped as infinite (very long) cylinders (P and R = 1/4 and 1/16, respectively; Table 6.1.5) with 6 cm diameter, instead of as long plates. Keeping everything else constant and using the original freezing process parameters:
$t_{f}=\frac{(271.27\times10^{3}\frac{J}{kg})(\frac{992\ kg}{m^{3}})}{[(-2.2)-(-20)]^\circ C}[\frac{(0.06\ m)}{4(\frac{50\ W}{m^{2}\ ^\circ C})}+\frac{(0.06\ m)^{2}}{16(\frac{1.9\ W}{m\ ^\circ C})}]$
tf = 6325.68 seconds = 1.76 h < 2 h. The freezing time target would be met.
This result illustrates the significance of product shape on the rate of heat transfer and, consequently, freezing time. In general, a spherical product will freeze faster than one of similar size shaped as a cylinder or a plate, due to its greater surface-area-to-volume ratio.
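A compact way to reproduce the scenario analysis above is a parameter sweep over the same Plank-equation sketch (the function, dictionary, and scenario labels are illustrative, not from the original text):

```python
# A parameter sweep over the Plank-equation sketch for the cod-fillet example.
def plank_time(lam, rho, t_f, t_a, a, h, k, P, R):
    return (lam * rho / (t_f - t_a)) * (P * a / h + R * a**2 / k)

base = dict(lam=271.27e3, rho=992, t_f=-2.2, t_a=-20, a=0.06, h=50, k=1.9, P=1/2, R=1/8)
scenarios = {
    "base: 6 cm plate, Ta=-20°C, h=50": base,
    "colder medium (Ta=-40°C)":         {**base, "t_a": -40},
    "higher h (200 W/m² °C)":           {**base, "h": 200},
    "thinner fillet (3 cm)":            {**base, "a": 0.03},
    "infinite cylinder (6 cm dia.)":    {**base, "P": 1/4, "R": 1/16},
}
for label, kw in scenarios.items():
    print(f"{label}: {plank_time(**kw) / 3600:.2f} h")
# ≈ 3.51, 1.65, 1.62, 1.51, and 1.76 h, matching the hand calculations
```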
Example $7$
Example 7. Calculation of freezing time of a packaged food product
For this example, assume that the cod fish from Example 6.1.5 is packed into a cardboard carton measuring 10 cm × 10 cm × 10 cm. The carton thickness is 1.5 mm and its thermal conductivity is 0.065 W/m°C.
Problem:
Calculate the freezing time using the original freezing process parameters (h = 50 W/m2°C, Ta = −20°C) and determine whether the product can be frozen in 2 to 3 hours. If not, provide recommendations to achieve the desired freezing time.
Solution
Because the food is packaged, use the modified version of Plank (Equation 6.1.16):
$t_{f} = \frac{\lambda \rho_{f}}{T_{f}-T_{a}}[Pa(\frac{1}{h}+\frac{x}{k_{2}})+\frac{Ra^{2}}{k_{1}}]$ (Equation $16$)
Step 1 Collect the information needed from Example 6.1.5. Also,
a = characteristic dimension (side of the cubic carton) = 10 cm = 0.1 m
x = thickness of packaging material = 1.5 mm = 0.0015 m
k2 = thermal conductivity of packaging material = 0.065 W/m°C
k1 = thermal conductivity of the frozen fish = 1.9 W/m°C
P and R = shape factor parameters for a cube, 1/6 and 1/24, respectively
Step 2 Calculate the freezing time:
$t_{f} = \frac{(271.27\times10^{3} \frac{J}{kg})(992 \frac{kg}{m^{3}})}{17.8^\circ C}[\frac{0.1\ m}{6}(\frac{1}{50}+\frac{0.0015}{0.065})+(\frac{(0.1\ m)^{2}}{24(1.9)})]$
tf = 14,200.28 seconds ≈ 3.9 h, well above the target of 2 to 3 h.
The freezing time target would not be met, so the freezing process must be modified. Note that the fish packaged in cardboard takes longer to freeze than the unpackaged product.
Step 3 Calculate some possible options to reduce the freezing time.
• Shorten the freezing time by using a higher convective heat transfer coefficient, h, of 100 W/m²°C. Then,
$t_{f} = \frac{(271.27\times10^{3} \frac{J}{kg})(992 \frac{kg}{m^{3}})}{17.8^\circ C}[\frac{0.1\ m}{6}(\frac{1}{100}+\frac{0.0015}{0.065})+(\frac{(0.1\ m)^{2}}{24(1.9)})]$
tf = 11,649.6 seconds ≈ 3.2 h. The freezing time target would not be met.
• Shorten the freezing time by using a yet higher h of 200 W/m²°C. Then,
$t_{f} = \frac{(271.27\times10^{3} \frac{J}{kg})(992 \frac{kg}{m^{3}})}{17.8^\circ C}[\frac{0.1\ m}{6}(\frac{1}{200}+\frac{0.0015}{0.065})+(\frac{(0.1\ m)^{2}}{24(1.9)})]$
tf = 10,389.8 seconds ≈ 2.9 h. The freezing time target would be met.
• Shorten the freezing time by using h = 100 W/m²°C and changing the temperature of the freezing medium, Ta, to −40°C:
$t_{f} = \frac{(271.27\times10^{3} \frac{J}{kg})(992 \frac{kg}{m^{3}})}{37.8^\circ C}[\frac{0.1\ m}{6}(\frac{1}{100}+\frac{0.0015}{0.065})+(\frac{(0.1\ m)^{2}}{24(1.9)})]$
tf = 5,485.8 seconds ≈ 1.5 h.
This is closer to the target freezing time for the unpackaged fish.
Freezing time could also be reduced by using a different packaging material. For example, plastics have higher k2 values than cardboard, decreasing product resistance to heat transfer.
Note that changing the shape of the packaging container to a cylinder would not have an effect on freezing time since P and R are the same as for a cube.
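A minimal Python sketch of Equation 6.1.16 for the packaged product, assuming cube shape factors P = 1/6 and R = 1/24 (function and argument names are illustrative):

```python
# A sketch of the modified Plank equation for packaged foods (Equation 6.1.16),
# which adds the conductive resistance x/k2 of the packaging film.
def plank_time_packaged(lam, rho, t_f, t_a, a, h, x_pack, k_pack, k_food,
                        P=1/6, R=1/24):
    return (lam * rho / (t_f - t_a)) * (P * a * (1/h + x_pack/k_pack)
                                        + R * a**2 / k_food)

common = dict(lam=271.27e3, rho=992, t_f=-2.2, a=0.1,
              x_pack=0.0015, k_pack=0.065, k_food=1.9)
for h, t_a in [(50, -20), (100, -20), (200, -20), (100, -40)]:
    t = plank_time_packaged(t_a=t_a, h=h, **common)
    print(f"h={h} W/m² °C, Ta={t_a}°C: {t / 3600:.2f} h")
# ≈ 3.9, 3.2, 2.9, and 1.5 h: the four cases worked above
```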
Example $8$
Example 8. Selection of freezer
The choice of freezer equipment depends on the cost and effect on product quality. Overall, the engineer will need to consider a faster freezing process when dealing with foods packaged in cardboard, compared to unpackaged products or food packaged in plastic.
Problem:
For this example, compare the freezing times for a typical plate freezer to that of a spiral belt freezer.
Solution
Step 1 Calculate the freezing time for the packaged product in Example 6.1.7 using a plate freezer that produces h = 300 W/m2°C at Ta = −40°C:
$t_{f} = \frac{(271.27\times10^{3} \frac{J}{kg})(992 \frac{kg}{m^{3}})}{37.8^\circ C}[\frac{0.1\ m}{6}(\frac{1}{300}+\frac{0.0015}{0.065})+(\frac{(0.1\ m)^{2}}{24(1.9)})]$
tf = 4694.8 seconds = 1.3 h
Step 2 Calculate the freezing time for the packaged product in Example 6.1.7 using a spiral freezer that produces h = 30 W/m2°C at Ta = −40°C:
$t_{f} = \frac{(271.27\times10^{3} \frac{J}{kg})(992 \frac{kg}{m^{3}})}{37.8^\circ C}[\frac{0.1\ m}{6}(\frac{1}{30}+\frac{0.0015}{0.065})+(\frac{(0.1\ m)^{2}}{24(1.9)})]$
tf = 32893.2 seconds = 9 h.
This spiral freezer would not be suitable in terms of freezing time.
Image Credits
Figure 1. Castell-Perez, M. Elena. (CC By 4.0). (2020). Freezing curves for pure water and a food product illustrating the concept of freezing point depression (latent heat is released over a range of temperatures when freezing foods versus a constant value for pure water).
Figure 2. Castell-Perez, M. Elena. (CC By 4.0). (2020). Schematic representation of the International Institute of Refrigeration definition of freezing rate.
Figure 3. Castell-Perez, M. Elena. (CC By 4.0). (2020). Phase diagram of water highlighting the different phases. A (red dot): triple point of water, 0.01°C and 4.59 mmHg. A-B line: saturation (vaporization) line. B-C line: solidification/fusion line. A-D line: sublimation line. The green dot represents the normal boiling point (100°C).
References
ASHRAE. (2018). Thermal properties of foods. In ASHRAE Handbook—Refrigeration (Chapter 19). American Society of Heating, Refrigeration and Air Conditioning. https://www.ashrae.org/.
Note: This source should be available in libraries at universities and colleges with engineering programs. It consists of four volumes with the one on “Refrigeration” being the most relevant to this chapter.
Barbosa-Cánovas, G. V., Altukanar, B., & Mejia-Lorio, D. J. (2005). Freezing of fruits and vegetables. An agri-business alternative to rural and semi-rural areas. FAO Agricultural Services Bulletin 158. Food and Agriculture Organization of the United Nations. http://www.fao.org/docrep/008/y5979e/y5979e00.htm#Contents.
Chen, C. S. (1985) Thermodynamic analysis of the freezing and thawing of foods: Enthalpy and apparent specific heat. J. Food Sci. 50(4), 1158-1162. doi.org/10.1111/j.1365-2621.1985.tb13034.x.
Choi, Y., & Okos, M. R. (1986) Effect of temperature and composition on the thermal properties of foods. In M. Le Maguer and P. Jelen (Eds.), Food Engineering and Process Applications (Vol.1, pp.93-101). Elsevier.
Cleland, A. C., & Earle, R. L. (1982). Freezing time prediction for foods—A simplified procedure. Int. J. Ref. 5(3), 134-140. https://doi.org/10.1016/0140-7007(82)90092-5
Cleland, D. J. (2003). Freezing times calculation. In Encyclopedia of Agricultural, Food, and Biological Engineering (pp. 396-401). Marcel Dekker, Inc.
Delgado, A., & Sun, D. W. (2012) Ultrasound-accelerated freezing. In D. W. Sun (Ed.), Handbook of Frozen Food Processing and Packaging (2nd ed., Chapter 28, pp. 645-666). CRC Press.
Engineering Toolbox. (2019). Specific heat of food and foodstuff. https://www.engineeringtoolbox.com/specific-heat-capacity-food-d_295.html. Accessed 8 July 2019.
Filip, S., Fink, R., & Jevšnik, M. (2010). Influence of food composition on freezing time. Int. J. Sanitary Eng. Res. 4, 4-13.
George, R. M. (1993). Freezing processes used in the food industry. Trends in Food Sci. Technol. 4(5), 134-138. https://doi.org/10.1016/0924-2244(93)90032-6.
Heldman, D. R. (1974). Predicting the relationship between unfrozen water fraction and temperature during food freezing using freezing point depression. Trans. ASAE 17(1), 63-66. https://doi.org/10.13031/2013.36788.
IRR. (2006). Recommendations for the Processing and Handling of Frozen Foods. International Institute of Refrigeration.
James, S. J., & James, C. (2014). Chilling and freezing of foods. In S. Clark, S. Jung, & B. Lamsal (Eds.), Food Processing: Principles and Applications (2nd ed., Chapter 5). Wiley.
Klinbun, W., & Rattanadecho, P. (2017). An investigation of the dielectric and thermal properties of frozen foods over a temperature from −18 to 80°C. Intl. J. Food Properties 20(2), 455-464. https://doi.org/10.1080/10942912.2016.1166129.
Lopez-Leiva, M., & Hallstrom, B. (2003). The original Plank equation and its use in the development of food freezing rate predictions. J. Food Eng. 58(3), 267-275. https://doi.org/10.1016/S0260-8774(02)00385-0.
Luo, N., & Shu, H. (2017). Analysis of energy saving during food freeze drying. Procedia Eng. 205, 3763-3768. https://doi.org/10.1016/j.proeng.2017.10.330.
McHugh, T. (2018). Freeze-drying fundamentals. Food Technol. 72(2), 72–74.
Mohsenin, N. N. (1980). Thermal Properties of Plant and Animal Materials. Gordon and Breach.
Otero, L., & Sanz, P. D. (2012) High-pressure shift freezing. In D. W. Sun (Ed.), Handbook of Frozen Food Processing and Packaging (2nd ed., Chapter 29, pp. 667-683). CRC Press.
Pham, Q. T. (1987). Calculation of bound water in frozen food. J. Food Sci. 52(1), 210-212. doi.org/10.1111/j.1365-2621.1987.tb14006.x.
Pham, Q. T. (2014). Food Freezing and Thawing Calculations. SpringerBriefs in Food, Health, and Nutrition. doi.org/10.1007/978-1-4939-0557-7.
Plank, R. (1913). Z. Gesamte Kälte-Ind. 20, 109. [Translated as: The calculation of the freezing and thawing of foodstuffs. Modern Refrig. 52, 52.]
Schwartzberg, H. G. (1976). Effective heat capacities for the freezing and thawing of food. J. Food Sci. 41(1), 152-156. doi.org/10.1111/j.1365-2621.1976.tb01123.x.
Siebel, E. (1892). Specific heats of various products. Ice and Refrigeration, 2, 256-257.
Singh, R. P. (2003). Food freezing. In Encyclopedia of Life Support Systems (Vol. III, pp. 53-68). EOLSS. https://www.eolss.net/.
Singh, R. P., & Heldman, D. R. (2013). Introduction to Food Engineering (5th ed.). Academic Press.
Sudheer, K. P., & Indira, V. (2007). Post Harvest Technology of Horticultural Crops. Horticulture Science Series Vol. 7. New India Publishing.
USDA. (2019). Food composition database. ndb.nal.usda.gov/. Accessed 7 March 2019.
Xu, J.-C., Zhang, M., Mujumdar, A. S., & Adhikari, V. (2017). Recent developments in smart freezing technology applied to fresh foods. Critical Rev. Food Sci. Nutrition 57(13), 2835-2843. https://doi.org/10.1080/10408398.2015.1074158.
Yanniotis, S. (2008). Solving Problems in Food Engineering. Springer Food Engineering Series.
Ricardo Simpson
Departamento de Ingeniería Química y Ambiental, Universidad Técnica Federico Santa María, Valparaíso, Chile
Centro Regional de Estudios en Alimentos y Salud (CREAS) Conicyt-Regional GORE Valparaíso Project R17A10001, Curauma, Valparaíso, Chile
Helena Nuñez
Departamento de Ingeniería Química y Ambiental, Universidad Técnica Federico Santa María, Valparaíso, Chile
Cristian Ramírez
Departamento de Ingeniería Química y Ambiental, Universidad Técnica Federico Santa María, Valparaíso, Chile
Centro Regional de Estudios en Alimentos y Salud (CREAS) Conicyt-Regional GORE Valparaíso Project R17A10001, Curauma, Valparaíso, Chile
Key Terms
• Heat transfer
• Bacterial inactivation
• Food sterilization
• Microorganism heat resistance
• Decimal reduction time
• Commercial sterilization
Introduction
Thermal processing of foods, like cooking, involves heat and food. However, thermal processing is applied to ensure food safety, not necessarily to cook the food. Thermal processing as a means of preserving uncooked food was invented in France in 1795 by Nicolas Appert, a chef who was determined to win the prize of 12,000 francs offered by Napoleon for a way to prevent military food supplies from spoiling. Appert worked with Peter Durand to preserve meats and vegetables encased in jars or tin cans under vacuum and sealed with pitch and, by 1804, opened his first vacuum-packing plant. This French military secret soon leaked out, but it took more than 50 years for Louis Pasteur to provide the explanation for the effectiveness of Appert’s method, when Pasteur demonstrated that the growth of microorganisms was the cause of food spoilage.
The preservation for storage by thermal treatment and removal of atmosphere is known generically as canning, regardless of what container is used to store the food. The basic principles of canning have not changed dramatically since Appert and Durand developed the process: apply enough heat to food to destroy or inactivate microorganisms, then pack the food into sealed or “airtight” containers, ideally under vacuum. Canned foods have a shelf life of one to four years at ordinary temperatures, making them convenient, affordable, and easy to transport.
Outcomes
After reading this chapter, you should be able to:
• Identify the role of heat transfer concepts in thermal processing of packaged foods
• Describe the principles of commercial sterilization of foods
• Describe the inactivation conditions needed for some example microorganisms important for food safety
• Define some sterilization criteria for specific foods
• Apply, in simple form, the main thermal food processing evaluation techniques
Concepts
The main concepts used in thermal processing of foods include: (a) heat transfer; (b) heat resistance of microorganisms of concern; and (c) bacterial inactivation.
Heat Transfer
The main heat transfer mechanisms involved in the thermal processing of packaged foods are convection and conduction. Heat transfer by convection occurs due to the motion and mixing of flows. The term natural convection refers to the case when motion and mixing of flow is caused by density differences in different locations due to temperature gradients. The term forced convection refers to the case when motion and mixing of flow is produced by an outside force, e.g., a fan. Heat transfer by conduction occurs when atoms and molecules collide, transferring kinetic energy. Conceptually, atoms are bonded to their neighbors, and if energy is supplied to one part of the solid, atoms will vibrate and transfer their energy to their neighbors and so on.
The main heat transfer mechanisms involved in the thermal processing of packaged foods are shown in Figure 6.2.1. Although the figure shows a cylindrical can (a cylinder of finite diameter and height), a similar situation will arise when processing other types of packaging such as glass containers, retortable pouches, and rigid and semi-rigid plastic containers. In general, independent of shape, food package sizes range from 0.1 L to 5 L (Holdsworth and Simpson, 2016).
The main mechanism of heat transfer from the heating medium (e.g., steam or hot water) to the container or packaging is convection. Heat then transfers by conduction through the wall of the container or package. Once inside the container, heat transfer through the covering liquid occurs by convection, and through solid foods mainly by conduction. In the case of liquid foods, the main mechanism is convection.
The rate of heat transfer in packaged foods depends on process factors, product factors, and package types. Process factors include retort temperature profile, process time, heat transfer medium, and container agitation. Product factors include food composition, consistency, initial temperature, initial spore load, thermal diffusivity, and pH. Factors related to package type are container material, because the rate of heat transfer depends on thermal conductivity and thickness of the material, and container shape, because the surface area per unit volume plays a role in the heat penetration rate.
For liquid foods, the heating rate is determined not only by the thermal diffusivity, α, but also by the viscosity. The thermal diffusivity is a material property that represents how fast heat moves through the food and is determined as:
$\alpha=K_{t}/(\rho\ C_{p})$
where α = thermal diffusivity (m2/s)
• Kt = thermal conductivity (W/m-K)
• ρ = density (kg/m3)
• Cp = specific heat (J/kg-K)
For example, for water near room temperature (Kt ≈ 0.6 W/m-K, ρ ≈ 1,000 kg/m3, Cp ≈ 4,186 J/kg-K), α ≈ 1.4 × 10−7 m2/s.
It is extremely difficult to develop a theoretical model for the prediction of the time-temperature history within the packaged food. Therefore, from a practical point of view, a satisfactory thermal process (i.e., time-temperature relationship) is usually determined using the slowest heating point, the cold spot, inside the container.
Heat Resistance of Microorganisms of Concern
The main objective in the design of a sterilization process for foods is the inactivation of the microorganisms that cause food poisoning and spoilage. In order to design a safe sterilization process, the appropriate operating conditions (time and temperature) must be determined to meet the pre-established sterilization criterion. To establish this criterion, it is necessary to know the heat resistance of the microorganisms (some examples are given in Table 6.2.1), the thermal properties of the food and packaging, and the shape and dimensions of the packaged food. From these, it is possible to determine the retort temperature and holding time (that is, the conditions for inactivation), how long it will take to reach that temperature (the come-up time), and how long it will take to cool to about 40°C (the cooling time) (Holdsworth and Simpson, 2016).
Table $1$: Some typical microorganisms heat resistance data (Holdsworth and Simpson, 2016).
Organism: Conditions for Inactivation
Vegetative cells: 10 min at 80°C
Yeast ascospores: 5 min at 60°C
Fungi: 30–60 min at 88°C
Thermophilic organisms:
• Bacillus stearothermophilus: 4 min at 121.1°C
• Clostridium thermosaccharolyticum: 3–4 min at 121.1°C
Mesophilic organisms:
• Clostridium botulinum spores: 3 min at 121.1°C
• Clostridium botulinum toxins Types A & B: 0.1–1 min at 121.1°C
• Clostridium sporogenes: 1.5 min at 121.1°C
• Bacillus subtilis: 0.6 min at 121.1°C
The pH of the food is extremely relevant to the selection of the sterilization process parameters, i.e., retort temperature and holding time, because microorganisms grow better in a less acid environment. That is why the standard commercial sterilization process is based on the most resistant microorganism (Clostridium botulinum) at the worst-case scenario conditions (higher pH) (Teixeira et al., 2006). The microorganism heat resistance is greater in low-acid products (pH ≥ 4.5–4.6). On the other hand, medium-acid to acidic foods require a much gentler heat treatment (lower temperature) to meet the sterilization criterion. Based on that, foods are classified into three groups:
• low-acid products: pH > 4.5–4.6 (e.g., seafood, meat, vegetables, dairy products);
• medium-acid products: 3.7 < pH < 4.6 (e.g., tomato paste);
• acidic products: pH < 3.7 (e.g., most fruits).
Bacterial Inactivation
Abundant scientific literature supports the application of first-order kinetics to quantify bacterial (spores) inactivation as (Esty and Meyer, 1922; Ball and Olson, 1957; Stumbo, 1973, Holdsworth and Simpson, 2016):
$(\dfrac{dN}{dt})_{I} = -kN$
where $N$ = viable bacterial (microbial) concentration (microorganisms/g) after process time t
• t = time
• I = inactivation
• k = bacterial inactivation rate constant (1/time)
Instead of k, food technologists have utilized the concept of decimal reduction time, D, defined as the time to reduce bacterial concentration by ten times. In other words, D is the required time at a specified temperature to inactivate 90% of the microorganism’s population. A mathematical expression that relates the rate constant, k, from Equation 6.2.2 to D is developed by separating variables and integrating the bacterial concentration from the initial concentration, N0, to N0/10 and from time 0 to D, therefore obtaining:
$k=\dfrac{ln\ 10}{D} = \dfrac{2.303}{D}$
or
$D = \dfrac{ln\ 10}{k} = \dfrac{2.303}{k}$
where k = reaction rate constant (1/min)
D = decimal reduction time (min)
A plot of the log of the survivors (log N) against time is called a survivor curve (Figure 6.2.2). The slope of the line is −1/D, i.e., the survivor count falls through one log cycle (one decimal reduction) every D minutes, and
$log\ N = log\ N_{0} - \dfrac{t}{D}$
where N = number of survivors
N0 = N at time zero, the start of the process
Temperature Dependence of the Decimal Reduction Time, D
Every thermal process of a food product is a function of the thermal resistance of the microorganism in question. When the logarithm of the decimal reduction time, D, is plotted against temperature, a straight line results. This plot is called the thermal death time (TDT) curve (Figure 6.2.3). From such a plot, the thermal sensitivity of a microorganism, z, can be determined as the temperature change necessary to vary TDT by one log cycle.
Bigelow and co-workers (Bigelow and Esty, 1920; Bigelow, 1921) were the first to coin the term thermal death rate to relate the temperature dependence of D. Mathematically, the following expression has been used:
$log\ D = log\ D_{ref}-\dfrac{T-T_{ref}}{z}$
or
$D = D_{ref}10^{\dfrac{T_{ref}-T}{z}}$
where D = decimal reduction time at temperature T (min)
Dref = decimal reduction time at reference temperature Tref (min)
z = temperature change necessary to vary TDT by one log cycle (°C), e.g., normally z = 10°C for Clostridium botulinum
T = temperature (°C)
Tref = reference temperature (normally 121.1°C for sterilization)
The D value is directly related to the thermal resistance of a given microorganism. The more resistant the microorganism to the heat treatment, the higher the D value. On the other hand, the z value represents the temperature dependency but has no relation to the thermal resistance of the target microorganism. Then, the larger the z value the less sensitive the given microorganism is to temperature changes. D values are expressed as DT. For example, D140 means the time required to reduce the microbial population by one log cycle when the food is heated at 140°C.
Food Sterilization Criterion and Calculation
Sterilization means the complete destruction or inactivation of microorganisms. The food science and engineering community has accepted the utilization of a first-order kinetic for Clostridium botulinum inactivation (Equation 6.2.2). Again, this pathogen is the target microorganism in processes that use heat to sterilize foods. Theoretically, the inactivation time needed to fully inactivate Clostridium botulinum is infinite. According to Equations 6.2.2 and 6.2.3 and assuming a constant process temperature and that k is constant, the following expression is obtained:
$N_{f} = N_{0}e^{-kt} = N_{0}e^{-\dfrac{ln10}{D}t}$
This equation shows that the final concentration of Clostridium botulinum (Nf) tends to zero when time (t) tends to infinity; therefore, it is not possible to reach a final concentration equal to zero for the target microorganism. Thus, it is necessary to define a sterilization criterion (or commercial sterilization criterion) to design a process that guarantees a safe product within a finite time.
The level of microbial inactivation, defined by the microbial lethality value or cumulative lethality, is the way in which the sterilization process is quantified. Specifically, the sterilizing value, denoted by F0, is the required time at 121.1°C to achieve 12 decimal reductions (12D). In other words, F0 is the time required to reduce the initial microorganism concentration from N0 to N0/1012 at the process temperature of 121.1°C.
The 12D sterilization criterion is an extreme process (i.e., overkill) designed to ensure no cells of C. botulinum remain in the food and, therefore, prevent illness or death. According to the FDA (1972), the minimum thermal treatment for a low-acid food should deliver an F0 value of at least 3 min. This is slightly more conservative than 12D alone: D for C. botulinum at 121.1°C is 0.21 min, so 12 × 0.21 = 2.52 min, which is less than 3 min. Thus, a thermal process for commercial sterilization of a food product should have an F0 value greater than 3 minutes.
The F0 attained for a food can be calculated easily when the temperature at the center of the food during the thermal processing is known by:
$F_{0}=\int^{t}_{0}10^{\dfrac{T-T_{ref}}{z}}dt$
where F0 = cumulative lethality of the process from time 0 to the end of the process (t)
T = temperature measured at the food cold spot, which is the place in the food that heats last
Tref = temperature of microorganism reference; for sterilization of low-acid foods, Tref = 121.1°C for C. botulinum
z = temperature change necessary to reduce D value by ten times; in the case of sterilization of low-acid foods, z = 10°C for C. botulinum
t = process time to reach F0
Equation 6.2.9 can be calculated according to the general method proposed by Bigelow and co-workers 100 years ago (Bigelow et al., 1920; Simpson et al., 2003).
If the food is heated instantaneously to 121.1°C and maintained at this temperature for 3 min, then the F0 value for this process will be 3 min. From Equation 6.2.9,
$F_{0}=\int^{t}_{0}10^{\dfrac{121.1-121.1}{10}}dt=\int^{t}_{0}10^{0}dt=\int^{3}_{0}1dt$
Since the time interval is between 0 to 3 min, then the integral solution is 3 min or $\int^{3}_{0}1dt$= 3 – 0 = 3. However, in practice, due to the resistance of the food to the transfer of heat, the thermal sterilization process requires a longer time in order to get a F0 ≥ 3 min, because a significant part of the processing time is needed to raise the cold-spot temperature of the food and later to cool the food.
Applications
Commercial Sterilization Process
A general, simplified flow diagram for a typical commercial canning factory is presented in Figure 6.2.4.
Stage 1: Selecting and preparing the food as cleanly, rapidly, and precisely as possible. Foods that maintain their desirable color, flavor, and texture through commercial sterilization include broccoli, corn, spinach, peas, green beans, peaches, cherries, berries, sauces, purees, jams and jellies, fruit and vegetable juices, and some meats (Featherstone, 2015). The preparation must be performed with great care and with the least amount of damage and loss to minimize the monetary cost of the operation. If foods are not properly handled, the effectiveness of the sterilization treatment is compromised.
Stage 2: Packing the product in hermetically sealable containers (jars, cans, or pouches) and sealing under a vacuum to eliminate residual air. A less common approach is to sterilize the food first and then aseptically package it (aseptic processing and packaging of foods).
Stage 3: Stabilizing the food by sterilizing through rigorous thermal processing (i.e., high temperature to achieve the correct degree of sterilization or the target destruction of the microorganisms present in the food), followed by cooling of the product to a low temperature (about 40°C), at which enzymatic and chemical reactions begin to slow down.
Stage 4: Storing at a temperature below 35°C, below which the thermophilic spore formers that survive the process cannot grow.
Stage 5: Labeling, secondary packaging, distribution, marketing, and consumption. Although not part of the thermal process per se, this stage addresses the steps required for commercialization of the treated foods.
Stage 3, thermal processing, is the focus of this chapter. The aim of the thermal process is to inactivate, by the effect of heat, spores and microorganisms present in the unprocessed product. The thermal process is performed in vessels known as retorts or autoclaves to achieve the required high temperatures (usually above 100°C).
As depicted in Figure 6.2.5, a typical sterilization process has three main steps: come-up time, operator process time, and cooling. The first step, the come-up time (CUT), is the time required to reach the specified retort temperature (TRT), i.e., the target temperature in the retort. The second step is the holding time (Pt), also called operator process time, which is the amount of time that the retort temperature must be maintained to ensure the desired degree of lethality. This depends on the target microorganism or the expected microbiological contamination. The final step is the cooling, when the temperature of the product is decreased by introducing cold water into the retort. The purpose of cooling the food is to minimize the excessive (heat) processing of the food, and avoid the risk of thermophilic microorganism development. During the cooling cycle, it may be necessary to inject sterile air into the food packaging to avoid sudden internal pressure drops and prevent package deformation.
The concepts described in this chapter describe the key principles for applying a thermal process to packaged food to achieve the required lethality for food safety. These concepts can be used to design a thermal process to ensure adequate processing time and food safety while avoiding over processing the packaged food. This should ensure safe, tasty, and nutritious packaged foods.
Examples
Example $1$
Example 1: Calculation of microbial count after a given thermal process
Problem:
The D120°C value for a microorganism is 3 minutes. If the initial microbial contamination is 1012 cells per gram of product, how many microorganisms will remain in the sample after heat treatment at 120°C for 18 minutes?
Solution
Calculate the number of remaining cells using Equation 6.2.5 with N0 = 1012 cells/g, t = 18 minutes, and D120°C = 3 minutes.
From Equation 6.2.5,
$log\ N_{(t)} =log\ N_{0}-\dfrac{t}{D}$
$log\ N_{(18)} =log\ 10^{12}\dfrac{\text{cells}}{g}-\dfrac{18\text{ min}}{3\text{ min}}$
Solving for N(18) yields:
$N_{(18)} =10^{6}\text{ cells/g}$
Discussion:
Starting with a known microbial concentration (N0), the final concentration of a specific microorganism for a given thermal process at constant temperature can be calculated if the thermal resistance of the microorganism at a given temperature is known. In this case, D120°C = 3 min.
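A one-line Python check of this result, using Equation 6.2.5 (the helper name is illustrative):

```python
import math

# Survivors after t_min minutes at constant temperature, per Equation 6.2.5.
def survivors(n0, t_min, d_min):
    return 10 ** (math.log10(n0) - t_min / d_min)

print(f"N = {survivors(1e12, 18, 3):.0e} cells/g")  # 10^(12 - 6) = 1e+06 cells/g
```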
Example $2$
Example 2: Calculation of z value for a particular microorganism
Problem:
D of a given bacterium in milk at 65°C is 15 minutes. When a food sample that has 1010 cells of the bacterium per gram of food is heated for 10 minutes at 75°C, the number of survivors is 2.15 × 103 cells. Calculate z for this bacterium.
Solution
First, calculate D at the process temperature of 75°C, D75°C, using Equation 6.2.5. Then calculate z using Equation 6.2.6 with D65°C = 15 minutes, N0 = 1010 cells/g, and t = 10 minutes at T = 75°C.
$log\ N_{(t)} = log\ N_{0} - \dfrac{t}{D}$ (Equation $5$)
$log\ 2.15\times10^{3} \text{ cells/g} = log\ 10^{10} \text{ cells/g}-\dfrac{10\text{ min}}{D_{75^\circ C}}$
and D75°C = 1.5 min.
To calculate z, recall Equation 6.2.6:
$log\ D = log\ D_{ref} - \dfrac{T-T_{ref}}{z}$
Solving for z, Equation 6.2.6 can be expressed as:
$z = \dfrac{\Delta T}{log(\dfrac{D_{1}}{D_{2}})}$
with ∆T = (75 – 65)°C, D1 = D65°C and D2 = D75°C,
$z = \dfrac{75-65}{log(\dfrac{15}{1.5})}=10 ^\circ C$
Discussion:
As previously explained, the z value represents the change in process temperature required to reduce the D value of the target microorganism by ten times. In this case, the z value is 10°C and accordingly the D value was reduced 10 times, from 15 minutes to 1.5 minutes.
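This two-step calculation can also be scripted; a minimal Python sketch with illustrative names:

```python
import math

# Step 1: back out D at 75°C from the survivor equation (6.2.5).
# Step 2: compute z from the two D values (Equation 6.2.6 rearranged).
n0, n, t = 1e10, 2.15e3, 10.0
d75 = t / (math.log10(n0) - math.log10(n))  # ≈ 1.5 min
d65 = 15.0
z = (75 - 65) / math.log10(d65 / d75)       # ≈ 10 °C
print(f"D75 = {d75:.2f} min, z = {z:.1f} °C")
```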
Example $3$
Example 3: Lethality of thermal processing of a can of tuna fish
Problem:
Table 6.2.2 presents the values of temperature measured in the retort (TRT) and the temperature measured at the cold spot of a can of tuna fish (Tcold spot) during a thermal process. The total process time was 124 min until the product was cold enough to be withdrawn from the retort.
(a) Determine CUT (the time required to come up to TRT), operator process time Pt, and cooling time.
(b) Determine the lethality value (F0) attained for the can of tuna fish.
Table $2$: Retort temperature (TRT) and cold spot (Tcold spot) during thermal processing of tuna fish in a can.
Time (min)  TRT (°C)  Tcold spot (°C)
0.97  29.7  45.0
1.97  39.7  45.0
2.97  49.7  45.0
3.97  59.7  45.0
4.97  69.7  44.9
5.97  79.7  44.9
6.97  89.7  44.8
7.97  99.7  44.7
8.97  109.7  44.7
9.97  119.7  44.8
10.97  120.0  45.0
11.97  120.0  45.4
12.97  120.0  46.0
13.97  120.0  46.9
14.97  120.0  48.0
15.97  120.0  49.3
16.97  120.0  50.8
17.97  120.0  52.6
18.97  120.0  54.4
19.97  120.0  56.4
20.97  120.0  58.5
21.97  120.0  60.6
22.97  120.0  62.8
23.97  120.0  65.0
24.97  120.0  67.1
25.97  120.0  69.3
26.97  120.0  71.4
27.97  120.0  73.5
28.97  120.0  75.5
29.97  120.0  77.5
30.97  120.0  79.4
31.97  120.0  81.2
32.97  120.0  83.0
33.97  120.0  84.7
34.97  120.0  86.3
35.97  120.0  87.9
36.97  120.0  89.4
37.97  120.0  90.8
38.97  120.0  92.2
39.97  120.0  93.6
40.97  120.0  94.8
41.97  120.0  96.0
42.97  120.0  97.2
43.97  120.0  98.3
44.97  120.0  99.3
45.97  120.0  100.3
46.97  120.0  101.3
47.97  120.0  102.2
48.97  120.0  103.0
49.97  120.0  103.9
50.97  120.0  104.7
51.97  120.0  105.4
52.97  120.0  106.1
53.97  120.0  106.8
54.97  120.0  107.4
55.97  120.0  108.0
56.97  120.0  108.6
57.97  120.0  109.2
58.97  120.0  109.7
59.97  120.0  110.2
60.97  120.0  110.7
61.97  120.0  111.1
62.97  120.0  111.6
63.97  120.0  112.0
64.97  120.0  112.4
65.97  120.0  112.8
66.97  120.0  113.1
67.97  120.0  113.4
68.97  120.0  113.8
69.97  120.0  114.1
70.97  120.0  114.4
71.97  120.0  114.6
72.97  120.0  114.9
73.97  120.0  115.2
74.97  120.0  115.4
76  25.0  115.6
77  25.0  115.8
78  25.0  116.0
79  25.0  116.2
80  25.0  116.2
81  25.0  116.0
82  25.0  115.5
83  25.0  114.6
84  25.0  113.4
85  25.0  111.8
86  25.0  110.0
87  25.0  107.9
88  25.0  105.6
89  25.0  103.1
90  25.0  100.6
91  25.0  97.9
92  25.0  95.3
93  25.0  92.6
94  25.0  89.9
95  25.0  87.3
96  25.0  84.7
97  25.0  82.1
98  25.0  79.6
99  25.0  77.2
100  25.0  74.9
101  25.0  72.6
102  25.0  70.5
103  25.0  68.4
104  25.0  66.3
105  25.0  64.4
106  25.0  62.6
107  25.0  60.8
108  25  59.08
109  25  57.46
110  25  55.91
111  25  54.43
112  25  53.01
113  25  51.67
114  25  50.38
115  25  49.15
116  25  47.99
117  25  46.87
118  25  45.81
119  25  44.8
120  25  43.84
121  25  42.92
122  25  42.05
123  25  41.22
124  25  40.43
Solution
(a) To determine CUT and Pt, plot TRT and Tcold spot against time, which produces the thermal profiles in Figure 6.2.6.
Figure 6.2.6 shows that the CUT is approximately 10 min and Pt, during which the process temperature is maintained constant at 120°C, is approximately 64 min.
(b) The lethality value, F0, can be obtained through numerical integration of Equation 6.2.9 using the trapezoidal rule (Patashnik, 1953). The calculations can be completed as follows or using software such as Excel.
As presented in Table 6.2.3, for each time we can evaluate Equation 6.2.9:
$F_{0}=\int^{t}_{0}10^{\dfrac{T-121.1}{10}}dt$ (Equation $9$)
where T = Tcold spot, and Tref and the z-value for Clostridium botulinum are 121.1°C and 10°C, respectively.
Given that F0 corresponds to the integral of 10^[(Tcold spot − Tref)/z], it can be solved numerically by the trapezoidal rule, i.e., by dividing the area under the curve into trapezoids, computing the area of each trapezoid, and summing all trapezoidal areas to yield F0. (More details about the trapezoidal rule are included in the appendix.) The calculations are summarized in Table 6.2.3. In this particular case, F0 was about 6.01 min. The change of F0 along the thermal process is shown as the blue line in Figure 6.2.7.
Table $3$: Numerical integration of Equation 6.2.9 for the estimation of F0.
Time (min)  TRT (°C)  Tcold spot (°C)  (Tcold spot − Tref)/z  10^[(Tcold spot − Tref)/z]  Trapezoidal Area  Sum of Areas
0.97  29.67  45  −7.6  0.000  0.000  0.000
1.97  39.67  45  −7.6  0.000  0.000  0.000
2.97  49.67  44.99  −7.6  0.000  0.000  0.000
3.97  59.67  44.97  −7.6  0.000  0.000  0.000
4.97  69.67  44.93  −7.6  0.000  0.000  0.000
5.97  79.67  44.85  −7.6  0.000  0.000  0.000
6.97  89.67  44.76  −7.6  0.000  0.000  0.000
7.97  99.67  44.69  −7.6  0.000  0.000  0.000
8.97  109.67  44.68  −7.6  0.000  0.000  0.000
9.97  119.67  44.77  −7.6  0.000  0.000  0.000
10.97  120  45  −7.6  0.000  0.000  0.000
11.97  120  45.41  −7.6  0.000  0.000  0.000
12.97  120  46.03  −7.5  0.000  0.000  0.000
13.97  120  46.88  −7.4  0.000  0.000  0.000
14.97  120  47.97  −7.3  0.000  0.000  0.000
15.97  120  49.29  −7.2  0.000  0.000  0.000
16.97  120  50.83  −7.0  0.000  0.000  0.000
17.97  120  52.55  −6.9  0.000  0.000  0.000
18.97  120  54.42  −6.7  0.000  0.000  0.000
19.97  120  56.41  −6.5  0.000  0.000  0.000
20.97  120  58.49  −6.3  0.000  0.000  0.000
21.97  120  60.63  −6.0  0.000  0.000  0.000
22.97  120  62.79  −5.8  0.000  0.000  0.000
23.97  120  64.97  −5.6  0.000  0.000  0.000
24.97  120  67.14  −5.4  0.000  0.000  0.000
25.97  120  69.29  −5.2  0.000  0.000  0.000
26.97  120  71.41  −5.0  0.000  0.000  0.000
27.97  120  73.48  −4.8  0.000  0.000  0.000
28.97  120  75.5  −4.6  0.000  0.000  0.000
29.97  120  77.46  −4.4  0.000  0.000  0.000
30.97  120  79.36  −4.2  0.000  0.000  0.000
31.97  120  81.2  −4.0  0.000  0.000  0.000
32.97  120  82.97  −3.8  0.000  0.000  0.001
33.97  120  84.67  −3.6  0.000  0.000  0.001
34.97  120  86.31  −3.5  0.000  0.000  0.001
35.97  120  87.89  −3.3  0.000  0.001  0.002
36.97  120  89.4  −3.2  0.001  0.001  0.003
37.97  120  90.84  −3.0  0.001  0.001  0.004
38.97  120  92.22  −2.9  0.001  0.002  0.005
39.97  120  93.55  −2.8  0.002  0.002  0.007
40.97  120  94.81  −2.6  0.002  0.003  0.010
41.97  120  96.02  −2.5  0.003  0.004  0.014
42.97  120  97.17  −2.4  0.004  0.005  0.018
43.97  120  98.27  −2.3  0.005  0.006  0.024
44.97  120  99.31  −2.2  0.007  0.007  0.032
45.97  120  100.31  −2.1  0.008  0.009  0.041
46.97  120  101.27  −2.0  0.010  0.012  0.053
47.97  120  102.17  −1.9  0.013  0.014  0.067
48.97  120  103.04  −1.8  0.016  0.017  0.084
49.97  120  103.86  −1.7  0.019  0.021  0.105
50.97  120  104.65  −1.6  0.023  0.025  0.130
51.97  120  105.39  −1.6  0.027  0.029  0.159
52.97  120  106.1  −1.5  0.032  0.034  0.193
53.97  120  106.78  −1.4  0.037  0.040  0.233
54.97  120  107.42  −1.4  0.043  0.046  0.279
55.97  120  108.04  −1.3  0.049  0.053  0.332
56.97  120  108.62  −1.2  0.056  0.060  0.393
57.97  120  109.18  −1.2  0.064  0.068  0.461
58.97  120  109.7  −1.1  0.072  0.077  0.538
59.97  120  110.21  −1.1  0.081  0.086  0.624
60.97  120  110.69  −1.0  0.091  0.096  0.720
61.97  120  111.14  −1.0  0.101  0.106  0.826
62.97  120  111.57  −1.0  0.111  0.117  0.943
63.97  120  111.99  −0.9  0.123  0.129  1.072
64.97  120  112.38  −0.9  0.134  0.140  1.212
65.97  120  112.75  −0.8  0.146  0.153  1.365
66.97  120  113.11  −0.8  0.159  0.165  1.530
67.97  120  113.44  −0.8  0.171  0.178  1.708
68.97  120  113.76  −0.7  0.185  0.191  1.899
69.97  120  114.07  −0.7  0.198  0.205  2.104
70.97  120  114.36  −0.7  0.212  0.219  2.323
71.97  120  114.63  −0.6  0.225  0.233  2.555
72.97  120  114.9  −0.6  0.240  0.247  2.802
73.97  120  115.15  −0.6  0.254  0.261  3.063
74.97  120  115.38  −0.6  0.268  0.275  3.338
76  25  115.61  −0.5  0.282  0.290  3.628
77  25  115.83  −0.5  0.297  0.304  3.932
78  25  116.02  −0.5  0.310  0.316  4.248
79  25  116.17  −0.5  0.321  0.322  4.570
80  25  116.19  −0.5  0.323  0.315  4.885
81  25  115.97  −0.5  0.307  0.290  5.175
82  25  115.45  −0.6  0.272  0.248  5.422
83  25  114.58  −0.7  0.223  0.196  5.618
84  25  113.36  −0.8  0.168  0.143  5.761
85  25  111.81  −0.9  0.118  0.097  5.858
86  25  109.96  −1.1  0.077  0.062  5.920
87  25  107.87  −1.3  0.048  0.038  5.958
88  25  105.57  −1.6  0.028  0.022  5.980
89  25  103.12  −1.8  0.016  0.012  5.992
90  25  100.57  −2.1  0.009  0.007  5.999
91  25  97.94  −2.3  0.005  0.004  6.003
92  25  95.26  −2.6  0.003  0.002  6.005
93  25  92.58  −2.9  0.001  0.001  6.006
94  25  89.91  −3.1  0.001  0.001  6.007
95  25  87.26  −3.4  0.000  0.000  6.007
96  25  84.66  −3.6  0.000  0.000  6.007
97  25  82.11  −3.9  0.000  0.000  6.007
98  25  79.63  −4.1  0.000  0.000  6.007
99  25  77.22  −4.4  0.000  0.000  6.007
100  25  74.88  −4.6  0.000  0.000  6.007
101  25  72.62  −4.8  0.000  0.000  6.007
102  25  70.45  −5.1  0.000  0.000  6.007
103  25  68.35  −5.3  0.000  0.000  6.007
104  25  66.34  −5.5  0.000  0.000  6.007
105  25  64.41  −5.7  0.000  0.000  6.007
106  25  62.55  −5.9  0.000  0.000  6.007
107  25  60.78  −6.0  0.000  0.000  6.007
108  25  59.08  −6.2  0.000  0.000  6.007
109  25  57.46  −6.4  0.000  0.000  6.007
110  25  55.91  −6.5  0.000  0.000  6.007
111  25  54.43  −6.7  0.000  0.000  6.007
112  25  53.01  −6.8  0.000  0.000  6.007
113  25  51.67  −6.9  0.000  0.000  6.007
114  25  50.38  −7.1  0.000  0.000  6.007
115  25  49.15  −7.2  0.000  0.000  6.007
116  25  47.99  −7.3  0.000  0.000  6.007
117  25  46.87  −7.4  0.000  0.000  6.007
118  25  45.81  −7.5  0.000  0.000  6.007
119  25  44.8  −7.6  0.000  0.000  6.007
120  25  43.84  −7.7  0.000  0.000  6.007
121  25  42.92  −7.8  0.000  0.000  6.007
122  25  42.05  −7.9  0.000  0.000  6.007
123  25  41.22  −8.0  0.000  0.000  6.007
124  25  40.43  −8.1  0.000  0.000  6.007
Discussion:
The cumulative lethality, F0, was about 6.01 min, meaning that the process is safe according to FDA requirements, i.e., F0 ≥ 3 min (see the Food Sterilization Criterion and Calculation section above).
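The general method lends itself to a few lines of code. A minimal Python sketch of the trapezoidal integration of Equation 6.2.9, shown here on the last four heating-phase points of Table 6.2.3 (the function name is illustrative; in practice `times` and `temps` would hold the full columns):

```python
# A sketch of the general (Bigelow) method: trapezoidal integration of the lethal
# rate 10^((T - Tref)/z) over the cold-spot temperature history.
def f0_trapezoid(times, temps, t_ref=121.1, z=10.0):
    lethal = [10 ** ((T - t_ref) / z) for T in temps]
    return sum(0.5 * (times[i] - times[i - 1]) * (lethal[i] + lethal[i - 1])
               for i in range(1, len(times)))

# Small check on the last four heating-phase points of Table 6.2.3:
times = [71.97, 72.97, 73.97, 74.97]
temps = [114.63, 114.90, 115.15, 115.38]
print(f"partial F0 = {f0_trapezoid(times, temps):.3f} min")  # ≈ 0.741 min
```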
Example $4$
Example 4: Lethality of thermal processing of a can of mussels
Problem:
Temperatures measured in the retort and at the cold spot of a can of mussels during a thermal process performed at 120°C were recorded. The total process time was 113 min until the product was cold enough to be withdrawn from the retort. The measured thermal profiles (TRT and Tcold spot) were plotted, as was done in Example 6.2.3. The resulting plot (Figure 6.2.8) shows that CUT was approximately 10 min and Pt was approximately 53 min. The lethality value, F0, was obtained through numerical integration of Equation 6.2.9. In this case, the F0 attained in the can of mussels processed at 120°C was 2.508 min. The evolution of F0 along the thermal process is shown in Figure 6.2.9 as the blue line.
Discussion:
The cumulative lethality, F0, attained during the thermal process was about 2.5 min, meaning that the process is not safe according to FDA requirements (F0 ≥ 3 min). Thus, the processing time of the mussel canning process must be extended to reach the safety value recommended by the FDA.
Example $5$
Example 5: Processing time at different retort temperatures
Problem:
Determine the required processing time to get a lethality of 6 min (F0 = 6 min) when the retort temperature is (a) 120°C and considered equal to the cold spot temperature, (b) 110°C, and (c) 130°C.
Solution
The F0 is typically set for the 12D value to give a 12 log reduction of heat-resistant species of mesophilic spores (typically taken as C. botulinum). The Tref = 121.1°C and z = 10°C. Therefore, Equation 6.2.9 can be used directly by replacing T by the retort temperature, given that cold spot temperature can be assumed equal to retort temperature:
$F_{0} = \int^{t}_{0}10^{\dfrac{T-121.1}{z}}dt$ (Equation $9$)
$6 = \int^{t}_{0}10^{\dfrac{120-121.1}{10}}dt$
$t = \dfrac{6}{10^{\dfrac{120-121.1}{10}}}$
(a) Solving the integral yields a processing time, t, of 7.7 min.
(b) When the temperature of the retort is reduced to 110°C, the lethality must be maintained at 6 min. Solving Equation 6.2.9:
$6 = \int^{t}_{0} 10^{\dfrac{110-121.1}{10}} dt$
gives the required processing time t of 77.3 min.
(c) When the temperature of the retort is increased to 130°C, maintaining F0 = 6 min, the processing time is reduced to 0.77 min:
$6 = \int^{t}_{0} 10^{\dfrac{130-121.1}{10}} dt$
Discussion:
The results show that each 10°C increase in the product temperature reduces the required processing time by a factor of ten. This follows directly from the z value of 10°C.
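Because the lethal rate is constant at a constant (instantaneously reached) temperature, the integral reduces to a division; a minimal Python sketch (the function name is illustrative):

```python
# Required holding time for a target F0 at constant retort temperature
# (Equation 6.2.9 with T constant).
def process_time(f0_min, t_retort, t_ref=121.1, z=10.0):
    return f0_min / 10 ** ((t_retort - t_ref) / z)

for T in (120, 110, 130):
    print(f"TRT = {T}°C: t = {process_time(6, T):.2f} min")
# ≈ 7.73, 77.29, and 0.77 min
```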
Appendix: The Trapezoidal Rule
A trapezoid is a four-sided region with two opposite sides parallel (Figure 6.2.10). The area of a trapezoid is the average length of the two parallel sides multiplied by the distance between them. In Figure 6.2.11, the area (A) under a function f(x) between points x0 and xn is given by:
$A=\int^{x_{n}}_{x_{0}} f(x)dx$
An approximation of the area A is the sum of the areas of the individual trapezoids (T), where T can be calculated using Equation 6.2.11:
$T=\dfrac{1}{2}\Delta x_{1}[f(x_{0})+f(x_{1})]+\dfrac{1}{2}\Delta x_{2}[f(x_{1})+f(x_{2})]+…+\dfrac{1}{2}\Delta x_{n}[f(x_{n-1})+f(x_{n})]$
where $\Delta x_{i}=x_{i}-x_{i-1}$, for i = 1, 2, 3, . . . , n
In the particular case where ∆x1 = ∆x2 = ∆x3 = . . . = ∆xn = ∆x, Equation 6.2.11 can be expressed as:
$T=\Delta x[\dfrac{f(x_{0})}{2} + f(x_{1})+ f(x_{2})+ f(x_{3})+…+\dfrac{f(x_{n})}{2}]$
or, in the following reduced form:
$T=\Delta x[\dfrac{f(x_{0})}{2} +\sum^{n-1}_{i=1}f(x_{i})+\dfrac{f(x_{n})}{2}]$
Finally, to estimate area A under the trapezoidal rule,
$A=\int^{x_{n}}_{x_{0}}f(x)dx\cong\dfrac{1}{2}\Delta x_{1}[f(x_{0})+f(x_{1})]+\dfrac{1}{2}\Delta x_{2}[f(x_{1})+f(x_{2})]+…+\dfrac{1}{2}\Delta x_{n}[f(x_{n-1})+f(x_{n})]$
When all intervals are of the same size (∆x1 = ∆x2 = ∆x3 = . . . = ∆xn = ∆x), the following expression can be applied:
$A=\int^{x_{n}}_{x_{0}}f(x)dx\cong\Delta x(\dfrac{f(x_{0})}{2} +\sum^{n-1}_{i=1}f(x_{i})+\dfrac{f(x_{n})}{2}) =\dfrac{1}{2}\Delta x(f(x_{0}) +2\sum^{n-1}_{i=1}f(x_{i})+f(x_{n}))$
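For a quick numerical check of the rule, the uniform-step summation above can be written in a few lines. This Python sketch (illustrative, not from the textbook) defines a trapezoidal integrator and tests it on a function whose exact integral is known:

```python
# Minimal sketch of the uniform-step trapezoidal rule (Equation 6.2.15).
def trapezoid_uniform(y, dx):
    """Approximate the integral of sampled values y with uniform spacing dx."""
    return dx * (y[0] / 2 + sum(y[1:-1]) + y[-1] / 2)

# Check against a known integral: the area under f(x) = x on [0, 4] is 8.
print(trapezoid_uniform([0, 1, 2, 3, 4], 1.0))  # -> 8.0
```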
Example $6$
Example
Problem:
Using the heat penetration data at the cold spot of a canned food in Table 6.2.4, calculate the cumulative lethality, F0, in the range of 23 to 27 min using the trapezoidal rule.
Table $4$: Heat penetration data at the slowest heating point.
Time (min) Temperature (C)
. . .
. . .
23
118.5
24
118.7
25
118.9
26
119.1
27
119.3
. . .
. . .
Solution
From Equation 6.2.9,
$F_{0}=\int^{27}_{23}10^{\dfrac{T-121.1}{10}}dt$
Applying the trapezoidal rule and considering that all time steps are equal (∆t = 1 min), calculate F0 using Equation 6.2.15,
$F_{0}=\int^{27}_{23}10^{\dfrac{T-121.1}{10}}dt \cong \dfrac{\Delta t}{2}[f(23)+2f(24)+2f(25)+2f(26)+f(27)]$
where ∆t = 1 (1 min interval), and:
$f(23)=10^{\dfrac{118.5-121.1}{10}} = 0.549541$
$f(24)=10^{\dfrac{118.7-121.1}{10}} = 0.57544$
$f(25)=10^{\dfrac{118.9-121.1}{10}} = 0.6025596$
$f(26)=10^{\dfrac{119.1-121.1}{10}} = 0.63095734$
$f(27)=10^{\dfrac{119.3-121.1}{10}} = 0.66069345$
Replacing into equation (6.2.15):
$F_{0}=\int^{27}_{23}10^{\dfrac{T-121.1}{10}}dt \cong \dfrac{1}{2}(0.549541+2\times0.57544+2\times0.6025596+2\times 0.63095734+0.66069345)$
Therefore, F0 ≅ 2.41407394 min ≈ 2.41 min.
Discussion:
The applied process to sterilize the target food is not safe since F0 < 3 minutes.
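The same trapezoidal calculation can be scripted. The Python sketch below (illustrative, not the textbook's code) recomputes the lethal rates and F0 for the data in Table 6.2.4:

```python
# Reproducing Example 6: lethal rate L = 10^((T - 121.1)/10) at each minute,
# integrated with the trapezoidal rule over 23 to 27 min.
temps = [118.5, 118.7, 118.9, 119.1, 119.3]  # cold-spot temperatures, degrees C

L = [10 ** ((T - 121.1) / 10) for T in temps]  # lethal rates
dt = 1.0                                       # time step, min
F0 = dt * (L[0] / 2 + sum(L[1:-1]) + L[-1] / 2)
print(f"F0 = {F0:.2f} min")  # -> about 2.41 min, below the 3 min criterion
```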
Image Credits
Figure 1. Simpson, R. (CC By 4.0). (2020). Main heat transfer mechanisms involved in the thermal processing of packaged foods. Retrieved from onlinelibrary.wiley.com
Figure 2. Holdsworth, S. D., & Simpson, R. (CC By 4.0). (2020). Semilogarithmic survivor curve. Retrieved from https://www.springer.com/la/book/9783319249025
Figure 3. Holdsworth, S. D., & Simpson, R. (CC By 4.0). (2020). Thermal death time (TDT) curve. Retrieved from https://www.springer.com/la/book/9783319249025.
Figure 4. Simpson, R. (CC By 4.0). (2020). Stages of a typical food commercial canning factory.
Figure 5. Ramírez, C. (CC By 4.0). (2020). Temperature profiles for a typical thermal process, where CUT is come-up time and Pt is operator time.
Figure 6. Ramírez, C. (CC By 4.0). (2020). Temperature profile of thermal processing data in table 2.
Figure 7. Ramírez, C. (CC By 4.0). (2020). Thermal process temperature profiles including the cumulative lethality value (F at any time t).
Figure 8. Ramírez, C. (CC By 4.0). (2020). Temperature profile of thermal processing data (Table 4).
Figure 9. Ramírez, C. (CC By 4.0). (2020). Thermal process temperature profiles including the cumulative lethality value (F at any time t).
Figure 10. Simpson, R. (CC By 4.0). (2020). Example of a trapezoid.
Figure 11. Simpson, R. (CC By 4.0). (2020). Curve divided into n equal parts each of length ΔX.
References
Ball, C. O., & Olson, F. C. (1957). Sterilization in food technology—Theory, practice and calculations. New York, NY: McGraw-Hill.
Bigelow, W. D. (1921). The logarithmic nature of thermal death time curves. J. Infectious Dis., 29(5), 528-536. https://doi.org/10.1093/infdis/29.5.528.
Bigelow, W. D., & Esty, J. R. (1920). The thermal death point in relation to time of typical thermophilic organisms. J. Infectious Dis., 27(6), 602-617. https://doi.org/10.1093/infdis/27.6.602.
Bigelow, W. D., Bohart, G. S., Richardson, A. C., & Ball, C. O. (1920). Heat penetration in processing canned foods. Bull. No. 16. Washington, DC: Research Laboratory, National Canners Association.
Esty, J. R., & Meyer, K. F. (1922). The heat resistance of the spores of B. botulinus and allied anaerobes. J. Infectious Dis., 31(6), 650-663. https://doi.org/10.1093/infdis/31.6.650.
FDA. (1972). Sterilizing symbols. Low acid canned foods. Inspection technical guide. Ch. 7. ORO/ETSB (HFC-133). Washington, DC: FDA.
Featherstone, S. (2015). 7: Retortable flexible containers for food packaging. In A complete course in canning and related processes (14th ed.). Vol. 2: Microbiology, packaging, HACCP and ingredients (pp. 137-146). Sawston, Cambridge, U.K.: Woodhead Publ. https://doi.org/10.1016/B978-0-85709-678-4.00007-5.
Holdsworth, S. D., & Simpson, R. (2016). Thermal processing of packaged foods (3rd ed.). Springer. https://doi.org/10.1007/978-3-319-24904-9.
Patashnik, M. (1953). A simplified procedure for thermal process evaluation. Food Technol., 7(1), 1-6.
Simpson, R., Almonacid, S., & Teixeira, A. (2003). Bigelow’s general method revisited: Development of a new calculation technique. J. Food Sci., 68(4), 1324-1333. https://doi.org/10.1111/j.1365-2621.2003.tb09646.x.
Stumbo, C. R. (1973). Thermobacteriology in food processing (2nd. ed.). New York, NY: Academic Press.
Teixeira, A., Almonacid, S., & Simpson, R. (2006). Keeping botulism out of canned foods. Food Technol., 60(2), Back page.
textbooks/eng/Biological_Engineering/Introduction_to_Biosystems_Engineering_(Holden_et_al.)/06%3A_Processing_Systems/6.02%3A_Principles_of_Thermal_Processing_of_Packaged_Foods.txt
Ram Yamsaengsung
Department of Chemical Engineering
Faculty of Engineering
Prince of Songkla University
Hat Yai, Thailand
Bandhita Saibandith
Department of Biotechnology
Faculty of Agro-Industry
Kasetsart University
Chatuchak, Bangkok, Thailand
Key Terms
Frying chemistry, Heat transfer, Heat balance, Mass transfer, Mass and material balance, Product drying rate, Frying technology, Industrial continuous frying systems, Vacuum frying systems
Introduction
This chapter introduces the basic principles of frying and its relevance to the food industry. To illustrate its importance, various fried products from around the world are described, and the mechanisms, equipment, and chemistry of the frying process are discussed. The pros and cons of frying food are presented in the context of texture, appearance, taste, and acceptability.
Frying is a highly popular method of cooking and has been used for thousands of years. A few examples from around the world include noodles, egg rolls, and crispy taros in China; tempuras (battered fried meats and vegetables) in Japan; fish and chips in the United Kingdom; and fried pork legs in Germany. In Latin countries and Tex-Mex restaurants, fried foods include tortilla-based products, such as tacos, nachos, and quesadillas. Examples of other popular fried foods include French fries, onion rings, and fried chicken, along with fried desserts such as doughnuts and battered, fried candy bars.
Traditionally, there are two major types of fried foods: (1) deep-fat fried (deep fried), such as potato chips, French fries, and battered fried chicken; and (2) pan fried, such as pancakes, eggs, and stir-fried dishes. This chapter focuses on atmospheric and vacuum deep-fat frying systems.
Outcomes
After reading this chapter, you should be able to:
• Describe various types of frying technology
• Describe basic frying chemistry and the heat and mass transfer mechanisms that are involved in the manufacture of different types of fried products
• Explain the advantages and disadvantages of the frying process
• Analyze the frying process using fundamental equations and calculate the rate of water removal and the amount of heat required during frying
Concepts
Frying Technology
Frying is defined as the process of cooking and drying through contact with hot oil. It involves simultaneous heat and mass transfer. Frying technology is important to many sectors of the food industry from suppliers of oils and ingredients; to fast-food outlets and restaurants; to industrial producers of fully fried, par-fried, and snack food products; and finally to manufacturers of frying equipment. The amount of fried food and oil used at both the industrial and commercial levels is massive.
Deep-Fat Frying (Deep Frying)
The process of immersing food partially or completely in oil during part or all of the cooking period at atmospheric pressure (760 mm Hg or 101.3 kPa absolute) is called deep-fat frying or deep frying. The food is completely surrounded by the oil, which is a very efficient heat-transfer medium. In addition to cooking the food, frying oil produces a crispy texture in food such as potato chips, French fries, and battered fried chicken (Moreira et al., 1999). The resulting product is usually golden brown in color with an oil content ranging from 8 to 25%.
A typical deep-fat fryer consists of a chamber into which heated oil and a food product are placed. The speed and efficiency of the frying process depend on the temperature and the overall quality of the oil, in terms of degradation of triglycerides and changes in thermal and physical properties such as color and viscosity (Moreira et al., 1999). The frying temperature is usually between 160° and 190°C. Cooking oil (such as sunflower oil, canola oil, soybean oil, corn oil, peanut oil, and olive oil) not only acts as the heat transfer medium, but it also enters into the product, providing flavor; Table 6.3.1 lists the oil content of commonly deep-fat fried products.
In addition to frying at atmospheric pressure, food products can also be fried under a vacuum, where the pressure is reduced to about 60 mm Hg (8 kPa absolute). At this lower pressure, the boiling point of water is decreased to 41°C allowing for the frying oil temperature to be reduced to 90°–110°C. As a result, heat-sensitive products, such as fruits with a high sugar content (e.g., bananas, apples, jackfruits, durians, and pineapples) can be fried to a crisp. Furthermore, the fried products are able to maintain a fresh color and intense flavor, while the frying oil will have a longer life because of less contact with atmospheric oxygen.
Table $1$: General oil content of products that are deep-fat fried using atmospheric frying (Moreira et al., 1999).
Product: Oil Content (%)
Potato chips: 33–38
Tortilla chips: 23–30
Expanded snack products: 20–40
Roasted nuts: 5–6
French fries: 10–15
Doughnuts: 20–25
Frozen foods: 10–15
Chemistry of Frying
Sources of Oil Used in Frying
Oil seed crops are planted throughout the world to produce cooking oil. The seeds are washed and crushed before oil is removed using an extraction process. The oil is then refined to remove any unwanted taste, smell, color, or impurities. Some oils, such as virgin olive oil, walnut oil, and grapeseed oil, are pressed straight from the seed or fruit without further refining (EUFIC, 2014). Some other sources of frying oil include sunflower, canola, palm, and soybean.
Most vegetable oils are liquid at room temperature. When oils are heated, unsaturated fatty acids, which are the building blocks of triglycerides, are degraded. Monounsaturated-rich oils, such as olive oil or peanut oil, are more stable and can be re-used much more than polyunsaturated-rich oils like corn oil or soybean oil. For this reason, when deep-frying foods, it is important not to overheat the oil and to change it frequently.
Chemical Reactions
Many chemical reactions, including hydrolysis, isomerization, and pyrolysis, take place during frying and affect the quality and storage time of the oil. Several of these reactions lead to spoilage of the oil.
Hydrolysis is a chemical reaction in which a water molecule is inserted across a covalent bond and breaks the bond. Hydrolysis is the major chemical reaction that occurs during frying. As the food product is heated, water in the food evaporates and the water vapor diffuses into the oil. The water molecules cause hydrolysis in the oil, resulting in the formation of free fatty acids, reduction of the smoke point of the oil, and unpleasant flavors in both the oil and the food. The smoke point, or the burning point, of an oil or fat is the temperature at which it begins to produce a continuous bluish smoke that becomes clearly visible (AOCS, 2017). Baking powder also promotes hydrolysis of the oil (Moreira et al., 1999). Table 6.3.2 lists the smoke points of some common oils used in frying. For high temperature cooking (160–190°C), an oil with a low smoke point, such as unrefined sunflower oil and unrefined corn oil, may not be suitable.
Table $2$: Smoke points of common oil used in frying (modified from Guillaume et al., 2018).
Cooking Oils and Fats: Smoke Point
Unrefined sunflower oil: 107°C (225°F)
Unrefined corn oil: 160°C (320°F)
Butter: 177°C (350°F)
Coconut oil: 177°C (350°F)
Vegetable shortening: 182°C (360°F)
Lard: 182°C (370°F)
Refined canola oil: 204°C (400°F)
Sesame oil: 210°C (410°F)
Grapeseed oil: 216°C (420°F)
Virgin olive oil: 216°C (420°F)
Sunflower oil: 227°C (440°F)
Refined corn oil: 232°C (450°F)
Palm oil: 232°C (450°F)
Extra light olive oil: 242°C (468°F)
Rice bran oil: 254°C (490°F)
Avocado oil: 271°C (520°F)
Isomerization is the process by which one molecule is transformed into another molecule that has exactly the same atoms but arranged differently. Isomerization occurs rapidly during standby and frying periods. The bonds in the triglycerides are rearranged, making the oil more unstable and more sensitive to oxidation.
Pyrolysis is the extensive thermal breakdown of the chemical structure of the oil, forming compounds of lower molecular weight.
Fried foods may absorb many oxidative products, such as hydroperoxides and aldehydes, that are produced during frying (Sikorski & Kolakowska, 2002), thus affecting the quality of the oil.
Repeated frying (using the same oil several times) increases the viscosity and darkens the color of the cooking oil. If the physico-chemical properties of cooking oil deteriorate, the oil must be discarded because it can prove to be harmful for human consumption (Goswami et al., 2015; Rani et al., 2010; Choe et al., 2007). Antioxidants, such as Vitamin E, added during frying are extremely effective in decreasing the rate of lipid oxidation, while enzymes such as superoxide dismutase, catalase, and peroxidase are also beneficial. Nonetheless, Vitamin E effectiveness decreases with increasing temperature (Goswami et al., 2015).
Heat and Mass Transfer Processes During Frying
The frying process, whether atmospheric or vacuum frying, is quite complicated involving coupled heat and mass transfer through a porous medium (the food), crust formation, and product shrinkage and expansion. These mechanisms all contribute to the difficulties in predicting physical and structural appearance of the final product. Thus, an understanding of the frying mechanism and the heat and mass transport phenomena is useful for food processors in order to produce and develop new fried and vacuum fried snack foods to meet the demands of consumers.
Heat Transfer
During the frying process, both heat and mass transfer take place, with water leaving and oil entering the product (Figure 6.3.1). The heat transfer processes include radiation from the heat source to the fryer, conduction from the fryer outer wall to the inner surface, and from inner surface to oil. Once the oil is heated, heat energy is transferred by convection to the surface of the product. Due to the high temperature of frying (160–190°C), the convective heat transfer coefficient is much higher than air-drying processes. Finally, heat is conducted from the hotter surface to the colder center of the product, thus increasing its temperature.
Heat transfer during the frying process can be described using the three following simplifying assumptions (Equations 6.3.1–6.3.3) relating to convection, conduction, and sensible heat.
The first assumption is that heat is transferred from the oil to the product surface via convection:
$q=hA\Delta T = hA(T_{s}-T_{\infty})$
where q = rate of heat transfer (J s−1 or W) (due to convection, in this case)
h = convective heat transfer coefficient (W m−2 °C−1)
A = surface area of product (m2)
ΔT = Ts − T∞ = difference in temperature (°C) between the product surface and the oil
Ts = product surface temperature (°C)
T∞ = oil temperature (°C)
Table 6.3.3 lists ranges of values of the convective heat transfer coefficient (h) for several processes and media. Forced convection increases the heat transfer coefficient dramatically compared to free convection. At the same time, liquids have a much greater h value than gases, while a convection process with phase change can create a heat transfer coefficient as high as 2,500–100,000 W m−2 K−1. Krokida et al. (2002) provide a good compilation of literature data on convective heat transfer coefficients in food processing operations and Alvis et al. (2009) is a good source for values of the coefficient in frying operations.
The second assumption is that heat is transferred from the product surface internally via conduction:
$q=kA\frac{\Delta T}{\Delta x} = kA\frac{(T_{1}-T_{2})}{\Delta x}$
Table $3$: Typical values of convective heat transfer coefficient (Engineering ToolBox, 2003).
Process: h (W m−2 K−1)
Free convection, gases (e.g., air): 2–20
Free convection, liquids (e.g., water, oil): 50–1,000
Forced convection, gases (e.g., air): 25–300
Forced convection, liquids (e.g., water, oil): 100–40,000
Convection with phase change (boiling or condensation): 2,500–100,000
where q = rate of heat transfer (J s−1 or W) (due to conduction, in this case)
k = thermal conductivity (W m−1 °C−1)
A = surface area of product (m2)
ΔT = T1 − T2 = difference in temperature between the inner and outer surface of the product (°C)
Δx = thickness of product (m)
The third assumption is that the heat from the oil is also used as sensible heat (change in the temperature of the product without a change in phase) to increase the product temperature toward the oil temperature:
$Q=\Delta H = \dot{m}C_{p} \Delta T= \dot{m}C_{p}(T_{1}-T_{2})$
where Q = rate of sensible heat transfer (kJ s−1)
ΔH = rate of change in enthalpy (kJ s−1)
$\dot{m}$ = mass flow rate (kg s−1)
Cp = specific heat (kJ kg−1 °C−1)
ΔT = T1 − T2 = change in temperature of the material without a phase change (°C)
Table 6.3.4 gives the specific heat of water, vegetable oil, and common materials. As shown, the specific heat of vegetable oil is less than half that of liquid water, indicating that much less energy is needed to raise the temperature of the same amount of material by 1°C.
Table $4$: Specific heat of some common materials (Figura and Teixeira, 2007).
Material: Specific Heat, Cp (kJ kg−1 °C−1)
Liquid water: 4.18
Solid water (ice): 2.11
Water vapor: 2.00
Vegetable oil: 2.00
Dry air: 1.01
Sensible heat from the oil increases the water temperature to its boiling point. The heat absorbed at the boiling point to change the water from liquid to vapor is the latent heat of vaporization. This evaporation cools the product region, keeping the product temperature near the boiling point until most of the water has been removed.
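To make these relationships concrete, the Python sketch below evaluates Equations 6.3.1 through 6.3.3 for one set of values; every number in it (h, A, k, temperatures, mass flow rate) is an illustrative assumption, not data from the chapter:

```python
# Illustrative evaluation of Equations 6.3.1-6.3.3 (all values assumed).
h = 250.0                    # convective coefficient, W m-2 C-1 (forced liquid, Table 6.3.3)
A = 0.01                     # product surface area, m2 (assumed)
T_oil, T_s = 180.0, 100.0    # oil and product surface temperatures, C (assumed)

q_conv = h * A * (T_oil - T_s)         # Eq. 6.3.1: convective heat transfer, W
k, dx = 0.5, 0.002                     # thermal conductivity W m-1 C-1, thickness m (assumed)
q_cond = k * A * (T_s - 60.0) / dx     # Eq. 6.3.2: conduction toward a 60 C interior (assumed), W
m_dot, Cp = 0.05, 2.9                  # mass flow rate kg/s, specific heat kJ kg-1 C-1 (assumed)
Q_sens = m_dot * Cp * (100.0 - 25.0)   # Eq. 6.3.3: sensible heat to warm feed 25 -> 100 C, kJ/s
print(q_conv, q_cond, Q_sens)          # -> 200 W, 100 W, 10.875 kJ/s
```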
Heat Balance
The simplified heat balance equation is:
$\rho C_{p}\frac{dT}{dt}-div[k\nabla T] = Q_{\text{heatsource}}+h(T_{\infty}-T)$
where ρ = density of product (kg m−3)
Cp = heat capacity of product (J kg−1 °C−1)
div[k∇(T)] = conduction term = $\frac{\partial}{\partial x}(k\frac{\partial T}{\partial x})+\frac{\partial}{\partial y}(k\frac{\partial T}{\partial y})+\frac{\partial}{\partial z}(k\frac{\partial T}{\partial z})$
x = direction x (m)
y = direction y (m)
z = direction z (m)
k = thermal conductivity (W m−1 °C−1)
Qheatsource = latent heat of evaporation term (J s−1 m−2)
h = convective heat transfer coefficient (W m−2 °C−1)
T∞ = oil temperature at time t (°C)
T = product temperature at time t (°C)
t = time (s)
The simplified heat balance equation (Equation 6.3.4), consists of the heat accumulation term [ρCp(dT/dt)], the conduction term div[k∇(T)], the heat source term (Qheatsource) denoting the latent heat of vaporization, and the convection term, h(Toil – T), at the boundary surface, respectively. The heat accumulation term represents the change in the enthalpy of the system as a function of time. This change accounts for the heating of the product (change in enthalpy) and the transfer of the heat from the heated product toward evaporating the water vapor from the product. The conduction term accounts for the transfer of the heat from the product surface toward the center of the material, while the convection term accounts for the transfer of heat from the oil to the product surface and is dependent on the heat transfer coefficient of the cooking oil (Yamsaengsung et al., 2008).
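Solving Equation 6.3.4 in full requires numerical methods, but a one-dimensional special case (conduction within a slab plus a convective surface, with the evaporation term set to zero) can be sketched with an explicit finite-difference scheme. All property values below are assumptions chosen only for illustration:

```python
# A minimal 1-D sketch of Equation 6.3.4 with Q_heatsource = 0:
# explicit finite differences for conduction, convective boundary at the surface,
# symmetry at the slab centerline. All numbers are assumed, not chapter data.
import numpy as np

rho, Cp, k = 1050.0, 2900.0, 0.5   # density kg m-3, heat capacity J kg-1 C-1, conductivity W m-1 C-1
h, T_oil = 250.0, 180.0            # surface coefficient W m-2 C-1, oil temperature C
L, n = 0.004, 41                   # half-thickness of slab (m), number of grid points
dx = L / (n - 1)
alpha = k / (rho * Cp)             # thermal diffusivity, m2/s
dt = 0.4 * dx**2 / alpha           # time step inside the explicit stability limit

T = np.full(n, 25.0)               # initial product temperature, C
for _ in range(int(60 / dt)):      # simulate 60 s of frying
    Tn = T.copy()
    # interior nodes: conduction term of Eq. 6.3.4
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # surface node: convective boundary, h(T_oil - T), via a ghost node
    Tn[0] = T[0] + 2 * alpha * dt / dx**2 * ((T[1] - T[0]) + dx * h / k * (T_oil - T[0]))
    Tn[-1] = Tn[-2]                # zero-gradient (symmetry) at the centerline
    T = Tn
print(f"surface {T[0]:.1f} C, center {T[-1]:.1f} C after 60 s")
```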
Mass Transfer
The mass transfer processes during frying include (Figure 6.3.1):
1. As the hot oil heats the product by conduction, the heat evaporates the water in the product when it reaches the water boiling temperature (Farkas et al., 1996).
2. As the water turns into vapor, it diffuses within the product and moves out of the product by convection.
3. Oil is driven into the product via capillary pressure (which is the pressure difference between two immiscible fluids in a thin tube), resulting from the interactions of forces between the fluids and the solid walls of the tube. Capillary pressure can serve as both an opposing force and a driving force for fluid transport (Moreira and Barrufet, 1998).
4. The final product is comprised of solids, water, air, and oil. In general, the product becomes more hygroscopic, i.e., readily attracts water from its surroundings, as frying proceeds. French fries are an excellent example of a product with a crispy surface or crust and a soft inner portion called crumb. In brief, after a specific time, the surface of the product becomes crispy, while the internal part of the product may retain a certain amount of moisture, leaving it with a softer texture.
Figure 6.3.2 depicts a typical non-hygroscopic material and a hygroscopic material (Figura and Teixeira, 2007). Each material consists of all three phases: gas, liquid water, and solid. One major difference is that in a hygroscopic material there is bound water. Bound water is defined as water that is bonded strongly to the inner surface of the pores of the materials (Yamsaengsung and Moreira, 2002) and very difficult to remove. In contrast, free water can be removed through capillary diffusion (Moreira et al., 1999) and convection flow from a pressure gradient. The bound water requires a longer drying and frying time to be removed. While more heat energy is required to remove this bound water, its removal leads to shrinkage of the material. Drying can also lead to shrinkage of the material, but frying can lead to additional puffing and expansion of the structure as the water vapor and gas expand during the later stages of the frying process (Yamsaengsung and Moreira, 2002).
Product Drying Rate
The percent moisture content of a food material can be expressed as wet basis (% w.b.) or dry basis (% d.b.) by mass. Percentage wet basis is commonly used in commercial applications while percentage dry basis is used in research reports.
The % w.b. is defined as:
$\%\ w.b.=(\frac{\text{water content (kg)}}{\text{total weight of product (kg)}}) \cdot100$
The % d.b. is defined as:
$\%\ d.b.=(\frac{\text{kg of water}}{\text{kg of dried food}}) \cdot100$
The drying rate of the product during the frying process is divided into constant and falling rate periods. During the constant rate period, water removal is fairly constant. During the falling rate period, the rate of water removal is drastically reduced. Each period is characterized by an averaged set of heat and mass transport parameters (Yamsaengsung, 2014). The moisture ratio (MR) is defined as:
$MR = \frac{MC_{t}-MC_{e}}{MC_{o}-MC_{e}}$
where MR = moisture ratio
MCt = moisture content at frying time t (decimal d.b. or w.b.)
MCe = equilibrium moisture content; the moisture content of the product under equilibrium conditions at constant temperature and relative humidity (d.b. or w.b.)
MC0 = moisture content at start of frying, time = 0 (d.b. or w.b.)
Figure 6.3.3 illustrates the constant rate and the falling rate periods of the MR during the frying process.
During the constant rate period, free water is removed as vapor via evaporation and diffusion from the product. Changes in the product surface as the crust forms are also taking place during this period. Typically, for crispy foods, the moisture content should be less than 5% w.b. During the falling rate period, a distinct crust region has been developed and bound water is being removed via vapor diffusion. The rate of oil absorption is proportional to the rate of moisture loss during the constant rate period, but is limited by the presence of the crust during the falling rate period. The development of crust and the increase in pressure inside the structure as the gas vapor expands with continuous heat absorption help to limit oil absorption, while causing the pores of the product to expand and the entire product to increase its thickness. This expansion is called puffing (Moreira et al., 1999).
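Equations 6.3.5 through 6.3.7 are simple enough to wrap in small helper functions. In the Python sketch below the function names are illustrative, and the final line reproduces the MR value at t = 2 min from Example 3 later in the chapter:

```python
# Sketch of Equations 6.3.5-6.3.7 (names are illustrative, not from the chapter).
def wb(water, total):
    """Percent moisture, wet basis (Eq. 6.3.5): water over total product mass."""
    return 100.0 * water / total

def db(water, dry_solids):
    """Percent moisture, dry basis (Eq. 6.3.6): water over dry matter."""
    return 100.0 * water / dry_solids

def moisture_ratio(MC_t, MC_0, MC_e):
    """Moisture ratio (Eq. 6.3.7)."""
    return (MC_t - MC_e) / (MC_0 - MC_e)

# A 10 kg product containing 7 kg water: 70% w.b., about 233% d.b.
print(wb(7, 10), db(7, 3))
print(moisture_ratio(1.40, 2.33, 0.02))  # -> about 0.60, the MR at t = 2 min in Example 3
```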
In terms of mass transfer, the diffusion equation (Equation 6.3.8) may be written to account for the convective flow of liquid and vapor as (Moreira et al., 1999):
$\frac{dc}{dt}-div[D\cdot \nabla(c)]=\dot{V}$
where $\frac{dc}{dt}$ = change in concentration of component (liquid, vapor, or oil) with time (kg mol m−3 s−1)
div[D·∇(c)] = diffusion term = $\frac{\partial}{\partial x}(D\frac{\partial c}{\partial x})+\frac{\partial}{\partial y}(D\frac{\partial c}{\partial y})+\frac{\partial}{\partial z}(D\frac{\partial c}{\partial z})$
D = diffusion coefficient (m2 s−1)
c = concentration of component (liquid, vapor, or oil) (kg mol m−3)
$\dot{V}$ = rate of convective flow of liquid (kg mol m−3 s−1)
When applied to each component, i.e., liquid, vapor, or oil, Equation 6.3.8 is used to quantify the removal of liquid water and vapor from the product and the absorption of oil by the product during the frying process, i.e., as a function of time. The rate of water removal is estimated using the diffusion coefficient, while the change in the concentration of the component (liquid, vapor, or oil) is estimated using experimental data as a function of frying time (Yamsaengsung et al., 2008).
The heat and mass transfer equations allow calculation of the energy consumption, the heat required for heating of the cooking oil and removal of water from the product during the frying process, the amount of water that is being removed during frying, and, in many cases, the amount of product that would be obtained at the end of the frying period.
Material (Mass) Balance
Equations 6.3.4 and 6.3.8 describe heat and mass transfer during frying in three dimensions. Solving them requires advanced numerical methods, which are beyond the scope of this chapter. This section presents a model using simplified material balance (mass balance) equations, which accounts for the change in mass of each component during the process.
Equation 6.3.9 states the concept of the material, or mass, balance in words:
$\{ \text{accumulation within the system} \}=\{\text{input through system boundaries} \} - \{\text{output through system boundaries} \} + \{\text{generation within the system} \} - \{\text{consumption within the system} \}$
Accumulation refers to a change in mass (positive or negative) within the system with respect to time, whereas the input and output terms refer to flows across the system boundaries (Himmelblau, 1996). Considered over a finite time period, Equation 6.3.9 is a difference balance (consider, for example, the mass balance of water in Figure 6.3.4). When formulated for an instant of time, Equation 6.3.9 becomes a differential equation:
$\frac{dm_{H_{2}O,within\ system}}{dt}=\dot{m}_{H_{2}O,in} - \dot{m}_{H_{2}O,out1} - \dot{m}_{H_{2}O,out2}$
where mH2O is the mass of water and $\dot{m}$H2O is the mass flow rate of water (mass/time, kg/s). When evaluating a process that is under equilibrium, or steady-state, condition, the values of the variables within the system do not change with time, and the accumulation (change in mass within the system with respect to time) term in Equations 6.3.9 and 6.3.10 is zero.
To illustrate application of the mass balance, consider a frying operation to make potato chips (which are fried potato slices). For this example, 4 kg of peeled raw potato slices containing 83% water enter a fryer to make chips with 2% water and 30% oil. How many kg of water are evaporated from the product leaving the fryer, and how many kg of potato chips are produced in the process? The process is at steady state conditions.
The system is the fryer, and no accumulation, generation, or consumption occurs, since the process is steady state. Also, assume potatoes are made up of water and solids. The next step is to write the mass balance equations, total and for each component (% solids, % water), and solve for the unknowns.
The total mass balance of potato slices is:
in = out, as the time-dependent terms in Equation 6.3.10 are zero
R + oil (absorbed from fryer) = W + P
The total material entering the fryer is given as 4 kg peeled raw potato slices. Substituting R = 4 kg in the total mass balance, yields one equation with two unknowns:
4 kg + oil (absorbed from fryer) = W + P
Hence, a second equation is needed (basic material balances principle). The potatoes are composed of water and solids, hence, the terms in the total mass balance equation can be multiplied by the respective component percentages. The percent solids balance is:
4 kg (1 – 0.83) + 0 = W(0) + P(1 – 0.32)
0.68 = 0.68 P
P = 1 kg of potato chips produced
Percent water balance is:
4 kg (0.83) + 0 = W(1) + P(0.02)
3.32 = W + 0.02 P
3.32 = W + (0.02) 1 kg = W + 0.02
W = 3.3 kg of water removed
Percent oil balance:
4 kg (0) + oil = 3.3 kg(0) + 1 kg(0.3)
Thus, the mass of oil absorbed by the potatoes during frying is 0.3 kg.
The total material balance is 4 kg + 0.3 kg = 3.3 kg + 1 kg, or 4.3 = 4.3, which confirms the conservation of mass law.
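The worked balance above can be verified in a few lines. This Python sketch (illustrative names, not textbook code) solves the solids, water, and oil balances for the 4 kg example:

```python
# Quick check of the worked potato-chip mass balance above.
R = 4.0                              # kg raw slices entering the fryer, 83% water
solids_in = R * (1 - 0.83)           # dry matter is conserved
P = solids_in / (1 - 0.02 - 0.30)    # chips: 2% water, 30% oil, 68% solids
W = R * 0.83 - P * 0.02              # water evaporated
oil = P * 0.30                       # oil absorbed from the fryer
print(P, W, oil)                     # -> 1.0 kg chips, 3.3 kg water, 0.3 kg oil
```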
Applications
The design and construction of a productive, cost-effective frying system depend on product properties and characteristics, including size, shape, thickness, thermal conductivity, specific heat capacity, and composition, as well as on desired product attributes such as color, texture (hardness, crispiness), smell, and flavor; all of these affect the frying time and temperature. Using the mass balance and heat transfer equations, the rate of water removal and a drying curve can be developed to predict moisture loss and product weight, which, in turn, determine the frying time and production capacity.
Industrial Continuous Deep-Fat Frying Systems
A typical continuous frying system consists of a fryer, a heat exchanger, an oil tank with a cooling system, a control panel, and a filter. Another common type of frying system consists of a combustor, an oil heat exchanger, and a fryer. In the combustor, a gas burner burns natural gas with fresh air and foul gas (vapors from the fryer) to produce combustion gases that flow through a heat exchanger to heat the frying oil that is re-circulated through the fryer. In many cases, exhaust gas recirculation is used to increase turbulence, provide combustor surface cooling, and reduce emissions. To reduce emissions and smells, vapors generated from the frying process are directed from the fryer to the combustor where they are incinerated (Wu et al., 2013).
Table 6.3.5 provides examples of mechanical specifications of some continuous frying systems and their throughputs, while Table 6.3.6 gives pricing and frying time of some systems.
Table $5$: Mechanical specifications of continuous frying systems and their throughputs (provided by Tsung Hsing Food Machinery).
FRYIN-302-E: dimensions 3450 × 2350 × 1950 mm; effective frying space 2600 × 820 × 700 mm; 3 hp; energy consumption 232.44 kWh; edible oil capacity 440 L; production capacity 480 kg/hr (peanuts), 300 kg/hr (snacks)
FRYIN-402-E: dimensions 4950 × 2350 × 1950 mm; effective frying space 4100 × 820 × 700 mm; 5 hp; energy consumption 348.67 kWh; edible oil capacity 650 L; production capacity 650 kg/hr (peanuts), 550 kg/hr (snacks)
FRYIN-602-E: dimensions 6450 × 2450 × 2070 mm; effective frying space 5890 × 820 × 700 mm; 7.5 hp; energy consumption 464.89 kWh; edible oil capacity 850 L; production capacity not listed
Vacuum Frying Systems
The vacuum frying process, first developed in the 1960s and early 1970s, provides several benefits compared to the traditional (atmospheric) frying process. It is now widely used to process fruits in Asian countries. The principle behind vacuum frying is that using reduced pressure (below 101.3 kPa), the boiling point of water can be reduced from 100°C to as low as 45°C and the cooking oil temperature can also be reduced to less than 100°C (compared to atmospheric frying at 170°–190°C). As a result, products with high sugar content, such as ripened fruits, can be fried without burning and caramelization. Common methods to improve vacuum-frying of fruits include immersion in high sugar solutions and osmotic dehydration (Fito, 1994; Shyu and Hwang, 2001).
Table $6$: Prices of typical automatic continuous frying systems and their throughputs (provided by Grace Food Machinery).
Automatic continuous fried nuts processing line: products nuts, almond, cashew, peanuts, etc.; fuel source diesel, LPG, gas, biofuel; capacity N/A; frying time N/A; cost US$143,145
Automatic snacks frying machine: products chips, meat, chicken, peanut; power electric/10 kW; capacity 100–1,000 kg/hr; frying time N/A; cost US$14,314
Continuous banana chips fryer machine: products chips, biscuit, donut, French fries, potato chips, banana chips, snacks; power electric/25 kW; capacity 100–500 kg/hr; frying time 2–20 min; cost US$25,766
Snack food fryer: products snack foods; power electric/25 kW; capacity 100–500 kg/hr; frying time 1–10 min; cost US$143,145
Figure 6.3.6 shows a schematic of a vacuum fryer (Yamsaengsung, 2014). In addition to the features shown in Figure 6.3.6, a vacuum fryer must have a centrifuge to remove the oil content from the surface before the vacuum is broken. Table 6.3.7 provides a comparison of process operating conditions and applications for traditional and vacuum frying systems. The main components in the vacuum frying process are the vacuum fryer (8–10 mm thick wall and fryer cap), the condenser (for condensing water vapor), the water collector, and the vacuum pump (either rotary or liquid water ring type). However, the main drawbacks of vacuum frying are the high cost involved in purchasing the equipment and the more complicated process management. With the addition of a vacuum pump, a water condensing system, and a much thicker fryer wall (8–10 mm vs. 1–2 mm), the cost of the vacuum fryer can be double the cost of an atmospheric fryer. The benefits of vacuum frying include the ability to:
• fry high sugar content products such as fresh fruits;
• maintain original color, while adding intense flavor to the final product;
• reduce the amount of oil absorbed into the final product to as low as 1–3% depending on the machine (Garayo & Moreira, 2002); and
• extend the life of cooking oil by reducing its exposure to oxygen (lipid oxidation) and using a lower cooking oil temperature.
Table $7$: Process settings and product characteristics for atmospheric vs. vacuum frying.[a]
Temperature: atmospheric 160°–190°C; vacuum 90°–140°C
Pressure (absolute): atmospheric 101.3 kPa; vacuum 3.115 kPa
Convective heat transfer coefficient (h): atmospheric 80–120 and 200–300 W m−2 K−1 [b]; vacuum 217–258 W m−2 K−1 (120°–140°C) [c] and 700–1600 W m−2 K−1 (140°C) [d]
Oil absorption: atmospheric 25–40% w.b.; vacuum 1–10% w.b.
Oil usage life: atmospheric susceptible to lipid oxidation; vacuum minimal lipid oxidation, longer usage life
High sugar content foods: atmospheric not possible; vacuum possible
Major composition: atmospheric high starch/high protein; vacuum high starch, high protein, high sugar
Taste/texture: atmospheric bland to salty/crispy; vacuum intense flavor/crispy
Color: atmospheric intensity of color decreases; vacuum intensity of color is maintained
Investment cost: atmospheric low; vacuum high
[a] Yamsaengsung (2014); [b] Farinu and Baik (2007) at 160°–190°C; [c] Pandey and Moreira (2012) at 120°–140°C; [d] Mir-Bel et al. (2012) at 140°C.
Moreover, even though Garayo and Moreira (2002) found that potato chips fried under vacuum conditions (3.115 kPa and 144°C) had more volume shrinkage, their texture was slightly softer and they were lighter in color than potato chips fried under atmospheric conditions (165°C). Yamsaengsung and Rungsee (2003) also found that, compared to atmospheric frying, vacuum fried potato chips retained a lighter color and had a more intense flavor.
Examples
Example $1$
Example 1: Material balances
Problem:
Determine how many kg of raw potato slices containing 80% water must enter a batch fryer to make 500 kg of potato chips (fried potato slices) with 2% water and 30% oil. Also calculate how many kg of water are evaporated and leave the fryer. The process is at steady state conditions.
Solution
Draw a schematic of the problem, enter the given data and identify the unknowns. Then, write down the material balance equations and solve for the unknowns. The system is the fryer, and no accumulation, generation, or consumption occurs, that is, it is at steady state. Also assume potatoes are made up of water and solids.
The total material balance in the fryer is:
in = out
R + oil (in fryer) = W + P
R + oil = W + 500 kg
We have one equation and two unknowns, so we need another equation.
Percent solids balance:
R(1 – 0.80) + 0 = W(0) + 500 kg(1 – 0.32)
R = 1700 kg
R = 1700 kg of raw potato slices containing 80% water are required
Percent water balance is:
1700 kg(0.80) + 0 = W(1) + 500 kg(0.02)
1360 = W + 10
W = 1350 kg of water removed from potato slices in the frying process
Finally, determine the amount of oil in the fried chips by conducting a percent oil balance:
1700 kg(0) + oil = 1350 kg(0) + 500 kg(0.3)
oil = 150 kg
Total material balance: 1700 kg + 150 kg = 1350 kg + 500 kg
This example illustrates the use of material balances using food initial and final composition data to calculate the amount of raw material entering the fryer to manufacture a product with specific composition characteristics.
Example $2$
Example 2: Moisture content of fried chips
Problem:
During a batch frying process, the weight of 50 kg of raw, fresh peeled potato slices decreases to 15 kg after frying. Each fried chip contains 30% oil content. If the initial moisture content of the fresh peeled potato is 80% (w.b.), determine the final moisture content (% w.b.) of the fried chips.
Solution
Draw a schematic of the problem, enter the given data and identify the unknowns. Then, write down the material balance equations and solve for the unknown moisture content of the fried chips, using the definition of percent wet basis. The system is the fryer, and no accumulation, generation, or consumption occurs (steady state). Also assume potatoes are made up of water and solids.
The total material balance in the fryer is:
in = out
R + oil absorbed = W + P
From the problem statement, mass of P = 15 kg with 30% oil. Using percent oil content, calculate how much oil (in kg) the chips contain:
0.30 × 15 kg = 4.5 kg of oil in fried chips
To figure the percent solids balance, note that in material balance applications in food engineering, the dry matter is constant. Hence, solids in = solids out. From the raw materials with 80% water (and 20% solids), the dry matter (% solids) is:
50 kg × 0.2 = 10 kg
Calculate how much water (in kg) is in the chips:
chips = water + dry matter + oil
15 kg = kg H2O + 10 kg + 4.5 kg
kg H2O = 15 kg – 10 kg – 4.5 kg = 0.5 kg
Then, on a wet basis, the moisture content of the chips is (from Equation 6.3.5):
$\%\ w.b. = \frac{\text{kg of water}}{\text{total weight of product}} \times 100= \frac{0.5\ kg}{15\ kg}\times100=3.33\%$
This example shows how the final moisture content of the fried product can be calculated. Its importance lies in the effect of moisture on the crispiness of fried foods. Typically, for crispy snacks, the moisture content should be less than 5% w.b., so this fried product is considered crispy.
Example $3$
Example 3: Drying curve
Problem:
The following data represent the change in weight of vacuum fried bananas (70% w.b. moisture content) as a function of frying time. Also assume the moisture content in d.b. at equilibrium (at the end of frying) is 0.02 kg water/kg dry matter. Neglect the weight of the oil absorbed (% oil content = 0.0%) and plot the drying curve as a function of the frying time (moisture ratio vs. time).
Time (min): Weight (kg); Solids (kg); H2O (kg)
0: 10; 3; 7
1: 8.4; 3; 5.4
2: 7.2; 3; 4.2
3: 6.3; 3; 3.3
4: 5.4; 3; 2.4
6: 4.6; 3; 1.6
8: 4.1; 3; 1.1
10: 3.65; 3; 0.65
12: 3.4; 3; 0.4
14: 3.35; 3; 0.35
16: 3.3; 3; 0.3
Solution
Calculate the moisture ratio using Equations 6.3.6 and 6.3.7:
$\%\ d.b. = \frac{\text{kg of water}}{\text{kg of dried food}} \cdot 100$ (Equation $6$)
$MR = \frac{MC_{t}-MC_{e}}{MC_{o}-MC_{e}}$ (Equation $7$)
For the banana with 70% w.b. moisture, the percent solids content is 1 – 0.7 = 0.3 or 30%. For an initial weight of 10 kg, the solids content is 0.3 × 10 kg = 3 kg (a constant throughout the process).
Determine the moisture content in dry basis (expressed here as a decimal, kg water/kg dry matter) using Equation 6.3.6 at each time t:
d.b. = (total weight – weight of solids)/(weight of solids)
For example,
At t = 0 min: MC0 = (10 kg – 3 kg)/(3 kg) = 2.33
At t = 1 min: MC1 = (8.4 kg – 3 kg)/(3 kg) = 1.80
At t = 2 min: MC2 = (7.2 kg – 3 kg)/(3 kg) = 1.40
Repeat the procedure for all times. Next, determine MR using Equation 6.3.7. For example,
MRt = (MCt − MCe)/(MCo − MCe)
At t = 0 min: MR0 = (2.33 – 0.02)/(2.33 – 0.02) = 1.00
At t = 1 min: MR1 = (1.80 – 0.02)/(2.33 – 0.02) = 0.77
At t = 2 min: MR2 = (1.40 – 0.02)/(2.33 – 0.02) = 0.60
Repeat the procedure for all times, as in the table below.
Time (min): Weight (kg); Solids (kg); H2O (kg); MC (d.b.); MR
0: 10; 3; 7; 2.33; 1.00
1: 8.4; 3; 5.4; 1.80; 0.77
2: 7.2; 3; 4.2; 1.40; 0.60
3: 6.3; 3; 3.3; 1.10; 0.47
4: 5.4; 3; 2.4; 0.80; 0.34
6: 4.6; 3; 1.6; 0.53; 0.22
8: 4.1; 3; 1.1; 0.37; 0.15
10: 3.65; 3; 0.65; 0.22; 0.09
12: 3.4; 3; 0.4; 0.13; 0.05
14: 3.35; 3; 0.35; 0.12; 0.04
16: 3.3; 3; 0.3; 0.10; 0.03
Example $4$
Example 4: Production throughput of a continuous fryer
Problem:
At a potato chip factory, 1,000 kg of potatoes are fed into a continuous vacuum fryer per hour.
• (a) Assuming the initial moisture content of peeled potatoes is 80% (w.b.) and the final moisture content is 2% (w.b.), how much water is removed per hour?
• (b) How much oil (in kg) is added to the potato chips per hour? Potato chips have 2% w.b. moisture content and 30% oil.
• (c) How many bags can be produced in one day if each bag contains 50 g of potato chips and the factory operates for 8 hours per day?
Solution
Draw a schematic of the problem, enter the given data and identify the unknowns. Then, write down the material balance equations and solve for the unknowns. The raw potatoes contain no oil; oil enters only from the fryer. This is a continuous process (material/time).
(a) Determine the amount of water removed from the raw potatoes, R, per hour.
Water in initial product:
1,000 kg/hr × 0.8 = 800 kg/hr of water
Percent solids balance:
(1000 kg/hr)(1 – 0.8) = W(0) + (1 – 0.02 – 0.3)P
(1000 kg/hr)(0.2) = (0.68)P
200 kg/hr = 0.68 P
P = 294.12 kg/hr of potato chips
Water in final product:
294.12 kg/hr × 0.02 = 5.88 kg/hr of water in potato chips
Percent water balance:
(1000 kg/hr)(0.8) = W(1) + 5.88 kg/hr
W = 800 kg/hr – 5.88 kg/hr
W = 794.12 kg of water removed from raw potatoes in one hour
(b) Determine the amount of oil per hour added to the potato chips in the fryer. Chips have 30% oil. Thus,
$0.30 = \frac{\text{oil in chips (kg/hr)}}{\text{total weight of chips (kg/hr)}}$
$0.30 = \frac{\text{oil in chips (kg/hr)}}{294.12\text{ (kg/hr)}}$
oil in chips = (0.30) × 294.12 kg/hr
oil in chips = 88.24 kg/hr
(c) Determine the number of bags per 8-hr day:
amount of chips per day = (294.12 kg/hr) × 8 hr/day = 2,352.96 kg chips/day × (1000 g/kg) = 2,352,960 g/day
number of bags per day = (2,352,960 g/day) × (1 bag/50 g) = 47,059 bags per day
This example illustrates how the engineer uses knowledge of material balances and food composition to determine production throughput of a continuous fryer.
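Example 4's arithmetic can also be reproduced programmatically. In this Python sketch (variable names are illustrative), the solids balance gives the chip output, from which water removal, oil uptake, and the daily bag count follow:

```python
# Reproducing Example 4: continuous fryer throughput.
feed = 1000.0                          # kg potatoes per hour, 80% w.b.
P = feed * 0.20 / (1 - 0.02 - 0.30)    # kg chips/hr (2% water, 30% oil, 68% solids)
W = feed * 0.80 - P * 0.02             # kg water removed per hour
oil = 0.30 * P                         # kg oil added per hour
bags = round(P * 8 * 1000 / 50)        # 50 g bags over an 8 hr day
print(P, W, oil, bags)  # -> about 294.12, 794.12, 88.24 kg/hr and 47,059 bags
```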
Example $5$
Example 5: Energy requirement for an industrial fryer
Problem:
For an industrial fryer with a production capacity of 5,000 kg of corn chips per hour, how much energy is required to reduce the water content of the pre-baked masa (the product that will be fried to make the chips) from 50% w.b. to 4% w.b.? If the frying time takes 60 seconds at a frying temperature of 180°C, calculate:
(a) initial feed rate of the chips,
(b) total amount of water removed,
(c) amount of heat required to evaporate the water,
(d) total energy required for the frying process, and
(e) power required for the frying system.
Assume that the oil has already been pre-heated and that the temperature of the oil does not drop during frying, but heat is needed to increase the corn chip feed temperature from 25°C to the frying temperature. The specific heat capacity (Cp) of the cooking oil is 2.0 kJ kg−1 °C−1, the specific heats of the corn chips before and after frying are 2.9 kJ kg−1 °C−1 and 1.2 kJ kg−1 °C−1, respectively, and the latent heat of evaporation of water at 100°C is 2,256 kJ/kg. (Hint: 1 kW = 1 kJ/s and water evaporates at 100°C.)
Solution
Calculate the initial mass of the pre-baked masa using Equation 6.3.5:
$\%\ w.b. = (\frac{\text{water content (kg)}}{\text{total weight of product (kg)}}) \cdot 100$ (Equation $5$)
For the final product (4% w.b.):
0.04 = (kg of water/5,000 kg)
weight of water = 200 kg
weight of solid = 5,000 kg – 200 kg = 4,800 kg
For the feed, MC = 50% w.b. Find the mass of water using Equation 6.3.5:
0.50 = W/(W + 4,800 kg)
0.5 × (W + 4,800 kg) = W
W = (0.5 × 4,800 kg)/(1 – 0.5)
weight of water = 4,800 kg
Initial feed rate of corn chips = weight of water + weight of solid = 9,600 kg/hr
Calculate the amount of water removed as initial – final:
Initial weight of water = 4,800 kg
Final weight of water = 200 kg
Water removed = 4,800 – 200 = 4,600 kg
Calculate Q required to remove (evaporate) the water:
Q = water removed × latent heat of evaporation
Q = 4,600 kg × 2,256 kJ/kg
Q = 10,377,600 kJ
Calculate sensible heat (25°–100°C and 100°–180°C) using Equation 6.3.3:
$Q=\Delta H = \dot{m}C_{p}\Delta T=\dot{m}C_{p}(T_{1}-T_{2})$ (Equation $3$)
Q = (9,600 kg) × (2.9 kJ/kg °C) × (100°C – 25°C)
Q = 2,088,000 kJ
Likewise for 100°–180°C,
Q = (5,000 kg) × (1.2 kJ/kg °C) × (180°C – 100°C)
Q = 480,000 kJ
Calculate total Q as the sum of both sensible and latent heat:
Qtotal = Qsensible + Qlatent
Qtotal = 2,088,000 kJ + 480,000 kJ + 10,377,600 kJ = 12,945,600 kJ
Calculate power as total heat per unit time:
Power = Q/t
t = 60 seconds
Q = 12,945,600 kJ
Power = 215,760 kW
Example $6$
Example 6: Water removal rate during frying
Problem:
12 kilograms of fresh bananas were purchased at 0.50 US$/kg. After 4 kg of peel were removed, the bananas were sliced 2 mm thick and vacuum fried at 110°C for 45 minutes. This process reduced the moisture content from 75% w.b. to 2.5% w.b. Determine the rate of water removed from the fresh peeled bananas (kg/min) during the frying process. Assume bananas are composed of water and solids, and that the amount of oil absorbed is negligible (oil content = 0%).
Solution
Draw a schematic of the problem, enter the given data, and identify the unknowns. Then, write down the material balance equations and solve for the unknowns.
Total material balance: in = out
Remember that oil is zero in this example. Therefore, B = W + P
• Solids balance: (0.25)B = (0)W + (0.975)P
• Water balance: (0.75)B = (1.00)W + (0.025)P
With B = 12 kg − 4 kg = 8 kg of peeled bananas, find P from the solids balance:
P = (0.25)(8 kg)/(0.975)
P = 2.05 kg of fried bananas
From the total material balance, find the amount of water removed during the process:
W = B – P = 8 kg – 2.05 kg
W = 5.95 kg of water removed from the fresh peeled bananas
The rate of water removal is the amount of water removed per unit time. Since the frying time was 45 minutes, the rate of water removal is:
rate of water removal = (5.95 kg)/(45 min)
rate of water removal = 0.132 kg water/min
Why is this important? In a vacuum frying process, water is removed from the product during frying, trapped, and separated before it reaches the vacuum pump in order to maintain low pressure inside the frying system. The volume of the water trap and the capacity of the vacuum pump are needed in order to select the most efficient vacuum pump and heat exchanger for cooling the water vapor from the fryer. For example, if 20 kg of potato slices with an initial moisture content of 60% w.b. are fried, it can be assumed that almost 12 kg of water (approximately 12 L) must be removed and collected in a water trap. If the water is not condensed and collected, it will enter the pump and cause a decrease in vacuum pressure.
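As a check on Example 6, the Python sketch below (illustrative, not from the chapter) repeats the solids balance and the water-removal-rate calculation:

```python
# Reproducing Example 6: peeled bananas, 75% -> 2.5% w.b., no oil absorbed.
B = 12.0 - 4.0               # kg peeled bananas (12 kg fresh minus 4 kg peel)
P = 0.25 * B / (1 - 0.025)   # kg fried bananas from the solids balance
W = B - P                    # kg water removed
print(P, W, W / 45.0)        # -> about 2.05 kg, 5.95 kg, 0.132 kg water/min
```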
Image Credits
Figure 1. Yamsaengsung, R. (CC By 4.0). (2014). General schematic of the heat and mass transfer processes occurring during frying of a food product.
Figure 2. Yamsaengsung, R. (CC By 4.0). (2002). Schematic of non-hygroscopic and hygroscopic material.
Figure 3. Yamsaengsung, R. (CC By 4.0). (2020). Typical drying curve (frying curve) showing the constant rate period and the falling rate period.
Figure 4. Yamsaengsung, R. (CC By 4.0). (2020). Process for a simple mass balance consisting of air and water entering and leaving a system.
Figure 5. Yamsaengsung, R. (CC By 4.0). (2020). A schematic of the problem with the given data and the unknowns.
Figure 6. Yamsaengsung, R. (CC By 4.0). (2014). Schematic of a vacuum frying operation.
Example 1. Yamsaengsung, R. (CC By 4.0). (2020). Example 1: Material Balances.
Example 2. Yamsaengsung, R. (CC By 4.0). (2020). Example 2: Moisture content of fried chips.
Example 3. Yamsaengsung, R. (CC By 4.0). (2020). Example 3: Drying curve.
Example 4. Yamsaengsung, R. (CC By 4.0). (2020). Example 4: Production throughput of a continuous fryer.
Example 6. Yamsaengsung, R. (CC By 4.0). (2020). Example 6: Water removal rate during frying.
References
Alvis, A., Velez, C., Rada-Mendoza, M., Villamiel, M., & Villada, H.S. (2009). Heat transfer coefficient during deep-fat frying. Food Control 20(4), 321-325.
AOCS. (2017). Official methods and recommended practices of the AOCS. Urbana, IL: American Oil Chemists’ Society.
Choe, E., & Min, D.B. (2007). Chemistry of deep-fat frying oils. J. Food Sci. 72(5), R77-R86.
Engineering ToolBox. (2003). Convective Heat Transfer. Available at: https://www.engineeringtoolbox.com/convective-heat-transfer-d_430.html.
EUFIC. (2014). Facts on fats: The basics. The European Food Information Council. Retrieved from https://www.eufic.org/en/whats-in-food/article/facts-on-fats-the-basics.
Farkas, B. E., Singh, R. P., and Rumsey, T. R. (1996). Modeling heat and mass transfer in immersion frying, part I: Model development. J. Food Eng. 29(1996), 211-226.
Farinu, A., & Baik, O. D. (2007). Heat transfer coefficients during deep fat frying of sweetpotato: Effects of product size and oil temperature. J. Food Res. 40(8), 989-994.
Figura, L. O., & Teixeira, A. A. (2007). Food physics: Physical properties—Measurement and applications. Germany: Springer.
Fito, P. (1994). Modelling of vacuum osmotic dehydration of foods. J. Food Eng. 22(1-4), 313-328.
Garayo, J., & Moreira, R. G. (2002). Vacuum frying of potato chips. J. Food Eng. 55(2), 181-191. https://doi.org/10.1016/S0260-8774(02)00062-6.
Goswami, G., Bora, R., Rathore, M.S. (2015). Oxidation of cooking oils due to repeated frying and human health. Int. J. Sci. Technol. Manag. 4(1), 495-501.
Guillaume C., De Alzaa, F., & Ravetti, L. (2018). Evaluation of chemical and physical changes in different commercial oils during heating. Acta Sci. Nutri. Health 26(2018), 2-11.
Himmelblau, D. M. (1996). Basic principles in chemical engineering. London: Prentice Hall Int.
Krokida, M. K., Zogzas, N. P., & Maroulis, Z. B. (2002). Heat transfer coefficient in food processing: Compilation of literature data. Int. J. Food Prop., 5(2), 435-450. https://doi.org/10.1081/JFP-120005796.
Mir-Bel, J., Oria, R., & Salvador, M. L. (2012). Influence of temperature on heat transfer coefficient during moderate vacuum deep-fat frying. J. Food Eng. 113(2012), 167-176.
Moreira, R. G., & Barrufet, N. A. (1998). A new approach to describe oil absorption in fried foods: A simulation study. J. Food Eng. 35:1-22.
Moreira, R. G., Castell-Perez, M. E., & Barrufet, M. A. (1999). Deep-fat frying: Fundamentals and applications. Gaithersburg, MD: Aspen Publishers.
Pandey, A., & Moreira, R. G. (2012). Batch vacuum frying system analysis for potato chips. J. Food Process. Eng. 35(2012), 863-873.
Rani, A. K. S., Reddy, S. Y., & Chetana, R. (2010). Quality changes in trans and trans free fats/oils and products during frying. Eur. Food Res. Technol. 230(6), 803–811.
Shyu, S., & Hwang, L. S. (2001). Effect of processing conditions on the quality of vacuum fried apple chips. Food Res. Int. 34(2001), 133-142.
Sikorski, Z. E., & Kolakowska, A. (2002). Chemical and functional properties of food lipids. United Kingdom: CRC Press.
Wu, H., Tassou, S. A., Karayiammis, T. G., & Jouhara, H. (2013). Analysis and simulation of continuous frying processes. Appl. Thermal Eng. 53(2), 332-339. https://doi.org/10.1016/j.applthermaleng.2012.04.023.
Yamsaengsung, R., & Moreira, R. G. (2002). Modeling the transport phenomena and structural changes during deep fat frying Part I: model development. J. Food Eng. 53(2002), 1-10.
Yamsaengsung, R., Rungsee, C., & Prasertsit, K. (2008). Modeling the heat and mass transfer processes during the vacuum frying of potato chips. Songklanakarin J. Sci. Technol., 31(1), 109-115.
Yamsaengsung, R., & Rungsee, C. (2003). Vacuum frying of fruits and vegetables. Proc. 3rd Ann. Conf. Thai Chem. Eng. Appl. Chem., Nakhon Nayok, Thailand, B-11.
Yamsaengsung, R. (2014). Food product development: Fundamentals for innovations. Hat Yai: Apple Art Printing House.
textbooks/eng/Biological_Engineering/Introduction_to_Biosystems_Engineering_(Holden_et_al.)/06%3A_Processing_Systems/6.03%3A_Deep_Fat_Frying_of_Food.txt
Rosana G. Moreira
Department of Biological and Agricultural Engineering
Texas A&M University
College Station, Texas, USA
Key Terms
Radiation sources, Absorbed dose, Depth-dose distribution, Ionizing radiation effect, Food safety applications, Kinetics of pathogen inactivation
Introduction
Food irradiation is a non-thermal technology often called “cold pasteurization” or “irradiation pasteurization” because it does not increase the temperature of the food during treatment (Cleland, 2005). The process is achieved by treating food products with ionizing radiation. Other common non-thermal processing technologies include high hydrostatic pressure, high-intensity pulsed electric fields, ultraviolet (UV) light, and cold plasma.
Irradiation technology has been in use for over 70 years. It offers several potential benefits, including inactivation of common foodborne bacteria and inhibition of enzymatic processes (such as those that cause sprouting and ripening); destruction of insects and parasites; sterilization of spices and herbs; and shelf life extension. The irradiation treatment does not introduce any toxicological, microbiological, sensory, or nutritional changes to the food products (packaged and unpackaged) beyond those brought about by conventional food processing techniques such as heating (vitamin degradation) and freezing (texture degradation) (Morehouse and Komolprasert, 2004). It is the only commercially available decontamination technology to treat fresh and fresh-cut fruits and vegetables, which do not undergo heat treatments such as pasteurization or sterilization. This is critical because many recent foodborne illness outbreaks and product recalls have been associated with fresh produce due to contamination with Listeria, Salmonella, and Escherichia coli. Approximately 76 million foodborne illnesses, 325,000 hospitalizations, and 5,000 deaths occur in the United States annually, and 1.6 million illnesses, 4,000 hospitalizations, and 105 deaths occur in Canada (Health Canada, 2016). During 2018, foodborne outbreaks caused 25,606 infections, 5,893 hospitalizations, and 120 deaths in the US (CDC, 2018).
Irradiation of foods has been approved by the World Health Organization (WHO) and the Food and Agriculture Organization (FAO) of the United Nations. At least 50 countries use this technology today for treatment of over 60 products, with spices and condiments being the largest application. In 2004, Australia became the first country to use irradiation for phytosanitary purposes, i.e., treatment of plants to control pests and plant diseases for export purposes (IAEA, 2015; Eustice, 2017). About ten countries have established bilateral agreements with the United States for trade in irradiated fresh fruits and vegetables. More than 18,000 tons of agricultural products are irradiated for this purpose around the world. The US has a strong commercial food irradiation program, with approximately 120,000 tons of food irradiated annually. Mexico, Brazil, and Canada are also big producers of irradiated products. China is the largest producer of irradiated foods in Asia, with more than 200,000 tons of food irradiated in 2010 (Eustice, 2017) followed by India, Thailand, Pakistan, Malaysia, the Philippines, and South Korea. Egypt and South Africa use irradiation technology to treat spices and dried foods. Russia, Costa Rica, and Uruguay have obtained approval for irradiation treatment of foods. Eleven European Union countries utilize food irradiation but the rest have been reluctant to adopt the technology due to consumers’ misconceptions, such as thinking that irradiated foods are radioactive with damaged DNA or “dirty” (Maherani et al., 2016).
Food irradiation can be accomplished using different radiation sources, such as gamma rays, X-rays, and electron beams. Although the basic engineering principles apply to all the different sources of radiation energy, this chapter focuses on high-energy electron beams and X-rays to demonstrate the concepts because they are a more environmentally acceptable technology than the cobalt-60-based technology (gamma rays).
Outcomes
After reading this chapter, you should be able to:
• Explain the interaction of ionizing radiation with food products
• Quantify the effect of ionizing radiation on microorganisms and determine the dose required to inactivate pathogens in foods
• Select the best irradiation approach for different food product characteristics
Concepts
Food irradiation involves using controlled amounts of ionizing radiation, with enough energy to ionize the atoms or molecules in the food, to meet the desired processing goal. Radiation is the emission of energy in the form of waves or photons traveling through space or through the food material (electromagnetic energy); in other words, it is a mode of energy transfer. The heat transfer analog is the radiant energy emitted by the Sun.
The type of radiation used in food processing is limited to high-energy gamma rays, X-rays, and accelerated electrons or electron beams (e-beams). Gamma and X-rays form part of the electromagnetic spectrum (like radio waves, microwaves, ultraviolet, and visible light rays), occurring in the short-wavelength (10⁻⁸ to 10⁻¹⁵ m), high-frequency (10¹⁶ to 10²³ Hz), high-energy (10² to 10⁹ eV) region of the spectrum. High-energy electrons produced by electron accelerators in the form of e-beams can have as much as 10 MeV (1 MeV = 10⁶ eV) of energy (Browne, 2013).
The wavelength, or distance between peaks, λ, of the radiation energy is defined as the ratio of the speed of light in a vacuum, c, to the frequency, f, as follows:
$\lambda = \frac{c}{f}$
where λ = wavelength (m)
c = speed of light in a vacuum = 3.0 × 10⁸ (m/s)
f = radiation frequency (1/s)
From a quantum-mechanical perspective, electromagnetic radiation may be considered to be composed of photons (groups or packets of energy that are quantified). Therefore, each photon has a specific value of energy, E, that can be calculated as follows:
$E_{p}=hf$
where Ep = energy of a photon (J)
h = Planck’s constant (6.626 × 10−34 J·s)
f = radiation frequency (1/s)
The frequency, energy, and wavelength of different types of electromagnetic radiation, calculated using Equations 6.4.1 and 6.4.2, are given in Table 6.4.1. The higher the frequency of the electromagnetic wave, the higher the energy and the shorter the wavelength. Table 6.4.1 illustrates why X-rays and gamma rays are used in food irradiation processes: their energy is high. It also explains why exposure to UV light would only cause sunburn (lower-energy electromagnetic radiation) while exposure to X-rays could be lethal (high-energy electromagnetic radiation).
Table 6.4.1: Frequency, energy level, and wavelength of the different types of electromagnetic radiation calculated using Equations 6.4.1 and 6.4.2.

Type of Electromagnetic Radiation | Frequency, f (Hz) | Energy, E (eV) | Wavelength, λ (cm)
Gamma rays | 10²⁰ | 4.140 × 10⁵ | 3.0 × 10⁻¹⁰
X-rays | 10¹⁸ | 4.140 × 10² | 3.0 × 10⁻⁸
UV light | 10¹⁶ | 4.140 | 3.0 × 10⁻⁶
Infrared light | 10¹⁴ | 0.414 | 3.0 × 10⁻⁴
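As a quick check on Table 6.4.1, the short Python script below (an illustration added here, not part of the original chapter) evaluates Equations 6.4.1 and 6.4.2 for the representative frequencies listed in the table, using the constants given in the text.

```python
# Reproduce Table 6.4.1 from Equations 6.4.1 (wavelength) and 6.4.2 (photon energy).
H = 6.626e-34     # Planck's constant (J·s)
C = 3.0e8         # speed of light in a vacuum (m/s)
EV = 1.60218e-19  # joules per electronvolt

for name, f in [("Gamma rays", 1e20), ("X-rays", 1e18),
                ("UV light", 1e16), ("Infrared light", 1e14)]:
    wavelength_cm = (C / f) * 100.0  # Eq. 6.4.1, converted from m to cm
    energy_eV = H * f / EV           # Eq. 6.4.2, converted from J to eV
    print(f"{name}: E = {energy_eV:.3e} eV, lambda = {wavelength_cm:.1e} cm")
```

Running the loop returns, for example, E ≈ 4.14 × 10⁵ eV and λ = 3.0 × 10⁻¹⁰ cm for gamma rays, matching the tabulated values.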
Radiation Sources and Their Interactions with Matter
Gamma rays are emitted by radioactive isotopes, with cobalt-60 (⁶⁰Co) the most commonly used in food processing applications. X-ray machines with a maximum energy of 7.5 MeV and electron accelerators with a maximum energy of 10 MeV are approved by WHO worldwide because the energy from these radiation sources is too low to induce radioactivity in the food product (Attix, 1986). Likewise, although gamma rays are high-energy radiation sources, the doses approved for irradiation of foods do not induce any radioactivity in products. Table 6.4.2 compares the characteristics of the three radiation sources.
Table 6.4.2: Different types of radiation sources and their characteristics (Attix, 1986; Lagunas-Solar, 1995; Miller, 2005).

Characteristic | E-beams | X-rays | Cobalt-60 (gamma rays)
Energy (MeV) | 10 | 5 or 7.5 | 1.17 and 1.33
Penetration depth (cm) | < 10 | 100 | 70
Irradiation on demand (machine can be turned off) | yes | yes | no
Relative throughput efficiency | high | medium | low
Dose uniformity ratio (Dmax/Dmin) | low | high | medium
Administration process | authorization required[a] | authorization required[a] | authorization required[b]
Treatment time | seconds | minutes | hours
Average dose rate (kGy/s) | ~3 | 0.00001 | 0.000061
Applications | low-density products can be treated in cartons | low/medium-density products can be treated in cartons or pallets | low/medium-density products can be treated in cartons or pallets

[a] Standard registration required
[b] Complex and difficult process with extensive training
Differences in the nature of the types of ionizing radiation result in different capabilities to penetrate matter (Table 6.4.2). Gamma-ray and X-ray radiation can penetrate distances of a meter or more into the product, depending on the product density, whereas electron beams (e-beams), even with energy as high as 10 MeV, can penetrate only several centimeters. E-beam accelerators range from 1.35 MeV to 10 MeV (Miller, 2005). All types of radiation become less intense the further the distance from the radioactive material, as the particles or rays become more spread out (USNRC, 2018).
Absorbed Dose
The SI unit of absorbed dose is the gray (Gy), where 1 Gy is equivalent to the absorption of 1 J per kg of material. The absorbed dose at any point in the target food is expressed as the mean energy, dE, imparted by ionizing radiation to the matter in an infinitesimal volume, dv, at that point, divided by the mass, dm, of that volume:

$D=\frac{dE}{dm}$

where D = dose (Gy)

dE = energy imparted to the infinitesimal volume dv (J)

dm = mass of the infinitesimal volume dv (kg)

D represents the energy per unit mass that remains in the target material at a particular point to cause any effects due to the radiation energy (Attix, 1986).

In 1928, the roentgen was conceived as a unit of exposure, to characterize the radiation incident on an absorbing material without regard to the character of the absorber. It was defined as the amount of radiation that produces one electrostatic unit of ions, either positive or negative, per cubic centimeter of air at standard temperature and pressure (STP). In modern units, 1 roentgen equals 2.58 × 10⁻⁴ coulomb/kg air (Attix, 1986). In 1953, the International Commission on Radiation Units and Measurements (ICRU) recommended the rad (“radiation absorbed dose”) as the unit of absorbed dose; 1 Gy equals 100 rad. Absorbed dose requirements for various treatments of food products range from 0.1 kGy to 30 kGy (Table 6.4.3). Table 6.4.4 shows the maximum allowable dose for different products in the United States and worldwide.

Table 6.4.3: Absorbed dose requirement for different food treatments (IAEA, 2002).

Treatment | Absorbed Dose (kGy)[a]
Sprout inhibition | 0.1–0.2
Insect disinfestation | 0.3–0.5
Parasite control | 0.3–0.5
Delay of ripening | 0.5–1
Fungi control | 0.5–3
Pathogen inactivation | 0.5–3
Pasteurization of spices | 10–30
Sterilization (pathogen inactivation) | 15–30

[a] 1 kGy = 10³ Gy
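The unit relationships above lend themselves to simple conversion helpers. The following sketch (added for illustration, not from the source) encodes only the definitions just stated: 1 Gy = 1 J/kg = 100 rad, 1 kGy = 10³ Gy, and 1 roentgen = 2.58 × 10⁻⁴ C/kg of air.

```python
def kgy_to_rad(dose_kgy):
    """Convert an absorbed dose in kGy to rad (1 Gy = 100 rad)."""
    return dose_kgy * 1e3 * 100.0

def roentgen_to_c_per_kg(exposure_r):
    """Convert an exposure in roentgen to coulomb per kg of air."""
    return exposure_r * 2.58e-4

print(kgy_to_rad(1.0))            # 1 kGy -> 100000 rad
print(roentgen_to_c_per_kg(1.0))  # 1 R   -> 2.58e-4 C/kg
```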
Table 6.4.4: Maximum allowable dose for different foods in the United States and worldwide (WHO, 1981; ICGFI, 1999; Miller, 2005).

Purpose | Maximum Dose (kGy) | Product
Disinfestation | 1.0 | any food
Sprout inhibition | 0.1–0.2 | onions, potatoes, garlic
Insect disinfestation | 0.3–0.5 | fresh dried fruits, cereals and pulses, dried fish and meat
Parasite control | 0.3–0.5 | fresh pork
Delay of ripening | 0.5–1.0 | fruits and vegetables
Pathogen inactivation | 3.0 | poultry, shell eggs
Pathogen inactivation | 1.0 | fresh fruits and vegetables
Pathogen inactivation | 4.5–7.0 | fresh and frozen beef and pork
Pathogen inactivation | 1.0–3.0 | fresh and frozen seafood
Shelf life extension | 1.0–3.0 | fruits, mushrooms, leafy greens
Pasteurization | 10–30 | spices
Commercial sterilization | 30–50 | meat, poultry, seafood, prepared foods, hospital foods, pet foods
The dose rate, or amount of energy emitted per unit time (dD/dt or $\frac{d}{dt}(\frac{dE}{dm})$), determines the processing times and, hence, the throughput of the irradiator (i.e., the quantity of products treated per time unit). In those terms, 10 MeV electrons can produce higher throughput (higher dose rate) compared to X-rays and gamma rays (Table 6.4.2). Similar to absorbed dose, dose rates are average values.
Depth-Dose Distribution and Electron Energy
The energy deposition profile for a 10 MeV e-beam incident onto the surface of a water absorber has a characteristic shape (Figure 6.4.1). The y-axis is the energy deposited per incident electron, Eab (MeV-cm²/g). This parameter is proportional to the absorbed dose, D. The x-axis is the penetration depth (also called mass thickness), dp, in units of areal density, g/cm², which is the thickness in cm multiplied by the volume density in g/cm³:

$d_{p} = d\rho$

where dp = penetration depth (mass thickness) of the irradiated material (g/cm²)

d = thickness of irradiated material (cm)

ρ = density of irradiated material (g/cm³)
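Equation 6.4.4 is used repeatedly in the examples later in this chapter; a minimal Python helper (added for illustration, with an assumed 5 cm water-equivalent product as input) is:

```python
def mass_thickness(thickness_cm, density_g_cm3):
    """Eq. 6.4.4: mass thickness d_p (g/cm2) = thickness (cm) x density (g/cm3)."""
    return thickness_cm * density_g_cm3

print(mass_thickness(5.0, 1.0))  # a 5 cm thick water-equivalent product -> 5.0 g/cm2
```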
The penetration depth of ionizing radiation is defined as the depth at which extrapolation of the tail of the depth-dose curve meets the x-axis (approximately 6 g/cm² in Figure 6.4.1). Figure 6.4.1 also shows how the dose, D, tends to increase with increasing depth within the product to about the midpoint of the electron penetration range, after which it falls quickly to low values.
Because the electron energy deposition is not constant, there is a location in the product that will receive a minimum dose, Dmin, and another position that will receive the maximum dose, Dmax. A useful parameter for irradiator designers and engineers is the dose uniformity ratio (DUR), defined as the ratio of maximum to minimum absorbed dose:
$DUR = \frac{D_{max}}{D_{min}}$
A DUR close to 1.0 represents uniform dose distribution in the sample (Miller, 2005; Moreira et al., 2012). However, values greater than 1.0 are common in commercial applications and many food products can tolerate a higher DUR, of 2 or even 3 (IAEA, 2002).
The absorbed dose, D, at a particular depth, d, can be calculated as the product of the energy deposited per electron, the current density, and the irradiation time (Miller, 2005):

$D(d) = E_{ab}I''_{A}t$

where D = dose (MeV/g) (1 Gy = 6.24 × 10¹² MeV/kg)

Eab = energy deposited per incident electron (MeV-cm²/g)

$I''_{A}$ = current density (A/cm²)

t = irradiation time (s)
For a product with thickness, x, the energy represented by the dashed area in Figure 6.4.1 is the useful energy absorbed in the product. The maximum efficiency occurs when the product depth is such that the back surface of the target product receives the same dose as the top (entrance) surface. For instance, using Figure 6.4.1 and assuming only energy penetration through the thickness of the material, a target with an entrance value of 1.85 MeV-cm²/g and the optimum depth of 3.8 g/cm² represents an effective absorbed energy of about 7 MeV per electron (= 1.85 × 3.8). Therefore, using 10 MeV e-beams, the maximum utilization efficiency is 70% (Miller, 2005).
The depth in g/cm2 at which the maximum throughput efficiency occurs for one-sided irradiation can be calculated as (Miller, 2005):
$\text{Depth}_{\text{optimum}}=d_{opt}=0.4\times E - 0.2$
where E is the maximum absorbed energy (MeV).
Equation 6.4.7 provides a useful measure of the electron penetration power of the irradiator. The penetration of high-energy e-beams in irradiated materials increases linearly with the incident energy. The electron range (penetration) also depends on the atomic composition of the irradiated material. Materials with higher electron contents (electrons per unit mass) will have higher absorbed doses near the entrance surface, but lower electron ranges (penetration). For instance, because of its lack of neutrons, hydrogen has twice as many atomic electrons per unit mass as any other element. This means that materials with higher hydrogen contents, such as water (H2O) and many food products, will have higher surface doses and shorter electron penetration than other materials (Becker et al., 1979).
In general, dose-penetration depth curves, such as the one represented by Figure 6.4.1, show an initially marked increase (buildup) of energy deposition near the surface of the irradiated product. This buildup region is a phenomenon that happens in materials of low atomic number due to the progressive cascading of secondary electrons by collisional energy losses (IAEA, 2002). This is then followed by an exponential decay of dose to greater depths. The approximate value of the buildup depth for gamma rays (1.25 MeV) is 0.5 cm of water, while the depth for 10 MeV e-beams is 10.0 cm of water (IAEA, 2002).
Figure 6.4.2 shows the point of maximum dose (in kGy) and the absorption of energy for both electrons and photons (X-rays and gamma rays). The penetration depth of 10 MeV e-beams is limited as they deposit their energy over a short depth, with a maximum located after the entrance point. In the case of gamma rays, the energy is deposited over a longer distance, which results in a uniform dose distribution within the treated product. The penetration capabilities of both 7.5 MeV X-rays and gamma rays are comparable, but the higher energy of the X-rays results in a slightly more uniform distribution of the doses within the treated product. The configuration of the product strongly influences dose distribution within the product (IAEA, 2002).
Figure 6.4.3 shows the depth-dose distributions in water-equivalent products (such as fruits and vegetables) for beam energies ranging from 1 to 10 MeV, in terms of relative dose in percentage. For instance, for the 10 MeV curve, if the entrance (surface) dose of 1 kGy is 100%, the relative dose at a depth of 1 g/cm² is approximately 110% of the entrance dose, or 1.1 kGy; at the same depth, the dose is 0 kGy for a 1 MeV system and 1.40 kGy for a 5 MeV system.
The shapes of the depth-dose curves shown in Figure 6.4.3 can be better defined in terms of the penetration depth within the product (or product thickness) (Figure 6.4.4). The parameters defined in Figure 6.4.4, rmax, ropt, r50, and r33, are useful to determine the maximum product thickness that can be irradiated using a particular type of electron beam (1, 5, or 10 MeV). Additionally, the deposited energy can be determined at a specific depth. For instance, E50 at a depth of r50 = 4.53 cm in water for a 10 MeV irradiation system is,
$E_{mean}=E_{50}=C\,r_{50}=2.33\ \text{MeV/cm}\times 4.53\ \text{cm}=10.55\ \text{MeV}$
where C is the rate of energy loss for e-beam treatment in water and water-like tissues = 2.33 MeV/cm (Strydom et al., 2005).
From Figure 6.4.4 with rmax equal to 2.8 cm, the maximum dose is 130% or 1.3 kGy, and the entrance dose equals the exit dose at ropt equal to 4 cm. This result means that if the irradiated product has a thickness between 2.8 and 4 cm, the DUR is constant with a value of 1.3 (DUR = 1.3 kGy/1.0 kGy). Such a DUR value suggests the irradiation process provides good uniformity in the dose distributed throughout the product thickness. If the process yields a DUR of 2 with a minimum dose of 0.67 kGy (DUR = 1.35 kGy/0.67 kGy), the maximum useful thickness of the irradiated product will be 4.5 cm or r50, the depth at which the dose is half the maximum dose.
Note that r50 > ropt. Hence, if the product thickness exceeds ropt, the DUR increases. As DUR approaches infinity at a depth of 6.5 cm for 10 MeV e-beam (Figure 6.4.4), any part of the product beyond that depth will remain unexposed to the irradiation treatment. Therefore, the maximum processable product thickness for this irradiation system will be 6.5 cm. This result highlights a critical issue when using electron beam accelerators to pasteurize or sterilize food products, which need to be exposed in their entirety to the radiation energy.
The engineer has the option to apply the e-beams using the single e-beam configuration (which exposes the target food only on the top or bottom surface) or the double-beam configuration (which exposes the target food at both the top and bottom surfaces). Figure 6.4.5 illustrates the difference between one-sided and two-sided irradiation systems using 10 MeV electrons in water when DUR is 1.35.
Figure 6.4.5 shows that when irradiating from the top or bottom only, the maximum processable thickness will be close to 4 cm (shaded areas, Figure 6.4.6), while the double-beam system increases the maximum processable thickness to about 8.3 cm (shaded area, Figure 6.4.7). Therefore, to improve the penetration capability of a 10 MeV e-beam treatment, two 10 MeV accelerators, one irradiating from the top and the other from the bottom of a conveyor system, are frequently used in commercial applications (IAEA, 2002).
The depth at which the maximum throughput efficiency occurs for double-sided irradiation can be calculated as (Miller, 2005):
$Depth_{optimum} = d_{opt} = 0.9\times E-0.4$
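A minimal Python sketch of Equations 6.4.7 and 6.4.9 (added for illustration) makes the single- versus double-sided comparison easy to reproduce; the 10 MeV input mirrors the value used in the text, and Example 4 below exercises the same two formulas.

```python
def d_opt_single(E):
    """Eq. 6.4.7: optimum depth (g/cm2) for one-sided irradiation, E in MeV."""
    return 0.4 * E - 0.2

def d_opt_double(E):
    """Eq. 6.4.9: optimum depth (g/cm2) for two-sided irradiation, E in MeV."""
    return 0.9 * E - 0.4

print(d_opt_single(10))  # 3.8 g/cm2, the optimum depth quoted in the text
print(d_opt_double(10))  # 8.6 g/cm2, close to the ~8.3 cm read from Figure 6.4.7
```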
Measurement of Absorbed Dose
The effectiveness of ionizing radiation in food processing applications depends on proper delivery of the absorbed dose. To design the correct food irradiation process, the operator should be able to (1) measure the absorbed dose delivered to the food product using reliable dosimetry methods; (2) determine the dose distribution patterns in the product package; and (3) control the routine radiation process (through process control procedures). Dosimeters are used for quality and process control in radiation research and commercial processing.
Reliable techniques for measuring dose, called dosimetry, are crucial for ensuring the integrity of the irradiation process. Incorrect dosimetry can result in an ineffective food irradiation process. Dosimetry systems include physical or chemical dosimeters and measuring instrumentation, such as spectrophotometers and electron paramagnetic resonance (EPR) spectrometers. A dosimeter is a device capable of providing a reading that is a measure of the absorbed dose, D, deposited in its sensitive volume, V, by ionizing radiation. The measuring instrument must be well characterized so that it gives reproducible and accurate results (Attix, 1986).
There are four categories of dosimetry systems according to their intrinsic accuracy and usage (IAEA, 2002):
• Primary standards (ion chambers, calorimeters) measure the absolute absorbed dose (i.e., they do not need to be calibrated) in SI units.
• Reference standards (alanine, Fricke, and other chemical dosimeters) have a high metrological quality and can be used as reference standards to calibrate other dosimeters. They need to be calibrated against a primary standard, generally through the use of a transfer standard dosimeter.
• Transfer standards (thermoluminescent dosimeters, TLD) are used for transferring dose information from a national standards laboratory to an irradiation facility to establish traceability to that standards laboratory. They should be used under conditions specified by the issuing laboratory, and they need to be calibrated.
• Routine dosimeters (process monitoring, radiochromic films) are used in radiation processing facilities for dose mapping and for process monitoring for quality control. They must be calibrated frequently against reference or transfer dosimeters.
Food Irradiation and Food Safety Applications
Effect of Ionizing Radiation on Pathogens
Pathogen inactivation is the end effect of food irradiation. Exposure to ionizing radiation has two main effects on pathogenic microorganisms. First, the radiation energy can directly break strands (single or double) of the microorganism’s DNA. The second effect occurs indirectly when the energy causes radiolysis of water to form very reactive hydrogen (H•) and hydroxyl (•OH) radicals. These radicals can recombine to produce even more reactive species such as hydroperoxyl radicals (HO2•), hydrogen peroxide (H2O2), and ozone (O3), which have an important role in inactivating pathogens in foods. Although DNA is the main target, other bioactive molecules, such as enzymes, can likewise undergo inactivation due to radiation damage, which enhances the efficacy of the irradiation treatment.
Kinetics of Pathogen Inactivation
The traditional approach used in thermal processing calculations is to develop survival curves, which are semi-log plots of microorganism populations as a function of time at a given process temperature. This same approach can be used to develop radiation survival curves, i.e., plots of the log of the surviving microbial population as a function of applied dose. In this chapter, only first-order kinetics of microbial destruction are described.
Figure 6.4.8 is a survival curve obtained for inactivation of a pathogen in a food product due to exposure to radiation energy. Based on first-order kinetics (i.e., ignoring the initial non-linear section of the curve indicated by the arrow and the dashed line in Figure 6.4.8), the microbial inactivation rate is described by:
$\frac{dN}{dD}=-kN$
where N = microbial population at a particular dose (CFU/g or CFU/mL; CFU stands for colony forming units)
D = the applied dose (kGy)
k = exponential rate constant (1/kGy)
The radiation resistance of the target microorganism is usually reported as the radiation D value, D10, defined as the amount of radiation energy (kGy) required to inactivate 90% (one log reduction) of the specific microorganism (Thayer et al., 1990). Integrating Equation 6.4.10 yields:

$N=N_{0}e^{-kD}$
where N0 = initial microbial population (CFU/g or CFU/mL)
Based on Figure 6.4.8 and Equation 6.4.11, the negative inverse of the slope of the semi-log survival curve is the D10 value; it is equivalent to the D-value used in thermal process calculations, except that the thermal D-value has units of time because it is obtained from the slope of population change versus process time. Setting N/N0 = 0.1 and D = D10 in Equation 6.4.11 gives the relationship between the D10 value and the rate constant:

$k=\frac{2.303}{D_{10}}$
The D10 value varies with the target pathogen, the type and condition of the food (whole, shredded, peeled, cut, frozen, etc.), and the atmosphere in which it is packed (e.g., vacuum packaging), as well as pH, moisture, and temperature (Niemira, 2007; Olaimat and Holley, 2012; Moreira et al., 2012). For instance, the D10 values for Salmonella spp. and Listeria spp. in fresh produce range from 0.16 to 0.54 kGy, while Escherichia coli is slightly more resistant to irradiation treatment (sometimes up to 1 kGy) (Fan, 2012; Rajkowski et al., 2003). When tomatoes are irradiated, the D10 values for Escherichia coli O157:H7, Salmonella spp., and Listeria monocytogenes are around 0.39, 0.56, and 0.66 kGy, respectively (Mahmoud et al., 2010). In commercial applications, the rule of thumb is to design an irradiation treatment for a five log, or 5D10, reduction in the population of the target pathogen.
Applications
The goal of a food irradiation process is to deliver the minimum effective radiation dose to all portions of the product. Too high a dose (or energy) in any region of the target product could lead to wasted energy and deterioration of product quality.
To design a food irradiation process, the absorbed dose in the material of interest must be specified because different materials have different radiation absorption properties. In the case of food products, the material of interest is water because most foods behave essentially as water regardless of their water content. Dose requirements and maximum allowable doses should be used for specific applications (Tables 6.4.3 and 6.4.4).
Cost estimates for food irradiation facilities include the capital cost of equipment, installation and shielding, and material handling and engineering, plus variable costs including electricity, maintenance, and labor. The approximate cost of an e-beam accelerator facility operating 2000 hours per year is between 2 and 5 million US dollars and has remained fairly steady (Morrison, 1989; Miller, 2005; University of Wisconsin, 2019).
Technology Selection
The selection of the right technology for a particular food irradiation application depends on many factors, including food product characteristics and processing requirements (Miller, 2005). Figure 6.4.9 shows the steps required to choose a food irradiation approach.
The first step is to define the product characteristics. What is the main goal of the process? What is the product state, i.e., frozen, unpackaged, etc.? What is the product’s density, shape, and mass flow rate going through the accelerator? The second step specifies the process requirements, including the product thickness and the acceptable DUR (Equation 6.4.5). The final step is to select the appropriate radiation technology based on the product characteristics and process requirements. Selection includes determining the best technology (e-beams versus X-rays versus gamma rays), the size of the e-beam or X-rays accelerator(s), and, in the case of e-beams, whether single- or double-beam treatment will be more effective.
A simplified flow diagram provides guidelines to follow in selecting the right technology for food irradiation (Figure 6.4.10); a rough sketch of part of this logic is given in the code below. The engineer must first determine whether the product can be effectively irradiated at all, based on maximum to minimum dose ratios and energy efficiency concepts. The penetration depth depends on the product mass thickness (g/cm²), which is based on the product and/or package dimensions and density (Equation 6.4.4). For food safety treatments, the DUR is based on the minimum dose required to reduce the population of a certain pathogen (i.e., the D10 value, Equation 6.4.12) and the maximum dose allowed by local regulation or tolerated by the product without quality degradation. As indicated in Figure 6.4.10, in general, the product will not be suitable for irradiation treatment when its mass thickness is greater than 50 g/cm²; in addition, the DUR must be less than 3.
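The sketch below reconstructs only the branches of Figure 6.4.10 that are quoted in the text and in Example 7 (the 50 g/cm² and 3.8 g/cm² thresholds, DUR < 3, and MMR < 1.5); the full flow chart contains conditions not captured here, so this is an illustration rather than a complete implementation.

```python
def select_technology(mass_thickness, mmr):
    """mass_thickness in g/cm2; mmr is the acceptable Dmax/Dmin ratio (DUR)."""
    if mass_thickness > 50 or mmr >= 3:
        return "not suitable for irradiation"
    if mass_thickness > 3.8 and mmr < 1.5:
        return "X-rays"
    if mass_thickness <= 3.8:
        return "e-beam (single- or double-sided)"
    return "evaluate remaining branches of Figure 6.4.10"

print(select_technology(8.5, 1.25))  # beef patty in Example 7 -> X-rays
print(select_technology(3.2, 1.4))   # tomato in Example 7 -> e-beam
```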
Finally, the engineer must select the product handling systems to transport the food product in and out of the e-beam and X-ray irradiators via conveyors. Orientation of the irradiators is an important consideration, since e-beams are directed vertically at the product while the higher-penetrating X-rays allow for horizontal irradiation of products. The dose rate is set by varying the speed of the conveyors. The engineer must also determine whether absorbers are needed to reduce the entrance dose; provide refrigeration of the facility, if needed, since many food products are perishable; include shielding of the facility (X-rays require thicker walls than e-beam processing); and provide for ozone removal (ozone is a by-product of irradiation from ionization of oxygen in the air) (Miller, 2005). Prior to entering the irradiation system, products are inspected in staging areas, where they are palletized and loaded into containers to be transported on conveyors through the irradiators. Irradiated products are then loaded into transportation vehicles or stored in refrigerated chambers for distribution to retailers.
The speed, v, in cm/s, of the conveyor transporting the food through an e-beam scan facility is determined by (Miller, 2005):
$v = \frac{1.85\times10^{6}I_{a}}{wD_{sf}}$
where Ia = average current (A), an e-beam accelerator configuration parameter
w = scan width (cm), an e-beam accelerator configuration parameter (see Figure 6.4.11)
Dsf = the front surface dose (kGy), defined as the dose delivered at a depth d into the food (Figure 6.4.11); the target dose
The conveyor speed is directly related to the throughput as:
$v = \frac{dm/dt}{A_{d}w}$
where dm/dt = throughput or amount of product per time (g/s)
Ad = areal density (g/cm²), obtained from Equation 6.4.15:
$A_{d}=\rho d$
where ρ = food density (g/cm3)
d = thickness (or depth) of food (cm)
Equations 6.4.13 and 6.4.14 show that for a system with fixed average current and scan width, the faster the speed of the conveyor, the more product is processed in the facility and the lower the dose it receives. Typical conveyor speeds range between 5 and 10 m/minute.
The total mass of product running through the conveyor belt is calculated as:
$m=A_{d}A_{c}$
where m = mass of food (g)

Ad = areal density (g/cm²) from Equation 6.4.15

Ac = cross-sectional area of food or package (cm²)
The throughput requirements of electron beam facilities (dm/dt) are estimated based on the beam power, the minimum required dose, and irradiation mode (e.g., e-beam vs. X-rays) as follows (Miller, 2005):
$\frac{dm}{dt}=\frac{\eta P}{D}$
where η = throughput efficiency, which is 0.025 to 0.035 at 5 MeV and 0.04 to 0.05 at 7.5 MeV for X-ray irradiation, and 0.4 to 0.5 for e-beam mode (Miller, 2005)
P = machine power (kW)
D = minimum dose requirement (kGy), which ranges from 250 Gy for disinfestation to 6–10 kGy for preservation of freshness for spices
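The following sketch (added for illustration, not from the source) ties Equations 6.4.13 through 6.4.17 together; the numbers in the usage lines are taken from Examples 9 and 10 later in this chapter.

```python
def conveyor_speed(I_a, w, D_sf):
    """Eq. 6.4.13: speed (cm/s) from average beam current (A), scan width
    (cm), and front surface dose (kGy)."""
    return 1.85e6 * I_a / (w * D_sf)

def throughput(eta, P, D):
    """Eq. 6.4.17: throughput (kg/s) from efficiency, beam power (kW),
    and minimum dose requirement (kGy)."""
    return eta * P / D

def mass_per_container(A_d, A_c):
    """Eq. 6.4.16: mass (g) from areal density (g/cm2) and cross-sectional
    area (cm2)."""
    return A_d * A_c

print(conveyor_speed(1e-3, 120, 1.5))      # ~10.28 cm/s (Example 9)
print(throughput(0.5, 12, 0.26))           # ~23.1 kg/s (Example 10)
print(mass_per_container(7, 7432) / 1000)  # ~52 kg per box (Example 10)
```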
Examples
Example $1$
Example 1: Interaction of ionizing radiation with matter
Problem:
If the incident current density at the surface of the water absorber in Figure 6.4.1 is 10⁻⁶ A/cm² and the energy deposited per incident electron is 1.85 MeV-cm²/g, determine the absorbed dose in kGy after 1 second.
Solution
Using Equation 6.4.6:
$D(d)=E_{ab}I''_{A}t$ (Equation 6.4.6)

$D(d)=(1.85\ \text{MeV}\frac{cm^{2}}{g})\times (10^{-6}\frac{A}{cm^{2}})\times 1\ s$

with 1 MeV = 10⁶ eV:

$D(d)=(1.85\times10^{6}\ \text{eV}\ \frac{cm^{2}}{g})\times (10^{-6}\frac{A}{cm^{2}})\times 1\ s$

In units of energy, 1 eV (electronvolt) equals 1.60218 × 10⁻¹⁹ J, and 1 A·s of beam current carries 1 C of charge, i.e., 1/(1.60218 × 10⁻¹⁹) electrons:

$D(d)=(1.85\times10^{6}\ \text{eV}\frac{cm^{2}}{g})\times(10^{-6}\frac{A}{cm^{2}})\times 1\ s\times(\frac{1\ C}{1\ A\cdot s})\times(\frac{1.6022\times10^{-19}\ J}{1\ eV})\times(\frac{1\ \text{electron}}{1.6022\times10^{-19}\ C})$

Finally, the dose in kGy is:

$D(d)=1.85\ \frac{J}{g}=1.85\ \frac{kJ}{kg}=1.85\ kGy$
The absorbed dose after 1 second is 1.85 kGy.
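The unit handling in Example 1 can be automated. This short script (an added illustration) performs the same conversion chain explicitly: current density times time gives charge per area, dividing by the electron charge gives electrons per area, and multiplying by Eab (with MeV converted to J) gives energy per mass.

```python
E_CHARGE = 1.60218e-19  # coulombs per electron
MEV_TO_J = 1.60218e-13  # joules per MeV

def dose_kgy(E_ab, I_A, t):
    """Absorbed dose (kGy) from deposited energy per electron (MeV-cm2/g),
    current density (A/cm2), and irradiation time (s)."""
    electrons_per_cm2 = I_A * t / E_CHARGE
    return E_ab * electrons_per_cm2 * MEV_TO_J  # J/g == kJ/kg == kGy

print(dose_kgy(1.85, 1e-6, 1.0))  # 1.85 kGy, matching Example 1
```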
Example $2$
Example 2: Calculation of dose uniformity ratio (DUR)
Problem:
Figure 6.4.1 shows that the absorbed dose increases with depth up to 2.75 g/cm² inside the irradiated water absorber. (a) Find the dose uniformity ratio (DUR). (b) Comment on the changes (if any) to this parameter as a function of depth in the irradiated target.
Solution
(a) Based on Figure 6.4.1 and using Equation 6.4.5, calculate the DUR:

DUR = Dmax/Dmin = 2.5/1.85 = 1.35

This DUR value is within the acceptable range for dose uniformity in commercial irradiator systems (close to 1.0).

(b) Based on Figure 6.4.1, the DUR remains constant (= 1.35) up to a depth of 3.8 g/cm². Beyond this depth, the minimum dose decreases, which increases the DUR. This is clearly shown in Figure 6.4.1: the dose increases with increasing depth within the product and then decreases.
Example $3$
Example 3: Product thickness for one sided e-beam irradiation
Problem:
Determine the maximum allowable product thickness for one-sided e-beam irradiation with 10 MeV electrons if a dose uniformity ratio of 3 is acceptable.
Solution
From Figure 6.4.4 and using Equation 6.4.5, determine the depth in cm for DUR = 3
DUR = Dmax/Dmin
Dmax = 130% or 1.3 kGy (Figure 6.4.4) and Dmin = 1.3/3 = 0.43 kGy or 43% relative dose
Again, from Figure 6.4.4, the depth value is 4.8 cm = r33.
Thus, the maximum allowable product thickness will be 4.8 cm and the exit dose equals a third of the maximum dose.
Example $4$
Example 4: Efficiency of single-sided vs. double-sided irradiation treatment
Problem:
Determine the depth at the maximum throughput efficiency for single-sided and double-sided 10 MeV irradiation of water (5 cm thick) when the energy absorbed is (a) 1.50 MeV-cm²/g, (b) 2.22 MeV-cm²/g, and (c) 2.40 MeV-cm²/g.
Solution
Select the appropriate equation and calculate the depth in g/cm².

For single-sided irradiation use Equation 6.4.7:

$d_{opt}=0.4\times E - 0.2$

(a) 1.50 MeV-cm²/g:

$d_{opt}=0.4\times (1.50) - 0.2 = 0.40 \text{ g/cm}^{2}$

(b) 2.22 MeV-cm²/g:

$d_{opt}=0.4\times (2.22) - 0.2 = 0.68 \text{ g/cm}^{2}$

(c) 2.40 MeV-cm²/g:

$d_{opt}=0.4\times (2.40) - 0.2 = 0.76 \text{ g/cm}^{2}$

For double-sided irradiation use Equation 6.4.9:

$d_{opt}=0.9\times E - 0.4$

(a) 1.50 MeV-cm²/g:

$d_{opt}=0.9\times (1.50) - 0.4 = 0.95 \text{ g/cm}^{2}$

(b) 2.22 MeV-cm²/g:

$d_{opt}=0.9\times (2.22) - 0.4 = 1.60 \text{ g/cm}^{2}$

(c) 2.40 MeV-cm²/g:

$d_{opt}=0.9\times (2.40) - 0.4 = 1.76 \text{ g/cm}^{2}$

Energy Absorbed (MeV-cm²/g) | dopt (g/cm²), Single-sided | dopt (g/cm²), Double-sided
1.50 | 0.40 | 0.95
2.22 | 0.68 | 1.60
2.40 | 0.76 | 1.76
The results demonstrate that the double-beam configuration achieves greater penetration depth for the same absorbed energy, e.g., 0.95 g/cm² versus 0.40 g/cm² for electron beams depositing 1.50 MeV-cm²/g.
Example $5$
Example 5: Interaction of ionizing radiation with food product and effect on dose penetration depth
Problem:
Comparisons of 10 MeV electron depth-dose distributions in a bag of vacuum-packed baby spinach leaves (mass thickness = 5.1 g/cm2) and ground beef patty (mass thickness = 5.1 g/cm2) are shown in Figure 6.4.12. Determine the depth at which the maximum dose occurs for both food products and discuss your results.
Solution
Locate the depth (x-axis) at which dose (y-axis) is maximum. For the spinach, depth is 3.00 cm and for the ground beef patty, depth = 2.70 cm.
Both materials have very similar atomic composition and, therefore, absorb the incident energy very similarly.
Example $6$
Example 6: Calculation of radiation D10 value
Problem:
Romaine lettuce leaves were exposed to radiation doses up to 1.0 kGy using a 10 MeV e-beam irradiator to inactivate a pathogen. The population of survivors at each dose was measured right after irradiation (see table below).
Number of pathogens in romaine lettuce leaves as a function of radiation dose:

Dose (kGy) | Population (log CFU/g)
0 | 6.70
0.25 | 5.50
0.50 | 4.30
0.75 | 3.30
1.00 | 2.00
(a) Calculate the D10 value of the pathogen in the fresh produce and determine the dose level required for a 5-log reduction in the population of the pathogen. The data point for a dose of 0 kGy represents the non-irradiated produce.

(b) If the maximum dose approved for irradiation of fresh vegetables is close to 1 kGy, is the irradiation treatment suitable?
Solution
First, plot the logarithm of the population of survivors as a function of dose from the given data and determine the D10 value from the negative inverse of the slope of the fitted line (Figure 6.4.13). Using two points on the line:

$\text{Slope} = \frac{\log N_{1}-\log N_{2}}{D_{1}-D_{2}} = \frac{5-4}{0.375-0.591}=-\frac{1}{0.216}$

so D10 = 0.216 kGy.
Then, determine the dose required for a 5-log reduction in microbial population, i.e., 5D10, and check whether 5D10 is within the maximum approved dose of about 1.0 kGy. If it is, the process is suitable for treatment of the fresh produce; if 5D10 is well above 1.0 kGy, another process should be considered.

5D10 = 5 × 0.216 kGy = 1.08 kGy

This irradiation process would be suitable because the pathogen population in the romaine lettuce leaves will be reduced by 5 logs when exposed to a dose of approximately 1.0 kGy using 10 MeV electron beams.
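Example 6 can also be solved by fitting all five data points rather than reading two points off the plotted line. The least-squares sketch below (added for illustration) recovers essentially the same D10 and returns 5D10 ≈ 1.08 kGy, matching the hand calculation.

```python
# Fit a straight line to log10(N) versus dose; D10 is the negative
# inverse of the slope (Eq. 6.4.11 in semi-log form).
doses = [0.0, 0.25, 0.50, 0.75, 1.00]     # kGy
log_pop = [6.70, 5.50, 4.30, 3.30, 2.00]  # log10 CFU/g

n = len(doses)
mean_d = sum(doses) / n
mean_p = sum(log_pop) / n
slope = (sum((d - mean_d) * (p - mean_p) for d, p in zip(doses, log_pop))
         / sum((d - mean_d) ** 2 for d in doses))

D10 = -1.0 / slope
print(f"D10 = {D10:.3f} kGy, 5-log dose = {5 * D10:.2f} kGy")
```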
Example $7$
Example 7: Selection of best irradiation technology
Problem:
A 10 MeV e-beam and a 5 MeV X-ray accelerator are available for irradiating the following products. Select the best irradiation technology to treat each of the products. Assume a minimum dose of 1 kGy.
(a) Ground beef patty contaminated with Escherichia coli O157:H7, Dmax = 1.25 kGy (mass thickness = 8.5 g/cm²)

(b) Tomato contaminated with Listeria monocytogenes, Dmax = 1.4 kGy (mass thickness = 3.2 g/cm²)

(c) Romaine lettuce contaminated with Salmonella Poona, Dmax = 1.37 kGy (mass thickness = 4.1 g/cm²)
Solution
Use the given information and the flow chart (Figure 6.4.10) to determine whether e-beams or X-rays should be used for irradiation of the different products.
(a) DUR for beef patty (using Equation 6.4.5, DUR = Dmax/Dmin) = 1.25 kGy/1 kGy = 1.25 = MMR

Following Figure 6.4.10 with mass thickness d = 8.5 g/cm² and MMR = 1.25 leads to condition 4: d > 3.8 g/cm² and MMR < 1.5, and selection of X-rays as the appropriate technology for the beef patty.

(b) DUR for tomato sample (using Equation 6.4.5): 1.4 kGy/1 kGy = 1.4 = MMR

Following Figure 6.4.10 with mass thickness d = 3.2 g/cm² and MMR = 1.4 leads to condition 6 or 7: d < 3.8 g/cm², so single- or double-sided e-beam would be appropriate for the tomato sample.

(c) DUR for romaine lettuce (using Equation 6.4.5): 1.37 kGy/1 kGy = 1.37 = MMR

Following Figure 6.4.10 with mass thickness d = 4.1 g/cm² and MMR = 1.37 leads to condition 4: since d > 3.8 g/cm² and MMR < 1.5, select X-rays as the appropriate technology for the romaine lettuce.
Product | Criteria | Choice of Radiation Technology
Beef patty | d > 3.8 g/cm², MMR < 1.5 | X-rays
Tomato | d < 3.8 g/cm², MMR < 1.5 | E-beams
Romaine lettuce | d > 3.8 g/cm², MMR < 1.5 | X-rays
Example $8$
Example 8: Calculate the dose required for a 5-log reduction of pathogen population
Problem:
Calculate the dose required for a 5-log reduction of the pathogen for the three products from Example 6.4.7 using the following information. For each product, determine if the required dose is less than the maximum allowable dose for that product.
(a) Ground beef patty contaminated with Escherichia coli O157:H7 (D10 = 0.58 kGy)

(b) Tomato contaminated with Listeria monocytogenes (D10 = 0.22 kGy)

(c) Romaine lettuce contaminated with Salmonella Poona (D10 = 0.32 kGy)
Solution
Given the D10 value for each pathogen, calculate 5D10. The pathogen with the highest 5D10 value is the most resistant to irradiation and will require treatment at the highest dose.

Product | Pathogen | 5D10 (kGy)
Ground beef patty | Escherichia coli O157:H7 | 2.90
Tomato | Listeria monocytogenes | 1.10
Romaine lettuce | Salmonella Poona | 1.60
The E. coli in the beef patties will require a higher dose to achieve a 5-log inactivation level than the doses required to treat the two fresh produce items. The required treatment for the tomato samples falls within the acceptable dose level for fruits and vegetables (about 1 kGy). The Salmonella in the lettuce will require a slightly higher dose, but the U.S. Food and Drug Administration (FDA, 2018) allows up to 4 kGy for treatment of leafy greens. The maximum allowable dose for pathogen inactivation in fresh and frozen beef ranges from 4.5 to 7.0 kGy in different countries (Table 6.4.4).
Example $9$
Example 9: Calculation of conveyor speed in an e-beam system
Problem:
Calculate the conveyor speed required for a 1.5 kGy entrance dose (front surface dose) irradiation for a single-sided process using a 10 MeV, 1-mA beam with a scan width of 120 cm.
Solution
Calculate the conveyor speed using Equation 6.4.13:
$v=\frac{1.85\times10^{6}I_{a}}{wD_{sf}}$
The conveyor speed, v, with the given values of Dsf = 1.5 kGy, Ia = 10−3 A and w = 120 cm is:
$v=\frac{1.85\times10^{6}I_{a}}{wD_{sf}} = \frac{1.85\times10^{6}\times10^{-3}}{120\times1.5} = 10.28 \text{ cm/s}$
Conveyor speed varies according to product throughput. In this case, the conveyor must run at 10.28 cm/s (about 6 m/min) to ensure a 1.5 kGy entrance dose when treating the food with a 10 MeV e-beam accelerator in single-sided mode at the given current and scan width. The faster the conveyor speed, the lower the dose. For instance, if the required Dsf is 1 kGy, then the conveyor should run at 15.42 cm/s (9.25 m/min):
$v=\frac{1.85\times10^{6}I_{a}}{wD_{sf}} = \frac{1.85\times10^{6}\times10^{-3}}{120\times1} = 15.42 \text{ cm/s}$
Example $10$
Example 10: Calculation of throughput rate for an e-beam system
Problem:
Calculate the throughput rate for e-beam disinfestation of papaya (minimum required dose of 0.26 kGy) with an e-beam irradiation (one-sided mode) with 12 kW of power and throughput efficiency of 0.5.
Solution
(a) Calculate the throughput rate with P = 12 kW, D = 0.26 kGy, and η = 0.5.
From Equation 6.4.17:
$\frac{dm}{dt}=\frac{\eta P}{D}$
Then: $\frac{dm}{dt} [\frac{kg}{s}]=\frac{0.5\times12[kW]}{0.26[kGy]} = 23.1\text{ kg/s}$
(b) Assuming an areal density of 7 g/cm² and a scan width of 120 cm, calculate the conveyor speed, v.
Find v using Equation 6.4.14:
$v=\frac{dm/dt}{A_{d}w}$
with Ad = 7 g/cm², then:

$v=\frac{dm/dt}{A_{d}\times w} = \frac{23.1[\frac{kg}{s}]\times1000[\frac{g}{kg}]}{7[\frac{g}{cm^{2}}]\times 120[cm]} =27.5 \text{ cm/s}$
(c) If the product is arranged in cardboard boxes (Figure 6.4.11), which have a cross-sectional area of 7432 cm², calculate the total mass of food that should be placed in each box.
Find m using Equation 6.4.16:
$m=A_{d}A_{c}$
with Ad = 7 g/cm² and Ac = 7432 cm², then:
$m=A_{d}\times A_{c}=\frac{7[\frac{g}{cm^{2}}]\times7432[cm^{2}]}{1000[\frac{g}{kg}]}=52\ kg$
Disinfestation treatment of papaya (dose of 0.26 kGy) using a one-sided e-beam can thus be achieved when 52 kg of the food is placed in each box under the e-beam with the conveyor running at 27.5 cm/s.
Image Credits
Figure 1. Moreira, R. G. (CC By 4.0). (2020). Energy deposition profile for 10-MeV electrons in a water absorber (adapted from Miller, 2005).
Figure 2. Moreira, R. G. (CC By 4.0). (2020). Dose-depth penetration for different radiation sources (X-rays, electron beams and gamma rays) (adapted from IAEA, 2015).
Figure 3. Moreira, R. G. (CC By 4.0). (2020). Typical depth–dose curves for electrons of various energies in the range applicable to food processing operations (adapted from IAEA, 2002).
Figure 4. Moreira, R. G. (CC By 4.0). (2020). Depth–dose curve for 10 MeV electrons in water, where the entrance (surface) dose is 100% (adapted from IAEA, 2002).
Figure 5. Moreira, R. G. (CC By 4.0). (2020). Depth-dose distributions for 10 MeV electrons in water for single-sided and double-sided configurations (DUR = 1.35).
Figure 6. Moreira, R. G. (CC By 4.0). (2020). Maximum penetration thickness for top-only and bottom-only e-beam configurations using 10 MeV electrons in water (DUR = 1.35).
Figure 7. Moreira, R. G. (CC By 4.0). (2020). Maximum penetration thickness for double-sided e-beam irradiation using 10 MeV electrons in water (DUR = 1.35).
Figure 8. Moreira, R. G. (CC By 4.0). (2020). Typical survival curve showing first-order kinetics behavior.
Figure 9. Moreira, R. G. (CC By 4.0). (2020). Steps needed to select the right irradiation technology for a food processing application (adapted from Miller, 2005).
Figure 10. Moreira, R. G. (CC By 4.0). (2020). Decision flow diagram for selecting the correct irradiation approach (adapted from Miller, 2005). MMR is the acceptable range of maximum to minimum dose ratios (DUR).
Figure 11. Moreira, R. G. (CC By 4.0). (2020). Typical electron beam irradiation configuration.
Example 5. Moreira, R. G. (CC By 4.0). (2020). Example 5.
Example 6. Moreira, R. G. (CC By 4.0). (2020). Example 6.
References
Attix, F. H. (1986). Introduction to radiological physics and radiation dosimetry. New York, NY: Wiley Interscience Publ. https://doi.org/10.1002/9783527617135.
Becker, R. C., Bly, J. H., Cleland, M. R., & Farrell, J. P. (1979). Accelerator requirements for electron beam processing. Radiat. Phys. Chem., 14(3–6), 353–375. https://doi.org/10.1016/0146-5724(79)90075-X.
Browne, M. (2013). Physics for engineering and science (2nd. ed.). New York, NY: McGraw Hill/Schaum.
CDC. (2018). Preliminary incidence and trends of infections with pathogens transmitted commonly through food. Foodborne Diseases Active Surveillance Network, Morbidity and Mortality Weekly Report. 68(16), 369–373. CDC. Retrieved from https://www.cdc.gov/mmwr/volumes/68/wr/mm6816a2.htm?s_cid=mm6816a2_w.
Cleland, M. R. (2005). Course on small accelerators. 383–416 (CERN-2006-012). Zeegse, The Netherlands. https://doi.org/10.5170/CERN-2006-012.383.
Eustice, R. F. (2017). Global status and commercial applications of food irradiation. Ch. 20. In I. C. F. R. Ferreira, A. L. Antonio, & Cabo Verde, S. (Eds.), Food irradiation technologies. Concepts, applications and outcomes. Food chemistry, function and analysis No. 4. Cambridge, U. K.: Royal Society of Chemistry, Thomas Graham House.
Fan, X. (2012). Ionizing radiation. In V. M. Gomez-Lopez (Ed.), Decontamination of fresh and minimally processed produce. https://doi.org/10.1002/9781118229187.
FDA. (2018). Irradiation in the production, processing and handling of food. Code of Federal Regulations Title 21. 3, CFR179.26. Washington, DC: FDA. Retrieved from www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=179.26.
Health Canada. (2016). Technical summary. Health Canada’s safety evaluation of irradiation of fresh and frozen raw ground beef. Government of Canada, Health Canada. Retrieved from http://www.hc-sc.gc.ca/fn-an/securit/irridation/tech_sum_food_irradiation_aliment_som_tech-eng.php#toxicology.
IAEA. (2002). Dosimetry for food irradiation. Technical Reports Series No. 409. Vienna, Austria: International Atomic Energy Agency. Retrieved from https://www-pub.iaea.org/MTCD/Publications/PDF/TRS409_scr.pdf.
IAEA. (2015). Manual of good practice in food irradiation. Technical Reports Series No. 481. Vienna, Austria: International Atomic Energy Agency. Retrieved from https://www.iaea.org/publications/10801/manual-of-good-practice-in-food-irradiation.
ICGFI. (1999). Facts about food irradiation. Food and Environmental Protection Section, Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. Vienna, Austria: International Consultative Group on Food Irradiation.
Lagunas-Solar, M. C. (1995). Radiation processing of foods: An overview of scientific principles and current status. J. Food Protection, 58(2), 186–192. https://doi.org/10.4315/0362-028x-58.2.186.
Maherani, B., Hossain, F., Criado, P., Ben-Fadhel, Y., Salmieri, S., & Lacroix, M. (2016). World market development and consumer acceptance of irradiation technology. Foods, 5(4): 2–21. https://doi.org/10.3390/foods5040079.
Mahmoud, B. S. M., Bachman, G., & Linton, R. H. (2010). Inactivation of Escherichia coli O157:H7, Listeria monocytogenes, Salmonella enterica and Shigella flexneri on spinach leaves by X-ray. Food Microbiol., 27(1), 24–28. https://doi.org/10.1016/j.fm.2009.07.004.
Miller, R. B. (2005). Electronic irradiation of foods: An introduction to the technology. Food Eng. Series. New York, NY: Springer.
Morehouse, K. M., & Komolprasert, V. (2004). Irradiation of food and packaging: An overview. Ch. 1. In K. M. Morehouse, & V. Komolprasert (Eds.), Irradiation of food and packaging. American Chemical Society. ACS Symposium Series. https://doi.org/10.1021/bk-2004-0875.
Moreira, R. G., Puerta-Gomez, A. F., Kim, J., & Castell-Perez, M. E. (2012). Factors affecting radiation D-values (D10) of an Escherichia coli cocktail and Salmonella typhimurium LT2 inoculated in fresh produce. J. Food Sci., 77(4), E104–E111. https://doi.org/10.1111/j.1750-3841.2011.02603.x.
Niemira, B. A. (2007). Relative efficacy of sodium hypochlorite wash versus irradiation to inactivate Escherichia coli O157:H7 Internalized in leaves of romaine lettuce and baby spinach. J. Food Protection, 70(11), 2526–2532. https://doi.org/10.4315/0362-028x-70.11.2526.
Olaimat, A. N., & Holley, R. A. (2012). Factors influencing the microbial safety of fresh produce: A review. Food Microbiol., 32(1), 1–19. https://doi.org/10.1016/j.fm.2012.04.016.
Rajkowski, K. T., Boyd, G., & Thayer, D. W. (2003). Irradiation D-values for Escherichia coli O157:H7 and Salmonella sp. on inoculated broccoli seeds and effects of irradiation on broccoli sprout keeping quality and seed viability. J. Food Protection, 66(5), 760–766. https://doi.org/10.4315/0362-028x-66.5.760.
Strydom, W., Parker, W., & Olivares, M. (2005). Electron beams: Physical and clinical aspects. In E. Podgorsakt (Ed.), Radiation oncology physics: A handbook for teachers and students. Vienna, Austria: IAEA.
Thayer, D. W., Boyd, G., Muller, W. S., Lipson, C. A., Hayne, W. C., & Baer, S. H. (1990). Radiation resistance of Salmonella. J. Ind. Microbiol., 5(6), 383–390. https://doi.org/10.1007/BF01578097.
University of Wisconsin. (2019). The food irradiation process. UW Food Irradiation Education Group. Retrieved from uw-food-irradiation.engr.wisc.edu/materials/FI_process_brochure.pdf.
USNRC. (2018). U.S. Nuclear Regulatory Commission. Retrieved from https://www.nrc.gov/about-nrc/radiation/health-effects/radiation-basics.html.
WHO. (1981). Wholesomeness of irradiated foods. Technical Report Series 659. Geneva, Switzerland: World Health Organization. Retrieved from https://apps.who.int/iris/bitstream/handle/10665/41508/WHO_TRS_659.pdf?sequence=1&isAllowed=y.
Scott A. Morris
Departments of Agricultural & Biological Engineering and Food Science & Human Nutrition
University of Illinois at Urbana-Champaign
Champaign, Illinois, USA
Key Terms
• Product protection
• Packaging design
• Packaging materials
• Permeation
• Shelf life
• Packaging damage
• Packaging cycle
• Information cycle
Introduction
Packaging is an engineering specialization that involves a systems-oriented means of preparing and distributing goods of all types. Packaging performs several fundamental functions and has broad reach and wide impact beyond the consumer’s immediate purchase. It is a much more complex system than most consumers (and many producers) realize and requires skills drawn from all facets of engineering. For that reason, integration of concepts is absolutely essential, and this chapter is best understood by considering the systems-cycle concepts laid out in the Applications section first, before pursuing isolated topics or calculations.
Packaging makes it possible to have a broad distribution of perishable items such as food and medicine. By considering the complete cycle of usage, conditions, handling, storage, and disposal, appropriate packaging can be designed for nearly any application, market, and regulatory structure. Thus, it is important for packaging to be included as early as possible in the product development cycle so that the proper packaging can be created and tested in time to meet production deadlines, and to highlight problems in the product that might make it susceptible to shipping damage or other harm.
Outcomes
After reading this chapter, you should be able to:
• Describe the large-scale packaging system, both physical and informational, beyond development of a simple container
• Apply basic materials data to calculate simple permeation (mass-transfer) problems for polymeric packaging applications
• Estimate shelf life of products and recognize some of the problems of relying solely on data-projection based estimation
• Describe how packaging designs and solutions vary depending on economics, available resources, and infrastructure, and how mimicking a solution from one market may be unproductive in another due to material availability or differing cost structure, particularly in different geographical regions
Concepts
Package Types
There are three package types: primary, secondary, and tertiary. The primary package material directly contacts the product, such as the plastic bottle containing water or a bag holding potato chips. For food, pharmaceuticals, cosmetics, and similar types of products, regulations require that the packaging material not transfer harmful material into the product (and the term primary package is usually used in relevant legislation) (Misko, 2019; USFDA, 2019). The current debate over bisphenol A (BPA) content in packaging (for example, water bottles) and its health effects when consumed is an example of this kind of material transfer, which may cause material components to be banned in certain products or markets.
The secondary package usually surrounds the primary package. A box of cereal is a good example, with the product contained in the interior pouch (the primary package) and the exterior printed carton acting as the secondary package. The secondary package may act as advertising space on a store shelf, or to give a good first impression in e-commerce, and also carries information for point of sale (POS) operations.
Most often, the tertiary package is the shipping carton, carrier, or tray that carries unitized packages, i.e., packages that have been collected into groups for shipping, through the distribution system. In many cases, it is a corrugated shipping container, but for very strong types of packages such as glass jars and metal cans, it may be a simple overwrapped tray. This package must usually carry shipping information, and must frequently comply with relevant shipping regulations, rules, tariffs, and labeling requirements.
Material Types
Packaging is often described in terms of primary materials that make up the body or structure of the package. The most common primary packaging materials are plastics, metals (steel and aluminum), glass, and paper. Global use of material types is shown in Figure 6.5.1. Other materials include traditional low-use materials such as structural wood in crates, as well as printing inks, adhesives, and other secondary materials. Secondary materials and components of the package are usually added to the primary structure and are often used for assembly, such as adhesives or a “closure”—the cap or lid on a container. Other components, such as inks used for printing, spray pumps, and other secondary features, may be included in the latter group.
Plastics
Plastics are most often created by the polymerization of petrochemical hydrocarbons, though there is substantial effort to create useful versions from naturally occurring carbohydrates, particularly from plant and algal sources as well as genetically engineered bacterial cultures. These polymers typically contain long carbon “backbone” chains of considerable length, and may or may not have bonds forming cross-links between the chains. A rule of thumb is that more cross-linking will create a stiffer, more brittle material. Additionally, plastics exhibit “crystallinity,” which does not follow the strict definition of a crystal (a completely bound structure with a very sharp melting point) but does exhibit a high degree of ordering: backbone chains arranged in regular patterns, usually emanating from a central nucleation site (Figure 6.5.2). Polymers that have a low degree of ordering in their chain orientation are typically termed “amorphous,” much like a bowl of cooked noodles. A crystalline polymer melts over a narrow range of temperatures that depends on factors such as molecular weight distribution and additives, whereas an amorphous phase exhibits a broader, less well-defined softening and liquefaction range.
For a given chain length, an ordered, crystalline polymer will have higher density, be more resistant to absorbing or permeating materials through the structure, and may be more brittle than amorphous materials, which will be tougher, more flexible, and more likely to absorb or transmit material through the molecular structure. For example, polyethylene is suitable for forming simple flexible structures such as milk cartons, but does not resist stress cracking while flexing, so polypropylene is used for “living hinge” structures that are often seen as flip caps on containers.
Polymers may also have their structure altered by post-processing the sheet, film, or structure in a process called “orienting.” This involves mechanically deforming the material so that the chains are pulled into alignment, creating a higher degree of crystallinity and better mechanical strength and barrier properties. This orientation may affect the density as well, since it will create order in the backbone chain. The relationship of chain length/molecular weight and crystallinity is illustrated in Figure 6.5.3 (Morris, 2011).
For example, a polyethylene terephthalate (PET) soda bottle is first created as a molded “preform,” roughly resembling a test-tube, with the threaded “finish” that the lid is attached to already formed. In the bottling plant, the preforms are heated to a very specific temperature and then rapidly inflated with compressed air inside a shaped mold. This “stretch-blow” process aligns the molecular structure of the body into a tight, two-way basket weave of polymer chains that is capable of resisting the tendency of the carbon dioxide (CO2) to dissolve into the polymer and escape through the structure.
If a type of polymer is too brittle to be used properly in its intended function, it may also be modified by the addition of plasticizers that act as lubricants or spacers in the internal molecular structure and make the structure more ductile. This might be done for a squeeze dispenser or a structure that is too brittle at low temperatures.
Side groups bonded to the main carbon-carbon “backbone” chain usually define the plastics that are commonly used in packaging. Since writing the entire structure of hundreds of thousands of units would be impractical, the structure is often represented by the repeat units that comprise the polymer backbone chain (Figure 6.5.4). Several of these polymers may exhibit branching of the structures from the central backbone, but these branches are also composed of repeat units. For example, polyethylene has a simple side-structure of two hydrogen atoms, while polystyrene has a cyclic phenyl structure. The interaction of these rings with one another along the long chains results in the stiff, brittle behavior of unplasticized polystyrene.
Steel
Steel is used almost exclusively for food cans, as well as larger drums for many types of products. When used as a container for thermally processed foods, steel cans have an internal lining to reduce corrosion and interaction with the product. Typically, the coating is tin, which creates an anodic protection layer in the absence of oxygen, is non-toxic, and does not affect the flavor or texture of most products; there is usually a supplementary coating of some type of lacquer or synthetic polymer. Cans are formed as either two-piece or three-piece structures. The bottom and body of a two-piece can are formed from a single piece of material by progressive forcing through dies, with a seamed-on lid. The body of a three-piece can, now increasingly uncommon since it is more costly to produce, is formed from a single piece of tinned sheet with a welded side-seam and a seamed-on bottom, with the lid seamed on after filling, as with the two-piece can. Steel cans have the advantage of resisting substantial loads, both from the stacking of many layers during storage and from the internal vacuum formed when condensing steam eliminates deteriorative oxygen in the headspace during filling and lidding.
Aluminum
Aluminum containers are used almost exclusively for beverages, since aluminum is quite ductile and relies on pressurization, either from the carbonation of the beverage or from the addition of a small quantity of pressurizing gas (typically nitrogen), to achieve sufficient strength. Aluminum cans are formed as two-piece cans, and the interior is coated with a sprayed-on resin to resist corrosion by the contents; for highly acidic products such as cola drinks, this critical step prevents the cans from corroding in a matter of days. The lid of the aluminum can has evolved into a masterpiece of production engineering: the tab is attached with a “rivet” formed from a protrusion of the lid material rather than a third piece, which would add prohibitive cost, and the scored opening reliably resists pressure until opened by the consumer.
Aluminum has two other substantial uses in packaging: foils and coatings. Since metal is inherently a very good barrier against gases, light, and water, flexible foil layers are included in many types of paper/plastic laminates to provide protection for products. Similarly, an evaporated coating of aluminum is a common feature of flexible films, particularly for snack foods, whose oily composition is susceptible to light, oxygen, and moisture.
Glass
Glass is formed by the fusion of sand, minerals, and recycled glass at temperatures above 1500°C. The molten glass, a thick liquid, is dispensed in “gobs” that are carried to a mold that forms a preliminary structure called a “parison,” then on to a final mold where the parison is inflated with air and takes its final form while still quite hot. Since glass combines poor heat conduction with brittleness, the formed containers must then be cooled slowly in an annealing process, usually in a slow conveyor-fed oven called a “lehr” that contains progressively lower temperature zones. This allows the molded container to cool slowly over a long period and prevents failure from residual thermal stress.
Once formed, glass containers are quite strong, although susceptible to brittle failure, particularly as the result of stress concentration in scratches or abrasion. For this reason, the containers have thicker areas called “shock bands” molded in and also are coated to reduce contact damage. Many glass-packaged products, particularly beverages, are shipped with an internal divider of inexpensive paperboard to separate the containers and prevent scratching. Glass is otherwise strong enough that it is often shipped with a simple tray and overwrap to unitize the containers until they are shelved at retail.
Glass is being replaced with plastic in many applications for several reasons, primarily fragility and weight. Since ingested glass shards represent an enormous hazard to the consumer, breakage during filling and handling operations requires stopping production and thorough cleaning for every occurrence and discarding nearby product whether contaminated or not (American Peanut Council, 2009). Additionally, weight savings can be significant: one peanut butter filling operation saved 84% of package weight by replacing glass containers with plastic containers (Palmer, 2014). Generally, the substitution of plastic for glass has resulted in both cost and liability reduction, although for products intended for thermal processing after filling, the designs can demand precise control of material properties and forming (Silvers et al., 2012).
Paper, Paperboard, and Corrugated Fiberboard
Paper materials are created from natural fibers, primarily from trees and recycled content; other sources, such as rice straw, hemp, and bamboo, may also be used. Paper has directionality in tearing, bending, and warping: because the fibers preferentially separate rather than break, paper tears most easily along the direction in which the forming machine laid the fiber slurry (termed the “machine direction”). Since paper is a natural, fibrous material, its strength changes with moisture content. Since fibers typically swell in diameter (at right angles to the machine direction, termed the “cross machine direction”) without significantly changing length, surface exposure to water or steam may cause the paper to curl around the machine direction axis.
While paper fibers can be processed in many ways, the basic approach is to separate the fibers into a slurry, then reform the slurry into long sheets (called “web”) in one of two ways. The earliest process, the Fourdrinier process (named for the Fourdrinier brothers, who developed it), mimics early hand-laid paper in that it pours the fiber slurry onto a continuous wire-mesh belt (the “wire”) through which the water drains. As the water drains, the web is eventually peeled from the wire and put through rollers in several finishing and drying steps. Because this process is limited by drainage, it cannot create thick materials, and only a few layers are possible.
A later development, the cylinder process, uses rotating cylinders to adhere fibers from the slurry to a continuous moving belt of absorbent material from underneath, circumventing the previous drainage limitation. This has the advantage of being able to form many layers for thick-section papers and paperboard. Paperboard, i.e., paper that is thicker than approximately 0.3 mm, is usually die-cut into cartons, dividers, or other more rigid structures. Paperboard is used in all types of consumer packaging from hanging cards to cartons, while paper is typically used in pouch structures and bags to add strength and good printing surfaces.
Corrugated fiberboard (colloquially called “cardboard”) is a manufactured product that assembles paper into a rigid structural sheet, usually consisting of two outer “linerboard” layers and a crenulated internal “corrugating medium” layer. The linerboard may be pre-printed to match the product; this can allow much more sophisticated graphics to be used compared to printing after manufacture, which is limited by the irregular surface of the material. The medium is continuously formed using steam and a shaped roller and adhered between the linerboard sheets using starch-based glue. The sheets of corrugated board are then usually die cut into the necessary shapes for forming boxes, shipping containers, and other structures. Multiple layers are possible and are used for specialized applications such as household appliance shipping containers.
Product Protection
Packaging serves several functions. Protection of the product is of primary importance, particularly with products such as food. Fresh foods often require vastly different types of protection than processed and shelf stable foods that are meant to be stored for much longer periods of time. Proper packaging protects products from physical damage and reduces costs due to waste. Additional functions of packaging include utilization, communication, and integration with ordering, manufacturing, transportation, distribution, and retail systems as well as return logistics networks.
Definition of Food Damage or Quality Loss
Defining damage, spoilage, or unsuitability of food can be very difficult. While microbial contamination levels can be quantified, the effects of texture or color changes are often subtle and subjective. Far too often, a food product is considered spoiled based on a qualitative measure that is entirely subjective and may be motivated by other factors. It is important that the definition of unacceptable product be carefully considered (and perhaps contractually defined) to avoid subsequent conflict. Ingredients, components, and materials supplied to other manufacturing operations should always have quality criteria carefully and quantitatively defined to avoid arguments that may be motivated by an attempt at renegotiation of price or other commercial considerations (Bodenheimer, 2014; Pennella, 2006).
Since defining food product failure may be a difficult task, it can be useful to focus on the most easily degraded or damaged component that will cause the product to become unsafe or unacceptable if it fails—the critical element (Morris, 2011). This critical element may be defined as an easily degraded ingredient, a significant color change, mechanical failure, or an organoleptic quality (usually defined by a taste, texture, or odor, most often identified by human evaluators in a blind test) that fails an objective criterion for failure. The critical element to be used in a sampling plan must meet two criteria: its state must be determinable by objective analysis, and its failure conditions must be defined by objective criteria rather than subjective anecdote.
This approach can have several shortcomings. It is easy to focus on a particular aspect of quality to the more general detriment of the product, and it is tempting to choose the quality element because of its ease of assay rather than its impact or importance. Finally, this may be a moving target, as the critical element may become some other factor as circumstances change.
Transportation and Storage Damage
Damage resulting from static and dynamic effects during manufacture, storage, handling, and distribution may range from simple compression failure of a container to complex resonance effects in a vehicle-load of mixed product. An understanding of storage conditions and the transportation environment can help in the design of an efficient package capable of surviving distribution without over-packaging.
Light and Heat Damage
Damage to a food product may occur because of exposure to light or to temperature extremes, both high and low. Ultraviolet light may cause fading of the external printed copy and an unappealing appearance, but by itself does not penetrate into a transparent package. Certain products, however, are extremely sensitive to visible light. Skim milk exhibits a marked decrease in Vitamin A with exposure to fluorescent lights common in retail environments and beer’s isohumulone flavoring will degrade into the compound 3-MBT (3-methyl-2-butene-1-thiol) causing a sulfurous “skunked” or “lightstruck” flavor to develop (deMan, 1981; Burns et al., 2001).
Thermal damage may result from the long-term effects of both very high and very low temperature exposure, though low-temperature damage to a fragile product is more associated with the breakdown of texture and structure, usually from ice crystal growth or emulsion failure, than with chemical changes. High temperatures will accelerate any thermally dependent degradation processes and may cause other problems, such as unexpectedly high permeation rates in packaging materials caused by the transition of a polymer from a glassy to a rubbery state.
Gas and Vapor Transmission Damage
Gas and vapor transmission problems are very product-specific and may be situational. A carbonated drink may suffer from loss of carbonation, while another product may oxidize badly because of oxygen transmitted through the package. A confectionary product may have rapid flavor change because of loss of volatile flavor components that self-plasticize the packaging material. Volatile organic chemicals (VOCs) ranging from diesel fumes to flavor components may be transferred in or out through the package. Water vapor gain may cause spoilage of food or degradation of pharmaceuticals, while water vapor loss may cause staling of bread products. A good understanding of both the product properties and of the environment that it will face in distribution are important for proper design (Zweep, 2018).
Permeation in Permeable Polymeric Packaging Material
Permeation is the ability of one material (the permeant) to move through the structure of another. Many amorphous materials such as natural and artificial polymers are permeable because of substantial space between their molecular chains. Figure 6.5.5 shows this in schematic form, with permeation of vapor progressing from the high-concentration side to the low-concentration side via sorption into the high-concentration side, diffusion through the bulk matrix of the film membrane, and then desorption on the low-concentration side, all driven by the concentration differential across the material. Glass and metal packaging, on the other hand, are impermeable to everything except hydrogen because of their ordered structure or dense packing. Polymers in a highly ordered state also exhibit very low permeability relative to disordered structures.
The rate of permeation depends on the species of permeant, the type and state of the polymer, and any secondary factors such as coatings. The polymer may be glassy, a rigid state with little chain mobility (a good example is a brittle polystyrene drink cup), or rubbery, which allows segmental motion of the polymer chains. With most polymers, there is a measurable shift between these states at a particular temperature, the glass transition temperature, with elasticity and permeation increasing when the polymer is above its glass transition temperature.
Permeability can be modeled as the concentration-gradient driven process (mass transfer process) of dissolving into the high-concentration surface, diffusing through the film membrane matrix materials, and then desorbing from the low-concentration surface, much in the same way that heat is transmitted by conduction through the thickness of a wall (Suloff, 2002).
Mass-transfer equations can be constructed to create a simple model of the diffusive flux of the permeant (gas) based on the linear Fickian diffusion model (Equation 6.5.1) (Fick, 1855). For movement of a permeant through a layer of material per surface area:
$\frac{\text{quantity permeated per unit of time}}{\text{area}} = J = -D\frac{\partial c}{\partial x}$
where J = diffusive flux (mol m−2 s−1)
D = diffusion coefficient (m2 s−1)
c = concentration (mol m−3)
x = position (m); in Figure 6.5.5, this would be the position within the cross section of the film membrane.
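Where a quick numerical check is helpful, Equation 6.5.1 can be evaluated directly. The short Python sketch below computes the steady-state flux across a film, assuming constant surface concentrations so that the gradient is linear through the thickness; the diffusion coefficient, concentrations, and dimensions are illustrative assumptions, not values from the text.

```python
# Minimal sketch of steady-state Fickian flux through a film (Equation 6.5.1),
# assuming constant surface concentrations so that dc/dx = (c_low - c_high)/L.
# All values below are illustrative, not data from the text.

def fickian_flux(D, c_high, c_low, thickness):
    """Steady-state diffusive flux J = -D * dc/dx, in mol m^-2 s^-1."""
    dc_dx = (c_low - c_high) / thickness  # linear concentration gradient
    return -D * dc_dx

D = 1.0e-13      # diffusion coefficient, m^2 s^-1 (assumed)
c_high = 8.0     # concentration on the high side, mol m^-3 (assumed)
c_low = 0.0      # product side assumed permeant-free
L = 50.0e-6      # film thickness, 50 um

J = fickian_flux(D, c_high, c_low, L)
print(f"flux = {J:.2e} mol m^-2 s^-1")
# quantity permeated through 0.1 m^2 over 180 days:
print(f"total = {J * 0.1 * 180 * 86400:.2e} mol")
```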
The transmission rate through composite structures, i.e., structures having several layers, can be calculated in a manner similar to thermal systems using Equation 6.5.2:
For n layers of material,
$TR_{\text{total}}=\frac{1}{\frac{1}{TR_{\text{layer 1}}}+\frac{1}{TR_{\text{layer 2}}}+\dots+\frac{1}{TR_{\text{layer n}}}}$
where TRtotal = total transmission rate (mol s−1)
TRlayer n = transmission rate of layer n
If the permeation of the material is known (or can be estimated), then estimating the permeation of a package design is a function of temperature, surface area, and the partial pressure gradient, ∆p. Partial pressure is the pressure that a gas in a mixture would exert if it alone occupied the volume of the mixture. By Dalton’s law, in a mixture of non-reacting gases the total pressure is the sum of the partial pressures of the individual gases, so the partial pressure of a permeant is the product of its mole fraction and the total pressure (Dalton, 1802). Henry’s law states that the amount of gas absorbed in a material is proportional to the pressure of that gas over the material; together, these relationships account for the selective nature of permeation by gases that have differing partial pressures across a given polymer (Sanchez & Rogers, 1990).
Equations 6.5.1 and 6.5.2 describe idealized circumstances—a constant rate of permeation without chemical reaction between the polymer and the permeant, at constant temperature, and without physical distortion of the film—and are only valid for diffusion-based permeation. With holes, perforations, voids, or defects, gas flow is described instead by simple fluid-flow models. In real-world applications, many conditions, such as temperature changes, fabrication methods, and handling stresses, will compromise these assumptions. Diffusion in polymers is an ongoing field of research, and with the great array of volatile compounds in foods, the system may be complicated by several types of deviation from the ideal case. From a practitioner’s standpoint, the permeation data provided by a supplier may be measured under idealized circumstances or for an initial production run, and will likely not reflect variations that occur during manufacturing.
Permeability (frequently designated $\bar{P}$) has units that have been described as “. . . a nightmare of units” (Cooksey et al., 1999). The SI standard unit for this property of polymeric materials is mol/(m⋅s⋅Pa), though it is used inconsistently, even in academic literature and certainly in commercial data. Rates may be reported in any number of formats and improper mixes of US customary units, SI, cgs, or other measures, in results provided by various tests and manufacturers, so the practitioner will find it necessary to convert units in order to make use of the data. Most of these roughly conform to this format:
$\bar{P} = \frac{\text{(quantity of permeating gas)(thickness)}}{\text{(time)(membrane area)(partial pressure difference across membrane)}}$
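Because permeation data arrive in so many unit systems, a small conversion helper can prevent bookkeeping errors. The Python sketch below, a hypothetical helper of our own naming, converts the common US customary form, cc·mil/(100 in²·day·atm), into the metric form used in Table 6.5.1; the conversion constants (1 mil = 25.4 μm, 100 in² = 0.064516 m²) are exact.

```python
# Convert permeability from cc*mil/(100 in^2 * day * atm)
# to cc*um/(m^2 * day * atm). The conversion constants are exact.

MIL_TO_UM = 25.4                # micrometers per mil
HUNDRED_SQ_IN_TO_M2 = 0.064516  # square meters per 100 in^2

def us_to_metric_permeability(p_us):
    """cc*mil/(100in^2*day*atm) -> cc*um/(m^2*day*atm)."""
    return p_us * MIL_TO_UM / HUNDRED_SQ_IN_TO_M2

print(f"{us_to_metric_permeability(1.0):.1f}")  # ~393.7
```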
Experimental Determination of Permeation Rate
Experimental determination of permeation rates and their derived constants is usually done using a test cell of known surface area with concentrated permeant (e.g., oxygen or CO2) on one side of the package film (generally between 0.06 mm and 0.25 mm in thickness) and inert gas or air on the other side. As permeation progresses, the lag time (the time to achieve a steady rate of permeation) and the rate of concentration increase on the non-permeant side can be measured and used to calculate the solubility and diffusion coefficients (Mangaraj et al., 2015). Typical values of oxygen and water transmission rates and glass transition temperatures are shown in Table 6.5.1.
For moisture permeation tests, a similar arrangement is used, except that a desiccant usually provides the partial pressure differential, with a stream of humidified air circulating on the other side of the film membrane. The moisture gained by the desiccant, measured by weight change, is used to calculate the permeation rate (ISO, 2017). Additionally, there are dedicated test devices for oxygen and water permeability that rely on real-time determination of permeation rate using heated zirconia (zirconium oxide) sensors and infrared-absorption detectors, respectively.
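A common way to reduce such test-cell data is the classical time-lag analysis: extrapolating the downstream concentration trace back to the time axis gives a lag time θ, and the diffusion coefficient follows as D = L²/(6θ). The sketch below applies that relation; the thickness and lag time shown are illustrative assumptions, not measured data.

```python
# Time-lag estimate of the diffusion coefficient from a permeation test:
# D = L^2 / (6 * theta), where theta is the measured lag time.

def diffusion_coefficient_from_lag(thickness_m, lag_time_s):
    """D in m^2 s^-1 from film thickness (m) and lag time (s)."""
    return thickness_m ** 2 / (6.0 * lag_time_s)

L = 0.10e-3     # 0.10 mm film, within the 0.06-0.25 mm range noted above
theta = 3600.0  # 1 h lag time (assumed)
print(f"D = {diffusion_coefficient_from_lag(L, theta):.2e} m^2 s^-1")
```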
Permeation Modification in Packaging Films
Using the simple sorption-diffusion-desorption model of permeation shown in Figure 6.5.5, one can find several ways to modify the barrier characteristics of packaging films, either by modifying the surface (sorption/desorption) characteristics or by affecting the diffusion characteristics of the overall film structure. Coatings and surface treatments can be used to modify the sorption/desorption characteristics of polymer films. Foremost among these treatments is metallization, the evaporation of a thin layer of aluminum onto the film in a vacuum chamber. This may be done on either side of the film, but is most often done on the package-interior side to avoid abrasion loss, and the metallized surface may be laminated to prevent transfer of aluminum that would discolor the product. There are other surface chemistry modifications, such as fluorination, which, though challenging to implement in production, can convert the surface of simple polyolefins to a polyfluorinated compound with markedly better barrier characteristics. Other surface coatings and laminations are common. Printing, labeling, and other surface decorations may also provide a degree of barrier protection over part of the package (Nakaya et al., 2015).
Table $1$: Generalized properties of the common packaging polymers shown in Figure 6.5.4 (Thermofisher Inc., 2019; Rogers, 1985; Sigma-Aldrich Inc., 2019). These properties are generalized from available literature, and unlike many engineering materials, there are no standard grades. The properties may vary widely between manufacturers, and will vary with density, crystallinity, orientation, and additives, among other factors. This table is provided for comparison only.
| Polymer Type | Oxygen Transmission Rate¹ | Water Vapor Transmission Rate² | Glass Transition Temperature (°C) | Comments |
|---|---|---|---|---|
| Polyethylene (PE) | 194 | 18 | −25 | Polyethylene properties vary significantly with density, branching, and orientation. |
| Polyvinyl chloride (PVC) | 5 | 12 | 81 | Both PVC and PVDC must be food grade (i.e., demonstrating no extractable vinyl chloride monomers) to be used with food products. Concerns over chlorinated films in the popular press reduced their use starting in the early 2000s. |
| Polyvinylidene chloride (PVDC) | 5 | 30 | −18 | (See PVC comment.) |
| Polyethylene terephthalate (PET) | 5 | 18 | 72 | PET will reduce its transmission rate drastically when it is oriented during fabrication. |
| Polystyrene (PS) | 116–155 | 24 | 100 | Polystyrene is very brittle, and must be plasticized to be usable in most applications; this increases transmission rates significantly. |
| Polypropylene (PP) | 93 | 4 | −8 | Very impact resistant; used for snap caps and other multiple-use applications. |
| Polyvinyl alcohol (PVOH) | 0.8 | 8000+ (see comment) | 85 | Water soluble; a high oxygen barrier, but must be kept dry, typically by layering between moisture barrier layers. Adsorption of moisture destroys barrier characteristics. PVOH film is also used by itself for water-soluble packets of household detergents and other consumer products. |
| Nylon 6,6 | 1.7 | 135 | 50 | Hygroscopic; transmission rate varies with moisture content. |

¹ In units of $\frac{cc \cdot \mu m}{m^{2}\cdot 24h\cdot atm}$, tested at STP.

² In units of $\frac{g \cdot \mu m}{m^{2}\cdot 24h\cdot atm}$, tested at 37°C and 90% relative humidity.
For a given polymeric material, modifying the internal structure of the polymer will change the diffusivity coefficient. Intentional modifications typically involve orienting the material by drawing it in one or more directions so that the polymer chains pack into a more orderly, denser structure (National Research Council, 1994). This produces better strength and barrier characteristics such as in the previously described stretch-blow molding of carbonated beverage bottles.
Polymers may also be modified with plasticizers (soluble polymer-chain lubricants) that reduce brittleness by allowing chain mobility, which also creates opportunities for permeants to penetrate the structure more readily. Plasticizers that contact food must be approved for food use since they will likely migrate into the product in microscopic quantities. This has been the subject of several controversies, as there is evidence of potential teratogenic (birth defect causing) activity in some plasticizers (EFSA, 2017). Food components themselves, most notably oils and fats, may act as plasticizers and may change a package material’s barrier or physical characteristics.
Permeation Changes during Storage
Product ingredients or components dissolving into the package structure may result in decreased mechanical strength, reduced barrier properties and shelf life, or even the selective removal of flavor compounds (termed “flavor scalping”). This may create a mysterious reduction of shelf life because of synergistic effects. For example, a citrus flavoring compound rich with limonene may plasticize the packaging material and increase loss of both flavor and water, creating what appears to be a moisture loss problem (Sajilata et al., 2007). Similarly, volatile flavorings can increase oxygen permeation rates with harmful effects for the product, or may increase CO2 loss rates in carbonated beverages.
Other Packaging Damage Occurring During Storage and Distribution
Corrosion of Tin-Plated Steel Cans
The electrochemistry of the tin-plated steel can is complex and depends on several factors in order to maintain the extraordinary shelf life that most consumers expect. Canning operations typically displace headspace air with live steam to both reduce oxygen in the can and provide vacuum once the steam condenses. After lidding, the can end is sealed by crimping the edge in a series of steps to provide a robust hermetic seal, and the environment in the package typically traverses three stages (Mannheim and Passy, 1982; Wu, 2016):
1. Initial oxidizing environment—Residual oxygen inside the freshly sealed can and dissolved in the product is bound up in oxidation products in the product and can material. The tin layer is briefly cathodic during this stage and provides little protection until the oxygen is depleted. This typically takes a few days to conclude, depending on the composition of the product and processing conditions.
2. Reducing environment—In the absence of free oxygen, the electrochemistry reverses and the tin or chromium layer becomes anodic, slowly dissolving into the canned product and protecting the steel of the can wall. This stage may last years, but is affected by many factors, particularly product composition (e.g., pH level, acidifying agents, salts, and nitrogen sources). Each product must be considered unique, and product reformulation may cause significant changes in can corrosion properties.
3. Terminal corrosion—At the end of service life, the environment may still be anaerobic, keeping the electrochemistry anodic, but the protective coating of tin will have been depleted, allowing corrosion and pitting of the can. This can result in staining of the product or can surface, gas formation (hydrogen sulfide, producing so-called “stinkers”), and, finally, pinholing of the can body and loss of hermeticity. Depending on the product, this may take from several months for highly acidic products, like pineapple juice and sauerkraut, to many decades.
Brittle Fracture and Glass Container Failure
Several failure modes are important to understand when working with glass packaging, particularly considering that there may be legal liabilities involved in their failure. Additionally, persistent glass failures in food production facilities can wreak havoc since dangerous glass shards are produced. As a brittle material, glass concentrates stress around thickness changes and scratches, since these provide a location for stress magnification as illustrated in Equation 6.5.4 (Griffith, 1921):
$\sigma_{max} = 2\sigma_{app}(\frac{d}{r})^{1/2}$
where σmax = maximum stress at crack tip (N m−2)
σapp = applied stress (N m−2)
d = depth of crack (m)
r = radius of crack tip (m)
A tiny scratch can create an enormous concentration of stress, and once the critical stress of the material is exceeded, a crack will form that will continue in the material until it fails or until it encounters a feature to re-distribute the stress. Stresses may occur as the result of thermal expansion or contraction since glass is not only brittle, but has poor thermal conductivity, so a section-thickness change may create a steep thermal gradient that causes a container to fail after fabrication or heat treatment. For carbonated beverages, the internal pressure combined with a surface scratch created during manufacture or handling may provide enough pressure and resultant stress in the package material to cause it to burst.
A stress concentration factor (K) can be developed from Equation 6.5.4 as:
$K = \frac{\sigma_{max}}{\sigma_{app}} = 2(\frac{d}{r})^{1/2}$
The stress concentration factor (K) becomes very large with scratches that have a very small crack tip, and even modest depth. The effects of scratches are avoided in design and manufacturing by providing “shock bands,” which are thicker sections of material that are added to contact other bottles in manufacturing and handling, as well as by adding external surface coatings and putting dividers in shipping cartons.
Failure analysis on a broad scale is a specialty unto itself, but when determining the origin of the fracture, there are characteristic features that help identify the point of origin and direction of travel (Figure 6.5.6). The point of origin in both ductile and brittle materials often has a different and distinct texture, usually mirror smooth, and as the failure progresses it will typically leave a distinctive pattern that radiates outward from the point of origin (Bradt, 2011).
When examining a failed glass container’s reconstructed pieces, it is useful to consider the different failure modes that are common in glass structures. The most common failures are impact and pressure fractures, thermal failure, and hydrodynamic (“water hammer”) failure (Figure 6.5.7). Impact and pressure fractures often originate from a single point in the structure, with the fracture originating on the outer surface from impact, and from the inside from pressure, as determined by observation of magnified fracture edges at the point of initiation.
Thermal failure typically starts at a section thickness change (from thick to thin) as the container is heated or cooled abruptly and a large thermal differential generates shear stress in the material. This manifests itself most often in bottles and jars with a bottom that falls out of the rest of the container at the thickness change, perhaps with other cracks radiating up the sidewall.
Water hammer failure is the result of hydraulic shock waves propagating through the product (usually from an impact that did not break the container directly) and causing localized formation of vapor bubbles that then collapse with enough force to break the container. This usually has the distinctive feature of a shattered ring completely around the container at a particular height (usually near the bottom) with obvious fragmentation outward from the pressure surge. Products with lower vapor pressures, particularly carbonated and alcoholic beverages, will fail with less energy input than liquid or gel products with high vapor pressures (Morris, 2011).
Shelf Life of Food Products and the Role of Packaging
Products have two shelf lives. The first is where the product becomes unusable or unsafe because of deterioration, contamination, or damage. The second shelf life is one of marketability; if the product’s appearance degrades (such as color loss in food that can be seen while still on the shelf), then it will not appeal to consumers and will be difficult or impossible to sell.
The primary concern with packaged processed foods is usually microbial contamination, followed by the previously discussed gain or loss of food components. Since the food is not actively metabolizing, the usual problems apart from microbial growth result from oxidation, gain or loss of moisture or other components, and discoloration from light exposure. While barrier films and packaging can help with some of these problems, it may be useful to include active components such as sachets or other materials or devices that will bind up oxygen or moisture that infiltrates into the package. These are commonly seen on refrigerated-fresh products such as pasta, prepared meats, and others. Other types of active films or structures may incorporate an oxygen-absorbing barrier to extend shelf life. Light barriers may be a tough problem to contend with since many regulations prohibit packaging from hiding the product from view. Processed meat products such as sandwich meat, which is normally a pinkish color from the nitric oxide myoglobin formed during the curing process, will turn brown or grey under prolonged light exposure and will appear to be spoiled. Bacon has a substantial problem with light-promoted fat oxidation and in some countries is allowed to have a flip-up cover over the product window.
Unprocessed foods, such as fresh meat and vegetables, should be regarded as metabolically active. After harvest, fresh fruits and vegetables typically continue to metabolize as they ripen, slowly consuming oxygen and stored carbohydrates, giving off CO2, and often ripening under the influence of self-produced ethylene gas. It is possible to manipulate the oxygen level and strip ethylene from the product’s environment—this is done on a large scale in commercial controlled-atmosphere (CA) storage facilities—but at the individual package level, the cost of specialized wrapping film and an ethylene-adsorbent sachet may be prohibitive in markets with ready access to fresh fruits and vegetables. Other markets may find these expensively packaged fruits and vegetables appealing because they allow fresh produce to be distributed over great distances or in regions where direct distribution is difficult. Since the early 2000s, the use of 1-methylcyclopropene (1-MCP), an ethylene antagonist, has allowed ripening to be delayed, but overexposure may permanently prevent ripening of some species (Chiriboga et al., 2011).
Freshly butchered meat will absorb oxygen, converting purple-colored reduced myoglobin to red oxymyoglobin and then to brown metmyoglobin. Most customers are not accustomed to seeing the purple color of very fresh meat and expect it to be red, although the red color itself is the result of exposure to oxygen. This creates the problem of extending the shelf life of meat products beyond a few days, since the packaging must allow oxygen in to provide the expected red coloration while preventing the ongoing brown discoloration as metmyoglobin forms. Work on this problem is ongoing. Many centralized meat packing facilities for large retailers use carbon monoxide gas in the package to provide a near-fresh red color. This has created some controversy, since it may disguise the age of the product and mask some indications of spoilage, but the practice is being widely adopted in order to take advantage of centralized processing facilities. Similar processes are being investigated for other meat and seafood products.
Shelf Life Testing and Estimation
In most practical applications, there is not enough time to actually wait for several iterations of the product’s long intended shelf life in order to develop and refine a package. Once the initial design is laid out, it is often subjected to accelerated shelf life testing in order to allow an approximate assessment of protection over a shortened period. Shelf life modeling should be followed up with substantial quality-assessment data from actual distributed product over time, and attention should also be paid to errors in estimation methods, and their effect on longer-term predictions.
Q10 Accelerated Shelf Life Testing
For food and related products, shelf life testing may involve storing the test packages at elevated temperatures in order to accelerate the degradation that occurs over time. The core assumption of Q10 testing is that, for an Arrhenius-type reaction (Equation 6.5.6), increasing the temperature by 10°C causes the quality-loss rate to increase by a fixed scaling factor, Q10. The Q10 value can be thought of as a magnification of the effect of time obtained by increasing the test temperature, within moderation. The general approach is commonly termed Q10 testing (Ragnarsson and Labuza, 1977):
$k=Ae^{\frac{-E_{a}}{RT}}$
and
$Q_{10} = \frac{\text{time for product to spoil at temperature } T\ ^\circ\text{C}}{\text{time for product to spoil at temperature } (T+10)\ ^\circ\text{C}}$
where k = reaction rate constant, in this context effectively the rate of quality deterioration
A = pre-exponential constant for the reaction
Ea = activation energy for the reaction (same units as RT)
R = universal gas constant
T = absolute temperature (kelvin)
Q10 = quality-loss scaling factor (dimensionless)
Typically, Q10 values are in the range of 1.0 to 5.0, but they must be verified by testing. Remember that shelf life is the result of many overlapping reactions, all of which may have very different kinetics, so the range of valid estimation is narrow and the method and its results should be treated with great caution. There is a danger in attempting rapid testing at unrealistically high temperatures, which leads to grossly inaccurate estimates because of phase changes in the product, exceeding the packaging material’s glass transition temperature, thawing, vaporization of compounds, and similar non-linear temperature effects that violate the simple Arrhenius kinetics assumed in many shelf life studies (Labuza, 2000).
Applications
The Packaging Cycle
Given the enormous variety of materials, structures, and components of packages (e.g., rigid vs. flexible, cans vs. pouches) for a global range of products, it is useful to consider packaging as a material-use cycle (Figure 6.5.8) (Morris, 2011). This cycle originated with large-scale, industrialized types of packaging, but can be used to visualize the use of materials and design factors in other, smaller, or more specialized types of operations. When considering a new type of packaging material or new design, it provides a useful means for analyzing the resulting changes in sourcing and disposition beyond the immediate demands of the product.
Raw Materials
Raw materials are the resources needed to create the basic packaging materials, spanning the full spectrum of major packaging materials and components. They are included in the packaging cycle because shifts in global resource production or supply may markedly affect package design and choice.
Conversion of Materials
Material conversion takes bulk, refined materials, such as steel ingots or plastic resin pellets, and converts them to an intermediate form, such as plastic film or metal foil, which is sent to manufacturers who create the finished package. Special processing may occur at this step, such as the tin plating of steel for cans or the aluminizing of plastic films for snack packaging. Because of the difficulties in molding molten glass, glass containers move directly from the refining furnace to finished containers in a single operation.
Finished Packages
Converted materials are made into ready-to-fill packages and necessary components such as jars, cans, bottles, boxes, and their lids or other closures. This step may occur in many places depending on the product involved. For example, a dairy operation or soft drink bottler in a rural area may find it advantageous to be able to produce containers directly on-site. Other operations, such as canneries in crop producing areas, may have a nearby can or pouch fabricator serving several different companies to take advantage of local demand, or there may be a range of local producers working on contract to serve a large-scale local operation.
Package Filling Operation
The package filling operation brings together the package and the product to form a system intended to maintain and protect the product. In this step, the package is filled and sealed. Packaged products are then sent to any secondary treatment such as thermal sterilization, irradiation, or high-pressure treatment (omitted from Figure 6.5.8). Once ready for shipment, the packages are usually unitized into multiples for greater handling and distribution efficiency.
This step also includes critical operations such as sealing, weight verification, label application, batch marking and “use by” date printing. Correct operation is imperative to deliver a consistent level of quality. Improvements in data management and control systems have offered efficiency improvements in this stage. For example, intra-system communication protocols, such as ISA-TR88.00.02 (often referred to as PackML, for packaging machine language), have been developed that define data used to monitor and control automated packaging and production systems and allow high levels of control and operation integration and increased production efficiency.
Transportation System
The unitized product is sent out through a multitude of channels to distribution points, and these channels are increasingly diverse with the rise of e-commerce. Typical modes of transportation are long-haul trucks, railcars, ships and barges, and aircraft; each has a range of applicability and an economic envelope for efficient use. In areas with less developed infrastructure, distribution may operate very differently: high-value items, such as critical, perishable medication, may be flown in and then quickly distributed by foot, motorbike, or pack animal. This “last mile” of distribution has become increasingly important in all markets. With e-commerce, distribution is left in the hands of delivery or postal services where products were previously handled by retail outlets and the customers themselves, and this introduces uncertainty and the possibility of different damage sources. Therefore, when designing a packaging system, the distribution chain must be considered to account for sources of damage. Additionally, each transportation type may have specific rules and regulations that must be followed for a shipment to be acceptable and to limit liability.
Distribution to Consumer
Final distribution varies widely, and may have several modes in a single marketplace, such as direct-to-consumer (D2C), online retail, and traditional “shelved” retail outlets. All of these may vary in size and complexity depending on the culture, economy, market, and location. Rural markets in some countries have often responded well to small, sachet-sized manufactured products that are usually sold in larger containers elsewhere (Neuwirth, 2011), while large “club” stores may require large-volume packages, or unitized groups of product that are sold directly to consumers.
Consumer Decision about Disposal
When the product has been completely used, the final step for the packaging is disposal. The end user decides which form of disposal to use, with the decision affected by economic incentives, cultural and popular habits, and available infrastructure. Discarded packaging is one of the most visible types of waste, since many people do not dispose of or recycle it properly even when facilities are available, but it is often a minority component of total municipal solid waste (MSW) relative to non-durable goods and other waste components. While collection and reuse of materials can be profitable when well organized and when transportation and re-manufacturing infrastructure is available, many places do not have this in any functional sense. In addition, certain materials have fallen out of recyclability because of market changes. A good example is the recycling of EPS foam (expanded polystyrene, commonly called Styrofoam™) in the United States. When fast food restaurants transitioned away from EPS sandwich containers because of their environmentally unfriendly image, the loss of this largest stream of material made most EPS recycling operations unprofitable and largely eliminated the ability to recycle any EPS.
Discarding into the Waste Stream
Packaging can be discarded via a collection system that collects MSW efficiently, either as landfill or as part of an energy conversion system, or it may be part of a less-centralized incineration or disposal effort. In the worst cases, there is no working infrastructure for collection, and packaging waste—particularly used food packaging—is simply left wherever is convenient and becomes a public health hazard. Recent concerns have emerged over the large-scale riverine dispersion of plastic waste into mid-oceanic gyres that create a Sargasso of waste that photodegrades very slowly, if at all, and may be a hazard to ocean ecosystems. Even in many locations with operating infrastructure, discarded materials are entombed in carefully constructed landfills that do not offer the possibility of degradation, while in others, MSW is used as an energy source for power generation. In some areas, organic material such as food and garden waste may be composted for use as fertilizer.
Reuse
Informal reuse schemes have been around as long as containers have existed. In more modern times, reuse of containers for various purposes is common, but the market for refilling in developed economies is somewhat limited to simple products such as filtered water. In some markets, the beverage industry requires that bottles be returned, with reused bottles recirculating for decades. Reuse has complications and liability concerns because of cleanliness issues and requires washing to remove secondary contaminants, such as fuels and pesticides, and inspection for contaminants that are not removed during the washing cycle.
Recycling
Recycling brings materials back into the cycle, and reuse of materials in some form is common in all cultures, though the trajectory the materials take may vary widely. For example, Germany’s “Green Dot” system requires manufacturers of packaged goods to pay into a program that collects and recycles used packaging. As of 2018, the city of Kamikatsu, Japan, which has taken on the mission of being the world’s first “zero waste” community, had 45 different categories of recycling to be collected (Nippon.com, 2018). When properly conducted, recycling is the most efficient continued use of materials, but it depends on market demand and the ability to reprocess and reuse materials. For example, aluminum, which is intrinsically much cheaper to reuse from scrap than to reduce from bauxite ore, has had efficient recycling in place for more than half a century, whereas glass is often not recycled. Recycling is, in general, a function of economics, infrastructure, and regulations; in some markets, the waste disposal sites themselves are considered a resource for extracting materials such as steel and aluminum.
The Information Cycle
The information cycle (Figure 6.5.9) is often as important as the actual material production cycle in that machine-readable coding allows the packages themselves to interface directly with point of sale (POS) systems, inventory and ordering software, and distribution infrastructure. Increasingly, this information is also used to create user profiles for product preferences, to optimize response to variations in demand, and to allow targeted marketing and distribution into niche markets.
Information continuously flows back from many points in the system to automatically create orders for store inventory, to track orders, and to forecast production levels for product manufacturers. Of course, this is not tightly integrated in all cases, but it serves as an idealized representation. Other useful information is derived from the correlation of data such as credit cards, loyalty programs, phone data, and in-store tracking. This is done to assist with marketing and demographic predictions, and to automate order lead-time planning, with the ultimate result of reducing store inventory to the items kept on the shelf, which are constantly replenished through various “just in time” systems to meet demand. This type of distribution system is appealing but can be brittle, breaking down in the event of large-scale disruption of the distribution chain unless large-scale contingencies are considered.
The current trend is to glean marketing information from combinations of this type of data and social media metrics. Extended use of informatics in distribution systems may also serve to locate diversion or counterfeiting of product, losses and theft, and other large-scale concerns in both commercial and aid distribution (GS1.org).
Examples
Example $1$: Calculation of permeation failure in a package
Problem:
Consider a fried snack chip product that will fail a test for oxidative rancidity after reacting with 1.0 × 10−4 mol of oxygen. The package is a polymeric film with $\bar{P} = 23.7\frac{cc \cdot \mu m}{m^{2} \cdot atm \cdot day}$ and an exposed area of 0.1 m2 at STP. Assume that there is no oxygen in the product or package headspace, and that the partial pressure of oxygen, from Dalton’s law, is 0.21 atm. The maximum amount of permeant allowed (Q), as determined by product lab tests, is Q = 1.0 × 10−4 mol of oxygen = 2.24 cc at STP. Determine the film thickness necessary at STP to provide a shelf life of 180 days by keeping the oxygen uptake below Q.
Solution
Solve Equation 6.5.3 for quantity of permeating gas:
$\bar{P} = \frac{\text{(quantity of permeating gas)(thickness)}}{\text{(membrane area)(time)(partial pressure difference across membrane)}}$ (Equation $3$)
where area = 0.1 m2
$\bar{P} = 23.7\ \frac{cc \cdot \mu m}{m^{2} \cdot atm \cdot day}$
$\Delta P = 0.21 \text{ atm}$
$\text{Quantity permeated} = (23.7 \frac{cc \cdot \mu m}{m^{2} \cdot atm \cdot day})(0.1\ m^{2})(0.21\ atm)$
$= 0.498 \frac{cc \cdot \mu m}{day}$
$\frac{2.24\ cc}{(0.498 \frac{cc \cdot \mu m}{day})} = 4.501 \frac{day}{\mu m}$
For a 6 month (180 day) shelf life,
$\frac{180 \text{ days}}{(4.501 \frac{\text{day}}{\mu m})} = 39.994\ \mu m \text{ or } 0.040\ mm$
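The same arithmetic can be scripted for reuse when the permeability, area, or shelf-life target changes. The sketch below is a direct transcription of the solution above, with variable names of our own choosing.

```python
# Required film thickness from Example 1 (Equation 6.5.3 rearranged):
# thickness = P_bar * area * dp * time / Q

P_bar = 23.7       # permeability, cc*um/(m^2*atm*day)
area = 0.1         # exposed area, m^2
dp = 0.21          # oxygen partial pressure difference, atm
Q = 2.24           # allowable oxygen uptake, cc at STP (1.0e-4 mol)
shelf_life = 180   # days

thickness_um = P_bar * area * dp * shelf_life / Q
print(f"required thickness = {thickness_um:.1f} um")  # ~40.0 um (0.040 mm)
```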
Example $2$: Calculation of transmission rate (TR) of multi-layer film
Problem:
A composite plastic film with four layers is proposed as a packaging material. To determine its suitability, the overall transmission rate must be determined. The transmission rates, in units of (cc μm m−2 atm−1day−1), of the individual layers are the following: Film A: 5.0, Film B: 20.0, Film C: 0.05, and Film D: 20.0. What is the overall transmission rate of the film?
Solution
Calculate transmission rate (TR) using Equation 6.5.2.
$TR_{\text{total}} = \frac{1}{\frac{1}{TR_{\text{layer 1}}}+\frac{1}{TR_{\text{layer 2}}}+\dots+\frac{1}{TR_{\text{layer n}}}}$ (Equation $2$)
$TR_{\text{total}} = \frac{1}{\frac{1}{5}+\frac{1}{20}+\frac{1}{0.05}+\frac{1}{20}} = 0.0493$
All rates are in $\frac{cc \cdot \mu m}{m^{2} \cdot atm \cdot day}$.
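Equation 6.5.2 is easily wrapped as a reusable function; the sketch below reproduces this result. Note how the highest-barrier layer (Film C) dominates the overall rate, a useful rule of thumb when specifying laminates.

```python
# Overall transmission rate of a laminate (Equation 6.5.2): the reciprocal
# of the total is the sum of the reciprocals of the layer rates.

def total_transmission_rate(layer_rates):
    """Overall TR from individual layer TRs (all in the same units)."""
    return 1.0 / sum(1.0 / tr for tr in layer_rates)

layers = [5.0, 20.0, 0.05, 20.0]  # films A-D, cc*um/(m^2*atm*day)
print(f"TR_total = {total_transmission_rate(layers):.4f}")  # ~0.0493
```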
Example $3$: Stress concentration in brittle materials (the case of a glass container)
Problem:
A packaging engineer knows that the stress concentration in a scratch can affect the initiation of a fracture in the materials of a container. In order to add enough additional material in the shock band to help prevent failure, the stress concentration factor must be determined. For a scratch in the sidewall of a glass container, with a depth of 0.01 mm and a crack tip radius of 0.001 mm, what is the stress concentration factor (K)?
Solution
Calculate K using Equation 6.5.5:
$K = \frac{\sigma_{max}}{\sigma_{app}} = 2(\frac{d}{r})^{1/2}$ (Equation $5$)
or simply
$K = 2(\frac{d}{r})^{1/2}$
where d = depth of crack = 1.0 × 10−5 m
r = radius of crack tip = 1.0 × 10−6 m
$K = 2(\frac{1.0 \times 10^{-5}\ m}{1.0 \times 10^{-6}\ m})^{1/2}$
$\cong 6.32 \text{ times the applied stress}$
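As a quick check, Equation 6.5.5 can be evaluated in a single line of code; the sketch below reproduces the result.

```python
# Stress concentration factor for a surface scratch (Equation 6.5.5).

def stress_concentration(depth_m, tip_radius_m):
    """K = 2 * sqrt(d / r) (Griffith, 1921)."""
    return 2.0 * (depth_m / tip_radius_m) ** 0.5

print(f"K = {stress_concentration(1.0e-5, 1.0e-6):.2f}")  # ~6.32
```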
Example $4$: Identify the type of failure in glass
Problem:
Identify the type of failure experienced by the fractured glass in Figure 6.5.10.
Solution
The glass failed from thermal shock as evidenced by the crack traversing the region of transition from very thin sidewall to very thick base, the thickness change at the handle attachment point, and the lack of secondary fragmentation. The thick sections change temperature much more slowly than the thin sidewall, creating shear stress from differential expansion and failure in the material.
Example $5$: Q10 determination and shelf life estimation
Problem:
A new food product is being introduced, and a 180-day shelf life has been determined to be necessary. Because of the short timeframe for production, years of repeated long-term shelf-life tests are not practical. Spoilage of the food product is determined by testing for discoloration using a color analyzer. Shelf life estimations are conducted at temperatures of 25°C and 35°C for 15 days, and the time for the discoloration criterion to be exceeded is projected from the short-term data to be 180 days at 25°C and 60 days at 35°C. These values are used to estimate the Q10 value for the new product. For a more accurate check of the 180-day shelf life at 25°C, an accelerated test at a higher temperature is planned to determine whether the product fails. Estimate the time required for the complete accelerated test of the 180-day shelf life at 25°C with testing conducted at 45°C.
Solution
The first step is to calculate Q10 using Equation 6.5.7:
$Q_{10} = \left(\frac{\text{time for product to spoil at temperature T}_{1}}{\text{time for product to spoil at temperature T}_{2}}\right)^{\frac{10}{T_{2}-T_{1}}}$ (Equation $7$), where $T_{2} > T_{1}$
$Q_{10} = \left(\frac{180\text{ days}}{60\text{ days}}\right)^{\frac{10}{35-25}} = 3.0$
Under the simplest of linear-data circumstances (see the cautionary note in the text), the product shelf life is multiplied by 1/Q10 for each Q10 interval (10°C in this case) of increase in storage temperature. Thus, when stored at 45°C, which is two Q10 intervals above 25°C, the product would have a shelf life of 180 days × (1/3) × (1/3) = 20 days. The test time can also be calculated by using the Q10 value of 3.0 to solve Equation 6.5.7 for the time for the product to spoil at 45°C:
$3.0 = \left(\frac{180\text{ days at }25^\circ\text{C}}{X\text{ days at }45^\circ\text{C}}\right)^{\frac{10}{45-25}}$
$X= \frac{180}{9} = 20 \text{ days}$
This procedure allows the simple-case projected estimation of a 180-day shelf life using only 20 days of exposure at 45°C to estimate shelf life at 25°C. Such accelerated testing allows an approximate estimation of shelf life using increased temperatures and is useful for testing when product formulations or packaging change as well as contributing to ongoing quality control.
Note that errors in measurement or procedure at 45°C will be amplified. A 5% error in measurement at 45°C will produce 5% × 180 days = ±9 days of error in the estimated shelf life. Results from accelerated testing are often greatly simplified and may produce spurious results, or failures from conditions not included in the model. Follow-up tests with real-world products are an essential part of validating and correcting deficiencies in the model and are common practice with many products.
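For repeated use, the Q10 bookkeeping in this example can be collected into two small functions, sketched below with names of our own choosing; they reproduce the numbers above and are subject to the same narrow validity limits discussed in the text.

```python
# Q10 estimation and accelerated-test projection (Equation 6.5.7).

def q10_from_tests(t_low, t_high, temp_low, temp_high):
    """Q10 from spoilage times at two temperatures (temp_high > temp_low, deg C)."""
    return (t_low / t_high) ** (10.0 / (temp_high - temp_low))

def accelerated_test_days(target_days, q10, temp_ref, temp_test):
    """Days at temp_test equivalent to target_days at temp_ref."""
    return target_days / q10 ** ((temp_test - temp_ref) / 10.0)

q10 = q10_from_tests(180.0, 60.0, 25.0, 35.0)
print(f"Q10 = {q10:.1f}")  # 3.0
print(f"test at 45 C = {accelerated_test_days(180.0, q10, 25.0, 45.0):.0f} days")  # 20
```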
Image Credits
Figure 1. Morris, S. A. (CC By 4.0). (2020). Global Use of Packaging Materials by Type. (Created with data from Packaging Distributors of America, 2016).
Figure 2. Morris, S. A. (CC By 4.0). (2020). Illustration of highly ordered polymer chains in crystalline regions and disordered chains in amorphous regions.
Figure 3. Morris, S. A. (CC By 4.0). (2020). Relationship of crystallinity, molecular weight (which increases with chain length in this example), and physical properties for a typical linear polyolefin (polyethylene shown). ρ is density in g/ml. Morris, S. A. (2011). Food and package engineering. New York, NY: Wiley & Son.
Figure 4. Morris, S. A. (CC By 4.0). (2020). Repeat unit structures of common packaging polymers.
Figure 5. Morris, S. A. (CC By 4.0). (2020). Permeation through packaging film membrane.
Figure 6. Morris, S. A. (CC By 4.0). (2020). Fracture failure in a brittle material.
Figure 7. Morris, S. A. (CC By 4.0). (2020). Illustration of glass failure types and significant indications of failure source.
Figure 8. Morris, S. A. (CC By 4.0). (2020). Packaging cycle showing the material use cycle from raw materials through package manufacturing, filling, distribution, and end-of-life (EOL) disposition. Morris, S. A. (2011). Food and package engineering. New York, NY: Wiley & Son.
Figure 9. Morris, S. A. (CC By 4.0). (2020). The information cycle illustrating how information from the point of sale (POS) as well as distribution and transportation sources use machine-readable information to create orders, manage inventory levels and provide secondary information about customers, marketing trends and distribution characteristics Morris, S. A. (2011). Food and package engineering. New York, NY: Wiley & Son.
Figure 10. Morris, S. A. (CC By 4.0). (2020). Fractured Glass Example.
References
American Peanut Council. (2009). Good manufacturing practices and industry best practices for peanut product manufacturers. Retrieved from https://www.peanutsusa.com/phocadownload/GMPs/2009%20APC%20GMP%20BP%20Chapter%207%20Peanut%20Product%20Manufacturers%2016%20Nov%2009%20Final%20Edit.pdf.
Bodenheimer, G. (2014). Mitigating packaging damage in the supply chain. Packaging Digest. Retrieved from https://www.packagingdigest.com/supply-chain/mitigating-packaging-damage-inthe-supply-chain140910.
Bradt, R. C. (2011). The fractography and crack patterns of broken glass. J. Failure Analysis Prevention, 11(2), 79–96. doi.org/10.1007/s11668-011-9432-5.
Burns, C. S., Heyerick, A., De Keukeleire, D., & Forbes, M. D. (2001). Mechanism for formation of the lightstruck flavor in beer revealed by time-resolved electron paramagnetic resonance. Chem. European J., 7(21), 4553–4561. doi.org/10.1002/1521-3765(20011105)7:21<4553:aid-chem4553>3.0.co;2-0.
Chiriboga, M. A., Schotsmans, W. C, Larrigaudière, C., Dupille, E., & Recasens, I. (2011). How to prevent ripening blockage in 1-MCP-treated ‘Conference’ pears. J. Sci. Food Agric, 91(10), 1781–1788. doi.org/10.1002/jsfa.4382.
Cooksey, K., Marsh, K., & Doar, L. H. (1999). Predicting permeability & transmission rate for multilayer materials. Food Technol., 5(9), 60–63. https://www.ift.org/news-and-publications/food-technology-magazine/issues/1999/september/features/predicting-permeability-and-transmission-rate-for-multilayer-materials.
Dalton, J. (1802). Essay IV. On the expansion of elastic fluids by heat. Memoirs of the Literary and Philosophical Society of Manchester, 5(2), 595–602.
deMan, J. M. (1981). Light-induced destruction of vitamin A in milk. J. Dairy Sci., 64(10), 2031–2032. https://doi.org/10.3168/jds.S0022-0302(81)82806-8.
EFSA. (2017). BisPhenol A. European Food Safety Authority. Retrieved from https://www.efsa.europa.eu/en/topics/topic/bisphenol.
Fick, A. (1855). Ueber diffusion. Ann. Physik, 9(4), 59–86.
Griffith, A. A. (1921). VI. The phenomena of rupture and flow in solids. Phil. Trans. Royal Soc. A, 221(1 January 1921), 582–593. http://dx.doi.org/10.1098/rsta.1921.0006.
ISO. (2017). ISO 2528:2017: Sheet materials — Determination of water vapour transmission rate (wvtr) — Gravimetric (dish) method. Geneva, Switzerland: International Organization for Standardization. Retrieved from https://www.iso.org/standard/72382.html.
Labuza, T. (2000). The search for shelf life. Food Testing and Analysis, 6(3), 26–36.
Mangaraj, S., Goswami, T. K., & Panda, D. K. (2015). Modeling of gas transmission properties of polymeric films used for MA packaging of fruits. J. Food Sci. Technol., 52(9), 5456–5469. dx.doi.org/10.1007/s13197-014-1682-2.
Mannheim, C., & Passy, N. (1982). Internal corrosion and shelf life of food cans and methods of evaluation. Crit. Rev. Food Sci. Nutr., 17(4), 371–407. http://dx.doi.org/10.1080/10408398209527354.
Misko, G. G. (2019). The regulation of food packaging. SciTech Lawyer, 15(2). Retrieved from https://www.packaginglaw.com/special-focus/regulation-food-packaging.
Morris, S. A. (2011). Food and package engineering. New York, NY: Wiley & Son.
Nakaya, M., Uedono, A., & Hotta, A. (2015). Recent progress in gas barrier thin film coatings on pet bottles in food and beverage applications. Coatings, 5(4), 987–1001. https://doi.org/10.3390/coatings5040987.
National Research Council. (1994). 3. Manufacturing: Materials and processing. In Polymer science and engineering: The shifting research frontiers (pp. 65–115). Washington, DC: The National Academies Press. https://doi.org/10.17226/2307.
Neuwirth, B. (2011). Marketing channel strategies in rural emerging markets: Unlocking business potential. Chicago, IL: Kellogg School of Management. Retrieved from www.kellogg.northwestern.edu/research/crti/opportunities/~/media/Files/Research/CRTI/Marketing%20Channel%20Strategy%20in%20Rural%20Emerging%20Markets%20Ben%20Neuwirth.ashx.
Nippon.com. (2018). The Kamikatsu zero waste campaign: How a little town achieved a top recycling rate. Retrieved from https://www.nippon.com/en/guide-to-japan/gu900038/the-kamikatsu-zero-waste-campaign-how-a-little-town-achieved-a-top-recycling-rate.html.
Packaging Distributors of America. (2016). The numbers behind boxes and tape. Retrieved from http://www.pdachain.com/2016/11/30/packaging-statistics-that-might-surprise-you/.
Palmer, B. (2014). Why glass jars aren’t necessarily better for the environment than plastic ones. Washington Post, June 23, 2014. Retrieved from https://www.washingtonpost.com/national/health-science/why-glass-jars-arent-necessarily-better-for-the-environment-than-plastic-ones/2014/06/23/2deecfd8-f56f-11e3-a606-946fd632f9f1_story.html?noredirect=on&utm_term=.30ac7c6f77.
Pennella, C.R. (2006). Managing contract quality requirements. Milwaukee, WI: American Society for Quality, Quality Press Publishing.
Ragnarsson, J. O., & Labuza, T. P. (1977). Accelerated shelf life testing for oxidative rancidity in foods—A review. Food Chem., 2(4), 291–308. https://doi.org/10.1016/0308-8146(77)90047-4.
Rogers, C. E. (1985) Permeation of gases and vapours in polymers. In J. Comyn (Ed.), Polymer permeability (pp. 11–73). London, UK: Chapman and Hall.
Sajilata, M. G., Savitha, K., Singhal, R. S., & Kanetkar, V. R. (2007). Scalping of flavors in packaged foods. Comprehensive Rev. Food Sci. Food Saf., 6(1), 17–35. doi.org/10.1111/j.1541-4337.2007.00014.x.
Sanchez, I. C., & Rogers, P. A (1990). Solubility of gases in polymers. Pure & Appl. Chem., 62(11), 2107–2114.
Sigma-Aldrich Inc. (2019). Thermal transitions of homopolymers: Glass transition & melting point. Retrieved from https://www.sigmaaldrich.com/technical-documents/articles/materials-science/polymer-science/thermal-transitions-of-homopolymers.html.
Silvers, K. W., Schneider, M. D., Bobrov, S. B., & Evins, S. E. (2012). PET containers with enhanced thermal properties and process for making same. U.S. Utility Patent No. US9023446B2. Retrieved from https://patents.google.com/patent/US9023446B2/en?oq=US9023446.
Suloff, E. C. (2002). Chapter 4. Permeability, diffusivity, and solubility of gas and solute through polymers. In Sorption behavior of an aliphatic series of aldehydes in the presence of poly(ethylene terephthalate) blends containing aldehyde scavenging agents. PhD diss. Blacksburg, VA: Virginia Polytechnic Institute and State University, Department of Food Science and Technology. http://hdl.handle.net/10919/29917.
Thermofisher Inc. (2019). Physical properties table. Brochure D20823. Retrieved from http://tools.thermofisher.com/content/sfs/brochures/D20826.pdf.
USFDA. (2019). Packaging & food contact substances (FCS). United States Food and Drug Administration. Washington, DC: USFDA. Retrieved from https://www.fda.gov/food/food-ingredients-packaging/packaging-food-contact-substances-fcs.
Wu, Y. W. (2016). Investigation of corrosion in canned chicken noodle soup using selected ion flow tube-mass spectrometry (SIFT-MS). MS thesis. Columbus, OH: The Ohio State University, Department of Food Science and Technology. Retrieved from etd.ohiolink.edu/apexprod/rws_olink/r/1501/6.
Zweep, C. (2018). Determining product shelf life. Food Qual. Saf., October-November 2018. Retrieved from https://www.foodqualityandsafety.com/article/determining-product-shelf-life/.
|
textbooks/eng/Biological_Engineering/Introduction_to_Biosystems_Engineering_(Holden_et_al.)/06%3A_Processing_Systems/6.05%3A_Packaging.txt
|
$n_{i,k}^{(F)}$ = mass or molar flow rate of species $i$ entering separator $k$ in feed stream (mass time-1 or mol time-1)
$n_{i,k}^{(1)}$ = mass or molar flow rate of species $i$ leaving separator $k$ in first product stream (mass time-1 or mol time-1)
$n_{i,k}^{(2)}$ = mass or molar flow rate of species $i$ leaving separator $k$ in second product stream (mass time-1 or mol time-1)
${\rm SF}_{i,k}$ = split fraction for species $i$ in separator $k$ (dimensionless)
${\rm SR}_{i,k}$ = split ratio for species $i$ in separator $k$ (dimensionless)
$\tag{1.1} {\rm SF}_{i,k} = n_{i,k}^{(1)} / n_{i,k}^{(F)}$
$\tag{1.2} {\rm SR}_{i,k} = n_{i,k}^{(1)}/n_{i,k}^{(2)} = {\rm SF}_{i,k} / (1-{\rm SF}_{i,k})$
Example
For the process below, compute the split fraction and the split ratio for $i C_5 H_{12}$. Designate the overhead stream as product stream 1.
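The process figure for this example is not reproduced here, so the sketch below simply applies Equations 1.1 and 1.2 to hypothetical component flow rates; the numbers are placeholders, not the values from the original figure.

```python
# Split fraction (Eq. 1.1) and split ratio (Eq. 1.2) for one species
# in one separator. Flow rates below are hypothetical placeholders.
n_feed = 100.0      # mol/h of iC5H12 entering the separator
n_overhead = 80.0   # mol/h of iC5H12 leaving in product stream 1 (overhead)
n_bottoms = n_feed - n_overhead   # mol/h in product stream 2, by balance

SF = n_overhead / n_feed          # Eq. 1.1
SR = n_overhead / n_bottoms       # Eq. 1.2
print(f"SF = {SF:.2f}, SR = {SR:.2f}")
assert abs(SR - SF / (1 - SF)) < 1e-12   # the two Eq. 1.2 forms agree
```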
1.02: Mass Transfer in Gas-liquid Systems
In chemical separations, we can use thermodynamic models to predict the composition in each phase at equilibrium. For example, Raoult’s Law describes the compositions of vapor and liquid phases at equilibrium. Mass transfer models help us understand how we can manipulate the process to reach equilibrium in a faster or more economical manner.
This chapter will briefly review fundamentals of mass transfer in gas-liquid systems because many of the separation processes in this handbook involve the movement of species between gas and liquid phases.
Diffusion-based Mass Transfer
$J_{Az}$ = molar flux of $A$ relative to the molar-average velocity of the mixture in the $z$ direction
$D_{AB}$ = mutual diffusion coefficient of $A$ in $B$ ($=D_{BA}$)
$c_A$ = molar concentration of $A$
$N_A$= molar flux of species $A$
$N$ = total molar flux
$x_A$= mole fraction of species $A$
$c$ = total molar concentration
$\tag{1.1} J_{Az}= -D_{AB}\,\frac{dc_A}{dz} \label{1.1}$
$\tag{1.2} N_A= x_A N - cD_{AB}\, \frac{dx_A}{dz} \label{1.2}$
Falling Liquid Film with Gaseous Solute A Diffusing into Liquid B
$\Gamma$ = liquid flow rate per unit width of film $=\rho \bar{u}_y r_H$ (mass length$^{-1}$ time$^{-1}$)
$\delta$ = film thickness (length)
$\eta$ = metric used in selecting the appropriate equation for calculating $k_c$
$\mu$ = dynamic viscosity of the liquid (pressure time)
$\rho$ = density of the liquid (mass volume-1)
$A$ = available area for mass transfer $= LW$ (length$^2$)
$c_A$ = concentration of species $A$ in liquid $B$ (mol volume-1)
$c_{Ai}$ = concentration of species $A$ in liquid $B$ at the gas/liquid interface (mol volume$^{-1}$)
$c_{A0}$ = concentration of species $A$ in liquid $B$ when the liquid enters the film (mol volume$^{-1}$)
$\Delta c_A$ = log mean concentration difference driving force (mol volume$^{-1}$)
$\overline {c}_{A}$ = bulk concentration of species $A$ in liquid $B$ at any position $y$ (mol volume$^{-1}$)
$\overline {c}_{AL}$ = bulk concentration of species $A$ in liquid $B$ at any position $y = L$ (mol volume$^{-1}$)
$D_{AB}$ = diffusivity of solution $A$ in liquid $B$ (length$^2$ time$^{-1}$)
$g$ = acceleration due to gravity (length time$^{-2}$)
$H_A$ = Henry’s Law constant for solute $A$ in our liquid at our system temperature (volume pressure mol$^{-1}$)
$k_{c, \rm ave}$ = average mass transfer coefficient (length time$^{-1}$)
$L$ = film height (length)
$n_A$ = molar flow rate of species $A$ (mol time$^{-1}$)
$N_{\rm Re}$ = Reynolds number (dimensionless)
$N_{\rm Sc}$ = Schmidt number (dimensionless)
$N_{\rm Pe,\,M}$ = Peclet number for mass transfer (dimensionless)
$P_A$ = partial pressure of species $A$ in the gas phase (pressure)
$r_H$ = flow cross section per wetted perimeter (length)
$\bar{u}_y$ = bulk velocity of the falling film in the $y$ direction (length time$^{-1}$)
$W$ = film width (length)
Reynolds number for a falling film
$\tag{1.3} N_{\rm Re} = \frac{\rho\bar{u}_y4r_H}{\mu} = \frac{4\Gamma}{\mu}$
Rate
$n_{A} = \bar{u}_y{\delta}W(\bar{c}_{AL}-c_{A0}) \tag{1.4}$
$n_{A} = k_cA{\Delta}c_A \tag{1.5}$
Film thickness
$\delta = \left(\frac{3\overline {u}_y\mu}{\rho g}\right)^{1/2} = \left(\frac{3\mu\Gamma}{\rho^{2}g}\right)^{1/3} \tag{1.6}$
Hydraulic radius for a falling film (flow cross section: $\delta W$; wetted perimeter: $W$)
$\tag{1.7}{r_H} = \frac{{\delta}W}{W} ={\delta}$
Schmidt number
$\tag{1.8} N_{\rm Sc} = \dfrac{\mu}{\rho D_{AB}}$
Peclet number for mass transfer
$\tag{1.9} N_{\rm Pe, M} = N_{\rm Re}N_{\rm Sc} = \dfrac{4 \delta \overline{u}_y}{D_{AB}} = \dfrac{4 \Gamma}{\rho D_{AB}}$
Mass transfer coefficient $k_{c,{\rm ave}}$
$\eta = \dfrac{2D_{AB}L}{3 \delta^2 \overline{u}_y} = \dfrac{8/3}{N_{\rm Re}N_{\rm Sc}(\delta / L)} = \dfrac{8/3}{(\delta / L)N_{\rm Pe,M}} \tag{1.10}$
if $\eta$ < 0.001,
$k_{c,{\rm ave}}=\left(\frac{6D_{AB}{\Gamma}}{{\pi}{\delta}{\rho}L}\right)^{1/2} \tag{1.11}$
if $\eta$ > 0.1,
$k_{c, \rm ave} = \left( \overline{u}_y \dfrac{\delta}{L} \right) (0.241 + 5.1213 \eta) \tag{1.12}$
if $\eta$ > 1,
$k_{c,{\rm ave}}=3.414\frac{D_{AB}}{\delta} \tag{1.13}$
Henry’s Law, applies at the interface
$\tag{1.14}c_A = \frac{P_A}{H_A}$
Log mean concentration difference
$\Delta c_A = (c_{A_i} - \bar{c}_A)_{\rm LM} = \dfrac{(c_{A_i} - c_{A_0}) - (c_{A_i} - \bar{c}_{A_L})}{\ln [(c_{A_i} - c_{A_0}) / (c_{A_i} - \bar{c}_{A_L})]} \tag{1.15}$
Example
10 mL/s of water at 25$^\circ$C flows down a wall that is 1.0 m wide and 3.0 m high. This film is in contact with pure CO2 at 1.0 atm, 25$^\circ$C. Find the rate of absorption of CO2 into the water (kmol/s).
• $\rho_{\rm H_2O}$ = 998 kg/m$^3$
• $\mu_{\rm H_2O}$ = $8.90\times10^{-4}$ Pa s
• $D_{\rm CO_2,H_2O}$ = $1.96\times10^{-9}$ m$^2$/s
• $H_{\rm CO_2,H_2O}$ = 29.41 L atm mol$^{-1}$
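A numerical sketch of this example follows. The final step combines Equations 1.4, 1.5, and 1.15 (with $c_{A0} = 0$) into an exponential approach to saturation:

```python
import math

# Falling-film absorption of CO2 into water (sketch of the example above).
rho, mu = 998.0, 8.90e-4          # kg/m3, Pa.s
D_AB = 1.96e-9                    # m2/s
H = 29.41                         # L.atm/mol, Henry's law constant
Q = 10e-6                         # m3/s of water
W, L = 1.0, 3.0                   # film width and height, m
g = 9.81

Gamma = rho * Q / W                              # kg/(m.s)
N_Re = 4 * Gamma / mu                            # Eq. 1.3 -> ~45, laminar
delta = (3 * mu * Gamma / (rho**2 * g))**(1/3)   # Eq. 1.6, film thickness
u_bar = Gamma / (rho * delta)                    # bulk film velocity
N_Sc = mu / (rho * D_AB)                         # Eq. 1.8
eta = (8/3) / (N_Re * N_Sc * (delta / L))        # Eq. 1.10 -> ~2.8

k_c = 3.414 * D_AB / delta        # eta > 1, so Eq. 1.13 applies; m/s
c_Ai = (1.0 / H) * 1000.0         # Eq. 1.14 at 1 atm CO2; mol/m3

# Combining Eqs. 1.4, 1.5, and 1.15 with c_A0 = 0:
c_AL = c_Ai * (1 - math.exp(-k_c * L * W / (u_bar * delta * W)))
n_A = Q * c_AL                    # Eq. 1.4, mol/s
print(f"eta = {eta:.2f}, k_c = {k_c:.2e} m/s")
print(f"absorption rate = {n_A*1e-3:.2e} kmol/s")   # ~3.4e-7 kmol/s
```

Because $\eta$ is large here, the water leaves essentially saturated with CO2, and the absorption rate is simply the water rate times the saturation concentration.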
Velocity and Concentration Boundary Layers for Laminar Horizontal Flow Across a Flat Plate
$\delta$ = thickness (height) of the velocity boundary layer (length)
$\delta_c$ = thickness (height) of the concentration boundary layer (length)
$\mu$ = fluid viscosity (pressure time) or (mass length-1 time-1)
$\rho$ = fluid density (mass volume-1)
$c_A$ = concentration of species A in fluid B at some point ($x$, $z$) (mol volume-1)
$c_{A0}$ = initial concentration of species A in fluid B (mol volume-1)
$c_{Ai}$ = concentration of species A in fluid B at the interface (mol volume-1)
$D_{AB}$ = diffusivity of solute A in fluid B (length2 time-1)
$N_{\rm Re,x}$ = Reynolds number (dimensionless)
$N_{\rm Sc}$ = Schmidt number (dimensionless)
$u_{0}$ = free-stream velocity of the fluid in the $x$ direction (length time-1)
$u_{x}$ = fluid velocity in the x direction at some point ($x$, $z$) (length time-1)
$x$ = horizontal distance from the leading edge of the flat plate (length)
$z$ = vertical distance from the plate surface (length)
$\tag{2.1} \dfrac{\delta}{x} = \dfrac{4.96}{N_{\rm Re, x}^{0.5}}$
$\tag{2.2}N_{\rm Re,x}=\frac{xu_0{\rho}}{\mu}$
$\frac{u_x}{u_0}=1.5\left(\frac{z}{\delta}\right)-0.5\left(\frac{z}{\delta}\right)^{3} \tag{2.3}$
$\delta_c = \dfrac{\delta}{N_{\rm Sc}^{1/3}}\tag{2.4}$
$\tag{2.5} N_{\rm Sc} = \dfrac{\mu}{\rho D_{AB}}$
$\tag{2.6} \frac{(c_{A_i}-c_A)}{(c_{A_i}-c_{A_0})}=1.5\left(\frac{z}{\delta_c}\right)-0.5{\left(\frac{z}{\delta_c}\right)^{3}}$
Example
Air at 100$^\circ$C and 1.0 atm with a free-stream velocity of 5.0 m/s flows over a 3.0 m long flat plate made of naphthalene.
(a) At what horizontal position does flow become turbulent?
(b) What is the thickness of the velocity boundary layer at that point?
(c) What is the thickness of the concentration boundary layer at that point?
(d) What is the concentration of naphthalene in the air at this horizontal position and vertical position that is half of the height of the concentration boundary layer?
• $\mu_{\rm air}$ = $2.15\times10^{-5}$ kg m$^{-1}$ s$^{-1}$
• $\rho_{\rm air}$ = 0.0327 kmol m$^{-3}$
• $D_{AB}$ = $0.94\times10^{-5}$ m$^2$ s$^{-1}$
• $c_{Ai}$ = $4.3\times10^{-3}$ kmol m$^{-3}$
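A sketch of parts (a)–(d) follows. Three assumptions beyond the problem statement: transition at $N_{\rm Re,x} \approx 5\times10^5$ (a common rule of thumb), an air molar mass of ~29 kg/kmol to convert the given molar density to a mass density, and solute-free incoming air ($c_{A0} = 0$).

```python
import math

# Flat-plate boundary layers for the naphthalene example (Eqs. 2.1-2.6).
mu = 2.15e-5                   # kg/(m s)
rho = 0.0327 * 29.0            # kg/m3, from the molar density (assumed M)
D_AB = 0.94e-5                 # m2/s
u0 = 5.0                       # m/s free-stream velocity
c_Ai = 4.3e-3                  # kmol/m3 at the surface
Re_trans = 5.0e5               # assumed transition Reynolds number

x = Re_trans * mu / (rho * u0)            # (a) transition location, Eq. 2.2
delta = 4.96 * x / math.sqrt(Re_trans)    # (b) velocity BL, Eq. 2.1
N_Sc = mu / (rho * D_AB)                  # Eq. 2.5
delta_c = delta / N_Sc ** (1/3)           # (c) concentration BL, Eq. 2.4
zeta = 0.5                                # (d) z = delta_c / 2
c_A = c_Ai * (1 - (1.5 * zeta - 0.5 * zeta**3))   # Eq. 2.6 with c_A0 = 0
print(f"x = {x:.2f} m, delta = {delta*100:.2f} cm, "
      f"delta_c = {delta_c*100:.2f} cm")
print(f"c_A = {c_A:.2e} kmol/m3")         # ~1.3e-3 kmol/m3
```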
Film Theory and Overall Mass Transfer Coefficients
• $\delta$ = thickness of the film in which $c_{A} \neq c_{Ab}$ (length)
• $c$ = total concentration of liquid B (mol volume-1)
• $c_{Ai}$ = concentration of species A in liquid B at the interface (mol volume-1)
• $c_{Ab}$ = bulk concentration of species A in liquid B (mol volume-1)
• $c^*_{A}$ = concentration of species A in liquid B at equilibrium with the bulk gas phase (mol volume-1)
• $D_{AB}$ = diffusivity of species A in liquid B (area time-1)
• $H’_{A}$ = Henry’s Law constant for equation of the form $c_{A}=H’_{A}P_{A}$ (mol volume-1 pressure-1)
• $k_{c}$ = liquid-phase mass transfer coefficient, with respect to concentration driving force (length time-1)
• $K_{G}$ = overall mass transfer coefficient, with respect to pressure driving force (mol time-1 area-1 pressure-1)
• $K_{L}$ = overall mass transfer coefficient, with respect to concentration driving force (length time-1)
• $k_{p}$ = gas-phase mass transfer coefficient, pressure driving force (mol time-1 area-1 pressure-1)
• $N_{A}$ = molar flux (mol area-1 time-1)
• $P_{Ab}=p_{Ab}$ = partial pressure of species A in the bulk gaseous phase (pressure)
• $P^*_A$ = partial pressure of species A in a gas at equilibrium with the bulk liquid phase (pressure)
• $x_{Ai}$ = mole fraction of species A in liquid B at the interface
• $x_{Ab}$ = bulk mole fraction of species A in liquid B
Film Theory
$N_A = \left(\frac{D_{AB}}{\delta}\right)(c_{Ai}-c_{Ab}) = \left(\frac{cD_{AB}}{\delta}\right)(x_{Ai}-x_{Ab}) \tag{3.1}$
Example
SO2 is absorbed from air into water using a packed absorption tower. At a specific location in the tower, we know that the pressure of SO2 is 0.15 atm. Measurements of the gas composition above and below this location in the tower have told us that the flux of SO2 into the water is 0.0270 kmol SO2 m$^{-2}$ hr$^{-1}$. We were able to sample the bulk liquid phase at this location and found that it contained $3.0\times10^{-4}$ mol SO2/mol. Assuming that this system fits film theory, find the thickness of the film.
• $D_{\rm SO_2,H_2O}$ = $1.7\times10^{-9}$ m$^2$ s$^{-1}$
• $H_{\rm SO_2,H_2O}$ = 1.2 L atm mol$^{-1}$
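A short numerical sketch of this example follows, assuming the dilute liquid has essentially the molar concentration of water, $c \approx 55.5$ kmol/m$^3$:

```python
# Film-theory film thickness for the SO2 absorber (Eqs. 1.14 and 3.1).
D = 1.7e-9                     # m2/s
H = 1.2                        # L atm / mol
P_SO2 = 0.15                   # atm
x_Ab = 3.0e-4                  # bulk liquid mole fraction
N_A = 0.0270 / 3600.0          # kmol/(m2 s)
c = 55.5                       # kmol/m3, assumed (pure water)

c_Ai = P_SO2 / H               # mol/L = kmol/m3, interface (Henry's law)
c_Ab = x_Ab * c                # bulk liquid concentration, kmol/m3
delta = D * (c_Ai - c_Ab) / N_A          # Eq. 3.1 rearranged
print(f"c_Ai = {c_Ai:.3f}, c_Ab = {c_Ab:.4f} kmol/m3")
print(f"film thickness = {delta*1e6:.1f} micrometers")   # ~25 um
```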
Two-Film Theory, Overall Mass Transfer Coefficients, Gas/Liquid
$N_A = (P_{Ab}H'_A - c_{Ab}) \left(\dfrac{H'_A}{k_p} + \dfrac{1}{k_c} \right)^{-1} \tag{3.2}$
$\dfrac{1}{K_L} = \dfrac{H'_A}{k_p} + \dfrac{1}{k_c} \tag{3.3}$
$N_A=K_L(c^{*}_A-c_{Ab}) \tag{3.4}$
$N_A = (P_{Ab} - P^*_A) \left(\dfrac{1}{k_p} + \dfrac{1}{H'_Ak_c} \right)^{-1} \tag{3.5}$
$\frac{1}{K_G}=\frac{1}{k_p}+\frac{1}{H’_Ak_c} \tag{3.6}$
$N_A=K_G(P_{Ab}-P^{*}_A) \tag{3.7}$
Example
We intend to use water to absorb SO2 from air. The incoming air is at 50°C and 2.0 atm and contains 0.085 mol SO2/mol. The incoming water, also at 50°C, already contains 0.0010 mol SO2/mol.
1. Which phase is most limiting to mass transfer?
2. What is the expected initial flux value?
Solution
• $k_c$ = 0.18 m hr$^{-1}$
• $k_p$ = 0.040 kmol hr$^{-1}$ m$^{-2}$ kPa$^{-1}$
• $H’_A$ (50°C) = 0.76 mol atm$^{-1}$ L$^{-1}$
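A sketch of both parts follows. Two unit-handling assumptions: $k_p$ is converted to an atm basis (1 atm = 101.325 kPa), and the dilute liquid is given water's molar concentration (~55.5 kmol/m$^3$):

```python
# Two-film resistances and initial flux for SO2 absorption (Eqs. 3.2-3.4).
k_c = 0.18                     # m/hr, liquid-phase coefficient
k_p = 0.040 * 101.325          # kmol/(hr m2 atm), converted from kPa basis
H_A = 0.76                     # kmol/(m3 atm)  (= mol/(L atm))
P_Ab = 0.085 * 2.0             # atm, bulk-gas SO2 partial pressure
c_Ab = 0.0010 * 55.5           # kmol/m3, bulk liquid (assumed c of water)

R_gas = H_A / k_p              # gas-film resistance, hr/m
R_liq = 1.0 / k_c              # liquid-film resistance, hr/m
K_L = 1.0 / (R_gas + R_liq)    # Eq. 3.3
N_A = K_L * (P_Ab * H_A - c_Ab)          # Eqs. 3.2/3.4
print(f"liquid film: {R_liq/(R_gas+R_liq):.1%} of total resistance")
print(f"N_A = {N_A:.4f} kmol/(m2 hr)")   # ~0.013
```

The liquid film carries nearly all of the resistance here, which is typical for a sparingly soluble gas such as SO2 in water.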
|
textbooks/eng/Chemical_Engineering/Chemical_Engineering_Separations%3A_A_Handbook_for_Students_(Lamm_and_Jarboe)/1.01%3A_Performance_Metrics_for_Separation_Processes.txt
|
Staged Liquid-Liquid Extraction and Hunter Nash Method
$E_n$ = extract leaving stage $n$. This could refer to the mass of the stream or the composition of the stream.
$F$ = feed entering extractor stage 1. This could refer to the mass of the stream or the composition of the stream.
$n$ = generic stage number
$N$ = Final stage. This is where the fresh solvent S enters the system and the final raffinate $R_N$ leaves the system.
$M$ = Composition of the mixture representing the overall system. Points ($F$ and $S$) and ($E_1$ and $R_N$) must be connected by a straight line that passes through point $M$. $M$ will be located within the ternary phase diagram.
$P$ = Operating point. $P$ is determined by the intersection of the straight line connecting points ($F$, $E_1$) and the straight line connecting points ($S$, $R_N$). Every pair of passing streams must be connected by a straight line that passes through point $P$. $P$ is expected to be located outside of the ternary phase diagram.
$R_n$ = raffinate leaving stage $n$. This could refer to the mass of the stream or the composition of the stream.
$S$ = solvent entering extractor stage $N$. This could refer to the mass of the stream or the composition of the stream.
$S/F$ = mass ratio of solvent to feed
$(x_i)_n$ = Mass fraction of species $i$ in the raffinate leaving stage $n$
$(y_i)_n$ = Mass fraction of species $i$ in the extract leaving stage $n$
Process schematic for multistage liquid-liquid extraction.
Determining number of stages $N$ when (1) feed rate; (2) feed composition; (3) incoming solvent rate; (4) incoming solvent composition; and (5) outgoing raffinate composition have been specified/selected.
1. Locate points $F$ and $S$ on the ternary phase diagram. Connect with a straight line.
2. Do a material balance to find the composition of one species in the overall mixture. Use this composition to locate point $M$ along the straight line connecting points $F$ and $S$. Note the position of point $M$.
3. Locate point $R_N$ on the ternary phase diagram. It will be on the equilibrium curve. Draw a straight line from $R_N$ to $M$ and extend to find the location of $E_1$ on the equilibrium curve.
4. On a fresh copy of the graph, with plenty of blank space on each side of the diagram, note the location of points $F$, $S$, and $R_N$ (specified/selected) and $E_1$ (determined in step 3).
5. Draw a straight line between $F$ and $E_1$. Extend to both sides of the diagram. Draw a second straight line between $S$ and $R_N$. Note the intersection of these two lines and label as “$P$”.
6. Determine the number of equilibrium stages required to achieve the desired separation with the selected solvent mass.
– Stream $R_N$ is in equilibrium with stream $E_N$. Follow the tie-lines from point $R_N$ to $E_N$.
– Stream $E_N$ passes stream $R_{N-1}$. Connect point $E_N$ to operating point $P$ with a straight line, mark the location of $R_{N-1}$.
– Stream $R_{N-1}$ is in equilibrium with stream $E_{N-1}$. Follow the tie-lines from stream $R_{N-1}$ to $E_{N-1}$.
– Stream $E_{N-1}$ passes stream $R_{N-2}$. Connect $E_{N-1}$ to operating point $P$ with a straight line, mark the location of $R_{N-2}$.
– Continue in this manner until the extract composition has reached or passed $E_{1}$. Count the number of equilibrium stages.
Watch this two-part series of videos from LearnChemE that shows how to use the Hunter Nash method to find the number of equilibrium stages required for a liquid-liquid extraction process.
Example
A feed of 1000 kg/hr contains 30 wt% acetone and 70 wt% water. The solvent is pure MIBK. We intend that the raffinate contain no more than 5.0 wt% acetone. How many stages will be required for each proposed solvent-to-feed ratio in the table below?
| $S/F$ | $S$ (kg/hr) | $(x_A)_M$ | target $(y_A)_1$ | $N$ |
|---|---|---|---|---|
| 1.0 | | | | |
| 2.0 | | | | |
| 0.2 | | | | |
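The $(x_A)_M$ column can be filled by an overall acetone balance before any graphical work. A minimal sketch, assuming the pure-MIBK solvent carries no acetone:

```python
# Mixing-point composition M for each proposed solvent ratio (step 2 of the
# procedure): an overall acetone balance on feed plus solvent.
F, xF = 1000.0, 0.30        # kg/hr feed, acetone mass fraction
yS = 0.0                    # pure MIBK solvent carries no acetone

for SF_ratio in (1.0, 2.0, 0.2):
    S = SF_ratio * F
    xM = (F * xF + S * yS) / (F + S)   # acetone fraction of whole mixture
    print(f"S/F = {SF_ratio}: S = {S:.0f} kg/hr, (x_A)_M = {xM:.3f}")
```

The $N$ column still requires the graphical tie-line construction described above.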
Hunter Nash Method for Finding Smin, Tank Sizing and Power Consumption for Mixer-Settler Units
Staged LLE: Hunter-Nash Method for Finding the Minimum Solvent to Feed Ratio
$E_n$ = extract leaving stage $n$. This could refer to the mass of the stream or the composition of the stream.
$F$ = feed entering extractor stage 1. This could refer to the mass of the stream or the composition of the stream.
$n$ = generic stage number
$N$ = Final stage. This is where the fresh solvent $S$ enters the system and the final raffinate $R_N$ leaves the system.
$M$ = Composition of the overall mixture. Points ($F$ and $S$) and ($E_1$ and $R_N$) are connected by a straight line passing through $M$.
$P$ = Operating point. Every pair of passing streams must be connected by a straight line that passes through $P$.
$R_n$ = raffinate leaving stage $n$. This could refer to the mass of the stream or the composition of the stream.
$S$ = solvent entering extractor stage $N$. This could refer to the mass of the stream or the composition of the stream.
$S/F$ = mass ratio of solvent to feed
$S_{\rm min}/F$ = Minimum feasible mass ratio to achieve the desired separation, assuming the use of an infinite number of stages.
$(x_i)_n$ = Mass fraction of species $i$ in the raffinate leaving stage $n$
$(y_i)_n$ = Mass fraction of species $i$ in the extract leaving stage $n$
$P_{\rm min}$ = Point associated with the minimum feasible $S/F$ for this feed, solvent and (raffinate or extract) composition. $P_{\rm min}$ is the intersection of the line connecting points ($R_N$, $S$) and the line that is an extension of the upper-most equilibrium tie-line.
Determining minimum feasible solvent mass ratio ($S_{\rm min}/F$) when (1) feed composition; (2) incoming solvent composition; and (3) outgoing raffinate composition have been specified/selected.
1. Locate points $S$ and $R_N$ on the phase diagram. Connect with a straight line.
2. Extend the upper-most tie-line until it intersects the straight line connecting points $S$ and $R_N$. Label the intersection $P_{\rm min}$.
3. Find point $F$ on the diagram. Draw a line from $P_{\rm min}$ to F and extend to the other side of the equilibrium curve. Label $E_1$@$S_{\rm min}$.
4. On a fresh copy of the phase diagram, label points $F$, $S$, $R_N$, and $E_1$@$S_{\rm min}$. Draw one line connecting points $S$ and $F$ and another line connecting points $E_1$@$S_{\rm min}$ and $R_N$. The intersection of these two lines is mixing point $M$. Note the composition of species $i$ at this location.
5. Calculate
$\dfrac{S_{\rm min}}{F} = \dfrac{(x_i)_F - (x_i)_M}{(x_i)_M - (x_i)_S} \tag{5.1}$
Example
We have a 1000 kg/hr feed that contains 30 wt% acetone and 70 wt% water. We want our raffinate to contain no more than 5.0 wt% acetone. What is the minimum mass of pure MIBK required?
Liquid-Liquid Extraction: Sizing Mixer-settler Units
$\Phi_C$ = volume fraction occupied by the continuous phase
$\Phi_D$ = volume fraction occupied by the dispersed phase
$\mu_C$ = viscosity of the continuous phase (mass time-1 length-1)
$\mu_D$ = viscosity of the dispersed phase (mass time-1 length-1)
$\mu_M$ = viscosity of the mixture (mass time-1 length-1)
$\rho_C$ = density of the continuous phase (mass volume-1)
$\rho_D$ = density of the dispersed phase (mass volume-1)
$\rho_M$ = average density of the mixture (mass volume-1)
$D_i$ = impeller diameter (length)
$D_T$ = vessel diameter (length)
$H$ = total height of mixer unit (length)
$N$ = rate of impeller rotation (time-1)
$N_{\rm Po}$ = impeller power number, read from a power-number chart (e.g., Fig 8-36 or Perry's 15-54) based on the value of $N_{\rm Re}$ (unitless)
$(N_{\rm Re})_C$ = Reynolds number in the continuous phase = inertial force/viscous force (unitless)
$P$ = agitator power (energy time-1)
$Q_C$ = volumetric flowrate, continuous phase (volume time-1)
$Q_D$ = volumetric flowrate, dispersed phase (volume time-1)
$V$ = vessel volume (volume)
Tank and impeller sizing
$\rm residence time = \dfrac{V}{Q_C + Q_D} \tag{5.2}$
Geometry of a cylinder
$V = \dfrac{\pi D_T^2H}{4} \tag{5.3}$
General guidelines
$\dfrac{H}{D_T} = 1 \tag{5.4}$
$\dfrac{D_i}{D_T} = \dfrac{1}{3} \tag{5.5}$
Impeller power consumption:
$P=N_{\rm Po}N^3D_i^5{\rho}_M \tag{5.6}$
$N_{Re}=\frac{D_i^2N{\rho}_M}{{\mu}_M} \tag{5.7}$
${\rho}_M={\rho}_C{\Phi}_C+{\rho}_D{\Phi}_D \tag{5.8}$
${\mu}_M=\frac{{\mu}_C}{{\Phi}_C}\left[1+\frac{1.5{\mu}_D{\Phi}_D}{{\mu}_C+{\mu}_D}\right] \tag{5.9}$
Modeling Mass Transfer in Mixer-Settler Units
$\Delta\rho$ = density difference (absolute value) between the continuous and dispersed phases (mass volume-1)
$\phi_C$ = volume fraction occupied by the continuous phase
$\phi_D$ = volume fraction occupied by the dispersed phase
$\mu_C$ = viscosity of the continuous phase (mass time-1 length-1)
$\mu_D$ = viscosity of the dispersed phase (mass time-1 length-1)
$\mu_M$ = viscosity of the mixture (mass time-1 length-1)
$\rho_C$ = density of the continuous phase (mass volume-1)
$\rho_D$ = density of the dispersed phase (mass volume-1)
$\rho_M$ = average density of the mixture (mass volume-1)
$\sigma$ = interfacial tension between the continuous and dispersed phases (mass time-2)
$a$ = interfacial area between the two phases per unit volume (area volume-1)
$c_{D,\rm in}$, $c_{D,\rm out}$ = concentration of solute in the incoming or outgoing dispersed streams (mass volume-1)
$c^*_D$ = concentration of solute in the dispersed phase if in equilibrium with the outgoing continuous phase (mass volume-1)
$D_C$ = diffusivity of the solute in the continuous phase (area time-1)
$D_D$ = diffusivity of the solute in the dispersed phase (area time-1)
$D_i$ = impeller diameter (length)
$D_T$ = vessel diameter (length)
$d_{vs}$ = Sauter mean droplet diameter; actual drop size expected to range from $0.3d_{vs}-3.0d_{vs}$ (length)
$E_{MD}$ = Murphree dispersed-phase efficiency for extraction
$g$ = acceleration due to gravity (length time-2)
$H$ = total height of mixer unit (length)
$k_c$ = mass transfer coefficient of the solute in the continuous phase (length time-1)
$k_D$ = mass transfer coefficient of the solute in the dispersed phase (length time-1)
$K_{OD}$ = overall mass transfer coefficient, given on the basis of the dispersed phase (length time-1)
$m$ = distribution coefficient of the solute, $\Delta c_C/\Delta c_D$ (unitless)
$N$ = rate of impeller rotation (time-1)
$(N_{\rm Eo})_C$ = Eotvos number = gravitational force/surface tension force (unitless)
$(N_{\rm Fr})_C$ = Froude number in the continuous phase = inertial force/gravitational force (unitless)
$N_{\rm min}$ = minimum impeller rotation rate required for complete dispersion of one liquid into another
$(N_{\rm Re})_C$ = Reynolds number in the continuous phase = inertial force/viscous force (unitless)
$(N_{\rm Sh})_C$ = Sherwood number in the continuous phase = mass transfer rate/diffusion rate (unitless)
$(N_{\rm Sc})_C$ = Schmidt number in the continuous phase = momentum/mass diffusivity (unitless)
$(N_{\rm We})_C$ = Weber number = inertial force/surface tension (unitless)
$Q_D$ = volumetric flowrate of the dispersed phase (volume time-1)
$V$ = vessel volume (volume)
Calculating $N_{\rm min}$
$\dfrac{N_{\rm min}^2 \rho_M D_i}{g \Delta \rho} = 1.03 \left(\dfrac{D_T}{D_i}\right)^{2.76} (\phi_D)^{0.106} \left(\dfrac{\mu_M^2 \sigma}{D_i^5 \rho_M g^2 (\Delta \rho)^2} \right)^{0.084} \tag{6.1}$
${\rho}_M={\rho}_C{\phi}_C+{\rho}_D{\phi}_D \tag{6.2}$
${\mu}_M=\frac{{\mu}_C}{{\phi}_C}\left(1+\frac{1.5{\mu}_D{\phi}_D}{{\mu}_C+{\mu}_D}\right) \tag{6.3}$
Estimating Murphree efficiency for a proposed design
Sauter mean diameter
${\rm if}\;\; N_{\rm We} < 10,000,\; d_{vs}=0.052D_i(N_{\rm We})^{-0.6}\exp({4{\phi}_D}) \tag{6.4}$
${\rm if}\;\; N_{\rm We} >10,000,\; d_{vs}=0.39D_i(N_{\rm We})^{-0.6} \tag{6.5}$
$N_{\rm We}=\frac{D_i^3N^2{\rho}_C}{\sigma} \tag{6.6}$
mass transfer coefficient of the solute in each phase
$k_D=\frac{6.6D_D}{d_{vs}} \tag{6.7}$
$k_C=\frac{(N_{\rm Sh})_CD_c}{d_{vs}} \tag{6.8}$
$(N_{\rm Sh})_C = 1.237 \times 10^{-5} (N_{\rm Sc})_C^{1/3} (N_{\rm Re})_C^{2/3} (\phi_D)^{-1/2} (N_{\rm Fr})_C^{5/12} \left( \dfrac{D_i}{d_{vs}} \right)^2 \left( \dfrac{d_{vs}}{D_T} \right)^{1/2} (N_{\rm Eo})_C^{5/4} \tag{6.9}$
$(N_{\rm Sc})_C=\frac{{\mu}_C}{{\rho}_CD_C} \tag{6.10}$
$(N_{\rm Re})_C=\frac{D_i^2N{\rho}_C}{{\mu}_C} \tag{6.11}$
$(N_{\rm Fr})_C = \dfrac{D_i N^2}{g} \tag{6.12}$
$(N_{Eo})_C = \dfrac{\rho_D d_{vs}^2 g}{\sigma} \tag{6.13}$
Overall mass transfer coefficient for the solute
$\frac{1}{K_{OD}}=\frac{1}{k_D}+\frac{1}{mk_C} \tag{6.14}$
Murphree efficiency
$E_{MD}=\frac{K_{OD}aV}{Q_D}\left(1+{\frac{K_{OD}aV}{Q_D}}\right)^{-1} \tag{6.15}$
$a=\frac{6\phi_D}{d_{vs}} \tag{6.16}$
Experimental assessment of efficiency
$E_{MD}=\frac{c_{D,\rm in}-c_{D,\rm out}}{c_{D,\rm in}-c^*_D} \tag{6.17}$
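Once $K_{OD}$ and $a$ are in hand, Equations 6.14–6.16 chain together in a few lines. The sketch below uses hypothetical placeholder values for every input; they are not the values of the example that follows:

```python
# Murphree dispersed-phase efficiency from the overall coefficient
# (Eqs. 6.14-6.16). All inputs are hypothetical placeholders.
k_D, k_C = 2.0e-4, 1.0e-4   # m/s, dispersed/continuous coefficients
m = 2.0                     # distribution coefficient
d_vs = 5.0e-4               # m, Sauter mean drop diameter
phi_D = 0.45                # dispersed-phase volume fraction
V = 0.19                    # m3, vessel volume
Q_D = 3.5e-4                # m3/s, dispersed-phase flow rate

K_OD = 1.0 / (1.0/k_D + 1.0/(m*k_C))   # Eq. 6.14
a = 6.0 * phi_D / d_vs                 # Eq. 6.16, interfacial area/volume
NTU = K_OD * a * V / Q_D
E_MD = NTU / (1.0 + NTU)               # Eq. 6.15
print(f"K_OD = {K_OD:.2e} m/s, a = {a:.0f} 1/m, E_MD = {E_MD:.3f}")
```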
Example
1000 kg/hr of 30 wt% acetone and 70 wt% water is to be extracted with 1000 kg/hr of pure MIBK. Assume that the extract is the continuous phase, a residence time of 5 minutes in the mixing vessel, and standard sizing of the mixing vessel and impeller. Find the power consumption and Murphree efficiency if the system operates at $N_{\rm min}$, controlled at 1 rev/s. Ignore the contribution of the solute and the co-solvent to the physical properties of each phase.
• MIBK
• density = 802 kg m$^{-3}$
• viscosity = 0.58 cP
• diffusivity with acetone at 25°C = $2.90\times10^{-9}$ m$^2$ s$^{-1}$
• Water
• density = 1000 kg m$^{-3}$
• viscosity = 0.895 cP
• diffusivity with acetone at 25°C = $1.16\times10^{-9}$ m$^2$ s$^{-1}$
• The interfacial tension of water and MIBK at 25°C = 0.0157 kg s$^{-2}$. Use the ternary phase diagram to find $m$.
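For the power-consumption part of this example, Equations 5.2–5.9 can be chained numerically. In this sketch the impeller speed is taken as the stated 1 rev/s, and $N_{\rm Po}$ = 5.0 is a placeholder that must really be read from a power-number chart at the computed $N_{\rm Re}$:

```python
import math

# Mixer sizing and power sketch (Eqs. 5.2-5.9), MIBK continuous phase.
Q_C = (1000.0 / 802.0) / 3600.0    # m3/s MIBK (continuous, extract)
Q_D = (1000.0 / 1000.0) / 3600.0   # m3/s water (dispersed)
tau = 5.0 * 60.0                   # s, residence time

V = tau * (Q_C + Q_D)              # Eq. 5.2 rearranged, ~0.19 m3
D_T = (4.0 * V / math.pi) ** (1/3) # Eqs. 5.3-5.4 with H = D_T
D_i = D_T / 3.0                    # Eq. 5.5

phi_C = Q_C / (Q_C + Q_D); phi_D = 1.0 - phi_C
rho_C, rho_D = 802.0, 1000.0
mu_C, mu_D = 0.58e-3, 0.895e-3
rho_M = rho_C * phi_C + rho_D * phi_D                              # Eq. 5.8
mu_M = (mu_C / phi_C) * (1 + 1.5 * mu_D * phi_D / (mu_C + mu_D))   # Eq. 5.9

N = 1.0                            # rev/s, as stated
N_Re = D_i**2 * N * rho_M / mu_M   # Eq. 5.7
N_Po = 5.0                         # placeholder; read from chart at N_Re
P = N_Po * N**3 * D_i**5 * rho_M   # Eq. 5.6, W
print(f"V = {V:.3f} m3, D_T = {D_T:.2f} m, N_Re = {N_Re:.0f}, P = {P:.1f} W")
```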
Liquid-Liquid Extraction Columns
$\Delta \rho$ = density difference (absolute value) between the continuous and dispersed phases (mass volume-1)
$\mu_C$ = viscosity of the continuous phase (mass time-1 length-1)
$\rho_C$ = density of the continuous phase (mass volume-1)
$\rho_D$ = density of the dispersed phase (mass volume-1)
$\sigma$ = interfacial tension between the continuous and dispersed phases (mass time-2)
$D_T$ = column diameter (length)
$H$ = total height of column (length)
${\rm HETS}$ = height of equilibrium transfer stage (length)
$m^*_C$ = mass flowrate of the entering continuous phase (mass time-1)
$m^*_D$ = mass flowrate of the entering dispersed phase (mass time-1)
$N$ = required number of equilibrium stages
$u_0$ = characteristic rise velocity of a droplet of the dispersed phase (length time-1)
$U_i$ = superficial velocity of phase $i$ (C = continuous, downward; D = dispersed, upward) (length time-1)
$V^*_i$ = volumetric flowrate of phase $i$ (volume time-1)
$U_i = \dfrac{4V_i^*}{\pi D_T^2} \tag{7.1}$
definition of superficial velocity
$\dfrac{U_D}{U_C} = \dfrac{m_D^*}{m_C^*} \left( \dfrac{\rho_C}{\rho_D} \right) \tag{7.2}$
$(U_D + U_C)_{\rm actual} = 0.50(U_D + U_C)_f \tag{7.3}$
for operation at 50% of flooding
$u_0 = \dfrac{0.01 \sigma \Delta \rho}{\mu_C \rho_C} \tag{7.4}$
for rotating-disk columns, $D_T$ = 8 to 42 inches, with one aqueous phase
$D_T = \left( \dfrac{4m_D^*}{\rho_D U_D \pi} \right)^{0.5} = \left( \dfrac{4m_C^*}{\rho_C U_C \pi} \right)^{0.5} \tag{7.5}$
$H = {\rm HETS} \times N \tag{7.6}$
Example
1000 kg/hr of 30 wt% acetone and 70 wt% water is to be extracted with 1000 kg/hr of pure MIBK in a 2-stage column process. Assume that the extract is the dispersed phase. Ignoring the contribution of the solute and the co-solvent to the physical properties of each phase, find the required column diameter and height.
• MIBK
• density = 802 kg m$^{-3}$
• viscosity = 0.58 cP
• Water
• density = 1000 kg m$^{-3}$
• viscosity = 0.895 cP
• The interfacial tension of water and MIBK at 25°C = 0.0157 kg s$^{-2}$.
|
textbooks/eng/Chemical_Engineering/Chemical_Engineering_Separations%3A_A_Handbook_for_Students_(Lamm_and_Jarboe)/1.03%3A_Liquid-liquid_Extraction.txt
|
Overview of Absorption and Stripping Processes
$\phi_A$ = fraction not absorbed (unitless)
$\phi_S$ = fraction not stripped (unitless)
$A$ = absorption factor (unitless)
$K$ = equilibrium constant, (mole fraction in gas)/(mole fraction in liquid)
$L$ = total liquid flow rate (mol time-1)
$V$ = total gas flow rate (mol time-1)
$S$ = stripping factor (unitless)
$x$ (lowercase) = mol fraction of solute in liquid phase (mol solute/mol total liquid)
$X$ (uppercase) = mol ratio of solute in liquid phase (mol solute/mol solvent)
$y$ (lowercase) = mol fraction of solute in gas phase (mol solute/mol total vapor)
$Y$ (uppercase) = mol ratio of solute in gas phase (mol solute/mol gaseous carrier)
$A=\frac{(L/V)}{K} \tag{8.1}$
$S=\frac{K}{(L/V)}=\frac{1}{A} \tag{8.2}$
Example
K for acetone in an air/water system is 2.0. We intend to absorb 90% of the acetone entering in the gaseous phase into the liquid phase. How many theoretical stages would be required for (L/V) = 1, (L/V) = 2, and (L/V) = 10?
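The working relation connecting $\phi_A$, $A$, and the stage count is not reproduced in the list above; the sketch below assumes the standard Kremser form $\phi_A = (A-1)/(A^{N+1}-1)$ (with the limit $\phi_A = 1/(N+1)$ at $A = 1$):

```python
import math

# Kremser-style stage count for 90% absorption of acetone (K = 2.0).
K = 2.0
phi_A = 0.10      # fraction not absorbed

for LV in (1.0, 2.0, 10.0):
    A = LV / K                                 # Eq. 8.1
    if abs(A - 1.0) < 1e-9:
        N = 1.0 / phi_A - 1.0                  # limiting form at A = 1
    elif A < 1.0 and phi_A < 1.0 - A:
        # for A < 1, even infinite stages leave a fraction 1 - A unabsorbed
        print(f"L/V = {LV}: infeasible; at most {A:.0%} can be absorbed")
        continue
    else:
        N = math.log((A - 1.0) / phi_A + 1.0) / math.log(A) - 1.0
    print(f"L/V = {LV}: A = {A:.2f}, N = {N:.2f} theoretical stages")
```

The $L/V = 1$ case is infeasible because $A = 0.5 < 1$ caps the achievable absorption at 50%, well short of the 90% target.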
Finding mole ratio from mole fraction:
$Y = \dfrac{y}{1-y} \tag{8.3}$
$X = \dfrac{x}{1-x} \tag{8.4}$
Finding mole fraction from mole ratio:
$y = \dfrac{Y}{1+Y} \tag{8.5}$
$x = \dfrac{X}{1+X} \tag{8.6}$
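These four conversions are used constantly when moving between operating lines (mole ratios) and equilibrium data (mole fractions), so small helpers are worth having; a minimal sketch:

```python
# Mole fraction <-> mole ratio conversions (Eqs. 8.3-8.6).
def to_ratio(frac):            # Eqs. 8.3/8.4
    return frac / (1.0 - frac)

def to_fraction(ratio):        # Eqs. 8.5/8.6
    return ratio / (1.0 + ratio)

y = 0.285                      # e.g., 28.5 mol% ammonia in air
Y = to_ratio(y)
print(f"y = {y} -> Y = {Y:.4f} mol solute per mol carrier")
assert abs(to_fraction(Y) - y) < 1e-12   # round trip recovers y
```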
Graphical Method to Find Equilibrium Stages for Absorption Columns
$K$ = equilibrium constant, (mole fraction in gas)/(mole fraction in liquid)
$L$ = liquid flow rate (mol time-1)
$L’$ = liquid flow rate on a solute-free basis (mol time-1)
$N$ = total number of stages. For an absorption column, $N$ is the bottom stage. For a stripping column, $N$ is the top stage.
$n$ = generic stage number
$V$ = gas flow rate (mol total time-1)
$V’$ = gas flow rate on a solute-free basis (mol gaseous carrier time-1)
$x$ = mole fraction of solute in liquid phase (mol solute/mol total liquid)
$X$ = mole ratio of solute in liquid phase (mol solute/mol liquid absorbent)
$X_0$ = mole ratio of solute/absorbent in liquid entering the top of the absorption column
$X_N$ = mole ratio of solute/absorbent in liquid leaving the bottom of the absorption column
$y$ = mole fraction of solute in gas phase (mol solute/mol total gas)
$Y$ = mole ratio of solute in gas phase (mol solute/mol gaseous carrier)
$Y_1$ = mole ratio of solute/gaseous carrier in gas leaving the top of the absorption column
$Y_{N+1}$ = mole ratio of solute/gaseous carrier in gas entering the bottom of the absorption column
Operating line for absorption column:
$Y_{n+1}=X_n\left(\frac{L’}{V’}\right)+Y_1-X_0\left(\frac{L’}{V’}\right) \tag{9.1}$
Watch a video from LearnChemE that demonstrates how to determine the number of equilibrium stages required for an absorption column when given inlet and outlet specifications: Absorption of a Dilute Species (10:43): https://youtu.be/BoPKngZZwVI
Example
The following equilibrium data is available for water, ammonia, and air at 20°C.
We intend to use fresh water to absorb the ammonia from a stream of air containing 28.5 mol% ammonia, with both streams at 20°C. We intend to absorb 80% of the incoming ammonia. How many equilibrium stages are required for each of the following design cases?
(a) $L’/V’$ = 1.0
(b) $L’/V’$ = 1.6
(c) $L’/V’$= 2.0
Determining the Minimum Ratio for Absorbent to Gas Flow Rates and a Mathematical Approach for Finding the Number of Equilibrium Stages
$K$ = equilibrium constant, (mole fraction solute in gas)/(mole fraction solute in liquid)
$L’$ = liquid flow rate on a solute-free basis (mol liquid absorbent time-1)
$n$ = generic equilibrium stage
$N$ = bottom equilibrium stage for an absorption column
$V’$ = gas flow rate on a solute-free basis (mol gaseous carrier time-1)
$(L’/V’)_{\rm min}$ = relative molar flowrates of solute-free absorbent and solute-free carrier gas at which an infinite number of equilibrium stages is required in order to achieve the desired separation
$x$ = mole fraction of solute in liquid phase (mol solute/mol total liquid)
$X$ = mole ratio of solute in liquid phase (mol solute/mol liquid absorbent)
$X_0$ = mole ratio of solute/absorbent in liquid entering the top of the absorption column
$X_N$ = mole ratio of solute/absorbent in liquid leaving the bottom of the absorption column
$y$ = mole fraction of solute in gas phase (mol solute/mol total gas)
$Y$ = mole ratio of solute in gas phase (mol solute/mol gaseous carrier)
$Y_1$ = mole ratio of solute/gaseous carrier in gas leaving the top of the absorption column
$Y_{N+1}$ = mole ratio of solute/gaseous carrier in gas entering the bottom of the absorption column
Equilibrium:
$K_n=\frac{y_n}{x_n}=\frac{Y_n/(1+Y_n)}{X_n/(1+X_n)} \tag{10.1}$
Operating line:
$Y_{n+1}=X_n\left(\frac{L’}{V’}\right)+Y_1-X_0\left(\frac{L’}{V’}\right) \tag{10.2}$
when $X$ ~ $x$ and $Y$ ~ $y$,
$\left(\frac{L’}{V’}\right)_{\rm min}=\frac{y_{N+1}-y_1}{\frac{y_{N+1}}{K_N}-x_0} \tag{10.3}$
when $X$ ~ $x$, $Y$ ~ $y$, and $x_0 = 0$
$\left(\frac{L’}{V’}\right)_{\rm min}=K_N\left(1-\frac{y_1}{y_{N+1}}\right) \tag{10.4}$
Example
180 kmol/hr of off-gas from a fermentation unit contains 98 mol% CO2 and 2 mol% ethanol. We would like to recover 97% of this ethanol and have proposed the use of a staged absorption column with water as the absorbent. The incoming water contains no ethanol and is at 30°C. The incoming gaseous stream is at 30°C, 110 kPa. Assume there is no absorption of CO2 into the water. The activity coefficient at infinite dilution for ethanol in water at 30°C is 6.0 and $P^{\rm sat}$ is 10.5 kPa. Thus, $K = \gamma_i P_i^{\rm sat}/P$ = (6.0)(10.5 kPa)/(110 kPa) = 0.57.
(a) What is $(L’/V’)_{\rm min}$?
(b) How many equilibrium stages are required if we operate at 1.5*$(L’/V’)_{\rm min}$?
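Part (a) follows directly from Equation 10.4 once the streams are treated as dilute (mole ratios ≈ mole fractions, solute-free entering water); a short sketch:

```python
# Minimum solvent-to-gas ratio for the ethanol absorber (Eq. 10.4,
# dilute case with x0 = 0).
K_N = 0.57
y_in = 0.02                        # ethanol mole fraction entering bottom
recovery = 0.97
y_ratio = 1.0 - recovery           # dilute approximation: y1/yN+1 ~ 0.03

LV_min = K_N * (1.0 - y_ratio)             # Eq. 10.4
print(f"(L'/V')_min = {LV_min:.3f}")       # ~0.553
print(f"at 1.5x minimum: (L'/V') = {1.5 * LV_min:.3f}")   # ~0.83
```

Part (b) still requires stepping off stages between the operating line (Eq. 10.2) and the equilibrium curve (Eq. 10.1).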
Stage Efficiency and Column Height
$\lambda$ = inverse absorption factor (unitless)
$\mu_L$ = liquid-phase viscosity (mass time-1 length-1)
$\rho_L$ = liquid-phase density (mass volume-1)
$a$ = vapor-liquid interfacial area per volume of combined gas/liquid on the tray (area volume-1)=(length-1)
$A_b$ = portion of the column cross-sectional area dedicated to mixing of the gas/liquid (area)
$D_T$ = column diameter (length)
$E_{MV}$ = Murphree efficiency, calculated on the basis of the vapor phase composition
$E_O$ = overall stage efficiency
$K$ = equilibrium constant (gas-phase composition/liquid-phase composition)
$K_G$ = overall gas mass-transfer coefficient, partial pressure driving force (mol time-1 area-1 pressure-1)
$L$ = molar flow rate of liquid phase (mol time-1)
$M_L$ = molecular weight of the liquid phase
$N_a$ = actual number of stages required to achieve the desired separation
$N_{OG}$ = number of overall gas phase mass transfer units
$N_t$ = number of theoretical stages required to achieve the desired separation
$P$ = pressure (force area-1)
$T$ = temperature (temperature)
$V$ = molar flow rate of gaseous phase (mol time-1)
$x_{i,n}$ = the actual mole fraction of species i in the liquid leaving stage n and entering stage n+1
$y_i$ = the mole fraction of species i in a gas at some fixed point in the column
$y_{i,n+1}$ = the actual mole fraction of species i in the gas leaving stage n+1 and entering stage n
$y_{i,n}$ = the actual mole fraction of species i in the gas leaving stage n and entering stage n-1
$y^*_{i,n}$ = the mole fraction of species i in the gas leaving stage n if it had reached equilibrium with the liquid leaving that stage, where the liquid phase composition is $x_{i,n}$
$Z_f$ = height of combined gas and liquid holdup on the tray (length)
$E_O=\frac{N_t}{N_a} \tag{11.1}$
$E_O=19.2-57.8\log_{10}{\mu_L} \tag{11.2}$
Eq 11.2 is applicable for 0.2 < $\mu_L$ < 1.6 cP; $\mu_L$ must be in cP
$\log_{10}E_O = 1.597 - 0.199 \log_{10} \left( \dfrac{KM_L \mu_L}{\rho_L} \right) - 0.0896 \left[ \log_{10} \left( \dfrac{KM_L \mu_L}{\rho_L} \right) \right]^2 \tag{11.3}$
Eq 11.3 is applicable for this range of conditions: $\mu_L$ must be in cP, $\rho_L$ must be in lb ft-3; restricted to $D_T$ = 2 in – 9 ft, average $P$ = 14.7 – 485 psi, average $T$ = 60 – 138°F, $E_O$ = 0.65% – 69%
Example
For our CO2/ethanol/water system, we found that for 97% removal of the ethanol using water as the absorbent and operating at 30°C, 110 kPa, with $K$ = 0.57 and $(L’/V’)_{\rm actual}$ = 1.5$(L’/V’)_{\rm min}$ = 0.828, 7 theoretical stages are required. How many actual stages are needed, according to equations 11.2 and 11.3?
• $\mu_L$ = 0.89 cP
• $M_L$ = 18 lb/lb-mole
• $\rho_L$ = 62.5 lb ft$^{-3}$
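A sketch of both empirical correlations, using the property values above in the units each correlation requires ($\mu_L$ = 0.89 cP falls inside the stated range of Eq. 11.2):

```python
import math

# Overall stage efficiency from the two correlations (Eqs. 11.2 and 11.3).
mu_L = 0.89       # cP
M_L = 18.0        # lb/lb-mole
rho_L = 62.5      # lb/ft3
K = 0.57
N_t = 7           # theoretical stages from the staging calculation

E_O_112 = 19.2 - 57.8 * math.log10(mu_L)                   # Eq. 11.2, %
x = math.log10(K * M_L * mu_L / rho_L)
E_O_113 = 10 ** (1.597 - 0.199 * x - 0.0896 * x**2)        # Eq. 11.3, %
print(f"Eq. 11.2: E_O = {E_O_112:.1f}%  -> N_a ~ {N_t/(E_O_112/100):.0f}")
print(f"Eq. 11.3: E_O = {E_O_113:.1f}%  -> N_a ~ {N_t/(E_O_113/100):.0f}")
```

The two correlations disagree substantially here, which is a useful reminder that these are rough screening tools, not substitutes for tray-level mass transfer modeling.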
$E_O=\frac{\log_{10}{[1+E_{MV}({\lambda}-1)]}}{\log_{10}{\lambda}} \tag{11.4}$
$\lambda = \dfrac{KV}{L} \tag{11.5}$
$E_{MV}=\frac{y_{i,n+1}-y_{i,n}}{y_{i,n+1}-y^*_{i,n}}=1-\exp(-N_{OG}) \tag{11.6}$
$N_{OG}=\frac{K_GaPZ_f}{(V/A_b)} \tag{11.7}$
Staged Column Height and Diameter
$\rho_L$ = liquid-phase density (mass volume-1)
$\rho_V$ = vapor-phase density (mass volume-1)
$\sigma$ = liquid-phase surface tension (force length-1)
$A$ = total column cross-sectional area (area)
$A_a$ = active tray area (non downcomer) (area)
$A_h$ = area of the tray open to vapor (area)
$A_d$ = column cross-sectional area dedicated to downcomers (area)
$D_T$ = column diameter (length)
$f$ = fraction of flooding operation; usually we use 0.80
$F_{LV}$ = internal variable, relating the kinetic energy of the liquid and gas streams (unitless)
$L$ = molar flowrate of the liquid stream (mol time-1)
$M_V$ = molecular weight of the gaseous stream
$M_L$ = molecular weight of the liquid stream
$U_f$ = Vapor velocity that is sufficient to suspend liquid droplets. Vapor velocity greater than $U_f$ can be associated with entrainment flooding (length time-1)
$V$ = molar flowrate of the gaseous stream (mol time-1)
*Note that many of the parameters can have different values for different stages in the column. Use an appropriate average value and document and justify your assumptions.
$D_T=\left[\frac{4VM_V}{fU_f{\pi}(1-A_d/A){\rho}_V}\right]^{0.5} \tag{12.1}$
$F_{LV} = \dfrac{LM_L}{VM_V} \left( \dfrac{\rho_V}{\rho_L} \right)^{0.5} \tag{12.2}$
Where $A_d/A$ is estimated as follows
if $F_{LV} \leq 0.1$,
$A_d/A \sim 0.1 \tag{12.3}$
if $0.1 \leq F_{LV} \leq 1.0$,
$A_d/A \sim 0.1 + (F_{LV}-0.1)/9 \tag{12.4}$
if $F_{LV} \geq 1.0$,
$A_d/A \sim 0.2 \tag{12.5}$
Example
180 kmol/hr of fermentation off-gas containing 98 mol% CO2 and 2 mol% ethanol is to be fed to a staged absorption tower that operates at 30°C, 110 kPa. Fresh water will be supplied to this column at a ratio of $1.5*(L’/V’)_{\rm min}$= 154 kmol/hr and we intend to have a sufficient number of stages so that 97% of the ethanol will be recovered. Ignore the contribution of the solute to the physical properties of the carrier streams and assume a vapor flooding velocity of 3.12 m/s.
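A sketch of the diameter calculation follows. Assumptions beyond the problem statement: the vapor is treated as pure CO2 ($M_V$ = 44) with its density from the ideal gas law at 30°C and 110 kPa, the liquid as pure water, and $f$ = 0.80 per the guideline above:

```python
import math

# Tray-column diameter for the ethanol absorber (Eqs. 12.1-12.5).
R = 8.314                        # J/(mol K)
T, P = 303.15, 110e3             # K, Pa
M_V, M_L = 44.0, 18.0            # g/mol (pure CO2, pure water assumed)
V, L = 180.0, 154.0              # kmol/hr
rho_V = P * (M_V / 1000.0) / (R * T)   # kg/m3, ~1.92 (ideal gas)
rho_L = 1000.0
U_f = 3.12                       # m/s, given flooding velocity
f = 0.80                         # fraction of flooding (typical choice)

F_LV = (L * M_L) / (V * M_V) * math.sqrt(rho_V / rho_L)    # Eq. 12.2
# Eqs. 12.3-12.5:
Ad_A = 0.1 if F_LV <= 0.1 else (0.1 + (F_LV - 0.1)/9 if F_LV <= 1.0 else 0.2)

m_V = V * M_V / 3600.0           # vapor mass flow, kg/s
D_T = math.sqrt(4 * m_V / (f * U_f * math.pi * (1 - Ad_A) * rho_V))  # Eq. 12.1
print(f"F_LV = {F_LV:.3f}, Ad/A = {Ad_A:.2f}, D_T = {D_T:.2f} m")    # ~0.8 m
```

The resulting diameter of roughly 0.80 m is the value carried into the pressure-drop and efficiency examples later in this chapter.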
Graphical and Mathematical Determination of Vapor Phase Flooding Velocity
$\rho_L$ = liquid-phase density (mass volume-1)
$\rho_V$ = vapor-phase density (mass volume-1)
$\sigma$ = liquid surface tension (dyne cm-1)
$A_a$ = active tray area for contact between the gas and liquid phases (area)
$A_h$ = area of the holes (area)
$C$ = Souders and Brown capacity parameter, also known as vapor velocity factor (ft/s)
$C_1$ = internal parameter for calculation of $C_{\rm S,ult}$ (m s-1)
$C_2$ = internal parameter for calculation of $C_{\rm S,ult}$ (m s-1)
$C_D$ = liquid droplet drag coefficient
$C_F$ = entrainment flooding capacity, from Figure 6.23 (ft/s)
$C_{\rm S,ult}$ = ultimate capacity parameter (m s-1)
$d_p$ = liquid droplet diameter (length)
$F_F$ = foaming factor (unitless)
$F_{HA}$ = internal variable based on plate-specific value of $A_h/A_a$ (unitless)
$F_{LV}$ = internal variable, relating the kinetic energy of the liquid and gas streams (unitless)
$F_{ST}$ = surface tension factor (unitless)
$F$ = internal parameter for calculation of $C_{\rm S,ult}$
$g$ = gravitational constant (length time-2)
$L_S$ = superficial liquid velocity (length time-1)
$U_f$ = superficial vapor velocity at flooding (length time-1)
$U_S$ = superficial vapor velocity (length time-1)
$U_{\rm S,ult}$ = superficial vapor velocity at which the vapor velocity exceeds the liquid droplet settling velocity (m s-1)
Graphical Determination of Superficial Vapor Velocity at Flooding
$U_f=C\left(\frac{\rho_L-\rho_V}{\rho_V}\right)^{0.5} \tag{13.1}$
$C=\left(\frac{4d_Pg}{3C_D}\right)^{0.5} \tag{13.2}$
we will use
$C=F_{ST}F_FF_{HA}C_F \tag{13.3}$
$F_{ST}=(\sigma/20)^{0.2} \tag{13.4}$
$\sigma$ must be in dyne/cm
if $A_h/A_a>0.10$,
$F_{HA}=1.0 \tag{13.5}$
if $0.06 < A_h/A_a < 0.10$,
$F_{HA}=5(A_h/A_a)+0.5 \tag{13.6}$
$C_F$ is from Fig 6.23, needs $F_{LV}$ and tray spacing
Mathematical Determination of Superficial Vapor Velocity at Flooding
$U_f=U_{\rm S,ult}=C_{\rm S,ult}\left(\frac{\rho_L-\rho_V}{\rho_V}\right)^{0.5} \tag{13.7}$
$C_{\rm S,ult}=C_1 \textrm{ or }C_2$, whichever value is smaller
$C_1=0.445(1-F)\left[\frac{\sigma}{\rho_L-\rho_V}\right]^{0.25}-1.4L_S \tag{13.8}$
$C_2=0.356(1-F)\left[\frac{\sigma}{\rho_L-\rho_V}\right]^{0.25} \tag{13.9}$
$F = \left(1 + 1.4 \left[ \dfrac{\rho_L - \rho_V}{\rho_V} \right]^{0.5} \right)^{-1} \tag{13.10}$
for equations 13.8, 13.9, and 13.10, densities must be in kg/m3 and surface tension must be in dyne/cm
$L_S = U_S \dfrac{LM_L}{\rho_L} \left( \dfrac{VM_V}{\rho_V} \right)^{-1} \tag{13.11}$
*true in all conditions, not just flooding
Example
180 kmol/hr of fermentation off-gas containing 98 mol% CO2 and 2 mol% ethanol is to be fed to a staged absorption tower that operates at 30°C, 110 kPa. Fresh water will be supplied to this column at a ratio of $1.5*(L’/V’)_{\rm min}$ = 154 kmol/hr and we intend to have a sufficient number of stages so that 97% of the ethanol will be recovered. Assume a foaming factor of 0.90, trays that have an $A_h/A_a > 0.10$ and a surface tension of 70 dyne/cm and ignore the contribution of the solute to the physical properties of the carrier streams. Assume a tray spacing of 24 inches. What is $U_f$ according to
(a) the graphical method
(b) ultimate superficial velocity computational method?
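Part (b) can be scripted. Because $C_1$ depends on $L_S$, which in turn depends on the superficial velocity, the sketch below iterates a fixed point starting from the $C_2$-based estimate; properties are as in the earlier diameter sketch (vapor ~ pure CO2 at ~1.92 kg/m$^3$, liquid ~ water):

```python
import math

# Ultimate-capacity flooding velocity (Eqs. 13.7-13.11).
rho_V, rho_L = 1.92, 1000.0     # kg/m3 (assumed, solute ignored)
sigma = 70.0                    # dyne/cm, as given
V, M_V = 180.0, 44.0            # kmol/hr, g/mol
L, M_L = 154.0, 18.0

F = 1.0 / (1.0 + 1.4 * math.sqrt((rho_L - rho_V) / rho_V))    # Eq. 13.10
C2 = 0.356 * (1 - F) * (sigma / (rho_L - rho_V)) ** 0.25      # Eq. 13.9

U_f = C2 * math.sqrt((rho_L - rho_V) / rho_V)   # first guess from C2
for _ in range(20):                              # fixed-point iteration
    L_S = U_f * (L * M_L / rho_L) / (V * M_V / rho_V)         # Eq. 13.11
    C1 = 0.445 * (1 - F) * (sigma / (rho_L - rho_V))**0.25 - 1.4 * L_S
    C_ult = min(C1, C2)                          # smaller value governs
    U_f = C_ult * math.sqrt((rho_L - rho_V) / rho_V)          # Eq. 13.7
print(f"C_S,ult = {C_ult:.3f} m/s, U_f = {U_f:.2f} m/s")      # ~4 m/s
```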
Staged Column Pressure Drop
$\phi_e$ = height of clear liquid/height of froth (unitless)
$\rho_L$ = liquid-phase density (mass volume-1)
$\rho_V$ = vapor-phase density (mass volume-1)
$\sigma$ = liquid surface tension (dyne cm-1)
$A_a$ = active tray area, hosting interaction between the gas and liquid phases (area)
$A_d$ = downcomer area, hosting liquid handling (area)
$A_h$ = total area of the holes on each tray (area)
$C_0$ = tray-specific parameter, generally between 0.68 and 0.85
$C_l$ = internal parameter in calculation of $h_l$
$D_{B\rm(max)}$ = maximum bubble size (length). We will use maximum hole diameter $D_H$.
$D_T$ = column diameter (length)
$g$ = gravitational constant (length time-2)
$h_d$ = pressure drop due to movement of the gas through tray perforations (inches of our liquid)
$h_l$ = pressure drop due to movement of the gas through the liquid hold-up (inches of our liquid)
$h_{\sigma}$ = pressure drop due to movement of the gas through the liquid surface (inches of our liquid)
$h_t$ = total pressure drop for a single actual tray (inches of our liquid)
$h_w$ = weir height (inches)
$K_S$ = capacity parameter (ft s-1)
$L_w$ = weir length (inches)
$q_L$ = liquid flow rate across the tray (gal min-1)
$u_0$ = velocity of the vapor phase through the holes in the tray (ft s-1)
$U_a$ = superficial vapor velocity, calculated based on the active bubbling area (ft s-1)
$h_t=h_d+h_{l}+h_{\sigma} \tag{14.1}$
$h_d=0.186\left(\frac{u_0^2}{C_0^2}\right)\left(\frac{\rho_V}{\rho_L}\right) \tag{14.2}$
$h_{l}=\phi_e\left[h_w+C_{l}\left(\frac{q_L}{L_w\phi_e}\right)^{2/3}\right] \tag{14.3}$
$\phi_e=\exp({-4.257K_S^{0.91}}) \tag{14.4}$
$K_S=U_a\left(\frac{\rho_V}{\rho_L-\rho_V}\right)^{0.5} \tag{14.5}$
$A_a=(A-2A_d) \tag{14.6}$
$L_w=0.73D_T \tag{14.7}$
$C_{l}=0.362+0.317\exp(-3.5h_w) \tag{14.8}$
$h_w$, $L_w$ must be in inches, $q_L$ must be in gal/min, $K_S$ must be ft/s
$h_{\sigma}=\frac{6\sigma}{g{\rho_L}D_{B\rm(max)}} \tag{14.9}$
To prevent weeping, maintain
$h_d+h_{\sigma}>h_{l} \tag{14.10}$
Example
180 kmol/hr of CO2 containing 2 mol% ethanol and 154 kmol/hr of fresh water are fed to a column with a diameter of 0.80 m. The weirs have a height of 2.0 inches, the sieve tray holes have a diameter of 3/16″, and $C_0$ = 0.73. Ten percent (10%) of the column cross-sectional area is occupied by the downcomers and 10% of the active column area is occupied by the sieve tray holes. What is the pressure drop per tray?
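A sketch of this example follows. Two assumptions beyond the problem statement: the two downcomers together take the stated 10% of the column area ($2A_d = 0.10A$ in Eq. 14.6), and a water-like surface tension of $\sigma$ ~ 0.07 kg/s$^2$ is used for $h_\sigma$ since none is given:

```python
import math

# Tray pressure-drop sketch (Eqs. 14.1-14.9); mixed units per the text.
M_TO_FT, M_TO_IN = 3.2808, 39.37
rho_V, rho_L = 1.92, 1000.0        # kg/m3 (CO2 at 30 C, 110 kPa; water)
Q_V = (180.0 * 44.0 / rho_V) / 3600.0                 # vapor flow, m3/s
q_L_gpm = (154.0 * 18.0 / rho_L) / 3600.0 * 15850.0   # liquid, gal/min
D_T = 0.80                         # m
C0, h_w = 0.73, 2.0                # -, inches
sigma = 0.07                       # kg/s2, assumed (water-like)
D_H = (3.0/16.0) * 0.0254          # m, hole diameter (used as D_B(max))

A = math.pi * D_T**2 / 4.0
A_a = 0.90 * A                     # Eq. 14.6 with 2*A_d = 0.10*A (assumed)
A_h = 0.10 * A_a                   # holes are 10% of active area
u0 = Q_V / A_h * M_TO_FT           # hole velocity, ft/s
U_a = Q_V / A_a * M_TO_FT          # active-area velocity, ft/s

h_d = 0.186 * (u0**2 / C0**2) * (rho_V / rho_L)            # Eq. 14.2
K_S = U_a * math.sqrt(rho_V / (rho_L - rho_V))             # Eq. 14.5
phi_e = math.exp(-4.257 * K_S**0.91)                       # Eq. 14.4
L_w = 0.73 * D_T * M_TO_IN                                 # Eq. 14.7, in
C_l = 0.362 + 0.317 * math.exp(-3.5 * h_w)                 # Eq. 14.8
h_l = phi_e * (h_w + C_l * (q_L_gpm / (L_w * phi_e))**(2/3))   # Eq. 14.3
h_sigma = 6 * sigma / (9.81 * rho_L * D_H) * M_TO_IN       # Eq. 14.9
h_t = h_d + h_l + h_sigma                                  # Eq. 14.1
print(f"h_d={h_d:.2f}, h_l={h_l:.2f}, h_sigma={h_sigma:.2f} in of liquid")
print(f"total per tray: h_t = {h_t:.1f} inches of liquid")
```

The weeping check of Eq. 14.10 ($h_d + h_\sigma > h_l$) is easily satisfied by these numbers.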
Staged Column Mass Transfer
$\phi_e$ = height of clear liquid/height of froth (unitless); equation 14.4 from Lecture 2.2 Staged Column Pressure Drop
$\rho_L$ = density of the liquid phase (mass volume-1)
$\rho_V$ = density of the vapor phase (mass volume-1)
$a$ = interfacial gas-liquid area per unit volume of combined gas and liquid hold up (area/volume = length-1)
$\overline a$ = interfacial gas-liquid area per unit volume of equivalent clear liquid (area/volume = length-1)
$A_a$ = active tray area, hosting interaction between the gas and liquid phases (area)
$A_b$ = active bubbling area of the tray (area)
$D_L$ = diffusivity of the solute in the liquid phase (cm2 s-1)
$D_V$ = diffusivity of the solute in the vapor phase (cm2 s-1)
$E_{MV}$ = Murphree efficiency of a stage, based on the vapor phase
$f$ = fractional value representing proximity to flooding value, based on $U_a$ instead of $U_f$ (unitless)
$F$ = $F$-factor [(kg/m)$^{0.5}$ s$^{-1}$]
$h_l$ = pressure drop due to movement of the gas through the liquid hold-up, from Lecture 2.2 Staged Column Pressure Drop (pressure)
$k_G$ = gas-phase mass transfer coefficient (length time-1)
$k_L$ = liquid-phase mass transfer coefficient (length time-1)
$K$ = equilibrium constant for our system (y/x)
$K_G$ = overall mass transfer coefficient, based on partial-pressure driving force (time-1)
$L$ = molar flow rate of the liquid phase (mol time-1)
$M_L$ = molecular weight of the liquid phase (mass mole-1)
$N_G$ = number of transfer units in the gas phase
$N_L$ = number of transfer units in the liquid phase
$N_{OG}$ = number of overall mass transfer units, expressed on a gas-phase basis
$P$ = pressure of the gas phase (pressure)
$q_L$ = liquid volumetric flow rate across the tray (volume time-1)
$\overline t_G$ = average gas residence time in the froth (time)
$\overline t_L$ = average liquid residence time in the froth (time)
$U_a$ = superficial vapor velocity, calculated based on the active bubbling area (length time-1)
$U_f$ = superficial vapor velocity at flooding, found by Souders and Brown or superficial velocity method, Lecture 2.1 Graphical and mathematical determination of vapor phase flooding velocity (length time-1)
$V$ = molar flow rate of the gas phase (mole time-1)
$y_{i,n+1}$ = composition (mole fraction) of the gas entering stage n from stage $n+1$
$y_{i,n}$ = composition (mole fraction) of the gas leaving stage $n$
$y^*_{i,n}$ = composition (mole fraction) of the gas leaving stage n if it had reached equilibrium with the liquid leaving stage $n$
$Z_f$ = height of combined gas and liquid hold up (length)
Refer to Lecture 1.13 Stage Efficiency and Column Height for the first part of this topic
$E_{MV} = \dfrac{y_{i, n+1} - y_{i,n}}{y_{i,n+1} - y_{i,n}^*} = 1 - \exp(-N_{OG}) \tag{15.1}$
$N_{OG} = \dfrac{K_G a PZ_f}{(V/A_b)} \tag{15.2}$
$\frac{1}{N_{OG}}=\frac{1}{N_G}+\frac{(KV/L)}{N_L} \tag{15.3}$
$N_G=\frac{k_GaPZ_f}{(V/A_b)} \tag{15.4}$
$N_G=k_G\overline a \overline t_G \tag{15.5}$
$N_L=\frac{k_La\rho_LZ_f}{(LM_L/A_b)} \tag{15.6}$
$N_L=k_L\overline a\overline t_L \tag{15.7}$
$\overline t_G=\frac{(1-\phi_e)h_{l}}{\phi_eU_a} \tag{15.8}$
$\overline t_L=\frac{h_{l}A_a}{q_L} \tag{15.9}$
$k_G\overline a=\frac{1030D_V^{0.5}(f-0.842f^2)}{h_{l}^{0.5}} \tag{15.10}$
$f=U_a/U_f \tag{15.11}$
$k_L\overline a=78.8D_L^{0.5}(F+0.425) \tag{15.12}$
$F = U_a \rho_V^{0.5} \tag{15.13}$
In equations 15.10, 15.12, and 15.13, $D_V$ and $D_L$ must be in cm$^2$/s, $h_l$ must be in cm, and $F$ must be in (kg/m)$^{0.5}$ s$^{-1}$
Example
180 kmol/hr of CO2 containing 2 mol% ethanol is fed to an absorption column. 154 kmol/hr of fresh water is used as the liquid absorbent. The column has a diameter of 0.80 m, weir height of 2.0 inches, 3/16″ hole diameter, $C_0$ = 0.73; 10% of the column area is occupied by downcomers and 10% of the active area is occupied by sieve tray holes. Based on mass transfer principles, what is the expected Murphree efficiency of each stage? Ignore the contribution of the solute.
At the proposed operating conditions, the diffusivity of ethanol in CO2 is $7.85\times10^{-2}$ cm$^2$ s$^{-1}$ and the diffusivity of ethanol in liquid water is $1.81\times10^{-5}$ cm$^2$ s$^{-1}$.
Packed Columns
$\epsilon$ = packing void fraction (volume volume-1), tabulated value
$\rho_V$ = density of the vapor phase (mass volume-1)
$\rho_L$ = density of the liquid phase (mass volume-1)
$A$ = absorption factor (unitless)
$a$ = specific surface area of the selected packing (area volume-1), tabulated value
$a_h$ = specific hydraulic area of the selected packing (area volume-1)
$C_h$ = dimensionless holdup parameter, tabulated value
$D_T$ = packed bed diameter (length)
$f$ = target value of fraction of flooding; for packed columns we typically use 0.5 – 0.7
$h_L$ = volume of liquid per unit volume of packed bed (volume volume-1)
$H_{OG}$ = overall height of a gas transfer unit (length) – will be discussed in Lecture 2.7 Packed Bed HOG
$K$ = equilibrium constant for our system at our operating condition
$K_ya$ = overall mass transfer coefficient in gas-phase mole fraction units (mole time-1 volume-1)
$L$ = liquid phase molar flow rate (mole time-1)
$l_T$ = depth of packed bed (length)
$M_V$ = vapor phase molecular weight
$N_{{\rm Fr},L}$ = Froude number for the liquid phase, inertial force/gravitational force (unitless)
$N_{OG}$ = number of overall gas-phase mass transfer units
$N_{{\rm Re},L}$ = Reynolds number for the liquid phase, inertial force/viscous force (unitless)
$S$ = cross-sectional area of the packed bed (area)
$u_L$ = liquid-phase superficial velocity through the packed bed (length time-1)
$u_{V,f}$ = superficial gas velocity at flooding (length time-1)
$V$ = vapor phase molar flow rate (mole time-1)
$x_{in}$ = mole fraction of the solute in the entering liquid
$y_{in}$ = mole fraction of the solute in the entering vapor
$y_{out}$ = mole fraction of the solute in the exiting vapor
Packed Bed Diameter Sizing
$D_T=\left(\frac{4VM_V}{fu_{V,f}\pi\rho_V}\right)^{0.5} \tag{16.1}$
Depth of Packing Required
$l_T=H_{OG}N_{OG} \tag{16.2}$
$H_{OG}=\frac{V}{K_yaS} \tag{16.3}$
$N_{OG} = \left( \dfrac{A}{A-1} \right) \ln \left[ \left( \dfrac{A-1}{A} \right) \left( \dfrac{y_{in} - K x_{in}}{y_{out} - K x_{in}} + \dfrac{1}{A} \right) \right] \tag{16.4}$
$A=\frac{(L/V)}{K} \tag{16.5}$
Example
180 kmol/hr of a fermentation off-gas stream (98 mol% CO2, 2 mol% ethanol) is fed to a packed absorption column. We aim to recover 97% of the incoming ethanol using 154 kmol/hr of fresh water. If $H_{OG}$ = 2.0 ft, what is the required depth of packed bed? $K$ = 0.57.
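A sketch of this calculation, treating the system as dilute so that 97% recovery sets $y_{out} = 0.03\,y_{in}$ and total flows stand in for solute-free flows:

```python
import math

# Depth of packing for 97% ethanol recovery (Eqs. 16.2-16.5).
K = 0.57
L_, V_ = 154.0, 180.0        # kmol/hr (total flows, dilute system)
H_OG = 2.0                   # ft, given
y_in, x_in = 0.02, 0.0
y_out = (1 - 0.97) * y_in    # dilute approximation of 97% recovery

A = (L_ / V_) / K                                          # Eq. 16.5, ~1.50
N_OG = (A / (A - 1)) * math.log(((A - 1) / A)
        * ((y_in - K * x_in) / (y_out - K * x_in)) + 1 / A)    # Eq. 16.4
l_T = H_OG * N_OG                                          # Eq. 16.2
print(f"A = {A:.2f}, N_OG = {N_OG:.1f}, depth = {l_T:.1f} ft")  # ~15 ft
```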
Liquid Hold-up in a Packed Bed, Operating in Pre-loading Region
$h_L = \left( \dfrac{12N_{\rm Fr, L}}{N_{\rm Re, L}} \right)^{1/3} \left( \dfrac{a_h}{a} \right)^{2/3} \tag{16.6}$
$N_{{\rm Fr},L}=\frac{u_L^2a}{g} \tag{16.7}$
$N_{{\rm Re},L}=\frac{u_L\rho_L}{a\mu_L} \tag{16.8}$
if $N_{{\rm Re},L}<5$
$\frac{a_h}{a}=C_hN_{{\rm Re},L}^{0.15}N_{{\rm Fr},L}^{0.1} \tag{16.9}$
if $N_{{\rm Re},L} \geq 5$
$\frac{a_h}{a}=0.85C_hN_{{\rm Re},L}^{0.25}N_{{\rm Fr},L}^{0.1} \tag{16.10}$
Example
For our CO2/water/ethanol system, we have proposed to use 50-mm metal Hiflow rings as our packing material and a column diameter that results in a superficial liquid velocity of 0.01 m/s. Find the specific liquid holdup and specific volume available for the gas for this proposed design. Ignore the contribution of the solute.
Parameters for 50-mm metal Hiflow rings: $a$ = 92.3 m$^2$/m$^3$, $C_h$ = 0.876, $\epsilon$ = 0.977 m$^3$/m$^3$
• $V$ = 180 kmol/hr
• $L$ = 154 kmol/hr
• $\rho_L$ = 1000 kg/m$^3$
• $\mu_L$ = $8.9\times10^{-4}$ kg m$^{-1}$ s$^{-1}$
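The holdup calculation (Equations 16.6 through 16.10) can be scripted directly from the data above; a minimal Python sketch:

```python
u_L, g = 0.01, 9.81               # m/s, m/s^2
a, C_h, eps = 92.3, 0.876, 0.977  # 50-mm metal Hiflow rings
rho_L, mu_L = 1000.0, 8.9e-4      # kg/m^3, kg/(m*s)

N_Fr = u_L**2 * a / g             # Eq 16.7
N_Re = u_L * rho_L / (a * mu_L)   # Eq 16.8

# Hydraulic-to-total specific area ratio, Eq 16.9 or 16.10
if N_Re < 5.0:
    ah_a = C_h * N_Re**0.15 * N_Fr**0.1
else:
    ah_a = 0.85 * C_h * N_Re**0.25 * N_Fr**0.1

h_L = (12.0 * N_Fr / N_Re)**(1.0 / 3.0) * ah_a**(2.0 / 3.0)  # Eq 16.6
print(f"N_Re = {N_Re:.1f}, N_Fr = {N_Fr:.2e}")
print(f"specific liquid holdup h_L = {h_L:.3f} m^3/m^3")
print(f"volume available for gas = {eps - h_L:.3f} m^3/m^3")
```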
Packed Bed Pressure Drop by the Graphical Method
$\epsilon$ = packing void fraction (volume volume-1), tabulated value
$\mu_L$ = viscosity of the liquid phase (mass time-1 length-1)
$\rho_{H_2O,L}$ = density of liquid water (mass volume-1)
$\rho_{L}$ = density of the liquid phase (mass volume-1)
$\rho_{V}$ = density of the vapor phase (mass volume-1)
$F_{LV}$ = internal variable (unitless)
$F_P$ = packing factor (area volume-1), tabulated value
$L$ = liquid phase flow rate (mole time-1)
$M_L$ = liquid phase molecular weight
$M_V$ = vapor phase molecular weight
$u_L$ = liquid-phase superficial velocity through the packed bed (length time-1)
$u_V$ = superficial gas velocity (length time-1)
$u_{V,f}$ = superficial gas velocity at flooding (length time-1)
$V$ = vapor phase molar flow rate (mole time-1)
$Y$ = internal variable for the GPDC method, y-axis value of Figure 6-35 (unitless)
Generalized Pressure Drop Correlation (GPDC) for Finding Flooding Velocity
$F_{\rm LV} = \left( \dfrac{LM_L}{VM_V} \right) \left( \dfrac{\rho_V}{\rho_L} \right)^{0.5} \tag{17.1}$
$Y = \left( \dfrac{u_V^2F_p}{g} \right) \left( \dfrac{\rho_V}{\rho_{H_2O, L}} \right) f[\rho_V]f[\mu_L] \tag{17.2}$
Example
180 kmol/hr of CO2 containing 2.0 mol% ethanol is fed to a packed column operating at 30°C and 110 kPa. 97% of the ethanol is to be removed via the addition of 154 kmol/hr of fresh water. The packed column contains 1” ceramic Raschig rings.
Find:
(a) the superficial gas velocity associated with flooding
(b) the necessary column diameter if we operate at 70% of flooding
(c) the pressure drop per foot of packing. Ignore the contribution of the solute to the magnitude and physical properties of each phase.
For 1” ceramic Raschig rings: $F_p$ = 179 ft$^2$/ft$^3$
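The flooding lookup itself is graphical, but the abscissa of the GPDC chart (Eq 17.1) is a one-line calculation. A Python sketch, with the vapor density estimated from the ideal gas law at the stated conditions (an assumption of this sketch):

```python
L_mol, V_mol = 154.0, 180.0   # kmol/hr
M_L, M_V = 18.0, 44.0         # kg/kmol (water, CO2)
T, P = 303.15, 110.0e3        # K, Pa
R = 8.314                     # J/(mol*K)

rho_V = P * (M_V / 1000.0) / (R * T)   # ideal-gas estimate, kg/m^3
rho_L = 1000.0                         # kg/m^3

F_LV = (L_mol * M_L) / (V_mol * M_V) * (rho_V / rho_L)**0.5   # Eq 17.1
print(f"rho_V = {rho_V:.2f} kg/m^3, F_LV = {F_LV:.4f}")
# F_LV locates the operating point on the x-axis of the GPDC chart
# (Figure 6-35); Y is read at flooding and u_V,f backed out from Eq 17.2.
```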
Packed Bed Pressure Drop by the Mathematical Method
$\epsilon$ = packing void fraction (volume volume-1), tabulated value
$\Delta P$ = actual pressure drop across a packed bed during operation at your condition (pressure)
$\Delta P_0$ = pressure drop across your packed bed during dry operation (pressure)
$\xi_l$ = internal parameter for pressure drop calculation at the loading point (unitless)
$\mu_L$ = viscosity of the liquid (mass time-1 length-1)
$\mu_V$= viscosity of the vapor phase (mass time-1 length-1)
$\rho_L$ = density of the liquid phase (mass volume-1)
$\rho_V$ = density of the vapor phase (mass volume-1)
$\Psi_0$ = resistance coefficient of dry packing (unitless)
$\Psi_l$= internal parameter for pressure drop calculation at the loading point (unitless)
$a$ = specific surface area of the selected packing (area volume-1), tabulated value
$C$ = internal parameter for pressure drop calculation at the loading point (unitless)
$C_p$ = packing parameter, tabulated value
$C_s$ = packing parameter (unitless)
$D_p$ = effective packing diameter (length)
$D_T$ = packed bed diameter (length)
$g$ = gravitational constant (length time-2)
$h_L$ = liquid hold-up (volume volume-1)
$K_W$ = wall factor (unitless)
$L$ = liquid phase molar flow rate (mole time-1)
$l_T$ = depth of packed bed (length)
$M_V$ = vapor phase molecular weight
$M_L$ = liquid phase molecular weight
$N_{{\rm Fr},L}$ = Froude number, inertial force/gravitational force (unitless)
$N_{{\rm Re},V}$ = Reynolds number of the vapor phase, inertial force/viscous force (unitless)
$n_s$ = internal parameter for pressure drop calculation at the loading point (unitless)
$u_L$ = liquid phase superficial velocity (length time-1)
$u_{L,l}$ = liquid phase superficial velocity at the loading point (length time-1)
$u_V$= vapor phase superficial velocity (length time-1)
$u_{V,f}$ = vapor phase superficial velocity at the flooding point (length time-1)
$u_{V,l}$ = vapor phase superficial velocity at the loading point (length time-1)
$V$ = vapor phase molar flow rate (mole time-1)
$\left( \dfrac{ \Delta P}{\Delta P_0} \right) = \left( \dfrac{\epsilon}{\epsilon - h_L} \right)^{(3/2)} \rm exp \left( \dfrac{13,300 N_{\rm Fr,L}^{0.5}}{a^{1.5}} \right) \tag{18.1}$
$N_{{\rm Fr},L}=\frac{u_L^2a}{g} \tag{18.2}$
$\frac{\Delta P_0}{l_T}=\Psi_0\left(\frac{a}{\epsilon^3}\right)\left(\frac{u_V^2\rho_V}{2}\right)\left(\frac{1}{K_W}\right) \tag{18.3}$
$\Psi_0 = C_P \left( \dfrac{64}{N_{\rm Re,V}} + \dfrac{1.8}{N_{\rm Re,V}^{0.08}} \right) \tag{18.4}$
$N_{{\rm Re},V}=\frac{u_VD_P\rho_VK_W}{(1-\epsilon)\mu_V} \tag{18.5}$
$D_P=6\left(\frac{1-\epsilon}{a}\right) \tag{18.6}$
$\frac{1}{K_W}=1+\frac{2}{3}\left(\frac{1}{1-\epsilon}\right)\left(\frac{D_P}{D_T}\right) \tag{18.7}$
*at any operating condition: $u_L = u_V \left( \frac{LM_L}{\rho_L} \right) \left( \frac{ \rho_V}{VM_V} \right)$
Vapor flooding velocity
$u_{V,f}=\frac{u_{V,l}}{0.7} \tag{18.8}$
At the loading point:
$u_{V,l}=\left(\frac{g}{\Psi_l}\right)^{1/2}\left[\frac{\epsilon}{a^{1/6}}-a^{0.5}{\xi}_{l}^{1/3}\right]{\xi}_{l}^{1/6}\left(\frac{\rho_L}{\rho_V}\right)^{1/2} \tag{18.9}$
$\Psi_{l}=\frac{g}{C^2}\left[F_{LV}\left(\frac{\mu_L}{\mu_V}\right)^{0.4}\right]^{-2n_s} \tag{18.10}$
$F_{LV} = \left( \dfrac{LM_L}{VM_V} \right) \left( \dfrac{\rho_V}{\rho_L} \right)^{0.5} \tag{18.11}$
If $F_{LV} \leq 0.4$, the liquid is dispersed: $C = C_S$, $n_S = -0.326$
If $F_{LV} > 0.4$, the liquid is continuous: $C = 0.695\left(\frac{\mu_L}{\mu_V}\right)^{0.1588}C_S$, $n_S = -0.723$
$\xi_{l}=12\frac{\mu_Lu_{L,l}}{g\rho_L} \tag{18.12}$
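As a sketch of how the dry-bed portion of this method chains together (Equations 18.3 through 18.7), the Python function below uses the 1” ceramic Raschig ring properties tabulated in the next section's example ($a$ = 190 m$^2$/m$^3$, $\epsilon$ = 0.680, $C_P$ = 1.329) and an assumed superficial gas velocity of 1.0 m/s:

```python
def dry_pressure_drop(u_V, rho_V, mu_V, a, eps, C_P, D_T):
    """Dry packed-bed pressure drop per unit depth, Pa/m (Eqs 18.3-18.7)."""
    D_P = 6.0 * (1.0 - eps) / a                                # Eq 18.6
    Kw_inv = 1.0 + (2.0 / 3.0) * (D_P / D_T) / (1.0 - eps)     # Eq 18.7, = 1/K_W
    K_W = 1.0 / Kw_inv
    N_Re_V = u_V * D_P * rho_V * K_W / ((1.0 - eps) * mu_V)    # Eq 18.5
    psi_0 = C_P * (64.0 / N_Re_V + 1.8 / N_Re_V**0.08)         # Eq 18.4
    return psi_0 * (a / eps**3) * (u_V**2 * rho_V / 2.0) * Kw_inv  # Eq 18.3

# 1" ceramic Raschig rings in a 1.1-m column; u_V = 1.0 m/s is assumed
dP0_per_m = dry_pressure_drop(u_V=1.0, rho_V=1.92, mu_V=1.53e-5,
                              a=190.0, eps=0.680, C_P=1.329, D_T=1.1)
print(f"dry pressure drop ~ {dP0_per_m:.0f} Pa/m")
```

Equation 18.1 then corrects this dry value for liquid holdup to give the operating pressure drop.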
Example
180 kmol/hr of CO2 containing 2.0 mol% ethanol and 154 kmol/hr fresh liquid water are fed to a packed column containing 1” ceramic Raschig rings. Find (a) superficial gas velocity at flooding; (b) column diameter, if we intend to operate at 70% of the flooding velocity; (c) pressure drop per foot of packing at loading point.
Determining the Overall Height of a Gas-liquid Transfer Unit for a Packed Column
$\epsilon$ = packing void fraction (volume volume-1), tabulated value
$\mu_V$ = viscosity of the vapor phase (mass time-1 length-1)
$\rho_L$ = density of the liquid phase (mass volume-1)
$\rho_V$ = density of the vapor phase (mass volume-1)
$\sigma$ = surface tension of the liquid phase (force length-1)
$a$ = specific surface area of the selected packing (area volume-1), tabulated value
$a_{Ph}$ = specific area of the gas/liquid interface (area volume-1)
$C_V$ = packing parameter, tabulated value
$C_L$ = packing parameter, tabulated value
$D_G$ = diffusivity of the solute in the gas phase (length2 time-1)
$d_h$ = packing hydraulic diameter (length)
$D_L$ = diffusivity of the solute in the liquid phase (length2 time-1)
$g$ = gravitational constant (length time-2)
$H_G$ = height of gas-phase transfer unit, with partial pressure driving force (length)
$h_L$ = liquid hold-up (volume volume-1)
$H_L$ = height of liquid-phase transfer unit, with mole fraction driving force (length)
$H_{OG}$ = overall height of gas/liquid transfer unit, on a gas phase basis (length)
$K$ = equilibrium constant for our species in our selected operating condition
$L$ = liquid phase molar flow rate (mole time-1)
$N_{{\rm Fr}_L,h}$ = Froude number of the liquid phase with the hydraulic diameter as the characteristic length (unitless)
$N_{{\rm Re}_L,h}$ = Reynolds number of the liquid phase, hydraulic diameter as the characteristic length (unitless)
$N_{{\rm Re}_V}$ = Reynolds number of the vapor phase, inertial force/viscous force (unitless)
$N_{{\rm Sc}_V}$ = Schmidt number of the vapor phase (unitless)
$N_{{\rm We}_L,h}$ = Weber number of the liquid phase with the hydraulic diameter as the characteristic length (unitless)
$u_L$ = liquid phase superficial velocity (length time-1)
$u_V$ = vapor phase superficial velocity (length time-1)
$V$ = vapor phase molar flow rate (mole time-1)
$H_{OG}=H_G+\left(\frac{KV}{L}\right)H_L \tag{19.1}$
$H_G = \dfrac{1}{C_V} (\epsilon - h_L)^{0.5} \left( \dfrac{4 \epsilon}{a^4} \right)^{0.5} \left( \dfrac{1}{N_{\rm Re_V}} \right)^{0.75} \left( \dfrac{1}{N_{\rm Sc_V}} \right)^{1/3} \left( \dfrac{u_Va}{D_Ga_{Ph}} \right) \tag{19.2}$
$N_{{\rm Re}_V}=\frac{u_V\rho_V}{a\mu_V} \tag{19.3}$
$N_{\rm Sc_V} = \dfrac{\mu_V}{\rho_VD_G} \tag{19.4}$
$\dfrac{a_{Ph}}{a} = \dfrac{1.5N_{\rm We,L,h}^{0.75}}{(ad_h)^{0.5}N_{\rm Re_L,h}^{0.2} N_{\rm Fr_L,h}^{0.45}} \tag{19.5}$
$d_h=\frac{4\epsilon}{a} \tag{19.6}$
$N_{{\rm Re}_L,h}=\frac{u_Ld_h\rho_L}{\mu_L} \tag{19.7}$
$N_{{\rm We}_L,h}=\frac{u_L^2\rho_Ld_h}{\sigma} \tag{19.8}$
$N_{{\rm Fr}_L,h}=\frac{u_L^2}{gd_h} \tag{19.9}$
$H_L=\frac{1}{C_L}\left(\frac{1}{12}\right)^{1/6}\left[\frac{4h_L\epsilon}{D_Lau_L}\right]^{0.5}\left(\frac{u_L}{a}\right)\left(\frac{a}{a_{Ph}}\right) \tag{19.10}$
Example
180 kmol/hr of CO2 containing 2.0 mol% ethanol and 154 kmol/hr liquid water are fed to a packed column with diameter of 1.1 m containing 1” (25-mm) ceramic Raschig rings, with a pressure drop of 1.1 kPa/m. Find the necessary column height and total pressure drop if we aim to recover 97% of the incoming solute and the column is operated at the loading point. Ignore the contribution of the solute.
Packing properties: $a$ = 190 m$^2$/m$^3$, $\epsilon$ = 0.680, $C_h$ = 0.577, $C_P$ = 1.329, $C_L$ = 1.361, $C_V$ = 0.412, $C_S$ = 2.454
• Diffusivity of ethanol in CO2 at 30°C: $D_G = 7.85\times10^{-2}$ cm$^2$/s
• Diffusivity of ethanol in water at 30°C: $D_L = 1.81\times10^{-5}$ cm$^2$/s
• $\rho_L$ = 1000 kg/m$^3$
• $\rho_V$ = 1.92 kg/m$^3$
• $\sigma_L$ = 70 dyne/cm
• $\mu_L$ = $8.9\times10^{-4}$ kg m$^{-1}$ s$^{-1}$
• $\mu_V$ = $1.53\times10^{-5}$ kg m$^{-1}$ s$^{-1}$
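A Python sketch of the transfer-unit correlations (Equations 19.2 through 19.10) using the data above follows. The superficial velocities and holdup are stated here as assumptions; in the full solution they come from the loading-point and holdup calculations of the previous sections.

```python
a, eps = 190.0, 0.680              # 1" ceramic Raschig rings
C_V, C_L = 0.412, 1.361
D_G, D_L = 7.85e-6, 1.81e-9        # m^2/s (converted from cm^2/s)
rho_L, rho_V = 1000.0, 1.92        # kg/m^3
mu_L, mu_V = 8.9e-4, 1.53e-5       # kg/(m*s)
sigma = 0.070                      # N/m (70 dyne/cm)
g = 9.81

# Assumed operating values for illustration (not from the text):
u_V, u_L, h_L = 1.0, 0.004, 0.05   # m/s, m/s, m^3/m^3

d_h = 4.0 * eps / a                         # Eq 19.6
N_Re_V = u_V * rho_V / (a * mu_V)           # Eq 19.3
N_Sc_V = mu_V / (rho_V * D_G)               # Eq 19.4
N_Re_Lh = u_L * d_h * rho_L / mu_L          # Eq 19.7
N_We_Lh = u_L**2 * rho_L * d_h / sigma      # Eq 19.8
N_Fr_Lh = u_L**2 / (g * d_h)                # Eq 19.9

aPh_a = 1.5 * N_We_Lh**0.75 / (
    (a * d_h)**0.5 * N_Re_Lh**0.2 * N_Fr_Lh**0.45)       # Eq 19.5, = a_Ph/a

# Eq 19.2; the last factor u_V*a/(D_G*a_Ph) reduces to u_V/(D_G*(a_Ph/a))
H_G = (1.0 / C_V) * (eps - h_L)**0.5 * (4.0 * eps / a**4)**0.5 \
      * N_Re_V**(-0.75) * N_Sc_V**(-1.0 / 3.0) * u_V / (D_G * aPh_a)
# Eq 19.10; (a/a_Ph) = 1/aPh_a
H_L = (1.0 / C_L) * (1.0 / 12.0)**(1.0 / 6.0) \
      * (4.0 * h_L * eps / (D_L * a * u_L))**0.5 * (u_L / a) / aPh_a
print(f"a_Ph/a = {aPh_a:.3f}, H_G = {H_G:.2f} m, H_L = {H_L:.2f} m")
# H_OG then follows from Eq 19.1 with K, V, and L for the system.
```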
Introduction to Distillation
$B$ = mass or molar flow rate of the bottoms stream leaving the system (mass time-1 or mole time-1)
$D$ = mass or molar flow rate of the distillate stream leaving the system (mass time-1 or mole time-1)
$F$ = mass or molar flow rate of the feed stream entering the system (mass time-1 or mole time-1)
$L$ = mass or molar flow rate of the liquid reflux returned to the column from the condenser (mass time-1 or mole time-1); also generic flow rate of the liquid phase in the rectifying section
$\overline L$ = mass or molar flow rate of the liquid leaving the bottom of the column and entering the reboiler (mass time-1 or mole time-1); also generic flow rate of the liquid phase in the stripping section
$n$ = generic stage number, stage 1 is at the top of the column
$R$ = reflux ratio
$V$ = mass or molar flow rate of vapor leaving the top of the column and entering the condenser (mass time-1 or mole time-1); also generic flow rate of the vapor phase in the rectifying section
$\overline V$ = mass or molar flow rate of the gaseous boilup returned to the column from the reboiler (mass time-1 or mole time-1); also generic flow rate of the vapor phase in the stripping section
$x$ = mass or mole fraction of the light key in a liquid stream
$x_B$ = mass or mole fraction of the light key in the bottoms stream
$x_D$ = mass or mole fraction of the light key in the distillate stream
$x_n$ = mass or mole fraction of the light key in the liquid leaving stage $n$
$y$ = mass or mole fraction of the light key in vapor stream
$y_n$ = mass or mole fraction of the light key in the vapor leaving stage $n$
$z_F$ = mass or mole fraction of the light key in the feed stream
Overall material balance
$F = D + B \tag{20.1}$
Material balance on light key
$F z_F = x_DD + x_BB \tag{20.2}$
Combination of material balances in Equations 20.1 and 20.2
$D = F \left( \dfrac{z_F - x_B}{x_D - x_B} \right) \tag{20.3}$
$R=\frac{L}{D} \tag{20.4}$
$V_B=\frac{\overline V}{B} \tag{20.5}$
Material balance on stages $1-n$, the rectifying section of the column
$y_{n+1}=\left(\frac{L}{V}\right)x_n+y_1-\left(\frac{L}{V}\right)x_0 \tag{20.6}$
Rectifying section operating line
$y_{n+1}=\left(\frac{R}{R+1}\right)x_n+\left(\frac{x_D}{R+1}\right) \tag{20.7}$
Stripping section operating line
$y_{n+1}=\left(\frac{V_B+1}{V_B}\right)x_n-\left(\frac{x_B}{V_B}\right) \tag{20.8}$
McCabe-Thiele Method for Finding N and Feed Stage Location
$\Delta H^{\rm vap}$ = enthalpy change of vaporization of the feed stream at the column operating pressure (energy mole-1)
$C_{P_L}$ = heat capacity of the liquid feed stream (energy mole-1 temperature-1)
$C_{P_V}$ = heat capacity of the vapor feed stream (energy mole-1 temperature-1)
$F$ = molar flow rate of the feed stream entering the system (mole time-1)
$L$ = molar flow rate of the liquid phase in the rectifying section (mole time-1)
$\overline L$ = molar flow rate of the liquid phase in the stripping section (mole time-1)
$L_F$ = molar flow rate of the liquid portion of the feed stream (mole time-1)
$n$ = generic stage number, stage 1 is at the top of the column
$q$ = metric that reflects the physical state of the feed stream (unitless)
$R$ = reflux ratio
$T_b$ = bubble-point temperature of the feed stream at the column operating pressure (temperature)
$T_d$ = dew-point temperature of the feed stream at the column operating pressure (temperature)
$T_F$ = temperature of the feed stream (temperature)
$V$ = molar flow rate of the vapor phase in the rectifying section (mole time-1)
$\overline V$ = molar flow rate of the vapor phase in the stripping section (mole time-1)
$V_F$ = molar flow rate of the vapor portion of the feed stream (mole time-1)
$x_B$ = mole fraction of the light key in the bottoms stream
$x_D$ = mole fraction of the light key in the distillate stream
$x_n$ = mole fraction of the light key in the liquid leaving stage $n$
$z_F$ = mole fraction of the light key in the feed stream
Finding the Theoretical Number of Stages from known Reflux Ratio, Boilup Ratio, Distillate Composition and Bottoms Composition
Rectifying section operating line
$y_{n+1}=\left(\frac{R}{R+1}\right)x_n+\left(\frac{1}{R+1}\right)x_D \tag{21.1}$
Stripping section operating line
$y_{n+1}=\left(\frac{V_B+1}{V_B}\right)x_n-\left(\frac{1}{V_B}\right)x_B \tag{21.2}$
Plotting the q-line
$F = L_F + V_F \tag{21.3}$
$q=\frac{(\overline L -L)}{F}=1+(\frac{\overline V-V}{F}) \tag{21.4}$
For sub-cooled liquid, q > 1
$q=1+\frac{C_{P_L}(T_b-T_F)}{\Delta H^{\rm vap}} \tag{21.5}$
For a saturated liquid,
$q=1 \tag{21.6}$
For a mixture of liquid and vapor, 0 < q < 1
$q=\frac{L_F}{F} \tag{21.7}$
For a saturated vapor, q = 0
$q = 0 \tag{21.8}$
For superheated vapor, q < 0
$q=\frac{C_{P_V}(T_d-T_F)}{\Delta H^{\rm vap}} \tag{21.9}$
q-line
$y=\left(\frac{q}{q-1}\right)x-\left(\frac{1}{q-1}\right)z_F \tag{21.10}$
Watch this two-part video series from LearnChemE that demonstrates how to use the McCabe-Thiele graphical method to determine the number of equilibrium stages needed to meet a specified separation objective: McCabe-Thiele Graphical Method Example Part 1 (8:21): https://youtu.be/Cv4KjY2BJTA and McCabe-Thiele Graphical Method Example Part 2 (6:35): https://youtu.be/eIJk5uXmBRc
Watch this video from LearnChemE for a conceptual demonstration of how to relate stepping off stages to distillation column design: McCabe-Thiele Stepping Off Stages (7:02): https://youtu.be/rlg-ptQMAsg
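For readers who prefer to script the graphical construction, here is a minimal Python sketch that steps off stages from the top of the column, assuming a binary system with a constant relative volatility; the equilibrium model and all numerical inputs are illustrative assumptions, not values from the text:

```python
def step_off_stages(alpha, R, xD, xB, zF, q, max_stages=100):
    """Count theoretical stages by McCabe-Thiele stepping (Eqs 21.1, 21.2, 21.10)."""
    # Intersection of the q-line with the rectifying operating line
    if abs(q - 1.0) < 1e-9:               # saturated liquid: vertical q-line
        x_int = zF
    else:
        # R/(R+1)*x + xD/(R+1) = q/(q-1)*x - zF/(q-1), solved for x
        x_int = (zF / (q - 1.0) + xD / (R + 1.0)) / (q / (q - 1.0) - R / (R + 1.0))
    y_int = R / (R + 1.0) * x_int + xD / (R + 1.0)
    slope_strip = (y_int - xB) / (x_int - xB)   # stripping line through (xB, xB)

    x, y, stages = xD, xD, 0
    while x > xB and stages < max_stages:
        x = y / (alpha - (alpha - 1.0) * y)     # horizontal step to equilibrium
        if x > x_int:                           # vertical step: rectifying line
            y = R / (R + 1.0) * x + xD / (R + 1.0)
        else:                                   # vertical step: stripping line
            y = xB + slope_strip * (x - xB)
        stages += 1                             # final partial stage counts whole
    return stages

# Illustrative run (all numbers assumed): alpha = 2.5, saturated liquid feed
print(step_off_stages(alpha=2.5, R=3.0, xD=0.95, xB=0.05, zF=0.50, q=1.0))
```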
McCabe-Thiele Method for Finding the Minimum Number of Stages, the Minimum Reflux Ratio, and the Minimum Boilup Ratio
$\alpha$ = relative volatility of the light key and the heavy key at a given temperature (unitless)
$\alpha_F$ = relative volatility of the light key and the heavy key at the feed temperature (unitless)
$\gamma_{HK}$ = activity coefficient of the heavy key; can be a function of $x$ and/or $T$; 1 for an ideal solution (unitless)
$\gamma_{LK}$ = activity coefficient of the light key; can be a function of $x$ and/or $T$; 1 for an ideal solution (unitless)
$B$ = molar flow rate of the bottoms leaving the system (mol time-1)
$D$ = molar flow rate of the distillate leaving the system (mol time-1)
$F$ = molar flow rate of the feed stream (mol time-1)
$L$ = molar flow rate of liquid within the rectifying section, assumed constant in McCabe-Thiele model (mol time-1)
$\overline L$ = molar flow rate of liquid within the stripping section, assumed constant in McCabe-Thiele model (mol time-1)
$N_{t,\rm min}$ = minimum required number of theoretical stages for a given combination of equilibrium data, $x_D$ and $x_B$
$P_{HK}^{\rm sat}$ = saturated vapor pressure of the heavy key at a given temperature, i.e. by Antoine equation (pressure)
$P_{LK}^{\rm sat}$ = saturated vapor pressure of the light key at a given temperature, i.e. by Antoine equation (pressure)
$q$ = metric that indicates the physical state of the feed stream, i.e. $q$ = 1 for saturated liquid (unitless)
$R$ = reflux ratio = $L/D$ (unitless)
$R_{\rm min}$ = reflux ratio that requires an infinite number of stages in the rectifying section (unitless)
$V$ = molar flow rate of vapor within the rectifying section, assumed constant in McCabe-Thiele model (mol time-1)
$\overline V$ = molar flow rate of vapor within the stripping section, assumed constant in McCabe-Thiele model (mol time-1)
$V_B$ = boilup ratio = $\overline V/B$
$V_{B,\rm min}$ = boilup ratio that requires an infinite number of stages in the stripping section (unitless)
$V_F$ = molar flow rate of the vapor component of the feed stream (mol time-1)
$x_B$ = target mole fraction of the light key in the bottoms product
$x_D$ = target mole fraction of the light key in the distillate product
$x_{HK}$ = mole fraction of the heavy key in the liquid phase
$x_{LK}$ = mole fraction of the light key in the liquid phase
$y_{HK}$ = mole fraction of the heavy key in the vapor phase
$y_{LK}$ = mole fraction of the light key in the vapor phase
$z_F$ = mole fraction of the light key in the feed stream
$R_{\rm min}=\frac{(L/V)_{\rm min}}{1-(L/V)_{\rm min}} \tag{22.1}$
$(L/V)_{\rm min}$ = slope of the line that connects ($x_D$, $x_D$) to the intersection of the q-line and the equilibrium curve
$\alpha=\frac{y_{LK}/y_{HK}}{x_{LK}/x_{HK}}=\frac{\gamma_{LK}P_{LK}^{\rm sat}}{\gamma_{HK}P_{HK}^{\rm sat}} \tag{22.2}$
$V_{B,\rm min}=\frac{1}{(\overline L /\overline V)_{\rm max}-1} \tag{22.3}$
$(\overline L/\overline V)_{\rm max}$ = slope of the line that connects ($x_B$, $x_B$) to the intersection of the q-line and the equilibrium curve
$V_{B}=\frac{L+D-V_F}{B}=\frac{D(R+1)-V_F}{B} \tag{22.4}$
when $q \leq 0$, $V_F = F$; when $0 < q < 1$, $V_F = (1-q)F$; when $q \geq 1$, $V_F = 0$
*we will use Eq 22.4 to calculate $V_B$ as a function of our selected $R$
Example
$x_D = 0.80$, $q = 0$, $z_F = 0.25$
$(L/V)_{\rm min}$ =
$R_{\rm min}$ =
Example
$x_D = 0.90$, $x_B = 0.20$
$N_{t,\rm min} =$
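Neither example above specifies equilibrium data, so as an illustration the sketch below assumes a constant relative volatility of $\alpha = 2.5$ (an assumption of this sketch, not a value from the text). With constant $\alpha$, the minimum stage count at total reflux reduces to the Fenske form $N_{t,\rm min} = \ln[(x_D/(1-x_D))((1-x_B)/x_B)]/\ln\alpha$.

```python
import math

alpha = 2.5   # assumed constant relative volatility

# Example 1: R_min for xD = 0.80, q = 0 (saturated vapor), zF = 0.25
xD, zF = 0.80, 0.25
# q = 0 makes the q-line horizontal at y = zF; its intersection with
# the equilibrium curve y = alpha*x/(1 + (alpha-1)*x) is at:
x_star = zF / (alpha - (alpha - 1.0) * zF)
y_star = zF
LV_min = (xD - y_star) / (xD - x_star)   # slope from (xD, xD) to the pinch
R_min = LV_min / (1.0 - LV_min)          # Eq 22.1
print(f"(L/V)_min = {LV_min:.3f}, R_min = {R_min:.2f}")

# Example 2: N_t,min for xD = 0.90, xB = 0.20 (total reflux, Fenske form)
xD, xB = 0.90, 0.20
N_min = math.log((xD / (1.0 - xD)) * ((1.0 - xB) / xB)) / math.log(alpha)
print(f"N_t,min = {N_min:.2f}")
```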
Distillation Energy Demand and Correlations for Efficiency
$\alpha$ = relative volatility of the light key and heavy key (unitless). For equation 23.2, this is evaluated at the average column temperature.
$\Delta H^{\rm vap}$ = average heat of vaporization for the stream entering the condenser or reboiler (energy mole-1)
$\Delta H_S^{\rm vap}$ = average heat of vaporization for the steam entering the reboiler (energy mole-1)
$\mu$ = liquid phase viscosity (cP). For eqs 23.1 and 23.2, this is the viscosity of the feed stream at the average column temperature.
$B$ = bottoms flow rate (mole time-1)
$C_{P,\rm H_2O}$ = heat capacity of liquid water (energy mole-1 temperature-1) or (energy mass-1 temperature-1)
$D$ = distillate flow rate (mole time-1)
$E_O$ = stage efficiency (unitless)
$L$ = liquid flow rate in the rectifying section (mole time-1)
$\overline L$ = liquid flow rate in the stripping section (mole time-1)
$m_{\rm cw}$ = flow rate of cooling water to condenser (mass time-1) or (mole time-1)
$m_s$ = flow rate of steam to reboiler (mass time-1) or (mole time-1)
$Q_C$ = energy demand (cooling) for the condenser (energy time-1)
$Q_R$ = energy demand (heating) for the reboiler (energy time-1)
$R$ = reflux ratio = $L/D$ (unitless)
$T_{\rm in}$ = temperature of cooling water entering the condenser (temperature)
$T_{\rm out}$ = temperature of cooling water leaving the condenser (temperature)
$V_B$ = boilup ratio = $\overline V/B$ (unitless)
$V_F$ = molar flow rate of the vapor portion of the feed (mole time-1)
Correlations for Stage Efficiency
Drickamer and Bradford
$E_O=13.3-66.8\log_{10}{\mu} \tag{23.1}$
Restrictions on eq 23.1: $\mu = 0.066 – 0.355$ cP, $T = 157 – 420$°F, $P = 14.7 – 366$ psia, $E_O = 41 – 88$%
O’Connell
$E_O=\frac{50.3}{(\alpha\mu)^{0.226}} \tag{23.2}$
when $0.1 \leq \alpha \mu \leq 1$, adjust $E_O$ calculated by 23.2 with correction factor from Table 7.5
Restriction on eq 23.2: $\alpha = 1.16 – 20.5$
Condenser and Reboiler Energy Demand
total condenser
$Q_C=D(R+1)\Delta H^{\rm vap} \tag{23.3}$
partial reboiler
$Q_R=BV_B\Delta H^{\rm vap} \tag{23.4}$
for partially vaporized feed $(0 < q < 1)$ and total condenser
$Q_R=Q_C\left[1-\frac{V_F}{D(R+1)}\right] \tag{23.5}$
$m_{\rm cw}=\frac{Q_C}{C_{P,\rm H_2O}(T_{\rm out}-T_{\rm in})} \tag{23.6}$
if using saturated steam for the reboiler
$m_S=\frac{Q_R}{\Delta H_S^{\rm vap}} \tag{23.7}$
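A short Python sketch of Equations 23.3 through 23.7 with illustrative inputs (all numbers below are assumed for demonstration only):

```python
D, B = 50.0, 50.0          # kmol/hr distillate and bottoms (assumed)
R, V_B = 3.0, 3.5          # reflux and boilup ratios (assumed)
dH_vap = 32000.0           # kJ/kmol, average heat of vaporization (assumed)
dH_steam = 2150.0          # kJ/kg, latent heat of saturated steam (assumed)
Cp_water = 4.18            # kJ/(kg*K), liquid water
T_in, T_out = 30.0, 45.0   # cooling water in/out, deg C (assumed)

Q_C = D * (R + 1.0) * dH_vap                 # Eq 23.3, total condenser, kJ/hr
Q_R = B * V_B * dH_vap                       # Eq 23.4, partial reboiler, kJ/hr
m_cw = Q_C / (Cp_water * (T_out - T_in))     # Eq 23.6, kg/hr cooling water
m_s = Q_R / dH_steam                         # Eq 23.7, kg/hr saturated steam
print(f"Q_C = {Q_C:.3e} kJ/hr, Q_R = {Q_R:.3e} kJ/hr")
print(f"cooling water = {m_cw:.0f} kg/hr, steam = {m_s:.0f} kg/hr")
```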
Distillation Packed Column Depth
$\rm HETP$ = height equivalent to a theoretical plate (length)
$H_{OG}$ = overall height of a gas/liquid transfer unit, on a gas phase basis (length)
$\lambda$ = ratio of the local slope of the equilibrium curve to the local slope of the operating line (unitless)
${\rm HETP}=H_{OG}\frac{\ln\lambda}{\lambda-1} \tag{24.1}$
Example
We aim to distill benzene and toluene to a distillate that contains 95 mol% benzene and a bottoms stream that contains 95 mol% toluene. The feed stream is 100 kmol/hr of an equimolar mixture with q = 0.50. We will be operating at 1.0 atm, $R/R_{\rm min}$ of 1.8 with a packed column containing 25-mm metal Bialecki rings. Assume operating at 70% of the flooding velocity. What depth of packing is needed to achieve this separation?
• For Antoine equation of the form $\log_{10}{p^*} = A - B/(T+C)$, where $T$ is in °C and $p^*$ is in mmHg
• Benzene: $A = 6.89$, $B = 1204$, $C = 220$
• Toluene: $A = 6.96$, $B = 1350$, $C = 220$
• 25-mm metal Bialecki rings: $a = 210$ m$^2$/m$^3$, $\epsilon = 0.956$, $C_h = 0.692$, $C_P = 0.891$, $C_L = 1.461$, $C_V = 0.331$, $C_S = 2.521$
• Toluene: ${\rm MW} = 92.14$, $\rho_L = 0.87$ g/mL, $\mu_L = 0.590$ cP, $\sigma_L = 27.73$ dyne/cm
• Benzene: ${\rm MW} = 78.11$, $\rho_L = 0.88$ g/mL, $\mu_L = 0.652$ cP, $\sigma_L = 28.88$ dyne/cm
• $D_L = 1.85\times10^{-5}$ cm$^2$/s (Table 3.4, Seader)
• $D_V = 0.0565$ cm$^2$/s (estimated via eq 3-36, Seader)
• $\mu_V = 0.0133$ cP, estimated from online gas viscosity calculator (LMNO Engineering) as a function of T (94°C)
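One piece of this example that scripts easily is the relative volatility from the Antoine coefficients (Equation 22.2 with both activity coefficients taken as 1, a reasonable ideal-solution assumption for benzene/toluene). A Python sketch, evaluated at the 94°C noted above:

```python
def p_sat_mmHg(A, B, C, T_C):
    """Antoine equation: log10(p*) = A - B/(T + C), T in deg C, p* in mmHg."""
    return 10.0 ** (A - B / (T_C + C))

T = 94.0   # deg C, roughly the average column temperature noted above
p_benzene = p_sat_mmHg(6.89, 1204.0, 220.0, T)
p_toluene = p_sat_mmHg(6.96, 1350.0, 220.0, T)
alpha = p_benzene / p_toluene   # Eq 22.2 with gamma_LK = gamma_HK = 1
print(f"p*_benzene = {p_benzene:.0f} mmHg, p*_toluene = {p_toluene:.0f} mmHg")
print(f"alpha = {alpha:.2f}")
```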
Introduction to Membrane Processes and Modeling Porous Membranes
$\Delta P$ = pressure driving force across the membrane (pressure)
$\Delta z$ = membrane thickness (length)
$\epsilon$ = membrane porosity; volume of pores per unit volume of membrane (unitless)
$\mu$ = permeate viscosity (cP) or (mass length-1 time-1)
$\rho$ = fluid density (mass volume-1)
$\tau$ = membrane tortuosity factor (>1)
$A_M$ = membrane cross-sectional area (area)
$a_V$ = total pore surface area per volume of membrane solid material (area volume-1)
$D$ = pore diameter (length)
$d_H$ = hydraulic pore diameter (length)
$J$ = permeate flux (vol area-1 time-1) or (length time-1)
$k$ = permeability of component i (length2)
$L$ = pore length
$l_M$ = membrane thickness (length)
$n$ = number of pores per unit of flow area (i.e. top down, not a cross-section) of membrane
$N_i$ = molar flux through the membrane per unit area (mol time-1 area-1)
$N_{\rm Re}$ = Reynolds number (unitless)
$P$ = pressure (pressure)
$P_0$ = pressure at the surface of the pore (pressure)
$P_L$ = pressure at position L within the membrane pore (pressure)
$P_{M_i}$ = permeability of the membrane to species i (length2 time-1)
$\overline P_{M_i}$ = permeance of the membrane to species i (length time-1)
$R_c$ = resistance of the filter cake (length-1)
$R_i$ = resistance of component i (length-1)
$R_m$ = resistance of the membrane (length-1)
$t$ = time (time)
$u$ = superficial velocity of the permeate (length time-1)
$v$ = flow velocity (length time-1)
$V(t)$ = cumulative volume of permeate collected since the start of the filtration (volume)
$z$ = direction of flux (length)
General Flux Equation
$N_i=\left(\frac{P_{M_i}}{l_M}\right)*[\textrm {driving force}]=\overline P_{M_i}*[\textrm {driving force}] \tag{25.1}$
Dead-end Filtration
$J=\frac{1}{A_M}\frac{dV(t)}{dt}=\left(\frac{-k}{\mu}\right)\frac{dP}{dz} \tag{25.2}$
$R_i = \dfrac{\Delta z_i}{k_i} \tag{25.3}$
$u=J=\frac{\Delta P}{\mu(R_m+R_c)} \tag{25.4}$
Modeling of Porous Membranes
Hagen-Poiseuille Law
$v=\frac{D^2}{32\mu L}(P_0-P_L) \tag{25.5}$
*eq 25.5 is restricted to $N_{\rm Re} < 2100$, where $N_{\rm Re} = Dv\rho/\mu$
$\epsilon = \frac{n\pi D^2}{4} \tag{25.6}$
Ideal porous membrane: straight pores of uniform diameter
$N=v\rho\epsilon=\frac{D^2\rho\epsilon}{32\mu\; l_M}(P_0-P_L)=\frac{n\pi D^4\rho}{128\mu\; l_M}(P_0-P_L) \tag{25.7}$
Compensation for tortuous pores and variation in pore diameter
$L = l_M \tau \tag{25.8}$
$d_H = \dfrac{4 \epsilon}{a_V (1 - \epsilon)} \tag{25.9}$
$N=\frac{\rho\epsilon^3(P_0-P_L)}{2(1-\epsilon)^2\tau a_V^2 \mu \; l_M} \tag{25.10}$
Example
A membrane of thickness 0.003 cm will be used to filter room-temperature water. In order to justify the cost of the membrane, we need to filter 200 m$^3$ of water every day per m$^2$ of membrane purchased. We are able to maintain a pressure of 50 kPa on the permeate side. What pressure do we need to apply on the retentate side? Ignore any resistance from the retentate. Assume operation with an ideal porous membrane with a porosity of 35% and pore diameter of 0.2 μm.
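This example reduces to rearranging the ideal porous membrane flux relation (Equations 25.5 and 25.7) for the pressure difference. A Python sketch, taking the viscosity of room-temperature water as $1.0\times10^{-3}$ Pa·s (an assumed property value):

```python
J = 200.0 / 86400.0    # required flux: 200 m^3/day per m^2 of membrane -> m/s
mu = 1.0e-3            # Pa*s, room-temperature water (assumed)
l_M = 0.003e-2         # membrane thickness: 0.003 cm -> m
eps = 0.35             # porosity
D = 0.2e-6             # pore diameter, m
P_permeate = 50.0e3    # Pa

# Ideal porous membrane: volumetric flux J = v*eps, with v from
# Hagen-Poiseuille (Eq 25.5), so delta_P = 32*mu*l_M*J / (eps*D^2)
delta_P = 32.0 * mu * l_M * J / (eps * D**2)
P_retentate = P_permeate + delta_P
print(f"delta_P = {delta_P / 1e3:.0f} kPa")
print(f"required retentate pressure = {P_retentate / 1e3:.0f} kPa")
```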
Porous Membranes
$\alpha$ = internal parameter (length mass-1)
$\Delta P$ = pressure driving force across the membrane (pressure)
$\Delta P_{UL}$ = pressure across the membrane during the constant pressure segment of combined operation (pressure)
$\epsilon_c$ = filter cake porosity; volume of void space per unit volume of filter cake (unitless)
$\mu$ = permeate viscosity (cP) or (mass length-1 time-1)
$\rho_c$ = filter cake density (mass volume-1)
$A_c$ = surface area of the accumulated filter cake (area)
$A_M$ = surface area of the membrane (area)
$c_F$ = concentration of solid material per unit volume of feed (mass volume-1)
$D_p$ = effective diameter of cake particles (length)
$J$ = permeate flux (volume area-1 time-1) or (length time-1)
$K$ = internal parameter used in modeling constant pressure operation (volume2 hr-1)
$K_1$ = internal parameter, function of effective particle diameter (length-2)
$K_2$ = parameter used in modeling combined constant flux/constant pressure operation, equals $\alpha$ (length mass-1)
$l_c$ = thickness of accumulated filter cake (length)
$R_c$ = resistance of the accumulating filter cake (length-1)
$R_m$ = resistance of the membrane (length-1)
$t$ = time (time)
$t_{CF}$ = total time elapsed during the constant flux operation mode (time)
$u$ = target permeate flux value (volume time-1)
$V(t)$ = cumulative volume of permeate collected since the start of the filtration (volume)
$V_0$ = internal parameter for modeling constant pressure operation (volume)
$V_{CF}$ = total permeate collected during the constant flux operation mode (volume)
Resistance from Filter Cake
$R_c=\frac{150l_c(1-\epsilon_c)^2}{D_P^2\epsilon_c^3}=\frac{K_1l_c(1-\epsilon_c)^2}{\epsilon_c^3} \tag{26.1}$
$K_1=\frac{150}{D_P^2} \tag{26.2}$
for large, relatively flat membranes
$R_c(t)=\frac{K_1(1-\epsilon_c)c_FV(t)}{\epsilon_c^3\rho_cA_c}=\frac{\alpha c_FV(t)}{A_c} \tag{26.3}$
An equation for the cake resistance, $R_c(t)$, for capillary or hollow fiber membranes is given in Seader (14-22).
Operation with Constant Pressure (Flux Decreases with Time)
$\frac{t}{V(t)}=\frac{V(t)+2V_0}{K} \tag{26.4}$
$K = \dfrac{2A_c^2 \Delta P}{\alpha c_F \mu} \tag{26.5}$
$V_0 = \dfrac{R_m A_c}{\alpha c_F} \tag{26.6}$
$\alpha = \dfrac{K_1(1- \epsilon_c)}{\epsilon_c^3 \rho_c} \tag{26.7}$
$K_1 = \dfrac{150}{D_P^2} \tag{26.8}$
Operation with Constant Flux (Applied Pressure Drop Increases with Time)
$\Delta P(t) = \left( \dfrac{\alpha c_F \mu}{A_c^2} \right) u^2t + \left( \dfrac{R_m \mu}{A_c} \right) u \tag{26.9}$
Combined Operation: Constant Flux to Maximum Pressure Drop, Then Continue at Constant Pressure with Decreasing Flux
$V(t)=\frac{-R_mA_M}{K_2c_F} + \left[\left(\frac{A_MR_m}{K_2c_F}\right)^2+\frac{2A_M}{K_2c_F}\left(R_mV_{CF}+\frac{0.5K_2c_FV_{CF}^2}{A_M}+\frac{A_M\Delta P_{UL}\left(t-t_{CF}\right)}{\mu}\right)\right]^{0.5} \tag{26.10}$
$J(t)=\frac{A_M\Delta P_{UL}}{K_2c_F\mu} \left[\left(\frac{A_MR_m}{K_2c_F}\right)^2+\frac{2A_M}{K_2c_F}\left(R_mV_{CF}+\frac{0.5K_2c_FV_{CF}^2}{A_M}+\frac{A_M\Delta P_{UL}\left(t-t_{CF}\right)}{\mu}\right)\right]^{-0.5} \tag{26.11}$
$K_2=\alpha \tag{26.12}$
Example
We aim to use a flat, porous membrane to filter milk. The membrane has a surface area of 17.3 cm$^2$; the membrane resistance and thickness are not known. The milk contains 4.3 kg/m$^3$ solids and has a viscosity of 0.001 Pa-s. We previously filtered this milk at an applied pressure drop of 20 psi and the following data were collected:
Time (hr) 0.5 1.0 1.5 2.0
Total volume collected (L) 0.31 0.40 0.53 0.61
1. How much filtered milk could be collected over a 12-hour period if we operate at $\Delta P = 20$ psi?
2. How much filtered milk could be collected over a 12-hour period if we operate at $\Delta P = 40$ psi?
3. If we operate in constant flux mode at 0.1 L/hr, how long will it take to reach our maximum allowable $\Delta P$ of 40 psi? How much permeate would be collected during this time?
4. If we operate in combined mode for 24 hours with a constant flux of 0.1 L/hr and maximum allowable pressure drop of 40 psi, how much permeate would be collected?
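Parts 1 and 2 hinge on fitting Equation 26.4 to the bench data: $t/V(t)$ is linear in $V(t)$, with slope $1/K$ and intercept $2V_0/K$. A Python sketch of the fit and the 12-hour predictions (part 2 reuses the 20-psi fit because $K$ scales linearly with $\Delta P$ through Eq 26.5):

```python
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0])        # hr
V = np.array([0.31, 0.40, 0.53, 0.61])    # L

# Eq 26.4 rearranged: t/V = (1/K)*V + (2*V0/K), linear in V
slope, intercept = np.polyfit(V, t / V, 1)
K = 1.0 / slope              # L^2/hr at 20 psi
V0 = intercept * K / 2.0     # L

def V_of_t(t_hr, K):
    """Positive root of V^2 + 2*V0*V - K*t = 0 (Eq 26.4 rearranged)."""
    return -V0 + np.sqrt(V0**2 + K * t_hr)

print(f"K = {K:.3f} L^2/hr, V0 = {V0:.3f} L")
print(f"12 hr at 20 psi: {V_of_t(12.0, K):.2f} L")
print(f"12 hr at 40 psi: {V_of_t(12.0, 2.0 * K):.2f} L")   # K scales with dP
```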
Nonporous Membranes Gas Permeation
$\alpha^*_{A,B}$ = ideal separation factor of species A and B (unitless)
$\alpha_{A,B}$ = actual separation factor of species A and B (unitless)
$D_i$ = diffusivity of species i in the membrane (length2 time-1)
$H_i$ = Henry’s Law coefficient of species i in the membrane (mol volume-1 pressure-1)
$l_M$ = membrane thickness (length)
$N_i$ = molar transmembrane flux of species i (mol area-1 time-1)
$P_F$ = total pressure of the feed (pressure)
$p_{i_0}$ = partial pressure of species i at the membrane on the feed side (pressure)
$p_{i_F}$ = partial pressure of species i in the bulk feed (pressure)
$p_{i_L}$ = partial pressure of species i at the membrane on the permeate side (pressure)
$p_{i_P}$ = partial pressure of species i in the bulk permeate (pressure)
$P_{M_i}$ = permeability of the membrane to species i (length2 time-1)
$\overline P_{M_i}$ = permeance of the membrane to species i (length time-1)
$P_P$ = total pressure of the permeate (pressure)
$r$ = pressure ratio (unitless)
$x_i$ = mole fraction of species i on the feed side
$y_i$ = mole fraction of species i in the permeate
Gas through a non-porous membrane
$N_i=\frac{H_iD_i}{l_M}(p_{i_0}-p_{i_L}) \tag{27.1}$
if film resistance is negligible
$N_i=\frac{H_iD_i}{l_M}(p_{i_F}-p_{i_P}) \tag{27.2}$
$\alpha_{A,B} = \dfrac{y_A / x_A}{y_B/x_B} \tag{27.3}$
$\alpha^*_{A,B}=\frac{H_AD_A}{H_BD_B}=\frac{P_{M_A}}{P_{M_B}} \tag{27.4}$
$\alpha_{A,B}=\alpha^*_{A,B}\left[\frac{(x_B/y_B)-r\alpha_{A,B}}{(x_B/y_B)-r}\right] \tag{27.5}$
$r = P_P/P_F \tag{27.6}$
when A and B are the only components of the feed and permeate, so that
$x_A + x_B = y_A + y_B = 1 \tag{27.7}$
$\alpha_{A,B}=\alpha^*_{A,B}\left[\frac{x_A(\alpha_{A,B}-1)+1-r\alpha_{A,B}}{x_A(\alpha_{A,B}-1)+1-r}\right] \tag{27.8}$
Example
A certain membrane has an ideal separation factor of 5.12 for O2 (A) and N2 (B). It has been proposed to use this membrane to separate O2 from air. If our feed pressure is 5.0 atm and our permeate pressure is maintained at 0.25 atm, what is the composition of our product gas?
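Equation 27.8 is implicit in $\alpha_{A,B}$ but converges quickly under fixed-point iteration. A Python sketch for this example, taking air as 20.9 mol% O2 (an assumed feed composition, since the problem says only that the feed is air):

```python
alpha_star = 5.12   # ideal separation factor, O2/N2
x_A = 0.209         # mole fraction O2 in the air feed (assumed)
r = 0.25 / 5.0      # pressure ratio, Eq 27.6

# Fixed-point iteration on Eq 27.8
alpha = alpha_star
for _ in range(100):
    prev = alpha
    alpha = alpha_star * (x_A * (alpha - 1.0) + 1.0 - r * alpha) \
                       / (x_A * (alpha - 1.0) + 1.0 - r)
    if abs(alpha - prev) < 1e-12:
        break

# Binary system: y_A/(1-y_A) = alpha * x_A/(1-x_A), from Eq 27.3
ratio = alpha * x_A / (1.0 - x_A)
y_A = ratio / (1.0 + ratio)
print(f"alpha = {alpha:.3f}, permeate O2 mole fraction = {y_A:.3f}")
```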
Dialysis
$(\Delta c_i)_{\rm LM}$ = log mean concentration difference (mol volume-1)
$A_M$ = area of membrane cross-sectional to the flow path (area)
$c_{i_F}$ = concentration of species i on the feed side of the membrane (mol volume-1)
$c_{i_P}$ = concentration of species i on the permeate side of the membrane (mol volume-1)
$c_{i_R}$ = concentration of species i in the retentate (mol volume-1)
$c_{i,\rm wash}$ = concentration of species i in the wash solution (mol volume-1)
$k_{i_F}$ = mass transfer coefficient of species i in the feed (length time-1)
$k_{i_P}$ = mass transfer coefficient of species i in the permeate (length time-1)
$K_i$ = overall mass transfer coefficient of species i (length time-1)
$l_M$ = thickness of the membrane (length)
$n_i$ = rate of mass transfer of species i (mol time-1)
$P_{M_i}$ = permeability of the membrane to species i (length2 time-1)
Transport across a small membrane segment
$dn_i=K_i(c_{i_F}-c_{i_P})dA_M \tag{28.1}$
$\frac{1}{K_i}=\frac{1}{k_{i_F}}+\frac{l_M}{P_{M_i}}+\frac{1}{k_{i_P}} \tag{28.2}$
$n_i=K_iA_M(\Delta c_i)_{\rm LM} \tag{28.3}$
For counter-current operation:
$(\Delta c_i)_{\rm LM} = \dfrac{(c_{i_F} - c_{i_P}) - (c_{i_R} - c_{i, \rm wash})}{\ln \left[ \frac{(c_{i_F} - c_{i_P})}{(c_{i_R} - c_{i, \rm wash})} \right]} \tag{28.4}$
Water (solvent) transport number = water (solvent) flux / solute flux
Example
We aim to recover 30% of the H2SO4 from a 0.78 m$^3$/hr feed containing 300 kg/m$^3$ of H2SO4 and smaller amounts of CuSO4 and NiSO4. We have up to 1.0 m$^3$/hr of water available as a wash stream. The process is to run counter-current and at 25°C. The available membrane has an H2SO4 permeance of 0.025 cm/min, negligible permeance to the other sulfates, and a water transport number (mass) of +1.5. Previous experience suggests that $1/k_F + 1/k_P = 1/(0.020\ \textrm{cm/min})$. What is the required membrane area and the volumetric flowrate of the two streams exiting the dialysis unit?
Reverse Osmosis
$\Delta P$ = pressure drop across the membrane (pressure)
$\Delta \pi$ = osmotic pressure drop across the membrane (pressure)
$\gamma^1_A$ = activity coefficient of the solvent on the feed/retentate side (unitless)
$\Gamma$ = concentration polarization factor (unitless)
$\mu$ = viscosity of the feed solution (cP)
$\pi$ = osmotic pressure (pressure)
$\rho$ = density of the feed solution (mass volume-1)
$a$ = internal parameter used in estimating $k_i$ (unitless)
$b$ = internal parameter used in estimating $k_i$ (unitless)
$c^1_i$ = concentration of species i in feed/permeate (mol volume-1)
$(c_{\rm salt})_{\rm permeate}$ = concentration of solute in the permeate (mol volume-1) or (mass volume-1)
$(c_{\rm salt})_{\rm feed}$ = concentration of solute in the feed (mol volume-1) or (mass volume-1)
$d$ = internal parameter used in estimating $k_i$ (unitless)
$D$ = tube diameter (length)
$d_H$ = hydraulic diameter (length)
$D_i$ = diffusivity of species i in the indicated solvent (length2 time-1)
$h$ = height of flow channel (length)
$k_i$ = mass transfer coefficient of species i (length time-1)
$N_{A}$ = molar flux of solvent A through the membrane (mol area-1 time-1)
$N_{\rm Re}$ = Reynolds number of the feed/retentate (unitless)
$N_{\rm Sc}$ = Schmidt number of the feed/retentate (unitless)
$R$ = universal gas constant (pressure volume mol-1 temperature-1) or (energy mol-1 temperature-1)
$SP$ = salt (solute) passage number (unitless)
$SR$ = salt (solute) rejection factor (unitless)
$T$ = system temperature (temperature)
$v$ = velocity of the feed (length time-1)
$v_{A_L}$ = specific volume of the solvent (volume mol-1)
$w$ = width of flow channel (length)
$x^1_i$ = mole fraction of species i on the feed/retentate side (unitless)
$\pi = \frac{-RT}{v_{A_L}}\ln{(x_A^1\gamma_A^1)} \tag{29.1}$
when $\gamma^1_A \sim 1$ and $\ln(1-x^1_B) \sim -x^1_B$ then
$\pi = \frac{RTx_B^1}{v_{A_L}} \tag{29.2}$
if $x_B$ is sufficiently small, then $x_B/v_{AL}=c_B$ and
$\pi \sim RTc_B^1 \tag{29.3}$
$N_A=\frac{P_{M_A}}{l_M}(\Delta P-\Delta\pi) \tag{29.4}$
$k_i=\frac{aN_{\rm Re}^bN_{\rm Sc}^{0.33}(d_H/L)^d}{(d_H/D_i)} \tag{29.5}$
$N_{\rm Re}=\frac{d_Hv\rho}{\mu} \tag{29.6}$
$N_{\rm Sc}=\frac{\mu}{\rho D_i} \tag{29.7}$
for a circular tube, $d_H=D$
for a rectangular channel, $d_H=2hw/(h+w)$
during turbulent flow $(N_{\rm Re} > 10,000)$ $a = 0.023, b = 0.8, d = 0$
during laminar flow, circular tube $(N_{\rm Re} < 2,100)$ $a = 1.86, b = 0.33, d = 0.33$
during laminar flow, rectangular channel $(N_{\rm Re} < 2,100)$ $a = 1.62, b = 0.33, d = 0.33$
$SP=\frac{(c_{\rm salt})_{\rm permeate}}{(c_{\rm salt})_{\rm feed}} \tag{29.8}$
$SR=1-SP \tag{29.9}$
$\Gamma=\frac{N_{\rm solvent}(SR)}{k_{\rm solute}} \tag{29.10}$
Watch a video from LearnChemE that explains osmotic pressure: Osmotic Pressure Derivation (5:00).
Example
We intend to use reverse osmosis on a feed stream containing 1.8 wt% NaCl to produce water containing 0.05 wt% NaCl. The separation is to take place at 25°C with a feed side pressure of 1,000 psia and a permeate-side pressure of 50 psia. The proposed membrane has permeance of $1.1\times 10^{-5}$ g/cm2-s-atm for water.
(a) Ignoring resistances to mass transfer, how much water can be produced per day per unit area of membrane?
(b) If $k_{\rm salt}$ = 0.005 cm/s, what is the concentration polarization factor?
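Part (a) reduces to Equation 29.4 with the osmotic pressures estimated from Equation 29.3. The sketch below uses the van't Hoff form with full dissociation of NaCl (a factor of 2 on the molar concentration) and treats the dilute solutions as having the density of water; both are assumptions of this sketch:

```python
R, T = 0.08206, 298.15   # L*atm/(mol*K), K
MW_NaCl = 58.44          # g/mol

def osmotic_pressure_atm(wt_percent):
    """pi ~ i*c*R*T (Eq 29.3); solution density ~ 1 g/mL assumed."""
    c = wt_percent * 10.0 / MW_NaCl   # wt% -> g/L -> mol/L
    return 2.0 * c * R * T            # i = 2 for fully dissociated NaCl

pi_feed = osmotic_pressure_atm(1.8)
pi_perm = osmotic_pressure_atm(0.05)
dP = (1000.0 - 50.0) / 14.696   # psia -> atm
permeance = 1.1e-5              # g/(cm^2*s*atm), water

N_A = permeance * (dP - (pi_feed - pi_perm))   # Eq 29.4, g/(cm^2*s)
per_day = N_A * 86400.0 * 1.0e4 / 1.0e6        # -> m^3 water per m^2 per day
print(f"pi_feed = {pi_feed:.1f} atm, pi_permeate = {pi_perm:.2f} atm")
print(f"flux = {N_A:.2e} g/(cm^2*s), about {per_day:.2f} m^3/(m^2*day)")
```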
Pervaporation
$\gamma_i$ = activity coefficient of species i (unitless)
$A_{12},A_{21}$ = system-specific parameters for the van Laar model
$l_M$ = membrane thickness (length)
$P^{\rm sat}_i$ = saturated vapor pressure of species i, function of temperature and Antoine equation coefficients (pressure)
$P_{M_i}$ = permeability of the membrane to species i (length2 time-1)
$P_P$ = total pressure on permeate side (pressure)
$x_i$ = mole fraction of species i on the feed/retentate side (unitless)
$y_i$ = mole fraction of species i on the permeate side (unitless)
$N_i=\frac{P_{M_i}}{l_M}\left(\gamma_ix_iP_i^{\rm sat}-y_iP_P\right) \tag{30.1}$
van Laar model for activity coefficients, binary system
$\ln\gamma_1=\frac{A_{12}}{\left[1+\frac{x_1A_{12}}{x_2A_{21}}\right]^2} \tag{30.2}$
$\ln\gamma_2=\frac{A_{21}}{\left[1+\frac{x_2A_{21}}{x_1A_{12}}\right]^2} \tag{30.3}$
Example
We have obtained a potential pervaporation membrane and we aim to use it to separate ethanol and water. We tested this system at 60°C, a permeate pressure of 76 mmHg and a feed containing 8.8 wt% EtOH. Using 1.0 cm$^2$ of membrane, we collected 0.25 g/hr of permeate that was found to contain 10.0 wt% EtOH.
(a) What is the permeance of our membrane to ethanol and to water?
(b) What is the expected product composition and flowrate (g/hr) for a feed containing 10.0 wt% EtOH at 60°C with operation at a permeate pressure of 100 mmHg?
• $A_{\rm EtOH,H_2O} = 1.6276$
• $A_{\rm H_2O,EtOH} = 0.9232$
• $P^{\rm sat}_{\rm EtOH}$(60°C) = 352 mmHg
• $P^{\rm sat}_{\rm H_2O}$(60°C) = 149 mmHg
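The activity coefficients needed in Equation 30.1 come from the van Laar model (Equations 30.2 and 30.3). A Python sketch evaluating them at the part (b) feed of 10.0 wt% EtOH:

```python
import math

A12, A21 = 1.6276, 0.9232   # van Laar parameters, EtOH(1)/water(2)
MW1, MW2 = 46.07, 18.02     # g/mol

def mole_fraction_EtOH(wt_pct):
    n1 = wt_pct / MW1
    n2 = (100.0 - wt_pct) / MW2
    return n1 / (n1 + n2)

def van_laar(x1):
    """Eqs 30.2 and 30.3 for a binary system."""
    x2 = 1.0 - x1
    g1 = math.exp(A12 / (1.0 + x1 * A12 / (x2 * A21)) ** 2)
    g2 = math.exp(A21 / (1.0 + x2 * A21 / (x1 * A12)) ** 2)
    return g1, g2

x1 = mole_fraction_EtOH(10.0)
g1, g2 = van_laar(x1)
print(f"x_EtOH = {x1:.4f}, gamma_EtOH = {g1:.2f}, gamma_H2O = {g2:.3f}")
# These gammas, the Antoine vapor pressures above, and the permeances
# fitted in part (a) then feed directly into Eq 30.1.
```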
Adsorption, Ion Exchange, and Chromatography
$c_i$ = concentration of species i in the mobile phase (mass volume-1) or (mole volume-1)
$k_i$ = empirical constant for species i for isotherms (units vary)
$K_i$ = adsorption equilibrium constant for species i
$n_i$= internal parameter for isotherms (units vary)
$p_i$= partial pressure of species i (pressure)
$q_i$= amount of species i adsorbed per unit mass of adsorbent at equilibrium (mass mass-1) or (mole mass-1)
$q_{m_i}$ = amount of species i adsorbed per unit mass of adsorbent at maximum loading, where maximum loading corresponds to complete surface coverage (mass mass-1) or (mole mass-1)
linear isotherm:
$q_i=k_ip_i \tag{31.1}$
Freundlich isotherm:
$q_i=k_ip_i^{1/n_i} \tag{31.2}$
Langmuir isotherm:
$q_i=\frac{K_iq_{m_i}p_i}{1+K_ip_i} \tag{31.3}$
chromatography equilibrium:
$K_i=\frac{q_i}{c_i} \tag{31.4}$
Watch a video from LearnChemE for an explanation about the concept of adsorption: Adsorption Introduction (8:49)
Modeling Differential Chromatography
$\alpha_i$ = average partitioning of species i between the bulk fluid and sorbent (unitless)
$\epsilon_b$ = sorbent porosity, ranges from 0 to 1 (unitless)
$\epsilon^*_{p,i}$ = inclusion porosity, accounts for accessibility of sorbent pores to species i (unitless)
$\tau_f$ = sorbent tortuosity factor, usually approximately 1.4 (unitless)
$\omega_i$ = fraction of solute in the mobile phase, relative to sorbed solute, at equilibrium (unitless)
$A$ = cross-sectional area of the column (area)
$c_{f,i}$ = concentration of species i in the mobile phase (mass volume-1) or (mol volume-1)
$D_{e,i}$ = effective diffusivity of species i within the sorbent pores (length2 time-1)
$E_i$ = coefficient that accounts for axial diffusion of species i and non-uniformities of flow (length2 time-1)
$H_i$ = height of theoretical chromatographic plate for species i (length)
$k_{a,i}$ = kinetic rate constant of adsorption of species i to the sorbent (time-1)
$k_{c,i}$ = mass transfer coefficient of species i in the mobile phase (length time-1)
$k_{c,i,tot}$ = overall mass transfer coefficient of species i (length time-1)
$K_{d,i}$ = equilibrium distribution coefficient of species i between the mobile phase and sorbent (unitless)
$L$ = length of column (length)
$m_{0_i}$ = amount of solute i fed to column (mass) or (mol)
$R_{1,2}$ = resolution of species 1 and 2 in the proposed operating condition (unitless)
$R_p$ = radius of sorbent particles (length)
$s_i$ = standard deviation of the Gaussian peak of the distribution of species i along the column length (time)
$t$ = elapsed time since loading of the column (time)
${\overline t}_i$ = mean residence time of species i in the column (time)
$u$ = actual fluid velocity through the bed (length time-1)
$u_s$ = superficial fluid velocity through the bed (length time-1)
$z$ = position along the length of the column, in the direction of flow (length)
$z_{0,i}$ = mean position of species i along the length of the column as a function of time (length)
$z_{0,i}(t)=\omega_iut \tag{32.1}$
$\omega_i=\frac{1}{1+\frac{1-\epsilon_b}{\epsilon_b\alpha_i}} \tag{32.2}$
$\alpha_i=\frac{1}{\epsilon_{p,i}^*(1+K_{d,i})} \tag{32.3}$
$u=u_s/\epsilon_b \tag{32.4}$
$\overline t_i=\frac{L}{\omega_iu} \tag{32.5}$
$c_{f,i}(z,t)=\frac{m_{0_i}\omega_i}{A\epsilon_b(2\pi H_iz_0)^{0.5}}\textrm {exp}\left(\frac{-(z-z_0)^2}{2H_iz_0}\right) \tag{32.6}$
$H_i=2\left[\frac{E_i}{u}+\frac{\omega_i(1-\omega_i)R_pu}{3\alpha_ik_{ci,tot}}\right] \tag{32.7}$
$N_{{\rm Pe},i}=N_{\rm Re}N_{{\rm Sc},i}=\frac{2R_pu\epsilon_b}{D_i} \tag{32.8}$
if $N_{{\rm Pe},i}<<1$
$E_i=\frac{D_i}{\tau_f} \tag{32.9}$
else
$E_i=\dfrac{2R_pu \epsilon_b}{N_{{\rm Pe},E,i}} \tag{32.10}$
$N_{{\rm Pe},E,i}$ calculated by 15-61 or 15-62, Seader
$\frac{1}{k_{ci,tot}}=\frac{1}{k_{c,i}}+\frac{R_p}{5\epsilon_{p,i}^*D_{e,i}}+\frac{3}{R_pk_{a,i}\epsilon_{p,i}^*}\left[\frac{K_{d,i}}{1+K_{d,i}}\right]^2 \tag{32.11}$
$s_i^2=\frac{\overline t_iH_i}{\omega_iu} \tag{32.12}$
$R_{1,2}=\frac{\textrm {abs}(\overline t_1-\overline t_2)}{2(s_1+s_2)} \tag{32.13}$
Example
1.0 g of species A is added to a chromatography column of cross-sectional area 1.0 m$^2$ and length 1.0 m. Mobile phase is added at a flowrate of $4.0 \times 10^{-3}$ m$^3$/s. Species A has a mass transfer coefficient of $2.0 \times 10^{-5}$ m/s in this solvent. The selected sorbent has a porosity of 0.40 and average particle radius of $5.0\times 10^{-6}$ m. For species A in this sorbent, the inclusion porosity is 0.80, $K_d = 50$, $E = 2.0\times 10^{-8}$ m$^2$/s, $k_a = 100$ s$^{-1}$ and the effective diffusivity is $3.5\times 10^{-12}$ m$^2$/s.
(a) What is the mean expected elution time for species A?
(b) Plot the concentration profile for species A at 0.05 m increments along the column length in 10-minute increments, until all of the solute has eluted.
(c) Find the variance of the peak for species A in the proposed operating condition.
(d) The column feed also contains 1.0 g of species B. Species B has a mass transfer coefficient of $1.0\times 10^{-5}$ m/s in the mobile phase, inclusion porosity of 0.50, $K_d = 60$, $E = 3.0\times 10^{-8}$ m$^2$/s, effective diffusivity of $4\times 10^{-12}$ m$^2$/s and $k_a = 200$ s$^{-1}$. What is the resolution of these two species in the proposed operating condition?
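Parts (a) and (c) follow directly from Equations 32.2 through 32.5, 32.7, 32.11, and 32.12; the axial-dispersion coefficient $E$ is given, so no Peclet-number branch is needed. A Python sketch for species A:

```python
import math

# Column and operating data
A_col, L_col = 1.0, 1.0   # m^2, m
Q = 4.0e-3                # m^3/s
eps_b = 0.40
R_p = 5.0e-6              # m, sorbent particle radius

# Species A data
k_c = 2.0e-5              # m/s, mobile-phase mass transfer coefficient
eps_p = 0.80              # inclusion porosity
K_d, E = 50.0, 2.0e-8     # -, m^2/s
k_a, D_e = 100.0, 3.5e-12 # 1/s, m^2/s

u = (Q / A_col) / eps_b                                 # Eq 32.4
alpha = 1.0 / (eps_p * (1.0 + K_d))                     # Eq 32.3
omega = 1.0 / (1.0 + (1.0 - eps_b) / (eps_b * alpha))   # Eq 32.2
t_bar = L_col / (omega * u)                             # Eq 32.5

# Overall mass transfer coefficient, Eq 32.11
k_tot = 1.0 / (1.0 / k_c
               + R_p / (5.0 * eps_p * D_e)
               + (3.0 / (R_p * k_a * eps_p)) * (K_d / (1.0 + K_d)) ** 2)

H = 2.0 * (E / u + omega * (1.0 - omega) * R_p * u
           / (3.0 * alpha * k_tot))                     # Eq 32.7
s2 = t_bar * H / (omega * u)                            # Eq 32.12
print(f"t_bar = {t_bar:.0f} s (~{t_bar / 60:.0f} min)")
print(f"H = {H:.2e} m, peak standard deviation = {math.sqrt(s2):.0f} s")
```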
This is a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. It is organized as it would be used for the process design of large-scale distillation columns, which may differ from the way topics are introduced to students academically. As a part of distillation column design, these articles also involve the vapor-liquid equilibria (VLE) of binary systems, with potential extension to multi-component systems. For application or other questions, the author can be contacted at [email protected].
• 1: Overview
While chlorosilanes and their electronic impurities are used as recurring examples, the technology developed here has great application in other mildly polar and hydrogenated compounds, which are normally excluded from theoretical treatment in many texts. The emphasis is on industrial level applications.
• 2: Vapor Pressure
This article deals with the pure component VP relationships commonly found in textbooks, as well as their limitations. This article lays the basis for subsequent articles on improving VP equations and tying in Equations of State.
• 3: Critical Properties and Acentric Factor
This article deals with the tabulation of these properties for selected fluids based on globally collected data. Data analysis and validation are discussed, as well as estimation techniques for those fluids for which data is either poor or non-existent. Critical properties are used to convert temperature, pressure, and specific volume from conventional units used in both chemistry and chemical engineering, to the reduced form.
• 4: New Vapor Pressure Equation
This article shows how a new vapor pressure equation form allows practice of distillation applications at the elevated pressures more common to industry. The culmination of this article is a thermodynamically consistent equation form that is valid between the atmospheric boiling point and the critical point, and which allows evaluation of other required distillation properties such as saturated phase densities and latent heat of vaporization.
• 5: Equation of State
This article deals with the recommendations for Equation Of State (EOS), as well as the mathematical techniques for solving such non-intrinsic EOS equation forms. The module explains why an EOS is needed to evaluate the pure-component physical properties that are used in distillation science, reviews the various options and makes a recommendation. Also included are techniques for solving the EOS cubic equations and some work-arounds near the critical point.
• 6: Fugacity
This article deals with the one of the two possible departures from ideal pure-component vapor pressure in binary mixtures, as is commonly found in practical application of distillation science.
• 7: Liquid Activity Coefficients
This article deals with Liquid Activity Coefficients, the second type of departure from pure-component vapor pressure. It describes the application and estimation of Liquid Activity Coefficients, as commonly found in practical application of distillation science. Various activity coefficient models are reviewed along with some limited data and a recommended estimation technique given where data is lacking.
• 8: VLE Analysis Methods
This article discusses the recommended methodology used when data-collecting binary systems, in order to assure that systemic errors are minimized. This topic also deals with validation whenever data collection is done on reactive fluids, which can disproportionate or dimerize during study.
• 9: Putting It All Together
This article shows how to combine the components of the above Distillation Science topics in a practical application, as well as illustrating why it is necessary to include departures from ideal behavior in real binary systems, as commonly encountered in industrial practice.
• 10: Convergence Strategy
This last article shows how VP equation solutions are best obtained for additional fluids in non-intrinsic or nested-loop equation forms, such as the recommended vapor pressure equations. While this topic is more mathematical or computer-science oriented than chemistry, it is a necessary technique to understand when dealing with modern technology.
Distillation Science (Coleman)
This is Part I, Overview of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. It is organized as it would be used for the industrial process design of distillation columns, which may differ from the way it is introduced to students academically. As a part of distillation column design, these articles also involve the vapor-liquid equilibria (VLE) of binary systems, with potential extension to multi-component systems.
It is assumed that the reader has a basic understanding of distillation, as a means of separating two or more volatile fluids by the difference between their boiling points; and also the purpose of a distillation column's various components. A variety of texts exist that explain these concepts on a basic level. From such pre-requisites, the purpose of this series of articles is to expand these basics to the degree necessary for commercial applications with real fluids - which typically do not follow ideal behavior. It is also assumed that the reader is familiar with the concepts of molar units, mole fractions, the bonding of chemical elements, vapor pressure, latent heat of vaporization, and a compound's critical point - all of which are found in college freshman-level textbooks. In discussing mathematical relationships, it is assumed that the concepts of differentiation and integration are known to the reader.
While chlorosilanes and their electronic impurities are used as recurring examples, the technology developed here has great application in other mildly polar and hydrogenated compounds, which are normally excluded from theoretical treatment in many texts. The chlorosilane homologue starts with silane (SiH4), and concludes with silicon tetrachloride (SiCl4), as the Si-H bonds are incrementally swapped for Si-Cl bonds.
Chlorosilanes (and many of the electronic impurities) are non-naturally occurring compounds, but are key to the manufacture of high-purity silicon (solar photovoltaics and modern electronic integrated circuits, aka “computer chips”); as well as silicon-based chemicals such as silicones and organic/inorganic coupling agents. Applications for distillation science are found in both bulk separations of these fluids, as well as the purification by high-reflux processing, to parts-per-billion levels.
Furthermore, this technology is applicable to a wide variety of other non-organic fluids such as refrigerants, biologic pre-cursors, and pharmaceutical pre-cursors (i.e., not the final biological or pharmaceutical product, but rather the compounds used as catalysts and building-block agents for assembling these specialized chemical products).
To make the discussion of Distillation Science more generic, temperature, pressure, and molar volume ($T$, $P$, and $V$) are often expressed in dimensionless reduced units, designated respectively as $T_r$, $P_r$, and $V_r$: $T_r = T/T_c$; $P_r = P/P_c$; $V_r = V/V_c$. By definition, $T_r = 1$, $P_r = 1$, and $V_r = 1$ simultaneously at the critical point. When temperature and pressure are given as $T$ and $P$, these are in absolute units of Kelvins and atmospheres. For those more familiar with pressures in other absolute units such as PSIA, bar absolute, and Pascals absolute, converting those units to absolute atmospheres is easily done using internet-accessible conversion tables and calculators.
• Part II, Existing Vapor Pressure Equations deals with the pure component VP relationships commonly found in textbooks, as well as their limitations. This article lays the basis for subsequent articles.
• Part III, Critical Properties and Acentric Factor deals with the tabulation of these properties for selected fluids based on globally collected data. Data analysis and validation are discussed, as well as estimation techniques for those fluids for which data is either poor or non-existent. Critical properties are used to convert temperature, pressure, and specific volume from conventional units used in both chemistry and chemical engineering, to the reduced form of Part IV. In Parts V and VII, the value of the acentric factor is important.
• Part IV, New Vapor Pressure Equation expands on the basics of Part II and the results of Part III, to show how a new vapor pressure equation allows the practice of distillation applications at the elevated pressures more common to industry. The culmination of this article is a thermodynamically consistent equation that is valid between the atmospheric boiling point and the critical point, and which allows evaluation of other required distillation properties such as saturated phase densities and latent heat of vaporization.
• Part V, Equation of State deals with the recommendations for best EOS, as well as the mathematical techniques for solving such non-intrinsic EOS equation forms.
• Part VI, Fugacity deals with the departure of apparent pure-component vapor pressure in binary mixtures, as is commonly found in practical application of distillation science. The equations for evaluating fugacity coefficents are given.
• Part VII, Liquid Activity Coefficients deals with the application and estimation of Liquid Activity Coefficients, as commonly found in practical application of distillation science. Various Liquid Activity Coefficient models are reviewed along with some limited data and a recommended estimation technique given where data is lacking.
• Part VIII, VLE Analysis Methods discusses the recommended methodology used when data-collecting binary systems, to assure that systemic errors are minimized. This topic also deals with validation whenever data collection is done on reactive fluids, which can disproportionate or dimerize during study.
• Part IX, Putting It All Together shows how to combine the components of the above distillation science articles in a practical application.
• Part X, Convergence Strategy illustrates how solutions are best obtained for additional fluids using non-intrinsic or nested-loop equation methods, such as the recommended vapor pressure equations. While this topic is more mathematical or computer-science oriented than chemistry-centered, it is a necessary technique to understand when dealing with modern technology. No complex mathematics are required.
Equation Notation Used
$NBP$ = Normal Boiling Point (°K)
$P$, $VP$ = Pressure, Vapor Pressure (atmospheres)
$P_c$ = Critical Pressure (atmospheres)
$P_r$ = Reduced Pressure (unitless)
$R$ = Gas constant (Cal/(g-mol·°K))
$T$ = Temperature (°K)
$T_b$ = Temperature @ Normal Boiling Point (°K)
$T_c$ = Critical Temperature (°K)
$T_r$ = Reduced Temperature (unitless)
$T_{br}$ = Reduced $T_b$ (unitless)
$V_c$ = Critical Volume (cc/g-mol)
$k$ = H bonding parameter (unitless)
$k_{ij}$ = Binary Interaction Coefficient (unitless)
$q$ = Watson exponent (unitless)
$\Delta H_{vap}$ = Latent heat of Vaporization (Cal/g-mol)
$\Delta H_{vb}$ = Latent heat @ $T_b$ (Cal/g-mol)
T-S $\Delta H_{vb}$ = Thek-Stiel VP equation parameter (Cal/g-mol)
$Z$ = compressibility (unitless)
$Z_v$ & $Z_L$ = compressibility of saturated vapor & liquid (unitless)
$\Delta Z_{vap}$ = $(Z_v - Z_L)$ @ vaporization (unitless)
$\alpha$ = Riedel derivative (unitless)
$\alpha_c$ = $\alpha$ @ critical (unitless)
$\psi$ = Vapor Pressure derivative (unitless)
$\phi$ = fugacity coefficient (unitless)
$\gamma$ = liquid activity coefficient (unitless)
$\omega$ = acentric factor (unitless)
Units
The units used in these articles are temperatures in °K, pressures in atmospheres (absolute), and molar volumes in cc/gram-mole. Molecular weight (MW), and compressibility (Z) are dimensionless. For the above units, the value of the gas constant (R) is 82.057 atm-cc/mole-°K.
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part II, Vapor Pressure of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature.
Part II, Vapor Pressure deals with the existing pure component vapor pressure (VP) equations normally found in textbooks. The content of this article is referred to in subsequent articles. The goal of this article is to explain the limitations of these less sophisticated VP equations, to show how best to use them, and to set up the introduction of a superior VP equation form in Part IV.
The original Clausius-Clapeyron equation relating VP, temperature, vaporization molar volume change (ΔVvap), and latent heat of vaporization (ΔHvap) dates back to the mid-19th century and is derived from thermodynamic principles. The derivation, given in many college freshman-level texts, follows from thermodynamic equilibrium between liquid and vapor phases. First, the differential of pressure with respect to system temperature (for both vapor and liquid) is re-arranged to:
$\frac{dP}{dT} = \frac{\Delta H_{vap}}{T\Delta V_{vap}} \label{2-1}$ If (assumption 1) the vaporization molar volume change (ΔVvap) is set equal to the saturated vapor volume (V) by assuming the boiling liquid's molar volume is essentially zero; and if (assumption 2) the "Ideal Gas Law" (PV=RT) is taken to hold for this saturated vapor, then in differential form Equation \ref{2-1} becomes:
$\frac{d(\ln P)}{d(1/T)} = \frac{\Delta H_{vap}}{RT} \label{2-2}$ When integrated, this results in the simplified Clausius-Clapeyron relationship that is found in most textbooks. However, there are limited conditions for which the above two assumptions are nearly correct: non-complex molecular structure and very low pressure (say, below atmospheric or very near atmospheric). For most compounds, and for most pressures encountered in normal industrial processes, neither of these assumptions holds very well and accuracy gets increasingly worse as pressure increases beyond atmospheric.
For real compounds and pressures normally encountered in industry, a correction factor called compressibility, $Z$, is applied to the "Ideal Gas Law" ($PV=RT$); the relationship then becomes:
$PV = ZRT \label{2-3}$ where compressibility Z is a function of that fluid's pressure, temperature, and other physical properties, as discussed in Part III. Note that Equation (\ref{2-3}) holds for both vapors and liquids: with Zv being vapor compressibility, ZL being liquid compressibility, and ΔZvap being the change in compressibility with vaporization. Now Equation (\ref{2-1}) can be transformed into a more usable relationship for all fluids and all sub-critical pressures. In differential form, it is:
$\frac{d(\ln P)}{d(1/T)} = \frac{\Delta H_{vap}}{\Delta Z_{vap}RT} \label{2-4}$
After integration between close temperatures T1 and T2 (so the ratio ΔHvap / ΔZvap can be taken as constant over that close range), Equation (\ref{2-4}) becomes:
$\ln(P_{2}/P_{1}) = \dfrac{\Delta H_{vap}}{\Delta Z_{vap}R} \times (1/T_{1}-1/T_{2}) \label{2-5}$
In order for this equation to be fairly accurate, it is important for $T_1$ and $T_2$ to be close, since both ΔHvap and ΔZvap are actually functions of temperature. Also note that if ΔZvap is set to unity, Equation (\ref{2-5}) becomes the same as the simplified integrated Clausius-Clapeyron equation: $\ln(P_{2}/P_{1}) = \dfrac{\Delta H_{vap}}{R} \times (1/T_{1}-1/T_{2}) \label{2-6}$
It can now be understood that ΔZvap measures the net effect of removing the above assumptions (1) and (2) that allowed Equation (\ref{2-2}). For most fluids near atmospheric pressure, ΔZvap is about 0.95 - 0.99, but as pressures increase toward a fluid's critical point (where vapor and liquid merge at a singularity), ΔZvap goes to zero. (But of course, ΔHvap also goes to zero at the critical point, so the ratio ΔHvap / ΔZvap becomes undefined at this singularity.)
The evaluation of ΔZvap normally requires use of an Equation of State (EOS), which is discussed in Part V. Using such an EOS to assess ΔZvap would allow vapor pressure vs temperature data to exactly align with ΔHvap values. In Part IV, Equation (\ref{2-4}) will be further developed and integrated without use of an EOS, up to pressures close to critical (usually about 95% of critical pressure in pure component systems).
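To make Equation (\ref{2-5}) concrete, the ratio ΔHvap/ΔZvap can be backed out of two closely spaced VP data points. The short Python sketch below is only an illustration, not part of the original procedure; it uses two of the PCl3 points from Table 2-1 in the worked example further below, and the assumed ΔZvap of 0.96 is simply a typical near-atmospheric value per the discussion above.

```python
import math

R = 1.987  # gas constant, cal/(g-mol K)

# Two closely spaced PCl3 data points (from Table 2-1, further below)
T1, P1 = 343.15, 86.526   # K, kPa
T2, P2 = 346.15, 95.192   # K, kPa

# Eq. 2-5 rearranged: dHvap/dZvap = R * ln(P2/P1) / (1/T1 - 1/T2)
# (pressure units cancel in the ratio, so kPa is fine)
ratio = R * math.log(P2 / P1) / (1.0 / T1 - 1.0 / T2)
print(f"dHvap/dZvap = {ratio:.0f} cal/g-mol")     # ~7,500 cal/g-mol

# With the assumed near-atmospheric dZvap of ~0.96 (see text):
print(f"dHvap ~ {0.96 * ratio:.0f} cal/g-mol")    # ~7,200 cal/g-mol
```

The result is a physically reasonable magnitude for a fluid like PCl3 near its boiling point, which is all this two-point shortcut can promise.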
Returning to the above integrated form of Clausius-Clapeyron, Equation (\ref{2-5}) suggests the commonly used empirical means to curve-fit VP data over small temperature/pressure ranges: $\ln(P) = A-B/T \label{2-7}$
The term "B" does not have any exact scientific significance, but works as a curve-fitting parameter and is normally shown with a negative sign, so as to have a positive value. Since Equation (\ref{2-7}) has a limited range of application to low pressures, it can be improved for use near ambient and slightly higher pressures with an empirical form for curve-fitting VP vs T, called the Antoine Equation. However the constants A, B, and C have no scientific basis either and the equation form can still only be used over modest ranges ( and low pressures).
$\ln(P) = A-\frac{B}{T+C} \label{2-8}$
Additional constants D, E, and F can be added to make the "extended Antoine" relationship for empirical curve-fitting; however, the constants still carry no scientific meaning.
$\ln(P) = A-\frac{B}{T+C}+D\times T+E\times T^2+F\times \ln(T) \label{2-9}$
From an industrial distillation column design perspective, even the extended Antoine VP relationships found in handbooks are not entirely adequate: they rarely reproduce the correct VP values at Tb, Tc, and at Tr = 0.7 (where the acentric factor is determined). That inadequacy undermines attempts to use modern Equations of State (EOS are discussed in Part V), forcing the use of the antiquated Van der Waals EOS. With any EOS, the extended Antoine VP relationship cannot connect latent heats and saturated (vapor and liquid) phase densities to vapor pressure. Additionally, none of these empirical VP equations reproduces the inflection point that thermodynamics requires of a real fluid's VP vs T plot, normally occurring between Tr = 0.7 and Tr = 0.85. For economic reasons, many industrial processes operate at these higher pressures, so using empirical VP equations can lead to poor distillation column design. This is especially true in modern industrial applications where complex computer programs have automated several aspects of distillation column design.
In designing such an industrial-level distillation column, the various process simulation packages (e.g., ASPEN, VMG, etc.) would then be fed rather poor VP estimations. It is not uncommon for such simulation software to become non-convergent or to have the problem solution get stuck on a singularity. In the vernacular of the early days of computing, "Garbage in - Garbage out".
The solution to this quandary is to find a better VP equation form that possesses all the criteria lacking in the VP equations shown in this article. A solution to that is proposed in Part IV.
However, Equation (\ref{2-7}) and Equation (\ref{2-8}) are not without merit, as long as they are used for the purpose they were derived: only for narrow ranges of temperature and at moderate pressures. Equation (\ref{2-8}) (Antoine) is especially useful in correlating data taken around the atmospheric boiling point, Tb, to get a more accurate value from several experimental data points, rather than just one. It also allows data from several sources to be inter-compared, as long as they cover a similar temperature range.
While evaluating the “A, B, and C” of Equation (\ref{2-8}) may seem daunting, the solution is readily managed using some algebraic manipulation and a spreadsheet multiple regression function (e.g., MS Excel). Equation (\ref{2-8}) is re-written as the algebraically equivalent
$\ln(P) = A+\frac{AC-B}{T}-\frac{C\ln(P)}{T} \label{2-10}$
Then VP vs T data (always as absolute pressure and temperature) are converted into three columns for each data set: the dependent variable is ln(P), and the two independent variables are 1/T and ln(P)/T. When the regression is run, the regression's intercept will equal Equation (\ref{2-8})'s "A"; the second independent variable's coefficient (i.e., for ln(P)/T) will equal the negative of Equation (\ref{2-8})'s "C"; and the value of Equation (\ref{2-8})'s "B" is determined algebraically from the first independent variable's coefficient (i.e., for 1/T), which equals (AC-B). See an example of this solution procedure in Table 2-1 below.
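As a cross-check on the spreadsheet procedure, the same regression is easy to run in Python with NumPy. The sketch below is an equivalent of (not a replacement for) the MS Excel method just described, fed with the Table 2-1 data from the example further below:

```python
import numpy as np

# PCl3 vapor pressure data of Table 2-1 (absolute T in K, VP in kPa)
T = np.array([361.85, 361.75, 358.55, 355.75, 352.95, 349.15, 348.55,
              348.45, 347.95, 346.15, 343.75, 343.15, 338.15, 337.55])
P = np.array([151.988, 151.988, 136.389, 126.790, 116.524, 104.525, 101.325,
              101.325, 101.325, 95.192, 88.259, 86.526, 73.461, 72.127])

lnP = np.log(P)
# Eq. 2-10 design matrix: intercept, 1/T, ln(P)/T
X = np.column_stack([np.ones_like(T), 1.0 / T, lnP / T])
coef, *_ = np.linalg.lstsq(X, lnP, rcond=None)

A = coef[0]           # intercept = Antoine "A"
C = -coef[2]          # ln(P)/T coefficient = -C  (C comes out negative here)
B = A * C - coef[1]   # 1/T coefficient = (A*C - B)
print(f"A = {A:.4f}, B = {B:.3f}, C = {C:.3f}")   # ~9.795, ~862.8, ~-181.4

# Invert Eq. 2-8 at 101.325 kPa for the normal boiling point:
Tb = B / (A - np.log(101.325)) - C
print(f"NBP = {Tb:.2f} K")                        # ~348.1 K
```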
This curve-fit parameter solution technique does not work with Equation (\ref{2-9}), so often the “Extended Antoine” equation is stated differently as
$\ln(P) = A-\frac{B}{T}+C\times \ln(T)+D\times T+E\times T^2 \label{2-11}$
requiring that a multiple linear regression (such as with MS Excel) be done with the four "independent" variables: 1/T, ln(T), T, and T². Assuming the temperature range is narrow, the curve-fit quality is almost always statistically better with Equation (\ref{2-8}) than with Equation (\ref{2-11}), since there are two fewer constants. If the range is broader, such as spanning from atmospheric to the half-way point of critical pressure, then Equation (\ref{2-11}) will give the better curve-fit.
When using experimental data to determine the normal boiling point (Tb), Equation (\ref{2-8}) seems to work best. When using experimental data to determine the VP at Tr= 0.7 (i.e., evaluating the acentric factor), or inter-comparing VP measurements over a similar broader range, Equation (\ref{2-11}) is preferred. Equation (\ref{2-9}) is rarely used for design or data comparison, and is just mentioned for historical purposes.
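The same machinery handles Equation (\ref{2-11}): only the design matrix changes. A short continuation of the previous sketch (reusing its T, P, and lnP arrays):

```python
# Extended Antoine, Eq. 2-11: ln(P) = A - B/T + C*ln(T) + D*T + E*T^2
# (note the 1/T coefficient returned by the regression is -B)
X5 = np.column_stack([np.ones_like(T), 1.0 / T, np.log(T), T, T**2])
coef5, *_ = np.linalg.lstsq(X5, lnP, rcond=None)
A5, negB5, C5, D5, E5 = coef5
print(f"A={A5:.4f}  B={-negB5:.2f}  C={C5:.4f}  D={D5:.3e}  E={E5:.3e}")
# Over a range this narrow the five-constant fit is poorly conditioned,
# which is why Eq. 2-8 is preferred for narrow-range data (see text).
```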
Example $1$
PCl3 is an important impurity to remove in producing high-quality polysilicon for use in solar arrays and electronic integrated circuits (like computer chips). Part of the purification process involves distillation, and so it is desirable to know the NBP of this compound with high confidence. Researching the NIST database shows an Antoine expression (Equation (\ref{2-8})), but the constants given are recalculated from a 1947 paper by Dan Stull published in I&EC. In making that paper's VP tables, he took data from several early-to-mid 20th century sources and plotted them on a Cox Chart (a graphical approximation method from the 1920's), then read "best fit" values from the Cox Chart at selected pressures. So this data source is questionable, with a possibility of the data being "overly massaged". However, from the DeChema and Infotherm online global databases, experimental data is available from four more recent sources, with some mild disagreement around a pressure of 1.0 atmosphere = 760 Torr = 101.325 kPa. It is decided to download the original data and develop an Antoine curve-fit, to best determine a most-likely NBP for PCl3.
Table 2-1 shows the experimental data, sorted by temperature. DeChema and Infotherm report vapor pressures in kPa, so that pressure unit is used in the calculation below. To each data point row, columns for 1/T and Ln(P)/T are added. The last two columns show the VP predicted by the Antoine regression and the difference between data point and predicted VP. Under the tabulated data, the results of the regression are shown, as well as the calculation of Antoine constants A, B, and C. Finally, the best value for PCl3's NBP based on the data is given, along with an assessment of its accuracy.
Table 2-1 Curve-fitting PCl3 VP data to the Antoine Equation, plus regression results
| T, K | VP, kPa | Ln(VP) (dependent variable) | 1/T (independent variable 1) | Ln(VP)/T (independent variable 2) | Antoine predicted VP, kPa | Prediction difference |
|---|---|---|---|---|---|---|
| 361.85 | 151.988 | 5.024 | 2.7636E-03 | 1.3884E-02 | 150.362 | 1.626 |
| 361.75 | 151.988 | 5.024 | 2.7643E-03 | 1.3887E-02 | 149.964 | 2.024 |
| 358.55 | 136.389 | 4.916 | 2.7890E-03 | 1.3709E-02 | 137.544 | -1.155 |
| 355.75 | 126.790 | 4.843 | 2.8110E-03 | 1.3612E-02 | 127.191 | -0.402 |
| 352.95 | 116.524 | 4.758 | 2.8333E-03 | 1.3481E-02 | 117.318 | -0.794 |
| 349.15 | 104.525 | 4.649 | 2.8641E-03 | 1.3316E-02 | 104.680 | -0.155 |
| 348.55 | 101.325 | 4.618 | 2.8690E-03 | 1.3250E-02 | 102.764 | -1.439 |
| 348.45 | 101.325 | 4.618 | 2.8699E-03 | 1.3250E-02 | 102.447 | -1.122 |
| 347.95 | 101.325 | 4.618 | 2.8740E-03 | 1.3273E-02 | 100.870 | 0.455 |
| 346.15 | 95.192 | 4.556 | 2.8889E-03 | 1.3162E-02 | 95.317 | -0.125 |
| 343.75 | 88.259 | 4.480 | 2.9091E-03 | 1.3034E-02 | 88.213 | 0.046 |
| 343.15 | 86.526 | 4.460 | 2.9142E-03 | 1.2999E-02 | 86.490 | 0.036 |
| 338.15 | 73.461 | 4.297 | 2.9573E-03 | 1.2707E-02 | 72.952 | 0.508 |
| 337.55 | 72.127 | 4.278 | 2.9625E-03 | 1.2675E-02 | 71.425 | 0.703 |
The regression results using MS Excel's data analysis function show an R² = 0.9971 with 14 data points:
Intercept = 9.7954; Variable 1 coefficient = -2640.04; Variable 2 coefficient = 181.437
Calculate Antoine "A", "B", and "C" per the above procedure:
Intercept = 9.7954 = Antoine "A"
Variable 2 coefficient = 181.437 = -1 × Antoine "C", so Antoine "C" = -181.437
Variable 1 coefficient = -2640.04 = (AC - B), so Antoine "B" = (-Intercept × Variable 2 coefficient - Variable 1 coefficient) = 862.795
The Antoine Equation for PCl3 is determined as $\ln(P) = 9.7954-\frac{862.795}{T-181.437} \label{2-12}$
Solving Equation (\ref{2-12}) for the atmospheric pressure of 101.325 kPa gives a Tb of 348.09°K = 74.94°C ⇒ 74.9°C.
The average absolute difference between data and predicted VP (the table's last column) is 0.756 kPa.
Note that if the NIST webbook values were used for "A", "B", and "C", a value of 348.34°K ⇒ 75.2°C would result; in this case, the NIST listed results were pretty close to the experimental values' regression. Also note that a quick search on Wikipedia's website would have offered a Tb value of 76.1°C, or about a degree higher than actual results. The best value of Tb is now known, and its accuracy can be reasonably evaluated from the regression's average error: a 0.756 kPa error at 101.325 kPa pressure equates to a change in calculated Tb of 0.24°K, so the scientific answer to the question is 74.9 ± 0.2°C.
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part III, Critical Properties and Acentric Factor of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature. The goal of this article is to explain how the critical properties and acentric factor are used, and how they may be estimated when reliable values are not available.
As discussed in Part II, the more basic vapor pressure equations have limitations that prevent them from being used in many modern industrial distillation systems, specifically at higher pressures. The concepts in this article will be used in Part IV, to introduce an improved vapor pressure equation, which removes the limitations of basic vapor pressure equations.
To implement the more sophisticated VP relationships and inter-compare the constants in Part IV, experimentally measured P vs T data needs to be put into reduced form. This is done only with the knowledge of the critical point, which is that singularity on the saturation line where the liquid and vapor phases become one. The critical temperature and pressure are identified as Tc and Pc. At the critical point, the specific molar volume is Vc.
$T_{r} = T/T_{c} \label{3-1}$
$P_{r} = P/P_{c} \nonumber$
$V_{r} = V/V_{c} \nonumber$
(note that reduced properties have the advantage of being dimensionless)
Using reduced variables Tr, Pr, and Vr from Equation (\ref{3-1}) also allows introduction of the concept of the "Law of Corresponding States" (which is not really a scientific law per se, but more of a generally followed relationship). This "law" expresses the generalization that those properties dependent on intermolecular forces are related to the critical properties in the same way for all fluids. This concept underlies the development of several later articles of this series on Distillation Science, including Parts IV through VII.
The critical compressibility is Zc, as defined as:
$Z_{c}= \frac {P_{c}V_{c}}{RT_{c}} \label{3-2}$ which is also dimensionless.
Two other properties are important to note:
• $T_b$ is defined as the atmospheric pressure boiling point of a fluid, with $T_{br}$ as the reduced value of the atmospheric boiling point
• $ω$ is the acentric factor, which is a measure of molecular complexity as defined as:
$\omega=-\log_{10}(P_{r})-1 \label{3-3}$ where the reduced vapor pressure $P_{r}$ (the VP in atmospheres, divided by Pc in atmospheres) is evaluated at Tr = 0.7
The term “acentric factor” comes from Kenneth Pitzer's observation in 1955 that compact and near-spherical molecules have close-to-zero values when their ω is calculated. For example, neon, argon, and krypton have ω values of -0.04, 0.00, and 0.00, respectively (helium has a slightly more negative ω, the value depending on isotope). Methane (CH4) has an ω of 0.011, and the molecularly larger silane (SiH4) has an ω of 0.099. An even larger and more complex molecule like carbon tetrachloride (CCl4) has an ω of 0.193, whereas silicon tetrachloride (SiCl4) has an ω of 0.248. So evaluation of the acentric factor from VP data and knowledge of the critical point is a good validation check against the known molecular size and shape.
For Equations of State (see Part V) that are more advanced than Van der Waals, the acentric factor is a required property, along with critical temperature and pressure. It is also used along with critical properties in the estimation of binary interaction parameters in Part VII.
As mentioned in Part II, the basic VP relationships have acceptable accuracy to obtain good values of Tb from available data. Experimental data is often taken near atmospheric pressure, but not exactly at one atmosphere absolute (i.e., 760 mmHg = 760 Torr = 101.325 kPa). Instead of blindly accepting a value of Tb from a handbook or single website, the practicing scientist or engineer should consider the data's source and reliability. Atmospheric boiling points, critical property data, and acentric factors are readily available on the internet, especially on the NIST WebBook and global websites like Dechema and Infotherm (recently acquired by John Wiley and Sons from FIZ Chemie Berlin).
A recommended way to sort through the dizzying amount of critical property data (i.e., Tc, Pc, and Vc) is to use the correlation techniques of Lydersen ("Estimation of Critical Properties of Organic Compounds", University of Wisconsin College of Engineering, 1955) and others, which are specifically geared toward organic compounds. With some algebraic manipulation and adjustment of Lydersen's parameters (since the usage intent here is for polar compounds that are not organic, but are not ionic either), and based on data across homologues, the following relationships seem to hold and are recommended both for validating questionable data and for filling in "holes" of no data:
For Tb, the general correlating relationship within a homologue (e.g., from silane to silicon tetrachloride, or from phosphine to trichlorophosphine):
$(T_{b}\times MW)^n= A+B\times MW \label{3-4}$
where MW is the molecular weight. For chlorosilanes, chloromethanes, chlorophosphines, etc., this relationship shows a best fit with an exponent "n" of 0.8320. The constants A and B for the homologue are typically set by the hydride and chloride, since that data is usually more reliable. Where the core atom is not just carbon or silicon, but a blend (i.e., a methyl silane), an exponent of 0.8455 fits the data a bit better. This contrasts with organic compounds, whose data tends to fit best with somewhat lower exponents, closer to 0.80.
For Tc (critical temperature), the best correlation within a homologue is found to be:
$\frac {T_{c}-T_{b}}{T_{c}}= A+B\times MW \label{3-5}$
where MW is the molecular weight and the constants A and B for the homologue are typically set by the hydride and chloride, as long as there is no organic content and the fluids are monomeric.
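As an illustration of how these homologue correlations are anchored in practice, the Python sketch below sets the Equation (\ref{3-5}) constants from the chlorosilane homologue's hydride and chloride (values from Table 3-3 further below), then predicts Tc for trichlorosilane from its Tb and MW:

```python
# Eq. 3-5: (Tc - Tb)/Tc = A + B*MW, anchored on the hydride and chloride.
# Values from Table 3-3 (chlorosilane homologue).
MW1, Tb1, Tc1 = 32.117, 161.75, 269.65    # SiH4
MW2, Tb2, Tc2 = 169.896, 330.72, 506.95   # SiCl4

y1 = (Tc1 - Tb1) / Tc1
y2 = (Tc2 - Tb2) / Tc2
B = (y2 - y1) / (MW2 - MW1)
A = y1 - B * MW1

# Predict Tc for SiHCl3 from its MW and Tb:
MW, Tb = 135.452, 306.15
y = A + B * MW               # = (Tc - Tb)/Tc
Tc = Tb / (1.0 - y)
print(f"Predicted Tc(SiHCl3) = {Tc:.1f} K")  # ~478.9 K vs. 479.15 K tabulated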
For Pc (critical pressure), use
$(\frac {MW}{P_{c}})^n= A+B\times MW \label{3-6}$
where MW is the molecular weight, exponent "n" = 0.5672, and constants A and B for the homologue are typically set by the hydride and chloride (as long as there is no organic content and the fluids are monomeric). For a pure organic compound, the exponent "n" should be Lydersen's suggested 0.5000; for methyl silanes (e.g., dimethyl silane) the best exponent is 0.4550; and for Group III (dimeric bridge-bonded) compounds the best exponent fit is 1.03-1.08 (1.03 for di-gallanes and 1.08 for diboranes).
For Vc (critical molar volume), Lydersen's rule of
$V_{c}=A+B\times MW \label{3-7}$
is probably still the best, where MW is the molecular weight and constants A and B for the homologue are typically set by the hydride and chloride. For some homologues, there is a possibility that the Vc vs MW relationship is not exactly linear, but rather has a slight concave quadratic nature; but the data is rarely good enough to determine such. Of all the critical properties, Vc is the hardest to experimentally measure, and is frequently estimated.
There is a way to get around some of the uncertainty with Vc, and that is to validate (or make minor Vc adjustments) based on the pattern of Zc values in the homologue where
$Z_{c}= \frac{P_{c}\times V_{c}}{R\times T_{c}} \label{3-8}$
Within each homologue there is a characteristic “check-mark” pattern to Zc values, when plotted against the number of hydrogen to chlorine swaps of the homologue (i.e., the homologue’s hydride has zero swaps, the monochloride one swap, the dichloride two swaps, etc). See below example data plot Figure 3-1 for the chlorosilane and chlorophosphine homologues.
The homologue’s hydride (which tends to be a fairly spherical shaped molecule with minimal dipole moment) typically has a Zc in the range of 0.27-0.29. The first swap replaces a small “H” atom with a larger “Cl” atom, and re-orients the molecular shape to have a significant dipole, dropping the Zc value by about 10%. Then the next swaps reduce the dipole, until the molecular shape returns closer to spherical, although significantly larger in diameter. Typically the Zc of the completely chlorinated homologue fluid has a slightly larger Zc than the homologue’s hydride.
To the extent that this “checkmark” shape is not observed in plotting calculated Zc values per Equation 3-8, the “culprit” is almost always the value of Vc, which allows for some correction. The value of Zc is known to have a significant relationship to molecular shape and complexity, although the stronger relationship between Zc and the acentric factor, ω, is not so easily generalized as in organic compounds.
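A short sketch of this Zc validation check, computing Equation (\ref{3-8}) across the chlorosilane homologue with the Table 3-3 values given below:

```python
R = 82.057  # atm-cc/(g-mol K)

# (name, Pc [atm], Vc [cc/g-mol], Tc [K]) for the chlorosilane homologue
homologue = [
    ("SiH4",    47.99, 130.07, 269.65),
    ("SiH3Cl",  47.82, 169.32, 396.65),
    ("SiH2Cl2", 44.83, 215.05, 449.45),
    ("SiHCl3",  41.15, 267.28, 479.15),
    ("SiCl4",   36.50, 326.00, 506.95),
]

for name, Pc, Vc, Tc in homologue:
    Zc = Pc * Vc / (R * Tc)   # Eq. 3-8
    print(f"{name:8s} Zc = {Zc:.4f}")
# Output traces the "check-mark": 0.2821, 0.2488, 0.2614, 0.2797, 0.2860
```

A deviation from this smooth drop-then-recover pattern flags the row whose Vc (usually) deserves a second look.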
The importance of accurate values of Tb, Tc, and Pc (and ω to an extent) lies in calculating vapor pressure using the advanced relationship of Part IV. Having an accurate value of ω is more important, along with Tc and Pc, in calculating the Equation of State for a fluid, in Part V. Having good values for ω and Vc is important to the estimation of binary interaction parameters in Part VII.
When two different types of swaps are made in a slightly polar molecule, an alternate technique is used to best correlate fluid properties (Tb , Tc , Pc , Vc , and ω), with a good example being methyl chlorosilanes. With methyl chlorosilanes, one or more methyl groups are substituted for hydrogen atoms (i.e., Si-H swapped to Si-CH3), with the remaining Si-H bonding possibly swapped to Si-Cl. Another example would be swapping C-H for C-Cl and C-F bonding in chloro-fluoro methanes. There is no concise formula that governs such a dual substitution. Instead the concept of neural networks is used.
In this technique, the two base curves are laid out from the same starting compound (silane in the example below), and on each base curve the pertinent physical property values shown for each substituted homologue (chlorosilanes and methyl silanes being the base curves in the example below) vs molecular weight. For each amount of combined substitution (100%, 75% and 50% substitution = 0%, 25%, and 50% Si-H remaining) the data points are connected to their respective base curves. For example, on the 75% substitution amount, one would curve-connect trichlorosilane, methyl dichlorosilane, dimethyl monochlorosilane, and trimethyl silane.
Additional curve-connections could be made for values of constant amount of chlorine or methyl substitution (e.g., curve-connecting monochlorosilane, methyl monochlorosilane, dimethyl monochlorosilane, and trimethyl monochlorosilane). Using this method, the data on the dual-substituted fluids can be validated and adjustments made so that the curve-connections are smooth.
In the example in below Figure 3-2, the values of Pc for methyl monochlorosilane (sic, me-mono) and dimethyl monochlorosilane (sic, di-me mono) are hardened up from rough measurements.
The value of the neural network technique is to make adjustments where data is poor or the estimation is of questionable accuracy. The presumption of neural networks is that for a class of compound, the continuum of structural changes should be uniform and internally consistent (although not necessarily linear).
Neural networks were constructed for all the pertinent properties of methyl chlorosilanes, and the technique found to be rather simple to deal with graphically using MS Excel. The resulting values were included in Table 3-3 below, for chlorosilanes and their common impurities, whose properties were validated using the correlating techniques of Equation (\ref{3-4}) through Equation (\ref{3-8}).
Table 3-3, Key properties, arranged by Group and Homologue
| Fluid | MW | Tb, °K | Tc, °K | Pc, atm | Vc, cc/g-mol | Zc | ω |
|---|---|---|---|---|---|---|---|
| Group III(A) | | | | | | | |
| B2H6 | 27.670 | 180.54 | 289.70 | 39.58 | 173.10 | 0.2882 | 0.1254 |
| B2H5Cl | 62.119ǂ | 216.37 | 346.33 | 37.13 | 189.92 | 0.2481 | 0.1283 |
| BH2Cl (as monomer) | 42.284 | 237.72 | 379.56 | 37.10 | 206.73 | 0.2463 | 0.1315 |
| BHCl2 (as monomer) | 82.722 | 266.07 | 422.71 | 37.63 | 240.37 | 0.2608 | 0.1389 |
| B2HCl5 (hypothetic) | 199.893ǂ | 276.67 | 438.47 | 37.92 | 257.18 | 0.2711 | 0.1433 |
| BCl3 | 117.169 | 285.88 | 451.95 | 38.20 | 274.00 | 0.2822 | 0.1468 |
| AlCl3 (as monomer) | 133.341 | 466.86* | 625.70 | 26.00 | 261.80 | 0.1326 | 0.3474 |
| Ga2H4Cl2 (as dimer) | 214.383ǂ | 350.86 | 545.93 | 40.94 | 231.9 | 0.2120 | 0.2726 |
| Ga2H2Cl4 (as dimer) | 283.273ǂ | 421.79 | 639.69 | 41.02 | 247.5 | 0.1943 | 0.3525 |
| Ga2Cl6 (as dimer) | 352.162ǂ | 473.49 | 694.00 | 37.70 | 263.0 | 0.1741 | 0.4504 |
| Group IV(A) | | | | | | | |
| SiH4 | 32.117 | 161.75 | 269.65 | 47.99 | 130.07 | 0.2821 | 0.09860 |
| SiH3Cl | 66.562 | 242.75 | 396.65 | 47.82 | 169.32 | 0.2488 | 0.1252 |
| SiH2Cl2 | 101.007 | 281.45 | 449.45 | 44.83 | 215.05 | 0.2614 | 0.1589 |
| SiHCl3 | 135.452 | 306.15 | 479.15 | 41.15 | 267.28 | 0.2797 | 0.2090 |
| SiCl4 | 169.896 | 330.72 | 506.95 | 36.50 | 326.00 | 0.2860 | 0.2482 |
| SiH3(CH3) | 46.144 | 216.48 | 348.35 | 41.53 | 185.20 | 0.2691 | 0.1264 |
| SiH2Cl(CH3) | 80.589 | 278.82 | 439.11 | 41.16 | 233.90 | 0.2672 | 0.1793 |
| SiHCl2(CH3) | 115.034 | 314.17 | 489.18 | 39.25 | 287.81 | 0.2814 | 0.2269 |
| SiCl3(CH3) | 149.479 | 339.72 | 517.61 | 35.86 | 345.70 | 0.2919 | 0.2655 |
| SiH2(CH3)2 | 60.169 | 252.86 | 399.17 | 36.05 | 245.84 | 0.2706 | 0.1604 |
| SiHCl(CH3)2 | 94.615 | 305.30 | 471.35 | 35.94 | 300.32 | 0.2791 | 0.2264 |
| SiCl2(CH3)2 | 129.061 | 342.89 | 519.21 | 34.26 | 358.20 | 0.2880 | 0.2740 |
| GeH4 | 76.642 | 184.93 | 307.98 | 54.77 | 128.28 | 0.2780 | 0.1270 |
| GeH3Cl | 111.087 | 302.91 | 495.10 | 49.95 | 180.17 | 0.2215 | 0.1476 |
| GeH2Cl2 | 145.532 | 334.60 | 536.93 | 45.36 | 236.71 | 0.2437 | 0.1721 |
| GeHCl3 | 179.976 | 343.54 | 541.41 | 41.42 | 283.96 | 0.2647 | 0.2012 |
| GeCl4 | 214.421 | 357.28 | 553.16 | 38.10 | 335.86 | 0.2819 | 0.2334 |
| SnH4 | 122.742 | 221.07 | 360.20 | 51.70 | 152.90 | 0.2674 | 0.1619 |
| SnCl4 | 260.521 | 387.21 | 591.85 | 36.95 | 351.20 | 0.2672 | 0.2625 |
| Group V(A) | | | | | | | |
| PH3 | 33.998 | 185.41 | 324.75 | 64.51 | 113.33 | 0.2743 | 0.03052 |
| PH2Cl | 68.443 | 273.09 | 463.73 | 62.29 | 153.63 | 0.2515 | 0.0750 |
| PHCl2 | 102.888 | 318.44 | 524.74 | 55.82 | 202.52 | 0.2625 | 0.1362 |
| PCl3 | 137.333 | 349.25 | 558.95 | 50.00 | 260.00 | 0.2834 | 0.2117 |
| POCl3 | 153.331 | 379.00 | 605.21 | 47.59 | 276.00 | 0.2645 | 0.1993 |
| AsH3 | 77.945 | 210.73 | 373.00 | 65.12 | 132.50 | 0.2819 | 0.01341 |
| AsH2Cl | 112.390 | 300.41 | 515.99 | 64.33 | 174.85 | 0.2657 | 0.0573 |
| AsHCl2 | 146.835 | 359.66 | 600.00 | 61.57 | 218.29 | 0.2730 | 0.1153 |
| AsCl3 | 181.281 | 403.30 | 654.00 | 58.35 | 259.56 | 0.2822 | 0.1875 |
| SbH3 | 124.781 | 256.09 | 446.20 | 66.61 | 157.20 | 0.2860 | 0.01659 |
| SbCl3 | 228.115 | 794.05 | 794.05 | 68.85 | 268.00 | 0.2832 | 0.3309 |
| n-pentane | 72.149 | 309.16 | 470.05 | 32.86 | 310.0 | 0.2641 | 0.2393 |
| iso-pentane | 72.149 | 300.82 | 460.56 | 33.17 | 307.1 | 0.2695 | 0.2147 |
In Table 3-3 above, entries marked with ǂ have their molecular weight given as the dimer. For AlCl3, the asterisk on the Tb entry denotes the best correlating value for the Part IV VP equation, even though it is below the triple point. Group IIIA compounds can feature “bridge-bonding” and can be either monomeric or dimeric. The two pentane entries at the end of the table are included because of their significance as electronic impurities, and to show that the techniques can be extended to organics.
With Group IIIA, some compounds are not included in a homologue simply because they are impossible structurally or are unstable. An example would be B2H3Cl3, which would put too much strain on the B-B bridge-bond. Another is AlH3 which cannot exist as a stand-alone liquid with vapor pressure at normal processing conditions, but tends to form stable hydrides with Group I metals such as LiAlH4. However, B2HCl5 is included (but noted as hypothetic), since some researchers claim to have seen its presence in low levels of TCS.
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part IV, New Vapor Pressure Equation of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See Part I, Overview for introductory comments, the scope of the article series, and nomenclature. Parts II and III are pre-requisite to Part IV.
Previous Part II, Vapor Pressure deals with the pure component equations normally found in textbooks, but which have limitations in industrial application. Part III, Critical Properties and Acentric Factor develops those parameters used to inter-relate conventional units of temperature, pressure, and volume to their reduced property equivalents, as well as the Pitzer acentric factor that is used in modern Equations of State.
Several thermodynamically consistent vapor pressure (VP) equation forms have been developed in the past purposely for organic fluids, and typically have the stipulation that they do not work with either polar or heavily hydrogen-bonded fluids. Being both slightly polar in nature as well as hydrogen-bonded, chlorosilanes and their impurities fall exactly in this exclusion, and therefore make good continuing examples.
In this article, temperature and pressure are used in their reduced form, since this allows better inter-comparison between fluids, is dimensionless, and allows a more global future development. The reduced form is obtained by simply dividing conventional temperature, T, by the fluid's critical temperature => Tr = T/Tc. Similarly, reduced pressure is conventional pressure divided by the fluid's critical pressure => Pr = P/Pc. Temperature and pressure must be in absolute units (e.g., °K, not °C).
Referring back to Part II, Equation 2-4, the Clausius-Clapeyron equation is given in reduced format, using $\psi$ as notation for the thermodynamic-based derivative of the natural log of (reduced) vapor pressure with respect to the inverse of (reduced) absolute temperature:
$\psi = \frac{-d(\ln P_{r})}{d(1/T_{r})} = \frac{\Delta H_{v}}{\Delta ZRT_{c}} \label{4-1}$
In order to have an improved VP equation form that can be used for the wide range between Tb and Tc (i.e., from atmospheric pressure to the critical point), that relationship should meet the following criteria:
1. The equation form should exactly have a reduced VP value of 1/Pc at Tb/Tc (i.e., yields the known atmospheric boiling point at atmospheric pressure), which is a well-measured lab value and is readily estimated based on structural considerations (see Part III article).
2. The equation form should exactly have a reduced VP value of unity at the critical temperature (i.e., match the measured value of the critical point), where the liquid meniscus disappears. This property can also be readily estimated based on structural considerations (see Part III article).
3. Give a reasonable value for reduced VP at Tr = 0.7, corresponding to the definition of the Pitzer acentric factor, which has a basis in molecular complexity (see Part III article).
4. Meet the two thermodynamic consistency Riedel tests as the critical point is approached (that "α" monotonically drops to its lowest value at critical, and that the temperature derivative of α is zero at critical). See the left-most part of Equation (\ref{4-4}) for the definition of "α".
Such a VP equation form would allow evaluation of modern Equations of State (see Part V article), Fugacity considerations (see Part VI article), and Binary Interaction Parameters (see Part VII article). Preferably the VP equation form would reasonably fit most fluids (both naturally occurring as well as synthetic), including polar compounds and those with substantial hydrogen bonding. Note that the VP equations given in Part II work over only narrow ranges; however above criteria #1 and #2 require the VP equation form to span a very broad range. In the text "The Properties of Gases and Liquids", by Prausnitz, et al, several alternate VP equation forms are discussed, all of which meet the above four criteria.
In his doctoral thesis at Syracuse University in 1965, under the guidance of Leonard Stiel, Richard Thek proposed exactly such an equation form to fill the technology gap. It was subsequently published by the AIChE in 1966. Unfortunately the computing power at that time was very limited, so equation forms with iterative solutions could not be used; and the internet did not exist to allow ready access to global data taken over the last many decades. Using modern computer processing speed and programming, these limitations have been resolved, and the constants determined for all chlorosilanes and their electronic impurities.
As given by Equation (\ref{4-2}) below, the original Thek-Stiel VP equation in its differential form has two terms: the first that governs at lower pressures close to atmospheric, and the second that takes over as the critical point is approached. While Thek set variables “q” and “k” equal to constants in order to avoid iterative solutions, with modern programming these can now be evaluated as true variables. The derivation of the final Modified Thek-Stiel VP Equation is available on request, but is not reproduced in these articles for the sake of brevity.
$\frac{d(\ln P_{r})}{d(1/T_{r})} = \frac{A}{T_{r}^2}[1-B_{1}T_{r}+B_{2}T_{r}^2-B_{3}T_{r}^3+B_{4}T_{r}^4...]+ \frac{c}{T_{r}^2}(T_{r}^n-k) \label{4-2}$
The several “B” constants result from the binomial expansion of the general Watson relationship for ΔHv. After dropping the B4 and higher expansion terms as non-significant mathematically and integrating, the resulting reduced vapor pressure equation is:
$\ln (P_{r})= A[B_{0}-T_{r}^{-1}-B_{1}\ln T_{r}+B_{2}T_{r}-(1/2)B_{3}T_{r}^2]+c\left[\frac{T_{r}^{n-1}-1}{n-1} +k(T_{r}^{-1}-1)\right] \label{4-3}$
• $A=\dfrac{\Delta H_{vb}}{RT_{c}(1-T_{br})^q} \nonumber$
• $B_{0}=1-B_{2}+(\frac{1}{2})B_{3} \nonumber$
• $B_{1}=q \nonumber$
• $B_{2}= \dfrac{q(q-1)}{2!} = \dfrac{q(q-1)}{2} \nonumber$
• $B_{3}=\dfrac{q(q-1)(q-2)}{3!} = \dfrac{q(q-1)(q-2)}{6} \nonumber$
• $c=\dfrac{\alpha_{c} -A(1-B_{1}+B_{2}-B_{3})}{1-k} \nonumber$
• $n=(1-k)+\dfrac{A}{c}(1-B_{2}+2B_{3}) \nonumber$
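These binomial-expansion relations are easy to verify numerically. A minimal sketch, using the trichlorosilane value q = B1 = 0.38395 from Table 4-1 below:

```python
q = 0.38395   # Watson exponent for SiHCl3 (= B1 in Table 4-1 below)

B1 = q
B2 = q * (q - 1) / 2
B3 = q * (q - 1) * (q - 2) / 6
B0 = 1 - B2 + B3 / 2

print(f"B1={B1:.5f}  B2={B2:.5f}  B3={B3:.6f}  B0={B0:.4f}")
# -> B1=0.38395  B2=-0.11827  B3=0.063708  B0=1.1501 (matching the table)
```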
The valuation of equation parameters ΔHvb, q, k, and αc is done iteratively using the techniques in Part X, Convergence Strategy, based on vapor pressure data, the molecule's structure, and the value of Tbr (the reduced atmospheric boiling point, Tbr = Tb/Tc). Pr must be unity at the critical point (Tr = 1), which allows valuing the integration constant of Equation 4-3. The two important thermodynamic derivative functions are:
$\frac{d(\ln P_{r})}{d(\ln T_{r})} = \alpha = A(T_{r}^{-1}-B_{1}+B_{2}T_{r}-B_{3}T_{r}^2)+c(T_{r}^{n-1}-\frac{k}{T_{r}}) \label{4-4}$
$\frac{-d(\ln P_{r})}{d(1/T_{r})} = \frac{\Delta H_{v}}{\Delta ZRT_{c}}= \psi = A(1-B_{1}T_{r}+B_{2}T_{r}^2-B_{3}T_{r}^3)+c(T_{r}^n-k) \label{4-5}$
The "A" and "B" constants of Equations (\ref{4-2}), (\ref{4-3}), and (\ref{4-4}) are the same as Equation (\ref{4-5}). The evaluation of the Reidel derivative function "α" in Equation (\ref{4-4}) is used to establish thermodynamic consistency, and allows valuing an equation parameter. Note that the Clapeyron derivative of Equation (\ref{4-5}), $\psi \nonumber$, is mathematically identical to that in Equation (\ref{4-1}). $\psi \nonumber$ will be used Part V to establish the latent heat, ΔHv, as well as the saturated vapor and liquid density of the fluids at the various conditions in the distillation column modeling.
Using the results of Part III, the Modified Thek-Stiel VP Equation constants are given below in Table 4-1, for chlorosilanes and electronic impurities found in the manufacture of high purity silicon. For brevity, the values of acentric factor, ω, and the critical Riedel derivative, $\alpha_{c}$, are not given in the table; they can be calculated from the other equation constants. In some instances, an electronic impurity can have either monomeric or dimeric form, but the form is noted in the table. Where a compound is not stable, the reader will note a "hole" in the table (e.g., AlH3 and Ga2H6 are not stable as liquids or vapors).
Because the solution to the Modified Thek-Stiel VP Equation is necessarily iterative, the application of this VP relationship requires the use of some modern computing tools to perform such iterative calculations (aka “nested loops”). Depending on preference, this could be done as a macro in a spreadsheet program like MS Excel, or a stand-alone program developed in BASIC (or MatLab or Fortran). As mentioned above, some tips on convergence strategy are given in Part X.
Table 4-1 is rather expansive, to illustrate how general the new VP relationship is. Table organization is primarily done by Periodic Table Group (e.g., Group III, Group IV, and Group V) of the molecule’s core atom, and then by the homologue from hydride to chloride. Methyl chlorosilanes are included as examples of good application to fluids that form the border between classically organic and inorganic. Two of the pentanes are included, which happen to be of industrial concern as electronic impurities, to illustrate that the new VP equation is useful for organic fluids as well.
While volatile Group II, Group VI and Group VII fluids seem to follow the new VP relationship as well, there are no entries given in Table 4-1 since these fluids are generally not of concern in the production of electronic materials. The reader is encouraged to follow the evaluation techniques and stratagems given in these articles to further develop use of the new VP equations.
Table 4-1, VP solution constants, by Group and Homologue
| Fluid | MW | A | B0 | B1 | B2 | B3 | c | n | k |
|---|---|---|---|---|---|---|---|---|---|
| Group III(A) | | | | | | | | | |
| B2H6 | 27.670 | 8.9910 | 1.1494 | 0.37848 | -0.11762 | 0.063573 | 2.8701 | 4.7796 | 0.1197 |
| B2H5Cl | 62.119ǂ | 8.1915 | 1.1494 | 0.37867 | -0.11764 | 0.063578 | 3.4560 | 3.8431 | 0.1073 |
| BH2Cl (as monomer) | 42.284 | 8.3872 | 1.1495 | 0.37888 | -0.11766 | 0.063583 | 3.2767 | 4.0925 | 0.0938 |
| BHCl2 (as monomer) | 82.722 | 8.9910 | 1.1495 | 0.37937 | -0.11772 | 0.063596 | 2.7577 | 4.9953 | 0.0636 |
| BCl3 | 117.169 | 9.5652 | 1.1496 | 0.37988 | -0.11779 | 0.063609 | 2.2357 | 6.2975 | 0.0291 |
| AlCl3 (as monomer) | 133.341 | 11.9183 | 1.1512 | 0.39300 | -0.11928 | 0.063892 | 2.2700 | 7.5185 | 0.0291 |
| Ga2H4Cl2 (as dimer) | 214.383ǂ | 7.0714 | 1.1506 | 0.38810 | -0.11874 | 0.063799 | 5.3346 | 2.5583 | 0.0938 |
| Ga2H2Cl4 (as dimer) | 283.273ǂ | 10.8499 | 1.1513 | 0.39333 | -0.11931 | 0.063898 | 3.3656 | 4.9567 | 0.0636 |
| Ga2Cl6 (as dimer) | 352.162ǂ | 11.1760 | 1.1520 | 0.39974 | -0.11997 | 0.063996 | 3.8566 | 4.5874 | 0.0291 |
| Group IV(A) | | | | | | | | | |
| SiH4 | 32.117 | 7.4652 | 1.1492 | 0.37673 | -0.11740 | 0.063525 | 3.6247 | 3.4716 | 0.0914 |
| SiH3Cl | 66.562 | 8.8830 | 1.1494 | 0.37847 | -0.11761 | 0.063572 | 2.7614 | 4.9286 | 0.0756 |
| SiH2Cl2 | 101.007 | 9.9221 | 1.1497 | 0.38068 | -0.11788 | 0.063629 | 2.1589 | 6.6627 | 0.0598 |
| SiHCl3 | 135.452 | 9.9796 | 1.1501 | 0.38395 | -0.11827 | 0.063708 | 2.5687 | 5.7950 | 0.0446 |
| SiCl4 | 169.896 | 10.0240 | 1.1504 | 0.38651 | -0.11856 | 0.063765 | 2.8465 | 5.3590 | 0.0291 |
| SiH3(CH3) | 46.144 | 9.0535 | 1.1494 | 0.37855 | -0.11762 | 0.063574 | 2.6380 | 5.1963 | 0.0758 |
| SiH2Cl(CH3) | 80.589 | 10.1402 | 1.1499 | 0.38201 | -0.11804 | 0.063662 | 2.1796 | 6.7338 | 0.0600 |
| SiHCl2(CH3) | 115.034 | 9.1805 | 1.1503 | 0.38512 | -0.11840 | 0.063735 | 3.3346 | 4.3854 | 0.0446 |
| SiCl3(CH3) | 149.479 | 10.1187 | 1.1506 | 0.38512 | -0.11840 | 0.063789 | 2.9357 | 5.2665 | 0.0291 |
| SiH2(CH3)2 | 60.169 | 8.2747 | 1.1497 | 0.38077 | -0.11789 | 0.063631 | 3.4554 | 3.9214 | 0.0603 |
| SiHCl(CH3)2 | 94.615 | 9.1914 | 1.1503 | 0.38509 | -0.11840 | 0.063734 | 3.3232 | 4.4012 | 0.0447 |
| SiCl2(CH3)2 | 129.061 | 10.1007 | 1.1507 | 0.38820 | -0.11875 | 0.063801 | 3.0279 | 5.1285 | 0.0291 |
| GeH4 | 76.642 | 8.1986 | 1.1494 | 0.37858 | -0.11763 | 0.063575 | 3.3615 | 3.9445 | 0.0914 |
| GeH3Cl | 111.087 | 8.6130 | 1.1496 | 0.37993 | -0.11779 | 0.063610 | 3.1743 | 4.3025 | 0.0756 |
| GeH2Cl2 | 145.532 | 8.8966 | 1.1498 | 0.38154 | -0.11798 | 0.063650 | 3.1184 | 4.4929 | 0.0598 |
| GeHCl3 | 179.976 | 9.1230 | 1.1501 | 0.38344 | -0.11821 | 0.063696 | 3.1490 | 4.5641 | 0.0446 |
| GeCl4 | 214.421 | 9.4041 | 1.1503 | 0.38554 | -0.11845 | 0.063744 | 3.1654 | 4.6724 | 0.0291 |
| SnH4 | 122.742 | 9.0298 | 1.1497 | 0.38087 | -0.11790 | 0.063634 | 3.0671 | 4.5745 | 0.0914 |
| SnCl4 | 260.521 | 10.1994 | 1.1506 | 0.38745 | -0.11867 | 0.063744 | 2.8471 | 5.4354 | 0.0291 |
| Group V(A) | | | | | | | | | |
| PH3 | 33.998 | 7.8041 | 1.1485 | 0.37228 | -0.11684 | 0.063396 | 2.8050 | 4.3644 | 0.1009 |
| PH2Cl | 68.443 | 8.6412 | 1.1490 | 0.37518 | -0.11721 | 0.063482 | 2.4920 | 5.2267 | 0.0766 |
| PHCl2 | 102.888 | 8.5412 | 1.1495 | 0.37919 | -0.11770 | 0.063591 | 3.0719 | 4.3813 | 0.0527 |
| PCl3 | 137.333 | 7.2366 | 1.1501 | 0.38413 | -0.11829 | 0.063712 | 4.4289 | 2.9789 | 0.0291 |
| POCl3 | 153.331 | 9.7746 | 1.1500 | 0.38331 | -0.11819 | 0.063693 | 2.4586 | 5.9576 | 0 |
| AsH3 | 77.945 | 7.4208 | 1.1484 | 0.37116 | -0.11670 | 0.063362 | 2.9474 | 4.0298 | 0.1009 |
| AsH2Cl | 112.390 | 8.5242 | 1.1488 | 0.37403 | -0.11707 | 0.063448 | 2.3972 | 5.3468 | 0.0766 |
| AsHCl2 | 146.835 | 9.3548 | 1.1493 | 0.37782 | -0.11754 | 0.063555 | 2.1821 | 6.2831 | 0.0527 |
| AsCl3 | 181.281 | 10.0281 | 1.1499 | 0.38254 | -0.11810 | 0.063675 | 2.2519 | 6.5171 | 0.0291 |
| SbH3 | 124.781 | 8.5409 | 1.1484 | 0.37136 | -0.11673 | 0.063368 | 2.0491 | 6.0821 | 0.1009 |
| SbCl3 | 228.115 | 7.1200 | 1.1511 | 0.39192 | -0.11916 | 0.063873 | 5.3455 | 2.6317 | 0.0291 |
| n-pentane | 72.149 | 10.0508 | 1.1504 | 0.38593 | -0.11849 | 0.063753 | 2.6269 | 5.7673 | 0 |
| iso-pentane | 72.149 | 9.9797 | 1.1502 | 0.38432 | -0.11831 | 0.063717 | 2.4514 | 6.0715 | 0 |
Two plots are offered in Figure 4-3 to validate the principles of this Distillation Science article. These plots are for the $\psi$ (Psi) function of Equation (\ref{4-5}) and the $\alpha$ (alpha) function of Equation (\ref{4-4}), for the chlorosilane homologue. The derivative function $\psi$ goes through the required minimum, which produces the expected inflection in the vapor pressure curve, typically around Tr = 0.80-0.86. Silane's minimum $\psi$ occurs at Tr = 0.67 and DCS's minimum $\psi$ occurs at Tr = 0.89. The fact that two fluids of this homologue are outliers illustrates why chlorosilane vapor pressures are so poorly fit by VP expressions that are intended for use with hydrocarbons. However, it can be seen how the Riedel $\alpha$ function does fit the thermodynamic consistency requirements: for all homologue fluids, the $\alpha$ function asymptotes to a constant value ($\alpha_{c}$) as the critical point is approached, with the slope of the $\alpha$ function going to zero as critical is approached.
Another validation check is shown in Figure 4-4, comparing the vapor pressures of chlorosilanes and chlorophosphines, which are distillation separation applications in all commercial chlorosilane purification.
As expected, the chlorophosphine homologue’s VP curves “nest” around those of the chlorosilanes. There are no cross-overs of the curves, and the location of chlorophosphine vapor pressure curves lie exactly as they are experienced in commercial practice of chlorosilane purification distillation columns.
Since the math in Equation (\ref{4-3}) is somewhat involved, an example is provided with calculations.
Example: At 3.50 atmospheres pressure, what is the boiling point of trichlorosilane (TCS)?
From Part III, Table 3-3, the pertinent property values for TCS are: Tc=479.15°K and Pc=41.15 atm. MW, Tb , Vc and Zc are not needed for this example. From Part IV, the VP solution constants for TCS are: A=9.9796; B0=1.1501; B1=0.38395; B2=-0.11827; B3=0.063708; c=2.5687; n=5.7950; and k=0.0446.
As with all advanced VP equations, determining the TCS boiling point at 3.50 atmospheres is iterative, so it is nice to have a VP plot to get a good first guess. Using above Figure 4-4, for $\ln(VP) = \ln(3.50) = 1.2528$, it looks like 1/T = 2.89E-3 might be a good first guess => T = 346°K; so the first guess is Tr = 346/479.15 = 0.7221. Plugging that first-guess value of Tr into the T-S VP equation with the constants above:
$\ln (P_{r})= A[B_{0}-T_{r}^{-1}-B_{1}\ln T_{r}+B_{2}T_{r}-(1/2)B_{3}T_{r}^2]+c[\frac{T_{r}^{n-1}-1}{n-1} +k(T_{r}^{-1}-1)] \nonumber$ yields a reduced VP value of Pr = 0.08273; so VP=0.08273*41.15 = 3.404 atm, which is just a bit less than the desired 3.50 atm. So T needs to increase a little from the 346°K initial first guess.
Re-doing the TCS VP equation with T= 347°K yields a VP of 3.50 atm (note: 347.05°K is the answer, if the solution is worked out to two decimal places of temperature). So the boiling temperature at 3.50 atmospheres is 347°K or 73.9°C.
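The iteration above is easily automated. A sketch in Python (any of the computing tools mentioned earlier would serve equally well) that evaluates Equation (\ref{4-3}) with the TCS constants and bisects for the temperature where VP = 3.50 atm:

```python
import math

# TCS property and VP-equation constants (Tables 3-3 and 4-1)
Tc, Pc = 479.15, 41.15
A, B0, B1, B2, B3 = 9.9796, 1.1501, 0.38395, -0.11827, 0.063708
c, n, k = 2.5687, 5.7950, 0.0446

def vp_atm(T):
    """Modified Thek-Stiel vapor pressure, Eq. 4-3, in atmospheres."""
    Tr = T / Tc
    ln_Pr = (A * (B0 - 1.0/Tr - B1*math.log(Tr) + B2*Tr - 0.5*B3*Tr**2)
             + c * ((Tr**(n - 1.0) - 1.0)/(n - 1.0) + k*(1.0/Tr - 1.0)))
    return Pc * math.exp(ln_Pr)

# Bisect for the temperature where VP = 3.50 atm; VP rises with T,
# so the bracket below straddles the answer (first guess from the plot)
target, lo, hi = 3.50, 330.0, 360.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if vp_atm(mid) < target else (lo, mid)

print(f"TCS boils at {0.5*(lo+hi):.2f} K at 3.50 atm")   # ~347 K, as above
```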
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part V, Equation of State of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature. Part V, Equation of State explains why an Equation of State (EOS) is needed to evaluate the pure-component physical properties that are used in distillation science, reviews the various options and makes a recommendation. Also included are techniques for solving the EOS cubic equations via spreadsheet (i.e., MS Excel) and some work-arounds near the critical point. This article uses the critical constants and acentric factor of Part III. The EOS solutions of this article will feed values to Part VI, Fugacity.
The general PVT relationship for a fluid is
$PV=ZRT \label{5-1}$
where the fluid could be a sub-cooled liquid, a saturated liquid (i.e., at its boiling point), a saturated vapor (i.e., at its dewpoint), or a superheated vapor (i.e., a gas). In the ideal situation (ambient pressure and temperature), ZL is very close to zero for either a sub-cooled or saturated liquid, and Zv is close to unity for either a saturated or superheated vapor. The Ideal Gas Law has $Z_{v}=1$, so $\Delta Z=(Z_{v}-Z_{L}) \approx 1$.
$\Delta Z$ is an important parameter in Equations 2-4 and 4-1 of previous articles Part II and Part IV. For calculations in this and other Parts, temperature and pressure are in absolute units (°K and atmospheres), and volume is in molar units (e.g., cc/g-mol). In that case, the gas constant $R$ has a value of 82.057 atm-cc/mole-°K.
Van der Waals EOS and more recent improvements
For a real fluid, $Z$ is never 0 and almost never = 1, but rather a value between 0 and 1, except for gases at high temperatures and pressures (where Zv > 1). While the Ideal Gas Law is a nice introductory concept, it is rarely used in distillation science. Instead, a more accurate expression is needed, and so Equation (\ref{5-1}) is re-written as
$Z=\dfrac{PV}{RT} \label{5-2}$
and different models are used to give results: $Z$ as a function of $T$ & $P$. The oldest is the Van der Waals EOS from the 1873 based on a model that molecules are hard bodies that take up space and have inter-particle forces. Note that an Equation of State covers all non-solid phases, so for saturated systems (i.e., fluids at their boiling/condensation point) it will have two real solutions: one for the vapor and one for the liquid: $Z_V$ and $Z_L$. For a non-saturated system (i.e., a super-heated vapor or sub-cooled liquid) it will have only one real solution.
Written in terms of Z, the Van der Waals EOS is
$Z=\left(P+ \frac{a}{V^2}\right) \times (V-b)/RT \label{5-3}$
where a and b are constants derived from just the fluid's critical properties. While a good general approximation at lower pressures (better than the Ideal Gas Law), this expression becomes less accurate as the critical point is approached, and it always yields 0.375 as Zc. Very few fluids have such a high Zc - most are closer to 0.30. Solutions for saturated phase values, $Z_{v}$ and $Z_{L}$, require manipulation of a cubic equation and solving its roots. However inaccurate it may be, it does allow estimation of the saturated vapor and liquid molar densities, $\rho_{v}=1/V_{v}$ and $\rho_{L}=1/V_{L}$, as well as $\Delta Z= Z_{v}-Z_{L}$.
In Part IV, Equation 4-1 showed how the latent heat of vaporization, $ΔH_v$, is calculated from the differential Clausius-Clapeyron equation, once a good vapor pressure relationship is known and ΔZ is closely determined. And in Part VI, it will be seen how fugacity coefficients are calculated from an EOS, to partially account for some departures from ideal behavior in mixtures of fluids.
In 1949, the Redlich-Kwong EOS improved on Van der Waals, and in 1972 Soave made a modification to yield the S-R-K EOS, adding in the effect of the acentric factor, ω. Further improvement was made in a general-application EOS by Peng-Robinson in 1976. Since then, many additional EOS have been proposed for specialized applications which give better results, but require additional data and additional constants. A full history of the various EOS forms is given in Wikipedia.
For purposes of practical chlorosilane distillation design and fugacity estimation (see Part VI), the Peng-Robinson (P-R) EOS appears to give satisfactory results. P-R does not appear to be accurate enough to extend to closely estimating saturated phase densities of chlorosilanes, or latent heats. If enough data were available, it might be possible to determine which of the newer, more specific EOS forms is accurate enough for those derivative properties (i.e., superior to Peng-Robinson). Unfortunately, that data does not yet exist (although a few key data points show the failings of P-R).
Regardless of the fluid parameters chosen, the P-R EOS yields a Zc of 0.307 at the critical point. While this Zc value is still higher than that of chlorosilanes, the accuracy of Zv and ZL values is adequate for fugacity estimation, through 90% of critical pressure (Pc). Since commercial applications only rarely approach 90% of Pc for reasons of process and equipment design, the P-R EOS is an acceptable compromise. It must be pointed out that the P-R EOS should not be used in any calculations that involve differential or integral calculus manipulation of Z, since erroneous (or impossible) values result.
Peng-Robinson Equation of State
$p= \frac{RT}{V_{m}-b} - \frac{a \alpha}{V_{m}^2 +2bV_{m} -b^2} \label{5-4}$
with
• $a=\dfrac{0.45724R^2T_{c}^2}{p_{c}} \nonumber$
• $b=\dfrac{0.07780RT_{c}}{p_{c}} \nonumber$
• $\alpha= (1+\kappa(1-T_{r}^{0.5}))^2 \nonumber$
• $\kappa=0.37464+1.54226\omega-0.26992\omega^2 \nonumber$ when ω ≤ 0.49
• $\kappa=0.379642+1.48503\omega-0.164423\omega^2 +0.016666\omega^3 \nonumber$ when ω > 0.49
• $T_{r}=\dfrac{T}{T_{c}} \nonumber$ noting that $\omega,\kappa, \alpha$ are dimensionless
In more compact and dimensionless form, the Peng-Robinson EOS can be re-stated as a cubic in $Z$ using
$A=\dfrac{\alpha a p}{R^2T^2} \nonumber$ and $B=\dfrac{bp}{RT} \nonumber$
$Z^3 - (1-B)Z^2+(A-2B-3B^2)Z -(AB-B^2-B^3) = 0 \label{5-5}$
At a given temperature, T, and for fluid physical properties Tc, Pc, and ω (from Part III), and where P is the vapor pressure calculated for that fluid by the VP equation of Part IV, dimensionless parameters “A” and “B” are determined for Equation (\ref{5-5}). This sets up a cubic equation in Z, which has three real roots (as opposed to being imaginary). The largest root is the value of Zv; the smallest root is the value of ZL; and the third root is discarded as having no physical meaning.
Then knowing Zv and ZL, the value of ΔZ can be calculated, from which the latent heat of vaporization (at that T & P) could be calculated using Equation 4-1. Also from Zv and ZL, the saturated phase densities can be estimated. But again, the P-R EOS will not hold up under this much mathematical manipulation to give any more than rough estimates of these distillation design properties; so it is recommended to use the P-R EOS only to estimate fugacities (unless rough estimates of latent heat and phase densities are acceptable). The P-R EOS is offered to lay the groundwork for future improvement.
A cautionary statement is in order regarding the use of an EOS (whether Van der Waals, SRK, P-R or others) with liquid mixtures. In some hydrocarbon applications, VLE convergence calculations attempt to combine the critical properties of a mixture's components into "pseudo-critical" mixture properties, and use these "pseudo-criticals" in EOS calculations along with various mixing models. While there has been some success in this approach for certain hydrocarbons (mostly in the oil and gas industry), such an approach rarely gives good results with inorganic polar or mildly polar mixtures, such as chlorosilanes. Various papers have been issued over recent years promoting such an approach, suggesting the use of "binary interaction parameters" to estimate an EOS for a mixture. There are simply too many differences in polarity, acentric factor, and individual fluid critical properties for this to work. Therefore, the reader is discouraged from this practice. Sometimes calculations must be worked out, checked along the way, and results validated - rather than taking an "easy way out". Certainly with computer automation, there is no reason not to do the full calculation procedure.
One stumbling block to many is that solving the EOS equation for Z means solving a cubic equation. Currently there are no MS Excel functions for solving the roots of a cubic equation. However a ready solution is found in an older CRC “Standard Math Tables” book, which can be solved with a spreadsheet or programmed into a macro.
Cubic Equations
Method for solving for the roots of a cubic equation in the variable "y", with roots y1, y2 and y3
$y^3 +py^2 + qy + r =0 \label{5-6}$
may be transformed to the form $x^3+ax+b =0$ by substituting $y = x - \frac{p}{3}$, where $a= \frac{1}{3}(3q-p^2)$ and $b=\frac{1}{27}(2p^3 - 9pq + 27r)$.
If $p$, $q$, and $r$ are real, then
• If $\frac{b^2}{4} +\frac{a^3}{27} > 0 \nonumber$, there will be one real root and two conjugate imaginary roots.
• If $\frac{b^2}{4} +\frac{a^3}{27} = 0 \nonumber$, there will be three real roots of which at least two are equal.
• If $\frac{b^2}{4} +\frac{a^3}{27} < 0 \nonumber$, there will be three real and unequal roots.
For saturated liquids and vapors, the last case is always true, so a trigonometric solution is useful for solving EOS equations. Compute the value of the angle $\phi$ in the expression
$\cos\phi = \frac{-b}{2} \div \sqrt{\frac{-a^3}{27}}$
Then the three transformed roots x1, x2, and x3 will have the following values:
• $x_1= 2 \sqrt{\frac{-a}{3}} \times cos(\frac{\phi}{3})$ and so cubic root $y_1= -\frac{p}{3} +x_1 \nonumber$
• $x_2 =2 \sqrt{\frac{-a}{3}} \times cos(\frac{\phi}{3}+120º)$ and cubic root $y_2= -\frac{p}{3} +x_2 \nonumber$
• $x_3=2 \sqrt{\frac{-a}{3}} \times cos(\frac{\phi}{3}+240º)$ and cubic root $y_3= -\frac{p}{3} +x_3 \nonumber$
There is an alternate algebraic solution for the roots of a cubic equation, which is frequently needed to evaluate Z for superheated vapors. But for liquids and vapors near saturation, this alternate solution frequently breaks down close to 80-85% of critical pressure using MS Excel, due to numerical precision problems. Hence the recommendation of the more stable trigonometric method, even though its math is seemingly odd. The trigonometric root-solving technique can break down somewhere near 95% of critical using MS Excel, because of numerical precision; but since the P-R EOS is only valid to 90% of critical, that is not a problem. Note that there are still other cubic root-solving algorithms, but these also require even higher levels of numerical precision and are often found to break down (e.g., division by zero or requiring math with imaginary numbers).
When doing the above calculations, note that cos⁻¹(ø) is only valid for arguments between -1 and 1, returning values of ø between π and 0 radians, respectively. When doing automated calculations (via a spreadsheet like MS Excel), it is a good idea to check that the quantity ${-b/(2\sqrt{-a^3/27})} \nonumber$ is between -1 and 1, in order to avoid an error message.
In Equation (\ref{5-5}) above, the cubic equation's Z³ coefficient is unity, the Z² coefficient is [-(1-B)] = B-1, the Z coefficient is [A-2B-3B²], and the constant is [-(AB-B²-B³)]. Then the solution of that cubic equation, using Equation (\ref{5-6}):
• $p=B-1 \nonumber$
• $q=A-2B-3B^2 \nonumber$
• $r= -(AB-B^2-B^3) \nonumber$
• $a=(3q-p^2)/3 \nonumber$
• $b=(2p^3 -9pq +27r)/27 \nonumber$
Intermediate parameters "p" and "r" will usually be negative, and parameter "q" will be positive; and the quantity $\frac{b^2}{4} + \frac{a^3}{27} \nonumber$ will always be negative for liquids and vapors at or near saturation, so there will be three unequal roots. Note that for superheated vapors or vapors above critical pressure (aka gases), the quantity $\frac{b^2}{4} + \frac{a^3}{27} \nonumber$ can be > 0, in which case there is only one real root for Z, and an alternate algebraic solution method will be needed (i.e., the trigonometric method breaks down). In such a case, Z will be > 1.
To use the trigonometric method to get $Z_v$ and $Z_L$, first solve for the angle $\phi$ (in radians) of Equation (\ref{5-6}):
$\phi = \cos^{-1}\left(\frac{-b}{2\sqrt{-a^3/27}}\right) \nonumber$ and so the three roots, $y_1$, $y_2$, and $y_3$, will be:
$y_1= -p/3 +2\sqrt{-a/3} \times \cos(\phi /3) \nonumber$, $y_2= -p/3 +2\sqrt{-a/3} \times \cos(\phi /3 + 2\pi/3) \nonumber$ and $y_3= -p/3 +2\sqrt{-a/3} \times \cos(\phi /3 + 4\pi/3) \nonumber$
The largest valued root is $Z_v$ and the smallest valued root is $Z_L$, with the intermediate valued root being discarded as meaningless.
Since the math in Part V of both the EOS and the cubic solution might seem daunting, an example is provided with calculations.
Example $1$
Continuing the example of Part IV, of TCS vaporizing at 3.50 atm, and 347.05°K = 73.9°C, determine the values of Zv , ZL , and of ΔZ = Zv - ZL. From the values of Zv and ZL determine the saturated vapor and liquid densities and then from the value of ΔZ determine the value of the latent heat of vaporization at 3.50 atm.
Solution
From Part III, Table 3-3, the pertinent property values for TCS are: MW = 135.452; Tc=479.15°K; Pc=41.15 atm ; and ω=0.2090. Tb , Vc and Zc are not needed for this example.
For use in the P-R EOS, at the example's temperature of 347.05°K, $T_r$ = 347.05/479.15 = 0.7243.
Using Equation (\ref{5-5}) above for the P-R EOS and plugging in the values for T, P, Tr, Tc, Pc, and ω:
$a=0.45724 \times 82.057^2 \times 479.15^2/41.15 = 1.7177E7 \nonumber$
$b=0.07780 \times 82.057 \times 479.15/41.15 = 74.336 \nonumber$
$\kappa=0.37464 + 1.54226 \times 0.2090 - 0.26992 \times 0.2090^2=0.68518 \nonumber$
$\alpha =(1+0.68518\times (1-0.7243^{0.5}))^2 =1.2145 \nonumber$ @ 347.05°K
$A=1.2145 \times 1.7177E7 \times 3.50/(82.057^2 \times 347.05^2) =0.09003 \nonumber$ @ 347.05°K, 3.50 atm
$B=(74.336 \times 3.50)/(82.057 \times 347.05)= 9.136E-3 \nonumber$ @ 347.05°K, 3.50 atm
Then the cubic equation to solve (using Equation (\ref{5-6})) is $Z^3 +pZ^2 +qZ +r=0 \nonumber$, with terms:
• $p=B-1 \nonumber$
• $q=A-2B-3B^2 \nonumber$
• $r= -(AB-B^2-B^3) \nonumber$
• $a=(3q-p^2)/3 \nonumber$
• $b=(2p^3 -9pq +27r)/27 \nonumber$
• $p=0.0091361-1 = -0.99086 \nonumber$
• $q= 0.09003 -2\times 0.0091361 - 3\times 0.0091361^2= 0.071516 \nonumber$
• $r=-(0.09003 \times 0.009136 -0.009136^2 -0.009136^3)=-0.00073819 \nonumber$
so for the trigonometric solution a and b are:
$a=(3 \times 0.071516 - (-0.99086)^2) /3 = -.25575 \nonumber$
$b=(2(-0.99086)^3 -9\times (-0.99086)\times 0.071516 +27\times (-0.00073819))/27 =-0.049179 \nonumber$
$b^2/4 +a^3/27 = -1.49326E-5 \nonumber$, so three real roots for Z are confirmed
$\phi = \cos^{-1}\left(\frac{-b}{2\sqrt{-a^3/27}}\right) \nonumber$ = 0.15588 radians and so the three roots are:
• $Z_1= -p/3 +2\sqrt{-a/3} \times cos(\phi /3)= 0.91345\nonumber$
• $Z_2= -p/3 +2\sqrt{-a/3} \times cos(\phi /3 + 2\pi/3) = 0.012439 \nonumber$
• $Z_3= -p/3 +2\sqrt{-a/3} \times cos(\phi /3 + 4\pi/3) = 0.064968 \nonumber$
$Z_v$ is the largest root = 0.91345 and $Z_L$ is the smallest root = 0.01244, with the $Z_3$ root being discarded as meaningless. $Z_v$ and $Z_L$ will be used in Part VI to give estimates of fugacity coefficients, which partially account for departures from ideal mixture behavior.
ΔZ = $Z_v$ - $Z_L$ = 0.91345-0.01244 = 0.90101 (note that this is less than the unity value of Part II, Equations 2-3 and 2-4, which led to simplified VP equations).
If Equation (\ref{5-1}) is re-written as $\frac{1}{V} = \frac{P}{ZRT} \nonumber$ that gives the density in molar units,
and then multiply by molecular weight to get density in more familiar mass units.
Predicted vapor density @ 3.50 atm, 73.9°C: $3.50/(0.91345 \times 82.057 \times 347.05) = 1.3455E-4 \nonumber$ g-mole/cc. In mass units: $135.45 \times 1.3455E-4 = 1.822E-2 \nonumber$ g/cc.
Predicted liquid density @ 3.50 atm, 73.9°C: $3.50/(0.012439 \times 82.057 \times 347.05) = 9.8804E-3 \nonumber$ g-mole/cc. In mass units: $135.45 \times 9.8804E-3 = 1.338 \nonumber$ g/cc.
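The whole calculation chain of this example can be reproduced with a short script; the sketch below reuses the cubic_roots_trig() function sketched earlier and the property values quoted above from Part III (variable names are illustrative):

```python
import math

R = 82.057                                      # cc-atm/(g-mole-K)
MW, Tc, Pc, w = 135.45, 479.15, 41.15, 0.2090   # TCS properties, Part III
T, P = 347.05, 3.50                             # example saturation point

# P-R EOS parameters, per Equation 5-5
a_eos = 0.45724 * R**2 * Tc**2 / Pc             # 1.7177E7
b_eos = 0.07780 * R * Tc / Pc                   # 74.336
kappa = 0.37464 + 1.54226 * w - 0.26992 * w**2  # 0.68518
alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2   # 1.2145
A = alpha * a_eos * P / (R * T)**2              # 0.09003
B = b_eos * P / (R * T)                         # 9.136E-3

# Coefficients of the cubic in Z, then the three roots
p, q, r = B - 1.0, A - 2.0*B - 3.0*B**2, -(A*B - B**2 - B**3)
Z = sorted(cubic_roots_trig(p, q, r))
Z_L, Z_v = Z[0], Z[2]                           # 0.01244 and 0.91345

rho_v = P / (Z_v * R * T) * MW                  # ~1.822E-2 g/cc
rho_L = P / (Z_L * R * T) * MW                  # ~1.338 g/cc
```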
Looking at some TCS saturated liquid phase density data, it can be seen that the predicted saturated liquid density by the P-R EOS is high by about 8%, with a somewhat greater inaccuracy for sub-cooled liquid TCS. So unless there is better data available, the P-R EOS should not be thought of as better than ±10% on saturated phase density. Comparable TCS saturated vapor phase density data does not seem to exist, so no accuracy estimates can be made.
In order to estimate the latent heat of vaporization at these conditions, it is necessary to go back to Part IV, Equation 4.5 and the example values: Tr= 0.7243; A=9.9796; B1=0.38395; B2= -0.11827; B3=0.063708; c=2.5687; n=5.7950; and k=0.0446.
$\psi=\frac{\Delta H_{\nu}}{\Delta Z\, R\, T} = A(1-B_{1}T_{r} +B_{2}T_{r}^2-B_{3}T_{r}^3) +c(T_{r}^n -k) \nonumber$
and substituting in the values from above, but using a value for R=1.9859 to get ΔHv in energy units of calories/gram-mole;
$\Delta H_{\nu}/(0.90101 \times 1.9859 \times 347.05)=9.9796 \times (1-0.38395\times 0.7243 -0.11827 \times 0.7243^2 -0.063708 \times 0.7243^3) + 2.5687\times(0.7243^{5.7950}-0.0446) \nonumber$
So ΔHv is predicted to be 4,114 cal/g-mole. Multiplying by the MW, the latent heat of vaporization is predicted as 30.4 cal/gram at 3.50 atm and 73.9°C. Better estimates of the TCS latent heat at these conditions, from previous work done under NASA contract, indicated a value closer to 50 cal/gram (but these were still estimates, not direct measurements, and are possibly in error as well).
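The arithmetic of this latent heat estimate is compact enough to check in a few lines; the sketch below simply evaluates the ψ expression with the constants quoted above from Part IV:

```python
# Part IV, Equation 4.5 constants for TCS, as listed above
A4, B1, B2, B3 = 9.9796, 0.38395, -0.11827, 0.063708
c, n, k = 2.5687, 5.7950, 0.0446
Tr, dZ, R_cal, T = 0.7243, 0.90101, 1.9859, 347.05   # R in cal/(g-mole-K)

psi = A4 * (1 - B1*Tr + B2*Tr**2 - B3*Tr**3) + c * (Tr**n - k)
dHv = psi * dZ * R_cal * T          # ~4,114 cal/g-mole
dHv_mass = dHv / 135.45             # ~30.4 cal/gram
```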
This illustrates the potential power of using an EOS with a thermodynamically consistent VP equation, but also shows the degree of inaccuracy of pairing the P-R EOS with the VP derivative of Part IV, Equation 4.5. In this case, it is unknown how much of the inaccuracy is caused by the EOS, and how much by the VP equation's derivative.
As will be seen in Part VI, the best use of the P-R EOS is just to estimate fugacity coefficients, which are relative quantities rather than absolute.
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part VI, Fugacity of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature.
Part VI, Fugacity deals with the expected departure of apparent pure-component vapor pressure in the vapor-liquid equilibria (VLE) of binary mixtures, as is commonly found in practical application of distillation science. Part VII, Liquid Activity Coefficients builds on this for mixtures that have greater-than-expected departures from ideal behavior.
This article uses the Vapor Pressure of Part IV and Equations of State of Part V.
The ideal vapor-liquid behavior of volatile fluid mixtures is governed by the combination of Raoult's Law and Dalton's Law. Neither of these is a scientific law, but rather a representation of ideal behavior, closely approximated when the fluids in question have very similar properties. Raoult's Law of liquid partial pressures ($PP_i = X_i \times VP_i$) states that the partial pressure of a fluid is the liquid mole fraction times the pure-component vapor pressure. Dalton's Law ($Y_i = PP_i / \Sigma PP_i$) states that the vapor mole fraction of a volatile component is that component's partial pressure divided by the sum of all partial pressures. If either of these two relationships is extended, with either $X_i$ or $Y_i$ being zero or unity, a tautology results. The nomenclature used here is: $X_i$ and $Y_i$ are the liquid and vapor mole fractions of the ith component in the mixture; $VP_i$ and $PP_i$ are the vapor pressure and partial pressure of the ith component.
Combining Raoult & Dalton, for a binary mixture:
$Y_{1}= \dfrac{X_{1} \times VP_{1}}{X_{1} \times VP_{1} + X_{2} \times VP_{2}} \label{6-1}$
$Y_{2}=\dfrac{X_{2} \times VP_{2}}{X_{1} \times VP_{1} + X_{2} \times VP_{2}} \nonumber$
However, in actual practice this never exactly works out. The more volatile fluid exerts an effect on the less volatile fluid, seemingly boosting that fluid’s vapor pressure slightly. Conversely, the less volatile fluid slightly reduces the more volatile fluid’s vapor pressure. There are also interactions in the vapor phase due to compressibility (Zv, from Part V), which are sometimes significant. These departures from ideal behavior are due to two factors: (1) the fluid’s phase change is at a temperature that is above or below its pure components’ boiling points; (2) interactions occur between molecules with different properties. From a distillation column design perspective, these departures are important, since they make the component separations more difficult.
The technical term for the first departure factor is fugacity, which has the same units as the fluid's vapor pressure and includes the slight positive or negative secondary effect. The ratio between a liquid's fugacity and its vapor pressure is the liquid-phase fugacity coefficient; the ratio between a vapor's fugacity and its partial pressure is the vapor-phase fugacity coefficient. For a pure fluid by itself, fugacity has no meaning.
But in a mixture of two volatile fluids, it is the vapor and liquid fugacities that are in equilibrium with real fluids ( as opposed to a mixture of ideal fluids, such as with Raoult/Dalton).
$F_{i}^L = \phi_{i}^L \times VP_{i}\label{6-2}$
where $\phi_{i}^L \nonumber$ denotes the liquid fugacity coefficient
$F_{i}^V = \phi_{i}^V \times PP_{i}\label{6-3}$
where $\phi_{i}^V \nonumber$ denotes the vapor fugacity coefficient
Consider a mixture of "fluid 1" and "fluid 2". At a given temperature, "fluid 1" has a pure-component vapor pressure of $VP_1$. But the presence of "fluid 2" alters that somewhat, so "fluid 1's" fugacity ($F_1$) is different from its pure-component vapor pressure. Conversely, the presence of "fluid 1" somewhat alters "fluid 2's" fugacity ($F_2$) away from its pure-component vapor pressure of $VP_2$. The higher the mole fraction of "fluid 2", the more "fluid 1's" fugacity is altered away from its pure-component vapor pressure. And vice-versa.
This is how fluid mixtures behave in the real world: Raoult/Dalton is just an introductory concept that would apply in an ideal world.
(The technical term for the second factor of departure is the Liquid Activity Coefficient, which is covered extensively in Part VII, but initially mentioned here briefly for a more complete understanding.)
Fugacity effects are always present whenever two fluids of different volatility are mixed and vaporized or condensed. The Liquid Activity Coefficient effects are always present, but more noticed when the two fluids have significantly different properties, and even more so when the fluids are polar.
This is best illustrated by an example, partially continued from Part IV and Part V. At 73.90°C = 347.05°K, TCS has a pure component vapor pressure of 3.50 atmospheres. At that same temperature, STC has a pure component vapor pressure of 1.65 atmospheres. In this example, the liquid mixture is 40% mole fraction TCS, so X1= 0.40 . Therefore STC’s liquid mole fraction, X2= 0.60. Note: it is conventional notation for the most volatile fluid to have the lowest subscript.
Per Raoult, the partial pressures $PP_1$ and $PP_2$ should be $0.40 \times 3.50=1.40 \nonumber$ and $0.60 \times 1.65=0.991 \nonumber$ atmospheres, respectively. This liquid mixture at 73.90°C would be expected to boil at $1.40 + 0.99=2.39 \nonumber$ atmospheres total system pressure, with the TCS mole fraction of the first bubble's vapor ($Y_1$) being $1.40/2.39=0.586 \nonumber$ (so $Y_2$ is 0.414).
Instead, it is found that first bubble’s vapor has a TCS mole fraction (Y1) of only 0.573 (with Y2 being 0.427) indicating that the TCS was just a little less volatile than expected and the STC was just a little more. While there is just over 2% difference in this case between Raoult/Dalton’s expected and the actual resulting vapor mole fractions, the slight difference is truly present. If the difference in vapor pressures between the two fluids increased, and the overall pressure increased towards critical, the departure from the Raoult & Dalton relationship would then increase.
Two departure-from-ideality factors were purposefully shown in the above example: fugacity and Liquid Activity Coefficients. The vapor-phase and liquid-phase fugacity coefficients ($\phi_{i}^V\nonumber$ & $\phi_{i}^L\nonumber$) are never exactly unity (as implicit with Raoult & Dalton), but they do often approach unity (especially at low pressures and for molecularly similar fluids). Likewise, the Liquid Activity Coefficients ($\gamma_{i}\nonumber$) are never exactly unity for real fluid mixtures (see Part VII), but are often set to unity because the fluids are non-interactive. The point to make here is that non-unity fugacity coefficients are just a result of fluids having different volatilities (i.e., they are not fluid-dependent); whereas Liquid Activity Coefficients are specific fluid-system dependent (e.g., TCS-STC has a different set of parameters than DCS-TCS). (There is no analog to Liquid Activity Coefficients in the vapor phase - vapors just interact due to fugacity effects.)
In the above example, the overall deviation of 2.3% (0.586 vs 0.573) on TCS mole fraction is almost all due to fugacity (both liquid and vapor), with a lesser amount due to Liquid Activity Coefficients. When designing a distillation for high-reflux purification, the relative importance between these two departure factors often reverses, as well as the overall magnitude.
The general equation (including the $\gamma_{i}\nonumber$ covered in Part VII), is that:
$Y_{i} \times \pi =PP_{i} =(\phi_{i}^L/ \phi_{i}^V) \times \gamma_{i} \times VP_{i} \times X_{i} \label{6-4}$ where $\pi \nonumber$ is the total pressure
Setting aside further discussion of Liquid Activity Coefficients until Part VII, the remainder of this article deals with how to determine the fugacity coefficients,$\phi_{i}^V\nonumber$ & $\phi_{i}^L\nonumber$. That determination is dependent on the constants of Part III, and the calculation of the compressibility factors Zv and ZL from Part V.
For either phase, the equation for the fugacity coefficient, as derived from thermodynamics, is:

$Ln(\phi) = \int_{P_{r1}}^{P_{r2}} \frac{Z-1}{P_{r}}\,dP_{r} \label{6-5}$

where the integration runs from the component's saturation condition to the system condition.
Fortunately, the Peng-Robinson Equation of State (P-R EOS) is good enough for most distillation design to evaluate fugacity coefficients based on some simplifications (see Part V). Admittedly the P-R EOS does not exactly fit chlorosilanes (and similar fluids), and a better EOS would be preferable, if it exists. However P-R is adequate to the task, especially with the “work-arounds” given below.
For $\phi_{i}^V \nonumber$ there is an almost linear relationship between $Z_V$ and pressure until close to $P_r$=0.4. That allows Equation (\ref{6-5}) to be reduced to $Ln(\phi^V)= Z_{V}^{T,P} -Z_{V}^{sat} \label{6-6}$
where $Z_{V}^{T,P} \nonumber$ and $Z_{V}^{sat} \nonumber$ are respectively the component’s vapor-phase compressibility at the temperature and pressure of the boiling mixture, and at the pure-component saturation condition (T & VP). For Pr>0.4, Equation 6-6 tends to give low values of $\phi^{\nu} \nonumber$.
Since many industrial processes operate at higher pressures (closer to critical than Pr=0.4) another work-around is needed. Between 0.3< Pr<0.90, a cubic polynomial fits well for Zv as a function of Pr, which then integrates (Zv-1)/Pr of Equation (\ref{6-5}) to the function:
$Ln(\phi^{\nu}) = a_{3}(P_{r2}^{3}-P_{r1}^{3})+ a_{2}(P_{r2}^{2}-P_{r1}^{2})+a_{1}(P_{r2}-P_{r1})+a_{0}[(LnP_{r2})-Ln(P_{r1})] \label{6-7}$
where Pr2 and Pr1 are the reduced pressures of the mixture boiling pressure and the component’s saturation pressure. The constants for Equation (\ref{6-7}), a3 through a0, are -0.175248; 0.393866; -0.887714; and -0.0141409, respectively.
For $\phi_{i}^L \nonumber$, the evaluation of Equation (\ref{6-5}) is simpler, since ZL is fairly insensitive to pressure, simplifying the relationship for the ith component to be:
$Ln(\phi^{L})=[Z_{L} \times (\pi -VP)/VP] \label{6-8}$ where $\pi \nonumber$ is the system total pressure, VP is the pure component’s vapor pressure and $Z_{L} \nonumber$ is the compressibility at the component’s saturation temperature/pressure.
To the extent that the system pressure is greater than the ith component’s vapor pressure (i.e., that liquid component is forcibly sub-cooled), the value of $\phi_{i}^{L} \nonumber$ will be greater than unity. To the extent that the system pressure is less than the ith component’s vapor pressure (i.e., that liquid component is forcibly super-heated), the value of $\phi_{i}^{L} \nonumber$ will be less than unity.
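Equations \ref{6-6}, \ref{6-7}, and \ref{6-8} translate directly into code; in the sketch below the function names and argument order are illustrative, and the caller is responsible for staying within each correlation's stated Pr range:

```python
import math

def phi_V_lowP(Zv_TP, Zv_sat):
    """Equation 6-6: vapor fugacity coefficient, adequate up to ~Pr = 0.4."""
    return math.exp(Zv_TP - Zv_sat)

def phi_V_highP(Pr2, Pr1):
    """Equation 6-7: polynomial work-around for 0.3 < Pr < 0.90."""
    a3, a2, a1, a0 = -0.175248, 0.393866, -0.887714, -0.0141409
    return math.exp(a3 * (Pr2**3 - Pr1**3) + a2 * (Pr2**2 - Pr1**2)
                    + a1 * (Pr2 - Pr1) + a0 * math.log(Pr2 / Pr1))

def phi_L(Z_L, pi_total, VP):
    """Equation 6-8: liquid fugacity coefficient from the saturation Z_L."""
    return math.exp(Z_L * (pi_total - VP) / VP)
```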
The pattern for $\phi_{i}^{V} \nonumber$ is usually the reverse, as a result of compressibility changes, so the value of $\phi_{i}^{L}/ \phi_{i}^{V} \nonumber$ in Equation (\ref{6-4}) above shows how the distinct Raoult/Dalton departures arise. For the example above (40% molar TCS in STC @ 73.9°C saturation), the values are given in Table 6-1:
Table 6-1: Comparing TCS and STC Fugacity Coefficients

                     TCS      STC
$\phi^{L}$           0.9976   1.0039
$\phi^{V}$           1.0488   0.9562
$\phi^{L}/\phi^{V}$  0.9512   1.0498
The careful reader will note an apparent discrepancy between the overall 2.3% ideality departures reported in the example above and the fugacity ratio of Table 6-1. Just the fugacity effects should make the TCS mole fraction departure about 4.8% lower than Raoult/Dalton expectation, but the Liquid Activity Coefficient effects restores about half of the departure. Thus it can be seen that fugacity effects and Liquid Activity Coefficient effects can be counteractive.
The “take-away” from this article is:
• Raoult & Dalton describe ideal behavior of vaporizing liquid mixtures, but do not completely describe the behavior of real fluids
• All departure factors must be considered: liquid fugacity, vapor fugacity and Liquid Activity Coefficients
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part VII, Liquid Activity Coefficients of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature.
Part VII, Liquid Activity Coefficients builds on Part VI, Fugacity regarding the departure of vapor-liquid equilibria (VLE) of binary mixtures from ideal behavior, as is commonly found in practical application of distillation science. In conjunction with previous articles, the goal of this article is to complete the explanation of equilibrium behavior of binary systems, so that Part IX can illustrate an example of a distillation process design.
In Part VI the combination of Raoult's Law and Dalton's Law governing the ideal vapor-liquid behavior of volatile fluid mixtures was introduced, with Equation 6-1 repeated here for convenience.
$Y_{1}=(X_{1} \times VP_{1})/(X_{1} \times VP_{1} + X_{2} \times VP_{2}) \nonumber$ $Y_{2}=(X_{2} \times VP_{2})/(X_{1} \times VP_{1} + X_{2} \times VP_{2}) \nonumber$ repeat Equation 6-1
Also in Part VI the real-world departure from ideal behavior was discussed as far as the fugacity coefficients, but stopped short of Liquid Activity Coefficients. Equation 6-4 repeated below gave the complete relationship between the partial pressure of the ith component, PPi, and the pure-component vapor pressure VPi , for liquid and vapor mole fractions Xi and Yi, respectively. The total system pressure (sum of the partial pressures) is denoted as $\pi \nonumber$; ($\phi_{i}^V\nonumber$& $\phi_{i}^L\nonumber$) are the liquid and vapor fugacity coefficients. $\gamma_{i} \nonumber$ is the ith component’s Liquid Activity Coefficient for that specific set of components.
$Y_{i} \times \pi =PP_{i} =(\phi_{i}^L/ \phi_{i}^V) \times \gamma_{i} \times VP_{i} \times X_{i} \nonumber$ repeat Equation 6-4
An example was given in Part VI of a vaporizing TCS-STC mixture, contrasting the vapor-liquid equilibria (VLE) expected from Raoult/Dalton with what actually occurs in the real world as a result of the ideality departures. Summarizing the previous article: the more volatile fluid exerts an effect on the less volatile fluid, increasing its apparent vapor pressure; and the less volatile fluid reduces the more volatile fluid's vapor pressure. The net effect is to make the component separations more difficult via distillation (e.g., more theoretical trays or a greater amount of reflux than expected). For detailed discussion of fugacity coefficients, see Part VI.
As a “house-keeping” note, it must be emphasized that Parts VI and VII treat fugacity coefficients and Liquid Activity Coefficients separately. In some texts, the two topics are compressed together using specific models of mixing and Equations of State (EOS). However, the two ideality departure factors are differently based, and their combination often gives erroneous or nonsense results when used with polar fluids, or those that have a high degree of hydrogen bonding (such as the chlorosilanes that are used as continuing examples). Fugacity coefficients naturally result from differences in fluid volatility. Liquid Activity Coefficients result from differences in fluid properties, including critical temperature (Tc), critical pressure (Pc), critical volume (Vc) and acentric factor (ω), as well as the entropy & enthalpy effects of mixing (referred to in thermodynamic texts as excess Gibbs Free Energy).
For a given temperature and combination of fluids, the value of $\gamma \nonumber$ is a function of the liquid mole fraction, rising asymptotically from unity at the pure component condition. Typically the values of $\gamma \nonumber$ for a system are plotted as a function of the more volatile component’s mole fraction. An example of such a plot is given in Figure 7-1, for the TCS-STC binary at 73.9°C (the temperature of the continuing example of previous articles).
There are several models for Liquid Activity Coefficients that are thermodynamically consistent (i.e., follow the Gibbs-Duhem Rule of binary system thermodynamics). The two-constant Van Laar model is the easiest to manipulate, but is limited to binary systems. The Margules model is not that different from the Van Laar model, and also limited to binary systems. The Wilson model is more complex mathematically, but can be applied to ternary or greater-numbered component systems. Fortunately the Wilson model parameters can be calculated from the Van Laar model. Even more complex Liquid Activity Coefficient models are known (e.g., NRTL and UNIQUAC), but these require more data points to effectively evaluate. In the case of the above plot, the Van Laar model is shown. In Part VIII, the techniques are discussed to evaluate experimental data on Liquid Activity Coefficients and fit the data to a model. In most cases with chlorosilanes (and their impurities) the data quality is not that great, so the simpler Van Laar is typically used to correlate data and for evaluating binary systems, and the Wilson model is used for ternary systems and beyond. For a more in-depth discussion of Liquid Activity Coefficients, the reader is referred to the text " The Properties of Gases and Liquids", by Reid, Prausnitz and Sherwood, (McGraw-Hill). The third edition is more readable, but the fifth edition is more updated.
The Van Laar model is:
$Ln(\gamma_{1}) =A_{1} \times [1+(A_{1} \times X_{1})/(A_{2} \times X_{2})]^{-2} \label{7-1}$
$Ln(\gamma_{2}) =A_{2} \times [1+(A_{2} \times X_{2})/(A_{1} \times X_{1})]^{-2} \nonumber$
with A1 and A2 being the Van Laar constants for the more volatile and less volatile component, respectively.
Conveniently, $EXP(A_{1}) = \gamma_{1} @ X_{2} \rightarrow 0 \label{7-2}$
$EXP(A_{2}) = \gamma_{2} @ X_{1} \rightarrow 0 \nonumber$
which simplifies evaluation. Note from Figure 7-1 that the Liquid Activity Coefficients plot is not always symmetrical (i.e., $A_1 \neq A_2$). In the plot of Figure 7-1, $A_1$ (TCS) = 0.1752, so the TCS curve asymptotes at $\gamma \nonumber$= 1.191; and $A_2$ (STC) = 0.2086, so the STC curve asymptotes at $\gamma \nonumber$= 1.232.
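A short sketch of Equation \ref{7-1} in Python, checked against the Figure 7-1 constants (the endpoint guard is a choice of this sketch):

```python
import math

def van_laar(A1, A2, X1):
    """Equation 7-1: binary Van Laar activity coefficients at liquid mole
    fraction X1 of the more volatile component; requires 0 < X1 < 1."""
    X2 = 1.0 - X1
    g1 = math.exp(A1 * (1.0 + (A1 * X1) / (A2 * X2))**-2)
    g2 = math.exp(A2 * (1.0 + (A2 * X2) / (A1 * X1))**-2)
    return g1, g2

# TCS-STC at 73.9 C with A1 = 0.1752, A2 = 0.2086 (Figure 7-1)
g_tcs, g_stc = van_laar(0.1752, 0.2086, 0.40)   # ~1.075, ~1.027
```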
Completing the on-going example from Part VI, for a TCS-STC liquid mixture, with 40% molar TCS, at 73.9°C, to determine the vapor mole fraction in equilibrium (but now including the Liquid Activity Coefficient effects):
In Part VI, the pure-component vapor pressures of TCS and STC respectively were calculated from the VP equation of Part IV to be 3.500 and 1.651 atmospheres, respectively at 73.9°C. Also from Part V, the liquid and vapor fugacities were calculated as per Table 6-1, repeated below, using the Peng-Robinson EOS (Part V).
Repeat Table 6-1: Comparing TCS and STC Fugacity Coefficients

                     TCS      STC
$\phi^{L}$           0.9976   1.0039
$\phi^{V}$           1.0488   0.9562
$\phi^{L}/\phi^{V}$  0.9512   1.0498
The values for $\gamma \nonumber$ are calculated for TCS and STC @ 0.40 mf TCS, per Equation (\ref{7-1}) using the values for A1 and A2 above (or just read from Figure 7-1), to be 1.075 and 1.027 respectively. Plugging all these values into Equation 6-4 (repeated above), the partial pressures of TCS and STC @ 73.9°C are:
$PP_{TCS} =0.9512 \times 1.075 \times 3.500 \times 0.40 =1.432 \nonumber$ atmospheres, and
$PP_{STC} =1.0498 \times 1.027 \times 1.651 \times 0.60 =1.068 \nonumber$ atmospheres.
So the value of $Y_{TCS} = 1.432/(1.432+1.068) =.573 \nonumber$, which was given in Part VI, with the STC mole fraction being 0.427.
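Numerically, the whole Equation 6-4 chain for this example is just three lines (values quoted from Table 6-1 and the Van Laar results above):

```python
# Equation 6-4 for the 40% molar TCS example at 73.9 C
PP_tcs = 0.9512 * 1.075 * 3.500 * 0.40     # 1.432 atm
PP_stc = 1.0498 * 1.027 * 1.651 * 0.60     # 1.068 atm
Y_tcs = PP_tcs / (PP_tcs + PP_stc)         # 0.573
```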
The careful reader will see that not only are the partial pressures of TCS and STC different from Raoult/Dalton, but so is the total pressure. In the ideal behavior of Raoult/Dalton, the partial pressures were given in Part VI as 1.400 and 0.991 atmospheres, for a total pressure of 2.39 atmospheres. But in reality, at 73.9°C and 0.40 mf TCS, the total pressure would instead be 2.50 atmospheres. So while the ideality departures of TCS made the vapor mole fraction smaller (0.573 vs 0.586), both the TCS and STC exerted slightly more partial pressure than Raoult would indicate. But the TCS mole fraction still went down, because the ideality departure of STC was greater than that of TCS.
The take-away from this example is that while Raoult/Dalton will give you a “quick & dirty” value for Y/X, it is easy to get the overall wrong answer for determining the requirements of a distillation column design. In almost every instance of industrial practice, the use of Raoult/Dalton will undersize the number of trays needed to make a separation (or for the same number of trays, undersize the reflux required). For this reason, it is important to know how to work out the better answer.
For many binary systems, VLE data exists – but typically not at the temperature/pressure needed for industrial design. Most industrial designs use higher pressures for economy of size, as well as to accommodate available energy sources for driving the column reboiler and condenser. Yet the practicalities of data collection in the laboratory usually require pressures slightly above ambient, or at slight vacuums. This is especially true of chlorosilanes, since the normal materials of construction for higher pressures can catalyze slow side reactions, such as disproportionation and dimerization.
Lab data is normally collected based on small amounts of fluid that are used repeatedly at somewhat different combinations of temperature and composition. In the case of electronic impurities, some of these compounds are not very stable outside of a chlorosilane matrix. So a correlating method must be used that allows a certain degree of extrapolation.
In 1981 Chung-Ton Lin and Thomas Daubert from Penn State University developed such a Liquid Activity Coefficient model and published it in Industrial & Engineering Chemistry Process Design and Development, basing their work on non-polar hydrocarbon mixtures. I have used their general method, but modified two of the constants so as to better fit available VLE data on chlorosilanes. Using these re-evaluated constants, the revised “Lin & Daubert” method (detailed below as Equation (\ref{7-3})) was used to check against industrially obtained data on the distillation purification of TCS and silane, with good results. Obviously more data would be preferable, especially on ternary mixtures and low-concentration electronic impurities in chlorosilanes. However, until such becomes available, Equation (\ref{7-3}) below is recommended.
To develop a generic expression for the Van Laar parameters ($A_{i} \nonumber$ & $A_{j} \nonumber$ ), Lin & Daubert go back to Van Laar’s original two-term expression regarding the excess Gibbs energy of mixing, using the SRK Equation of State for partial molar compressibility and simple mixing rules; and expand it into the following:
$A_{i} = c \times F_{i} +d \times k_{ij} \times G_{i} \label{7-3}$
$A_{j} = c \times F_{j} +d \times k_{ij} \times G_{j} \nonumber$
• $F_{i} =1/[T_{ri}P_{ci}] \times (R_{i} \times (1+M_{j})^{0.5} \times P_{ci}^{0.5} - R_{j} \times (1+M_{j})^{0.5} \times P_{cj}^{0.5})^2 \nonumber$
• $F_{j} =1/[T_{rj}P_{cj}] \times (R_{j} \times (1+M_{i})^{0.5} \times P_{cj}^{0.5} - R_{i} \times (1+M_{i})^{0.5} \times P_{ci}^{0.5})^2 \nonumber$
• $G_{i} = T_{ri}^{-1} (P_{cj}/P_{ci})^{0.5} \times R_{i} \times (1+M_{i})^{0.5} \times R_{j} (1+M_{i})^{0.5} \nonumber$
• $G_{j} = T_{rj}^{-1} (P_{ci}/P_{cj})^{0.5} \times R_{j} \times (1+M_{j})^{0.5} \times R_{i} (1+M_{j})^{0.5} \nonumber$
• $M_{i} = 0.480 + 1.574\omega_{i} -.176\omega_{i}^2 \nonumber$ $M_{j} = 0.480 + 1.574\omega_{j} -.176\omega_{j}^2 \nonumber$
• $R_{i} =[1+M_{i} \times (1-T_{ri}^{0.5})]^{0.5} \nonumber$ $R_{j} =[1+M_{j} \times (1-T_{rj}^{0.5})]^{0.5} \nonumber$
• $k_{ij} =1 - 2 \times [V_{ci}^{1/3} \times V_{cj}^{1/3}]^{0.5} / [V_{ci}^{1/3} + V_{cj}^{1/3}] \nonumber$
Using an improved set of curve-fits for “c”,
• $0< \Delta \omega <0.03 ⇒ \nonumber$$Ln(c)= -8.0637E+04(\Delta \omega)^3 +8.2649E+03(\Delta \omega)^ 2 - 3.2891E+02 (\Delta \omega) +5.5051 \nonumber$
• $0.03 < \Delta \omega <0.30 ⇒ \nonumber$$Ln(c)=6.4645E+02(\Delta \omega)^4 -7.9857E+02(\Delta \omega)^3 +3.4768E+02(\Delta \omega)^2 -6.5099E+01(\Delta \omega) +2.5667 \nonumber$
Reducing available VLE data to Van Laar parameters, and adjusting for vapor and liquid fugacity coefficients, the best value of “d” is found to be 112.1 for chlorosilanes and similar fluids.
Lin & Daubert suggested that the “c” function be capped at $0.005 < \Delta \omega=|\omega_{i} - \omega_{j}| <0.15 \nonumber$, based on their data; but using the above improved set of polynomials for “c” avoids singularities during iterative calculations, with a minimum “c” of 0.135 (as “Δω” → ∞ )
Note that the "c" variable in the first term of Equation 7-3 is based on the $F_{i} \nonumber$ values and the differences between the two fluid's acentric factor, ω ( i.e., their molecular complexity); whereas the second term of Equation 7-3 is based on the $G_{i} \nonumber$ values and the differences between the two fluid's molecular volumes at critical (Vc).
Returning to the continuing previous TCS-STC example, and using the two components’ critical temperature (Tc), critical pressure (Pc), critical volume (Vc) and acentric factor (ω), as well as basing the reduced temperature (Tr) on 73.9°C = 347.05°K, Equation (\ref{7-3}) gives the values of A1 and A2 used earlier in this article, of 0.1752 and 0.2086 respectively.
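For readers who want to automate Equation \ref{7-3}, a sketch follows; the dictionary-based fluid description is an assumption of this sketch (critical properties per Part III, Table 3-3), and the transcription should be checked against the equations above before production use:

```python
import math

def van_laar_constants(T, fi, fj, d=112.1):
    """Equation 7-3 (modified Lin & Daubert) for the Van Laar constants.
    fi, fj: dicts with 'Tc' (K), 'Pc' (atm), 'Vc', and 'w' (acentric
    factor); fi is the more volatile component."""
    Tri, Trj = T / fi['Tc'], T / fj['Tc']
    Pci, Pcj = fi['Pc'], fj['Pc']
    Mi = 0.480 + 1.574 * fi['w'] - 0.176 * fi['w']**2
    Mj = 0.480 + 1.574 * fj['w'] - 0.176 * fj['w']**2
    Ri = math.sqrt(1.0 + Mi * (1.0 - math.sqrt(Tri)))
    Rj = math.sqrt(1.0 + Mj * (1.0 - math.sqrt(Trj)))
    Fi = (Ri * math.sqrt((1 + Mj) * Pci)
          - Rj * math.sqrt((1 + Mj) * Pcj))**2 / (Tri * Pci)
    Fj = (Rj * math.sqrt((1 + Mi) * Pcj)
          - Ri * math.sqrt((1 + Mi) * Pci))**2 / (Trj * Pcj)
    Gi = Ri * Rj * (1 + Mi) * math.sqrt(Pcj / Pci) / Tri
    Gj = Ri * Rj * (1 + Mj) * math.sqrt(Pci / Pcj) / Trj
    vi, vj = fi['Vc']**(1.0/3.0), fj['Vc']**(1.0/3.0)
    kij = 1.0 - 2.0 * math.sqrt(vi * vj) / (vi + vj)
    dw = abs(fi['w'] - fj['w'])
    if dw < 0.03:   # the two "c" polynomial branches given above
        lnc = -8.0637e4*dw**3 + 8.2649e3*dw**2 - 3.2891e2*dw + 5.5051
    else:
        lnc = (6.4645e2*dw**4 - 7.9857e2*dw**3 + 3.4768e2*dw**2
               - 6.5099e1*dw + 2.5667)
    c = math.exp(lnc)
    return c * Fi + d * kij * Gi, c * Fj + d * kij * Gj
```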
Frequently in evaluating different distillation designs for a process, the designer is faced with the need to make process temperature changes and determine the effect on Y/X using Equation 6-4, repeated above. That causes no problem in the evaluation of the fugacity ratio, $\phi_{i}^L / \phi_{i}^V \nonumber$, since those factors are derived from the EOS (see Part VI). However, if using experimentally obtained values for $\gamma \nonumber$ that were obtained at a different temperature, there needs to be a further adjustment. The general “rule-of-thumb” seen in some texts is that Ln($\gamma \nonumber$) is proportional to 1/T (absolute temperature) for small temperature changes. In reality such a “rule-of-thumb” is tantamount to assuming that the mixing effects of the multi-component fluid are closer to an isenthalpic model than an isothermal one. In most cases, the temperature dependency of the activity coefficient is a blend of two theoretical models, which leads to the relationship:
$Ln(\gamma_{i})=a + \frac{b}{T} \label{7-4}$
where “a” and “b” are the constants of a linear relationship (“a” = 0 leading to the isenthalpic model, or “b” = 0 leading to the isothermal model). The use of Equation (\ref{7-3}) solves the problem, since it predicts such temperature changes along the lines of Equation (\ref{7-4}) , and so can be used in a relative manner. For example, if Equation (\ref{7-3}) is found to over-predict the value of $Ln(\gamma) \nonumber$ by 10% of an experimentally obtained activity coefficient value, then calculate how much Equation (\ref{7-3}) would predict for the different temperature, and apply that 10% on the $Ln(\gamma) \nonumber$.
Note that, to all intents, the Liquid Activity Coefficient is not a direct function of pressure, other than through the boiling temperature's dependence on pressure. However, fugacities are functions of both temperature and pressure, and that is why they should be included in Y/X calculations (many texts advocate setting fugacities to unity, which is only close to valid at very low pressures).
To demonstrate how Equation (\ref{7-3}) performs with changing temperature (from 330°K to 380°K), Figure 7-2 shows the dependency for the asymptotic BIP’s of the TCS-STC mixture in the example above. Note that the slope of the STC Liquid Activity Coefficient line is about 20% steeper than that of the TCS Liquid Activity Coefficient line (i.e., in the TCS-STC binary system, increasing the temperature has a greater effect on the STC's $\gamma \nonumber$ than it does on the TCS's $\gamma \nonumber$).
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part VIII, VLE Analysis Methods of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature.
Part VIII, VLE Analysis Methods recommends methodology for collecting Liquid Activity Coefficient data on binary systems (see Part VII). This article also deals with validation, especially when data collection is done on reactive fluids like chlorosilanes, which can disproportionate or dimerize during study.
This article uses the information of Parts III through VII.
In Part VII a correlation technique was given for estimating Liquid Activity Coefficients. As mentioned in the previous article it is common to collect such vapor-liquid equilibria (VLE) data in the lab, at conditions close to ambient pressure. Yet many industrial applications frequently need to use VLE results at higher process pressure/temperatures or different compositions. To avoid notation confusion between the liquid activity coefficient ($\gamma_{i} \nonumber$) and the vapor mole fraction (Yi), the mole fractions X and Y are shown as bolded.
Recall from Parts VI and VII that fugacity coefficients ($\phi_{i}^V\nonumber$ & $\phi_{i}^L\nonumber$) are functions of temperature and pressure. However, Liquid Activity Coefficients ($\gamma_{i} \nonumber$) are functions of only temperature and mole fraction. In an experimental situation, the calculated values of $\gamma_{i} \nonumber$ (from the values of Yi and Xi = vapor-phase mole fraction and liquid-phase mole fraction) are easiest to correlate when system temperature is held constant and system pressure is allowed to vary. Stated otherwise, PXY data @ constant T is far easier to collect and correlate than TXY @ constant P. However, in commercial applications the distillation column is typically controlled at constant pressure. So a common industrial communication error is to request that the lab collect VLE data as TXY@P, but get PXY@T instead. There is a way to convert one data set to the other relationship, but it is somewhat awkward and requires choosing an activity coefficient model.
In Figure 7-1 of Part VII, the expected profile of $\gamma \nonumber$ vs X @ constant T is shown, and in Figure 7-2 of Part VII the expected profile of $\gamma \nonumber$ vs T @ constant X is shown, at the X=0 and X=1 axes. The reason for generating Liquid Activity Coefficients experimentally is normally to use the lab results for distillation column design, with the confidence that $\gamma \nonumber$ can be accurately known at any value of X or T, so as to establish the Y/X relationship up and down the column design. Using the principles of Parts VI and VII, these problems can be resolved.
Having established the experimental protocol, it is necessary to consider the impact of data collection equipment on the fluids being analyzed, and to validate the quality of the sample fluids. For reactive fluids such as chlorosilanes, that means pressure-capable glassware is the best choice (or nickel-lined steel as a second choice), since some of the alloying elements of stainless steel will slowly catalyze disproportionation reactions, which alter the sample compositions.
Also, it is important to validate the purity of the fluid samples, as opposed to blindly accepting the analysis of any accompanying supplier information. Again, with chlorosilanes (as typical of reactive fluids), the shipping container can catalyze side reactions (i.e., the supplier’s COA was perhaps accurate when the sample was loaded into the container, but purity degraded during shipment). It is common for 99.99% pure TCS samples (under argon inerting) from reputable suppliers to end up having several percent DCS and several percent STC, along with a few tenths percent hexachlorodisilane and a few tenths percent hydrogen gas, when used just a few weeks later. A good practice before loading sample fluids into the lab equipment is to double-distill samples (discarding the “lights” and “heavies” fractions, and assuring that only the “heart’s cut” fraction is used: whose boiling pressure is identical to the expected pure component vapor pressure).
With the above precautions, data is collected at 15-20 (or more) X,Y points, with some duplicates later in the run to establish experimental analysis accuracy and confirm the absence of systemic error (such as contamination). At least two pairs of data points should be collected close to X= zero and X= unity. To help confirm temperature dependency of activity coefficients, at least three sets of data (each at a different constant temperature or pressure) should be collected.
After data collection, the lab work is suspended, but then "number-crunching" is needed to confirm the data's validity before reporting or using the results. If data validity is not confirmed, the error must be found and the lab work repeated. First, plot out the data set and check that the P-X and P-Y curves (or T-X and T-Y curves) show an identical intersection at both the zero and unity mole fraction axes, and that the axis intersections are exactly representative of each pure component's known vapor pressure. If the X and Y curves do not intersect at the correct value on each axis, there is some systematic error in the data set that must be resolved before any more evaluation work is done. A corrupted dataset is likely to be of only minimal value in establishing activity coefficients, and is more often a reason to toss it all out and re-do the work after the systemic error is resolved.
Using an algebraic modification of previous Equation 6-4, the activity coefficients $\gamma_{i} \nonumber$ are calculated for each P-X and P-Y data point.
$\gamma_{i} =(\phi_{i}^V/\phi_{i}^L) \times (Y_{i} \times \pi)/(VP_{i} \times X_{i}) \label{8-1}$
where $\pi \nonumber$ is the total pressure, $VP_{i} \nonumber$ is the vapor pressure of the ith component, and $(\phi_{i}^V/\phi_{i}^L) \nonumber$ is the vapor/liquid fugacity ratio of that component.
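Equation \ref{8-1} in code, checked against the 55.0°C row of Table 8-2 below (the conversion of 740 mm Hg to atmospheres is made explicit; function and argument names are illustrative):

```python
def gamma_from_point(Y, X, pi_total, VP, phiV, phiL):
    """Equation 8-1: activity coefficient from one measured X,Y point."""
    return (phiV / phiL) * (Y * pi_total) / (VP * X)

# 55.0 C pseudo-point: Y = 0.0104, X = 0.00359, 740 mm Hg, VP = 2.055 atm
g = gamma_from_point(0.0104, 0.00359, 740.0 / 760.0, 2.055, 1.0316, 0.9984)
# g ~ 1.418, matching Table 8-2
```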
In Figure 8-1 the twelve-point TXY/P data set of Zanta et al., from Chemicky Prumysl (Czech Chemical Industry), is shown as a spreadsheet plot. The researchers did collect a few data points near the STC axis (i.e., TCS = zero). However, it would have been preferable to have a few more collected close to the other (i.e., TCS = unity) axis, and to have all the data reported with more precision. The researchers chose to collect data at a constant pressure of 740 mm Hg (as opposed to the preferable PXY/T method, which avoids the need to adjust the calculated Liquid Activity Coefficient $\gamma \nonumber$ for changing temperature), for reasons of their lab equipment and to simplify their procedure; and they collected only one TXY/P data set.
If only the raw data is considered, it would appear that the T-X and T-Y curves might not mutually intersect at the STC axis, or that the intersection is higher than the expected 56.2°C temperature representative of STC vaporization @ 740 mm Hg. The reason is that individual data point precision is low in the region of the STC axis.
However, with data-smoothing and extrapolation of the T-X and T-Y curves via some spreadsheet curve-fitting, the two curves appear quite likely to both intersect the STC axis near the expected 56.2°C temperature. At the TCS axis, however, curve-fitting the T-X and T-Y data results in both intersecting the TCS axis at 30.0°C, as opposed to the expected 32.3°C for pure TCS. The most likely explanation for this data set discrepancy is that the TCS sample supplied was either a disproportionated mix of DCS/TCS/STC as-received, and/or the fluids in the equipment disproportionated as a result of the equipment materials of construction. A TCS axis temperature of 30.0°C is representative of a 5% DCS/95% TCS liquid mixture. Surprisingly, the researchers' analytical procedure did not pick up the DCS peak, which would have revealed the sample corruption (but that is a common error with the researchers' type of analyzer).
Now that the data set has been discredited, it remains to be seen what - if any - useful information can be extracted. Other than demonstrating technique, there is little use in reducing the data that is high in “bad TCS” to get the $\gamma \nonumber$ asymptote intersecting the TCS axis. That is because the $X_{TCS*}$ mole fractions are changing along the STC activity coefficient curve (from about a 1:19 DCS/TCS ratio, dropping to very little DCS in the ternary mixture close to 56.2°C at the STC axis), and it would be impossible to guess what property values are best to use for fugacity coefficients, given a “wild-card” DCS content. The data closest to intersecting the STC axis could be interpolated to make up a few “pseudo-points” close to $X_{TCS}$=0, in order to get a rough value of the TCS $\gamma \nonumber$ asymptote (based on the assumption that there is little DCS in the ternary mix at that point). However, even small amounts of DCS near the $X_{TCS}$=0 area could be expected to exert a significant Liquid Activity Coefficient effect, and the fugacity values might be somewhat in error by assuming only the property values of TCS.
However, to illustrate the data reduction technique for TXY/P data (as opposed to the preferred PXY/T form), a few “pseudo-points” are evaluated near the axis.
Using a fourth order curve-fit, with the STC axis temperature set to 56.20°C, the T-X and T-Y curve-fit equations are, respectively, via spreadsheet curve-fitting functions:
T-X $T=18.949x^4-64.825x^3+75.709x^2-56.044x+ 56.20 \nonumber$
T-Y $T=-22.243y^4+33.158y^3-18.028y^2-19.075y+ 56.20 \nonumber$
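Solving either quartic for the composition at a chosen temperature is a one-root-in-[0,1] problem; a sketch using numpy follows (the 54.0°C check values come from Table 8-1 below):

```python
import numpy as np

def x_at_T(coeffs, T_target):
    """Root of the quartic curve-fit T(x) = T_target lying in [0, 1].
    coeffs: highest power first, constant (axis temperature) last."""
    c = list(coeffs)
    c[-1] -= T_target
    roots = np.roots(c)
    real = [r.real for r in roots
            if abs(r.imag) < 1e-7 and 0.0 <= r.real <= 1.0]
    return min(real)      # branch nearest the STC axis

tx = [18.949, -64.825, 75.709, -56.044, 56.20]    # T-X fit above
ty = [-22.243, 33.158, -18.028, -19.075, 56.20]   # T-Y fit above
X54, Y54 = x_at_T(tx, 54.0), x_at_T(ty, 54.0)     # ~0.0415, ~0.1065
```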
From those two curve-fits, the following “pseudo-points” are calculated, by solving the quartic equations for T-X and T-Y:
Table 8-1: three “pseudo-points” made from curve-fits near the STC axis

P, mm Hg   T, °C   $X_{TCS*}$   $Y_{TCS*}$
740        56.2    0.0000       0.0000
740        55.0    0.00359      0.0104
740        54.0    0.0415       0.1065
740        53.0    0.0620       0.1515
In Table 8-1, $X_{TCS*}$ and $Y_{TCS*}$ are subscripted as "TCS*" to denote that while TCS properties are used for vapor pressure and fugacity estimation, the more volatile fluid is really a DCS/TCS mix. The data point at X, Y = 0.0000, 0.0000 is given just to show the STC axis intersection, but cannot be used to calculate the $\gamma \nonumber$ asymptote, since that would result in division by zero in Equation \ref{8-1}.
Using the properties of TCS, the values for VP, øL and øv are calculated as per Parts IV, V, and VI, and the results shown in Table 8-2 for calculated $\gamma \nonumber$ TCS* at 740mm Hg (for each “pseudo-point” temperature).
Table 8-2: calculated $\gamma_{TCS*}$ at the “pseudo-point” temperatures, from the TXY/P data

P, mm Hg   T, °C   $VP_{TCS*}$, atm   $\phi^{L}$   $\phi^{V}$   $\gamma_{TCS*}$
740        56.2    2.130              0.9983       1.0321       to be determined
740        55.0    2.055              0.9984       1.0316       1.418
740        54.0    1.994              0.9984       1.0311       1.294
740        53.0    1.934              0.9984       1.0307       1.270
The calculated values for $\gamma \nonumber$ TCS* in Table 8-2 are based on the X,Y “pseudo-points” of Table 8-1 and using Equation (\ref{8-1}) above, to demonstrate the methodology of experimental data reduction. The trending on fugacity coefficients and activity coefficient is correct: øL values increase toward unity as XTCS* increases toward unity; øv values decrease toward unity as XTCS* increases toward unity; $\gamma \nonumber$ TCS* values decrease toward unity as XTCS* increases toward unity.
The asymptote at the STC axis is determined by extrapolating the value of $\gamma_{TCS*}$ from the three “pseudo-points”. (That curve would have a different slope with respect to composition than if it were PXY/T data, since temperature is changing as well as $X_{TCS*}$ in Table 8-2.) This technique identifies the $\gamma_{TCS*}$ asymptote as 1.437 (the average of the values obtained by extrapolating curve-fits of $\gamma_{TCS*}$ vs $X_{TCS*}$, and of $\gamma_{TCS*}$ vs $Y_{TCS*}$, to a zero value of $X_{TCS*}$). Compared to other researchers' data, the calculated asymptote is high. It also seems high per expectation from Equation 7-3 of Part VII, which would indicate a value closer to 1.21 @ 56.2°C, or 1.23 @ 32.3°C (the expected TCS boiling point at 740 mm Hg). Possibly there is error from the DCS content, which would tend to make the calculated $\gamma_{TCS*}$ values high by increasing the non-STC mole fraction in the vapor. The curve-fitting technique could also be “off”, since there was low precision in the two data pairs close to the $X_{TCS*}$=0 axis.
This demonstrates that there is simply no good way to “fix” data that has systematic error in it: virtually all texts suggest that when corrupted data is encountered, it is pointless to continue with data analysis. In addition to determining reasonable asymptotic values at either axis, Reid, Prausnitz, and Sherwood suggest in “The Properties of Gases and Liquids” that all calculated $\gamma \nonumber$ values first be adjusted to a common temperature basis (using the approximation $Ln(\gamma) \times T = \text{constant} \nonumber$), and then the Gibbs-Duhem Law be used to establish thermodynamic consistency. While technically the best way to validate data for consistency, this requires a significant amount of high-quality data, and is suggested only if advanced Liquid Activity Coefficient models are to be considered (e.g., NRTL or UNIQUAC). For Van Laar, Margules, Wilson or similar two-constant models, 15-20 data points should be sufficient.
It is always better to have fewer data points and an internally consistent dataset than a large number of possibly corrupted data points.
This article suggests using the Van Laar model for activity coefficients based on reasonable results with data analysis on chlorosilanes, and the success in predicting the Van Laar constants via Equation 7-3 in Part VII. However, the reader may want to explore other models for a better fit to experimental data, after such data has been validated. In “The Properties of Gases and Liquids”, the authors give an exhaustive list of other activity coefficient models to consider, with some notes on “pro & con”.
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part IX, Putting It All Together of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature.
Part IX, Putting It All Together uses the information from Parts III through VII, showing how to combine them in a practical example for bulk separation, and how the techniques detailed in previous articles give answers that differ from the ideal of basic Raoult/Dalton Law application.
See previous articles per the following detail:
• Part III for critical properties and acentric factor, especially Table 3-3
• Part IV for the recommended vapor pressure equation, especially Equation 4-3 and Table 4-1
• Part V for Equation of State, especially Equations 5-3 and 5-4
• Part VI, especially Equation 6-4 for partial pressure and Equations 6-7 and 6-8 for fugacity coefficients
• Part VII for Liquid Activity Coefficients, especially Equation 7-3
In this article a practical distillation example is developed, showing the Y/X relationship with temperature for the DCS-TCS binary system with pressure held constant at 11.0 atmospheres, and showing the results as a TXY/P plot as well as in tabular form. From such data, a distillation column designer would consider the best arrangement for column feed, “tops” and “bottoms” recovery, tray count and external reflux ratio (and from that, the energy required for the reboiler and condenser, as well as hydraulic loading per unit mass flow of feedstock). Two variations of this example are worked up, showing the application of distillation science: one which only uses ideal Raoult/Dalton, and one that uses the full set of ideality departure coefficients. Table output and plots below are done using MS Excel. Then, at the end of the article, the table results are graphically demonstrated using a McCabe-Thiele plot to step off the trays for an appropriate reflux rate and show the best feed tray.
The table of Figure 9-1 is set up to cover the range of DCS liquid mole fraction from unity to zero, in increments of 0.05 mole fraction, with 0.01/0.99 mole fraction added to illustrate asymptotic effects. For each non-zero table row, a temperature is first assumed, and the vapor mole fractions are calculated per Raoult/Dalton with $\pi$ = 11.0 atmospheres; then that row's temperature is iteratively adjusted until the total pressure = 11.0 atmospheres.
The plot of Figure 9-1 shows the calculation results in TXY/P form, with the value of $X_{DCS}$=1.0 representing the tightest possible “tops” product at the distillation column condenser, and a value of $X_{DCS}$=0.0 representing the tightest possible “bottoms” product at the column reboiler, all at a total reflux condition. For a finite column, the mole fraction of the “tops” product would be a cropping of the upper $X_{DCS}$ values of the TXY/P curves, and the mole fraction of the “bottoms” product would be a cropping of the lesser $X_{DCS}$ values. In such an instance, the column feed tray would be identified as that point in the column where the tray temperature is identical to the feed mixture's boiling point. The accompanying plot shows what the TXY/P curves would look like for such a Raoult/Dalton solution to the DCS-TCS binary at 11.0 atmospheres.
$Y_{1}=(X_{1} \times VP_{1})/(X_{1} \times VP_{1} + X_{2} \times VP_{2}) \nonumber$ $Y_{2}=(X_{2} \times VP_{2})/(X_{1} \times VP_{1} + X_{2} \times VP_{2}) \nonumber$ repeat Equation 6-1
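In spreadsheet practice each row's temperature is iterated by hand or with a goal-seek; in code the same row calculation is a simple bisection on total pressure. The sketch below assumes vapor pressure functions vp1(T) and vp2(T) standing in for the Part IV equation (they are not supplied here):

```python
def bubble_T_ideal(X1, pi_total, vp1, vp2, T_lo=300.0, T_hi=500.0):
    """Raoult/Dalton bubble point at fixed total pressure: bisect on T
    until X1*VP1(T) + X2*VP2(T) equals pi_total, then return T and Y1."""
    for _ in range(60):
        T = 0.5 * (T_lo + T_hi)
        P = X1 * vp1(T) + (1.0 - X1) * vp2(T)
        if P < pi_total:
            T_lo = T          # too cold: vapor pressures too low
        else:
            T_hi = T
    return T, X1 * vp1(T) / P
```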
If the plot of Figure 9-1 is carefully examined, one can see that the T-X and T-Y curves are exactly symmetrical when “folded” along a line running from abscissa/ordinate points (0, 124.76) to (1, 92.40), or “folded” orthogonally to that same line. In other words, there are no departures from ideality in either vapor or liquid phase, with either DCS or TCS.
Now X and Y are re-calculated including the departures from ideality, both in fugacity (ø) as well as Liquid Activity Coefficients ( $\gamma \nonumber$ ).
The table in Figure 9-2 above is likewise set up, and calculations are performed row-wise for each value of $X_{DCS}$, but the liquid and vapor-phase fugacity coefficients ($\phi^L$ and $\phi^V$) and activity coefficients ($\gamma_{i} \nonumber$) are included in the calculation of the vapor mole fractions ($Y_{DCS}$). For each row, the temperature is iterated until the sum of partial pressures = 11.0 atmospheres. Note that since the activity coefficients are mild functions of temperature, each row has its own Van Laar constants determined and the value of $\gamma \nonumber$ calculated for that row's liquid mole fraction. Thus all departure-from-ideality factors are included.
$Y_{i}= (\phi_{i}^L \times \gamma_{i} \times VP_{i} \times X_{i})/(\phi_{i}^V \times \pi) \label{9-1}$
which is an algebraic re-arrangement of Equation 6-4 from previous Part VI.
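Inside the same row-wise temperature iteration, Equation \ref{9-1} is a one-line function (argument names are illustrative):

```python
def y_nonideal(X, pi_total, VP, phiL, phiV, gamma):
    """Equation 9-1: vapor mole fraction including all departure factors."""
    return (phiL * gamma * VP * X) / (phiV * pi_total)
```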
To better show how the several ideality departure factors impact the TXY/P plot (as compared to Figure 9‑1 above), the table in Figure 9-2 is plotted. The symmetry of the Figure 9-1 Raoult/Dalton plot is no longer there, neither along the “fold” line running from abscissa/ordinate points of (0,124.76) to (1, 92.40); or “folded” orthogonally to that same line. Also note that the T-X and T-Y curves are closer together in Figure 9‑3, as opposed to Figure 9-1.
For a very simple approximation of tray count - for given distillation column “tops” product, “bottoms” product and feed mixture mole fractions, and for a given reflux rate - a McCabe-Thiele plot is constructed in Figure 9-4 to graphically step off the number of trays above and below the feed tray. For almost all industrial applications, this tray-by-tray analysis is done via computer modeling, such as ASPEN, VMG, HYSYS, ChemCad, etc. To apply the technique to a computer model, the user would specify the critical constants (as per Part III), the vapor pressure equation to be used (as per Part IV), the EOS to be used to calculate fugacity coefficients (as per Part V), and the Liquid Activity Coefficient model to be used (as per Part VII).
To construct a McCabe-Thiele plot, the X-Y equilibrium (in red below) is plotted on a set of axes where X = liquid mole fraction of the most volatile component = $X_{DCS}$, and Y = vapor mole fraction of the most volatile component = $Y_{DCS}$. The total reflux line (in black below) connects (X,Y) = (0,0) and (1,1), representing the maximum reflux possible. Lines are added (in dashed blue below) for the feed ($X_F$), the distillate product ($X_D$), and the bottoms product ($X_B$), from the X-axis to the total reflux line. The feed line is extended toward the equilibrium curve, and from that intersection a line is drawn to the end of the distillate product line. The slope of that drawn line represents the minimum internal reflux needed to make the separation (i.e., with an infinite number of trays). The upper column operating line (in magenta below) is then drawn connecting the end of the distillate product line with the feed line, with a slope somewhat greater than the minimum reflux slope. In normal practice the upper column operating line slope is usually about 1.3 times the minimum slope, depending on the relative value of operating energy vs capital cost. Then the lower column operating line is drawn, intersecting the bottoms product line with the upper column operating line (and the feed line). The number of theoretical trays (aka stages) can then be stepped off (in green below) from $X_D$ to $X_B$, and the best feed tray determined.
To illustrate an example of using a McCabe-Thiele plot, assume the mixture to be separated is a 35% DCS molar liquid, pre-heated to its boiling point of 108.61°C (as shown on the 35% row of Figure 9-2). Further, assume that the column “tops” distillate product is to be 98% molar DCS and the column “bottoms” product is to be 98% molar TCS. Per the text above, the minimum reflux (i.e., the slope of the line between $X_D$ = $X_{DCS}$ of 0.98 and $X_B$ = $X_{DCS}$ of 0.02) is determined to be 3.9:1, reflux-to-distillate. To allow a reasonable number of trays, a reflux rate of 5:1 is chosen (about 1.3 times the minimum), so the slope of the upper column operating line is R/(R+1) = 5/(5+1) = 0.8333. Stepping off trays, it is determined that the total number of theoretical trays required is 34, with the feed tray at #22. Conventional tray counting starts at the column condenser = tray #1.
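A bare-bones version of this step-off can be coded directly; the sketch below assumes a saturated-liquid feed (vertical q-line) and a monotonic equilibrium function y_eq(x) interpolated from a table such as Figure 9-2, and it ignores tray efficiency and the other real-world effects noted below:

```python
import numpy as np

def mccabe_thiele(y_eq, XD, XB, XF, R, max_stages=100):
    """Count theoretical stages from XD down to XB at reflux ratio R."""
    xs = np.linspace(0.0, 1.0, 2001)
    ys = y_eq(xs)                                  # must be increasing
    x_eq = lambda y: float(np.interp(y, ys, xs))   # inverted equilibrium
    m_top = R / (R + 1.0)                          # rectifying-line slope
    y_feed = XD + m_top * (XF - XD)                # rectifying line at XF
    m_bot = (y_feed - XB) / (XF - XB)              # stripping-line slope
    x, y, n, feed_tray = XD, XD, 0, None
    while x > XB and n < max_stages:
        x = x_eq(y)                                # step across to equilibrium
        n += 1
        if feed_tray is None and x < XF:
            feed_tray = n                          # first tray past the q-line
        y = XD + m_top * (x - XD) if x >= XF else XB + m_bot * (x - XB)
    return n, feed_tray
```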
As can be seen by examining Figure 9-4, as the column’s reflux rate is increased, the upper column (magenta) operating line’s slope increases toward that of the (black) Total Reflux Line, which widens the gap between the operating lines and the (red) X-Y Equilibrium Line. So increasing the reflux decreases the number of trays required to make the specified separation.
In a normal industrial process application, all of the calculations are done via computer simulators, allowing for variations in operating line slope; feed quality condition (e.g., saturated vapor, saturated liquid, sub-cooled liquid, etc.); changes in the internal liquid/vapor flow ratios due to physical properties; ambient heat effects; tray efficiencies; etc.; as well as integrating the distillation column into the rest of the process design. So the McCabe-Thiele method is used here mainly for graphical explanation.
However, the primary consideration of the distillation column design is the X-Y equilibrium relationship. If this is considerably in error – such as using only Raoult/Dalton – the column design will be a failure. Given that industrial-scale distillation systems cost millions of dollars (tens of \$MM for larger ones), there is every reason to get the X-Y equilibrium relationship correct.
In the example of this article, the basic Raoult/Dalton model would give an X-Y equilibrium relationship with more curvature (than shown in Figure 9-4), and therefore a more open gap between the equilibrium curve and the operating lines. The resulting design reflux would be about 2/3 of what is actually required, and the calculated number of trays would likewise be about 2/3 of what is needed; in short, a major error in design. Given what is at stake financially, there is no question that the use of proper distillation science is well worth it, even if the concepts are complex and the calculations are tedious.
|
textbooks/eng/Chemical_Engineering/Distillation_Science_(Coleman)/09%3A_Putting_It_All_Together.txt
|
Distillation Science (a blend of Chemistry and Chemical Engineering)
This is Part X, Convergence Strategy of a ten-part series of technical articles on Distillation Science, as is currently practiced on an industrial level. See also Part I, Overview for introductory comments, the scope of the article series, and nomenclature.
This last article discusses how to approach solving non-intrinsic equations and equation systems, since these are part of the practice of distillation science. While it can be argued that this topic is technically more related to math or computer science, the techniques are indispensable tools to obtaining workably accurate solutions.
Most relationships in Distillation Science feature equations that are written with temperature as the starting variable, and then one solves for the other parameters. For example, almost all vapor pressure relationships are written intrinsically with VP as a function of temperature. But frequently the problem statement begins with a fixed pressure, and so there needs to be an iterative procedure to solve for temperature.
More complex relationships involving two starting variables are written in temperature and pressure, with the equation’s dependent variable written as a function of (absolute) temperature and (absolute) pressure. An example is an Equation of State, where Z is written as f(T,P), with molar volume (V) being back-solved from the defining relationship PV=ZRT. Some scientific relationships are written in dimensionless “reduced” variables, where T, P, and V are divided by their critical point value (e.g., Tr=T/Tc, Pr=P/Pc), which allows for more global equation forms. The Peng-Robinson Equation of State is an example of using reduced properties, although Equation 5-4 partially masks that by breaking the equation into pieces.
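As a small illustration of that back-solving step, the molar volume and the reduced variables are one-liners (a minimal sketch; the gas constant value and SI units are my assumptions for concreteness):

```python
R_GAS = 8.314462618  # universal gas constant, J/(mol*K)

def molar_volume(Z, T, P):
    """Back-solve molar volume from the defining relation P*V = Z*R*T
    (T in kelvin, P in pascals; returns m^3/mol)."""
    return Z * R_GAS * T / P

def reduced(T, P, Tc, Pc):
    """Dimensionless reduced variables, Tr = T/Tc and Pr = P/Pc."""
    return T / Tc, P / Pc
```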
The most basic technique for solution convergence in a non-intrinsic relationship is based on a “gaming strategy”: (1) pick a possible value for the independent variable and work it through to a solution, comparing it to the desired final dependent variable; then (2) move the value choice a little bit and see how that helps. This is just a methodical form of “trial and error”, and will eventually lead to a solution. However, such a strategy is almost impossible to automate for easy programming, as in a spreadsheet macro or a stand-alone BASIC program. Moreover, it can be hard to get started, and it may lead to an unstable situation that cycles around a solution rather than stably converging to the required accuracy.
Newton-Raphson Method
A better convergence option is a modification of Newton-Raphson, aka Newton’s Method. In the instructions below, "X" is the relationship's independent variable and "Y" is the dependent variable. The problem is that you have a desired target value for "Y" and have to iteratively seek the correct value for "X", and there is no way to re-write the relationship so that "Y" is the independent variable. The increasing subscripts denote the progression of improving solutions, such that "Yn" is identical to the desired target value of "Y" = "YT", and "Xn" is the value of "X" that yields that solution.
1. If there is a way to get a very rough estimate of solution, start with that as the first trial = X0, and work the scientific relationship through to a first possible solution, Y0. (If there is no rough estimation possible, start with a known “safe” initial X0 and calculate Y0.) Then make an extremely small ΔX step (say 0.1% change in X0). In the next iteration, X1=X0+ΔX would be evaluated to get Y1. If the change in ΔY (i.e., ΔY= Y1-Y0) is too inconsequential to be seen, increase the value of the ΔX step, until there is some noticeable change in ΔY. (This is an important consideration since some relationships are complex, so it is hard to pick that first ΔX step to get a reasonable change in Y). It does not matter whether the change to X (i.e., ΔX) is positive or negative: that will get worked out later.
2. Calculate the quantity $m=(\Delta Y/ \Delta X) = (Y_{1} -Y_{0})/(X_{1} -X_{0}) \nonumber$, which approximates the scientific relationship’s first derivative. “m” can be positive or negative (which is why the sign of ΔX didn’t matter). A positive value for “m” indicates that Y increases with increased X; a negative value for “m” indicates the opposite effect. Take notice of the sign of “m”, to see if it makes physical sense.
3. With YT as the target solution value, make the next guess for X as $X_{2} =X_{1} + K \times (Y_{T} -Y_{1})/m \label{10-1}$
4. The variable “K” is the relaxation factor, with values of “K” closer to unity giving faster convergence, but at the peril of overshoot or convergence instability; and “K” values closer to zero giving slow but stable convergence. Until you know something about the scientific relationship, it is recommended to use a “K” value of about 0.2-0.3 at first, especially when logarithmic relationships are involved. You can always change the “K” value once you see how sensitive the scientific relationship is.
5. For the “ith” iteration, the iteration equation becomes: $X_{i+1}=X_{i} +K \times (Y_{T} -Y_{i}) \times (X_{i} -X_{i-1}) / (Y_{i} -Y_{i-1}) \label{10-2}$
6. Continue iterating "n" times until the solution, Yn, is adequately close to the target solution, YT. It is suggested that loop iteration accuracy should be one decimal place greater than that of the final solution’s desired precision, to allow for round-off error and computer calculation precision.
When doing automated computer calculations, either via spreadsheet macro or stand-alone BASIC program, it is a good idea to error-check each iteration for division by zero, and to limit the number of loop iterations (say to a high number like 100). Otherwise, the iteration program can get “hung up” in an infinite loop. If you are getting an essentially zero value for the quantity $(Y_{n} -Y_{n-1}) \nonumber$, or the loop requires an excessive number of iterations to converge, adjust the relaxation factor “K” up or down to change the convergence sensitivity.
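Assembled into code, steps 1 through 6 plus the safeguards above give a solver like the following (a minimal Python sketch; the Antoine-style vapor pressure function and its constants in the usage example are hypothetical, chosen only to show the inversion of a VP relationship for saturation temperature):

```python
import math

def solve_for_x(f, y_target, x0, dx0=None, K=0.25, tol=1e-6, max_loops=100):
    """Find x such that f(x) is adequately close to y_target, per the
    relaxed Newton-Raphson (secant) procedure of Equations 10-1 and 10-2.
    x0 is a "safe" starting guess; dx0 the initial perturbation (defaults
    to 0.1% of x0); K the relaxation factor."""
    dx = dx0 if dx0 is not None else (0.001 * x0 if x0 != 0.0 else 1e-4)
    x_prev, y_prev = x0, f(x0)
    x, y = x0 + dx, f(x0 + dx)
    for _ in range(max_loops):              # cap loops to avoid hanging
        if abs(y - y_target) < tol:
            return x
        if y == y_prev:                     # guard against division by zero
            raise ZeroDivisionError("flat response; enlarge dx0 or adjust K")
        x_next = x + K * (y_target - y) * (x - x_prev) / (y - y_prev)
        x_prev, y_prev = x, y
        x, y = x_next, f(x_next)
    raise RuntimeError("no convergence in max_loops; adjust K up or down")

# Usage: invert a hypothetical Antoine-style VP equation (P in atm, T in K)
# to find the saturation temperature at 2 atm.
vp = lambda T: math.exp(9.0 - 2500.0 / T)
T_sat = solve_for_x(vp, y_target=2.0, x0=300.0, K=0.3)
```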
Some ideas for making that first rough estimate, to get the iteration started, are:
• For finding the saturation temperature for a known pressure (i.e., inverting a vapor pressure relationship), start with a basic VP relationship (inaccurate, but simple), such as $\ln P \propto - \Delta H_{vap}/RT \nonumber$, using the known ambient boiling temperature Tb @ P=1 atmosphere and a reasonable estimate for the latent heat of vaporization, $\Delta H_{vap} \nonumber$. Then the rough starting estimate is $T_{0}=T_{b}/[1- (R\,T_{b}/\Delta H_{vap}) \ln P_{T}] \nonumber$, where PT is the known pressure target, in atmospheres. This will give a high estimated T0 for PT > atmospheric, and a low estimated T0 for PT < atmospheric. It is more important to start with a "safe" estimate than a close first iteration, to avoid the pitfalls of automated solutions mentioned above.
• Vapor pressure charts and tables are good ways to get started, although sometimes hard to program into an automated sequence.
• When iterating for the saturation temperature of a mixture, start with the assumption of a linear relationship between the two fluids’ saturation temperatures at that pressure; then interpolate between those two pure-component temperatures per the mixture’s mole fraction. There is always curvature between two fluids’ saturation temperatures, so assuming linearity isn’t correct; but it gets the iteration loop started.
• Starting an iteration loop for fugacity coefficients should only be done after a cursory look at the reduced pressure, Pr, since these relationships involve calculating compressibility, Z. For liquid fugacity coefficients, avoid starting out at or getting too close to Pr=0.9, because the automated solution to cubic equations gets too sensitive, and one of the roots may become an “imaginary number” (i.e., a number times the square root of minus one). For vapor fugacity coefficients, do an initial check to see which Z = f(Tr, Pr) equation is best to use (i.e., Equation 6-6 or Equation 6-7): one is better for lower values of Pr and one better for higher values. If you attempt to cross over between equations in the middle of a solution, that discontinuity might cause an automated iteration to “blow up”.
• Resist the temptation to use Newton-Raphson to get the roots of the cubic relationship for Z: there are three roots, and only two are scientifically meaningful. Besides, you need to know all three roots in order to judge which is Zv (the largest of the three) and which is ZL (the smallest of the three). Instead, go through the procedure in Part V and solve the cubic equation for Z using the exact trigonometric method of Equation 5-4 (a code sketch of this trigonometric solution appears after this list).
• Starting the loop for activity coefficients should never be done at either extreme of mole fraction (zero or unity). The Liquid Activity Coefficient relationships are complex-shaped and can have $\Delta\gamma/ \Delta X \nonumber$ values that are almost zero (near X=1) or very high (near X=0). For bulk separations, start somewhere between 0.3<X<0.7, and use a moderate value for the relaxation factor “K” (say 1/3) in Equation (\ref{10-2}). For trace impurity distillation systems, either start closer to X=0.1 or at X=0.9, plus use a conservative relaxation parameter, say K=0.1.
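As referenced in the cubic-root bullet above, below is a sketch of the standard trigonometric (Viète) method for a cubic with three real roots, in the spirit of the Part V procedure; building the cubic's coefficients from the EOS is not shown here, and the coefficient names are mine:

```python
import math

def cubic_roots_trig(a2, a1, a0):
    """All three real roots of z^3 + a2*z^2 + a1*z + a0 = 0, via the
    trigonometric (Viete) method; valid only when all roots are real."""
    p = a1 - a2 * a2 / 3.0                          # depress: z = t - a2/3
    q = 2.0 * a2 ** 3 / 27.0 - a2 * a1 / 3.0 + a0
    if 4.0 * p ** 3 + 27.0 * q ** 2 >= 0.0:
        raise ValueError("not in the three-real-root regime")
    m = 2.0 * math.sqrt(-p / 3.0)
    theta = math.acos(3.0 * q / (p * m)) / 3.0
    return sorted(m * math.cos(theta - 2.0 * math.pi * k / 3.0) - a2 / 3.0
                  for k in range(3))

# Z_L is the smallest root and Z_v the largest; the middle root is discarded:
# Z_L, _, Z_v = cubic_roots_trig(a2, a1, a0)
```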
Convergence Strategy for modified Thek-Stiel VP Equations
The “sting” of working out nested loop iterations has been removed by giving the solutions for the modified Thek-Stiel VP equation (Equation 4-3) in tabular form (Table 4-1). Obtaining vapor pressure solutions for fluids other than chlorosilanes and their impurities (see Part IV) requires more complex convergence strategies than simple Newton-Raphson.
These VP solutions involve a system of nested loops: the value of the Watson coefficient “q” depends on evaluating the candidate vapor pressure equation at Tr=0.7 to get the loop’s estimate of the acentric factor; and obtaining the loop’s estimate of “c” and “n” depends on finding a value of αc that gives the correct saturation temperature at ambient pressure (aka, the normal boiling point). The following solution strategy is advised for fluids other than those of Table 4-1 (or in case one wants to check the table’s values).
First, be sure all of the values for Tb, Tc, Pc are well-validated, and the degree of hydrogen bonding is known, per the definitions in Part IV. If the new fluid to be investigated is of a periodic table group other than III, IV, or V, the solution will be more intricate than the one below, since “k” (i.e., the degree of hydrogen bonding) is not well determined for groups other than III-V, and a double iteration will be needed. Also be sure the vapor pressure data is well-validated. A good check on that is to use a simplistic extrapolation of the known VP data, to see if the rough value of the acentric factor matches with the values of Tc and Pc. See Part III for more details on the acentric factor, especially Equation 3-3. For example, if an extrapolation of VP data using the Antoine VP relationship (Equation 2-3), in conjunction with values for Tc and Pc, ends up giving an acentric factor value over 0.3 for a compact molecule, or an acentric factor under 0.1 for a complex molecule, something is wrong with the data (either the VP data or the values of Tc and Pc).
For Equation 4-3, the value of the hydrogen bonding parameter, k, is established by experimental data for Group III, IV, and V hydrides, chlorides, oxychlorides, intermediate hydrochlorides, and several aliphatic hydrocarbons. The value of k ranges from 0.12 to zero, and is tabulated in Part IV for many fluids. The rules for determining "k" are a bit complex, and are available upon request from the author. In brief, for Group III, IV, and V hydrides, k = 0.1197, 0.0914, and 0.1009 respectively. For all fully chlorinated or methylated compounds of Groups III, IV, and V, k = 0.0291. For aliphatic hydrocarbons, k = 0. For other hydrochloride intermediates, a quadratic function has been determined to interpolate between all-hydride and all-chloride. These rules for "k" values are used in the entries of Table 4-3.
Having a rough estimate of the ambient pressure latent heat of vaporization is handy (but not necessary) for the first guess at the T-S heat variable. If no estimate is available, consult one of the many handbooks of chemical compounds, such as Yaws’ Chemical Properties Handbook or Lange’s Handbook of Chemistry, and look for a latent heat value for a compound of similar complexity. It is always better to start a new fluid’s solution to the Thek-Stiel VP equation with a low estimate for the T-S heat variable, and then increase it.
The overall solution convergence strategy for the modified Thek-Stiel VP equation involves a triple-nested iteration loop, with the logic flow sheet of the strategy shown in Figure 10-1. The solution is obtained by improving candidate solutions for the T-S heat variable, the acentric factor (ω), and the αc parameter, until all loops are converged.
The inner-most iteration loop is the $\alpha_{c} \nonumber$ parameter (a thermodynamic consistency parameter from Part IV), which is adjusted until the VP equation gives the correct saturation temperature at atmospheric pressure, aka the normal boiling point Tb. That loop is started by using Riedel’s estimation method: calculating the rough value of $\psi_{b}= -35 +36/T_{br} + 42\ln T_{br} -T_{br}^{6} \nonumber$, and then plugging that rough value of $\psi_{b} \nonumber$ into the equation below:
$\alpha_{c} \approx (0.315 \psi_{b} + \ln P_{c})/(0.0838 \psi_{b} - \ln T_{br}) \label{10-3}$
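In code, that starting estimate is a direct transcription of the two formulas above (a minimal sketch; Pc is assumed to be in atmospheres, consistent with the equations):

```python
import math

def alpha_c_start(Tbr, Pc_atm):
    """Riedel-style first guess for alpha_c (Equation 10-3 above);
    Tbr = Tb/Tc (dimensionless), Pc_atm in atmospheres."""
    psi_b = -35.0 + 36.0 / Tbr + 42.0 * math.log(Tbr) - Tbr ** 6
    return (0.315 * psi_b + math.log(Pc_atm)) / (0.0838 * psi_b - math.log(Tbr))
```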
The middle of the nested loops is the acentric factor, which can be conveniently started using the Lee-Kesler acentric factor (ω) approximation (not very accurate, but a good loop-starting estimate). The Lee-Kesler approximation conveniently uses only Pc and Tbr = Tb/Tc:
$\omega \cong \frac{-\ln P_{c}+A+B/T_{br} +C\,\ln T_{br} +D\,T_{br}^{6}}{E+F/T_{br} +G\,\ln T_{br}+H\,T_{br}^{6}} \label{10-4}$
A = -5.92714 E = 15.2518
B = 6.09648 F = -15.6875
C = 1.28862 G = -13.4721
D = -0.169347 H = 0.43577
Based on this rough estimated value of ω, a value of Watson’s coefficient “q” (from Equation 4-3) can be determined per the relationship
$q = 0.37028 + 0.065404\,\omega \label{10-5}$
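A direct transcription of Equations \ref{10-4} and \ref{10-5} (again assuming Pc in atmospheres):

```python
import math

def omega_lee_kesler(Pc_atm, Tbr):
    """Lee-Kesler starting estimate of the acentric factor, Equation 10-4."""
    num = (-math.log(Pc_atm) - 5.92714 + 6.09648 / Tbr
           + 1.28862 * math.log(Tbr) - 0.169347 * Tbr ** 6)
    den = (15.2518 - 15.6875 / Tbr
           - 13.4721 * math.log(Tbr) + 0.43577 * Tbr ** 6)
    return num / den

def watson_q(omega):
    """Watson coefficient q from the acentric factor, Equation 10-5."""
    return 0.37028 + 0.065404 * omega
```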
The outer-most loop is the T-S heat variable, “T-S ΔHv”, which doesn't really have much scientific meaning, but is a part of the Thek-Stiel relationship. This parameter is iterated based on the error function comparing the VP equation’s output against the various known (or estimated) VP data points. To get the best overall fit, I suggest including data near Tr = 0.7 (where the acentric factor is evaluated). In getting the various T-S VP Equation solutions shown in Table 4-1, I chose not to automate this outer loop, but to rely on the more basic “gaming strategy” of always starting low, so as to monitor the convergence better. For the automated inner loops, I used a BASIC program (the actual software used was Liberty Basic version 4.03, selected due to prior familiarity). The starting guess for the outer loop iteration was based on the atmospheric ΔHvap given in chemistry handbooks and internet sources, or based on similar compounds where no published estimate of atmospheric ΔHvap was available. This choice of starting guess was based on the "safe guess" criterion, as opposed to a scientific estimation.
Figure 10-1 below schematically gives the convergence strategy previously used with success. After ensuring that all input parameters and data are reasonable, start with the initial guess for the T-S heat variable, “T-S ΔHv”. Then estimate a starting ω per Equation (\ref{10-4}), and from that estimate a starting $\alpha_{c} \nonumber$ per Equation (\ref{10-3}). Now calculate values for constants “A”, “B0-B3”, “c”, and “n” in Equation 4-3, and see if the inner-most loop’s convergence is satisfied. If not, iterate the value of $\alpha_{c} \nonumber$ until the correct VP is shown at Tb of 1.0 atmosphere (i.e., the normal boiling point). This inner-most loop is best converged by approaching the VP @ Tb from P<1 atmosphere, with convergence to ± 1.0E-4 atmospheres being seen within about ten iterations via Newton-Raphson, using a relaxation factor of 0.4 in Equation (\ref{10-2}). Note that if the first guess of $\alpha_{c} \nonumber$ shows a VP @ Tb of >1.0 atmospheres, just keep reducing the value of $\alpha_{c} \nonumber$ by 10% until the convergence approach is from VP values below 1.0 atmosphere. Trying to converge from the “high side” can be unstable.
Once the inner-most loop is converged, start changing the value of the acentric factor (ω) to converge the middle loop. This loop converges quite easily with simple successive substitution; Newton-Raphson is not required. Just put in the last value of ω, calculate the candidate VP constants (“A”, “B0‑B3”, “c”, and “n” in Equation 4-3), and determine a new value for ω based on the VP at Tr=0.7 (see Equation 3-3 for the definition of ω).
The outer loop is converged by picking the value of the T-S heat variable, “T-S ΔHv”, that best satisfies the error function of the VP data (i.e., predicted VPs as compared to the known data point VPs). I found that the best overall fit used a least squares approach which emphasized the data near Tr=0.7, and which compared the VP equation error on a relative level (as opposed to an absolute level). If absolute VP equation errors are used instead of relative ones, the fit over-emphasizes data at higher temperatures.
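One possible form of that error function is sketched below. The Gaussian weighting centered at Tr = 0.7 (and its width) is an illustrative assumption on my part; the text specifies only that the fit use relative errors and emphasize data near Tr = 0.7.

```python
import math

def vp_fit_error(vp_model, vp_data, Tc, width=0.08):
    """Weighted relative least-squares error of a candidate VP equation.
    vp_model -- function returning the predicted VP at temperature T
    vp_data  -- iterable of known (T, P) data points
    width    -- width of the (assumed) Gaussian weight centered at Tr = 0.7"""
    total = 0.0
    for T, P in vp_data:
        rel = (vp_model(T) - P) / P                        # relative, not absolute
        weight = math.exp(-((T / Tc - 0.7) / width) ** 2)  # emphasize Tr near 0.7
        total += weight * rel * rel
    return total
```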
|
textbooks/eng/Chemical_Engineering/Distillation_Science_(Coleman)/10%3A_Convergence_Strategy.txt
|
Overview
Module Goal: To sensitize you to the central role that phase behavior plays in the petroleum extraction processes.
Module Objective: At the completion of this module, you should be able to describe, in concrete terms, how knowledge of fluid phase behavior impacts specific aspects of the process design and/or operations you are engaged in.
01: Introduction and Purpose
The basic essence of engineering is to harness the tremendous energy of the universe, in its latent or potent form, to resolve socio-economic problems. More often than not in the process, energy must be converted from its natural form into a form that is more amenable to human utilization. Petroleum and natural gas engineering is no different. It is a discipline focused on efficiently extracting the petroleum buried deep within the earth and getting it to where it is needed. The petroleum fluids of particular interest to the petroleum and natural gas engineers are crude oil and natural gas. The term crude oil is used broadly here to encompass the different forms of petroleum liquids that are used for energy and other purposes.
Natural gas and crude oil are naturally occurring hydrocarbon mixtures; generally they are referred to as petroleum fluids. These fluids are found underground at elevated pressure and temperature conditions. Petroleum fluids are composed principally of hydrocarbons; various non-hydrocarbon components, such as nitrogen, carbon dioxide, and hydrogen sulfide, may also be present.
Producing, separating, transporting, and storing petroleum fluids are the primary responsibilities of a petroleum and natural gas engineer. Therefore, we make no mistake when we refer to natural gas and petroleum engineers as petroleum fluid engineers. These engineers deal with hydrocarbon fluids to make a living. As a consequence, they may very well be referred to as hydrocarbon fluid engineers. One cannot overemphasize the importance of the primary fluids — oil and natural gas — to the modern industrial society. Indeed, modern man’s reliance upon natural gas and crude oil as the primary source of energy is such that these fluids are absolutely critical to the operation of today’s industrial society. It is fair to assume that you, the reader, are interested in hydrocarbon fluids production.
At every stage of the petroleum exploration and production business, a hydrocarbon fluid engineer is needed. Hydrocarbon fluid engineers might find themselves dealing with activities such as reserve evaluations, drilling operations, reservoir analyses, production operations, and gas processing. They are, therefore, called upon to deal with a wide spectrum of activities, mostly dealing with fluid handling and the associated facilities.
Most of the fluid handling protocols require the engineer to know a priori how the fluids will behave under a wide range of pressure and temperature conditions, particularly in terms of their volumetric and thermophysical properties. For example, the engineers should know if the reservoir contains a dry gas, a wet gas, or a gas-condensate before they design the surface production facility. This is collectively termed fluid phase behavior. It is, therefore, not an overstatement to say that a thorough understanding of hydrocarbon fluid phase behavior is essential for the work of a petroleum and natural gas engineer. Phase behavior has defining implications in petroleum and natural gas engineering processes. Pressure, volume, and temperature (PVT) relations are required in simulating reservoirs, evaluating reserves, forecasting production, designing production facilities, and designing gathering and transportation systems.
En route from a subsurface reservoir to man’s energy extracting combustion processes, a hydrocarbon molecule goes through various phase- and property-altering intermediate stages. These properties are crucial in designing and operating the processes efficiently and optimally. Phase behavior thermodynamics gives us the tools needed for gaining the desired understanding of how fluids behave at any of those stages.
|
textbooks/eng/Chemical_Engineering/Phase_Relations_in_Reservoir_Engineering_(Adewumi)/01%3A_Introduction_and_Purpose/1.01%3A_Introduction.txt
|
Natural gas and crude oil are naturally occurring hydrocarbon mixtures that are found underground at elevated conditions of pressure and temperature. They are generally referred to as petroleum fluids. Petroleum fluids are principally made up of hydrocarbons, but a few non-hydrocarbon components may be present, such as nitrogen, carbon dioxide, and hydrogen sulfide.
We make no mistake when we refer to Natural Gas and Petroleum Engineers as Fluid Engineers. That is, engineers who deal with fluids to make a living. Moreover, we specialize in two fluids whose importance to society cannot be overstated; indeed, humankind relies on natural gas and crude oil as the premier sources of energy that keep society operative. As a consequence, we may very well be titled Hydrocarbon Fluid Engineers. That is essentially what we are about. At every stage of the oil business, a Hydrocarbon Fluid Engineer is required: reservoir analyses, drilling operations, production operations, and processing, among others, reveal the wide spectrum of areas where an engineer with expertise in hydrocarbon fluids is fundamental.
This said, what can be more important for a Hydrocarbon Fluid Engineer than understanding how these fluids behave?
It is not an overstatement to say that a thorough understanding of hydrocarbon phase behavior is quintessential for the Petroleum and Natural Gas Engineer. Phase Behavior has many implications in natural gas and petroleum engineering. Pressure, volume, temperature (PVT) relations are required in simulating reservoirs, evaluating reserves, forecasting production, designing production facilities, and designing gathering and transportation systems.
Every hydrocarbon molecule in the reservoir is to embark on a fascinating journey from beneath the earth, passing through a great deal of intermediate stages, to be finally dumped into our atmosphere upon combustion (release of energy). Phase Behavior is the part of thermodynamics that gives us the tools for the complete understanding of how fluids behave at any of those stages. Let us be the witnesses of this exciting journey.
1.03: Summary
Regardless of the aspect of the petroleum extraction process — be it drilling, reserve estimation, reservoir performance analysis, reservoir simulation, tubing flow hydraulics, gathering design, gas-liquid separation, oil and gas transmission, oil and gas metering or quality control — a good predictive knowledge of phase behavior is called for. This course will help you to acquire this knowledge. You may never have to develop computer routines for doing many of the predictive calculations pertaining to hydrocarbon fluid phase behavior; nevertheless, the knowledge base developed in this course will help you develop the understanding needed to be an intelligent user of commercial software packages and to ask the right questions that your responsibility as an engineer demands.
1.04: Action Item
Write a paragraph or two describing, very briefly, your educational background and practical experience, including your knowledge of phase behavior (if any) and how you apply or have applied this subject matter in your current or recent work assignments. Post your paragraph(s) to the Module 1 Message Board in our online course environment Canvas. From our Canvas course environment, click on the In Touch tab.
• Scroll down to the "Message Boards" section, then click on the link for Module 1 Message Board.
• In the message space provided, type in your paragraph(s) or paste them in if you drafted them first in another application.
• Click on the Save button to post your message.
• Click on the other student postings in the listing of threaded messages to learn more about your fellow classmates.
|
textbooks/eng/Chemical_Engineering/Phase_Relations_in_Reservoir_Engineering_(Adewumi)/01%3A_Introduction_and_Purpose/1.02%3A_Why_Study_Phase_Behavior%3F.txt
|
Learning Objectives
• Module Goal: To familiarize you with the basic concepts of phase diagrams as a means of representing thermodynamic data.
• Module Objective: To highlight the basic description of phase diagrams of single-component systems.
02: Phase Diagrams I
One of the primary challenges that we face as engineers is how to communicate large quantities of complex information to our peers and superiors. For instance, we may need to report what thermodynamic changes our working fluid is undergoing. Here, the familiar saying “A picture is worth a thousand words” applies. Although most non-engineers believe that the only language an engineer speaks is mathematics, in actuality, the most effective means that engineers use to communicate information involves the use of pictures or diagrams. Phase behavior is not an exception. In phase behavior thermodynamics, phase diagrams are used precisely for this purpose.
Phase Diagram:
A phase diagram is a concise graphical method of representing phase behavior of fluids. It provides an effective tool for communicating a large amount of information about how fluids behave at different conditions.
First of all, let us classify fluids into two broad groups on the basis of the number of components that are present in the system: pure component systems and mixtures.
Two Classes of Fluids
1. Pure-component systems
2. Mixtures
Although this classification may seem trivial at a first glance, it recognizes the paramount influence of composition on the phase behavior of a fluid system. Whereas for a single-component system, composition is not a variable and therefore cannot influence behavior, the behavior of a mixture is strongly controlled by composition. In fact, as the number of components in the system increases, the complexity of the phase diagram increases as well. Therefore, the simplest form of a phase diagram is that of a system made of only one component (a pure-component system).
2.02: Pure-component Systems
Let us start with a well-known single component system: water — the most plentiful substance on earth. What is the behavior of water under different conditions of pressure and temperature? Or even more specific, is there a single answer to the question: what is the boiling point of water? Most people would say, “of course, 100 °C”, but a more accurate response would pose a clarifying question: “At what pressure?” It is common knowledge that water boils at 100 °C (212 °F) at atmospheric pressure. By requiring a pressure specification in order to uniquely define the boiling point of water, we are acknowledging that the boiling temperature of a pure substance is pressure-dependent. In reality, we are also implicitly applying a very useful thermodynamic principle, the Gibbs Rule, but this will be the topic for later discussion.
In thermodynamics, we refer to the Normal Boiling Point as the boiling temperature of a fluid at 1 atm of pressure (that is, atmospheric pressure.) Therefore, 100 °C (212 °F) is the normal boiling point of water.
What if we want to communicate this idea? We would like to communicate the concept that the temperature at which water boils varies with pressure. What about a picture to represent this information? Here is our first phase diagram, Figure 2.2.1. Whereas several sentences may be required to describe the variability of the boiling point of water, the single line shown in Figure 2.2.1 is adequate. What does Figure 2.2.1 tell us? It tells us that the boiling temperature of a liquid increases as pressure increases. In other words, it says that the vapor pressure of a liquid increases as temperature increases. That is to be expected because as the temperature increases, more liquid molecules are able to escape into the vapor phase, thus increasing the pressure that the aggregate of all vapor molecules exerts on the system (i.e. vapor pressure).
The curve in Figure 2.2.1 is called the vapor pressure curve or boiling point curve. The line also represents the dew point curve and the bubble point curve; one on top of the other. This curve represents the transition between the vapor and liquid states.
Definition of Basic Terms
Vapor Pressure: The pressure that the vapor phase of a fluid exerts over its own liquid at equilibrium at a given temperature.
Dew Point: The pressure and temperature condition at which an infinitesimal quantity of liquid (a droplet) exists in equilibrium with vapor. It represents the condition of incipient liquid formation in an initially gaseous system. Notice that it can also be visualized as a “liquid system” where all but an infinitesimal quantity of liquid has been vaporized.
Bubble Point: The pressure and temperature condition at which the system is all liquid, and in equilibrium with an infinitesimal quantity (a bubble) of gas. This situation is, in essence, the opposite of that of the dew point.
Note:
For single-component systems, one single curve represents all three of these conditions (vapor pressure, dew point and bubble point conditions) simply because Vapor Pressure = Dew Point = Bubble Point for unary systems.
In Figure 2.2.1, once a saturation pressure has been selected, there is one (and only one) saturation temperature associated with it. This is only true for a single component system. In other words, this is the only temperature (at the given pressure), at which liquid and gas phase will co-exist in equilibrium. The rule that governs the uniqueness of this point, for a single-component system, is called the Gibbs Phase Rule. This is the second time we have mentioned this rule, pointing to its great importance in phase behavior. Let us postpone any detailed discussion of the Gibbs Phase Rule until later.
The vapor curve, shown in Figure 2.2.1, represents the transition between the vapor and liquid states for a pure component. Obviously, this is not the whole story. If we cool the liquid system, it makes sense to expect ice to form (a solid phase). We can communicate this new idea by adding more information to Figure 2.2.1. In fact, there is a line that defines the liquid-solid transition; it is called the solidification (or melting) curve (see Figure 2.2.2). Furthermore, even though it is counter-intuitive, it is possible to go from solid to vapor without going through a liquid state, if the pressure is low enough. This information can be added to the diagram by including the sublimation curve. Thus, Figure 2.2.2 represents a more complete phase diagram for a pure-component system.
See how more and more information can be represented within the limits of the phase diagram. As petroleum and natural gas engineers, we focus our attention on the Boiling Point Curve, as it represents the transition between liquid and gas, the phases with which we deal most often. Let us focus our attention on it.
Figure 2.2.3 presents a P-T diagram representing the vapor pressure curve and its extremities. Two very important thermodynamic points bound the vapor pressure curve: the Critical Point at its upper end and the Triple Point at its lower end.
The Triple Point is the meeting point of the vapor pressure, solidification and sublimation curves (see Figure 2.2.2); hence, it represents the only condition at which all three phases of a pure substance (solid, liquid and gas) can co-exist in equilibrium.
At the Critical Point, gas and liquid are in equilibrium without any interface to differentiate them; they are no longer distinguishable in terms of their properties. As we recall, the only location on the P-T diagram where liquid and gas can be found together in equilibrium is along the vapor pressure curve. Hence, the critical point is clearly the maximum value of temperature and pressure at which liquid and vapor can be at equilibrium. This maximum temperature is called the critical temperature (Tc); the corresponding maximum pressure is called the critical pressure (Pc).
Let us conclude this first module by highlighting the most important properties of the critical point of pure substances, as shown below. The next module will be providing more details of interest that are embedded in Figure 2.2.3.
Properties Of The Critical Point (Tc,Pc) (For Pure Substances):
1. Temperature and pressure for which liquid and vapor are no longer distinguishable.
2. For T > Tc, liquid and vapor will not co-exist, no matter what the pressure is.
3. For P > Pc, liquid and vapor will not co-exist, no matter what the temperature is.
2.03: Action Item
Problem Set
1. Speculate on what the difference would be between phase diagrams (P-T curve) of various pure component systems. Write a paragraph or so on what you believe the differences are and why.
2. What should be the differences in the diagrams in question 1? Are the dew points the same? What about the bubble points? And their critical points?
|
textbooks/eng/Chemical_Engineering/Phase_Relations_in_Reservoir_Engineering_(Adewumi)/02%3A_Phase_Diagrams_I/2.01%3A_Introduction.txt
|
Learning Objectives
• Module Goal: To familiarize you with the basic concepts of phase diagrams as a means of representing thermodynamic data.
• Module Objective: To familiarize you with the use of P-T and P-v diagrams.
03: Phase Diagrams II
From the last module you will recall that the vapor-pressure curve (Figure 2.2.3) and its extremities were examined. That figure is presented in this module in Figure 3.1.1. We concluded that two very important thermodynamic points bound the vapor pressure curve: the Critical Point at its upper end and the Triple Point at its lower end.
Now let us take a second look at this figure. We can better understand the information represented by the vapor pressure curve by looking at the process of isobaric heating. This is illustrated by line ACB in Figure 3.1.2. The term “isobaric” pertains to a constant pressure process. By isobaric heating process, we then mean a “constant pressure addition of heat into a system.”
Such a process can be described as follows. Starting at point A and moving towards point C at constant pressure, we add heat to the system. By adding this heat, we cause a temperature increase in the system (Temperature at C > Temperature at A). At point C, which lies on the vapor pressure or boiling point curve, we encounter a phase change. To the left of point C, at lower temperatures, exists only liquid. To the right of point C, at higher temperatures, there is only vapor. Therefore, a sharp discontinuity in density exists at point C. During this transition, from liquid to vapor, we will notice that the heat that we add to the system does not cause any temperature increase, and in fact, temperature and pressure conditions remain constant during the transition represented by the vapor curve. In other words, even though we are adding heat, the system remains at the pressure and temperature associated with point C until the whole phase transition has taken place — i.e., until all the liquid is converted to vapor. Instead of working to increase liquid temperature, this heat serves to move liquid molecules apart until all liquid has become vapor.
Up to this point, we saw that the heat added before the system reached the phase transition was used to raise the temperature of the substance. However, the heat that we are adding right now, during the phase transition, is not causing any temperature increase (hence it is said to be hidden heat or latent heat of vaporization). Therefore, we differentiate between two kinds of heat: sensible heat and latent heat.
• Sensible Heat: Its main purpose is to cause an increase in temperature of the system.
• Latent Heat: It serves only one purpose: to convert the liquid into vapor. It does not cause a temperature increase.
In fact, the name “latent” suggests “hidden.” Here, we are adding heat to the system but are not seeing its effect in terms of temperature increase. The heat that is needed to transform one mole of saturated liquid into vapor is known as the latent molar heat of vaporization: $(\Delta \tilde{H}_{vap})$
Once we have converted all the liquid into vapor (i.e., we supplied all the necessary latent heat to accomplish this), we may continue to add more heat. If we do so, the temperature will rise again and we will end at point B (Figure 3.1.2). This heat is also sensible since it is causing the temperature of the system to rise.
It is interesting to note that, in order to reverse the process from point B to point A, we will have to remove the exact amount of heat that we had added before. This is a basic consequence of an energy balance principle. We call such a reverse process an isobaric cooling process. We will have to remove some sensible heat in order to cool the vapor from point B to C, and then we will remove all the latent heat of the vapor to condense it into liquid (transition at point C). Finally, we will also need to remove more sensible heat from the system for the cooling of the liquid from point C to point A.
In the previous two processes, from A to B or vice versa, we had to cross the phase boundary represented by the vapor pressure curve. However, this is not the only thermodynamic path that is available for us to go from A to B. Figure 3.1.3 depicts another possible path.
Instead of doing the whole process isobarically, we may devise a new path that may also accomplish the goal of taking the system from a condition ‘A’ to a condition ‘B.’ Consider the path ADEB that is shown in Figure 3.1.3.
Sequence of Paths
1. Path AD: Isothermal compression
2. Path DE: Isobaric heating
3. Path EB: Isothermal expansion
There is something remarkable about this new path. Unlike the previous path, notice that we do not cross the phase boundary at all. The consequences of taking this new road may seem astonishing at first glance: we went from an all-liquid condition (point A) to an all-vapor condition (point B) without any sharp phase transition. In fact, along the path ADEB there is NO phase transition because we never crossed the phase boundary. Since the phase boundary represents a sharp discontinuity in density (and other physical properties), the fact that we are not crossing it tells us that as we go, there is actually a gradation in density (from a liquid-like, high density at point A to a gas-like, low density at point B) instead of a sharp change from the high liquid density to the low gas density.
We were able to do so because we went above the critical conditions. In the vicinity of and beyond critical conditions, we are no longer able to distinctly label the single-phase condition as either “liquid” or “gas”. Under these conditions, any transition takes place gradually without any differentiation between a ‘liquid’ and a ‘gas’ phase. We call this fluid, which we cannot define either as a liquid or as a gas, a supercritical fluid. In terms of density, a supercritical fluid may be described, at the same time, as a light liquid (its density is not as high as the liquid density) and a heavy gas (its density is not as low as the typical gas density of the given substance). The behavior of fluid around this area is an active and interesting area of current research.
In summary, for a pure substance, you can avoid having an abrupt phase transition (such as the one described by the path ACB) by going around the critical point (path ADEB). Keep in mind that any path that crosses the vapor pressure curve (ACB) will undergo a phase transition.
|
textbooks/eng/Chemical_Engineering/Phase_Relations_in_Reservoir_Engineering_(Adewumi)/03%3A_Phase_Diagrams_II/3.01%3A_Vapor_Pressure_Curve.txt
|
In the previous discussion, we used the P-T diagram and were not concerned about changes in the volume of the system. If we want to follow changes in volume, we may construct P-v or T-v diagrams, in which we hold temperature (T) or pressure (P) constant. Let us consider the case of a P-v diagram (Figure 3.2.4).
In this case temperature is being held constant; our substance is undergoing an isothermal compression process. Starting at E (all-vapor condition), an increase in pressure will result in a rather significant reduction in volume since the gas phase is compressible. If we keep compressing isothermally, we will end up at point F, where the gas will be saturated and the first droplet of liquid will appear. We have come to the two-phase condition, where liquid (L) and vapor (V) co-exist in equilibrium, for the first time, albeit an infinitesimal amount of liquid.
Once we reach the two-phase condition, what happens is not intuitive. While we keep on compressing by decreasing the volume (path F-G,) the pressure of the system remains constant; this condition continues until all the vapor has become liquid. Point G represents the last condition of liquid and vapor (L+V) coexistence, saturated liquid condition (liquid in equilibrium with an infinitesimal amount of vapor.) Once we have only liquid, if we keep on compressing (i.e., attempting to reduce liquid volume) we will observe a rapid increase in pressure, as indicated by the steep slope in the P-v diagram. This is because liquid is virtually incompressible, hence, a great deal of pressure is needed to cause a small reduction in volume.
It is important to recognize some points of this process. If we recall our previous definitions of basic concepts, we will recognize point F, where only a tiny quantity of liquid exists in an otherwise completely gaseous system (the dew point of the system at the given temperature). Similarly, Point G is the bubble point; only an infinitesimally small bubble of vapor exists in an otherwise liquid system.
But wait a second. Let us try to compare Figure 3.2.4 with Figure 3.1.2. Can we relate them to each other? Where is path F-G in Figure 3.2.4 represented in Figure 3.1.2 (repeated below)?
The answer is, path F-G is represented by one point in Figure 3.1.2; that is, point C. Recall, for a single-component system, dew points and bubble points are identical. During a phase transition, both pressure and temperature must remain constant for pure components.
Now, if we want to generate all the possible points that make up the vapor pressure curve in Figure 3.1.2, we would need to repeat the experiment for different temperatures. We would end up with a family of isotherms (each similar to the one presented in Figure 3.2.4). This is represented in Figure 3.2.5.
The zone where the isotherms become flat delineates the two-phase region. It is clearly seen that by plotting all the pairs in that zone (P1,T1), (P2,T2)… (Pc, Tc) we will be able to reproduce Figure 3.1.2.
If we now draw a line through all the Bubble Points in Figure 3.2.5, and then draw a line connecting all the Dew Points, we will end up with the Bubble Point Curve and the Dew Point Curve, respectively. It is clear that the two curves meet at the critical point (Pc, Tc). Furthermore, the two curves delineate the phase envelope, which contains the 2-phase region inside. If we “clean” Figure 3.2.5 a little, we end up with the phase envelope that is shown in Fig. 3.2.6.
If you carefully follow the trend of the critical isotherm (@ T = Tc in Fig. 3.2.5), you will realize that it has a point of inflexion (change of curvature) at the critical point. Furthermore, the critical point also represents the maximum point (apex) of the P-v envelope. Mathematically, this information is conveyed by the expressions:
$\left( \frac{\partial P}{\partial V} \right)_{T_{c}} = \left( \frac{\partial^{2} P}{\partial V^{2}} \right)_{T_{c}} = 0 \nonumber$
which are usually known as the criticality conditions. These conditions are always satisfied at the critical point. We will comment more on this after we begin the discussion on Equations of State (EOS) — semi-empirical relationships that mathematically model the P-v-T behavior of fluids.
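As a concrete illustration of how the criticality conditions pin down the critical constants, consider the van der Waals EOS (chosen here only because it is the simplest cubic EOS, not because it is the model developed later in the course):

$P=\frac{RT}{V-b}-\frac{a}{V^{2}} \nonumber$

Applying the two criticality conditions at the critical point,

$\left( \frac{\partial P}{\partial V} \right)_{T_{c}}=-\frac{RT_{c}}{(V_{c}-b)^{2}}+\frac{2a}{V_{c}^{3}}=0 \quad \text{and} \quad \left( \frac{\partial^{2} P}{\partial V^{2}} \right)_{T_{c}}=\frac{2RT_{c}}{(V_{c}-b)^{3}}-\frac{6a}{V_{c}^{4}}=0 \nonumber$

and solving the two equations simultaneously gives

$V_{c}=3b, \quad T_{c}=\frac{8a}{27Rb}, \quad P_{c}=\frac{a}{27b^{2}} \nonumber$

so the two derivative conditions alone determine all three critical constants from the EOS parameters a and b.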
3.03: Action Item
Problem Set
1. Explain in your own words (no more than a paragraph) why the 2-Phase segment of the P-v isotherm should correspond to one single point in the P-T diagram.
2. What about the critical point in the P-T diagram? Which P-v isotherm could that correspond to?
3. Take two different hydrocarbons (say, C1 and C2) and compare their P-T vapor pressure curves. What do you notice? As you saw in the previous module, those P-T curves will be found at different conditions of pressure and temperature. But more than that, what can you comment about the slopes of both curves? Are they different? Why? Speculate.
|
textbooks/eng/Chemical_Engineering/Phase_Relations_in_Reservoir_Engineering_(Adewumi)/03%3A_Phase_Diagrams_II/3.02%3A_PV_Diagram_for_Pure_Systems.txt
|
Learning Objectives
• Module Goal: To familiarize you with the basic concepts of Phase Diagrams as a means of representing thermodynamic data.
• Module Objective: To introduce you to the additional complexity brought about by the presence of one or more additional components.
• 4.1: Binary Systems
• 4.2: Retrograde Phenomenon
Regarding multi-component mixtures (where the binary system is the simplest case), some interesting phenomena profoundly differentiate their behavior from the behavior of single-component systems. We are now talking about retrograde phenomena.
• 4.3: Action Item
04: Phase Diagrams III
Thus far, we have focused our attention on single-component systems (also called unary, one-component, or pure-component systems). However, real-life systems are never single-component; they are multicomponent systems. The simplest of this category is the binary system. The good news, however, is that the behavior of multicomponent systems is quite similar to that of binary systems. Therefore, we will focus on binary systems, since they are easier to illustrate.
Let us place two gases (A and B) in an isothermal cell. As we did before, we will keep the temperature constant during this experiment (shown in Fig. 4.1.1).
If we move the piston down, we compress the gases, causing a decrease in volume and, in turn, an increase in pressure. The process starts at point A, as shown in Figure 4.1.2, an all-vapor condition.
Nothing here is new regarding the compression of the vapor itself. As pressure increases, volume decreases accordingly. After some compression, the first droplet of liquid will appear. That is, we have found the dew point of the mixture (point B). We then proceed with compression. As we further compress the system, more liquid will appear and the volume will continue to decrease.
It would appear that we are seeing here the same features as those of the single-component system that we studied in the previous modules. But wait a minute, is that so?
Actually, there is a difference. During the phase transition, pressure does not remain constant in this experiment. In fact, as compression progresses and more liquid is formed, pressure keeps rising — although not as sharply as in the single-phase vapor region. When the entire system has become liquid, with only an infinitesimal bubble of vapor left, we are at point C — the bubble point of the mixture. Please note that, for binary mixtures (as is the case for multicomponent mixtures,) the dew point and bubble point do not occur at the same pressure for isothermal compression. If you recall, for the single-component system, the dew point and the bubble point coincide. This is not true for binary and multicomponent systems. Compare Figure 4.1.2 with Figure 3.2.4 (repeated below from Module 3) to see this point.
WHY?? Why is pressure increasing during the phase transition? At this point, we start to realize the ways in which composition plays a fundamental role in the behavior of mixtures.
In a single-component system, both liquid and vapor in the two-phase region have the same composition (there is only one chemical substance within the system). Now, when a mixture exists in a two-phase condition, different molecules of different species are present and they can be either in a liquid or vapor state (two-phase condition). Some of them would “prefer” to be in the gas phase while the others would “prefer” to be in the liquid phase. This “preference” is controlled by the volatility of the given component. When we reach point B (Figure 4.1.2) and the first droplet of liquid appears, the heaviest molecules are the ones that preferentially go to that first tiny droplet of liquid phase. For ‘heavy’ molecules, given the choice, it is more desirable to be in the condensed state.
As we keep on forming more liquid (by compression), mainly light molecules remain in the vapor phase. However, at the end point of the transition (point C in Figure 4.1.2) we have forced all of them to go to the liquid state — they no longer have a choice. This enforcement requires greater pressure. If you compare a sample of liquid at dew point conditions (point B in Figure 4.1.2) to one taken in the middle of the transition, it is clear that the former would be richer in heavy components than the latter. The properties of the heaviest component would be most influential at the dew point (when the liquid first appears); while the properties of the lighter component would be most influential at the bubble point (when the last bubble is about to disappear.)
In the two-phase region, pressure increases as the system passes from the dew point to the bubble point. The composition of liquid and vapor is changing; but — watch out! — the overall composition is always the same! At the dew point, the composition of the vapor is equal to the overall composition of the system; however, the infinitesimal amount of liquid that is condensed is richer in the less volatile component. At the bubble point, the composition of the liquid is equal to that of the system, but the infinitesimal amount of vapor remaining at the bubble point is richer in the more volatile component than the system as a whole.
In general, when two different species are mixed, some of the behaviors of the individual species and their properties will change. Their usual behavior (as pure components) will be altered as a consequence of the new field of molecular interactions that has been created. While kept in a pure condition, molecules only interact with like molecules. On the other hand, in a mixture new interactions between dissimilar molecules occur.
Our next step, in order to continue this discussion in a coherent manner, is to draw the complete P-V diagram for this binary mixture by delineating the two-phase region. In the same way as we did previously, we delineate the two-phase region by drawing a complete family of isotherms in the P-V diagram. Figure 4.1.3 illustrates this.
Again, the line connecting all of the bubble and dew points will generate the bubble and dew point curve, both of which meet at the critical point. Notice that the critical point does not represent a maximum in the P-V diagram of a mixture. Also note that bubble point pressures and dew point pressures are no longer the same.
Again, the isotherms through the two-phase region are not horizontal but have a definite slope. This must have an implication. In fact, it does. What if we now want to plot the P-T diagram for this mixture? Will we have only a single line, where bubble and dew point curves lie on top of each other, as we had for a single-component system? Of course not. Instead of both curves being together, the bubble point curve will shift to the upper left (higher pressures) and dew point curve will shift to the lower right (lower pressures) — both of them meeting at the critical point. Figure 4.1.4 shows us a typical phase envelope for a mixture.
Notice the enormous difference between the P-T curve of a multi-component system (binary system in this case, Fig. 4.1.4) and the P-T curve of a pure (single) component (Fig. 3.1.1 of Module 3, repeated below). The only system for which the bubble point curve will coincide with the dew point curve is a single component system, where we have a single line for the P-T diagram (for example, the boiling point curve represented in Fig. 3.1.1). In Fig 3.1.1, the critical point represents the maximum set of (P,T) values that you could find in the P-T graph.
This is not all. There are some other implications. Can we say now that the critical point is the maximum value of pressure and temperature where liquid and gas can coexist? Look at Figure 4.1.4 again. Obviously not. The critical point is no longer at the apex or peak of the two-phase region; hence, vapor and liquid can coexist in equilibrium at T > Tc and P > Pc. In fact, we can identify two new maxima: condition Pcc is the maximum pressure and condition Tcc is the maximum temperature at which L+V will be found in equilibrium. We assign special names to these points: they are the cricondenbar and cricondentherm, respectively.
Clearly, the only definition that now can still hold for the critical point — both for mixtures and pure components — is the one shown below.
Critical Point (Pc,Tc):
The temperature and pressure for which liquid and vapor are indistinguishable.
Again, this definition is applicable both for mixtures and pure-component systems; it does not make any reference to maximum values in the curve. These maximum values, as we said, have special names in the case of mixtures. Thus, for mixtures, we have to additionally define:
Cricondentherm (Tcc):
1. The highest temperature in the two-phase envelope.
2. For T > Tcc, liquid and vapor cannot co-exist at equilibrium, no matter what the pressure is.
Cricondenbar (Pcc):
1. The highest pressure in the two-phase envelope.
2. For P > Pcc, liquid and vapor cannot co-exist at equilibrium, no matter what the temperature is.
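These two definitions translate directly into a simple numerical check. The sketch below is ours (the envelope data are hypothetical, purely illustrative): given (T, P) points sampled along a two-phase envelope, the cricondentherm and cricondenbar are simply the extreme coordinates of the curve, while the critical point must be located separately as the meeting point of the bubble and dew point curves.

```python
# Hypothetical (T, P) points sampled along a two-phase envelope,
# in (deg R, psia) -- illustrative values only.
envelope = [
    (400.0, 200.0), (450.0, 500.0), (500.0, 750.0),
    (540.0, 820.0), (560.0, 700.0), (575.0, 400.0),
]

T_cc = max(T for T, P in envelope)  # cricondentherm: highest T on the envelope
P_cc = max(P for T, P in envelope)  # cricondenbar:   highest P on the envelope
print(f"Tcc = {T_cc} R, Pcc = {P_cc} psia")
```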
In the case of the unary system, besides dew and bubble point curves lying on top of each other, it is clear that cricondentherm, cricondenbar, and critical conditions are also represented by a single point (that is, the critical point itself.) This is clear from all three previous definitions. Thus, as we saw before, the definition for critical point in unary systems encompasses all three of the definitions given above.
For pure substances only:
Cricondentherm = Cricondenbar = Critical Point.
Regarding multi-component mixtures (of which the binary system is the simplest case), some interesting phenomena profoundly differentiate their behavior from the behavior of single-component systems. We are now talking about retrograde phenomena.
In the previous section, we learned how the “critical point” for a single-component system meant everything: the highest pressure and temperature at which two phases can exist and the point for which liquid and vapor phases are indistinguishable. Then we learned that this is not the case for multicomponent systems. Although the critical point for these systems is the common point between the dew and bubble point curves (the point for which liquid and vapor phases are indistinguishable), in general, this point neither represents the maximum pressure nor the maximum temperature for vapor-liquid coexistence. In fact, we gave new names to these maxima: cricondenbar (for the maximum pressure) and cricondentherm (for the maximum temperature). Let’s look at this again in Figure 4.2.5, where the critical point (Pc, Tc), cricondentherm (Tcc), and cricondenbar (Pcc) are highlighted.
Please recall that the bubble point curve represents the line of saturated liquid (100 % liquid with an infinitesimal amount of vapor) and the dew point curve represents the line of saturated vapor (100 % vapor with an infinitesimal amount of liquid). These conditions are all shown in Figure 4.2.5.
Let us now consider the isothermal processes taking place at T = T1 and T = T2, represented in Figure 4.2.6.
Figure 4.2.6 shows us two cases of isothermal compression for two different temperatures T1 and T2. Notice that these temperatures are such that T1 < Tc and Tc < T2 < Tcc.
It is common knowledge that an isothermal compression (increasing pressure while temperature is held constant) causes the condensation of a vapor (steam, in the case of water). That is the normal or expected behavior of a vapor under compression: the more you compress it, the more liquid you get out of it after the saturation conditions have been reached. This is always true for a pure-component system, such as water.
Well, that is exactly what is happening for our first case, the isothermal compression at T = T1. At point A, we are in an ALL VAPOR condition (0 % liquid) and we are starting to cross over into the two-phase region. As we compress from point A to B, more and more liquid is formed until the entire system has been condensed (point B). We went all the way from 0 % liquid to 100 % liquid, as we expected, by compressing the vapor. How liquid yield progresses with pressure is shown in Figure 4.2.7.
Again, there is nothing contrary to expectations here, and we would get the same result as long as T < Tc. However, there is something very interesting going on within the region Tc < T < Tcc.
In the second case (Tc < T2 < Tcc), we have a different behavior. At point C (Figure 4.2.6), we are starting in an ALL VAPOR condition (0 % liquid); by increasing pressure, we force the system to enter the two-phase region. Thus, some liquid has to drop out; we expect that as the pressure keeps increasing, we will produce more and more liquid. That is true to some extent… BUT, look at the final point of our journey, point D: although we are producing liquid, our final condition (dew point) requires us to have 0 % liquid in the system again.
How so?? This is telling us that, as we are entering the two-phase region, we will start to produce some liquid; but, there will be a point (of maximum liquid yield) where that liquid will start to vaporize (point C’). In other words, even though we are compressing the system, liquid will vaporize and not condense. Isn’t this contrary to expectations? Yes, and that is why we call this a retrograde (contrary to expectation) behavior.
Figure 4.2.8 shows a typical curve for the variation of the liquid volume percentage with pressure. This curve can be also referred to as the liquid dropout curve.
The increase in the liquid fraction with decreasing pressure between points D and C’ (i.e., upon entering the envelope through the upper dew point) is exactly the opposite of the normal trend. This behavior, however, is typical of gas condensate systems. Retrograde conditions may be encountered in deep-well gas production, as well as at reservoir conditions.
For production operations, usually the objective is to maintain pressure so as to achieve maximum liquid dropout. The initial PVT conditions of the well may correspond to a point above point D. If the conditions at the wellhead are then maintained near point C’, liquid recovery is maximized at the surface. However, maximum liquid dropout is not always sought. At reservoir conditions, presence of liquid is not desirable in a gas reservoir, because liquids have negligible mobility (at low saturations) and thus, the hydrocarbon would be — for practical purposes — lost forever. Liquid also impairs gas mobility; hence, liquid production at reservoir conditions is to be avoided at all times in a gas reservoir. This is often achieved by repressurization or lean gas injection.
It is also important to see that a similar behavior is to be expected within the region Pc < P < Pcc. In this case, we talk about retrograde vaporization, since an isobaric heating takes us from one 100 % liquid condition to another 100 % liquid condition (both on the bubble point curve).
4.03: Action Item
Problem Set
1. Compare and contrast the definition of critical point for a single component system and a mixture. What is common between these two systems and what is not? Explain.
2. Do the cricondentherm and cricondenbar play a similar role as the critical point? If not, why? What is the difference?
3. In a couple of sentences, speculate on the physics of why we have retrograde phenomena.
4. Compare and contrast the 2-Phase segment of the P-v isotherm for single components and the P-v isotherm for a binary system. What is the difference?
Learning Objectives
• Module Goal: To familiarize you with the basic concepts of Phase Diagrams as a means of representing thermodynamic data.
• Module Objective: To familiarize you with the process of extracting quantitative compositional information from phase envelopes.
05: Phase Diagrams IV
Let us try to reconcile the P-T graphs for the binary mixture with what we know from P-T graphs for single-component systems. At the end of the day, a mixture is formed by individual components which, when pure, act as presented in the P-T diagram shown in Fig. 3.1.1 [Module 3, repeated below]. We would, therefore, expect that the P-T diagram of each pure component will have some sort of influence on the P-T diagram of any mixture in which it is found.
In fact, it would be reasonable to think that as the presence of a given component A dominates over B, the P-T graph of that mixture (A+B) should get closer and closer to that of A as a pure component.
What this is telling us is that a new variable is coming into the picture: composition. So far we have not considered the ratio of component A to component B in the system. Now we are going to study how different ratios (compositions) will give different envelopes, i.e., different P-T behaviors.
Let us say that we have a mixture of components A (Methane, CH4) and B (Ethane, C2H6), where A is the more volatile of the two. It is clear that for the two pure components, we would have two boiling point curves for each component (A and B) as shown in Figure 5.1.1.
(Bloomer, O.T., Gami, D.C., Parent, J.D., “Physical-Chemical Properties of Methane-Ethane Mixtures”. Copyright 1953, Institute of Gas Technology (now “Gas Technology Institute”, in Chicago). Research Bulletin No. 17.)
Please notice that the position of each curve with respect to the other depends on its volatility. Since we are considering A to be the more volatile, it is expected to have higher vapor pressures at lower temperatures; thus, its curve is located towards the left. For B, the less volatile component, we have a boiling point curve with lower vapor pressures at higher temperatures. Hence, the boiling point curve of B is found towards the right, at lower pressures.
Now, if we mix A and B, the new phase envelope can be anywhere within curves A and B. This is shown in Figure 5.1.2, where the effect of composition on phase behavior of the binary mixture Methane/Ethane is illustrated.
(Bloomer, O.T., Gami, D.C., Parent, J.D., “Physical-Chemical Properties of Methane-Ethane Mixtures”. Copyright 1953, Institute of Gas Technology (now “Gas Technology Institute”, in Chicago). Research Bulletin No. 17.)
In Figure 5.1.2, each phase envelope represents a different composition or a particular composition between A and B (pure conditions). The phase envelopes are bounded by the pure-component vapor pressure curve for component A (Methane) on the left, that for component B (Ethane) on the right, and the critical locus (i.e., the curve connecting the critical points for the individual phase envelopes) on the top. Note that when one of the components is dominant, the curves are characteristic of relatively narrow-boiling systems, whereas the curves for which the components are present in comparable amounts constitute relatively wide-boiling systems.
Notice that the range of temperature of the critical point locus is bounded by the critical temperature of the pure components for binary mixtures. Therefore, no binary mixture has a critical temperature either below the lightest component’s critical temperature or above the heaviest component’s critical temperature. However, this is true only for critical temperatures; but not for critical pressures. A mixture’s critical pressure can be found to be higher than the critical pressures of both pure components — hence, we see a concave shape for the critical locus. In general, the more dissimilar the two substances, the farther the upward reach of the critical locus. When the substances making up the mixture are similar in molecular complexity, the shape of the critical locus flattens down.
In addition to considering variations with pressure, temperature, and volume, as we have done so far, it is also very constructive to consider variations with composition. Most literature on the subject calls these diagrams the “P-x” and “T-x” diagrams respectively. However, a word of caution is needed in order not to confuse the reader. Even though “x” stands for “composition” — in a general sense — here, we will see in the next section that it is also customary to use “x” to single out the composition of the liquid phase. In fact, when we are dealing with a mixture of liquid and vapor, it is customary to refer to the composition of the liquid phase as “xi” and use “yi” for the composition of the vapor phase. “xi” pertains to the amount of component in the liquid phase per mole of liquid phase, and “yi” pertains to the amount of component in the vapor phase per mole of vapor phase. However, when we talk about composition in general, we are really talking about the overall composition of the mixture, the one that identifies the amount of component per unit mole of mixture. It is more convenient to call this overall composition “zi”. If we do so, these series of diagram should be called “P-z” and “T-z” diagrams. This is a little awkward in terms of traditional usage; and hence, we call them “P-x” and “T-x” where “x” here refers to overall composition as opposed to liquid composition.
A P-x diagram for a binary system at constant temperature and a T-x diagram for a binary system at a constant pressure are displayed in Figures 5.2.3 and 5.2.4, respectively. The lines shown on the figures represent the bubble and dew point curves. Note that the end points represent the pure-component boiling points for substances A and B.
Figure 5.2.4: T-x Diagram for a Binary System
(Courtesy of ©LOMIC, INC)
In a P-x diagram (Figure 5.2.3), the bubble point and dew point curves bound the two-phase region at its top and its bottom, respectively. The single-phase liquid region is found at high pressures; the single-phase vapor region is found at low pressures. In the T-x diagram (Figure 5.2.4), this happens in the reverse order; vapor is found at high temperatures and liquid at low temperatures. Consequently, the bubble point and dew point curve are found at the bottom and the top of the two-phase region, respectively.
5.03: The Lever Rule
P-x and T-x diagrams are quite useful, in that information about the compositions and relative amounts of the two phases can be easily extracted. In fact, besides giving a qualitative picture of the phase behavior of fluid mixtures, phase diagrams can also give quantitative information pertaining to the amounts of each phase present, as well as the composition of each phase.
For the case of a binary mixture, this kind of information can be extracted from P-x or T-x diagrams. However, the difficulty of extracting such information increases with the number of components in the system.
At a given temperature or pressure in a T-x or P-x diagram (respectively), a horizontal line may be drawn through the two-phase region that will connect the composition of the liquid (xA) and vapor (yA) in equilibrium at such condition — that is, the bubble and dew points at the given temperature or pressure, respectively. If, at the given pressure and temperature, the overall composition of the system (zA) falls between these values (i.e., zA lies between xA and yA), the system will be in a two-phase condition, and the vapor fraction (αG) and liquid fraction (αL) can be determined by the lever rule:
$\alpha_G = \frac{z_A - x_A}{y_A - x_A} \label{alpha1}$

$\alpha_L = \frac{y_A - z_A}{y_A - x_A} \label{alpha2}$
Note that αL and αG are not independent of each other, since αL + αG = 1. Figure 5.3.5 illustrates how Equations \ref{alpha1} and \ref{alpha2} can be realized graphically. This figure also helps us understand why these equations are called the "lever rule.” Sometimes it is also known as the “reverse arm rule,” because for the calculation of αL (liquid) you use the “arm” within the (yA-xA) segment closest to the vapor, and for the vapor calculation (αG) you use the “arm” closest to the liquid.
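As a quick numerical illustration, here is a minimal sketch of Equations \ref{alpha1} and \ref{alpha2} in Python (the function name and the tie-line endpoint values are ours, chosen only to show the arithmetic):

```python
def lever_rule(z_A, x_A, y_A):
    """Vapor/liquid split on a binary tie line via the lever rule.

    z_A: overall mole fraction of component A
    x_A: mole fraction of A in the liquid (bubble-point end of the tie line)
    y_A: mole fraction of A in the vapor (dew-point end of the tie line)
    """
    if not min(x_A, y_A) < z_A < max(x_A, y_A):
        raise ValueError("z_A is outside the tie line: the system is single-phase")
    alpha_G = (z_A - x_A) / (y_A - x_A)  # vapor fraction, Eq. (alpha1)
    alpha_L = 1.0 - alpha_G              # liquid fraction, since alpha_L + alpha_G = 1
    return alpha_G, alpha_L

# Hypothetical tie-line endpoints, for illustration only:
print(lever_rule(z_A=0.50, x_A=0.30, y_A=0.80))  # -> (0.4, 0.6)
```

Note how the “reverse arm” reading falls out of the algebra: αL uses the arm (yA − zA), the segment nearest the vapor end of the tie line.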
The next more complex type of multi-component system is a ternary, or three-component, system. Ternary systems are more frequently encountered in practice than binary systems. For example, air is often approximated as being composed of nitrogen, oxygen, and argon, while dry natural gas can be rather crudely approximated as being composed of methane, nitrogen, and carbon dioxide. We can also have pseudo 3-component systems, which consist of multicomponent systems (more than 3 components) that can be described by lumping all components into 3 groups, or pseudo-components. In this case, each group is treated as a single component. For example, in CO2 injection into an oil reservoir, CO2, C1, and C2 are often lumped into a single light pseudo-component, while C3 to C6 form the intermediate pseudo-component, and the heavier ends (C7+) are lumped together into a single heavy pseudo-component.
Intuitively, having more than two components poses a problem when a pictorial representation is desired. A rectangular coordinate plot, having only two axes, will no longer suffice. Gibbs first proposed the use of a triangular coordinate system. In modern times, we use an equilateral triangle for such a representation. Figure \(1\) shows an example of a ternary phase diagram. Note that the relationship among the concentrations of the components is more complex than that of binary systems.
• Any point within this triangle represents the overall composition of a ternary system at a fixed temperature and pressure.
• By convention, the lightest component (L) is located at the apex or top of the triangle. The heavy (H) and medium (M) components are placed at the left hand corner and right hand corner, respectively.
• Every corner represents a pure condition. Hence, at the top we have 100 % L, and at each side, 100 % H and 100 % M, respectively.
• Each side of the triangle represents all possible binary combinations of the three components.
• On any of those sides, the fraction of the third component is zero (0%).
• As you move from one side (0 %) to the opposite corner (100 %, the pure condition), the composition of the given component increases gradually and proportionally. At the very center of the triangle, we find 33.33 % of each component (see the sketch below).
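To make the geometry concrete, here is a minimal sketch (a helper of our own devising, not part of the original text) that maps a ternary composition onto Cartesian plotting coordinates, following the convention above — heavy at the lower-left corner, medium at the lower-right corner, light at the apex:

```python
import math

def ternary_to_xy(f_L, f_M, f_H):
    """Map mole fractions (L, M, H), summing to 1, onto an equilateral
    triangle with H at (0, 0), M at (1, 0), and L at the apex (0.5, sqrt(3)/2)."""
    assert abs(f_L + f_M + f_H - 1.0) < 1e-9, "fractions must sum to 1"
    x = f_M + 0.5 * f_L           # weighted average of the corner x-coordinates
    y = f_L * math.sqrt(3) / 2.0  # only the apex contributes height
    return x, y

print(ternary_to_xy(1/3, 1/3, 1/3))  # -> (0.5, 0.2887...), the center of the triangle
```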
To differentiate within the two-phase region and single-phase region in the ternary diagram, pressure and temperature must be fixed. There will be different envelopes (binodal curves) at different pressures and temperatures. The binodal curve is the boundary between the 2-phase condition and the single-phase condition. Inside the binodal curve or phase envelope, the two-phase condition prevails. If we follow the convention given above (lights at the top, heavies and mediums at the sides), the two-phase region will be found at the top. This can be seen more clearly in Figure \(2\).
The binodal curve is formed of the bubble point curve and the dew point curve, both of which meet at the plait point. This is the point at which the liquid and vapor composition are identical (resembles the critical point that we studied before). Within the two-phase region, the tie lines are straight lines that connect the compositions of the vapor and liquid phase in equilibrium (bubble point to the dew point). These tie lines angle towards the medium-component corner. It can also be recognized that any mixture on a tie line has the same liquid and vapor compositions.
Finally, to find the proportion of liquid and vapor at any point on the tie line, we apply the lever rule.
5.05: Multicomponent Mixtures
For systems containing more than three components, pictorial representation becomes difficult, if not impossible. Simple diagrams can be obtained if the mole fractions of all but two or three components remain constant, and the variation of the two or three varying components with temperature and pressure are shown.
In practical applications, the mole fractions of all components can be expected to vary. For such systems, direct calculations based on physical models are the only way to obtain reliable information about the system phase behavior. This is the ultimate goal of this series of modules, and these calculations will be studied in detail as the course progresses.
5.06: Action Item
1. Figure 5.1.2 shows the P-T phase envelopes for eight binary mixtures of Methane and Ethane at eight different compositions. The data was experimentally generated at IGT (Institute of Gas Technology) and presented in the Research Bulletin No. 22. The vapor pressure curves of Methane and Ethane bound the phase envelopes of the eight binary mixtures of Methane and Ethane at different proportions. From left to right, these proportions (molar compositions) are:
P-T phase envelopes for eight binary mixtures of Methane and Ethane at eight different compositions
Mixture Composition
Mixture 1 97.50% CH4 and 2.50% C2H6 (closest to 100% pure Methane vapor pressure curve)
Mixture 2 92.50% CH4 and 7.50% C2H6
Mixture 3 85.16% CH4 and 14.84% C2H6
Mixture 4 70.00% CH4 and 30.00% C2H6
Mixture 5 50.02% CH4 and 49.98% C2H6
Mixture 6 30.02% CH4 and 69.98% C2H6
Mixture 7 14.98% CH4 and 85.02% C2H6
Mixture 8 5.00% CH4 and 95.00% C2H6 (closest to 100% pure Ethane vapor pressure curve)
Using this phase behavior data, generate the P-x and T-x diagram for Methane/Ethane mixtures at T = – 40 oF and P = 500 psia respectively.
2. If a certain amount of the mixture 40% C1 – 60% C2 is kept in a vessel at 500 psia, determine the temperature to which the vessel has to be brought if at least 50% of the substance is required to be in the liquid state. How would your answer change if we were now required to have 10% of liquid in the vessel? How would it change if the overall composition is now considered to be 60% C1 and 40% C2?
3. 10 lb-mol of an equimolar C1/C2 mixture is confined inside a vessel at conditions of – 60 oF and 500 psia. What is the percentage of liquid inside the vessel? What is the composition of the liquid and vapor at such condition?
4. What would be the composition of the first bubble that appears in an isothermal expansion of a liquid at – 40 oF? At which pressure would this bubble appear? What would be the pressure at which the first droplet appears in an isothermal compression of a vapor at – 40 oF? What is the composition of such a droplet?
5. Can you apply the lever rule in a ternary diagram? If not, why not? What about a multicomponent system?
Module Goal: To introduce you to quantification in fluid phase behavior.
Module Objective: To quantitatively and qualitatively compare ideal and real gas behavior.
The ultimate purpose of this text is to build a firm knowledge of the phase behavior of fluids. With this understanding, we will be able to establish the basis and rationale upon which phase behavior applications in production systems are grounded. We are using the word production in a generic sense, that is, where it pertains to reservoir, pipeline, and the surface production (of any produced fluid).
06: PT Behavior and Equations of State (EOS) I
So far, we have seen that much of the importance we place on understanding phase behavior comes from the ability it gives us to predict how a given system will behave at different conditions. We need phase diagrams to identify the state of the system we are dealing with — that is, its original state. As a matter of fact, if we look at petroleum production, we often talk about a thermodynamic process that is taking place, involving a process path similar to one that we have seen in any basic thermodynamics course.
Just to give an illustration, consider Figure \(1\).
Production, as defined above, involves taking the reservoir from an initial condition (PA , Tf) to final state of depletion (PB , Tf), (PD , Tf), (PE , Tf) or even (PF , Tf). Once the end points of our thermodynamic path are fixed, the single most important question is determining the path that leads to such an end point. This path dictates whether or not you have the maximum recovery possible from the system.
When we talk about gas cycling, we are generally referring to the practice of injecting gas back into the reservoir. This is done in order to optimize the thermodynamic path we have chosen to take. In a typical condensate system, you generally produce a wet gas from the system with a high liquid yield at the surface. At the surface, you pass this gas through a series of separators; during this process liquid is going to drop out. The liquid that drops out will be rich in the heavier components. Hence, the gas that comes out of the separator will be dry (i.e., very light). If you inject this lean gas back into the reservoir, there will be a leaching process. All you are trying to do, from the point of view of the phase diagram, is to move the phase boundary and dew point towards the left (lower-temperature zones). Let me explain this in more detail.
Let us say that we have the phase envelope for the reservoir fluid shown in Figure \(1\), with the given path of production. If we were to follow the path from (A , Tf) to (E , Tf), we would enter the two-phase region and end up having liquid in the reservoir. However, you do not want liquid in the reservoir because its low mobility dictates that it would not be recovered! Next, you want to move that phase diagram to the left by injecting a lighter gas. When you inject a lighter gas, the phase envelope shifts to the left; your production path will be free of liquid dropout at reservoir conditions. By injecting the gas, we are making the overall composition of the reservoir fluid lighter. The effect of composition on phase behavior was discussed in the previous module (see Figure 5.1.2 in Module 5). This example demonstrates the importance of phase diagrams as tools that help us produce a reservoir in an optimal way.
So, we recognize that we needed phase behavior data for this particular system. The question now is how do we get the data? We can collect data in at least two ways: from laboratory measurements and from field measurements. Lab experiments are expensive, and we cannot hope to generate data for every foreseeable condition we may encounter. Just to give you an idea, generating a single phase envelope may cost at least \$120,000. This is not something you want to be doing all the time. On the other hand, if you went to the field, you would lose valuable resources or have to stop operations to make your observations. On a routine basis, you don’t want to use the field or a lab as your main sources of phase behavior data. These options mean a lot of lost revenue and a great deal of expense. Is there a third option? Yes, indeed. We can rely on prediction, by which we produce a model that can do this work for us. In fact, we will be dealing with, and developing, this option in this course.
The basis for such a model is what is called an Equation of State (EOS). Hence, the central part of this course is EOS, since they are the basis of what we do in phase behavior.
There are several other examples that illustrate very vividly why we need to study equations of state. For instance, let us think about the concept of equilibrium.
In petroleum production, we generally make the assumption that, at every stage, the system is in equilibrium. When you think about equilibrium, you generally think about a system that is static, that is, not moving. When a system is moving, it cannot, in actuality, be in equilibrium. Nevertheless, the best approach we have so far is to describe it using equilibrium thermodynamics. While we usually assume equilibrium, we recognize that it is not a perfect assumption, but that it is a reasonable one.
This means that in the course of producing the reservoir, a process that always involves movement, I am assuming that everywhere the gas and the liquid are in equilibrium. With this assumption, we are free to use equilibrium thermodynamics, so we are able to employ EOS in describing the state of the system.
Consider the reservoir in Figure 6.1.1, in an entirely gaseous condition at (A, Tf), and having a known fluid composition zri (i=1,…n). As we produce this reservoir through a pipeline, we take the fluid from reservoir conditions through a battery of separators. Generally speaking, we deal with a series of separators, but for the sake of this discussion, we will assume that we have just a single separator.
This separator does not care about the pressure and temperature of the reservoir. It only cares about its own pressure and temperature condition: Ps, Ts. The composition of the fluid at the separator inlet is assumed to be the same as that of the reservoir fluid, although this is strictly true only for single phase conditions.
The fluid exits the separator in two streams: a vapor stream and a liquid stream. As a petroleum engineer, we want to know how much gas, how much liquid, and the quality (compositions) of both streams. That is, we need quantitative and qualitative information. As we shall study in Module 12, we can perform a material balance around each separator to calculate the amount of vapor and liquid that is to be recovered. We will need the properties of both streams (such as density and molecular weight) in order to express flow rates in suitable field units.
How do we generate all this? We need a tool; that tool is an Equation of State! Why do we need an Equation of State? We need EOS to define the state of the system and to determine the properties of the system at that state. That is why it is called an equation of state. As you may have noticed, something critical in this series of lectures is the ability to establish links within all the material we are studying. We will not look at each topic simply as an isolated compartment, but instead, we must think in terms of how each piece of information fits into the overall picture that we are developing.
We embark now on a rather ambitious journey. Given a fluid, we would like to develop mathematical relationships for predicting its behavior under any imaginable condition of pressure, temperature and volume (P-V-T). In other words, we want to describe the P-V-T behavior of fluids in general.
As we stated earlier, this is a very challenging problem. The way science approaches these sorts of problems is to introduce simplifications of the physical reality. In other words, we formulate a set of assumptions and come up with a base model that we might call ideal. From that point on, once the base model has been established, we look at a real case by estimating how close (or far) it performs, with respect to the base (ideal) case, and introducing the corresponding corrections. Such corrections will take into account all the considerations that our original assumptions left out.
Let us discuss our base case for fluids (the simplest fluid we may deal with): the ideal or perfect gas.
6.03: Ideal Behavior
An ideal gas is an imaginary gas that satisfies the following conditions:
• Negligible interactions between the molecules,
• Its molecules occupy no volume (negligible molecular volume),
• Collisions between molecules are perfectly elastic — that is, no energy is lost in the collisions.
We recognize that this fluid is imaginary because — strictly speaking — there are no ideal gases. In any fluid, all molecules are attracted to one another to some extent. However, the ideal approximation works best at some limiting conditions, where attraction forces can be considered to be weak. In fact, ideal behavior may be approached by real gases at low pressures (close to atmospheric) and high temperatures. Note that at low pressures and high temperatures, the distance between any pair of gas molecules is great. Since attraction forces weaken with distance, we have chosen a condition where attraction forces may be neglected. In conclusion, we consider a gas ideal when each molecule behaves as if it were alone — molecules are so far apart from each other that they are not affected by the existence of other molecules.
The behavior of ideal gases has been studied exhaustively and can be described extensively by mathematical relationships.
For a given mass of an ideal gas, volume is inversely proportional to pressure at constant temperature, i.e.,
$v \propto \frac{1}{P} \;(at\,constant\,temperature) \label{Boyle}$
This relationship is known as Boyle’s Law. Additionally, volume is directly proportional to temperature if pressure is kept constant, i.e.,
$v \propto T \;(at\,constant\,pressure) \label{Charles}$
This relationship is known as Charles’ Law. By combining both laws and recognizing “R” (the universal gas constant) as the constant of proportionality, we end up with the very familiar equation:
$PV = nRT \label{EOS}$
This represents the equation of state (EOS) of an ideal gas. Numerical values of “R” depend on the system of units that is used:
$R = 10.731\dfrac{psia ft^3}{lbmol R} = 8.3144\dfrac{Joule}{gmole K} = 1.9872\dfrac{cal}{gmole K} \nonumber$
$R = 1.314\dfrac{atm ft^3}{lbmol K} = 0.7302\dfrac{atm ft^3}{lbmol R} = 1.9869\dfrac{BTU}{lbmol R} \nonumber$
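As a minimal sketch (the helper name is ours; field units assumed), Equation \ref{EOS} can be evaluated directly — notice how the choice of R fixes the units:

```python
R = 10.731  # psia*ft3 / (lbmol*R), the field-unit value given above

def ideal_molar_volume(P_psia, T_R):
    """Molar volume v = RT/P of an ideal gas, in ft3/lbmol."""
    return R * T_R / P_psia

# At standard conditions (14.7 psia and 60 F = 519.67 R):
print(ideal_molar_volume(14.7, 519.67))  # ~379.4 ft3/lbmol
```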
If we construct the P-v diagram for an ideal gas at a given temperature, we end up with the isotherm shown in Figure 6.3.2.
The ideal gas model predicts two limiting fluid behaviors: first, that the volume of the gas becomes very large at very low pressures ($v \rightarrow \infty$ as $P \rightarrow 0$), a concept that agrees with what we know from our experience in the physical world; and second, that $v \rightarrow 0$ as $P \rightarrow \infty$ (the volume of matter just “vanishes” if the pressure is high enough), a concept we would not be as willing to accept. These two behaviors are a consequence of the assumptions made in the ideal gas model.
In reality, no gas behaves ideally. Therefore, the ideal EOS is not useful for practical applications, although it is important as the basis of our understanding of gas behavior. Even though the ideal model is not reliable for real engineering applications, we have to keep in mind that the ideal gas EOS is the starting point of all modern approaches.
If we look back at Figure 6.3.2 and recall our discussions about P-v behavior of pure substances, something should catch your attention. Figure 6.4.3 shows us the typical P-v behavior of a pure substance to facilitate our discussion.
What can we conclude about the ideal EOS when contrasting Figure 6.3.2 with Figure 6.4.3? The curve in Figure 6.3.2 is continuous; Figure 6.4.3 has an obvious discontinuity (at the vapor+liquid transition). Hence, one thing we can already say is that the ideal EOS is at least qualitatively wrong. For a real substance, as pressure increases, there must be a point of discontinuity that represents the phase change. An ideal gas will not condense, no matter what pressure it is subjected to, regardless of the temperature of the system. In other words, we cannot hope to reproduce the P-v behavior of Figure 6.4.3 using the ideal gas equation (Equation \ref{EOS}), since no discontinuity is to be found. However, the real P-v isotherm can be approximated by ideal behavior at low pressures, as we can see from the plots.
We can also establish some quantitative differences between ideal and real PVT behavior. For example, for most conditions of interest at a given volume and temperature, the ideal gas model over-predicts the pressure of the system:
\[ P_{Ideal\,Gas} > P_{Real\,Gas} \]
We can explain this difference by recalling that a real gas does have interaction forces between molecules. Secondly, we recall that the “pressure” of a gas is a consequence of the number of molecular collisions per unit area against the wall of the container. That number of collisions is, in turn, a measure of the freedom of the molecules to travel within the gas. The ideal gas is a state of complete molecular freedom, where molecules do not even know of the existence of the others. Hence, a hypothetical ideal gas will exert a higher pressure than a real gas at any given volume and temperature. In a real gas, attraction forces pull the molecules together, restraining their freedom of movement and the frequency of their collisions against the walls, and pressure is reduced.
Additionally, the ideal model assumes that the physical space that the molecules themselves occupy is negligible. In reality molecules are physical particles and they do occupy space. Once we find a way of accounting for the space that the molecules themselves occupy, we would be able to compute a “real” free volume available for the molecules to travel through the gas. In the ideal case, this free volume is equal to the volume of the container itself since molecular volume is not accounted for. In the real case, this free volume must be less than the volume of the container itself after we account for the physical space that molecules occupy. Therefore:
\[ V_{real, free} < V_{Ideal} = V_{Container} \]
\[ V_{real,\,free} = V_{Container} - V_{occupied\,by\,molecules}\]
6.05: Summary
The ideal gas EOS is inaccurate both qualitatively and quantitatively. Qualitatively, an ideal gas will not condense no matter what pressure and temperature the system is subjected to. Quantitatively, the pressures and volumes predicted by the ideal gas model are higher than the values a real gas would exhibit. These are the primary reasons that scientists have made an effort to go beyond the ideal gas EOS, simply because it does not apply for all the cases of interest.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
6.06: Action Item
Please note:
• Your answers must be submitted in the form of a Microsoft Word document.
• Include your Penn State Access Account user ID in the name of your file (for example, "module2_abc123.doc").
• The due date for this assignment will be sent to the class by e-mail in ANGEL.
• Your grade for the assignment will appear in the drop box approximately one week after the due date.
• You can access the drop box for this module in ANGEL by clicking on the Lessons tab, and then locating the drop box on the list that appears.
Problem Set
1. Under what conditions should a real gas behave as an ideal gas?
2. Can we manufacture an ideal gas?
3. Write a paragraph on what you think we should do with the ideal gas equation to make it applicable to real gases. Describe all the considerations that you feel must be accounted for.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Module Goal: To introduce you to quantification in fluid phase behavior.
Module Objective: To introduce you to the concept of Z-factor and the van der Waals equation of state.
07: PT Behavior and Equations of State (EOS) II
From the last module, it is very likely that one question is left in our minds. How can we adjust the ideal model to make it suitable for real gases? Well, we already have the answer. We said that once we have established a base (ideal) model, we look at a real case by estimating how close (or far) it performs with respect to the base (ideal) case, and introducing the corresponding corrections. Again, such corrections will take into account all the considerations that our original assumptions left out.
For the case of gas behavior, we introduce a correction factor to account for the discrepancies between experimental observations and predictions from our ideal model. This correction factor is usually referred to as the compressibility factor (Z), and is defined as:

$Z=\frac{V}{V_{\text {ideal}}}$
In the previous equation, V is the real volume occupied by the gas and VIdeal is the volume that the ideal model predicts for the same conditions. The ideal volume is given by:
$V_{\text {ideal}}=\frac{n R T}{P}$
Hence, the equation of state for real gases is written as:
$P V=Z n R T$
Engineers are so familiar with this equation that it is usually recognized as the Engineering EOS. Please note that for $Z = 1$, this equation collapses to the ideal gas model. In fact, unity is the compressibility factor of any gas that behaves ideally. However, note that $Z = 1$ is a consequence of ideal behavior, not a definition of it.
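For concreteness, here is a minimal sketch (the function names and the measurement values are ours) of the Engineering EOS used in both directions — backing Z out of measured PVT data, and predicting volume once Z is known:

```python
R = 10.731  # psia*ft3 / (lbmol*R)

def z_factor(P_psia, V_ft3, n_lbmol, T_R):
    """Compressibility factor from measured data: Z = PV / (nRT)."""
    return P_psia * V_ft3 / (n_lbmol * R * T_R)

def real_gas_volume(P_psia, n_lbmol, T_R, Z):
    """Volume from the engineering EOS, PV = ZnRT."""
    return Z * n_lbmol * R * T_R / P_psia

# Hypothetical measurement: 1 lbmol occupying 4.0 ft3 at 1000 psia and 600 R:
print(z_factor(1000.0, 4.0, 1.0, 600.0))  # ~0.62, i.e., markedly non-ideal
```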
Something to think about:
Is it possible to have a real gas at a condition at which Z=1 without being ideal (far removed from the ideal-gas theory assumptions)?
For natural gases, the most enduring method of estimating Z has been the Standing-Katz method. However, we are now living in a computer-driven era, where thermodynamic estimations are very rarely taken from graphs or plots, as was common in the past.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
7.02: Definition of Equation of State (EOS)
Assuming an equilibrium state, the three properties needed to completely define the state of a system are pressure (P), volume (V), and temperature (T). Hence, we should be able to formulate an equation relating these 3 variables, of the form f(P,T,V)=0.
An equation of state (EOS) is a functional relationship between state variables — usually a complete set of such variables. Most EOS are written to express functional relationships between P, T and V. It is also true that most EOS are still empirical or semi-empirical. Hence, the definition:
An Equation of State (EOS) is a semi-empirical functional relationship between pressure, volume and temperature of a pure substance. We can also apply an EOS to a mixture by invoking appropriate mixing rules.
There have been a number of attempts to derive a theoretically sound EOS; but, generally speaking, not much success has been achieved along that line. As a result, we use what are known as semi-empirical EOS. Most equations of state used today are semi-empirical in nature, this being so because they are fitted to data that are available. Additionally, equations of state are generally developed for pure substances. Their application to mixtures requires an additional variable (composition) and hence an appropriate mixing rule.
The functional form of an EOS can be expressed as:
$f(P, T, \bar{v}; a_1, a_2, \ldots, a_{n_p}) = 0 \label{7.4}$

where $a_k$ = EOS parameters.
As we stated earlier, most applicable EOS today are semi-empirical, in the sense that they have some theoretical basis but their parameters (ak) must be adjusted. The number of parameters (np) determines the category/complexity of the EOS. For instance, 1-parameter EOS are those for which np = 1, and 2-parameter EOS are those for which np = 2. The higher “np” is, the more complex is the EOS. Also, in general terms, the more complex the EOS, the more accurate it is. However, this is not always the case; in some cases a rather simple EOS can do a very good job.
Since the time of the ideal gas law (ideal gas EOS), a great number of equations of state have been proposed to describe real gas behavior. However, many of those have not passed the test of time. Only a few have persisted through the years, because of their relative simplicity. In the petroleum business, the most common modern EOS are the Peng-Robinson EOS (PR EOS) and the Soave-Redlich-Kwong EOS (SRK EOS). Both of these are cubic EOS and hence descendants of the van der Waals EOS, which we will be discussing next. There are other, more complex EOS, although they have not yet found widespread application in our field:
• Lee Kesler EOS (LK EOS)
• Benedict-Webb-Rubin EOS (BWR EOS)
• Benedict-Webb-Rubin-Starling EOS (BWRS EOS)
In the natural gas business, especially in the gas transmission industry, the standard EOS used is the AGA EOS; this is an ultra-accurate EOS for Z-factor calculations — a very sensitive variable for custody-transfer operations.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
We want to use EOS as the basis for generating data: volumetric data, thermophysical data, and to help us perform vapor/liquid equilibrium (VLE) calculations. Probably there has not been an area of thermodynamics to which so many hours of research have been devoted as that of the topic of EOS. The properties derived from an EOS include:
• Densities (vapor and liquid),
• Vapor pressures of pure components,
• Critical pressures and temperatures for the mixture,
• Vapor-Liquid equilibrium (VLE) information,
• Thermodynamic properties (ΔH, ΔS, ΔG, ΔA).
300 years of EOS Development
PERIOD 1: “Foundational work”
Before 1662, there was an incomplete understanding and qualitative representation of the volumetric behavior of gases.
1662: First breakthrough — Boyle’s Law. Boyle did not define an ideal behavior. When he proposed this law, he was convinced that it applied to all gases. Among the limitations that Boyle was working under: no high pressure measurements could have been taken using the equipment of his time, and his working fluid was air. Hence, it is no wonder that everything these pioneers did pertained to what we now recognize as the ideal state.
PV = constant (at constant temperature)
1787: Charles’ Law. It was one hundred plus years until a new, important development in the gas behavior field. Charles postulated that the volume of a gas is proportional to its temperature at isobaric conditions.
Combining, PV/T = constant = R, gas constant
1801: Dalton introduced the concept of partial pressures and recognized that the total pressure of a gas is the sum of the individual (partial) contributions of its constituents.
1802: Gay-Lussac. He helped to define the universal gas constant “R”. Dalton had looked at different gases and calculated the ratio PV/T to verify that it was constant. However, it was believed that each gas may have its own R. Gay-Lussac showed that a single constant applied to all gases, and calculated the “universal” gas constant.
1822: Cagniard de la Tour. He discovered the critical state (critical point) of a substance.
1834: Clapeyron. He was the first to suggest PV=R(T+273).
PERIOD 2: “Monumental Work”
Period of turning points and landmarks with quantitative developments.
1873: van der Waals. With van der Waals, a quantitative approach was taken for the first time. He proposed the continuity of the gaseous and liquid states — work that won him a Nobel Prize — and he provided the most important contribution to EOS development.
1875: Gibbs, an American mathematical physicist, made the most important contributions to the thermodynamics of equilibrium in what has been recognized as a monumental work.
1901: Onnes theoretically confirmed the critical state.
1902: Lewis defined the concept of fugacity.
1927: Ursell proposed a series solution (polynomial functional form) for the EOS: $\frac{P\bar{v}}{RT} = 1 + \frac{B}{\bar{v}} + \frac{C}{\bar{v}^2} + \frac{D}{\bar{v}^3} + \ldots$ This is known as the virial EOS. The virial EOS has a better theoretical foundation than any other. However, cubic EOS (such as vdW’s) need only 2 parameters and have become more widespread in use.
PERIOD 3: “Incremental Improvement”
During this last and current period, a better quantitative description of volumetric behavior has been achieved at a rather low pace. What is striking, as we will study later, is that most of the tools of most critical use for us today are based on the works of van der Waals, Gibbs, and Lewis, and have been around for years.
1940: Benedict, Webb, & Rubin proposed what can be called the “Cadillac” of EOS, i.e., the most sophisticated and most accurate for some systems. However, the price to pay is that it is complicated and not easy to use.
1949: Redlich & Kwong introduced a temperature dependency to the attraction parameter “a” of the vdW EOS.
1955: Pitzer introduced the idea of the “acentric factor” to quantify the non-sphericity of molecules and was able to relate it to vapor pressure data.
1972: Soave modified the RK EOS by introducing Pitzer’s acentric factor.
1976: Peng and Robinson proposed their EOS as a result of a study sponsored by the Canadian Gas Commission, in which the main goal was finding the EOS best applicable to natural gas systems.
Since then, there has not been any radical improvement to SRK and PR EOS, although a great deal of work is still underway.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Even though van der Waals EOS (vdW EOS) has been around for more than one hundred years, we still recognize van der Waals’ achievements as crucial in revolutionizing our thinking about EOS. We talk about vdW EOS because of pedagogical reasons, not because it finds any practical application in today’s world. In fact, vdW EOS is not used for any practical design purposes. However, most of the EOS being used widely today for practical design purposes have been derived from vdW EOS.
The contributions of vdW EOS can be summarized as follows:
• It radically improved predictive capability over ideal gas EOS,
• It was the first to predict continuity of matter between gas and liquid,
• It formulated the Principle of Corresponding States (PCS),
• It laid foundations for modern cubic EOS.
In his PhD thesis in 1873, van der Waals proposed to semi-empirically remove the main key “weaknesses” that the ideal EOS carried with it. Essentially, what he did was to look again at the basic assumptions that underlie the ideal EOS, which we have listed above.
vdW accounted for the non-zero molecular volume and non-zero force of attraction of a real substance. He realized that there is a point at which the volume occupied by the molecules cannot be neglected. One of the first things vdW recognized is that molecules must have a finite volume, and that volume must be subtracted from the volume of the container. At the same time, he modified the pressure term to acknowledge the fact that molecules do interact with each other though cohesive forces. These are the two main valuable recognitions that he introduced.
The ideal EOS states:

$PV = nRT \label{7.5}$

or,

$P\bar{v} = RT \label{7.6}$

where $\bar{v}$ represents the molar volume ($\bar{v} = V/n$) of the substance.
vdW focused his attention on modifying the terms “P” and “v” in the original ideal gas law by introducing an appropriate correction. Looking back at the inequality of equation (6.4) in Module 6, vdW proposed that the difference between both pressures is the result of the attraction forces — cohesive forces — neglected in the ideal model (equation 7.6), and thus,

$P_{ideal} = P_{real} + \delta P_{attraction} \label{7.7}$

At this point, vdW postulated the term $\delta P_{attraction}$ to be an inverse function of the mean distance between molecules — a direct consequence of Newton’s law of attraction forces, $F \propto (distance)^{-2}$. Recognizing that the volume of the gas is a measure of the mean distance between molecules (the smaller the volume, the closer the molecules, and vice versa),

$\delta P_{attraction} \propto \frac{1}{\bar{v}^2} \label{7.8}$

and using “a” as the constant of proportionality,

$\delta P_{attraction} = \frac{a}{\bar{v}^2} \label{7.9}$
Next, vdW took care of the inequality in equation (6.5) (see Module 6). Any particle occupies a physical space; hence, the space available to the gas is less than the total volume of its container. Let us say we can experimentally determine the actual physical space that all the molecules in the container occupy, and that we call it “b”, or the co-volume. vdW then proposed:
$\bar{v}_{available} = \bar{v} - b \label{7.10}$
The inclusion of a parameter “b” (co-volume) recognizes the role of repulsive forces. Repulsive forces prevent molecules from “destroying” one another by not letting them get too close. In a condensed state, there is a maximum allowable “closeness” among molecules. Therefore, it is because of repulsive forces that we cannot compress the volume of a fluid beyond its co-volume value “b”. If the molecules get too close to each other, repulsion forces take over to prevent their self-destruction.
In summary, vdW proposed to correct the pressure and volume terms of the ideal model represented by equation (7.6). The “new” modified-ideal equation of state becomes:
$\left(P + \frac{a}{\bar{v}^2}\right)\left(\bar{v} - b\right) = RT \label{7.11a}$

or,

$P = \frac{RT}{\bar{v} - b} - \frac{a}{\bar{v}^2} \label{7.11b}$
where:
P = absolute pressure
v = molar volume
T = absolute temperature
R = universal gas constant
Equation (7.11b) demonstrates that vdW EOS is explicit in pressure. At this stage it is important to stress that any prediction of $\bar{v} \leq b$ from equations (7.11) is meaningless, due to the physical significance that we have attached to this parameter. Since all the other parameters are constants, equations (7.11) are functional relationships of the variables P, T and $\bar{v}$:

$f(P, T, \bar{v}) = 0 \label{7.12}$
Therefore, equation (7.11a) expresses a PVT relationship and hence, it is an equation of state. In this equation, “a” and “b” are constants that are specific to each component. However, the numerical value of “R” depends on the system of units chosen, as we discussed above.
It is time to ask ourselves a very important question:
How do we calculate “a” and “b” for each substance?
As can be inferred from their definitions, “a” and “b” are different for different substances; i.e.,
$a_A \neq a_B \quad ; \quad b_A \neq b_B$
How do we relate “a” and “b” to well-known and easily-obtainable physical properties of substances?
It turns out that there are a set of conditions called criticality conditions that must be satisfied for all systems, provided that those systems satisfy the 2nd law of thermodynamics. Indeed, in the previous chapter we recognized that the critical isotherm of a pure substance has a point of inflexion (change of curvature) at the critical point. Furthermore, we recognized the critical point to be the maximum point (apex) of the P-V envelope. This condition of horizontal inflexion of the critical isotherm at the critical point is mathematically imposed by the expression:
$\left(\frac{\partial P}{\partial \bar{v}}\right)_{T_c} = 0 \quad ; \quad \left(\frac{\partial^2 P}{\partial \bar{v}^2}\right)_{T_c} = 0 \label{7.13}$
These conditions are called the criticality conditions. It turns out that when one imposes these conditions on equation (7.11a), one is able to derive expressions for the parameters “a” and “b” as a function of critical properties as follows:
$a = \frac{27}{64} \frac{R^2 T_c^2}{P_c} \label{7.14a}$

$b = \frac{R T_c}{8 P_c} \label{7.14b}$
You may want to prove this as an exercise. “a” and “b” can therefore be known because they are functions of known (tabulated) properties of all substances of interest (critical pressure and temperature.)
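If you want to check that algebra, here is a minimal sketch of the exercise using the sympy symbolic library (the variable names are ours): we impose the two criticality conditions (7.13), plus the EOS itself evaluated at the critical point, and solve for “a”, “b”, and the critical molar volume.

```python
import sympy as sp

P, v, T, a, b, R, Tc, Pc = sp.symbols('P v T a b R T_c P_c', positive=True)

# vdW EOS explicit in pressure, Eq. (7.11b)
P_vdw = R * T / (v - b) - a / v**2

# Criticality conditions, Eq. (7.13): horizontal inflexion of the critical isotherm
eq1 = sp.Eq(sp.diff(P_vdw, v).subs(T, Tc), 0)
eq2 = sp.Eq(sp.diff(P_vdw, v, 2).subs(T, Tc), 0)
eq3 = sp.Eq(P_vdw.subs(T, Tc), Pc)  # the EOS itself at (Pc, Tc, vc)

sol = sp.solve([eq1, eq2, eq3], [a, b, v], dict=True)[0]
print(sol[a])  # 27*R**2*T_c**2/(64*P_c)  -> Eq. (7.14a)
print(sol[b])  # R*T_c/(8*P_c)            -> Eq. (7.14b)
print(sol[v])  # 3*R*T_c/(8*P_c)          -> the critical molar volume
```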
So far, we have applied vdW EOS to pure components. Can we extrapolate this to apply to multi-component systems?
To extend this concept to a system of more than one component, we use what is called a mixing rule. A mixing rule relates the parameters that characterize the mixture ($a_m$ and $b_m$) to the individual contributions of the pure components that make up that mixture ($a_i$ and $b_i$).
How do we do this? vdW proposed to weight the contributions of each component using their mole compositions, as follows:
$a_m = \sum_i \sum_j y_i y_j \sqrt{a_i a_j} \label{7.15a}$

$b_m = \sum_i y_i b_i \label{7.15b}$
The former is called the quadratic mixing rule, while the latter is known as the linear mixing rule.
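As a minimal sketch (the function name is ours; the geometric-mean combining rule $\sqrt{a_i a_j}$ for the cross terms is assumed, as written in Equation \ref{7.15a}), the two rules translate directly into code:

```python
import math

def vdw_mixing_rules(y, a, b):
    """Quadratic mixing rule for a_m and linear mixing rule for b_m.

    y: mole fractions (summing to 1); a, b: pure-component vdW
    parameters, in the same order as y.
    """
    n = len(y)
    a_m = sum(y[i] * y[j] * math.sqrt(a[i] * a[j])
              for i in range(n) for j in range(n))
    b_m = sum(y[i] * b[i] for i in range(n))
    return a_m, b_m

# Illustrative (made-up) parameter values for a 70/30 binary mixture:
print(vdw_mixing_rules([0.7, 0.3], a=[9000.0, 21000.0], b=[0.43, 1.04]))
```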
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
7.05: Action Item
Problem Set
1. If a gas is ideal, would its compressibility factor (Z) be always equal to one?
2. For a gas with Z=1, would its behavior be ideal?
3. Take a look at the Standing-Katz Compressibility Factor Plot for Natural Gases. What is the information you need to obtain a value of “Z” for a gas? What happens to “Z” at low pressures? What is the behavior of “Z” at high pressures? What is the compressibility factor of Methane (Pc = 666 psia, Tc = – 117 F) at P = 1000 psia and T = 0 F?
4. As pressure approaches to zero, what should vdW EOS collapse to? Why? Can you show it?
5. Speculate on how you could calculate the “Z” factor for a gas using vdW EOS.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
Learning Objectives
• Module Goal: To introduce you to quantification in fluid phase behavior.
• Module Objective: To highlight the principle of corresponding states and its importance for thermodynamic correlations.
• 8.1: Principle of Corresponding States (PCS)
The principle of Corresponding States (PCS) was stated by van der Waals and reads: “Substances behave alike at the same reduced states. Substances at the same reduced states are at corresponding states.” That is, “Substances at corresponding states behave alike.”
• 8.2: Acentric Factor and Corresponding States
It is important to point out that the PCS that we have just discussed was originally outlined by van der Waals. In reality, it is the simplest version of the principle of corresponding states, and it is referred to as the two-parameter PCS. This is because it relies on two parameters (reduced pressure and temperature) for defining a “corresponding state.”
• 8.3: Action Item
08: PT Behavior and Equations of State III
The principle of Corresponding States (PCS) was stated by van der Waals and reads: “Substances behave alike at the same reduced states. Substances at the same reduced states are at corresponding states.” That is,
“Substances at corresponding states behave alike.”
Reduced properties are used to define corresponding states. They provide a measure of the “departure” of the conditions of the substance from its own critical conditions and are defined as follows:
$P_{r}=\frac{P}{P_{c}} \label{8.1a}$
$T_{r}=\frac{T}{T_{c}} \label{8.1b}$
$\bar{v}_{r}=\frac{\bar{v}}{\bar{v}_{c}} \label{8.1c}$
If $P_r = T_r = v_r = 1$, the substance is at its critical condition. If we are beyond critical conditions, Tr > 1, Pr > 1 and vr > 1. By the same token, if all the conditions are subcritical, Tr < 1, Pr < 1 and vr < 1. Critical conditions become the scaling factor by which substances can be compared among each other in terms of their “departure from criticality” or reduced properties.
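As a quick numeric illustration (our own sketch; the methane critical properties are the ones quoted in the problem set of Section 7.05), the reduced conditions of methane at P = 1000 psia and T = 0 F work out as follows. Note that reduced temperature must be formed from absolute temperatures:

# Methane critical properties (from the Section 7.05 problem set)
Pc = 666.0             # psia
Tc = -117.0 + 460.0    # deg R (deg F + ~460; strictly 459.67)

P = 1000.0             # psia
T = 0.0 + 460.0        # deg R

Pr = P / Pc            # ~1.50
Tr = T / Tc            # ~1.34
print(Pr, Tr)          # both exceed 1: methane is beyond its critical point here

Any other gas brought to the same Pr and Tr is, by the principle, at a state corresponding to this one.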
The PCS says that all gases behave alike at the same reduced conditions. That is, if two gases have the same “relative departure” from criticality (i.e., they are at the same reduced conditions), the corresponding state principle demands that they behave alike. In this case, the two conditions “correspond” to one another, and we are to expect those gases to have the same properties.
The Corresponding State Principle can be derived from vdW EOS. If we recall,
$\left(P+\frac{a}{\bar{v}^{2}}\right)(\bar{v}-b)=R T \label{8.2a}$
where:
$a=\frac{27}{64} \frac{R^{2} T_{c}^{2}}{P_{c}} \label{8.2b}$
$b=\frac{R T_{c}}{8 P_{c}} \label{8.2c}$
We defined the reduced conditions as:
$P_{r}=\frac{P}{P_{c}} \label{8.3a}$
$T_{r}=\frac{T}{T_{c}} \label{8.3b}$
$\bar{v}_{r}=\frac{\bar{v}}{\bar{v}_{c}} \label{8.3c}$
If we substitute all this into vdW EOS,
$\left(P_{c} P_{r}+\frac{27 R^{2} T_{c}^{2}}{64 P_{c} \bar{v}_{c}^{2} \bar{v}_{r}^{2}}\right)\left(\bar{v}_{c} \bar{v}_{r}-\frac{R T_{c}}{8 P_{c}}\right)=R T_{c} T_{r} \label{8.4}$
Simplifying the expression, and employing the expressions:
$\bar{v}_{c}=\frac{3 R T_{c}}{8 P_{c}} \label{8.5a}$
$Z_{c}=3 / 8=0.375 \label{8.5b}$
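(Equation \ref{8.5a} is itself a consequence of the criticality conditions, which give $\bar{v}_{c}=3 b=\frac{3 R T_{c}}{8 P_{c}}$; Equation \ref{8.5b} then follows directly, since $Z_{c}=\frac{P_{c} \bar{v}_{c}}{R T_{c}}=\frac{P_{c}}{R T_{c}} \cdot \frac{3 R T_{c}}{8 P_{c}}=\frac{3}{8}$. Note that vdW EOS predicts this same critical compressibility factor for every substance.)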
We get:
$\left(P_{r}+\frac{3}{\bar{v}_{r}^{2}}\right)\left(3 \bar{v}_{r}-1\right)=8 T_{r} \label{8.6}$
Equation \ref{8.6} is the reduced form of vdW EOS. Note how this equation is “universal”: it contains no substance-specific constants. Give it the reduced conditions “Pr, Tr” and it will give you back vr, regardless of the fluid. Hence, if you compute vr for a certain fluid by entering its Pr and Tr into the reduced vdW EOS (Equation \ref{8.6}), any other fluid at the same Pr and Tr will have exactly the same vr. There is no other possibility. Strictly speaking, van der Waals’ Corresponding States Principle reads: “fluids at the same reduced pressures and temperatures have the same reduced volume.” This is how van der Waals discovered the Principle of Corresponding States. As long as two gases are at corresponding states (the same reduced conditions), it does not matter what components or substances you are talking about; they will behave alike.
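To make this universality concrete, here is a minimal numeric sketch (not from the original text; it assumes Python with SciPy, and the function name is ours). It solves Equation \ref{8.6} for vr at given Pr and Tr, and then evaluates the compressibility factor through $Z=\frac{P \bar{v}}{R T}=Z_{c} \frac{P_{r} \bar{v}_{r}}{T_{r}}$, with Zc = 3/8 from Equation \ref{8.5b}:

from scipy.optimize import brentq

def reduced_vdw_residual(vr, Pr, Tr):
    # Equation (8.6) rearranged to residual form:
    # (Pr + 3/vr^2)(3*vr - 1) - 8*Tr = 0
    return (Pr + 3.0 / vr**2) * (3.0 * vr - 1.0) - 8.0 * Tr

Pr, Tr = 1.50, 1.34   # e.g., methane at 1000 psia and 0 F (see Section 8.1)

# The physically meaningful root lies above vr = 1/3, where (3*vr - 1) > 0.
vr = brentq(reduced_vdw_residual, 0.34, 50.0, args=(Pr, Tr))

Z = (3.0 / 8.0) * Pr * vr / Tr   # Z = Zc * Pr * vr / Tr, with Zc = 3/8
print(vr, Z)

Nothing in this calculation refers to a particular substance; feed it the Pr and Tr of any fluid and it returns the same vr (and hence the same vdW-predicted Z).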
The critical point provides the perfect scaling for the application of the corresponding state principle because of the existence of the criticality conditions. In fact, equation (7.13) makes the application of corresponding states possible for equations of state.
$\left(\frac{\partial P}{\partial \bar{v}}\right)_{P_{c}, T_{c}}=\left(\frac{\partial^{2} P}{\partial \bar{v}^{2}}\right)_{P_{c}, T_{c}}=0 \label{7.13}$
Indeed, to arrive at Equation \ref{8.6} we needed Equations \ref{8.2b}-\ref{8.2c}, which in turn are the outcome of applying the criticality conditions to van der Waals’ equation of state. As a result, gases that have the same relative departure from their own critical condition have the same properties.
What is the use of this principle? Its most powerful application is in thermodynamic correlations: most of the correlations we use in thermodynamics have been made viable and general through the principle of corresponding states. An excellent example is the popular Z-chart of Standing and Katz, shown in Figure 8.1. This also explains why “Pr” and “Tr” appear so often in thermodynamic correlations: working in reduced variables yields the most generalized correlation possible, one suitable for use with most substances.
Contributors and Attributions
• Prof. Michael Adewumi (The Pennsylvania State University). Some or all of the content of this module was taken from Penn State's College of Earth and Mineral Sciences' OER Initiative.
|
textbooks/eng/Chemical_Engineering/Phase_Relations_in_Reservoir_Engineering_(Adewumi)/08%3A_PT_Behavior_and_Equations_of_State_III/8.01%3A_Principle_of_Corresponding_States_%28PCS%29.txt
|