1.3 Problematic Dependence on Foreign Fuel Sources
The US is highly dependent on crude oil to produce fuels for transportation. Figure 1.3 shows that the transportation sector is almost entirely oil-based; the other sources barely make a dent in petroleum's hold. Until the last few years, the US was also highly dependent on foreign sources of oil. The US was the world's largest petroleum consumer (EIA, 2012) but only third in crude oil production. Over half of the crude oil imported into the US comes from the Western Hemisphere (North, South, and Central America, and the Caribbean), but 29% comes from Persian Gulf countries (Bahrain, Iraq, Kuwait, Saudi Arabia, and the United Arab Emirates). The top five sources of net crude oil and petroleum imports are: 1) Canada, 28%; 2) Saudi Arabia, 13%; 3) Mexico, 10%; 4) Venezuela, 9%; and 5) Russia, 5%. According to CNN Money, the US ranked behind Russia and Saudi Arabia in oil production for the first three months of 2016. See the following link for further information: World's Top Oil Producers. However, this situation recently changed: in 2018, the US became the world's largest oil producer for the first time since 1973. See the following link for further information: America is now the world's largest oil producer.
So, while oil is fairly available currently, there is extensive, potentially explosive turmoil in many petroleum-producing regions of the world, and, in several places, the US's relationship with oil-producing countries is strained. China and India are now aggressive and voracious players in world petroleum markets because of high economic growth (as pointed out in the previous section). Saudi Arabian production is likely "maxed out," and US domestic oil production peaked in 1970. While US dependence on imported oil has declined since peaking in 2005, it is clear that if any one of the large producers decided to withhold oil, it could cause a fuel shortage in the US and make prices skyrocket from an already high level (depending on the type of crude oil, the price of oil is currently \$100-\$106/bbl) (see the U.S. Energy Information Administration website). Figure 1.4 is a graphic showing the price of oil from 1970 until recently (adjusted for inflation using the headline CPI). As you can see, there has been significant volatility in the price of oil over the last ~50 years. One of the first spikes came in 1974, when OPEC became more organized and withheld oil from the US. It was a true crisis at that point, with gasoline shortages causing long lines and fights at gas stations, and with people allowed to fill up only on certain days depending on their license plates. Prices spiked again in 1980 but hit a significant low in 1986. When the price of oil hit another low in 1998, the government took steps to lower the tax burden on oil companies. But when prices went back up, the law remained in place, and currently, oil companies do not have to pay taxes on produced oil. When the reduced tax burden went into place in the late 90s, it made sense, but oil companies have continued to convince Congress with lobbyists that it should stay that way. What do you think?
The price of fuels processed from oil has also been volatile. At one point in 2015, the nationwide price of gasoline held at over \$3.50/gal, but since then, the price has dropped to as low as \$2.00/gal. As we will discuss in a later lesson, several factors contribute to the price of gasoline. Figure 1.5a shows the price volatility for gasoline from December 2005 - December 2015, and Figure 1.5b shows a breakdown of what goes into the price of gasoline. Figures 1.6a and 1.6b are similar graphics, but for diesel.
Natural gas prices have been volatile over recent years. As you can see in Figure 1.7, prices rose in some years, but as the production of natural gas and oil has expanded in the US, prices have lately been going down.
Figure 1.7: Natural gas price trend from 2009 until 2020. Natural gas price is in \$ per MMBtu.
Click here for a text description of Figure 1.7
Natural Gas Price (\$/MMBtu) between 2009 and 2020. General Trends are as follows (approximations):
2009: 5.24
Mid 2009: 3.8
2010: 5.83
Mid 2010: 4.8
2011: 4.49
Mid 2011: 4.8
2012: 2.67
Mid 2012: 2.46
2013: 3.33
Mid 2013: 3.83
2014: 4.71
Mid 2014: 4.59
2015: 2.99
Mid 2015: 2.78
2016: 2.28
Mid 2016: 2.59
2017: 3.3
Mid 2017: 2.98
2018: 3.87
Mid 2018: 2.97
2019: 3.11
Mid 2019: 2.4
2020: 2.02
Mid 2020: 1.63
Credit: U.S. Energy Information Administration (open source data)
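The approximate prices in the text description above are enough for a quick back-of-the-envelope look at this volatility. The short Python sketch below (price pairs transcribed from the list above, not an official EIA series) computes the mean and range:

```python
# Summary statistics for the Figure 1.7 natural gas prices ($/MMBtu).
# Values transcribed from the text description; year -> (start-of-year, mid-year).
prices = {
    2009: (5.24, 3.80), 2010: (5.83, 4.80), 2011: (4.49, 4.80),
    2012: (2.67, 2.46), 2013: (3.33, 3.83), 2014: (4.71, 4.59),
    2015: (2.99, 2.78), 2016: (2.28, 2.59), 2017: (3.30, 2.98),
    2018: (3.87, 2.97), 2019: (3.11, 2.40), 2020: (2.02, 1.63),
}

flat = [p for pair in prices.values() for p in pair]  # all 24 observations
mean = sum(flat) / len(flat)
lo, hi = min(flat), max(flat)
print(f"mean = ${mean:.2f}/MMBtu, range = ${lo:.2f}-${hi:.2f}/MMBtu")
```

The nearly 4x spread between the low (mid-2020) and the high (2010) is the volatility the figure depicts.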
And even though coal isn't the "popular" fuel currently, almost 40% of electricity generation comes from coal. Power plants began switching to natural gas in the last few years because it was quite inexpensive in 2012, and natural gas is commonly expected to put less carbon into the atmosphere than coal because of its higher molecular ratio of hydrogen to carbon (natural gas, CH4, 4:1; petroleum, ~CH2, 2:1; coal, ~CH, 1:1). However, there are also issues with natural gas that we will discuss in another section. Figure 1.8 shows the price variation in coal over recent years. When the price of oil increased in 2010, so did the price of coal; as natural gas prices went down, so did prices for coal. The price of coal was \$40.00 per short ton in 2016. However, the price of coal typically spans a range depending on its use (higher-quality coal is used to make a carbon material called coke, which is used in the manufacturing of various metals). Lower-priced coal tends to contain more "bad actors" such as sulfur and nitrogen (which we will discuss in another lesson), more water, or less energy content.
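To see why the hydrogen-to-carbon ratio matters for CO2 emissions, we can roughly compare the CO2 released per unit of heat for the three fuels. The heating values below are approximate outside assumptions (not figures from this chapter), so treat the output as illustrative only:

```python
# Rough CO2 intensity of combustion based on the empirical H:C ratios
# in the text. Heating values (MJ/kg) are assumed typical values.
fuels = {
    # name: (C atoms, H atoms per empirical unit, approx heating value MJ/kg)
    "natural gas (CH4)": (1, 4, 55.5),
    "petroleum (CH2)":  (1, 2, 45.0),
    "coal (CH)":        (1, 1, 30.0),
}

results = {}
for name, (c, h, hv) in fuels.items():
    fuel_mass = 12.011 * c + 1.008 * h            # g per empirical formula unit
    co2_mass = 44.01 * c                          # g CO2 per empirical formula unit
    results[name] = (co2_mass / fuel_mass) / hv   # kg CO2 per MJ of heat
    print(f"{name}: {results[name]:.3f} kg CO2/MJ")
```

Even with these crude assumptions, the ordering matches the text's point: the higher the H:C ratio, the less CO2 per unit of energy released.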
In recent years, petroleum became less available and more expensive, and replacement alternative fuels emerged because the economics were beginning to become more favorable. However, due to lower demand and high petroleum supply, prices drastically dropped and may affect the development of alternative fuels. There is one factor that will most likely reverse this trend; energy demands will continue to increase worldwide. For future transportation fuel needs, most likely a liquid fuel will be necessary, and no one source will be able to replace petroleum. Figure 1.9 shows the breakdown of how much of the world's energy consumption is supplied by various materials, with fossil fuels use in 2011 constituting 83%. The EIA predicts that in 2040, fossil fuel use will only decrease to 78%, even with a doubling of biomass use. If significant changes are going to happen with reducing fossil fuel usage, then a major transformation will need to happen.
1.4 Reduction of Greenhouse Gas (GHG) Emission
There is a scientific consensus that greenhouse gas (GHG) production is increasing, which has led to climate change and several other environmental concerns. As much as oil companies and conservative think tanks don't want this to be true, it is, and much of the severe weather occurring worldwide is due to climate change. There is a significant amount of evidence to substantiate the existence of climate change and the overall warming of the earth. The change in climate is due to the greenhouse effect; it is a natural effect, caused by CO2 and water vapor naturally present in the atmosphere. The debate (scientific and political) has focused on whether there is also an anthropogenic greenhouse effect causing further climate change. Carbon dioxide (CO2) is not the only greenhouse gas (methane, CH4, is another potent GHG and will be discussed further in upcoming sections), but most of the debate focuses on it. The dramatic increase in CO2 in the atmosphere is thought to be due to burning fossil fuels.
The world is highly dependent on fossil fuels; the US is also highly dependent on fossil fuels. As we saw in Figure 1.9, in 2011, only 17% of the fuel consumed was non-fossil fuel based, and that consumption is projected to be 21% in 2040. And about half of that is renewables.
There is a mountain of evidence indicating that the planet is warming. Figure 1.10 shows a graphic depicting CO2 levels plotted with change in average global temperatures from 1880-2010. The change has been most dramatic in the last 30 years.
In the Arctic and Antarctic regions, the ice pack and glaciers are melting, and at an even faster rate than originally anticipated. Scientists have found that increasing atmospheric temperatures are not the only cause of this; the melting is causing water currents to shift and move warmer water around the poles, so that melting is happening underneath the ice pack. Figure 1.11a shows the change in the ice pack from 1984 to 2012 for the Arctic, while Figure 1.11b shows the changes in sea level, globally, from 1993-2012.
Try This
Visit Earth Observatory to try an interactive tool that allows you to manipulate images to show dramatic changes in ice pack.
Another problem could stem from the increased production of natural gas. Natural gas consists primarily of methane. Sources include petroleum and natural gas production systems, landfills, coal mining, animal manure, and fermentation in natural systems. Methane has 25 times the global warming potential of CO2. Figure 1.12 shows the total GHG percentages from various sources, and Figure 1.13 shows the emissions of various GHGs from 1990-2014. The EPA points out that overall emissions of CH4 were reduced by 11% from 1990-2012. However, an article published in Nature (Yvon-Durocher, March 2014) suggests an unexpected consequence of warming temperatures: global warming can increase the amount of methane evolved from natural ecosystems. So, it remains to be seen what impacts may occur that have not been included in climate change models.
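As a quick worked example of the global warming potential cited above, converting a mass of CH4 emissions into CO2-equivalents is a single multiplication:

```python
# CH4 -> CO2-equivalent conversion using the GWP of 25 cited in the text.
GWP_CH4 = 25  # global warming potential of methane relative to CO2

def co2_equivalent(ch4_tonnes: float) -> float:
    """Mass of CH4 (tonnes) -> equivalent mass of CO2 (tonnes)."""
    return ch4_tonnes * GWP_CH4

# One tonne of leaked methane warms like 25 tonnes of CO2:
print(co2_equivalent(1.0))
```

This is why even small methane leaks from production systems or landfills can matter as much as far larger CO2 sources.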
There are several possible responses for abating CO2 and CH4: 1) do nothing; 2) reduce CO2 and CH4 prudently; 3) drastically reduce energy use; and 4) move to a carbon-free society. The easiest, but quite possibly the most damaging in the long run, is to do nothing; currently, some nations are pushing to at least increase conservation. The use of hybrids has actually decreased our use of gasoline, as the increase in Corporate Average Fuel Economy (CAFE) standards has had an impact. However, prudent measures to reduce GHG will most likely not be enough to make a huge impact. Therefore, the use of biofuels could have great potential for reducing CO2 and CH4, if done well. However, some actions in South America have shown that if the switch to growing biofuels is not handled well, a greater problem can be created. Some rainforest areas in South America were cleared for producing biofuels, and the removed rainforest was burned, putting an excessive amount of CO2 into the atmosphere. Rainforests have grown over long periods of time, so a lot of carbon was stored in them; they were also home to exotic animals, plants, and insects, so the burning endangered the wildlife species in the rainforests. One thing to always keep in mind: whenever an action is taken in our atmosphere, there is the possibility of a negative consequence that one cannot foresee.
1.5 Assignments
Homework #1
For this homework, you will read two selections and compose an essay.
1. Please read the following selections. Both can be accessed via Library Resources.
• "Chapter 10: Biofuels." Renewable Energies. Ed. Jean-Claude Sabonnadiere. Hoboken, NJ: ISTE Ltd/John Wiley & Sons, 2009.
• "Introduction: An Overview of Biofuels and Production Technologies." Handbook of Biofuels Production: Processes and Technologies. Ed. Rafael Luque, Juan Campelo, and James Clark. Oxford: Woodhead Pub., 2011.
2. Write an essay addressing these two questions:
• Question #1: According to these two papers, what is one main advantage of the use of biofuels?
• Question #2: According to these two papers, what is one main disadvantage of the use of biofuels?
Some notes on the format:
• Approximate length - 2 pages, double-spaced, 1" margins, 12 point font, name at the top.
• The essay should incorporate answers to both questions.
• Use APA citation format.
• Use as filename your user ID_HW1 (i.e., ceb7_HW1).
• Upload it to the Homework #1 Dropbox.
(12 points)
Discussion #1
Post a response that includes 2 statements (not your entire essay, but statements) from your Homework #1 essay addressing questions #1 and #2. Take some time to review others' responses. Then respond to at least one other person's post. Grades will reflect critical thinking in your input and responses. Don't just take what you read at face value; think about what is written.
(5 points)
1.6 Summary and Final Tasks
Summary
This lesson was about how using biofuels can benefit society. We looked at increasing energy demands around the world, how economically dependent we are on foreign sources of fuel, and how we don't have much control over what the prices for our fuels will be. We also explored how the growth in GHG emissions is a vital environmental concern and discussed how, without the use of biofuels, we cannot achieve significant reductions in GHG.
References
"U.S. Energy Information Administration - EIA - Independent Statistics and Analysis." EIA's Energy in Brief: How Dependent Are We on Foreign Oil? Web. 8 May 2014.
"Greenhouse Gas Emissions: Greenhouse Gases Overview." EPA. Environmental Protection Agency. Web. 27 May 2014.
Yvon-Durocher, G., Allen, A.P., Bastviken, D., Conrad, R., Gudasz, C., St-Pierre, A., Thanh-Duc, N., del Giorgio, P.A., "Methane fluxes show consistent temperature dependence across microbial to ecosystem scales," Nature, 507, 488-491, 3/21/2014.
Reminder - Complete all of the Lesson 1 tasks!
You have reached the end of Lesson 1! Double-check the Road Map on the Lesson 1 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 2.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an appointment for office hours between 10:00 am and 12:00 pm on Thursdays. The discussion forum is checked regularly (Monday through Friday). While you are there, feel free to post responses to your classmates if you are able to help.
2.1 Chemistry Tutorial
The chemical compounds that are important for understanding most of the chemistry in this course are organic - that means the compounds primarily contain carbon and hydrogen atoms (and sometimes oxygen, sulfur, and nitrogen). Compounds made of only carbon and hydrogen are called hydrocarbons. The basic structures that we will be discussing in this course are called: 1) alkanes (aka aliphatics), 2) branched alkanes, 3) cycloalkanes, 4) alkenes (double bonds), 5) aromatics, 6) hydroaromatics, and 7) alcohols. First, I will show the atoms and how they are connected, using the element abbreviations and lines as bonds, and then I will show abbreviated structural representations.
1. Alkane - atoms are lined up. For stick representation, each corner represents a CH2 group, and each end represents a CH3 group.
Name Atoms and Bonds Stick Representation
Heptane (7 C atoms)
2. Branched Alkane - still an alkane, but instead of a straight line, the carbons are branched off of each other.
Name Atoms and Bonds Stick Representation
Isobutane (4 C atoms)
Isopentane (5 C atoms)
3. Cycloalkanes - again, still an alkane, but forms a ring compound.
Name Atoms and Bonds Stick Representation
Cyclohexane (6 C atoms)
4. Alkenes - hydrocarbons that contain a carbon-carbon double bond.
Name Atoms and Bonds Stick Representation
Pentene (5 C atoms)
5. Aromatic - hydrocarbon ring compound with single and double bonds, significant differences in properties.
Name Atoms and Bonds Stick Representation
Benzene (6 C atoms)
6. Hydroaromatics - hydrocarbon ring compound with an aromatic and an alkane in one molecule.
Name Atoms and Bonds Stick Representation
1,2,3,4-tetrahydronaphthalene, aka tetralin (10 C atoms)
7. Alcohols - hydrocarbon with -OH functional group.
Name Atoms and Bonds Stick Representation
Butanol (4 C atoms)
Ethanol (2 C atoms)
The following table shows common hydrocarbons and their properties. It is important to know the properties of various hydrocarbons so that we can separate them and make chemical changes to them. This is a very brief overview - we will not yet go into significant depth as to why differences in chemical structure affect the properties.
Table 2.1: List of Common Hydrocarbons and Properties
| Name | Number of C Atoms | Molecular Formula | bp (°C, 1 atm) | mp (°C) | Density (g/mL, @20°C) |
|---|---|---|---|---|---|
| Methane | 1 | CH4 | -161.5 | -182 | -- |
| Ethane | 2 | C2H6 | -88.6 | -183 | -- |
| Propane | 3 | C3H8 | -42.1 | -188 | -- |
| Butane | 4 | C4H10 | -0.5 | -138 | -- |
| Pentane | 5 | C5H12 | 36.1 | -130 | 0.626 |
| Hexane | 6 | C6H14 | 68.7 | -95 | 0.659 |
| Heptane | 7 | C7H16 | 98.4 | -91 | 0.684 |
| Octane | 8 | C8H18 | 125.7 | -57 | 0.703 |
| Nonane | 9 | C9H20 | 150.8 | -54 | 0.718 |
| Decane | 10 | C10H22 | 174.1 | -30 | 0.730 |
| Tetradecane | 14 | C14H30 | 253.5 | 6 | 0.763 |
| Hexadecane | 16 | C16H34 | 287 | 18 | 0.770 |
| Heptadecane | 17 | C17H36 | 303 | 22 | 0.778 |
| Eicosane | 20 | C20H42 | 343 | 36.8 | 0.789 |
| Cyclohexane | 6 | C6H12 | 81 | 6.5 | 0.779 |
| Cyclopentane | 5 | C5H10 | 49 | -94 | 0.751 |
| Ethanol | 2 | C2H6O | 78 | -114 | 0.789 |
| Butanol | 4 | C4H10O | 118 | -90 | 0.810 |
| Pentene | 5 | C5H10 | 30 | -165 | 0.640 |
| Hexene | 6 | C6H12 | 63 | -140 | 0.673 |
| Benzene | 6 | C6H6 | 80.1 | 5.5 | 0.877 |
| Naphthalene | 10 | C10H8 | 218 | 80 | 1.140 |
| 1,2,3,4-Tetrahydronaphthalene | 10 | C10H12 | 207 | -35.8 | 0.970 |
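One practical use of Table 2.1 is predicting the physical state of each compound at room conditions from its boiling and melting points. A minimal sketch (with a few rows transcribed from the table above) might look like:

```python
# Classify physical state at 20 °C and 1 atm from Table 2.1 data.
# (bp, mp) in deg C, transcribed from the table.
data = {
    "Methane": (-161.5, -182), "Butane": (-0.5, -138),
    "Pentane": (36.1, -130),   "Hexadecane": (287, 18),
    "Eicosane": (343, 36.8),   "Naphthalene": (218, 80),
}

def state_at(t_c: float, bp: float, mp: float) -> str:
    """Return 'gas', 'liquid', or 'solid' at temperature t_c (°C)."""
    if t_c >= bp:
        return "gas"
    return "liquid" if t_c >= mp else "solid"

for name, (bp, mp) in data.items():
    print(f"{name}: {state_at(20.0, bp, mp)}")
```

Note how the state tracks carbon number for the straight alkanes: C1-C4 are gases, mid-range alkanes are liquids (the fuel range), and C20 is already a waxy solid at room temperature.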
2.2 Refining of Petroleum into Fuels
Much of the content in this particular section is based on information from Harold H. Schobert, Energy and Society: An Introduction, 2002, Taylor & Francis: New York, Chapters 19-24.
The following figure is a simplified flow diagram of a refinery. Since it looks relatively complicated, the diagram will be broken into pieces for better understanding.
Distillation
We will start with the first step in all refineries: distillation. Essentially, distillation heats the crude oil and separates it into fractions; it is the most important process in a refinery. Crude oil is heated, vaporized, and fed into a column that has plates in it, and the materials are separated based on boiling point. Figure 2.2 shows the first stage of the refinery. As the liquids are separated, the top-end materials are gases and lighter liquids; as you go down the column, the products have higher boiling points, the molecular size gets bigger, the materials flow more thickly (i.e., increasing viscosity), and the sulfur (S) content typically stays with the heavier materials. Notice we are not using chemical names, but the common names of chemical mixtures. Gasoline represents the carbon range of ~C5-C8, naphtha/kerosene (aka jet fuel) C8-C12, diesel C10-C15, etc. As we discuss the refinery, we will also discuss important properties of each fuel.
The most important product in the refinery is gasoline. Consumer demand requires that 45-50 barrels of gasoline be produced per 100 barrels of crude oil processed. The issues for consumers, then, are: 1) suitable quality of gasoline and 2) suitable quantity. The engine developed to use gasoline is known as the Otto engine. It operates on a four-stroke piston cycle (and engines typically have 4-8 pistons). The first stroke is the intake stroke - a valve opens to admit a certain amount of gasoline and air as the piston moves down. The second stroke is the compression stroke - the valves close and the piston moves up, so that the gasoline and air that came into the cylinder during the first stroke are compressed. In the third stroke, the power stroke, the spark plug ignites the gasoline/air mixture, pushing the piston down. The fourth stroke is the exhaust stroke, where the exhaust valve opens and the piston moves back up. Figure 2.3 shows the steps. There is a good animation on How Stuff Works (Brain, Marshall. 'How Car Engines Work' 05 April 2000. HowStuffWorks.com).
You'll notice the x and the y on strokes 1 and 2. The ratio x/y is known as the compression ratio (CR). This is a key design feature of an automobile engine. Typically, the higher the CR, the more powerful the engine and the higher the top speed. The "action" is in the ignition or power stroke. The pressure in the cylinder is determined by 1) the pressure at the moment of ignition (set by the CR) and 2) a further increase in pressure at the instant of ignition. The higher the pressure reached through compression (i.e., the higher the CR), the more likely the fuel will autoignite (spontaneously ignite), which can cause "knocking" in the engine - so the higher the CR, the more likely the engine will knock. This is where fuel quality comes in.
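The link between compression ratio and engine performance can be made concrete with the air-standard (ideal) Otto cycle, whose thermal efficiency depends only on the CR. This is a textbook idealization, not material from this chapter: γ = 1.4 for air is an assumption, and real engines fall well short of these numbers.

```python
# Air-standard Otto cycle thermal efficiency: eta = 1 - r**(1 - gamma).
GAMMA = 1.4  # assumed heat capacity ratio for air

def otto_efficiency(cr: float) -> float:
    """Ideal thermal efficiency for compression ratio cr."""
    return 1.0 - cr ** (1.0 - GAMMA)

for cr in (8, 10, 12):
    print(f"CR {cr}: {otto_efficiency(cr):.1%} (ideal)")
```

The ideal efficiency rises with CR, which is why engine designers push CR as high as knocking (and therefore fuel quality) allows.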
For gasoline engines, the CR can be matched to the fuel rating to prevent knocking; this measure of fuel quality is known as the "octane" number. Remember the straight-chain alkanes in the chemistry tutorial? Straight-chain alkanes are prone to knocking; branched alkanes are not. The octane number scale is defined by two reference compounds: 1) heptane, assigned an octane number of 0, and 2) 2,2,4-trimethylpentane (also known as isooctane), assigned an octane number of 100. See Figure 2.3b below for the chemical structures of these two reference compounds. Modern car engines require an octane number of 87, 89, or 93-94. However, when processing crude oil, even a high-quality crude oil, distillation alone yields only ~20% gasoline, with an octane number of ~50. This is why crude oil needs further processing: to produce gasoline at ~50% yield with an octane number of 87-94.
Other ways to improve the octane number:
1. Add aromatics. Aromatics have an octane number (ON) greater than 100. They can be deliberately blended into gasoline to improve ON. However, many aromatic compounds are suspected carcinogens, so there are regulatory limits on the aromatic content in gasoline.
2. Another approach to increasing ON is to add alcohols. Methanol and ethanol are typical alcohols that can be added to fuel; their ON is ~110. They are used as blends in racing cars (where alcohol fuel is known as "alky").
But even with these compounds, distillation will not produce enough gasoline with a high enough ON. So other processes are needed.
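As a rough illustration of blending to raise ON, a common first approximation is a linear volume-fraction average of the component octane numbers. Real blending behavior is nonlinear, so this is only a sketch, and the component values below are illustrative:

```python
# Linear volume-fraction blending of octane numbers (first approximation;
# actual blending octane values are nonlinear).
def blend_on(components: list[tuple[float, float]]) -> float:
    """components: (volume fraction, octane number) pairs summing to 1."""
    assert abs(sum(f for f, _ in components) - 1.0) < 1e-9
    return sum(f * on for f, on in components)

# Example: 85% of a 78-ON base gasoline blended with 15% ethanol at ~110 ON.
print(f"{blend_on([(0.85, 78.0), (0.15, 110.0)]):.1f}")  # -> 82.8
```

Even a modest alcohol fraction lifts a sub-spec base stock noticeably, which is one reason ethanol blending is attractive to refiners.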
"Cracking" Processes
Thermal cracking
One way to improve gasoline yield is to break the bigger molecules into smaller molecules - molecules that boil in the gasoline range. One way to do this is with "thermal cracking." Carbon Petroleum Dubbs was one of the inventors of a successful thermal cracking process (see Figure 2.4). The process produces more gasoline, but the ON was still only ~70-73, so the quality was not adequate.
Catalytic cracking
Eugene Houdry developed another process; in the late 1930s, he discovered that thermal cracking performed in the presence of clay minerals would increase the reaction rate (i.e., make it faster) and produce molecules with a higher ON, ~100. The clay does not become part of the gasoline - it just provides an active surface for cracking and changing the shape of molecules. The clay is known as a "catalyst," a substance that changes the course of a chemical reaction without being consumed. This process is called "catalytic cracking" (see Figure 2.4). Figure 2.4 shows the reactants and products for cracking a hexadecane molecule using both processes. Catalytic cracking is the second most important process in a refinery, next to distillation. This process enables the production of ~45% gasoline with higher ON.
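Whatever the cracking route, the products must account for every atom in the feed molecule. The small checker below verifies the balance for one representative hexadecane split (an illustrative outcome, not the only possible product pair):

```python
# Atom balance check for a representative cracking reaction:
# hexadecane (C16H34) -> octane (C8H18) + octene (C8H16).
def atoms(c: int, h: int) -> dict:
    return {"C": c, "H": h}

def balanced(reactants: list, products: list) -> bool:
    """True if C and H counts match on both sides."""
    total = lambda side, el: sum(m[el] for m in side)
    return all(total(reactants, el) == total(products, el) for el in ("C", "H"))

hexadecane = atoms(16, 34)
octane, octene = atoms(8, 18), atoms(8, 16)
print(balanced([hexadecane], [octane, octene]))  # -> True
```

Note that cracking a saturated alkane must produce at least one unsaturated (alkene) product, since there isn't enough hydrogen to cap both fragments.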
Figure 2.5 is the refining schematic with the additional processing added.
There are also tradeoffs when refineries make decisions as to the amount of each product they make. The quality of gasoline changes from summer to winter, as well as with gasoline demand. Prices that affect the quality of gasoline include 1) price of crude oil, 2) supply/demand of gasoline, 3) local, state, and federal taxes and 4) distribution of fuel (i.e., the cost of transporting fuel to various locations). Figure 2.6 shows a schematic of how these contribute to the cost of gasoline.
Additional Processes
Alkylation
The alkylation process takes the small molecules produced during distillation and cracking and adds them to medium-sized molecules. They are typically added in a branched way in order to boost ON. An example of adding methane and ethane to butane is shown in Figure 2.7.
Catalytic Reforming
A molecule may have the correct number of carbon atoms but need a different configuration to either boost ON or make another product. The example in Figure 2.8 shows how reforming n-octane can produce 3,4-dimethylhexane.
So, let's add these two new processes to our schematic to see how they fit into the refinery and how they can change the ON of gasoline. Figure 2.9 shows the additions, as well as the names of the middle distillate fractions. Typically, naphtha and kerosene, which can also be sold as products in their own right, are the fractions that make up jet fuels. So, our next topic will cover how jet engines differ from gasoline engines and use a different fuel.
2.3 Jet Engines
The first aircraft used engines similar to the Otto four-stroke cycle: reciprocating piston engines. The Wright Flyer was an aircraft with this type of engine. During WWII, powerful 16-cylinder, high-compression-ratio reciprocating engines were developed. However, the military was interested in developing engines that would make airplanes go faster, higher, and farther - this was to reduce the length of flights and provide better international communication. In order to achieve high-speed flight, a dilemma ensued: 1) the atmosphere thins at high altitudes, offering less air resistance to a plane, which could lead to higher speeds, but 2) in "thinner" air, it is more difficult to get combustion air into a conventional piston engine. The modern jet engine was first described in a term paper by Frank Whittle, written while he was at the British Royal Air Force College, covering the fundamental principles of jet-propelled aircraft.
The jet engine begins with a "burner can," where jet fuel is injected and combusted in high-pressure air. The combustion produces a stream of high temperature, high-pressure gases (see Figure 2.10a). If more power is required, two to four-burner cans can be included, and the high temperature, high-pressure combustion gases operate a turbine (more about turbines for electricity generation in the lesson on electricity). Figure 2.10b depicts these additions. In Figure 2.10c, a containment vessel is put around the burner cans; the gases that exit the turbine pass through a nozzle. The gases exiting the nozzle provide thrust for the airplane. Figure 2.10d shows the completed engine - the high-pressure air comes from the air compressor, which is operated by the turbine.
There are variations on the simple jet engine: 1) the fan jet (turbofan), 2) the prop jet (turboprop), and 3) the turboshaft. The fan jet has a large fan in front of the engine to help provide air to the air compressor. It is a little slower than a turbojet but more fuel-efficient. This is the type favored for civilian transport aircraft. The prop jet uses the mechanical work of the turbine to operate a propeller. These engines are typically used for commuter aircraft. The turboshaft is a gas turbine engine that uses all of the turbine's output to turn a shaft, without jet exhaust. Helicopters, tanks, and hovercraft use these engines. So, what is the fuel for jets?
Jet Fuel
Conventional jet fuel is composed primarily of straight-run kerosene (straight-chain carbons and accompanying hydrogen, bigger molecules than gasoline). However, there are some purification steps that are needed to ensure that the fuel behaves in jet engines.
The first step is the removal of sulfur. When sulfur is burned, it forms sulfur oxide compounds, such as sulfur dioxide (SO2) and sulfur trioxide (SO3). Because there are multiple sulfur oxide compounds, they are abbreviated with the single formula SOx. These compounds, when combined with water, form acid rain (more on this in the next lesson on coal for electricity generation). Sulfur compounds are also corrosive to fuel systems and have noxious odors. Sulfur is removed by reacting the fuel with hydrogen over a metal catalyst; the processes are known as hydrodesulfurization (HDS) processes and produce H2S (hydrogen sulfide), which is then converted to solid sulfur.
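A quick mass balance shows what happens to each kilogram of sulfur: removed by HDS, it leaves the unit as H2S; left in the fuel, it would burn to roughly twice its mass in SO2. (The molar masses below are standard values, not figures from this chapter.)

```python
# Mass balance for sulfur in jet fuel. Molar masses in g/mol.
M_S, M_H2S, M_SO2 = 32.06, 34.08, 64.06

def h2s_from_sulfur(kg_s: float) -> float:
    """kg of H2S produced per kg of sulfur removed by HDS."""
    return kg_s * M_H2S / M_S

def so2_if_burned(kg_s: float) -> float:
    """kg of SO2 the same sulfur would form if left in the fuel and burned."""
    return kg_s * M_SO2 / M_S

print(f"{h2s_from_sulfur(1.0):.2f} kg H2S vs {so2_if_burned(1.0):.2f} kg SO2 per kg S")
```

The H2S stream is what gets converted downstream to solid elemental sulfur, as described above.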
Another problem can occur if jet fuel contains too much aromatic content. A small amount is actually necessary to lubricate gaskets and O-rings. However, aromatics are suspected carcinogens, and in combustion they are precursors to smoke and soot. Too much aromatic content can cause problems such as 1) poor aesthetics, 2) carcinogen exposure, and 3) easier tracking of military aircraft. Aromatics are removed the same way sulfur is: the aromatic compound is reacted with hydrogen over a metal catalyst to add hydrogen to the aromatic ring. The resulting compounds are hydroaromatics and cycloalkanes.
Another problem in the middle distillate fractions occurs if the fuel contains waxes. Waxes are higher-molecular-weight alkane hydrocarbons that can be dissolved in kerosene. At the very cold temperatures of high altitudes, wax can either separate as a solid phase or cause the fuel to freeze, plugging the fuel lines. A related problem is low-temperature viscosity. Viscosity is a measure of a fluid's resistance to flow; the thicker the fluid (and the lower the flow), the higher the viscosity. Even if the fuel isn't frozen, it flows more slowly and can cause problems for the engine. As with the wax problem, high viscosity is caused by bigger molecules within the fuel. The way to improve both properties is to remove the larger molecules; this is called dewaxing.
The last problem we will discuss has to do with nitrogen. Jet fuels do not typically contain nitrogen, but when fuel is combusted in air (which is primarily nitrogen), nitrogen oxide compounds, abbreviated NOx, can form. Because jet engines burn fuels at high temperatures, thermal NOx is a problem; NOx contributes to acid rain. If there is any nitrogen in the fuel, it is removed during the removal of sulfur.
A refinery will make ~10% of its product as jet fuel. The Air Force uses 10% of that fuel, so about 1% of refinery output is for military jet fuel. Figure 2.11 shows the additional processes just discussed in our schematic.
2.4 Diesel Engines
Rudolf Diesel developed the Diesel engine in the 19th century. He did so because he wanted an engine that was more efficient than the Otto engine and that could use poorer-quality fuel than gasoline. The Diesel engine also operates on a four-stroke cycle, but there are some important differences. Diesel engines have a high compression ratio (CR) - a small Diesel engine has a CR of 13:1, while a high-performance Otto engine has a CR of 10:1. Upon the compression stroke (stroke 2), there is a large increase in temperature and pressure. In the third stroke, fuel is injected and ignites because of the high temperature and pressure of the compressed air. You can see an animation of this at How Stuff Works (Brain, Marshall. 'How Diesel Engines Work' 01 April 2000. HowStuffWorks.com). Diesel engines use fuel more efficiently; under comparable conditions, a Diesel engine will always get better fuel efficiency than a gasoline Otto engine. Essentially, Diesel engines operate by knocking. The continuous knocking has two consequences: 1) a Diesel engine must be more sturdily built than a gasoline engine, so it is heavier and has a longer life - 300,000-350,000 miles before major engine service; and 2) fuel standards are "backwards" from those of gasoline - we want the fuel to knock.
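The efficiency advantage of the higher CR can be illustrated with air-standard cycle formulas: the ideal Otto efficiency depends only on CR, while the ideal Diesel cycle also depends on a "cutoff ratio" describing how long fuel injection lasts. The CR values match the text, but γ = 1.4 and a cutoff ratio of 1.5 are assumptions, and real engines fall well below these ideal numbers:

```python
# Ideal (air-standard) cycle efficiencies: Otto at CR 10 vs Diesel at CR 13
# with an assumed cutoff ratio of 1.5. gamma = 1.4 for air is an assumption.
GAMMA = 1.4

def otto(cr: float) -> float:
    return 1 - cr ** (1 - GAMMA)

def diesel(cr: float, cutoff: float) -> float:
    return 1 - (1 / cr ** (GAMMA - 1)) * (cutoff ** GAMMA - 1) / (GAMMA * (cutoff - 1))

print(f"Otto, CR 10:               {otto(10):.1%} (ideal)")
print(f"Diesel, CR 13, cutoff 1.5: {diesel(13, 1.5):.1%} (ideal)")
```

With these assumed parameters, the Diesel engine's higher compression ratio outweighs the cutoff-ratio penalty, consistent with the text's claim that Diesels get better fuel efficiency under comparable conditions.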
Diesel Fuel
Diesel fuel has a much higher boiling range than gasoline. Its molecules are larger than those in gasoline, so the octane scale cannot be used as a guide. The scale used for diesel fuel is the cetane number. Cetane, or hexadecane, C16H34, is the reference compound assigned a cetane number of 100. The other end of the scale, cetane number 0, is defined by 1-methylnaphthalene, an aromatic compound that resists ignition (it "doesn't knock"). Most diesel fuels have cetane numbers of 40-55, with values in Europe at the higher end and values in the US at the lower end of that range. In a refinery, diesel fuels are processed in the same fashion as jet fuels, using hydrogenation reactions to remove sulfur and nitrogen and to convert aromatics into hydroaromatics and cycloalkanes. Dewaxing must also be done to improve viscosity and low-temperature behavior, particularly in colder climates. Therefore, Figure 2.11 applies to diesel fuel as well as jet fuel. Outside of aviation, Diesel engines dominate internal combustion engine applications: they are standard for large trucks, dominate railways in North America and other countries, are common in buses, and are used in small cars and trucks, particularly in Europe.
2.05: Assignments
Homework #2
Complete Homework #2. It contains six questions that pertain to the Lesson 2 course material.
(10 points)
Discussion #2
Please read the following selections. Both can be accessed via Library Resources. Then answer the questions that follow in your discussion post.
For this discussion, you should read the following:
• Bryce, Robert. Power Hungry: The Myths of 'green' Energy and the Real Fuels of the Future. New York, NY: PublicAffairs, 2010. Print. (Chapters 1-3) (Library Resources)
• Laughlin, Robert B. Powering the Future: How We Will (Eventually) Solve the Energy Crisis and Fuel the Civilization of Tomorrow. New York: Basic, 2011. Print. (Chapter 7) (Library Resources)
After reading the two selections, answer the following questions in your discussion forum post:
1. What does Bryce (Power Hungry), think of the use of biofuels?
2. What does Laughlin (Powering the Future) think will be the best source of biofuel production and why?
In your posts, make at least one point agreeing with the author and one point disagreeing with the author. After posting your response, please comment on at least one other person's response. Discussion grades will be based on content, not just that you completed the assignment.
(5 points)
2.06: Summary and Final Tasks
Summary
This lesson was a very brief overview - there are entire classes based on this one lecture. In this lesson, we discussed the different transportation engines for vehicles, the fuels used for these vehicles, and how those fuels are produced in a refinery. Gasoline is the lighter fuel used in typical automobile engines, while diesel fuel is used in Diesel engines. Diesel engines get better fuel mileage than gasoline engines (gasoline is a lighter fuel than diesel). Here in the US, the primary fuel produced is gasoline (~45-50% of refinery output).
Reminder - Complete all of the Lesson 2 tasks!
You have reached the end of Lesson 2! Double-check the Road Map on the Lesson 2 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 3.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum. The discussion forum will be checked regularly. While you are there, feel free to post responses to your classmates if you are able to help. The assistant will hold regular office hours to provide help for EGEE 439 students. Office hours are by appointment on Thursdays, 10:00-12:00, in the office or via Zoom; please contact the course assistant by 11:59 pm the previous Sunday to set up a time.
History
There were several people who tried to use steam pressure to produce some sort of mechanical energy, but they were not really able to accomplish this (including Watt, 1769; von Kempelen, 1784; Trevithick, early 1800s). The first steam turbine was developed by De Laval in the 1870s - his device was used to separate cream from milk. However, Charles Parsons was the first to use steam as the working fluid for electricity generation.
Mechanics
The overall goal is to move an electric generator in a circular fashion, which can be done with a turbine. In order for a turbine to be driven, a working fluid must be used. Water can be used for driving a turbine for electricity - it is known as hydroelectricity. Figure 3.1a shows the schematic of a water wheel and how it works, and Figure 3.1b shows a picture of a modern turbine. A turbine uses the force of water (and windmills work on this principle too) to turn a wheel (or turbine). The turning turbine can be used to move something else, like something that will grind wheat into flour.
A working fluid must meet certain criteria. It must be:
• cheap;
• available, or able to be produced in large quantities;
• reasonably safe and environmentally friendly.
One of the few substances that meets these criteria is water. However, since we don't have unlimited waterfalls to produce hydroelectricity, the next best thing is "gaseous water," or steam. To produce electricity, we want the turbine to turn very fast, and the way to do that is with high-pressure steam.
The way to produce high-pressure steam is based on Boyle's Law: for a fixed quantity of gas held at a constant temperature, pressure times volume equals a constant (P*V = constant). For this application, Boyle's Law becomes important when combined with the work of Charles and Gay-Lussac, in which volume is proportional to temperature. Therefore, for a fixed quantity of gas, P*V = (constant)*T. If the volume is held constant and the temperature increases, then the pressure will increase as well. The key to producing high-pressure steam is therefore to produce high-temperature steam. When high-pressure, high-temperature steam is fed to a turbine, the steam is allowed to expand across the turbine, and its volume increases. During expansion, as the volume increases, the pressure drops, which in turn causes the temperature to drop. Figure 3.2a is a schematic that summarizes the role the steam plays in the turbine.
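The constant-volume relationship above can be sketched numerically. This is a minimal illustration of P*V = (constant)*T with made-up round numbers, not plant operating data:

```python
# Combined gas law at fixed volume: P1/T1 = P2/T2 for a fixed quantity of
# gas. Raising the absolute temperature of trapped steam raises its
# pressure in proportion. Temperatures must be absolute (kelvin).

def pressure_after_heating(p1: float, t1_kelvin: float, t2_kelvin: float) -> float:
    """Return the new pressure after heating a fixed-volume gas from T1 to T2."""
    return p1 * (t2_kelvin / t1_kelvin)

# Doubling the absolute temperature (400 K -> 800 K) doubles the pressure.
p2 = pressure_after_heating(p1=1.0, t1_kelvin=400.0, t2_kelvin=800.0)
print(p2)  # -> 2.0
```

The same proportionality run in reverse is what happens across the turbine: as the steam expands and its pressure falls, its temperature falls too.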
When the turbine is connected to a generator, electricity is produced. A generator is a coil of wire that is spun very quickly around a set of magnets. So, if we add a generator to the turbine (see Figure 3.2b), we have the basic layout of a power plant. As seen in Figure 3.1, water can be used to turn a turbine, which then turns the generator to make electricity. An example of a hydroelectric plant is the one at Hoover Dam in Nevada (see Figure 3.3).
Almost 99% of our electricity comes from generators. In the past, 12-15% of US electricity was produced by hydroelectric facilities, but that number has fallen to 6-9%. Hydroelectricity is limited by location (it needs falling water), so the rest of our electricity has to come from other sources. Most of the remainder (roughly 91-94% today) comes from plants in which steam is the working fluid in the turbine. So, how do we generate that steam as cheaply and reliably as we can?
3.2 Production of Steam – Plant Design
A typical modern medium to large electricity plant may have a steam flow rate in excess of 3 million pounds per hour (lb/h). For comparison, generating steam at that rate is equivalent to burning 20 gallons of gasoline (one car's tank) 5-6 times per second. The factors that affect how fast steam can be generated are 1) the heat transfer rate and 2) the heat release rate. Think about a kettle of water heating to boiling on a stove. The more of the kettle that rests on the burner, the faster it will boil (heat transfer rate). The higher the heat is turned up, the faster the water boils (heat release rate).
Heat transfer can be affected in three ways 1) conduction - direct contact of an object with the source of heat; 2) convection - heat carried by currents of fluid; and 3) radiation - heat that is transmitted by electromagnetic radiation from glowing objects. In our case, the heat to produce steam is made available by burning fuel. That heat must somehow be transferred to the water or steam. The rate at which heat can be transferred depends on:
• the nature of the material through which heat is transferred;
• its thickness;
• the difference in temperature across the material (losses);
• the total area across which the heat is being transferred.
Increasing the surface area is the most effective way to improve heat transfer. One way to increase surface area is to transfer the heat through many smaller tubes. Doing this avoids the need to make the boiler bigger and bigger - think of a pot of water boiling: for a constant surface area, the more water you put in the pan, the longer it takes to boil.
The first evolutionary step in boilers was the fire-tube boiler. Heated gases pass through tubes, while water and steam fill a large surrounding tank; the entire tank is under pressure. The problem with this design was that if the tank burst, it created a major explosion. Still, it provides significantly more heat-transfer surface area than a corresponding flat-plate boiler. Fire-tube boilers are useful in industrial heating and in very small (by today's standards) electric plants. "Rolling fire-tube boilers" - steam locomotives - were successful for 150 years. However, the steady growth in electricity demand and the consequent increase in plant size and necessary steam rate meant that eventually not even the fire-tube boiler could keep up.
This led to the next evolutionary design step, which was the water tube (or steam tube) boiler. This is the present state-of-the-art design. Depending on the fuel used and the necessary steam rate, a modern water tube boiler is 10-20 stories tall. The design changed so that the water/steam is in tubes within the boiler with hot gases surrounding the tubes.
For More Information
Visit howstuffworks.com for some additional diagrams of steam engines.
3.3 Production of Steam - Fuel
The fuel that has served as a primary source for electricity for many years is coal. It has not been the only source, but it has been central to the electricity industry because:
1. coal has been the cheapest fuel on a $ per million Btu basis; natural gas has become strongly competitive primarily in recent years;
2. until recently, 60% of U.S. electricity was generated in coal-fired plants; that share is now ~40%;
3. approximately 80% of U.S. coal production is burned in electric plants.
However, the main reason we are considering switching away from coal is that burning coal creates some of the most challenging environmental problems.
For purposes of this course, we treat steam generation from biomass the same as steam generation from coal; in a boiler, the two fuels behave fairly similarly.
We want a high heat release rate, which is tied to the burning rate of the fuel. Since coal is a solid fuel, it won't burn quickly if it is in chunks. The way to increase the burning rate is to increase the surface area of the coal, and the way to increase the surface area is to pulverize the coal into very small particles. However, when the coal particles are small (something like flour), they are difficult to handle: it is hard to shovel something that is like dust or to support it on a grate. Instead, the coal is blown into the boiler unit with a current of air, which is called pulverized-coal firing or suspension firing. This is now the standard for electric power generation, abbreviated PC-fired water-tube boiler. Burning the coal produces heat; the heat boils the water to steam; the steam expands across the turbine, turning it; and the turbine turns the generator to produce electricity (see Figure 3.4). Through this sequence of transformations, the chemical potential energy of the fuel (coal in this case) is converted to high-voltage electricity for distribution to consumers. Measured from the coal pile to the end of the plant, the net plant efficiency is ~33%. Plants built more recently can be in the mid-to-high 30s, while older plants may be in the mid-20s.
Figure 3.4 above is an illustration of a coal-fired power plant that operates on a RANKINE cycle. A Rankine steam cycle is the way most steam plants operate (the most ideal way to operate an engine is the Carnot cycle; the Rankine cycle is a modified version of the Carnot cycle.)
The following steps are involved:
1. Water is pumped at constant entropy to State 2 and into the boiler.
2. The liquid is heated at constant pressure to State 3 (saturated steam).
3. The steam expands at constant entropy through the turbine to State 4.
4. Heat is transferred out at constant pressure in the condenser.
5. The turbine turns the generator to produce electricity.
The way to determine the overall efficiency is to look at the efficiency across each part of the plant; losses can occur at each step of the process (see Figure 3.4). For a modern PC-fired power plant operating at 2500 psi with a steam temperature of 540°C, the overall efficiency is about 34%. Losses at each part include: 1 & 2) heat losses in pipes and friction in the pump (efficiency of 92%); 3) heat losses and friction in the turbine (efficiency of 44%); 4) heat losses as the steam condenses back to water (efficiency of 85%); and 5) very little loss in the generator (efficiency of 99%). For every three rail cars of coal used to generate electricity, the energy of two cars is lost as waste heat.
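The overall figure follows directly from multiplying the stage efficiencies quoted above. A quick sketch (the stage names here are shorthand labels, not official plant terminology):

```python
# Overall plant efficiency is the product of the stage efficiencies
# quoted in the text: 92% (pump/piping), 44% (turbine), 85% (condenser),
# 99% (generator).

stage_efficiencies = {
    "pump and piping": 0.92,
    "turbine": 0.44,
    "condenser": 0.85,
    "generator": 0.99,
}

overall = 1.0
for stage, eff in stage_efficiencies.items():
    overall *= eff

print(f"overall efficiency ~ {overall:.0%}")  # ~34%, matching the text
```

Note how one weak stage dominates: the turbine's 44% caps the whole chain, which is why roughly two of every three rail cars of coal end up as waste heat.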
Figure 3.5 is an overall schematic of a power plant. The next sections will discuss several of the components.
Feeding Units
This is the front end of the plant. The materials that will be fed to the plant must be made into small particles in order to increase the surface area; for coal, it must be crushed to a certain size (less than 100 micrometers). We will discuss biomass preparation when we get to the discussion on the combustion of biomass. The front end of the coal delivery system includes a coal hopper (like a funnel in some ways) and a conveyor belt. Coal is typically sprayed into the boiler with air for better mixing of the two reactants.
Plant Boiler
The boiler has what is called a water wall inside - the water wall is a series of tubes welded together where the water flows. The "box" around the tubes is the boiler itself and is typically 10-20 stories high. The coal and air are sprayed into multiple burners. Figure 3.6a is an example of the water tubes inside a boiler, and Figure 3.6b shows the large scale of a boiler unit.
Burners Inside Boiler
There are typically multiple burners along the bottom of the boiler. This increases the area over which heat is generated.
Plant Turbines
In a coal-fired power plant, the turbines are significantly more sophisticated than the turbine we saw for a waterwheel or for wind. Figure 3.8a shows the turbine - it actually has multiple stages on it in order to increase the efficiency.
Plant Generators
In order to generate electricity, the turbine is connected to a generator. A generator is a device of coiled wires that turn around a magnet - the action of the wires turning around the magnet generates electricity. Figure 3.9a shows the turbine from the previous figure connected to a generator, and Figure 3.9b shows the inside of the power plant generator and its enormous size.
Interaction of Condenser and Cooling Water Facilities
Steam exits the turbine and is condensed back to water. Typically the condenser is a heat exchanger that uses water from a natural source as the cooling medium. Many power plants are located along rivers or on lakes in order to have a place to draw and return cooling water. The condensate is returned to the boiler. Boiler water must be extremely pure in order to avoid corrosion in the boiler tubes and/or turbine blades; the purity standards may be stricter than those for drinking water.
The condenser heat is transferred from the steam (including heat & condensation) to condenser water; therefore, the water leaving the condenser will be hot or warm. If the water is dumped directly into a water source while hot, it will alter the microclimate and local ecology. This is called thermal pollution. Often, cooling towers are used to cool condenser effluent before returning it to the water source. Figure 3.10a shows a schematic of how the condenser interacts with a reservoir and cooling tower, and Figure 3.10b is a picture of a cooling tower at a power plant.
3.4 Plant End Systems
At the end of the power plant facility, flue gases from the burning of fuel come out of the stack. To meet mandated emission standards, there are units in place to help reduce the "bad" emissions.
The primary combustion products come from carbon and hydrogen and are shown in the reaction equations below:
C + O2 → CO2
4H + O2 → 2H2O
Carbon dioxide and water are formed. But they are not the only products of combustion.
Coal also has sulfur, nitrogen, and minerals that go through the combustion process. Sulfur turns into sulfur dioxide and trioxide, also known as SOx. Nitrogen in coal can form NO, N2O, and NO2, also known as NOx (fuel NOx). NOx can also form from the nitrogen in air when the temperature in the boiler is high (thermal NOx). Minerals that go through combustion are called ash and are the oxygenated compounds of the minerals in coal. If you have ever burned wood in a fireplace or at a campsite, you have seen the ash that remains.
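The carbon reaction above also explains why so much CO2 leaves the stack: simple mass bookkeeping with approximate atomic masses (C = 12, O = 16 g/mol) shows that each kilogram of carbon burned produces well over three kilograms of CO2.

```python
# Mass bookkeeping for C + O2 -> CO2, using approximate atomic masses.
# One mole of carbon (12 g) yields one mole of CO2 (44 g), so the mass
# ratio of CO2 produced to carbon burned is 44/12.

M_C, M_O = 12.0, 16.0
M_CO2 = M_C + 2 * M_O  # 44 g/mol

def co2_mass_from_carbon(carbon_kg: float) -> float:
    """Return kg of CO2 produced by completely burning carbon_kg of carbon."""
    return carbon_kg * (M_CO2 / M_C)

print(co2_mass_from_carbon(1.0))  # ~3.67 kg of CO2 per kg of carbon burned
```

The extra mass comes from the oxygen pulled out of the air, which is why a plant emits far more CO2 by weight than the coal it burns.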
The constituents can be summarized in a mnemonic: NO CASH. Every product of combustion, other than water, has been implicated in an environmental problem of some sort. Table 3.1 shows a summary of NO CASH:
Table 3.1: Summary of NO CASH
Acronym Coal Components Emission
N Nitrogen NOx
O Oxygen --
C Carbon CO2
A Minerals Ash
S Sulfur SOx
H Hydrogen H2O
Coal Components - Environmental Issues
One of the worst environmental consequences that can occur is when NOx and SOx are released in the atmosphere and eventually converted into the corresponding acids:
NOx + O2 + H2O → HNO3
SOx + O2 + H2O → H2SO4
Both nitric and sulfuric acids are very soluble in water. They will eventually fall to the earth either as acid precipitation (acid rain or snow) or as deposits.
In many parts of the US, rainfall is 10 times as acidic as rain falling in unpolluted areas. In some locations, or on some occasions, it can be 100 times more acidic. Numerous environmental and health problems are related to acid rain, including the following:
• Acid rainfall accumulates in streams and lakes, so fewer and fewer aquatic species can reproduce or survive. Water areas can become biologically "dead."
• Acid rain in the soil can leach key nutrients out of the soil.
• Acid rain can affect trees, especially on mountain tops. The type of rainfall that can be particularly damaging is a fine mist of acid rain.
• Whole forests can be wiped out if the damage is extensive enough, including entire ecosystems of plants and some animals.
• Acid rain or deposition can be corrosive. It can attack marble, limestone, etc. Historic buildings, monuments, and statues have been defaced by acid deposition.
• Human health can be affected by acid rain. Humans can inhale a mist of dilute acids, which can irritate the respiratory tract, which, in turn, exacerbates chronic respiratory illnesses. The elderly and infants are at greatest risk.
Degree of Acidity in an Aqueous Solution - pH Scale
Here are some key facts about pH:
• pH = 7 is perfectly neutral
• pH < 7 is acidic
• pH > 7 is basic (alkaline)
• smaller the number = more acidic the solution
• for each 1 unit change in pH, there is a ten-fold change in acidity
• a solution with pH=5 is 10 times more acidic than pH=6; pH=4 is 100 times more acidic than pH = 6
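The ten-fold rule in the bullets above falls out of the definition of pH as a base-10 logarithm of hydrogen-ion concentration. A small sketch:

```python
# pH is logarithmic: [H+] = 10**(-pH), so each 1-unit drop in pH is a
# ten-fold increase in acidity. The relative acidity of a solution at
# `ph` compared with one at `ph_ref` is 10**(ph_ref - ph).

def relative_acidity(ph: float, ph_ref: float) -> float:
    """Return how many times more acidic a solution at `ph` is than one at `ph_ref`."""
    return 10.0 ** (ph_ref - ph)

print(relative_acidity(5, 6))  # -> 10.0   (pH 5 vs pH 6)
print(relative_acidity(4, 6))  # -> 100.0  (pH 4 vs pH 6)
```

So rain at pH 3.6, for example, is 100 times more acidic than natural rainfall at pH 5.6.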
Natural rainfall is mildly acidic because carbon dioxide in the air (CO2) is moderately acidic and soluble in water.
CO2 + H2O = H2CO3
(carbonic acid, pH=5-6)
So, acid rain is defined as rainfall having a pH < 5.6.
When coal is burned in the absence of control equipment, smoke is generated. Smoke is a mixture of fly ash particles and unburned char. On a day of high humidity, the smoke particles act as nuclei that condense moisture from the air. When coal has a high sulfur content, there are also SOx emissions. Under these conditions, a dispersion of sulfuric acid droplets forms and associates with the particles of smoke:
SMOKE + FOG = SMOG
There have been sulfuric acid smog events that have killed people - in Donora, PA (1948), in New York City (1966), and in London (1952). In most industrialized nations, this is no longer a problem, as regulations have reduced smoke and sulfur emissions at power plants and there is now little domestic use of coal.
Clean-up Strategies
There are several options for cleaning up the bad emissions:
1. Do nothing. (Use a tall stack to disperse pollutants: the solution to pollution is dilution.)
2. Remove or reduce sulfur and nitrogen in fuel feedstock before it is burned (precombustion). This includes sulfur, nitrogen, and minerals.
3. Allow the SOx, NOx, and ash to form in the boiler, but capture them before they can be emitted into the environment. These are called post-combustion strategies.
The "do nothing" strategy is illegal in the US. The Clean Air Act of 1977 and amendments to the Clean Air Act of 1990 have changed the air environment in the US. However, this is still a problem in the former Soviet Bloc, China, and third world nations.
Precombustion strategies can be approached in the following ways. One way is to switch to a cleaner fuel, such as natural gas; in order to do so, however, extensive changes may need to be made to the burners and boilers. Another way is to switch to a cleaner coal. Most low-sulfur coals are in the western US and have to be transported east; these coals also tend to have a lower heating value, which leads to higher operating costs because more coal must be purchased. Finally, impurities can be removed from coal by removing minerals that contain sulfur or nitrogen, such as pyrite (FeS2). However, some S and N are chemically bonded to the organic portion of the coal itself and cannot be removed this way. Petroleum and natural gas can also have sulfur associated with them. For petroleum products, as discussed in Lesson 2, hydrogen is used to react with sulfur to form hydrogen sulfide (H2S). H2S can be captured from natural gas as well, converted into solid sulfur, and sold to the chemical industry.
There are also post-combustion strategies for removing impurities. Most of the ash that forms during combustion (~80%) drops to the bottom of the boiler and can be removed for disposal back into the mine. However, up to 20% is carried out of the boiler in the flue gas and is known as fly ash (a form of particulate matter). Fly ash can cause health problems: a tiny particle of ash can lodge in the narrow air passages of the lungs, and if the body cannot remove it by coating it with mucus and expelling it, it will try to seal it off with scar tissue. Solid particulate matter can be handled in two ways. The fly ash can be caught in gigantic fabric filter bags (like vacuum cleaner bags) in a unit called the baghouse (see Figures 3.12a and 3.12b). Alternatively, the particles can be given an electric charge; at high electric potentials, the charged particles are attracted to an electrode of opposite charge. The device that does this is called an electrostatic precipitator (ESP) (see Figure 3.13).
We can also remove SOx in the flue gas. The SOx can dissolve in water to form an acid, which can then be neutralized by reacting it with a base. The cheapest and most available base is lime or limestone, which reacts:
Ca(OH)2 + SOx → CaSO4 + H2O
Calcium sulfate (CaSO4) is an insoluble precipitate; the SOx wasn't destroyed, we just converted it from a gas to an easier-to-handle solid. The technology for the removal of SOx is called flue gas desulfurization (FGD), and the hardware is called a scrubber (see Figure 3.14). SOx scrubbers are effective, capturing ~97% of the emitted sulfur. The CaSO4 produced is called scrubber sludge and is either put back in the mine or sold as gypsum to make drywall.
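The lime reaction above lets us estimate how much sorbent a scrubber consumes and how much sludge it makes. This is an idealized sketch: it treats SOx as SO2, assumes one mole of lime per mole of sulfur, complete capture, and anhydrous CaSO4, all of which are simplifications of real scrubber chemistry.

```python
# Idealized scrubber stoichiometry for Ca(OH)2 + SO2 -> CaSO4 + H2O,
# with approximate atomic masses. Real scrubbers use excess sorbent and
# produce hydrated sludge, so actual tonnages are higher.

M_Ca, M_S, M_O, M_H = 40.0, 32.0, 16.0, 1.0
M_LIME = M_Ca + 2 * (M_O + M_H)   # Ca(OH)2 ~ 74 g/mol
M_GYPSUM = M_Ca + M_S + 4 * M_O   # CaSO4  ~ 136 g/mol

def lime_needed(sulfur_tons: float) -> float:
    """Return tons of lime needed to capture sulfur_tons of sulfur (idealized)."""
    return sulfur_tons * M_LIME / M_S

def sludge_produced(sulfur_tons: float) -> float:
    """Return tons of CaSO4 scrubber sludge produced (idealized)."""
    return sulfur_tons * M_GYPSUM / M_S

print(lime_needed(1.0), sludge_produced(1.0))  # ~2.3 and ~4.25 tons per ton of S
```

Even in this idealized form, each ton of sulfur captured turns into over four tons of solid to haul away, which is why selling the sludge as gypsum for drywall is attractive.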
The hardest pollutant to deal with is NOx. A scrubber does not work well for NOx control because NO is only sparingly soluble in water. To limit the production of thermal NOx, plants use low-temperature burners, which produce less NOx, or staged combustion, which keeps temperatures low enough to allow the reverse reaction:
2NO → N2 + O2
Flue gas NOx can be treated with ammonia:
2NH3 + NO2 + NO → 2N2 + 3H2O
All of the technologies discussed work, but all add costs to producing power (a scrubber adds ~33% to the capital cost of a plant, as well as operating costs). Coal cleaning adds $2-3 per ton to the cost of coal, and hydrotreating diesel and heating oil adds 5-7¢/gal to the cost of those fuels. These costs are passed on to the consumer.
3.5 Assignments
Homework #3
1. Search for a news or opinion article related to biofuels, energy sustainability, or energy security. Provide a URL link to the article you select.
2. Write a paragraph summarizing the most important details of the article.
3. Select at least one item from the article and research it on your own, in order to establish if what was included is factual. Write another 1-2 paragraphs providing details as to whether the information from the article can be backed up with actual data or not. Include your sources. The format should be:
• Approximate length – 2 pages, double-spaced, 1” margins, 12 point font, name at the top
• Title and URL link for the article you select should be included
• For Citation and Reference Style, You will use the APA citation style.
• Use as filename your user ID_HW3 (i.e., ceb7_HW3)
• Upload it to the Homework #3 Dropbox.
(12 points)
Discussion #3
Post a response that includes an opening statement on the article you picked for the homework above. Take some time to review others' responses. Then respond to at least one other person’s post. One thing you can comment on is if the article was written from a particular bias. You may need to look up some additional information to determine if there is a bias, such as additional factual information or the background of the person doing the reporting.
(5 points)
Exam #1
This week you will complete Exam #1.
3.06: Summary and Final Tasks
Summary
There are two aspects to remember from this lesson. First, most electricity is generated by steam boiler plants. Every such power plant contains a way to introduce the fuel to the boiler; a boiler to raise steam to high temperature and pressure; and a turbine turned by the steam, which turns the generator to make electricity. Other parts of the power plant include a water cycle to condense the steam and return it to the boiler. There are also several systems in place to reduce the "bad" emissions that come out with the flue gas. We did not discuss the issues with CO2 emissions, what problems are caused by CO2, and how we might mitigate CO2 emissions, as that could be a lesson on its own. Using biomass mitigates CO2 emissions, which will be discussed in a future lesson.
Many different fuels can be used to generate heat for making steam. The most common way here in the US is by coal, although plants have been switching to natural gas due to the low cost of natural gas currently. Coal, along with all fossil fuels, has issues with emissions. This lesson detailed the emissions from coal and how we deal with them in boilers for power plants.
Reminder - Complete all of the Lesson 3 tasks!
You have reached the end of Lesson 3! Double-check the Road Map on the Lesson 3 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 4.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an office-hour appointment for Thursdays between 10:00 am and 12:00 pm. The discussion forum is checked regularly (Monday through Friday). While you are there, feel free to post responses to your classmates if you are able to help.
4.1 Wood
History of Burning Wood
Wood has been used as a source of energy for thousands of years; archeological evidence of human fire use goes back some 400,000 years, and wood was the obvious fuel. In the Americas, in 1637, the people of Boston suffered from a scarcity of wood - America's first energy crisis, less than a century after settlement. During the late 1700s, Benjamin Franklin invented a cast iron stove for indoor use that held heat in the room after the fire burned out. However, it had a design flaw: there was no way to pull in air, so fires went out quickly. David R. Rittenhouse later added a chimney and exhaust pipe to improve upon it.
Burning Wood
First, we will look at where energy is stored in materials, starting with the methane molecule. The combustion of methane is exothermic (releases heat as the reaction proceeds), but the reaction must be initiated before it will sustain itself with the continued availability of methane and oxygen. The formula below shows the reaction in a stoichiometric format:
CH4 + 2O2 → CO2 + 2H2O (plus heat!)
Figure 4.1 shows the same reactants and products, but with the bonds before and after the reaction, on a molecular/atomic level. The number of atoms doesn't change, but how they are arranged and connected does. The only real change is how the atoms are linked - these are the chemical bonds. Since ENERGY comes out of a burning system, more energy must be stored in the reactants' 4 C-H bonds and 2 O=O bonds than in the products' 2 C=O and 4 O-H bonds. The ENERGY released during combustion comes from ENERGY stored in the chemical bonds of the fuel and oxygen.
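We can make the bond-counting argument quantitative with average bond energies from general-chemistry tables, applied to the balanced reaction CH4 + 2O2 → CO2 + 2H2O. The bond-energy values below are typical tabulated averages (an assumption, not data from this lesson), so the result is only an estimate:

```python
# Rough bond-energy estimate of the heat of methane combustion.
# Average bond energies in kJ/mol (approximate textbook values):
BOND = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

energy_in = 4 * BOND["C-H"] + 2 * BOND["O=O"]   # energy to break reactant bonds
energy_out = 2 * BOND["C=O"] + 4 * BOND["O-H"]  # energy released forming products

delta_h = energy_in - energy_out  # negative -> net energy released (exothermic)
print(delta_h)  # -> -818 (kJ per mol CH4; the measured value is about -802)
```

The estimate lands within a few percent of the measured heat of combustion, which is typical for average-bond-energy calculations: they capture the idea that the energy comes from rearranging bonds, not from the atoms themselves.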
We now know the reaction chemistry of methane combustion, but wood is a much more complex material than methane. Wood contains up to 50% water. Water in the wood will reduce the heating value of the wood, and if the wood is very wet, it will lead to a smoky fire. The main components of wood (we will cover this in more depth in a later lesson) are cellulose (what paper is made from) and lignin (the part of a tree that makes it have a sturdy structure). In order to start a fire, you typically must ignite a material that burns easily to begin heating the wood (this can be newspaper or a “fire starter”). The components begin to decompose from the heat (therefore we are not technically “burning” yet), which produces vapors and char. The vapors are called “volatiles” and the char is composed of carbon and ash. The volatiles are what actually begins to burn, producing a flame. The carbon-rich char produces glowing embers or “coals,” which are needed to keep the fire sustained. Wood does not typically contain sulfur, so no sulfur oxides (or SOx) are produced.
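The moisture penalty described above can be sketched numerically. The numbers here are assumed round values for illustration (a dry-wood heating value of about 20 MJ/kg and about 2.26 MJ/kg to evaporate water), not figures from this lesson:

```python
# Sketch of how moisture lowers wood's usable heating value: per kilogram
# of wet wood, only the dry fraction releases heat, and some of that heat
# is spent evaporating the water. Both constants are assumed round numbers.

HHV_DRY = 20.0  # MJ per kg of dry wood (assumed)
H_EVAP = 2.26   # MJ per kg of water evaporated (latent heat of vaporization)

def usable_heat(moisture_fraction: float) -> float:
    """Return MJ released per kg of wet wood: dry-wood heat minus water penalty."""
    dry_mass = 1.0 - moisture_fraction
    return dry_mass * HHV_DRY - moisture_fraction * H_EVAP

print(usable_heat(0.0))  # -> 20.0 (oven-dry wood)
print(usable_heat(0.5))  # 50% water: about 8.9 MJ/kg, less than half
```

Under these assumptions, wood at 50% moisture delivers well under half the heat of dry wood per kilogram, which is why green wood burns smoky and poorly.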
There can be problems with burning wood. The smoke comes from particulates that did not burn, or only partially burned, and can pollute the atmosphere; these typically come from resins in the trees. It isn't an issue when one or two people are burning wood, but it becomes one when thousands of people burn wood in fireplaces. In State College, Pennsylvania, in the winter, one can see smoke in the air from fireplaces. Wood fires in fireplaces can also deposit soot and creosote in the chimneys, which, if not cleaned periodically, can ignite. Burning wood (or really most things) will produce an ash material (minerals in wood and coal that react with air under combustion conditions); the ash must be disposed of. Wood smoke also contains a variety of chemicals that can be carcinogenic.
Now let’s begin discussing different biomass sources, how we measure different properties of different biomasses, and how to determine the atomic composition of biomass.
4.2 Biomass
There are four types of biomass resources that can be utilized: 1) agricultural residues, 2) energy crops, 3) forestry residues, and 4) processing wastes. Examples of different sources are listed below:
Agricultural Residues:
• Corn stover
• Wheat straw
• Rice straw
• Soybean stalk
Energy Crops:
• Switch grass
• Sweet sorghum
• Sugar canes
• Algae
• Cattail
• Duckweed
Forestry Residues:
• Sawdust
• Wood chips
Processing wastes:
• Food processing wastes
• Animal wastes
• Municipal solid wastes
As already mentioned, most biomass is at least partially composed of three components: cellulose, hemicellulose, and lignin. Figure 4.2a shows a diagram of lignocellulose, and Figure 4.2b shows the biomass broken down into the three parts. There will be significantly more discussion of biomass composition in future lessons. Cellulose is a crystalline polymer of six-carbon ring molecules (glucose units) bearing OH groups (in Figure 4.2a, cellulose is the straight green lines; in Figure 4.2b, the green molecule). Hemicellulose is similar, but is built from ring molecules with 5 and 6 carbons and is amorphous in structure, as depicted in Figure 4.2a by the black squiggly line; Figure 4.2b shows how it surrounds the cellulose and gives more detail of the molecular structure. Lignin is the material that holds it all together; it is the light blue line in Figure 4.2a and is shown in red in Figure 4.2b.
How To Determine Properties of Biomass
There are four common ways to measure the properties of any carbon product, which will also be used for biomass: 1) proximate analysis, 2) ultimate analysis, 3) heat of combustion, and 4) ash analysis.
Proximate analysis
Proximate analysis is a broad measurement to determine the moisture content (M), volatile matter content (VM), fixed carbon content (FC), and ash content. These are typically reported on a mass basis and are measured in what is called a proximate analyzer, which simply records the mass loss at certain temperatures. Moisture is driven off at ~105-110°C (just above the boiling point of water); it represents physically bound water only. Volatile matter is driven off in an inert atmosphere at 950°C, using a slow heating rate. The ash content is determined by taking the remaining material (after VM loss) and burning it above 700°C in oxygen. The fixed carbon is then determined by difference: FC = 100 − M − Ash − VM (all in wt%).
The following is an example of proximate analysis of lignin, which is part of wood and/or grasses, primarily:
• Moisture (wt%): 5.34
• Ash (wt%): 14.05
• Volatile Matter (wt%): 60.86
• FC = 100 − M(%) − A(%) − VM(%)
• FC = 100 − 5.34 − 14.05 − 60.86 = 19.75
Sometimes the moisture is excluded, and the fixed carbon is reported on a dry basis:
• FC(dry) = 100 − A(%) − VM(%)
• FC(dry) = 100 − 14.05 − 60.86 = 25.09
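The fixed-carbon-by-difference arithmetic above is easy to automate. A minimal sketch, using the lignin numbers from the example:

```python
def fixed_carbon(moisture, ash, vm):
    """Fixed carbon by difference; all inputs in wt% on the as-received basis."""
    return 100.0 - moisture - ash - vm

# Lignin example from the text
fc_ar = fixed_carbon(5.34, 14.05, 60.86)  # as-received basis
fc_dry = 100.0 - 14.05 - 60.86            # dropping moisture, as in the dry-basis example

print(round(fc_ar, 2), round(fc_dry, 2))  # 19.75 and 25.09 wt%
```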
Ultimate analysis
Ultimate analysis is more specific in that it analyzes the elemental composition of the organic portion of materials. The contents of carbon (C), hydrogen (H), nitrogen (N), sulfur (S), and oxygen (O) are determined on a mass percent basis and can be converted to an atomic basis. In some cases, chlorine (Cl) will also be analyzed. There are instruments designed to measure only the C, H, and N mass percents, and others to measure the S percent; the instrument combusts the material and measures the products of combustion. Oxygen is usually determined by difference. Water can skew the hydrogen results and must be accounted for. The following is an example problem for determining the atomic composition of biomass when provided with an ultimate analysis.
Your Turn
Problem 1:
The ultimate analysis shows that the C, H, O, N and S contents of a biomass material are 51.9%, 5.5%, 41.5%, 0.8% and 0.3% on a dry basis. What is the chemical formula of this biomass? How many kilograms of air are required to completely combust 1 kg of this biomass? The results are shown below.
The following is the calculation for Problem 1, the chemical formula of biomass, given mass percent on a dry basis. If you know the elemental mass percent of the sample, you can divide by the molecular weight to determine the molar amount of each element. The values in the table are then divided by the molar amount of carbon to normalize the formula to one carbon atom. So, for every carbon, you have 1.26 atoms of hydrogen, 0.6 atoms of oxygen, etc.
Table 4.1: Problem 1 Calculations
Element: Mass% × (1/MW) = x (mol per 100 g); x normalized to carbon
C = 51.9 ($\frac{1}{12.011}$) = 4.32; ($\frac{4.32}{4.32}$) = 1
H = 5.5 ($\frac{1}{1.0079}$) = 5.46; ($\frac{5.46}{4.32}$) = 1.260
O = 41.5 ($\frac{1}{15.9994}$) = 2.59; ($\frac{2.59}{4.32}$) = 0.600
N = 0.8 ($\frac{1}{14.0067}$) = 0.06; ($\frac{0.06}{4.32}$) = 0.013
S = 0.3 ($\frac{1}{32.06}$) = 0.01; ($\frac{0.01}{4.32}$) = 0.002
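Putting Table 4.1's procedure into code: the sketch below normalizes the ultimate analysis to a CHxOyNzSw formula and then estimates the stoichiometric air demand for 1 kg of dry biomass. The combustion assumptions (fuel N exiting as N2, S burning to SO2, air at 23.15 wt% oxygen) are mine, not stated in the text.

```python
# Normalize a dry-basis ultimate analysis to a per-carbon formula and
# estimate stoichiometric air demand. Assumes N -> N2, S -> SO2,
# and air containing 23.15 wt% O2.
MW = {"C": 12.011, "H": 1.0079, "O": 15.9994, "N": 14.0067, "S": 32.06}
wt = {"C": 51.9, "H": 5.5, "O": 41.5, "N": 0.8, "S": 0.3}  # dry wt%

mol = {el: wt[el] / MW[el] for el in wt}          # mol per 100 g sample
formula = {el: mol[el] / mol["C"] for el in mol}  # normalized to 1 carbon

# O2 per mol C: C -> CO2, H -> H2O, S -> SO2, minus O already in the fuel
o2_per_c = 1 + formula["H"] / 4 + formula["S"] - formula["O"] / 2
mol_c_per_kg = 1000 * wt["C"] / 100 / MW["C"]
air_kg = mol_c_per_kg * o2_per_c * 31.998 / 1000 / 0.2315

print(formula)         # H/C ~ 1.26, O/C ~ 0.60, N/C ~ 0.013, S/C ~ 0.002
print(round(air_kg, 2))  # roughly 6 kg air per kg dry biomass
```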
Heat of combustion
The heat of combustion can be measured directly using a bomb calorimeter. This instrument is used to measure the calorific value per mass (calorie/gram or Btu/lb). It can also be estimated using different formulas that calculate it based on either ultimate or proximate analysis. A common type of calorimeter is the isoperibol calorimeter, which will contain the heat inside the jacket but will accommodate the change in temperature of the water in the bucket; see Figure 4.3 for a schematic. A sample is placed in a crucible that is put inside of a reactor with high-pressure oxygen. The sample is connected to a fuse and electrical leads that will ignite the sample, all contained within the reactor (sometimes called a bomb calorimeter). The water temperature in the bucket is measured before and after ignition, and with all the other parts calibrated, the specific heat of water and the change in temperature are used to determine the heat of combustion.
The heating value is determined in a bomb calorimeter and is reported on both wet and dry fuel bases. The higher heating value (HHV) assumes the water in the combustion products is condensed to liquid, so its latent heat is recovered. For the lower heating value (LHV), a portion of the heat of combustion is consumed in evaporating the moisture, so the LHV is smaller than the HHV.
Ash analysis
The minerals in the material, once combusted, turn to ash. The ash can be analyzed for specific compounds that will contain oxygen, such as CaO, K2O, Na2O, MgO, SiO2, Fe2O3, P2O5, SO3, and Cl. The original minerals can also be measured. Once the mineral or ash is isolated, it often must be dissolved in various acids and then analyzed. There is other instrumentation available, but the analysis is quite complicated and not often done.
Bulk density is also determined for biomass as a property. It is typically determined by measuring the weight of material per unit volume. It is usually determined on a dry weight basis (moisture free) or on an as-received basis with moisture content available. For biomass, the low values (grain straws and shavings) are 150-200 kg/m3 (0.15-0.20 g/cm3), and high values (solid wood) are 600-900 kg/m3 (0.60-0.90 g/cm3). The heating value and bulk density are used to determine the energy density. Figure 4.4 shows a comparison of various biomass sources to fossil fuel sources on an energy density mass basis.
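The energy-density calculation mentioned above is simply the product of heating value and bulk density. The numbers below are illustrative values chosen for the sketch (a dry-wood heating value of 18 MJ/kg is assumed, not taken from the text):

```python
def energy_density_gj_per_m3(hhv_mj_per_kg, bulk_density_kg_per_m3):
    """Volumetric energy density from a mass-basis heating value and bulk density."""
    return hhv_mj_per_kg * bulk_density_kg_per_m3 / 1000.0  # MJ/m3 -> GJ/m3

# Same assumed heating value, low vs high bulk density from the text's ranges
print(energy_density_gj_per_m3(18, 150))  # loose straw: 2.7 GJ/m3
print(energy_density_gj_per_m3(18, 700))  # solid wood: 12.6 GJ/m3
```

The contrast shows why densification (pelletizing, torrefaction) matters for transport and storage: the same fuel chemistry can differ several-fold in energy per unit volume.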
Many of the fuel characteristics we have been discussing need to be known for proper use of biomass in combustion, gasification, and other reaction chemistry.
4.3 Gasification
Now, we will go into gasification and compare it to combustion. Gasification is a process that produces syngas, a gaseous mixture of CO, CO2, H2, and CH4, from carbonaceous materials at high temperatures (750 – 1100°C). Gasification is a partial oxidation process; reaction takes place with a limited amount of oxygen. The overall process is endothermic (requires heat to keep the reaction going), so it requires either the simultaneous burning of part of the fuel or the delivery of an external source of heat to drive the process.
Historically, gasification was used in the early 1800s to produce gas for lighting, in London, England (1807) and Baltimore, Maryland (1816); the gas was manufactured from coal. Gasification of coal, combined with Fischer-Tropsch synthesis, was one method used during WWII to produce liquid fuel for Germany, which did not have access to oil. It has also been used to convert coal and heavy oil into hydrogen for the production of ammonia and urea-based fertilizer. As a process, it continues to be used in South Africa as a source of liquid fuels (gasification followed by Fischer-Tropsch synthesis).
Gasification typically takes place at temperatures from 750-1100°C. It will break apart biomass (or any carbonaceous material), and usually an oxidizing agent is added in sub-stoichiometric quantities. The products under these conditions are typically gases, and the product slate will vary depending on the oxidizing agent; the main products are hydrogen, carbon monoxide, carbon dioxide, and methane. There may also be some liquid products depending on the conditions used. Gasification and combustion have some similarities; Figure 4.5 shows the variation in products between gasification and combustion, and Table 4.2 shows a comparison of the conditions.
Table 4.2: Comparison of Combustion versus Gasification
Oxygen use: combustion uses excess oxygen; gasification uses limited amounts.
Process type: combustion is exothermic; gasification is endothermic.
Product: combustion produces heat; gasification produces a combustible synthesis gas.
Zones of Gasification
There are several zones that the carbon material passes through as it proceeds through the gasifier: 1) drying, 2) pyrolysis, 3) combustion, and 4) reduction. The schematic in Figure 4.6 shows the zones and the products that typically form during that part of the process. First, we will discuss what happens in each zone. We will also look at different gasifier designs to show how these zones change depending on the design; each design has advantages and disadvantages.
The drying process is essential to remove surface water, and the “product” is water. Water can be removed by filtration or evaporation, or a combination of both. Typically, waste heat is used to do the evaporation.
Pyrolysis is typically the next zone. If you look at it as a reaction:
Reaction 1: Dry biomass → Volatiles + Chars (C) + Ash
Reaction 2: Volatiles → (x) Tar +(1−x) Gas
where x is the mass fraction of tars in the volatiles. Volatile gases are released from the dry biomass at temperatures up to about 700°C. These include non-condensable gases such as CH4, CO, CO2, and H2, plus tar vapors that condense at ambient temperature. The solid residues are char and ash. A typical method to test how well a biomass material will pyrolyze is thermogravimetric analysis; it is similar to the proximate analysis. However, the heating rate and oxidizing agent can be varied, and the instrument can be used to determine the optimum temperature of pyrolysis.
Gasification Process and Chemistry: Combustion and Reduction
A limited amount of oxidizing agent is used during gasification to partially oxidize the pyrolysis products of char (C), tar and gas to form a gaseous mixture of syngas mainly containing CO, H2, CH4 and CO2. Common gasifying agents are: air, O2, H2O and CO2. If air or oxygen is used as a gasifying agent, partial combustion of biomass can supply heat for the endothermic reactions.
Reaction 3: C (char) + O2 → CO2
Reaction 4: CmHn (tar) + (m + n/4)O2 → mCO2 + n/2 H2O
Combustion of gases:
Reaction 5: H2+1/2O2→H2O
Reaction 6: CH4+2O2→CO2+2H2O
Reaction 7: CO +1/2O2→CO2
The equivalence ratio (ER) is the ratio of the O2 supplied for gasification to the O2 required for full combustion of the biomass. The value of ER is usually 0.2-0.4. At too high ER values, excess air causes unnecessary combustion of biomass and dilutes the syngas. At too low ER values, partial combustion of the biomass does not provide enough oxygen and heat for gasification.
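A quick sketch of the ER definition as stated above; the oxygen amounts in the example are hypothetical, chosen only to land inside the typical window:

```python
def equivalence_ratio(o2_supplied, o2_stoichiometric):
    """ER as defined in the text: O2 fed to the gasifier over O2 for complete combustion."""
    return o2_supplied / o2_stoichiometric

# Example: feeding 30% of the stoichiometric oxygen requirement
er = equivalence_ratio(0.3, 1.0)
assert 0.2 <= er <= 0.4, "outside the typical gasification window"
print(er)  # 0.3
```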
There are several reactions that can take place in the reduction zone. There are three possible types of reactions: 1) solid-gas reactions, 2) tar-gas reactions, and 3) gas-gas reactions. Essentially, H2O and CO2 are used as gasifying agents to increase the H2 and CO yields. The double-sided arrow represents that these reactions are reversible depending on the conditions used.
Solid-gas reactions include:
Reaction 8: C + CO2 ↔ 2CO (Boudouard Reaction)
Reaction 9: C + H2O ↔ CO + H2 (Carbon-Water Reaction)
Reaction 10: C + 2H2↔CH4 (Hydrogenation Reaction)
Tar-gas reactions include:
Reaction 11: CmHn (tar) + mH2O ↔ (m+n/2)H2 + mCO (Tar Steam Reforming Reaction)
Reaction 12: CmHn (tar) + mCO2 ↔ n/2H2 + 2mCO (Tar Dry Reforming Reaction)
Gas-gas reactions include:
Reaction 13: CO + H2O↔CO2 + H2(Water-Gas Shift Reaction)
Reaction 14: CO + 3H2↔CH4 + H2O (Methanation)
The reactions can be affected by reaction equilibrium and kinetics. For a long reaction time: 1) chemical equilibrium is attained, 2) products are limited to CO, CO2, H2, and CH4, and 3) low temperatures and high pressures favor the formation of CH4, whereas high temperatures and low pressures favor the formation of H2 and CO. For a short reaction time: 1) chemical equilibrium is not attained, 2) products contain light hydrocarbons as well as up to 10 wt% heavy hydrocarbons (tar), and 3) steam injection and catalysts can shift the products toward lower molecular weight compounds.
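As one concrete illustration of the temperature effect on the gas-gas equilibria, the equilibrium constant of the water-gas shift (Reaction 13) can be estimated with Moe's empirical correlation. This correlation is an external reference, not from this text, and is only an approximation:

```python
import math

def wgs_keq(t_kelvin):
    """Equilibrium constant of CO + H2O <-> CO2 + H2 (Moe's empirical correlation)."""
    return math.exp(4577.8 / t_kelvin - 4.33)

# Lower temperatures push the shift toward CO2 + H2; near gasifier
# temperatures the constant approaches 1 and the reaction is easily reversed.
print(round(wgs_keq(600), 1))   # ~27
print(round(wgs_keq(1100), 2))  # ~0.85
```

This is consistent with the statement above: high temperatures favor CO (and H2 via the reverse shift and reforming reactions), while low temperatures favor the more hydrogenated, exothermic products.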
Gasifier Designs
There are several types of gasifier designs: 1) updraft, 2) downdraft, 3) cross downdraft, 4) fluidized bed, and 5) plasma. The first type of gasifier is the updraft (Figure 4.7) design. The advantages include that it is a simple design and is not sensitive to fuel selection. However, disadvantages include a long start-up time, production of high concentrations of tar, and general lack of suitability for modern heat and power systems.
The downdraft gasifier (Figure 4.8) is similar, but the air enters in the middle of the unit and gases flow down and out. The oxidation and reduction zones change places. Advantages to this design include low tar production, low power requirements, a quicker response time, and a short start up time. However, it has a more complex design, fuel can be fouled with slag, and it cannot be scaled up beyond 400 kg/h.
The crossdraft design gasifier is shown in Figure 4.9. Similar to the downdraft, it has a quicker response time and has a short start up time; it is also complex in design, cannot use high mineral containing fuels, and fuel can be contaminated with slag from ash.
A fluidized bed design gasifier is shown in Figure 4.10. The action of this gasifier is similar to how water might boil, except that air (or another gas) flows through the fines (the sample and sand) at temperature, creating a bubbling effect similar to boiling. Because of this action, it has the advantages of greater fuel flexibility, better control, and quick response to changes. On the other hand, these types of gasifiers have a higher capital cost and a higher power requirement, and the product gas carries a high particulate loading.
One of the newer gasifier designs is the plasma gasifier. Plasma gasification uses extremely high temperatures in an oxygen-starved environment to decompose waste material into small molecules and atoms, so that the compounds formed are very simple and form a syngas with H2, CO, and H2O. This type of unit functions very differently: electricity is fed to a torch that has two electrodes, which create an arc when operating. Inert gas is passed through the arc and, as this occurs, the gas heats to temperatures as high as 3,000°C (Credit: Westinghouse Plasma Corporation). The advantages of such units include: 1) process versatility, 2) superior emission characteristics, 3) no secondary treatment of byproducts, 4) valuable byproducts, 5) enhanced process control, 6) volume reduction of the material fed, and 7) small plant size. Units such as these are more expensive, and scaling up is still in the research stage. These types of units are most commonly used for municipal waste sludge.
General information on gasification
So what products are made, what advantages are there to using various oxidizing sources, how are the byproducts removed, and how is efficiency improved? Besides syngas, other products are made depending on the design. As stated previously, the syngas is composed of H2, CO, CO2, H2O, and CH4. Depending on the design, differing amounts of tar and char can also be made. For example, for steam fluidized gasification of wood sawdust at atmospheric pressure and 775°C, 80% of the carbon will be made into syngas, 4% of the carbon will produce tar, and 16% will produce char (Herguido J, Corella J, Gonzalez-Saiz J. Ind Eng Chem Res 1992; 31: 1274-82.)
There are multiple uses for syngas, for making hydrocarbon fuels, for producing particular chemicals, and for burning as a fuel; therefore, syngas has a heating value. The heating value can be calculated by the volumetric fraction and the higher heating values (HHV) of gas components, which is shown in this equation:
HHVgas = VCO(HHVCO) + VCO2(HHVCO2) + VCH4(HHVCH4) + VH2(HHVH2) + VH2O(HHVH2O) + VN2(HHVN2)
where:
HHVCO = 12.68 MJ/Nm3
HHVCO2 = 0.00 MJ/Nm3
HHVCH4 = 38.78 MJ/Nm3
HHVH2 = 12.81 MJ/Nm3
HHVH2O = 2.01 MJ/Nm3
HHVN2 = 0.00 MJ/Nm3
A problem based on this equation and HHVs will be included in the homework.
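A minimal sketch of the HHV equation above. The gas composition used below is an assumed, illustrative air-blown syngas, not data from the text:

```python
# Component HHVs from the text, in MJ/Nm3
HHV = {"CO": 12.68, "CO2": 0.0, "CH4": 38.78, "H2": 12.81, "H2O": 2.01, "N2": 0.0}

def syngas_hhv(vol_frac):
    """Volume-weighted higher heating value; volume fractions should sum to ~1."""
    return sum(vol_frac.get(g, 0.0) * HHV[g] for g in HHV)

# Hypothetical air-blown composition (N2-diluted), chosen for illustration
gas = {"CO": 0.20, "H2": 0.15, "CH4": 0.02, "CO2": 0.12, "N2": 0.51}
print(round(syngas_hhv(gas), 2))  # ~5.23 MJ/Nm3
```

The result falls inside the 3-6 MJ/Nm3 range quoted below for air-blown gasification, which is a useful sanity check on both the composition and the formula.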
Other factors are determined for optimal gasification. Thermal efficiency is the conversion of the chemical energy of solid fuels into chemical energy and sensible heat of gaseous products. For high temperature/high pressure gasifiers, the efficiency is high, ~90%. For typical biomass gasifiers, the efficiency is reduced to 70-80% efficiency. Cold gas efficiency is the conversion of chemical energy of solid fuel to chemical energy of gaseous products; for typical biomass gasifiers, the efficiency is 50-60%.
There are several processing factors that can affect different aspects of gasification. Table 4.3 shows the main advantages and technical challenges for different gasifying agents. Steam and carbon dioxide as oxidizing agents are advantageous in making a high heating value syngas with more hydrogen and carbon monoxide than other gases, but also require external heating sources and catalytic tar reformation.
Table 4.3: Advantages and technical challenges of different gasifying agents. (Wang, LJ, Well, CL, Jones, DD and Hanna, MA. 2008. Biomass and Bioenergy, 32:573-581.)
Air:
• Main advantages: partial combustion provides the heat supply for gasification; moderate char and tar content.
• Main technical challenges: low heating value syngas (3-6 MJ/Nm3); large amount of N2 in syngas (i.e., >50% by volume); difficult determination of the equivalence ratio (ER).
Steam:
• Main advantages: high heating value syngas (10-15 MJ/Nm3); H2-rich syngas (i.e., >50% by volume).
• Main technical challenges: requires indirect or external heat supply for gasification; high tar content in syngas; tar requires catalytic reforming to syngas unless used to make chemicals.
Carbon dioxide:
• Main advantages: high heating value syngas; high H2/CO and low CO2 in syngas.
• Main technical challenges: requires indirect or external heat supply; tar requires catalytic reforming to syngas unless used to make chemicals.
Basic design features can also affect the performance of a gasifier. Table 4.4 shows the effect of fixed bed versus a fluidized bed and differences in temperature, pressure, and equivalence ratio. Fixed/moving beds are simpler in design and favorable on a small scale economically, but fluidized bed reactors have a higher productivity and low byproduct generation. The rest of the table shows how increased temperature can also favor carbon conversion and the HHV of the syngas, while increased pressure helps with producing a high pressure syngas without compression to higher pressures downstream.
Table 4.4: Effect of bed design and differences in operating parameters on gasifier operation. (Wang, LJ, Weller, CL, Jones, DD and Hanna, MA. 2008. Biomass and Bioenergy, 32: 573-581.)
Fixed/moving bed:
• Main advantages: simple and reliable design; favorable economics on a small scale.
• Main technical challenges: long residence time; non-uniform temperature distribution in the gasifier; high char and/or tar contents; low cold gas efficiency; low productivity (i.e., ~5 GJ/m2h).
Fluidized bed:
• Main advantages: short residence time; high productivity (i.e., 20-30 GJ/m2h); uniform temperature distribution in the gasifier; low char and/or tar contents; high cold gas efficiency; reduced ash-related problems; favorable economics on a medium to large scale.
• Main technical challenges: high particulate dust in syngas.
Increase of temperature:
• Main advantages: decreased tar and char content; decreased methane in syngas; increased carbon conversion; increased heating value of syngas.
• Main technical challenges: decreased energy efficiency; increased ash-related problems.
Increase of pressure:
• Main advantages: low tar and char content; no costly syngas compression required for downstream utilization of the syngas.
• Main technical challenges: limited design and operational experience; higher cost of gasifier at small scale.
Increase of ER (equivalence ratio):
• Main advantages: low tar and char content.
• Main technical challenges: decreased heating value of syngas.
Product Cleaning
The main thing that has to be done to clean the syngas is to remove char and tar. The char is typically in particulate form, so the particulates can be removed in a way similar to what was described in the power plant facility. Typically for gasifiers, the method of particulate filtration includes gas cyclones (removal of particulate matter larger than 5 μm). Additional filtration can be done using ceramic candle filters or moving bed granular filters.
Tars are typically heavy liquids. In some cases, the tars are removed by scrubbing the gas stream with a fine mist of water or oil; this method is inexpensive but also inefficient. Tars can also be converted to low molecular weight compounds by “cracking” into CO and H2 (these are typically the desired gases for syngas). This is done at high temperature (1000°C) or with the use of a catalyst at 600-800°C. Tars can also be “reformed” to CO and H2, which can be converted into alcohols, alkanes, and other useful products. This is done with steam and is called steam reforming of tar; the reaction conditions are at a temperature of ~250°C and pressure of 30-55 atm. The reaction is shown below, and is the same reaction as that shown in reaction 11:
Tar steam reforming reaction:
Reaction 11: CmHn (tar) + mH2O ↔ (m+n/2)H2 + mCO
Steam reforming has advantages. It is generally a safer operation since there isn’t any oxygen in the feed gases, and it produces a higher H2/CO ratio syngas product than most alternatives. The main disadvantage is a lower thermal efficiency, as heat must be added indirectly because the reaction is endothermic.
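Reaction 11's stoichiometry fixes the H2/CO ratio of the reformed product at 1 + n/(2m). A small sketch, using naphthalene (C10H8) as a tar model compound (my choice of example, not the text's):

```python
def steam_reforming_h2_co_ratio(m, n):
    """H2/CO ratio from Reaction 11: CmHn + m H2O <-> (m + n/2) H2 + m CO."""
    return (m + n / 2) / m

# Naphthalene (C10H8) is often used as a surrogate for biomass tar
print(steam_reforming_h2_co_ratio(10, 8))  # 1.4
```

Any ratio above 1 is already an improvement over partial oxidation of the same tar, which is one reason steam reforming yields a more H2-rich product.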
Syngas Utilization
As stated earlier, syngas has multiple uses. Syngas can be used to generate heat and power, and can even be used to turn a turbine in some engineering designs. Syngas can also be used as the synthesis gas for Fischer-Tropsch fuel production, synthesis of methanol and dimethyl ether (DME), fermentation for production of biobased products, and production of hydrogen.
So, how is syngas utilized in heat and power generation? Syngas can be used in pulverized coal combustion systems; it helps the coal to ignite and to prevent plugging of the coal feeding system. Biomass gasification can ease ash-related problems. This is because the gasification temperature is lower than in combustion, and once gasified, can supply clean syngas to the combustor. Adding a gasifier to a combustion system helps in utilization of a variety of biomass sources with large variations in properties. Once the syngas has been cleaned, it can be fed to gas engines, fuel cells or gas turbines for power generation.
Syngas may also be used to produce hydrogen. When biomass is gasified, a mixture of H2, CO, CH4, and CO2 is produced. Further reaction to hydrogen can be done using water reforming and water-gas shift reactions:
Water reforming reaction for CH4 to H2:
Reaction 15: CH4 + H2O ↔ 3H2 + CO
Water-gas shift reaction for CO to H2 (as shown earlier):
Reaction 13: CO + H2O ↔ CO2+H2
Carbon dioxide may also be removed, as it is typically an undesirable component. One method to keep it from going into the atmosphere is to do chemical adsorption:
Reaction 16: CaO + CO2↔CaCO3
Syngas can also be utilized for Fischer-Tropsch synthesis of hydrocarbon fuels. Variable chain length hydrocarbons can be produced via a gas mixture of CO and H2 using the Fischer-Tropsch method. The reaction for this is:
Reaction 17: n CO + 2n H2 → (-CH2-)n + n H2O
In order for the reaction to take place, the ratio should be close to 2:1, so gases generated via gasification may have to be adjusted to fit this ratio. Inert gases also need to be reduced, such as CO2 and contaminants such as H2S, as the contaminants may lower catalyst activity.
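The 2:1 ratio adjustment can be sketched with the water-gas shift reaction (Reaction 13): shifting x moles of CO consumes CO and produces H2 until the target ratio is met. The algebra and the example feed below are illustrative assumptions, not from the text:

```python
def wgs_extent_for_ratio(co, h2, target=2.0):
    """Moles of CO to shift (CO + H2O -> CO2 + H2) so that H2/CO reaches the target.

    Solves (h2 + x) / (co - x) = target for x.
    """
    return (target * co - h2) / (target + 1)

# Example: a 1:1 syngas needs a third of its CO shifted to reach the 2:1 F-T ratio
x = wgs_extent_for_ratio(co=1.0, h2=1.0)
print(round(x, 3))                    # 0.333
print((1.0 + x) / (1.0 - x))          # 2.0
```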
Methanol and dimethyl ether can also be produced from syngas. The reactions are:
Reaction 18: CO + 2H2→CH3OH
Reaction 19: CO2 + 3H2 → CH3OH + H2O
Dimethyl ether (DME) can be made from methanol:
Reaction 20: 2 CH3OH→CH3OCH3+H2O
Syngas can also be fermented to produce bio-based products. This will be discussed in detail in a later lesson.
4.4 Assignments
Homework #4
Download and complete Homework #4. It contains four questions that pertain to the Lesson 4 course material. Be sure to show your work! When you are finished, upload your completed assignment to the Homework #4 Dropbox. Use the following naming convention for your assignment: your user ID_HW4 (i.e., ceb7_HW4).
(12 points)
Discussion #4
Answer the following questions in your discussion forum post:
• What is causing the change to the price of natural gas?
• How have oil prices changed over the last year?
• How might this affect the price of biofuels, or utilization of biofuels?
• What can be done to minimize the volatility in fuel prices, or can anything be done?
After posting your response, please comment on at least one other person's response. Discussion grades will be based on content, not just that you completed the assignment. Grades will reflect critical thinking and effort in your input and responses.
(5 points)
4.5 Summary and Final Tasks
Summary
There are several potential feedstocks that can be utilized for combustion and gasification. These include energy crops, crop residues, forest residues, and processing wastes. For gasification, we looked at several factors: 1) gasification process and chemistry, 2) gasifier design and operation, 3) syngas cleaning, and 4) syngas utilization to make a variety of products. Biobased energy and chemical products discussed in the lesson include: 1) heat and power, 2) hydrogen, 3) F-T hydrocarbon fuels, 4) alcohols, and 5) biochemicals and biopolymers.
References
Schobert, H.H., Energy and Society: An Introduction, 2002, Taylor & Francis: New York, Ch. 4-6.
Wang, L., Biological Engineering, North Carolina A&T University, BEEMS Module C1, Biomass Gasification, sponsored by the USDA Higher Education Challenge Program, 2009-38411-19761 (PI: Li, Yebo).
Reminder - Complete all of the Lesson 4 tasks!
You have reached the end of Lesson 4! Double-check the Road Map on the Lesson 4 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 5.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum. I will check that discussion forum daily (Monday through Friday). While you are there, feel free to post responses to your classmates if you are able to help.
5.1 Biomass Pyrolysis
Figure 5.1 shows a graphic of the four methods of thermochemical conversion of biomass, with pyrolysis highlighted. We just went over combustion and gasification, and we’ll cover direct liquefaction later on in the semester.
There are differences in each of the thermal processes. For combustion, the material is in an oxygen-rich atmosphere, at a very high operating temperature, with heat as the targeted output. Gasification takes place in an oxygen-lean atmosphere, with a high operating temperature, and gaseous products being the main target (syngas production in most cases). Direct liquefaction (particularly hydrothermal processing) occurs in a non-oxidative atmosphere, where biomass is fed into a unit as an aqueous slurry at lower temperatures, and bio-crude in liquid form is the product.
So, what is pyrolysis? There are several definitions depending on the source, but essentially it is a thermochemical process, conducted at 400-600°C in the absence of oxygen. The process produces gases, bio-oil, and a char, and as noted in Lesson 4, is one of the first steps in gasification or combustion. The composition of the primary products made will depend on the temperature, pressure, and heating rate of the process.
There are advantages, both economical and environmental, to doing pyrolysis. They are:
• utilization of renewable resources through a carbon neutral route – environmental potential;
• utilization of waste materials such as lumber processing waste (barks, sawdust, forest thinnings, etc.), agricultural residues (straws, manure, etc.) – economic potential;
• self-sustaining energy – economic potential;
• conversion of low energy in biomass into high energy density liquid fuels – environmental & economic potentials;
• potential to produce chemicals from bio-based resources – environmental & economic potentials.
Pyrolysis was initially utilized to produce charcoal. In indigenous cultures in South America, the material was ignited and then covered with soil to reduce the oxygen available to the material; this left a high-carbon material that could stabilize and enrich the soil ([Discussion of applications of pyrolysis], (n.d.), Retrieved from MagnumGroup.org). Charcoal has also been used as a lighter and less volatile source of heat for cooking (i.e., "charcoal" grills) in countries where electricity is not widely available and people use fuels such as this to cook with or heat their homes (Schobert, H.H., Energy and Society: An Introduction, 2002, Taylor & Francis: New York). Not only is there a solid product such as charcoal; liquid products can also be produced, depending on the starting material and conditions used. Historically, methanol was produced from the pyrolysis of wood.
A mild form of pyrolysis is called torrefaction. Torrefaction is typically done at relatively low pyrolysis temperatures (200-300°C) in the absence of oxygen. The feed material is heated slowly, at less than 50°C/min, over a period of hours to days; this way the volatiles are released and the carbon maintains a rigid structure. In the first stage, water, which lowers the calorific value of a fuel, is driven off. This is followed by a loss of CO, CO2, H2, and CH4, in low quantities. By doing this, approximately 70% of the mass is retained with 90% of the energy content. The solid material is hydrophobic (little attraction to water) and can be stored for a long period of time.
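The mass and energy yields above imply an energy densification of the solid, which we can compute directly:

```python
def energy_densification(mass_yield, energy_yield):
    """Ratio of product to feed energy density (per unit mass) after torrefaction."""
    return energy_yield / mass_yield

# Text values: ~70% of the mass is retained, carrying ~90% of the energy
print(round(energy_densification(0.70, 0.90), 2))  # 1.29
```

In other words, the torrefied solid is roughly 29% more energy-dense per kilogram than the raw feed, which is the main economic argument for torrefying biomass before transport.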
Classification of pyrolysis methods
There are three types of pyrolysis: 1) conventional/slow pyrolysis, 2) fast pyrolysis, and 3) ultra-fast/flash pyrolysis. Table 5.1 and Figure 5.2 summarize how each method differs in temperature, residence time, heating rate, and products made.
As mentioned earlier, slow pyrolysis is typically used to modify the solid material, minimizing the oil produced. Fast pyrolysis and ultra-fast (flash) pyrolysis maximize the gases and oil produced.
Fast pyrolysis is a rapid thermal decomposition of carbonaceous materials in the absence of oxygen at moderate to high heating rates. It is the most common of the methods, both in research and in practical use. The major product is bio-oil. Pyrolysis is an endothermic process. Along with the conditions listed in Table 5.1, the feedstock must be dry and ground to small particles (< 3 mm), and the process is typically done at atmospheric pressure with rapid quenching of the products. The yields of the products are: liquid condensates, 30-60%; gases (CO, H2, CH4, CO2, and light hydrocarbons), 15-35%; and char, 10-15%.
Ultra-fast, or flash, pyrolysis is an extremely rapid thermal decomposition with a very high heating rate. The main products are gases and bio-oil. Heating rates can vary from 100 to 10,000°C/s, and residence times are short. The yields of the products are: liquid condensate, ~10-20%; gases, 60-80%; and char, 10-15%.
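Because the three product fractions must together account for essentially all of the feed mass, the quoted yield ranges can be sanity-checked: somewhere inside each set of ranges the fractions should be able to sum to 100 wt%. A small sketch of that check (the range values are taken directly from the text):

```python
# Consistency check on the yield ranges quoted for fast and flash pyrolysis:
# for each mode, the (low, high) ranges of liquid, gas, and char must bracket
# a combination that sums to 100 wt%.

YIELDS = {  # (low %, high %) per product, as quoted in the text
    "fast":  {"liquid": (30, 60), "gas": (15, 35), "char": (10, 15)},
    "flash": {"liquid": (10, 20), "gas": (60, 80), "char": (10, 15)},
}

def ranges_consistent(products):
    """True if the ranges allow the three fractions to total 100 wt%."""
    lo = sum(r[0] for r in products.values())
    hi = sum(r[1] for r in products.values())
    return lo <= 100 <= hi

for mode, products in YIELDS.items():
    print(mode, "consistent:", ranges_consistent(products))
```

Both modes pass, which simply confirms the quoted ranges are mutually compatible mass balances.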
Table 5.1: Classification of pyrolysis methods with differences in temperature, residence time, heating rate, and major products.
Method                        | Temperature (°C)    | Residence time      | Heating rate (°C/s) | Major products
Conventional/slow pyrolysis   | Med-high (400-500)  | Long (5-30 min)     | Low (~10)           | Gases, char, bio-oil (tar)
Fast pyrolysis                | Med-high (400-650)  | Short (0.5-2 s)     | High (~100)         | Bio-oil (thinner), gases, char
Ultra-fast/flash pyrolysis    | High (700-1000)     | Very short (<0.5 s) | Very high (>500)    | Gases, bio-oil
Bio-oil Product Properties
Crude bio-oils are different from petroleum crude oils. Both can be dark and tarry with an odor, but crude bio-oils are not miscible with petro-oils. Bio-oils have high water content (20-30%); they are denser than water (1.10-1.25 g/mL); and their heating value is ~5600-7700 Btu/lb (13-18 MJ/kg). Bio-oils have high oxygen content (35-50%), which causes high acidity (pH as low as ~2). Bio-oils are also viscous (20-1000 cP @ 40°C) and leave high solid residues (up to 40%). These oils are oxidatively unstable as well: they can polymerize, agglomerate, or undergo oxidative reactions in situ, which leads to increased viscosity and loss of volatility. The values in Table 5.2 compare the properties of bio-oil to a petroleum-based heavy fuel oil.
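The paragraph quotes the same heating value in two unit systems, which is easy to cross-check. A minimal sketch (the conversion factor 1 MJ/kg ≈ 429.92 Btu/lb is a standard value, not from the text):

```python
# Cross-check the two heating-value ranges quoted for bio-oil:
# 13-18 MJ/kg should correspond to roughly 5600-7700 Btu/lb.
BTU_PER_LB_PER_MJ_PER_KG = 429.92  # 1 MJ/kg expressed in Btu/lb

def mj_per_kg_to_btu_per_lb(hhv_mj_per_kg):
    """Convert a heating value from MJ/kg to Btu/lb."""
    return hhv_mj_per_kg * BTU_PER_LB_PER_MJ_PER_KG

low = mj_per_kg_to_btu_per_lb(13)   # ~5589 Btu/lb
high = mj_per_kg_to_btu_per_lb(18)  # ~7739 Btu/lb
print(f"{low:.0f}-{high:.0f} Btu/lb")
```

The result (≈5589-7739 Btu/lb) matches the text's rounded range of 5600-7700 Btu/lb.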
Table 5.2: Typical properties of wood pyrolysis bio-oil and heavy fuel oil.
Physical property            | Bio-oil   | Heavy fuel oil
Moisture content (wt%)       | 15-30     | 0.1
pH                           | 2.5       | --
Specific gravity             | 1.2       | 0.94
C (wt%)                      | 54-58     | 85
H (wt%)                      | 5.5-7.0   | 11
O (wt%)                      | 35-40     | 1.0
N (wt%)                      | 0-0.2     | 0.3
Ash (wt%)                    | 0-0.2     | 0.1
HHV (MJ/kg)                  | 16-19     | 40
Viscosity (cP, @50°C)        | 40-100    | 180
Solids (wt%)                 | 0.2-1     | 1
Distillation residue (wt%)   | up to 50  | 1
(Czernik, S. and Bridgwater, A.V., 2004. Overview of Applications of Biomass Fast Pyrolysis. Energy Fuels 18, 590-598).
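One practical reading of Table 5.2 is the energy-replacement ratio: because bio-oil's HHV is well below that of heavy fuel oil, more mass of bio-oil is needed for the same heat. A sketch using the table's midpoint values:

```python
# From Table 5.2: bio-oil HHV is 16-19 MJ/kg vs 40 MJ/kg for heavy fuel oil.
# Estimate how many kg of bio-oil replace 1 kg of heavy fuel oil on an
# energy basis, using the midpoint of the bio-oil range.
HHV_BIO_OIL = (16 + 19) / 2   # MJ/kg, midpoint of the table's range
HHV_HFO = 40                  # MJ/kg, from the table

mass_ratio = HHV_HFO / HHV_BIO_OIL
print(f"{mass_ratio:.1f} kg bio-oil per kg heavy fuel oil")  # ~2.3
```

So on a pure energy basis, roughly 2.3 kg of raw bio-oil stand in for 1 kg of heavy fuel oil, one of the motivations for the upgrading steps discussed later.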
Process Considerations
Several components are necessary for any pyrolysis unit, outside of the pyrolyzer itself. The units and how they are connected are shown in Figure 5.3.
The goal of the process is to produce bio-oil from the pyrolyzer. The bio-oil that is generated has potential as a transportation fuel after upgrading and fractionation. Some of it can also be used for making specialty chemicals, especially ring-structure compounds that could be used for adhesives. The gases that are produced contain combustible components, so they are used to generate heat. A biochar is produced as well. Biochar can be used as a soil amendment that improves soil quality and sequesters carbon, or as a carbon material for a catalyst support or activated carbon. Processing also leaves a mineral-based material called ash, which typically must be contained.
The next units to be considered are separation units. Char is a solid, so it is typically separated using a cyclone or baghouse. It can serve as a catalyst for further decomposition into gases, because the minerals inherent in the char, as well as the carbon, can catalyze gasification reactions. The liquids and gases must also be separated: usually they are cooled in order to separate the condensable liquids from the non-condensable gases. The liquids are then fractionated and will most likely be treated further to improve their stability; at times, the liquid portion may plug lines due to heavier components. The non-condensable gases need to be cleaned of any trace amounts of liquids and can be reused if needed.
The next considerations are the heat sources for the unit. Hot flue gas is used to dry the feed. The product gases contain combustible components, so they can be partially combusted to provide heat. Any char that is left over can be burned as a major supply of heat, and biomass itself can be partially burned as another major heat source.
Another important process to consider is the means of heat transfer. Much of it is indirect, through metal walls and tube and shell units. Direct heat transfer has to do with char and biomass burning. And in the fluidized bed unit, the carrier (most often sand) brings in the heat, as the carrier is heated externally and recycled to provide heat to the pyrolyzer.
Types of Pyrolyzers
So, what types of pyrolyzers are used? The more common types are fluidized-bed pyrolyzers. Figures 5.4a and 5.4b show schematics of two different types. The advantages of using fluidized beds are uniform temperature and good heat transfer; high bio-oil yield of up to 75%; a medium level of complexity in construction and operation; and ease of scaling up. The disadvantages of fluidized beds are the requirement for small particle sizes; a large quantity of inert gas; and high operating costs. The unit shown in Figure 5.4b, the circulating fluidized-bed (CFB) pyrolyzer, has similar advantages, although medium-sized feed particles can be used. Disadvantages include a large quantity of heat carrier (i.e., sand), more complex operation, and high operating costs.
Two other types of pyrolyzers are the rotating cone (Figure 5.5a) and the Auger (Figure 5.5b) pyrolyzers. The rotating cone creates a swirling movement of particles through a g-force. This type of pyrolyzer is compact, has a relatively simple construction and operation, and has a low heat carrier/sand requirement. However, it has a limited capacity, requires feed to be fine particles, and is difficult to scale up. Auger pyrolyzers are also compact, simple in construction, and easy to operate; they function at a lower process temperature as well (400 °C). The disadvantages of Auger pyrolyzers include long residence times, lower bio-oil yields, high char yield, and limits in scaling up due to heat transfer limits.
Bio-Oil Upgrading
As noted earlier, bio-oil has issues and must be upgraded, which essentially means processing it to remove the problems. These problems include high acid content (which is corrosive), high water content, and high instability, both oxidative and thermal (which can cause unwanted solids formation).
The oils must be treated physically and chemically. Physical treatments include the removal of char via filtration and emulsification of hydrocarbons for stability. Bio-oils are also fractionated, but not before chemical treatments are done. The chemical treatments include esterification (a reaction with alcohol to form esters – this will be covered in detail when discussing biodiesel production); catalytic de-oxygenation/hydrogenation to remove oxygen and double bonds; thermal cracking for more volatile components; physical extraction; and syngas production/gasification.
Catalytic de-oxygenation/hydrogenation uses a catalyst along with hydrogen gas; specialty catalysts are used, such as sulfides and oxides of nickel, cobalt, and molybdenum. Hydrogenation is commonly used in petroleum refining to remove sulfur and nitrogen from crude oil and to hydrogenate products where double bonds may have formed during processing. Catalytic upgrading is a separate process and uses dedicated equipment. One problem is that some components of bio-oil may be toxic to the catalysts.
Esterification reacts the corrosive acids in bio-oils with an alcohol to form esters. An ester is shown below in Figure 5.6. The esterification reaction will be discussed in detail in the biodiesel lesson.
Bio-oil can also be thermally cracked and/or made into syngas through gasification. Please refer to Lesson 2 for the thermal cracking discussion and Lesson 4 for the gasification discussion. One other process that can be utilized is physical extraction; extraction takes advantage of the affinity of some compounds for a particular fluid. One example is the extraction of phenols. Phenols can be extracted using a sodium solution, such as sodium hydroxide in water; the phenolic compounds are attracted to the sodium solution, while the less oxygenated compounds stay in the organic phase. Again, this will be discussed in more detail in later lessons. Figure 5.7 shows a schematic of a typical processing unit to upgrade bio-oil.
Biomass Pretreatment
Current methods of generating biofuels are primarily from starch or grain, and starch hydrolysis is fairly straightforward. However, because the starch feedstocks are typically food-based, the goal is to develop technologies to produce ethanol from cellulose; cellulose is obtained from lignocellulosic biomass sources and must be pretreated before breaking down into ethanol. Figure 5.8 is a schematic of the differences in processing for starch (current) and cellulose (emerging). Before we go any further, we will have a short tutorial on the various components of lignocellulosic biomass.
5.2 Biomass Carbohydrate Tutorial
When the word carbohydrate is used, I typically think of the carbohydrates in food. Carbohydrates are the sugars and complex units composed of sugars. This section will describe each.
Sugars are also called saccharides. Monomer units are single units of sugars called monosaccharides. Dimer units are double units of sugars called disaccharides. Polymers contain multiple units of monomers and dimers and are called polysaccharides.
So, what are typical monosaccharides? They are molecules with a ring structure of carbons and oxygen. Figure 5.9a shows the structure of glucose; its formula is C6H12O6. Glucose is distinguished by its structure: five carbons in the ring with one oxygen; a CH2OH group attached to one carbon; and OH and H groups attached to the other carbons. This sugar is known as blood sugar and is an immediate source of energy for cellular respiration. Figure 5.9b shows galactose next to glucose; galactose is almost identical to glucose, except that on the No. 4 carbon the OH and H positions are swapped, making it an isomer of glucose (highlighted in red on the galactose molecule). Galactose is a sugar monomer in milk and yogurt. Figure 5.9c shows fructose; while it has the same chemical formula as glucose (C6H12O6), it forms a five-membered ring of carbons and oxygen and carries two CH2OH groups. This is a sugar found in honey and fruits.
We also have disaccharides as sugars in food. Disaccharides are dimers of the monomers we just discussed and are shown below. One of the most common disaccharides is sucrose, which is common table sugar and is shown in Figure 5.10a. It is a dimer of glucose and fructose. Another common sugar dimer is lactose. It is the major sugar in milk and a dimer of galactose and glucose (see Figure 5.10b). Maltose (5.10c) is also a sugar dimer but is a product of starch digestion. It is a dimer made up of glucose and glucose. In the next section, we will discuss what starch and cellulose are composed of in order to see why maltose is a product of starch digestion.
Carbohydrate structure
All carbohydrate polymers are built from monomers connected by what is called a glycosidic bond. For example, sucrose is a dimer of glucose and fructose. In order for the bond to form, an H and an OH (i.e., a water molecule) are lost. So, another way to show this is:
C12H22O11 = 2 C6H12O6 − H2O
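The condensation above can be verified by counting atoms on each side. A small sketch that checks the balance 2 C6H12O6 → C12H22O11 + H2O:

```python
# Atom-balance check for the glycosidic-bond condensation shown above:
# two glucose units lose one water to form sucrose.
from collections import Counter

glucose = Counter(C=6, H=12, O=6)    # C6H12O6
sucrose = Counter(C=12, H=22, O=11)  # C12H22O11
water   = Counter(H=2, O=1)          # H2O

lhs = glucose + glucose   # 2 C6H12O6
rhs = sucrose + water     # C12H22O11 + H2O
print("balanced:", lhs == rhs)  # balanced: True
```

The same bookkeeping applies to every glycosidic bond in starch and cellulose: each linkage formed releases one water molecule, and each hydrolysis step consumes one.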
And as dimers can form, polymers will form and are called polysaccharides. Typical polysaccharides include 1) glycogen, 2) starch, and 3) cellulose. Glycogen is a molecule in which animals store glucose by polymerizing glucose, as shown in Figure 5.11.
Starches are similar to glycogen, with a little bit different structure. Starch is composed of two polymeric molecules, amylose and amylopectin. The structures of both are shown in Figure 5.12a and 5.12b.
About 20% of starch is made up of amylose and is a straight-chain that forms into a helical shape with α-1,4 glycosidic bonds and the rest of the starch is amylopectin, which is branched with α-1,4, and α-1,6 glycosidic bonds. Figure 5.13 shows the structure of cellulose. Cellulose is a major molecule in the plant world; it is also the single most abundant molecule in the biosphere. It is a polymer of glucose and has connectors of the glucose molecule that are different from starch; the linkages are β-1,4 glycosidic bonds. The polymer of cellulose is such that it can form tight hydrogen bonds with oxygen, so it is more rigid and crystalline than starch molecules. The rigidity makes it difficult to break down.
There is a wide variety of sources for lignocellulosic biomass, including agricultural waste (i.e., corn stover), forest waste from furniture and home construction, municipal solid waste, and energy crops. They all look very different, but all are composed of cellulose, hemicellulose, lignin, and other minor compounds. Figure 5.14a shows switchgrass (with parts magnified to emphasize different parts of the plant structure). Once you get down to the microfibril structure, you can see its components: lignin on the outside layer, hemicellulose on the next layer, and finally cellulose. Because of this structure, lignocellulose is difficult to break down, a property known as recalcitrance. In order to get to the cellulose, the cell wall has to be opened up, the lignin has to be removed or separated from the hemicellulose and cellulose, and then the cellulose, crystalline in nature, has to be broken down. All these steps are resistant to microbial attack, so pretreatment methods are used to break the material apart. In other words, biomass recalcitrance requires pretreatment.
Another Perspective
You can access the following online journal article to see another illustration of lignocellulose, but with the lignin component included (Fig. 1):
Pretreatment is the most costly step; however, the only process step more expensive than pretreatment is no pretreatment. Without pretreatment, yields are low and drive up all other costs by more than the amount saved by skipping it; the increased yields with pretreatment reduce all other unit costs. Figure 5.15 shows a schematic of the role pretreatment plays. Depending on the method, pretreatment will separate the lignin, the hemicellulose, and the cellulose. Part of the lignin and the hemicellulose are dissolved in liquid during hydrolysis, while part of the lignin and the cellulose are left as a solid residue. There is a partial breakdown of the polymeric molecules, and the cellulose becomes more accessible to microbial attack.
Pretreatment is costly and affects both upstream and downstream processes. On the upstream side, it can affect how the biomass is collected or harvested, as well as the comminution of the biomass. Downstream of pretreatment, the enzyme production can be affected, which in turn will affect the enzymatic hydrolysis and sugar fermentation. Pretreatment can also affect hydrolyzate conditioning and hydrolyzate fermentation. The products made and the eventual final processing also will be affected by pretreatment. However, it is more costly to not do pretreatment.
Pretreatments work through two different types of effects. Physical effects disrupt the higher-order structure and increase surface area and chemical/enzyme penetration into plant cell walls; they include mechanical size reduction and fiber liberation. Chemical effects include solubilization, depolymerization, and breaking of crosslinks between macromolecules. The individual components can “swell,” depending on the organic solvent or acid used. Lignin can be “redistributed” into solution, and lignin and carbohydrates can be depolymerized or modified chemically.
The following pretreatment technologies will be discussed in more depth: 1) size reduction, 2) low pH method, 3) neutral pH method, 4) high pH method, 5) organic solvent separation, 6) ionic liquid separation, and 7) biological treatments.
5.03: Pretreatment of Lignocellulosic Biomass
5.3a Size Reduction
Size reduction is also known as comminution. Decreasing the particle size of biomass improves accessibility to plant cell wall carbohydrates for chemical and biochemical depolymerization. It can also increase the bulk density for storage and transportation. There is an energy cost to mechanical size reduction. For example, 20-40 kWh/metric ton is needed to reduce hardwood chips to coarse particles of 0.6-2.0 mm, and electricity typically costs anywhere from $0.04-0.10 per kWh. To reduce the particles to a fine size (0.15-0.30 mm), 100-200 kWh/ton is required.
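Multiplying the quoted energy requirements by the quoted electricity prices gives a rough per-ton grinding cost. A minimal sketch using only the figures from the paragraph above:

```python
# Rough grinding-cost estimate: coarse grinding of hardwood chips takes
# 20-40 kWh per metric ton at $0.04-0.10 per kWh; fine grinding takes
# 100-200 kWh per ton.

def grinding_cost(kwh_per_ton, price_per_kwh):
    """Electricity cost in dollars per metric ton of biomass."""
    return kwh_per_ton * price_per_kwh

low = grinding_cost(20, 0.04)        # best case, coarse: ~$0.80/ton
high = grinding_cost(40, 0.10)       # worst case, coarse: ~$4.00/ton
fine_high = grinding_cost(200, 0.10) # worst case, fine:   ~$20/ton
print(f"coarse grinding: ${low:.2f}-${high:.2f} per metric ton")
```

The five-fold jump in energy for fine grinding is why downstream steps that tolerate coarser particles (e.g., steam explosion or AFEX, per Table 5.4) can save real money.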
There are multiple methods used to reduce the size of particles, and the method used will depend on whether the sample is dry or wet. There are hammer mills (a repetitive hammering of sample), knife mills (a rotating knife slices the sample), and ball mills (the sample is put into a container with metal balls and rolled). Sometimes the sample has to be shredded and dried before using some of these techniques.
Samples can also be “densified.” Samples can be mixed with some sort of binder (to keep the materials together, like a glue) and pushed into shape, or pelletized. This increases the bulk density (i.e., from 80-150 kg/m3 for straw or 200 kg/m3 for sawdust to 600-700 kg/m3 after densification). This can lower transportation costs, reduce the storage volume, and make handling easier. After densification, the materials usually have lower moisture content.
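The transport and storage benefit of densification follows directly from the bulk-density figures quoted above. A sketch (the straw midpoint of ~100 kg/m³ and the pellet midpoint of ~650 kg/m³ are taken from the ranges in the text):

```python
# Volume saved by densification: straw at ~100 kg/m3 (midpoint of 80-150)
# pelletized to ~650 kg/m3 (midpoint of 600-700) shrinks the storage volume
# required for a given mass by the ratio of the densities.

def volume_per_ton(bulk_density_kg_m3):
    """Cubic meters occupied by one metric ton of material."""
    return 1000.0 / bulk_density_kg_m3

before = volume_per_ton(100)  # ~10 m3 per ton of loose straw
after = volume_per_ton(650)   # ~1.5 m3 per ton of pellets
ratio = before / after
print(f"volume reduced {ratio:.1f}x")  # 6.5x
```

A roughly 6.5× reduction in volume translates directly into fewer truckloads and smaller storage, which is the economic case the text makes for densification.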
5.3b Low pH Methods
The mechanism for low pH treatments is the hydrolysis of hemicellulose. Hydrolysis is a reaction with water, where acid is added to the water to accelerate the reaction time. Several acids can be used, including dilute sulfuric acid (H2SO4), gaseous sulfur dioxide (SO2), hydrochloric acid (HCl), phosphoric acid (H3PO4), and oxalic acid (C2H2O4). Because it is a reaction, the key parameters affecting it include temperature, time, acid concentration, and moisture content of the biomass. The following reactions can take place: hemicellulose can be solubilized, lignin can be separated, acetyl groups are removed, and the surface of the biomass becomes more accessible. As an example (Figure 5.16), the α-1,4 bond is broken by the water and acid to yield two glucose units. An enzyme, amylase, can also promote the reaction. The addition of acid and elevated temperature increases the rate of reaction.
Not only is acid used to facilitate hydrolysis, but acid-catalyzed dehydration of sugars can form furans, which can break down into organic acids such as formic acid and levulinic acid. These compounds can be toxic to the enzymes that are used in sugar fermentation. So after reaction, the residual acid must be neutralized, and inhibitors formed or released during pretreatment must be reduced. Two methods are to use calcium oxide (also known as overliming) or ammonium hydroxide.
Calcium oxide is cheap, forms gypsum during the process, and incurs a sugar loss of ~10%, with the necessity of by-product removal and disposal. The reaction is shown below in Reaction 1.
Reaction 1: CaO + H2SO4 → H2O + CaSO4
The advantage of using ammonium hydroxide is that less sugar is lost and less waste is generated, but the cost is higher. The reaction is shown below in Reaction 2.
Reaction 2: 2NH4OH + H2SO4→2H2O + (NH4)2SO4
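Reaction 1 is a 1:1 molar neutralization, so the lime requirement per unit of acid follows from the molar masses. A sketch (the molar masses are standard values, not from the text):

```python
# Stoichiometric lime requirement for Reaction 1 (CaO + H2SO4 -> H2O + CaSO4):
# the reaction is 1:1 in moles, so the mass of CaO needed per kg of sulfuric
# acid is simply the ratio of molar masses.
M_CAO = 56.08    # g/mol, standard molar mass of CaO
M_H2SO4 = 98.08  # g/mol, standard molar mass of H2SO4

kg_cao_per_kg_acid = M_CAO / M_H2SO4
print(f"{kg_cao_per_kg_acid:.3f} kg CaO per kg H2SO4")  # ~0.572
```

The same molar-ratio approach applies to Reaction 2, except that two moles of NH4OH are consumed per mole of acid.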
Figure 5.17a and 5.17b show process diagrams of typical configurations and reaction conditions for sulfuric acid and SO2.
5.3c Neutral pH Pretreatment
Pretreatment can also take place in neutral pH water. There are two pathways that can occur. One is the release of acidic compounds, mainly acetic acid, from acetylated hemicellulose; this is also called autohydrolysis. Water can also dissociate into H+ and OH− as the temperature and pressure increase toward the supercritical point (approximately 374°C, 3200 psi), and as this happens, the water behaves like an acid/base system. The process is done in water without added chemicals, either as liquid hot water, steam explosion, or water near the supercritical point. The key parameters are time, temperature, and moisture content, and the effects are similar to those of low pH methods. A schematic for liquid hot water processing is shown in Figure 5.18.
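The near-critical conditions quoted above (≈374°C, 3200 psi) can be checked against the accepted critical point of water (≈22.06 MPa) with a unit conversion. A sketch (the psi-to-MPa factor is a standard value, not from the text):

```python
# Unit check on the near-critical pressure quoted for water:
# 3200 psi should land close to water's critical pressure of ~22.06 MPa.
PSI_TO_MPA = 0.00689476  # 1 psi in MPa (standard conversion)

p_mpa = 3200 * PSI_TO_MPA
print(f"{p_mpa:.2f} MPa")  # ~22.06
```

The agreement confirms the 3200 psi figure is simply the critical pressure of water expressed in US customary units.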
One process, developed by Inbicon, is a counter-current, multi-stage hot water pretreatment process; there is a pilot-scale unit at Skærbæk, Denmark. It is a three-stage hydrothermal process using hot water at 80°C, 160-200°C, and 190-230°C. After the first stage, a liquid C5 molasses (sugar) stream is taken out of the process and used for animal feed. After the third stage, the fiber fraction contains cellulose and lignin. Using enzymes, yeast, and fermentation, bioethanol and a solid fuel for heat and power are produced. Figure 5.19 shows wheat straw before and after pretreatment (the raw wheat straw and the cellulosic-lignin portion).
The next pretreatment processes to discuss are at high pH. The high pH removes the lignin portion of biomass through the breaking of ether linkages (R-O-R’) that hold aromatic phenolic compounds together; ring opening can also take place. It is a depolymerization process. There are several processes and bases used, including: lime, calcium carbonate, potassium hydroxide, sodium hydroxide, and aqueous ammonia. Key parameters include temperature, reaction time, concentration of base, moisture of the feed material, as well as oxidizing agents. The effects include removal of most of the lignin, some removal of hemicellulose, and removal of acetyl links between lignin and hemicellulose.
Lignin is most prominent in grasses and woody biomass. It composes 6-35% of lignocellulosic biomass, depending on the type of grass or wood. Lignin is comprised of crosslinked, branched, monoaromatic units with methoxy and propyl alcohol functional groups. These are shown in Figure 5.20a. Figure 5.20b shows a model of a lignin molecule and how the aromatic monomers are linked together.
5.3d High pH (Alkaline) Pretreatment
There are two possible outcomes of the chemistry behind high pH treatment: 1) degradation reactions that liberate lignin fragments and lead to lignin dissolution, and 2) condensation reactions that increase the molecular size of lignin fragments and result in lignin precipitation. Lignin is a complicated molecule with a variety of linkages, so the reaction chemistry is correspondingly complex. Addition of oxidizing agents greatly improves delignification.
There are multiple processes that have been developed for this type of treatment. Figure 5.21 shows the lime pretreatment process flow diagram. The pretreatment can be done under various conditions, such as oxidative and non-oxidative conditions, short term high temperature (100-200°C, 1-6 h), and long term low temperature (25-65°C, 1-8 weeks). Figure 5.22 shows the soaking in aqueous ammonia (SAA) process flow diagram.
One of the more developed high pH processes is the ammonia fiber expansion (AFEX) process. Lignocellulosic biomass is soaked in liquid ammonia (causing swelling) followed by the rapid release of pressure (causing expansion). Anhydrous liquid ammonia is used, and key parameters include temperature, residence time, ammonia concentration, and moisture content of the biomass. During this process, there is virtually no compositional change, but lignin is relocated, cellulose is decrystallized, and hemicellulose is depolymerized. This method increases the size and number of micropores in the cell wall to allow for greater accessibility of chemicals for the following stages of processing. A process schematic is shown in Figure 5.23.
5.3e Organic Solvation Processes
The next process type is using an organic solvent, such as the Organosolv (OS) process or the Cellulose solvent- and Organic Solvent-based LIgnocellulose Fractionation (COSLIF) process. For the OS pretreatment, the main mechanism involves the dissolution of lignin by organic solvent and then re-precipitation by adding an antisolvent, such as acidified water. This method was first introduced as a pulping method for papermaking. The organic solvents commonly used are acetone, ethanol, methanol, etc., in an aqueous solution of 20-60% water. Key parameters include temperature, residence time, chemical addition, and the water concentration. The effect is to: separate lignin from lignocellulosic biomass; solubilize hemicellulose; and increase pore size and surface area in the cell wall. Figure 5.24 shows a schematic of a process diagram for OS pretreatment.
Another organic solvent based process is cellulose-solvent and organic-solvent lignocellulose fractionation (COSLIF). For this process, an organic solvent is introduced to dissolve cellulose prior to Organosolv processing. Figure 5.25 shows a schematic of COSLIF processing.
5.3f Ionic Liquids
One of the newer methods of biomass pretreatment uses ionic liquids. Ionic liquids (ILs) are organic salts that usually melt below 100°C and are strong solvation agents. A common salt that we are all familiar with is table salt, sodium chloride (NaCl); if dissolved in water, it separates into the ions Na+ and Cl−, but it is not an organic salt like the ILs. ILs have interesting properties: depending on the IL, they can solubilize whole cellulosic biomass or selectively dissolve components, e.g., lignin and cellulose. It is relatively easy to separate the dissolved component from the organic salt by using an anti-solvent such as water, methanol, or ethanol. When cellulose has been dissolved by an ionic liquid and then re-precipitated by an anti-solvent, the cellulose is less crystalline and easier to break down. Unfortunately, this is still a costly method of pretreatment, as ILs are difficult to recycle and can be toxic to the enzymes and microbes used in processing cellulose to ethanol. One such IL, known as EmimAc (1-ethyl-3-methylimidazolium acetate), is able to completely solubilize both cellulose and lignin in switchgrass. Figure 5.26 shows the chemical structure of EmimAc and the change in cellulose after re-precipitation with an anti-solvent (T = 120°C). Figure 5.27 shows a schematic of the process diagram.
5.3g Biological Pretreatment
The last technology we will look at is biological pretreatment, in which lignin is removed from lignocellulosic biomass by lignin-degrading microorganisms. Key parameters are temperature, cultivation time, nutrient addition, and selectivity toward lignin. Some of the lignin-degrading enzymes include lignin peroxidase, manganese peroxidase, laccase, and xylanase. Advantages of such a system include: no chemicals, mild conditions (ambient temperature and pressure), low energy and capital outlay, and less enzyme use later on. Disadvantages, however, include pretreatment times of days to weeks, loss of cellulose and hemicellulose, contaminants, and the need for additional pretreatment to achieve higher sugar yields.
5.3h Summary
Of the methods we’ve discussed, there are pretreatment options that lead the others (some under commercialization). The current leading pretreatment options include dilute acid, AFEX, liquid hot water, lime, and aqueous ammonia, with dilute acid and water, AFEX, and lime under commercialization. Figure 5.28 shows switchgrass before pretreatment and after several pretreatment options, i.e., AFEX, dilute acid, liquid hot water, lime, and soaking in aqueous ammonia (SAA).
To summarize the methods of pretreatment, Table 5.3 shows some of these pretreatment methods and the major and minor effects on lignocellulosic biomass. All methods (AFEX, dilute acid, lime, liquid hot water, soaking aqueous ammonia, and treatment with SO2) have an effect on increasing surface area, removing hemicellulose, and altering lignin structure. Only AFEX, lime, and SAA pretreatment remove lignin, and AFEX and SAA decrystallize cellulose.
Table 5.4 shows the conditions for ideal pretreatment of lignocellulosic biomass for dilute acid, steam explosion, AFEX and liquid hot water.
Table 5.4: Comparison of Pretreatment Processesa
Pretreatment Process Dilute Acid Steam Explosion AFEX Liquid Hot Water
Reactive Fiber Yes Yes Yes Yes
Particle Size Reduction Required Yes No Nob No
Hydrolyzate Inhibitory Yes Yes No Slightly
Pentose Recovery Moderate Low High High
Low Cost Materials of Construction No Yes Yes Yes
Production of Process Residues Yes No No No
Potential for Process Simplicity Moderate High Moderate High
Effectiveness at Low Moisture Contents Moderate High Very High Not Known
a Modified from (86); AFEX ratings from Bruce Dale (personal communication).
b For grasses, data for wood not available.
Credit: Lynd, 1996. Annual Rev. Energy Environ., 21: 403-465
5.04: Assignments
Homework #5
For this homework, you will read four selections and compose an essay.
1. Please read the following selections. The first three can be accessed via the links in the Lesson 5 Module. The Laughlin selection can be accessed via the Library Resources.
• Wald, Matthew. "On the Horizon, Planes Powered by Plant Fuel." New York Times 17 Jan. 2012.
• "Obama’s Pitch on Energy." New York Times 14 Feb. 2012, The Opinion Pages sec.
• Taylor, James. "Trump's Energy Policy: 10 Big Changes", Opinion Contributor, Forbes 26 December 2016.
• Laughlin, Robert B. Powering the Future: How We Will (Eventually) Solve the Energy Crisis and Fuel the Civilization of Tomorrow. New York: Basic, 2011. Print. (Chapter 3)
2. Write a summary of each article, addressing these three questions:
• Question #1: What is the perspective of the article?
• Question #2: Do you agree/disagree with it?
• Question #3: Explain why.
Some notes on the format:
• Approximate length – 1-1½ pages total (a paragraph for each article), double-spaced, 1” margins, 12-point font, name at the top.
• The essay should incorporate answers to all 3 questions for each article.
• Use footnotes to indicate which reading you are referring to (e.g., Wald, pg #).
• Use as filename your user ID_HW5 (e.g., ceb7_HW5).
• Upload it to the Homework #5 Assignment in Lesson 5.
(12 points)
Discussion #5
Post a response in Discussion #5 that includes discussion of the following questions:
1. Do you think that jet fuel generated from biomass will make an impact on the jet fuel market?
2. What do you think about the proposed energy policy and suggestions about impacting the Big Oil companies?
3. What are the “jungle laws” that appear to be governing our energy policy currently?
Take some time to review others' responses. Then respond to at least one other person’s post. Grades will reflect critical thinking in your input and responses.
(5 points)
Exam #2
This week you will complete Exam #2.
5.05: Summary and Final Tasks
Summary
Lesson 5 covered biomass pyrolysis and biomass pretreatment. Pyrolysis is a thermal treatment in the absence of oxygen and at lower temperature than gasification. The main products of interest are chars that are often used in combustion (with some of the undesirable components removed), and liquids that need to be processed further to remove oxygen functionality and add hydrogen.
The goal of pretreatment is to overcome biomass recalcitrance and improve conversion efficiency/economics. Mechanical size reduction is generally required. Several pretreatment technologies have been developed based on the use of different chemicals. They do the following:
• Low pH pretreatment: hemicellulose removal, acetyl removal, lignin solubilization and re-precipitation, increased surface area
• High pH pretreatment: lignin removal, acetyl removal, hemicellulose solubilization, increased surface area
• AFEX pretreatment: no compositional changes, lignin alteration, cellulose decrystallization, increased surface area
• Organosolv and IL: fractionation of lignin from cellulose and hemicellulose, increased surface area
References
Schobert, H.H., Energy and Society: An Introduction, 2002, Taylor & Francis: New York, Ch. 4-6.
He, Brian, Department of Biological and Agricultural Engineering, University of Idaho, BEEMS Module C2, Biomass Pyrolysis, sponsored by USDA Higher Education Challenger Program, 2009-38411-19761, PI on project Li, Yebo.
Shi, Jian, Hodge, D.B., Pryor, S.W., Li, Yebo, Department of Food, Agricultural, and Biological Engineering, The Ohio State University, BEEMS Module B1, Pretreatment of Lignocellulosic Biomass, sponsored by USDA Higher Education Challenger Program, 2009-38411-19761, PI on project Li, Yebo.
Reminder - Complete all of the Lesson 5 tasks!
You have reached the end of Lesson 5! Double-check the Road Map on the Lesson 5 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 6.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum. I will check that discussion forum daily (Monday through Friday). While you are there, feel free to post responses to your classmates if you are able to help.
6.1 Final Project (Biorefinery Project)
The final project will be due at the end of the semester. Toward the end of the semester, the homework load will be lighter, so you'll have ample opportunity to work on this. However, I am including the expectations now, so that you can get started early.
Biomass Choice
You will be choosing a particular biomass to focus your report on. For the biomass you choose, you will need to do a literature review on the biomass and how and where it grows. Your requirements for location include 1) where it grows, 2) climate, 3) land area requirement, and 4) product markets near location. However, you are not to make a choice that already exists in the marketplace. This includes 1) sugarcane for ethanol production in Brazil and 2) corn for ethanol production in the Midwest of the USA. You need to put thought into what biomass you are interested in converting to fuels and chemicals, as well as where you want to locate your small facility. Most of all, choose biomass and location based on your particular interests, so as to make it interesting to you.
The literature review should consist of a list of at least ten resources that you have consulted from journals and websites of agencies such as IEA. If available, five of these sources should be from the last five years. Please use APA style for your references.
Location Choice and Method of Production
Once you have determined a biomass, choose a location based on previous information. Discuss your reasons for the choice of biomass, location, and desired products for production. Include a map of the area where you want to grow and market your product. You need to be aware of whether or not the biomass you choose can grow in the climate of the area you choose.
You will be choosing a method with which to convert your biomass into fuels. You are expected to include a schematic of the process units and a description of each process that will be necessary to do the biomass conversion; you should include what each process does and a little about the chemistry of each. Show the major chemical reactions that will take place in the process. Figures 6.1 and 6.2 show a process diagram and a chemical reaction so you have an idea of what I expect.
Market
The next section has to do with marketing your product. If you don’t have somewhere to sell your product, it will sit in a warehouse, maybe degrade (spoil is a more common term), and you won’t be making money on it. In the location you have chosen, is there a market for the product? If not, is there a location nearby where you can sell it? Discuss how you might market your products in the areas where you want to use biomass and sell products. How might you make the product you are selling appeal to the public? Due to the deregulation of electricity markets in various states, the prices of electricity will vary. Some companies charge more for renewable-based electricity, so they have to appeal to a particular market of people who are willing to spend more on renewable electricity.
Economics
We are going to assume that your process is going to be economic. However, any economic evidence that you can include that supports your process or indicates it would be a highly economical process will be beneficial to your paper. I would also like for you to include any research and development that must occur in order for the process to become viable and economic (i.e., what is the current research on this process?).
Other Factors
Discuss other factors that could affect the outcome of implementing a bio-refining facility. What laws, such as environmental laws, might be in place? What is the political climate of the community you have chosen? What is the national political climate related to the biomass processing you have chosen? Are there any tax incentives that would encourage your process to be implemented or the product to be sold? An example would be something like this: all airlines in the US are expected to include a certain percentage of renewables in the jet fuel they use. So, would your process make jet fuel, and how would you market it to airlines? Include other factors that could “make or break” the facility.
Format
The report should be 8-12 pages in length. This includes figures and tables. It should be in 12-point font with 1” margins. You can use line spacing from 1-2. It is to be clearly communicated in English, with proper grammar and as free from typographical errors as possible. You will lose points if your English writing is poor.
The following format should be followed:
• Cover Page – Title, Name, Course Info
• Introduction
• Body of Paper (see sections described above)
• Summary and Conclusions
• APA citation style for citations and references.
Grading Rubric:
• Outline 10 points
• Rough Draft 30 points
• Final Draft: 30 points
• Presentation: 30 points
• TOTAL: 100 points
Rubrics specific to each section of the final project are available in the submission dropboxes.
When submitting, please upload your final project to the Final Project Submission Dropbox. Save it as a PDF according to the following naming convention: userID_FinalProject (e.g., ceb7_FinalProject).
Attention:
Please remember that by submitting your paper in Canvas, you affirm that the work submitted is yours and yours alone and that it has been completed in compliance with University policies governing Academic Integrity, which, as a Penn State student, you are responsible for understanding and following. Your projects will be reviewed closely for unattributed material and may be uploaded to the plagiarism detection service Turnitin.com to verify their originality. Academic dishonesty and lazy citation practices are not tolerated, and should you submit a paper that violates the Academic Integrity policies of the College and the University, be advised that the strictest penalties allowable by the University will be sought by your instructor. Please ask for help if you are concerned about proper citation.
Questions
If you have questions:
• Office hours are by appointment on Fridays, 15:30-17:30, via Zoom. Please contact the course assistant by 11:59 pm the preceding Tuesday to set up an office-hour time, and provide the question(s) that you plan to ask when you request the appointment.
For Review
To begin this part of Lesson 6, review the Biomass Carbohydrate Tutorial from the previous lesson. It will be important to remember all of the terminology for carbohydrates.
So, at this point, we’ve talked a bit about what lignocellulosic biomass is composed of, what various carbohydrates are chemically, and how to pretreat various biomass sources. Now, we will discuss the use of enzymes in biomass conversion, particularly in cellulose conversion. I’ll first introduce you to cellulases, and then we'll look at a model of enzymatic hydrolysis of cellulose, and enzymes for hemicellulose and lignin.
For cellulases, we'll discuss what they are, provide a brief history, and look at glycosyl hydrolases before examining cellulases themselves.
The processing of cellulose in lignocellulosic biomass requires several steps (see Figure 6.3). We’ve discussed pretreatment, where cellulose, lignin, and hemicellulose are separated. Hemicellulose is broken down to xylose and other sugars, which can then be fermented to ethanol. Lignin is separated out and can be further processed or burned depending on the best economic outcome. The first step of processing is then on the cellulose.
Pretreatment helps to decrystallize cellulose. However, it must be further processed to break it down into glucose, as it is glucose (a sugar) that can be fermented to make ethanol, and the liquid product must be further processed to make a concentrated ethanol. So, we are focusing this lesson on enzymatic hydrolysis of starch and cellulose.
6.02: Biochemical Structural Aspects of Lignocellulosic Biomass
6.2a Starch
We briefly addressed what starch is in Lesson 5; now, we'll go into a little more depth. In plants, starch has two components: amylose and amylopectin. Amylose is a straight-chain sugar polymer. Normal corn has 25% amylose, high-amylose corn has 50-70% amylose, and waxy corn (maize) has less than 2%. The rest of the starch is composed of amylopectin, which is branched and most commonly makes up the major part of starch. Animals contain something similar to amylopectin, called glycogen; glycogen resides in the liver and muscles as granules.
You can visit howstuffworks.com to see a schematic of what amylopectin looks like in a granule (see 'How Play-Doh Works') and then strands of the compound. Figure 6.4 shows some micrographs of starch as it begins to interact with water. When cooking with starch, you can make a gel from the polysaccharide. (A) This part of the figure shows polysaccharides (lines) packed into larger structures called starch granules; upon adding water, the starch granules swell and polysaccharides begin to diffuse out of the granules; heating these hydrated starch granules helps polysaccharide molecules diffuse out of the granules and form a tangled network. (B) This is an electron micrograph of intact potato starch granules. (C) This is an electron micrograph of a cooked flaxseed gum network.
Now, let’s look at the starch components on a chemical structure basis. Amylose is a linear molecule with the α-1,4-glucosidic bond linkage. Upon viewing the molecule on a little larger scale, one can see it is helical. It becomes a colloidal dispersion in hot water. The average molecular weight of the molecule is 10,000-50,000 amu, and it averages 60-300 glucose units per molecule. Figure 6.5 depicts the chemical structure of amylose.
Amylopectin is branched, not linear, and is shown in Figure 6.6. It has α-1,4-glycosidic bonds and α-1,6-glycosidic bonds. The α-1,6-glycosidic branches occur for about 24-30 glucose units. It is insoluble compared to amylose. The average molecular weight is 300,000 amu, and it averages 1800 glucose units per molecule. Amylopectin is about 10 times the size of amylose.
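As a rough consistency check on the figures above, the average number of glucose units (the degree of polymerization) can be estimated by dividing the polymer's molecular weight by the mass of one anhydroglucose repeat unit, about 162 amu (180 amu for free glucose minus the 18 amu of water lost when each glycosidic bond forms). A minimal sketch:

```python
# Estimate the degree of polymerization (number of glucose units) from
# an average molecular weight. Each glucose residue in the polymer
# weighs ~162 amu (180 amu for free glucose minus 18 amu of water lost
# when each glycosidic bond forms).
ANHYDROGLUCOSE = 162.0  # amu per repeat unit (approximate)

def degree_of_polymerization(mw_amu):
    return mw_amu / ANHYDROGLUCOSE

# Amylose: 10,000-50,000 amu -> roughly 60-300 glucose units
print(round(degree_of_polymerization(10_000)))   # 62
print(round(degree_of_polymerization(50_000)))   # 309

# Amylopectin: ~300,000 amu -> roughly 1800 glucose units
print(round(degree_of_polymerization(300_000)))  # 1852
```

The results line up with the ranges quoted in the text, which is a good sign the molecular weights and unit counts are mutually consistent.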
6.2b Cellulose
Cellulose is the most abundant polysaccharide, and it is also the most abundant biomass on earth. The linkages are slightly different from starch, called β-1,4-glycosidic linkages (see Figure 6.7C), as the bond is in a slightly different configuration or shape. As shown in Figure 6.7A and 6.7B, this bond causes the strands of cellulose to be straighter (not helical). The hydrogen on one polymer strand can interact with the OH on another strand; this interaction is known as a hydrogen bond (H-bond), although it isn't an actual bond, just a strong interaction. This is what contributes to the crystallinity of the molecule. [Definition: the H-bond is not a bond like the C-H or C-O bonds are, i.e., they are not covalent bonds. However, there can be a strong interaction between hydrogen and oxygen, nitrogen or other electronegative atoms. It is one of the reasons that water has a higher boiling point than expected.] The strands of cellulose form long fibers that are part of the plant structure (see Lesson 5 Figure 5.14). The average molecular weight is between 50,000 and 500,000, and the average number of glucose units is 300-2500.
Table 6.1 shows a comparison of the two types of starch and cellulose. Cellulose forms elongated fibers that stretch out; it doesn't curl the way amylose does (remember the helical structure) and doesn't branch and curl the way amylopectin does. Because of its chemical structure, it forms a large network where H-bonds stabilize each strand as well as the cluster of strands that make up the fibers. The H-bonds give cellulose fibers several important structural features: the material is incredibly tough, and it is water-impermeable because the H-bonds exclude water.
Table 6.1: Comparison of features of the two components of starch and cellulose.
Type of polysaccharide Starch (Amylopectin) Starch (Amylose) Cellulose
Types of linkages α-1,4- and α-1,6-glycosidic α-1,4-glycosidic β-1,4-glycosidic
Function Stores energy Stores energy Supports and strengthens
Molecular weight (amu) 300,000 10,000-50,000 50,000-500,000
Number of glucose units 1800 60-300 300-2500
6.2c: Hemicellulose
As seen in previous lessons, lignocellulosic biomass contains another component, hemicellulose. Rather than being a typical polymer where units repeat over and over again, hemicellulose is a heteropolymer. It has a random, amorphous structure with little strength. It contains multiple kinds of sugar units rather than the single glucose unit we've seen for starch and cellulose, and the average number of sugar units per molecule is 500-3000. The monomer units include xylose, mannose, galactose, rhamnose, and arabinose (see Figure 6.8 for the chemical structure of each). The various polymers of hemicellulose include xylan, glucuronoxylan, arabinoxylan, glucomannan, and xyloglucan (see examples in Figures 6.9a and 6.9b).
6.2d: Lignin
So, we’ve identified the chemical structures of starch, cellulose, and hemicellulose (see Figure 6.3 to see how cellulose and hemicellulose are related.) Now we’re going to take a look at what lignin is, chemically.
Vascular land plants make lignin in order to solve problems posed by a terrestrial lifestyle. Lignin helps to keep water from permeating the cell wall, which aids water conduction in the plant. Lignin adds support – it may help to "weld" cells together and provides stiffness for resistance against forces that cause bending, such as wind. Lignin also acts to deter pathogens, which is part of why it is recalcitrant to degradation; it protects against fungal and bacterial pathogens (there is a discussion of recalcitrance in Lesson 5). Lignin is comprised of crosslinked, branched aromatic monomers: p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol; their structures are shown in Figure 6.10a-c. Figures 6.10d and 6.10e show how these building blocks fit into the lignin structure. p-Coumaryl alcohol is a minor component of grass and forage type lignins. Coniferyl alcohol is the predominant lignin monomer found in softwoods (hence the name). Both coniferyl and sinapyl alcohols are the building blocks of hardwood lignin. Table 6.2 shows the differing amounts of lignin building blocks in the three types of lignocellulosic biomass sources.
Table 6.2: The amount of different building blocks in grasses, softwood, and hardwood.
Lignin Sources Grasses Softwood Hardwood
p-coumaryl alcohol 10-25% 0.5-3.5% Trace
coniferyl alcohol 25-50% 90-95% 25-50%
sinapyl alcohol 25-50% 0-1% 50-75%
There are several different materials that can be made from lignin, but most are not produced on a commercial scale. Table 6.3 shows the classes of compounds that can be made from lignin and the types of products that come from each class. If an economic method can be developed for lignin depolymerization and chemical production, it would benefit the biorefining of lignocellulosic biomass.
Table 6.3: Low molecular chemicals and the products made from these types of chemicals.
Class of Compound Product Examples
Simple aromatics Biphenyls, Benzene, Xylenes
Hydroxylated aromatics Phenol, Catechol, Propylphenol, etc.
Aromatic Aldehydes Vanillin, Syringaldehyde
Aromatic Acids and Diacids Vanillic Acid
Aliphatic Acids Polyesters
Alkanes Cyclohexane
There are also high molecular weight compounds. These include carbon fibers, thermoplastic polymers, fillers for polymers, polyelectrolytes, and resins, which can be made into wood adhesives and wood preservatives.
Starches are broken down by enzymes known as amylases; our saliva contains amylase, which is how starches begin to be broken down in our bodies. Amylases have also been isolated and used to depolymerize starch for making alcohol, e.g., with yeast for bread making and for alcohol manufacturing. Chemically, amylase breaks the carbon-oxygen linkages in the chains (the α-1,4-glucosidic and α-1,6-glucosidic bonds) in a reaction known as hydrolysis. Once glucose is formed, fermentation can take place to break the glucose down into alcohol and CO2. Amylases were isolated, and the hydrolysis of starch to glucose began to be understood, in the 1800s.
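The two steps described above have simple theoretical mass yields that follow from stoichiometry. The sketch below uses standard molecular weights (textbook constants, not values taken from this course's figures) to compute them:

```python
# Theoretical mass yields for starch/cellulose hydrolysis and glucose
# fermentation, from basic stoichiometry.

GLUCOSE = 180.16            # g/mol, C6H12O6
ETHANOL = 46.07             # g/mol, C2H5OH
RESIDUE = GLUCOSE - 18.02   # anhydroglucose repeat unit, ~162 g/mol

# Hydrolysis adds one water molecule per glycosidic bond broken,
# so the product mass exceeds the polymer mass:
hydrolysis_gain = GLUCOSE / RESIDUE
print(f"hydrolysis: {hydrolysis_gain:.3f} g glucose per g polymer")  # ~1.111

# Fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2
ethanol_yield = 2 * ETHANOL / GLUCOSE
print(f"fermentation: {ethanol_yield:.3f} g ethanol per g glucose")  # ~0.511
```

The ~0.511 g ethanol per g glucose figure is the theoretical maximum; real fermentations recover less because some sugar goes to cell mass and byproducts.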
However, recall that cellulose linkages are β-1,4-glucosidic bonds. These bonds are much more difficult to break, and due to cellulose crystallinity, breaking cellulose down into glucose is even more difficult. It was only during WWII that enzymatic hydrolysis of cellulose was discovered. Instead of enzymes called amylases, the enzymes that degrade cellulose are called cellulases.
Cellulases are not a single enzyme. There are two main approaches to biological cellulose depolymerization: complexed and non-complexed systems. Each cellulase enzyme is composed of three main parts, and there are multiple synergies between enzymes.
6.03: Enzymatic Biochemistry and Processing
6.3a The Reaction of Cellulose: Cellulolysis
Cellulolysis is essentially the hydrolysis of cellulose. If you recall from Lesson 5 (see Figure 5.16), in the low and high pH conditions, hydrolysis is a reaction that takes place with water, with the acid or base providing H+ or OH- to facilitate the reaction. Here, hydrolysis breaks the β-1,4-glucosidic bonds, with water as a reactant and enzymes catalyzing the reaction. Before discussing the reaction in more detail, let's look at the types of intermediate units that are made from cellulose. The main monomer that composes cellulose is glucose (Lesson 5, Figure 5.9a). When two glucose molecules are connected, the disaccharide is known as cellobiose (the β-1,4-linked counterpart of maltose; Lesson 5, Figure 5.10b). When three glucose units are connected, it is called cellotriose, and four glucose units connected together are called cellotetraose. Each of these is shown below in Figure 6.11.
We’ve seen the types of intermediates, so now let’s see the reaction types that are catalyzed by cellulose enzymes. The steps are shown in Figure 6.12.
1. Breaking of the noncovalent interactions present in the structure of the cellulose, breaking down the crystallinity in the cellulose to an amorphous strand. These types of enzymes are called endocellulases.
2. The next step is hydrolysis of the chain ends to break the polymer into smaller sugars. These types of enzymes are called exocellulases, and the products are typically cellobiose and cellotetraose.
3. Finally, the disaccharides and tetrasaccharides (cellobiose and cellotetraose) are hydrolyzed to glucose by enzymes known as β-glucosidases.
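The three steps can be caricatured as a toy pipeline operating only on chain lengths. This is a deliberately simplified model of my own (real cellulase kinetics involve adsorption, crystallinity, and product inhibition), but it mirrors the division of labor among endocellulases, exocellulases, and β-glucosidases:

```python
import random

# Toy model of cellulolysis: each chain is represented only by its
# length in glucose units. No kinetics or crystallinity -- this just
# mirrors the three-step division of labor described above.

def endocellulase(chains, rng):
    """Step 1 (simplified): cut each long chain at a random internal bond."""
    out = []
    for n in chains:
        if n > 4:
            cut = rng.randint(1, n - 1)
            out += [cut, n - cut]
        else:
            out.append(n)
    return out

def exocellulase(chains):
    """Step 2: cleave cellobiose (2 units) off chain ends."""
    out = []
    for n in chains:
        while n > 4:
            out.append(2)   # release a cellobiose
            n -= 2
        out.append(n)       # short oligomer remains
    return out

def beta_glucosidase(chains):
    """Step 3: hydrolyze the remaining short oligomers to glucose."""
    return [1] * sum(chains)

rng = random.Random(0)
chains = [1000]                     # one cellulose strand, 1000 units
chains = endocellulase(chains, rng)
chains = exocellulase(chains)
glucose = beta_glucosidase(chains)
print(len(glucose))  # 1000 -- every glucose unit is recovered
```

Note how mass (the total number of glucose units) is conserved through every step, which is a useful sanity check for any depolymerization model.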
Okay, now we have an idea of how the reaction proceeds. However, there are two types of cellulase systems: noncomplexed and complexed. A noncomplexed cellulase system carries out aerobic degradation of cellulose (in oxygen); it is a mixture of extracellular cooperative enzymes. A complexed cellulase system carries out anaerobic degradation (without oxygen) using a "cellulosome": a multiprotein complex anchored on the surface of the bacterium by non-catalytic proteins, which functions like the individual noncomplexed cellulases but as one unit. Figure 6.13 shows how the two different systems act. Before going into more detail, however, we are going to discuss what the enzymes themselves are composed of. The reading by Lynd provides some explanation of how the noncomplexed versus the complexed systems work.
6.3b Composition of Enzymes
The first place to start is to describe the structure of a cellulase using typical terms in biochemistry. A modular cellobiohydrolase (CBH) has three common features: 1) a binder region of the protein, 2) a catalytic region of the protein, and 3) a linker region that connects the binder and catalytic regions. Figure 6.14a shows a general diagram of the common features of a cellulase. The CBH is acting on the terminal end of a crystalline cellulosic substrate, where the cellulose binding domain (CBD) is embedded in the cellulose chain, and the strand of cellulose is being digested by the enzyme's catalytic domain to produce cellobiose. This type of enzyme is typical of exocellulases. Figure 6.14b shows a more realistic model, where the linker is attached to the surface of the cellulose.
One of the main differences between glycosyl hydrolases (a type of cellulase) and other enzymes is how the catalytic domain functions. There are three topologies: 1) pocket, 2) cleft, and 3) tunnel. Pocket or crater topology (Figure 6.15A) is optimal for recognition of a saccharide's non-reducing end and is encountered in monosaccharidases. Exopolysaccharidases are adapted to substrates having a large number of available chain ends, such as starch; on the other hand, these enzymes are not very efficient for fibrous substrates such as cellulose, which has almost no free chain ends. Cleft or groove catalytic domains are "open" structures (Figure 6.15B), which allow random binding of several sugar units in polymeric substrates; this topology is commonly found in endo-acting polysaccharidases such as endocellulases. Tunnel topology (Figure 6.15C) arises from the previous one when the protein evolves long loops that cover part of the cleft. Found so far only in CBHs, the resulting tunnel enables a polysaccharide chain to be threaded through it. The red portion on each catalytic domain represents the carbohydrate being processed, although it is difficult to see in this picture.
The other main feature of these enzymes is the cellulose binding domain or module (CBD or CBM). Different CBDs target different sites on the surface of the cellulose; this part of the enzyme will recognize specific sites, help to bring the catalytic domain close to the cellulose, and pull the strand of cellulose molecule out of the sheet so the glycosidic bond is accessible.
So now, let’s go back to noncomplexed versus complexed cellulase systems. Figure 6.16 is another comparison of noncomplexed versus complexed cellulase systems, but this time, it focuses on the enzymes. Notice in Figure 6.16A, the little PacMan look-alike figures for enzymes. The enzymes are separate, but work in concert to break down the cellulose strands into cellobiose and glucose. Recall that this process is aerobic (in oxygen).
Now look at Figure 6.16B and the complexed system. The enzymes are attached to subunits that are attached to the bacterium's cell wall. The products are the same, but recall that this system is anaerobic (without oxygen), and these enzymes all work together to produce cellobiose and glucose.
So, what are those subunits that are essentially the connectors in the enzyme complex? Figure 6.17 shows a schematic of the types. The cellulosome is designed for the efficient degradation of cellulose. A scaffoldin subunit contains at least one cohesin module connected to other types of functional modules. The CBM shown is a cellulose-binding module that helps the unit anchor to the cellulose. The cohesin modules are major building blocks within the scaffoldin; cohesins are responsible for organizing the cellulolytic subunits into the multi-enzyme complex. Dockerin modules anchor catalytic enzymes to the scaffoldin; the catalytic subunits contain dockerin modules, which serve to incorporate catalytic modules into the cellulosome complex. This is the architecture of the C. thermocellum cellulosome system (Alber et al., CAZypedia, 2010). Within each cellulosome, there can be many different types of these building blocks. Figure 6.18 shows a block diagram of two different structures, T. neapolitana LamA and Caldicellulosiruptor strain Rt8B.4 ManA. Due to the level of this class, we will not be going into any greater depth about these enzymes.
6.3c: Hemicellulases and Lignin-degrading Enzymes
Hemicellulases work on the hemicellulose polymer backbone and are similar to endoglucanases. Because of the side chains, "accessory enzymes" are needed for side-chain activities. An example of hemicellulase activity on arabinoxylan, with the places where bonds are broken by enzymes shown in blue, appears in Figure 6.19. Figure 6.20 shows another example of how a complex mixture of enzymes breaks down hemicellulose; the example depicted is cross-linked glucuronoarabinoxylan.
The complex composition and structure of hemicellulose require multiple enzymes to break down the polymer into sugar monomers—primarily xylose, but other pentose and hexose sugars also are present in hemicelluloses. A variety of debranching enzymes (red) act on diverse side chains hanging off the xylan backbone (blue). These debranching enzymes include arabinofuranosidase, feruloyl esterase, acetylxylan esterase, and alpha-glucuronidase [Table 6.4 shows enzyme families for degrading the hemicellulose]...As the side chains are released, the xylan backbone is exposed and made more accessible to cleavage by xylanase. Beta-xylosidase cleaves xylobiose into two xylose monomers; this enzyme also can release xylose from the end of the xylan backbone or a xylo-oligosaccharide. (U.S. DOE, 2006)
Table 6.4: Enzyme families for degrading hemicelluloses, i.e., glycoside hydrolase (GH) and carbohydrate esterase (CE) families.
Enzyme Enzyme Families
Endoxylanase GH5, 8, 10, 11, 43
Beta-xylosidase GH3, 39, 43, 52, 54
Alpha-L-arabinofuranosidase GH3, 43, 51, 54, 62
Alpha-glucuronidase GH4, 67
Alpha-galactosidase GH4, 36
Acetylxylan esterase CE1, 2, 3, 4, 5, 6, 7
Feruloyl esterase CE1
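Table 6.4 is naturally represented as a mapping from enzyme to families, which makes cross-cutting questions (e.g., which enzymes share a family?) easy to answer. This is purely an illustrative data structure; the names are my own, not part of any standard library:

```python
# Table 6.4 as a lookup: hemicellulose-degrading enzymes and their
# glycoside hydrolase (GH) / carbohydrate esterase (CE) families.

HEMICELLULASE_FAMILIES = {
    "endoxylanase":                ["GH5", "GH8", "GH10", "GH11", "GH43"],
    "beta-xylosidase":             ["GH3", "GH39", "GH43", "GH52", "GH54"],
    "alpha-L-arabinofuranosidase": ["GH3", "GH43", "GH51", "GH54", "GH62"],
    "alpha-glucuronidase":         ["GH4", "GH67"],
    "alpha-galactosidase":         ["GH4", "GH36"],
    "acetylxylan esterase":        ["CE1", "CE2", "CE3", "CE4",
                                    "CE5", "CE6", "CE7"],
    "feruloyl esterase":           ["CE1"],
}

def enzymes_in_family(family):
    """Which enzymes have members in a given GH/CE family?"""
    return sorted(e for e, fams in HEMICELLULASE_FAMILIES.items()
                  if family in fams)

print(enzymes_in_family("GH43"))
# ['alpha-L-arabinofuranosidase', 'beta-xylosidase', 'endoxylanase']
```

The inverted query above highlights how family GH43 spans three different backbone- and side-chain-acting activities, which is part of why hemicellulose degradation requires such a diverse enzyme mixture.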
Lignin-degrading enzymes are different from hemicellulases and cellulases. They are known as a group as oxidoreductases. Lignin degradation is an enzyme-mediated oxidation, involving the initial transfer of single electrons to the intact lignin (this would be a type of redox reaction or reduction-oxidation reaction). Electrons are transferred to other parts of the molecule in uncontrolled chain reactions, leading to the breakdown of the polymer. It is different from the carbohydrate hydrolysis because it is an oxidation reaction, and it requires oxidizing power (e.g., hydrogen peroxide, H2O2) to break the lignin down. In general, it is a significantly slower reaction than the hydrolysis of carbohydrates.
Examples of lignin-degrading enzymes include lignin peroxidase (previously known as ligninase), manganese peroxidase, and laccase, all of which contain metal ions involved in electron transfer. Lignin peroxidase is an iron-containing enzyme that accepts two electrons from hydrogen peroxide (H2O2) and then passes them as single electrons to the lignin molecule. Manganese peroxidase acts in a similar way but oxidizes manganese (using H2O2) as an intermediate in the transfer of electrons to lignin. Laccase is a copper-containing phenol oxidase that directly oxidizes the lignin molecule. There are also several hydrogen-peroxide-generating enzymes (e.g., glucose oxidase), which generate H2O2 from glucose. (The Microbial World website)
If you are interested in learning about mechanisms of these enzymes, then visit this website from the Department of Chemistry, University of Maine. There are several pages that discuss how each of the different types of enzymes works mechanistically.
Lesson 7 will discuss the process of ethanol production after the use of cellulases on cellulose.
6.04: Assignments
6.4 Assignments
To Read
Please read Lynd, L. R., P. J. Weimer, W. H. Van Zyl, and I. S. Pretorius. "Microbial Cellulose Utilization: Fundamentals and Biotechnology." Microbiology and Molecular Biology Reviews 66.3 (2002): 511-15. This can be accessed via the Library Resources. You will need to use the information from this selection to complete Homework #2.
Homework #2
Download and complete Homework #2. It contains questions that pertain to the Lesson 6 course material. When you are finished, upload your completed assignment to the Homework #2 Dropbox. Use the following naming convention for your assignment: your user ID_HW2 (i.e., ceb7_HW2).
(12 points)
6.05: Summary and Final Tasks
Summary
In Lesson 6.1, we went over the requirements for the final project. In a future lesson, you will be expected to choose your biomass and outline your project.
Lesson 6.2 provided an overview of lignocellulosic biomass structure in greater depth than the previous lesson did. The greater depth is needed in order to understand how the enzymes work. You are expected to understand what lignocellulosic biomass is and how the components can break apart (i.e., what the fragments are chemically).
Lesson 6.3 discussed the basic composition of enzymes, how cellulosic enzymes (cellulases) work, and how hemicellulases and lignin-degrading enzymes work. The homework provides the background you need to know for enzymes.
References
M. Bembenic and C.E.B. Clifford, “Subcritical water reactions of model compounds for a hardwood derived Organosolv lignin with nitrogen, hydrogen, carbon monoxide and carbon dioxide gases,” Energy Fuels, 27 (11), 6681-6694, 2013.
David Hodge, Wei Liao, Scott Pryor, Yebo Li, Enzymatic Conversion of Lignocellulosic Materials: BEEMS Module B2, sponsored by USDA Higher Education Challenger Program 2009-38411-19761.
Lee Lynd, P.J. Weimer, W.H. van Zyl, I.S. Pretorius, “Microbial cellulose utilization: Fundamentals and biotechnology,” Microbiology and Molecular Biology Reviews, 66 (3), 506-577, 2002.
Gideon Davies and Bernard Henrissat, “Structures and mechanisms of glycosyl hydrolases,” Structure, 3, 853-859, 1995.
Alber, O., Dassa, B., and Bayer, E., “Cellulosome” within the CAZpedia website, 2010, accessed June 5, 2014.
Summa, A., Gibbs, M.D., and Bergquist, P.L., “Identification of novel β-mannan- and β-glucan-binding modules: evidence for a superfamily of carbohydrate-binding modules,” Biochem, J., 356, 791-798, 2001.
U.S. DOE, Breaking the Biological Barriers to Cellulosic Ethanol: A Joint Research Agenda. DOE/SC-0095, U.S. Department of Energy Office of Science and Office of Energy Efficiency and Renewable Energy, 2006.
Reminder - Complete all of the Lesson 6 tasks!
You have reached the end of Lesson 6! Double-check the Road Map on the Lesson 6 Overview page to make sure you have completed the activity listed there before you begin Lesson 7.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an appointment for office hours. While you are there, feel free to post responses to your classmates if you are able to help.
7.1 Ethanol Production - General Information
Back in Lesson 2, I included a chemistry tutorial on some of the basic constituents of fuels. In this lesson, we will be discussing the production of ethanol (CH3-CH2-OH) and butanol (CH3-CH2-CH2-CH2-OH) from starch and sugar. Ethanol, or ethyl alcohol, is a chemical that is volatile, colorless, and flammable. It can be produced from petroleum via chemical transformation of ethylene, but it can also be produced by fermentation of glucose, using yeast or other microorganisms; current fuel ethanol plants make ethanol via fermentation.
The basic formula for making ethanol from sugar glucose is as follows:
C6H12O6 → 2C2H5OH + 2CO2
For fermentation, yeast is needed (other microorganisms can be used, but yeast is most common), a sugar such as glucose serves as the carbon source, and anaerobic conditions (without oxygen) must be maintained. Under aerobic conditions (with oxygen), the sugar is instead converted completely into CO2, with little ethanol produced. Other required nutrients include water, a nitrogen source, and micronutrients.
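The stoichiometry above fixes the theoretical mass split between ethanol and CO2. Here is a minimal sketch in Python (molar masses are standard rounded values):

```python
# Theoretical mass balance for C6H12O6 -> 2 C2H5OH + 2 CO2
GLUCOSE = 180.16   # g/mol, C6H12O6
ETHANOL = 46.07    # g/mol, C2H5OH
CO2 = 44.01        # g/mol

def fermentation_yields(glucose_g):
    """Theoretical ethanol and CO2 masses (g) from a given mass of glucose."""
    mol = glucose_g / GLUCOSE
    return 2 * mol * ETHANOL, 2 * mol * CO2

etoh, co2 = fermentation_yields(100.0)
# About 51 g ethanol and 49 g CO2 per 100 g glucose; the masses sum
# back to 100 g because no atoms are lost in the reaction.
print(f"100 g glucose -> {etoh:.1f} g ethanol + {co2:.1f} g CO2")
```

Note that roughly half the sugar mass leaves as CO2, which is why real fermentation yields are reported against this theoretical maximum rather than against the sugar mass itself.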
Here in the US, the current common method of fuel ethanol production starts from starches, such as corn, wheat, and potatoes. The starch is hydrolyzed into glucose before the rest of the process proceeds. In Brazil, sucrose (the sugar in sugarcane) is the most common feedstock, and in Europe, the most common feed is sugar beets. Cellulose (from wood, grasses, and crop residues) is used in methods still under development; it is considered developing because converting cellulose into glucose is more challenging than converting starches and sugars.
The International Energy Agency (IEA) predicts that ethanol will constitute two-thirds of the global growth in conventional biofuels between 2018 and 2023, with biodiesel and hydrotreated vegetable oil accounting for the remainder. Global ethanol production is estimated to increase by 14%, from about 120 bln L in 2017 to approximately 131 bln L by 2027 (Figure 7.1). Brazil will account for roughly half of this increase, mostly to meet its domestic demand (OECD/FAO (2018), "OECD-FAO Agricultural Outlook").
World ethanol production by country is shown in Figure 7.2. The US produces the most ethanol worldwide (~57%), primarily from corn. Brazil is the next largest producer at 27%, primarily from sugarcane. Other countries, including Australia, Colombia, India, Peru, Cuba, Ethiopia, Vietnam, and Zimbabwe, are also beginning to produce ethanol from sugarcane.
Figure 7.3a shows the growth of sugarcane in the world, in tropical or temperate regions. Sugar beet production in Europe is the other source of sugar for ethanol. It is grown in more northern regions than sugarcane, primarily in Europe and a small amount in the US. Figure 7.3b shows the growth of sugar beets in the world.
7.02: Sugarcane Ethanol Production
7.2 Sugarcane Ethanol Production
Production of ethanol from corn will be discussed in the next section; this section will focus on sugarcane ethanol production. So, what needs to be done to get the sugar from sugarcane?
The first step is sugarcane harvesting. Much of the harvesting is done with manual labor, particularly in many tropical regions. Some harvesting is done mechanically. The material is then quickly transported by truck to reduce losses.
The cane is then cut and milled with water. This produces a juice with 10-15% solids, from which the sucrose is extracted. The juice contains undesired organic compounds that can cause what is called sugar inversion (hydrolysis of sucrose into fructose and glucose), which is why a clarification step follows.
In the clarification step, the juice is heated to 115°C and treated with lime and sulfuric acid, which precipitates unwanted inorganics.
The next step for ethanol production is the fermentation step, where juice and molasses are mixed so that a 10-20% sucrose solution is obtained. The fermentation is exothermic; therefore, cooling is needed to keep the reaction under fermentation conditions. Yeast is added along with nutrients (nitrogen and trace elements) to keep yeast growing. Fermentation can take place in both batch and continuous reactors, though Brazil primarily uses continuous reactors.
Figure 7.4 shows a schematic of one process for ethanol production along with the option to produce refined sugar as well. Sugarcane contains the following: water (73-76%), soluble solids (10-16%), and dry fiber or bagasse (11-16%). It takes a series of physical and chemical processes that occur in 7 steps to make the two main products, ethanol and sugar.
So, why produce both sugar and ethanol? Both are commodity products, so the price and market of the product may dictate how much of each product to make. This is how Brazilian ethanol plants are configured. In order to have an economic process, all of the products, even the by-products, are utilized in some fashion.
As noted previously, one of the major by-products is the dry fiber of processing, also known as bagasse. Bagasse is also a by-product of sorghum stalk processing. Most commonly, bagasse is combusted to generate heat and power for processing. The advantage of burning the bagasse is lowering the need for external energy, which in turn also lowers the net carbon footprint and improves the net energy balance of the process. In corn processing, a co-product is made that can be used for animal feed, called distillers grains, but this material could also be burned to provide process heat and energy. Figure 7.5 shows a bagasse combustion facility. The main drawback to burning bagasse is its high water content; high water content reduces the energy output and is an issue for most biomass sources when compared to fossil fuels, which have a higher energy density and lower water content.
Bagasse (see Figure 7.6) can have other uses. The composition of bagasse is: 1) cellulose, 45-55%, 2) hemicellulose, 20-25%, 3) lignin, 18-24%, 4) minerals, 1-4%, and 5) waxes, < 1%. With the cellulose content, it can be used to produce paper and biodegradable paper products. It is typically carted on small trucks that look like they have “hair” growing out of them.
Another crop that has some similarities to sugarcane is sorghum. Sorghum is a species of grass, with one type that is raised for grain and many other types that are used as fodder plants (animal feed). The plants are cultivated in warmer climates and are native to tropical and subtropical regions. Sorghum bicolor is a world crop that is used for food (as grain and in sorghum syrup or molasses), as animal feed, the production of alcoholic beverages, and biofuels. Most varieties of sorghum are drought- and heat-tolerant, even in arid regions, and are used as a food staple for poor and rural communities. Figure 7.7 shows a picture of a sorghum field.
The US could use several alternative sugar sources to produce ethanol; it turns out corn is the least expensive and, therefore, the most profitable feed and method to produce ethanol. Table 7.1 shows a comparison of various feedstocks that could be used to make ethanol, comparing feedstock costs, production costs, and total costs. When you look at using sugar to make ethanol (from various sources), you can see processing costs are low, but feedstock prices are high. However, in Brazil, sugarcane feed costs are significantly lower than in other countries. Notice the data is from 2006.
Table 7.1: Summary of estimated ethanol production costs (\$/gal)a (Credit: USDA Rural Development)
Cost Item Feedstock Costsb Processing Costs Total Costs
US Corn wet milling 0.40 0.63 1.03
US Corn dry milling 0.53 0.52 1.05
US Sugarcane 1.48 0.92 2.40
US Sugar beets 1.58 0.77 2.35
US Molassesc 0.91 0.36 1.27
US Raw Sugarc 3.12 0.36 3.48
US Refined Sugarc 3.61 0.36 3.97
Brazil Sugarcaned 0.30 0.51 0.81
EU Sugar beetsd 0.97 1.92 2.89
a Excludes capital costs
b Feedstock costs for US corn wet and dry milling are net feedstock costs; feedstock for US sugarcane and sugar beets are gross feedstock costs
c Excludes transportation costs
d Average of published estimates
7.3 Ethanol Production from Corn
The following pages will describe the process of ethanol production from corn.
7.03: Ethanol Production from Corn
7.3a Composition of Corn and Yield of Ethanol from Corn
As established in the previous section, corn has the least expensive total cost for ethanol production. So what part of the corn is used for ethanol? Primarily the corn kernel is used for ethanol production. Figure 7.8 shows the general composition of corn. It is a picture of yellow dent corn, which is commonly used for ethanol production. The endosperm is mostly composed of starch, the corn’s energy storage, and protein for germination. It is the starch that is used for making fuel. The pericarp is the outer covering that protects the kernel and preserves the nutrients inside. The pericarp resists water and water vapor and protects against insects and microorganisms. The living organism in the kernel is the germ. It contains genetic information, enzymes, vitamins, and minerals, which help the kernels grow into a corn plant. About 25% of the germ is corn oil and is a valuable part of the kernel. The tip cap is where the kernel is attached to the cob, and water and nutrients flow through the tip cap. This part of the kernel is not covered by the pericarp.
Starch is a polymer made up of D-glucose units; the glucose content therefore directly determines ethanol yields. The components of yellow dent corn are the following. It is primarily composed of starch, at 62%. The corn kernel is also composed of protein and fiber (19%), water (15%), and oil (4%). It can also contain traces of other constituents, but these are small relative to the main components. If you'll recall from Lesson 6, starch is composed of two different polymeric molecules: amylose and amylopectin. If you break the starch into these two components, amylopectin is 50% of the yellow dent corn kernel (80% of the starch) and amylose is 12% of the kernel (20% of the starch).
One bushel of corn (56 lbs.) can provide several products. The one bushel can provide:
31.5 lbs. of starch
OR
33 lbs. of sweetener
OR
2.8 gal. of fuel ethanol
OR
22.4 lbs of PLA fiber, which is a starch-based polymer called polylactic acid
In addition, the corn will provide 13.5 lbs. of gluten feed (20% protein), 2.5 lbs. of gluten meal (60% protein), and 1.5 lbs. of corn oil. Based on this information, we can compare the actual yield to the theoretical yield and determine the percent yield achievable for ethanol conversion. This is shown below:
1 bushel of corn:
56 lbs/bu x 62% starch = 34.7 lbs of starch/bu
34.7 lbs starch x 1.11 lbs glucose/lb starch = 38.5 lbs glucose/bu
The reaction of glucose to ethanol:
C6H12O6 → 2C2H5OH + 2CO2
180 g/mol 2 × 46 g/mol
38.5 lbs glucose x 92 lbs EtOH/180 lbs glucose = 19.7 lbs EtOH/bu
19.7 lbs EtOH x 1 gal EtOH/6.6 lbs = 3.0 gal EtOH/bu theoretical
100 x 2.8/3.0 = 93% yield of ethanol, typically
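The worked example above can be reproduced in a short script; all constants are the ones given in the text (62% starch, 1.11 lb glucose per lb starch, 6.6 lb ethanol per gallon, 2.8 gal/bu actual yield):

```python
# Per-bushel ethanol yield calculation, using the text's constants.
BUSHEL_LB = 56.0                 # lb corn per bushel
STARCH_FRACTION = 0.62           # starch content of yellow dent corn
GLUCOSE_PER_STARCH = 1.11        # hydrolysis adds one water per glucose unit
ETOH_PER_GLUCOSE = 92.0 / 180.0  # 2 x 46 g ethanol per 180 g glucose
ETOH_LB_PER_GAL = 6.6            # density of ethanol
ACTUAL_GAL_PER_BU = 2.8          # typical plant yield

starch = BUSHEL_LB * STARCH_FRACTION            # ~34.7 lb starch/bu
glucose = starch * GLUCOSE_PER_STARCH           # ~38.5 lb glucose/bu
etoh_lb = glucose * ETOH_PER_GLUCOSE            # ~19.7 lb ethanol/bu
theoretical_gal = etoh_lb / ETOH_LB_PER_GAL     # ~3.0 gal/bu theoretical
pct_yield = 100 * ACTUAL_GAL_PER_BU / theoretical_gal
print(f"theoretical: {theoretical_gal:.2f} gal/bu, yield: {pct_yield:.1f}%")
```

Carrying the unrounded intermediate values gives a percent yield slightly under 94%; the 93% quoted above comes from rounding the theoretical value to 3.0 gal/bu first.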
As discussed in Lesson 5 for pretreatment of lignocellulosic biomass, releasing glucose also requires hydrolysis. As water ionizes into H+ and OH-, it will break apart a molecule such as maltose into two glucose molecules. The reaction does not happen quickly without either an enzyme (Lesson 6) or acid and heat (Lesson 5). Figure 7.9 shows the mass ratio of a glucose monomer to a glucose subunit in starch. When starch is broken down, a water molecule is added to form each glucose. This is where the value for lbs glucose/lb starch used in the calculation above is derived.
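The 1.11 lb glucose per lb starch factor follows directly from this hydrolysis chemistry: each anhydroglucose unit in starch (C6H10O5) gains one water molecule when it becomes glucose. A quick check:

```python
# Each anhydroglucose unit (C6H10O5, 162.14 g/mol) gains one water
# (18.02 g/mol) on hydrolysis to glucose (180.16 g/mol).
ANHYDROGLUCOSE = 162.14
WATER = 18.02
GLUCOSE = ANHYDROGLUCOSE + WATER      # 180.16 g/mol
factor = GLUCOSE / ANHYDROGLUCOSE     # mass gain on hydrolysis
print(f"{factor:.3f} lb glucose per lb starch")  # -> 1.111
```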
7.3b How Corn is Processed to Make Ethanol
The process of making corn into ethanol is a multistep process. The first step is milling the corn, which can be done by dry milling or wet milling. Figures 7.10a and 7.10b show the process steps for wet and dry milling, respectively. For wet milling, the corn kernels are broken down into starch, fiber, corn germ, and protein by steeping in a sulfurous acid solution for 2 days. The starch is separated and can be used to produce ethanol, corn syrup, or food-grade starch. As noted in Figure 7.10a, the wet milling process also produces additional products, including feed, corn oil, gluten meal, and gluten feed. Dry milling is a simpler process than wet milling, but it also produces fewer products. The main products of dry milling are ethanol, CO2, and dried distillers grains with solubles (DDGS). Let's go through each of the steps in the dry grind process. The five steps are: 1) grinding, 2) cooking and liquefaction, 3) saccharification, 4) fermentation, and 5) distillation.
Grinding
For dry grinding corn, a hammermill or roller mill is used. Figure 7.11 is a schematic of a hammermill with corn being fed through it. The hammers are attached to rods that turn on a rotor. As the rotor turns, the feed (corn in this case) is hammered against the wall. A screen at the bottom lets particles that are small enough leave the unit and keeps the larger particles in to be hammered further until all the material is in the correct size range. The grinding breaks the tough outer coating of the corn kernel, which increases the surface area of the starch. Once the corn is broken down, it is mixed/slurried with heated water to form a mash or slurry.
Cooking and Liquefaction
Once the corn slurry (mash) is made, it goes through cooking and liquefaction. The cooking stage is also called gelatinization. Water interacts with the starch granules in the corn when the temperature is >60°C and forms a viscous suspension. Have you ever cooked with cornstarch to make thick gravy? Figure 7.12 shows a picture of starch mixed with water being poured into a heated sauce as it cooks. It will thicken with heat.
The liquefaction step is actually partial hydrolysis that lowers the viscosity. It is essentially breaking up the longer starch chains into smaller chains. One way to measure this is to look at dextrose equivalents (DE), or a measure of the amount of reducing sugars present in a sugar product, relative to glucose, expressed as a percentage on a dry basis. Dextrose is also known as glucose, and dextrose equivalent is the number of bonds cleaved compared to the original number of bonds. The equation is:
$\text{Equation 1: } DE = 100 \times \dfrac{\text{number of bonds cleaved}}{\text{number of original bonds}}$
Pure glucose (dextrose): DE = 100
Maltose: DE = 50
Starch: DE = 0
Dextrins: DE = 1 through 13
Dextrins are a group of low molecular weight carbohydrates produced by hydrolysis of starch or glycogen. Dextrins are mixtures of polymers of D-glucose units linked by α (1,4) or α (1,6) glycosidic bonds. Dextrins are used in glues and can be a crispness enhancer for food processing.
Maltodextrin: DE = 3 through 20
Maltodextrin is added to beer.
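The reference values above can be reproduced with a simple approximation: for a linear malto-oligosaccharide, DE is roughly the number of reducing ends per glucosyl unit times 100. This is a sketch consistent with the values quoted here, not a laboratory definition (in practice DE is measured from reducing power):

```python
# Approximate DE for a linear chain of glucosyl units: one reducing
# end per molecule, so DE falls as the chain gets longer.
def dextrose_equivalent(glucosyl_units, reducing_ends=1):
    """Approximate dextrose equivalent of a linear malto-oligosaccharide."""
    return 100.0 * reducing_ends / glucosyl_units

print(dextrose_equivalent(1))     # glucose  -> 100.0
print(dextrose_equivalent(2))     # maltose  -> 50.0
print(dextrose_equivalent(1000))  # long starch chain -> ~0.1
```

As liquefaction cleaves bonds and creates new reducing ends, the average chain length falls and the DE of the mash rises.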
Recall that starch hydrolysis is where water reacts with the sugar to break the sugar down and form glucose. The water breaks into the H+ and OH- ions to interact with the starch as it breaks down.
In order to accomplish liquefaction, the reaction must take place under certain conditions. The pH of the mash is maintained in the range of 5.9-6.2; ammonia and sulfuric acid are added to the tank to maintain the pH. About one-third of the required α-amylase can be added to the mash before jet cooking (2-7 minutes at 105-120°C) to improve its flowability. The jet cooking also serves as a sterilization step to avoid bacterial contamination during the later fermentation step. At this stage, shorter dextrins are produced, but not yet glucose.
Three types of processes can be utilized for liquefaction. Figure 7.13 shows the three options. In Process 1, the α-amylase is added and the material is incubated at 85-95°C. In Process 2, the mash passes through the jet cooker at 105-120°C for 2-7 minutes, then flows to a flash tank at 90°C; α-amylase is added three hours later. The third option, Process 3, adds the α-amylase, then heats the mash in the jet cooker at 150°C, followed by flow to the flash tank at 90°C, where more α-amylase is added.
The α-amylase used for liquefaction acts on the internal α (1,4) glycosidic bonds to yield dextrins and maltose (a glucose dimer). One type of α-amylase exists in human saliva; a different α-amylase is produced by the pancreas. Figure 7.14a shows one type of α-amylase. The α-amylase works a little faster than β-amylase, which acts on the second α (1,4) glycosidic bond from the chain end so that maltose is formed (see Figure 7.14b). β-Amylase is part of the ripening process of fruit, increasing its sweetness as it ripens.
Saccharification
The next step in the process of making ethanol is saccharification, the further hydrolysis of dextrins to glucose monomers. A different enzyme is used, called glucoamylase (also known by the longer name amyloglucosidase). It cleaves both the α (1,4) and α (1,6) glycosidic bonds from dextrin ends to form glucose. The optimum conditions differ from the previous step: a pH of 4.5 and a temperature of 55-65°C. Figure 7.14c shows a schematic of glucoamylase, which is also called a γ-amylase. A wide variety of amylase enzymes are available, derived from bacteria and fungi. Table 7.2 shows different enzymes, their sources, and the action of each.
Table 7.2: Different enzymes used in starch depolymerization. (Credit: MF Chaplin and C. Bucke, Enzyme Technology, Cambridge University Press, 1990)
Enzyme Source Action
α-Amylase Bacillus amyloliquefaciens Only α-1,4-oligosaccharide links are cleaved to give α-dextrins and predominantly maltose (G2), G3, G6, and G7 oligosaccharides
B. licheniformis Only α-1,4-oligosaccharide links are cleaved to give α-dextrins and predominantly maltose, G3, G4, and G5 oligosaccharides
Aspergillus oryzae, A. niger Only α-1,4-oligosaccharide links are cleaved to give α-dextrins and predominantly maltose and G3 oligosaccharides
Saccharifying α-amylase B. subtilis (amylosacchariticus) Only α-1,4-oligosaccharide links are cleaved to give α-dextrins with maltose, G3, G4, and up to 50% (w/w) glucose
β-Amylase Malted barley Only α-1,4-links are cleaved, from non-reducing ends, to give limit dextrins and β-maltose
Glucoamylase A. niger α-1,4- and α-1,6-links are cleaved, from the non-reducing ends, to give β-glucose
Pullulanase B. acidopullulyticus Only α-1,6-links are cleaved to give straight-chain maltodextrins
Some newer enzymes (granular starch hydrolyzing enzymes, GSHE) allow skipping the liquefaction stage by hydrolyzing starch at low temperatures without cooking. Advantages include: 1) reduced heat/energy use, 2) one fewer unit operation (reducing capital and operating costs), 3) reduced emissions, and 4) higher DDGS. They work by "coring" into starch granules directly, without the water swelling/infusion step. Disadvantages include: 1) higher enzyme cost and 2) contamination risk.
Fermentation
The final chemical step in producing ethanol from the starch is fermentation. The chemical reaction of fermentation is where 1 mole of glucose yields 2 moles of ethanol and 2 moles of carbon dioxide. The reaction is shown in Equation 2 below:
C6H12O6 → 2C2H5OH + 2CO2
To cause fermentation to take place, yeast is added. A common yeast is Saccharomyces cerevisiae, a unicellular fungus. The reaction takes place at 30-32°C for 2-3 days in a batch process. Supplemental nitrogen is added as ammonium sulfate ((NH4)2SO4) or urea. A protease can be used to convert proteins into amino acids as an additional yeast nutrient. Virginiamycin and penicillin are often used to prevent bacterial contamination. The carbon dioxide produced also lowers the pH, which reduces the contamination risk. Close to 90-95% of the glucose is converted to ethanol.
It is possible to do saccharification and fermentation in one step. It is called Simultaneous Saccharification and Fermentation (SSF), and both glucoamylase and yeast are added together. It is done at a lower temperature than saccharification (32-35°C), which slows the hydrolysis into glucose. As glucose is formed, it is fermented, which reduces enzyme product inhibition. It lowers initial glucose concentrations, lowers contamination risk, lowers energy requirements, and produces higher yields of ethanol. Because SSF is done in one unit, it can improve capital costs and save residence time.
Distillation and Increase of Ethanol Concentration
The last phase of ethanol production is processing the ethanol to increase its concentration. Downstream from the fermenters, the ethanol concentration is 12-15% ethanol in water (which means you have 85-88% water in your solution!). Distillation was mentioned in an earlier lesson; crude oil must be distilled into various boiling fractions to separate it into useable products. Distillation separates components using heat in specially designed towers that keep the liquid flowing downward while the vapors generated flow upward. Water boils at 100°C, while ethanol boils at 78°C. However, because both molecules have OH functional groups that attract each other, ethanol and water are strongly bound together and form an azeotrope. That just means that you cannot completely separate ethanol from water by distillation – the ethanol fraction will contain about 5% water and 95% ethanol when you get to the end of the distillation process. Figure 7.15 shows a schematic of a distillation unit. You don't want water in gasoline as you drive, because it prevents efficient combustion. Do you want water in your ethanol if you use it as a fuel?
The answer is no, so you must use an additional method to remove the remaining water from the ethanol. The method is called dehydration. The unit used is called a molecular sieve, and the material in it is a zeolite. Under these conditions, the zeolite adsorbs the water, but the ethanol cannot enter the zeolite. The separation uses what is called a pressure-swing adsorption unit, designed to run in two modes. At high pressure, the ethanol is dehydrated in Unit 1, while at low pressure, a portion of the anhydrous ethanol is fed through Unit 2 to remove the water from it (Figure 7.16a). When the zeolite sieve in Unit 1 has adsorbed all the water it can, Unit 1 is switched to become the low-pressure regenerating bed, and Unit 2 becomes the high-pressure unit (Figure 7.16b). The residence time for the process is 3-10 minutes. The zeolite for this process is a highly ordered aluminosilicate with well-defined pore sizes, formed into beads or incorporated in a membrane. The zeolites attract both water and ethanol, but the pore sizes are too small to allow the ethanol to enter. As noted in Figure 7.17, the pore size of the zeolite membrane is 0.30 nm, while a water molecule is 0.28 nm and an ethanol molecule 0.44 nm. Depending on the type of unit, the membrane or beads can be regenerated using heat and vacuum, or by flowing pure ethanol through the unit as described above.
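The size-exclusion logic of Figure 7.17 can be sketched in a few lines; the molecular and pore diameters are the values quoted in the text:

```python
# Size-exclusion in a 0.30 nm zeolite pore: water (0.28 nm) fits in
# and is adsorbed, while ethanol (0.44 nm) is too large and passes on.
PORE_NM = 0.30
molecules = {"water": 0.28, "ethanol": 0.44}

for name, diameter in molecules.items():
    fate = ("fits through the pore and is adsorbed"
            if diameter < PORE_NM
            else "too large for the pore and passes through the bed")
    print(f"{name} ({diameter} nm): {fate}")
```

This is why the sieve can remove the last 5% of water that distillation cannot: the separation is by molecular size, not by volatility.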
So once we have fermented the material to ethanol, it goes through a series of processes to obtain the products in the form that we want them. Figure 7.18a is a schematic of product recovery, and Figure 7.18b shows the definitions of some of the terminology.
To summarize, corn has 62% starch, 19% protein and fiber, 4% oil, and 15% water. On a dry basis (water is not counted as a product), 73% of the corn is starch and 27% is protein, fiber, and oil. For every bushel of corn, realistically you'll generate 2.8 gallons of ethanol, ~17 lbs of CO2, and ~17 lbs of DDGS. We'll look at the economics of this process and a couple of other processes in a later lesson.
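The ~17 lbs of CO2 per bushel can be cross-checked from the fermentation stoichiometry: every 92 lb of ethanol formed (2 × 46) is accompanied by 88 lb of CO2 (2 × 44):

```python
# Cross-check of the CO2 co-product figure from the ethanol yield.
ETOH_LB_PER_GAL = 6.6    # lb ethanol per gallon
GAL_PER_BUSHEL = 2.8     # realistic ethanol yield per bushel

etoh_lb = GAL_PER_BUSHEL * ETOH_LB_PER_GAL   # ~18.5 lb ethanol/bu
co2_lb = etoh_lb * 88.0 / 92.0               # 2 CO2 per 2 EtOH by moles
print(f"{co2_lb:.1f} lb CO2 per bushel")     # -> 17.7 lb CO2 per bushel
```

The result agrees with the ~17 lbs quoted above.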
So, at this point, you can see how to generate ethanol from corn. If you want to generate ethanol from cellulose in plants, you have the information from Lesson 6 to generate glucose from cellulose (it is a more involved process), but once you have glucose, you can use the same end steps in ethanol production from fermentation of glucose. In the next section, we’ll look at the production of another alcohol, butanol.
7.4 Butanol Production
Another alcohol that can be generated from starch or cellulose is butanol, a four-carbon alcohol. Two isomers are commonly considered as fuels: normal butanol (n-butanol) and isobutanol. Their structures, along with ethanol, are shown below:
Table 7.3
Name Atoms and Bonds Stick Representation
n-Butanol (4 C atoms)
Ethanol (2 C atoms)
Isobutanol (4 C atoms)
There are some advantages of butanol when compared to ethanol:
1. It has a higher energy content than ethanol.
2. It is less hydrophilic than ethanol (less attracted to water).
3. It is more compatible with oil and its infrastructure.
4. It has a lower vapor pressure and higher flash point than ethanol (evaporates less easily).
5. It is less corrosive.
6. N-butanol works very well with diesel fuel.
7. Both n-butanol and iso-butanol have good fuel properties.
Table 7.4 shows a comparison of the energy content of various fuels in Btu/gal. The higher the value, the more miles per gallon one can achieve; the Btu/gal value of butanol is close to that of gasoline and higher than that of ethanol.
Table 7.4: Energy content of various fuels.
Fuel Energy Content (Btu/gal)
Gasoline 114,800
Diesel fuel 140,000
Methanol 55,600
Ethanol 76,100
Butanol 110,000
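Using the table's values, the relative fuel economy implied for each fuel (taking gasoline as the baseline) can be computed directly:

```python
# Energy content per gallon from Table 7.4, normalized to gasoline.
energy_btu_per_gal = {
    "Gasoline": 114_800,
    "Diesel fuel": 140_000,
    "Methanol": 55_600,
    "Ethanol": 76_100,
    "Butanol": 110_000,
}
base = energy_btu_per_gal["Gasoline"]
for fuel, btu in energy_btu_per_gal.items():
    print(f"{fuel}: {btu / base:.0%} of gasoline's energy per gallon")
```

Butanol comes in at about 96% of gasoline's energy per gallon, versus about 66% for ethanol, which is the basis for advantage 1 in the list above.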
Butanol production is also a fermentation process – we’ll go over the differences in a little bit. Butanol production has a long history. It was known as the ABE process, or acetone-butanol-ethanol process, and was commercialized in 1918 using the bacterium Clostridium acetobutylicum 824. Acetone was needed to produce Cordite, a smokeless powder used in propellants that contained nitroglycerin, gunpowder, and a petroleum product to hold it together – the acetone was used to gelatinize the material. In the 1930s, the butanol in the product was used to make butyl paints and lacquers. It has also been reported that Japanese fighter planes used butanol as fuel during WWII. ABE fermentation was discontinued in the US during the early 1960s due to unfavorable economics (the products could be made less expensively from petroleum). South Africa used the process into the 1980s, but then discontinued it. There are reports that China had two commercial biobutanol plants in 2008, and currently, Brazil operates one biobutanol plant. Three bacterial strains are commonly used for butanol fermentation because they are among the highest producers of butanol: Clostridium acetobutylicum 824, Clostridium beijerinckii P260, and Clostridium beijerinckii BA101. Figures 7.19a and 7.19b show micrographs of two of the fermentation organisms used for butanol production.
As in the conversion of starch to ethanol, the plants must be processed in a similar way, so I won’t repeat the five steps we just covered – we simply use different micro-organisms, and the end processing may differ because of the different chemicals produced. Starch must be hydrolyzed in acid before the enzyme is used. And, as with using cellulose and hemicellulose as the starting material, the feed must first be pretreated to separate out the cellulose, then treated again to produce glucose for fermentation to butanol. Remember the glucose-to-ethanol reaction? In ABE fermentation, starch yields products in roughly a 3:6:1 ratio: 3 parts acetone (CH3-CO-CH3), 6 parts butanol (CH3-CH2-CH2-CH2OH), and 1 part ethanol (CH3-CH2-OH).
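To make the 3:6:1 split concrete, the sketch below estimates the mass of each solvent from a given sugar feed. It treats the 3:6:1 ratio as a mass ratio and assumes a representative overall ABE yield of 0.35 g solvent per g sugar (near the middle of the yield range reported later in Table 7.6); both are simplifications, not values from a specific fermentation.

```python
def abe_product_masses(sugar_g, abe_yield=0.35, ratio=(3, 6, 1)):
    """Split the total ABE solvent mass into acetone, butanol, and ethanol
    using the classic 3:6:1 ratio (treated here as a mass ratio)."""
    total_solvent = sugar_g * abe_yield          # g of mixed solvents
    parts = sum(ratio)
    acetone, butanol, ethanol = (total_solvent * r / parts for r in ratio)
    return acetone, butanol, ethanol

a, b, e = abe_product_masses(1000.0)             # 1 kg of glucose
print(f"acetone {a:.0f} g, butanol {b:.0f} g, ethanol {e:.0f} g")
# -> acetone 105 g, butanol 210 g, ethanol 35 g
```

Note that butanol, the desired fuel product, is only about 60% of the solvent mass, which is one reason downstream separation matters so much in this process.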
So, what feed materials are used for butanol production? Similar to what is used for ethanol production, which includes: 1) grains, including wheat straw, barley straw, and corn stover, 2) by-products from paper and sugar production, including waste paper, cotton woods, wood chips, corn fiber, and sugarcane bagasse, and 3) energy crops including switchgrass, reed canarygrass, and alfalfa. Table 7.5 shows the costs of various biomass sources.
Table 7.5: Prices of biomass sources for alcohol production.
Source Price ($/ton)
Wheat straw 24
Barley straw 26
Oat straw 32
Pea straw 44
Grass hay 50
Corn stover 50
Switchgrass 60
Corn 260 (varied from 73-260)
The price and availability of feeds determine what might be used to produce various biofuels. The feeds most available in the US are corn stover (2.4 x 108 ton/year) and wheat straw (4.9 x 107 ton/year). Other biomass substrates, such as barley straw and corn fiber, are available at ~4-5 x 106 ton/year. Yields of butanol from corn and corn products by fermentation are shown in Table 7.6.
Table 7.6: Yields of ABE, in total and as individual components, from corn and corn products during fermentation by solventogenic Clostridium species.
Fermentation Substrates
Fermentation Parameters Glucose Cornstarch Maltodextrins Soy Molasses Ag Waste Packing Peanuts
Acetone (g/L) 3-7 3-7 3-7 2-4 1-5 5-7
Butanol (g/L) 7-20 7-20 7-19 7-18 1-10 1-16
Ethanol (g/L) 0.3-1 0.3-1 0.5-1.7 0.3-0.6 0.2-1 0.3-1
Total ABE (g/L) 14-26 14-26 14-27 14-23 5-16 5-22
ABE yield g/g 0.33-0.42 0.33-0.44 0.33-0.50 0.33-0.39 0.18-0.39 0.34-0.38
The solventogenic Clostridium species can metabolize both the hexose and pentose sugars released by hydrolysis of the cellulose and hemicellulose in wood and agricultural wastes; this is an advantage over other cultures used to produce biofuels. If all the available residues were converted into acetone-butanol (AB), the result would be 22.1 x 109 gallons of AB. In 2009, 10.6 x 109 gallons of ethanol were produced, but that was equivalent to only 7.42 x 109 gallons of butanol on an equal energy basis.
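The energy-equivalence claim can be sanity-checked with the Btu/gal values from Table 7.4: 10.6 billion gallons of ethanol carry the same energy as roughly 7.3 billion gallons of butanol, close to the 7.42 billion quoted (which presumably used slightly different heating values). A quick sketch:

```python
ETHANOL_BTU_PER_GAL = 76_100   # from Table 7.4
BUTANOL_BTU_PER_GAL = 110_000  # from Table 7.4

def butanol_equivalent_gal(ethanol_gal):
    """Gallons of butanol carrying the same total energy as the
    given volume of ethanol."""
    return ethanol_gal * ETHANOL_BTU_PER_GAL / BUTANOL_BTU_PER_GAL

print(f"{butanol_equivalent_gal(10.6e9):.2e} gal")  # -> 7.33e+09 gal
```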
There are several challenges to producing AB in a traditional batch process: 1) the product (butanol) concentration is low (13-20 g/L), 2) sugar utilization is incomplete (<60 g/L consumed), and 3) the process streams are large. These issues stem from severe product inhibition. Other issues include: 1) low butanol yield from glucose (22-26%), 2) low butanol concentration in the fermentation broth (~1.5%), 3) a butanol concentration of only 1% already inhibits microbial cell growth, 4) butanol fermentation occurs in two phases, and 5) high feedstock cost.
One of the more important considerations of butanol production is limiting the microbial inhibitory compounds. These compounds include some compounds related to lignin degradation, including syringaldehyde, coumaric acid, ferulic acid, and hydroxymethylfurfural.
As an example of one particular process, wheat straw was processed using separate hydrolysis, fermentation, and recovery steps. The following conditions were used: 1) wheat straw milled to 1-2 mm particles, 2) dilute sulfuric acid (1% v/v) pretreatment at 160 °C for 20 min, 3) mixture cooled to 45 °C and hydrolyzed with cellulase, xylanase, and β-glucosidase enzymes for 72 h, followed by centrifugation and removal of sediments, 4) fermentation with C. beijerinckii P260 (the fermentation gases CO2 and H2 were released to the environment, but could be captured, separated, and used in other processes), and 5) butanol removal by distillation. For this particular process, the production of ABE was relatively high, with butanol and acetone being the major products. The reaction was done in a batch reactor, and no treatment was used to remove inhibitor chemicals. Table 7.7 shows the process with wheat straw, barley straw, corn stover, and switchgrass. Wheat straw did not need to be detoxified, but the others did. Detoxification can be done by adding lime (a weak base) or by using a resin column to separate out the inhibitory components.
So, what can be done to overcome butanol toxicity? What kind of downstream processing needs to be done to separate out the wanted components? The butanol level in the reactor has to be kept below a certain threshold in order to reduce toxicity to the culture and to utilize all of the sugar reactants.
First of all, these are the typical processing steps that must be utilized in some form for most refining units (the upstream processing includes pretreating the raw material, similar to what we discussed in Lesson 5): 1) sorting, 2) sieving, 3) comminution (size reduction by milling), 4) hydrolysis, and 5) sterilization. The next main stage is the bioreaction stage: metabolite biosynthesis and biotransformations. The final aspect of processing is downstream processing, and the methods used depend on the products made. To separate solids, filtration, precipitation, and centrifugation are used; flocculation can also be done. To separate liquids, several processes can be used: 1) diffusion, 2) evaporation, 3) distillation, and 4) solvent-liquid extraction.
For butanol processing, several processes have been developed to reduce the level of toxicity. These include: 1) simultaneous saccharification, fermentation, and recovery (SSFR), 2) gas stripping (using N2 and/or fermentation gases – CO2 and H2), 3) cell recycle, 4) pervaporation (a combined permeation/evaporation process using selective membranes), 5) vacuum fermentation, 6) liquid-liquid extraction, and 7) perstraction (a combination of solvent extraction and membrane permeation). The goal is to convert all the sugars to acetone and butanol but remove the products as they are produced to decrease toxicity. We’ll discuss more about liquid-liquid extraction (or solvent extraction) when we get to the lesson on biodiesel.
Table 7.7: AB production from detoxified agricultural residue hydrolysates.
Substrate Before detoxification After detoxification
Wheat straw
ABE (g/L) 25.0-28.2 No detox required
Productivity (g/L•h) 0.63-0.71 --
Barley straw
ABE (g/L) 7.1 26.6
Productivity (g/L•h) 0.10 0.39
Corn stover
ABE (g/L) 0.00 26.3
Productivity (g/L•h) 0.00 0.31
Switchgrass
ABE (g/L) 1.5 13.1
Productivity (g/L•h) <0.02 <0.03
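The productivity values in Table 7.7 are volumetric rates, so the final ABE titer divided by the productivity gives the implied batch fermentation time. For detoxified barley straw, for example, 26.6 g/L at 0.39 g/(L·h) implies roughly a 68-hour run. A minimal sketch (using values taken from Table 7.7):

```python
def fermentation_time_h(abe_g_per_l, productivity_g_per_l_h):
    """Implied batch time (h) from final ABE titer (g/L) and
    volumetric productivity (g/(L*h))."""
    return abe_g_per_l / productivity_g_per_l_h

# Barley straw after detoxification (Table 7.7): 26.6 g/L at 0.39 g/(L*h)
print(f"{fermentation_time_h(26.6, 0.39):.0f} h")  # -> 68 h
```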
7.05: Assignments
Reminder
Remember that your Final Project outline will be due October 25th. You can see the full Final Project Outline Assignment in the next lesson.
Discussion #1
Please read the following selections.
• Robison, D. (2012, March 19). Startup Converts Plastic To Oil, And Finds A Niche.
• Bourzac, K. (2009, July 9). Biofuel Plant Opens in Brazil.
Write a paragraph discussing how these articles relate to biomass production and sustainability.
After posting your response, please comment on at least one other person's response. Discussions will be reviewed, and grades will reflect critical thinking in your input and responses. Don't just take what you read at face value; think about what is written.
(5 points)
7.06: Summary and Final Tasks
Summary
This lesson continued from the previous lesson, but went into greater depth on the processing aspects of ethanol production. Starch and cellulose must first be converted into glucose before fermentation into ethanol and CO2. Sugar and starch feedstocks include sugarcane in Brazil, sugar beets in Europe, and corn in the US. Processing corn involves five steps: grinding, cooking and liquefaction, saccharification, fermentation, and distillation. Enzymes are needed in saccharification, and yeast is needed in fermentation. Converting cellulose to glucose requires some additional steps and enzymes to break the structure down, but once it reaches the glucose stage, all the processing is the same. Because the water in the ethanol must be removed for use as a fuel, the last steps include distillation and molecular sieve dehydration.
Butanol can be produced in a similar way, but acetone and ethanol are co-produced along with the butanol, and different micro-organisms are used. While the concentration of butanol obtained from feed materials is low, butanol has some advantages over ethanol: it mixes better with gasoline and has a higher energy content.
References
Pryor, Scott; Li, Yebo; Liao, Wei; Hodge, David; “Sugar-based and Starch-based Ethanol,” BEEMS Module B5, USDA Higher Education Challenger Program, 2009-38411-19761, 2009.
Bothast, R.J., Schilcher, M.A., Biotechnological processes for conversion of corn into ethanol, Appl. Microbiol. Biotechnol., 67, 19-25, 2005.
Reminder - Complete all of the tasks!
You have reached the end of this Lesson! Double-check the Road Map on the Lesson Overview page to make sure you have completed all of the activities listed there before you begin the next lesson.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an appointment for office hour. While you are there, feel free to post responses to your classmates if you are able to help.
8.1 Review of Refinery Processing and Chemical Structures for Jet Fuel and Diesel Fuel
Recall from Lesson 2 the general schematic of a refinery, shown here in Figure 8.1. Jet fuel falls mostly in the middle distillate (kerosene) range, overlapping the heavy end of the naphtha fraction; diesel fuel is heavier (higher molecular weight, longer-chain hydrocarbons). These fuels do not require as much processing because they can be obtained primarily by distillation of crude oil, but sulfur-, oxygen-, and nitrogen-containing functional groups and high-molecular-weight waxes must be removed. The fuels are hydrotreated (hydrogen is added; sulfur, oxygen, and nitrogen are removed; and aromatics are converted into cycloalkanes), and waxes are separated out.
The primary structures we want for jet fuel and diesel fuel are:
Alkane - the carbon atoms are lined up in a chain. In the stick representation, each corner represents a CH2 group, and each end represents a CH3 group.
Name Atoms and Bonds Stick Representation
Heptane (7 C atoms)
Cycloalkanes - again, still an alkane, but forms a ring compound.
Name Atoms and Bonds Stick Representation
Cyclohexane (6 C atoms)
Table 8.1 also shows a list of different chemicals and the properties of each. This table is mainly focused on those chemicals that would be in jet and diesel fuels.
Table 8.1: List of common hydrocarbons and properties
Name Number of C Atoms Molecular Formula bp (°C) at 1 atm mp (°C) Density (g/mL) at 20 °C
Decane 10 C10H22 174.1 -30 0.760
Tetradecane 14 C14H30 253.5 6 0.763
Hexadecane 16 C16H34 287 18 0.770
Heptadecane 17 C17H36 303 22 0.778
Eicosane 20 C20H42 343 36.8 0.789
Cyclohexane 6 C6H12 81 6.5 0.779
Cyclopentane 5 C5H10 49 -94 0.751
Benzene 6 C6H6 80.1 5.5 0.877
Naphthalene 10 C10H8 218 80 1.140
Tetrahydronaphthalene(tetralin) 10 C10H12 207 -35.8 0.970
Decahydronaphthalene(decalin) 10 C10H18 187,196 -30.4, -42.9 0.896
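A useful exercise with Table 8.1 is to ask which compounds boil in the kerosene/jet range. The sketch below filters the table's boiling points against a 170-280 °C window (the bio-crude boiling range quoted later in this lesson; the exact limits are an assumption, since jet fuel cut points vary by specification):

```python
# (name, carbon count, boiling point in °C at 1 atm) from Table 8.1
COMPOUNDS = [
    ("decane", 10, 174.1), ("tetradecane", 14, 253.5),
    ("hexadecane", 16, 287.0), ("heptadecane", 17, 303.0),
    ("eicosane", 20, 343.0), ("cyclohexane", 6, 81.0),
    ("cyclopentane", 5, 49.0), ("benzene", 6, 80.1),
    ("naphthalene", 10, 218.0), ("tetralin", 10, 207.0),
    ("decalin", 10, 187.0),
]

def in_jet_range(bp_c, lo=170.0, hi=280.0):
    """Rough kerosene/jet boiling window in °C (limits are assumed)."""
    return lo <= bp_c <= hi

jet_cut = [name for name, _, bp in COMPOUNDS if in_jet_range(bp)]
print(jet_cut)  # -> ['decane', 'tetradecane', 'naphthalene', 'tetralin', 'decalin']
```

Note how the heavier alkanes (hexadecane and up) boil above the window; those are diesel-range molecules, consistent with diesel being the heavier fuel.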
8.02: Direct Liquefaction of Biomass
8.2 Direct Liquefaction of Biomass
There are differences for each of the thermal processes, as described in Lesson 5. Here we focus on direct liquefaction. Direct liquefaction (particularly hydrothermal processing) occurs in a non-oxidative atmosphere, where the biomass is fed into a unit as an aqueous slurry at lower temperatures, with bio-crude in the liquid form being the product. The primary focus of these particular processes is to produce a liquid product that is a hydrocarbon with an atomic H:C ratio of ~2, and a boiling range of 170-280 °C.
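The atomic H:C target of ~2 mentioned above is exactly what long-chain alkanes provide. For a normal alkane CnH2n+2 the ratio is (2n+2)/n, which approaches 2 as the chain grows; a short illustrative sketch:

```python
def hc_ratio_alkane(n_carbons):
    """Atomic H:C ratio of a normal alkane, CnH(2n+2)."""
    return (2 * n_carbons + 2) / n_carbons

for n in (7, 12, 16):
    print(f"C{n}: H/C = {hc_ratio_alkane(n):.2f}")
```

Aromatics, by contrast, sit well below 2 (benzene's H:C is 1.0), which is why highly aromatic feeds like coal, and oxygen-rich feeds like raw biomass, need hydrogen added during liquefaction.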
Many of the processes developed are based on coal-to-liquids processing. The main purposes of converting coal or biomass into a liquid are to remove some of the less desirable components (i.e., sulfur, oxygen, nitrogen, minerals) and to make a higher-energy-density material that will flow.
One of the primary routes to convert coal directly into liquids is a combination of thermal decomposition and hydrogenation under pressure. Several single- and two-stage processes have been developed but never commercialized in the US; however, China opened a commercial direct liquefaction plant, partially based on US designs, in 2008. Figure 8.3a shows the general schematic of the plant, and Figure 8.3b the products made. Design considerations include: 1) temperatures of ~400-450 °C, 2) hydrogenation catalysts, 3) hydrocarbon solvents that are similar to fuels, 4) the naturally occurring aromatics in coal, and 5) the sulfur, nitrogen, and minerals that must be removed when refining the liquid. Biomass can be processed in a similar manner, but biomass has significantly more oxygen, fewer aromatic compounds, and decomposes differently than coal. Other processes have been developed for biomass that appear to do a better job of processing cellulose. One is hydrothermal processing in pressurized water using an acid catalyst such as LaCl3 at 250 °C - we won't go into more detail here, but it differs from the direct liquefaction discussed in the next paragraph.
So, what are the differences with direct liquefaction of biomass? On the surface, it looks much the same as coal liquefaction: a thermochemical conversion of organic material into liquid bio-crude and co-products. Depending on the process, it is usually conducted at moderate temperatures (300-400 °C, lower than coal liquefaction) and pressures (10-20 MPa, similar to or a little higher than coal-to-liquids, which uses primarily hydrogen), with added hydrogen or CO as a reducing agent. Unlike coal, biomass is “wet”, or at least wetter than coal, and can be processed as an aqueous slurry. When processed as an aqueous slurry, the process is referred to in the literature as hydrothermal processing and can run from subcritical to supercritical conditions for water. Figure 8.4 shows the conditions for supercritical water; under these conditions water behaves more like an acid/base system, so it can also act as a catalyst, and organic material is highly soluble in it. Processing mainly occurs along the liquid/vapor line. The basic reaction mechanisms can be described as:
1. depolymerization of biomass;
2. decomposition of biomass monomers by cleavage, dehydration, decarboxylation, and deamination;
3. recombination of reactive fragments.
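Whether a given set of hydrothermal conditions is sub- or supercritical follows directly from water's critical point (about 374 °C and 22.1 MPa). A crude classifier, using the standard critical constants (the example conditions are illustrative):

```python
T_CRIT_C = 373.95    # critical temperature of water, °C
P_CRIT_MPA = 22.064  # critical pressure of water, MPa

def water_regime(t_c, p_mpa):
    """Crude sub/supercritical classification for hydrothermal processing."""
    if t_c >= T_CRIT_C and p_mpa >= P_CRIT_MPA:
        return "supercritical"
    return "subcritical"

print(water_regime(350, 20))  # typical hydrothermal liquefaction conditions
print(water_regime(400, 25))  # supercritical water conditions
```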
Different types of biomass react differently depending on the source. Carbohydrates such as cellulose, hemicellulose, and starch decompose in hydrothermal water. The typical product formed under these conditions is glucose, which can then be fermented to make alcohol or degraded further in water to glycolaldehyde, glyceraldehyde, and dihydroxyacetone. The products depend on the conditions: at ~180 °C the products are sugar monomers, but at higher temperatures (360-420 °C) the aldehyde and ketone compounds are formed.
Lignin and fatty acids also decompose in hydrothermal water, but the products are very different because the substrates are different. For lignin, the products are similar to the building blocks of lignin shown in Figure 5.20a of Lesson 5 (p-coumaryl, coniferyl, and sinapyl alcohols), although the functional groups vary with the hydrothermal conditions. Bembenic and Clifford used hydrothermal water at 365 °C and ~13 MPa to form methoxy phenols, using different gases (hydrogen, carbon monoxide, carbon dioxide, and nitrogen) to change the product slate. For lipids or triglycerides (fats and oils) reacted in hydrothermal water at 330-340 °C and 13.1 MPa, the main products are free fatty acids (R–COOH) and glycerol (C3H8O3). The free fatty acids can then be converted to straight-chain hydrocarbons usable as diesel or jet fuel, although the temperature usually needs to be somewhat higher (~400 °C) for this to take place. Figure 8.5 shows the schematic of a hydrothermal water process to convert algae into liquid fuels, making use of heat from an integrated heat and power system. Flue gas from a power generation facility is used to grow algae. The algae are then harvested and concentrated in water, reacted in a hydrothermal unit, and catalytically hydrogenated to make straight-chain hydrocarbon liquid fuels.
Many types of catalysts can be used, although it depends on the process stage in which catalysts are used and what feed material is used. In hydrothermal processing, the more common catalysts used are acid and base catalysts. Particle size for biomass needs to be fine, with a size of < 0.5 mm. The introduction of the feed into the reactor is also challenging, as it is fed into a high-pressure reactor. Some advantages of using this process for biomass: 1) it is possible to process feeds with high water content, as much as 90%, 2) it is possible to process many different types of waste materials, including MSW, food processing waste, and animal manure, and 3) the process serves the dual roles of waste treatment and renewable energy production.
Process parameters include solids content, temperature, pressure, residence time, and use of catalysts. Often simultaneous reactions are taking place, which makes the overall understanding of the reactions complicated. The types of reactions taking place include solubilization, depolymerization, decarboxylation, hydrogenation, condensation, and hydrogenolysis.
For one particular process, hydrothermal liquefaction requires the use of catalysts. One typical catalyst used is sodium carbonate combined with water and CO to produce sodium formate:
Na2CO3 + 2 CO + H2O → 2 HCO2Na + CO2
This dehydrates the hydroxyl groups to carbonyl compounds, then reduces the carbonyl group to an alcohol:
HCO2Na + C6H10O5 → C6H10O4 + NaHCO3
H2 + C6H10O5 → C6H10O4 + H2O
The formate and hydrogen can be regenerated and recycled. Other catalysts used that behave in a similar manner include K2CO3, KOH, NaOH, and other bases. For simultaneous decomposition and hydrogenation, nickel (Ni) catalysts are used.
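Atom-balancing these equations is a good habit; note that the sodium formate step balances only with 2 mol of CO per mol of carbonate. A small Python checker (a sketch; the formula parser handles only simple formulas without parentheses):

```python
from collections import Counter
import re

def atoms(formula):
    """Count atoms in a simple formula such as 'Na2CO3' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] += int(number) if number else 1
    return counts

def balanced(reactants, products):
    """True if both sides of the equation have identical atom counts."""
    total = lambda side: sum((atoms(f) for f in side), Counter())
    return total(reactants) == total(products)

# Na2CO3 + 2 CO + H2O -> 2 HCO2Na + CO2
print(balanced(["Na2CO3", "CO", "CO", "H2O"], ["HCO2Na", "HCO2Na", "CO2"]))  # True
# Reduction step: HCO2Na + C6H10O5 -> C6H10O4 + NaHCO3
print(balanced(["HCO2Na", "C6H10O5"], ["C6H10O4", "NaHCO3"]))  # True
```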
Similar to pyrolysis, the major product of this process is a liquid biocrude, a viscous dark tar- or asphalt-like material. Up to 70% of the carbon is converted into biocrude; lighter products are obtained when different catalysts are used. Co-products include gases (CO2, CH4, and light hydrocarbons) as well as water-soluble materials. The liquid biofuel has a carbon-to-hydrogen ratio similar to that of the original feedstock and is a complex mixture of aromatics, aromatic oligomers, and other hydrocarbons. In this process, the oxygen content is reduced to 10-20%, less than in typical pyrolysis oils, and the heating value is higher than that of pyrolysis oils (35-40 MJ/kg on a dry basis). However, the USDA has developed a pyrolysis process using recycled gases that produces a fairly light hydrocarbon with very little oxygen content (Mullens et al.); I will discuss this more in the next section. Table 8.2 compares biocrudes from various processes and feed materials. The hydrothermal biocrude shown is a heavy biocrude; other processes make a lighter material but also produce more co-products that must be utilized.
Table 8.2: Comparison of biocrude from hydrothermal processing, bio-oil from fast pyrolysis, and heavy petroleum fuel.
Characteristic Hydrothermal Bio-oil Fast pyrolysis Bio-oil Heavy Petroleum Fuel USDA Oil Oak
Water Content, wt% 3-5 15-25 0.1 4.8
Insoluble solids, % 1 0.5-0.8 0.01 n/a
HHV, MJ/kg 30 17 40 34.0
Density, g/ml 1.10 1.23 0.94 n/a
Viscosity, cp 3,000-17,000 10-150 180 n/a
Elemental analysis: Hydrothermal (Wet, Dry), Fast pyrolysis (Wet, Dry), Heavy Petroleum, USDA Oak
Carbon, % 73.0 77.0 39.5 55.8 85.2 80.2
Hydrogen, % 8.0 7.8 7.5 6.1 11.1 5.9
Oxygen, % 16.0 13.0 52.6 37.9 1.0 11.8
Nitrogen, % <0.1 <0.1 <0.1 <0.1 0.3 2.1
Sulfur, % <0.05 <0.5 <0.05 <0.5 2.3 n/a
Ash, % 0.3-0.5 0.3-0.5 0.3-0.5 0.2-0.3 <0.1 n/a
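The elemental analyses in Table 8.2 can be converted to atomic H:C ratios, which shows how far biocrude still is from the H:C ≈ 2 target of fuel-range alkanes (and hence why catalytic hydrogenation follows the hydrothermal step). A quick sketch using the dry-basis values from the table:

```python
def atomic_hc_ratio(c_wt_pct, h_wt_pct):
    """Atomic H:C ratio from elemental weight percents
    (atomic masses: C = 12.011, H = 1.008)."""
    return (h_wt_pct / 1.008) / (c_wt_pct / 12.011)

# Dry-basis values from Table 8.2
print(f"hydrothermal biocrude: {atomic_hc_ratio(77.0, 7.8):.2f}")
print(f"heavy petroleum fuel:  {atomic_hc_ratio(85.2, 11.1):.2f}")
```

Both come out well below 2 (roughly 1.2 for the biocrude and 1.6 for heavy petroleum), quantifying the hydrogen deficit that upgrading must make up.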
8.3 Bioprocessing to Make Jet Fuel
Many researchers and scientists think that ground transportation will become increasingly dependent on batteries, as in hybrid and electric vehicles. Fuel usage has already been reduced over the last 10 years as hybrid automobiles entered the market. Electrification is not a viable option for air travel, however, which will remain dependent on liquid fuel. Since more petroleum fuel will be available for aircraft if less is used for vehicles, refineries should be able to keep up with demand; but if emissions are a concern, especially the need to reduce CO2, liquid jet fuels from biomass are by far the best option. Jet fuel must also go through a qualification process and become certified for use, depending on the source of the fuel and the type of jet engine. As discussed briefly in Lesson 2, jet fuel must have certain properties; Table 8.3 shows some of the ASTM qualifications for jet fuel that currently exist.
Table 8.3: Some jet fuel properties for certified military fuel JP-8.
JP-8 spec limits, Min JP-8 spec limits, Max
Flash point, °C 38 (min.) --
Viscosity, cSt, -20°C -- 8.0 (max.)
Freezing point, °C -- -47 (max.)
Smoke pt., mm 19 (min.) --
Sulfur, wt% -- 0.3 (max.)
Aromatics, % -- 25 (max.)
Thermal stab.@ 260°C -- 25 mm (max.)
Calorific value, Btu/lb 18,400 --
Hydrogen content 13.4 --
API gravity, 60° 37.0 51.0
FSII (DiEGME) 0.10 0.15
Conductivity pS/m 150 600
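Specification tables like Table 8.3 are naturally expressed as (min, max) limits, and a candidate fuel is certified against them property by property. The sketch below encodes a subset of the JP-8 limits and checks the JP-900 values reported later in Table 8.4 (the property names are invented for illustration; only the numbers come from the tables):

```python
# Subset of JP-8 limits from Table 8.3 as (min, max); None = no limit.
JP8_LIMITS = {
    "flash_point_c": (38, None),
    "viscosity_cst_m20c": (None, 8.0),
    "freezing_point_c": (None, -47),
    "smoke_point_mm": (19, None),
    "sulfur_wt_pct": (None, 0.3),
    "aromatics_pct": (None, 25),
}

def check_fuel(props):
    """Return the names of properties that violate the JP-8 limits."""
    failures = []
    for name, (lo, hi) in JP8_LIMITS.items():
        value = props.get(name)
        if value is None:
            continue  # property not measured; skip
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            failures.append(name)
    return failures

# JP-900 (PSU/Air Force) values from Table 8.4
jp900 = {"flash_point_c": 61, "viscosity_cst_m20c": 7.5,
         "freezing_point_c": -65, "smoke_point_mm": 22,
         "sulfur_wt_pct": 0.0003, "aromatics_pct": 1.9}
print(check_fuel(jp900))  # -> [] (passes every limit in this subset)
```

Note the sign convention on the freezing point: -47 °C is a maximum, so a fuel must freeze at or below it.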
The Federal Aviation Administration has been working diligently to bring alternative jet fuels to market. The government set an aspirational goal for American airlines of 1 billion gallons of alternative jet fuel per year by 2018. Airlines would need to meet this goal either by purchasing alternative jet fuel or by finding viable methods to produce it. Jet fuel is expected to be composed primarily of long-chain alkanes (though with shorter carbon chains than diesel fuel), with some cycloalkane and/or aromatic content for O-ring seal compatibility and other reasons.
There are several known feedstocks that could be utilized in the production of alternative jet fuel. These include fats/oils, cellulose, woody biomass, and even coal.
One of the primary sources is vegetable oil (including algae oil and fats from meat production). Vegetable oils contain long-chain hydrocarbons connected by a three-carbon backbone as esters. The fatty acid portion of the oil is easily converted into fatty acid methyl esters (FAMEs) through transesterification to produce biodiesel. We will discuss biodiesel production by transesterification extensively in another lesson, but I will briefly discuss here why the FAA is interested in making jet fuel from fats and oils. Unfortunately, FAMEs are currently an issue for biojet fuel, with requirements limiting them to 5 ppm, because FAMEs can cause corrosion, have a high freeze point, and are not compatible with materials in a jet engine. (Fremont, 2010) At present, biodiesel production is not always economical due to the high cost of oils and the method of production, and the ester must be removed for jet fuel.
Therefore, other methods are being evaluated to produce not only biodiesel, but also biojet fuel. These process methods include Hydroprocessed Esters and Fatty Acids (HEFA), Catalytic Hydrothermolysis, and Green Diesel. (Hileman and Stratton, 2014) Figure 8.6 shows a schematic of the process for HEFA. Jet fuel made from HEFA has been approved for use in airplanes because it has gone through the approval process. The following white paper, Alternative fuels specification and testing, (Kramer, S., 2013, March 1, Retrieved December 16, 2014) includes a schematic of the approval process on page 4; as you can see, it is a thorough and complicated process, and it takes quite some time to get a particular type of fuel qualified.
There are others who want to explore the use of the fluid catalytic cracking (FCC) unit in a refinery to convert vegetable oils into jet fuel. Al-Sabawi et al. (CanmetENERGY) provided a review of various biomass products that have been processed at a lab scale in the FCC unit. They show that the main effect would be on the catalyst used and the lifetime of the catalyst. (Al-Sabawi, 2012)
Mullens and Boateng of the USDA have developed a process to produce pyrolysis oils of low oxygen content (data of properties of fuel in the previous section). (Mullens, 2013) The review paper by Al-Sabawi et al. also discusses the potential processing of pyrolysis oils in the FCC unit. The main requirement is the pyrolysis oils need to be low in oxygen, but additional information on the composition of the oil could tell us whether the FCC unit or another unit in a refinery would be best for processing.
Cellulosic sources can also be used to produce alternative jet fuel. Gasification of biomass followed by Fischer-Tropsch processing yields a good biojet fuel, although some additives must be included to prevent potential problems. There are also processes to produce medium-chain-length alcohols from cellulose, since methanol and ethanol do not have the energy density necessary for planes to fly long distances. One fuel that made it through the approval process is Synthetic Paraffinic Kerosene (SPK) made by Fischer-Tropsch synthesis. (Hileman and Stratton, 2014) It may also be possible to use by-products from the production of ethanol from corn (corn stover), sugar cane (bagasse), and paper production (tall oil). Westfall et al. (2008) and Liu et al. (2013) have outlined other potential sources for producing fuels, with most processes including a catalytic deoxygenation step. The next section of this lesson will discuss Fischer-Tropsch and other chemical processes to make liquid fuels; these are indirect methods, since either natural gas or gasified carbonaceous material must be used.
Additionally, some processes are being developed for the production of alternative jet fuel from biomass-natural gas and biomass-coal combinations. Researchers are working to develop processes at the demonstration scale for eventual commercialization. Virent, along with Battelle in Ohio, has produced ReadiJet fuel using a pilot-scale facility. (Conkle et al., 2012a, 2012b) Their paper includes a schematic of their process (p. 3), a catalytic process to deoxygenate oils similar to the HEFA process. Liu et al. (2013) point out that jet fuel from natural gas has some advantages, especially for fuel transportation. Jet fuel is made from natural gas via steam reforming to CO and H2, followed by the Fischer-Tropsch method to make long-chain alkanes (see additional explanation toward the end of the lesson). The fuel is very clean (no sulfur and no aromatics) and can be a drop-in replacement for petroleum-derived jet fuel; jet fuel made in this way has been thoroughly tested and qualified for use in military and commercial jet airliners so long as the alternative fuel makes up less than 50% of the fuel mixture. Penn State and the Air Force have been involved with the production of a coal-based jet fuel (Balster et al., 2008) that could possibly be co-processed with some type of bio-oil, such as vegetable oil or low-oxygen pyrolysis oil. The potential of the coal-based jet fuel lies in its high energy density, superior thermal properties, and few issues with lubricity. Table 8.4 shows how the fuels produced by Battelle/Air Force and PSU/Air Force meet some of the ASTM requirements; Battelle's fuel has been certified, but PSU's fuel has not completely met certification criteria.
Recently, Penn State received DOE funding to expand the solvent extraction unit to a continuous reactor and will use solvent from Battelle’s process to extract the coal – the goal is to incorporate coal into a biomass process in a more environmentally sound way than using other solvents. Figure 8.7 shows a schematic of the PSU unit. Elliot et al. (2013) at Pacific Northwest National Laboratory have developed a specific hydrothermal process to convert algal water slurries into organic hydrocarbons at subcritical water conditions (350 °C and 20 MPa pressure). The process also includes catalytic processes to remove oxygen, sulfur, and nitrogen, and the liquids generated are most likely of fuel quality.
Table 8.4: Some jet fuel properties for certified military fuel JP-8 compared to fuel produced by PSU/Air Force and Battelle/Air Force
(Credit: Conkle et al. and Balster et al.)
JP-8 spec limits, Min JP-8 spec limits, Max. JP-900 (actual) PSU/Air Force ReadiJet (actual) Battelle/Air Force
Flash point, °C 38 (min.) 61 42
Viscosity, cSt, -20°C 8.0 (max.) 7.5 4.2
Freezing point, °C -47 (max.) -65 -44
Smoke pt., mm 19 (min.) 22 25
Sulfur, wt% 0.3 (max.) 0.0003 0.0
Aromatics, % 25 (max.) 1.9 10
Thermal stab.@260°C 25 mm (max.) 0 1
Calorific value, Btu/lb 18,400 18,401 18,659
Hydrogen content 13.4 13.2
API gravity, 60° 37.0 51.0 31.1 44.5
FSII (DiEGME)* 0.10 0.15 0 0
conductivity pS/m* 150 600 0 0
*No additives were included in these fuels for these tests. (Balster et al.; Conkle et al.)
8.4 Natural Gas and Synthetic Natural Gas as Feedstocks for Liquid Fuels
In Lesson 4, we discussed gasification in depth. We also briefly discussed using syngas to make liquids. In Lesson 8, we will go into a little more depth. These types of processes are called indirect liquefaction.
The primary objective of gasification is to produce a syngas primarily composed of carbon monoxide (CO) and hydrogen (H2). After gasification, the product needs to be cleaned to remove any liquids, and there are several reactions that can be done to change the H2/CO ratio or to make different products. This is where we will start this lesson.
Figure 8.8 (a, b, and c) shows the three process phases the gas must go through. The first phase (Figure 8.8a) is the gasifier and the separation of gas, liquid, and solid products. Biomass is not pure carbon, so all the streams will contain a variety of other compounds that may be unwanted or harmful.
Solids are removed by a cyclone or an electrostatic precipitator. The particles are similar to what is seen in combustion – ungasified or partially gasified particles. Some mineral matter/ash can also be in the solids. A separator is used to remove the liquids, mainly tars and water that must be separated and processed for use. The water fraction can be used to react the organic compounds further, and the tars can be distilled and reacted further similar to the direct liquefaction processes we described in previous sections. In any case, the water must be treated before disposal.
The gases can also contain unwanted components. Three gases that need to be removed are ammonia (NH3), carbon dioxide (CO2), and hydrogen sulfide (H2S); they are corrosive and/or toxic. H2S and CO2 are called acid gases because they can dissolve in water to produce weak acids that corrode metals. A range of processes can separate out the acid gases; one typical method is the Rectisol process (Figure 8.8b). Both H2S and CO2 are soluble in methanol, while H2 and CO are not. In the simplified schematic of Figure 8.8b, the process has two parts, an absorber and a regenerator. The raw gas enters the absorber and contacts the lean methanol solution. The purified gas goes out the top, and the solution rich in absorbed acid gases goes to the regenerator, where the acid gases are separated from the methanol so that a lean methanol solution comes out the bottom and is recycled to the absorber. The H2S in the acid gas can be burned or reacted with SO2 to form solid sulfur, which is used for making chemicals. The CO2 goes out the stack but could also be captured if capture processes are put into place.
The H2/CO ratio may not be ideal for downstream synthesis reactions. Figure 8.8c shows a process to use the water-gas shift reaction to change the ratio of H2/CO.
Ideally, the gas stream coming off the gasifier followed by a Rectisol unit could be reasonably pure H2 and CO. The water-gas shift can change the ratio of H2/CO. The reaction is shown below:
CO + H2O ⇌ CO2 + H2
This is the way to make less CO and more H2, but the reaction can also run in reverse to make more CO and less H2. For gasification-derived synthesis gas, we want to shift the reaction to the right as written. Advantages include:
• it is an equilibrium process;
• the reaction can be driven in either direction by taking advantage of Le Chatelier's principle;
• with the same number of moles on both sides, the equilibrium position is independent of pressure, so the shift reactor requires no compression or pressure letdown;
• it can be adapted to any operating pressure.
The major disadvantage of the water-gas shift reaction is that it is a CO2 factory! There are only a few ways of separating CO2, such as a monoethanolamine (MEA) scrubber. The CO2 can be separated to a 99% concentration, which would be ideal for carbon capture and storage (CCS). Another thing to consider: we do not need to shift the entire gas stream; we only need to shift enough of it to bring the H2/CO ratio where we want it to be. Once we get to this point, we are ready to do some synthesis of liquids.
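Since only part of the stream needs shifting, the split follows directly from the stoichiometry CO + H2O → CO2 + H2 (one H2 gained per CO consumed). A minimal sketch of that calculation, assuming for illustration that the shift reactor achieves complete CO conversion (the function name is ours, not a standard one):

```python
def shift_fraction(h2, co, target_ratio):
    """Fraction of the inlet CO to route through the shift reactor
    (assuming complete CO conversion there: each mole of shifted CO
    consumes one H2O and yields one H2) so that the recombined gas
    reaches the target H2/CO molar ratio."""
    f = (target_ratio * co - h2) / (co * (1.0 + target_ratio))
    if not 0.0 <= f <= 1.0:
        raise ValueError("target ratio not reachable by shifting alone")
    return f

# Example: raw gas with H2/CO = 1, shifted to 2:1 for downstream synthesis
f = shift_fraction(h2=1.0, co=1.0, target_ratio=2.0)
print(f"shift {f:.1%} of the CO")  # shift 33.3% of the CO
```

Shifting a third of the CO turns 1 mol H2 + 1 mol CO into 4/3 mol H2 + 2/3 mol CO, exactly the 2:1 ratio that liquid synthesis wants.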
8.5 Fischer-Tropsch Process to Generate Liquid Fuels
So, what can be done with synthesis gas? It can be burned in a gas turbine, with the exhaust heat recovered to raise steam and drive a second turbine for electricity. The gas can be fed to a solid oxide fuel cell to generate electricity. We can also use synthesis gas to generate fuels, chemicals, and materials. In fact, the dominant application of synthesis gas from coal is the production of synthetic hydrocarbons for transportation fuels via Fischer-Tropsch (FT) synthesis. This is what is primarily done in South Africa by the company Sasol, and it was also one of the methods used by Germany in WWII to generate liquid fuels (although direct liquefaction was the primary method used to produce liquid fuels in Germany in the 1940s). However, it is not the only gasification-to-liquids process. As noted in Lesson 4, the FT synthesis reaction can be represented by:
nCO + 2nH2 → (−CH2−)n + nH2O
We are taking carbon atoms and building them up into alkanes, with chains reaching 20 or more carbon atoms. It is really a polymerization process, and it follows polymerization statistics. Figure 8.9 shows a typical polymerization statistical function. You will not obtain one single pure alkane from the FT process; there will be a distribution of products. As with all chemical reactions, you have reaction variables to adjust, such as temperature, pressure, residence time, and choice of catalyst. By skillful selection of variables (T, P, t, and catalyst), we can, in principle, make anything from methane to high molecular weight waxes. The intent is to maximize liquid transportation fuel production.
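The polymerization statistics referred to here are conventionally described by the Anderson-Schulz-Flory (ASF) distribution, in which a single chain-growth probability α fixes the whole product slate. A small sketch (the α value below is illustrative, not taken from the text):

```python
def asf_weight_fraction(n, alpha):
    """Anderson-Schulz-Flory weight fraction of chains with n carbon
    atoms, given chain-growth probability alpha (0 < alpha < 1)."""
    return n * (1.0 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.85  # illustrative chain-growth probability (assumed value)
gasoline_cut = sum(asf_weight_fraction(n, alpha) for n in range(5, 13))  # C5-C12
diesel_cut = sum(asf_weight_fraction(n, alpha) for n in range(13, 21))   # C13-C20
print(f"gasoline: {gasoline_cut:.2f}, diesel: {diesel_cut:.2f}")
```

This is why "skillful selection of variables" matters: T, P, and catalyst choice move α, and α alone decides how much of the product lands in each boiling-range cut.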
The primary process for FT is the Synthol process; the schematic is shown in Figure 8.10. The synthesis gas enters the reactor at 2.2 MPa and 315-330°C. The product leaves the reactor, the catalyst is recovered, oils are removed by a hydrocarbon scrubber, and the tail gas is recovered. The gas fraction is recycled, and the rest of the material is distilled into gasoline, jet fuel, and diesel fractions. The Synthol reactor is a fluid bed reactor that uses an iron-based catalyst.
The liquids produced make very clean fuels. The product has near-zero sulfur, is low in aromatic compounds, and is composed mainly of straight-chain alkanes. Recall that the carbon-steam gasification reaction is endothermic (heat must be added). FT synthesis is effectively that reaction run in the opposite direction, so it is exothermic. Because heat is generated, Synthol reactors have internal cooling tubes in which water is heated to generate high-pressure steam that can be used in other processes.
FT diesel fuel is high-quality diesel fuel – we want to have linear alkanes, low aromatic content, and low sulfur. FT diesel fuel has all three aspects of diesel fuel that we want, and has a cetane number ≥ 70 – it is an ideal diesel fuel (recall that a good diesel fuel has a cetane number of 55).
Jet fuel made by FT synthesis is a decent fuel. It is low in aromatic and sulfur content. It is the first bio-based jet fuel that has been certified for use in aircraft and has been tested in blends by major airlines (Virgin). However, for use in military jets it must be blended, because newer designs use the fuel as a coolant for electronics, and neat FT fuel can cause problems in that service. For example, alkanes have the lowest density of the various compound classes in jet fuel, so FT jet fuel has borderline volumetric energy density. Alkanes are also more likely to undergo pyrolysis reactions at high temperatures, and if the fuel is used as a heat-exchange fluid to reduce the heat load, carbon formation can occur; this is mainly a problem for some of the newer military jet aircraft.
FT gasoline that comes straight off the reactor is not great gasoline, as it has a low octane number. Recall that branched alkanes and aromatic compounds have higher octane numbers. Since FT products tend to be straight-chain alkanes, isomerization is required, using an appropriate catalyst for catalytic reforming.
The primary location for gasification and FT synthesis is South Africa; the gasoline sold there has an octane number of 93. An integrated plant will also produce aromatics, waxes, liquefied petroleum gas, alcohols, ketones, and phenols in addition to liquid hydrocarbon fuels. The reasoning behind marketing multiple products is that product prices all fluctuate: when something goes up in price, you make more of it, and when something goes down in price, you make less. This is a way for plants to maximize their profits.
Methanol Production
Synthesis gas can also be used to produce methanol, CH3OH. The current technology for making methanol is fairly mature. Typically natural gas is used as the feedstock, which is steam reformed to make CO and hydrogen:
CH4 + H2O → CO + 3H2
Then methanol is synthesized by the reaction:
CO + 2H2 → CH3OH
However, another methanol synthesis reaction allows for CO2 to be in the feed gas:
CO2 + 3H2 → CH3OH + H2O
But because water and methanol are completely miscible, an additional downstream step is required to separate the methanol product from water. Typical operating conditions in the methanol synthesis reactor are 5-10 MPa and 250-270°C, using a copper/zinc catalyst. The reaction is highly exothermic, so heat must be removed to keep the reaction under control. Similar to the FT reactor, the methanol reactor is a shell-and-tube design in which coolant circulates through the shell and catalyst particles are packed into the tubes where the reactants and products flow. Figure 8.11 shows a schematic of the methanol synthesis process.
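The two synthesis reactions above imply a required feed composition: 2 mol H2 per mol CO, or 3 per mol CO2. Industrial practice often captures this with a stoichiometric "module" number M = (H2 − CO2)/(CO + CO2), which should be near 2 for methanol synthesis. A minimal check (the function name is ours):

```python
def module_number(h2, co, co2):
    """Stoichiometric 'module' M = (H2 - CO2) / (CO + CO2), all in moles;
    methanol synthesis ideally wants M near 2."""
    return (h2 - co2) / (co + co2)

# Steam reforming of methane (CH4 + H2O -> CO + 3H2) gives M = 3,
# a hydrogen-rich gas; adding CO2 to the feed pulls M back toward 2.
print(module_number(h2=3.0, co=1.0, co2=0.0))  # 3.0
```

This is one reason the CO2-consuming reaction is useful: it lets a plant burn down the hydrogen surplus from steam reforming instead of venting it.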
So, what can methanol be used for? It is sometimes used as a replacement for gasoline, particularly as a racing fuel, as it has a high octane number. It contains no sulfur, produces almost no NOx due to its low flame temperature, and can be blended with gasoline.
There are also some disadvantages to using methanol as a fuel. It is infinitely miscible with water, it has health and safety issues, provides only half the volumetric energy density of gasoline, and may have compatibility issues with materials in some vehicles.
Enormous tonnages of methanol are produced and handled annually with excellent safety, but within the chemical process industry. If the general public is handling methanol, safety may be an issue, because methanol is toxic. Methanol is also being seriously considered as the fuel of choice for fuel cells. Moreover, there is a process to convert methanol directly into gasoline, which sidesteps these handling concerns.
Methanol-To-Gasoline (MTG)
Methanol can be used to make a gasoline product. The process uses a special zeolite catalyst with a pore size such that only molecules up to about C10 can escape the catalyst. Larger molecules cannot be made with this process; therefore, the product contains no hydrocarbons larger than C10, which boil in the gasoline range. In this process, aromatics and branched-chain alkanes are made, which means the MTG process produces very high octane gasoline. Gasoline is the only product. In the reaction, methanol is first converted into dimethyl ether (which itself can be a good diesel fuel) by the following reaction:
2CH3OH → CH3OCH3 + H2O
As the reaction progresses, the dimethyl ether is dehydrated further to the product hydrocarbons. The overall reaction is:
nCH3OH → (−CH2−)n + nH2O
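One quick consequence of this overall stoichiometry: each mole of methanol contributes one CH2 unit to the hydrocarbon product and one mole of water, so the hydrocarbon mass yield is fixed by molar masses alone:

```python
MW_METHANOL = 32.04  # g/mol
MW_CH2 = 14.03       # g/mol, one -CH2- unit in the hydrocarbon product
MW_WATER = 18.02     # g/mol

# n CH3OH -> (CH2)n + n H2O: one CH2 unit and one water per methanol,
# so the hydrocarbon mass yield is simply 14.03/32.04.
yield_frac = MW_CH2 / MW_METHANOL
print(f"hydrocarbon mass yield: {yield_frac:.1%}")  # ~43.8%
```

Roughly 44% of the methanol mass ends up as gasoline-range hydrocarbon; the rest leaves as water.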
As with the other reactions we’ve looked at in this section of the lesson, the reaction is highly exothermic, so the reactor and process have to be designed to remove heat to keep the reaction under control. The conditions for this reaction are 330-400°C and 2.3 MPa. If one wanted to envision how a plant could incorporate all of these processes together, the following would be one scenario:
1. Add an MTG unit to existing natural gas-fed methanol plants (produce high octane gasoline).
2. Replace the natural gas units with coal and/or biomass gasification and gas conditioning.
3. Add parallel trains of Synthol reactors (produce high cetane diesel).
4. Add a third section, using a solid oxide fuel cell to generate electricity using synthesis gas as the feed material.
The plant then produces gasoline, diesel, and electricity.
8.6 Assignments
Final Project Outline Assignment
Biomass Choice: Choose biomass to focus paper on.
This week, I am asking you to begin your biomass project. You will do a 1-2 page write-up, stating the information listed below. It can be an outline at this point but needs to have enough information so I can see what your project will be about and to see that you have begun to work on it.
1. Biomass Choice. Reminder - do not make choices that already exist in the marketplace. This includes sugar cane used to make ethanol in Brazil or corn used to make ethanol in the Midwest of the USA.
2. Literature review on the biomass (at this point, this should consist of a list of resources that you have consulted - APA style, please!)
3. Requirements for location
• Climate (i.e., tropical, subtropical, moderate,…)
• Land area required or another type of facility to grow
4. Method of Production
5. Product markets around location
6. Economic Evidence
7. Other factors (environmental, political, tax issues, etc.)
Some notes on format:
• Approximate length – 2 pages, double-spaced, 1” margins, 12 point font, name at the top.
• The outline should include elements listed above.
• Use as filename your user ID_Outline (i.e., ceb7_Outline).
• Upload it to the Outline Dropbox.
8.7 Summary and Final Tasks
Summary
Lesson 8 covered thermochemical methods of converting biomass into fuels. These are the main methods being considered at this point; however, research continues on additional methods, so this may not be a complete list. The main advantage of fermentation is that it is a natural process that does not require additional chemicals, but its main disadvantage is that the processes tend to be slow. All of the thermochemical processes require heat and other process conditions that may make them more expensive; another lesson will discuss the economics behind all of the processes we’ve discussed, for comparison.
We discussed both direct and indirect methods for making fuel from biomass. These methods are not just directed towards making ethanol. Many of these processes are used to make hydrocarbon fuels that limit the amount of oxygen in the product, as too much oxygen typically results in either causing the fuel to form unwanted “gums" or a corrosive environment that will cause problems in the units and storage containers. For jet fuels, oxygenated compounds will keep the fuel from being certified for use, so most of the methods presented make deoxygenated jet fuel. I would suggest continuing to monitor the news to see the progression in certification for additional bio-based fuels.
Reminder - Complete all of the Lesson 8 tasks!
You have reached the end of Lesson 8! Double-check the Road Map on the Lesson 8 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 9.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum, and/or set up an appointment for office hours. While you are there, feel free to post responses to your classmates if you are able to help.
9.1 Terminology for Vegetable Oils and Animal Fats
Fat is a generic term for lipids, a class of compounds in biochemistry. You would know them as the greasy materials found in animal tissues and in some plants; fats are lipids that are solid at room temperature, while oils are lipids that are liquid at room temperature.
Vegetable oil is the fat extracted from plant sources. We may be able to extract oil from other parts of a plant, but seeds are the main source of vegetable oil. Typically, vegetable oils are used in cooking and for industrial uses. Compared to water, oils and fats have a much higher boiling point. However, there are some plant oils that are not good for human consumption, as the oils from these types of seeds would require additional processing to remove unpleasant flavors or even toxic chemicals. These include rapeseed and cottonseed oil.
Animal fats come from different animals. Tallow is beef fat and lard is pork fat. There is also chicken fat, blubber (from whales), cod liver oil, and ghee (which is a butterfat). Animal fats tend to have more free fatty acids than vegetable oils do.
Chemically, fats and oils are also called “triglycerides.” They are esters of glycerol, with a varying blend of fatty acids. Figure 9.1 shows a generic diagram of the structure without using chemical formulas.
So what is glycerol? It is also known as glycerin/glycerine. Other names for glycerol include: 1,2,3-propane-triol, 1,2,3-tri-hydroxy-propane, glyceritol, and glycyl alcohol. It is a colorless, odorless, hygroscopic (i.e., will attract water), and sweet-tasting viscous liquid. Figure 9.2 shows the chemical structure in two different forms.
So now we need to define what the fatty acids are. Essentially, fatty acids are long-chain hydrocarbons with a carboxylic acid. Figure 9.3a shows the generic chemical structure of a fatty acid with the carboxylic acid on it.
Figure 9.3b shows different fatty acid chemical structures. The structures are drawn as line structures, where each vertex is a carbon atom and the number of hydrogen atoms is implied by whether the bonds are single or double. Fatty acids can be saturated (only single carbon-carbon bonds, fully loaded with hydrogen) or unsaturated (containing some double bonds between carbon atoms). Because of the metabolism of oilseed crops, naturally formed fatty acids contain even numbers of carbon atoms. In organic chemistry, each carbon atom can form four bonds, shared with other carbon, hydrogen, or oxygen atoms. Free fatty acids are not bound to glycerol or other molecules; they can be formed from the breakdown or hydrolysis of a triglyceride.
The fatty acids shown have slightly different properties. Palmitic acid is found in palm oil. Palmitic and stearic acids are saturated fatty acids, while oleic and linoleic acids are unsaturated, with different numbers of double bonds. Figure 9.4 shows the relationship of each fatty acid to its size and saturation, comparing the number of carbon atoms with the number of double bonds in each compound.
Figure 9.5a shows the part of the triglyceride that is a fatty acid and the part that is glycerol, including chemical structures this time. The chemical structure shown here is a saturated triglyceride.
So, we’ve discussed what fats and oils are. Now, what is biodiesel? One definition: it is a diesel fuel that was generated from biomass. However, there are different types of biodiesel. The most commonly known type is a fuel comprised of mono-alkyl esters (typically methyl or ethyl esters) of long-chain fatty acids derived from vegetable oils or animal fats, according to ASTM D6751. An ASTM standard is a document that contains the specifications for particular types of chemicals and industrial materials. This is a wordy definition that doesn’t really show us what biodiesel is chemically.
So when we talk about an alkyl group, it is a univalent radical containing only carbon and hydrogen atoms in a hydrocarbon chain, with a general formula of CnH2n+1. Examples include the methyl group (CH3-, n = 1) and the ethyl group (C2H5-, n = 2).
Another term we need to know is ester. Esters are organic compounds in which an alkyl group replaces the acidic hydrogen atom of a carboxylic acid. For example, if the acid is acetic acid and the alkyl group is the methyl group, the resulting ester is called methyl acetate. The reaction of acetic acid with methanol forms methyl acetate and water; the reaction is shown in Figure 9.6. The reaction that forms an ester this way is a condensation reaction, also known as esterification. Esters formed from carboxylic acids are also called carboxylate esters.
This is the basic reaction that helps to form biodiesel. Figure 9.7 shows the different parts of the chemical structure of the biodiesel, the methyl ester fatty acid, or fatty acid methyl ester (FAME).
So, at this point, let’s make sure we know what we have been discussing. Biodiesel is a methyl (or ethyl) ester of a fatty acid. It is made from vegetable oil, but it is not vegetable oil. 100% biodiesel is known as B100: vegetable oil that has been transesterified to make biodiesel. It must meet ASTM biodiesel standards to qualify for warranties, be sold as biodiesel, and qualify for any tax credits. Most often, it is blended with petroleum-based diesel. B2 contains 2% biodiesel and 98% petroleum-based diesel; other blends include B5 (5% biodiesel) and B20 (20% biodiesel). We’ll discuss why blends are used in the following section. And to be clear: sometimes vegetable oil is used directly in diesel engines, but it can cause performance problems and deteriorate engines over time. Sometimes vegetable oil and alcohol are mixed together in emulsions, but that is still not biodiesel, as it has different properties.
So, if straight vegetable oil (SVO) will run in a diesel engine, why not use it? Vegetable oil is significantly more viscous (gooey is a non-technical term) and has poorer combustion properties. It can cause carbon deposits, poor lubrication within the engine, and engine wear, and it has cold-starting problems. Vegetable oils have natural gums that can plug filters and fuel injectors. And in a diesel engine, the injection timing is thrown off, which can cause engine knocking. There are ways to mitigate these issues: 1) blend with petroleum-based diesel (usually < 20%), 2) preheat the oil, 3) make microemulsions with alcohols, 4) “crack” the vegetable oil, and 5) convert the SVO into biodiesel using transesterification. Other methods are used as well, but for now, we’ll focus on biodiesel from transesterification. Table 9.1 shows three properties of No. 2 diesel, biodiesel, and vegetable oil. As you can see, the main difference is in the viscosity. No. 2 diesel and biodiesel have similar viscosities, but vegetable oils have much higher viscosity and can cause major problems in cold weather. This is the main reason for converting SVO into biodiesel.
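The viscosity of a blend can be estimated with the Arrhenius logarithmic mixing rule, one common approximation for liquid mixtures (the viscosity values below are assumed for illustration, not taken from Table 9.1):

```python
import math

def blend_viscosity(x_bio, mu_bio, mu_diesel):
    """Arrhenius logarithmic mixing rule (a common approximation):
    ln(mu_blend) = x * ln(mu_bio) + (1 - x) * ln(mu_diesel)."""
    return math.exp(x_bio * math.log(mu_bio) + (1.0 - x_bio) * math.log(mu_diesel))

# Illustrative kinematic viscosities (mm^2/s at 40 C, assumed values):
# biodiesel ~4.5, No. 2 diesel ~2.7
print(round(blend_viscosity(0.20, 4.5, 2.7), 2))  # B20 estimate
```

The log-mixing form explains why low blends like B2 and B20 behave almost like straight petroleum diesel: at small biodiesel fractions, the blend viscosity stays close to the diesel value.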
9.2 The Reaction of Biodiesel: Transesterification
So, how do we make biodiesel?
The method described here is for making FAME biodiesel. The reaction is called transesterification, and the process takes place in several steps. The first step is to mix the alcohol with the catalyst, typically a strong base such as NaOH or KOH. The alcohol/catalyst mixture is then reacted with the oil so that the transesterification reaction takes place. Figure 9.8a shows the preparation of the catalyst with the alcohol, and Figure 9.8b shows the transesterification reaction.
The catalyst is prepared by mixing methanol and a strong base such as sodium hydroxide or potassium hydroxide. During the preparation, the NaOH dissociates into Na+ and OH- ions. The OH- abstracts the hydrogen from methanol to form water and leaves the CH3O- ion available for reaction. The methanol should be as dry as possible, because water increases the possibility of a side reaction with free fatty acids (fatty acids that are not bound in triglycerides) to form soap, an unwanted product. Enzymes (called lipases) can also be used; alcohol is still needed, and the enzyme replaces only the base catalyst. Lipases are slower than chemical catalysts, are high in cost, and produce low yields.
Once the catalyst is prepared, each triglyceride molecule reacts with 3 mols of methanol, and excess methanol is used to ensure complete reaction. The three glycerol-backbone carbons take up OH- ions to form glycerin, while each CH3O- group bonds to a fatty acid chain to form a fatty acid methyl ester.
Figure 9.9 is a graphic of the amounts of chemicals needed to make the reaction happen and the overall yields of biodiesel and glycerin. The amount of methanol added is almost double the stoichiometric requirement so that the reaction goes to completion. With 100 lbs of fat, 16-20 lbs of alcohol, and 1 lb of catalyst, the reaction will produce 100 lbs of biodiesel and 10 lbs of glycerin. The reaction typically takes place between 40-65°C. As the reaction temperature goes higher, the rate of reaction increases: typically 1-2 hours at 60°C versus 2-4 hours at 40°C. Above 65°C, a pressure vessel is required because methanol boils at about 65°C. It also helps to increase the methanol-to-oil ratio; doubling it from 3 mols to 6 mols of alcohol per mol of oil pushes the reaction to completion faster and more completely.
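These quantities follow directly from the reaction stoichiometry (oil + 3 methanol → 3 methyl esters + glycerol). A rough mass balance, assuming triolein as a model triglyceride (real oils are mixtures, so the numbers are approximate):

```python
# Molar masses in g/mol; triolein is used as a model triglyceride
# (an assumption; real oils are mixtures of triglycerides)
MW_OIL = 885.4       # triolein
MW_MEOH = 32.04      # methanol
MW_GLYCEROL = 92.09
MW_FAME = 296.5      # methyl oleate

def transesterification_balance(oil_mass, methanol_excess=1.0):
    """Mass balance for oil + 3 CH3OH -> 3 FAME + glycerol.
    methanol_excess=1.0 means 100% excess (double the stoichiometric amount)."""
    mol_oil = oil_mass / MW_OIL
    meoh_stoich = 3 * mol_oil * MW_MEOH
    return {
        "methanol charged": meoh_stoich * (1.0 + methanol_excess),
        "biodiesel": 3 * mol_oil * MW_FAME,
        "glycerol": mol_oil * MW_GLYCEROL,
    }

for name, mass in transesterification_balance(100.0).items():
    print(f"{name}: {mass:.1f} lb per 100 lb oil")
```

With 100% excess methanol, the balance gives roughly 22 lb of alcohol charged, 100 lb of biodiesel, and 10 lb of glycerol per 100 lb of oil, consistent with the quantities in Figure 9.9.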
The following video shows a time-lapsed reaction of transesterification of vegetable oil into biodiesel. It also incorporates the steps after the reaction to separate out the biodiesel (9:44).
Making biodiesel
Click here for a transcript of the making biodiesel video.
MARK HALL: Hello. I'm Mark Hall of the Auburn University Extension Renewable Energy Specialist. We're doing several of these things on energy options that you can do, several pieces that, each piece of the puzzle, that you can contribute to our energy independence by making ethanol, making biodiesel, being more energy efficient in how you operate your home.
Today, we're going to talk about making biodiesel. And we have Lance Hall. Lance has been making biodiesel to run in his car. He bought a used Volkswagen off eBay and started making biodiesel. And he's liked it so much that he's bought a new diesel car. And he's been real successful doing this for a couple of years.
Before we bring Lance in, I'd like to thank my friend and coworker Walter Harris, the county agent coordinator in Madison County, for filming us today. Lance, come in and show us what you've been doing. And congratulations. You've been successful doing this.
I was talking to my daddy about my new job several years ago. And he said, well, Lance has been doing that for a long time. I said, what? I didn't know that. So Lance, show people how to make biodiesel.
LANCE HALL: OK. A lot of people know about the biodiesel. They've read the stories. They've done some research. But, yet, they still don't have enough confidence in their ability to actually make a batch. So I'm going to show you today on how to make a batch of biodiesel, just small scale, but it's easy.
OK. The first thing that we're going to do is start off with vegetable oil. Now, this will be 800 milliliters. And don't be confused between the milliliters and your normal units of measure. It's a simple conversion that anybody can do with a handheld calculator.
So we've got 800 milliliters here. Well, first thing we want to do is heat it. Now, don't be concerned about this fancy piece of equipment, either. The main element of this is to heat it any way that you can safely.
And these things here are magnetic stirrers. Again, don't be concerned with this. Just stir it while you're heating it to even things out. And we're going to heat this up to about 130 degrees Fahrenheit.
MARK HALL: Lance, tell them about where you get this equipment.
LANCE HALL: All of this equipment that I've got in my shop, all my lab stuff, eBay is a wonderful place to find a used lab supplies, lab glass. These are magnetic stirrer plates. These are really handy to have if you have the means to buy them. You don't have to have them, of course. But I like to use them.
And this is also an electronic scale that comes in handy when you start weighing out your catalyst, doing anything that you want to measure a precise weight. That's worth the money there. And that's going to take a little while, so--
MARK HALL: Lance, is there any other sites, internet sites that you would recommend for people that are interested in making biodiesel?
LANCE HALL: There are several sites out there. One of the most informative on what biodiesel is, where it's being used, is biodiesel.org. That's the National Biodiesel Board website, lots of good information there. It won't really tell you as much how to make it, but hopefully, this will be one of the more informative sites that you'll actually be able to see somebody make one, make a batch.
OK. As our oil is heating up, we have to mix up our methanol potassium hydroxide mixture. So safety is paramount with the use of methanol or the strong caustic lye potassium hydroxide. Methanol can cause blindness or death, and it can be absorbed through the skin. And the potassium hydroxide will burn your skin if it gets on you.
So here's what we're going to do. We're going to take our methanol and we're going to pour this into a container. Face shields are good, too.
We're going to use 175 milliliters of the methanol. That's roughly 20% of the 800 milliliters of oil. You usually want to use about a 20% methanol volume compared to the veggie oil volume.
OK. Our next ingredient is our potassium hydroxide. That's our lye. Now, we have to do a quick calculation on how much of this we need to mix with our methanol in order for the reaction to take place.
I've got a nice spreadsheet that I like to use. It's the Biodiesel-o-matic. You can usually find it online from different biodiesel websites. I'm going to pull that up.
OK. We want to use 7 grams of potassium hydroxide per each liter of veggie oil. So you take 7 divided by 0.8. And that gives you 6.4 grams.
Double bag this stuff, or it will absorb moisture. And that will kill your process.
So we're going to use our scale. We're going to zero the container. And then we're going to put 6.4 grams into it. Make sure you have your gloves on.
OK. That's our 6.4 grams. Close this immediately. Keep it double bagged. OK. Now, you're going to take your 6.4 grams of potassium hydroxide and put that into your 175 milliliters of methanol.
Again, you want to stir this. It's not necessary to heat it, though. Just stir. And stir this until at least the potassium hydroxide is completely dissolved into the methanol. You don't want to see any chunks of white potassium hydroxide flakes.
All right. Our potassium hydroxide is fully mixed into our methanol. We want to remove the stir bar. And then we're just going to slowly pour this into our oil as it's being stirred.
Again, you don't have to have fancy equipment. Just pour it in as you're stirring it manually. But the key is to do it slowly.
Figure 9.10 shows a schematic of the process for making biodiesel. Glycerol is formed and has to be separated from the biodiesel. Both glycerol and biodiesel need to have alcohol removed and recycled in the process. Water is added to both the biodiesel and glycerol to remove unwanted side products, particularly glycerol, that may remain in the biodiesel. The wash water is separated out similar to solvent extraction (it contains some glycerol), and the trace water is evaporated out of the biodiesel. Acid is added to the glycerol in order to provide neutralized glycerol.
As briefly discussed, the initial reactants used in the process should be as dry as possible. Water can react with a triglyceride to form a free fatty acid and a diglyceride. It can also dissociate the sodium or potassium from the hydroxide, and the Na+ or K+ ions can then react with free fatty acids to form soap. Figure 9.11 shows how water helps form a free fatty acid and how that acid reacts with the Na+ ion to form soap. The sodium that was serving as the catalyst is now bound to the fatty acid and unusable, and the soap complicates separation and recovery. All oils naturally contain some free fatty acids: refined vegetable oil contains less than 1%, crude vegetable oil about 3%, waste oil about 5%, and animal fat up to 20%, which makes animal fats a less desirable feedstock.
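The feedstock comparison above lends itself to a small sketch. The free fatty acid (FFA) percentages are the ones quoted in the text; the 2% pretreatment threshold is a common rule of thumb and an assumption here, not from the text:

```python
# Typical free fatty acid (FFA) content by feedstock, as quoted in the text.
FFA_PERCENT = {
    "refined vegetable oil": 1.0,   # "less than 1%"
    "crude vegetable oil": 3.0,
    "waste oil": 5.0,
    "animal fat": 20.0,
}

def needs_pretreatment(feedstock, ffa_limit=2.0):
    """High-FFA feedstocks form soap with NaOH/KOH catalysts.

    The 2% FFA limit is a common rule of thumb (an assumption, not from
    the text) above which an acid-esterification pretreatment step is
    usually recommended before base-catalyzed transesterification.
    """
    return FFA_PERCENT[feedstock] > ffa_limit

for stock in FFA_PERCENT:
    print(f"{stock}: pretreatment needed = {needs_pretreatment(stock)}")
```

This is one way to see why waste oil and animal fat demand the titration step shown later in the lesson, while refined oil can go straight to the base catalyst.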
9.3 Various Processes Used to Make Biodiesel
Some of the processes used in making biodiesel go beyond the transesterification reaction we've discussed. The first of these is product separation, which operates on principles similar to solvent extraction.
In the process of making biodiesel through transesterification, we noted that biodiesel and glycerol are the products, with some water formation and unwanted potential soap formation. So, the products are liquid, but they are also immiscible (do not dissolve in each other) and have differences in specific gravity. The specific gravity of the products is shown in Table 9.2.
Table 9.2: Specific gravity of products and unused reactants in biodiesel transesterification processing.
Material Specific gravity (g/cm3)
Glycerol (pure) 1.26
Glycerol (crude) 1.05
Biodiesel 0.88
Methanol 0.79
In batch processing, gravity separation is used, and the products remain in the reactor; the reactor then becomes a settler or decanter. Once the reaction is finished, the product mixture sits without agitation. After 4-8 hours, the glycerol layer settles to the bottom (because it has the higher specific gravity) and the biodiesel layer forms on top. However, if a continuous flow facility is utilized, the products separate too slowly in a settler, so a centrifuge is used. A centrifuge spins the liquids at very high speed, which promotes density separation. Figure 9.12 shows a few different types of industrial centrifuges that can be used for biodiesel separation.
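The density-driven settling described above can be sketched directly from the specific gravities in Table 9.2 (a minimal illustration; in a real mixture the excess methanol dissolves into both phases rather than forming its own layer):

```python
# Specific gravities from Table 9.2 (g/cm^3).
SPECIFIC_GRAVITY = {
    "glycerol (crude)": 1.05,
    "biodiesel": 0.88,
    "methanol": 0.79,
}

def settling_order(phases):
    """Return the phases ordered bottom layer first (densest sinks)."""
    return sorted(phases, key=lambda p: SPECIFIC_GRAVITY[p], reverse=True)

# After transesterification, glycerol settles below the biodiesel:
print(settling_order(["biodiesel", "glycerol (crude)"]))
```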
One of the issues that can occur during separation is the formation of a layer containing water and soap in between the glycerol and biodiesel, which hinders the separation. Another issue is that the glycerol layer contains about 90% of the catalyst and 70% of the excess methanol; in other words, the glycerol fraction is the "trashcan" layer of the process. The biodiesel layer also contains some contaminants, including soap, residual methanol, free glycerol, and residual catalyst. The catalyst in the biodiesel is extremely problematic if introduced into fuel systems. One way to improve the separation is through washing with hot water, as the contaminants are soluble in water but the biodiesel is not. Water washing removes contaminants such as soap, residual methanol, free glycerol, and catalyst. The water should be softened (hardness ions removed) and hot (both the biodiesel and water should be at 60°C). Thorough mixing with the wash water is needed so that all the contaminants can be removed, but the mixing intensity must also be controlled so that emulsions do not form between the biodiesel and water. Sometimes acid is added in the wash process to separate out the soaps. However, the last portion of washing needs to be acid-free, so an additional neutralization step may be required.
There is more than one way to implement the washing process. For batch processes, two of the methods are a) top spray and b) air bubbling (see Figure 9.13). In the top spray method, water is sprayed top-down as a fine mist; the droplets contact the biodiesel as they fall through it, carrying impurities out with them. Air bubbling uses air as a mobile phase: air is bubbled up through the water layer at the bottom, each bubble carrying a film of water with it, and as the bubbles burst, water droplets fall back down through the biodiesel, washing out impurities. It can be a relatively slow process; a combination of the two methods is also possible.
For continuous-flow processes, different equipment is used, typically incorporating some sort of counter-current flow. The lighter biodiesel is introduced at the bottom and the heavier water at the top; as they flow, the fluids contact each other, so the biodiesel leaving the top has had its impurities removed and the water flowing out the bottom carries the contaminants. Figure 9.14 shows two types of counter-current units: a) a counter-flow washing system and b) a rotating disc extractor. Both units contain internals to increase the interaction between the water and biodiesel: the counter-flow system uses packing, while the rotating disc extractor uses discs that rotate as the fluid flows through. These units are typically used on an industrial scale, require precise mechanical design and process control, and cost considerably more than the simpler batch washing systems.
The most problematic step in biodiesel production, however, is water washing. It requires heated, softened water, some method of wastewater treatment, and water/methanol separation. Methanol recovery from water by methanol-water rectification is somewhat costly; water can also be removed by vacuum drying. An alternative to water removal is the use of adsorbent materials such as magnesium silicate. One commercial product for this purpose is Magnesol, produced by The Dallas Group. Once the magnesium silicate has taken up the water, it can be regenerated by heating to evaporate the water off. Methanol must also be removed from the biodiesel; one method for doing this is flash vaporization of the methanol.
So, which type of process should be used: batch or continuous flow? Smaller plants (< 1 million gallons/yr) are typically batch. They do not require continuous operation 24 hours a day, 7 days a week, and the batch system provides better flexibility, since the process can be tuned to particular feedstocks. In a commercial, industrial setting, however, a continuous flow system will most likely be used because of increased production and high-volume separation systems, which increase throughput. Continuous systems offer automation and process control, but this also means higher capital costs and the need for trained personnel. Hybrid systems are feasible as well.
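The plant-size rule of thumb above can be stated as a one-liner. The 1 million gal/yr threshold is from the text; treating it as a hard cutoff is a simplification:

```python
def suggested_process(annual_gallons):
    """Suggest batch vs. continuous processing by plant size.

    Uses the text's rule of thumb: plants under ~1 million gal/yr are
    typically batch; larger commercial plants run continuous flow.
    (A hard cutoff is a simplification; hybrids are also feasible.)
    """
    return "batch" if annual_gallons < 1_000_000 else "continuous"

print(suggested_process(250_000))    # small plant
print(suggested_process(5_000_000))  # commercial plant
```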
The primary byproduct is glycerin (aka glycerine, glycerol). It is a polyhydric alcohol, which is sometimes called a triol. The structure is shown in Figure 9.2. It is a colorless and odorless liquid, which is viscous (thick flowing) and sweet-tasting. It is non-toxic and water-soluble. Parameters to test quality are purity, color, and odor. Glycerol properties and chemical information are shown in Table 9.3.
Table 9.3: Chemical information and properties of glycerol. (Credit: BEEMS Module B4)
Chemical name Propane-1,2,3-triol
Chemical formula C3H5(OH)3
Molecular Weight, g/mol 92.09
Density, g/cm³ @ 20°C 1.261
Viscosity @ 20°C, mPa·s 1500 (400 at 93% w/ water)
Melting point, °C (°F) 17.9 (64.2)
Boiling point, °C (°F) 290 – 297 (554-567)
Auto-ignition, °C (°F) 370(700)
Flash Point, °C (°F) 188 - 199 (370 - 390)
Food energy, kJ/g 18
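The paired °C/°F entries in Table 9.3 can be spot-checked with the standard conversion formula, F = C × 9/5 + 32 (a quick sketch):

```python
def c_to_f(celsius):
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

# Spot-checks against Table 9.3:
print(c_to_f(17.9))  # melting point, ~64.2 F
print(c_to_f(290))   # lower boiling point, 554 F
print(c_to_f(188))   # lower flash point, ~370 F
print(c_to_f(199))   # upper flash point, ~390 F
```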
There are several different applications that glycerol can be used for, including the manufacture of drugs, oral care, personal care, tobacco, and polymers. Medical and pharmaceutical preparations use glycerol as a means to improve smoothness, lubrication, and moisturize – it is used in cough syrups, expectorants, laxatives, and elixirs. It can also be substituted for alcohol, as a solvent that will create a therapeutic herbal extraction.
Glycerol can be used in many personal care items; it serves as an emollient, moisturizer, solvent and lubricant – it is used in toothpaste, mouthwashes, skincare products, shaving cream, hair care products, and soaps. Glycerol competes with sorbitol as an additive; glycerol has better taste and a higher solubility.
Since it can be used in medical and personal care products, glycerol can also be used in foods and beverages. It can be used as a solvent, moisturizer, and sweetener. It can be used as a solvent for flavors (vanilla) and food coloring. It is a softening agent for candy and cakes. It can be used as part of the casings for meats and cheeses. It is also used in the manufacture of shortening and margarine, filler for low-fat food, and thickening agent in liqueurs.
Glycerol is also used to make a variety of polymers, particularly polyether polyols. Polymers include flexible foams and rigid foams, alkyl resins (plastics) and cellophane, surface coatings and paints, and as a softener and plasticizer.
Unfortunately, there is already enough glycerol produced for the glycerol market. Glycerol consumption in traditional uses is 450 million lb/yr, and traditional capacity is 557 million lb/yr. If we produce glycerol from making biodiesel, it has the potential of producing 1900 million lb/yr. Therefore, we need to find a new market for glycerol or it will be wasted in some fashion.
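The oversupply implied by these figures is easy to quantify. A sketch using the numbers quoted above (in million lb/yr), treating biodiesel-derived glycerol as additional to existing capacity, which is an assumption:

```python
# Figures from the text, in million lb/yr.
traditional_consumption = 450
traditional_capacity = 557
biodiesel_glycerol_potential = 1900

total_supply = traditional_capacity + biodiesel_glycerol_potential
surplus = total_supply - traditional_consumption

print(f"Potential total supply: {total_supply} million lb/yr")
print(f"Surplus over traditional demand: {surplus} million lb/yr")
# The surplus is several times the size of the existing market,
# which is why new uses for glycerol are needed.
```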
There is research being done to find new uses for glycerol. This includes use in additional polymers as an intermediate, conversion to propylene glycol for antifreeze, production of hydrogen via gasification, as a boiler fuel (have to remove alkali), in an anaerobic digester supplement, and for algal fermentation to produce Omega-3 polyunsaturated fatty acids.
9.4 Biodiesel Properties and Specifications
To ensure quality biodiesel, there are standards for testing the fuel to verify that it meets specifications for use. ASTM (an international standards and testing organization) publishes the standard that legally defines biodiesel for use in diesel engines, ASTM D6751. Table 9.4 shows the properties, test methods, and limits that biodiesel must meet.
Table 9.4: Legal definition of biodiesel according to ASTM D6751
Credit: www.biodiesel.org
Property ASTM Method Limits Units
Ca & Mg, combined EN 14538 5 max ppm (µg/g)
Flash point D 93 93 min °C
Alcohol Control - - -
1. Methanol content EN14110 0.2 max % mass
2. Flash point D 93 130 min °C
Water & Sediment D2709 0.05 max % vol
Kinematic Viscosity, 40°C D445 1.9-6.0 mm2/sec
Sulfated Ash D874 0.02 max % mass
Sulfur - - -
1. S 15 Grade D5453 0.0015 max % mass (15 ppm)
2. S 500 Grade D5453 0.05 max % mass (500 ppm)
Copper Strip Corrosion D130 No. 3 max -
Cetane D613 47 min -
Cloud Point D2500 report °C
Carbon Residue (100% sample) D4530 0.05 max % mass
Acid Number D664 0.50 max mg KOH/g
Free Glycerin D6584 0.020 max % mass
Total Glycerin D6584 0.240 max % mass
Phosphorus Content D4951 0.001 max % mass
Distillation, T90 AET D1160 360 max °C
Sodium/Potassium, combined EN 14538 5 max ppm
Oxidation Stability EN 14112 3 min Hours
Cold Soak Filtration Annex to D6751 360 max seconds
For use in temperatures below -12 °C Annex to D6751 200 max seconds
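A few of the D6751 limits in Table 9.4 can be encoded as a simple pass/fail check. This is only a sketch: it covers a handful of properties, and the sample values at the bottom are illustrative, not measured data:

```python
# A few ASTM D6751 limits from Table 9.4.
# Each entry: (limit, kind) where kind is "max", "min", or "range".
D6751_LIMITS = {
    "flash_point_C": (93, "min"),
    "water_sediment_volpct": (0.05, "max"),
    "kinematic_viscosity_mm2s": ((1.9, 6.0), "range"),
    "acid_number_mgKOHg": (0.50, "max"),
    "total_glycerin_masspct": (0.240, "max"),
    "cetane_number": (47, "min"),
}

def check_sample(sample):
    """Return a dict mapping property -> True (passes) or False (fails)."""
    results = {}
    for prop, (limit, kind) in D6751_LIMITS.items():
        value = sample[prop]
        if kind == "max":
            results[prop] = value <= limit
        elif kind == "min":
            results[prop] = value >= limit
        else:  # "range": limit is a (low, high) pair
            low, high = limit
            results[prop] = low <= value <= high
    return results

# Hypothetical sample values, for illustration only:
sample = {
    "flash_point_C": 130,
    "water_sediment_volpct": 0.02,
    "kinematic_viscosity_mm2s": 4.2,
    "acid_number_mgKOHg": 0.35,
    "total_glycerin_masspct": 0.20,
    "cetane_number": 52,
}
print(check_sample(sample))
```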
There are advantages and disadvantages to using biodiesel compared to ultra-low sulfur diesel. It has a higher lubricity, low sulfur content, and low CO and hydrocarbon emissions. This makes it good to blend with diesel from petroleum to be able to achieve the required specifications for ultra-low sulfur diesel, because ultra-low sulfur diesel has poor lubricity. But as discussed previously, biodiesel has poor cold weather properties. It really depends on the location; for instance, if using biodiesel in the upper Midwest, there could be problems in the winter.
As with all fuels, the production and quality of biodiesel are important. Most importantly, the transesterification reaction should reach completion for the highest yield and quality. Due to the nature of transesterification of triglycerides, a small amount of tri-, di-, and mono-glycerides remains. Figure 9.15 shows how these compounds change as the glycerides react to form biodiesel. Some terminology to be aware of: 1) bound glycerol is glycerol that has not been completely separated from the glycerides and is the sum of the tri-, di-, and mono-glycerides, and 2) total glycerol is the bound glycerol plus the free glycerol.
Glycerol content in biodiesel must be as low as possible, as the ASTM standards state. The fuel is not legally "biodiesel" unless the ASTM standards are met, which means staying below the total glycerol specification. High glycerol content raises viscosity and may contribute to deposit formation and filter plugging. Crude glycerol is often a dark brown color and must be refined and purified before use elsewhere. During storage, brown layers can form in the biodiesel, along with white flakes or sediment formed from saturated monoglycerides, which settle to the bottom of the storage tank.
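Using the terminology above, total glycerol follows directly. Note that the text's simplified definition of bound glycerol (the plain sum of the remaining glycerides) is used here; the full ASTM D6584 calculation applies a weighting factor to each glyceride class, which this sketch omits:

```python
def total_glycerol(free, mono, di, tri):
    """Total glycerol = free glycerol + bound glycerol (all in % mass).

    Bound glycerol is taken here, per the text's simplified definition,
    as the sum of the mono-, di-, and tri-glyceride contents.
    """
    bound = mono + di + tri
    return free + bound

# Illustrative numbers (% mass), not measured data:
total = total_glycerol(free=0.015, mono=0.10, di=0.05, tri=0.03)
print(f"Total glycerol: {total:.3f} % mass")
print("Meets the D6751 limit (<= 0.240 % mass):", total <= 0.240)
```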
Biodiesel is also a great solvent, better than petroleum-based diesel. It can loosen carbon deposits and varnishes that were deposited by petro-diesel and can cause fuel-filter plugging when switching over to biodiesel. Filters should be changed after the first 1,000 miles with biodiesel.
Another issue for biodiesel is cold weather properties, which include cloud point, pour point, and cold soak filtration. Biodiesel reaches its cloud point at a much higher temperature than petro-diesel, close to the freezing point. The cloud point is the temperature at which wax crystals begin to form; it can cause the biodiesel to gel and flow more slowly than it should. Below the pour point (at which the fuel is essentially frozen), the fuel no longer flows at all. Whether the fuel can be used neat or must be blended with petrodiesel depends on the normal temperature of the climate. The fatty acid profile complicates matters further: high saturated fatty acid content gives better fuel stability but higher pour points, while high unsaturated fatty acid content gives lower pour points but poorer storage stability. Figure 9.17 shows a pour point comparison of biodiesels made from various oils (including fatty acid content) against petrodiesel; petro-diesel pour points are significantly lower than those of the biodiesels.
Cetane number is also an important property for diesel fuels. It measures how readily a fuel ignites under compression, which is exactly what a diesel engine requires. The higher the cetane number, the greater the ease of ignition. Most petro-diesel fuels have a cetane number of 40-50 and meet the ASTM D975 specification. In general, most biodiesels have higher cetane numbers, 46-60 (some as high as 100), and meet the ASTM D6751 specification. Because of the higher cetane number, an engine running on biodiesel starts more easily and idles more quietly. Table 9.5 shows the heats of combustion for various fuels along with their cetane numbers.
Table 9.5: Various biodiesels and No. 2 diesel heats of combustion and cetane number (Credit: National Biodiesel Education Program)
Fuel Heat of Combustion (MJ/kg) Cetane No.
Methyl Ester (Soybean) 39.8 46.2
Ethyl Ester (Soybean) 40.0 48.2
Butyl Ester (Soybean) 40.7 51.7
Methyl Ester (Sunflower) 39.8 47.0
Methyl Ester (Peanut) - 54.0
Methyl Ester (Rapeseed) 40.1 -
No. 2 Diesel 45.3 47.0
If full-strength biodiesel (i.e., B100) is used, most engine warranties will not cover it, and older engines will need their rubber seals replaced. Common blends include B2, B10, and B20 (2%, 10%, and 20% biodiesel, respectively). Blending biodiesel into ultra-low sulfur diesel improves the fuel's lubricity, which reduces engine wear. Emissions of hydrocarbons, CO, NOx, and particulate matter are similar to petrodiesel fuels, although they can be reduced in some cases.
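Blend designations like B2, B10, and B20 translate directly into mixing volumes (a minimal sketch):

```python
def blend_volumes(total_liters, b_number):
    """Volumes of biodiesel and petrodiesel for a BXX blend.

    b_number is the percent biodiesel by volume (e.g., 20 for B20).
    """
    biodiesel = total_liters * b_number / 100
    petrodiesel = total_liters - biodiesel
    return biodiesel, petrodiesel

bio, petro = blend_volumes(50, 20)  # 50 L of B20
print(f"B20: {bio:.1f} L biodiesel + {petro:.1f} L petrodiesel")
# B20: 10.0 L biodiesel + 40.0 L petrodiesel
```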
Biodiesel is stored very similarly to petrodiesel. It is stored in clean, dark, and dry environments. It can be stored in aluminum, steel, fluorinated polyethylene, fluorinated polypropylene, and Teflon types of containers. It is best to avoid copper, brass, lead, tin, and zinc containers.
In another lesson, we will discuss the economics behind using biodiesel.
Homework (Quiz)
Watch the video on biodiesel production. It is in three parts, so watch all three parts. It will take less than 30 minutes and will give you a first-hand view of how biodiesel is made in a batch process.
Part 1
Biodiesel Production Demonstration
Click here for a transcript of Biodiesel Production
MARK HALL: Hello. I'm Mark Hall of the Auburn University Extension Renewable Energy Specialist. We're doing several of these things on energy options that you can do, several pieces that, each piece of the puzzle, that you can contribute to our energy independence by making ethanol, making biodiesel, being more energy efficient in how you operate your home.
Today, we're going to talk about making biodiesel. And we have Lance Hall. Lance has been making biodiesel to run in his car. He bought a used Volkswagen off eBay and started making biodiesel. And he's liked it so much that he's bought a new diesel car. And he's been real successful doing this for a couple of years.
Before we bring Lance in, I'd like to thank my friend and coworker Walter Harris, the county agent coordinator in Madison County, for filming us today. Lance, come in and show us what you've been doing. And congratulations. You've been successful doing this.
I was talking to my daddy about my new job several years ago. And he said, well, Lance has been doing that for a long time. I said, what? I didn't know that. So Lance, show people how to make biodiesel.
LANCE HALL: OK. A lot of people know about the biodiesel. They've read the stories. They've done some research. But, yet, they still don't have enough confidence in their ability to actually make a batch. So I'm going to show you today on how to make a batch of biodiesel, just small scale, but it's easy.
OK. The first thing that we're going to do is start off with vegetable oil. Now, this will be 800 milliliters. And don't be confused between the milliliters and your normal units of measure. It's a simple conversion that anybody can do with a handheld calculator.
So we've got 800 milliliters here. Well, first thing we want to do is heat it. Now, don't be concerned about this fancy piece of equipment, either. The main element of this is to heat it any way that you can safely.
And these things here are magnetic stirrers. Again, don't be concerned with this. Just stir it while you're heating it to even things out. And we're going to heat this up to about 130 degrees Fahrenheit.
MARK HALL: Lance, tell them about where you get this equipment.
LANCE HALL: All of this equipment that I've got in my shop, all my lab stuff -- eBay is a wonderful place to find used lab supplies, lab glass. These are magnetic stirrer plates. These are really handy to have if you have the means to buy them. You don't have to have them, of course. But I like to use them.
And this is also an electronic scale that comes in handy when you start weighing out your catalyst, doing anything that you want to measure a precise weight. That's worth the money there. And that's going to take a little while, so--
MARK HALL: Lance, is there any other sites, internet sites that you would recommend for people that are interested in making biodiesel?
LANCE HALL: There are several sites out there. One of the most informative on what biodiesel is, where it's being used, is biodiesel.org. That's the National Biodiesel Board website, lots of good information there. It won't really tell you as much how to make it, but hopefully, this will be one of the more informative sites that you'll actually be able to see somebody make one, make a batch.
OK. As our oil is heating up, we have to mix up our methanol potassium hydroxide mixture. So safety is paramount with the use of methanol or the strong caustic lye potassium hydroxide. Methanol can cause blindness or death, and it can be absorbed through the skin. And the potassium hydroxide will burn your skin if it gets on you.
So here's what we're going to do. We're going to take our methanol and we're going to pour this into a container. Face shields are good, too.
We're going to use 175 milliliters of the methanol. That's roughly 20% of the 800 milliliters of oil. You usually want to use about a 20% methanol volume compared to the veggie oil volume.
OK. Our next ingredient is our potassium hydroxide. That's our lye. Now, we have to do a quick calculation on how much of this we need to mix with our methanol in order for the reaction to take place.
I've got a nice spreadsheet that I like to use. It's the Biodiesel-o-matic. You can usually find it online from different biodiesel websites. I'm going to pull that up.
OK. We want to use 7 grams of potassium hydroxide per each liter of veggie oil. So you take 7 divided by 0.8. And that gives you 6.4 grams.
Double bag this stuff, or it will absorb moisture. And that will kill your process.
So we're going to use our scale. We're going to zero the container. And then we're going to put 6.4 grams into it. Make sure you have your gloves on.
OK. That's our 6.4 grams. Close this immediately. Keep it double bagged. OK. Now, you're going to take your 6.4 grams of potassium hydroxide and put that into your 175 milliliters of methanol.
Again, you want to stir this. It's not necessary to heat it, though. Just stir. And stir this until at least the potassium hydroxide is completely dissolved into the methanol. You don't want to see any chunks of white potassium hydroxide flakes.
All right. Our potassium hydroxide is fully mixed into our methanol. We want to remove the stir bar. And then we're just going to slowly pour this into our oil as it's being stirred.
Again, you don't have to have fancy equipment. Just pour it in as you're stirring it manually. But the key is to do it slowly.
Credit: Alabama, A&M, & Auburn Universities Extension
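The quantities in Part 1 follow two rules of thumb stated in the video: methanol at roughly 20% of the oil volume, and about 7 grams of KOH per liter of virgin oil. A sketch of that arithmetic (the demonstration's spreadsheet arrives at slightly different rounded amounts, 175 mL and 6.4 g):

```python
def batch_amounts(oil_ml, methanol_frac=0.20, koh_g_per_liter=7.0):
    """Rule-of-thumb reagent amounts for a virgin-oil biodiesel batch.

    methanol_frac: methanol as a fraction of the oil volume (~20%).
    koh_g_per_liter: base KOH dose for virgin oil (no titration needed).
    """
    methanol_ml = oil_ml * methanol_frac
    koh_g = (oil_ml / 1000) * koh_g_per_liter
    return methanol_ml, koh_g

methanol_ml, koh_g = batch_amounts(800)  # the video's 800 mL batch
print(f"Methanol: {methanol_ml:.0f} mL, KOH: {koh_g:.1f} g")
```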
Part 2
Biodiesel Production Demonstration
Click here for a transcript of Biodiesel Production
PRESENTER: You always want to start with a much bigger container than for the amount of oil that you're going to make because you have to add another 20% to that with your methanol. This is a 1,000 milliliter container. So if I would've made a 1,000 milliliters, I would have overflowed it when I put in an extra 200 milliliters of the methanol. OK, we've added about half of our methanol. Let it stir. Then I'm going to add. Add some more.
You can tell when you need to add more. Once the color change goes from an opaque to a translucent. And then when we come back, it will be-- we'll show you how it separates.
OK. We're done with the first batch of bio-diesel. We're making bio-diesel with regular, unused oil. And as you watch this will eventually go down. And it will be clear. And you'll have the separated layers.
So while we're waiting on that to separate out, most people that want to make bio-diesel want to make it out of recycled veggie oil from restaurants or cafes or whatever it may be. There's a few things we need to do before we start making that though. The most important thing you need when you're using with recycled veggie oil is the titrant solution. This is going to determine-- this is going to help us determine how much extra catalyst we need to neutralize the free fatty acids that are in the vegetable oil that are made during the cooking process.
So we're going to start on our scale over here in at least a one liter jar. I'm going to zero that out. We need one gram of our potassium hydroxide. Now we want to add 1,000 grams of water, which is extremely close to one liter, or 1,000 milliliters. So we're going to add that to our current jar. Oh, maybe. OK, there's our one liter of titrant solution. OK.
So we've made our solution. We're going to heat our 800 milliliters again -- because we're only using 1,000 milliliter beakers -- to 140 degrees like we did earlier. I'm going to wait while that heats up. Bio-diesel is messy. Be prepared to have all kinds of paper towels and soap and everything else to clean up. OK, while that's heating up, we're going to do the titration to determine how much we need to add.
So we're going to use this. This is a titration burette. These can be found on eBay for really cheap, and they're really precise. So I recommend at least getting this. OK, we're going to take a small container. And then what we want to do is measure out 10 milliliters of isopropyl alcohol, also known as ISO-HEET. The regular HEET version is not isopropyl. We're going to get 10 milliliters. We've got our 10 milliliters of isopropyl alcohol. Now we need one milliliter of the oil that we're about to make into bio-diesel.
And this is-- you can find these just about anywhere. These are turkey flavor injectors from any grocery store. Don't use them as turkey injectors once you make bio-diesel with them. Get the air out. And these are graduated in 1 milliliter, so we want to move from one line to the next line. OK. Put the rest back. So we've got 10 milliliters of isopropyl alcohol and we've got 1 milliliter of our veggie oil in it. Now we need an indicator solution to know when the pH has changed to the appropriate level. So we'll put in about four drops of a phenolphthalein solution. This can be purchased off of eBay. A couple little bottles go a long way.
And as with everything else, I have stir bar for this. And a bug. So we'll let the oil dissolve into the isopropyl alcohol. OK. Now we're going to use this. These are graduated in 0.1 milliliter increments. So it's very precise. So I'm going to run just briefly some distilled water though it just to wash out any contaminants. And then we're going to pour in our titrant solution, our 0.1 potassium hydroxide titrant solution. And we want to do this a couple of times to make sure that the water that may be left in this does not affect our readings.
OK. We've washed our titration burette with our titrant solution. Now what we're going to do is slowly add the titrant solution to our isopropyl alcohol and veggie oil mixture down here. The indicator solution that we put in, the phenolphthalein, will turn a bright pink when we reach a certain pH, which is the pH that we want. So as soon as it turns pink and stays pink, that's when we stop.
Credit: Alabama, A&M, & Auburn Universities Extension
Part 3
Biodiesel Production Demonstration
Click here for a transcript of Biodiesel Production
LANCE HALL: OK you can see it's remaining pink, now. So that gives us about 4 milliliters of our solution went in to change this from a dull yellow to a pink.
So we're going to put this in our spreadsheet.
I'm using the spreadsheet. You can also make a worksheet that you just write your numbers down. And you use a standard calculator for.
So as we did before, we're going to use same jar. This time, we're going to put 9 grams of our potassium hydroxide.
OK, we've got our 9 grams. Now, we're going to add 175 milliliters of methanol.
OK, we're going to stir that up again.
All right, so we've got our methanol and potassium hydroxide mixture. And then, we're going to do the same thing, just add it slowly to our waste veggie oil, recycled veggie oil, however you want to say it. And you'll be able to know when you need add more by the color change.
I don't know if the camera can see it or not, but while you're looking right at through the top, you can tell when it turns translucent. And see how it lightened it up. It's not quite as clear.
All right, you can tell how it's lightened up there, from adding some to it. That will turn to a darker translucent as it gets mixed in. OK, we're going to let that mix for a few minutes. And then, when we come back, we'll show you how it separated.
Just as a demonstration, this is a batch that I made, a larger batch, that I made a couple of weeks ago. And you can see how clear it actually gets after a little bit of time of settling. This is almost, if not just as, clear and the same color as the virgin veggie oil. So once you get your process down, you should be able to make just as high quality stuff with the waste veggie oil as you do with the virgin.
OK, once you feel comfortable with your biodiesel-making technique, it's time to step up to a reactor. I've got five tanks in my system. This is an oil processing tank. When I first get my waste veggie oil, I pour it through a large strainer to catch all the big bits of French fries, and tater tots, and chicken nuggets, and everything else.
And I heat it up. So I can put it into my storage tank. I have to heat it because it's got another filter on the inside of this tank to filter even the smaller bits out. And it has to be heated up to go through that filter. So once we're done with it in storage, and we're ready to make a batch, it comes to this 60 gallon full drain inductor tank.
It's got a large lid on it. So I put my veggie oil in here. This tank is the methanol lye mixture tank. Only methanol and lye's ever in this tank. Once I figured out how much lye and methanol I need for my batch of oil, I mix it up in here. And then it's mixed in the pump and also heated. And it reacts in this tank.
And once it's done reacting, I will drain the glycerine off of the bottom using these series of valves. And then I'll do whatever means necessary to get the methanol and other nasties out of it. And once it's done, and I'm happy with it, it comes to this storage tank. And once it's in the storage tank, I can then put it in my tractor, car, whatever.
MARK HALL: Walter, Lance, thank you so much for showing us how to make biodiesel today. Wherever you live, you can make a difference in our nation's energy crisis. Please look and do what you can to lessen our dependence on foreign oil. Look at our website, and you'll see more opportunities to save renewable energy. For the Alabama Cooperative Extension system, this is Mark Hall.
Credit: Alabama, A&M, & Auburn Universities Extension
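The catalyst adjustment for waste oil demonstrated in Parts 2 and 3 follows a common hobbyist titration rule: grams of KOH per liter of oil = base dose (7 g here) + mL of 1 g/L KOH titrant needed to neutralize 1 mL of oil. This is an assumption about what the video's spreadsheet computes, but it is consistent with the numbers seen on screen:

```python
def koh_for_waste_oil(oil_liters, titration_ml, base_g_per_liter=7.0):
    """KOH dose for transesterifying used vegetable oil.

    titration_ml: mL of 1 g/L KOH titrant needed to neutralize 1 mL of
    oil dissolved in isopropyl alcohol (the titration in Part 2).
    Rule of thumb: (base + titration_ml) grams of KOH per liter of oil.
    """
    return (base_g_per_liter + titration_ml) * oil_liters

# Part 3: ~4 mL of titrant for a 0.8 L batch of waste oil
print(round(koh_for_waste_oil(0.8, 4.0), 1))  # ~8.8 g, close to the 9 g weighed out
```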
9.06: Summary and Final Tasks
Summary
In this lesson, you’ve learned how to make biodiesel from vegetable oils. You’ve had the opportunity to see how it is made and that it’s fairly simple to make. You’ve also been provided with information on biodiesel properties and how biodiesel is typically used. In a future lesson, you will learn about the economics behind biodiesel and ethanol, as well as how much energy it takes to make biodiesel versus the amount of energy produced. That additional information is important for determining the use of, and best practices for making, alternative fuels.
References
Pryor, S. W. (Department of Agricultural and Biosystems Engineering, North Dakota State University), He, B. B., and Van Gerpen, J. H. (Department of Biological and Agricultural Engineering, University of Idaho). BEEMS Module B4: Biodiesel. USDA Higher Education Challenge Program, 2009-38411-19761. Contact: Scott Pryor, [email protected]
Reminder - Complete all of the Lesson 9 tasks!
You have reached the end of this Lesson! Double-check the Road Map on the Lesson Overview page to make sure you have completed all of the activities listed there before you begin the next Lesson.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum, and/or set up an appointment for office hours. While you are there, feel free to post responses to your classmates if you are able to help.
10.1 Introduction
Algae grow from sunlight, water, CO2, and nutrients, starting from an algal culture. There are more than 30,000 species of algae. Two of the major factors in using algae to generate fuels are choosing the best species for oil production and developing methods for removing the oil and converting it into a fuel. Fuels that can be made from algal oil include biodiesel, n-alkane hydrocarbons, ethanol, methane, and hydrogen. Algae can also be used as soil conditioners and agrochemicals (such as fertilizers), and as sources of fine chemicals and bioactive substances such as polysaccharides, antioxidants, omega-3 and omega-6 fatty acids, proteins, and enzymes.
There are currently several applications for algae including: 1) algin – a thickening agent for food processing (brown algae), 2) carrageenan – foods, puddings, ice cream, toothpaste (red algae), 3) iodine (brown algae), 4) agar – growth media in research (red algae), 5) as food (red and brown algae), 6) plant fertilizers, and 7) diatomaceous earth – used for filtering water, insulating, soundproofing. Table 10.1 shows some additional applications detailing the species, end product, origin, and main way to culture the algae.
Table 10.1: Current commercial applications of algae
Species | End product | Origin | Main culture systems
Chlorella spp. | Health food | Germany, Indonesia, Japan | Tubular photobioreactors; circular pivot ponds; raceway ponds
Spirulina spp. | Health food | China, India, Japan, Thailand, USA | Raceway ponds
Dunaliella salina | β-carotene | Australia, India | Extensive open ponds; raceway ponds
Haematococcus pluvialis | Astaxanthin | Israel, USA | Photobioreactors; raceway ponds
Crypthecodinium cohnii | DHA | USA | Heterotrophic cultivation (glucose)
Chaetoceros spp., Nannochloropsis spp., Navicula spp., Tetraselmis spp., Pavlova spp. | Aquaculture feed | Throughout the world | Tanks; bag reactors; raceway ponds
In the aquatic world, algae are photosynthetic organisms that form the base of the food chain. There is a symbiotic relationship between fungi and algae known as a lichen. A lichen is a composite organism that emerges from algae or cyanobacteria living among the filaments of a fungus in a mutually beneficial relationship. Its properties are plant-like, but lichens are not plants. In the partnership, the fungus shelters the algae, pigments screen out harmful amounts of sunlight, and some lichen compounds kill bacteria. Algae can also serve as shelter for other organisms, such as kelp forming underwater forests and red algae forming reefs.
Algae can also have negative impacts via eutrophication. Eutrophication is an ecosystem's response to the addition of natural or artificial substances, mainly phosphates from detergents, fertilizers, or sewage, to an aquatic system. It can produce dense blooms of cyanobacteria or algae. These blooms can cause problems through 1) clogging of waterways, streams, and filters, 2) a decrease in water taste and quality, and 3) potential toxin release. Red tide is one such event, caused by dinoflagellates.
So why make biofuels from algae? There are several reasons. Algae have high lipid content (up to 70%); they grow rapidly and produce 10-100 times more lipids per unit area than terrestrial oil crops. Algae can be grown on non-arable land (land not typically used for farming) using saline or brackish water, so unlike other oil-producing crops, they do not compete with food or feed production. Algae also consume significant amounts of CO2 as they grow, and cultivation can provide nutrient (N, P) removal from agricultural and municipal wastewater. Table 10.2 compares the annual oil yield from a variety of plants and algae. Even microalgae with lower lipid content (30%) will generate 59.00 m3/ha, significantly higher than palm oil. (Mata et al., 2010)
Table 10.2: Oil yields from various plants and microalgae.
Source Annual oil yield (m3/ha)
Corn 0.14
Soybeans 0.45
Sunflower 0.95
Canola (Rape) 1.20
Jatropha 1.90
Palm 5.90
Microalgae (30% lipids) 59.00
Microalgae (50% lipids) 98.00
Microalgae (70% lipids) 140.00
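To put the yields in Table 10.2 in perspective, a simple calculation shows how much land each source would need to supply a given annual oil demand. The sketch below assumes a hypothetical demand of 1,000,000 m³/yr; only the yield values come from the table.

```python
# Land area needed to supply a fixed annual oil demand, using the
# yields from Table 10.2. The demand figure is an illustrative
# assumption, not a value from the text.

yields_m3_per_ha = {
    "Corn": 0.14,
    "Soybeans": 0.45,
    "Sunflower": 0.95,
    "Canola (Rape)": 1.20,
    "Jatropha": 1.90,
    "Palm": 5.90,
    "Microalgae (30% lipids)": 59.00,
    "Microalgae (50% lipids)": 98.00,
    "Microalgae (70% lipids)": 140.00,
}

demand_m3 = 1_000_000  # hypothetical annual oil demand (m^3/yr)

for source, y in yields_m3_per_ha.items():
    area_ha = demand_m3 / y  # area (ha) = demand / yield per hectare
    print(f"{source:>25}: {area_ha:,.0f} ha")
```

Even at 30% lipids, microalgae would need about a tenth of the land area that palm would require for the same oil output.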
10.2 What are Algae?
Algae are eukaryotic organisms, which are organisms whose cells contain a nucleus and other structures (organelles) enclosed within membranes. They live in moist environments, mostly aquatic, and contain chlorophyll.
Algae are not terrestrial plants, which have 1) true roots, stems, and leaves, 2) vascular (conducting) tissues, such as xylem and phloem, and 3) non-reproductive (sterile) cells in their reproductive structures. Algae are also not cyanobacteria. Cyanobacteria are prokaryotes, which lack membrane-bound organelles and have a single circular chromosome. Figure 10.1a shows the cellular composition of blue-green algae (cyanobacteria) and Figure 10.1b shows a micrograph of the cells. The cell has a wall with a gelatinous coat. Just beneath the cell wall is a plasma membrane. Within the cell, there are layers of phycobilisomes, photosynthetic lamellae, ribosomes, protein granules, and circular DNA known as a nucleoid. These are typical components of photosynthetic cells; however, the components we are most interested in are the lipid droplets, which contain oils that can be extracted from the algae.
Algae are composed of approximately 50% carbon, 10% nitrogen, and 2% phosphorus. Table 10.3 shows the composition of various algae in terms of the percentages of protein, carbohydrates, lipids, and nucleic acid.
Table 10.3: Composition of algae – protein, carbohydrates, lipids, and nucleic acid.
Species Protein Carbohydrates Lipids Nucleic acid
Scenedesmus obliquus (green alga) 50-56 10-17 12-14 3-6
Scenedesmus quadricauda 47 - 1.9 -
Scenedesmus dimorphus 8-18 21-52 16-40 -
Chlamydomonas rheinhardii (green alga) 48 17 21 -
Chlorella vulgaris (green alga) 51-58 12-17 14-22 4-5
Chlorella pyrenoidosa 57 26 2 -
Spirogyra sp. 6-20 33-64 11-21 -
Dunaliella bioculata 49 4 8 -
Dunaliella salina 57 32 6 -
Euglena gracilis 39-61 14-18 14-20 -
Prymnesium parvum 28-45 25-33 22-38 1-2
Tetraselmis maculata 52 15 3 -
Porphyridium cruentum (red alga) 28-39 40-57 9-14 -
So what are the characteristics of algae?
1. Eukaryotic organisms:
As mentioned above, algae are eukaryotic organisms. The structure of a eukaryote (a typical plant cell) is shown in Figure 10.2a. Figure 10.2b shows the cell structure of a prokaryote, a bacterium; bacteria are one of the two groups of prokaryotic life. Some do not consider prokaryotic cyanobacteria to be true algae because they have a different structure, but most include them in the family of algae. There are labels for the different parts of the organisms, but I will not require you to know this information in detail - it is there so that if you have a desire to look up more information, you can. Table 10.4 shows a comparison of both these types of cells.
Table 10.4: Comparison of eukaryotic cells and prokaryotic cells.
-- Eukaryotic cells Prokaryotic cells
Size Fairly large in size Very minute in size
Nuclear region Nuclear materials surrounded by a membrane Nuclear region (nucleoid) not surrounded by a nuclear membrane
Chromosome More than one chromosome present Single chromosome present
Membrane Membrane-bound cell organelles are present Membrane-bound cell organelles are absent
2. Live in moist environments
These organisms lack a waxy cuticle (in terrestrial plants, the waxy cuticle prevents water loss). There is a wide variety of growth environments for algae. Typical habitats are moist, tropical regions, and algae can grow in both marine and fresh water. Freshwater algae grow on animals and aquatic plants, and in farm dams, sewage, lakes, rivers, lagoons, snow, mud/sand, and soil.
3. Contain chlorophyll
Algae are mostly photosynthetic, like plants. They have five kinds of photosynthetic pigments (chlorophylls a, b, c, d, and f) and many accessory pigments that are blue, red, brown, and gold. Chlorophyll is a green pigment found in almost all plants, algae, and cyanobacteria. It absorbs light, and the captured light energy is used to form ATP (adenosine triphosphate).
So how are algae classified?
Algae belong to the Protista kingdom. Figure 10.3 shows a schematic of where Protista fits with other classifications of Plantae, Animalia, Fungi, Eubacteria, and Archaebacteria.
Algae can also be classified based on chlorophyll content. The first type is the chromista. These algae contain chlorophylls a and c; examples include brown algae (golden-brown algae), kelp, and diatoms. The brown algae make up the division Phaeophyta. They inhabit rocky coasts in temperate zones or open seas (cold waters). They are multicellular and can grow up to 50 m long.
Red algae are another type and contain chlorophyll a, such as marine algae (seaweed). These organisms are in the division of Rhodophyta, which has over 4000 species. These are some of the oldest eukaryotic organisms on Earth (there are 2 billion-year-old fossils). They are abundant in tropical, warm waters. They act as food and habitat for many marine species. The structure ranges from thin films to complex filamentous membranes. These algae have accessory pigments, and the phycobilins (red) mask chlorophyll a. Figure 10.5b shows various red algae. Dinoflagellates are unicellular protists, and these are associated with red tide and bioluminescence.
Green algae contain chlorophylls a and b. They are in the division Chlorophyta. This is the largest and most diverse group of algae. They are found mostly in fresh waters and also on land (rocks, trees, and soil). The structures include single cells (Micrasterias), filamentous algae, colonies (Volvox), and leaf-like thalli. Terrestrial plants arose from a green algal ancestor; both have the same photosynthetic pigments (chlorophylls a and b), and some green algae have a cell wall made of cellulose, similar to terrestrial plants. Figure 10.5c shows examples of green algae.
10.3 Algae Growth and Reaction Conditions
There are two primary ways that algae reproduce. Some algae are unicellular and demonstrate the simplest possible life cycle (see Figure 10.6a). Note that there is a generative phase and a vegetative phase. During the generative phase, cysts are freed. The cysts open to form gametes, which then form the zygote. From there, the vegetative phase occurs: the plant grows and new cysts form. Most algae have two recognizable phases, sporophyte and gametophyte. Figure 10.6b shows a schematic of the two phases. The main difference is that male and female gametes are required to form the zygote. I will not expect you to know the details in depth, but I want you to recognize that there are differences.
Algae follow a characteristic growth path, beginning with a lag phase and continuing through an exponential phase, a linear phase, a stationary phase, and a decline (death) phase. Figure 10.7 shows a schematic of the algal growth rate in a batch culture.
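Batch growth curves of this kind are often approximated with a logistic model, in which growth is near-exponential at low cell density and levels off as the culture approaches its maximum density. The sketch below uses made-up example parameters (X0, mu_max, X_max) and does not capture the final death phase.

```python
import math

# Illustrative logistic model of batch algal growth, covering the lag,
# exponential, and stationary phases of Figure 10.7. All parameter
# values are invented for illustration, not measured data.
def biomass(t_days, X0=0.02, mu_max=1.0, X_max=2.0):
    """Biomass (g/L) from the logistic law dX/dt = mu_max*X*(1 - X/X_max)."""
    return X_max / (1 + (X_max / X0 - 1) * math.exp(-mu_max * t_days))

for t in range(0, 15, 2):
    print(f"day {t:2d}: {biomass(t):.3f} g/L")
```

Fitting mu_max (the maximum specific growth rate) to the exponential portion of a measured curve is a common way to compare culture conditions.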
Several factors influence the growth rate. The optimal temperature varies with algal species; for phytoplankton cultures, it is generally 20-30°C. Temperatures higher than 35°C can be lethal for a number of algal species, especially green microalgae, while temperatures lower than 16°C will slow down growth.
Light also affects the growth of algae: it must be neither too strong nor too weak. In most algal cultivation, algae need only about 1/10 of direct sunlight. In most water systems, light penetrates only the top 7-10 cm, because the bulk algal biomass blocks light from reaching deeper water.
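This limited penetration can be pictured with a Beer–Lambert attenuation law, in which light intensity falls off exponentially with depth. The attenuation coefficient k below is an assumed value chosen so that only about 5% of surface light remains at 10 cm, roughly matching the depth quoted above; real values depend on cell density and pigmentation.

```python
import math

# Beer-Lambert attenuation of light in a dense algal culture.
# k (1/cm) is an assumed coefficient for illustration only.
def light_fraction(depth_cm, k=0.3):
    """Fraction of surface light remaining at a given depth."""
    return math.exp(-k * depth_cm)

for d in (0, 5, 10, 20):
    print(f"{d:2d} cm: {light_fraction(d) * 100:5.1f}% of surface light")
```

This is one reason raceway ponds are kept shallow and well mixed: cells must circulate through the thin, well-lit surface layer.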
Mixing is another factor that influences the growth of algae. Agitation or circulation is needed to mix algal cultures. An agitator is used for deep photo reactor systems. Paddle wheels are used for open pond systems. And pump circulation is used for a photo-tube system.
Of course, algae need nutrients and the proper pH to grow effectively. Autotrophic growth requires carbon, hydrogen, oxygen, nitrogen, phosphorus, sulfur, iron, and trace elements. The compositional formula CO0.48H1.83N0.11P0.01 can be used to calculate the minimum nutrient requirement. Under nutrient-limiting conditions, growth is reduced significantly and lipid accumulation is triggered. Algae prefer a pH from neutral to alkaline.
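The compositional formula can be turned into a minimum nutrient requirement by converting its mole ratios to mass fractions with standard atomic masses. The sketch below estimates the grams of N and P needed per kilogram of algal biomass; note this is the stoichiometric minimum, which is lower than the typical elemental percentages quoted earlier.

```python
# Minimum N and P demand per kg of algal biomass, estimated from the
# compositional formula CO0.48 H1.83 N0.11 P0.01 given in the text.
ATOMIC_MASS = {"C": 12.011, "O": 15.999, "H": 1.008, "N": 14.007, "P": 30.974}
FORMULA = {"C": 1.0, "O": 0.48, "H": 1.83, "N": 0.11, "P": 0.01}

# Molar mass of one "formula unit" of biomass (per mole of carbon)
molar_mass = sum(n * ATOMIC_MASS[el] for el, n in FORMULA.items())

for el in ("N", "P"):
    grams_per_kg = 1000 * FORMULA[el] * ATOMIC_MASS[el] / molar_mass
    print(f"minimum {el}: {grams_per_kg:.1f} g per kg of biomass")
```

This kind of calculation is useful for sizing nutrient dosing in a pond, or for estimating how much N and P a culture can strip from wastewater.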
There are particular steps in algal biodiesel production. Figure 10.8 shows the processing steps. The first step is the cultivation of algae, which includes site selection, algal culture selection, and process optimization. Process optimization includes design of the bioreactor and the components necessary for algal cell growth (nutrients, light, and mass transfer). Once the algae grow to the necessary level, they are harvested. The harvested biomass must then be dewatered, thickened, and dried so that the oil can be extracted and processed into biodiesel. The biomass processing differs depending on the method of oil extraction and biodiesel production. You primarily learned about transesterification to make biodiesel in Lesson 9, but other processes are being researched and developed.
10.4 Design of Algae Farms
Site selection is an important consideration. The best areas to grow algae have adequate sunlight year round, with tropical or subtropical climates; in the US, this includes Hawaii, California, Arizona, New Mexico, Texas, and Florida. These regions also have moderate temperatures year round. There must also be adequate land availability (for open-pond systems) and close proximity to a CO2 source (i.e., near a power plant or gasifier). To keep costs at a minimum, water and nutrients must be available at low cost and manpower kept at reasonable rates.
There are two main types of culturing technologies: open systems and closed systems. Open systems include tanks, circular ponds, and raceway ponds. Closed systems include three different types: flat-plate, tubular, and vertical-column enclosed systems. Figure 10.9 shows several different examples of open and closed systems.
Open systems can be in natural waters or specifically engineered to grow algae. Natural water systems include algae growth in lakes, lagoons, and ponds, while the engineered systems are those described in the previous paragraph: tanks, circular ponds, and raceway ponds. Of course, there are advantages and disadvantages of open systems. The main advantages are that open systems are simple in design, require low capital and operating costs, and are easy to construct and operate. The disadvantages include: little control of culture conditions, significant evaporative losses, poor light utilization, expensive harvesting, use of a large land area, a limited range of suitable algal species, problems with contamination, and low mass transfer rates. One of the more common designs is the raceway pond. It has existed since the 1950s and is a closed-loop recirculation channel for mass culture. The design includes a paddlewheel for mixing and recirculation and baffles to guide the flow at bends; algal harvesting is done behind the paddlewheel. Cyanotech has a field of raceway ponds located in Kona, Hawaii, with a wide variety of algae.
So, what are some of the design features to keep in mind with algae systems? Algal systems are phototrophic, which means they need to obtain energy from sunlight to synthesize organic compounds. Therefore, the growth rate depends on: light intensity, temperature, and substrate concentration, as well as pH and species type.
There are also factors that affect how specific systems are designed. Open pond design is affected by the pond size, the mixing depth, the paddle wheel design, and the carbonator. The carbonator is how carbon is added to the algae; this can be done in a number of ways, including carbonaceous seed materials, but utilizing CO2 from power systems (generated from combustion of carbon-based fuels) is one of the more common approaches for algae growth - it also mitigates greenhouse gas (GHG) emissions. We will not go into ways to design these systems, as that is beyond the level of this course.
There are also a variety of closed systems. One type is the photobioreactor (PBR). Advantages of such a system include: 1) compact design, 2) full control of environmental conditions, 3) minimal contamination, 4) high cell density, and 5) low evaporative losses. The disadvantages include: 1) high production costs, typically an order of magnitude higher than open ponds, 2) overheating, and 3) biofouling. One company with such systems is Algatechnologies, which has a plant located in Kibbutz Ketura, Israel. Figure 10.10 shows a picture of the various algae they are growing in Israel.
There are three types of designs for PBRs: flat plate, tubular, and vertical column. Advantages of the flat plate PBR include: 1) large surface area, 2) good light path, 3) good biomass productivity, and 4) low O2 build-up. The drawbacks include: 1) difficulty in scaling up, 2) difficulty in controlling temperature, and 3) algae wall growth. A flat plate PBR is shown in Figure 10.9c. For the tubular PBR, advantages include: 1) good biomass productivity, 2) good mass transfer, 3) good mixing and low shear stress, and 4) reduced photoinhibition and photooxidation. Tubular PBRs also have disadvantages: 1) gradients of pH and dissolved O2 and CO2 along the tubes, 2) O2 build-up, 3) algae wall growth, 4) the requirement of a large land area, and 5) a decrease of illumination surface area upon scale-up. Figure 10.10 shows examples of tubular PBRs. Vertical column PBRs have different advantages and disadvantages. The positive features include: 1) high mass transfer, 2) good mixing and low shear stress, 3) low energy consumption, 4) high potential for scalability, 5) easy sterilization, and 6) reduced photoinhibition and photooxidation. The negative features are: 1) small illumination surface area, 2) the need for sophisticated construction materials, and 3) a decrease of illumination surface area upon scale-up. Figure 10.11 shows a vertical column PBR.
We can compare open and closed systems by looking at various parameters. Table 10.5 provides a list of parameters for each type of system. Open systems tend to cost less, but process control is difficult and growth rates are lower. Closed systems have a much higher cost, but control is much better and productivity is therefore higher.
Table 10.5: Comparison of open and closed systems for growth of algae.
Parameters Open systems Closed systems
Contamination High Low
Process control Difficult Possible
Species control Not possible Possible
Mixing Not uniform Uniform
Footprint Extremely high Very low
Area/volume ratio Low (5 to 10 m-1) High (20-200 m-1)
Capital cost Low High
Operation cost Low High
Water losses Very high Low
Light utilization Low High
Productivity Low High (3-5 times)
Biomass conc. Low High (3-5 times)
Mass transfer Low High
10.5 Algae Harvesting and Separation Technologies
The following video was produced by Los Alamos National Laboratory (LANL) in New Mexico. The video provides a nice overview of how algae are grown, how they are harvested, and the areas of research LANL is focusing on to make various aspects of the process more economical (3:12).
Turning Algae into Energy
Click here for transcript of the Turning Algae into Energy video.
RICHARD SAYRE: The upside of renewable fuels is that they're sustainable. They reduce the environmental impact, and they can help potentially mitigate climate change. We particularly like algae as biomass or bio fuel feedstock. Algae grow about two to 10 times faster than the best terrestrial crop plants.
They often will store oils as an energy reserve product. And oils that come out of these algae, we've found, can be directly converted into fuels, using preexisting technologies.
JOSE OLIVARES: So the laboratory is interested in this area because we have a mission around energy security, providing new technologies for energy for the nation. The big problem, the big challenge is how to get that whole process to be economically and energetically efficient.
RICHARD SAYRE: To make algal biofuels economically viable, there are two very important factors that we have to improve, and that's the biomass productivity per unit land area or the yield. And the other very important factor that we need to improve is reducing the costs of harvesting the algae from the pond.
JOSE OLIVARES: The laboratory is actually developing some nice technologies in a number of different areas, transforming the algae so that it can produce more lipids, more biomass, overall better productivity, under better conditions.
PETER LAMMERS: You've seen how we transfer the algae from the lab from, colonies on a Petri dish to larger cultures. We bring them out here, adapt them to the outdoors and the sunshine. We begin to scale them up. Pretty soon, we'll have algae at hundreds of acres, if not thousands of acres.
RICHARD SAYRE: Another important concern is water, how much water are we going to use. And to address those issues, we're now focusing on developing heat tolerant strains of algae that can be grown in ponds that are covered with plastic to reduce the evaporation. We've figured out how to engineer algae so they can use light more efficiently than normal algae do. We've seen up to a two-fold increase in growth.
We've also figured out how to engineer algae to make more oil. So at the time that we want to harvest the algae, we'll induce the expression of a gene that will cause all of the algae to stick to each other, settle out of the pond and then we pick them up. Maybe the last reason why we like algae is that we can recycle the nutrients that are in waste waters.
PETER LAMMERS: Algae can do wastewater treatment better than conventional systems. So why not take an energy-intensive, expensive process and turn it into an energy-generating system where you're getting clean water and liquid fuels as your two products, and do that in a way that generates revenue rather than consumes revenue.
Credit: Los Alamos National Lab
Algae are typically in a dilute concentration in water, and biomass recovery from a dilute medium accounts for 20-30% of the total production cost. Algae can be harvested using: 1) sedimentation (gravity settling), 2) membrane separation (micro/ultra filtration), 3) flocculation, 4) flotation, and 5) centrifugation.
Sedimentation is the initial phase of separating the algae from water. Once agitation is completed, the algae are allowed to settle and densify. However, other methods most likely will also be required to achieve complete separation.
Membrane separation is a form of filtration. In the lab, a funnel is attached to a vacuum flask; the contents are poured onto the filter in the funnel and allowed to dry somewhat on the filter as the vacuum continues to be pulled. This method can be used to collect microalgae at low density, but is typically done on a small scale. The main disadvantage is membrane fouling. There are three modifications: 1) reverse-flow vacuum, 2) direct vacuum with a stirring blade above the filter, and 3) belt compression.
Flocculation is another technique. In flocculation, something is added to the mixture of water and algae that causes the algae to “clump” together (aggregate) into flocs. Chemical flocculants include alum and ferric chloride. Chitosan is a biological flocculant, but has a fairly high cost. Autoflocculation is the introduction of CO2 to an algal system to cause the algae to flocculate on their own. Flocculation is often used in combination with a filter compressor as described in the previous paragraph.
Froth flotation is another method for harvesting and separating algae from water. This technique has been used in coal and ore cleaning for many years; it is based on density differences between materials. Typically, air bubbles are introduced into the unit, and sometimes an additional organic chemical or an adjustment of pH will enhance separation. In algal systems, the algae accumulate in the froth of bubbles at the top, and the froth is collected or scraped off to separate the algae from the water. At this point, the technology may be too expensive to use commercially. Froth flotation can also be combined with flocculation: for example, when alum is used as a flocculant, air is bubbled through to separate the flocs by density. It can also be combined with a filter compressor.
One of the more commonly used machines is a continuous-flow centrifuge. It is efficient and collects both algae and other particles. However, it is more commonly used for production of value-added products from algae and not for fuel generation.
Along with these separation techniques, moisture needs to be removed from the algae to improve shelf life. Algae are concentrated from water through a series of processes, including the separation processes above. The concentration of algae in the pond starts at about 0.10-0.15% (v/v). After flocculation and settling, the concentration increases to 0.7%. A belt filter then increases the concentration to 2% (v/v). Drying the algae from 2% to 50% (v/v) requires almost 60% of the energy content of the algae, which makes drying a major cost factor in algae utilization.
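The concentration steps above imply handling enormous amounts of water. As a rough sketch (treating the quoted v/v percentages as mass fractions, a simplifying assumption), the water that must be removed per kilogram of algae at each stage is:

```python
# Water removed per kg of dry algae as the suspension is concentrated
# through the stages in the text. Treating v/v percentages as mass
# fractions is an assumption made for this order-of-magnitude sketch.
steps = [
    ("pond culture", 0.00125),            # midpoint of 0.10-0.15%
    ("after flocculation/settling", 0.007),
    ("after belt filter", 0.02),
    ("after drying", 0.50),
]

algae_kg = 1.0
for (name_a, c_a), (name_b, c_b) in zip(steps, steps[1:]):
    # total suspension mass at concentration c is algae_kg / c
    water_removed = algae_kg / c_a - algae_kg / c_b
    print(f"{name_a} -> {name_b}: remove {water_removed:,.0f} kg water")
```

Most of the water (hundreds of kilograms per kilogram of algae) is removed in the cheap settling step, which is why the final drying step, though it removes the least water, dominates the energy cost.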
Lipid Separation Technologies
Lipid separation is an important aspect of using algae to generate fuels, and it is likely also an expensive one. The algal cells must be disrupted to release the desired products. Physical methods include: 1) mechanical disruption (i.e., bead mills), 2) electric fields, 3) sonication, 4) osmotic shock, and 5) expeller pressing. There are also chemical and biological methods, including: 1) solvent extraction (single solvent, co-solvent, and direct reaction by transesterification), 2) supercritical fluids, and 3) enzymatic extraction.
Single solvent extraction is one of the more common methods. A solvent that is chemically similar to the lipids is used, such as hexane or petroleum ether (a light petroleum-based solvent); this is a commercial process. Extraction takes place at elevated temperature and pressure. Advantages include an increased rate of mass transfer, better solvent accessibility, and a reduced dielectric constant of the immiscible solvent. A co-solvent process is a little different. Two criteria are used to select the co-solvents: 1) a more polar co-solvent that disrupts the algal cell membrane, and 2) a second, less polar co-solvent that better matches the polarity of the lipids being extracted (alkanes can meet this criterion). There are several examples of co-solvent extraction. One method was developed by Bligh and Dyer in 1959: methanol and chloroform are the solvents, and the majority of the lipids dissolve into the chloroform phase. The polarity interactions are water/methanol > methanol/chloroform > lipid/chloroform. Other combinations of co-solvents include: 1) hexane/isopropanol, 2) dimethyl sulfoxide (DMSO)/petroleum ether, and 3) hexane/ethanol.
Supercritical extraction is similar to solvent extraction. The main difference is that the solvent is held above its critical temperature and pressure, which changes the solvent properties and helps extract the materials. It is often done on a smaller scale and may not be practical at an industrial level.
Enzymatic extraction is also similar to solvent extraction, except instead of a solvent, an enzyme is used to separate the materials.
As discussed in the biodiesel lesson (Lesson 9), the transesterification reaction is often used to convert lipids into fatty acid methyl esters (FAMEs) using an alcohol and a catalyst. The advantages of this method are the high recovery of volatile medium-chain triglycerides and the fact that antioxidants are not necessary to protect unsaturated lipids. There are other methods as well, as discussed near the end of the biodiesel lesson.
Direct Biofuel Production from Algae
Besides separating out the lipids to make diesel fuel, other fuels can be obtained from algae directly. These include alcohols such as ethanol and butanol, hydrogen, and methane.
Alcohols can be made from algae by heterotrophic fermentation (with carbon nutrients from organic materials) of starch to alcohols, including ethanol and butanol. Algae used for this include Chlorella vulgaris and Chlamydomonas perigranulata. The procedure includes starch accumulation via photosynthesis, subsequent anaerobic fermentation under dark conditions to produce alcohol, and extraction of the alcohol directly from the algal culture medium. Hydrogen can also be produced directly from algae through photofermentation and dark fermentation. Methane can be produced by anaerobic digestion of algae, which can be coupled with other processes (using the residue after lipids are removed, for example). One challenge is the high protein content of the biomass, which can result in NH3 inhibition; this can be overcome by co-digestion with high-carbon co-substrates. Figure 10.12 shows a schematic of the different processes to convert algae and the range of fuel products that can be made.
10.06: Assignments
10.6 Assignments
Final Project Rough Draft
Please submit a rough draft of your final project by the specified due date in Canvas. To review, your project should include the following elements:
Format
The report should be 8-12 pages in length. This includes figures and tables. It should be in 12-point font with 1” margins. You can use line spacing from 1-2. It is to be written in English, with proper grammar and as free from typographical errors as possible. You will lose points if your English writing is poor.
The following format should be followed:
• Cover Page – Title, Name, Course Info
• Introduction
• Body of Paper:
• Biomass choice
• Literature review on the biomass (APA style, please!)
• Requirements for location
• Climate (i.e., tropical, subtropical, moderate,…)
• Land area required or other type of facility to grow
• Method of production
• Product markets around location
• Economic evidence
• Other factors (environmental, political, tax issues, etc.)
• Summary and Conclusions
• References
Use your user ID followed by _Draft as the filename (i.e., ceb7_Draft)
Upload it to the Rough Draft Dropbox.
(20 points)
10.07: Summary and Final Tasks
Summary
Algae are an excellent source of lipids for biodiesel production. They can grow under conditions in which terrestrial plants cannot: in water, using CO2 and waste materials. Pairing the proper species of algae with appropriate growing conditions will increase the amount of oil produced. Discussion of the economics of algae production will be included in another lesson, in order to compare it to other sources of biodiesel. For now, I will note that production under highly controlled conditions can be expensive.
References
Mata, T. M., Martins, A. A., & Caetano, N. S. (2010). Renew. Sust. Energy Rev., 14, 217–232.
Liao, W., Khanal, S., Park, S. Y., & Li, Y. (2009). BEEMS Module A3: Algae. USDA Higher Education Challenge Program, 2009-38411-19761.
Reminder - Complete all of the Lesson 10 tasks!
You have reached the end of Lesson 10! Double-check the Road Map on the Lesson 10 Overview page to make sure you have completed all of the activities listed there before you begin Lesson 11.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an appointment for office hour. While you are there, feel free to post responses to your classmates if you are able to help.
11.1 Background for Economic Evaluation of Biofuel Use
Essentially, I am a big proponent of the use of alternative fuels, as I believe they are necessary for our environment. However, to convince others of the environmental benefits, we must also demonstrate economic benefits. Use of biofuels in power generation must be competitive with coal and natural gas, and use of alternative fuels in transportation must be economically competitive with petroleum refining of crude oil. The following sections provide methods to evaluate the economics of energy facilities.
Money has a time value: its purchasing power is continuously declining due to inflation, and investors want a return on their investment beyond inflation. One way to track the time value of money is through the calculation of annual price indexes. The Consumer Price Index (CPI) is a measure of the overall cost of living, while the Producer Price Index (PPI) is a measure of the cost of goods and other expenditures needed to stay in business. Table 11.1 shows the CPI and PPI for the USA and the CPI for the United Kingdom from 1997-2010.
Table 11.1: Consumer Price Index and Producer Price Index for the USA, and the Consumer Price Index for the United Kingdom. Sources: “Energy Systems Engineering” (F. Vanek, L. Albright, and L. Angenent; McGraw-Hill).
Year CPI, USA PPI, USA CPI, UK
1997 87.4 95.5 96.3
1998 94.7 94.7 97.9
1999 96.7 96.4 99.1
2000 100.0 100.0 100.0
2001 102.8 102.0 101.2
2002 104.5 100.7 102.5
2003 106.9 103.8 103.9
2004 109.7 107.6 105.3
2005 113.4 112.8 107.4
2006 117.1 116.2 109.9
2007 120.4 120.7 112.5
2008 125.0 128.3 116.5
2009 124.6 125.1 119.0
2010 126.7 n/a 123.0
Indexed to Year 2000 = 100
Note: 2010 PPI value for US not available at time book went to press
Source for data as cited in Energy Systems Engineering: US Bureau of Labor Statistics (2011) for USA data; UK National Statistics (2011).
Example 11-1 Factor of CPI and PPI (from ESE book)
A particular model of car costs $17,000 in 1998 and $28,000 in 2005, given in current dollars for each year. How much is each of these values worth in constant 2002 dollars? Use the US CPI values from Table 11.1.
Each amount is converted to constant 2002 dollars by multiplying by the ratio of the 2002 CPI to the CPI of the year in which the price is quoted:
$17,000 × \dfrac{104.5}{94.7} = 18,759$
$28,000 × \dfrac{104.5}{113.4} = 25,802$
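The conversion can be checked with a short script (a minimal sketch; the CPI values come from Table 11.1, and the helper name `to_constant_dollars` is mine, not from the text):

```python
# Sketch of the CPI adjustment in Example 11-1 (all indexes from Table 11.1,
# indexed to year 2000 = 100); the helper name is mine, not from the text.
CPI_USA = {1997: 87.4, 1998: 94.7, 1999: 96.7, 2000: 100.0, 2001: 102.8,
           2002: 104.5, 2003: 106.9, 2004: 109.7, 2005: 113.4}

def to_constant_dollars(amount, from_year, to_year, cpi=CPI_USA):
    """Convert a current-dollar amount into constant dollars of `to_year`."""
    return amount * cpi[to_year] / cpi[from_year]

print(round(to_constant_dollars(17_000, 1998, 2002)))  # 18759
print(round(to_constant_dollars(28_000, 2005, 2002)))  # 25802
```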
If energy projects are going to be funded, the costs and predicted earnings for these projects must be valued at a later date and compared to the option of keeping the money in savings or investments. A method to account for the actual value of money over time is discounting: future cash flows are converted to their present value, reflecting the fact that a dollar received in the future is worth less than a dollar today. Other terms are also defined:
• Term of project: planning horizon over which cash flow is assessed – typically expressed as a number of years, N.
• Initial cost: one-time expense at the beginning of the first compounding period.
• Annuity: annual increment of cash flow related to the project – can be positive or negative.
• Salvage value: one-time positive cash flow at the end of the planning horizon, from the sale of the asset in its end-of-project condition.
Projects can also be evaluated without discounting; we will not discuss discounting in detail in this course, and it is commonly ignored for projects with short lifetimes. Such projects are evaluated with what is called a simple payback, and this is the method we’ll focus on. In a simple payback analysis:
• all cash flows into and out of the project are added up;
• this sum is known as the net present value (NPV);
• if the NPV is positive, the project is financially viable;
• the breakeven point is the year in which the total annuities from the project surpass the initial costs.
There is also terminology specific to energy projects. One such value is the Capital Recovery Factor (CRF), which is applied to electricity generation. It is a measure of the relationship between cash flow and investment cost, and it can be applied to short-term investments (i.e., projects that take place over 10 years or less).
The annual capital cost (ACC) can be determined from equation (1), and the CRF can be determined from the ACC and NPV as shown in equation (2):
(1) ACC = annuity – NPV/N, where NPV is the net present value and N is the number of years.
(2) CRF = ACC/NPV
The Electric Power Research Institute (EPRI) recommends a maximum CRF value of 12%.
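As a sketch, equations (1) and (2) can be applied to the numbers of the plant in Example 11-2 below (simple-payback NPV of $4.5 million over N = 25 years, with a $400,000/year annuity):

```python
# Sketch of equations (1) and (2). The numbers are those of the plant in
# Example 11-2 below: simple-payback NPV of $4.5M over N = 25 years with a
# $400,000/yr annuity.
annuity = 400_000        # $/year
npv = 4_500_000          # $, simple payback (no discounting)
n_years = 25

acc = annuity - npv / n_years    # equation (1): annual capital cost
crf = acc / npv                  # equation (2): capital recovery factor

print(f"ACC = ${acc:,.0f}/yr")   # ACC = $220,000/yr
print(f"CRF = {crf:.1%}")        # CRF = 4.9%, under EPRI's 12% ceiling
```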
So, how can energy projects be evaluated to determine their financial viability? There are multiple ways – the most common is the present worth (PW) method, which takes discounting into account: all the elements of the financial analysis are discounted back to their present worth, and the positive and negative elements of cash flow are summed. If the resulting NPV is positive, the project is financially attractive. In this method, a minimum attractive rate of return (MARR) is chosen (analogous to an interest rate). Example 11-2 first looks at a simple payback NPV. While I do not expect you to know how to discount, I will expect you to know that it can affect the value of a project, as suggested in the following example.
Example 11-2 Net Worth of a Plant (from ESE book)
A municipality is considering an investment in a small-scale energy system that will cost $6.5 million to install and will then generate a net annuity of $400,000/year for 25 years, with a salvage value at the end of $1 million. Calculate the net worth of the project using a simple payback approach.
Annuity = +$400,000 per year
N = 25 years
Salvage value = $1,000,000
Installation cost = $6,500,000
NPV = total value of annuities + salvage value – installation cost
NPV = (25 × $400,000) + $1,000,000 − $6,500,000 = $4,500,000
This looks like the project is a good deal.
However, if discounting were included, the annuities over the 25 years and the salvage value would each be reduced by a significant factor. These factors are based on a parameter called the minimum attractive rate of return (MARR) – if the MARR is 5%, this project would not be viable.
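The effect of discounting can be sketched in a few lines (assuming end-of-year cash flows and the 5% MARR mentioned above):

```python
# Sketch contrasting the simple-payback NPV of Example 11-2 with a discounted
# NPV at a 5% MARR (assuming end-of-year cash flows).
install, annuity, salvage, n = 6_500_000, 400_000, 1_000_000, 25

npv_simple = n * annuity + salvage - install              # $4,500,000

marr = 0.05
pv_annuity = sum(annuity / (1 + marr) ** t for t in range(1, n + 1))
pv_salvage = salvage / (1 + marr) ** n
npv_discounted = pv_annuity + pv_salvage - install        # ~ -$567,000

print(f"simple: {npv_simple:,}   discounted: {npv_discounted:,.0f}")
```

The discounted NPV is negative, which is why the project stops being viable at a 5% MARR.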
Another parameter that can be used is the benefit-cost ratio (B/C). This method takes the ratio of all the benefits of a project to all of its costs. If the B/C ratio is greater than 1, the project is acceptable; when the B/C ratio is less than 1, the project is unacceptable. If the B/C value is close to 1, it may be necessary to reevaluate the project to see if minor changes would make it acceptable. The conventional B/C ratio equation is shown in equation (3).
(3) B/C = Total benefits / (Initial cost + Operating costs)
Example 11-3 Benefit to Cost Ratio
Let’s take the example in 11-2. In Part (a), we’ll calculate the B/C for the investment using the simple payback method. In Part (b), we’ll add in $50,000/year in operating costs for 25 years.
(a) Total benefits include:
Income (annuity over 25 years): $400,000 × 25 = $10,000,000
Salvage: $1,000,000
Total costs = $6,500,000
$B/C = \dfrac{\text{Total benefits}}{\text{Total costs}} =\dfrac{\text{Annuity + Salvage}}{\text{Total costs}} = \dfrac{10,000,000 + 1,000,000}{6,500,000} = 1.69$
(b) Now we’ll add in the operating costs for 25 years: 25 × $50,000 = $1,250,000
$B/C = \dfrac{11,000,000}{6,500,000+1,250,000} = 1.42$
So operating costs can influence the ratio. Discounting would also influence it, perhaps to the point that the project would not be viable.
The last factor we will look at is the Levelized Cost of Energy. This method incorporates both the initial capital costs and the ongoing costs, determined per unit of energy output: all the cost factors are combined into a single cost-per-unit measure. We need a predicted average output of electricity in kWh and the sum of all the costs on an annual basis, divided by the annual output (see equation 4):
(4) Levelized cost = (Total annual cost)/(Annual output)
Total annual cost = annualized capital cost + operating cost + return on investment (ROI)
Annual output is in kWh
Example 11-4 Levelized Cost of Energy
Continuing with Example 11-3, we input the information into equation (4) to examine the Levelized Cost of Energy. This plant would produce 2.6 million kWh per year.
Income per year: $400,000
Salvage: $1,000,000
Total costs = $6,500,000
Operating costs: $50,000/year for 25 years
First, determine the net cash flow on an annual basis – recall that we are not discounting at all; we are doing a simple payback method.
Net annual cash flow = Income/year – Operating costs/year + (Salvage – Initial costs)/25 years
= $400,000 – $50,000 + ($1,000,000 – $6,500,000)/25 = $130,000 per year
Levelized cost = $130,000 / 2,600,000 kWh = $0.050/kWh
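The arithmetic of Examples 11-3 and 11-4 can be collected into one short sketch (equation (3) for the B/C ratio and equation (4) for the levelized cost; variable names are mine):

```python
# Sketch of Examples 11-3 and 11-4: the B/C ratio of equation (3) and a
# simple-payback levelized cost per equation (4).
install, annuity, salvage, n = 6_500_000, 400_000, 1_000_000, 25
opex = 50_000            # $/year operating cost (Part b)
output_kwh = 2_600_000   # predicted annual electricity output

benefits = n * annuity + salvage                  # $11,000,000
bc_no_opex = benefits / install                   # Part (a)
bc_with_opex = benefits / (install + n * opex)    # Part (b)

annual_cash = annuity - opex + (salvage - install) / n   # $130,000/yr
levelized = annual_cash / output_kwh                     # $/kWh

print(round(bc_no_opex, 2), round(bc_with_opex, 2), round(levelized, 3))
# 1.69 1.42 0.05
```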
The average electric energy price in the US in 2004, for all types of customers, was $0.0762/kWh – this has not changed drastically since. Therefore, a plant of this size would be competitive in the US.
Another aspect that needs to be considered is direct costs versus external costs and benefits. Direct costs include capital repayment costs and operating costs; operating costs include energy supply, labor, and maintenance. However, there are also external costs, sometimes called overhead. These include health care and lost productivity due to pollution. Direct benefits include revenues from selling products and services. External benefits include benefits to the local environment, or the use of an unusual energy technology that could attract visitors to the company.
Costs are important, but by using biofuels, we also expect a benefit to the environment. So, there have been interventions in energy investments for social aims. We expect the alternative form of energy to be “clean” energy. This means that there may be intervention in the marketplace because of the potential social benefit, typically by government at the local, state, or federal level. Why do this? Because we cannot put a “value” on the social benefit, and intervention gives fledgling technologies a chance to grow in sales volume so that they can compete in the marketplace. For example, government subsidies were given to the production of ethanol from corn for many years, and now ethanol from corn is the most viable method of ethanol production in the US (data will be presented on this in the following section).
There is more than one method of intervention. One support mechanism is to support research and development (R&D). The support usually comes from the government, but industry may also participate so that no single party bears all of the funding.
Government can also support commercial installation and operation of systems, in the form of direct subsidies, tax credits, and interest rate buydowns.
Most of our discussion so far has been about electricity systems. But how do we evaluate the production of biofuels and their economic viability? Two metrics are used: 1) the net energy balance (NEB) ratio and 2) life cycle assessment (LCA).
The net energy balance ratio is a metric for comparing bioenergy systems. It is the ratio of the energy available for consumption to the energy used to produce the fuel. For example, how might we look at ethanol? The energy carrier itself is ethanol; however, energy was consumed in order to grow, harvest, and process the corn to produce the ethanol. This is known as the energy to produce. The ratio looks like this:
(5) NEB = Energy from fuel / Energy to produce
If the NEB ratio is greater than 1, there is more energy available for consumption than is used to produce the biofuel. If the NEB ratio is less than 1, more energy is required to produce the fuel than is available in the fuel for consumption – which makes for an unattractive project. This is a good metric for debate, but it is not a parameter that can stand alone.
The other metric is life cycle assessment (LCA). It is a method of product assessment that considers all aspects of the product’s life cycle – a cradle-to-grave analysis. For biofuels in transportation, it could be plant/harvest-to-wheels; in the petroleum industry, it’s known as well-to-wheels.
Example 11-5 Shows How the NEB and LCA are Determined
Two farms grow corn to produce ethanol. Farm A is 40.2 km from the ethanol plant and sells corn for $289.36 per metric ton. Farm B is 160.9 km from the ethanol plant and sells corn for $248.02 per metric ton.
Other information:
• Truckload can carry 10.9 metric tons (500 bushels)
• Truck emits 212.3 g CO2eq/metric ton-km (310 g CO2eq/ton-mile)
• Plant needs 130.6 metric tons per year (6000 bushels/year)
• Truck weighs 9.1 metric tons empty
• Plant needs: 130.6 metric tons/year ÷ 10.9 metric tons/load = 12 truckloads per year
Examine the two farms, both for economics and GHG emissions.
Farm A
• Economic return
• 130.6 metric tons/year × $289.36/metric ton ≈ $37,800/year
• Transportation
• 40.2 km × (12 trips/year) = 482.4 km/year
• GHG
• Empty truck: 482.4 km/year × 9.1 metric tons × 212.3 g CO2eq/metric ton-km = 0.93 Mg CO2eq
• Full truck: 482.4 km/year × 20 metric tons × 212.3 g CO2eq/metric ton-km = 2.05 Mg CO2eq
• Total: 2.98 Mg CO2eq
Farm B
• Economic return
• 130.6 metric tons/year × $248.02/metric ton ≈ $32,400/year
• Transportation
• 160.9 km × (12 trips/year) = 1930.8 km/year
• GHG
• Empty truck: 1930.8 km/year × 9.1 metric tons × 212.3 g CO2eq/metric ton-km = 3.72 Mg CO2eq
• Full truck: 1930.8 km/year × 20 metric tons × 212.3 g CO2eq/metric ton-km = 8.18 Mg CO2eq
• Total: 11.90 Mg CO2eq
As you can see, Farm A produces a better economic return, and it also emits less CO2eq, so Farm A is the better farm to provide the raw material.
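The comparison can be scripted as a sketch (the function name `farm` and its defaults are mine; computing without the text's intermediate rounding puts Farm B's total at ~11.93 Mg CO2eq rather than 11.90):

```python
# Sketch of the Farm A vs. Farm B comparison in Example 11-5. The function
# name and defaults are mine; computing without the text's intermediate
# rounding puts Farm B's total at ~11.93 Mg CO2eq rather than 11.90.
def farm(dist_km, price_per_t, demand_t=130.6, trips=12,
         truck_t=9.1, load_t=10.9, ef=212.3):
    """Return (annual revenue in $, annual transport GHG in Mg CO2eq)."""
    revenue = demand_t * price_per_t
    km = dist_km * trips                      # km driven per leg type
    ghg_g = km * truck_t * ef                 # empty legs
    ghg_g += km * (truck_t + load_t) * ef     # loaded legs (truck + corn)
    return revenue, ghg_g / 1e6               # grams -> Mg

rev_a, ghg_a = farm(40.2, 289.36)
rev_b, ghg_b = farm(160.9, 248.02)
print(f"A: ${rev_a:,.0f}/yr, {ghg_a:.2f} Mg;  B: ${rev_b:,.0f}/yr, {ghg_b:.2f} Mg")
```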
We also want to determine the fuel productivity per unit of cropland per year. This should be done before choosing a regional crop. Keep in mind that, depending on location, sunlight provides 100-250 W/m2 – however, less than 1% is available in starches and oils as a raw material for conversion to fuel. Research has focused on conversion of lignocellulosic biomass (whole biomass) and utilization of the entire plant in order to produce fuel and/or value-added products – the economics improve under these conditions. Data on changes to the land must also be incorporated, especially if the land is being changed to sustain a large-scale biofuels program. For example, if a rainforest or peatland is removed to make space, the material from the land is typically burned, adding CO2 to the atmosphere before even getting the system started.
So, what is the NEB ratio of ethanol? Early on in the conversion of corn to ethanol, the ratio was positive but not very high – 1.25. However, recent assessments show an improved NEB of 1.9-2.3, and if the fuel used to run the plant that produces the ethanol is 50% biomass, the NEB is 2.8, almost 3. As you will see in the economics, production of ethanol is economical. The problem arises when petroleum prices go down, as in recent months; then ethanol production is less economical.
So, what about the NEB ratio of biodiesel? In the early stages of biofuel production, the NEB ratio of biodiesel was 1.9 when co-products are included. It is higher than that of ethanol because biodiesel production has lower energy requirements during processing, mainly because less distillation is required. One drawback to biodiesel is that its GHG emissions include N2O, so use of biodiesel produces ~60% of petrodiesel emissions rather than being neutral. Another issue for biodiesel is that soybeans have lower yields per land area than corn. Example 11-6 shows the NEB ratio calculation for soybeans to biodiesel.
Example 11-6 Calculate the Ratio of Energy Available in the Resulting Biodiesel to the Total Energy Input
Does biodiesel provide more energy than it consumes?
• Each gallon of biodiesel requires 7.7 lbs. of soy as feedstock
• Acre yields ~452 lbs. soy
• Assume pure biodiesel
• Assume a gallon of biodiesel contains 117,000 Btu net
Farming Inputs
Input Energy (1000 Btu)
Fuels 1025
Fertilizer 615
Embodied energy 205
All other 205
Plant inputs: Per 1000 lbs. of Soybeans
Input Energy (1000 Btu)
Process heat & electricity 1784
Embodied energy 595
Transportation 297
All other 297
Solution:
• Energy in to produce biodiesel
• 452 lbs. soy/acre × $\dfrac{\text{1 gal biodiesel}}{\text{7.7 lbs soy}}$ = 58.7 gal biodiesel/acre
• Farming inputs: (1025 + 615 + 205 + 205) × 1000 Btu = 2.050 × 10⁶ Btu/acre
• Plant inputs: 452 lbs. soy/acre × 2973 $\dfrac{\text{1000 Btu}}{\text{1000 lbs. soy}}$ = 1.34 × 10⁶ Btu/acre
• Total: 2.050 × 10⁶ Btu/acre + 1.34 × 10⁶ Btu/acre = 3.39 × 10⁶ Btu/acre
• Energy out from biodiesel produced
• 117,000 Btu/gal × 58.7 gal/acre = 6.87 × 10⁶ Btu/acre
• NEB = $\dfrac{6.87×10^6\ \text{Btu/acre}}{3.39×10^6\ \text{Btu/acre}}$ = 2.02
The NEB is 2.02 – yes, biodiesel provides roughly twice the energy it consumes. A typical NEB for biodiesel production ranges from ~1.9 to as high as 2.8.
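The same tabulated inputs can be run through a short sketch (variable names are mine):

```python
# Sketch of Example 11-6, rebuilding the NEB from the tabulated inputs;
# variable names are mine.
SOY_LBS_PER_ACRE = 452
SOY_LBS_PER_GAL = 7.7          # lbs of soy per gallon of biodiesel
BTU_PER_GAL = 117_000          # net energy in a gallon of biodiesel

farming_in = (1025 + 615 + 205 + 205) * 1000            # Btu per acre
plant_in = SOY_LBS_PER_ACRE * (1784 + 595 + 297 + 297)  # 2973 kBtu per 1000 lbs soy
energy_in = farming_in + plant_in                       # ~3.39e6 Btu/acre

gal_per_acre = SOY_LBS_PER_ACRE / SOY_LBS_PER_GAL       # ~58.7 gal/acre
energy_out = BTU_PER_GAL * gal_per_acre                 # ~6.87e6 Btu/acre

neb = energy_out / energy_in
print(f"NEB = {neb:.2f}")      # NEB = 2.02
```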
So, are ethanol and biodiesel being consumed in our current fuel supply? Yes, they are, partially because of an Environmental Protection Agency (EPA) mandate to use oxygenated fuel in blends with gasoline and diesel fuel. Approximately 10% of the gasoline supply is ethanol, while diesel sold is estimated to contain ~5-6% biodiesel.
11.2 Ethanol Production and Economics
The major feedstock for ethanol has been coarse grains (i.e., corn). Second-generation ethanol (from cellulosic biomass) accounts for roughly 7% of total ethanol production. Figure 11.1 shows the global ethanol production by feedstock from 2007-2019.
In 2013, world ethanol production came primarily from the US (corn), Brazil (sugarcane), and Europe (sugar beets, wheat). Figure 11.2 shows ethanol production contributions, in millions of gallons, from all over the world. In addition to Brazil, ethanol is also produced from sugarcane in Australia, Colombia, India, Peru, Cuba, Ethiopia, Vietnam, and Zimbabwe. In the US, ethanol from corn accounts for ~97% of total ethanol production.
Table 11.2 shows a comparison of costs for first-generation ethanol feedstock along with their production costs. The data in this table is from 2006, but it gives you an idea of why ethanol is made from corn in the US: because it is less expensive and more profitable. However, as seen in the other charts, the use of sugar-based materials like sugarcane and sugarbeets is growing, as well as the use of cellulosic materials.
Table 11.2: Summary of 2006 Estimated Ethanol Production Costs in the U.S. (\$/gal)a
Cost Item Feedstock Costsb Processing Costs Total Costs
US Corn wet milling 0.40 0.63 1.03
US Corn dry milling 0.53 0.52 1.05
US Sugarcane 1.48 0.92 2.40
US Sugar beets 1.58 0.77 2.35
US Molassesc 0.91 0.36 1.27
US Raw Sugarc 3.12 0.36 3.48
US Refined Sugarc 3.61 0.36 3.97
Brazil Sugarcaned 0.30 0.51 0.81
EU Sugar Beetsd 0.97 1.92 2.89
a Excludes capital costs
b Feedstock costs for US corn wet and dry milling are net feedstock costs; feedstock for US sugarcane and sugar beets are gross feedstock costs
c Excludes transportation costs
d Average of published estimates
Credit: rd.usda.gov
Figure 11.3 shows the overall process of making ethanol from corn. It also shows the additional products made from corn. If you recall from Lesson 7, DDGS is a grain that can be used to feed cattle. Corn oil is also produced for use. Typical yields of each product per bushel of corn are shown (2.8 gal of ethanol, 17 lbs. of CO2, and 17 lbs. of DDGS).
So, what are the ethanol revenue streams? Figure 11.4 shows that the revenue streams are ethanol, DDGS, and CO2. The revenue streams are market driven; ethanol is the plant’s most valuable product and typically generates 80% of the total revenue. The DDGS represents 15-20% of the revenue, and CO2 represents a small amount of revenue. The revenue margins are tight, however, and sale of DDGS and CO2 is probably essential for the plant to be profitable.
Figure 11.5 shows the volatility of the price of corn, the price of ethanol, and the price of gasoline. Notice the price of gasoline and the price of ethanol are highly correlated, at least since 2009. For example, in 2010, the price of gasoline and the price of ethanol were ~\$2.00 per gal. However, in recent months, with the price of oil going down significantly, expect that the profitability of ethanol will be less.
The major cost in producing ethanol from corn is the cost of the feedstock itself. Figure 11.6 shows the cost of feedstock is 55% of the expenses for the production of ethanol from corn, while energy is 21%, materials are 11%, and maintenance and personnel are 13%. If a bushel of corn sells for \$4/bu or more, then the percentage for the feedstock price goes to 65-75% of the expenses.
Another issue with the production of ethanol is that water is used, and water is becoming less available. Water is used for gasoline production as well, but water use is a little higher for ethanol production (for gasoline, 2.5 gallons water per gallon of gasoline is used, while for ethanol it is 3 gallons of water per gallon of ethanol). The extra water use is due to growing the plants for harvest.
11.3 Economics of Butanol Production
Just as ethanol can be produced from corn, so, too, can butanol. The main disadvantage of butanol production is that yields are significantly lower. But recall that the advantages include better blending of butanol with gasoline than ethanol, as well as butanol's higher energy content. If all of the available corn residues along with the corn were converted to acetone-butanol (AB), the result would be the production of 22.1 × 10⁹ gallons of AB biofuel per year. In 2009, 10.6 × 10⁹ gallons of ethanol were produced from corn – equivalent to 7.42 × 10⁹ gallons of butanol on an equal energy basis. Recall that butanol production is accomplished in a similar fashion to ethanol production; it uses different enzymes. Figure 11.7 shows the schematic of wheat straw processing that was shown in Lesson 7. A description of the process and some information on production using wheat straw and other feedstocks can be found at the end of Lesson 7.
So, how much investment might be necessary to build a plant for butanol production? An extensive computer simulation was done to determine costs, payback time, and return on investment – we will not discuss the details of the simulation because it is beyond the scope of this course. However, I will provide a summary of the study to give you an idea of the related costs. An estimate was done for producing butanol from wheat straw (BEEMS Module B6), based on a plant size of 150 × 10⁶ kg/year (48 × 10⁶ gallons of butanol per year), and the following costs were determined:
• Equipment purchase cost: \$27.66 × 10⁶
• Total plant direct cost (TPDC): \$88.08 × 10⁶
• Total plant indirect cost (TPIC): \$52.85 × 10⁶
• Total plant cost (TPC = TPDC + TPIC): \$140.93 × 10⁶
• Contractor’s fee & contingency (CFC): \$21.14 × 10⁶
• Direct fixed capital cost (DFC = TPC + CFC): \$162.06 × 10⁶
It would take ~\$162 million to build a butanol plant that produces 48 million gallons of butanol per year. Operating costs must also be taken into consideration; these would be more than \$200,000,000. The major factors in the operating costs are utilities (59% of the costs) and raw materials (21%). With these costs in mind, it was estimated that for a grassroots plant, the butanol production cost would be \$1.37/kg, or \$4.28/gal; for an annexed plant, the cost of butanol production would be lower: \$0.82/kg, or \$2.55/gal. The researchers doing this work have been successful at producing AB from lignocellulosic substrates, though there are some challenges ahead. In conclusion, these are the overall estimates:
• Cost of production of butanol from wheat straw (WS) is \$1.37/kg (distillative recovery)
• Return on investment is 19.87% and payback time is 5.03 years
• Expansion of an existing plant would result in production cost of \$1.07/kg (distillative recovery), and \$0.82/kg (membrane recovery)
• Utilities affect butanol production cost most
This kind of information may convince a venture capitalist to contribute to the building of a butanol plant, especially if payback is in 5 years and the return on investment is 19.87%. However, this was a computer simulation, and would have to be updated for current prices. That could alter the numbers.
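As a rough cross-check of the study's unit conversion, the \$/kg figures can be converted to \$/gal (a sketch; the n-butanol density of ~0.81 kg/L is my assumption, not from the study, which is why the results land slightly below the study's \$4.28/gal and \$2.55/gal):

```python
# Rough cross-check of the study's $/kg -> $/gal conversion. The n-butanol
# density (~0.81 kg/L) is my assumption, not from the study, which is why
# these land slightly below the study's $4.28/gal and $2.55/gal.
def per_gal(cost_per_kg, density_kg_per_l=0.81, l_per_gal=3.785):
    return cost_per_kg * density_kg_per_l * l_per_gal

print(f"grassroots: ${per_gal(1.37):.2f}/gal")  # grassroots: $4.20/gal
print(f"annexed:    ${per_gal(0.82):.2f}/gal")  # annexed:    $2.51/gal
```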
11.4 Economics of Biodiesel Production, Including Economics of Algae
One of the major feedstocks for biodiesel is soy oil, or any other vegetable oil. Animal fats can also be used, but as pointed out in a previous lesson, animal fats produce more by-products that cause issues (i.e., free fatty acids, which can cause soap formation). Jatropha is another oil used for biodiesel production. Second-generation biodiesel can be produced from algae, and the use of algae for biodiesel production is a growing market (Figure 11.8). The feedstock is the primary cost involved in producing biodiesel: Haas et al. (2006) estimate that the cost of oil is ~85% of the production costs, while Duncan (2003) estimates it to be ~80%.
Figures 11.9a and 11.9b show the challenges of the economics behind biodiesel production. Figure 11.9a shows the breakeven price of biodiesel plotted with the actual biodiesel price. Most years, the biodiesel price and the breakeven price were about the same, but in 2011 and 2013, biodiesel sold at a higher price than breakeven – a good indicator of demand. In Figure 11.9b, we can see that in most years, the ULSD price was lower than the biodiesel price by ~\$1 per gallon, but in late 2013 and 2014 the margin between the two was much closer. Again, this suggests greater demand for biodiesel, with its cost approaching that of processing petrodiesel.
Most of the information we are looking at is based on soy oil production, but as discussed in Lessons 9 and 10, biodiesel can be produced by methods other than transesterification and from other sources, such as algae. The reason for producing biodiesel using other methods is to remove the oxygenates from the biodiesel, mainly so it can be used as jet fuel, which cannot contain oxygenates. The reasons for using algae are: 1) algae can use land areas that cannot grow terrestrial plants, 2) algae produce much greater amounts of oil per unit of land than plants like soy, and 3) algae take in much greater amounts of CO2 and could be grown near a power plant to utilize emissions from the flue gases. However, as you will see from the data in the following tables, the cost of oil from algae is still fairly high. One useful aspect of growing algae is that they can be grown in water sources containing salt or sediment – and even though water usage may be high, some of the water can be separated out and used again.
The following data are from a 2007 review article on producing biodiesel from algae (Chisti, 2007). In Table 11.3, Chisti compared the production of biodiesel in a photobioreactor (PBR) facility and in open raceway ponds, to show the differences in production.
Table 11.3: Comparison of Photobioreactor and Raceway Production Methods
Variable Photobioreactor facility Raceway ponds
Annual biomass production (kg) 100,000 100,000
Volumetric productivity (kg m-3 d-1) 1.535 0.117
Areal productivity (kg m-2 d-1) 0.048a, 0.072c 0.035b
Biomass concentration in broth (kg m-3) 4.00 0.14
Dilution rate (d-1) 0.384 0.250
Area needed (m2) 5681 7828
Oil yield (m3 ha-1) 136.9d, 58.7e 99.4d, 42.6e
Annual CO2 consumption (kg) 183,333 183,333
System geometry 132 parallel tubes/unit; 80 m long tubes; 0.06 m tube diameter 978 m2/pond; 12 m wide, 82 m long, 0.30 m deep
Number of units 6 8
a Based on facility area.
b Based on actual pond area.
c Based on projected area of photobioreactor tubes.
d Based on 70% by wt oil in biomass.
e Based on 30% by wt oil in biomass.
Credit: Chisti, Y., Biotechnology Advances, 2007
As you can see, the PBR can produce more oil for a number of reasons, but the costs associated with the PBR are higher. Chisti estimates the cost to produce algal oil, at a scale of 10,000 tons, at \$0.95 per pound. This particular article estimated that the cost of producing biodiesel from algae would range from \$10.60–\$11.13 per gallon; by comparison, processing costs would be ~\$0.53 per gallon for palm oil and \$3.48 per gallon for soybean oil. A slightly more recent report, a project to estimate the price of biodiesel, indicates prices from \$10.66 per gallon to as high as \$19.16, depending on location and oil production (Davis, R., et al., 2012). Yet another computer-simulated estimate, by Richardson et al., shows a cost to produce algal oil of \$0.25–\$1.61 per pound, not too far off the cost of producing soybean oil (\$0.35–\$0.38 per pound) (Richardson et al., 2010). They also estimated the cost of biodiesel from algal oil at \$2.35 per gallon, less than a gallon of ULSD; however, they may not be including algae harvesting costs in this estimate.
So, you can see that biodiesel from microalgae is a long way off from being a reality, yet it probably has a future if the costs can be brought down due to benefits.
So, how does biodiesel compare to ethanol as a fuel? Ethanol from corn has become big business here in the US. Figure 11.10a shows production by county and plant locations for ethanol production from corn. We saw earlier in the lesson that ethanol from corn has a NEB ratio greater than 1 and improving, and that it is the least expensive ethanol to produce here in the US. The amount of ethanol produced in 2014 was 13 billion gallons, a \$30-billion-a-year industry. It makes up 10% of the gasoline pool; the government has mandated that gasoline can contain up to 10% ethanol, and companies get tax credits for using it. Because of the high price of gasoline due to oil prices, demand for gasoline fell, which has made it more difficult for ethanol producers to continue selling ethanol at the level of the last several years. Recently, the EPA examined whether to raise the ethanol limit to 15%, but because of the drop in oil prices, the EPA has been reconsidering and has not set levels for 2015. In the assignment section, you will read an article related to this so you can get a feel for what is facing the biofuels industry. Remember that by including ethanol in the gasoline pool, we reduce GHG emissions.
Figure 11.10b shows the location of biodiesel facilities in the US from 2007. If you would like to view a more recent map of facilities (2015), go to Biodiesel Magazine. According to the most recent statistics, there are 145 biodiesel facilities in the US providing more than 2.60 billion gallons of biodiesel per year (up from less than 0.4 billion gallons per year in 2010, nearly an order of magnitude increase). Biodiesel makes up about 5-6% of the diesel fuel pool. Biodiesel plants tend to be smaller and more evenly distributed across the US than ethanol plants. You will also read an article related to biodiesel pricing. RIN stands for Renewable Identification Number, and RFS stands for Renewable Fuel Standard.
11.05: Assignments
11.5 Assignments
Discussion #2
Please read the following selections:
After reading the selections, write a paragraph discussing how these articles relate to biomass production.
After posting your response, please comment on at least one other person's response. Discussions will be reviewed, and grades will reflect critical thinking in your input and responses. Don't just take what you read at face value; think about what is written.
(5 points)
11.06: Summary and Final Tasks
Summary
The economics of biofuels depends on the overall economic health of the country as well as what is going on in the fuel industry for fossil fuels. Biofuels have come a long way in becoming a part of US industry, but issues with gasoline and diesel prices, as well as government involvement, will continue to dictate the use of biofuels.
Are you beginning to grasp the complexity of the issues surrounding how the biofuel production industry will survive going forward? If it reduces the carbon footprint, isn’t it worthwhile to continue to support biofuel production?
References
Vanek, F., Albright, L., and Angenent, L., “Energy Systems Engineering: Evaluation and Implementation”, Second Edition, McGraw-Hill, 2012.
Pryor, Scott; Li, Yebo; Liao, Wei; Hodge, David; “Sugar-based and Starch-based Ethanol,” BEEMS Module B5, USDA Higher Education Challenger Program, 2009-38411-19761, 2009.
Xiaodong Du and Lihong Lu McPhail, "Inside the Black Box: The Price Linkage and Transmission Between Energy and Agricultural Markets," Energy Journal, Vol. 33, No. 2, 2012, pp. 171-194.
Nasib Qureshi, Adriano Pinto Mariano, Vijay Singh, Thaddeus Chukwuemeka Ezeji, “Biomass to Butanol,” BEEMS Module B6, USDA Higher Education Challenger Program, 2009-38411-19761, 2009.
Haas, M.J., McAloon, A.J., Yee, W.C., Foglia, T.A., “A process model to estimate biodiesel production costs,” Bioresource Technology, 97, 671-678, 2006.
Duncan, J., “Costs of biodiesel production,” Energy Efficiency and Conservation Authority Report, 2003.
Chisti, Y., “Biodiesel from microalgae: A research review,” Biotechnology Advances, 25, 294-306, 2007.
Davis, R., et al., “Renewable diesel from algal lipids: An integrated baseline for cost, emissions, and resource potential from a harmonized model,” Technical Report, prepared for US DOE Energy Biomass Program, ANL/ESD/12; NREL/TP-5100; PNNL-21437, June 2012.
Richardson, J.W., Outlaw, J.L, and Allison, M., “The economics of microalgae,” AgBioForum, 13 (2), 119-130, 2010.
Reminder - Complete all of the Lesson 11 tasks!
You have reached the end of this lesson! Double-check the Road Map on this lesson Overview page to make sure you have completed all of the activities listed there before you begin the next Lesson.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an appointment for office hours. While you are there, feel free to post responses to your classmates if you are able to help.
12.1 Anaerobic Digestion
Anaerobic digestion (AD) is a biological process that breaks down organic materials (feedstocks) in the absence of oxygen (anaerobic conditions) into methane (CH4) and carbon dioxide (CO2). It occurs naturally in bogs, lake sediments, oceans, and digestive tracts. The rumen, part of the stomach of cows and other ruminants, is one of the best-known natural fermentation vats: fermentation takes place during digestion! Figure 12.1 shows a schematic of anaerobic digestion.
There are benefits to using an anaerobic digester, particularly when raising livestock. A biogas containing methane and carbon dioxide is produced that can be used as a fuel. From a waste-treatment point of view, digestion reduces the volume and mass of the waste, as well as the organic content and biodegradability of the waste, so that the residual matter can be better used as a soil amendment and fertilizer. There are also environmental benefits: 1) odors and emissions of greenhouse gases (i.e., methane) and volatile organic compounds are reduced, and 2) the digester destroys pathogens in the waste.
So, what are the biological processes that occur during AD? The bacteria ferment and convert complex organic materials into acetate and hydrogen. There are four basic phases of anaerobic digestion, which is a synergistic process using anaerobic microorganisms: 1) hydrolysis, 2) acidogenesis, 3) acetogenesis, and 4) methanogenesis. Figure 12.2 shows the progression and types of products for each phase.
Hydrolysis Biochemistry
We have talked about hydrolysis in earlier lessons. Hydrolysis is a reaction with water. Acid and base can be used to accelerate the reaction. However, this occurs in enzymes as well. Figure 12.3 shows the hydrolysis reaction, and how cellulose, starch, and simple sugars can be broken down by water and enzymes. In anaerobic digestion, the enzymes are exoenzymes (cellulosome, protease, etc.) from a number of bacteria, protozoa, and fungi (see Reaction 1).
(1) biomass + H2O → monomers + H2
(Sources: cellulose, starch, sugars, fats, oils) (Products: mono-sugars [glucose, xylose, etc.], fatty acids)
Acidogenesis Biochemistry
During acidogenesis, soluble monomers are converted into small organic compounds, such as short-chain (volatile) acids (propionic, formic, lactic, butyric, succinic; see Reaction 2), ketones and other small oxygenates (acetone, glycerol), and alcohols (ethanol, methanol; see Reaction 3).
(2) C6H12O6 + 2H2 → 2CH3CH2COOH + 2H2O
(3) C6H12O6 → 2CH3CH2OH + 2CO2
Acetogenesis Biochemistry
The acidogenesis intermediates are attacked by acetogenic bacteria; the products of acetogenesis include acetic acid, CO2, and H2. Reactions 4-7 show the reactions that occur during acetogenesis:
(4) CH3CH2COO- + 3H2O → CH3COO- + H+ + HCO3- + 3H2
(5) C6H12O6 + 2H2O → 2CH3COOH + 2CO2 + 4H2
(6) CH3CH2OH + H2O → CH3COO- + 2H2 + H+
(7) 2HCO3- + 4H2 + H+ → CH3COO- + 4H2O
Several bacteria contribute to acetogenesis, including:
Syntrophobacter wolinii, propionate decomposer
Syntrophomonos wolfei, butyrate decomposer
Clostridium spp., Peptococcus anaerobes, Lactobacillus, and Actinomyces are acid formers.
Methanogenesis Biochemistry
The last phase of anaerobic digestion is the methanogenesis phase. Several reactions take place using the intermediate products from the other phases, with the main product being methane. Reactions 8-13 show the common reactions that take place during methanogenesis:
(8) 2CH3CH2OH + CO2 → 2CH3COOH + CH4
(9) CH3COOH → CH4 + CO2
(10) 4CH3OH → 3CH4 + CO2 + 2H2O
(11) CO2 + 4H2 → CH4 + 2H2O
(12) CH3COO- + SO42- + H+ → 2HCO3- + H2S
(13) CH3COO- + NO3- + H2O + H+ → 2HCO3- + NH4+
Several bacteria contribute to methanogenesis, including:
Methanobacterium, Methanobacillus, Methanococcus, Methanosarcina, etc.
As you can see, the microorganisms used in anaerobic digestion are different from those used to make other biofuels, and some could even be in our own stomachs!
Any kind of organic matter can be fed to an anaerobic digester, including manure and litter, food wastes, green wastes, plant biomass, and wastewater sludge. The materials that compose these feedstocks include polysaccharides, proteins, and fats/oils. Some of the organic materials degrade at a slow rate; hydrolysis of cellulose and hemicellulose is rate limiting. Some organic materials do not biodegrade at all: lignin, peptidoglycan, and membrane-associated proteins. The organic residues contain water and biomass composed of volatile solids and fixed solids (minerals, or ash after combustion), and the volatile solids (VS) include both biodegradable and non-biodegradable fractions.
As we discussed regarding pretreatment of biomass for making ethanol, the efficiency of anaerobic digestion improves with pretreatment. Hydrolysis of cellulose and hemicellulose (phase 1 in AD) is improved with pretreatment because it overcomes biomass recalcitrance. As discussed in a previous lesson, pretreatment options include treatment with acids, alkalis, steam explosion, size reduction, etc. Common alkaline agents include NaOH, Ca(OH)2, and NH3.
Theoretical methane yield (YCH4, m3 STP/kg substrate converted) can be calculated from the elemental composition of a substrate, CcHhOxNnSs:

YCH4 = $\dfrac{22.4 \left(\frac{c}{2} + \frac{h}{8} - \frac{x}{4} - \frac{3n}{8} - \frac{s}{4}\right)}{12c + h + 16x + 14n + 32s}$
Table 12.1 shows the substrate, a common elemental formula, and the theoretical methane yield for each.
Table 12.1: Theoretical methane yield (m3 STP/kg substrate converted) for several biomass sources (Credit: Frigon and Guiot, 2010)
| Substrate | Elemental formula | Theoretical methane yield (m3 STP/kg) |
|---|---|---|
| Carbohydrates | (CH2O)n | 0.37 |
| Proteins | C106H168O34N28S | 0.51 |
| Fat | C8H15O | 1.0 |
| Plant biomass | C5H9O2.5NS0.025 | 0.48 |
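The theoretical yields in Table 12.1 are straightforward to compute from the standard Buswell–Boyle stoichiometry (note the negative oxygen term and sulfur's atomic mass of 32). A minimal Python sketch, using atomic masses C=12, H=1, O=16, N=14, S=32 and 22.4 L/mol at STP:

```python
def methane_yield(c, h, x, n=0.0, s=0.0):
    """Theoretical CH4 yield (m3 STP per kg substrate) for CcHhOxNnSs."""
    mol_ch4 = c / 2 + h / 8 - x / 4 - 3 * n / 8 - s / 4   # mol CH4 per mol substrate
    molar_mass = 12 * c + h + 16 * x + 14 * n + 32 * s    # g per mol substrate
    return 22.4 * mol_ch4 / molar_mass                    # m3/kg (= L/g)

print(round(methane_yield(1, 2, 1), 2))              # carbohydrates (CH2O)n -> 0.37
print(round(methane_yield(106, 168, 34, 28, 1), 2))  # proteins -> 0.51
```

Running the function for all four substrates reproduces the table values to within rounding.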
Figure 12.4 shows the biogas yield for several different feedstocks in m3/ton. Be aware that digestion produces both biogas and a residual material known as digestate. The biogas typically contains 50-60% CH4, with the rest primarily CO2 and trace gases. The digestate contains fiber, nutrients, and water, which can be used for compost, animal bedding, and composite boards. Figure 12.5 shows a schematic of the components of the digester.
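The methane fraction sets the energy content of the biogas. A quick sketch, assuming a lower heating value of about 35.8 MJ per m3 of CH4 (a commonly quoted figure; treat it as an assumption):

```python
def biogas_energy_mj_per_m3(ch4_fraction, lhv_ch4=35.8):
    """Approximate biogas energy content (MJ/m3) from its CH4 volume fraction."""
    return ch4_fraction * lhv_ch4

# A biogas with 60% CH4:
energy = biogas_energy_mj_per_m3(0.60)  # ~21.5 MJ/m3, vs ~35.8 for pure methane
```

So a 50-60% CH4 biogas carries roughly half to two-thirds the energy of pipeline-quality natural gas per unit volume, which is why upgrading (CO2 removal) is often considered.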
There are several factors that will affect anaerobic digestion. Different feedstocks degrade at different rates and produce different amounts of methane (as seen in Figure 12.4 and Table 12.1). That depends on the biological degradability and methane potential, the carbon and nutrients available, and the moisture content of each feed material. As noted in Figure 12.4 and Table 12.1, fats contain the highest volatile solids and can generate the greatest amount of biogas. Solids take longer to digest than soluble feedstocks. Nutrients are also important: a suitable carbon-to-nitrogen ratio (C/N) is less than 30, and the carbon-to-phosphorus ratio (C/P) should be less than 50. For example, lignocellulosic biomass has a high C/N ratio, so nitrogen sources must be added. Feeds also must be free of toxic components. Other factors that can influence digestion are the availability and location of feed materials (transportation costs are involved here), the logistics of getting materials to certain sites, and whether size reduction will be necessary.
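The C/N criterion lends itself to a quick blending calculation: how much of a nitrogen-rich co-substrate (e.g., manure) must be mixed with a high-C/N feedstock (e.g., straw) to reach a target ratio? A minimal sketch from a simple mass balance; the mass fractions used below are illustrative placeholders, not measured values:

```python
def blend_for_cn(c1, n1, c2, n2, target_cn):
    """kg of co-substrate 2 needed per kg of feedstock 1 to hit a target C/N.

    c1, n1, c2, n2 are mass fractions of carbon and nitrogen in each feed.
    Derived from (c1 + m*c2) / (n1 + m*n2) = target_cn.
    """
    m = (c1 - target_cn * n1) / (target_cn * n2 - c2)
    if m < 0:
        raise ValueError("target C/N not reachable by blending these two feeds")
    return m

# Illustrative values: straw ~45% C, 0.5% N (C/N = 90); manure ~35% C, 2.5% N (C/N = 14)
m = blend_for_cn(0.45, 0.005, 0.35, 0.025, target_cn=25)  # ~1.18 kg manure per kg straw
```

Substituting the result back, (0.45 + 0.35m)/(0.005 + 0.025m) does come out to 25, the target ratio.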
Digester performance will also depend on the microbial population in the digester. This means maintaining adequate quantities of fermenting bacteria and methanogens. A recycled stream is used to take a portion of the liquid digestate as inoculum (material used for inoculation of feed materials). And depending on feeds, there may be an acclimation period to reach acceptable conditions.
There are also variations in the operational factors and environmental conditions of the digester. It is important to know the total solids (TS) and volatile solids (VS) in the feeds and the best retention times, and to provide mixing. Operational factors include the amount, strength, and type of feedstocks added to the digester. Operation also depends on maintaining the microorganism population and organic loading in the reactors, whether operating in batch or continuous mode. Mixing is an important factor in any reaction; the goal of mixing is to keep the microorganisms in close contact with the feed and nutrients. Mixing also prevents the formation of a floating crust layer, which can reduce the amount of biogas percolating out of the slurry. Mixing benefits the breakdown of volatile solids and increases biogas production, but keep in mind that mixing adds energy cost, so this must be balanced. The types of mixing in these systems include gas bubbling and/or mechanical mixing.
Environmental conditions include the temperature and pH of the reactor, as well as concentrations of materials, including volatile fatty acids, ammonia, salts, and cationic ions. Different methanogens operate in different temperature ranges. The methanogens that produce the most biogas are thermophiles, but the digester must then operate between 40-70 °C. Methanogens also prefer near-neutral pH conditions (6.5-8.2). Accumulation of volatile fatty acids (VFAs) can cause the digester to stop producing gas; this happens when too much digestible organic material is added, a toxic compound is added, or there is a sudden temperature change. Toxic materials include: 1) oxygen, 2) antibiotics, 3) cleaning chemicals, 4) inorganic acids, 5) alkali and alkaline earth salts, 6) heavy metals, 7) sulfides, and 8) ammonia. An additional cause of AD process failure is a reaction that is out of balance: the rates of acid formation and methane production should be equal. Balance is maintained by keeping the following within definite ranges and ratios: solids loading, alkalinity, temperature, pH, mixing, and VFA formation. When the methanogens cannot keep up with the fermenting bacteria, the digester becomes acidic, also known as “sour.”
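The temperature and pH windows above can be encoded as a simple operating check. A minimal sketch using the thermophilic range (40-70 °C) and methanogen pH window (6.5-8.2) quoted in the text; the warning strings are illustrative:

```python
def digester_warnings(temp_c, ph):
    """Return a list of warnings if the digester leaves its operating window."""
    warnings = []
    if not 40 <= temp_c <= 70:
        warnings.append("temperature outside thermophilic range (40-70 C)")
    if ph < 6.5:
        warnings.append("acidic: digester may be going 'sour' (VFA accumulation)")
    elif ph > 8.2:
        warnings.append("pH above methanogen optimum (6.5-8.2)")
    return warnings

# A healthy thermophilic digester raises no warnings:
assert digester_warnings(55, 7.2) == []
```

A real control system would of course also track VFA concentration, alkalinity, and loading, per the list above.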
An ambient-temperature liquid-phase AD reactor is called a covered lagoon. Advantages of the covered lagoon are low cost, ease of construction, and odor control combined with manure storage. Disadvantages include difficult sludge removal and seasonal-only gas production. There are also several temperature-controlled designs corresponding to different reactor types: 1) complete mixing, 2) plug flow, 3) sequencing batch, and 4) fixed film. Table 12.2 shows a comparison of the variables for each type of anaerobic digester configuration.
Table 12.2: Comparison of various types of anaerobic digester configurations (Credit: On-Farm Anaerobic Digester Operator Handbook. M.C. Gould and M.F. Crook. 2010. Modified by D.M. Kirk. January 2010.)
| Characteristic | Covered Storage | Plug Flow Digester | Mixed Plug Flow Digester | Complete Mix Digester | Fixed Film Digester | Induced Blanket Digester | Two-Stage Digester |
|---|---|---|---|---|---|---|---|
| Digestion vessel | Clay or synthetic lined storage | Rectangle tank in ground | Rectangle tank in ground | Round/square in/above ground tank | In/above ground tank | In/above ground tank | In/above ground tank |
| Level of technology | Low | Low | Medium | Medium | Medium | High | High |
| Added heat | No | Yes | Yes | Yes | Optional | Yes | Yes |
| Total solids | 3-6% | 11-13% | 3-13% | 3-10% | 2-4% | <8% | ~5% |
| Solids characteristics | Coarse | Coarse | Medium Coarse | Coarse | Fine | .. | .. |
| Retention time (days) | 60+ | 15+ | 15+ | 15+ | <4 | 3-5 | 10-13 |
| Farm type | Dairy, Swine | Dairy, Swine | Dairy, Swine | Dairy | Dairy, Swine | Dairy, Swine | Dairy, Swine |
| Optimum location | All climates | All climates | All climates | All climates | Temperate/warm | All climates | All climates |
12.2 Syngas Fermentation
There is an unusual process for producing liquids from biomass: gasification followed by fermentation of the gases into liquids. During gasification, CO, H2, and CO2 are formed (as we have learned in past lessons), but instead of using a catalytic route such as FT or MTG, liquid fuels are formed through a fermentation process using a microbial catalyst. Products are typically ethanol, acetone, and butanol. Gasification was discussed in depth in Lesson 4, but I will cover it briefly here to remind you of the various processing aspects. Gasification takes place at temperatures of 750-900°C under partial oxidation. It happens in the following steps: drying; pyrolysis in the absence of O2; gas-solid reactions to produce H2, CO, and CH4 from char; and gas-phase reactions that adjust the amounts of H2, CO, and CH4. The product is most often known as syngas, but if it contains N2, it is called producer gas. Syngas can be generated from any hydrocarbon feed. The main cost of gas-to-liquid technologies is syngas production, which accounts for over half of the capital costs. Costs can be reduced by improving thermal efficiency through better heat utilization and process integration, and by decreasing capital costs.
There are advantages to using fermentation as part of liquids generation rather than using something like Fischer-Tropsch:
1. As with any gasification, it is independent of feedstock, and therefore, independent of biomass chemical composition.
2. Microorganisms are very specific to ethanol production, whereas with chemical catalysts there is a wide range of reaction products.
3. No pretreatment is required as part of the biochemical platform.
4. Complete conversion of biomass is achieved, including lignin conversion. This can reduce the environmental impact of waste disposal.
5. Fermentation takes place at near-ambient temperature and pressure, which can reduce costs significantly.
6. The requirement for CO/H2 ratio is flexible.
Of course, there are disadvantages as well. These include:
1. Gas-liquid mass transfer limitations.
2. Low ethanol productivity, usually related to low cell density.
3. Impurities in syngas generated from biomass.
4. Sensitivity of microorganisms to environmental conditions (pH, oxygen concentration, and redox potential).
The microorganisms used for ethanol production from syngas are acetogens that can produce ethanol, acetic acid, and other products from CO and H2 in the presence of CO2. The organisms include: 1) Clostridium strain P11, 2) Clostridium ljungdahlii, 3) Clostridium woodii, 4) Clostridium thermoaceticum, and 5) Clostridium carboxidivorans P7 (Wilkens and Atiyeh, 2011). The bacteria are some of the same types that occur during anaerobic digestion: acetogens and acidogens. I won’t go into great detail about the biochemistry, as it is a little beyond the scope of this class. The acetogens use the reductive acetyl-CoA (or Wood-Ljungdahl) pathway to build multi-carbon products from single-carbon substrates such as CO and CO2. Clostridium bacteria use H2 or organic compounds as the electron source for the reduction of CO2 to acetyl-CoA, which is further converted into acids and alcohols. The process proceeds in two phases: the acidogenic and solventogenic phases. In the acidogenic phase, mainly acids are produced (i.e., acetic acid and butyric acid). In the solventogenic phase, mainly solvents are produced (i.e., alcohols such as ethanol and butanol). Reactions 14 and 15 show the reaction chemistry for acetic acid formation, and reactions 16 and 17 show the reaction chemistry for ethanol formation:
Acetic acid formation:
(14) 4CO + 2H2O → CH3COOH + 2CO2
(15) 2CO2 + 4H2 → CH3COOH + 2H2O
Ethanol formation:
(16) 6CO + 3H2O → C2H5OH + 4CO2
(17) 2CO2 + 6H2 → C2H5OH + 3H2O
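Reactions 16 and 17 set the maximum mass yields of ethanol. For example, via reaction 16, 6 mol CO (6 × 28 g) gives at most 1 mol ethanol (46 g). A minimal sketch of these stoichiometric ceilings:

```python
# Molar masses (g/mol)
M_CO, M_CO2, M_H2, M_ETOH = 28.0, 44.0, 2.0, 46.0

def ethanol_yield_from_co():
    """Max kg ethanol per kg CO via reaction 16: 6CO + 3H2O -> C2H5OH + 4CO2."""
    return M_ETOH / (6 * M_CO)

def ethanol_yield_from_co2_h2():
    """Max kg ethanol per kg of (2CO2 + 6H2) feed via reaction 17."""
    return M_ETOH / (2 * M_CO2 + 6 * M_H2)

print(round(ethanol_yield_from_co(), 3))  # -> 0.274 kg EtOH per kg CO
```

Real fermentations fall well short of these ceilings because of the gas-liquid mass transfer and productivity limitations listed above.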
In summary, gasification-fermentation is an alternative method for biofuel production that utilizes syngas generated from gasification of biomass feedstocks. Because it is biologically based, it has the potential to reduce costs compared to other syngas-to-liquid technologies, but there are several challenges: low alcohol productivity, low syngas conversion efficiency, and limitations in gas-liquid mass transfer. These must be solved if the technology is to become economically viable.
12.3 Microbial Fuel Cells
A microbial fuel cell is a bio-electro-chemical device that can convert chemical energy directly into electrical energy. But first, let’s go over what a fuel cell is. A fuel cell is a battery of sorts. So, what is a battery? A battery consists of two different metals connected through what is called an electrolyte. One metal is the anode, which “wants” to give off electrons under the right conditions. The other is the cathode, which “wants” to accept electrons under the right conditions. When these two metals are in close proximity and there is a fluid that conducts ions between them (the electrolyte), electrons can flow from one metal to the other through an external circuit, and we can capture that flow to extract electricity. The batteries we use in television remotes eventually get used up and need to be replaced; these are examples of primary batteries. Figures 12.6a-12.6c show a generic cell, a stack of cells using zinc and copper, and a picture of a voltaic cell as created by its inventor, Alessandro Volta.
We can also have secondary batteries, which provide electrical energy in the same way, but to which we can also apply electricity to reverse the flow of electrons and regenerate the battery. While rechargeable batteries don’t last forever, you can definitely get your money’s worth because you can regenerate them.
Fuel cells are also a sort of battery, but the materials are different and flow continuously to produce electricity. Figure 12.7 shows a generic fuel cell. As with a battery, it has an anode, cathode, and electrolyte. The anode typically uses hydrogen as the fuel (left side of the figure), and the cathode uses oxygen as the oxidant (right side of the figure). The electrolyte contains a fluid and a membrane that carries the protons from the hydrogen to the cathode while the electrons flow through an external circuit; at the cathode, oxygen accepts the protons and electrons to form water. Typically, these cells run on hydrogen and oxygen, but we get electrical energy out rather than the heat released by burning hydrogen in oxygen.
At the anode, hydrogen reacts as shown in reaction 18:
(18) H2 → 2H+ + 2e-
This is an oxidation reaction that produces protons and electrons at the anode. The protons then migrate through an acidic electrolyte, and the electrons travel through an external circuit. Both arrive at the cathode to react with the oxidant, oxygen, as shown in reaction 19:
(19) ½ O2 + 2H+ + 2e- → H2O
This is the reduction reaction, where oxygen can be supplied pure or in air. Essentially, the total circuit is completed by the transfer of protons through the electrolyte and of electrons through the external electrical circuit. A small amount of heat is lost through the electrodes. The overall reaction is shown in reaction 20:
(20) H2 + ½ O2 → H2O + work + waste heat
The water and waste heat are by-products and must be removed on a continuous basis. The ideal voltage from this reaction is 1.23 volts, but less than that will be realized in practice. Other issues include the use of fuels other than hydrogen (methanol, hydrocarbons, etc.) and the fact that fuel cells produce direct current (DC) while most applications require alternating current (AC). This bit of background has been provided so we can have a short discussion of microbial fuel cells.
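The ideal voltage follows from the Gibbs free energy of the overall reaction, E = -ΔG/(nF). A short sketch using standard values (ΔG° ≈ -237.1 kJ/mol for H2 + ½O2 → H2O(l); n = 2 electrons per H2; F = 96485 C/mol); sources quote 1.22-1.23 V depending on conditions:

```python
F = 96485.0    # Faraday constant, C per mol of electrons
dG = -237.1e3  # standard Gibbs energy of H2 + 1/2 O2 -> H2O(l), J/mol

def ideal_cell_voltage(delta_g_j_per_mol, n_electrons):
    """Reversible (ideal) cell voltage: E = -dG / (n F)."""
    return -delta_g_j_per_mol / (n_electrons * F)

print(round(ideal_cell_voltage(dG, 2), 2))  # -> 1.23 V
```

Operating cells sit well below this because of ohmic, activation, and concentration losses.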
We are not going to go into great depth on microbial fuel cells, as some of the biochemistry can be complex. I will provide you with some generic information on how a microbial fuel cell is set up. These are bio-electro-chemical devices, which convert chemical energy directly into electrical energy. As we have discussed before, several steps need to occur. Cellulose is hydrolyzed into sugars, i.e., glucose. The sugars are fermented into short-chain fatty acids, alcohols, hydrogen, and carbon dioxide. Finally, electricigenesis takes place, producing electricity, with carbon dioxide carried through. Electricigenesis converts chemical energy to electrical energy via the catalytic reactions of microorganisms. The anode is anaerobic, and the anode chamber contains microbes and feedstock. The fuel is oxidized by the microorganisms, which generates CO2, electrons, and protons. The cathode sits in the aerobic chamber, and just like in other fuel cells, a proton exchange membrane separates the two chambers and allows only protons (H+ ions) to pass.
There are two types of microbial fuel cells (MFCs): mediator and mediator-less. The mediator type was demonstrated in the early 20th century and uses a mediator: a chemical that transfers electrons from the bacteria in the cell to the anode. Such chemicals include thionine, methyl viologen, methylene blue, humic acid, and neutral red; they are expensive and toxic. Mediator-less MFCs are a more recent development, dating from the 1970s. These cells use bacteria that carry electrochemically active redox proteins, such as cytochromes, on their outer membrane, which can transfer electrons directly to the anode. Some electrochemically active bacteria are Shewanella putrefaciens and Aeromonas hydrophila. Some bacteria have pili on the external membrane, which allow electron transfer through the pili. MFCs are beginning to find commercial use in the treatment of wastewater. I’ve included some YouTube videos explaining how MFCs work. The first video is a brief but complete explanation of how an MFC works (2:03).
MudWatt Microbial Fuel Cell
Click here for transcript of the MudWatt video.
[MUSIC PLAYING]
PRESENTER: The MudWatt Microbial Fuel Cell is a bio electrical device that uses the natural metabolisms of microbes found within soil to produce electrical energy. Here's how it works.
The MudWatt is comprised of two graphite felt electrodes-- the anode and the cathode-- held within a durable airtight container. The piece of electronics on top is used for experimentation and also features an LED light which blinks using the power of the microbes within your soil. The user simply fills the MudWatt with wet soil, burying the anode while resting the cathode on top.
In this configuration, a healthy community of electricity generating microbes will develop on the surface of the anode in a matter of days. These bacteria have unique metabolic abilities which enable them to respire the sugars and nutrients within the soil and deposit electrons onto the anode as part of their natural metabolism. Protons and carbon dioxide are released into the soil as metabolic byproducts and diffuse toward the cathode.
Once transferred to the anode, the electron then travels through the electrode wire, through the MudWatt electronics to the cathode. While passing through the electronics, this electrical current will light the LED light on top, giving you a visual indication that your microbes are healthy and happy.
At the cathode, the electron interacts with oxygen in the air, as well as protons coming from the anode, to form water. The carbon dioxide originating from the anode is released into the air. The cycle continues, limited only by the availability of nutrients within the soil and oxygen within the air.
[MUSIC PLAYING]
Credit: KeegoTech
The next video is a little longer and goes into a little more depth of what was described above (7:23).
Microbial Fuel Cell
Click here for transcript of the Microbial Fuel Cell video.
[MUSIC PLAYING]
PRESENTER: Inspiration to build bio-electrochemical systems came from a discovery of certain microbes that live in soil. These bacteria swim up to the solid metal-- such as iron-- transfer electrons to the metal, dissolving it in process. This is similar to aerobic bacteria that transfer electrons to molecules of oxygen during respiration. The electron transfer generates electricity. To where there's electricity, there is power.
PRESENTER: To harvest the electricity, a bio-electrochemical fuel cell is used. This system consists of two compartments-- an anode compartment and a cathode compartment. These two compartments are separated by a membrane. A biofilm grows on the end of it.
[MUSIC PLAYING]
An organic feed stream, such as waste water, enters the fuel cell, where it is oxidized by the biofilm. Simultaneously, oxidized products leave the fuel cell. The oxidation of organics-- for instance, acetate-- produces electrons and protons. This half reaction releases a certain amount of energy.
Electrons are conducted over the wire while protons move through the membrane to the cathode to uphold electroneutrality. Oxygen is supplied to the cathode chamber, where it accepts the electrons and reacts with the protons to form water. This half reaction also releases a certain amount of energy.
[MUSIC PLAYING]
The theoretical maximum energy gain is determined by combining both half reactions. However, resistances are found in multiple layers of the fuel cell.
[MUSIC PLAYING]
The ohmic losses are found in the electrical wire and in the proton transfer from the anode to the cathode. Concentration losses occur when the rate of mass transfer to either the anode or cathode compartment limits the rate of product formation. Bacterial metabolic losses can be described by the amount of energy that is used by the microbes to grow.
The energy is harvested to form a proton gradient over the inner membrane. Activation losses are described by the capacity of the biofilm to transfer the electrons to the anode. Certain organisms can grow conductive nanowires-- called pili-- that directly interact with the anode to transfer the electrons.
[MUSIC PLAYING]
[BIRDS CHIRPING]
PRESENTER: OK. Today, we're out in the wild. And we're looking for some sludge to power our bio-battery. I think this a nice spot. Ah, it's perfect.
PRESENTER: Another application of bio-electrochemical systems is the production of chemicals. In this case power must be applied to the biofilm by an external source. Electrons are produced by the oxidation of water. Now the biofilm grows on the cathode. The energy-rich electrons are used by organisms to fix carbon dioxide.
[MUSIC PLAYING]
Carbon dioxide enters the cathode compartment and diffuses to the cathode. There, to harvest energy from the electrons, CO2 is fixed and acetate is formed.
PRESENTER: All right. Let's have a look at our bio-battery. We buried the anode in our soil. Make sure there are no air bubbles in the soil. It has to be anaerobic. On top of the soil, we placed the cathode, which is in direct contact with the oxygen in the air.
Now let's look at another application of microbial fuel cell. Desalination can be achieved by inserting an extra compartment in between the anode and the cathode. A forward osmosis membrane is placed at the anode. This allows transport on both positively and negatively charged ions.
At the cathode, a cation exchange membrane is placed, which permits only transport of positively charged ions. Salt water is flowing through this compartment while negatively charged ions move to the anode, and positively charged ions moved to the cathode.
To finalize our bio-battery and to see the energy production, we have to connect the cathodes to the anodes. The electricity is stored in a transformer and used to power an LED light.
[MUSIC PLAYING]
Credit: Sebastiaan de Bruin
The next short video has Dr. Bruce Logan, a professor in the Civil Engineering Dept. at PSU providing a brief explanation on using these MFCs for wastewater treatment facilities (3:05).
Electrifying Wastewater
Click here for transcript of the Electrifying Wastewater video.
PRESENTER: Clean water and electricity are essential for everyday life. But to get one, we often need the other. We generate electricity to purify water. And in many parts of the world, we use water to create energy.
In the United States, an average of 5% of the electricity we produce goes toward powering our water infrastructure. But what if we could use wastewater for energy? As it turns out, a decades-old technology known as microbial fuel cells can help extract the energy in wastewater to produce electricity.
BRUCE LOGAN: A microbial fuel cell is a device where we use bacteria to directly produce electrical current from something as simple as waste water. Right now, we have wastewater treatment plants that consume electrical power. And we can imagine a time when these treatment plants are transformed into what we hope would be power plants.
PRESENTER: Deep in the sewers in wastewater treatment plants, there are billions of bacteria, or microbes, that break down organic matter to produce electrons. They surrender their electrons to oxygen molecules in exchange for energy. But in a microbial fuel cell, the electrons take a detour.
BRUCE LOGAN: This is a microbial fuel cell. It's really a very simple device. It's just a tube with electrodes on either side of that tube-- one which is sealed off, so the bacteria can't get at the oxygen, the other one which is exposed to oxygen.
The most important part of a microbial fuel cell is the microbes. The microbes grow on an electrode, which is kept oxygen free, so that they send off those electrons to the electrode rather than to oxygen. The electrons flow through a circuit, and we extract that electrical current as electrical power.
PRESENTER: To complete the circuit, the electrons end up on the other side of the tube and combine with oxygen. The theory is simple. But putting it into practice is not so easy.
BRUCE LOGAN: We initially thought that the greatest challenge would be the bacteria. But as it turns out, it's actually everything but the bacteria that we've had the greatest challenges with. We have systems the size of this cube and maybe a little bit bigger, but we really haven't gone out and built 1,000-liter systems or 10,000-liter systems. And that's an engineering challenge that we need to address next.
PRESENTER: Despite these challenges, Logan is optimistic about microbial fuel cells.
BRUCE LOGAN: I am really excited about microbial fuel cells and these different technologies because it creates a truly sustainable way to produce energy and power our water infrastructure.
PRESENTER: In other words, if microbial fuel cells deliver on their promise, wastewater will be waste no longer.
[MUSIC PLAYING]
Credit: BytesizeScience
And the last video, put together by Dr. Logan’s group at PSU, shows how to construct three different types of MFCs (4:26).
Several Types of Microbial Fuel Cells
Click here for transcript of the types of microbial fuel cells video.
[MUSIC PLAYING]
PRESENTER: Here we have three different microbial fuel cell reactor architectures. In front we have cube reactors. This is a single bottle MFC reactor. And this is a double bottle reactor, also known as an H-type reactor.
[MUSIC PLAYING]
Here we have a cube reactor disassembled. I just want to be able to show you all the parts before I put them back into the reactor itself.
This is the body of the reactor. It's a Lexan cube with a 28-milliliter anode chamber and two holes drilled in the top for refilling and emptying of substrate.
This is the cathode end plate, and this is the cathode. This is actually an air cathode made out of carbon cloth.
This side has the four diffusion layers of PTFE. This side has carbon black and platinum catalyst. This is the side that actually faces the interior of the MFC.
This is the other end plate. There are many different materials that can be used. In this example, we are using a carbon fiber brush anode.
Now I will demonstrate assembly of a cube reactor.
First place the cathode in, the platinum-- carbon black side facing in. We use gaskets to make sure that there is no leakage and good connection between the platinum side and the current collector, which is a titanium wire.
The cube reactor is held together with all-thread and wing nuts, using compression to keep all the pieces in place.
So the front's done. Apply a gasket, O-ring; the end plate gets slid on.
There you have it-- a cube reactor.
[MUSIC PLAYING]
So here we have a single bottle reactor. This is very similar to the cube reactors. In this setup we have a brush anode as well as another air cathode.
Now I'll demonstrate how to assemble a bottle reactor. So this is just a standard media bottle with an arm attached to it. Once again, place the cathode with the platinum and carbon black side facing into the reactor and the PTFE diffusion layer side facing toward the air.
So all we're doing is placing the cathode, current collector, O-ring-- this is just a cap that will be used for compression to keep the cathode in place-- using just a regular arm clamp. All that remains to be done is placing of the brush anode into the bottle.
And that is a single bottle reactor.
[MUSIC PLAYING]
OK. Now I'll demonstrate the assembly of two bottle MFC reactor architecture, or an H-type reactor.
In this situation, the anode and cathode are both the same size, and we're using the same material for the anode and cathode, which is carbon paper.
And it's just been attached to a titanium wire as the current collector. And once again, this is being used as both the anode and the cathode.
In this situation, the cation exchange membrane is placed in between the two chambers.
Use a clamp. Tighten the clamp if necessary.
And you have a two-chamber microbial fuel cell reactor.
Credit: MFC Technology
MFCs can also be used in food processing plants and breweries, and even implanted as biomedical devices. There are still technical challenges: MFCs have relatively low power densities, which means they don't generate much power, so research continues to improve power densities. These devices have an incredible future but need more research before they can be commercialized at a large scale.
|
textbooks/eng/Biological_Engineering/Alternative_Fuels_from_Biomass_Sources_(Toraman)/12%3A_Additional_Processes_for_Fuels_from_Biomass/12.03%3A_Microbial_Fuel_Cells.txt
|
12.4 Final Thoughts on the Use of Biomass for Fuel Generation
We have explored many uses of biomass to generate fuels, from electricity generation through combustion to several conversion technologies that make ethanol, biodiesel, and fuels very similar to petroleum-based fuels like gasoline, jet fuel, and diesel fuel. I also wanted you to have the opportunity to see what is still being researched and what has been commercialized. While biofuels may not be economically competitive at the moment because the cost of crude oil is low, they show great environmental benefit, especially when renewable methods are used to harvest and generate the fuels. You looked up several very interesting articles related to biofuel generation and use. Read articles on biofuels with a critical eye; you now know enough about biofuels to be a more critical reader. I hope that you enjoyed the class.
12.05: Assignments
12.5 Assignments
Final Project Reminder
Remember that your Final Project will be due at the specified time in Canvas.
Quiz#4
You will complete the take-home Quiz #4.
12.06: Summary and Final Tasks
Summary
This last lesson covered topics that are practical, such as anaerobic digestion (a process that also occurs during natural digestion), as well as topics that are more research-oriented, such as syngas fermentation and microbial fuel cells. The best application for anaerobic digestion is on farms, as a way to utilize animal waste. Syngas fermentation could be a less energy-intensive process, making gasification with syngas fermentation less expensive than gasification with FT synthesis. And, finally, microbial fuel cells are unique fuel cells that can take advantage of bacteria to make electricity or clean up wastewater inexpensively.
References
Shi, J., Zhang, R., Liao, W., Hansen, C. L., and Li, Y. (The Ohio State University; University of California-Davis; Michigan State University; Utah State University), Anaerobic Digestion, BEEMS Module B7, USDA Higher Education Challenge Program, 2009-38411-19761. Contact: Yebo Li, [email protected]
Frigon and Guiot, 2010, Biofuels, Bioproducts & Biorefining, 4, 447-458.
Gould, M. C. and Crook, M. F., On-Farm Anaerobic Digester Operator Handbook, 2010; modified by D. M. Kirk, January 2010.
Atiyeh, H. K. (Department of Biosystems and Agricultural Engineering, Oklahoma State University), Syngas Fermentation, BEEMS Module N1, USDA Higher Education Challenge Program, 2009-38411-19761. Contact: Hasan K. Atiyeh, [email protected]
Wilkins and Atiyeh, 2011, Current Opinion in Biotechnology, 22, 326-330.
Reminder - Complete all of the Lesson 12 tasks!
You have reached the end of Lesson 12! Double-check the Road Map on the Lesson 12 Overview page to make sure you have completed all of the activities listed there.
Questions?
If there is anything in the lesson materials that you would like to comment on, or don't quite understand, please post your thoughts and/or questions to our Throughout the Course Questions & Comments discussion forum and/or set up an appointment for office hours. While you are there, feel free to post responses to your classmates if you are able to help.
|
textbooks/eng/Biological_Engineering/Alternative_Fuels_from_Biomass_Sources_(Toraman)/12%3A_Additional_Processes_for_Fuels_from_Biomass/12.04%3A_Final_Thoughts_on_the_Use_of_Biomass_for_Fuel_Generation.txt
|
As an introduction we will define some multidisciplinary terminology, consider our motivations, and cover some relevant academic activity as well as research publications.
Biomimetic implies the mimicry of biology; note the word mime embedded in the term. In recent years, biomimicry seems to be used more and more for sensory systems applications (our interest), while biomimetics implies molecular-level mimicry. This text is focused on biologically-inspired paradigms used for sensory systems and the signal processing that goes with such systems. The subject is sensory systems, not the research associated with mimicking organic chemistry, muscle tissue, etc. In this text, cursory descriptions of biological phenomena will be followed by electronic sensor designs and the signal processing algorithms that emulate such phenomena for useful technological application. Bioprincipic is a similar term used recently, implying the mimicry of biological principles.
Biometric implies measuring biological features unique to an individual to determine the identity of the person. For example, authentication can be granted based on a pattern matching of a fingerprint, scanned iris image, or recorded voice pattern. This could be used for building and computer security purposes.
Biomedical means the branch of medicine associated with survival in stressing environments. Bionic means enhancing normal biological capability with electronic or mechanical devices [Webster].
Bioinformatics is used to describe computer applications of extracting information about biological phenomena, primarily in the field of molecular biology. Bioinformatics is more formally defined as "The collection, classification, storage, and analysis of biochemical and biological information using computers especially as applied to molecular genetics and genomics." [Webster].
Anatomy and Physiology imply structure and function, respectively. Scientists from many disciplines often organize their thoughts in similar ways. However, until there is a reason to communicate across disciplines, the terminology in each tends to develop differently. Table 1 is an observation of the separation of phenomena into physical and abstract categories.
Table 1. Different Terminology, Similar Concepts
Concept     Biology      Engineering     Computer    Common Term
Physical    Anatomy      Architecture    Hardware    Structure
Abstract    Physiology   Algorithm       Software    Function
Genetic Algorithms refers to computational methods inspired by genetics. A genetic algorithm may consist of improving on an existing solution (or one chosen initially at random) based on an evaluation of fitness representing the problem solution. Improved solutions may be derived from fitness evaluation and genetic operators representing mutation and crossover.
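The fitness-driven loop described above can be sketched in a few lines of Python. This is an illustrative example, not from the original text: the bit-string encoding, population size, mutation rate, and the toy "one-max" fitness (count of 1-bits) are all arbitrary assumptions.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def genetic_algorithm(fitness, n_bits=16, pop_size=20, generations=50,
                      mutation_rate=0.02):
    # Start from a population of random candidate solutions (bit strings).
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluation drives selection: keep the better half.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)   # crossover point
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)  # mutation
                     for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)
print(sum(best))  # close to n_bits = 16 for these settings
```

In a real application the toy fitness would be replaced by a domain-specific evaluation of solution quality; the crossover and mutation steps are the genetic operators mentioned in the text.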
Evolutionary Computation refers to the computational methods mimicking natural evolutionary forces.
Neural Networks is used to refer to networks of computational elements that process information in a way analogous to biological neuronal networks. Both natural and artificial neural networks perform a nonlinear transform on an aggregation of many weighted input signals. There are many artificial neural network paradigms (ANNs) that include ideas not found in biological neuronal networks, although the general concept has its original inspiration in biology.
Nevertheless, most ANN variations have these features in common with natural neural networks:
- A summation of many inputs, each weighted differently based on learned examples
- A non-linear output mapping function follows the summation
- massively parallel
- distributive processing
- adaptive
The application of ANNs to various engineering problems has grown into an academic field of its own; further study of neural networks is reserved for the courses and texts dedicated to that subject.
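The common features listed above (a weighted summation of many inputs followed by a nonlinear output mapping) can be sketched as a single artificial neuron. The logistic sigmoid and the example weights here are illustrative choices, not from the original text:

```python
import math

def neuron(inputs, weights, bias=0.0):
    # A summation of many inputs, each weighted differently ...
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ... followed by a non-linear output mapping (logistic sigmoid here).
    return 1.0 / (1.0 + math.exp(-s))

# One neuron with two inputs; a network evaluates many such units in
# parallel (massively parallel, distributive processing), and learning
# adjusts the weights from example data (adaptive).
print(round(neuron([1.0, 0.5], [2.0, -1.0]), 3))  # 0.818
```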
|
textbooks/eng/Biological_Engineering/Bio-Inspired_Sensory_Systems_(Brooks)/01%3A_Introduction/1.01%3A_Relevant_terminology_and_related_bio-inspired_technologies.txt
|
1.2 Motivation for this multidisciplinary study
So, why a special electrical engineering course focused on biologically-inspired sensory system designs and signal processing techniques? A few of the reasons include:
- Natural systems solve engineering problems
- Biological information is becoming increasingly more available
- Technology is becoming increasingly more affordable and available
- Research agencies continue to support bio-inspiration
- A better understanding of biology can result from attempting to imitate biology
1.2.1 Natural systems solve engineering problems.
From the earliest times we have looked to biological systems for engineering solutions to our technical problems. For example, in Greek mythology the legendary Daedalus, builder of the Cretan labyrinth, was motivated by birds to build wings to help him and his son, Icarus, escape imprisonment. Later observations of birds, such as wing-shape, have led to modern aircraft design features.
Velcro was inspired by the way burrs attach themselves to clothing. Autonomous robots can benefit from the study of natural control mechanisms found in similar creatures in the animal kingdom. Machine vision systems for robotics require the separation of objects from the background, a task inherently embedded in the design of natural vision systems. The image recognition capability of humans is difficult to duplicate with computer technology, even though neurons are five or six orders of magnitude slower than silicon transistors and are heterogeneous (considerably 'mismatched' compared to transistors).
1.2.2 Biological information is becoming increasingly more available
The difficulty in reverse-engineering natural systems is due in part to our lack of complete understanding of these complex systems. In organic chemistry and microbiology, we have uncovered much detail of the fundamental physical processes at the neuronal level. We also have considerable understanding of overall system behavior from fields such as psychology and psychophysics. What is difficult to grasp, however, is how the microscopic processes transform sensory information into macroscopic decisions and behaviors. This leads to an interest in natural design optimizations and interconnection schemes.
It is commonly agreed that many people will do almost anything for money but will also freely give it up for their health. This captures our limited existence in time and space while desiring permanence, which leads to our willingness to do whatever we can to maintain or improve our health. As a result, there are, and will always be, enormous resources (funds, etc.) available for exploring a deeper understanding of biological phenomena. Although guided by medical purposes, system concepts applicable to other uses will eventually unfold. As we move further into the information age with better and better technology, many of the details are already available for exploiting natural sensory design concepts.
Although there is already an abundance of information available on natural sensory systems and signal processing, it is difficult for engineers to decipher useful information from the biomedical literature. This is due in part to the different motivations: The medical community is interested in diagnosing (organic) system problems and formulating procedures and medications to fix those problems or allow the patient the ability to adequately deal with the problems. The engineer, on the other hand, is more interested in how specific tasks are accomplished from the available sensory signals.
1.2.3 Abundant technology is affordable and user-friendly
Due to rapid advances in processing speeds and throughput capabilities, many successful applications have now been developed using artificial intelligence, deep-learning neural network architectures, and other related technologies. A small sample of tools readily available to students and researchers includes:
- Reconfigurable computing tools such as Quartus (Altera) and Vitis (Xilinx)
- Circuit simulation tools such as PSPICE (Microsim)
- Data Acquisition such as LabVIEW (National Instruments)
- Computational tools such as Matlab (Mathworks)
- Development platforms such as Raspberry Pi, Arduino, etc.
- Languages such as Python
1.2.4 Research agencies continue to support bio-inspiration
The author draws from former work experience at the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN). To address the high signal processing throughput and short latency of an imager that guides an exo-atmospheric hypervelocity missile, novel concepts were explored that involved biologically-inspired approaches. Funded concepts included an infrared sensor with retina-inspired readout, multi-resolution targeting inspired by foveated vision, and other research projects exploiting various bio-inspired sensory design ideas.
Some historical efforts (late 1980’s and 1990’s)
Much of the work at AFRL/MN was leveraged from former research sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research (ONR). Research funded by ONR and DARPA, as well as the National Science Foundation (NSF), the National Institutes of Health (NIH), and others, has resulted in books whose individual chapters are written by the various researchers, which can lead to a considerable lack of continuity and consistency. Nevertheless, the material in such books has proven to be very useful; a few examples include:
- Mead, Carver, Analog VLSI and Neural Systems, Addison-Wesley, 1989.
- Zornetzer, Steven, Davis, Joel, and Lau, Clifford, editors, An Introduction to Neural and Electronic Networks, Academic Press, 1990.
- Ayers, J., Davis, J. and Rudolph, A., editors Neurotechnology for Biomimetic Robots, MIT Press, 2002.
- Bar-Cohen, Yoseph, and Breazeal, Cynthia, editors, Biologically-inspired Intelligent Robots, Taylor and Francis, 2003.
- Bar-Cohen, Yoseph, editor, Biomimetics: Biologically-inspired Technologies, Taylor and Francis, 2006.
The following book and its 2nd edition have been useful for covering the structure and function of biological sensory systems:
- Smith, C.U.M, Biology of Sensory Systems, John Wiley and Sons, ISBN: 0-471-85461-1, 2000.
As an example of continued strong and direct support for biomimetics, consider this excerpt from an announcement for Biomimetics for Computer Network Security Workshop (1999):
"The Office of Naval Research is sponsoring a workshop whose goal will be to identify technologies that are inspired by biological foundation and that, when matured, may contribute to a significant increase in network security capability...This research is aimed at developing a new class of biologically inspired robots that exhibit much greater robustness in performance in unstructured environments than today's robots.... The research involves a close collaboration among robotics and physiology researchers at Stanford, U.C. Berkeley, Harvard and Johns Hopkins Universities... sponsored by the Office of Naval Research under grant N00014-98-1-0669…"
More recent developments
In August, 2020, the Office of Naval Research (ONR, www.onr.navy.mil, Code 341) continued to solicit contract and grant proposals in the area of “Bio-inspired Autonomous Systems” with the following description:
The aim of Bio-inspired Autonomous Systems is to extract principles of sensorimotor control, biomechanics and fluid dynamics of underwater propulsion and control in aquatic and amphibious animals that underlie the agility, stealth, efficiency, and sensory adaptations of these animals. The principles that emerge from this interdisciplinary research are formalized and explored in advanced prototypes. The goal of this program is to expand the operational envelope of Navy underwater and amphibious vehicles and enable enhanced underwater manipulation.
as well as in “Bio-inspired Signature Management” with the following description:
The Bio-inspired Signature Management program aims to discover biologically-inspired adaptations and bioengineered solutions to expand current warfighter capabilities in detection mitigation and undersea navigational challenges. This will be accomplished through multidisciplinary research in science and technology fields such as bio-inspired / biomimetic materials, visual and sensory perception, and bio-optics / bioelectronics.
Also in August 2020 the Defense Advanced Research Projects Agency (DARPA, www.darpa.mil) gives the following description of their “Nature As Computer (NAC)” program:
Certain natural processes perform par excellence computation with levels of efficiency unmatched by classical digital models. Levinthal’s Paradox illustrates this well: In nature, proteins fold spontaneously at short timescales (milliseconds) whereas no efficient solution exists for solving protein-folding problems using digital computing. The Nature as Computer (NAC) program proposes that in nature there is synergy between dynamics and physical constraints to accomplish effective computation with minimal resources. NAC aims to develop innovative research concepts that exploit the interplay between dynamic behaviors and intrinsic material properties to develop powerful new forms of computation. The ability to harness physical processes for purposeful computation has already been demonstrated at lab-scales. NAC seeks to apply these concepts to computation challenges that, for fundamental reasons, are poorly suited to, or functionally unexplored with, classical models. NAC will lay the foundation for advancing new theories, design concepts and tools for novel computing substrates, and develop metrics for comparing performance and utility. If successful, NAC will demonstrate the feasibility of solving challenging computation problems with orders-of-magnitude improvements over the state of the art.
1.2.5 Imitating biology can lead to a better understanding of biology
Although engineering applications may result from biological inspiration, sometimes those applications are themselves biomedical. For example, artificial neural networks are used for identifying potentially cancerous sites in x-ray images. Meanwhile, biomimetic robots are not only used as testbeds for potential engineering applications, but also as tools for biologists to better understand complex animal-environment relationships. An example of this is found concerning MIT's "RoboLobster" in the following quote:
The major result of these studies was a solid demonstration that tropotactic concentration-sensing algorithms could not explain the plume-tracking behavior in lobsters…So we are forced to consider other biologically feasible algorithms to find a reasonable explanation…Thus RoboLobster revealed to us something about the lobster’s world that we had previously only suspected: the need to switch tracking strategies between different regions of the plume [Grass02]
|
textbooks/eng/Biological_Engineering/Bio-Inspired_Sensory_Systems_(Brooks)/01%3A_Introduction/1.02%3A_Motivation_for_this_multidisciplinary_study.txt
|
1.3 Academic research activity
Using natural biology as a source of inspiration for solving sensing problems requires a solid understanding of biology. Although there is a long history of our understanding how biological sensory systems perform certain tasks, there is still very much that is not yet understood. Biological research institutions continually reveal deeper knowledge of the structure and function of sensory systems, which gives engineering problem-solvers more to consider. Models and algorithms are developed to match measured data, such as the Hassenstein-Reichardt Elementary Motion Detection (HR-EMD) model [Hass56], DeValois spatial vision models [DeV88], and others more focused on a specific sensory system, such as Frank Werblin’s efforts to simulate primate vision processing in the retina [Werb91], John Douglas’ and Nicholas Strausfeld’s work to map the neural circuitry of the fly [Doug00], and many others.
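As a hedged sketch of the HR-EMD idea mentioned above: each of two neighboring photoreceptor signals is delayed and multiplied with the undelayed signal of its neighbor, and the two products are subtracted to give a direction-selective output. The first-order low-pass filter used as the delay, the coefficient, and the test signals are illustrative assumptions, not taken from [Hass56]:

```python
import numpy as np

def hr_emd(x1, x2, alpha=0.1):
    """Sketch of a Hassenstein-Reichardt elementary motion detector.
    x1, x2: signals from two neighboring photoreceptors."""
    def lowpass(x):
        # First-order low-pass filter acting as the delay element.
        y = np.zeros(len(x))
        for n in range(1, len(x)):
            y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
        return y
    # Delay-and-correlate on each arm, then opponent subtraction.
    return lowpass(x1) * x2 - lowpass(x2) * x1

# A pattern moving from detector 1 toward detector 2 (x2 lags x1)
# yields a mostly positive response; reversed motion flips the sign.
t = np.arange(200)
x1 = np.sin(2 * np.pi * t / 50)
x2 = np.sin(2 * np.pi * (t - 5) / 50)   # same pattern, delayed 5 samples
print(hr_emd(x1, x2).mean() > 0)        # True for this direction
```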
Sometimes biology is deliberately considered as inspiration for new ideas. One example is the funding provided during the 1980's by the Office of Naval Research (ONR) and the Defense Advanced Research Projects Agency (DARPA) to pursue novel military sensor designs. One of the products is a collection of biologically-inspired sensory system design concepts implemented in VLSI technology. Several of these designs, developed at the California Institute of Technology (CalTech), are detailed in Analog VLSI and Neural Systems [Mead89]. One of these designs, the 'silicon retina', was expanded by the Air Force Research Laboratory (AFRL) for military seeker applications by integrating it with an array of infrared sensors [Mass93]. Some graduates from Mead's lab began their own bio-inspired labs at institutions such as the Georgia Institute of Technology, the University of Florida, and the Massachusetts Institute of Technology, while other graduates started their own companies building bio-inspired components or researching follow-on design concepts.
Much more work in bio-inspired sensing can be found in technical journals and conferences such as the IEEE International Conference on Robotics and Biomimetics. This conference alone has more than 500 papers and has been held annually since 2012. A common application at this conference is the robotic fish, inspired by the underwater maneuverability of fish. Other popular topics include deep-learning neural networks, actuators, flocking (or swarming), and biomimetic materials. These topics are also popular in other bio-inspired conferences, journals, magazines, etc. Although the original neural network was bio-inspired, many subsequent efforts deviate from biology (not to mention that very little is known about how real neural networks work). Any research using a neural network, or adding something to a robotic fish or another originally bio-inspired concept, could arguably be labeled 'bio-inspired', which complicates isolating truly new bio-inspired contributions.
In addition to the technology applications, we have the more biology-focused efforts, where biologists attempt to derive models that adequately reflect measured data. Example journals include Vision Research and Biological Cybernetics. A drawback for engineers is the biology-intensive language necessary to convey these models, as well as the typically narrow focus on a very specific part of one species' neural circuitry, such as the mechanism for turning in the salamander [Liu20]. Therefore, due to the quantity of technical efforts and the wide diversity of disciplines that consider this general topic area, it is quite challenging to encompass all significant efforts in any of the basic modalities (vision, olfaction, gustation, tactile, audition) of bio-inspired sensory design.
1.04: Questions
Chapter 1 Questions
1. What do the terms biomimetics, biometrics, biomedical, bionics, and bioinformatics mean?
2. What are some of the motivations for studying biomimetics?
3. What terms do the various academic disciplines use to describe system structure and system function?
4. What are some of the characteristics of both artificial and natural neural networks?
5. There is a wealth of knowledge available concerning biological processing at the molecular and neuronal levels as well as a wealth of knowledge concerning human behavior. What's missing?
6. Why is it a good assumption that there will continue to be significant spending on biological research?
|
textbooks/eng/Biological_Engineering/Bio-Inspired_Sensory_Systems_(Brooks)/01%3A_Introduction/1.03%3A_Academic_research_activity.txt
|
Some common groundwork is necessary before investigating natural and biomimetic sensory systems and signal processing. A review of the salient aspects of linear systems (Section 2.1) is covered first. This is followed by fundamentals of neuronal systems (Section 2.2), neuronal processing (Section 2.3), an electric circuit model of a neuron (Section 2.4), and basic neuronal motion detection models (Section 2.5). The following free texts are recommended for more thorough treatment of linear systems theory and image processing:
Ulaby, F. and Yagle, A, Signals and Systems: Theory and Applications, Michigan Publishing, ISBN 978–1–60785–487–6, 2018. Available at ss2.eecs.umich.edu.
Yagle, A. and Ulaby, F., Image Processing for Engineers, Michigan Publishing, ISBN 978–1–60785–489–0, 2018. Available at ip.eecs.umich.edu.
2.1 Relevant Linear Systems Theory
This section summarizes the salient points of linear systems theory relevant to biomimetic sensory systems and signal processing. Many facets of natural vision processing can be modeled as spatial-temporal filters operating on input signals or image sequences. The two-dimensional spatial filters operate on each image frame, which is subsequently modified to account for the temporal history of previously filtered image frames. These operations are extensions of 1D discrete-time convolutions and other signal processing operations. The topics covered here include the motivation for LTI system modeling, continuous-time convolution and impulse response, δ(t), discrete-time convolution and unit pulse function, δ[n], 2D discrete-time convolution, and the Fourier Series and Fourier Transform.
2.1.1 Motivation for linear time-invariant (LTI) system modeling
Biological systems are naturally nonlinear and time-varying. However, there is much practicality in approximating portions of a system as piecewise linear over a nominal range of input values and time-invariant over relatively short periods of time. Such models can often be "close enough" to be very useful. To simplify signal processing, we desire to use models of biological information transforms that are linear and time-invariant, or LTI, transforms.
A system is linear if the law of superposition applies. In electrical engineering, superposition implies mathematical homogeneity and additivity. If the system output is y1 = f(x1) for an input x1 and y2 = f(x2) for an input x2, then the system is homogeneous if f(αx1) = αy1, additive if
f(x1 + x2) = f(x1) + f(x2) = y1 + y2
and therefore, linear if f(αx1 + βx2) = f(αx1) + f(βx2) = αy1 + βy2
A system is time-invariant if the same response is given for the same input, regardless of when the input is presented. That is, if y(t) = f(x(t)), then f(x(t - t0)) = y(t - t0) for any constant time shift t0.
Many natural (biological) signal processing functions can be modeled as a sequence of LTI subsystems we refer to as filters. The signal is sent through a filter and looks different at the output. For example, a filter simulating the layer of photoreceptors in mammalian vision systems may include a logarithmic conversion of light intensity followed by a blurring effect due to interactions with nearby photoreceptors. This model would include a logarithmic filter followed by a Gaussian blurring filter:
If h(t) is the impulse response of the system, then, for a continuous-time input signal f(t), the output is
$y(t) = f(t) * h(t) = \int_{-\infty}^{\infty} f(\tau) h(t - \tau) \,d\tau$
and for a discrete-time input sequence f[n]
$y[n] = f[n] * h[n] = \sum_{k = -\infty}^{\infty} f[k] h[n - k]$
The symbol * is used to denote convolution, and δ denotes the Dirac delta function, both of which are discussed later. For the discrete-time case, time-invariance implies that if h[n] is the response to δ[n], then h[n-k] is the response to δ[n-k]. The impulse response (h[n] for discrete-time systems and h(t) for continuous-time systems) of a linear time-invariant (LTI) system completely characterizes that system. The output, y[n], of an LTI system is the sum of shifted impulse responses, each weighted by the corresponding input value, x[k]:
$y[n] = \sum_{k = -\infty}^{\infty} x[k] h[n - k]$
For a continuous-time signal, s(t), the Fourier Transform of s(t), represented as S(ω), shows the frequency content of s(t) as a function of radian frequency ω. The Inverse Fourier Transform of S(ω) gives back the original time-domain representation. The two functions, both representing the same signal from different domain perspectives form a Fourier Transform pair, denoted as: s(t) <==> S(ω).
The response of a system to an impulse is called the impulse response and is denoted as h(t). The frequency representation of the impulse response, H(ω), is called the frequency response. Given an input signal to a known LTI system, the output can be determined:
in the time domain as the convolution of input and LTI impulse response, or
in the frequency domain as the product of the input spectrum and the LTI frequency response.
To put it another way, given an input x(t), whose spectrum is X(ω), to an LTI system whose impulse response is h(t) (and whose spectrum is H(ω), the frequency response), the output, y(t), and its spectrum, Y(ω), are given as
Time domain: y(t) = x(t) * h(t)
Frequency domain: Y(ω) = X(ω) H(ω)
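This equivalence can be checked numerically. The sketch below, in Python rather than the text's MATLAB, convolves two short sequences directly and then repeats the computation in the frequency domain with a naive DFT; the sequences are zero-padded so that the circular convolution implied by the DFT matches the linear convolution. The function names here are ours, not from the text.

```python
import cmath

def conv(x, h):
    # Direct evaluation of y[n] = sum_k x[k] h[n-k]
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1, 2, 3]
h = [1, 1, 1, 1, 1, 1]
N = len(x) + len(h) - 1            # zero-pad so circular = linear convolution
X = dft(x + [0] * (N - len(x)))
H = dft(h + [0] * (N - len(h)))
y_freq = [v.real for v in idft([a * b for a, b in zip(X, H)])]
y_time = conv(x, h)
print([round(v) for v in y_freq])  # → [1, 3, 6, 6, 6, 6, 5, 3], same as y_time
```

Up to rounding error, Y(ω) = X(ω)H(ω) in the frequency domain reproduces the time-domain convolution exactly.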
2.1.2 Continuous-time convolution
For continuous functions, f(t) and g(t), the convolution is defined by
$f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau) g(t - \tau) \,d\tau$
and has these properties:
f * g = g * f commutative property
f * (g1 + g2) = f *g1 + f *g2 distributive property
(f * g) * v = f * (g * v) associative property
2.1.3 Continuous-time unit impulse function
The unit impulse function or Dirac delta function is defined as
δ(t) = ∞, t = 0     δ(t-a) = ∞, t = a
= 0, t ≠ 0     = 0, t ≠ a
and
$\int_{-\infty}^{\infty} \delta(t) \,dt =\int_{-\infty}^{\infty} \delta(t - a) \,dt = 1$
since δ(t-a) has infinite strength at t = a, has zero duration at t = a, and has unity area.
A continuous-time signal x(t) can be replicated by the convolution with the unit impulse function:
$x(t) = \int_{-\infty}^{\infty} x(\tau) \delta (t - \tau) \,d\tau$
2.1.4 Discrete-time unit pulse function
A continuous-time signal can be discretized into a sequence, x[0], x[1], x[2] ..., by collecting the values of this integral at the specific times t0, t0+T, t0+2T, t0+3T, etc. so that
x[0] = x(t0)
x[1] = x(t0+T)
x[2] = x(t0+2T)
: :
x[n] = x(t0+nT)
Brackets, [ ], are used to denote discrete-time sequences while parentheses, ( ), are used to denote continuous-time functions. In this example, the sequence x[0], x[1], ...x[n]... is a discrete-time representation of the continuous-time function x(t).
The discrete-time version of the unit impulse sequence is called the unit sample sequence or the unit pulse function. The unit pulse function is defined as
δ[n] = 1, n = 0,
0, n ≠ 0
To properly relate continuous-time functions and discrete-time sequences, it should be noted that the unit pulse function must have unity area. To maintain this property for a pulse of duration T, the amplitude must be defined as 1/T. The definition above presumes T = 1.
A discrete-time signal x[n] can be replicated by the convolution with the unit pulse function:
$x[n] = x[n] * \delta [n] = \sum_{k = -\infty}^{\infty} x[k] \delta [n - k]$
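This replication property is easy to verify numerically. The short Python sketch below (an illustration alongside the text's MATLAB; the `conv` helper is ours) convolves a sequence with the unit pulse and with a delayed unit pulse:

```python
def conv(x, h):
    # Direct evaluation of y[n] = sum_k x[k] h[n-k]
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

delta = [1]                  # unit pulse: delta[0] = 1, zero elsewhere
x = [4, 7, -2, 5]
print(conv(x, delta))        # → [4, 7, -2, 5]: x[n] * δ[n] = x[n]
print(conv(x, [0, 1]))       # → [0, 4, 7, -2, 5]: δ[n-1] delays x[n] by one sample
```

Convolution with δ[n] returns the sequence unchanged, and convolution with δ[n-k] returns it delayed by k samples, exactly as time-invariance requires.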
2.1.5 One-dimensional discrete-time convolution
The discrete-time convolution operation is defined as
$f[n] * g[n] = \sum_{k = -\infty}^{\infty} f[k] g[n - k]$
Equation 2.1-1
Example 2.1-1
Use Equation 2.1-1 to calculate y[n] = f[n] * g[n] where f[n] = {1 2 3} and g[n] = {1 1 1 1 1 1}.
Solution:
f[n] = {1 2 3} implies f[0] = 1, f[1] = 2, f[2] = 3 and
g[n] = {1 1 1 1 1 1} implies g[0] = g[1] = g[2] = g[3] = g[4] = g[5] = 1
For n = 0, Equation 2.1-1 gives the summation y[0] = …f[−1]g[1] + f[0]g[0] + f[1]g[−1] + …
but all values of f[n] and g[n] are zero for n < 0, so y[0] = f[0]g[0] = 1.
$y[0] = \sum_{k = 0}^{5} f[k] g[0 - k] = f[0]g[0] = 1$
$y[1] = \sum_{k = 0}^{5} f[k] g[1 - k] = f[0]g[1] + f[1]g[0] = 1 + 2 = 3$
$y[2] = \sum_{k = 0}^{5} f[k] g[2 - k] = f[0]g[2] + f[1]g[1] + f[2]g[0] = 1 + 2 + 3 = 6$
$y[3] = \sum_{k = 0}^{5} f[k] g[3 - k] = f[0]g[3] + f[1]g[2] + f[2]g[1] = 1 + 2 + 3 = 6$
$y[4] = \sum_{k = 0}^{5} f[k] g[4 - k] = f[0]g[4] + f[1]g[3] + f[2]g[2] = 1 + 2 + 3 = 6$
$y[5] = \sum_{k = 0}^{5} f[k] g[5 - k] = f[0]g[5] + f[1]g[4] + f[2]g[3] = 1 + 2 + 3 = 6$
$y[6] = \sum_{k = 0}^{5} f[k] g[6 - k] = f[1]g[5] + f[2]g[4] = 2 + 3 = 5$
$y[7] = \sum_{k = 0}^{5} f[k] g[7 - k] = f[2]g[5] = 3$
where products that are zero for specific values of k are omitted. The result is y[n] = { 1 3 6 6 6 6 5 3 }.
We could also visualize the answer graphically by reversing the sequence g[n] and placing it below f[n] and offsetting by the value of n in Equation 2.1-1. The first three values determined using this graphical approach are shown here:
n =0:
k: -5 -4 -3 -2 -1 0 1 2 3 4 5
f[k]: 0 0 0 0 0 1 2 3 0 0 0
g[n-k]: 1 1 1 1 1 1 0 0 0 0 0
$y[0]=\sum_{k=0}^{5} f[k] g[0-k]=f[0] g[0]=1$
n =1:
k: -5 -4 -3 -2 -1 0 1 2 3 4 5
f[k]: 0 0 0 0 0 1 2 3 0 0 0
g[n-k]: 0 1 1 1 1 1 1 0 0 0 0
$y[1]=\sum_{k=0}^{5} f[k] g[1-k]=f[0] g[1] + f[1] g[0]=1 + 2 = 3$
n =2:
k: -5 -4 -3 -2 -1 0 1 2 3 4 5
f[k]: 0 0 0 0 0 1 2 3 0 0 0
g[n-k]: 0 0 1 1 1 1 1 1 0 0 0
$y[2] = \sum_{k = 0}^{5} f[k] g[2 - k] = f[0]g[2] + f[1]g[1] + f[2]g[0] = 1 + 2 + 3 = 6$
And so on.
Example 2.1-2
Solve the same convolution (Example 2.1-1) using Equation 2.1-1 but reverse the roles of f[n] and g[n].
Solution:
Due to the commutative property the result should be the same. Using Equation 2.1-1, the order could also be reversed, as y = g * f, giving:
$y[0]=\sum_{k=0}^{5} g[k] f[0-k]=g[0] f[0]=1$
$y[1]=\sum_{k=0}^{5} g[k] f[1-k]=g[0] f[1]+g[1] f[0]=2+1=3$
$y[2]=\sum_{k=0}^{5} g[k] f[2-k]=g[0] f[2]+g[1] f[1]+g[2] f[0]=3+2+1=6$
$y[3]=\sum_{k=0}^{5} g[k] f[3-k]=g[1] f[2]+g[2] f[1]+g[3] f[0]=3+2+1=6$
$y[4]=\sum_{k=0}^{5} g[k] f[4-k]=g[2] f[2]+g[3] f[1]+g[4] f[0]=3+2+1=6$
$y[5]=\sum_{k=0}^{5} g[k] f[5-k]=g[3] f[2]+g[4] f[1]+g[5] f[0]=3+2+1=6$
$y[6]=\sum_{k=0}^{5} g[k] f[6-k]=g[4] f[2]+g[5] f[1]=3+2=5$
$y[7]=\sum_{k=0}^{5} g[k] f[7-k]=g[5] f[2]=3$
so that once again y[n] = { 1 3 6 6 6 6 5 3}.
As before we could also visualize the answer graphically, but this time by reversing the sequence f[n] and placing it below g[n] and offsetting by the value of n in Equation 2.1-1. The first three values determined using this graphical approach are shown here:
n =0:
k: –2 –1 0 1 2 3 4 5 6
g[k]: 0 0 1 1 1 1 1 1 0
f[n-k]: 3 2 1 0 0 0 0 0 0
$y[0]=\sum_{k=0}^{5} g[k] f[0-k]=g[0] f[0]=1$
n =1:
k: –2 –1 0 1 2 3 4 5 6
g[k]: 0 0 1 1 1 1 1 1 0
f[n-k]: 0 3 2 1 0 0 0 0 0
$y[1]=\sum_{k=0}^{5} g[k] f[1-k]=g[0] f[1]+g[1] f[0]=2+1=3$
n =2:
k: –2 –1 0 1 2 3 4 5 6
g[k]: 0 0 1 1 1 1 1 1 0
f[n-k]: 0 0 3 2 1 0 0 0 0
$y[2]=\sum_{k=0}^{5} g[k] f[2-k]=g[0] f[2]+g[1] f[1]+g[2] f[0]=3+2+1=6$
Convolutions can be computed quickly in MATLAB, which stands for "matrix laboratory", a software product of MathWorks, Inc. [MatLab]. The sequence y = f * g {implying y[n] = f[n]*g[n]} as defined in Example 2.1-1 is computed in MATLAB as
>> f = [1 2 3];
>> g = [1 1 1 1 1 1];
>> y = conv(f,g)
y = 1 3 6 6 6 6 5 3
Note that the resulting sequence is longer than both input sequences; this is a natural consequence of the convolution operation. Also, MATLAB does not use a zeroth element; the first element of y is referenced in MATLAB as y(1), which holds the same value as y[0] in Equation 2.1-1. Given these considerations, the simulation confirms the hand-computed results in Example 2.1-1.
Exercise 2.1-1
Use Equation 2.1-1 to calculate y[n]= x[n] * h[n] for x[n] = {1 2 2} and h[n] = {1 3}. Check your answer using graphical convolution.
Answer: y[n] = {1 5 8 6}
Exercise 2.1-2
Use Equation 2.1-1 to calculate y[n]= x[n] * h[n] for the given sequences. Check your answer using graphical convolution.
1. x[n] = {1 1 1 1 1} and h[n] = { 0.25 0.5 0.25}.
2. x[n] = {1 1 1 1 1} and h[n] = { 0.25 -0.5 0.25}.
Answers:
1. y[n] = {0.25 0.75 1.0 1.0 1.0 0.75 0.25}
2. y[n] = {0.25 -0.25 0 0 0 -0.25 0.25}
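These exercises can also be spot-checked with a different viewpoint: the convolution of two finite sequences equals the coefficient list of the product of the polynomials they define. A minimal Python sketch (an illustration, not part of the text's MATLAB workflow; `polymul` is our name):

```python
def polymul(a, b):
    # Coefficient list of the polynomial product = discrete convolution a * b
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

print(polymul([1, 2, 2], [1, 3]))                   # Exercise 2.1-1 → [1, 5, 8, 6]
print(polymul([1, 1, 1, 1, 1], [0.25, 0.5, 0.25]))  # Exercise 2.1-2, part 1
```

The second call returns [0.25, 0.75, 1.0, 1.0, 1.0, 0.75, 0.25], the smoothing-filter answer above; the polynomial view also explains why the output length is len(a) + len(b) - 1.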
2.1.6 Two-dimensional discrete-time convolution
In signal processing applications, one sequence may represent a filter and another a given input signal. Convolving the two gives an output that is a filtered version of the given input signal. Image processing is two-dimensional (2D) signal processing, where one 2D signal (image) represents a filter and the other the input image. For biomimetic applications, the filter could represent a model of a natural phenomenon as it affects the raw imagery. Assuming 2D sequences f[x, y], of size M×N, and g[x, y], the 2D convolution operation is given as:
$f[x, y] * g[x, y]=\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f[m, n] g[x-m, y-n]$
Example 2.1-3
Using MATLAB, compute the 2D convolution y = f * g given the 2D variables f and g defined here:
f = 1 1 g = 2 2 2
1 1 2 2 2
2 2 2
Solution:
>> f = [1 1; 1 1];
>> g = [2 2 2; 2 2 2; 2 2 2];
>> y = conv2(f,g)
y = 2 4 4 2
4 8 8 4
4 8 8 4
2 4 4 2
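The same result falls out of the 2D summation directly. The Python sketch below (our own implementation of the double sum, shown alongside the MATLAB `conv2` call; the function name is ours) accumulates f[m,n]·g[p,q] into output position [m+p, n+q]:

```python
def conv2_full(f, g):
    # 2D analogue of Equation 2.1-1: each f[m][n]*g[p][q] lands at y[m+p][n+q]
    M, N = len(f), len(f[0])
    P, Q = len(g), len(g[0])
    y = [[0] * (N + Q - 1) for _ in range(M + P - 1)]
    for m in range(M):
        for n in range(N):
            for p in range(P):
                for q in range(Q):
                    y[m + p][n + q] += f[m][n] * g[p][q]
    return y

f = [[1, 1], [1, 1]]
g = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]
for row in conv2_full(f, g):
    print(row)   # rows [2,4,4,2], [4,8,8,4], [4,8,8,4], [2,4,4,2]
```

A 2×2 input convolved with a 3×3 filter yields a 4×4 output, the 2D version of the length growth noted for 1D convolution.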
Notice that for both 1D and 2D convolutions the result tends to grow. Sometimes it is necessary to ‘crop’ out the internal pixels so that the filtered version is the same size as the original. For example, defining f below as a 5x5 2D unit pulse and performing the convolution with the 3x3 2D variable g results in a 7x7 result:
>> f = zeros(5);
>> f(3,3) = 1;
f = 0 0 0 0 0
0 0 0 0 0
0 0 1 0 0
0 0 0 0 0
0 0 0 0 0
>> g = [1 2 3; 4 5 6; 7 8 9];
g = 1 2 3
4 5 6
7 8 9
>> y=conv2(f,g)
y = 0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 1 2 3 0 0
0 0 4 5 6 0 0
0 0 7 8 9 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
To visualize the computation, flip g both vertically and horizontally:
g* = 9 8 7
6 5 4
3 2 1
The values for y are found by placing g* over each element of f, and then performing a dot product; that is, multiply element-for-element, and then add the products (sum of products). One way to crop out the middle 5x5 is to manually redefine y:
>> y = y(2:6,2:6);  % redefine y as rows 2 through 6 and columns 2 through 6
This command crops out the internal 5x5 image of the previous convolution so that the filtered version is now the same size as the original image, f.
y = 0 0 0 0 0
0 1 2 3 0
0 4 5 6 0
0 7 8 9 0
0 0 0 0 0
A better alternative for cropping out the central pixels is to pass the 'same' shape argument to the conv2 MATLAB function when determining y:
>> y=conv2(f,g,'same')
y = 0 0 0 0 0
0 1 2 3 0
0 4 5 6 0
0 7 8 9 0
0 0 0 0 0
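The index bookkeeping behind the crop is worth making explicit, since MATLAB is 1-based and end-inclusive while many other languages are 0-based and end-exclusive. The Python sketch below hard-codes the full 7x7 result from above and extracts the same central window that y(2:6,2:6) selects in MATLAB:

```python
# Full 7x7 output of conv2(f, g) from the text: g centered at the pulse location
full = [[0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 2, 3, 0, 0],
        [0, 0, 4, 5, 6, 0, 0],
        [0, 0, 7, 8, 9, 0, 0],
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0]]

# MATLAB's y(2:6,2:6) is 1-based and end-inclusive; the equivalent 0-based,
# end-exclusive Python slice for the central 5x5 window is [1:6].
same = [row[1:6] for row in full[1:6]]
for row in same:
    print(row)
```

In general, for a K×K filter with K odd, the 'same'-sized window starts (K-1)/2 rows and columns into the full result, which is exactly what the slice above encodes for K = 3.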
Exercise 2.1-3
Given the 2D filter f and image x, give the output filtered image y = f * x.
f = -2 0 2 x = 0 0 0
-1 0 1 0 1 1
0 1 0 0 0 0
Give both the cropped and uncropped representations of the output.
Answers:
Uncropped: y = 0 0 0 0 0 Cropped: y = -2 -2 2
0 -2 -2 2 2 -1 -1 1
0 -1 -1 1 1 0 1 1
0 0 1 1 0
0 0 0 0 0
2.2 Sensory Systems and Concepts
One of the difficulties in exploiting natural neuronal structure and function for engineering applications is the lack of understanding of the relationship between microscopic physiology and macroscopic behavior. Neuroscience is the study of neurons at the microscopic level, while psychophysics is the study of correlations between specific physical stimuli and the sensations that result. Neuroscience is somewhat focused on the chemistry of the neuron, and psychophysics is more focused on the macroscopic behavior of the complete organism. The story of how and why neurons are connected the way they are continues to unfold and will continue for centuries to come.
2.2.1 Massive Interconnections
Neurons are highly interconnected to form the information channels in sensory systems. For example, the human brain contains about 10^10 to 10^12 neurons, each making up to 10^3 to 10^4 connections to other neurons. Small groups of neurons are called ganglia, and each ganglion typically controls specific behaviors of an animal. Many invertebrates have large neurons and small ganglia, which gives researchers an opportunity to investigate neuronal signaling and primitive neuronal networks.
2.2.2 Hebbian Learning
Natural neuronal connections are often strengthened with continued use, known as Hebbian Learning. The result is an adaptation of the network to the most frequent signal sequences from external stimuli. Artificial neural networks (ANN) are now commonly used to solve computational problems when direct analytical methods are difficult or impossible. These networks are inspired by the natural neuronal paradigm, and many have taken on variations that diverge from these original examples. This is quite acceptable since the typical goal is to solve some engineering or computational problem, not necessarily to mimic the natural paradigm.
2.2.3 Physical Types of Natural Sensors
There are different ways to categorize the natural neuronal systems designs that are available for engineering exploitation. The method chosen here is based on the physics of the stimulus. The ones given the most attention are the ones most commonly found in nature:
- Photo-sensory systems, stimulated by photons
-- vision systems in vertebrates and invertebrates
- Mechano-sensory systems, stimulated by physical motion in the environment
-- touch systems in vertebrates and invertebrates
-- auditory systems in vertebrates and invertebrates
-- kinesthesia, which is knowing the relative positions of body parts
- Chemo-sensory systems, stimulated by changes in chemical content of stimuli
-- olfactory systems providing the sense of smell
-- gustation systems providing the sense of taste
Other physical senses occasionally found in biology include those sensitive to heat, infra-red radiation, polarized light, electric fields, and magnetic fields.
There are three basic types of stimulus reception in biological sensory systems:
- Exteroception is the receiving of signals from outside the organism, such as photons of light for the vision system, sound waves for the auditory system, and chemical traces for the olfactory system. Sensory systems in this group are the subject of this text and most of the bio-inspired sensory system research that has been done.
- Proprioception is the receiving of signals that relate position of body segments to one another and the position of the body in space, which involves kinesthesia mentioned earlier.
- Interoception is the receiving of signals from conditions inside the organism, such as blood glucose level and blood pressure level.
There are three basic maps of sensory receptive fields to portions of the brain:
- Somatotopic Map is a map of the body surface in the somatosensory cortex.
- Retinotopic Map is a map of the visual field (as focused onto the retina) in the primary visual cortex in the occipital lobe of the brain.
- Tonotopic Map is a map of the basilar membrane in the primary auditory cortex in the temporal lobe of the brain.
The amount of brain surface area dedicated to various regions of reception varies dramatically, as certain reception areas are more important and require more dedicated processing. For example, the allocation in the brain on the somatotopic map for the sensation of touch in the index finger is much larger than the same relative skin surface area of the back. Another interesting point is the nearest-neighbor receptor mapping is generally preserved in the cortex. That is, adjacent receptors in the peripheral sensory system tend to stimulate adjacent neurons in the cortex.
2.3 Fundamentals of neuronal processing
This section reviews basic neuronal topics that are prevalent in biological sensory systems such as the vision system, the auditory system, and the olfactory system. Although biological neurons are much slower than modern transistor electronics, the fundamental principles of neuronal processing exploit natural logarithmic behavior of charge distributions and transport. The pn junction exhibits this natural relationship, but we tend to take pairs of pn junctions (transistors) and create a binary switch (digital bit). If the exponential v-i relationship of a pn junction could be used at today’s computer clock speeds, there could be many orders of magnitude improvement in computational performance and power consumption. In addition, there is still much to be learned about the interconnection strategies found in natural neuronal networks.
2.3.1 Adaptation and Development
Although biological systems can be studied as existing systems that solve processing problems, it should be noted that these systems are constantly developing and adapting. From conception to death every known biological system is continually maturing, never reaching an unchanging physical state. The neuronal system of a mature adult is relatively stable, thus representing some level of neuronal optimization due to environmental adaptation.
Adaptation can be immediate or long term. An example of immediate adaptation is the response of the iris of the eye to light levels, controlling the amount of photonic flux entering the pupil. A short-term adaptation, called habituation and sensitization, is demonstrated in the marine snail, Aplysia. The gill is withdrawn beneath a mantle in defense when the siphon, attached to the mantle, is stimulated. The reflex magnitude decreases as the siphon is repeatedly stimulated, resulting in the habituation, or desensitization, of the response to the experimental environment. The response can subsequently be sensitized by stimulating other parts of the body. Through training, these reflex conditions can be made to last for days, indicating a primitive form of memory and learning [Dowl87].
In higher life forms these simple neuronal adaptations combine with massive interconnections to provide more complex adaptation concepts. For example, it has been demonstrated that detection of spatial harmonics and identification of complicated sinusoidal grating patterns depends on adaptation to the harmonics and harmonic patterns. The detection threshold increases after adaptation to the harmonic, and the pattern identification threshold increases after adaptation to the patterns [Vass95].
Training during one’s lifetime is an example of long-term adaptation. Training results in neurons being connected or strengthened, which is called coincidence learning or Hebb learning [Hech90]. A more long-term adaptation is genetic coding, passing adaptation information from one generation to the next.
2.3.2 Sense Organs and Adaptation
The following italicized text concerning the sense organs in crabs is quoted from [Warner77] (non-italicized text is additional commentary):
“Sense organs function at the cellular level by converting the stimulus into a change in the electrical potential across the receptor cell membrane. This receptor potential, if sufficiently large, results in the initiation of nerve impulses (action potentials) which are transmitted along nerve to the CNS” (Central Nervous System).
In some cases, however, such as primate vision, there are additional layers of cells between the receptors and action-potential transmission axons. In these instances, graded preprocessing functions occur before the information is encoded into action potentials. But for the simpler sensory system designs, the action potential...
“...frequency is a measure of the strength of the stimulus...Each receptor cell is specialized to convert a particular type of stimulus (light, mechanical deformation, etc.) and each has a particular threshold below which the stimulus is insufficient to trigger nerve impulses. Maintained stimulation generally results in the threshold of a receptor being raised (i.e. the receptor becomes less sensitive). Thus, many receptor cells, described as rapidly adapting, respond with a short burst of impulses only at the initiation of stimulation. Others which respond over longer periods of maintained stimulation are referred to as slowly adapting or, in extreme cases, non-adapting. Single sense organs may be composed of several receptor cells each with a different rate of adaptation.”
To illustrate slow and rapid adaptation, consider two neurons whose threshold for action-potential generation is –55 mV. As ions enter the neuron from input dendrites, the membrane potential increases from a resting potential (about –70 mV) to the –55 mV threshold. An action potential spike then causes positive ions to be discharged during the spike, returning the membrane potential to the resting potential. Then the process starts over: incoming ions build up the membrane potential to the threshold for the initiation of the next action potential. For rapid adaptation, the threshold might rise significantly after each spike generation and decay rapidly back to –55 mV when the input stimulus is removed. For slow adaptation, the threshold might rise nominally after each spike generation and decay more slowly back to –55 mV once the stimulus is removed.
Figure 2.3.2-1 shows the results of a neuronal adaptation model that accounts for increases in action-potential threshold with firing activity as well as a return to a nominal resting threshold in the absence of input stimulus. The input rectangular waveform is amplified to show when the stimulus is on and when it is off. As the neuronal membrane potential increases with input stimulus, there is a constant leakage that tends to bring the potential back to its resting state (about –70mV). Similarly, there is a constant leakage in the threshold that tends to bring it back to its resting state (about –55 mV). As the neuron adapts to the stimulus, the threshold level is raised, which subsequently reduces the action potential (spike) frequency.
Initially, the spike frequency is high; as the neuron adapts, it is reduced, as seen in the figure. During slow adaptation, the threshold is not increased very much after each spike so that it takes longer for the frequency to reach a steady-state low frequency. During fast adaptation, the threshold is increased significantly after each spike so that the neuron quickly reaches a steady-state low frequency. An alternate mechanism for rapid adaptation is for the neuron to return to a potential higher than the normal resting potential after an action potential. Instead of returning to –70 mV, it may only return to, say, –60 mV. Then the neuron has a much smaller potential increase to obtain before reaching the next threshold level for action potential firing.
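The qualitative behavior described above can be reproduced with a very small simulation. The Python sketch below is a minimal leaky integrate-and-fire model with an adapting threshold; all constants (drive, leak rates, threshold jump sizes) are illustrative choices of ours, not parameters taken from Figure 2.3.2-1.

```python
# Minimal leaky integrate-and-fire sketch with an adapting threshold.
# All constants are illustrative assumptions, not values from the text.
def simulate(threshold_jump, steps=2000):
    v, v_rest = -70.0, -70.0          # membrane potential (mV)
    th, th_rest = -55.0, -55.0        # adapting threshold (mV)
    spikes = []
    for t in range(steps):
        v += 0.5                      # constant input drive while stimulus is on
        v += 0.02 * (v_rest - v)      # leakage toward the resting potential
        th += 0.005 * (th_rest - th)  # threshold leaks back toward -55 mV
        if v >= th:
            spikes.append(t)
            v = v_rest                # membrane resets after the spike
            th += threshold_jump      # adaptation: each spike raises the threshold
    return spikes

slow = simulate(threshold_jump=1.0)   # slow adaptation: small threshold increments
fast = simulate(threshold_jump=5.0)   # fast adaptation: large threshold increments
# Spike rate drops as the neuron adapts; fast adaptation reaches a low rate sooner
print(len(slow), len(fast))
```

Running this shows the two signatures discussed above: inter-spike intervals lengthen over time as the threshold rises, and the large-increment (fast-adapting) neuron settles to a low steady-state firing rate much sooner than the small-increment one.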
2.3.3 Ionic Balance of Drift and Diffusion
A fundamental principle of semiconductor physics is the balance of charge-carrier drift and diffusion currents in the depletion region of a pn junction. Both materials are characterized by a high concentration of mobile carriers (holes for p-type, electrons for n-type) within an electrostatically neutral volume. When the materials are brought together, electrons diffuse out of the higher-concentration n-type region, leaving behind positively charged lattice ions, and recombine in the p-type material, forming negatively charged lattice ions there. As this process continues, an electric field from the positive lattice ions (n-type) to the negative lattice ions (p-type) grows in strength. This field opposes further diffusion, developing a drift current that counters the diffusion current. Equilibrium is reached when the diffusion current equals the drift current [Horen96].
Biological cells are also held in equilibrium in part by a balance of drift and diffusion currents. However, the charge carriers are primarily potassium, K+, sodium, Na+, and chloride, Cl-, ions, which are not as mobile as holes and electrons. The electronic currents are defined in terms of diffusion and mobility device constants, functions primarily of doping and geometry [Horen96]. Biological currents are functions of cell membrane permeabilities, which are functions of time, membrane potential, and ionic concentrations [MacG91].
In the case of potassium and sodium, there is a separate organic mechanism called the Na-K Pump that keeps ionic concentrations stable inside the neuron. The Na-K Pump uses metabolic energy supplied by the stored biological energy in the organism. For chloride, however, the interior and exterior concentrations are maintained close together, balanced by drift and diffusion.
2.3.4 Nernst and Goldman Equations
An electric potential, or voltage, is established across a membrane when there is an unequal concentration of ions in the two regions. The Nernst equation (1908) showing this relationship is given by
$E=\frac{R T}{n F} \ln \frac{[C]_{o}}{[C]_{i}}=\frac{k T}{z q} \ln \frac{[C]_{o}}{[C]_{i}}$
Nernst Equation
$\cong(26 m V) \ln \frac{[C]_{o}}{[C]_{i}}, \text { for } K^{+}, N a^{+}$
$\cong-(26 m V) \ln \frac{[C]_{o}}{[C]_{i}} \cong(26 m V) \ln \frac{[C]_{i}}{[C]_{o}}, \text { for } C l^{-}$
where R is the universal gas constant, T is the absolute temperature, F is the Faraday constant (electrical charge per gram-equivalent ion), n (also written z) is the valence of the ion, k is Boltzmann's constant, q is the unitary electric charge, and [C]o and [C]i are the ion concentrations outside and inside the cell. Note that $\frac{R T}{F}=\frac{k T}{q} \approx 26 mV$ at room temperature. The latter term is used frequently in electronic circuits.
Bernstein (1912) established the significance of ionic fluxes across neuronal membranes to neuroelectric signaling. Building on this concept, the Nernst equation, and the fundamental physics of ionic media contributed by Planck and Einstein, Goldman (1958) contributed the primary model for the resting potential in neurons as
$E=\frac{k T}{q} \ln \left(\frac{P_{K+}\left[K^{+}\right]_{o}+P_{N a+}\left[N a^{+}\right]_{o}+P_{C l-}\left[C l^{-}\right]_{i}}{P_{K+}\left[K^{+}\right]_{i}+P_{N a+}\left[N a^{+}\right]_{i}+P_{C l-}\left[C l^{-}\right]_{o}}\right)$
Goldman Equation
where T is absolute temperature, k is Boltzmann's constant, q is unitary electric charge, the P's are permeabilities, the [ ]o's are concentrations outside the cell, and the [ ]i's are concentrations inside the cell. If the permeabilities are constant, the Goldman equation gives good steady-state results [MacG91].
Example 2.3.4-1
Find the Nernst potential between the interior and exterior of a neuron due to each ion if their concentrations are as follows:
Inside Outside
K+ 360 mM 20 mM
Na+ 45 mM 450 mM
Cl- 50 mM 600 mM
(mM stands for millimolar; the molar, M, is a unit of concentration in moles per liter)
Solution:
Potentials due to concentration differences alone are found from the Nernst equation, keeping in mind that the ratio of outer-to-inner ion concentration is inverted for negatively charged ions:
$E_{K+}=(26 m V) \ln \frac{[C]_{o}}{[C]_{i}}=0.026 \ln \frac{20}{360}=-75.1 \mathrm{mV}$
$E_{\mathrm{Na}+}=(26 \mathrm{mV}) \ln \frac{[C]_{o}}{[C]_{i}}=0.026 \ln \frac{450}{45}=59.9 \mathrm{mV}$
$E_{C l-}=(26 m V) \ln \frac{[C]_{i}}{[C]_{0}}=0.026 \ln \frac{50}{600}=-64.6 \mathrm{mV}$
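The three results can be reproduced with a one-line function. The Python sketch below (our illustration; the function name and signature are ours) folds the sign reversal for anions into the valence z, using the kT/q ≈ 26 mV room-temperature approximation from the text:

```python
from math import log

def nernst(c_out, c_in, z):
    # E = (kT/zq) ln([C]o/[C]i), with kT/q ~ 26 mV at room temperature;
    # z = -1 for anions automatically flips the concentration ratio.
    return 26.0 / z * log(c_out / c_in)   # result in mV

print(round(nernst(20, 360, +1), 1))   # K+  → -75.1 mV
print(round(nernst(450, 45, +1), 1))   # Na+ → 59.9 mV
print(round(nernst(600, 50, -1), 1))   # Cl- → -64.6 mV
```

Carrying the valence z explicitly avoids having to remember the "reverse the ratio for negative ions" rule as a special case.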
Example 2.3.4-2
For the given ion concentration determine the membrane potential assuming the following ratio of ion permeabilities: PK+ : PNa+ : PCl- = 1.0 : 0.5 : 0.2
Inside Outside
K+ 320 mM 25 mM
Na+ 40 mM 420 mM
Cl- 60 mM 540 mM
Solution:
The relative permeabilities given means that PNa+ = 0.5PK+ and PCl- = 0.2PK+; using the Goldman Equation
$E_{m}=(0.026) \ln \left(\frac{P_{K+}\left[K^{+}\right]_{o}+P_{N a+}\left[N a^{+}\right]_{o}+P_{C l-}\left[C l^{-}\right]_{i}}{P_{K+}\left[K^{+}\right]_{i}+P_{N a+}\left[N a^{+}\right]_{i}+P_{C l-}\left[C l^{-}\right]_{o}}\right)$
$E_{m}=(0.026) \ln \left(\frac{P_{K+}\left[K^{+}\right]_{o}+0.5 P_{K+}\left[N a^{+}\right]_{o}+0.2 P_{K+}\left[C l^{-}\right]_{i}}{P_{K+}\left[K^{+}\right]_{i}+0.5 P_{K+}\left[N a^{+}\right]_{i}+0.2 P_{K+}\left[C l^{-}\right]_{o}}\right)$
$E_{m}=(0.026) \ln \left(\frac{\left[K^{+}\right]_{o}+0.5\left[N a^{+}\right]_{o}+0.2\left[C l^{-}\right]_{i}}{\left[K^{+}\right]_{i}+0.5\left[N a^{+}\right]_{i}+0.2\left[C l^{-}\right]_{o}}\right)$
$E_{m}=(0.026) \ln \left(\frac{25+0.5(420)+0.2(60)}{320+0.5(40)+0.2(540)}\right)=(0.026) \ln \frac{247}{448}=-15.48 m V$
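The Goldman computation is equally easy to script. The Python sketch below (our illustration; the function and argument names are ours) evaluates the equation for the concentrations and relative permeabilities of this example, returning the result in millivolts via the 26 mV thermal-voltage factor:

```python
from math import log

def goldman(p, K, Na, Cl):
    # p = (pK, pNa, pCl) relative permeabilities; each ion dict gives
    # 'o' (outside) and 'i' (inside) concentrations. Note Cl- contributes
    # its inside concentration to the numerator, per the Goldman equation.
    pk, pna, pcl = p
    num = pk * K['o'] + pna * Na['o'] + pcl * Cl['i']
    den = pk * K['i'] + pna * Na['i'] + pcl * Cl['o']
    return 26.0 * log(num / den)      # membrane potential in mV

Em = goldman((1.0, 0.5, 0.2),
             K={'o': 25, 'i': 320},
             Na={'o': 420, 'i': 40},
             Cl={'o': 540, 'i': 60})
print(round(Em, 2))   # → -15.48 mV
```

Because only permeability ratios matter, the common factor PK+ cancels, which is why the relative values (1.0, 0.5, 0.2) can be used directly.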
Example 2.3.4-3
Given the permeability ratio PK+ : PCl- = 1.0 : 0.5 and the ion concentrations
Inside Outside
K+ 400 mM 20 mM
Na+ 50 mM 440 mM
Cl- 52 mM 560 mM
determine the relative permeability k of PNa+ such that PK+ : PNa+ : PCl- = 1.0 : k : 0.5 and the resting membrane potential is +50 mV.
Solution:
The relative permeabilities given means that PNa+ = kPK+ and PCl- = 0.5PK+; using the Goldman Equation
$0.05=(0.026) \ln \left(\frac{P_{K+}\left[K^{+}\right]_{o}+P_{N a+}\left[N a^{+}\right]_{o}+P_{C l-}\left[C l^{-}\right]_{i}}{P_{K+}\left[K^{+}\right]_{i}+P_{N a+}\left[N a^{+}\right]_{i}+P_{C l-}\left[C l^{-}\right]_{o}}\right)$
$0.05=(0.026) \ln \left(\frac{P_{K+}\left[K^{+}\right]_{o}+k P_{K+}\left[N a^{+}\right]_{o}+0.5 P_{K+}\left[C l^{-}\right]_{i}}{P_{K+}\left[K^{+}\right]_{i}+k P_{K+}\left[N a^{+}\right]_{i}+0.5 P_{K+}\left[C l^{-}\right]_{o}}\right)$
$0.05=(0.026) \ln \left(\frac{\left[K^{+}\right]_{o}+k\left[N a^{+}\right]_{o}+0.5\left[C l^{-}\right]_{i}}{\left[K^{+}\right]_{i}+k\left[N a^{+}\right]_{i}+0.5\left[C l^{-}\right]_{o}}\right)$
$0.05=(0.026) \ln \left(\frac{20+k(440)+0.5(52)}{400+k(50)+0.5(560)}\right)$
$1.923=\ln \left(\frac{440 k+46}{50 k+680}\right)$
$e^{1.923}=6.84=\frac{440 k+46}{50 k+680}$
$6.84(50 k+680)=440 k+46==>\quad k=47$
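Because exponentiating the Goldman equation leaves an expression linear in k, the unknown permeability can be solved for in closed form rather than by trial. A short Python sketch of the algebra above (variable names are ours):

```python
from math import exp

# Solve 50 mV = 26 mV * ln((K_o + k*Na_o + 0.5*Cl_i)/(K_i + k*Na_i + 0.5*Cl_o))
# for k. Exponentiating both sides turns this into a linear equation in k.
r = exp(50.0 / 26.0)                    # required concentration ratio, e^1.923
num0 = 20 + 0.5 * 52                    # k-free part of the numerator (= 46)
den0 = 400 + 0.5 * 560                  # k-free part of the denominator (= 680)
k = (r * den0 - num0) / (440 - r * 50)  # from num0 + 440k = r*(den0 + 50k)
print(round(k))   # → 47
```

Keeping the full precision of e^(50/26) rather than the rounded 6.84 gives k ≈ 47.1, consistent with the hand calculation above.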
Exercise 2.3.4-1
Find the Nernst potential for each ion using the concentrations given in Examples 2.3.4-2 and 2.3.4-3.
Answers:
Example 2.3.4-2: EK = -66.3 mV, ENa = 61.1 mV, ECl = -57.1 mV
Example 2.3.4-3: EK = -77.9 mV, ENa = 56.5 mV, ECl = -61.8 mV
Exercise 2.3.4-2
1. Repeat Example 2.3.4-2 with permeability ratio PK+ : PNa+ : PCl- = 1.0 : 5.0 : 0.2
2. Repeat Example 2.3.4-2 with permeability ratio PK+ : PNa+ : PCl- = 1.0 : 0.05 : 0.2
Answers:
1. 31.8 mV
2. -52.1 mV
Exercise 2.3.4-3
Repeat Example 2.3.4-3 so that the resting membrane potential is -50 mV.
Answer:
k = 0.123
2.3.5 The Action Potential
In general, neuronal processing within localized areas occurs as graded processing, while information transmission over reasonable lengths occurs as a frequency of asynchronous pulses, called spikes. In graded processing, the potential of the cell and its output rise slowly as the input signal levels strengthen. When input signals cease, then the cell and its output slowly return to the resting membrane potential. In spike-train processing, the inputs to the cell cause the interior to rise in potential until a certain threshold is reached. Typically, the resting potential is about –70mV (with respect to the extracellular fluid) and the threshold for initiating a spike is about –55 mV. The profile of voltage per time during a spike is known as the action potential.
When signals are transmitted over long distances, the signal transmission process tends to be a series of action potentials whose occurrences increase in frequency as the input signal increases. Hodgkin and Huxley (1952) presented the original set of equations that describe the generation of a single action potential in the giant squid axon. The intracellular membrane potential typically builds until a threshold is met, which is around -55mV in the giant squid axon. At the threshold voltage, an action potential is generated. For subsequent action potential firings, however, the threshold value changes. Describing these threshold variations requires knowledge of the molecular processes controlling the conductance channels that trigger action potentials. These processes are not yet understood well enough to describe the electrical signal behavior quantitatively [MacG91].
The general shape of the action potential is caused primarily by significant increases in PK+ and PNa+, where the increase in PK+ is smoother and more gradual than that in PNa+. Both return close to their original values: PNa+ within about 1 ms and PK+ within about 2 ms (see Figure 2.3.5-1). Both are positive ions, but [K]i > [K]o and [Na]i < [Na]o, so their effects on Em are opposing [Dowl92, Kand81]. If PNa+ increases, more positive ions will flow from the outside to the inside, raising the potential between the inside and outside, denoted as Em. However, if PK+ increases, more positive ions will flow from the inside to the outside, lowering the potential between the inside and outside.
The action potential sequence is therefore something like this:
Resting potential: Em ≈ –70 mV
Cell receives input: Em increases to about –55 mV
Action potential initiated: Sharp increase in PNa+, further increasing Em to about +50 mV
More gradual increase in PK+ as PNa+ decreases, reducing Em
PNa+ returns to its resting value while PK+ is still high; Em drops to about –80 mV (overshoot) as PK+ decreases
Ion concentrations and permeabilities return to the resting state, Em ≈ –70 mV
The stronger the input signal to the cell is, the more frequent the action potentials. If the input is present but very weak, then Em may settle somewhere between –70 mV and –55 mV with no action potentials. An analogy might be a leaky cup being filled from a faucet: when filled, it is emptied, simulating the action potential. If the incoming water flow is not sufficient, the cup never fills, and an action potential is never generated.
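The leaky-cup analogy corresponds to a leaky integrate-and-fire model. A minimal sketch, with illustrative thresholds and a leak rate of my choosing (not from the text):

```python
def leaky_integrate_and_fire(inputs, v_rest=-70.0, v_thresh=-55.0,
                             v_peak=50.0, leak=0.1):
    """Toy spike generator: membrane voltage (mV) integrates the input,
    leaks back toward rest, and fires when it crosses threshold."""
    v, spikes = v_rest, []
    for i in inputs:
        v += i - leak * (v - v_rest)   # charge in, leak back toward rest
        if v >= v_thresh:              # threshold crossed: emit a spike
            spikes.append(v_peak)
            v = v_rest                 # reset: the cup is emptied
        else:
            spikes.append(0.0)
    return spikes

# A weak steady input settles below threshold and never fires;
# a stronger input produces a regular spike train.
weak = leaky_integrate_and_fire([1.0] * 50)
strong = leaky_integrate_and_fire([4.0] * 50)
print(sum(1 for s in weak if s), sum(1 for s in strong if s))  # 0 10
```

With the weak input, v settles at v_rest + input/leak = –60 mV, between –70 mV and –55 mV, exactly the sub-threshold case described above.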
2.3.6 Axonal Signal Transmission
Neurons receive inputs through branched processes called dendrites and transmit (output) signals through their axons. Some neurons have no axons and serve to mediate signals by allowing ionic charges to be shared between adjacent neurons. Neurons transmitting action potentials typically have long, conductive axons for signal transmission. Figure 2.3.6-1 shows a lossy transmission-line circuit that simulates the charge-transmission behavior of an axon with membrane capacitance, CM, membrane resistance, RM, measured in Ω-cm (longer or wider-diameter axons have more surface area and thus less membrane resistance), and axonal resistance, RA, measured in Ω/cm (each unit length can be thought of as a series resistor). The space constant, λ, is given as
$\lambda=\sqrt{\frac{R_{M}}{R_{A}}} \Rightarrow V_{x}=V_{0} e^{-x / \lambda}$
Space Constant Equation
The space constant is analogous to the time constants of RC circuits. For rapidly changing signals, the membrane time constant is RMCM. V0 here is the DC value at the beginning of the transmission line, and Vx is the value at a distance x from there. Typical values of λ are on the order of 0.1 to 1.0 mm [Kand81]. The space constant is determined from DC values, so CM is not considered.
The resistance of a conductor decreases as the cross-sectional area increases. As a result, some species have developed relatively large axons, such as the giant squid axon, which reaches about 1 mm in diameter. The axial resistance decreases in proportion to the square of the diameter, while the capacitance increases only in proportion to the diameter. The net effect is a decrease in the time constant, RMCM, resulting in faster transmission. Another biological approach is surrounding the axon with an insulating layer of myelin, called the myelin sheath. The result is an increase in the separation of the membrane capacitance charge densities, which reduces the RMCM time constant since capacitance is inversely proportional to the separation distance.
Example 2.3.6-1
If the membrane resistance is RM = 100 Ω·mm and the axonal resistance is RA = 10 Ω/mm, at what length will a DC signal be down to 10% of its original value?
Solution:
The space constant equation gives
$\lambda=\sqrt{\frac{R_{M}}{R_{A}}}=\sqrt{\frac{100 \Omega \mathrm{mm}}{10 \Omega / \mathrm{mm}}}=3.16 \mathrm{~mm}$
$0.1 V_{0}=V_{0} e^{-x / \lambda}$
$0.1=e^{-x / 3.16}$
$\ln 0.1=-\frac{x}{3.16}$
$x = 7.28 mm$
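The calculation in Example 2.3.6-1 is easy to script. This sketch (the function name is mine) also checks the answers to Exercise 2.3.6-1:

```python
import math

def decay_length(R_M, R_A, fraction):
    """Distance (mm) at which a DC signal falls to `fraction` of its
    original value, from V_x = V_0 * exp(-x / lambda), lambda = sqrt(R_M / R_A)."""
    lam = math.sqrt(R_M / R_A)
    return -lam * math.log(fraction)

print(round(decay_length(100, 10, 0.10), 2))  # Example 2.3.6-1: 7.28 mm
print(round(decay_length(80, 15, 0.10), 2))   # Exercise a) 5.32 mm
print(round(decay_length(80, 15, 0.05), 2))   # Exercise b) 6.92 mm
print(round(decay_length(80, 15, 0.01), 2))   # Exercise c) 10.64 mm
```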
Exercise 2.3.6-1
If the membrane resistance is RM = 80 Ω·mm and the axonal resistance is RA = 15 Ω/mm, at what length will a DC signal be down to a) 10% of its original value, b) 5% of its original value, and c) 1% of its original value?
Answers
a) 5.32 mm, b) 6.92 mm, c) 10.64 mm
2.3.7 Neuronal Adaptation through Lateral Inhibition
Lateral inhibition is a general phenomenon occurring frequently in layers of interconnected neurons. Cells are electrically coupled so that when one cell fires, it inhibits cells in its neighborhood from firing. The more directly connected a cell is (for example, a nearest neighbor), the greater the inhibition effect. This process is exhibited in the horizontal cell (HC) layer in the retina to provide the spatial-temporal smoothing function [Dowl87]. The HC layer can be modeled as a 2D resistor-capacitor (RC) ladder network, where the RC time constant is the inhibiting force of the network [Koch91].
Lateral inhibition in the retina decreases a photoreceptor's output signal due to activity in nearby photoreceptors; therefore, photoreceptor outputs adapt to significant changes in the local neighborhood. In engineering terms, this is considered a localized automatic gain control (AGC). A simple AGC applied across the whole image would help prevent image saturation when bright lights are present and help bring out darkened details in the absence of bright lights. In a conventional camera system, a single gain adjustment may be applied to the whole image based on image content. A variation of this is the shutter speed on film that limits the time duration for receiving photons. A high-speed (short integration time) film may be used in the presence of bright lights, while a slow-speed (longer integration time) film may be used in darkened rooms. The difficulty is that such a choice is made across the entire image. It would not be possible to capture both dark and bright contrasts in the same picture.
A localized AGC, however, will prevent saturation due to bright intensity sources and allow for sufficient detail to bring out dark objects against a dark background. Lateral inhibition in a neuronal layer effectively varies the AGC across the processing plane based on the average activity in the localized area. This is accomplished in the first horizontal cell layer in the retina.
The second laterally-connected retinal cell layer, the amacrine cells, adds further lateral inhibition to retinal processing, which causes ganglion cell outputs in the optic nerve to adapt to motion in the image stream. Motion information is sent initially, but is then inhibited by this layer. Similar adaptation to transient signals is observed in the auditory system, as attention is quickly drawn to the onset of a sound, but then suppressed as the neuronal inhibitory signals adapt to this stimulus.
2.3.8 A Circuit Model of a Neuron in Equilibrium
The ionic permeabilities (PK+, PNa+, and PCl-) regulate how slowly or quickly ions can move from inside-to-outside or outside-to-inside through a neuron’s cell membrane. These parameters can be modeled as conductances, such as GK+, or resistances, such as RK+ = 1/GK+. The Nernst potential associated with each ion can be modeled as an independent source. Since [Cl-]i ≈ [Cl-]o, the Nernst potential for Cl- is essentially zero. This is due in part to the fact that PCl- tends to change to keep the concentrations about the same.
Figure 2.3.8-1 shows a simplified model of a neuron in equilibrium. The independent sources represent the Nernst potentials due to the concentration differences between inside and outside the cell. To determine the resting membrane potential, we use Kirchhoff’s Voltage Law (KVL) to solve for the inside with respect to the outside (ground).
Example 2.3.8-1
Using the model shown in Figure 2.3.8-1 where RK+ = 2MΩ, RNa+ = 1MΩ, EK+ = -75mV, and ENa+ = +55mV determine the intracellular fluid potential with respect to the extracellular fluid.
Solution:
Letting IK be the current upward through RK and INa be the current upward through RNa,
-75mV + IKRK - INa RNa - 55mV = 0, and IK = -INa
=> IK (RK + RNa) = 130mV
=> IK = 43.3nA
=> Vm = -75mV+ IKRK = -75mV+ (43.3nA)(2M) = 11.67 mV
Check: Vm = +55mV + INaRNa = +55mV + (-43.3nA)(1M) = 11.67 mV
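The branch-current solution above is compact to script; this sketch (the function name is mine) verifies Example 2.3.8-1 in SI units:

```python
def resting_potential(E_K, R_K, E_Na, R_Na):
    """Vm for the two-battery equilibrium model of Figure 2.3.8-1.
    KVL with I_K = -I_Na gives I_K = (E_Na - E_K) / (R_K + R_Na)."""
    I_K = (E_Na - E_K) / (R_K + R_Na)   # amps, upward through R_K
    return E_K + I_K * R_K              # Vm, inside with respect to outside

Vm = resting_potential(-75e-3, 2e6, 55e-3, 1e6)
print(round(Vm * 1e3, 2))   # 11.67 mV, as in Example 2.3.8-1
```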
Figure 2.3.8-2 shows the same model with dependent current sources representing the Na-K Pump. Sometimes the K+ ions pumped in are not at the same rate as the Na+ ions pumped out. However, if the neuron is in a steady-state condition and [K+]i and [Na+]i are constant, then computations are simplified as
IK = -IK-Pump and INa = -INa-Pump
To solve for the resting potential, Vm, combine these current relationships with the two branch equations:
Vm = Ek+ IKRK and Vm = ENa + INaRNa
Example 2.3.8-2
In Figure 2.3.8-2 let IK be the upward current through RK and INa be the upward current through RNa. Calculate Vm, IK , and INa if Ek = -75 mV, ENa = +55 mV, PK = gK = 1μ mho, PNa = gNa = 0.2μ mho, and 4 Na+ ions are pumped out for every 3 K+ ions pumped in. Assume [K+]i and [Na+]i are constant, and other ionic influences can be neglected.
Solution:
RK = 1/ gK = 1MΩ
RNa = 1/gNa = 5MΩ
“...4 Na+ ions out for every 3 K+ in...” => 3INa-Pump = - 4IK-Pump
=> IK-Pump = -0.75INa-Pump
“...[K+]i and [Na+]i are constant...” => INa = -INa-Pump
=> IK = -IK-Pump = 0.75INa-Pump
= -0.75INa
Vm = -75mV + IKRK = +55mV + INaRNa
=> -130mV = INaRNa -(-0.75INa)RK
=> INa = -22.61 nA
=> IK = (-0.75) INa = 16.96 nA
Vm = -75mV + IKRK = -58.04 mV
Check: Vm = +55mV + INaRNa = -58.04 mV
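Example 2.3.8-2 can be scripted the same way. In this sketch, `k_in_per_na_out` is my name for the pump ratio of K+ ions in per Na+ ion out (3/4 here):

```python
def pumped_resting_potential(E_K, g_K, E_Na, g_Na, k_in_per_na_out):
    """Steady-state Vm for the model of Figure 2.3.8-2, where the Na-K pump
    holds [K+]i and [Na+]i constant, so I_K = -(k_in_per_na_out) * I_Na."""
    R_K, R_Na = 1.0 / g_K, 1.0 / g_Na
    # E_K + I_K*R_K = E_Na + I_Na*R_Na with I_K = -ratio*I_Na gives:
    I_Na = (E_K - E_Na) / (R_Na + k_in_per_na_out * R_K)
    I_K = -k_in_per_na_out * I_Na
    return E_Na + I_Na * R_Na, I_K, I_Na

Vm, I_K, I_Na = pumped_resting_potential(-75e-3, 1e-6, 55e-3, 0.2e-6, 0.75)
print(round(Vm * 1e3, 2), round(I_K * 1e9, 2), round(I_Na * 1e9, 2))
# -58.04 mV, 16.96 nA, -22.61 nA, matching Example 2.3.8-2
```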
Exercise 2.3.8-1
Let EK+ = -75mV and ENa+ = +55mV in the circuit of Figure 2.3.8-1, where IK is the current upward through RK and INa is the current upward through RNa. Calculate Vm for
1. RK+ = 200KΩ, RNa+ = 5MΩ,
2. RK+ = 2.6MΩ, RNa+ = 2.6MΩ, and
3. RK+ = 5.1MΩ, RNa+ = 100KΩ
Based on these values, complete the rest of the following table:
Vm ( mV)
RK+ << RNa+ ________________
RK+ = 0.2MΩ, RNa+ = 5MΩ ________________
RK+ = 2.6MΩ, RNa+ = 2.6MΩ ________________
RK+ = 5.1MΩ, RNa+ = 0.1MΩ ________________
RK+ >> RNa+ ________________
Answers:
Vm ( mV)
RK+ << RNa+ ~ -75 mV
RK+ = 0.2MΩ, RNa+ = 5MΩ -70 mV
RK+ = 2.6MΩ, RNa+ = 2.6MΩ -10 mV
RK+ = 5.1MΩ, RNa+ = 0.1MΩ 52.5 mV
RK+ >> RNa+ ~ 55 mV
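The pattern in the table follows from writing Vm as a conductance-weighted average of the two Nernst potentials, so Vm is pulled toward the battery with the smaller series resistance. A quick check of the three computed rows (the helper name is mine):

```python
def divider_vm(E_K, R_K, E_Na, R_Na):
    """Vm as the conductance-weighted average of the two Nernst potentials:
    Vm = (g_K*E_K + g_Na*E_Na) / (g_K + g_Na)."""
    g_K, g_Na = 1.0 / R_K, 1.0 / R_Na
    return (g_K * E_K + g_Na * E_Na) / (g_K + g_Na)

for R_K, R_Na in [(0.2e6, 5e6), (2.6e6, 2.6e6), (5.1e6, 0.1e6)]:
    print(round(divider_vm(-75.0, R_K, 55.0, R_Na), 1))
# -70.0, -10.0, 52.5 (mV), matching the table
```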
Exercise 2.3.8-2
In Figure 2.3.8-2 let IK be the upward current through RK and INa be the upward current through RNa. Calculate Vm, IK , and INa in the model in Figure 2.3.8-2 if Ek = -75 mV, ENa = +55 mV,
PK = gK = 5μ mho, PNa = gNa = 0.2μ mho, and 3 Na+ ions are pumped out for every 2 K+ ions pumped in. Assume [K+]i and [Na+]i are constant, and other ionic influences can be neglected.
Answers:
Vm = -71.6mV, IK = 16.88nA, and INa = -25.32nA
Exercise 2.3.8-3
Rework Exercise 2.3.8-2 assuming 5 K+ ions are pumped in for every Na+ ion pumped out:
In Figure 2.3.8-2 let IK be the upward current through RK and INa be the upward current through RNa. Calculate Vm, IK , and INa in the model in Figure 2.3.8-2 if Ek = -75 mV, ENa = +55 mV,
PK = gK = 5μ mho, PNa = gNa = 0.2μ mho, and 5 K+ ions are pumped in for every Na+ ion pumped out. Assume [K+]i and [Na+]i are constant, and other ionic influences can be neglected.
Answers:
Vm = -53.3 mV, IK = 108.3 nA, and INa = -21.67 nA
2.3.9 Neuronal Motion Detection
Inputs from adjacent neurons can be connected in a way that provides motion detection. Figure 2.3.9-1(a) and (b) show two versions of a Hassenstein-Reichardt motion detection model [Zorn90, Hass56]. The two inputs, x1 and x2, represent outputs from two adjacent receptors in a sensory system. In the first instance, a time derivative of one input is multiplied by the value of the adjacent receptor. In the second instance, a delayed version of one input is multiplied by the value of the adjacent receptor. In both instances, the outputs of both products are compared: If equal, they cancel each other in the summation. Otherwise, the result is either positive or negative, depending on the direction of the object.
In Figure 2.3.9-1(c) a three-element binary block is moved right (upper half) and then left (lower half). The input is a ten-element array, and the block is seen by the pattern of ones. The motion detector output is shown, followed by the interpreted results. Both models (a) and (b) result in the same output and interpreted results. The example here is binary, so that the results are very clean. For more realistic values, thresholds would need to be established to reduce effects of noise.
Exercise 2.3.9-1
Give the expected output (Right, Left or No Motion) of a two-element Hassenstein-Reichardt motion detector given the following two input sequences; assume all previous values are zero. Either model (in-channel derivatives or cross-channel delays) should give the same results:
Sequence 1 Sequence 2
x1 x2 Output x1 x2 Output
0 0 No Motion 0 0 No Motion
0 0 __________ 1 0 __________
1 1 __________ 0 1 __________
0 0 __________ 0 0 __________
1 0 __________ 0 1 __________
0 1 __________ 1 1 __________
1 0 __________ 1 0 __________
0 0 __________ 0 0 __________
Answers:
Sequence 1 Sequence 2
x1 x2 Output x1 x2 Output
0 0 No Motion 0 0 No Motion
0 0 No Motion 1 0 No Motion
1 1 No Motion 0 1 Motion Right
0 0 No Motion 0 0 No Motion
1 0 No Motion 0 1 No Motion
0 1 Motion Right 1 1 Motion Left
1 0 Motion Left 1 0 Motion Left
0 0 No Motion 0 0 No Motion
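The delayed-correlation version of the detector is compact to implement. This sketch (function name mine) reproduces the answer tables for both sequences of Exercise 2.3.9-1:

```python
def reichardt(x1_seq, x2_seq):
    """Hassenstein-Reichardt correlator, cross-channel delay version:
    out(t) = x1(t-1)*x2(t) - x2(t-1)*x1(t).
    Positive output -> motion right (x1 toward x2); negative -> motion left."""
    labels, prev1, prev2 = [], 0, 0
    for x1, x2 in zip(x1_seq, x2_seq):
        out = prev1 * x2 - prev2 * x1
        labels.append("Right" if out > 0 else "Left" if out < 0 else "No Motion")
        prev1, prev2 = x1, x2          # delay each channel by one step
    return labels

# Sequence 1 from Exercise 2.3.9-1 (all previous values zero)
print(reichardt([0, 0, 1, 0, 1, 0, 1, 0], [0, 0, 1, 0, 0, 1, 0, 0]))
# ['No Motion', 'No Motion', 'No Motion', 'No Motion',
#  'No Motion', 'Right', 'Left', 'No Motion']
```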
2.04: Questions
Chapter 2 Questions
1. Nothing in the universe is linear, and everything varies with time. Why do we study linear time-invariant systems if they do not exist?
2. What are the primary physical sensor types most useful for reverse-engineering? How do the primary senses (touch, taste, smell, vision, and hearing) fit into these types?
3. What are the three basic types of stimulus reception in biological sensory systems?
4. What are the three basic maps of sensory receptive fields found in the brain?
5. What are the similarities between natural and artificial neural networks?
6. If neurons are so much slower than transistors, how could there be promise of significant performance improvement for computers built with diodes and transistors that model neuronal behavior?
7. If biological systems are constantly maturing and adapting, why is it beneficial to study the structure and function of the neuronal system of a mature adult animal (or human)?
8. What is an immediate environmental adaptation in the human vision system?
9. Compare and contrast charge and steady-state charge neutrality in neurons and transistors.
10. How do the membrane resistance and axonal resistance affect transmission ability of an action potential down an axon? What other factors can improve transmission? That is, what other factors increase the length of the spatial constant?
11. If a layer of cells exhibits lateral inhibition and a single neuron fires (produces an action potential), what happens to adjacent cells that are connected to this cell? What happens to cells that are connected but are farther away?
12. How are signals typically processed in neuronal layers? Examples may include the neuronal layers of the retina or the brain.
13. How is the strength of a signal measured when encoded as action potentials of the same peak value (around +55 mV)?
14. Action potentials are triggered when the intracellular fluid potential exceeds a threshold. How is it that for a steady input the output firing rate (frequency of action potentials) adapts from an initial firing rate to a slower firing rate?
15. What controls the rate of adaptation?
Biological sensory systems perform energy-efficient and computationally elegant algorithms to accomplish tasks like those required of certain engineering applications. Animals and some engineered systems have the capacity for limited movement within the natural environment in response to sensory stimuli. For example, consider a frontend seeker on a missile designed to autonomously seek and hit a specified target. The missile needs to be guided to a target seen by a seeker with background sensory noise; this requirement is like that of a dragonfly searching and acquiring smaller flying insects. Tasks common to both systems include navigating and guiding the system within the natural environment, detecting, identifying, and tracking objects identified as targets, efficiently guiding the system to the targets, and then intercepting these targets.
This part is about photo-sensory systems, or vision, which involves the conversion of photonic energy into electronic signals. These signals are subsequently processed to extract pertinent information. The primary emphasis will be on vision computational models based on the primate vision system since much study has been made in this area. We begin with some vision principles common across many species within the animal kingdom. Then the structure and function of natural vision systems is investigated, with emphasis on information processing first within invertebrates (specifically arthropods) and then within vertebrates (specifically primates). Engineering application examples that leverage natural vision concepts follow.
3.1 Natural Photo-sensory Systems
Passive means the sensor observes natural stimuli that might be available within the environment, while active implies the sensor sends stimuli out and observes the response from the environment. Physical sensors in the animal kingdom include photo-sensory, such as passive vision systems processing photons; mechano-sensory, such as passive sonar (audition), active sonar (bats, dolphins, whales), passive compression (touch), and active compression (insect antennas); and chemo-sensory, such as gustation (taste) and olfaction (smell). This chapter will focus on passive photo-sensory vision systems.
3.1.1 Common principles among natural photo-sensory systems
A photon is the wave-particle unit of light with energy E = hν, where h is Planck’s constant and ν is the electromagnetic frequency. The energy per time (or space) is modeled as a wavelet since it satisfies the general definition of having a beginning and ending and unique frequency content. Information contained in the frequency and flux of photons is photonic information, which gets converted into electronic information coded in the graded (or analog) neural ionic voltage potentials or in the frequencies of action potentials.
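For scale, the photon energy E = hν can be computed directly; the 500 nm example wavelength is my choice for illustration (near the peak of human daylight sensitivity), not from the text:

```python
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy_J(wavelength_m):
    """E = h * nu = h * c / lambda for a photon of the given wavelength."""
    return h * c / wavelength_m

E = photon_energy_J(500e-9)   # a 500 nm (green) photon
print(f"{E:.3e} J")           # about 3.97e-19 J
```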
Biological systems can be divided into vertebrates, such as mammals and reptiles, and invertebrates, such as insects. Animals collect and process information from the environment for the determination of subsequent action. The many varied species and associated sensory systems in existence reflect the wide range of environmental information available as well as the wide range of biological task objectives.
Commonality of Photo-reception and Chemo-reception
Photo-reception is made possible by the organic chemistry of photopigments, which initiate the visual process by capturing photons of light. Photopigments are composed of a form of Vitamin A called retinal and a large protein molecule called opsin. Opsins belong to a large family of proteins which includes olfactory (sense of smell) receptor proteins. Odorant and tastant molecules attach themselves to a special membrane receptor, causing a sequence of molecular reactions eventually resulting in neuronal signaling. Photopigment molecules are like these chemo-sensory membrane receptors, with retinal serving as the odorant or tastant already attached. The incoming photon of light gives the molecule enough energy to initiate a chain reaction like that in chemo-sensory reception when an odorant or tastant molecule comes into contact with the receptor. As a result, the photo-reception process is really a simplified form of the chemo-reception process. A photo-sensory (or visual) system begins by converting the photonic stimulus into a chemical stimulus (photopigments), and the remaining information processing of the visual system is that of a chemo-sensory system.
Curvature and Reflection
The two primary eye designs are the vesicular (containing a cavity) eye found in vertebrates and certain mollusks and the compound eye found in arthropods. Figure 3.1-1 shows the concave nature of the vesicular eye and the convex nature of the compound eye. Images in biological systems are formed on a curved sheet of photoreceptors, called the retina. In a similar way, cameras form images on a sheet of photographic film, where the film is flat instead of curved. The ancient sea-going mollusk Nautilus has the concave retina structure with a pinhole aperture, which creates an inverted image with no magnification. Most concave retinas (vertebrates, etc.) depend on the refraction of light through a larger aperture. The lens serves this purpose. A larger aperture is needed to allow more photonic flux to enter the reception area to ensure sufficient energy is available to stimulate photoreceptors, and refraction through the lens and eyeball fluid (vitreous humor) serves to compensate for the otherwise blurred view of the environment as the aperture is increased.
Physical properties of reflection are also used in the eye designs of scallops and certain fish and mammals. Some of the purposes of designs based on reflection are not known (scallops), but other vision system designs exploit reflection in nocturnal (night-time low-light-level) conditions. For example, night-hunting by certain mammals is augmented by the fact that a photon of light has twice the chance of being captured by the same photoreceptor as the light passes through a second time after being reflected. A special reflective tissue (tapetum lucidum) behind the retina gives this advantage in nocturnal conditions. This reflection can be observed by shining a light (flashlight, headlight) toward the animal as it looks back.
The photoreceptors are typically long and cylindrical cells containing photopigments arranged in many flat disc-shaped layers. This design gives a small angular reception area, leading to sufficient spatial acuity, while providing many opportunities for the incoming photon to be captured by the photopigment.
Optical Imperfections
There are several imperfections that are dealt with in natural vision systems. Some of these include spherical aberration, chromatic aberration, and diffraction. Natural vision system parameters typically represent an optimal balance of the effects of these imperfections. Spherical aberration is caused by light coming into focus at a shorter distance when coming through the periphery of the lens than from the center. Chromatic aberration is caused by the dependency on wavelength of the index of refraction: The shorter the wavelength, the greater the amount of refraction. This means that if the blue part of the image is in focus, then the red part of the image is slightly out of focus. The optical properties of the available biological material do not allow for perfect compensation of these effects. For example, to correct for spherical aberration requires a constant decrease in the cornea index of refraction with distance from the center. Since the molecular structure of the cornea is constant, this is not possible. The general shape, however, of the primate eye is slightly aspherical, which minimizes the effects of spherical aberration. As the primate eye changes shape with age, these aberrations are corrected by external lenses (eyeglasses).
The third imperfection is caused by diffraction. Diffraction is a wave optics phenomenon resulting from the edge effects of the aperture. When combined with spherical and chromatic aberration, the result is a spatial frequency limit on the image that can be mapped onto the retina. This limit is typified by the angular distance at which two separate point sources can be resolved, called angular acuity. Spatial acuity refers to the highest spatial frequency that can be processed by the vision system. The displacement between photoreceptors in highly evolved species is typically the distance represented by the angular acuity. Any further reduction in distance is not practical as there would be no advantage concerning image information content.
Another consideration is contrast sensitivity, which is how sensitive two separate photoreceptors are to varying levels of photon flux intensity. In biological systems, the information forwarded is frequently a difference in contrast between two adjacent photoreceptors. If the photoreceptors are very close, then the difference will never be great enough to show a relative contrast since edges in the image are already blurred due to the aforementioned imperfections. The photoreceptor spacing in the retina is on the order of the Nyquist spatial sampling interval for frequencies limited by these imperfections. In the adult human retina, this turns out to be about 120 million photoreceptors: about 100 million rods, which are very sensitive and used in nocturnal conditions, and about 20 million cones, which come in three types and provide color information in daylight conditions.
Visual Information Pathways
Receptive fields for the various sensory systems are mapped to specific surface regions of neuronal tissue (such as retina, brain, and other neuronal surfaces). Due to the connectivity, several pathways are usually observed. For example, one photoreceptor may be represented in several neurons that are transmitting photonic information to the brain. One neuron may represent the contrast between that particular photoreceptor and the most adjacent ones. This would be an example of a parvocellular pathway neuron (parvo means small). Another neuron may represent the contrast between an average of that photoreceptor and the most adjacent ones, and an average of a larger region centering on that photoreceptor. This would be an example of a magnocellular pathway neuron (magno means large). As it turns out, the names come from the relative physical size of these neurons, and they happen to also correspond to the size of the receptive field they represent. Parvocellular and Magnocellular pathways are common among many species, for example, both humans (and other mammals) and certain arthropods.
Connectivity and Acuity
There is a balance between temporal acuity, which is the ability to detect slight changes in photonic flux in time, and spatial acuity, which is the ability to detect slight changes between two adjacent objects whose images are spatially separated on the retina. As receptors are more highly interconnected, there is better temporal acuity due to the better photon-integrating ability of the aggregate. Receptors that are not highly interconnected exhibit better spatial acuity.
To illustrate this concept, consider a steady photonic flux represented by 1 photon per 10 photoreceptors per unit of time. On average, each photoreceptor would receive 1 photon every 10 units of time. If this incoming photon rate changed to 2 photons per 10 photoreceptors, then the output of a single photoreceptor would have to be monitored for a duration of 10’s of units of time to detect an average increase in photon flux. If an aggregate of 100 photoreceptor cells were integrated, and if the photonic flux were uniformly distributed, then the total output would jump from 10 photons to 20 photons, which might be noticeable at the very next unit of time. The result is that the animal will be able to detect slight changes in photonic flux much better if the cells are highly connected, while the ability to distinguish between two adjacent small objects would deteriorate. Thus, a higher connectivity results in sharp temporal acuity at the cost of spatial acuity.
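The pooling arithmetic above can be simulated with random photon arrivals. This sketch (all numbers and names are illustrative, not from the text) models each receptor as catching a photon with probability 0.1 per time step, doubling to 0.2, and asks how often a single time step reveals the change:

```python
import random

random.seed(1)

def pooled_count(n_cells, p, trials):
    """Photon count summed over n_cells receptors in one time step,
    repeated `trials` times (Bernoulli arrivals, probability p per cell)."""
    return [sum(random.random() < p for _ in range(n_cells))
            for _ in range(trials)]

def detectable(n_cells, trials=2000):
    """Fraction of single time steps in which the pooled count at the
    doubled flux (p = 0.2) exceeds the count at the old flux (p = 0.1)."""
    lo = pooled_count(n_cells, 0.1, trials)
    hi = pooled_count(n_cells, 0.2, trials)
    return sum(h > l for h, l in zip(hi, lo)) / trials

print(detectable(1))    # single receptor: change rarely visible in one step
print(detectable(100))  # 100 pooled receptors: almost always visible
```

The pooled aggregate detects the doubled flux in nearly every time step, while the single receptor usually cannot, mirroring the temporal-acuity argument in the text.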
Coarse Coding
Coarse coding is the transformation of raw data using a small number of broadly overlapping filters. These filters may exist in time, space, color, or other information domains. Biological sensory systems tend to use coarse coding to accomplish a high degree of acuity in sensory information domains. For example, in each of the visual system information domains (space, time, and color, or chromatic) we find filters that are typically few in number and relatively coarse (broad) in area covered (or bandwidth): There are essentially only four chromatic detector types, whose spectral absorption responses are shown in Figure 3.1-2, three temporal channels, and three spatial channels. Neurons in the retina receiving information from the photoreceptors are connected in such a way that we can observe these spatial, temporal, and chromatic information channels in the optic nerve.
Coarse coding can take on many different forms, and one coarsely coded feature space may be transformed into another. For example, within the color channels of the vision system we find a transformation from broad-band in each of the three colors at the sensory level to broad-band in color-opponent channels at the intermediate level. Other interesting examples of coarse coding include wind velocities and direction calculation by cricket tail sensors and object velocity calculations with bursting and resting discharge modes of neuronal aggregates in the cat superior colliculus.
The responses of vision system rods and cones must be broad in scope to cover their portion of the data space. For example, in daytime conditions only the three cone types have varying responses. As a minimum each type must provide some response over one-third of the visible spectrum. Each detector type responds to much more than one-third of the visible spectrum. Since a single response from a given detector can result from one of many combinations of color and intensity, the value by itself gives ambiguous local color and intensity information. If the response curve was very narrow band, then any response is the result of a particular frequency, and the value of the response would reflect its intensity. However, many of these detectors would be required to achieve the wide range (millions) of colors we can perceive. It is not practical to have each of many narrow-band detectors at each spatial location. The natural design is optimized to allow for many colors to be detected at each location while minimizing the neuronal hardware (or “wet-ware”) requirements.
3.1.2 Arthropod vision system concepts
Although there are millions of species within the animal kingdom, there are relatively few photo-receptor design concepts that have stood the test of time, such as the arthropod compound eye. There are some interesting similarities between the vision systems of the insect phyla and primates. For example, both map incoming light onto an array of photoreceptors located in a retina. Both exhibit distinct post-retina neuronal pathways for what appears to be spatial and temporal processing.
Of course, there are some key differences between insect and primate vision systems. Insects have non-movable fixed-focused optics. They are not able to infer distances by using focus or altering gaze for object convergence. The eyes are much closer together, so that parallax cannot be used to infer distances either. The size is much smaller, and the coverage is in almost every direction so that the overall spatial acuity is much worse than primates. As a result, navigation appears to be done more by relative image motion than by any form of object detection and recognition [Srini02].
Arthropod Compound Eye
The arthropod compound eye is a convex structure. The compound eye is a collection of individual ommatidia, which are complex light-detecting structures typically made up of a corneal lens, a crystalline cone, and a group of photosensitive cells. Each ommatidium forms one piece of the input image, so the full image is formed by the integration of all ommatidia. There are three basic designs for integrating ommatidia into a composite image:
1. Apposition: Each ommatidium maps its signal onto a single photoreceptor.
2. Superposition: Several ommatidia contribute to the input signal for each photoreceptor.
3. Neural superposition: Not only are the photoreceptor inputs a superposition of several ommatidia, but neurons further along the processing chain also receive their inputs from several photoreceptor outputs.
Apposition eyes form relatively precise images of the environment. This design is common among diurnal (daytime) insects. Superposition eyes are common among nocturnal (night-time) and crepuscular (twilight) insects. In low light levels, the superposition design allows for greater sensitivity since light from several ommatidia is focused onto a single photoreceptor. The greater sensitivity of the superposition eye comes at a cost of spatial acuity since image detail is shared by neighboring pixels. This is an example of “higher connectivity results in sharp temporal acuity at the cost of spatial acuity” explained earlier. The neural superposition eye is found in the dipteran (two-winged) fly. This design allows further processing to compensate for the loss of spatial acuity, resulting in both good spatial acuity and good sensitivity.
The superposition eye has greater sensitivity to changes in photonic flux because of the higher degree of connectivity of the ommatidia to a single photoreceptor. In a similar way, the primate rod system is highly interconnected, which results in a high degree of temporal sensitivity. The primate photoreceptors are divided into rods and cones, named for the shape of the outer photopigment-containing segment. Certain cone cells are also highly interconnected, bringing better sensitivity to temporal changes.
Scanning Eyes
A few mollusks and arthropods have developed a scanning mechanism for creating a visual image of the external environment. A narrow strip of photoreceptors is moved back and forth to generate the complete image. Certain sea snails have retinas that are 3 to 6 photoreceptors wide and 400 photoreceptors long. The eye scans 90°, taking about a second to scan up, and about a fourth of a second to return down [Smith00].
Mantis shrimp contain 6 rows of enlarged ommatidia in the central region of the compound eye. The larger ommatidia contain color visual pigments that can be used to further investigate an object of interest by scanning with these central photoreceptors. This allows the shrimp to use any color information in the decision process [Smith00].
Certain jumping spiders contain retinas 5 to 7 photoreceptors wide and 50 photoreceptors long. The spider normally scans from side to side but can rotate the eye to further investigate a particular object of interest. The lateral (additional) eyes on this spider contain highly interconnected photoreceptors for detecting slight, rapid movements. Once a movement is detected, the attention of the primary eye can be directed to the newly detected object. This process is analogous to primate vision, where the more peripheral cells are highly interconnected and the central area (the fovea, to be discussed later) is more densely packed and less interconnected. A sharp movement in the periphery causes a primate to rotate the eyes to fixate on the source of the movement. Once fixated, the higher spatial acuity of the central area can be used to discern the spatial detail of the new object of interest [Smith00].
3.1.3 Primate vision systems
Early vision can be defined as the processes that recover the properties of object surfaces from 2D intensity arrays. Complete vision would be the process of using early vision information to make some decision. The focus in this section is on vertebrate vision information pathways that begin in the retina and terminate in cortical processing stages. Cortical comes from cortex, which is used to describe the part of the brain where sensory system information is processed. Vision is processed in the primary visual cortex, hearing is processed in the auditory cortex, and touch is processed in the somatosensory cortex. Many of these concepts are also common in insect vision.
Figure 3.1-3 shows the relevant parts of the primate eye. Photonic energy is first refracted by the cornea and further by the lens and the vitreous humor, which fills the optics chamber. The retina covers most of the inner portion of the eye and serves as the first vision processing stage. Approximately 120 million photoreceptors are encoded into about 1 million axons that make up the optic nerve.
Figure 3.1-4 shows the other basic components of the primate vision system. A projection of the 3D environment is mapped onto the 2D sheet of neuronal tissue called the retina. The primate retina is composed of several layers of neurons, including photoreceptor, horizontal, bipolar, amacrine, and ganglion cell layers to be discussed in more detail later. The information is graded, which basically means analog to electrical engineers, until it reaches the axon (output) of the ganglion cell layer. The graded potential signaling is replaced by action potential signaling through the optic nerve. Upon reaching the optic chiasm, the right side of both retinas (representing the left side of the visual field) are mapped to the right side of the brain, and the left side of both retinas (right side of visual field) to the left side of the brain.
The retina, lateral geniculate nucleus (LGN) and the brain are all composed of layers of neurons. Figure 3.1-4 highlights the LGN whose outer 4 layers are the termination of Parvocellular Pathway (PP) optic neurons and inner 2 layers the termination of Magnocellular Pathway (MP) optic neurons. Both PP and MP signals are opponent signals, meaning the signal levels correspond to the contrast between a central receptive field (RF) and a larger surrounding RF which would include responses from neurons not represented by the central RF. Parvo (small) and magno (large) were names given by anatomists who based the names on the size of the cell bodies. Conveniently, it was later learned that the PP corresponds to smaller RFs (central RF could be one cell) and MP to larger RFs (central RF would be a larger aggregate of cells). In both cases the surrounding RF would be larger than the central RF. There is duality in the center-surround contrast signals in that some represent the central signal minus the surround (“ON” signals) while others represent the surround signal minus the central (“OFF” signals).
The PP contains color information, as the cone response of a single central signal will have a different spectral response from the average response of the surrounding neurons. Some earlier researchers used r, g, b to designate the three cone receptors. But since the spectral absorption curves broadly overlap much of the visible spectrum (as shown in Figure 3.1-2), a better notation is l, m, s for long-, medium-, and short-wavelength cone types [DeV88]. We adopt that convention in this book.
Spatio-temporal Processing Planes
The retina can be considered a “part” of the brain, as suggested by the subtitle of John Dowling’s book The Retina: An Approachable Part of the Brain [Dowl87]. The retina is a multi-layered region of neuronal tissue lining the interior surface of the eye, as shown in Figure 3.1-3. In the early stages of primate central nervous system (CNS) embryonic development, a single neural tube develops two optic vesicles with optic cups that eventually develop into the retinas for each eye. The physiology (or functioning) of layers of neurons is similar, whether located peripherally in the retina (about 5 layers), in the LGN (about 6 layers), or in the visual cortex (about 10-12 layers). If we can better understand the spatial-temporal-chromatic signal processing that exists in the retina, we will better understand what is also going on in the LGN and in the higher processing centers of the visual cortex.
The vision processing mechanics can be best visualized as a series of parallel-processing planes, each representing one of the neuronal layers in the retina or in the brain, as shown in Figure 3.1-5. Parallel incoming photons are received by the outer segments of the photoreceptors, resulting in signals that propagate to the visual cortex in the brain. Each plane of neuronal processing acts upon the image in a serial fashion. However, the processing cannot be described as simple, independent image filters acting on each separate plane. As the energy is propagated through the neuronal layers, the ionic charge spreads laterally across each processing plane. As a result, the output of each processing plane is a combination of the current and past inputs of the cells in the path, as well as the past inputs of the adjacent cells.
To adequately model the spatial and temporal effects of the neuronal interconnections, each cell in each neuronal processing plane must consider the mediation effects of neighboring cells as well as the temporal effects of signal degradation in time. One way to model both effects is to apply a 2D spatial filter to each image plane and follow the filter with a leaky integrator, which accounts for temporal ionic equilibrium effects.
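To make the plane model concrete, the following minimal sketch applies a 2D Gaussian spatial filter followed by a per-pixel leaky integrator to a sequence of frames. The kernel radius, variances, and time constant are illustrative choices, and the convolution helper is written out so the sketch is self-contained:

```python
import numpy as np

def gaussian_kernel_2d(sigma_x, sigma_y, radius=6):
    """Separable 2D Gaussian kernel, normalized to unit sum, with
    independent horizontal and vertical variances."""
    x = np.arange(-radius, radius + 1, dtype=float)
    gx = np.exp(-x**2 / (2 * sigma_x**2))
    gy = np.exp(-x**2 / (2 * sigma_y**2))
    kernel = np.outer(gy, gx)
    return kernel / kernel.sum()

def convolve2d_same(img, kernel):
    """Same-size 2D correlation with edge padding (equivalent to
    convolution here because the Gaussian kernel is symmetric)."""
    kr = kernel.shape[0] // 2
    padded = np.pad(img, kr, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2*kr + 1, j:j + 2*kr + 1] * kernel)
    return out

def processing_plane(frames, sigma_x=1.0, sigma_y=1.0, tau=4.0):
    """One neuronal processing plane: spatial blur (lateral charge
    spreading) followed by a leaky temporal integrator (decay toward
    ionic equilibrium), so each output mixes current and past inputs."""
    kernel = gaussian_kernel_2d(sigma_x, sigma_y)
    alpha = 1.0 / tau                      # integration rate per step
    state = np.zeros_like(frames[0], dtype=float)
    outputs = []
    for frame in frames:
        spread = convolve2d_same(frame, kernel)
        state = (1.0 - alpha) * state + alpha * spread
        outputs.append(state.copy())
    return outputs

# A steady input: the plane's response ramps up toward the input level,
# reflecting the influence of historic as well as current inputs.
out = processing_plane([np.ones((16, 16)) for _ in range(40)])
```

With a steady input the integrator output rises from alpha on the first frame toward the input level, the discrete analog of a leaky membrane charging toward equilibrium.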
Information Encoding
Natural vision systems extract space (spatial), time (temporal), and color (chromatic) information to make some decision. Information is often encoded for transmission, for example, from the retina to the LGN. Figure 3.1-6a shows the basic information blocks in the vision system. Figure 3.1-6b illustrates the overall numerical processing elements in each of the various vision processing stages. There is an approximately 100:1 compression of the retina photoreceptors to the optic nerve signals, but an expansion of 1:1000 from optic nerve signals to visual cortex neurons. This expansion is known as optic radiation. Combining the compression and expansion, there is an overall expansion of about 1:10 from retinal photoreceptors to visual cortex neurons. As is typical in biology, the compression and expansion are quite non-uniform: there are about 2 optic nerve neurons per photoreceptor in the retina’s fovea (the very central part of vision), but only 1 optic nerve neuron for about every 400 photoreceptors in the peripheral part of the retina. This imbalance is a consequence of the importance of information at the center-of-gaze.
Natural vision filtering begins with photonic refraction through the cornea and lens (Figure 3.1-3). Figure 3.1-7 depicts the various cell layers within the retina and a gross approximation of the mathematical function performed by each layer on the incoming imagery. The incoming light then passes through the vitreous humor and retinal cell tissue and is focused onto a photoreceptor mosaic surface. The flux within a photoreceptor’s receptive region of the retina is averaged to a single output at the triad synapse (at the root of the photoreceptor). As a result, the information can be visualized as a mosaic, where each piece represents a single photoreceptor’s output.
Photonic energy is converted to electronic charge in the photopigment discs of the photoreceptors (rods and cones). It is believed that the rate of information transfer is proportional to the logarithm of the incoming intensity. The photoreceptors, with the help of a layer of horizontal cells, spread the charge in space and time within a local neighborhood of other receptors. Such charge-spreading can be modeled by spatio-temporal gaussian filters. Two separate variances (horizontal and vertical) are required for the spatial 2D filter and another for how the signal degrades in time.
The spread charge and the original photoreceptor charge, each of which can be modeled as a Gaussian-filtered version of the incoming imagery, are both available at the root of the photoreceptor, at the triad synapse. The bipolar cells connect to triad synapses and presumably activate signals proportional to the difference between the photoreceptor input and the horizontal cell input. Therefore, the bipolar cell output represents a difference-of-Gaussian version of the original image.
Spatial edges are detected by two types of bipolar cells, on-bipolars and off-bipolars, which respond to light and darkness, respectively. The on-bipolar responds if the central receptive field exceeds the surrounding receptive field, while the off-bipolar cells respond if the surrounding receptive field exceed the central receptive field. Temporal edges (rapid changes in photonic flux levels) are detected by on-off and off-on bipolar cells, which respond to quick decrements or increments in photonic flux, respectively. Corresponding ganglion cells (on, off, on-off, and off-on) propagate amacrine-cell-mediated responses to these bipolar cells.
The difference signal propagated by the bipolar cells is a consequence of the lateral inhibition caused by the connectivity of photoreceptors and horizontal cells. The horizontal cells connect horizontally to numerous photoreceptors at the triad synapse. Horizontal cells only have dendrites, which for other neurons would typically serve as input channels. The dendrites (inputs) for these cells pass ions in both directions, depending on how the ionic charge is distributed. The net effect is that adjacent photoreceptors have their information partially shared by this mediation activity of the horizontal cells.
Gap junctions between adjacent photoreceptors influence the photoreceptor charge. The response from a photoreceptor aggregate can be modeled as a spatial-temporal Gaussian with a small variance. The input from the neighboring aggregate of horizontal cells can be modeled with a similar Gaussian with a larger variance. The differencing function results in the difference-of-Gaussian (DOG) filter operation, resulting in a center-surround antagonistic receptive field profile. DOG functions and functions of the second derivative of Gaussian, called the Laplacian-of-Gaussian (LOG), have been used to model the bipolar cell output.
The analog charge information in the retina is funneled into information pathways as it is channeled from the mosaic plane to the optic nerve. These information channels originate in the retina and are maintained through the optic nerve and to portions of the brain. These include the rod channel, initiated by rod bipolars, the parvocellular pathway (PP) and the magnocellular pathway (MP), the latter two initiated by cone bipolars. Both the PP and the MP exhibit center-surround antagonistic receptive fields. PP cones are tightly connected, responding to small receptive fields, while the MP cones are more loosely connected (together with rod inputs), responding to large receptive fields.
The MP and PP perform separate spatial band-pass filtering, provide color and intensity information, and provide temporal response channels, as illustrated in Figure 3.1-8. A relatively high degree of acuity is achieved in each domain (space, time, and color, or chromatic) from these few filters. The MP is sensitive to low spatial frequencies and broad color intensities, which provide basic information of the objects in the image. The PP is known to be sensitive to higher spatial frequencies and chromatic differences, which add detail and resolution. In the color domain, the PP provides color opponency and thus spectral specificity, and the MP provides color non-opponency and thus overall intensity. In the time domain, the PP provides slowly varying dynamics, while the MP provides transient responses to image dynamics.
Graded Potential Processing
Retinal information is primarily in the form of graded potentials as it moves from the photoreceptor cell (PC) layer through the retina to the amacrine cell (AC) and ganglion cell (GC) layers. The GC output axons make up the optic nerve, transporting spikes to the LGN. The ganglion axonal signals begin the optic nerve transmission of color, time, and space information to the remaining neuronal organs in the vision pathway. It is typical that localized processing is graded, like an analog voltage level in an RLC circuit, but is pulsed via action potentials when travelling distances, such as from the retina to the LGN, and from there to the superior colliculus and to the visual cortex.
Figure 3.1-9 shows the signal and image processing functions at the various stages of the retina. Figure 3.1-10 shows greater detail of the lower left region of Figure 3.1-9. The spatio-temporal filtering characteristic is due to the connectivity of the first three layers of neurons: photoreceptors, horizontal cells, and bipolar cells.
Coarse-coding in the Signal Frequency Domain
We extend the use of coarse-coding to the signal frequency domain by considering Gaussian curves that simulate signal-processing filters. Gaussian-based filters were chosen due to the Gaussian nature of various stages of neuronal processing in vision systems as well as the ease of implementing Gaussian filters in electronic systems.
The Gaussian-based filters with different variances and their power spectra are shown in Figure 3.1-11. Gaussian curves G1 through G4 have increasing variances. Each curve is normalized to the same peak value so that the shapes of the curves can be compared. In practical applications, the curves would instead be normalized for unity area so that filtering changes the signal without adding or taking away energy.
The spectrum of these Gaussian filters is Gaussian with decreasing variances. A curve with a small variance, such as G1, will pass low and medium frequency components and attenuate high ones, while one with a larger variance, such as G4, will only pass very low frequency components. Subtracting these filters gives us the Difference-of-Gaussian (DoG) filters shown. For the variances selected, DoG G1-G2 serves as a high-pass filter, while the others serve more as band-pass filters.
Keep in mind that frequency here implies signal frequency. The signal could contain variations in spatially distributed energy (spatial frequency), variations of intensity with time at a single location (temporal frequency), or variations in color with respect to either time or space (chromatic frequency).
Pairs of filters can be selected to decompose a signal into specified frequency components. For example, if it is desired to measure the strength of a signal at around 10% of the sampling frequency (horizontal axis in Figure 3.1-11), then the difference between Gaussians G3 and G4 would be used to filter the signal. Due to the linearity of the Fourier Transform, the spectral responses (middle plot in Figure 3.1-11) can be manipulated by addition or subtraction to get the desired spectral response of the filter (bottom plot). This translates to the same manipulation in the signal domain (top plot).
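This frequency-domain manipulation can be checked numerically. The sketch below uses arbitrary stand-in variances for G3 and G4 and the unit-area normalization noted above for practical filters:

```python
import numpy as np

n = 256
t = np.arange(n) - n // 2

def gauss(sigma):
    """Gaussian filter normalized to unit area, so filtering does not
    add or remove signal energy."""
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

g3, g4 = gauss(2.0), gauss(6.0)          # stand-ins for G3 and G4

# Linearity of the Fourier transform: the spectrum of the difference
# equals the difference of the individual spectra.
spec_dog = np.fft.fft(np.fft.ifftshift(g3 - g4))
spec_diff = np.fft.fft(np.fft.ifftshift(g3)) - np.fft.fft(np.fft.ifftshift(g4))

# The DoG is band-pass: zero DC response (the unit-area Gaussians
# cancel at zero frequency) but a strong response at mid-band.
mag = np.abs(spec_dog)
```

The zero DC response is why a DoG filter responds to contrast rather than to the overall signal level.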
Photoreceptor Mosaic
These filtering concepts are readily extended to two dimensions for use with the planar processing behavior of vision system models. To fully appreciate the nature of the image filter, it is essential to understand that the pixels are not uniformly distributed in size or type. The input image comes from a photoreceptor mosaic composed of S, M, and L cones and Rods.
Figure 3.1-12 shows a gross simplification of the photoreceptor mosaic. The central region is called the fovea and represents a circular projection of about a 1° conical view of the environment. In this region there are only two photoreceptor types: M and L cells. Two cone types allow for color discrimination in the fovea, and the lack of rod cells allows for a high degree of spatial acuity. The rapid decline of spatial acuity with eccentricity, or the amount of separation from the center, can be clearly demonstrated by looking at a book on a bookshelf. Keeping the eyes fixed, it becomes difficult to read titles that are still relatively close to the fixation point.
The lack of rod cells in the fovea accounts for the disappearance of a faint star when we look directly at it. Rod cells are far more sensitive, so they respond in nighttime dim lighting conditions. However, if cones are not stimulated, there is no color discrimination since a strong signal at a frequency with weak response is the same as a weak signal at a frequency with strong response.
Figure 3.1-13 shows a representative mapping of fovea L and M cells into the parvocellular (PP) and magnocellular (MP) pathways. The PP cells are physically smaller, but they also carry information pertaining to smaller receptive fields. In the figure, the L:M ratios in the MP are kept nearly constant (2:1), so the only response would be increased or decreased intensity (luminance). The PP surround fields, however, are skewed toward the cell type not in the center. Overall, there is a 2:1 ratio of L:M cells. The surround field in the upper-left connection is 1:1, which favors the M cell contribution when the L cell is the center. In the other example (upper right), the surround is purely L, which favors L over the 2:1 ratio when M is in the center. The surround, therefore, has a slightly different cellular concentration that favors local contrast between the two spectrally different cone types, allowing for stronger acuity in the chromatic domain.
3.1.4 Color Vision Processing Models
There are several ways to designate the three cone types shown by their spectral responses in Figure 3.1-2. Some researchers use B, G, and R to represent blue, green, and red peaks in the photon absorption curves, although the peaks are not at those precise colors. Others prefer S, M, and L to denote the short-, medium-, and long-wavelength responses, respectively. This latter designation is more appropriate, and Boynton’s notation is changed here to keep consistency among the three models presented in the next sections. All three describe separate luminance and chromatic channels of information within color vision processing.
Guth Color Model [Guth91]
A model proposed by Guth included luminance and chromatic channels, as shown in Figure 3.1-14. The response of the luminance channel can be summarized as L+M, while the response of the chromatic channel can be described as L - S. A variation of this model mixes chromatic and luminance channels with automatic gain control in an artificial neural network trained by psychophysical data. The localized gain control simulates the spatial-temporal characteristics of the photoreceptor-horizontal cell network. There are numerous research efforts that have used various methods of emulating lateral inhibition for the spatial-temporal feature extraction inherent in the photoreceptor-horizontal cell network.
The first stage of the Guth model is the summation of simulated receptor noise sent to each cone followed by a steady-state self-adapting nonlinear gain control. The second stage is linear combinations of signals divided into two sets of three channels each. The third stage is a nonlinear compression of the second stage channels. One set includes two opponent channels and one non-opponent channel compressed to provide visual discriminations and apparent brightness. The other set includes three channels compressed to provide the appearances of light in terms of whiteness, redness or greenness, and blueness or yellowness [Guth91, Guth96].
This model has been criticized as being a poor emulation of retinal structure since no provision is made for cone proportions, the nature of anatomical connections, and the receptive field structure of ganglion and geniculate (LGN) neurons. Also, it appears to be an artificial neural network, with no physiological basis, which is trained to fit psychophysical data [DeV96]. Nevertheless, the division of color processing into luminance and color channels is an integral part of the model, and the point here is that several of these models include similar arrangements of cone types for these vision channels.
Boynton’s Color Model [Boyn60]
A classic model by Boynton also divides the color vision pathways into luminance and chromatic channels. The luminance channel in his model is described as L+M. The chromatic channels are described as L-M and (L+M) - S. He points out the similarity to numerous other models. The opponent chromatic channels are known from recordings at the horizontal cell layer. The horizontal cells connect to the photoreceptors and perform spatial and temporal photoreceptor signal mixing. The bipolar cells are thought to propagate difference signals in the opponent pathways [Boyn60].
DeValois’ Color Model [DeV88]
A later model proposed by DeValois (Figure 3.1-14) goes into more detail by taking the relative concentrations of the cells into account. It is observed that the concentration of the various cone cells is a function of eccentricity, or distance from the center. In the center, the foveola, there are only L and M cells, in a respective ratio of about 2:1. S cones become more apparent in the parafovea and more peripheral regions of the retina. There is an overall presumed ratio of L:M:S cells of 10:5:1. The normalized response of a neighborhood with these concentrations gives:
DeV_LMS = 0.625L + 0.3125M + 0.0625S.
The variable DeV_LMS represents the response from a typical photoreceptor neighborhood with representative cell population densities. The DeValois color model consists of 4 center-antagonistic-surround channels, 3 representing PP channels and one representing an MP channel. Each of the 4 channels exists in two polarities for a total of 8 channels. The 6 chromatic channels model PP channel responses as
PPL = (+/-) (L - DeV_LMS)
PPM = (+/-) (M - DeV_LMS)
PPS = (+/-) (S - DeV_LMS)
while the luminance channels model the MP channel responses as
MP = (+/-) ((L + M) - DeV_LMS)
The general concept for the Guth and DeValois color vision models is illustrated in Figure 3.1-14.
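The DeValois channel equations above can be sketched directly. The function names and the sample L, M, S responses below are illustrative, and only the positive polarity of each channel is computed:

```python
def dev_lms(L, M, S):
    """Neighborhood response weighted by the presumed 10:5:1 L:M:S
    concentration ratio (weights 10/16, 5/16, 1/16)."""
    return 0.625 * L + 0.3125 * M + 0.0625 * S

def devalois_channels(L, M, S):
    """Positive-polarity PP and MP channel responses; each channel
    also exists in the opposite (negative) polarity."""
    surround = dev_lms(L, M, S)
    return {
        "PP_L": L - surround,
        "PP_M": M - surround,
        "PP_S": S - surround,
        "MP": (L + M) - surround,
    }

# Example: a long-wavelength-dominated stimulus (values hypothetical)
resp = devalois_channels(L=0.8, M=0.4, S=0.1)
```

For this stimulus the PP_L channel is positive while PP_M and PP_S are negative, signaling that the long-wavelength response exceeds the neighborhood average.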
Generic Color-Opponent Model
The Boynton and DeValois models, along with models from Martinez-Uriegas [Mart94] and Chittka [Chittka96], are compared in Figure 3.1-15. All of these (as well as Guth’s) have some sort of L and M cell synergism for encoding luminance and cell antagonism for encoding color. (N and W in the Martinez-Uriegas model stand for narrow and wide receptive field areas; S in the other models is for short-wavelength cones.) Based on these popular models, a simple color model could include a center receptive field contrasted with its local neighborhood. The center receptive field is modeled as a single picture element, or pixel. Ratios of the center pixel with the local neighborhood represent the color-opponent response. The models presented use differences, but this generic model uses ratios. This is plausible since many neurons respond logarithmically with stimulus, and ratios become differences after a logarithmic transformation. The actual responses of bipolar cells are presumed subtractive, but they can be considered divisive since the subtraction follows the logarithmic response of the photoreceptors.
The photoreceptor responses are believed to be logarithmic, while the bipolar cell responses are believed to be subtractive. Due to the logarithmic nature of the photoreceptor response, the bipolar difference signal really reflects a contrast ratio of the photoreceptor with the horizontal-cell-mediated signal (which is a localized spatial-temporal average signal). This is because a logarithmic transform reduces a ratio to a difference. For example, if an M detector responds with an output value of Mo and an L detector responds with an output value of Lo, then the logarithm of the ratio is the same as the difference of the individual logarithm-transformed cell responses. That is,
ln (Mo / Lo) = ln(Mo) – ln(Lo).
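A one-line numerical check of this identity (the Mo and Lo output values are hypothetical):

```python
import math

Mo, Lo = 0.3, 0.1   # hypothetical photoreceptor output values

# After the logarithmic (photoreceptor-like) transform, the contrast
# ratio becomes a difference that a subtractive bipolar cell can compute.
ratio_form = math.log(Mo / Lo)
difference_form = math.log(Mo) - math.log(Lo)
```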
3.1.5 Extracting color from parvocellular color-opponent pathway
Figure 3.1-13 shows on and off parvocellular pathways as a difference between a single photoreceptor cell in the center and a local neighborhood of a few adjacent photoreceptors. A representative photon absorption curve for each receptor (S, M, L, and rod) is shown in Figure 3.1-2. If the neighboring receptors are averaged together, the average response will differ from the center cell’s response because, on average, the response of the center field is different from that of the neighborhood. To illustrate this concept, consider the following example:
Example 3.1, Center-Surround Opponent Processing
Given the photoreceptor spectral response curves in Figure 3.1-2 and a unity-intensity monochromatic stimulus, determine the output of a center-surround antagonistic receptive field. Assume the surround input is made of a ratio of long-wavelength (L) to medium-wavelength (M) to short-wavelength (S) cones of L:M:S = 10:5:1. Assume the center field is only one cell (L, M, or S). Determine the output for a center cell of each cell type (S, M, and L) for a stimulus whose wavelength is
1. 450 nm
2. 500 nm
3. 550 nm
4. 600 nm
Solution:
Using Figure 3.1-2, we estimate the response expected from each of the three cell types for each stimulus. Looking at the normalized values at 450 nm, the S-cone response is about 0.6, the M-cone about 0.3, and the L-cone about 0.1. The estimated measurements are shown in Figure 3.1-16. If the center cell is an S-cone, the center value is 0.6. The surrounding neighborhood is calculated as a weighted average of the different responses. For L:M:S = 10:5:1, the weighted average is
surround_response = $\frac{1}{16}(10(0.1)+5(0.3)+(0.6))=\frac{3.1}{16}=0.194$
and the S-cell center-surround response would be
S cell: center_response – surround_response = 0.6 – 0.194 = 0.406
Similarly, at 450 nm,
M cell: center_response – surround_response = 0.3 – 0.194 = 0.106
L cell: center_response – surround_response = 0.1 – 0.194 = -0.094
Then the same can be done at 500, 550, and 600 nm. The following figure shows an estimated measured response for all three cell types at each of the 4 wavelengths:
Using the weighted average as before, the result for each of the three cell types for each of the four wavelengths are:
Stimulus      Center-surround opponent response
Wavelength      S-cell    M-cell    L-cell
450 nm           0.41      0.11     -0.09
500 nm          -0.53      0.22     -0.06
550 nm          -0.89      0.07      0.06
600 nm          -0.61     -0.31      0.21
Looking at the results of this example, we see positive responses on the forward diagonal and negative responses away from it. This makes sense, as the input wavelengths used for this example increase incrementally, as do the peak response wavelengths going from S to M to L cells. When the input stimulus is near the peak response of the center cell, the weighted average of the local neighborhood is lower since it is influenced by cells not responding as strongly. Of course, this contrast is far more significant in the PP channel than in the MP channel since the PP center field is typically a single cell instead of an aggregate of cells, as in a typical MP channel. The contrast caused by color is therefore much stronger in the PP channel than the MP channel, which is why color is attributed to the PP channel in Figure 3.1-8.
This example assumes an object emitting (or reflecting) energy at a single monochromatic frequency, but most natural objects emit a wide distribution of frequencies across the visible spectrum. Regardless of the chromatic frequency distribution the algorithm results in a single specific response for each input that the higher brain processing can use to perceive a specific color. The color difference of an object against its background is amplified by this contrast, which benefits a species dependent on color perception for survival.
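The arithmetic in Example 3.1 can be sketched as follows. Only the 450 nm case is reproduced, since the curve readings for the other wavelengths come from Figure 3.1-16; the function name is illustrative:

```python
# Normalized (S, M, L) responses read off Figure 3.1-2 at 450 nm
S, M, L = 0.6, 0.3, 0.1

def surround_response(S, M, L, weights=(1, 5, 10)):
    """Weighted neighborhood average for the assumed L:M:S = 10:5:1
    cone ratio (weights listed in S, M, L order)."""
    w_s, w_m, w_l = weights
    return (w_s * S + w_m * M + w_l * L) / (w_s + w_m + w_l)

surround = surround_response(S, M, L)    # 3.1 / 16 = 0.194
opponent = {
    "S": S - surround,   # center S-cone:  0.406
    "M": M - surround,   # center M-cone:  0.106
    "L": L - surround,   # center L-cone: -0.094
}
```

Repeating this for the other three wavelengths reproduces the remaining rows of the table above.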
3.1.6 Gaussian Filters
One of the original models for the outer plexiform layer (photoreceptor-horizontal-bipolar cell interconnection layer) is the Laplacian-of-Gaussian (LoG) filter. For a gaussian function, G, defined in terms of a radius from the center, r, so that r² = x² + y² for cartesian coordinates x and y, G is defined in terms of the standard deviation, σ, as
$G=e^{\frac{-\left(x^{2}+y^{2}\right)}{2 \sigma^{2}}}=e^{\frac{-r^{2}}{2 \sigma^{2}}}$
Gaussian Filter
The LoG filter is defined as the Laplacian (second spatial derivative) of G; for the unit-area gaussian the commonly cited form [Marr82] is
$\nabla^{2} G(r)=\frac{-1}{\pi \sigma^{4}}\left(1-\frac{r^{2}}{2 \sigma^{2}}\right) e^{\frac{-r^{2}}{2 \sigma^{2}}}$
Laplacian-of-Gaussian (LoG) Filter
The Difference-of-Gaussian (DoG) for two gaussians with standard deviations σ1 and σ2 is
$G_{1}-G_{2}=e^{\frac{-r^{2}}{2 \sigma_{1}^{2}}}-e^{\frac{-r^{2}}{2 \sigma_{2}^{2}}}$
Difference-of-Gaussian (DoG) Filter
Under certain conditions the DoG filter can very closely match the LoG filter; Marr and Hildreth found the best approximation at a ratio of about σ2/σ1 ≈ 1.6 [Marr82]. The DoG filter allows more flexibility, since both variances can be modified, giving two degrees of freedom; the LoG filter has only one variance, thus only one degree of freedom.
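The closeness of the two filters can be checked numerically; the sketch below uses the standard unit-area 1D forms for simplicity (σ1 = 1 and σ2 = 1.6, a commonly used ratio, are chosen for illustration) and matches the LoG width to the DoG's zero crossing:

```python
import math

# Numerically compare a 1D DoG (sigma2/sigma1 = 1.6) with a LoG whose zero
# crossing is matched to the DoG's zero crossing.
s1, s2 = 1.0, 1.6
x0 = math.sqrt(2 * math.log(s2 / s1) * (s1 * s2) ** 2 / (s2 ** 2 - s1 ** 2))
sL = x0  # the negated 1D LoG crosses zero at x = sigma

def gauss(x, s):
    # unit-area 1D gaussian
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

xs = [i * 0.01 for i in range(-600, 601)]
dog = [gauss(x, s1) - gauss(x, s2) for x in xs]
neg_log = [(1 / sL ** 2 - x * x / sL ** 4) * gauss(x, sL) for x in xs]

# Cosine similarity between the two sampled shapes (scale-independent).
dot = sum(a * b for a, b in zip(dog, neg_log))
cos = dot / math.sqrt(sum(a * a for a in dog) * sum(b * b for b in neg_log))
print(round(cos, 3))  # very close to 1: the two shapes nearly coincide
```

The high similarity is the practical reason DoG filters are often used in place of the LoG.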
The spectrum of a gaussian is also a gaussian:
$e^{-t^{2} / 2 \sigma^{2}} \Leftrightarrow \sigma \sqrt{2 \pi} e^{-\sigma^{2} \omega^{2} / 2}$
Note that the variance, σ², is in the denominator of the exponent in the time domain and in the numerator of the exponent in the frequency domain. This is shown graphically in Figure 3.1-11, as the broad (large-variance) gaussians result in sharp spectral responses, passing only very low frequencies, while the narrow (small-variance) gaussians pass more of the lower and middle frequencies. The limits are a zero-variance gaussian, which, when normalized to unit area, becomes the impulse function, and an infinite-variance gaussian, which becomes a constant. An impulse function passes all frequencies, and a constant only passes the DC component of the signal, which, in the frequency domain, is represented as an impulse at ω = 0 (repeated at every 2π increment of ω in the discrete-time case, due to the periodicity of the discrete-time Fourier transform):
$\delta(t) \Leftrightarrow 1$ Zero-variance gaussian limit
$1 \Leftrightarrow 2 \pi \delta(\omega)$ Infinite-variance gaussian limit
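The transform pair can be sanity-checked numerically: the spectrum's value at ω = 0 is just the integral of the time-domain gaussian, which the pair says equals σ√(2π). A quick sketch (σ = 1.5 is an arbitrary illustrative choice):

```python
import math

# The gaussian spectrum at w = 0 equals the integral of e^(-t^2 / 2 sigma^2),
# which by the transform pair should be sigma * sqrt(2 * pi).
sigma = 1.5
dt = 0.001
integral = sum(math.exp(-(k * dt) ** 2 / (2 * sigma ** 2))
               for k in range(-10000, 10001)) * dt
print(round(integral, 4), round(sigma * math.sqrt(2 * math.pi), 4))
```

Both printed values agree to the displayed precision, confirming the DC value of the spectrum.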
3.1.7 Wavelet Filter Banks and Vision Pathways
The two primary vision pathways are the magnocellular pathway (MP) and the parvocellular pathway (PP). Each neuronal response in the MP represents a local average over a large receptive field. Each neuronal response in the PP represents local detail in a smaller receptive field. Thus, the MP and PP decompose the natural input image into local average and local detail components, respectively.
Similarly, digital images can also be decomposed into a set of averages and another set of details using quadrature mirror filtering (QMF). This method of image analysis (breaking apart images into components) and synthesis (reconstructing images from the components) results in a series of averaging components and another series of detailing components [Strang96]. QMF is a special case of sub-band coding, where filtered components represent the lower and upper frequency halves of the original signal bandwidth. If the analyzing filter coefficients are symmetric, then the synthesizing components are mirrored with respect to the half-band value, thus the term quadrature mirror. The structure of such a wavelet analyzer and synthesizer is shown in Figure 3.1-17. The low pass filter (LPF) and high pass filter (HPF) are similar in functionality to the MP and PP in time, space, and color domains. A variety of applications have emerged from the QMF.
To illustrate QMF, the following example and exercise decompose a sequence into its averages (after the LPF) and details (after the HPF). The sequence is down-sampled after each pass through the LPF; all LPFs are the same and all HPFs are the same (technically, the reconstruction filters are adjoint filters, but they are the same for real-valued coefficients).
Example 3.2, 1D QMF Analysis and Synthesis
1. Using the discrete Haar wavelets [0.5 0.5] and [0.5 -0.5] for LPF and HPF respectively, show how to decompose the following sequence into one average value and a set of detailed values.
2. Reconstruct the original sequence from the calculated components to verify correct decomposition.
3. Compare the energy of the original sequence with the energy of the components.
x[n] = {12 16 8 10 10 18 13 17}
Solution:
Figure 3.1-18 shows the QMF symmetry of the PSD for the given LPF and HPF.
Part a:
We now filter the input sequence with the LPF and HPF (and stop once we have the same number of values, thus discarding the last value). Using the graphical method of convolution, flipping the LPF (which is symmetrical) and passing under x[n], taking dot product, and shifting results in
x[n]: 12 16 8 10 10 18 13 17
LPF[-n]: 0.5 0.5 = 6
0.5 0.5 = 14
0.5 0.5 = 12
0.5 0.5 = 9
0.5 0.5 = 10
0.5 0.5 = 14
0.5 0.5 = 15.5
0.5 0.5 = 15
First LPF result is {6 14 12 9 10 14 15.5 15}
Down-sampling LPF results gives {14 9 14 15}, which will be the input to the next LPF stage.
Similarly, using the graphical method of convolution, flipping the HPF and passing under x[n], taking dot product, and shifting results in
x[n]: 12 16 8 10 10 18 13 17
HPF[-n]: -0.5 0.5 = 6
-0.5 0.5 = 2
-0.5 0.5 = -4
-0.5 0.5 = 1
-0.5 0.5 = 0
-0.5 0.5 = 4
-0.5 0.5 = -2.5
-0.5 0.5 = 2
First HPF result is {6 2 -4 1 0 4 -2.5 2}
Down-sampling HPF results gives {2 1 4 2}, which will be saved as detailed components.
To determine the results of the second stage we repeat the LPF and HPF on the down-sampled LPF results of the first stage:
Down-sampled first-stage LPF results: 14 9 14 15
LPF[-n]: 0.5 0.5 = 7
0.5 0.5 = 11.5
0.5 0.5 = 11.5
0.5 0.5 =14.5
Second LPF result is {7 11.5 11.5 14.5}
Down-sampling gives {11.5 14.5}, which will be the input to the next LPF stage.
Down-sampled first-stage LPF results: 14 9 14 15
HPF[-n]: -0.5 0.5 = 7
-0.5 0.5 = -2.5
-0.5 0.5 = 2.5
-0.5 0.5 =0.5
Second HPF result is {7 -2.5 2.5 0.5}
Down-sampling gives {-2.5 0.5}, which will be saved as detailed components.
To determine the results of the third stage we repeat the LPF and HPF on the down-sampled LPF results of the second stage. Subsequent down-sampling results in one value, which will be saved:
Down-sampled second-stage LPF results: 11.5 14.5
LPF[-n]: 0.5 0.5 = 5.75
0.5 0.5 = 13
Third LPF result is {5.75 13}
Down-sampling gives the value 13. This value represents the sequence average.
Down-sampled second-stage LPF results: 11.5 14.5
HPF[-n]: -0.5 0.5 = 5.75
-0.5 0.5 = 1.5
Third HPF result is {5.75 1.5}
Down-sampling gives the value 1.5, and the analysis is complete.
A summary of the filter outputs is listed here; the values retained after down-sampling are those in the even positions of each list:
First LPF result: {6 14 12 9 10 14 15.5 15}
First HPF result: {6 2 -4 1 0 4 -2.5 2}
Second LPF result: {7 11.5 11.5 14.5}
Second HPF result: {7 -2.5 2.5 0.5}
Third LPF result: {5.75 13}
Third HPF result: {5.75 1.5}
The QMF components of x[n] are the down-sampled HPF results and the final average, which is the sequence {2 1 4 2 -2.5 0.5 1.5 13}, where the last value is the sequence average.
Part b:
For the purposes of this text, to illustrate reconstruction from the components we will simply subtract each detail from the corresponding average and then add it, showing the original sequence can be reconstructed. The final detail, 1.5, will be subtracted from the final average, 13, to give 11.5, and then the same two values will be added to give 14.5:
Reconstructing second stage: { (13-1.5) (13+1.5)}
= {11.5 14.5}
Then the second-stage down-sampled detail, the sequence {-2.5 0.5} will be used to subtract and add to the second stage average values just determined above:
Reconstructing first stage: { (11.5-(-2.5)) (11.5+(-2.5)) (14.5-0.5) (14.5+0.5)}
= {14 9 14 15}
and the original sequence is determined from those values minus and then plus the down-sampled first-stage details:
x[n] = {14-2 14+2 9-1 9+1 14-4 14+4 15-2 15+2}
= {12 16 8 10 10 18 13 17}
Part c:
One of the benefits of decomposition is the compaction of signal energy. The total energy is the sum of the squares of the components, which results in
Energy in x[n]: 12² + 16² + 8² + 10² + 10² + 18² + 13² + 17² = 1446
Energy in QMF components of x[n]: 2² + 1² + 4² + 2² + (-2.5)² + 0.5² + 1.5² + 13² = 202.75
As sequences become larger and signals become multidimensional (such as images or image sequences) the comparison can be far more dramatic (orders of magnitude).
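All three parts of the example can be automated in a few lines; a minimal Python sketch (assuming, as in the example, a sequence length that is a power of two, with down-sampling keeping the second value of each pair):

```python
def haar_analysis(x):
    # Repeated Haar filtering: the LPF [0.5 0.5] gives pairwise averages and
    # the HPF [0.5 -0.5] gives pairwise half-differences; down-sampling keeps
    # one value per pair. Averages feed the next stage; details are saved.
    details, avg = [], list(x)
    while len(avg) > 1:
        pairs = list(zip(avg[0::2], avg[1::2]))
        details += [(b - a) / 2 for a, b in pairs]
        avg = [(a + b) / 2 for a, b in pairs]
    return details + avg          # last value is the sequence average

def haar_synthesis(c):
    # Reverse the analysis: repeatedly expand the averages by subtracting
    # and adding the saved details, finest details last.
    avg, details = [c[-1]], list(c[:-1])
    while details:
        d, details = details[-len(avg):], details[:-len(avg)]
        avg = [v for a, hd in zip(avg, d) for v in (a - hd, a + hd)]
    return avg

x = [12, 16, 8, 10, 10, 18, 13, 17]
comps = haar_analysis(x)
print(comps)                       # [2.0, 1.0, 4.0, 2.0, -2.5, 0.5, 1.5, 13.0]
print(haar_synthesis(comps) == x)  # True
print(sum(v * v for v in x), sum(v * v for v in comps))  # 1446 vs 202.75
```

The printed components, reconstruction, and energies match parts a, b, and c of the example.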
Exercise 3.1, 1D QMF Analysis and Synthesis
Using the discrete Haar wavelets [0.5 0.5] and [0.5 -0.5] for LPF and HPF respectively, show how to decompose the following sequence into one average value and a set of detailed values.
x[n] = {2 22 4 12 0 16 0 4}
Answer: QMF Components of x[n]: {10 4 8 2 -2 -3 -2.5 7.5},
where the last value is the sequence average.
Vision pathways (MP and PP) and QMF filter banks therefore both break the input image signal into high- and low-frequency components. The MP and PP are further augmented by the rod-system pathway. Rod cells are highly interconnected, and although the rods themselves are essentially saturated in daylight conditions, the rod bipolar cells are mediated by neighboring cone cells. The overall effect is a spatial low-pass filter of the mosaic image.
A model of the low frequency rod system filter can be combined with a model of the PP to create a pair of filters whose spectral response crosses at one-fourth the sampling frequency, or half the Nyquist-limited frequency. A carefully chosen pair can give a striking resemblance to typical filter pairs chosen for QMF applications. A model of the MP can be substituted for the low frequency filter, but the spectral response will diminish with very low frequencies.
3.1.8 Coarse Coding and the Efficient Use of Basis Functions
Natural vision systems process information in space, time, and color domains. In each of these domains we find filters that are typically few and relatively coarse in bandwidth. There are essentially only four chromatic detector types, three temporal channels, and three spatial channels. The responses of these elements must be broad in scope to cover their portion of the data space. For example, in daytime conditions only three detector types have varying responses. As a minimum each type must cover one-third of the visible spectrum.
Coarse coding resembles the more common wavelet applications typified by complementary coarse low pass and high pass filters. QMF signal reconstruction capability is a practical demonstration of extracting specific spectral detail from only two broadband filters. An interesting corollary to this line of research is that the behavior of such synthetic applications may lead to a deeper understanding of natural information processing phenomena.
3.1.9 Nonorthogonality and Noncompleteness in Vision Processing
Sets of wavelets can be subdivided into orthogonal or nonorthogonal and complete or noncomplete categories. A set of functions is orthogonal if the inner product of any two different functions is zero, and complete if no nonzero function in the space is orthogonal to every vector in the set. Orthogonality provides computational convenience for signal analysis and synthesis applications. Completeness ensures the existence of a series representation of each function within the given space. Orthogonality and completeness are desired properties for wavelet bases in compression applications.
However, biological systems are not concerned with information storage for perfect reconstruction. Any machine-vision application requiring some action to be taken based on an understanding of the image content also fits this general description. In fact, many biological processes can be modeled by sets of functions that are nonorthogonal [Daug88]. The task is processing information to take some action, not processing information for later reconstruction. Using nonorthogonal filters leads to a redundancy of information covering the span of information. The redundancy of vision filters is balanced by the need for efficiency, simplicity, and robustness. Information redundancy results in unnecessary hardware and interconnections, but redundancy may often be required to sufficiently span the information space inherent in the environment. The cost of supporting the redundancy may be less significant than the benefit of using simpler processing elements that degrade gracefully. Since Gaussian-based filters are close to more mathematically elegant filters (such as the Laplacian-of-Gaussian), pertinent information is retained well (though not perfectly).
3.2 Applications Inspired by Natural Photo-sensory Systems
The first photosensory application is the author’s own idea to use gaussian filters to emulate the low-pass spatial-temporal filters of the photoreceptors and horizontal cells, and to do so at three levels, each with an inherent delay that is used for elementary motion detection (EMD) models. The three different levels allow for modeling the well-known center-surround contrasting signals (propagated by bipolar cells) that comprise the magnocellular and parvocellular pathway signals. They also allow for two different EMDs at each location. The additional EMD gives a degree of freedom needed to determine edge velocity.
The next group of research efforts focuses on modeling the outer plexiform layer (OPL) of the retina (photoreceptors, horizontal cells, and bipolar cells) using VLSI circuits. Biology is made of material with a natural plasticity for adapting to the organism’s needs. Silicon is brittle, but very reliable as a technology for implementing the behavior of the OPL. Following those efforts are ones combining the silicon-retina concepts with optic flow for a more comprehensive adaptive pixel that better emulates the OPL.
A few examples of exploiting natural foveal vision are then presented. The densely packed photoreceptors in the very center of the retina provide much better spatial acuity than the periphery, where photoreceptors are less densely packed. This can be misleading, as our ability to see detail in the very center far surpasses that of the periphery, and photoreceptor packing is only a small part of that. There are about five times as many rod cells as cone cells in the retina, but none in the fovea (thus a faint star may disappear when we look right at it). Also, cells are more interconnected in the periphery, affording better temporal resolution at the cost of spatial resolution. Most fovea-inspired applications concern the higher resolution in a region of interest and not the representative rod and cone cell distributions or the non-uniform level of cell interconnections.
The following group is focused on asynchronous event-based signaling, which, like biology, results in a spike (or action potential) when a significant event happens (or a threshold is exceeded). Diverging from biology into a possible realm of much higher signal-processing capability is the notion of doing the same OPL signal processing with photonics rather than electronics. This would be a significant deviation from biology, but, as pointed out before, researchers often use biology to glean novel ideas and are not necessarily attempting to duplicate it. Another frontier being pursued is the incorporation of polarization information in vision systems, as indicated in the final section.
3.2.1 Combined EMD and Magno/Parvo channel model [Brooks18]
The Hassenstein-Reichardt elementary motion detection (HR-EMD) model [Hass56] reviewed earlier cannot accurately measure optic flow velocity. A simplified version of the HR-EMD is shown in Figure 3.2.1-1. There is an optimal speed for the peak response of the EMD based on the design of the delay element. If a weak spatial contrast moves across the image at that speed, the response can be moderate, the same as for a stronger spatial contrast moving at a sub-optimal speed. Another information dimension is therefore needed to determine edge velocity; one approach is to measure the power spectral density (PSD) of the image and combine that with a global EMD response in the form of a look-up table [Wu12], although a PSD measurement of an image is not known to exist in biology. These and similar approaches have used the delay inherent in traditional low-pass filters (LPFs) such as Butterworth filters (popular for being maximally flat in the pass band). Again, Butterworth filters are not known in biology. The best model of LPFs in biology is the gaussian filter, which is not popular in conventional applications due to properties such as non-orthogonality. However, gaussian filters occur naturally in biology due to ion leakage, charge-sharing among receptor cells, and the excitatory and inhibitory signals of adjacent layers of neurons.
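The speed-versus-contrast ambiguity can be seen directly in the simplified correlator; a minimal sketch (a discrete one-sample delay stands in for the LPF delay, an assumption for illustration only):

```python
def hr_emd(left, right, delay=1):
    # Hassenstein-Reichardt correlator: each arm multiplies one detector's
    # delayed signal with the neighboring detector's current signal; the two
    # arms are subtracted, so the sign of the sum encodes direction.
    return sum(left[t - delay] * right[t] - right[t - delay] * left[t]
               for t in range(delay, len(left)))

# An edge passing left-to-right gives a positive response...
print(hr_emd([0, 1, 0, 0], [0, 0, 1, 0]))       # 1
# ...the same edge right-to-left gives a negative response...
print(hr_emd([0, 0, 1, 0], [0, 1, 0, 0]))       # -1
# ...and a weaker edge at the same speed gives a weaker response,
# which is the speed/contrast ambiguity described in the text.
print(hr_emd([0, 0.5, 0, 0], [0, 0, 0.5, 0]))   # 0.25
```

The weak-contrast output (0.25) could equally have come from a strong contrast at a sub-optimal speed, which is why a second information dimension is needed.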
Gaussian filters can also model the magnocellular and parvocellular pathways (MP and PP); each channel of the MP or PP can be modeled as a difference-of-gaussian filter between the center receptor (or group of receptors) and the surrounding receptors, referred to as center-surround antagonistic signals. To model either the MP or the PP, two gaussians are needed: a smaller-variance gaussian for the center field and a larger-variance gaussian for the surrounding field. Possibly (a subject for future experimentation) both channels can be modeled with a total of three gaussian filters, where the variance of the surrounding PP signal is the same as the variance of the center MP signal. These three gaussian filters are identified in Figure 3.2.1-2 as having high, medium, and low cutoff frequencies. Keep in mind these are spatial-temporal filters, so the frequencies are multidimensional, covering both time and space. In the primate vision system these spatial-temporal filters would be implemented at each receptor location by the effects of weak inter-photoreceptor connections, the lateral inhibition of the horizontal cells, the propagation of the bipolar cells, and the further mediation by the amacrine cells as the signal is passed through the ganglion cells.
Spatial-temporal gaussian filter effects are well known in vision. The three gaussians in Figure 3.2.1-2 provide the necessary information for both MP and PP channel modeling as well as two separate EMD channels, referred to in the figure as the Parvo EMD and Magno EMD. Having two separate EMD channels gives the additional degree-of-freedom needed for object velocity determination. The initial LPF (with high cutoff frequency) is used as the ‘receptor’ signal in Figure 3.2.1-1 for both EMDs, and the delayed signal is the output of the second LPF (medium cutoff) for the Parvo EMD while the delayed signal is the output of the third LPF (low cutoff) for the Magno EMD.
The object velocity is a function of location in the image, and ambiguity would be expected if only one EMD measurement were available. However, in this model two independent EMD outputs are available, so the object velocity would be determined by some combination of the responses of the Parvo EMD and Magno EMD. Another subject for future experimentation would be how the signals are combined to give the unique velocity. This is very consistent with the coarse coding concepts we see throughout biological sensory systems (and likely higher brain function).
Figure 3.2.1-3 shows how the two separate EMDs can be combined to give a specific object motion velocity at the given location in the receptive field. The output of the left and right receptors in this figure would be the output of the high cutoff LPF of Figure 3.2.1-2. The output of delays D1 and D2 correspond to the outputs of the medium cutoff LPF and the low cutoff LPF of Figure 3.2.1-2, respectively.
The effectiveness of this magno/parvo EMD model can be simulated in MATLAB or another visualization tool. Let γ control the amount of spatial spreading between frames (limited to a value between 0 and 1): the pixel value retained is γ times the current value, added to (1-γ) times the average of the 4 nearest-neighbor current pixel values. Let α control the amount of temporal smoothing: the stored value is multiplied by α and added to (1-α) times the current spatially processed value. The spatial-temporal effects are provided by the horizontal cells, so we reference that signal as Hi,j, where i is the row index and j is the column index. Letting Pi,j represent the pixel value (the modeled receptor value) at the ith row and jth column, and using T as a temporary variable (for clarity), we have the following update algorithm:
$T=\gamma P_{i, j}+0.25(1-\gamma)\left(H_{i-1, j}+H_{i, j-1}+H_{i, j+1}+H_{i+1, j}\right)$
$H_{i, j}=(1-\alpha) T+\alpha H_{i, j}$
The constants γ and α represent levels of spatial smoothing and temporal smoothing respectively, which in both cases gives the low-pass filtering effects of a gaussian filter (simultaneously in both time and space domains). These can be made adaptive once a performance metric is determined. Simulating the three filters in Figure 3.2.1-2 is accomplished by tapping the results at differing numbers of iterations as the visual information is processed. A few iterations implement a high cutoff (spatial-temporal) frequency, more iterations would give a medium cutoff frequency, and even more iterations a lower cutoff frequency. There are several degrees-of-freedom for experimentation, including the spatial and temporal smoothing constants along with the number of iterations for implementing the gaussian filters.
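The update above can be sketched in a few lines of NumPy (boundary pixels are handled here by edge clamping, an assumption the text does not specify):

```python
import numpy as np

def update_H(H, P, gamma=0.5, alpha=0.5):
    # One iteration of the horizontal-cell update: spatial spreading toward
    # the 4-nearest-neighbor average (controlled by gamma), then temporal
    # smoothing against the stored value (controlled by alpha).
    Hp = np.pad(H, 1, mode="edge")               # clamp the borders
    neighbors = (Hp[:-2, 1:-1] + Hp[2:, 1:-1] +  # up + down
                 Hp[1:-1, :-2] + Hp[1:-1, 2:])   # left + right
    T = gamma * P + 0.25 * (1 - gamma) * neighbors
    return (1 - alpha) * T + alpha * H

# A bright spot spreads outward and builds up over iterations, giving the
# low-pass behavior in both space and time; tapping after more iterations
# corresponds to a lower cutoff frequency.
P = np.zeros((5, 5)); P[2, 2] = 1.0
H = np.zeros_like(P)
for _ in range(3):
    H = update_H(H, P)
print(H[2, 2] > H[2, 1] > H[2, 0] >= 0)   # True: spreading from the center
```

A uniform input is a fixed point of the update, which is the expected behavior of a smoothing (DC-preserving) filter.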
3.2.2 Autonomous hovercraft using insect-based optic flow [Roub12]
It is well known that insects such as honeybees navigate their environment by optic flow cues in the visual field. Insect-inspired optic flow was demonstrated in a small hovercraft robot [Roub12] that autonomously followed a wall at a given distance and navigated a tapered corridor. The design is focused on obstacle avoidance in the azimuth plane, with four 2-pixel optic flow (OF) sensors at 45° and 90° on both the left and right sides. As seen in experiments with honeybees [Srini11], the velocity decreases as the craft successfully navigates through a tapered corridor, a natural consequence of maintaining constant OF as the sides get closer. The honeybee navigation was presumed to be the result of balancing OF on both sides of the insect.
The hovercraft demonstrated the ability to adjust forward speed and clearance from the walls without rangefinders or tachometers. However, a magnetic compass and accelerometer were used to prevent movement in the yaw-axis direction so that the craft continues to move forward. This is necessary since the experiment focused on the OF cues and the ability to navigate the corridor.
The algorithm was developed in simulation and implemented on the hovercraft. All 4 sensors (one at 45° and one at 90° from the forward direction on each side) were used in the navigation algorithm the authors call the dual lateral optic flow regulation principle. It offers a more comprehensive suggestion as to how honeybees navigate their environment than simply balancing optic flow from the two sides. This is an example of a bio-inspired sensor being used to help biologists better understand how honeybees navigate their environment.
3.2.3 Autonomous hovercraft using optic flow for landing [Dup18]
In this effort, 12 optic flow pixel sensors implementing threshold-based motion detection are compared to a more traditional set of 12 optic flow pixels implementing a cross-correlation method. The cross-correlation method is more robust, but also more computationally complex. If a threshold method suffices, then the complexity is greatly reduced. The drawback is that the performance is strongly dependent on the threshold, which can vary from scene to scene and between differing illumination conditions.
The application in mind is a hovercraft using optic flow sensing on the ventral (under) side of the craft to ensure smooth landing. As an insect gets closer to the landing point, the optic flow underneath increases since the image texture is getting closer. If the insect keeps the optic flow constant, then its speed must decrease as it approaches, until the insect is at rest on the landing surface. To measure performance, the optic flow sensor was fixed and a textured visual field was passed in front of it.
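A short derivation makes the constant-optic-flow landing strategy concrete (a sketch, not taken from [Dup18]; it assumes a purely vertical descent at speed v = -dh/dt over ground at height h, for which the ventral OF magnitude is ω = v/h). Holding ω constant gives

$\frac{d h}{d t}=-\omega h \quad \Rightarrow \quad h(t)=h_{0} e^{-\omega t}, \quad v(t)=\omega h_{0} e^{-\omega t}$

so height and descent speed decay together exponentially: the craft approaches the surface ever more slowly, reaching (near) zero speed at touchdown without ever measuring height or speed directly.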
3.2.4 Silicon Retina [Maha89]
The silicon retina [Maha89] is designed to emulate the initial processing layers of the retina, which include the photoreceptors, horizontal cells, and bipolar cells. An array of 48 x 48 pixels was fabricated using 2.0 µm design rules (width of conducting path) and pixel circuits about 109 x 97 µm in size. A hexagonal resistive grid is used so that local averages of pixels are more highly influenced by the six nearest neighbors than those farther away.
The triad synapse (connecting these three cell types) is modeled in silicon as a follower-connected transconductance amplifier. A capacitor stores the spatial-temporal signal of the photoreceptor, and an amplifier propagates the difference of this signal and the photoreceptor signal, modeling the bipolar cell's center-surround antagonistic signal. The photodetector circuit is a bipolar transistor biased with a depletion region, responding logarithmically to the incoming light intensity, which corresponds to physiological recordings of natural photoreceptors.
The design was later revised with an adaptive photoreceptor circuit modulated by three feedback paths and individual time constants. The gain of the receptor is adaptive, and the circuit was more robust to transistor mismatches and temperature drifts than the original silicon retina. Another improvement was the incorporation of the edge signal position without the need for off-chip subtraction [Maha91].
3.2.5 Neuromorphic IR analog retina processor [Mass93]
Building on the silicon retina design, the Air Force Research Lab (AFRL, Eglin AFB) funded the development of an infrared sensor. One of the problems emulating biological retinae with VLSI technology is that the area required to model the time constants observed in biology makes the design of a 2D array of pixels unreasonably large. This IR sensor design used switched-capacitor technology with small capacitors to emulate the time constants of larger capacitors. Although such technology has no biological counterpart, it was successful in achieving biomimetic spatial-temporal response rates. The drawback of this technology is additional noise caused by the 10 kHz switching speeds required for the design.
A 128 x 128 array of Indium Antimonide (InSb) detector elements at 50 µm pitch was connected 4-to-1 to create a 64 x 64 array at 100 µm pitch. This detector plane was bonded to a readout chip where each pixel used the 100 µm pitch area for the switched-capacitor and readout circuitry. The InSb diodes were connected in photovoltaic mode and responded logarithmically, as biological photoreceptors do. CMOS transistors configured as switched capacitors were used between pixel nodes to provide the spatial-temporal smoothing inherent in the laterally connected horizontal-cell layers of the retina.
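Why switched capacitors buy such large time constants in a small area can be sketched with the standard equivalent-resistance relation for a switched capacitor (only the 10 kHz switching speed is from the text; the capacitor values below are assumed purely for illustration):

```python
# A capacitor C switched at frequency f moves a charge of C*V each cycle,
# an average current of C*V*f, so it behaves like a resistor R_eq = 1/(f*C).
f_sw = 10e3      # 10 kHz switching speed (from the text)
C_sw = 1e-12     # 1 pF switched capacitor (assumed value)
C_node = 10e-12  # 10 pF node capacitor (assumed value)

R_eq = 1.0 / (f_sw * C_sw)   # ~100 Mohm from a tiny 1 pF capacitor
tau = R_eq * C_node          # ~1 ms time constant in very little silicon area
print(R_eq, tau)
```

A 100 MΩ resistor realized directly in silicon would be impractically large, which is the area argument made above.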
The result was a medium-wave IR (MWIR) camera with localized gain control. The camera captured imagery of a gas torch in front of a lamp with a large flood light bulb. Conventional cameras at that time would saturate in all the lighted areas unless a global gain control were in place, in which case the objects in the darker parts of the image would not be seen. In this experiment the filament of the light bulb, the outline of the torch flame, as well as the object in the darker parts of the image could be clearly seen. This is the benefit of localized gain control of natural biological retinae and bio-inspired sensors that model them.
3.2.6 Michaelis-Menten auto-adaptive pixels M2APix [Maf15]
The vision system of primates (and other animals) provides responses over a wide range of luminosities while at the same time providing good sensitivity to local contrast changes, giving the vision system the ability to simultaneously distinguish a bright object against a bright background in one part of the image and a dark object against a dark background in another. The wide range of luminosities is facilitated by the opening and closing of the iris as well as the natural logarithmic response of the photoreceptors. The good sensitivity is facilitated by the lateral inhibition of the post-photoreceptor processing neurons, the horizontal cells.
Many machine vision designers have sought to develop wide dynamic range sensors and have looked to the natural vision system for inspiration. The Delbruck adaptive pixel [Del94] used the logarithmic photoreceptor circuit of the original silicon retina [Maha88] and is used in comparison with the Michaelis-Menten auto-adaptive pixel (M2APix) proposed here [Maf15].
The Michaelis-Menten equation [Mich1913] was derived to model enzyme kinetics in biochemistry. It describes the rate of an enzymatic reaction in terms of the maximum rate achieved when the substrate is saturated and a constant representing the substrate concentration at which the reaction rate is half the maximum [WikiMM]. It is adapted in [Maf15] to describe the photoreceptor’s response, V, in terms of the maximum response at illumination saturation, Vm, the light intensity, I, and an adaptation parameter, σ, given in [Maf15] as
$V=V_{m} \frac{I^{n}}{I^{n}+\sigma^{n}}$
Substituting V with the enzymatic reaction rate, Vm with the maximum rate when the substrate concentration is saturated, I with the substrate concentration, and σ with the Michaelis constant, which is the substrate concentration when the rate is half Vm, and letting n = 1 this equation reduces to the original biochemistry equation [WikiMM].
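A sketch of the adapted response function makes the half-maximum property easy to see: V = Vm/2 exactly when I = σ, regardless of n, which is the Michaelis-constant property carried over from biochemistry (the parameter values below are assumed purely for illustration):

```python
def m2apix_response(I, Vm=1.0, sigma=0.01, n=2.0):
    # Michaelis-Menten / Naka-Rushton form: rises from 0, saturates at Vm
    # for large I, and reaches exactly Vm/2 when I equals sigma.
    return Vm * I ** n / (I ** n + sigma ** n)

# Half-maximum at I = sigma:
print(m2apix_response(0.01))                                       # 0.5
# With n = 1 the equation reduces to the original enzyme-kinetics form:
print(round(m2apix_response(0.03, Vm=1.0, sigma=0.01, n=1.0), 3))  # 0.75
```

Shifting σ with the ambient light level is what lets the pixel re-center its 2-decade contrast response anywhere within the 7-decade luminosity range.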
The Delbruck adaptive pixel provides a 7-decade range of light adaptation and a 1-decade range of contrast sensitivity. There were some issues raised concerning steady-state responses increasing with light intensity and inconsistent transient responses under large contrast sensitivity. Other methods using resistive grids to emulate horizontal cell networks resulted in 4 decades of sensitivity but required external voltage sources to set bias points [Maf15].
A photoreceptor array of 12 M2APix pixels and 12 Delbruck pixels was fabricated and used for comparison. The 2 x 2 mm silicon retina was fabricated into a 9 x 9 mm package with the two 12-pixel arrays side by side. The experimental results confirmed that the M2APix pixels responded to a 7-decade range of luminosities with a 2-decade range of contrast sensitivities. The advantage over the Delbruck adaptive pixel is that it produces a steadier contrast response over the 7 decades of luminosities, so the least significant bit (LSB) corresponds to a lower value and therefore a better contrast resolution [Maf15].
3.2.7 Autonomous hovercraft using insect-based optic flow [Van17]
A bio-inspired eye is designed to allow an aerial vehicle passive navigation through corridors and smooth landing by having vision sensors responding to optic flow in the front, two sides, and the bottom. A given example would be a quadrotor exploring a building by keeping a certain distance from the walls.
Called the “OctoM2APix,” the 50 g sensor includes 8 Michaelis-Menten auto-adaptive pixels (M2APix): 3 measuring optic flow (OF) on the left side, 3 measuring OF on the right, and 2 on the bottom measuring OF on the ground underneath the vehicle. The center pixel on each side measures OF at right angles to the heading; one of the remaining side pixels points between the side and the front, while the other points between the side and the rear. Each side covers about 92° in the horizontal plane. The objective is to allow the vehicle to correct its heading based on the differing OF measurements of the three pixels on either side.
The experimental results were obtained with the OctoM2APix sensor stationary and a textured surface moving next to it at various angles with respect to the (simulated) heading, or the direction of the front of the sensor. The experimental headings included 0°, where the vehicle would be following a wall-like surface; +20°, where the (non-simulated) vehicle would eventually collide with the surface if not corrected; -20°, where the vehicle would be separating from the surface; and -45°, where the vehicle would be separating at a faster rate. The OF on the forward and rear side pixels should offset each other when the heading is parallel to the surface, so the difference between the forward and rear side pixels provides a cue for the heading with respect to the wall, allowing the vehicle to adjust its heading if the goal were to follow the wall. The experimental results are shown by calculating heading from the center and forward pixels, the center and rear pixels, and the forward and rear pixels, the latter being the best estimate when all three sensors had the surface in view. This makes sense since that pair has the widest angular separation.
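The wall-following cue can be illustrated with a simplified planar model (our own sketch, not the notation or method of [Van17]): a wall along the x-axis at perpendicular distance D, the vehicle translating at speed v with heading angle psi relative to the wall, and a pixel whose viewing ray makes angle beta with the wall seeing translational optic flow (v/D)·sin(beta)·sin(beta − psi).

```python
import math

# Translational optic flow seen by a pixel viewing a wall at angle beta,
# for a vehicle moving at speed v with heading psi relative to the wall.
# Geometry and parameter names are our own simplifying assumptions.
def optic_flow(beta, psi, v=1.0, D=1.0):
    return (v / D) * math.sin(beta) * math.sin(beta - psi)

# Difference between the rear and forward side pixels, placed symmetrically
# about the broadside (beta = 90 degrees) direction.
def forward_rear_difference(psi, delta=math.radians(45)):
    front = optic_flow(math.pi / 2 - delta, psi)
    rear = optic_flow(math.pi / 2 + delta, psi)
    return rear - front

# Parallel heading: the forward and rear pixel OFs offset each other.
assert abs(forward_rear_difference(0.0)) < 1e-12
# A nonzero heading angle breaks the symmetry and provides a steering cue.
assert forward_rear_difference(math.radians(20)) != 0.0
```

The sign and magnitude of the difference then indicate which way, and how strongly, to steer to hold a parallel course.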
3.2.8 Emulating fovea with multiple regions of interest [Azev19]
There are many applications where image resolution is high in some region of interest (ROI) and low in the remaining portion of the image. This crudely resembles a foveated image, but could be argued to be bio-inspired by the fovea. In both natural and synthetic designs, the idea is to conserve computational resources by using a higher sampling in an ROI (the center of gaze in biology) and lower sampling elsewhere. Non-uniform sampling is seen in all natural sensory systems, as biology has adapted to the different levels of relevance of natural stimuli (passive or active). In many commercial and military applications multiple ROIs could be employed, but this is rare in biology, if it exists at all. (Vision systems have a single fovea, but it could be argued that there are multiple regions of higher sampling, for example, in the sense of touch, as the well-known somatotopic map would suggest.)
A vehicle tracking system designed for self-driving cars uses multiple ROIs and claims fovea inspiration. The subsequent image processing is developed using deep-learning neural networks, which again implies some level of bio-inspiration. The system uses vehicle way-points, which are the expected future locations of the vehicles, and continually crops the image looking for other vehicles. This is analogous to drivers looking down the road they are traveling. The experimental results claimed an improvement of long-range car detection from 29.51% to 63.15% over using a single whole image [Azev19]. As pointed out before, many researchers are more focused on solving engineering problems (as they should be) and not too concerned with the level of biomimicry. Therefore, there can be a chasm between the levels of biomimicry among various efforts claiming bio-inspired designs.
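A minimal sketch of the multi-ROI idea, with hypothetical way-point coordinates and crop size (not the network or parameters of [Azev19]):

```python
# Crop a high-resolution window around each expected vehicle way-point;
# the rest of the frame would be processed at lower resolution.
def crop_roi(image, center, size=32):
    h, w = len(image), len(image[0])
    r, c = center
    r0 = max(0, min(r - size // 2, h - size))   # clamp to image bounds
    c0 = max(0, min(c - size // 2, w - size))
    return [row[c0:c0 + size] for row in image[r0:r0 + size]]

frame = [[0] * 640 for _ in range(480)]          # dummy grayscale frame
waypoints = [(100, 200), (240, 320)]             # hypothetical way-points
rois = [crop_roi(frame, wp) for wp in waypoints]
assert all(len(r) == 32 and len(r[0]) == 32 for r in rois)
```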
3.2.9 Using biological nonuniform sampling for better virtual realization [Lee18]
There are other applications that are not mimicking biology but considering the high spatial acuity of the fovea. For example, a head-mounted display can consider the gaze direction for visualization of 3D scenes with multiple layers of 2D images. The goal is to improve received image quality and accuracy of focus cues by taking advantage of the loss of spatial acuity in the periphery without the need for tracking the subject’s pupil [Lee18]. Another example is a product (called Foveator) that tracks the motion of the pupil and limits high-resolution rendering only in the direction needed. The intended application is for improved virtual reality (VR) experience [see www.inivation.com]. These ideas leverage natural design information to relax requirements of a visual system to avoid providing more than necessary as opposed to using the design of natural fovea to inspire newer designs.
3.2.10 Asynchronous event-based retinas [Liu15a]
Conventional camera systems produce pixelated, digitized, framed pictures for subsequent spatial, temporal, and chromatic processing. Natural vision systems send asynchronous action potentials (spikes) when the neuronal voltage potential exceeds a threshold, which can happen at any time, instead of on the leading edge of a digital clock cycle. The information is thus gathered asynchronously, and these information spikes only occur when there is something to cause them. Emerging asynchronous event-based systems mimic this biological behavior. Progress has been slow due in part to the unfamiliarity of the silicon industry with non-clocked (asynchronous) circuitry. The emulation of cell types is also limited, as industry is reluctant to reduce the pixel fill areas to make room for additional functionality [Liu15a]. In mammals the light travels through the retinal neuron layers and then through many layers of photopigment, allowing many more opportunities for photon capture than a single pass (such as through the depletion region of a pn junction). The chip real estate for asynchronous (analog) processing in the retina does not conflict with the photon-capturing photoreceptors, as the retina is transparent to the incoming photonic information.
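The event-generation principle can be sketched as follows (a generic DVS-style model; the threshold and the log-intensity formulation are common conventions, not details from [Liu15a]):

```python
import math

# Emit an event whenever log intensity moves more than a threshold away
# from the level at the last event; no clock, and no events when the
# scene is static.
def pixel_events(intensities, threshold=0.1):
    events = []
    ref = math.log(intensities[0])
    for t, I in enumerate(intensities[1:], start=1):
        logI = math.log(I)
        while abs(logI - ref) >= threshold:
            polarity = 1 if logI > ref else -1   # ON or OFF event
            events.append((t, polarity))
            ref += polarity * threshold
    return events

assert pixel_events([1.0, 1.0, 1.0]) == []       # static scene: no events
assert pixel_events([1.0, 1.5])[0][1] == 1       # brightening: ON events
assert pixel_events([1.0, 0.5])[0][1] == -1      # dimming: OFF events
```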
One asynchronous silicon retina design includes an attempt to mimic the magno- and parvo-cellular pathways (MP and PP) of the optic nerve [Zag04]. The sustained nature of the PP and the transient nature of the MP are pursued, which results in both ON and OFF ganglion cells for both MP and PP, which is what is observed in natural vision systems. The benefit is natural contrast adaptation in addition to adaptive spatio-temporal filtering. The resulting localized automatic gain control provides a wide dynamic range and makes available two separate spatio-temporal bandpass-filtered representations of the image. One of the challenges of using this vision system is the large non-uniformity between pixel responses [Liu15a]. Gross non-uniformity between receptors and neurons is common in natural sensory systems, as the adaptive (plastic) nature of neurons compensates for such non-uniformities.
3.2.11 Emulating retina cells using photonic networks of spiking lasers [Rob20]
Silicon retinas and cochleae such as those in [Liu15] use hard silicon to emulate the behavior of biological neuronal networks that are adaptive and exhibit plasticity. Nevertheless, these bio-inspired designs show the promise of such novel sensory systems. In a similar way, vertical cavity surface emitting lasers (VCSELs) are used to emulate responses of certain neurons in the retina and are referred to as VCSEL-neurons. In biology the photonic energy is converted to a graded (or analog) potential by the biochemistry of the photopigments of the photoreceptor cells. By keeping the information photonic, computation speeds can be improved by more than 7 orders of magnitude. This dramatic improvement in information processing performance has wide applications for computationally intense algorithm frameworks such as artificial intelligence and deep learning.
This effort demonstrates retinal cell emulation using off-the-shelf VCSEL components operating at conventional telecom wavelengths. The VCSEL-neurons were configured to emulate the spiking behavior of ON and OFF bipolar cells as well as retinal ganglion cells. In these silicon and photonic applications, we see biology as an inspiration for novel information processing strategies but then combine those strategies with available technology that does not emulate the way biology works. A similar example of this concept is also seen when the fovea is emulated in the next few applications.
3.2.12 Integrating insect vision polarization with other vision principles [Giak18]
The visual sensory systems of many species are designed to process environment-provided stimuli that have space, time, and color dimensions. Arthropods and some marine species have been shown to have sensitivities to the polarization of light as well. For example, the octopus retina has tightly packed photoreceptor outer segments composed of microvilli that are at right angles to the microvilli of neighboring photoreceptor cells. The microvilli orientation alternates between these right angles, and this is believed to give the octopus sensitivity to polarized light [Smith00].
Aluminum nanowire polarization filters are used in this effort [Giak18] to emulate the microvilli of ommatidium, the components that make up the compound eye. Polarization measurements were made to characterize the polarization of several polymers. A previously designed neuromorphic camera system is used with polarization filters to show improvement in visually recognizing a rotating blade if polarization information is used [Giak18].
3.03: Questions
Chapter 3 Questions
1. Differentiate between passive and active sensors.
2. What is the energy in a photon?
3. How are chemo-reception and photo-reception similar?
4. Describe the three most significant imperfections in biological vision systems and what causes them.
5. Discuss the relationship between connectivity and spatial and temporal acuity.
6. What is coarse coding?
7. What are the three information domains in which vision systems extract environmental information?
8. Describe the three major compound eye designs.
9. Give some examples of visual scanning systems in the animal kingdom. What are the advantages and disadvantages of such a system?
10. Why is the retina considered a part of the brain, since the two organs are separated by distance and other components (optic nerve, LGN, etc.)?
11. What are the anatomical similarities between the retina, LGN, and the brain?
12. Explain the serial/planar duality that exists in biological vision systems.
13. Describe the encoding and decoding levels (in orders of magnitude) in the various organs within the primate vision system.
14. Name the five major cell types (layers) in the retina. Which three are connected to the triad synapse?
15. Give the three primary vision information channels in primate vision.
16. DoG or LoG filters are primarily used to model what part of the vision system?
17. What are the commonalities in color vision models concerning luminance and color?
18. What is the photoreceptor mosaic, and how is that like an artistic mosaic?
19. What is the difference between LoG and DoG filters?
20. Discuss degrees of freedom with LoG and DoG filters.
21. Compare and contrast vision system pathways with a conventional wavelet filter bank.
22. How is coarse coding manifested in the vision system?
23. When contemplating a new communication encoding scheme, it is very important to choose an orthogonal basis. But a typical biological set of basis functions are not mutually orthogonal. What is the implication?
24. Why are we so interested in biology if natural basis functions are not orthogonal?
4.1 Natural Mechano-sensory Systems
The primary mechano-sensory systems provide the senses of touch and hearing. Neurons are stimulated by contact with objects in the environment or by contact with fluid compression waves caused by movements in the atmosphere or underwater. The primate auditory sense begins with air vibrations against the eardrum, which cause bone vibrations in the middle ear, which in turn cause deformations of the basilar membrane that resemble the shape of the frequency spectrum of the incoming sound energy.
4.1.1 Mechano-sensory capability in simple life-forms
The most basic sense is the mechano-sensory tactile sense, which is the response to mechanical distortion. The history of the tactile sense goes back to ancient prokaryotes, which are cellular organisms with no distinct nuclei, such as bacteria or blue-green algae. For these fundamental life forms, the tactile sense is required for continually monitoring the integrity of the cell boundary. The organism can then 1) counteract swelling due to osmotic forces (fluid entering the cell to balance ionic concentrations) and 2) prepare for cell division when the tactile sense detects swelling for that purpose [Smith08].
4.1.2 Mechano-sensory internal capability within higher life forms
The human hypothalamus, located under the brain, serves as an interface between the nervous system and the endocrine (internal secretion) system. The fluid secretions controlled by the hypothalamus are a primary influence on heart rate and other biological rhythm control, body temperature control, hunger and thirst control, digestion rate, and other related functions involving secretions. It is believed to be the center for “mind-over-body” control as well as for feelings such as rage and aggression [Tort84]. Within the hypothalamus is a complex neuronal design based on stretch-sensitive mechanoreceptors that sample the conditions of blood cell membranes in a way analogous to how they serve single-celled organisms. The difference is that the prokaryote’s stretch-sensitive mechanoreceptors are built into the organism, while the hypothalamic mechanoreceptors sample the blood cells from outside the cell [Smith08].
Mechano-sensors are built around stretch-sensitive channels that allow immediate detection and rapid response. Photo-sensory and chemo-sensory reception involves a complex biochemistry to translate the presence of a photon or a chemical tastant or odorant into an ionic charge presence within the receptor. The ionic charge increase is then translated into nerve impulses to eventually be processed by the higher brain functions. The mechanoreceptors, on the other hand, respond immediately to mechanical distortion.
4.1.3 The sense of touch
Mechanoreceptors are fundamental to the detection of tension and the sense of touch. They are also basic components for detecting vibrations, accelerations, sound, body movements, and body positions. They play an important role in kinesthesia, which is sensing the relative positions of different body parts. It is believed that all these senses are ultimately derived from stretch-sensitive channels. However, the human understanding of the molecular structure and nature of most mechanosensory channels is still in its infancy.
4.1.4 Mechano-sensory sensilla
A discriminating characteristic of arthropods is their external skeleton, which limits (fortunately!) their overall size. Sensory organs such as the retina cannot develop in the hard exoskeletons. However, their kinesthetic sense is well developed due to sensory endings in muscles and joints of appendages. The most common insect sensory unit is the mechanosensory sensilla, each of which includes one or more neurosensory cells within a cuticular (external shell) housing. The cuticle is the external surface of an arthropod. Mechanosensitive sensilla may respond to cuticular joint movements or be positioned to detect movements within the cavity. The three primary mechanosensory sensilla in arthropods include:
Hair sensilla
Neurosensory cells have dendritic inputs from within a hair protruding from the cuticle and axonal outputs from the cell bodies located at the root of the hair, embedded in the epidermis underneath the cuticle. Deflection in one direction causes depolarization (an increase from the nominal –70 mV resting potential) while deflection in the other direction causes hyperpolarization (a decrease from –70 mV). Minimum response thresholds are known for distortions down to 3-5 nm, with response times down to 100 µs (0.1 ms). These figures imply the use of opening and shutting ion gates; the biophysics for mammalian hair cells is similar.
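A toy model of the directional response (gain and saturation values are illustrative, not measured figures):

```python
# Hair sensillum response around the -70 mV resting potential:
# deflection one way depolarizes, the other way hyperpolarizes.
REST_MV = -70.0

def receptor_potential(deflection_nm, gain=2.0, max_swing_mv=30.0):
    swing = max(-max_swing_mv, min(max_swing_mv, gain * deflection_nm))
    return REST_MV + swing

assert receptor_potential(0.0) == -70.0          # at rest
assert receptor_potential(5.0) > -70.0           # depolarization
assert receptor_potential(-5.0) < -70.0          # hyperpolarization
```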
Campaniform sensilla
The hair has been reduced to a dome on the cuticle. The dendritic inputs are just beneath the external surface so that the neurosensory cell senses a slight surface deformation. Responses have been shown with deformations as small as 0.1 nm. Directional selectivity is achieved with elliptically shaped domes, where deformation along the short axis is more sensitive than deformation along the long axis.
Chordotonal organs
Mechanosensory sensilla developed within the body cavity. These are characterized by a cap or other cell that stimulates numerous dendritic inputs to the neurosensory cell. Chordotonal organs are one of the proprioceptor types, located in almost every exoskeletal joint and between body segments. Many are sensitive to vibrations; for example, one type in the cockroach is sensitive to vibrations between 1 kHz and 5 kHz and amplitudes from 1 nm to 100 nm (0.1 micron). These sensing capabilities are important for detection of danger and for social communications.
Other non-mechanosensory sensilla include gustatory (taste), olfactory (smell), hygroscopic (humidity-sensing), and thermal (temperature-sensing) sensilla.
Separating insect mechanoreceptors into vibration detectors and acoustic detectors is difficult since many times the same receptors are used to detect vibrations in air, water, and ground. Certain water insects (the pond skater, Gerris, and water-boatman, Notonecta) detect wave amplitudes around 0.5 microns in a frequency range of 20-200 Hz and a time delay range of 1 to 4 ms.
Hairs and tympanic membranes for auditory sensing
Two basic types of sound detectors have developed in insects: hairs and tympanic organs. Hairs only respond to lateral distortions of air when the insect is very near the sound source, such as the wing beat frequency of a predator insect or a prospective mate. They are accompanied in detecting vibrations by Johnston’s organ, which consists of densely packed sensilla. Johnston’s organs also detect flight speed in bees and gravity in the water beetle.
Tympanic organs (ears) respond to pressure waves and are thus able to respond to sound sources much farther away. Tympanic organs are used for communications, attack, and defense. The basic parts include a tympanic membrane, air cavity, and a group of chordotonal organs that provide the neuronal signaling from the acoustic stimulus. Across the species the tympanic organs have developed on many different parts of the insect body.
Evasive maneuvers of lacewings and moths
An interesting use of the tympanic organ is found in the green lacewing (Chrysopa carnea). Military pilots being pursued by enemy aircraft fly maneuvers much like the lacewing’s when it is pursued by a hungry bat. As the bat detects its prey and closes in, its active sonar pulses increase in frequency. When the search pulses are detected, the lacewing folds its wings into a nose-dive out of the sky before the bat’s sonar can lock on. Noctuid moths have two neurons for each tympanic organ. One signals a bat’s detection sonar pulses while the other starts responding to the higher-frequency tracking pulses. With the first signal the moth will retreat in the opposite direction; with the second signal it will try desperate avoidance maneuvers, such as zig-zags, loops, spirals, dives, or falling into cluttering foliage. (Surrounding nearby vegetation “clutters” the sonar pulses echoing off a target moth; similarly, vegetation also “clutters” the radar pulses echoing off a military target.) Some moths will emit sounds during the last fraction of a second; it is not known whether the moth is warning others or trying to ‘jam’ the bat’s echolocation analysis mechanism [Smith08].
Equilibrium and halteres
Hair cells in different orientations lead to gravitational force detection from different orientations, which leads to balance and equilibrium. Fluid-filled tubes in the vertebrates, called the semicircular canals, are oriented orthogonally to each other. Two fluids, called endolymph and perilymph, are very different with respect to ionic concentration levels. K+ ions flow through the stereocilia, which project well into the K+-rich endolymph. The resulting design is a complex system of orientation signals that are processed to achieve balance and equilibrium.
The membranous labyrinth has developed from early lamprey (eel-like fish). It includes the semicircular canals and fluid-filled chambers called the utriculus and sacculus. It also includes pre-cochlear organs and cochlea (auditory part of hearing system) in the higher species.
Many insects have two pairs of wings to help control their flight, but the dipteran (two-winged) insects have developed halteres to replace the hind wings. These organs are attached to the thorax just under each wing and have dumbbell-shaped endings that respond to changes in momentum. Dipteran insects typically have short, stubby bodies, which makes it particularly remarkable that they can control their flight. The halteres provide inertial navigation information that is combined with optic flow input through the vision system. The head is kept stabilized by its own visual input, while the halteres provide inertial information used to stabilize flight. The halteres can be thought of as vibrating gyroscopes that serve as angular rate sensors [North01]. It can be shown that a system of two masses suspended on a stiff beam at 45° has the capability to provide sufficient information for stabilized flight control. How the neurons are connected and how the information is processed to accomplish stabilized flight control, however, will remain a mystery for a long time to come [North01].
The halteres have numerous campaniform sensilla nerve endings attached at the end as well as numerous chordotonal organs embedded within. These signals can detect slight motion in each of the three rotational degrees of freedom: pitch, roll, and yaw. Pitch is rotation about a horizontal axis orthogonal to the main horizontal axis, roll is rotation about the main horizontal axis, and yaw is rotation about the vertical axis. To illustrate each of these three, consider the effects of rotational motion when looking ahead from the bow of a ship: pitch causes up and down motion, roll causes the left side to go up when the right goes down (and vice versa), and yaw causes the ship’s heading to oscillate to the left and right. Halteres can oscillate through about 180° at frequencies between 100 Hz and 500 Hz [Smith08].
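The three rotational degrees of freedom can be made concrete with elementary rotation formulas, matching the ship analogy above (the axis naming is our own convention: x along the main horizontal axis, y the other horizontal axis, z vertical):

```python
import math

def rotate(axis, angle, v):
    """Rotate vector v = (x, y, z) about one body axis."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    if axis == "roll":    # about x, the main horizontal axis
        return (x, c * y - s * z, s * y + c * z)
    if axis == "pitch":   # about y, the orthogonal horizontal axis
        return (c * x + s * z, y, -s * x + c * z)
    if axis == "yaw":     # about z, the vertical axis
        return (c * x - s * y, s * x + c * y, z)
    raise ValueError(axis)

fwd = (1.0, 0.0, 0.0)                        # looking ahead from the bow
yawed = rotate("yaw", math.radians(10), fwd)
assert abs(yawed[2]) < 1e-12 and yawed[1] != 0.0   # heading swings, stays level
pitched = rotate("pitch", math.radians(10), fwd)
assert pitched[2] != 0.0                           # bow tips up or down
```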
4.1.5 Mammalian tactile receptors
In mammalian skin, tactile receptors can be classified into fast adapting, which respond only during initial skin deformation, and slow adapting, which continue to respond if the deformation is present. Fast adapters include:
- Pacinian corpuscles, which are in the deeper layers of glabrous (non-hairy, like the palm) skin and respond to vibrations in the range of 70-1000 Hz
- Meissner’s corpuscles, which are also in the deeper layers of glabrous skin and respond to vibrations in the range of 10-200 Hz
- Krause’s end bulbs, like Meissner’s corpuscles but found in non-primates, responding to vibrations in the range of 10-100 Hz
- Hair follicle receptors, which are located just below the sebaceous (oil) glands; numerous nerve endings give hair follicles a wide range of hair-movement sensitivities and response times.
The slow adapting tactile receptors in mammalian skin include:
- Merkel cells, which respond to sudden displacements, such as stroking
- Ruffini endings, which respond to steady displacement of skin
- C-Mechanoreceptors, located just beneath the skin surface, at the epidermis/dermis interface, have unmyelinated (unprotected) nerve fibers extending into the epidermis (the most external layer of skin). These nerves respond with a slowly-adapting discharge to steady indentations of the skin. They also respond to temperature extremes and to tissue damage, interpreted as pain.
Basic hair cells are similar in structure among all vertebrates. Peak sensitivities in the human ear reach movements of only a tenth of a nanometer, which is one angstrom. Hair cell sensitivity “is limited by the random roar of Brownian motion” [Smith08]. Hair cell endings are composed of bundles of fine hair-like projections called stereocilia and a single, tall cilium with a bulbous tip called a kinocilium. The receptor potential depolarizes (rises from –70 mV) for motion in one direction and hyperpolarizes (decreases below –70 mV) for motion in the other direction. (Biologists refer to the normal neuronal resting potential of –70 mV as the natural voltage “polarization” state.)
4.1.6 Human auditory system
G. S. Ohm, of Ohm’s Law fame, once suggested that the human auditory system does a Fourier analysis of received sound signals, breaking the signals into separate components with separate frequencies and phases [Kand81]. Although this has proven to be true, the auditory system does more than a simple Fourier analysis. The input is fluid pressure waves (sound in air) from the environment striking the eardrum, and the ear transforms the pressure waves into neuronal signals processed by the auditory cortex in the brain.
Figure 4.1.6-1 shows a sketch of the key components of the human auditory system. Sound enters the outer ear, and the vibrations are transferred to the middle ear and then the inner ear. The outer ear is composed of the external cartilage, called the pinna, the ear canal, and the tympanic membrane, or eardrum. The middle ear is composed of three bones in an air-filled chamber; the inner ear, or membranous labyrinth, contains the semicircular canals, fluid-filled chambers called the utriculus and sacculus, which are near the semicircular canals (but not labeled in Figure 4.1.6-1), and the cochlea.
The outer ear is designed to collect sound waves and direct them into the ear canal to the eardrum. The middle ear ossicles are the malleus, or “hammer (mallet)”, the incus, or “anvil”, and the stapes, or “stirrup”. The names come from their shapes being similar to familiar objects. The ossicles serve to provide an acoustic impedance match between the air waves striking the eardrum and the fluid waves emanating from the oval window in the cochlea. Without the impedance matching, most of the air-wave energy would reflect off the surface of the cochlear fluid. Another purpose of the ossicles is to amplify the energy density due to the variation in acoustic surface area: the eardrum surface area is about 25 times larger than that of the oval window.
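The area-ratio amplification is easy to quantify (ossicular lever action, which adds further gain, is ignored in this back-of-the-envelope check):

```python
import math

# Same force concentrated on ~1/25th the area gives ~25x the pressure.
area_ratio = 25.0                         # eardrum area / oval window area
pressure_gain = area_ratio
gain_db = 20 * math.log10(pressure_gain)  # roughly 28 dB

assert pressure_gain == 25.0
assert 27.9 < gain_db < 28.0
```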
Time delays and sound localization
For humans, the maximum time delay between a sound wave reaching one eardrum and then the other is between 350 and 650 microseconds [Mead89], depending on the binaural separation distance. A source directly in front of the listener will reach each ear simultaneously with no time delay, while a source at right angles will produce this maximum interaural time delay. The difference in wave-front arrival time is therefore one of the horizontal localization cues for the sound source, as will be shown later for the barn owl.
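The cue can be sketched with the standard path-difference formula ITD = d·sin(θ)/c; the 0.2 m ear separation and 343 m/s sound speed below are assumed round numbers, not figures from [Mead89]:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, near room temperature

def itd_seconds(azimuth_rad, ear_separation_m=0.2):
    """Interaural time difference for a distant source at the given azimuth."""
    return ear_separation_m * math.sin(azimuth_rad) / SPEED_OF_SOUND

assert itd_seconds(0.0) == 0.0                      # source dead ahead
max_itd = itd_seconds(math.pi / 2)                  # source at right angles
assert 350e-6 < max_itd < 650e-6                    # within the quoted range
```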
Another horizontal localization cue for humans is the result of high frequency attenuation caused by sound traveling around the head. This is referred to as the acoustic head shadow. A sound source from directly ahead will have the same attenuation effect in both channels, while a source coming from an angle will result in more high frequency attenuation at the contra-lateral (opposite-sided) ear. The sound impulse response from a source between center and right angles shows both a delay and a broadening on the contra-lateral ear with respect to the ipsi-lateral (same-sided) ear.
Elevation information is encoded in the destructive interference pattern of incoming sound wavefronts as they pass through the outer ear along two separate paths: the first path is directly into the ear canal, and the second is a reflected path off the pinna (see Figure 4.1.6-1) and again off the tragus before entering the ear canal. The tragus is an external lobe like the pinna but much smaller (and not seen in Figure 4.1.6-1); the tragus is easily felt when the finger is at the opening of the ear canal. The delay time in the indirect pinna-tragus path is a monotonic function of the elevation of the sound source. Since the destructive interference pattern is a function of the delay time, this pattern serves as a cue for the elevation of the sound source with respect to the individual.
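The elevation cue can be sketched as a comb filter: summing the direct path with the delayed pinna-tragus reflection nulls the frequencies whose half-period matches the delay, so the notch positions encode the delay and hence the elevation (the delay values here are illustrative):

```python
# Destructive interference of x(t) + x(t - delay) occurs at
# f = (2k + 1) / (2 * delay); list the notches in the audible band.
def notch_frequencies(delay_s, f_max=20000.0):
    freqs, k = [], 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > f_max:
            return freqs
        freqs.append(f)
        k += 1

# A longer reflection delay moves the notches lower in frequency:
assert abs(notch_frequencies(100e-6)[0] - 5000.0) < 1e-6
assert abs(notch_frequencies(200e-6)[0] - 2500.0) < 1e-6
```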
Static and dynamic equilibrium
The three semicircular canals are mutually orthogonal to make available signals from each degree-of-freedom. Two chambers connect to the canals, called the utricle and saccule. Static equilibrium is sensed in regions of the chambers, while dynamic equilibrium is sensed at the cristae located at the ends of the semicircular canals.
The maculae in the utricle and saccule (inner ear chambers) serve to provide static equilibrium signals. Hair cells and supporting cells in the macula have stereocilia and a kinocilium extending into a gelatinous layer supporting otoliths (oto = ear, lithos = stone). The otoliths are made of dense calcium carbonate crystals that move across the gelatinous layer in response to differential gravitational forces caused by changes in head position. The movement stimulates the hair cells that provide static equilibrium signals to the vestibulocochlear nerve. (The vestibular branch contains signals from the semicircular canals and the utricle and saccule chambers, while the cochlear branch contains signals from the cochlea.)
The cristae located in the ends of each semicircular canal serve to provide dynamic equilibrium signals. Head movements cause endolymph to flow over gelatinous material called the cupula. When each cupula moves it stimulates hair cells comprising the ampullar nerve at the end of each of the semicircular canals. These signals eventually cause muscular contractions that help to maintain body balance in new positions.
Time-to-frequency transformation in the cochlea
Sound vibrations from the external environment strike the eardrum, causing a chain reaction through the middle-ear ossicles that transforms the air vibrations into fluid vibrations along the basilar membrane of the cochlea. As shown in Figure 4.1.6-2, if the basilar membrane (inside the cochlea) were uncoiled and straightened out, it would measure about 33 mm long, 0.1 mm (100 microns) wide at the round window end, and 0.5 mm (500 microns) wide at the other end [Smith08].
The basilar membrane is stiffer at the round window end and looser at the apex. This causes the wave propagation velocity to slow down as it travels down the basilar membrane. Depending on the initial frequency of the wave, this variable velocity behavior will cause a maximum resonant distortion along the path from the round window to the apex. The basilar membrane is quite complicated and includes sensitive inner and outer hair cell neurons that will respond to deformations of the basilar membrane at the location of each neuron. The hair cell neurons are located along the entire pathway so that the frequency content of the sound can be determined from the spatial location of the neurons that are firing. Thus, the basilar membrane performs a mechanical Fourier Transform on the incoming sound energy and the spatially-distributed neurons sample that signal spectrum.
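The resulting place-to-frequency map for the human cochlea is often summarized by Greenwood's empirical fit (our addition, not a formula from the text): position x from apex (0) to base (1) maps to a characteristic frequency of about 165.4·(10^(2.1x) − 0.88) Hz.

```python
# Greenwood's human cochlear map: the stiff basal end resonates at high
# frequencies, the loose apical end at low frequencies.
def greenwood_frequency_hz(x_from_apex):
    return 165.4 * (10 ** (2.1 * x_from_apex) - 0.88)

assert greenwood_frequency_hz(0.0) < 50 < greenwood_frequency_hz(1.0)
assert greenwood_frequency_hz(1.0) > greenwood_frequency_hz(0.5) > greenwood_frequency_hz(0.0)
```

The exponential form mirrors the stiffness gradient described above: equal steps along the membrane correspond to roughly equal ratios in characteristic frequency.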
It was mentioned (Chapter 2) that sensory receptors adjacent to each other in the peripheral sensory system (such as the auditory system) will eventually fire neurons adjacent to each other in the auditory cortex. The relevant signal characteristic of adjacent neurons in the auditory sensor, namely the hair cell neurons adjacent to each other in the basilar membrane, is that they correspond to adjacent frequency components in the input sound. This tonotopic map of the neurons of the basilar membrane is reconstructed in the auditory cortex as well. So, frequency cues are provided by which neurons are firing.
Data sampling rates and coarse coding
The rate of neuronal firing in the cochlea encodes the mechanical distortion of the basilar membrane, which is a direct consequence of the sound energy level of the source. This design is quite remarkable considering that the minimum interval between neuronal action potentials is around 1 to 2 ms, limiting each neuron to a firing rate of roughly 1 kHz at best. The Nyquist sampling criterion states that 1 ms sampling (1 kHz) of a signal can only encode information up to 500 Hz, yet human hearing can discern frequencies well above 10 kHz. Each neuron alone samples far too slowly to satisfy the Nyquist criterion, but many neurons fire simultaneously, so the aggregate sampling rate is much more than that required to sample a signal whose bandwidth is that of the typical human hearing range (up to 20 kHz).
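The aggregate-sampling argument can be illustrated with a quick calculation (the neuron count and rates below are assumed for illustration): if 20 neurons each fire at no more than 1 kHz but at staggered offsets, the population delivers samples at 20 kHz, enough for a 10 kHz bandwidth under the Nyquist criterion.

```python
# Assumed illustration: 20 neurons, each limited to 1 kHz firing, staggered
# in time, jointly deliver a 20 kHz aggregate sampling rate.
n_neurons = 20
neuron_rate = 1000.0                  # Hz, per-neuron maximum firing rate
window = 0.01                         # observe 10 ms

sample_times = sorted(
    offset / (n_neurons * neuron_rate) + k / neuron_rate
    for offset in range(n_neurons)              # each neuron's phase offset
    for k in range(int(window * neuron_rate)))  # its spikes in the window

aggregate_rate = len(sample_times) / window
print(f"aggregate sampling rate: {aggregate_rate:.0f} Hz "
      f"-> Nyquist bandwidth {aggregate_rate / 2:.0f} Hz")
```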
The firing rate of neurons in the cochlea (basilar membrane) encodes sound intensity, not sound frequency content. Frequency is coarsely coded: each neuron has a roughly Gaussian frequency response whose width is around 10% to 20% of its peak frequency, and an adjacent neuron has a slightly different peak frequency. If both neurons fire at the same rate, the input frequency lies midway between their peaks; if one fires slightly faster than the other, the frequency component lies closer to that neuron's peak. With only two broadly overlapping Gaussian-like frequency responses, a specific frequency can be extracted with precision far beyond what either neuron could provide alone.
This is yet another example of coarse coding. In the vision system we observe 4 photoreceptor types whose spectral response curves broadly overlap, yet due to the complex post-processing of highly interconnected neuronal tissue, millions of combinations of color, tone, and shade can typically be discerned. Similarly, the auditory mechanoreceptors are sensitive to frequencies in a 10-20% band around a peak, yet we can discern more specific frequencies at a much higher resolution.
Figure 4.1.6-3 shows a Matlab-generated plot of three Gaussian curves centered at 1.0 kHz, 1.1 kHz, and 1.2 kHz. A monotone (single-frequency) input anywhere between 0.9 kHz and 1.3 kHz would stimulate all three neurons. Keep in mind that the intensity of each neuron's response reflects the intensity of the sound, so a moderate response from one neuron could indicate a weak signal at its peak frequency or a stronger signal at a nearby frequency. For the neuron whose peak response is at 1.0 kHz, the response would be about the same for a signal at 1.0 kHz, an 850 Hz signal at twice the strength (where the normalized response is about 0.5), or an 800 Hz signal at four times the strength (where the response is about 0.25). A single neuron's response therefore cannot support very accurate frequency detection.
It is therefore the relative responses of adjacent neurons (spatially distributed along the basilar membrane) that provide the frequency cues. The following example and exercise illustrate the improved frequency resolution obtained by comparing the responses of adjacent auditory neurons.
Example 4.1.6-1
Assume three auditory neurons have Gaussian responses around peak frequencies of 2.0 kHz, 2.1 kHz, and 2.2 kHz, like those shown in Figure 4.1.6-3. Assume the three Gaussian responses have the same variance but these three different peak frequencies. Give an estimate (or a range) of the input frequency for three separate inputs, given that the normalized neuron outputs are measured as
2.0 kHz Neuron 2.1 kHz Neuron 2.2 kHz Neuron
Input_1 0.2 0.8 0.2
Input_2 0.4 0.9 0.1
Input_3 0.1 0.9 0.4
Solution:
For this problem we are not concerned with the significance of any one response value, but with how the response values compare to those of adjacent neurons. Conveniently, the 2.1 kHz neuron gives the strongest response to all three inputs, so each tone must be at least close to 2.1 kHz. Notice for Input_1 that the responses of both adjacent neurons are the same (0.2). Since all three curves have the same variance, and by the symmetry of Gaussian curves, the only frequency that could produce this set of responses is exactly 2.1 kHz.
The Input_2 frequency is closer to 2.1 kHz than to 2.0 kHz or 2.2 kHz, but since the response of the 2.0 kHz neuron is greater than that of the 2.2 kHz neuron, the input is closer to 2.0 kHz than to 2.2 kHz, so it is somewhat less than 2.1 kHz. If the input frequency were the midpoint, 2.05 kHz, we would expect equal responses from the 2.0 kHz and 2.1 kHz neurons, but that is not the case. So the Input_2 frequency should be greater than 2.05 kHz but less than 2.1 kHz, or in the range of about 2.06 kHz to 2.09 kHz.
The Input_3 frequency is likewise closer to 2.1 kHz than to 2.0 kHz or 2.2 kHz, but in this case the response of the 2.2 kHz neuron is greater than that of the 2.0 kHz neuron, so the input is closer to 2.2 kHz than to 2.0 kHz, and therefore somewhat greater than 2.1 kHz. If the input frequency were the midpoint, 2.15 kHz, we would expect equal responses from the 2.1 kHz and 2.2 kHz neurons, but once again that is not the case. So the Input_3 frequency should be greater than 2.1 kHz but less than 2.15 kHz, or in the range of about 2.11 kHz to 2.14 kHz.
The following table summarizes our estimates of the tonal input frequencies:
2.0 kHz Neuron 2.1 kHz Neuron 2.2 kHz Neuron Estimated tonal frequency (kHz)
Input_1 0.2 0.8 0.2 f ≈ 2.1
Input_2 0.4 0.9 0.1 ~2.06 ≤ f ≤ 2.09
Input_3 0.1 0.9 0.4 ~2.11 ≤ f ≤ 2.14
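Under the idealized assumption that two neurons share Gaussian tuning curves of the same width σ and an unknown common amplitude, the ratio of their responses determines the input frequency in closed form. The sketch below checks the estimates above; the value of σ is an assumption, since the text does not specify the curve width.

```python
import math

# Idealized closed-form inversion (sigma is an assumed curve width). For
# Gaussian tuning curves r_i = A * exp(-(f - fc_i)**2 / (2 * sigma**2))
# with shared width and a common unknown amplitude A:
#     ln(r1/r2) = (fc1 - fc2) * (2*f - fc1 - fc2) / (2 * sigma**2)
# which solves to the estimator below.
def estimate_freq(fc1, r1, fc2, r2, sigma):
    return (fc1 + fc2) / 2 + sigma ** 2 * math.log(r1 / r2) / (fc1 - fc2)

sigma = 0.05  # kHz (assumed)

# Input_1: equal flanking responses (0.2, 0.2) -> exactly midway, 2.1 kHz.
f1 = estimate_freq(2.0, 0.2, 2.2, 0.2, sigma)

# Input_2: flanking responses (0.4, 0.1) -> a little below 2.1 kHz.
f2 = estimate_freq(2.0, 0.4, 2.2, 0.1, sigma)

print(f"Input_1 estimate: {f1:.3f} kHz")   # 2.100 kHz
print(f"Input_2 estimate: {f2:.3f} kHz")
```

With this assumed σ the Input_2 estimate lands inside the 2.06-2.09 kHz range reasoned out above; a different curve width would shift it within that neighborhood.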
Exercise 4.1.6-1
Assume four auditory neurons have Gaussian responses around peak frequencies of 3.0 kHz, 3.1 kHz, 3.2 kHz, and 3.3 kHz, like those shown in Figure 4.1.6-3. Assume the four Gaussian responses have the same variance but these four different peak frequencies. Give an estimate (or a range) of the input frequency for three separate inputs, given that the normalized neuron outputs are measured as
3.0 kHz Neuron 3.1 kHz Neuron 3.2 kHz Neuron 3.3 kHz Neuron
Input_1 0.1 0.8 0.8 0.1
Input_2 0.4 0.9 0.8 0.2
Input_3 0.7 0.4 0.2 0.1
Answers:
Input_1: f ≈ 3.15 kHz, Input_2: ~3.11 ≤ f ≤ 3.14 kHz, and Input_3: f ≤ 3.0 kHz
4.2 Applications inspired by natural mechano-sensory Systems
There are many potential applications for mechano-sensory systems. As can be seen from the example applications that follow, there are numerous natural paradigms to draw on for the inspiration of novel design ideas. For example, barn owls, crickets, bats, dolphins, and the primate cochlea represent a sample of systems that have been demonstrated or built from biological inspiration. There are also many useful applications that diverge from strict bio-mimicry, such as transforming photonic energy into sound energy and allowing the organism (a blind person) the opportunity to learn how to "see" based on stimulated auditory cues.
4.2.1 Auditory Pathway of the Barn Owl [Lazz90]
The barn owl localizes its prey by using timing delays between the two ears to determine azimuth (angle from directly forward) and intensity variations to determine elevation (angle from the horizon). The result is a conformal mapping of sound events in auditory space onto the inferior colliculus (IC): each sound source is mapped to a specific location in the IC representing its azimuth and elevation with respect to the owl [Lazz90].
The auditory signals from the cochlea divide into two primary pathways that eventually meet in the IC. The first is the intensity pathway and passes through the nucleus angularis (NA), encoding elevation information. This is possible in part due to sound absorption variations caused by feather patterns on the face and neck. The second is the time-coding pathway and passes through the nucleus magnocellularis (NM) onto the nucleus laminaris (NL) where it meets the corresponding signals from the time-coding pathway from the opposite side.
Figure 4.2.1-1 represents the two information pathways leading to the IC. The details of the IC are omitted to focus on the pathway structure. Figure 4.2.1-2 shows a notional concept for coincidence detection in the timing circuits of the NL. As drawn, the spatial location of the output signals represents the spatial direction (azimuth, or heading) of the originating sound source.
Assume the total time it takes sound to travel the distance from one ear to the other is divided into 8 time delays, each denoted as Δt, as shown in the model (Figure 4.2.1-2). A stimulus on the immediate left side of the owl (left side of Figure 4.2.1-2) would travel through the bottom row of delays before the right side received the stimulus, resulting in a correlation on the left side. Similarly, a stimulus on the immediate right side of the owl results in a correlation on the right side of the model. Stimuli between immediate left and immediate right result in a correlation somewhere between these two extremes.
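A minimal sketch of this delay-line coincidence scheme follows; the 8-delay chain comes from the description above, while the tap-indexing convention is an assumption for illustration.

```python
# Notional delay-line coincidence detector (the 8-delay chain is from the
# text; the tap-indexing convention here is an assumption).
N = 8   # unit delays spanning the head width

def coincidence_tap(d):
    """d = interaural delay in delay units (0 = straight ahead,
    +N / -N = source fully to one side). Returns the tap whose left- and
    right-path arrival times match best."""
    # Left-ear spikes reach tap i after i unit delays; right-ear spikes
    # reach it after N - i delays plus the interaural delay d.
    return min(range(N + 1), key=lambda i: abs(i - (N - i) - d))

print(coincidence_tap(0))    # straight ahead -> middle tap (4)
print(coincidence_tap(8))    # fully to one side -> end tap (8)
print(coincidence_tap(-8))   # fully to the other side -> opposite end (0)
```

The firing tap is a spatial code for azimuth, matching the description of correlations appearing at the left, right, or intermediate positions of the model.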
Time-coding Auditory System
The time-coding architecture of the barn owl is implemented in the silicon auditory localization circuit [Lazz90] shown in Figure 4.2.1-3. Sound enters the system from the left and right ears into respective silicon cochleae (the silicon cochlea is described in Section 4.2.3). From there, 62 equally spaced taps (representing the basilar membrane neurons in a natural cochlea) encode the spectral signature at each side. Each tap feeds a hair-cell circuit that performs half-wave rectification, nonlinear compression, and action potential generation. The action potentials in the silicon version are fixed-width, fixed-height pulses. As in natural neurons, the frequency of the action potential pulses represents the intensity, and the timing preserves the temporal characteristics of the signal.
The details of the hair-cell circuits are shown in Figure 4.2.1-4. The half-wave rectifier and nonlinear compression simulate the inner hair cells and the action-potential generator simulates the natural spiral ganglion cells that take signals from the cochlea in owls, primates, and other species. For the barn owl, these circuits feed the NL-model delay lines like the ones modeled in Figure 4.2.1-2.
4.2.2 Robotic Implementation of Cricket Phonotaxis [Webb01, Webb02]
Cricket Phonotaxis
The male cricket gives a mating call to attract female crickets, and a female can find a specific male using phonotaxis, which means movement in response to sound stimulus. In the presence of other noises, the female uses these auditory cues to cover 10 to 20 meters through vegetation and terrain and around obstacles to find the calling male. Phonotaxis is typically seen as a series of start-stop movements with corrective turns.
The “cricket robot” implementing phonotaxis in this example can be modeled as first recognizing the correct song, and then moving toward the source. Each species has a specific sound characterized by a carrier frequency and a temporal repetition structure. A typical pattern is a series of ten- to thirty-millisecond syllables of a pure tone (around 4-5 kHz) grouped in distinctive patterns, or chirps. A primary cue serving to discriminate between species is the syllable repetition interval in the song. Correct recognition of this conspecific (same-species) song is required before migration toward the source.
The cricket does not use time-delay signals between two ears as mammals do nor can it detect phase of the incoming signal. The geometry of the anatomical structure compensates for this inability and gives the cricket the same capability without the complex circuitry. It has an eardrum on each leg connected by an air-filled tracheal tube and two additional openings on the cricket body. Sound reaches each eardrum in two primary paths: one is direct, striking the eardrum on the same side of the cricket as the sound, and the other is indirect, coming from the opposite side of the cricket body. Since these acoustical vibrations are on opposite sides of the eardrum, their effect generally cancels. However, there is a delay due to a longer path-length as well as a delay due to the tracheal tube properties. These delays cause phase differences between the opposing acoustic signals so that the amplitudes do not cancel.
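The cancellation argument can be made concrete. Since the direct and indirect sounds drive opposite sides of the eardrum, the net drive is the difference of two sinusoids; with zero relative delay they cancel completely, while a path or tracheal delay leaves a residual whose size depends on the delay. The sketch below uses the 4.7 kHz carrier quoted later in this section and assumes unit amplitudes.

```python
import math

# Net eardrum drive = direct(t) - indirect(t - delay): the difference of
# two equal-amplitude sinusoids. The peak of
#   sin(2*pi*f*t) - sin(2*pi*f*(t - delay))
# is 2*|sin(pi * f * delay)|, so zero delay cancels completely and a
# quarter-period delay leaves a strong residual. Carrier frequency taken
# from later in this section; unit amplitudes are assumed.
f = 4700.0  # Hz

def net_amplitude(delay):
    return 2.0 * abs(math.sin(math.pi * f * delay))

print(net_amplitude(0.0))          # 0.0 -> complete cancellation
print(net_amplitude(1 / (4 * f)))  # ~1.414 -> quarter-period delay survives
```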
Robotic Implementation
The robotic model of cricket phonotaxis includes a programmable electronic sound source for modeling the cricket call, and a neural network modeling the dynamics of cell membrane potentials. The neural network model is not a generic architecture, but a specific architecture designed to mimic the neuronal structure of the cricket more closely:
“The architectures represent neural processes at appropriate levels of detail rather than using standard artificial neural net abstractions. Individual neuron properties and identified connectivity are included, rather than training methods being applied to generic architectures.” [p. 3, Webb01]
The robot is a modification of an existing miniature robot (Khepera, “K-team 1994”) that is 6 cm in diameter and 4 cm high. It was chosen because it is closer to cricket size than other available robots, although it is still far more massive than a cricket. A modification for ears added another 6 cm in height. The robot has 2 drive wheels and 2 castors and is programmed in C on a 68332 processor. Due to processor speed limitations, the neuronal model had to be revised (simplified) to run in real time. This is a common theme in biomimetic systems: although conventional processors are 5 or 6 orders of magnitude faster than biological neurons, we still must make sacrifices in computation to achieve any semblance of real-time biomimicry.
Figure 4.2.2-1 shows the simulated neuronal interconnects for the cricket robot. The separation between the microphone ears can be varied but is set at one-quarter wavelength of the mimicked species' carrier frequency. Another one-quarter period delay is programmed into the inhibitory connection to simulate the delay in the tracheal tube. The inverter (gain of –1) simulates the opposing effects of the direct and indirect pathways striking the eardrum on opposite sides. In real crickets, the auditory neuron sends signals to the brain, where the connectivity and functionality are not yet fully understood. The robotic model includes membrane potentials that result in action potential (spike) signal generation, but the reduction to four simple neurons was done in the robotic implementation in part to keep the simulation operating in real time.
Each time a motor neuron in Figure 4.2.2-1 produces an action potential, the robot moves incrementally in that direction. The auditory neurons fire (send action potentials) when the threshold for firing is exceeded. All neurons exhibit leaky integration so that stray noises will not result in action potentials: a constant input stronger than the leakage must be sustained to bring the neuron to firing an action potential. However, the auditory neurons fire rapidly once initiated. This is modeled by returning the membrane potential closer to the threshold (-55 mV typ.) after an action potential instead of returning it to the resting potential (-70 mV typ.).
The calling frequency is 4.7 kHz to match a specific species, Gryllus bimaculatus. The robot microphones were placed 18 mm apart, which is a quarter wavelength at the 4.7 kHz calling frequency. An additional one-quarter period delay is also programmed into the circuitry as a 53 µs delay. When a signal arrives from a direction perpendicular to the heading, the combined delays add to one-half period, which, when inverted, combines with the direct signal to give a maximum signal for the motor neuron to turn the robot toward the sound. The opposite motor neuron receives the direct signal and the inverted indirect signal at the same time, thus canceling. When the sound source is directly in front of the robot, the same signal is received at both motor neurons, so the left-right turning tendencies cancel and the robot continues straight.
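The stated robot parameters can be checked arithmetically, assuming a sound speed of 343 m/s:

```python
# Arithmetic check of the stated parameters (sound speed of 343 m/s assumed).
c = 343.0     # speed of sound in air, m/s
f = 4700.0    # Gryllus bimaculatus carrier frequency, Hz

quarter_wavelength = c / f / 4   # the ear separation
quarter_period = 1 / f / 4       # the programmed tracheal-tube delay

print(f"ear separation:   {quarter_wavelength * 1000:.1f} mm")  # ~18.2 mm
print(f"programmed delay: {quarter_period * 1e6:.1f} us")       # ~53.2 us
```

Both values agree with the 18 mm separation and 53 µs delay quoted above.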
Results and discussion [Webb01]
The ¼-wavelength physical ear separation and the ¼-period programmable delay for a 4.7 kHz carrier proved to reproduce biological observations. Experimental results showed that the robot migrated toward a 4.7 kHz signal more strongly than toward a 2.35 kHz signal and would ignore a 9.4 kHz signal. It would also move toward the 4.7 kHz signal when played simultaneously with a 6.7 kHz signal.
By tuning the time constants, the response could be made selective for a bandpass of syllable rates. In one example, the robot responded to changes in signal direction when the syllables were 20 to 30 ms long but would not respond for shorter or longer syllables. The programmability built into this cricket robot will allow further study into the alternate hypotheses of how crickets and other animal species perform phonotaxis. The system will also allow for further study into non-phonotaxis capabilities of such a sensorimotor system.
Although the four-neuron model does not mimic the complexity of the cricket brain, it does demonstrate a minimal configuration for accomplishing basic phonotaxis functions, such as tracking of sound sources, selectivity for specific frequencies, selectivity for syllable rates, tracking behavior without directional input, and tracking behavior in the presence of other sound sources.
4.2.3 Mead/Lyon Silicon Cochlea [Lyon89]
The Mead/Lyon [Lyon89] Silicon Cochlea is a transmission line of second-order amplifier circuits, illustrated in Figure 4.2.3-1. First-order stages are simple circuits such as differentiators or integrators, whose step responses are typically an exponential approach toward a steady-state condition. Second-order stages respond to a step with a damped oscillation and exhibit a peak response at a resonant frequency. In the initial silicon cochlea circuit, there were 100 second-order circuits with 10 voltage taps evenly spaced along the design.
Each second-order circuit is composed of three op-amps and two capacitors configured as cascaded follower-integrator circuits with a feedback amplifier providing oscillatory responses. The transconductance of the feedback amplifier is controlled by an external bias voltage. For low feedback transconductance, the circuit behaves as a two-stage follower-integrator, which simply follows the input voltage. As the feedback transconductance is increased, positive feedback causes the second follower-integrator to overshoot slightly and ring before settling to a steady-state value. If the transconductance is set too high, the circuit oscillates out of control (goes unstable).
Once appropriately calibrated (tuned), the peak response of each second-order circuit is a function of the input frequency. Since each stage inherently adds a smoothing effect, the individual frequency components of the input voltage signal will each have a peak response somewhere along the 100-stage circuit. As in the natural cochlea, the spatial distribution of the voltage taps provides a sample of the Fourier representation of the input voltage signal. However, in the natural cochlea the mechanical design of the basilar membrane provides physical peak deflections (corresponding to the signal frequency components present in the input signal), while this design models the mechanical cochlear structure with a bank of second-order electronic filters.
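The place-coding behavior of such a cascade can be sketched analytically. The stage count below is from the text, but the cutoff spacing and quality factor are assumed: each stage is modeled as a second-order low-pass section whose cutoff falls geometrically along the line, so a tone gains slightly through the near-resonant stages and is then cut off sharply, and the tap with the largest cumulative gain marks the tone's "place".

```python
import math

# Analytic cascade sketch (100 stages from the text; cutoff spacing and
# quality factor assumed). A slightly resonant second-order low-pass gives
# gain > 1 just below resonance and a sharp roll-off above it, so the
# cumulative gain along the line peaks at a frequency-dependent tap.
Q = 1.2                  # assumed quality factor (slightly resonant)

def stage_gain(f, f0):
    """Magnitude response of one second-order low-pass section."""
    r = f / f0
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (r / Q) ** 2)

def peak_tap(f, n_stages=100):
    cutoffs = [8000.0 * 0.97 ** i for i in range(n_stages)]  # 8 kHz downward
    gain, best_tap, best_gain = 1.0, 0, 0.0
    for i, f0 in enumerate(cutoffs):
        gain *= stage_gain(f, f0)
        if gain > best_gain:
            best_tap, best_gain = i, gain
    return best_tap

print(peak_tap(4000), peak_tap(1000))  # the lower tone peaks farther along
```

As in the basilar membrane, high frequencies peak near the input end and low frequencies farther along the line.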
4.2.4 MEMS-based electronic cochlea [Andr01]
An example of a micro-electromechanical systems (MEMS) approach to a silicon electronic cochlea is described in [Andr01]. MEMS allows mechanical distortion due to the incident sound energy to change the distance between two polysilicon plates that are implemented as a capacitor. This design concept includes a MEMS-based acoustic pressure gradient sensor and filter bank that decomposes incident acoustical energy into its wavelet components. The pressure transducer is a conventional MEMS polysilicon diaphragm suspended in air over a polysilicon backplate. Inspired by the mechanically coupled acoustic sensory organs of the parasitoid fly, the transducers are connected by a first-layer polysilicon beam, allowing for pressure gradient measurement. As acoustical energy strikes the external plate, the plate is deflected toward the backplate, reducing the air gap separating the two plates. This causes an increase in capacitance in response to acoustic pressure. The MEMS silicon cochlea implementation is composed of MEMS filter banks that allow for a real-time wavelet decomposition of the received acoustical energy.
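A parallel-plate sanity check of the sensing mechanism follows; the plate size and gap values are assumed for illustration, not taken from [Andr01].

```python
# Parallel-plate model (plate size and gap values assumed): pressure
# deflects the diaphragm toward the backplate, the air gap d shrinks, and
# C = eps0 * A / d rises.
EPS0 = 8.854e-12                     # permittivity of free space, F/m
AREA = (500e-6) ** 2                 # assumed 500 um x 500 um plate

def capacitance(gap):
    return EPS0 * AREA / gap

for d in (2.0e-6, 1.5e-6, 1.0e-6):   # gap shrinking under pressure
    print(f"gap {d * 1e6:.1f} um -> C = {capacitance(d) * 1e12:.2f} pF")
```

Halving the gap doubles the capacitance, which is the electrical signal the readout circuitry senses.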
The advantages of the MEMS-based approach over the analog VLSI approach include a lower power requirement, since the physical energy of the sound waves does some of the work otherwise done by the VLSI transconductance amplifiers. Also, since the MEMS-based approach more closely resembles natural systems, there is a more direct correlation between the system response and the input acoustical energy.
Other applications of MEMS technology to biomimetic robots include cantilever microswitches that model antenna behavior and provide water-flow sensing. These MEMS-based sensors are being used to model lobster and scorpion behaviors on underwater robotic vehicles [McGr02].
4.2.5 “See-Hear” design for the blind by retraining auditory system [Mead89]
The “See-Hear” concept is intended to help a blind person “see” by hearing different sounds based on objects visible in a head-mounted camera system [Mead89, Ch 13]. Successful implementation requires transforming visual signals into acoustic signals so that users can create a model of the visual world with their auditory system.
Both vision and auditory systems have receptive fields representing data distributions within the local environment. The vision system maps light emissions and reflections from 3D objects onto the 2D photoreceptor mosaic in the retina, whose conformal mapping onto the brain is called the retinotopic map. Similarly, the auditory system takes frequency components of local sound energy and maps a spectrum onto the basilar membrane in the cochlea and subsequently (via cochlear nerve) to a conformal map on the brain called the tonotopic map.
Both vision and auditory systems are concerned with detecting transient events. The vision system detects motion by taking time-space derivatives of the light intensity distribution. Transients help to localize events in both space and time, and the brain constructs a 3D model of the world using motion parallax, which is the apparent object motion against the background caused by observer motion. If an observer is focused on a point at infinity and moves slowly, then nearby objects appear to move rapidly against the infinite background, while objects farther away appear to move more slowly. Transient sounds are also easily detected and localized in the auditory system.
The vision and auditory systems differ in how the peripheral information is processed:
“In vision, location of a pixel in a 2D array of neurons in the retina corresponds to location of objects in a 2D projection of the visual scene. The location information is preserved through parallel channels by retinotopic mapping. The auditory system, in contrast, has only two input channels; location information is encoded in the temporal patterns of signals in the two cochleae. These temporal patterns provide the cues that the higher auditory centers use to build a 2D representation of the acoustic environment, similar to the visual one, in which the position of a neuron corresponds to the location of the stimulus that it detects.” [Mead89]
The key biological vision concepts exploited in the See-Hear chip include [Mead89]:
- Logarithm of light intensity collected at the photoreceptor; using a logarithmic function expands the available dynamic range as compared to a linear function.
- The spatial orientation of light sources (which includes reflected light) is preserved from the photoreceptor mosaic through the retinotopic map
- Depth cues required for mental reconstruction of 3D space are provided by time-derivative signals of the light intensity profile
The key auditory cues for sound localization include:
- Time delay (350-650 microseconds) between ears, providing a horizontal placement cue
- Acoustic high-frequency attenuation, providing further horizontal placement cue
- Direct and indirect pathways in the outer ear causing a destructive interference pattern that is a function of elevation, thus providing a vertical placement cue
As in a natural vision system, the See-Hear system accepts photonic energy through a lens and focuses the energy onto a 2D array of pixels (a pixel is simply a picture element). Each pixel value represents the light coming from a specific direction in the 3D world. The See-Hear chip includes local processing at each pixel location.
Each pixel processor responds to the time-derivative of the logarithm of the incident light intensity. Incoming photons enter the depletion region of a bipolar junction phototransistor, creating electron-hole pairs in quantities proportional to the light intensity. Two diode-connected MOS transistors connected to the emitter produce a voltage drop proportional to the logarithm of the light intensity. A MOS transconductance amplifier with nonlinear feedback provides the time-derivative output signal of the pixel processor. Each pixel processor is capacitor-coupled to its neighbors, so that the chain of pixel processors acts as a delay line.
Time-derivative signals propagate in two directions in the electronic cochlea circuit, mimicking the time delays between the left and right ears. As seen in Figure 4.2.5-1, a transient event in the left visual field results in sound on the left side before sound on the right side, which mimics the behavior of sound events in auditory systems. The delay circuit also filters higher frequencies, so that longer delays result in more attenuation of high frequencies. This feature models the binaural head shadow, the attenuation of high frequencies as sound travels around the head. The combined effect of delayed signals and the high-frequency attenuation of the delay channels serves to combine both natural horizontal localization cues into one circuit.
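A hypothetical sketch of the horizontal-cue mapping follows; the pixel count and per-stage delay are assumed, not taken from [Mead89]. A transient at pixel column x reaches the left output after x delay stages and the right output after the remaining stages, so the arrival-time difference encodes azimuth. With the assumed values, the extreme columns produce a 630 µs difference, which happens to fall within the 350-650 µs natural interaural range listed earlier.

```python
# Hypothetical mapping (pixel count and per-stage delay assumed): an event
# at column x reaches the left output after x delay stages and the right
# output after N-1-x stages, so the arrival-time difference encodes
# azimuth, as interaural time difference does in natural hearing.
N = 64                    # pixels per row (assumed)
stage_delay = 10e-6       # seconds per delay stage (assumed)

def interaural_delay(x):
    left_arrival = x * stage_delay
    right_arrival = (N - 1 - x) * stage_delay
    return left_arrival - right_arrival   # negative -> left ear leads

print(interaural_delay(0) * 1e6)      # far-left event: left leads by 630 us
print(interaural_delay(N - 1) * 1e6)  # far-right event: right leads by 630 us
```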
Since each pixel processor circuit in the electronic cochlea contains its own photoreceptor circuit, multiple sound sources are processed as a superposition of the individual sources. To model the elevation cues arising from pinna-tragus pathway differences, the See-Hear chip contains an additional delay circuit at each end. The 2D image is focused on a 2D array of pixel processors, and the output of each horizontal row is added to a delayed version of itself to model the mixing of the pathways relevant to the elevation of the objects in the image. In this way, the outputs of all rows are summed together to create only two sound signals, one for each ear. If two identical objects were at different elevations within the image, the different pinna-tragus pathway delays at the ends of their respective rows would provide the user with an audible cue as to the elevation of each object.
The user can ultimately learn how to hear a 3D model of the external environment based on what is visually captured with the camera system.
4.2.6 A biomimetic sonar system [Reese94]
A “Biologic Active Sonar System (BASS)” based on the echo processing of bats and dolphins was designed to detect and classify mines in shallow water [Reese94]. Front-end filters and nonlinear functions emulating auditory neuronal models were used to obtain high resolution with low-frequency sonars (another example of coarse coding in natural systems). The intended product of this research is implementation of the system in an autonomous underwater vehicle.
Figure 4.2.6-1 shows the block diagram of the BASS processing stages. The band-pass filters (BPFs) have sharp roll-off characteristics at high frequencies and are broad-band, overlapping other channels significantly (coarse coding). This is inspired by natural peripheral auditory processing and provides good time/frequency definition of the signal as well as an increased in-band signal-to-noise ratio (SNR).
As in the vision system, automatic gain control (AGC) allows the system to cover a much wider dynamic range; here it is based on the integrate-to-threshold behavior of auditory neurons. This sharpens signal onset time, which translates into sharper range resolution. The half-wave rectifier and sigmoid function are inherent in mammalian auditory processing and serve the same purpose of sharpening onset time and range resolution.
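The onset-sharpening effect of the rectifier and sigmoid can be illustrated with a toy echo envelope; the sigmoid gain and threshold below are assumed values, not parameters from [Reese94].

```python
import math

# Toy echo envelope through BASS-style nonlinearities (sigmoid gain and
# threshold are assumed values): rectify, then compress with a sigmoid.
def half_wave(x):
    return max(x, 0.0)

def sigmoid(x, gain=10.0, threshold=0.3):
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

echo = [0.0, 0.05, 0.1, 0.4, 0.9, 1.0, 0.8]       # slowly rising envelope
sharpened = [sigmoid(half_wave(v)) for v in echo]

# The largest single-step rise grows, i.e. the onset is sharpened.
print([round(v, 2) for v in sharpened])
```

A sharper onset makes the echo's arrival time, and hence the target range, easier to pin down.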
Peak summing and delay provide in-band coherent addition and inter-band signal alignment. This mimics natural biological phase-locked loops and provides pulse compression. The anticipated benefits of such a wide-band, low-frequency design are longer detection ranges and better recognition of partially buried mines.
4.03: Questions
Chapter 4 Questions
1. What is the most basic sense?
2. Why is this sense necessary for the most primitive life-forms?
3. How are stretch-sensitive mechanosensory designs fundamentally different from photosensory and chemosensory mechanisms?
4. What is kinesthesia?
5. Lateral inhibition is a form of adaptation. What signal function does it accomplish?
6. How is a military fighter pilot like the green lacewing?
7. How is coarse coding manifested in the human auditory system?
8. What are the three middle-ear ossicles, and what is their function?
9. How are static and dynamic equilibrium changes sensed in the human auditory system?
10. Neurons can fire at most 1kHz, or at a rate of 1ms between action potentials. Nyquist sampling requires two samples per highest-frequency wave period, which means that such a neuronal firing rate can only encode up to 500 Hz. How is it that humans can discern components beyond 20 times that amount (10kHz)?
11. What were some of the significant results from Webb’s robotic implementation of cricket phonotaxis?
12. Why is it so amazing that we must “cut corners” in computational processing to get our electronic models to simulate real-time behavior of biological sensory systems?
13. What are the two information pathways in the auditory system of the barn owl?
14. How do first-order systems, such as differentiators and integrators, and second-order systems differ in their step responses?
15. What is the basic idea behind the “See-Hear” system?
16. What advantages does the MEMS-based silicon cochlea have over the analog VLSI-based silicon cochlea?
17. Define these terms:
phonotaxis –
pitch –
roll –
yaw –
azimuth –
elevation –
halteres –
dipteran –
pixel –
motion parallax –
MEMS –
5.1 Natural Chemo-sensory Systems
Natural chemo-sensory systems provide information from four groups of senses:
General chemical sense: All organisms display this sense. For humans, this sense is mediated by free neurons in the skin.
Olfaction: The sense of smell, generally regarded as a distance sense.
Gustation: The sense of taste, generally regarded as a contact sense. Separating olfaction and gustation is difficult as the cellular and molecular mechanisms can be the same. We could try to separate the two as either atmospheric or fluid medium, but this breaks down in describing the two senses for underwater life forms.
Solitary chemo-receptor cells (SCCs): Best developed in a few species of fish. The receptors are scattered in the fin surfaces and provide information on the presence of food or predators.
5.1.1 Chemo-sensory capability in simple life-forms
The earliest life-forms on Earth were the prokaryotes, cellular organisms with no nuclei; the eukaryotes, which do have nuclei, appeared later. It is believed that the prokaryotes had the world to themselves for about two billion years. Much of our understanding of the molecular biology of chemo-sensitivity comes from experiments with the contemporary bacterium Escherichia coli, or E. coli.
Moving bacteria are propelled by flagella, long cilia- or hair-like protrusions that twist or turn in response to chemical stimuli. Some rotate at around 100 Hz, energized by a transmembrane hydrogen-ion concentration gradient. E. coli has 5-10 flagella distributed around its body. When all rotate counter-clockwise, the bacterium moves forward toward a chemical attractant; when they all rotate clockwise, the result is a random tumbling motion. With no chemical attractant present, the movement is sporadic and random; with an attractant present, the motion is the same except that there is less tumbling while the bacterium is moving toward the source. The overall motion is a net migration toward the source of the chemical attractant.
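The run-and-tumble behavior described above amounts to a biased random walk, and it can be sketched in a few lines of code. This is an illustrative simulation, not a calibrated model: the unit step size, the two tumble probabilities, and the use of negative distance-to-source as a stand-in for attractant concentration are all assumptions made for this example.

```python
import math
import random

def run_and_tumble(source=(0.0, 0.0), start=(50.0, 50.0), steps=2000, seed=1):
    """Biased random walk in the style of E. coli chemotaxis: tumble
    (re-orient randomly) less often while the attractant signal improves.
    All parameters here are illustrative assumptions."""
    random.seed(seed)
    x, y = start
    heading = random.uniform(0.0, 2.0 * math.pi)
    prev_c = -math.hypot(x - source[0], y - source[1])  # concentration proxy
    for _ in range(steps):
        x += math.cos(heading)
        y += math.sin(heading)
        c = -math.hypot(x - source[0], y - source[1])
        # run (keep heading) more persistently when the signal improved
        p_tumble = 0.1 if c > prev_c else 0.5
        if random.random() < p_tumble:
            heading = random.uniform(0.0, 2.0 * math.pi)  # random re-orientation
        prev_c = c
    return math.hypot(x - source[0], y - source[1])

# The walk starts about 70.7 units from the source; the biased tumbling
# reliably brings it much closer even though every step is random.
final_distance = run_and_tumble()
```

The motion rule is identical with and without attractant; only the tumble frequency changes, and that alone produces the net migration toward the source described above.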
Deep study of certain internal chemo-sensory mechanisms quickly merges into endocrinology (internal secretions) and biochemistry. The olfaction and gustation systems, however, are driven by chemical information external to the organism. Our interest is more in these exteroceptor sensory systems than in the interoceptor-driven ones [Smith08].
5.1.2 Gustation in insects
Chemo-sensory receptors in insects are frequently multi-modal, serving as both a mechano-sensory receptor and a chemo-sensory receptor. The multi-modal sensilla (hairs) protrude from the outer cuticle (shell) with a terminal pore at the tips of the sensilla. Chemicals can enter the pores and travel to the nearby dendritic inputs. Bio-chemical chain reactions result from the combinations of certain chemicals with the nerve endings. These same sensilla would also have another neuron sensitive to the mechanical distortions on the sensilla caused by fluid movement or direct contact or pressure [Smith08].
5.1.3 Gustation in mammals
There are six basic taste qualities [Smith08]:
sweetness
saltiness
sourness
bitterness
umami
water
The first four are listed in the order of the taste receptor cells (TRCs) encountered on the human tongue, from the tip working back. Umami is the Japanese word for the taste of monosodium glutamate (C5H8NO4Na), a crystalline salt used for seasoning foods. Gustatory receptors in mammals are grouped into taste buds, which are located on projections called papillae. The four types of papillae are
filiform – contains no taste buds; serves to give tongue abrasive character (as in cats)
fungiform – resemble mushrooms; located on front and edges of the tongue; visible red spots sensitive to sweetness and saltiness; buried in the surface epithelium
foliate – located in folds at the rear of the tongue; sensitive to sourness or acidity
circumvallate – sunken in moat or trench; sensitive to sourness or bitterness
From the tip of the tongue to the back, the primary qualities that stimulate the taste buds are in this order:
1) sweetness, 2) saltiness, 3) sourness, and 4) bitterness. Taste Receptor Cells (TRCs) typically have dendrites to multiple taste buds. Similarly, each taste bud may provide input to multiple TRCs. New nerve endings “search out” new synaptic contacts as taste buds are turned over. Thus, there is a complex connection scheme of taste buds to associated TRCs. There is ongoing debate as to whether the brain recognizes different tastes by specific fiber activity or by a pattern of activity across the population of fibers [Smith08].
5.1.4 Olfaction in insects
Insect hygro-receptors, which detect humidity, are classed as olfactory (distant receptors) as there is no opening for direct contact to the environment. These sensilla are typically short pegs within a cuticular cavity. Humidity causes sufficient mechanical distortion for receptor signaling, which would explain why they are set within a cuticular cavity: normal contact with the environment will not falsely send a humidity signal.
Hygro-receptors have been detected on the antennae of all insects that have been carefully examined. Although present in all these species, they are typically very sparse among the other sensilla; on the cockroach, for example, about one in every 500 sensilla is a hygro-receptor. Hygro-receptor neurons share the same sensilla with other hygro-receptor neurons and with thermo-receptor neurons.
Insect olfactory sensilla are typically multi-porous, allowing extra opportunity for the detection of a semiochemical, a chemical stimulant (or pheromone) which carries a specific meaning, such as a mating opportunity, danger, trail, aggregation, or dispersal. Social insects rely on trails and patches of semiochemicals. Detection of the sex pheromone is the most effective, which makes sense given the importance of reproduction to survival. A male silkworm moth can detect a single molecule of the female pheromone. A single antenna consists of many branches, each bearing many sensilla; each antenna has about 17,000 sensilla, each 100 microns long and 2 microns in diameter. The large number of sensilla effectively amplifies the detection of faint odors in windy conditions.
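The amplification provided by many sensilla can be framed as a simple probability calculation: if each sensillum independently catches an odorant molecule with a small probability p, the chance that at least one of N sensilla succeeds is 1 - (1 - p)^N. The per-sensillum probability used below is purely illustrative.

```python
def p_detect(p_single, n_sensilla):
    """Probability that at least one of n independent sensilla detects
    an odorant, given a per-sensillum detection probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_sensilla

# Illustrative numbers only: a per-sensillum hit probability of 0.0001
# becomes a better-than-80% detection chance across 17,000 sensilla.
print(round(p_detect(1e-4, 17_000), 3))
```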
Olfaction begins with a chemical binding of the attractant molecule to an odorant-binding or pheromone-binding protein. There is increasing evidence that the subsequent biochemistry involving G-Protein membrane signaling is the same as found in vertebrate olfactory systems. This suggests a common process that has been developed throughout the animal kingdom [Smith08].
Rheotaxis and Anemotaxis
Insects such as moths use odor-gated anemotaxis: the insect moves in response to odorants carried on air currents, and the moth's flight path is modulated by odor concentration. One simple anemotaxis strategy is demonstrated by the male moth moving toward an attractant released by the female. When the attractant is detected, the male moth flies upwind; when the odorant plume is lost, it zig-zags crosswind over increasing distances. If it detects the attractant again, it simply resumes flying upwind.
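The surge-and-cast strategy just described can be sketched as a tiny state machine. This is a hedged illustration: the function name, state variables, and sweep widths are invented for this example, not taken from any specific moth study.

```python
def anemotaxis_step(odor_detected, state):
    """One step of the simplified moth strategy: surge upwind while the
    plume is detected; when it is lost, cast crosswind with widening
    sweeps. 'state' tracks the current cast width and side."""
    if odor_detected:
        state["cast_width"] = 1                  # reset the zig-zag
        return "surge upwind"
    direction = "left" if state["side"] > 0 else "right"
    move = f"cast {direction} (width {state['cast_width']})"
    state["side"] *= -1                          # alternate crosswind side
    state["cast_width"] += 1                     # widen each sweep
    return move

state = {"cast_width": 1, "side": 1}
print(anemotaxis_step(True, state))    # surge upwind
print(anemotaxis_step(False, state))   # cast left (width 1)
print(anemotaxis_step(False, state))   # cast right (width 2)
```

Re-detecting the odorant resets the cast width, so the moth immediately returns to the narrow upwind surge, matching the behavior described in the text.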
Lobsters sweep their antennae back and forth to detect a source of food underwater. However, lobsters do not use rheotaxis, which is essentially underwater anemotaxis, since underwater currents are far too turbulent for that to work. Their irregular and variable tracks to the source, and their increased speed in the middle of the track, suggest that lobsters (and other marine animals) are steered toward plume sources by odor patches, not by odor-stimulated up-current movement like the moth's. An interesting description is provided by [Consi94]: “Lobsters smell via paired antennules, small antennae positioned medially to the large antennae. Each antennule contains an array of thousands of sensory cells arranged in a tuft of hairs. The antennules can act as discrete time sampling sensors: under conditions of low flow they periodically ‘flick’, ejecting a parcel of water and allowing a new packet to enter the tuft of sensory hairs for a new measurement.”
5.1.5 Olfaction in mammals and other vertebrates
Fish make incredible use of the sense of olfaction. Sharks and dogfish can detect blood and other body fluids from long distances. Salmon can use their olfactory sense to return to their spawning ground by tracing faint chemicals unique to their place of birth.
Receptive-field mapping is obvious in the visual, auditory, somatosensory, and (to a lesser extent) gustatory systems. Olfactory systems do not exhibit a receptive-field mapping corresponding to the spatial location of stimuli in the external environment. It does appear that there are three or four expression zones, each representing one of the various types of molecular stimulants.
Individual olfactory receptor cells (ORCs) are tightly embedded between supporting olfactory epithelium cells; each has up to 20 cilia that detect stimulants, and transmits action potentials to the next layer of cells, the mitral cells. Photoreceptors in the vision system are also embedded between epithelium cells, but photoreceptors transform photonic flux into graded (analog) signals for processing by the next layers in the retina, rather than into action potentials. There is a convergence of about 1,000 ORCs onto one mitral cell, and about 25 mitral cells form one glomerulus. All 25,000 or so ORCs (in the rabbit) that converge onto a glomerulus are specialized to detect one odorant molecule (or similar ones), so that each glomerulus responds to one specific odor type [Smith00].
5.1.6 Similarities in vision and olfactory systems: the retina and the olfactory bulb
The following table summarizes some similarities between the preprocessing stages of the vertebrate vision and olfactory systems. In both the retina and the olfactory bulb, there are two layers of cells connected orthogonally to the direction of information flow that mediate or inhibit the forward flow. This mediation serves to accent the locations of stimuli within the receptor layer and minimize the signal energy propagated to the deeper neuronal processing layers in the brain.
VISION | OLFACTION | FUNCTION
Retina | Olfactory bulb | Preprocess information
Photoreceptors (rods and cones; graded output) | Olfactory receptor cells (ORCs; spiked output) | Receive stimulus
Horizontal | Periglomerular | Mediate (inhibit) nearby response
Bipolar | Mitral/tufted | Pass on mediated signal
Amacrine | Granule | Further mediation
Ganglion | Mitral/tufted | Pass on mediated signal
100:1 (1:2 fovea; 400:1 periphery) | 1000:1 or 25,000:1 (ORC:glomerulus; ORC to OT) | Receptor signal compression
5.1.7 Coarse-coding in vision and chemo-sensory systems
There are relatively few specialized ORC types, yet from them we can discern many different smells. Specific odors cause specific patterns of responses across the ORC types, so odors are analyzed by spatial maps in the central nervous system, much as the other distance senses are mapped [Smith08]. A ‘model nose’ [Persaud82], discussed later in this chapter, searched for unique patterns among many odorants (over 20) using the responses of only three commercially available sensors. This demonstrates coarse coding, previously defined as the transformation of raw data using a small number of broadly overlapping filters. The power of coarse coding is that detailed resolution can be achieved with relatively few broadly overlapping sensor responses: a handful of broadly overlapping sensors can provide the raw data for identifying thousands of different categories (smells, tastes, etc.).
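Coarse coding is easy to demonstrate numerically. The sketch below assumes three broadly tuned Gaussian "sensors" and a nearest-pattern matcher; all centers, widths, and names are invented for this illustration, not taken from [Persaud82]. Even with heavily overlapping responses, twenty stimuli produce twenty distinguishable response patterns.

```python
import math

# Three broadly tuned, overlapping "sensors" (centers/width are arbitrary)
CENTERS = [0.2, 0.5, 0.8]
WIDTH = 0.3

def response_pattern(stimulus):
    """Coarse code: each sensor responds broadly, so a single scalar
    stimulus maps to a distinctive 3-vector of overlapping responses."""
    return [math.exp(-((stimulus - c) / WIDTH) ** 2) for c in CENTERS]

def identify(pattern, library):
    """Nearest-pattern lookup, mimicking recognition of a learned odor."""
    return min(library, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(pattern, library[name])))

# "Train" on 20 stimuli spread across the input range ...
library = {f"odor_{i}": response_pattern(i / 19) for i in range(20)}
# ... and a slightly noisy re-presentation of odor_7 is still recognized.
probe = [r + 0.01 for r in response_pattern(7 / 19)]
print(identify(probe, library))
```

Three sensors suffice here because identification relies on the pattern across sensors, not on any one sensor being narrowly selective.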
Gustation (taste) sensory systems are like olfactory ones in that there are relatively few types of receptor cells whose responses overlap significantly across numerous input types (tastes). A lot of work has gone into psychophysics, which documents an organism's behavioral responses to inputs, as well as into microbiology for understanding neuronal and other cellular responses to environmental inputs. A significant gap in knowledge remains in explaining how the individual cellular responses are combined and biologically processed to give the overall behavior.
As previously mentioned, in vision systems coarse coding exists in the time, space, color, spatial-frequency, and temporal-frequency domains, and here we find it in the olfactory and gustatory domains as well. With relatively simple (or low-order) filters or sensory responses, biology achieves a high degree of acuity in these sensory information domains. We saw that in vision systems there are essentially only four broadly overlapping chromatic detector types, three basic temporal channels, and three basic spatial channels. Neurons receiving broadly overlapping photoreceptor responses give higher brain processing areas the ability to discern details in the color, time, and spatial domains. Similarly, broadly overlapping olfactory and gustatory receptor responses give higher brain processing areas the ability to discern many distinct odors and tastes.
5.2 Applications inspired by natural Chemo-sensory Systems
There are many potential applications for chemo-sensory systems. Some of these include
- Health industry to understand our own biology
- Commercial food industry to understand sense of taste
- Commercial perfume industry to understand sense of smell
- Commercial pesticide/herbicide industry to understand insect gustation
- Governments to monitor air and water and find sources of pollution
- Military to trace chemical trails to hidden explosives, etc.
As with photo-sensory and mechano-sensory system applications, the rest of this chapter represents a sample of contributions of scientists and researchers attempting to demonstrate or build chemo-sensory systems based on biological inspiration.
5.2.1 A model nose demonstrating discrimination capability [Persaud82]
This work demonstrates the coarse-coding capability of an olfactory system. The ORC types have different response levels for the basic odor components, and specific odors are believed to be perceived as a combination of the ORC type responses.
A model for an artificial olfactory system simulating biological ones was pursued, with a focus on selecting odorant detectors that respond to a wide variety of chemical types and on combining the responses so that different odorants can be identified in parallel. Ratios of sensor responses were used to discriminate between different stimulating odorants.
Making gas sensors from n-type semiconductors is convenient, as the dopant and the intensity of the doping can be adjusted to achieve a desired biomimetic ORC type. This is advantageous because semiconductor technology is well suited to producing a wide variety of n-type semiconductors with very uniform responses, each representing an ORC type. A set of such commercially available semiconductor gas sensors was used and gave specific response patterns for specific stimulants, but the response times were not as fast as those of natural olfactory systems.
The model nose was completed by using three commercially-available sensors from Figaro (www.figarosensor.com), including one intended as a general purpose combustible gas sensor, one more sensitive to alcohols, and one more sensitive to carbon monoxide. The results showed that the responses to over 20 different odorants were consistent and unique. The researchers point out that as in biological systems, such an artificial olfactory system would have to be trained to recognize specific patterns as specific odors.
5.2.2 Integrating a sniff pump in an artificial olfactory sensor [White02]
The Tufts Medical School Nose (TMSN) was designed to improve sensitivity and discrimination ability relative to previous artificial-nose efforts. A fan and valving system were arranged so that odorant molecules were drawn over the olfactory sensor array in short bursts, mimicking inhalation patterns.
A deviation from biology is the use of polymer-and-dye mixtures in LEDs whose fluorescence changes based on the odorants present. Electrical energy is used to illuminate LEDs whose spectra change with the input odorants, and the photonic energy is then converted to analog electronic signals for further processing. Presumably, this is done to help meet the desired sensitivity for a specific application, here land-mine detection. The device included 32 sensors whose responses were broad across the various odorants, which included TNT, DNT, and similar compounds. This coarse coding of the input resembles natural olfactory sensors.
This project (funded by ONR) illustrates the different uses of biology in inspired design. One purpose is to emulate biology to better understand how it does what it does, in which case it is very important not to deviate from biology. Another purpose involves a separate problem that needs to be solved (here, detecting land mines), where biology can offer incredible insights into novel designs, but where other technology may be integrated into the design to meet the objectives, moving it away from true emulation of biology.
5.2.3 Integrating spike-based processing into artificial olfactory sensor [Liu18]
This effort contributes the integration of spike-based signal processing, a known characteristic of natural olfactory sensors. The first sensing stage is an array of virtual olfactory receptor neurons (VORNs) that converts the odorant response into a spatio-temporal pattern of spikes. As with biological ORs, the array is composed of groups of similar receptors with overlapping responses. The next sensing stage is the bionic olfactory bulb (BOB), composed of processing elements named for their biological counterparts: a mitral cell layer that feeds forward to a granule cell layer. Inhibitory responses are fed back from the granule layer to the mitral layer, which is also known in biology. This is another example of lateral inhibition, in which a stimulated cell suppresses the responses of its neighbors.
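Lateral inhibition of this kind can be illustrated with a toy pass over a one-dimensional array of responses. The update rule and inhibition strength below are assumptions for this sketch, not taken from [Liu18]; the point is only that neighbor suppression sharpens a response peak.

```python
def lateral_inhibition(responses, strength=0.5):
    """One round of neighbor suppression: each unit is reduced by a
    fraction of its neighbors' average activity, which sharpens the
    response peak. The rule and 'strength' value are illustrative."""
    n = len(responses)
    out = []
    for i, r in enumerate(responses):
        left = responses[i - 1] if i > 0 else 0.0
        right = responses[i + 1] if i < n - 1 else 0.0
        out.append(max(0.0, r - strength * (left + right) / 2))
    return out

# A broad bump becomes a narrower one: the flanks are suppressed far
# more, relative to the peak, than the center is.
print(lateral_inhibition([0.2, 0.6, 1.0, 0.6, 0.2]))
```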
The task is to discern one of seven Chinese liquors that come from different geographical locations, each with its own unique combination of odorants. Little is known in biology about how natural olfactory systems process the spike signals for specific odor detection. The researchers used two traditional methods for electronic-nose data processing, linear discriminant analysis (LDA) and the support vector machine (SVM), as well as a backpropagation artificial neural network (BP-ANN). The latter bears significant resemblance to biological information processing and performed better than the other two.
5.2.4 Integrating insect olfactory receptors for biohybrid gas flow sensor [Yam20]
In this effort biology is used to create the chemical sensing itself, since the natural sensor is both sensitive and selective. Insect DNA is used to synthesize olfactory receptors, which are incorporated into an artificial cell membrane. The difficulty is getting the input gas into a soluble form for chemical detection by the artificial olfactory receptor. Microscopic slits were designed into the gas flow path and modified with a hydrophobic (water-repelling) coating to create microchannels for chemical detection.
Since odorant detection was sporadic for the given stimulants, the design was scaled to monitor 16 channels, which appears to give the desired detection response. Biology also relies on multiple channels, or opportunities, for a successful chemical detection. For example, the male silkworm moth discussed earlier can detect a single molecule of the female pheromone thanks to antennae each bearing over 10,000 sensilla (each 100 microns long and 2 microns in diameter).
5.2.5 Robotic lobster chemotaxis in turbulent chemical sources [Grasso02]
The Robo-Lobster experiment is motivated by the desire for autonomy in underwater vehicles. Acoustics is primarily used, and sometimes optics, but many biological species make strong use of chemo-sensing. The lobster has long antennae that sample the water chemistry for purposes such as eating, mating, spawning, and avoiding predators. A challenge in locating an odorant source is the turbulent nature of underwater chemical plumes, which causes discontinuities in chemical trails; gradient descent will not work. When moving toward a detected food source, the lobster's antennae meander back and forth in an attempt to catch samples of the odorant, and the lobster adjusts its orientation and direction of movement in response to what is detected. Numerous underwater chemical-source detection applications exist in the scientific, environmental, commercial, and military sectors.
The emphasis of the effort was on the autonomous acquisition of the chemical source. The lobster's ability to crawl on the bottom was simplified to an underwater wheeled robot in a fish tank. The tank measured 10 m by 2 m and was filled to a depth of 44 cm with moving seawater. A chemical source was introduced that brought odorant molecules to the robot in slow-moving turbulent patterns. The robot moved forward when the chemical was detected (when the sensor conductivity exceeded a threshold) and oriented itself so that the responses of its two artificial antennae were more balanced. Sensor responses were converted to digital values, and a Motorola microcontroller programmed in C implemented the wheel-movement algorithm.
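The detect-then-balance behavior described here can be sketched as a simple steering rule. This is a hypothetical reconstruction from the prose description, not code from [Grasso02]; the threshold and deadband values are invented for illustration.

```python
def steer(left_reading, right_reading, threshold=0.5, deadband=0.1):
    """Steering rule suggested by the description above: stop unless the
    chemical signal exceeds a threshold; otherwise turn toward the
    antenna with the stronger reading until the two are roughly
    balanced. Threshold and deadband values are illustrative."""
    if max(left_reading, right_reading) < threshold:
        return "stop"                       # no plume detected
    diff = left_reading - right_reading
    if abs(diff) <= deadband:
        return "forward"                    # balanced: drive ahead
    return "turn left" if diff > 0 else "turn right"

print(steer(0.2, 0.3))   # below threshold: stop
print(steer(0.9, 0.85))  # balanced: forward
print(steer(0.9, 0.4))   # left antenna stronger: turn left
```

Because turbulent plumes arrive as intermittent patches, a rule like this produces the stop-and-go, reorienting track the text attributes to real lobsters.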
A robot designed to imitate a particular species and attempting to perform a task done by that species can illuminate our understanding of biology. The authors express this by suggesting “construct a robot that is competent to test a hypothesis or set of hypotheses that have been suggested by the biology and then allow the robot’s behavior to inform you of the acceptability of that hypothesis”.
5.03: Questions
Chapter 5 Questions
1. Give examples of people (and their applications) interested in a better understanding of chemo-sensory systems:
2. What are some reasons for pursuing research in biorobotics:
3. What is olfaction?
4. What is gustation?
5. Why is it difficult to differentiate olfactory and gustatory sensing systems?
6. _____ The best way to distinguish between olfaction and gustation is
a) olfaction is simple, and gustation is complex
b) olfaction is a distance sense, and gustation is a contact sense
c) olfaction detects chemicals in air while gustation detects chemicals in fluids
d) there is no difference between the two
7. Photoreceptors are the first neurons in the visual processing system pathway. What are the first neurons in the olfactory system pathway?
8. The retinotopic map (vision) and somatotopic map (touch) in the brain provide a spatial map of external stimuli in the respective systems. How is the olfactory system receptive field mapped in the brain?
9. Taste buds provide neuronal inputs to what type of cells?
10. What do hygro-receptors detect?
11. On what part of the insect have hygro-receptors always been found?
12. Differentiate between a semiochemical and a pheromone.
13. What is anemotaxis?
14. What is rheotaxis?
15. What are some application areas for successful lobster chemotaxis research?
16. What were some of the significant results from MIT’s robotic implementation of lobster chemotaxis in Robo-Lobster?
Appendix A – Example Literature Review Assignment
The following is a representative literature review assignment. The course website was hosted by CANVAS with an active discussion board.
Use the CANVAS discussion board to claim the paper you would like to review; make sure it has not already been claimed. If you would like to consider other papers, that is fine, but email the instructor a copy for approval before you do the review. It must be recent and directly relevant to imaging systems whose designs (or novel parts of them) are inspired by natural imaging systems (vision).
The FSU Libraries are very useful for finding additional publications. If interested:
Search for “FSU Libraries” and then “Find a Database”. Under the option to “Search our A-Z list of Databases”, pull down “I” and find IEEE Xplore. At this point you may have to log in to the FSU Portal. Once in IEEE Xplore, you can search on “biologically-inspired”, “bio-inspired imagers”, or some other related term. On the left, under “year”, refine your search by moving the range to the most recent year and a half (2018 to 2019), and then click on “Apply refinements”.
Once you have selected one of the provided papers (or have approval for a different paper), enter the citation (at least the author, title, and year) on the course discussion board and confirm that no one else has selected that paper. Read the paper and study it well enough to discuss it in class. You are not required to understand all derivations, equations, etc., but you should be able to answer the following questions in a Word file. Your answers do not have to be long, but they should be in your own words and very clear and accurate; there is no requirement on length. Use sentences, not phrases or bullet points. While presenting in class, pull up your Word document (avoid PowerPoint, etc.); you may pull up the paper you are reviewing as well, and feel free to go back and forth between your written answers and the paper. You may refer to the figures, tables, diagrams, or anything else in the paper, but your Word file should answer the questions without the reader having to refer to the paper.
Turn in a Word or PDF file that answers the questions in the format of the attached example on the next page:
Reviewed by: {your name}
Paper citation: {citation}
What is the problem to be addressed or solved?
What is the natural paradigm being considered?
What has already been done?
How is this approach different?
What accomplishment is claimed?
What do they plan to do next?
{Graduate students} Discuss at least one (or more) of the mathematical derivations or equations in the paper. If there are no derivations discussed, then choose a paper which does.
Paper Abstract (pasted): {paste paper abstract here}
Post your Word file (or PDF) in the course Assignment folder. If you selected a paper not yet posted, post a copy of it as well with your file.
Paper citation: H. Wu, K. Zou, T. Zhang, A. Borst, K. Kuhnlenz, "Insect-inspired high-speed motion vision system for robot control," Biological Cybernetics, 106:453-463, 2012.
What is the problem to be addressed or solved?
Improve the accuracy of velocity estimation in the Hassenstein-Reichardt Elementary Motion Detection (HR-EMD) model. Velocity estimation of the objects in an image is integral to visual perception and will be a necessity for robot control systems performing autonomous navigation and collaboration with other agents (robots). Motion estimation using conventional imaging technology is slow.
What is the natural paradigm being considered?
Motion detection at the neuronal level in insect vision, specifically the well-known HR-EMD model.
What has already been done?
The basic insect-vision-inspired HR-EMD model is well established. It has been used to address aircraft guidance (collision-avoidance, gorge-following, and landing) and demonstrated in robotic platforms. It has been implemented in VLSI for collision detection and implemented in FPGAs for optic flow detection and motion estimation. It has been applied for course stabilization and altitude control of a blimp-based unmanned aerial vehicle.
How is this approach different?
The former applications of the HR-EMD are based on qualitative motion detection rather than quantitative motion-velocity estimation. Here the authors use image pattern statistics (brightness, contrast, and a spatial PSD estimate) combined with the HR-EMD output, via a look-up table, to estimate the velocity of motion rather than merely detecting motion. Here also, as in the authors' former efforts, a conventional temporal low-pass filter is used as the delay element in the HR-EMD.
What accomplishment is claimed?
The average EMD response of the entire image was used for closed-loop yaw-angle control system of a robotic manipulator arm. They demonstrated yaw control using a piece-wise linear motion input and an arbitrary motion input.
What do they plan to do next?
They plan to extend to demonstrate motion estimation of 3D objects in a receptive field.
Paper Abstract (pasted): The mechanism for motion detection in a fly’s vision system, known as the Reichardt correlator, suffers from a main shortcoming as a velocity estimator: low accuracy. To enable accurate velocity estimation, responses of the Reichardt correlator to image sequences are analyzed in this paper. An elaborated model with additional preprocessing modules is proposed. The relative error of velocity estimation is significantly reduced by establishing a real-time response velocity lookup table based on the power spectrum analysis of the input signal. By exploiting the improved velocity estimation accuracy and the simple structure of the Reichardt correlator, a high-speed vision system of 1 kHz is designed and applied for robot yaw-angle control in real-time experiments. The experimental results demonstrate the potential and feasibility of applying insect-inspired motion detection to robot control.
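The HR-EMD discussed in this example review can be sketched in a few lines: each of two neighboring inputs is delayed by a first-order low-pass filter and multiplied with the undelayed signal from the other channel, and the two half-detector products are subtracted. This is a generic textbook form of the detector, not the paper's implementation; the filter constant and test signals below are illustrative.

```python
def hr_emd(left, right, alpha=0.3):
    """Minimal Hassenstein-Reichardt elementary motion detector: each
    input is delayed by a first-order low-pass filter, cross-multiplied
    with the undelayed opposite channel, and the two products are
    subtracted. The output sign indicates motion direction."""
    lp_l = lp_r = 0.0
    out = []
    for l, r in zip(left, right):
        lp_l += alpha * (l - lp_l)   # low-pass (delayed) left channel
        lp_r += alpha * (r - lp_r)   # low-pass (delayed) right channel
        out.append(lp_l * r - lp_r * l)
    return out

# A pulse moving left-to-right (hitting the left input first) gives a
# net positive response; the reversed motion gives the mirror-image
# negative response.
pulse = [0, 1, 1, 0, 0, 0]
shifted = [0, 0, 1, 1, 0, 0]
print(sum(hr_emd(pulse, shifted)))   # positive: preferred direction
print(sum(hr_emd(shifted, pulse)))   # negative: opposite direction
```

The detector's raw output depends on pattern contrast and spatial frequency as well as speed, which is exactly the accuracy shortcoming the reviewed paper addresses with its statistics-driven look-up table.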
Introduction
In the two introductory modules (1.1 and 1.2) of the course, we will introduce its main theme: learning about food systems as systems that combine human social systems with the natural earth system and earth-surface processes to fulfill the food needs of human societies. The objective is to prepare you to tackle learning about sub-components of these systems (e.g., water resources, soil management, the adaptive capacity of food systems to climate change) in an integrated rather than a piecemeal way, which is essential both to understanding the current function of food systems and to proposing future solutions for them. During this introductory unit, you will also embark on the course capstone project, which asks you to structure your learning around the food systems of a particular world region. These introductory modules will also present the systems concept as a general way of thinking that applies especially well to food systems.
Goals
• Identify connections between human and natural components of food systems.
• Understand and apply systems thinking principles to food systems.
Learning Objectives
After completing this module, students will be able to:
• Construct a concept map representing two food systems.
• Identify human and natural component parts of food systems.
• Apply systems thinking strategies in analyzing food systems at an elementary level, including assessing relationships between natural and human system factors that display key functions and characteristics of food systems.
• Identify sustainable and unsustainable characteristics of food systems.
Assignments
Print
Module 1 Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 1 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Sage, Colin. "Introduction: Why environment and food?" pp. 1-8, Chapter 1 in Environment and Food. London and New York: Routledge.
3. Public Radio International: Despite Economic Gains, Peru's Asparagus Boom Threatening Water Table. (Module 1.2)
1. You are on the course website now.
2. Available on e-reserves and here as a PDF: Chapter 1 in Environment and Food.
3. Online: Despite Economic Gains, Peru's Asparagus Boom Threatening Water Table. Note: You will listen to this only if you select to analyze the Peruvian asparagus export sector, as a food system example in the Summative Assessment.
To Do
1. Formative Assessment: Environment and Food Issues
2. Summative Assessment: Concept Mapping and Assessment of Food Systems
3. Participate in the Discussion
4. Take Module Quiz
1. In course content: Formative Assessment; then submit in Canvas
2. In course content: Summative Assessment; then take the quiz in Canvas
3. In Canvas
4. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
01: Introduction
This course, The Future of Food, provides introductory-level learning perspectives on human and environmental systems of food and resource use, in order to understand challenges and opportunities. The goal of the course is to understand, and be able to apply, an integrated perspective on the environmental and human dimensions of environmental issues related to food production and consumption. The content of the course addresses environmental and human systems of food and resource use to an equal extent. In the case of the first (environmental systems), you will learn about the geosystems and agroecology of soil, nutrients, crops, water, and climate that form the fundamental basis of food-growing environmental systems. In the case of the second (human systems), you will learn about factors such as population and the roles of culture, social interactions, economics, and politics. These multiple perspectives are integrated in the framework of "Coupled Natural-Human Systems" (CNHS, also called Coupled Human-Natural Systems), used beginning in module 1.2. We will focus on current environment-food systems, while also considering the past trajectories and future trends of food systems. The course also blends information and analysis of local-scale environment and food systems with a focus on the regional, national, and global scales, and asks you as a learner to apply this knowledge in a Capstone Project that you assemble over the course of the semester in collaboration with other students. The course features active learning in both online and classroom settings and a wide variety of learning materials and methods.
1.01: The Future of Food- Course Overview
“We are what we eat.” We’ve all heard this common expression and may think of it in nutritional and biological terms: for example, the way that the chicken or beans we consume are turned into muscle tissues. However, this simple phrase has a deeper meaning: food production, food culture, and the organization of food transport and consumption loom large in the way that our society "is". These food-related activities also strongly impact the earth's fundamental surface processes and ecosystems. So, we are what we eat, in a societal as well as an individual sense. This wider vision of food as a driving presence within society is increasingly relevant as groups and individuals like you become more interested in the ramifications of their food for themselves and for the environment. This course is designed to provide you with the tools to understand the combined environmental and human dimensions of food production and consumption. To do so we must start with some simple questions and reflect a bit on how we can address them.
Figure 1.1.1: The above image of a food festival in the United Kingdom captures the excitement food creates in our culture, the varied cultural influences leading to different types of street food, and, behind the scenes, the pathways of production and transportation that keep food moving to consumers' plates at all times. Credit: Helen (Afeitar), used with permission, Flickr: Liverpool food festival, Creative Commons (CC BY 2.0)
Where does our food come from? And how can we make our food supply more sustainable? These two questions may seem simple, but they lead us to a range of considerations that are covered through the remainder of this course. As we consider these questions in each module, we'll explore a model of food systems as human systems in interaction with natural systems, or coupled human-natural systems (Fig. 1.1.2). As the name suggests, the concept of Coupled Human-Natural Systems (CHNS) describes two major components that are involved in the production and consumption of food. The first component is the natural world and its set of interacting natural factors. Some of you may know the term ecosystem: ecosystems formed from interacting natural components such as water, soils, plants, and animals (e.g. Fig. 1.2.1 in Module 1.2) are the context for most food production. Throughout the course, we may also refer to the elements and processes of ecosystems as the earth system and earth system processes, or simply as the environment. These natural systems are a basic foundation of the food supply that we will learn more about in modules four through six (Environmental Dynamics and Drivers). The continued productivity of natural systems is crucial to sustainability, as you will see in the short reading below.
On the other hand, the two questions posed above involve the role of people, both as individuals and in groups such as communities, institutions (including colleges and universities, farm and food processing businesses, and farmer organizations, for example), and political units such as countries. To introduce this dimension, we refer to it globally as the "human system" within a coupled human-natural system (Fig. 1.1.2; complete definitions of human and natural systems are given in Module 1.2). Within the human system, factors such as styles of farming and food choices, tastes, economic inequality, and farmer and scientific knowledge that inform humans' management of ecosystems emerge from human cultural, social, economic, and political influences.
Figure 1.1.2.: A simple illustration of coupled human and natural systems with reference to food production. The list of human (households: urban and rural, businesses, governments, knowledge and science, diet and food traditions, belief systems) and natural system components (water, atmosphere, soils, plants and animals, biodiversity) on each side of the diagram is not exhaustive, and the diagram will be revisited throughout the course. The reciprocal arrows represent the mutual effects of each subsystem on the other, and are highly schematic, although they can denote specific impacts or feedbacks that will be addressed in module 1.2 and beyond. Credit: Steven Vanek, adapted from the National Science Foundation (NSF)
The end result of these interactions between human and natural systems is what we call a food system, which has also been called an "environment-food system" (see the introductory reading on the next page), with "environment" pointing to the natural components and "food system" pointing to the human organization needed to produce, transport, and deliver food to consumers, along with a host of cultural, regulatory, and other aspects of human society that relate to food. In terms of geography, the interactions of environment-food systems exhibit a huge range of variation across the world. As we all know, this variation exists between countries, so that food and farming types can be associated with “Chinese food,” “French food,” “Peruvian food,” or scores of other examples. Farming and food also vary a great deal among regions within a country and sometimes even among local places, as we know if we compare a large dairy or grain farm with a fresh vegetable farm serving local markets here in the United States. Understanding the geographic variations of environment-food interactions is key to recognizing their increased relevance and importance to people and places.
Introductory Reading Assignment:
1. Read the brief section (pp 1-7) of Colin Sage's book "Environment and Food", entitled "Why Environment and Food?" (see the assignments page). The author explains why we are interested in considering food's relationship to the environment (the latter is what we are also calling "natural systems"). He presents a provocative and critical account of our relation to food in modern societies (human systems) and the need to think about food production and consumption patterns in relation to the environment.
2. As you read, try to identify three to five main points of the reading, which is always a good practice when you read in this course and other courses.
3. After reading the assignment, continue reading below and see whether your perceptions of this author's analysis agree with the main arguments we have noted below. You may have noted similar points, or additional ones not noted here.
Consult AFTER Reading:
First, consider the list below of some of the main ideas in the reading. Do these roughly agree with your list of main points? You may have identified additional points in the reading.
1. The essential need of humans to eat has defined the relation of all societies to food production and the environment through history.
2. Transformation of food production systems in the last 100 years has dramatically changed diets and societies' impact on the environment:
• Yields have increased with industrial methods and food for many in the world has become more available.
• However, diets have worsened in many cases so that human nutrition has suffered.
• Inequality in access to food based on wealth and poverty of consumers has continued.
3. Negative impacts on the environment have multiplied, expressed in the large amounts of water needed to produce food, the strong dependence of food production on fossil fuels, and the contribution of food production to CO2, methane, and other greenhouse gas emissions that cause climate change.
4. A sustainable food system, which is increasingly the vision promoted by some food producers and consumers, involves reducing fossil fuel use in food production, cutting waste of food in transport and consumption, and increasing the just distribution of food to consumers at all levels of wealth.
We can also think of the way that these main points fit into a diagram, sometimes called a concept map, like the one drawn here. As part of the final assignment or summative assessment for module 1, and in the capstone assignment for the entire course, you will be drawing concept maps of a food system example. This diagram may get you started on visualizing human and natural components of food systems and their interaction. You'll note that a concept map can start from a very preliminary drawing or rough draft (like this one), and gradually be reorganized as you learn more about a topic and apply an organizational principle like the coupled human-natural systems concept we present in this course.
Figure 1.1.3.: An example of a concept map applied to the concepts and relationships presented by Colin Sage in the guided introductory reading for this module. Note the attempt to understand whether components of the food system are part of human vs. natural systems. Credit: Sketch by Steven Vanek
Knowledge Check
Human and Natural System Components (from the reading)
Throughout the text for this course, you will occasionally be presented with ungraded assignments that allow you to test your understanding, usually with some sort of mini-quiz or other activity. We refer to these activities as "Activate your Learning" or alternatively as "Knowledge Checks". These activities are optional. However, they are designed to help you increase your understanding of the course materials, so we hope that you will take advantage of them. In this initial activity, we are asking you to think about Colin Sage's short reading assigned above and identify whether each concept in the mini-quiz below is a human or natural component.
Instructions: For each item that follows, select whether you think it represents a human or natural system.
1) Agriculture and food system science that was used to increase production of crops, livestock, and food products.
• Natural System
• Human System
2) Freshwater resources used for irrigation by farming systems.
• Natural System
• Human System
3) The atmosphere, which receives greenhouse gases related to agricultural production.
• Natural System
• Human System
4) Inequality of wealth and poverty that contributes to inadequate access to food for poor sectors of the population.
• Natural System
• Human System
5) Food retailers that present and sell food products to consumers in supermarkets.
• Natural System
• Human System
After reading Colin Sage's brief introduction to the modern-day issues surrounding environment and food, you should be aware that food production by human societies has transformed the earth's natural systems. In fact, it is very difficult to overstate the enormous impact that food production to support human societies has had on the surface of our planet as the earth's population has grown. Here are some of these impacts:
• Humans have replaced permanent forests and wild grasslands with farm fields that allow much higher rates of soil erosion where the soil is not covered year-round. This has led to trillions of tons of soil being washed into rivers, lakes, and oceans, where it is unavailable as a key resource for food production.
• The expansion of farming and grazing has contributed to the reduction and elimination of wild forest and grassland species of plants and animals: the loss of earth's biodiversity.
• In some cases, previously unproductive dryland areas have been made highly productive through the movement of irrigation water into desert areas, allowing the expansion of human settlements.
• In other cases, elimination of forest in favor of farmland has contributed to the expansion of desert areas and worsening droughts.
• Humans have intensively fertilized cropland to make it more productive with manures and chemical fertilizers, leading to excesses of nutrients and pollution in many of the world's waterways.
• Farming and the other human activities that support modern food systems are major contributors to changes in earth's climate linked to increasing greenhouse gas concentrations in the atmosphere.
One term that is used to summarize these human impacts within the history of the earth is the Anthropocene, from Anthropos (human) and -cene, a suffix used within the geologic timescale to denote the recent past. The Anthropocene has been proposed as a new geologic epoch because of the profound and unprecedented human alteration of earth's natural systems that we point to above. Scientists researching the Anthropocene tend to agree that the beginnings of agriculture probably marked its onset. We will introduce you to the history of agriculture in Module 2. The concept of sustainable food systems that Colin Sage points to in the introductory reading is currently a major topic of debate and discussion in human societies and is a consequence of the sustainability issues that are a key feature of the Anthropocene. The idea of sustainable food systems is also a major topic of this course, and you will be asked to contribute to this discussion in your capstone project. The term Anthropocene helps us to appreciate the epochal extent and degree of these changes. Yet these changes do not suggest or imply that all is lost, or that all cropping and livestock-raising are pervasively damaging to the environment. As you’ll see throughout this course, there are already well-developed options worth considering and pursuing in order to expand sustainable environment-food systems.
Studies of the changes in the types of ecosystems that cover different areas of the earth, or land cover (e.g. crop fields versus forest versus desert), allow us to appreciate the impact on earth during the Anthropocene (Fig. 1.1.4 below). We can see in the bar chart reflecting changes over time in land cover that farmed and grazed areas involved in food production for rising populations expanded from less than 10% of earth's usable (ice-free) surface in the 1700s to over 50% in 2000, a stupendous change considering the size of earth's land area (a similar expansion of human influence in food production has also occurred in earth's ocean fisheries).
Figure 1.1.4: A graph showing the global allocation of the ice-free land area, on all five continents, to human land use versus wild (bottom stippled bar section) across three centuries from 1700 to 2000, during the rapid expansion of human population in the Anthropocene. Credit: Steven Vanek, adapted from Ellis et. al. 2010. Anthropogenic transformation of the biomes, 1700-2000; Global Ecol. Biogeogr 19, 589–606.
Click for a text description of Figure 1.1.4
(Approximate estimate of the percentages of global allocation in ice-free land area)
Year 1700:
• Wild (uninhabited) ≈ 49%
• Seminatural (e.g. inhabited forests) ≈ 45%
• Rangeland (grazed livestock, non-cropped) ≈ 1.5%
• Cropland ≈ 3%
• Villages ≈ 0.25%
• Urban ≈ 0.25%
Year 1800:
• Wild (uninhabited) ≈ 45%
• Seminatural (e.g. inhabited forests) ≈ 45%
• Rangeland (grazed livestock, non-cropped) ≈ 5%
• Cropland ≈ 3%
• Villages ≈ 1.5%
• Urban ≈ 0.5%
Year 1900:
• Wild (uninhabited) ≈ 35%
• Seminatural (e.g. inhabited forests) ≈ 35%
• Rangeland (grazed livestock, non-cropped) ≈ 19.5%
• Cropland ≈ 8%
• Villages ≈ 1.75%
• Urban ≈ 0.75%
Year 2000:
• Wild (uninhabited) ≈ 24.5%
• Seminatural (e.g. inhabited forests) ≈ 20%
• Rangeland (grazed livestock, non-cropped) ≈ 32%
• Cropland ≈ 14%
• Villages ≈ 8.5%
• Urban ≈ 1%
Similarly important is that the Anthropocene, or the "human recent history of the earth" if we translate the word slightly, brings to our attention not only the changes in natural systems or the environment but also the significant alterations of the human dimension of human-natural systems related to food. It’s safe to say that for nearly all of us this human dimension is significantly different than it was for our grandparents or even our parents. Some basic examples can be used to illustrate this trend. In the United States, for example, the population of farmers has continued to shrink; it is now less than 4 percent of the national population. At present this fraction, though generally declining worldwide, is somewhat higher in European countries and much higher in Asia and Africa. The continued importance of food-growing agriculture among large sectors of the populations in Africa and Asia, for example, creates different patterns of livelihoods (Fig 1.1.5a) and landscapes (Fig. 1.1.5b).
Figure 1.1.5a.: Farmer Cleaning Soybeans, Malawi. Credit: Max Orenstein; used with permission.
Figure 1.1.5b.: Rice-growing landscape, Vietnam. This photo illustrates the intensive interactions that exist between human habitation and farming communities in the background and food production which occupies the entire foreground. Credit: Tommy Chiu, used with permission under a creative commons license.
One important point: familiarity with environment-food systems through immediate experience among human populations, including you and your fellow learners in this course, is presumably at an all-time low. This is also an interesting reflection on the human dimension of the Anthropocene. Other statistics could be quoted to show related trends. For example, the average amount of time spent on food preparation is roughly one-quarter of what was devoted to this activity 40-50 years ago. This course takes these statistics as a challenge and an opportunity, since environment-food interactions are both less known than previously and, at the same time, highly important to the environment and society.
Knowledge Check
Figure 1.1.4 above was created by biogeographers to track the expansion of human activities for food production on earth’s surface. In your other studies, you may have learned about the global population rise over recent centuries, with its rapid spike from under one billion in the 1600s to almost seven billion in the 2000s. The above graph tracks the same process, but in terms of land use devoted to food (and fiber, e.g. cotton, wool, wood pulp) production. Population growth, in general, has been associated with images of growing cities. However, if you examine Figure 1.1.4, you will see that even in the 2000s cities represent a minuscule proportion of the total land area. By comparison, these charts document clearly the dramatic transformation of planet earth’s surface to acquire greater amounts of food. Answering the following questions should help you to increase your appreciation for this process.
1) Estimate the percentage of land in the sum of the uses called "villages" plus "cropland" (purple and green in the bars) in 1700 and again in 2000. From what to what % did it change in this time period?
• From about 5% to 10%
• From about 5% to 50%
• From about 5% to 20%
2) How many-fold is this increase, e.g. 3x versus 5x or 10x?
• About 4x (from 5% to 20%)
• About 2x (from 5% to 10%)
• About 10x (from 5% to 50%)
3) Meanwhile, from other data, we know that the total global population increased from about 500 million in 1700 to over 6 billion, which is around a 12- or 13-fold increase. Now consider that the food crops grown for these human populations were mainly grown in the village and cropland land cover types, and humans eat about as much today per person as they did in 1700 (there have been changes, but these are small, averaged across wealth levels, compared to the expansion in cropland and production). Knowing this, did the total production per land area in village and cropland go up or down?
• increased
• decreased
4) Rangelands (e.g. grazing lands for sheep or cattle on large extents of land) occur where land is too poor or lacks the water resources to support cropping, so that grazing is preferred by food producers or is the only use allowed by growing conditions. So, if (1) soils are degraded and (2) climate change leads to insufficient rainfall in dry regions, what would you predict about the percentage of rangelands in South Asia and Equatorial Africa, versus the percentage of cropland?
• The percentage of rangelands will increase if soils become poorer and climates become drier.
• The percentage of rangelands will not change if soils become poorer and climates become drier.
• The percentage of rangelands will decrease if soils become poorer and climates become drier.
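For those who like to check such estimates numerically, the arithmetic behind questions 1 through 3 can be sketched in a few lines of Python. The percentages are the rough estimates listed in the text description of Figure 1.1.4, so the resulting ratios are only indicative:

```python
# A rough numerical check of questions 1-3, using the approximate
# land-cover percentages from the text description of Figure 1.1.4.
villages_plus_cropland = {
    1700: 0.25 + 3.0,   # villages + cropland, percent of ice-free land
    2000: 8.5 + 14.0,
}

share_1700 = villages_plus_cropland[1700]  # ~3.25%, rounded to "about 5%" in the quiz
share_2000 = villages_plus_cropland[2000]  # ~22.5%, rounded to "about 20%" in the quiz
land_fold = share_2000 / share_1700        # ~4x with the rounded values; somewhat larger here

# Over the same period, population rose from ~500 million to over 6 billion.
pop_fold = 6e9 / 5e8                       # a 12-fold increase

# Food demand (population) grew faster than the land used to grow food,
# so production per unit of village + cropland area must have increased.
assert pop_fold > land_fold
```

Whichever rounding you use, the population multiplied faster than the food-producing land area, which is the answer to question 3: production per unit area increased.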
The guided reading in this module on concerns around "Environment and Food", and our consideration of the Anthropocene as an era defined by the dramatic expansion of food production on earth's surface, lead us naturally to the concept of sustainability, a common term in present-day discourse in many different settings, from the coffee shop and classroom, to dinner tables and company boardrooms, to government offices. As we think about the increasingly obvious impacts of our food system on the global environment and on the social dynamics of global society, we are concerned that this food system needs to (a) be part of a society and communities with adequate opportunities for all and just relationships among people, and (b) not compromise the future productivity and health of earth's many different environments. As part of the introductory work of this first module, we ought to consider a definition of sustainability that is broad enough to encompass both human and natural systems, and geographic scales from single farming communities to the worldwide reach of food production and transport in the modern global food system. We present below in Figure 1.1.6 one relatively common definition of sustainability as a "three-legged stool" (we will revisit this concept in Module 10 when we return to food systems).
Figure 1.1.6.: Three-legged Stool of Sustainability. Credit: Steven Vanek, based on multiple sources and common sustainability concepts
Click for a text description of the three-legged stool of sustainability image.
A green three-legged stool. The seat is labeled Sustainable Food System. One leg is labeled Environment, one leg is labeled Community, and the third is labeled Economy. There is a list for each as follows. Environment: reduce pollution and waste, use renewable energy, conservation, restoration. Community and Social Sustainability: good working conditions, health care, education, community, and culture; Economy: employment, profitable enterprises, infrastructure, fair trade, security.
In the model of the three-legged stool, environmental sustainability reflects protecting the future functioning, biodiversity, and overall health of earth's managed and wild ecosystems. Community and social sustainability reflect the maintenance or improvement of personal and community well-being into the future, versus relations of violence and injustice within and among communities. In the case of food systems, this reflects especially the just distribution of food and food security among all sectors of society, the just treatment of food producers and the rights of consumers to healthy food, and the expression of cultural food preferences. Economic sustainability within food systems has often been conceptualized as relationships of financial and supply chains that support sufficient prosperity for food producers and the economic access of consumers to food at affordable prices.
Dividing the concept of sustainability into three parts of an integrated whole allows us to think about whether food production practices or food distribution networks, for example, are sustainable in different aspects. Excessive water use and fossil fuel consumption, for example, are aspects of environmental sustainability challenges in food systems considered further on in this course. Meanwhile, issues of food access, poverty, and displacement from war, and their impacts on human communities and their food security, combine social and economic sustainability, and will also be considered by this course. The three-legged stool is a simple, if sometimes imperfect, way to combine the considerations of sustainability into a unified whole. As you consider the sustainability challenges at the end of module one and in your capstone project, you may be able to use these three different concepts along with the concepts in the guided reading to describe the sustainability challenges of some food system examples. You may want to ask yourself: is this practice or situation environmentally sustainable? Socially sustainable? Economically sustainable?
Individuals, Communities, and Organizations Taking Action on Sustainability: Information Resources on Real-World Efforts
The interest in the sustainability of environment-food systems, as we've just defined them -- see the "three-legged stool" on the previous page -- has skyrocketed in recent years. A brief sampling of these issues involves the following:
• Health and Nutrition concerns over the nutritional quality and nutrient content of food and food-producing environmental systems
• Food security for the approximately 1.1 billion people around the world whose low incomes and other limitations do not allow them to access sufficient food.
• The need to design food and agricultural systems that can respond successfully to climate change.
We intend that this course will connect you as a learner to this rapidly expanding suite of interests while offering the background and capacity to understand these issues better and more fully. You will pursue this aim through the readings and evaluations in this course, and also by completing a capstone project on the food system of a particular region.
One way to begin learning about this expanding interest is to consider the activities of individuals, communities, and governments as well as organizations ranging from nonprofits to international and global groups. In the case of individuals and communities, much interest is being generated by local food initiatives, such as farmers’ markets, and other local groups of producers and consumers seeking to improve environment-food systems. A variety of government agencies in the United States and other countries have also become increasingly involved in environment-food issues.
The United States Department of Agriculture, for example, now offers a focus on environment-food issues such as responses to climate change and dietary guidelines in its range of research and science activities. The USDA website also includes compilations of data from its different research services that you will use in this course.
The United Nations' Food and Agriculture Organization (FAO), which is based in Rome, Italy, is one of a number of international organizations focused on environment-food issues. It addresses nearly all the topics raised in the course, as well as many others. The statistical branch of the FAO, known as FAOSTAT, is an important source of information on the international dimension of issues involving food and the environment.
Numerous non-profit organizations are involved in environment-food issues in the United States and in other countries. One of these organizations in the U.S., Food Tank, periodically publishes lists of organizations that it considers leaders in environment-food issues. In 2014, for example, Food Tank named the "101 Organizations to Watch in 2014". This interesting list, complete with brief descriptions, includes a number of both well-known and lesser-known groups active in environment-food issues. Other organizations have greatly expanded their environment-food focus. National Geographic, for example, now has a major focus on environment-food issues. Its website includes an important section on food and water within the organization’s initiative on EarthPulse: A Visual Guide to Global Trends. This section includes a number of excellent global maps of environmental and food conditions, challenges, and potential solutions.
These resources may be a help to you as you consider not just the learning resources we present in this text, but the real efforts to promote environmental, social, and economic sustainability in food systems, which you will address in the final section of the course and in your capstone project.
Instructions
Look over Food Tank's "101 Organizations to Watch in 2014".
Choose one organization from this website that treats the combination of environment-and-food issues. You'll need to be selective since some of the organizations specialize in food-related issues but have little emphasis on environmental ones. Also, read the assignment from Colin Sage, pp. 1-8, "Introduction: Why environment and food?" in Environment and Food, which is one of the required readings for this module (see the assignments page).
Then,
1. Write a brief overview description of the organization you chose from the Food Tank website: its summary goals in relation to environment and food issues (distinct from the more detailed description of issues and factors below), its funding source or sources, its location and scope (local, national, and/or global), its longevity (including when it was founded), and what you perceive as its intended audience and/or client or target population.
2. After addressing these overview questions for the organization, continue and address briefly the following two questions where you can draw on the assigned reading from C. Sage:
• What factors or issues of importance to environment-food systems does it address - a more complete elaboration of its summary goals in the previous overview? (1 paragraph)
• How is sustainability defined and addressed by this organization? (1 paragraph)
Your writing should be between one and one and a half pages long, and no longer than two pages. When appropriate, you can relate the work of this organization to the other material in this introductory module regarding multidisciplinary approaches or the concept of the Anthropocene. Be sure to describe what types of environment and food issues are being addressed by this organization, as well as the wider factors and sustainability questions.
Submitting Your Assignment
Please submit your assignment in Module 1 Formative Assessment in Canvas.
Grading Information and Rubric
Your assignment will be evaluated based on the following rubric. The maximum grade for the assignment is 25 points.
Rubric
Each of the five criteria below is scored at one of three levels: 5, 3, or 1 points.

1. Answer adequately addresses the organization's relationship to environmental AND food issues as well as its understanding of sustainability and sustainability goals.
• Score 5: A clear description of both environment and food issues and sustainability, and how the organization interprets the linkages between them.
• Score 3: Some mention of both environmental and food issues addressed by the organization, and how sustainability is understood.
• Score 1: Little mention of any element, or one of the elements missing.
2. Answer addresses summary details of the organization as requested in the assignment (e.g. food/environment goals, longevity, target audience or client group, etc.).
• Score 5: Complete mention of all elements, clearly explained.
• Score 3: Mentions most elements.
• Score 1: Mentions less than half of the elements.
3. The answer is legible, correct, and clearly written.
• Score 5: Clearly structured writing organized into themes, easily readable, with very few grammatical errors.
• Score 3: Some gaps in clarity or grammar errors, but significant effort is indicated; easily readable.
• Score 1: Difficult to read or many grammatical errors.
4. The answer relates the organization description to course content and reading.
• Score 5: Shows an understanding of environment and food issues as addressed by course materials, as well as relating these to other material in the module (multidisciplinarity or the Anthropocene).
• Score 3: Shows an understanding of environment and food issues as addressed by course materials.
• Score 1: Shows incomplete understanding of environment and food issues as described in the course materials.
5. Length.
• Score 5: Writing is sufficiently long and provides an adequate and interesting level of detail about the organization.
• Score 3: Insufficient length to fully engage the topic.
• Score 1: Writing is only 1-2 sentences on all topics or relies on a quick, outline-style response.
Introduction
Module 1.2 continues the goal of the introductory module, which is to introduce the course themes of integrated perspectives on the environmental and human systems that are related to food production and consumption. In the case of the first (environmental systems), the course places emphasis on the geosystems and agroecology of soil, nutrients, crops, water, and climate that form the fundamental basics of food-growing environmental systems. In the case of the second (human systems), the course emphasizes factors such as population and the roles of culture, social interactions, economics, and politics. Module 1.2 builds on the concepts of multidisciplinarity introduced in Module 1.1 by introducing the Coupled Natural-Human systems framework as a conceptual tool in which multiple natural and social disciplines are used to understand food systems. Building from simple examples of home gardens and hunting/fishing considered as natural/human systems, Module 1.2 provides an introductory description of food systems both as integrated production/transport/consumption chains and as interacting natural and human subsystems. Both of these themes will be deepened in Module 8, but the purpose here is to introduce them in basic form so that the subsequent modules on domestication, water, soils, and agroecology can utilize the framework and place equal emphasis on both human and natural factors. Module 1.2 also advances the thesis (and key geosciences concept) that the global food system is a major area in which humans are transforming earth surface properties and processes during the Anthropocene. In Module 1.2 students are asked to complete a formative assessment identifying introductory concepts in real examples of food systems which span local to global scales, and which take place both within and outside of the United States.
The module concludes with a summative assessment that applies systems thinking and asks students to map a food system example and explore how relationships between parts of a food system are as important as knowledge about each part.
1.02: Food Systems Combine Natural and Human Systems
What defines a system?
In this course, we will refer to the term "system" repeatedly, so it is worthwhile to think about how systems are defined. A basic definition of a system is "a set of components and their relationships". Rather than dwelling on this definition in the abstract, it's probably best to immediately think of how the definition applies to real examples from this course. An ecosystem is a type of system you may have heard of, in which the components are living things like plants, animals, and microbes, plus a habitat formed of natural, urban, and agricultural environments, together with all the relationships among these components. The emphasis is on the interactions among the living parts of the system, for example, food webs in which plants feed herbivores and herbivores feed carnivores. A food system, as we have just begun to see, consists of food production components like farms, farm fields, and orchards, along with livestock; food distribution chains, including shipping companies and supermarkets; and consumers like you and your classmates; plus myriad other components like regulatory agencies, weather and climate, and soils. In the case of food systems, we have already pointed out how these can be considered as human-natural (alternatively, human-environment) systems, where it can help to see the system as composed of interacting human components (societies, companies, households, farm families) and natural components like water, soils, crop varieties, livestock, and agricultural ecosystems.
Figure 1.2.1.: A simplified diagram of a typical ecosystem. Ecosystems are a common system type analyzed by geoscientists, ecologists, and agroecologists. The black rectangular outline is one way to define a boundary for the ecosystem, where climate and sunlight fall outside the system but provide resources and define the conditions under which the ecosystem develops. This diagram can also be considered a type of concept map in which, for example, 'sun', 'plants', and 'climate' are components, and the arrows connecting the components are relationships in the system. These relationships are labeled here as either flows (of energy, food, nutrients) or causal links, like the way that dead plants and animals end up feeding soil microbes, or the way the ecosystem affects the climate over time. You may be able to see how this simplified diagram could represent a far more complex system containing hundreds of plant and animal species and thousands of types of microbes, interacting in complex ways with each other and the environment. Keep this diagram in mind as a possible example when you think about completing your preliminary concept map of a regional food system at the end of unit 1. Credit: Steven Vanek
Click for a text description of the ecosystem image
A diagram of a typical ecosystem. Soil resources is on the bottom. The sun is at the top. There is a black line (ecosystem boundary) drawn around four boxes and the soil. The four boxes are carnivores (including humans), herbivores (including humans), plants, and microbes. Lines from carnivores, herbivores, and plants go to microbes and say "feed". There are "feed" lines from plants to herbivores and herbivores to carnivores. A line from the sun to inside the ecosystem boundary says, "sunlight energy for plants". A line from soil resources to microbes says, "supply nutrients, water, habitat". A line from microbes to soil resources says, "replenish and cycle nutrients". A line from soil resources to plants says, "supply nutrients and water stored in soils". Lines from plants to microbes and plants to soil resources say, "replenish organic matter". Outside the ecosystem boundary is "climatic conditions". A line from climatic conditions to the ecosystem says, "provides basic conditions, rainwater". A line from the ecosystem to climatic conditions says, "impacts on climate change".
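Since this course will treat systems as "components and relationships" throughout, it can help to see that definition made concrete. The short sketch below (component and arrow names are loosely paraphrased from the simplified ecosystem diagram; it is an illustration, not part of the course materials) stores the labeled arrows and answers a simple question about the system:

```python
# Minimal sketch: a system as "components and relationships".
# Names are paraphrased from the simplified ecosystem diagram.

components = {"sun", "plants", "herbivores", "carnivores", "microbes", "soil"}

# Each relationship is a labeled arrow: (source, target, label).
relationships = [
    ("sun", "plants", "sunlight energy"),
    ("plants", "herbivores", "feed"),
    ("herbivores", "carnivores", "feed"),
    ("plants", "microbes", "feed"),
    ("soil", "plants", "supply nutrients and water"),
    ("microbes", "soil", "replenish and cycle nutrients"),
]

def outgoing(component):
    """Return the labeled arrows leaving a given component."""
    return [(t, label) for s, t, label in relationships if s == component]

# Plants feed both herbivores and microbes:
print(outgoing("plants"))
```

The same pattern scales up: a concept map of a regional food system, like the one you will draw later, is simply a larger set of components and labeled arrows.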
Behavior of Complex Systems
Systems that contain a large number of components interacting in multiple ways (like an ecosystem, above, or the human-natural food systems elsewhere in this text) are often said to be complex. The word "complex" may have an obvious and general meaning from daily use (you may be thinking "of course it is complex! there are lots of components and relationships!") but geoscientists, ecologists, and social scientists mean something specific here: they are referring to ways that different complex systems, from ocean food webs to the global climate system, to the ecosystem of a dairy farm, display common types of behavior related to their complexity. Here are some of these types of behaviors:
• Positive and negative feedback: the change in a property of the system results in an amplification (positive feedback) or dampening (negative feedback) of that change. A recently considered example of positive feedback would be that as the arctic ocean loses sea ice with global warming, the ocean begins to absorb more sunlight due to its darker color, which accelerates the rate of sea ice melting.
• Many strongly interdependent variables: this property results in multiple causes leading to observed outputs, with unobserved properties of the system sometimes having larger impacts than we might expect.
• Resilience: Resilience will be discussed later in the course, but you can think of it here as a sort of self-regulation of complex systems, in which they often tend to resist changes in a self-organized way, like the way your body attempts to maintain a temperature of 37 °C. Sometimes complex systems maintain themselves until they are pushed beyond a breaking point, after which they may change rapidly to another type of behavior.
• Unexpected and "emergent" behavior: one consequence of the above three properties is that complex systems can display unexpected outcomes, driven by positive feedbacks and unexpected relationships or unobserved variables. Sometimes this is referred to as "emergent" behavior when we sense that it would have been impossible to predict the behavior of the system even if we knew the "rules" that govern each component part.
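The first bullet, positive feedback, can be caricatured in a few lines of code using the sea-ice example. All numbers below are invented purely for illustration; this is a toy iteration showing the *shape* of positive feedback, not a climate model:

```python
# Toy positive feedback: as "ice" shrinks, the darker ocean absorbs
# more energy, which melts ice faster. All numbers are invented for
# illustration only -- this is not a climate model.

ice = 100.0        # arbitrary units of sea-ice extent
melt_rate = 1.0    # baseline melt per time step

history = [ice]
for step in range(10):
    absorbed = (100.0 - ice) * 0.05   # more open ocean -> more absorption
    ice = max(0.0, ice - (melt_rate + absorbed))
    history.append(ice)

# The loss per step grows over time: the change amplifies itself.
losses = [history[i] - history[i + 1] for i in range(len(history) - 1)]
print(losses)
```

A negative feedback would have the opposite sign: the further the system moves from its starting state, the harder it is pushed back, which is the dampening behavior described above.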
To these more formal definitions of complex systems, we should add one more feature that we will reinforce throughout the course in describing food systems that combine human and natural systems, which is that drivers and impacts often cross the boundary between human or social systems and environmental or natural systems (recall Fig. 1.1.2). Our policies, traditions, and culture have impacts on earth's natural systems, and the earth's natural systems affect the types of human systems that develop, while changes in natural systems can cause changes in policies, traditions, and culture.
For more information on complex systems properties with further examples, see Developing Student Understanding of Complex Systems in Geosciences, from the "On the Cutting Edge" program.
On the next page, we'll see an interesting example of complex system behavior related to the food system in India.
The Indian Vulture Crisis: An Example of Complex Systems Behavior
The "Indian Vulture Crisis" may or may not be a familiar term to you, but it is important enough to the history of modern India that it has involved dozens of research experts as well as major changes in wildlife, human health, and government policies, and now has its own Wikipedia page (Indian vulture crisis) that you can browse. It is also an interesting example of complex systems behavior that involves food systems and the unintended consequences of veterinary care for animals. The main causal links are outlined below in figure 1.2.2, and the narrative of the crisis goes as follows:
Beef cattle are hugely important to Indian food systems even though beef is usually not consumed by adherents of the majority Hindu religion (Indian Christians and Muslims, for example, do consume it). Cattle are also widely used as dairy animals (think: yogurt and clarified butter as important parts of Indian cuisine) and are even more important as traction animals (oxen) used by small-scale farmers across India to till soil for all-important food crops. Because of their importance, the drug diclofenac was put into widespread use across India in the 1990s to treat inflammation and fevers in cattle. Coinciding with the release of this medication, however, a precipitous drop in the population of Indian vultures began, which became the fastest collapse of a bird population ever recorded. Vultures are not valued in many parts of the world, but scavenging by vultures was the main way that dead animal carcasses were cleared from Indian communities, especially carcasses of beef cattle whose meat is not consumed. It was not until the 2000s that the cause of the collapse was discovered: the diclofenac administered to cattle is extremely toxic to vultures that eat treated carcasses. The consequences did not end with the solving of this mystery, however. Because vultures are a key part of a complex system, their loss triggered further unforeseen consequences in both human and natural parts of the Indian food system. A few of these are shown in figure 1.2.2 below: first, vultures are an ideal scavenger that creates a "dead end" for human pathogens in rotting carcasses; with vultures gone, water supplies suffered greater contamination from carcasses that took months instead of weeks to rot, leading to greater human illness.
Second, populations of rats and dogs, which are less effective carcass scavengers, expanded in response to these carcasses and the lack of competition from vultures, which resulted in dramatic increases in rabies (and other diseases) due to larger dog and rat populations and human contact with wild dogs. This is significant since more than half of the world's human rabies deaths occur in India. Finally, the vulture crisis even had implications for religious rituals in India: people of the Parsi faith, who practice an open-air "sky burial" of their dead where the body is consumed by vultures, were forced to abandon the practice because of hygiene concerns when human bodies took months instead of weeks to decompose. A final consequence of these problems was that the drug diclofenac was banned from use in India, Nepal, and Pakistan in hopes of helping vulture populations to revive. This final turn of events is an example of the human system responding to the unforeseen consequences. Additionally, alternatives to these drugs have been developed for veterinary use that have no toxicity to vultures.
Figure 1.2.2.: Diagram of the causal chain leading to the Indian Vulture Crisis. This crisis contains several examples of complex system behavior and unforeseen consequences. Credit: Steven Vanek
Click for a text description of the causal chain image.
A diagram using boxes to show the causal chain leading to the Indian Vulture Crisis. Box 1: Cattle as essential milk and traction animals in the food system. An arrow goes from box 1 to box 2: Veterinary use of diclofenac (diclofenac highly toxic to vultures). An arrow goes from Box 2 to Box 3: Collapse of vulture populations. From box 3 there are two arrows. One leads to a comment box that says, "Banning of diclofenac and alternatives developed in India, Nepal, and Pakistan". The second arrow from box 3 leads to box 4: Crisis of unconsumed carcasses. A comment box that says, "vultures as honored, efficient, and most sanitary practical method of carcass disposal", also has an arrow to box 4. From box 4 there are three arrows pointing towards comment boxes as follows: dog and rat populations expand; lack of vultures for ritual consumption of bodies in Parsi faith; contamination of drinking water in rural areas. There is also an arrow from "dog and rat populations expand" to a comment box that says, "greater incidence of rabies".
Note the properties of complex systems and human-natural systems exhibited by this example. Farmers sought mainly to protect their cattle from inflammation and speed healing in service of the food system, while pharmaceutical companies sought to profit from a widespread market for an effective medication. The additional, cascading effects of the human invention diclofenac, however, were dramatic, far-ranging, and in some cases unexpected, because of the many interacting parts in the food systems and ecosystems of Indian rural areas: cattle, groundwater, wild dogs, and human pathogens like rabies. The crisis eventually provoked responses from the human system, with impacts on human burial practices among the Parsi, laws banning diclofenac, and development of alternative medications. The search for sustainability in food systems, like those you will think about for your capstone regions, involves designing and choosing adequate human responses to complex system behavior.
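One way to summarize the cascade just described is as reachability in a small directed graph of causes and effects. The sketch below (event labels abbreviated from figure 1.2.2; an illustrative exercise, not an official model of the crisis) traces every downstream consequence of the initial intervention:

```python
# Sketch: the vulture-crisis causal chain as a directed graph.
# Event labels are abbreviated from figure 1.2.2.

causes = {
    "diclofenac use": ["vulture collapse"],
    "vulture collapse": ["unconsumed carcasses", "diclofenac ban"],
    "unconsumed carcasses": ["dog and rat populations expand",
                             "loss of Parsi sky burial",
                             "water contamination"],
    "dog and rat populations expand": ["greater incidence of rabies"],
}

def downstream(start):
    """All events reachable from a starting event (depth-first search)."""
    seen, stack = set(), [start]
    while stack:
        for effect in causes.get(stack.pop(), []):
            if effect not in seen:
                seen.add(effect)
                stack.append(effect)
    return seen

# One intervention cascades into seven distinct consequences:
print(sorted(downstream("diclofenac use")))
```

Tracing reachability like this is one way to see why the effects of a single change can surface far from its point of origin in a coupled human-natural system.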
Complex Systems and Interdisciplinarity
One final note on this example is to point out that to fully understand the Indian vulture crisis a large number of different disciplines were brought to bear: we need cultural knowledge about the beliefs and practical usefulness of both cattle and vultures in India. We also need biological knowledge about drug toxicity to wildlife, pathogens, groundwater contamination by microbes, and rat and dog populations. We also need policy expertise to think about transitioning food systems to less toxic alternatives to current practices. And all of these disciplines needed to be brought together in an integrated whole to assemble the diagram shown in figure 1.2.2. The purpose of this text and this course on food systems is to help you to develop some of the skills needed for this sort of interdisciplinary analysis of human-environment or human-natural systems.
Food System Examples from Household Gardens to Communities and Global Food Systems
Some of you in this course, perhaps even many of you, have had the experience of growing herbs or vegetables (Fig. 1.2.3) or keeping chickens for eggs or animals for meat. Although dwarfed by the enormous dimensions of the global food system, home food production is still a significant part of the food consumed by billions of earth's inhabitants. In other cases, small-scale fishing and hunting provide highly nutrient-dense foods and coexist with modernized and industrial food systems, as any fishers and hunters in the class may be able to attest. These experiences of food production for personal or family consumption show natural-human interactions in a very simple way. To grow vegetables, hunt, or raise animals means bringing together natural factors (seed, animal breeds, soil, water, fishing and hunting ranges, etc.) with human factors: knowledge of plants, livestock, or wild animals; government policies that govern access to food; food storage and preparation; markets for tools and seeds; and human-built infrastructure like a garden fence or a chicken coop. This same interaction between natural and human factors is evident at a larger scale in the photo in Figure 1.2.4, which shows a landscape that has been transformed by a human community for food production.
Figure 1.2.3.: This diverse home garden includes lettuce, kale, beans, sweet corn, peppers, squash, carrots, and garlic, among other crops. During the growing season, it offsets food expenses in an urban setting and offers extremely fresh food as well as an excellent way to recycle kitchen wastes as compost. Credit: Steven Vanek
Figure 1.2.4.: Landscape near Acobamba, Huancavelica, Peru. This Peruvian landscape has been almost completely transformed for food (and firewood) production. Credit: Steven Vanek
Beyond these experiences of auto-sufficient food production and consumption, however, most of humanity also currently depends on global and local versions of the food system which features a web of suppliers, producers, transporters, and marketers that supply all of us as food consumers. Compared to gardening, catching trout, or keeping chickens, these food systems together form a far more complex version of the interactions between natural and human factors that produce and transport the food that we then consume as part of global and local food systems.
One way of viewing these regional and global food systems is to divide them by type of activity in relation to food, into components of food production, food transport, and food consumption (Fig. 1.2.5). Like other diagrams we've seen so far, this diagram can be considered a concept map showing relationships between the different components of a food system. The main arrows show the flow of food through the system, from the managed natural environments used to produce food to the end result of nutrition and health outcomes. There are some unseen or implicit relationships here as well, like the way that farming practices, technology, communication and education, and other attributes of human societies support the functioning of a food system; these are included within the outer system boundary.
Figure 1.2.5.: A simplified diagram of food system components, depicting a linear progression of production, transportation, and consumption of food. It’s helpful to think of this more linear version in conjunction with the interacting natural and human systems in figure 1.2.6 to remember that food systems are not just linear conveyor belts delivering outcomes. Credit: Figure adapted from Combs et al., 1996.
Click for a text description of the food system component image.
Simplified diagram of food system components. The diagram is within a circle. At the top, is the heading, Natural Resources, and Environments. From here is an arrow pointing directly below to an oval with the word, Production. From Production, there is an arrow pointing directly below to another oval with the word, Transport. From Transport, there is an arrow pointing directly below to a final oval with the word, Consumption. From the consumption oval, an arrow leads direction below it to Nutrition and Health Outcomes. On the left side of the circle are the following items listed from top to bottom: Farming Practices and Agroecology; Food Processing; Policy support; and Food Preparation. On the right side of the circle are the following items listed from top to bottom: Technology and Infrastructure; Crop and Livestock Breeding and Biodiversity and Communication and Education.
In addition to this more linear or "conveyor belt" portrayal of food systems delivering nutrition from natural resources, we may also be interested in thinking about the dramatic impacts humans have made on earth systems during the Anthropocene, discussed in module 1.1. In that light, we know that these natural systems may either be sustained or degraded by management, an important response that either maintains or undermines the entire food system. For this purpose, we may be interested in a food system diagram that makes the interactions among human and natural systems very explicit. Below in figure 1.2.6 is a version of a Coupled Human-Natural Systems diagram -- again, a concept map of sorts -- developed by an interdisciplinary group of social and environmental scientists (Liu et al. 2007) to represent the human-environment interactions in food systems.
Figure 1.2.6.: A food system as a Coupled Human-Natural System, a way of considering food systems that will be explored throughout the course. This more detailed presentation, compared to that in module 1.1, shows that both human systems (communities, regions, food supply chains) and natural systems (agroecosystems, landscapes, water bodies) have internal interactions. Human systems also organize and modify natural systems to produce food, and natural systems respond via feedbacks (food provision, and aggradation or degradation depending on the human modification and management of the natural systems). Credit: Steven Vanek and Karl Zimmerer; modified from the National Science Foundation.
Click for a text description of the Human Natural Coupling image.
A coupled human-natural system. Heading at the top says, Human to natural coupling: Human systems reorganize natural systems to produce food: e.g. gardens, farms, fisheries, managed forests. Below this heading are two boxes, side by side. On the left is a box with the heading, Human System. Outside of the box is a descriptor that says, human system internal interactions. There are two lists inside the box on the left. The first list is Farms, farming households; food policies; food distribution companies, consumers. The second list is management knowledge for farming, herding, hunting, fishing, etc.; local and national governments; agriculture input companies. Inside this box are arrows indicating a continuous relationship among the listed items. On the right side is a box with the heading, Natural System. Outside the box is a descriptor that says, natural system internal interactions. There are two lists inside the box. The first list is plants and animals (crops, livestock, pests, wild species); soils. The second list is the climate system and water. Below the two boxes is the heading Natural to Human Coupling: Altered ecosystems respond with food production, degradation, sustained production. There are arrows around the entire diagram indicating the continuous relationship among all items.
This diagram highlights the internal interactions within both the natural and human components of the food system. The natural components shown here are those we will tackle in the first part of the course, while the latter half of the course will address the human aspects of food systems and the human-environment interactions shown as the large arrows connecting these two major components. As we saw above in comparing home garden production, smallholder production landscapes, and global food production chains, food systems and their components are highly varied. However, many similarities apply across the different components, actors, and environments of the food system:
• Food systems modify the natural environment and capture the productivity of earth’s natural systems to supply food to human populations. Globally, they create huge changes in the earth’s surface and its natural populations and processes.
• As portrayed in Figure 1.2.6, despite their complexity, food systems often involve coupling between human management and the response of natural systems. As pointed out by author Colin Sage in Module 1.1, the response of natural systems to human management can create sustainability challenges in food systems.
• Food systems involve the production, transport (distribution), and consumption of food (Figure 1.2.5). The scale of these processes can differ among food systems, which can be local, regional, or global.
• Food systems are examples of complex systems: they involve many interacting human and natural components, as well as important variability, for example, droughts, soil erosion, population changes and migration, and changing policies. All of these affect the natural and human systems and can disrupt simple cause and effect relationships, in spite of the large-scale drivers and feedbacks shown in figure 1.2.6.
Knowledge Check
Natural/Human component identification: Check the following potential parts/actors within the food system that would form part of the natural subsystem as portrayed in a coupled human natural system diagram, figure 1.2.6. Select all that apply.
• Knowledge of local smallholder farmers in the Andes to select and maintain crop varieties
• Wet climates of temperate Europe
• Concrete-lined irrigation canal on a California farm
• Corporate activities to develop and promote pesticides.
• Cucumber beetle pest of squash and pumpkins
• Truck
• The Gulf of Maine off of New England, USA.
• Government subsidies that provide incentives for taking land out of crops for soil conservation.
• Fertilizer factory
• Farm field containing soils and plants
Knowledge Check (flashcards)
Consider how you would answer the question on the card below. Click "Turn" to see the correct answer on the reverse side of the card.
Side 1:
State the three parts or functions of a food system from the simple linear food system model (figure 1.N) and give an example of each from your own experience and knowledge of the food system as a consumer.
1. _______________________ + example: ______________
2. _______________________ + example: ______________
3. _______________________ + example: ______________
Side 2:
1. Production + examples: farm field, ranch, hunting range, etc.
2. Transport (or transportation, distribution) + examples: truck, ship, food warehouse, etc.
3. Consumption + examples: kitchen, dining room, restaurant, school cafeteria, picnic, etc.
First, download the worksheet to understand and complete the assessment. You will submit the completed worksheet to Canvas. This assignment will require you to draw on your reading of this online text from module one, as well as several options for case studies where we have provided brief descriptions and audiovisual resources (radio clips, videos, photos) that describe these systems. You will accomplish two parts of an assignment that will not only evaluate the learning objectives for module one but will also give you practice in skills you will need to complete your capstone project. These two parts are:
1. Draw a concept map of the system that distinguishes between human and natural components or sections of the system (an example is given below)
2. Fill in a table that identifies some key components, relationships, and sustainability concerns for this system.
You will complete this assignment for your choice of two food system examples, as described in the detailed instructions below. You will first read, then draw a concept map, and then fill in a table with short responses.
Instructions
1. Choose ONE national to global food system example and ONE local to regional food system example from the options that follow this assignment page in the text (see links in outline view at right, or the link to the next page at the bottom of this page). National to global food system examples are Pennsylvania Dairy, Colorado Beef Production, and Peruvian Asparagus, while local to regional examples are the Peruvian smallholder production and New York City greenmarkets examples. Read the descriptions of the system, which may include photos, videos, audio clips, or visiting other websites. Completely read through the description of the two systems you have chosen (one national/global and one local/regional), including these external links before continuing on to the following steps (though you may certainly return to the descriptions as needed). You are welcome to consult other resources online regarding the system you have chosen since that is a skill that will be helpful when embarking on data gathering for your capstone project.
2. Using a sheet of paper, or composing in PowerPoint, develop a concept map of only ONE of the systems you chose, subject to the following guidelines:
1. Title your concept map with the name of the system you are describing (among the five described on the following pages) and put your name on the diagram.
2. Before you begin your concept map, draw a vertical line in your diagram to distinguish between Human and Natural components of the system to the right and left, drawing on Fig. 1.2.6, Fig. 1.2.7 below, and Fig. 1.1.3 (the last one is the concept map example from the guided introductory reading by Colin Sage). However, you do not need to make your diagram look like the highly schematic diagrams in the text of the previous pages (see rather Fig. 1.2.7 below) -- you should include components that are discussed in the examples on the following pages, and connect them in the way that makes sense to you.
3. Your concept map should be legible, but it does not need to be extremely neat since it reflects a first attempt to characterize a system. Additional components and relationships will occur to you as you draw, and you may need to squeeze them in. Therefore, leave space as you begin your diagram. If you feel your map becoming too hard to understand, please do compose a second "clean" copy.
4. Remember that systems are defined as components and the relationships between them. If you are having trouble thinking of what to draw, think about what the components are in the system (these can be boxes or ovals), and then how they are related (these may be labeled arrows).
5. Below in Fig. 1.2.7 is an example of a concept map drawn from a food production system producing field crops (wheat, oats, barley, soybeans) and hogs for pork in Western France. Material for this concept map is drawn from Billen et al., 2012, "Localising the nitrogen footprint of the Paris food supply"1
Figure 1.2.7.: Example concept map of a food production system, divided into human and natural components. Credit: Sketch by Steven Vanek
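If it helps to think of item 4 above in data terms, a concept map is essentially a labeled directed graph: components are nodes, each assigned to the Human or Natural side, and each relationship is a labeled arrow. The minimal Python sketch below is purely illustrative; the component names and their Human/Natural assignments are hypothetical stand-ins loosely inspired by the Western France example, not part of the assignment.

```python
# Components (nodes), each tagged as belonging to the Human or Natural
# side of the coupled system -- the vertical line in your diagram.
# These names are illustrative; substitute those from your chosen system.
components = {
    "farmers": "human",
    "markets": "human",
    "soil nutrients": "natural",
    "field crops": "natural",
    "hogs": "human",
}

# Relationships are (source, label, target) triples -- the labeled arrows.
relationships = [
    ("farmers", "manage", "field crops"),
    ("field crops", "take up", "soil nutrients"),
    ("field crops", "feed", "hogs"),
    ("hogs", "sold at", "markets"),
]

# Print the map grouped by side, then list the arrows.
for side in ("human", "natural"):
    names = [c for c, s in components.items() if s == side]
    print(f"{side.capitalize()} components: {', '.join(names)}")
for src, label, dst in relationships:
    print(f"  {src} --{label}--> {dst}")
```

Drawing the map by hand or in PowerPoint is still what the assignment asks for; a listing like this is just one way to check that every component sits on one side of the human/natural line and that every arrow has a label.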
3. Fill in the table on the worksheet with short answer responses regarding the two food systems you have chosen. The worksheet asks for responses in the following areas:
1. Identify two natural components of the food system.
2. Identify three human components of the food system.
3. Tell how products from the system are transported to markets or to households for consumption.
4. Name one sustainability challenge for the system, and state whether it represents a challenge in the area of environmental, social, or economic sustainability.
Submitting Your Assignment
After completing the assessment worksheet, submit your assignment in Canvas by taking the Module 1 Summative Assessment. You will provide your answers as if you are completing a quiz. You will upload your concept map as part of the quiz. Please do not skip doing the worksheet because the quiz is timed and only gives you enough time to read the question and select an answer.
1.03: Summary and Final Tasks
Summary
In the following modules, you will be learning about aspects of natural systems within human-natural food systems that support food production. As you pursue your learning about these natural systems, keep in mind that natural systems (freshwater resources, soil, the oceans, and the atmosphere) within food systems are always interacting with human systems components (knowledge, management, and policies for example).
Reminder - Complete all of the Module 1 tasks!
You have reached the end of Module 1! Double-check the to-do list on the Module Roadmap to make sure you have completed all of the activities listed there before you begin Module 2.
Future of Food Capstone Assignment: Analysis of Regional Future Food Scenarios
Course-level Learning Goals
1. Describe and assess the soil, biological, and water resources and climatic conditions that support food production systems.
2. Analyze how human food systems significantly alter earth's ecosystems, specifically the biological, soil and water resources.
3. Evaluate the resilience of food production systems in the context of future climate, human population growth, and socio-economic factors.
Summary of Capstone Assignment
At the beginning of the semester you will select a food region. Throughout the semester, you will study different aspects of the food systems of your assigned region. By the end of the semester, you will have prepared a paper about your assigned food region that explores and analyzes the current status and the future resilience and sustainability of the food systems in your assigned region.
Capstone Overview
In order to assess your understanding of the interdisciplinary topics covered in this course, The Future of Food, you will need to demonstrate your mastery of the course learning objectives via the completion of a capstone project. The capstone project requires that you assess the current status of the food systems in an assigned region, and to consider the food systems in your assigned region for the future scenarios of human population growth and increased temperatures.
The capstone assignment is broken down into five stages that allow you to develop your assessment of the current status of the regional food system gradually as you progress through the course material. At the end of every third module, you will complete an assignment (or stage) designed to help you gather and organize the information you will need to assess the future food scenarios. Each stage has an associated worksheet, which includes a table containing questions and suggestions for where to go to gather information or data.
During week 2, you will decide on a capstone region and gain instructor approval. In deciding, please consider a region that (a) has significant agricultural production, (b) has clearly defined boundaries of interest, (c) has enough information published in reputable sources for you to gather material related to the course content, and (d) is not too large an area. To clarify the last point, people usually choose a small state or province, or a selection of them, roughly 100 miles in diameter.
Outline of capstone stages
You will find a worksheet associated with each stage that outlines in detail the data and information you should be gathering at that stage. The final Stage 5 document provides details regarding what should be included in your final paper or on your final web page. The stages will progress through the semester as outlined in the diagram below:
Click here for a detailed description of the Capstone Stages Outline image.
This image is an outline of the Capstone Project as follows:
Stage 1: Introduction to your region, history and diet/nutrition.
• Complete at the end of Module 3
• Individual assessment - strategy for capstone
• Initial data gathering (ppt & worksheet) documents, history and diet/nutrition.
Stage 2: Water, nutrients, and crops
• Complete at the end of Module 6
• Continue work on data gathering -submit ppt and worksheet
Stage 3: Soil/crop management, pests, and climate change
• Complete at the end of Module 9
• Individual assessment - 1-page essay about your progress so far (see Stage 3 document)
• Submit updated ppt and worksheet
Stage 4: Food systems and resilience, adaptive capacity and vulnerability (RACV)
• Complete at the end of Module 11
• Submit updated ppt and worksheet
Stage 5: Final future food scenario website production
• Website (see more info below and in Stage 5 document)
• Individual assessment - 2-page essay about your project (see Stage 5 document)
Rubric
Component % of Capstone Grade
Individual Assessment - Stage 1 15%
Individual Assessment - Stage 2 15%
Individual Assessment - Stage 3 15%
Individual Assessment - Stage 4 15%
Rough Draft Final Paper & Peer Review - Stage 5 10%
Final Individual Assessment - Stage 5 5%
Final Paper - Stage 5 25%
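The component percentages above combine as a simple weighted sum. The sketch below shows the arithmetic with hypothetical scores; the shortened component labels and the uniform score of 90 are purely illustrative, not values from the course.

```python
# Capstone grade as a weighted sum of component scores (0-100 scale).
# Weights are taken from the rubric percentages above.
weights = {
    "Stage 1 assessment": 0.15,
    "Stage 2 assessment": 0.15,
    "Stage 3 assessment": 0.15,
    "Stage 4 assessment": 0.15,
    "Rough draft & peer review (Stage 5)": 0.10,
    "Final individual assessment (Stage 5)": 0.05,
    "Final paper (Stage 5)": 0.25,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights cover 100%

scores = {k: 90 for k in weights}  # hypothetical: 90 on every component
grade = sum(weights[k] * scores[k] for k in weights)
print(round(grade, 1))  # 90.0 when every component scores 90
```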
The Final Capstone Assignment (Stage 5) - Paper or website (depending on instructor) on your region's future food scenario
At the end of the semester, you will create a paper or website about your region. More details are provided in the Stage 5 worksheet. Your paper or website will include the following information:
Grading Information and Rubric for Final Capstone Paper or Website:
Rubric
Criteria scored 9 / 6 / 3 / 1 points:
• Completeness of paper & all supporting documents: conforms to all instructions and guidelines
9: All specific instructions are met and exceeded; no components are omitted.
6: Most instructions are met, with only 1 to 2 minor omissions.
3: Some components are present, with the omission of several key elements.
1: Missing most components of the project; minimal conformity to guidelines.
• Identification of the key food systems of the region
9: Clearly and thoroughly identifies the regional food systems with a clear application of material from Modules 1, 2, & 10.
6: Satisfactory identification of the regional food systems with some mention of material from Modules 1, 2, & 10.
3: Minimal identification of the regional food systems with little mention of material from Modules 1, 2, & 10.
1: Little to no identification of the regional food systems or mention of material from Modules 1, 2, & 10.
• Assessment of the regional food system and the physical environment of the region (water resources, soils, crops, climate)
9: Thoroughly articulates specified elements with in-depth & accurate application of key concepts from Modules 4, 5, 6 & 9.
6: Satisfactory articulation of specified elements with some application of key concepts from Modules 4, 5, 6 & 9.
3: Minimal articulation of specified elements with little application of key concepts from Modules 4, 5, 6 & 9.
1: Little to no articulation and application of key concepts from Modules 4, 5, 6 & 9.
• Analysis of the resilience of the regional food system based on data and facts
9: Thoughtful and thorough consideration of potential vulnerabilities using concepts from Module 11.
6: Satisfactory consideration of potential vulnerabilities using concepts from Module 11.
3: Minimal consideration of potential vulnerabilities with little use of concepts from Module 11.
1: Little to no consideration of potential vulnerabilities and little use of concepts from Module 11.
• Proposes reasonable strategies for sustainability and resilience based on data and facts
9: Clearly develops viable & insightful strategies with well-supported data & research.
6: Develops viable strategies supported by some data and research.
3: Develops minimal strategies supported by limited data and research.
1: Little to no strategies provided, or strategies not supported by data and research.
Criterion scored 5 / 3 / 2 / 1 points:
• Overall professionalism and timing
5: Advanced; no typos or grammatical concerns, with attention to detail and superior effort demonstrated.
3: A solid effort with few typos or grammatical concerns; attention to detail evident with some effort demonstrated.
2: Minimal effort, with numerous typos or grammatical concerns and little attention to detail.
1: Little to no effort demonstrated, with extensive typos or grammatical concerns and little to no attention to detail.
Total Points (out of 50)
1. Summary of Current Regional Food System
• Summarize the data and information that you’ve gathered throughout the semester about your assigned regional food system(s) and the interaction between those food systems and the environment, as well as any relevant socioeconomic, cultural and policy factors.
• Provide an overview of the current status of your assigned regional food system(s). Summarize the data and information that you acquired in the previous modules to present the current status of your regional food system. Details are provided in the Stage 5 worksheet document.
2. Discussion of future scenarios
• What are projections for regional human population growth in your assigned region?
• What are the projections for temperature increases in your assigned region?
3. Analysis of the resilience of future food system
• Provide a discussion of the resilience of your food system given the potential of increasing human population growth and increasing temperatures.
• Consider possible impacts of climate change and human population growth on the regional food system and the resilience and/or vulnerability of the food system to those changes.
4. Proposed strategies for sustainability and enhanced resilience
• Propose strategies that contribute to the increased resilience of your assigned regional food systems in the face of human population growth and rising temperatures and evaporation rates.
02: Capstone Project Overview
Modules 1-3
The diagram below summarizes the topics you will explore in Stage 1 for your region based on what we've covered in Modules 1, 2 and 3. For Stage 1, you will do your data collection on your own, and submit a PowerPoint and your completed worksheet electronically via Canvas.
Click for a text description of the Capstone Stage 1 image
This diagram outlines the requirements for Capstone Stage 1, Introduction to your regional food system, history and diet/nutrition, as follows:
Introduction to your region
1. Describe physical environment
2. Describe human environment
3. Explore history of food system
4. Discuss Diet & Nutrition
What to do for Stage 1?
• Confirm with your instructor which region you will be studying.
• Make a CHNS diagram similar to the one included in this worksheet based on the information you’re gathering about your region. At each capstone stage you will be able to remake it, add more to it, and/or refine it.
• Complete the worksheet below that contains a table summarizing the data you’ll need to collect to complete this stage. There are questions in the left column and space to answer them in the right column. It’s very important that you cite the source of each piece of information that you type into the right column, with an in-text citation and a full end citation in the reference list at the end of the document. Please include at least 3 peer-reviewed academic sources.
• You need to think deeply about each response and write responses that reflect the depth of your thought as informed by your research. Do not just write one-word answers.
• Include questions that you have about your region related to the key course topics covered so far in the Stage 1 worksheet. Be sure to include in this document a record of your efforts to answer the questions so far. Also, there is space at the end to paste in links to any sites you visit that you think might be helpful in the future.
• Create a PowerPoint file that you’ll use to store maps, data, graphs, photos, etc. that you collect related to your assigned region. For every piece of information that you put in your PowerPoint file, you MUST include a citation that clearly explains where that piece of information came from.
• Submit your Stage 1 PowerPoint file and worksheet per the guidance from your instructor. (see rubric below for assessment).
Downloads
Download the worksheet for Capstone Project Stage 1
Capstone Project Overview: Where do you stand?
Upon completion of Stage 1, you should have started to investigate your assigned region and have added information, maps and data to your worksheets and PowerPoint file.
Specifically, by this point you should have:
1. Confirmed which region you will study for your capstone project and identified the members of your group.
2. Initiated research and data compilation in the Stages 1 table in the associated Stages 1 worksheets.
• Stage 1: Regional food setting, history of regional food systems, diet/nutrition
3. Created a PowerPoint file to hold the data that you are collecting about the food system of your assigned region. Information you may have:
• Labeled map of your region
• Soil map of your region
• Precipitation map of your region
4. Kept track of all of the resources and references you used. Remember to include at least 3 peer-reviewed academic sources.
5. Compiled an initial list of questions you have about your region related to key course topics and initiated significant efforts to answer.
6. Begun to create a CHNS diagram(s) for your region that illustrates the coupled human-natural systems of your food region.
Rubric for Stage 1 Assessment
Criteria Possible Points
Stage 1 worksheet and ppt files for region uploaded to drop box by deadline 5
All questions in the Stage 1 worksheet answered thoughtfully with evidence of research into the region 10
PowerPoint file includes relevant images, graphs, and data for region 10
Proper citations are included for all items in worksheet and PowerPoint 5
Total Possible Points 30
Introduction
This second module in the Future of Food course provides a historical overview of the emergence and development of food systems up to the present. Module 2.1, the first half of this module, describes the transition from hunting and gathering to the domestication of crop plants in human prehistory, including the origin of major food crop plants and the locations and processes of domestication, e.g. the emergence of wheat in the eastern Mediterranean or the potato in the Andean region. These processes are seen through the lens of the coupled human-natural systems framework that is introduced in Module 1 and used throughout the course. As part of this historical overview, concepts surrounding human interaction with crop plants and their wild relatives are introduced, such as the global regions supporting domestication (centers of diversity) and the concept of niche construction as a clear example of human-natural systems interaction. In the second half, Module 2.2, we describe the history of food systems as four successive stages during which human innovation responded to both human and natural drivers and feedbacks. These stages span from early domestication activities to the most recent transitions of agriculture and food production towards more globalized trade networks, along with the sustainability challenges those networks face.
Goals
• Describe food systems as coupled human-natural systems.
• Define and describe different phases in the history and development of food systems within human history.
• Describe key interactions (e.g. drivers, feedback) that exist within coupled human-natural systems (CHNS).
• Explain key human and natural system factors that explain the emergence of food system phases in human history, using a CHNS framework.
• Start researching and choose a capstone region.
Learning Objectives
After completing this module, students will be able to:
• Describe the major features of hunter-gatherers’ use of food and the environment.
• Define and describe the domestication of plants and animals in early agriculture.
• Define and give examples of spatial diffusion, adaptation, niche construction, and carrying capacity in environment-food systems.
• Define and describe each of the four (4) principal historical-geographic periods of environment-food systems.
• Give examples of early domesticated plants and animals and their region of domestication.
• Within a Coupled Human-Natural Systems framework, relate fundamental drivers and feedbacks in natural and human systems over prehistoric and historical time to the development and spread of agriculture and other changes in food systems over time.
• Relate the origins and current dominance of agriculture to the concept of the Anthropocene period presented in module one.
Assignments
Print
Module 2 Roadmap
Detailed instructions for completing the Summative Assessment will be provided in each module.
Module 2 Roadmap
Action Assignment Location
To Read
1. Materials on the course website.
2. Domestication. National Geographic, Education Encyclopedia.
3. Jared Diamond, "The Worst Mistake in the History of the Human Race”, Discover Magazine, May 1987, pp. 64-66
1. You are on the course website now.
2. Online: National Geographic
3. Online: The Worst Mistake in the History of the Human Race
To Do
1. Summative Assessment: Drivers and Feedbacks in the Development of Food Systems
2. Participate in the Discussion
3. Take the Module Quiz
4. Start researching and choose a capstone region.
1. In course content: Summative Assessment; then take the quiz in Canvas
2. In Canvas
3. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
03: Geographic and Historical Context
Introduction
As we presented in module one, agriculture is currently the predominant environment-food system, including the production of both crops and livestock for human consumption. But it was not always this way and other environment-food systems continue to exist, as exemplified by the world's wild-caught ocean fisheries. Module 2.1 first examines the human-natural systems of hunter-gatherers, and then the human-natural systems of early agriculture. The domestication of plants and animals, together with the origins of agriculture, resulted in some of the most profound transformations of environments and human societies, and are a key part of the Anthropocene or "human recent past" presented in the first module. Module 2.2 then describes more recent environment-food systems and those of today.
3.01: Origin of Farming as Coevolution and Coupled Human-Nature Interactions
Hunting and gathering activities were the primary way for humans to feed themselves from their natural environments during over 90% of human history. Gathering plant products, such as seeds, nuts, and leaves, is considered to have been the primary activity in these early human-natural food systems, with hunting mostly secondary. The mix of hunting-gathering activities and the tools used varied according to the environment. Among many hunter-gatherer groups worldwide, fire was one of the most important and most widely used tools. Fire was used by these human social systems to transform natural systems in habitats ranging from grasslands and open forests, such as those of Africa, Asia, Europe, and North America, to denser forests that included the Amazon rain forest of South America. One important effect of fire was that it enabled hunter-gatherers to “domesticate the landscape” so that it yielded more of the desired plants through gathering and more of the sought-after animals through hunting.
Fire also was, and is, crucial in enabling humans to cook food. Cooking rendered animals and many plants into forms that humans could digest significantly more easily. The capacity to cook foods obtained through gathering and hunting may have arisen as long ago as 1.8 – 1.9 million years ago, at about the same general time as the emergence of our ancestral species Homo erectus on the continent of Africa. (Homo erectus subsequently evolved into Homo sapiens, our own species, about 200,000 years ago.) These early humans were able to extract significantly more energy from food as a result of cooking. In short, cooking, enabled by the use of fire, produced chemical compounds in food that were more digestible and energy-dense. While the changes and challenges of human diets and nutrition continued to evolve (they are a focus of Module 3), this early shift to cooking with fire was one of the most influential in our history.
Hunter-gatherer peoples are assumed to have used thousands of different types of plant species and, at the least, hundreds of different animal species. In many cases, the impact on the environment or natural systems was only slight or moderate, since population densities were low and their use of the environment was dispersed. Populations were relatively small and technology was fairly rudimentary. In a few cases, environmental impacts were significant, such as the use of fire as discussed above. Hunting pressure also could have led to significant environmental impacts. It is hypothesized that hunting by groups in North America contributed to the extinction of approximately two-thirds of large mammal species at the end of the last Ice Age around 10,000-12,000 years ago. The human role in this extinction episode, referred to as the Pleistocene Overkill Hypothesis, was combined with the effects of other changes. Climate and vegetation changes in particular also impacted the populations of these large mammals and made them more vulnerable to hunting pressure.
We know less about the societies and social structure (human systems) of these groups. However, work with recent and present-day hunter-gatherers suggests they had high levels of egalitarianism, since livelihood responsibilities were widely shared and not easily controlled by single individuals or small subgroups. One thing we do now know is that hunter-gatherers have been related to agricultural peoples in a number of ways. A first and obvious way is that in the history of human groups and food systems, "we" were all hunter-gatherers once, and across a wide range of environments agriculturalists emerged from hunter-gatherer origins. Another is that hunter-gatherers have sometimes coexisted with agriculturalists and may even have conducted rudimentary trade with them. Last, there are even cases of hunting and gathering emerging from agricultural groups. In Africa and South America, for example, the Bantu or Bushmen (in southern Africa) and the Gi (in present-day Brazil) are thought to have been agriculturalists prior to assuming hunter-gatherer lifestyles. These changes presumably owed to lessening population densities and the opportunity for more feasible livelihoods through hunting and gathering given the circumstances these peoples faced. This re-emergence of hunting and gathering is an excellent example of the sort of human-natural coupling we consider in this module and apply to the history of food systems: the social factor of lessening population densities, and perhaps the re-emergence of wilder ecosystems in natural landscapes, allowed these agriculturalists to re-adopt hunting and gathering, with consequent changes in the natural systems.
The origin of agriculture as the predominant mode of food production depended on the domestication of plants and animals. Domestication refers to the evolution of plants and animals into types that humans cultivate or raise; by the same token, domesticated types can no longer exist in the wild. Domestication, and the social and environmental transformations that accompanied it, are closely related to the Anthropocene and represent some of the most pivotal changes ever, both in earth’s environments and in our history and evolution as a species. Domestication has been and is widely studied by interdisciplinary environmental and agricultural fields as well as various disciplines such as archaeology, biology, geography, genetics, and agronomy.
A couple of common definitions of domestication will help to underscore the importance of this concept. In a 1995 book on The Emergence of Agriculture, the archaeologist Bruce Smith defines domestication as “the human creation of a new form of plant or animal---one that is identifiably different from its wild ancestors and extant wild relatives”. In 2002, in the scientific journal Nature, the geographer Jared Diamond writes that an animal or plant domesticate is “bred in captivity [or in a field] and thereby modified from its wild ancestors in ways making it more valuable to humans who control its reproduction and food supply [nutrients in the case of plant domesticates]” (page 700). In other words, plant and animal domesticates have lost most or all of the capacity to reproduce long-term populations in the wild---thus making domesticated populations of plants or animals different from ones that have simply been tamed or brought into cultivation on a one-time basis as single organisms. Expanding beyond these definitions, you can read more about domestication at National Geographic: domestication.
Figure 2.1.1.: Grains and ears of the wild ancestor of maize, teosinte, domesticated in the area of present-day Mexico about 6000 years ago (left), a comparison of the plant and seedstalk of teosinte and modern maize (center), and size comparison of teosinte and maize ears (right). Note the relatively large seeds of teosinte that may have called attention to early plant domesticators as a useful species for a staple food, and the true size comparison of the two ears at far right that shows the dramatic increase in size accomplished through domestication and breeding. Credit: Teosinte photo (left), Matt Lavin; Teosinte/Maize comparison diagram (center), Biosciences for farming in Africa (B4FA); size comparison photo (right), Hugh Iltis.
A great deal is now known about the nature of domestication and its timing, in addition to the place of origin of many domesticated crops and animals (covered on the next page). Illustrating the multiple disciplines needed to understand the history of food systems, this information owes to evidence and analysis in archaeology; biology, ecology, and agronomy; geography; anthropology; and genetics. For one, the domesticates in general and our most important domesticated crops and animals in particular---such as wheat, rice, corn (maize), barley, potatoes, sorghum, cattle, pigs, and sheep---are recognized to have evolved from wild plants and animals that were selected, gathered, and brought back to camp by hunter-gatherers. Second, while a broad spectrum of wild plant and animal foods were being gathered and hunted prior to domestication the origins of agriculture represented a bottleneck. The effect of this bottleneck was that the number of major domesticates that became available to humans numbered in the several dozens, but not the thousands. Third, well-established demonstration of the actual dates of domestication varied from 8,000 - 10,000 years ago in the Near East (the Fertile Crescent of present-day Iraq, Turkey, Iran, and Syria) and China to the broad window of 4,000-8,000 years ago in several of the other world regions discussed next.
Domestication of plants and animals has been framed by many experts in terms of a “domestication syndrome,” which refers to a set of traits common to domesticates. Syndrome traits should be easy to remember because they confer usefulness to humans. In plants, for example, wild relatives may have shattering seed pods, where a seed is dropped on the ground as it ripens, while domesticates generally keep their seed on the plant, giving humans greater convenience in harvesting. There are also dramatic increases in seed and inflorescence size in many plant domesticates relative to wild relatives (e.g., Fig. 2.1.1), as well as decreases in bitter or toxic substances that make food crops generally more appealing and nutritious to humans (and sometimes to wild herbivores as well, which then become pests!). Plant domesticates are generally less sensitive to day length as a requirement for flowering and reproduction, which means they complete their life cycles and produce grain and other products in a more predictable way for humans, and they tend to have greater vigor as seedlings than wild relatives, which also follows from their larger seeds. In animals, the greater docility of pets and livestock, and traits such as floppy ears and the generally juvenile-type behavior of domesticated dogs, are oft-cited examples of the domestication syndrome. See if you can identify examples of these traits in the website presentation of domestication cited in the text above.
Just as for the dates and historical processes that led to domestication, the sites of plant and animal domestication are known from a similar interdisciplinary mix of perspectives, from archaeology to genetics. The map in Figure 2.1.2 and Table 1 show current knowledge of seven important areas of early agriculture where the world’s major crops and animals were domesticated. The question of crop and livestock origins and movements presented in this module is still an active and interesting area of research and more remains to be discovered. Most important of these areas was the Fertile Crescent of the Tigris-Euphrates river system and surrounding uplands in Southwest Asia---present-day Turkey, Iran, Iraq, and Syria. This region was responsible for the domestication of several major crops (wheat, barley, oats) and almost all the major domesticated animals (cattle, sheep, goats, pigs) that are incorporated today into major food systems worldwide (for the definition of food system see module 1.2). Like other areas it also included domesticated plants in particular that were significant components of local food systems and diets---such as bitter vetch and chickpeas—that did not become major global staples. China, which we identify as a single geographic area, was responsible for the domestication of rice, soybeans, millet and several other domesticates that included tree crops such as the peach. Pigs were domesticated independently in China, meaning the pig population there that evolved to domesticated forms was separate from that of the Fertile Crescent. It is likely that China contained two separate areas of major importance in our global overview: the Yangtze River basin and the Wei (Yellow) River valley.
Four other major world regions were also vitally important as sites of early agriculture and of the domestication of major crops and animals. Southeast Asia, including New Guinea and the Pacific Islands, is an expansive geographic area where staples such as various species of yam, citrus, bananas, and sugar cane were domesticated (see Table 1). A sizable region of sub-Saharan Africa was also quite important, contributing crops such as sorghum, coffee, and species of millet other than the ones domesticated in East Asia (see Table 1). Geographically this area of sub-Saharan Africa includes the savanna areas of West Africa as well as the highlands of Ethiopia and Kenya. Locally within this region, domesticates such as teff and fonio, a pair of grain crops, became highly regarded foodstuffs.
Figure 2.1.2.: Map of the crop centers of origin as described by botanist and breeder Nikolai Vavilov: (1) Mexico-Guatemala, (2) Peru-Ecuador-Bolivia, (2A) Southern Chile, (2B) Southern Brazil, (3) Mediterranean, (4) Middle East, (5) Ethiopia, (6) Central Asia, (7) Indo-Burma, (7A) Siam-Malaya-Java, (8) China and Korea. Credit: Wikimedia Commons: Vavilov-center (Creative Commons CC BY 3.0)
In South America, the combination of the Andes mountains and the Amazon basin was an important area of early agriculture and domestication that included potatoes, sweet potatoes, peanuts, and manioc (or cassava). The Andes and Amazon also yielded many locally important domesticates, such as quinoa and acai (the fruit of the acai palm), that have recently gained popularity in global food systems. The area of Mexico (extending to the U.S. Southwest, and southern Arizona in particular) and Central America is also important. This area's contributions included corn (also known as maize) and domesticated species of bean, chili pepper, and squash, in addition to the turkey. Eastern North America was also an important area of early agriculture, though most domesticates there did not become familiar items in major contemporary food systems. Sunflower, however, did become relatively important, and some domesticated plants of the northern parts of North America, such as cranberry and wild rice (so-called Indian rice), became moderately important foods.
Table 1. Major Geographic Areas of Early Agriculture with Current Knowledge of Where Crops and Animals were Domesticated
Geographic World Region Early Domesticated Crops Included Early Domesticated Animals Included
East Asia (and Central and South Asia) Rice; Buckwheat; Millets; Soybean; Peach; Nectarine; Apple (Central Asia); Apricot (South Asia) Pigs
Southeast Asia and Pacific Islands Taro; Yam; Arrowroot; Banana; Sugar Cane; Coconut; Breadfruit; Orange; Lemon; Lime; Jack Bean; Winged Bean Pigs, Chicken
Near East Wheats; Barley; Rye; Oat; Pea; Chickpea; Lentil; Vetch; Cherry; Almond Pigs, Sheep, Goats, Cattle
Sub-Saharan Africa: the East African Highlands and Sahelian Savanna Sorghum; Pearl and Finger Millet; Teff; Ensete; Coffee; Yam; Pigeon Pea; Cowpea; Fonio Cattle
South America, principally the Andes mountains and the lowlands of Pacific Coast and Amazonia Potatoes; Quinoa; Peanut; Lima Bean; Manioc (Cassava); Pineapple; Sweet Potato Llama, Alpaca, Guinea Pig
Mexico and Central America, mountain ranges and adjoining foothills and lowlands Maize, Mesoamerican Common Bean (Kidney Bean) and Chile Pepper; Squash Turkey
Eastern North America Sunflower, Sumpweed, Marsh Elder, Goosefoot or Lamb’s Quarter
At this juncture, it is worth drawing out some key points for understanding the environment-food interactions that arise from our discussion thus far of hunting-gathering, domestication, and early agriculture. This geographic and historical context highlights the independent establishment of early agriculture through domestication in multiple geographic areas across diverse world regions. Our description of current knowledge emphasizes seven world geographic areas, but other variants of this accounting are possible. Crop origin areas could be more numerous, for example, if we counted additional distinct sub-areas of China, Sub-Saharan Africa, and South America. It is interesting that the major modern population centers, the Eastern United States and Northern Europe, seem to have been less important than other world regions in the domestication of the major staple grains and vegetables. As noted above, the question of crop origins and the relations of humans to crops via domestication, breeding, and knowledge of how to cultivate crops remains an active and fascinating area of research.
Our description also highlights the domestication of a relatively small set of major crop species (approximately 100) and major animal domesticates (14 species). These domesticated species are the same ones we still recognize today as the most valuable cornerstones of our current food systems, as well as central elements in their environmental impacts. When locally important crops and livestock are added, the number of these domesticates is significantly higher (upwards of 500 species). Still, the number of species in this new agricultural biota paled in comparison to the thousands of species that had been the basis of human livelihoods in hunter-gatherer systems. In other words, early agriculture meant that humans narrowed their focus to a select group of species in the biotic world, namely the ones that were most productive and could be most feasibly and effectively produced and consumed. In doing so, humans intensified the level of interaction, knowledge, and cultural importance tied to these crop species as a fundamental human-natural relationship at the base of food systems from prehistory to the present day.
In a variety of subsequent units of this course we will consider the diversity of crops and animals in agriculture as we explore the agroecology and geosystems of food production (Section II) and the role of human-environment interactions amid such challenges as climate change, food security, human health, and environmental sustainability (Section III). In this module, we keep our focus on early agriculture and domestication. This focus will also require the model of Coupled Natural-Human Systems (CNHS) through the remainder of Module 2.1; we then continue and expand this focus in Module 2.2, where we discuss a few of the major historical transformations leading to the world's current situation with regard to the environment and food.
Knowledge Check
Domesticate/World Region matching. Identify with the world region code (see the abbreviations in the first column of Table 1) the general area of domestication of the following important food plants and animals. Drag the words into the correct boxes.
Wheat:
Corn (also referred to as maize):
Rice:
Apple:
Kidney Bean:
Chile Pepper:
Potato:
Banana:
Cattle:
Chicken:
Word Bank:
Near East; East, South, and Central Asia - present-day Kazakhstan; East Asia; Southeast Asia and Pacific Islands; Mexico and Central America; South America; Southeast Asia and Pacific Islands; Near East, Sub-Saharan Africa; Mexico and Central America; Mexico and Central America
Explaining Domestication using Coupled Human-Natural Systems (CHNS)
The Coupled Natural-Human Systems (CNHS) framework, which we introduced in Module 1, can be used to explain domestication events and early agriculture in the history of food systems. This framework is sometimes used to think about the "why" question of domestication, for example: why did human and natural systems come together at particular times in different parts of the world, including the Middle East, so that plants were domesticated and agriculture started? Why not earlier, and why not later? The framework can also be used to explore the history of food systems after domestication, which is the subject of module 2.2.
Review and Definitions: Drivers, Feedbacks, and Coevolution
You probably recall from module 1.2 that systems are assemblages of components and the relations between them. Two basic relations that can occur within systems, and that you likely included in your concept map of a food system example (summative evaluation 1.2), are the driver and the feedback relation. As you may already suspect, drivers are processes or changes that impel or cause changes in other parts of a system, somewhat like a volume knob that causes the volume of music in a room to increase. In the example of the Pleistocene overkill hypothesis from module 2.1, human hunting is hypothesized as a dominant human-system driver that made it difficult for hunter-gatherers to easily find food, so that they may have been forced to develop early forms of agriculture. Excessive hunting is the driver; collapsing prey animal populations, and eventually domestication, are responses. Feedback processes, meanwhile, are those that are self-strengthening or self-damping (see module 1.2). In the case of domestication, they may also involve multi-driver processes, where the response to a driver is another process that strengthens both processes (positive feedback) or diminishes the change (negative feedback). For example, as you will see in the next module, a common dynamic around the emergence of agriculture could be the coming together of excessive hunting, a changing climate with worsening conditions for both wild game and crops, and the expansion of human settlements that may also have degraded the land. This combination of human and natural drivers would all tend to drive increased areas under cultivation to make up for the lack of food from hunting, and later the lack of food from soil degradation. A positive feedback emerges when the expansion of agriculture itself begins to change the climate, further eliminate prey, or reduce food availability through soil degradation.
These processes would thus be said to interact as a positive feedback on domestication and on the emergence and continuing expansion of agriculture. The diagram below (Fig. 2.1.3) shows these potential drivers and feedback processes; this basic-level illustration shows the coupling of the two systems.
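To make the idea of a self-strengthening loop concrete, the dynamic just described can be sketched as a minimal numerical model. All parameter values, variable names, and functional forms below are hypothetical, chosen only to illustrate positive feedback in general; this is not a calibrated model of prehistory. Expanding farmland reduces wild food availability, and scarcer wild food pushes farmland to expand even faster:

```python
# Toy positive-feedback model (hypothetical numbers throughout):
# human-to-natural driver: more farmland -> less wild food;
# natural-to-human driver: less wild food -> stronger push to farm.

def simulate(steps=50, farmland=1.0, wild_food=100.0,
             degradation=0.05, conversion=0.02):
    """Return farmland area over time under a simple positive feedback."""
    history = [farmland]
    for _ in range(steps):
        # Scarcity of wild food raises the rate of farmland expansion.
        pressure = conversion * (100.0 - wild_food)
        farmland += farmland * (0.01 + pressure)
        # Expanded farmland degrades wild food sources further.
        wild_food = max(0.0, wild_food - degradation * farmland)
        history.append(farmland)
    return history

areas = simulate()
print(f"growth in first 10 steps: {areas[10] - areas[0]:.3f}")
print(f"growth in last 10 steps:  {areas[50] - areas[40]:.3f}")
```

Because the expansion rate itself rises as wild food declines, growth in the later steps outpaces the early steps, which is the signature of a self-strengthening (positive) feedback. Replacing the `pressure` term with one that shrinks the expansion rate as farmland grows would instead illustrate a self-damping (negative) feedback.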
Figure 2.1.3.: General Diagram of Coupled Natural-Human Systems (CNHS) illustrating potential drivers and feedback processes from human to natural systems and from natural to human systems (blue ovals show examples of these processes). It is understood from module 1 that the human and natural systems themselves consist of multiple social and environmental components, but are represented as solid blocks here for simplicity. Credit: adapted by Karl Zimmerer and Steven Vanek from a diagram designed by the National Science Foundation Coupled Natural-Human Systems Program
Click for a text description of the Human Natural System image.
Diagram of coupled natural-human systems. At the top of the diagram is the heading "Human to natural drivers and feedbacks: Human system causes changes in the natural system and strengthens existing changes". At the bottom of the diagram is the heading "Natural to Human drivers and feedbacks: Natural system causes change in the human system and strengthens existing changes". From Human System, the arrow flows to an oval with the following items inside: Hunting, Domestication, Fire, Expansion of land under tillage and food crops, City building. There is another arrow from Natural System to an oval with the following items inside: Long-term drought or rainfall increases, Climate warming or cooling, Collapse of wildlife or plant populations. The arrows represent a continuous relationship.
In Figure 2.1.3, then, the human factors that can change the environment we will refer to as “Human Drivers” or “Human Responses” of the CNHS model. The environmental factors that influence humans are referred to as “Environmental Drivers” or “Environmental Feedbacks.” As illustrated below with examples, the CNHS model describes the combined, interlocked changes of human behaviors and societies, on the one hand, and environmental systems including the plants and animals under domestication, on the other hand. This model is also referred to as a coevolutionary model since the drivers and feedbacks, including intentional and unintentional changes, influence subsequent states and the resulting development of the human-environment food system.
An initial, specific example: Why did agriculture emerge at the end of the ice age?
Before using these diagrams in Module 2.2 to explain the history of food systems (including the summative assessment, which asks you to diagram some of these relationships yourselves), we'll illustrate the concepts of drivers, feedbacks, and the coevolutionary emergence of food systems using a very specific diagram about the emergence of agriculture in Fig. 2.1.4.
Figure 2.1.4.: Specific example of a coupled human-natural system in a transition from hunting and gathering to domestication and early agriculture, showing what are thought to be a dominant driver (climate change) and a human response that created positive feedbacks in strengthening the transition to agriculture (densification and social organization of human settlements near water sources). Credit: National Science Foundation Coupled Natural-Human Systems Program
Click for a text description of the Human Natural System image.
Specific examples of coupled natural-human systems. At the top of the diagram is the heading "Human to natural drivers and feedbacks: Human system causes changes in the natural system and strengthens existing changes". At the bottom of the diagram is the heading "Natural to Human drivers and feedbacks: Natural system causes change in the human system and strengthens existing changes". An arrow from Natural System to an oval with the following: 1. Change to warmer climates and more seasonal precipitation, including vegetation changes (late Pleistocene and early Holocene, i.e. end of ice ages). From Human System, the arrow flows to an oval with the following: 2. Increase in the size and density of human population near water and increased social complexity and demand for agricultural products. The arrows represent a continuous relationship.
The "story" of this diagram is as follows. First, climate change is one of the main environmental drivers that influenced early agriculture and domestication. At the end of the Pleistocene, the geologic epoch that ended with the last Ice Age, there was a worldwide shift toward warmer, drier, and less predictable climates relative to the preceding glacial period (Fig. 2.1.4, oval (1)). This climate shift, which began in the Late Pleistocene, resulted from entirely natural factors. Hunter-gatherer populations are documented to have been significantly influenced by it: many responded by increasing the size and density of their populations near water sources such as river channels and oases (Fig. 2.1.4, oval (2)). The climate change also favored the evolution of larger seed size within plants themselves (especially the plants known as annuals, which grow each year from seed); these changes are summarized as part of the vegetation changes noted in Fig. 2.1.4. It may also have made annual plants more prominent parts of the natural environments these humans inhabited, since surviving only one season and setting seed that endures a dry period is one evolutionary response of plants to dry climates (see module 6 for the concept of annual and perennial life cycles). The driving factor of climate change thus led to responses in plants and human societies that are hypothesized to have acted as drivers for domestication and early agriculture. Climate change is also thought to have concentrated the populations of the ancestors of domesticated animals; their concentrated populations would have better enabled humans to take the first steps toward animal domestication. Recognizing the importance of climate change, we single it out as the main driver in Figure 2.1.4, though doubtless there were other interacting drivers.
Influential Human Drivers, such as population (demographic) pressure and socioeconomic demands for food and the organization of food distribution, were also highly important in contributing to the domestication of plants and animals and the rise of early agriculture (Figure 2.1.4, oval (2)). This pair of factors is also referred to as human-to-natural drivers, as shown in the diagram. The influence of human demographic pressure was felt through the fact that settlements were becoming more permanently established and densely populated toward the end of the Pleistocene. People in these settlements would have been inclined to bring wild plants with good harvest and eating qualities into closer proximity, and thus take the first steps toward agriculture.
Socioeconomic factors are also considered important Human Drivers in early agriculture and domestication. As hunter-gatherer groups became more permanently settled in the Late Pleistocene, they evolved into more socially and economically complex groups. Socioeconomic complexity is generally associated with demands for more agricultural production to support a non-agricultural segment of the population, as well as the ruling groups within these societies. The emergence of this social organization and higher population density, combined with the ability to feed larger populations with newly domesticated grains, plausibly formed a powerful positive feedback that continued and strengthened the course of domestication and agriculture. Continued climate change associated with the expansion of farmed areas, and potentially soil degradation from farming that necessitated even larger land areas and/or more productive domesticated crops (see module 5), would have been additional feedback forces strengthening the emergence of agriculture. Drivers and feedbacks are therefore one way to answer the "when and why" questions around the start of agriculture, framed as a coevolution of human society with changing climate and vegetation. The concepts of drivers, feedbacks, and coevolution will be further explored in module 2.2 to explain other stages and transitions in the history of food systems.
Introduction
The environment-food systems characterized by agriculture have exerted transformative effects on environmental and social systems. This unit offers an overview by distinguishing four principal historical-geographic periods of environment-food systems, beginning with early agriculture between 10,000 and 4,000 BP. It also introduces modern industrial agriculture, on the one hand, and ecological modernization and alternative food networks (such as organic and local environment-food systems), on the other, as a pair of generally distinct types that are currently predominant and actively evolving. The model of Coupled Natural-Human Systems (CNHS) is used to characterize each historical-geographic period, drawing on the CNHS definitions of drivers, feedbacks, positive feedback, and negative feedback. To understand the spread of agriculture and its transformation of environments and societies, basic concepts such as spatial diffusion and adaptation are used.
3.02: Historical Development and Change in Food Systems
Introduction
The development of agriculture as part of food systems in the Anthropocene began with domestication, and has continued across millennia among diverse peoples inhabiting a wide variety of the earth's environments (e.g. the Mediterranean region, the Indus River Valley, southern South America, the Congo River basin, the island now called Sumatra, and many other highly varied landscapes). The history of agriculture also includes the present: domesticated plants and animals, as well as agricultural management, continue to change. In module 2.2 we will divide an overview of this complex history into four general periods:
1. Domestication/Early farming (10,000 BP-4,000 BP);
2. Independent States, Small Groups, World Trade, and Global Colonial Empires (4,000 BP – 1800/1900 CE);
3. Modern Industrial Agriculture (1800/1900 CE – Present);
4. Recent Quasi-Parallel Agricultural Types and Possible Next Phase (2000 – Present): Agroecological Modernization (e.g., Organic) and Local Environment-Food Systems.
Each of these categories lumps together a lot of variation with regard to the specifics of agriculture and coupled human-natural food systems, and if you have the chance to read in more detail about these phases of the Anthropocene, you'll find a significant and interesting amount of variation among different places and time periods (see the additional readings at the end of the unit).
To continue describing the environment-food systems of each of these four periods, we recall that in module 2.1 we described the long period of hunter-gatherer activities and environment-food systems, which comprised well over 90% of the history of humans as a cultural species. We also looked at plausible drivers and feedbacks in the origins of agriculture and domestication. Here in Module 2.2, we'll pick up the thread of the environmental and social transformations represented by agricultural origins and domestication. We note that early agriculture, and perhaps a later stage of agricultural development, marked the transition to the Anthropocene epoch, in which humans became a dominant force in transforming earth's surface and natural systems (see module 1 regarding the Anthropocene).
Key terms and concepts for the history and development of food systems
After its first origins, agriculture spread worldwide through a process known as spatial diffusion. The spatial diffusion of agriculture involved individuals and groups of people gaining access to the ideas, information, and materials of agriculture and other innovations through physical relocation and social interactions. Spatial diffusion can occur through local individual-level human observation and exchanges of goods and information, as well as through long-distance trade and organized activities (e.g. group-level decisions to adopt a new planting technology). A brief description and examples of spatial diffusion in early agriculture are given in Table 2.2.1. While agriculture was developed independently in each of the different world geographic areas roughly corresponding to centers of crop domestication (Module 2.1, Figure 2.1.2), agriculture then spread widely out of these early centers in a way that was highly influential. Agriculture's diffusion from the Near East to Europe, for example, transformed a wide range of environments and societies. As discussed further below, the spread of crops themselves was often transformative for the environment-food systems in which these domesticates arrived. Indeed, all the major cuisines we know today rely on food ingredients that were made available as the result of spatial diffusion. For example, foods originally from Mexico, such as tomatoes, chili peppers, and maize, transformed environment-food systems globally beginning in the 1500s, spreading as far as Africa, India, and China.
The geographic spread of agriculture created both similarities and differences across space and time. On the one hand, sharing the same food crops and sometimes agricultural techniques created commonalities among environment-food systems. The current environment-food system of the country of Peru, for example, is rooted to a large degree in the connections that were forged through spatial diffusion during the Inca Empire that ruled between roughly 1400 and 1532 of the Common Era (CE). On the other hand, differences in environment-food systems also evolved over time as crops and food were subject to the human and natural system influences in each new site to which agriculture spread. One of the main reasons for these differences was the role of people in adapting agriculture to different environments and sociocultural systems.
A few concepts in addition to spatial diffusion are central to understanding the spread of agriculture and its importance, and we introduce them here. These concepts -- adaptation, agrodiversity, and niche construction -- are briefly described with examples in Table 2.2.1, and the term Anthropocene is also reviewed from the standpoint of its relation to early agriculture. The first of these, adaptation, refers broadly to the way in which humans use technical and social skills and strategies to respond to the newness or changes of environmental and/or human systems (e.g. droughts, hillier topography or increased rainfall as crops moved to new areas, climate change). Adaptation and adaptive capacity of human society are a major focus of Module 11.
Table 2.2.1 Key Terms and Definitions with Examples and Significance in the Development of Agriculture and Environment-Food as Covered in Unit 2.2
Term Definition Examples Synopsis of Significance
Spatial Diffusion Movements of people, things, ideas, information, and technology through physical relocation and social interaction. Spread of agriculture from the major areas of early agriculture and domestication (e.g., from Near East to Europe). Each period of agricultural development covered in Module 2.2 relied on spatial diffusion of environment-food systems
Adaptation Humans use social and technical skills and strategies to respond to the newness or changes of environmental and/or human systems. Domestication of plants and animals by the early farmers responding to changes in the environment and human systems; changes in a crop variety or farming techniques carried out by human groups as crops moved into new environments with new requirements for successful agriculture. Adaptation is an ongoing process that has continued through the major periods of agricultural development to the present. (Also covered in Module 9.1)
Agrodiversity Human management of the diversity of environments in agriculture and food-growing; this definition was later expanded to include human organizational diversity in the use of the environment. Many areas of early agriculture had high environmental diversity, such as tropical and subtropical mountains; humans developed myriad agricultural techniques to master food production in these different environments, e.g. irrigation systems, planting methods, terraced fields, and special tools and implements. Agrodiversity is a major form of human-environment interaction. It is related to, but different from, agrobiodiversity (covered in Module 9.2)
Niche Construction Agriculturalists (and hunter-gatherers) shaped food-growing environments (“niches”) through constructing fields and other kinds of activities Hunter-gatherers shaped heavily used habitats through hunting, gathering, and habitation. These intensively used habitats created the niches that were first occupied by crops in the beginnings of agriculture, with somewhat more disturbed soils, fewer forest plants, and perhaps higher fertility from all sorts of human refuse. Later, farmers actively fertilized and tilled soils to favor domesticated annual crops or created niches within managed forests that favored "forest garden" species. The concept of niche construction is important since it teaches us that humans are adapting not only to environments but also to environments being shaped through human influence
Anthropocene Distinct geologic epoch representing the present and defined by the significant level of human modifications of the earth's environmental systems (see module 1) Two factors commonly mentioned in the definition of the Anthropocene are the global clearing of woodlands (deforestation) in early agriculture and the spread of modern industrial agriculture. Agriculture-related activities are considered major factors in most though not all definitions of the Anthropocene.
The use of agrodiversity was also vital to the spread of early agriculture. Agrodiversity is described by the geographer Harold Brookfield and the anthropologist Christine Padoch as human management of the diversity of environments in agriculture and food-growing. Brookfield and Padoch use agrodiversity to describe indigenous farming practices among native peoples, but all knowledgeable farmers actively make use of agrodiversity, even if the technologies may differ greatly. Managing diverse agricultural environments was essential since early farmers produced domesticated plants and animals under new and different conditions. The third concept is that of niche construction, meaning that agriculturalists (and hunter-gatherers) shaped food-growing environments (“niches”) through constructing fields and all kinds of other activities. As a result, adaptation occurring across the wide geographic and historical evolution of environment-food systems involves responses to a range of factors that include both natural ones and those resulting from human activities.
The development of agriculture through the four periods mentioned above has resulted in, and continues to incur, a wide range of both environmental and social impacts that will be mentioned in the following pages of this module. Environmentally, these impacts have altered the biogeophysical systems of our planet, including the land, water, atmosphere, and biodiversity of the earth. As mentioned, the idea of the Anthropocene epoch, a distinct geologic epoch defined by drastic human modifications of the earth's environmental systems, is often tied to agricultural activities. Global environmental sustainability, whether the earth's systems are operating within limits that will enable long-term functioning, is fundamentally influenced by agriculture, as you'll see in this module and all the ones to follow.
We will start our historical summary of environment-food systems by describing domestication and early farming (10,000 BP – 4,000 BP). Widespread environmental and social impacts occurred during this period. New agricultural ecosystems were created and spread along with the use of domesticated plants and animals. These agroecosystems contained distinctive species and populations of plants and animals, including domesticates, as well as characteristic insects, mammals, soil biota, and uncultivated plants (such as weeds). In many places, agroecosystems were increasingly established in areas that had previously supported tree cover. During this period in the Near East, China, and Europe, for example, clearing for agriculture led to widespread deforestation.
Required Reading
Jared Diamond, "The Worst Mistake in the History of the Human Race"
As part of this survey, we ask you to read the short and provocative article by Jared Diamond on the impacts of the diffusion of early agriculture. This should prompt a lot of thinking on your part about the way that the emergence of agriculture affected human societies that we describe further below.
Impacts of domestication and early agriculture were notable not just on natural systems but also on human systems. Both a population explosion and a technology explosion occurred in conjunction with early agriculture. Early farming societies grew in the size of their populations and in the use of diverse tools and technologies, including ones that no longer needed to be transported as part of highly mobile hunter-gatherer lifestyles. The growth of population was made possible by the increased productivity of food per unit of land area. Impacts on human health and disease were also notable in this period, and they were not entirely positive. As Jared Diamond points out in the required reading above, there were negative impacts on human health traced to larger settlements and denser human populations (e.g. highly infectious "crowd diseases" such as measles and bubonic plague) and also infectious diseases transmitted from domesticated animals (measles, tuberculosis, influenza). Nutritional stress also, ironically, increased, and life expectancy actually decreased following domestication and the early development of agriculture.
These negative impacts on humans led Diamond to refer to agriculture provocatively as “The Worst Mistake in the History of the Human Race”. This title is purposefully provocative, and by way of understanding this "mistake", we should realize that switching to agriculture may have become the most viable option for early farmers in many places. Agriculture would have become the principal livelihood option as local hunted and gathered food sources were overexploited and/or as population pressure demanded more food. By the end of this period, the evolution of more complex societies also meant the development of deep class divisions. These deepened class divisions must also be seen as a product, in part, of the evolution of agriculture. In addition, the changing social arrangements that came with agriculture would tend to create a positive feedback (see the end of module 2.1), along with other factors, maintaining and deepening the pathway of society towards a greater embrace of an agriculture-based food system.
The model of Coupled Natural-Human Systems (CNHS) can be used to reflect on the above impacts through the integrated perspective of human-environment interactions. Here we can highlight a couple of these interactions. First, widespread deforestation occurred as a result of early agriculture. In addition to changing land cover and ecosystems, it has been postulated that the extent of this deforestation was significant enough to release considerable carbon dioxide (CO2) and thus to define the beginning of the Anthropocene epoch. As mentioned below, other scientists argue that the Anthropocene began more recently. This scientific debate about the Anthropocene epoch has been productive in advancing our understanding of human dynamics and impacts with respect to the environment.
Humans are presumed to have responded to deforestation by increasing their reliance on agriculture, since the removal of forest cover would have reduced the productivity of hunting-gathering activities, creating a second positive feedback that would have deepened the transition to agriculture. The second form of human-environment interaction involved the selection of a relatively small fraction of utilizable plants and animals that became the cornerstones of early agriculture. Since these plant and animal domesticates produced well relative to others, they became relied upon by early farmers, also acting as a positive feedback towards the adoption of an agricultural lifestyle. The legacy of this initial selection of certain types of plants and animals demonstrates the important role of contingency and positive feedbacks, whereby initial decisions were amplified and exerted a lasting influence on the Coupled Natural-Human Systems of agriculture. The concepts of feedback are considered further in the subsequent pages and in this Module’s Summative Assessment.
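The reinforcing (positive) feedback described above can be illustrated with a toy numerical model. Note that every parameter and rate below is invented purely for intuition, not drawn from archaeological data: as forest cover declines, hunting-gathering supports less of the diet, reliance on agriculture grows, and more land is cleared.

```python
# Toy model of a positive feedback: less forest -> less food from
# hunting-gathering -> more reliance on agriculture -> more clearing.
# All parameters are illustrative only.

def simulate(years=10, forest=1.0, ag_reliance=0.1, clearing_rate=0.3):
    """Track forest cover (0-1) and the share of food from agriculture (0-1)."""
    history = []
    for _ in range(years):
        # Lost forest cover reduces hunting-gathering yields, so the
        # agricultural share of the diet rises to fill the gap.
        ag_reliance = min(1.0, ag_reliance + 0.2 * (1.0 - forest) + 0.05)
        # A larger agricultural share drives more clearing of forest.
        forest = max(0.0, forest - clearing_rate * ag_reliance * 0.2)
        history.append((round(forest, 2), round(ag_reliance, 2)))
    return history

for year, (f, a) in enumerate(simulate(), start=1):
    print(f"year {year}: forest={f:.2f}, agricultural share={a:.2f}")
```

Running the sketch shows the two quantities moving in lockstep in opposite directions: each step of forest loss accelerates the shift toward agriculture, which in turn accelerates forest loss, the signature of a reinforcing feedback.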
The second period of our rapid historical survey encompasses independent states, societies based on small groups, world trade, and global colonial empires and covers roughly 5,000 years between 3,000 BP and 1800/1900 CE. Both positive and negative environmental and social impacts were associated with this period. We can use the coupled system model to illustrate two examples of this period’s characteristic forms of environment-society interactions. The Inca Empire in the Andes Mountains of western South America (from present-day Colombia to Argentina) offers a good example of an independent state with pronounced environmental and social impacts of its agriculture. Ruling from approximately 1400 to 1532, the Inca state oversaw the building and maintenance of extensive agricultural field terraces and irrigation canals (Figure 2.2.1). These terraces and canals produced sustainable landscapes in the tropical mountain environments of the Andes.
Figure 2.2.1.: Terraces and related irrigation works built by the Inca and other early, large scale agricultural societies are a dramatic example of the transformation of earth's surface for sustained food production. Credit: Phil Romans, Flickr (Creative Commons CC BY-NC-ND 2.0)
From the perspective of coupled natural-human systems (CNHS), the terraces and canals of the Inca produced sought-after foods and symbolized Inca imperial power, thus contributing further to Inca capacity to extend these sustainability-enhancing earthworks. The Inca state eventually established terraces and other large-scale agricultural and food transportation works (storage facilities, improved riverbank fields, roads, and bridges) that extended over much of the area of their empire. Environmental impacts of these terraces and other earthworks were beneficial since they stabilized mountain agricultural environments and enabled higher levels of food-growing per unit land area without major damage. Still, we need to remind ourselves that early independent states, such as the Inca, also created environmental problems and often were marked by large social inequalities between rulers and commoners. In other words, just as today, the environment-food systems of non-European peoples could and did attain high levels of sophistication while, at the same time, they were often wracked by significant issues with both environmental and social sustainability (see module 1 for definitions from the "three-legged stool" of sustainability). Similarly important for us to note is that some Inca terraces and canals still exist and remain in use today in Peru, where they continue to make a sustainable contribution to food systems at a local scale.
A second example of environmental and social impacts resulting from this period of agricultural diffusion and trade in world history comes from the world trade system established by global colonial empires involving major European powers between 1400 and 1800 (such as the Spanish, British, and French colonial empires). A well-known example of social and environmental impacts from this time period is the exporting of crops and livestock, along with related elements of European environment-food systems, to many areas of the world by these empires. Examples included wheat, sugar cane, alfalfa, cattle, and sheep. These crops and livestock had not originated in Europe but had already diffused there during earlier history, and were common in Europe at the time these empires were expanding. These components of new European colonial environment-food systems were mutually reinforcing since, for example, the forage crop alfalfa and introduced European grasses were highly conducive to expanding the raising of cattle and sheep and making new sources of animal food products available to human populations. Reinforcing (positive) feedbacks thus operated as crop species such as alfalfa and grasses "remade" environments, making them more hospitable for European livestock. Sugar cane is another crop that is notorious for remaking landscapes and social relations in the Caribbean, South America, and the United States through plantation agriculture and slavery. The case of pasture species and livestock is considered further in this module's summative assessment.
The third major period in our broad historical summary is modern industrial agriculture, which is the predominant environment-food system today, though it coexists with a significant sector of smallholder agriculture that has incorporated modern industrial techniques to a greater or lesser extent.
Modern agriculture arose in the 1800s and 1900s through a variety of developments in agriculture and in the processing and business of foods. “Industrial” in this description refers to the major role of factory-type processes that are principally large-scale and involve the defining role of technological inputs such as large amounts of freshwater, chemical fertilizer, pesticides, and “improved” seed that delivers high-yield responses to the other inputs. Industrial is also an appropriate term since this environment-food system has narrowed its concentration to a few species of crops and livestock. “Modern” is important in this description since distinct foodways and consumption practices---many based on foods that are highly processed, relatively inexpensive, easy-to-prepare convenience items---are integral to this environment-food system. Modern is also an important term since it is estimated that this predominant system embodies more changes from the past 100 years than occurred over several hundred or even thousands of years previously.
In much of the world, the advent of the modern environment-food system came through the Green Revolution beginning in the 1940s and 1950s. The Green Revolution used science and technology to develop modern crops and agricultural production systems for the countries of Asia, Africa, and Latin America. While it has evolved considerably, the approach of the Green Revolution continues to be used today. The worldwide influence of the Green Revolution suggests one additional term to describe this type of environment-food system, which is “global.” The development of this system, as well as its inputs and impacts, is global in scope. The global characteristics of today’s predominant environment-food system will be evident throughout this module and the others in this course as we place emphasis on the global scale of environmental and social impacts, which relates to the concept of the Anthropocene. In fact, if we consider the bar graphs of the relative areas of wild versus managed land (crops and livestock) globally presented in module 1 (Figure 1.1.4), we can see why some experts prefer to think of modern industrial agriculture, and the related expansion of human populations, as the defining period of the Anthropocene.
Figure 2.2.2.: Modern industrial agriculture is a culmination of social and technological processes beginning in the 1800s that sought to increase yields of agriculture for growing human populations by applying fossil fuel energy, mechanization, and advanced crop breeding methods. This photo of a modern grain variety being harvested encapsulates this transformation towards modern agriculture in many ways: the large area of a single grain variety, likely bred and sold by a modern corporation; the extreme mechanical efficiency and speed of grain flowing off the field into a wagon for storage and sales; the need for diesel fuel and associated carbon dioxide emissions to drive the powerful machinery, whose power and efficiency reduces the workforce needed for agriculture. Credit: Alan Harrison, used with permission from Flickr under a creative commons license
A wide range and mix of environmental and social impacts are associated with modern industrial agriculture. Agricultural mechanization has coincided with a major reduction in the agricultural workforce. In the United States, for example, less than 2% of the population is estimated to be directly employed in agriculture. In the 1870s and 1880s, by contrast, this estimate was 60-80% of the U.S. population. Environmental impacts and human-environment interactions have also been strongly influenced by the widespread use of fossil fuels in modern industrial agriculture.
Fossil fuel use is the foundation for many modern agricultural technologies, ranging from tractors and farm machinery (Fig. 2.2.2) to fertilizers and pesticides, as well as the energy costs of processing and the large number of “food miles” typically involved in transportation. As a result, energy issues along with greenhouse gas emissions have become a major concern with modern industrial agriculture---as discussed in subsequent modules.
One example of human-environment interaction will suffice in this section since modern industrial agriculture will be examined in detail in many of the modules that follow. (Modules 6, 7, and 8, which focus on agroecology, feature excellent and far more extensive examples.) The widespread use of pesticides and the creation of pesticide-dependent crops and cropping systems are a defining characteristic of this agriculture worldwide. The development of these synthetic crop-protection products has been driven by the increases in yield associated with solving, if only temporarily, a pest problem. Meanwhile, the populations of agricultural pests continue to evolve resistance in response to these applications, an example considered further in module 8. As a result, it is essential that these modern industrial crops and cropping systems (including the use of pesticides) be constantly developed in order to gain a new advantage against the most recently evolved pests. This innovation process in agricultural technology for crops is another example of a positive feedback driving the further industrialization of agriculture.
In recent history (since 2000) significant new directions have entered the spectrum of existing environment-food systems. The future of food will depend on these newer systems, in addition to modern industrial agriculture that was introduced on the previous page. The new directions---which we refer to here as “ecological modernization” and “alternative community-based food systems”---are a response to concerns over environmental sustainability, human health and food safety in addition to the attempt to reinvigorate rural society and address social justice issues, a concept we introduced in module 1 as "social sustainability". Each of these new directions also has its own environmental and social impacts. These impacts are introduced here and then taken up again in module 10.1 when we consider them as "global" and "local community" variants of new, alternative food system types. In both these new directions, a major role is taken by ecological methods and techniques replacing to a significant degree the use of synthetic chemicals. Substantial success can be seen in some cases: for example, organically certified lettuce and carrots with reduced use of synthetic pesticides now account for more than 10% of the land producing these crops in the United States.
Social changes---remember we use this term broadly to refer to economic impacts as well---vary widely in the environment-food systems associated with ecological modernization. Large corporations as well as a substantial number of large family-managed farms, for example, predominate in the large-scale sector of organic agriculture and organic food production and distribution, where these companies and large farms occupy a "quasi-parallel" role to their role in supporting modern industrial food production (previous page). We and other authors describe their style of adoption of organic production techniques as ecological modernization because they seek environmentally sustainable methods as relatively interchangeable replacements for synthetic chemical inputs in modern agriculture (previous page). Ecological modernization also retains modern forms of organization, for example, large scale and efficiency of cropping and shipping of food, corporate management, and sales through mass outlets such as supermarkets. Food distribution companies in this system can offer organic foods at lower prices in the case of fresh vegetables and fruits. This advantage is significant since affordability is a major issue among potential consumers of organic food, and such "corporate organic" foods may be more accessible at the present for a larger proportion of the population. Others argue that issues of cost and accessibility resulting from transitions towards organic and other more ecologically-based ways of managing agriculture merely reflect the artificially low financial, environmental, and social costs of comparable products from the modern industrial food system, for example, the carbon dioxide emitted in the manufacture of fertilizers and pesticides (see module 10). 
In any case, the rules, regulations, and preferences of human systems designed to foster organic agriculture (such as organic certification and labeling) may be effective in improving the natural system, though the feedbacks to human systems may be ones mostly supporting large agribusiness through positive feedback effects introduced in Module 2.1.
Take, for example, the case of organic produce such as lettuce and carrots, where natural conditions in climatically optimum growing areas (e.g., organic vegetable-growing areas in California) favor the large corporations and family farms able to access high-quality land and resource systems (such as water) and to deal with the regulatory tasks associated with large-scale national markets. The large scale of these corporate actors becomes a positive feedback driver that strengthens the transition towards this "ecological modernization" mode of new food production system. This case is considered further in this Module’s Summative Assessment.
“Alternative community-based food networks” is a term that is applied to various smaller though increasingly important types of environment-food systems. We use this term to focus on local environment-food systems. Proponents and activists supporting these types of environment-food systems center much of their attention on the process known as re-localization. This process brings food producers into closer contact with consumers. Local farmers markets, where farmers sell food directly to consumers, are an example of re-localization. Local environment-food systems are seen as an alternative to the concentrated corporate control of environment-food systems. A major goal of re-localization is supporting small- and medium-scale farmers, including the majority of family-owned farms, as a means of reinvigorating rural life among a range of small businesses---not just a larger number of farms but also the corresponding number of small businesses that support and benefit rural areas. This interest in “alternative food systems” is committed to increasing the percentage of the “food dollar” that goes directly to farmers. This percentage is estimated currently at 8-10% in modern industrial environment-food systems, where a large share of the food dollar goes to food processors and farm input suppliers. For this reason, the local food emphasis in alternative food movements is also sometimes referred to as an emphasis on short food supply chains, exemplified by farmers' markets or regional sourcing of food in supermarkets and restaurants. These alternative food systems are presented further in Module 10.
3.2.06: Summative Assessment
Instructions
Download the worksheet to understand and complete the assessment. You will submit the answers from the worksheet to the Module 2 Summative Assessment in Canvas.
The first part of the worksheet presents a more detailed version of the interaction of human and natural systems at the onset of agriculture at the end of the last ice age, presented at the end of Module 2.1. This is to provide you with an example of the use of these diagrams to think about changes in food systems over human history, and it is also shown below.
Figure 2.2.3.: Example of human and natural system drivers around domestication in human food systems, to guide responses in the two other examples in the assessment. Credit: Steven Vanek, adapted from the National Science Foundation
Click for a text description of the Human Natural System image
Heading at the top says, Human to natural drivers and feedbacks: Human system causes changes in the natural system and strengthens existing changes. At the bottom is the heading Natural to Human drivers and feedbacks: Natural system causes changes in the human system and strengthens existing changes. From Human System on the left, an arrow leads to the following items: Humans settle near to water sources in higher population densities, Increase in social complexity, Humans notice and make use of large-seeded wild plants and animals near water for domestication, Increased need for food, Deforestation, and disturbed soils. From Natural system, an arrow flows to the following items: Warmer/drier climates with more seasonal precipitation, Larger seed size in wild plants, Potential domesticated animals drawn to water sources near to humans, Good niches for crops around human settlements. The arrows represent a continuous flow between Human and Natural systems.
Further instructions for the assignment are given in the worksheet. You will need to fill in four questions on the worksheet, some of which have multiple parts.
Submitting Your Assignment
Please submit your assignment in Module 2 Summative Assessment in Canvas.
3.03: Summary and Final Tasks
Summary
Agriculture is the most widely practiced and influential environment-food system, though it is not the only one---either historically or at present. Environment-food systems in general, and agriculture in particular, are complex coupled systems that combine human and natural systems and underlie human life and cultural and social functions. The distinct human-environment interactions of agriculture, including domestication and the management of diverse habitats for raising plants and animals, have existed for upwards of 10,000 years and were preceded by, and co-exist with, other environment-food systems such as hunting-gathering. Human-environment interactions were as integral to the origins of agriculture as they are to our understanding of modern industrial agriculture and farming alternatives in our current period of history. Human-environment interactions can also help us to understand the history of food systems between the onset of agriculture and the present day. Considering human-environment interactions in the context of the historical and geographic parameters mentioned above provides an overview that serves to introduce the following two sections of the course, which focus on environmental systems (Modules 4-9) and social systems (Modules 10-11). The systems concepts of drivers and feedbacks in the development and functioning of food systems should also help you to understand the focal region you will examine in your capstone project.
Reminder - Complete all of the Module 2 tasks!
You have reached the end of Module 2! Double-check the to-do list on the Module 2 Roadmap to make sure you have completed all of the activities listed there before moving onto Module 3!
Further Reading
• Brookfield, Harold. Agrodiversity. New York: Columbia University Press, 2002.
• Brookfield, H., Padoch, C., Parsons, H., & Stocking, M. 2004. Cultivating biodiversity: understanding, analyzing and using agricultural diversity.
• Crosby, Alfred, Ecological Imperialism: The biological expansion of Europe, 900-1900. Cambridge University Press, 2nd edition 2004.
• DeLind, Laura. "Transforming organic agriculture into industrial organic products: Reconsidering national organic standards." Human Organization 59(2): 198-208, 2000.
• Diamond, J. Evolution, consequences and future of plant and animal domestication. Nature 418 (6898): 700-707, 2002.
• Dunn, Rob. Never Out of Season: How Having the Food We Want When We Want It Threatens Our Food Supply and Our Future. Little, Brown and Company. 2017.
• Duram, Leslie. Good Growing: Why organic farming works. U of Nebraska Press, 2005.
• Pollan, Michael. The botany of desire: A plant's-eye view of the world. Random House, 2001.
• Smith, Bruce D. The Emergence of Agriculture. New York: Scientific American Library, 1995.
Introduction
Module 3 covers the nutritional needs to which human consumption patterns ideally respond within food systems and some of the nutritional challenges (related to both deficit and excess of diet components) that are currently faced by food systems. Module 3.1 covers some current basic knowledge on human nutritional requirements and features of diets that are health-promoting. Module 3.2 covers current issues within food systems of malnutrition, as well as the challenges and efforts aimed at making diets healthier, both in the United States and around the world. We encourage you as learners to think about how these nutritional principles, and efforts to promote food access and healthier diets, can fit with the analysis of the focal region you will be completing for your capstone project.
Diet, health, food systems, and sustainability
This module addresses issues surrounding diet and nutrition in food systems. This is an aspect that touches all of us very personally – we’ve likely read and absorbed some of the messages about healthy eating that are promoted by government agencies, advocacy groups, and other voices in our society, as well as a substantial dose of messages of all sorts promoting food choices - healthy and otherwise - from food companies within the modern food system. For many of us, nutrition goals and principles motivate important decisions that we make on a daily, ongoing basis: can we include a vegetable with our dinner? What makes for a healthy breakfast? How can we make snacks healthy rather than an excuse for junk food? Food choices are also wrapped up with culture and religious observance for many of us, illustrating how our human systems of culture and ethnic origin feed into food systems, along with our beliefs and principles regarding the supernatural. This echoes the way that food systems and domestication of food-producing plants and livestock were wrapped together with culture and religion in earlier historical and prehistoric periods (see Module 2). Food choices are also wrapped up in social status, as well as linked to environmental sustainability. For example, once we appreciate the dramatically increased use of water to produce beef and the fact that water shortages may be one of the key stresses brought on by climate change (see module 1 food system examples, following modules on water and resilience), we may rethink meat consumption in our society and take a different view of the aspiration of growing wealthy social sectors around the world to consume more beef.
The impact of food choices on the environment is not the only reason to consider diet and nutrition. As a society, our food choices and our ability to access sufficient and healthy food have a dramatic influence on our own health and well-being. This is seen most clearly in two major issues facing societies around the world. The first is a crisis of chronic malnutrition and nutrient deficiencies: the lack of crucial minerals, vitamins, proteins, and high-quality fats around the world has dramatic negative effects, while appropriate diets can prolong life and good health even among people who are materially poor in other ways. The second major issue facing modern and modernizing societies is nutrition-linked disorders such as heart disease and type II diabetes, linked to overconsumption of calories combined with sedentary lifestyles, which translates into increased rates of obesity within both wealthy and poor countries.
Diet and nutrition patterns thus show the potential to either support or harm both the health of the environment and the health of humans within the human systems that live in constant interaction with the environment as main components of food systems.
Goals
• Describe the basic elements of a healthy diet from a scientific standpoint.
• Describe current major nutrition challenges and their immediate causes, such as nutrient deficiencies and calorie overconsumption.
• Relate current major nutrition challenges to social factors such as food access and changing diets in modern food systems.
Learning Objectives
After completing this module, students will be able to:
• Describe the basic categories of nutrients and how these contribute to human function and health.
• Describe the major changes taking place in diet/nutrition in rich and poor countries, respectively.
• Define the concept of food access and the term "food desert" as contrasted to the broader concepts of food security and food insecurity.
• Understand changes in thinking around healthy nutrition and basic principles that have remained.
• Use an online nutrition tool to analyze and compare diets and areas in which they are deficient or excessive in nutrients.
• Analyze why food access is an issue in modern food systems.
• Use a mapping tool to analyze the situation of food access in U.S. cities, and relate these situations of food access to literature describing the history of strategies to guarantee food access in these cities.
Assignments
Module 3 Roadmap
Please note that some portions of the Summative Assessment may need to be completed prior to class. Detailed instructions for completing the Summative Assessment will be provided in each module.
To Read
1. Materials on the course website (you are on the course website now)

To Do
1. Formative Assessment: Using A Diet Assessment Tool (in course content; then take the quiz in Canvas)
2. Summative Assessment: Food Access and Food Deserts (in course content; then submit in Canvas)
3. Take the Module Quiz (in Canvas)
4. Turn in Capstone Stage 1 assignment (in Canvas)
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
04: Diet and Nutrition
Introduction
We'll start this module with the basics of the nutrition and diet required for basic human functioning as well as good health. Nutrition basics start with the idea of a balanced diet, which should provide the essential nutrients for daily human activities, growth and tissue repair, and overall health, as demonstrated by years of research on human nutritional needs. Figure 3.1.1 shows one recent attempt to summarize this scientifically grounded view of a balanced diet in an accessible way as a "healthy eating plate". You'll notice that the sections addressing diet throughout module 3 will refer back to the concept of balanced combinations of nutrients from different food sources that create this balanced diet. It is also important to state that nutritional theories and the concept of the optimal diet have shifted over decades and centuries, which may give us reason to be careful about the certainty with which we hold to nutrition beliefs. See "High-quality fats and shifting paradigms around fat in diets", further on in this module, on the changing attitudes of researchers towards different fat sources in human diets. Nevertheless, years of nutrition research up to the present have defined the requirements of a healthy diet that have been incorporated into the nutritional guidelines summarized in figure 3.1.1 and also published by the United States Department of Agriculture and other government agencies around the world.
Figure 3.1.1.: The Healthy Eating Plate concept for a balanced diet, which is focused on required foods in rough proportions shown on the plate, that will create needed amounts of energy sources, protein, fiber, and other important vitamin and mineral constituents for optimal human health. Note that adequate fluid intake, and some level of physical activity, are also important components of this balanced plate approach to human nutrition for health. Credit: Harvard School of Public Health; made available on Flickr (Creative Commons CC BY-NC-SA 2.0) by Steve Garfield and the Harvard School of Public Health. Copyright © 2011, Harvard University. For more information about The Healthy Eating Plate, please see The Nutrition Source, Department of Nutrition, Harvard School of Public Health, and Harvard Health Publications.
Click for a text description of the Healthy Eating Plate image
Healthy Eating Plate: A plate divided into four sections: the left 1/2 of the plate shows 2/3 Vegetables and 1/3 fruits. The right 1/2 of the plate is 1/2 whole grains and 1/2 healthy protein. Outside the plate is healthy oils and water. Descriptions are as follows: Healthy oils: Use healthy oils (like olive and canola oil) for cooking, on salad, and at the table. Limit butter. Avoid trans fat. Vegetables: The more veggies - and the greater the variety - the better. Potatoes and french fries don't count. Fruits: Eat plenty of fruits of all colors. Water: Drink water, tea, or coffee (with little or no sugar). Limit milk/dairy (1-2 servings/day) and juice (1 small glass/day). Avoid sugary drinks. Whole grains: Eat whole grains (like brown rice, whole-wheat bread, and whole-grain pasta). Limit refined grains (like white rice and white bread). Healthy Protein: Choose fish, poultry, beans, and nuts; limit red meat; avoid bacon, cold cuts, and other processed meats.
What follows in the rest of module 3.1 is a summarized description of human nutritional requirements, intended to allow you to relate these to food systems as the source of human nutrition. Because of this, we will present both the requirements (e.g., vitamin A versus vitamin C versus amino acids) and also some major issues with particular nutrients that tend toward deficiency in many human populations and their related food systems. At the outset, we can already guide your learning by presenting an exceptionally simplified version of human nutrient needs that you will flesh out in the following pages. To a crude approximation, humans need the following components in their diets: energy, which in practice means carbohydrates, fats, and protein considered in terms of their energetic content; "building blocks" for growth and maintenance, which generally means protein, associated with higher-protein foods but occurring in both the protein and whole-grain fractions of the healthy plate above; and promotion of health, proper development, and proper function, closely linked to vitamin and mineral intake. We'll delve into these elements of a balanced diet one by one in the following pages, and add a few details as well. An additional point that deserves mention now is the particular importance of proper nutrition for growth, mental development, and health promotion in children. Children are particularly vulnerable to nutrient deficiencies, and the consequences of those deficiencies can persist into adulthood.
4.01: Diet and Nutrition Basics for Global Food Systems
Energy: Requirements and Function
Carbohydrates (starches and sugars), fat, and protein within food can all function as sources of energy when they are metabolized to carbon dioxide and water in respiration processes in all of our body’s cells. This energy fuels everything from the production of neurotransmitters in our brains to the muscle contractions required to shoot a basketball or weave a basket. The energy content of food is expressed as “calories” (“calories” are in reality kcal or kilocalories as defined in chemistry; 1 kcal will heat one liter of water one degree C). Energy-dense foods with high caloric content are generally those with high carbohydrate, protein, or fat content - for example, pasta, bread, oatmeal, grits, and other cooked whole grains and porridges consumed around the world as staples; plant oils or animal lard present in cooked foods, or meat and cheese. It is interesting to note that gram for gram, fats contain over twice the energy density of carbohydrates or protein: about 9 kcal per gram for fats versus only about 4 kcal per gram for carbohydrates and protein. We’ll address the further role of high-quality fats as a nutrient, rather than just an energy source in a page further on.
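As a quick illustration of the 4/4/9 kcal-per-gram figures above, the energy content of a food can be estimated from its macronutrient composition. The serving values below are illustrative, not taken from a real food database:

```python
# Sketch of the kcal-per-gram rule described above: estimating the caloric
# content of a food from its macronutrient composition.
# The serving amounts below are illustrative, not from a real food database.

KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def estimate_kcal(grams: dict) -> float:
    """Estimate energy content (kcal) from grams of each macronutrient."""
    return sum(KCAL_PER_GRAM[nutrient] * g for nutrient, g in grams.items())

# A hypothetical serving of cooked pasta dressed with olive oil:
serving = {"carbohydrate": 40, "protein": 7, "fat": 5}
print(estimate_kcal(serving))  # 40*4 + 7*4 + 5*9 = 233 kcal
```

Note how the 5 g of fat contribute 45 kcal, almost as much as the 7 g of protein and 5 g more of carbohydrate combined, reflecting fat's higher energy density.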
Energy Sources: Diet and Food System Aspects
Current U.S. Department of Agriculture (USDA) and other major nutritional guidelines promote the idea of accessing calories via a predominance of whole grains (e.g., whole wheat and oats and flours made from these, brown rice), as these whole grains contain a mixture of carbohydrates, proteins, and indigestible fiber, as well as vitamins. These non-caloric contributions to nutrition are also important, as discussed in the pages below, and combine well with the caloric content of food to produce better health outcomes. Calories are a fundamental consideration within nutrition because a negative calorie balance (calories consumed minus those expended in sedentary activities and exercise), along with shortages of other associated food components described below, leads to weight loss and faltering growth in children, including childhood stunting and permanent harm to a person's developmental potential. By contrast, a large excess in the calorie balance over time leads to weight gain that is linked at a population level to increased rates of heart disease and diabetes. These diet-related diseases increasingly afflict populations in industrialized economies and urban populations worldwide with access to abundant, though often less healthy, food choices. Diet-related diseases as part of food systems will be taken up again in module 3.2.
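The calorie-balance idea can be sketched as simple arithmetic. The figure of roughly 7,700 kcal per kilogram of body weight used below is a common rule of thumb rather than a precise physiological constant, and the intake and expenditure numbers are purely illustrative:

```python
# Minimal sketch of the calorie-balance concept: a sustained daily surplus or
# deficit accumulates into weight change over time.
# ~7700 kcal per kg of body weight is a rough rule of thumb, not a precise
# physiological constant; intake/expenditure values are illustrative.

KCAL_PER_KG = 7700

def weekly_weight_change_kg(daily_intake_kcal: float,
                            daily_expenditure_kcal: float) -> float:
    """Crude estimate of one week's weight change from a daily calorie balance."""
    daily_balance = daily_intake_kcal - daily_expenditure_kcal
    return 7 * daily_balance / KCAL_PER_KG

# A sustained surplus of 500 kcal/day:
print(round(weekly_weight_change_kg(2500, 2000), 2))  # about 0.45 kg gained
```

The same arithmetic run with a negative balance shows how quickly a chronic deficit erodes body weight, which is part of why sustained undernutrition is so damaging to growing children.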
Protein: Requirements and Function
The second main component conceptualized by nutritionists as a key ingredient of a healthy diet is protein, which is used in many different ways to build up and repair human tissues. Proteins are basically chains of component parts called amino acids, and it is these amino acids that are the basic "currency" of protein nutrition. Twenty amino acids are common in foods, and of these, nine[1] are essential because humans cannot synthesize them from other nutrient molecules. Meat, fish, and eggs are animal-based, protein-dense foods that contain the complete profile of amino acids, basically because we are eating products that are very similar in composition to our own body tissues. In addition, some grains such as quinoa and buckwheat contain complete protein, while most legumes (peas, beans, soybeans, bean sprouts, and products made from these) are high in proteins in a way that complements grains in the diet.
Protein Sources: Diet and Food System Aspects
For people who do not eat meat (a vegetarian diet) or who avoid all animal-based foods (a vegan diet), the full complement of amino acids is accessed by eating milk and egg products or by eating a diversity of plant-based foods with proteins such as whole grains, nuts, and legumes. Legumes are particularly protein-dense and important in addressing the lack of certain amino acids in other plant-based foods. The combination of rice and beans is an oft-cited example of the complementarity of amino acids for a complete amino acid profile. Eating a wide range of plant-based foods is an excellent strategy to access the full complement of essential amino acids, as well as the diversity of mineral, vitamin, and fiber needs discussed on the next pages. Many of the most problematic diets are highly monotonous due to poverty and/or inadequate knowledge about diet, with an excess or sole dependence on a single starch source without legumes or animal products, or overconsumption of processed foods in comparison to fresh plant and whole-grain foods. Where only a single grain is eaten, deficiencies of certain amino acids can result.
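The rice-and-beans complementarity described above can be illustrated with a small sketch. The "limiting" amino acids assigned to each food (lysine for rice, methionine for beans) follow the commonly cited pattern, but the coverage sets below are deliberately simplified for illustration:

```python
# Illustrative sketch of amino-acid complementarity: rice tends to be limiting
# in lysine and beans in methionine, so together they cover what each lacks.
# The per-food coverage sets are simplified for illustration only.

ESSENTIAL = {"phenylalanine", "tryptophan", "methionine", "lysine",
             "leucine", "isoleucine", "valine", "threonine", "histidine"}

# Essential amino acids each food supplies in adequate amounts (simplified):
supplied = {
    "rice":  ESSENTIAL - {"lysine"},
    "beans": ESSENTIAL - {"methionine"},
}

def missing(foods) -> set:
    """Essential amino acids not adequately supplied by any food in the meal."""
    covered = set().union(*(supplied[f] for f in foods))
    return ESSENTIAL - covered

print(missing(["rice"]))           # {'lysine'}
print(missing(["rice", "beans"]))  # set() -- the combination is complete
```

The set-union logic captures the key idea: a monotonous diet leaves the same gap at every meal, while combining complementary foods closes it.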
[1] These are phenylalanine, tryptophan, methionine, lysine, leucine, isoleucine, valine, and threonine, which you can find in many introductory nutrition texts or resources online, if further interested. A ninth amino acid, histidine, is important in child growth and may also be vital to tissue repair, while another, arginine, is essential at some growth stages but can usually be synthesized by healthy adults.
4.1.03: 3 Vitamins and Minerals
In addition to the daily requirements for energy and protein, vitamins and minerals are required in relatively small amounts as part of a proper diet to ensure proper functioning and health, and they are especially important for childhood development. Vitamin and mineral deficiencies can lead to "hidden hunger," where energy and protein needs are being met but the lack of vitamins and minerals prevents adequate development and health in children and saps the productive capacity of adults, for example via iron-deficiency anemia (see below). There are a large number of essential vitamin and mineral components in foods. In this module, we focus on a few that frequently pose major challenges within food systems. If you are interested, full details on the roles of many nutrients can be found in the excellent online text from the Food and Agriculture Organization (FAO) of the United Nations, Human Nutrition in the Developing World. This module's formative assessment may also point to other vitamins and minerals that can become deficient in diets.
Calcium
Although it is important for other functions, calcium is emblematic in its role in proper bone growth and maintenance. It is especially important for women to consume adequate calcium throughout life, and higher intakes of calcium from childhood on are associated with lower rates of osteoporosis and stronger bones later in life. Vitamin D is also essential for the proper absorption of calcium so that a vitamin D deficiency can lead to calcium deficiency. Dairy products and small fish that are consumed whole (so that fine bones are eaten) are highly calcium-dense foods around the world. Grains are low in calcium but are consumed in such volumes that they often contribute substantial calcium to diets. As is true for many other nutrients, women who are breastfeeding a child have an especially high calcium need because they export calcium in their breast milk to help grow the bones of a developing infant.
Iron
Iron is most important as a component of hemoglobin, which gives blood its red color and allows red blood cells to carry oxygen. Iron deficiency thus leads to anemia, a shortage of red blood cells whose symptoms include shortness of breath and overall weakness. Women require more iron than men because of blood loss in menstruation, and pregnant and lactating women require especially high amounts of iron as they expand their blood supply and provide for a growing fetus. During lactation or breastfeeding, mothers pass substantial amounts of iron to their growing infants, so that women's iron needs are also high during the period when they are nursing their children. When shortage arises during pregnancy or lactation, a woman's iron stores tend to be sacrificed for the benefit of the child, which can leave a mother who lacks adequate food due to poverty with acute iron deficiency and anemia that greatly complicates other daily activities such as economically important work. The best sources of iron in foods are meat, fish, eggs, green leafy vegetables, and whole grains. Cooking food in cast iron utensils is also an easy way to supplement iron in food.
Zinc
Zinc is an essential mineral that is important in a large number of human cellular enzyme processes. It is important for proper tissue growth, cell division, wound healing, and the functioning of the immune system, among other functions. As such it is very important for children’s health, growth, and development. Zinc is an example of a nutrient that is often used to fortify processed foods and is also naturally present in a wide variety of foods such as red meat, poultry, beans, nuts, and whole grains. One goal of plant breeders recently has been to breed or identify traditional varieties of whole grains and potatoes that are high in zinc and iron. This way of enhancing diets by way of the properties of crop plants is called a biofortification strategy. Because these staple foods are usually present even in the most rudimentary diets associated with extreme poverty, biofortification can be an effective strategy to ease access to these important mineral nutrients in the most vulnerable populations.
Vitamin A
Vitamin A, or retinol (linked to the word 'retina,' a part of the eye), is famous for the popularized connection between eating carrots and good eyesight. Vitamin A deficiency causes reduced vision in dim light, called night blindness, and is broadly correlated with increased mortality in children from a variety of causes. True vitamin A is not in fact directly present in carrots and other dark green or pigmented vegetables (collards, squash, sweet potatoes, tomatoes, and even yellow maize) but is readily synthesized in the body from the orange pigment (beta-carotene) that these plant sources contain. True retinol is found in eggs as well as meat and fish products. Like zinc, vitamin A is another crucial nutrient for growth and development that can become deficient in the diets of children and other vulnerable groups (Figure 3.1.2). It has been targeted as a priority for resource-poor populations around the world through the promotion of orange-fleshed sweet potato, other orange vegetables, and yellow maize within smallholder diets, and through "golden rice," a genetically engineered rice variety developed to address vitamin A deficiency. While not all biofortification approaches utilize genetic engineering, golden rice is a further example of a biofortification strategy.
Figure 3.1.2.: Prevalence of Vitamin A deficiency around the world, with colors indicating the percentage of children affected. Many of the areas with "no data" are in fact areas where per-capita incomes are high enough that consumers are assumed to be eating enough animal products and vitamin-A containing vegetables to avoid deficiency. The map shows that prevalence rates in Mexico, Africa, South Asia, East Asia, and the Pacific are highest, reaching up to 80 percent. The rates of deficiency in China, Central America, and South America reach up to 20 percent. Credit: Steven Vanek, based on data from Bassett & Winter-Nelson, 2010, Atlas of Human Nutrition
Vitamin C
Vitamin C is not a major deficiency challenge worldwide, though in the 1700s vitamin C deficiency was linked to the disorder scurvy in sailors due to their highly monotonous diets. Rather, it is presented here because of its iconic association with fresh fruits and vegetables, especially citrus fruit but also potatoes, bananas, spinach, collards, cabbage, and many of the weeds that are consumed around the world as leafy vegetables. True deficiency is thus uncommon in most diets around the world, though vitamin C's role as an antioxidant and health-promoting vitamin that "cleans up" harmful free radicals in the body has been promoted. Also, vitamin C is an excellent example of a positive interaction between nutrients: vitamin C promotes iron absorption. Since iron from most plant sources is much less available to the body than the so-called heme iron in animal sources, eating fruits and vegetables containing vitamin C in the same meal as plant-based sources of iron is an excellent way for people consuming meat-free diets (or just individual meals without meat) to absorb sufficient iron.
A number of other vitamins and minerals are essential, and in general, the way a food system can provide these to human populations is to make a wide variety of plant-based foods, as well as a few meat options, available to consumers. As we will see soon, this is in contrast to what certain sectors of the food system often make available to consumers. Some of these important vitamins and minerals are vitamin C, vitamin D, the B-complex vitamins, potassium, and magnesium, and you may see these arise as concerns in the formative assessment below.
A complete description of vitamins, minerals, and other diet components in an accessible format can be found in the online book from the FAO, Human Nutrition in the Developing World.
4.1.04: 4 High-Quality Fats and Shifting Paradigms
You may be familiar with the idea that fats are perhaps "delicious yet harmful" for most humans and are to be consumed in moderation (see the balanced plate in figure 3.1.1). Recently, increased attention has focused on the role that "good fats" play in health and development, in addition to the awareness that most diets in more affluent areas of the world contain excessive fat, especially saturated fats of animal origin. Unsaturated fatty acids of plant origin are generally considered essential healthy nutrients, and there is evidence that fatty acids derived from plant sources and fish are important in promoting better neural development and nerve function. For consumers who face food-insecure conditions, fats are also a highly concentrated energy (calorie) source and therefore a valuable addition to the diet. Where calories are already in excess, as in many urban diets around the world and particularly in the industrialized first world, calorie content is not a benefit of high-fat diets. Recently it has been found that excessively processed or hydrogenated fats often included in processed foods (trans fats) are harmful to health, and so labeling now specifies the trans-fat content of foods. For example, you can find the trans-fat content of diets in the diet tool used with this module's formative assessment.
Fat in foods as a case study of shifting paradigms in nutrition
(this section is adapted from a contribution by Human Geographer Mark Blumler at Binghamton University)
Most of us have probably absorbed the current overall thinking that fat in diets needs to be treated with caution, that it is jokingly synonymous with "divine" or "sinful" food, or perhaps that there is something suspect about fat. Having evolved in nutrition-limited environments, most humans are primed to take in fats and other high-calorie foods as a nutritional bonanza and store them away in an evolutionarily "thrifty" way against future calorie shortage. However, western nutrition scientists' beliefs regarding different types of fat in diets have undergone drastic fluctuations over the past century (Table 3.1) that may shake our confidence in exactly what is known about "good" and "bad" in nutritional terms. The advice coming out of the nutritional science community, as filtered through government proclamations such as the food pyramid, has also caused enormous changes in the American diet, which have benefited some, such as the vegetable oil processing industry, while hurting others, such as cattle ranchers and the beef lobby.
To recap this sometimes bewildering history: around the 1960s, scientists discovered a relationship between cholesterol and cardiovascular disease and noticed that saturated fats have more cholesterol than other oils. Consequently, there was a big push to replace butter with margarine and to cut back on consumption of red meats, lard, and other animal fats. Initially, it was believed that polyunsaturated fats such as safflower oil are most heart healthy and so there was a major promotion of such oils. Later, interest developed in the “Mediterranean diet” because of the presence of many very old people in Mediterranean Europe, and nutritionists came to believe that monounsaturated fats such as in olive oil were best for us. Polyunsaturated oils, on the other hand, were increasingly shown to be not beneficial. Meanwhile, further research showed that cholesterol in the blood does not correlate with cholesterol in the diet, undermining the assumption that saturated fats are unhealthy. Trans fats, high in margarine and other processed fatty foods, were shown to be very inimical to heart health. Also, fish oils were recognized as being high in omega 3 fatty acids, which are deficient in the typical American diet today. Recently, butter has been officially accepted as “good” fat, reversing a half-century of denigration of its nutritional value. While other saturated fats are not yet accepted, there is nothing to distinguish butter from the others that would explain how it could be “good” and the others “bad”.
Table 3.1. Simplified description of changes in the scientific evaluation of different fats.

Fat            1900   1960   1970   1980   2000        2015
Butter         Good   Bad    Bad    Bad    Bad         Good
Egg yolks      Good   OK     Bad    Bad    Bad         OK?
Lard           Good   Bad    Bad    Bad    Bad         Bad?
Fish oil       Good   Good   Bad?   OK?    Very Good   Very Good
Coconut oil    Good   Good   Bad    OK?    OK?         Good?
Olive oil      Good   Good   OK     Best   Best        Good
Sunflower oil  OK?    Good   Best   Good   OK          ???
Margarine      --     Good   Good   Bad    Bad         Bad
It is interesting to compare these shifting attitudes against traditional diets: The Japanese have the longest life span of any nation. Within Japan, the longest-lived are Okinawans. On Okinawa the only fat used for cooking is lard (of course, being on an island Okinawans also consume considerable fish oil although they do not cook with it). So, what is going on here? Why can science and scientists not "make up their minds" about fat in diets? Are findings on diet overly influenced by lobbying groups of major food industries, as some have charged for the case of margarine or dairy fats?
The story of fat recommendations illustrates the nature of science: it proceeds piece by piece, and it seems to have a penchant for identifying single causes that are later shown, in the context of a complex system, to be overly simplistic. Each research finding, such as that cholesterol is associated with cardiovascular disease, may have been correct. But it gave rise to recommendations that were wrong, because other facts, such as that dietary cholesterol does not correlate with blood cholesterol, were not yet known. Given that many of us would like to eat healthy diets and may also believe that science should guide better nutritional policy, there is a need for principles that emerge from current science to inform dietary recommendations, rather than the confusion that is perhaps caused by this tangled history of fats in nutrition. In the summary below, we try to provide some ballpark recommendations regarding fats, other dietary constituents, and lifestyle choices. They summarize many of the same principles from the "balanced plate" at the beginning of this module or the "healthy plate" from the USDA and other nutritional recommendations of government organizations.
Summary: Fat consumption within a healthy diet and lifestyle
• Diets very high in fat in the absence of fiber and sufficient fruits, vegetables, and whole grains are probably not very healthy within the range of choices of modern consumers.
• Eliminating fats in favor of simple (i.e. non-whole grain) carbohydrates to promote "low-fat dieting approaches" was probably a bad idea.
• Plant-based fats and fish oils, on the whole, seem to contain more health-promoting properties than exclusive reliance on animal-based fat. However, some recent large studies have shown little significant correlation between saturated fat consumption (like those found in meats) and chronic diseases like heart disease, although these diseases are definitely thought of as diet-linked.
• Whole grains of many different types are good, as is a preponderance of fruits and vegetables in the diet.
• Much of this seems to be leading us back towards the principles of more traditional unprocessed food diets without a preponderance of meat (which has benefits for the water use related to a diet as well, as you will see next in module 4 regarding water and food).
• Lifestyles should encompass diet and exercise, but this exercise does not need to be high-intensity for it to have a really positive effect on well-being and health.
• Pay attention to upcoming research and advice from research communities regarding diet, but resist taking them to extremes unless there is robust evidence over the long term, and place them in the context of more traditional knowledge and the other principles we've addressed in this module.
The Importance of Fiber Overall and for the Gut Microbiome
In addition to these nutrients that contribute to particular functions within the human body, fiber is the mostly undigestible component of food that moves through the human digestive tract while providing remarkable benefits. Undigestible cell wall components of plant foods (fruit membranes, bean and grain seed hulls, most of the plant cell wall, etc.) are examples of dietary fiber. In addition to its famous role in avoiding constipation by moving masses of foodstuffs through the digestive tract as a bulking agent, fiber helps to feed beneficial gut bacteria that produce beneficial substances. Over the last few decades, fiber consumption has been associated with the benefits of avoiding certain cancers, heart disease, and diabetes. Emerging knowledge regarding fiber highlights the role played by the gut microbiome -- the many billions of non-human cells that inhabit our digestive tract -- in promoting human health and avoiding disease. These cells outnumber the human cells in our body, owing to the small size of bacteria compared to human cells. Much like the other areas of nutrition described here, the importance of fiber links directly to the importance of eating a varied diet with whole grains, legumes, fruits, and vegetables. It is interesting to view fiber and these microbes not as a direct nutrient for human life processes, but as a "helper nutrient" or "catalyst" for human nutrition. Dietary fiber is relatively inert as a source of protein, minerals, or vitamins, but helps our digestive system do its job.
Optional Reading
For more on the role of fiber and nutrition generally in an accessible format, you can see the following page: "Dietary Fibre" from the British Nutrition Foundation.
Knowledge Check
Human Nutrition Basics: Choose the nutrient or diet component that matches the function or characteristic.
1) Most important as a mineral nutrient involved in growth, healing, and disease resistance.
• Iron
• Vitamin A
• Carbohydrate
• High-quality Fats
• Zinc
2) These sources contribute to human uptake/synthesis: eggs, carrots, orange-fleshed sweet potatoes, collards.
• Iron
• Protein
• Carbohydrates
• Vitamin C
• Vitamin A
3) Interacts positively to promote iron uptake when eaten in meals with plant-derived iron.
• Protein
• Zinc
• Vitamin C
• Carbohydrates
• High-quality fats
4) Considered most importantly as energy sources for respiration within all cells of the body.
• Iron
• Protein
• Vitamin C
• Carbohydrates
• Zinc
5) Important for hemoglobin in blood; deficiency causes anemia.
• High-quality fats
• Protein
• Iron
• Zinc
• Vitamin A
6) Dietary consumption of this nutrient is often analyzed in terms of nine essential amino acids.
• High-quality fats
• Zinc
• Protein
• Vitamin C
• Carbohydrates
7) Plant-based oils are often thought of as this.
• High-quality fats
• Protein
• Zinc
• Carbohydrates
• Vitamin A
4.1.06: Formative Assessment- Using a Diet A
Instructions
In this assessment, you will use an online diet assessment tool to test how different foods contribute to the total nutrients in a daily diet. You will follow along in the instruction sheet and log the nutrient content (e.g., calories, total fat, vitamin C) for each diet option in an Excel spreadsheet, so that you can compare the diets.
Download both the instructions and worksheet (word doc) and the excel spreadsheet for logging the results. The spreadsheet has color-coding of cells to transform the data you log into a color that indicates deficiency or sufficiency, which will help you to interpret the result.
We will use the tool My Food Record for this assessment. Important: you should use the "one-day analysis" under the "analyze" tab so that you do not have to create an account and can just log in as a guest. You should open this online nutrition assessment tool in an adjoining window or a different browser so you can see the instructions for the assessment and the online tool at the same time.
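Conceptually, the spreadsheet's color-coding amounts to comparing each logged nutrient total against a recommended daily amount. A minimal sketch of that logic follows; the reference targets and the 80% cutoff are hypothetical placeholders, not values from the course spreadsheet or any official guideline:

```python
# Hedged sketch of the kind of sufficiency check the assessment spreadsheet's
# color-coding performs: each logged nutrient total is compared against a
# recommended daily amount. Reference values and the 0.8 cutoff below are
# placeholders, not the course spreadsheet's or any official guideline's.

REFERENCE = {  # hypothetical daily targets for an adult
    "calories_kcal": 2000,
    "protein_g": 50,
    "vitamin_c_mg": 75,
    "iron_mg": 18,
}

def rate(nutrient: str, logged_value: float, low: float = 0.8) -> str:
    """Flag a nutrient as 'deficient' or 'sufficient' relative to its target."""
    target = REFERENCE[nutrient]
    return "deficient" if logged_value < low * target else "sufficient"

# One day's logged totals (illustrative):
day_log = {"calories_kcal": 1850, "protein_g": 35, "vitamin_c_mg": 90, "iron_mg": 10}
for nutrient, value in day_log.items():
    print(nutrient, rate(nutrient, value))
```

Running the sketch flags protein and iron as deficient for this illustrative day, which is exactly the kind of pattern the color-coded cells are designed to make visible at a glance.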
Submitting Your Assignment
Please submit your assignment in Module 3 Formative Assessment in Canvas.
Introduction
In module 3.2, we will integrate the basic information about healthy diets presented above in module 3.1 with the exploration of food systems that you have made throughout the course. In particular, we want to highlight (1) the challenges of malnutrition and low food access for impoverished populations around the world, which can represent a failure of the adaptive capacity of human societies to provide a socially sustainable future; (2) the phenomenon of low food access in marginalized areas of the 'developed' world, which can take the form of what are called 'food deserts' without easily accessible healthy foods; and (3) the rise of so-called chronic and nutrition-related 'diseases of affluence' related to caloric overconsumption (which in fact also affect poor, urban populations worldwide). We also will examine the potential food-system responses to these challenges, and how different food system types contribute to these challenges and their solutions.
4.02: Food System Issues for Nutrition
Food insecurity, or the inability to access sufficient, culturally appropriate food for adequate nutrition, is a major problem for the poorest segments of the world's population, the 1 billion or so people who live on less than two dollars per day (food security and insecurity are more fully addressed in module 11). These poorest members of society often face chronic malnutrition, which some call undernutrition to distinguish it from nutrition diseases of overconsumption or poor food choices, which are considered malnutrition of a different type. Undernutrition is often coupled with nutrition-related illnesses and with long work hours in paid employment or in smallholder agriculture on the small and/or degraded land bases that often accompany poorer farms in rural areas. Undernutrition represents a failure of human societies and food systems to create access to a minimum standard of diet quality that would allow all human beings to live to their potential. In addition, the difficulty posed by undernutrition may fall disproportionately on the most vulnerable members of society: women, children, the disabled, and the elderly. A particular burden is faced by the caregivers of children (women, and increasingly grandparents), who must both provide adequate care and feeding and take on the role of earning money to farm or buy food.
Organizations that work with these populations have sought to identify barriers to better care and feeding practices, because it has been recognized that when the allocation of food within households is not equitable, simply increasing farm production or access to food can fail to increase consumption of healthy foods by vulnerable household members. Increasing the direct involvement and knowledge of parents and other caregivers in nutrition practices, and focusing attention on children under five years of age, can help to improve nutrition outcomes and child growth in many poor households. These aspects of care, feeding, nutrition, and harmonization with local culture are important parts of food security referred to as the utilization component (further addressed in module 11.2). As an example of the trade-offs that can occur between agricultural and nutrition goals in improving livelihoods, agricultural methods introduced to improve soil quality or increase agricultural income can be labor-intensive, and care must be taken that they not place undue additional time burdens on caregivers, who may then be forced to neglect the care and nutrition needs of children.
The challenges of chronic malnutrition in rural food-producing households are often linked to small land bases and/or degraded soils. This case is highly problematic because it links a human system factor, poverty, with a natural system factor, the degradation of earth's ecosystems. As will be described further in module 10.2, the coupling of malnutrition and soil degradation can form a ‘poverty trap’ for rural households: unproductive soils demand large amounts of labor for small yields, while inequality, or lack of social sustainability, in the local and global human system limits alternative options for food production or employment. Degraded soils bear particularly on malnutrition because of the additional work and expenditure of calories required to coax yields from degraded land, which both deepens food deficits and malnutrition, and can drive expansion of the land area under degrading practices or continued production at the lowest level the soil will allow. These factors can trap households in poverty. Such a situation can also drive a smallholder household to migrate in search of more lucrative activities, which often means a dramatic change in diet towards more urban and processed foods, even as it improves the income possibilities of the family and can be considered an adaptive response to food shortage and vulnerability.
A second major issue facing modern food systems is chronic diet-related disease resulting from calorie overconsumption, often linked to increasing rates of obesity in societies around the world. The major chronic conditions related to calorie overconsumption are heart disease and type II (adult-onset) diabetes (see Fig. 3.2.1 for a global map of diabetes incidence). These have been called “diseases of affluence” because they tend to increase in prevalence as countries increase in material wealth, with a combined increase in meat and calorie availability along with more sedentary jobs and lifestyles.
Figure 3.2.1.: Percent of the population affected by diabetes by country, including type II (adult-onset) diabetes. Note that diabetes is more common in middle- and high-income countries; it is, however, increasing in developing countries as diets change with rising incomes and urbanization. The map shows that North America, most of South America, Europe, northern and western Asia, and Australia have diabetes incidences of at least 5% and up to more than 10%, whereas most of Africa, Central Asia, Southeast Asia, and the Pacific have incidences of 2.5% or less. Credit: Steven Vanek, based on data in Millstone and Lang 2013, The Atlas of Food.
The dominant role of the globalized, corporate food system in these societies (see module 10.1 for the typology of food systems) means that processed foods (e.g. mass-produced “non-food” snacks and sweetened beverages, prepared frozen meals, fast food, pasta) occupy a larger and larger part of the diet of typical consumers. To save cost and maintain demand, processed fats, sugar, and salt are used as low-cost ingredients in these foods (e.g. corn syrup, oil by-products from the cattle and cotton industries). As food writers such as Michael Pollan have described, the prevalence of these diet choices means that consumers eat a large proportion of “empty calories” without fiber, high-quality fats, sufficient vitamins and minerals, or in some cases adequate protein. Although high-calorie and fatty restaurant foods have been common for generations, at a whole-food-system level the prevalence of these foods, and the way they have been normalized in concepts such as “the American diet” (to which upwardly mobile consumers in many other countries aspire), are of great concern because they present a dominant range of food choices that is not consistent with human health. This is especially so as consumers become more urban and many (though not all) expend fewer calories in manual labor related to farming. The increased prevalence of calorie excess has produced rising rates of obesity in North America and Europe (Fig. 3.2.2 below).
Figure 3.2.2.: The prevalence of obesity in the United States at the county level. The map shows that the prevalence of obesity in the U.S. is highest in Alaska and the southern and southeastern states. It is the lowest in the western states. Credit: Max Masnick based on U.S. Centers for Disease Control data, 2008. Used with permission as a public-domain image.
The “double burden”: chronic diseases in poor economies: The term “diseases of affluence” is misleading because it is, in fact, poor people in industrialized countries as well as in the developing world who face the greatest impact of these diseases. Empty calories are often very cheap calories for poorer sectors around the world, so that consumption of processed or dominantly carbohydrate diets with insufficient whole grains, fruits, and vegetables is more common among the poor. In addition, poorer households are often less able to pay for the expensive consequences of these diseases in the middle-aged and elderly (e.g. insulin provision for diabetics, or the consequences of heart attack and stroke). Ironically, the same poorer sectors in poorer parts of the world, and even within the United States, can simultaneously face “traditional malnutrition” (i.e., undernutrition: insufficient consumption of vitamins, iron, zinc, and calories), especially among children and women, as well as diseases of overconsumption of empty calories. This ironic pairing of food system dysfunctions has been called the “double burden” on developing countries by food policy experts. It also acts, at a national level, to reduce the overall income of a country by impairing the productivity of its population (Figure 3.2.3, below).
Figure 3.2.3.: The economic impact of chronic diseases around the world, estimated as the total national income lost by different countries between 2005 and 2015 to impaired productivity and to costs borne by populations and governments from chronic diseases like diabetes, heart disease, and conditions resulting from stroke. Credit: Steven Vanek, adapted from data in World Health Organization (WHO), 2005, Rethinking "Diseases of Affluence": the Economic Impact of Chronic Disease
Click for a text description of the economic impact of chronic diseases image
This bar graph shows the foregone total national income, in billions of US dollars: China, 550; Russian Federation, 300; India, 225; Brazil, 50; United Kingdom, 30; Pakistan, 25; Nigeria, 10; Canada, 5; Tanzania, 3.
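To put the figures listed above in perspective, a quick tally shows the scale of the combined loss; the sketch below simply sums the rounded estimates as given (the values are approximate readings from the bar graph, not precise WHO figures).

```python
# Foregone national income, 2005-2015, in billions of US dollars,
# as read (approximately) from the bar graph above.
foregone_income = {
    "China": 550, "Russian Federation": 300, "India": 225, "Brazil": 50,
    "United Kingdom": 30, "Pakistan": 25, "Nigeria": 10, "Canada": 5,
    "Tanzania": 3,
}

total = sum(foregone_income.values())
print(f"Total across listed countries: ${total} billion")  # -> $1198 billion

# The three largest economies listed account for most of the loss:
top_three = sum(sorted(foregone_income.values(), reverse=True)[:3])
print(f"China + Russia + India share: {top_three / total:.0%}")  # -> 90%
```

Even with rounded values, the arithmetic makes the point of the figure: for just these nine countries, chronic diseases represent on the order of a trillion dollars of foregone income over a decade, concentrated heavily in large middle-income economies.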
Food deserts: Within industrialized countries, food system analysts have noted that the marketing model of the globalized food system has focused on suburban supermarkets that capture profits from middle- and high-income consumers. This model is profitable for food distribution companies but has the effect of adequately serving neither the inner-city poor nor the rural poor, who face difficulties in physically getting to distant supermarkets. Fast food and high-priced, smaller food markets with a preponderance of processed and unhealthy foods are the only food options in many poorer parts of the United States and other industrialized countries. These areas with low access to healthy, reasonably priced foods are called food deserts. You will explore them further with a mapping tool in the summative assessment for this module.
Optional Reading
For more information on the "double burden" around the world, you can read the online resource from the World Health Organization, "The Economic Impact of Chronic Diseases"
Although the modern globalized food system is highly dynamic, able to move enormous quantities of food and generate economic activity at a huge scale in response to global demand, the issues of poor diets, malnutrition, and constrained food access described here are sobering problems that human societies need to confront. From the earliest days of civilization, food has been at once (1) a fundamental human requirement and human right; (2) a source of livelihood and a business; and (3) the common property of cultures and ethnicities. The rise of a globalized food system, however, has brought new patterns into play because food has become an increasingly fiscalized commodity and experience. “Fiscalized” means that the provision of a fast food item, a food service delivery to a restaurant, or a supermarket shopping experience (vs. a traditional regional open-air market, for example) is no longer only an interaction among farmers, truckers, shopkeepers, and consuming households. Instead, the activities of production, distribution, and consumption within food systems become more and more integrated into the trade and investment patterns of the global economy. Food production, trade, and sales have been absorbed into the purview of profit-driven corporations that seek maximum value for stockholders. These stockholders are in turn citizens, organizations, and even governments that profit from the functioning of the global system, demonstrating that common citizens participate in this system as well. Food activists, policymakers, and advocates of concepts like “agriculture of the middle” (see module 10.1) have argued that this new corporate character of the food system creates incentives to ignore important values like equitable food access, just treatment of producers and workers, healthy diets, and environmental sustainability, the elements of the three "legs" of sustainability (see Module 1).
However, reform movements within the globalized food system also demonstrate that it is able to pay attention to human nutrition goals and environmental sustainability.
In fact, the food system is not a completely unfettered capitalist enterprise. Examining any food packaging shows the degree to which food is subject to regulation and oversight by government. Food safety scares and health inspections of restaurants show the close attention paid to the acute impacts (if not always the chronic impacts over time) of unhealthy food. Education efforts promoting healthy choices in diet and exercise come regularly from both government and private advocacy organizations: for example, state cooperative extension agencies, universities, and public service announcements. Efforts to label calories on restaurant menus, and the movement of food service companies and local restaurants towards healthier menu options, show growing awareness and a shift of food demand towards healthy options. And many supermarket chains are making substantial efforts to include more local and regionally produced foods and to promote healthy diets and nutrition in their communication to consumers.
In part, these changes reflect the public's growing awareness of the problems in the modern “American diet,” brought on by food activists and authors writing about the food system. On-the-ground marketing initiatives for values-based value chains, such as those promoted by local and regional food system advocates, include improving access to healthier foods like whole grains, fruits, and vegetables. For middle- and higher-income consumers with access to the abundance of foods in typical supermarkets and farmer’s markets around the world, this can incentivize better choices about well-rounded diets. In many cases, these healthier diets also rely less on meat because of its water footprint and its adverse impacts on health when eaten in excess. One essential question, however, is how these efforts to improve food choices and access can expand their reach to poorer consumers and those who live in food deserts, whether by improving geographic access, low-cost alternatives, or income opportunities. You’ll explore this question of food equity further in the summative assessment for this module, regarding food deserts and examples of organizations in your capstone regions that are promoting healthy food choices and production.
Optional Reading/Video for Capstone Project
The capstone project, which is introduced at this time in the course and requires you to begin thinking about the food system of a particular focus region, is an opportunity to think about food access and nutrition in your example region. As part of this project, you may want to see some examples of how local governments and organizations of citizens are promoting healthier diets. This may help you to propose similar strategies for food systems. One example you may look into is the website for the Toronto Food Strategy (a part of the municipal government of Toronto, Canada) and the way that their activities are coordinated with the Toronto Food Policy Council (a volunteer study/action and advocacy organization). Many states, counties, and cities in the United States have organizations and government efforts similar to these examples.
At the end of module 2, you read about alternative food systems and the relocalization of food production and distribution as one of the emerging future proposals in the history of food. These efforts, which will be revisited in a typology of current food systems in module 10, are an important source of ideas and initiatives for increasing sustainable food production methods and equitable relations between consumers and producers. Local and regional food systems and initiatives have been promoted as ways to retain economic benefits and jobs within regional contexts. Organic and sustainable production methods often form part of these movements and seek to reduce the environmental impacts of food production. Organic food is, in fact, a documented way to reduce exposure to pesticide residues in foods, which is a concern of many consumers. Food such as fruits and vegetables that is fresher when consumed, as locally produced food can be, is also likely to have a greater content of vitamins and other health-promoting components. However, others have pointed out that at a global level, the optimal freshness of produce, or a complete absence of pesticides, may benefit health in the overall food supply less than would, say, orienting diets away from processed fats or towards greater consumption of vegetables and plant-based oils. This more incremental approach suggests that it is important to target low-hanging fruit, like the availability of lower-cost vegetables and higher-fiber diets, for more of the world's population, rather than just playing up the potential benefits of foods that are local or produced with fewer or no pesticides. It is also worth pointing out that consumers are often confused about whether all organic food is locally produced (it is not) or whether local food is always organically produced (also not true).
In summary, given the much smaller size of these local and alternative food initiatives in comparison to the global food system, and the scale of the problems of malnutrition and unhealthy diets, it is important to put the potential benefits of local and/or organically produced foods in the context of the overall challenges of the food system. For example, in an urban food desert where only processed foods of low dietary quality are available, increasing the availability of vegetables, fruits, and whole grains through a number of strategies may be more viable than promoting locally or organically produced foods as a sole strategy. These multiple strategies could, in fact, rely on greater supermarket access and food streams from the globalized food system along with seasonal access to farmers markets for local produce. Home and community gardens can also complement and reinforce strategies for healthy eating. In addition, organizations of farmers using organic and other more sustainable methods have often acted as important allies in promoting healthier diets in local food system settings. As we will see throughout this course, the nutrition and sustainability outcomes emerging from the interacting parts of the food system are complex, and no single alternative always provides the best outcomes.
Required Video: Putting local alternative food systems in context
Please view this short video from the "Feeding the nine billion" project of Professor Evan Fraser at the University of Guelph. He argues for the importance of local, alternative food systems but also acknowledges the issues of scale that make global food systems an important aspect of diet and nutrition for the foreseeable future. This is not just about nutrition -- he is also reviewing many of the themes of food and sustainability we will be covering in the course and the relationships between human and natural systems as part of feeding humanity.
Video: Feeding Nine Billion Video 5: Local Food Systems by Dr. Evan Fraser (5:19)
Click for a transcript of the Feeding 9 Billion video.
Hello, my name is Evan Fraser and I work at the University of Guelph in Ontario, Canada. This video series shows that climate change, population growth, and high energy prices mean that farmers may struggle to produce enough food for all of humanity over the next generation. This video looks at how strong local food systems can help us overcome this problem. Many argue that because modern farms use a lot of energy and cause a lot of pollution, our food systems will prove unable to meet the rising demands of the global population.
These arguments go like this. Today a handful of large corporations control the vast majority of the world's food trade. In doing so, they make a huge amount of money by using farming systems that damage the environment, exploit workers, and displace traditional farmers. By contrast, food systems based on local, diverse, and small farms that use few chemical inputs like pesticides or fertilizers, are more sustainable, equitable, and democratic. This is because when producers and consumers know each other and interact, then the entire community has a say in how food is produced. This should mean that farmers receive a decent income since they will receive a higher percentage of the value of the food they produce. And they should also protect the environment better because consumers will be okay with paying more for food they know isn't covered with polluting sprays. Also, because food is produced and consumed in the same region, the amount of fossil fuels burned for transportation should go down. Goodbye processed cheese and vegetables from the southern hemisphere. And hello locally produced seasonal dishes.
Those of us in the rich parts of the world probably associate these ideas with the 100-mile diet. In the developing world, these ideas are often described as food sovereignty and are promoted by La Via Campesina, an international movement advocating that consumers and small-scale producers work together to take control of their food. Many, however, question whether this vision of alternative food systems can provide a viable food security strategy for humanity's growing population. For instance, while there is a huge disagreement among scientists, many point out that farms using alternative methods tend to have lower yields when compared with conventional farms. This means that many scientists worry that if we're going to feed a growing population using the alternative farming practices promoted by the local food movement, we’ll either need more farmland or we'll have to find ways of cutting down on our consumption and waste.
A second common criticism leveled against the promoters of alternative food systems is that whenever alternative farms try to grow bigger, they end up looking just like conventional farms. But do these criticisms mean alternative local food systems have no place in the 21st Century? I don’t think so. Even if local alternative food systems don’t feed all of us all of the time, it doesn’t mean there is no role for such systems as a component of a secure and resilient food security strategy. Local alternative systems add diversity to our farming landscapes and diversity is very important because alternative farming practices often provide the template to help improve the design of more mainstream systems. Alternative food systems, especially in poor regions of the world, provide a buffer between consumers and the volatility of the international market, while also empowering people by giving them some control over their food.
Finally, having local farms integrated into the fabric of urban life connects city dwellers with their food, making them more aware of the ecosystems on which we all depend. They provide habitat for wildlife, they trap stormwater before it damages people’s homes, and they should be beautiful. Therefore, my own reading of the debate around alternative farming systems tells me that to be sustainable, we must support local food systems that use alternative agricultural practices. We need to do this as consumers, as well as through policy that should foster local food systems by making sure farmers have access to processing facilities and markets. But we must also realize that local and alternative won’t feed us all. We’ll be relying on conventional farming systems that produce huge amounts of food in the world’s breadbaskets for the foreseeable future, albeit with high fossil fuel inputs. So what we need is a balanced approach. Our food security will be enhanced if all of us are able to draw from both global and local systems.
If you’re interested in learning more about this and other topics on feeding 9 billion, you can check out the other videos in this series. Also, my recent book “Empires of Food”, goes into these topics in detail and you can, of course, find me on Facebook and Twitter, where I regularly post news on global food security. Finally, if there’s anything in this video that you want to follow up on, head over to www.feedingninebillion.com(link is external), where I’ve posted all the scripts I’ve used in these videos, along with background references, and opened up an online discussion where you can weigh in with your own thoughts on anything you’ve just heard.
By now you may be forming the correct impression that better diet and nutrition around the world is a matter of finding a “happy medium” for consumers between food shortage on the one hand and excessive consumption of unhealthy foods on the other. That is, consumers in poorer sectors and societies eat too few fruits and vegetables, too little high-quality fat and protein, and in the worst cases even insufficient calories. Meanwhile, wealthier consumers, and even some of the urban poor, eat excessive quantities of low-quality calories and fats relative to their sedentary lifestyles. The results are serious chronic malnutrition (specifically, undernutrition and nutrient deficiencies) at one end of the diet spectrum, and chronic diseases such as heart disease and diabetes at the overconsumption end of the same spectrum. In addition, a high-meat diet, with millions of acres in crops to feed beef cattle and pigs, creates a water-consuming and polluting food sector of the economy to support these diets, as seen in previous modules. Increasingly, therefore, there has been a movement to unite concerns about the environmental impacts of food with the problematic diet and nutrition outcomes of modern high-meat, processed-food diets. The reading below from food columnist Michael Pollan addresses these principles for a happy medium in diets.
Additional Reading
Michael Pollan, Unhappy Meals New York Times Magazine, January 28, 2007. This reading starts with Pollan's by now somewhat famous recipe for a healthy diet: "Eat food. Not too much. Mostly Plants." and then expands on this principle.
One example of a "happy medium": the demitarian diet concept
To address the need for this "happy medium", a number of scientists and activists globally have enunciated the principle of the demitarian diet1, in which consumers commit to reducing their consumption of meat products, short of adopting vegan or vegetarian diets. The prefix demi- comes from the French for “half” and reflects the principle that consumers in high-income societies and sectors need to at least halve their consumption of meat to produce better health and environmental outcomes, especially regarding nitrogen pollution and greenhouse gases from fossil fuels in agriculture (more on this in the following modules). The demitarian diet and its proponents focus primarily on the environmental sustainability of first-world diets. Nevertheless, we can extend the concept to the third world to say that populations eating diets of poverty would benefit from increasing their intake of legumes, fish, meat, vegetables, and other high-quality nutrient sources. Populations at risk of undernutrition may see dramatic positive effects from even slight increases in consumption of these high-quality foods, which are often lacking in circumstances of poverty, because even small quantities of meat, eggs, and other animal products, along with legumes, fruits, and nuts, can be very dense sources of protein, iron, zinc, vitamin A, and high-quality fats. Because of this nutrient density, animal protein (e.g. poultry, fish, eggs) as well as legume crops (e.g. bean, pigeon pea), vegetables (e.g. sweet potato, collards, carrots), and fruits (e.g. papaya, mango, avocado) feature prominently in the nutrition interventions of governments and other organizations.
1 The Barsac Declaration highlights the demitarian diet concept.
4.03: Summary and Final Tasks
Summary
We hope that Module 3 of this course has given you a good grounding in the basic nutrition needs of human populations, in problematic trends in nutrition around the world such as unhealthy diets, and in the human system factors that represent major challenges for the social sustainability of food systems. In this learning, we've applied concepts from the first two modules, such as social sustainability and human versus natural systems within food systems. This module also provides a grounding in human nutrition to keep in mind as the course dives into the natural system factors (water, soils, crops, climate, agricultural ecosystems) in the second section of the course on Environmental Dynamics and Drivers. Lastly, and very importantly, this module is designed to launch your understanding of food systems and food access in the capstone region that you will analyze in your capstone project, so that you can propose sustainability strategies for that region.
Reminder - Complete all of the Module 3 tasks!
You have reached the end of Module 3. Double-check the to-do list on the Module 3 Roadmap to make sure you have completed all of the activities listed there before you begin Module 4.
Food Access and "Food Deserts" in the United States and in Your Capstone Regions
The Food Access Research Atlas is an online mapping tool created by the Economic Research Service of the U.S. Department of Agriculture. It is available at USDA Economic Research Center: The Food Access Research Atlas. The atlas presents a spatial overview of food access indicators for low-income and other census tracts, using different measures of supermarket accessibility. We focus on food access because the ability to access a full complement of foods at reasonable prices, via supermarkets and other more diverse food sales outlets, is one of the main impediments to improved diet among poor households in the United States. The atlas presents an online, zoomable map that you can use to understand food access in different districts of the United States (divided by census tracts); local and regional data can also be downloaded. For capstone regions outside the United States, such as those in Peru, we present some alternative resources below.
Instructions
First, go to Food Access Research Atlas for the description of the food atlas, including the definition of a "food desert".
Please read these first few short sections in this description regarding the food atlas and pay attention to how a food desert is defined:
• Measures of food access
• Additional indicators of food access
• Data availability and updates
• Component layers for mapping tool
Now download the worksheet for the summative assessment where you will see the questions for the assessment. These are also reproduced below to more easily understand the process of the assessment.
Go to the Food Access Research Atlas. Read the brief overview points on the page and then click on "Enter the Map". Then work to answer the questions on the worksheet. The questions are shown here but the spaces to answer are given on the worksheet.
1. What is the definition of a food desert that is used by the map (the original definition, before changes made more recently)? That is, what does the phrase “LI and LA at 1 and 10” mean? Answer in question one on the worksheet.
2. Before zooming in on the atlas (link above), make sure you have the background set to ‘topo’ (not satellite imagery) and the food desert criterion set to “LI and LA at 1 and 10 (original food desert measure)”; these should be the default settings. Now look at the food desert map of the whole U.S. Name three regions (which can include parts or all of multiple states) that seem to have a disproportionately high incidence of food deserts. Answer in question two on the worksheet.
3. Zoom in on the Philadelphia, PA metropolitan area (if needed, you can use the "find a place" search box). Roughly center Philadelphia in the view, with Woodbury Heights, NJ in the south and Elkins Park, PA in the north; it's ok that some areas are in New Jersey, since we are thinking about a metro area and not just Philadelphia proper (see guide image below). Estimate the percentage of neighborhoods in this area with food deserts and write it on the worksheet in question three.
Figure 3.2.4.: Map of food desert areas in Philadelphia, Pennsylvania. Note that the green layer (original food desert definition) is checked. Credit: Food Access Research Atlas
4. Now zoom in on the Houston, Texas area in the mapping tool (hint: it's on the Gulf Coast, west of New Orleans). Place the view with Houston centered, Spring in the north, and Friendswood in the south. Considering the Houston city area at about this diameter (Spring to Friendswood), estimate the proportion of neighborhoods with food desert status and limited food access and note it in question four on the worksheet. You should also use the slider next to the green checkbox to increase the transparency of the green layer, and then expand the section on "component layers" under the different check boxes, so that you can turn on, one by one, the layers for low access at 1 and 10 miles, and low income. Notice that each of these conditions (LI and LA at 1 and 10) is far more widespread than the green food desert layer, and it is the combination of the two that is needed to create the worst level of food access.
Figure 3.2.5.: Map of food desert areas in Houston, Texas, for use in checking the view in the summative assessment. Credit: Food Access Research Atlas
5. Which city has a higher percentage of food deserts? Can you think of some reasons why this would be the case? Answer in question five on the worksheet.
6. Now read the short excerpt from pp. 104-106 of "Re-Storing America's Food Deserts" (chapter 6 in Winne, M. (2008). Closing the food gap: Resetting the table in the land of plenty) about efforts to "re-store" food deserts in Philadelphia.[1] Note that this book chapter is an excellent source of information and case studies from other efforts to fight food deserts, and some reflections about what works and doesn't work in improving food access in the United States. Return to this assessment page to answer the rest of the questions.
7. Now pretend you are on a food access advisory panel that is supposed to help develop a policy to improve food deserts in Houston. Organize your advice in three to four main points on the problem and some steps to solve it. Your first point can be what you found from the food access atlas above, pretending that you are writing to an audience that knows little about the problem and its effects on diet and nutrition. Then suggest some solutions, based on the example of Philadelphia and the Mark Winne reading. Use the worksheet, question six.
8. Listen to the first ten minutes of the 21-minute radio clip below (this will be played for the whole class if in the hybrid class). Look for additional solution ideas for food deserts in the interview, since you will be asked about strategies beyond those from Philadelphia. "Houston Matters Radio Program: Food Deserts in Houston"
9. Did you learn anything more from this radio interview? Note additional strategies and ideas that came from the interview in question seven on the worksheet.
10. Now consider your focus region for the capstone project:
• For U.S.-based capstone regions: Look at this region in the food access atlas mapping tool, and make notes about whether there are food deserts (e.g. the rough percentage, as above), whether these are in urban or rural areas, and ideas about why these deserts might exist.
• For other global capstone regions (e.g., Peru): consult the World Food Program resources and briefly describe two or three factors limiting food access in the smallholder systems of your region. How do challenges to food access differ between the United States and these systems, where local on-farm production is so important?
• For Northern Thailand: Go to the FAO's "Food Security and Nutrition Status in Thailand 2005-2011." Read the foreword and introduction and then take a look at Chapter Three. Briefly describe two or three factors influencing food access or food security in the systems of this region. How do challenges to food access differ between the United States and these systems?
• For other regions not listed or adequately addressed on the WFP site: find one resource that speaks to food access in your region, describe its findings, and explain how you think it reached those conclusions (i.e., its methodology).
11. If you do a Google or other web search, can you find examples of efforts to address food deserts or improve food access in your capstone region? Name one and describe it in a few sentences in question 9.
[1] Chapter 6, Re-Storing America’s Food Deserts in Winne, M. (2008). Closing the food gap: Resetting the table in the land of plenty. Beacon Press.
Submitting Your Assignment
Please submit your assignment in Module 3 Summative Assessment in Canvas.
Grading Information and Rubric
Your assignment will be evaluated based on the following rubric. The maximum grade for the assignment is 36 points.
Rubric
Criteria Possible Points Awarded
Short answer questions one through five, correct use of mapper and interpretation of the map 10 points
Assessment of Houston food desert situation and sufficiently detailed suggestions drawn from reading 10 points
Additional learning and new strategies are drawn from Radio Clip "Houston Matters" 5 points
Description of food access in or near the capstone region 3 points
Description of efforts to improve food access in capstone region 3 points
Overall writing style, grammar, spelling 5 points
Introduction
Water is an essential element in growing the food we eat. Also, the growing of our food has an effect on Earth's water resources as agricultural runoff contributes to pollution and diversions for irrigation affect streamflow and deplete aquifers. In this module, we'll look at how water is a critical element in the production of food. We'll also explore some of the impacts that our food systems have on both the quality and quantity of our water resources.
Plants can't grow without water, and in this module, we explore how plants use water and where that water comes from. Have you ever considered the fact that you eat a lot of water? All of the food you eat required water to grow, process, and transport. How much water did it take to grow feed for the cattle that ultimately became the hamburger you had for lunch this week? Or to feed the chicken that laid the egg for your breakfast? Or to grow the coffee beans for your morning latte? Water is an essential component of our food system!
Goals
• Analyze the relationships between climate, availability of water resources, irrigation, and agricultural food production.
• Examine their water footprints and the virtual water embedded in agricultural food products.
• Summarize the major impacts of agriculture on both the quality and quantity of water resources.
Learning Objectives
After completing this module, students will be able to:
• Explain the relationships between evapotranspiration (ET), climate, and crop consumptive use.
• Describe the major impacts of agricultural diversions on the Colorado River.
• Relate the spatial distribution of precipitation and ET rates to where food can be grown with and without irrigation.
• Relate nutrient loading from fertilizer use to the dead zone in the Gulf of Mexico.
• Attribute major water pollutants to appropriate agricultural sources.
• Estimate their water consumption in the food they eat using the concepts of virtual water and water footprints.
Assignments
Print
Module 4 Roadmap
Detailed instructions for completing assessments are provided with each module.
Action Assignment Location
To Read
1. Materials on the course website.
2. EPA Fact Sheet on Agricultural Runoff: Protecting Water Quality from Agricultural Runoff
1. You are on the course website now.
2. Online: Protecting Water Quality from Agricultural Runoff
To Do
1. Formative Assessment: Turning Water into Food
2. Summative Assessment: Kansas Farm Case Study
3. Participate in the Discussion
4. Take Module Quiz
1. In course content: Formative Assessment; then take the formative quiz in Canvas
2. In course content: Summative Assessment; then take the summative quiz in Canvas
3. In Canvas
4. In Canvas
Questions?
If you prefer to use email:
If you have any questions, please send them through Canvas e-mail. We will check daily to respond. If your question is one that is relevant to the entire class, we may respond to the entire class rather than individually.
If you prefer to use the discussion forums:
If you have any questions, please post them to the discussion forum in Canvas. We will check that discussion forum daily to respond. While you are there, feel free to post your own responses if you, too, are able to help out a classmate.
05: Food and Water
Introduction
How much water do you eat? Water is essential for food production. In this unit, you will learn about water as an essential ingredient to grow the food that we eat, including plants and animal products. The concepts of photosynthesis, evapotranspiration, and crop consumptive water use are introduced followed by an overview of the spatial variability of precipitation and the resulting need for irrigation. The final activity will introduce you to virtual water embedded in the food you eat and your water footprint.
The short animated video that follows was produced by the United Nations' Water group for World Water Day and illustrates how much water is embedded in a few different food products. The numbers are given in liters, so it's helpful to remember that there are 3.8 liters per gallon. A liter is a little bigger than a quart. In this module, we'll look at why it takes so much water to produce food and you'll estimate how much water you eat.
Video: All You Eat (0:49)
Click for a transcript of the All You Eat Video
This video has music only, no voice. Words on the screen read:
World water day 2012
Why is water so important to our food security?
Your bread: 650 liters
Your milk: 200 liters
Your eggs: 135 liters
Your steak: 7000 liters
Your vegetables: 13 liters
Your burger: 2400 liters
ALL YOU EAT NEEDS WATER TO GROW
Agriculture accounts for 70% of our total water use
So. Now. You. Know. Why.
By: Faowater
If you do not see the video above, please go to YouTube to watch it.
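To get a feel for these numbers in US units, the video's per-item footprints can be converted to gallons using the 3.8 liters-per-gallon figure from the text. This is a quick sketch; the liter values are those shown on screen in the video:

```python
# Convert the per-item water footprints from the video (liters)
# into gallons, using the text's approximation of 3.8 liters per gallon.
LITERS_PER_GALLON = 3.8

# Values as shown on screen in the video (liters)
footprints_l = {
    "bread": 650,
    "milk": 200,
    "eggs": 135,
    "steak": 7000,
    "vegetables": 13,
    "burger": 2400,
}

footprints_gal = {food: liters / LITERS_PER_GALLON
                  for food, liters in footprints_l.items()}

for food, gal in footprints_gal.items():
    print(f"{food}: {gal:.0f} gallons")
```

A single steak's footprint works out to well over 1,800 gallons, which helps explain why agriculture dominates global water use.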
5.01: Water resources and Food Production
In order to understand why growing food uses so much water, we need to explore the process of evaporation. Evaporation is a hydrologic process that we're all quite familiar with, even if you aren't aware of it. Think about hanging clothes out to dry on the clothesline, or blow drying your hair. Both of those involve the movement of water from its liquid form to its vapor or gaseous form that we call water vapor, or in other words, both involve the evaporation of water.
In what weather conditions do your clothes dry faster? A hot, dry, windy day, or a cool, cloudy, rainy day? Why do you use a blow drier to dry your hair? Water evaporates faster if the temperature is higher, the air is dry, and if there's wind. The same is true outside in the natural environment. Evaporation rates are generally higher in hot, dry and windy climates.
The rate at which water evaporates from any surface, whether from a lake's surface or through the stomata on a plant's leaf, is influenced by climatic and weather conditions, which include the solar radiation, temperature, relative humidity and wind (and other meteorological factors). Evaporation rates are higher at higher temperatures because as temperature increases, the amount of energy necessary for evaporation decreases. In sunny, warm weather the loss of water by evaporation is greater than in cloudy and cool weather. Humidity, or water vapor content of the air, also has an effect on evaporation. The lower the relative humidity, the drier the air, and the higher the evaporation rate. The more humid the air, the closer the air is to saturation, and less evaporation can occur. Also, warm air can “hold” a higher concentration of water vapor, so you can think of there being more room for more water vapor to be stored in warmer air than in colder air. Wind moving over a water or land surface can also carry away water vapor, essentially drying the air, which leads to increased evaporation rates. So, sunny, hot, dry, windy conditions produce higher evaporation rates. We will see that the same factors - temperature, humidity, and wind - will affect how much water plants use, which contributes to how much water we use to produce our food!
Evaporation requires a lot of energy and that energy is provided by solar radiation. The maps below (Figure 4.1.1) illustrate the spatial patterns of solar radiation and of annual evaporation rates in the United States. Notice how the amount of solar radiation available for evaporation varies across the US. Solar radiation also varies with the season and weather conditions. Note that annual evaporation rates are given in inches per year. For example, Denver, Colorado in the lake evaporation map is right on the line between the 30-40 inches and 40-50 inches per year of lake evaporation, so let's say 40 inches per year. On average, if you had a swimming pool in Denver, and you never added water and it didn't rain into your pool, the water level in your pool would drop by 40 inches in a year. Explore the maps and answer the questions below.
Figure 4.1.1.: a. Mean daily solar radiation in the United States and Puerto Rico and
b. Mean annual lake evaporation in the conterminous United States, 1946-55. Data not available for Alaska, Hawaii, and Puerto Rico. Source: Data from U.S. Department of Commerce, 1968). From Hanson 1991.
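The swimming-pool example above can be turned into a volume estimate with simple geometry. The pool dimensions below are assumed for illustration; only the 40 inches per year of lake evaporation comes from the map:

```python
# Rough estimate of annual evaporation volume from a backyard pool in Denver,
# where the lake-evaporation map suggests about 40 inches per year.
GALLONS_PER_CUBIC_FOOT = 7.48

pool_length_ft = 30.0          # assumed pool size, for illustration only
pool_width_ft = 15.0           # assumed
evaporation_in_per_yr = 40.0   # from the lake-evaporation map for Denver

surface_area_sqft = pool_length_ft * pool_width_ft
evaporation_ft = evaporation_in_per_yr / 12.0      # convert inches to feet
volume_cuft = surface_area_sqft * evaporation_ft
volume_gal = volume_cuft * GALLONS_PER_CUBIC_FOOT

print(f"~{volume_gal:,.0f} gallons evaporate per year")
```

For this assumed 30-by-15-foot pool, roughly 11,000 gallons would evaporate in a year if no water were added.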
Knowledge Check (flashcards)
Consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: How are the patterns on the two maps above (Figure 4.1.1) similar? Which regions experience high solar radiation and which regions experience high evaporation rates?
Back: Generally, the spatial patterns of solar radiation and lake evaporation in the US are similar as high solar radiation drives high evaporation. The southwestern region of the US has both high solar radiation and high evaporation rates.
Card 2:
Front: How are the two maps different? What factors might contribute to the differences?
Back: Two main differences are: (1) the Rocky Mountain region, where high elevations lead to cooler temperatures, which lower evaporation rates; and (2) the southeastern US, where high humidity reduces evaporation.
Card 3:
Front: Find your location on the maps. How much solar radiation does your location receive per year, and how much water would evaporate from a lake on average per year?
Back: Find the solar radiation and lake evaporation for your location using the maps below. Note that lake evaporation in Figure 4.1.1b is given in inches per year. Is this value higher than you expected?
Why do we need so much water for agriculture? Plants use a lot of water!
Plants need water to grow! Plants are about 80-95% water and need water for multiple purposes as they grow, including photosynthesis, cooling, and transporting minerals and nutrients from the soil into the plant.
"We can grow food without fossil fuels, but we cannot grow food without water."
Dr. Bruce Bugbee, Utah State University
We can't grow plants, including fruits, vegetables, and grains, without water. Plants provide food for both us and for the animals we eat. So, we also can't grow cows, chickens or pigs without water. Water is essential to growing corn as well as cows!
Agriculture is the world's greatest consumer of our water resources. Globally about 70% of human water use is for irrigation of crops. In arid regions, irrigation can comprise more than 80% of a region's water consumption.
The movement of water from the soil into a plant's roots and through the plant is driven by an evaporative process called transpiration. Transpiration is just evaporation of water through tiny holes in a plant's leaves called stomata. Transpiration is a very important process in the growth and development of a plant.
Water is an essential input into the photosynthesis reaction (Figure 4.1.2), which converts sunlight, carbon dioxide, and water into carbohydrates that we and other animals can eat for energy. As water vapor moves out through the open stomata via transpiration, carbon dioxide (another essential component of photosynthesis) can move into the plant. Transpiration also cools the plant and creates an upward movement of water through the plant. The figure below (Figure 4.1.2) shows the photosynthesis reaction and the movement of water out of the plant's stomata via transpiration.
As water transpires or evaporates through the plant's stomata, water is pumped up from the soil through the roots and into the plant. That water carries with it minerals and nutrients from the soil that are essential for plant growth. We'll talk quite a bit more about nutrients later in this module and in future modules.
Figure 4.1.2.: Photosynthesis and transpiration Credit: Wikimedia Commons: Photosynthesis(Creative Commons CC BY-SA 3.0)
Click for a text description of the photosynthesis and transpiration image
This drawing shows the sunlight shining down on a flower. The roots of the flower are in the soil and there is water in the soil. Carbon dioxide is going into the flower. Water vapor and oxygen are being released from the flower (Transpiration). The chemical formula for photosynthesis is shown as 6 CO2 (Carbon Dioxide) + 6 H2O (Water) an arrow representing light leads to C6H12O6 (Sugar) + 6 O2 (Oxygen).
5.1.03: Evapotranspiration and Crop Water Use
How much water does a crop need?
The amount of water that a crop uses includes the water that is transpired by the plant and the water that is stored in the tissue of the plant from the process of photosynthesis. The water stored in the plant's tissue is a tiny fraction (<5%) of the total amount of water used by the plant. So, the water use of a crop is considered to be equal to the water transpired or evaporated by the plant.
Since a majority of the water used by the crop is the water that is transpired by the plant, we measure the water use of a plant or crop as the rate of evapotranspiration or ET, which is the process by which liquid water moves from the soil or plants to vapor form in the atmosphere. ET is comprised of two evaporative processes, as illustrated in figure 4.1.3 below: evaporation of water from soil and transpiration of water from plants' leaves. ET is an important part of the hydrologic cycle as it is the pathway by which water moves from the earth's surface into the atmosphere.
Remember, evaporation rates are affected by solar radiation, temperature, relative humidity, and the wind. ET, which includes evaporation from soils and transpiration from plants, is also evaporative, so the ET rate is also affected by solar radiation, temperature, relative humidity, and the wind. This tells us that the crop water use will also be affected by solar radiation, temperature, relative humidity, and the wind! More water evaporates from plants and soils in conditions of higher air temperature, low humidity, strong solar energy and strong wind speeds.
The transpiration portion of ET gets a little more complicated because the structure, age, and health of the plant, as well as other plant factors, can also affect the rate of transpiration. For example, desert plants are adapted to transpire at slower rates than plants adapted for more humid environments. Some desert plants keep their stomata closed during the day to reduce transpiration during the heat of a dry desert day. Plant adaptations to conserve moisture include wilting to reduce transpiration. Also, small leaves, silvery reflective leaves, and hairy leaves all reduce transpiration by reducing evaporation.
In summary, the amount of water that a crop needs is measured by the ET rate of a crop. The ET rate includes water that is transpired or evaporated through the plant. And, the ET rate varies depending on climatic conditions, the plant characteristics, and the soil conditions.
Figure 4.1.3.: Evapotranspiration includes evaporation from soil and transpiration from leaves. Credit: Figure drawn by Gigi Richard, adapted from Bates, R.L. & J.A.Jackson, Glossary of Geology, Second Edition, American Geology Institute, 1980
Click for a text description of the evapotranspiration image.
Diagram of Evapotranspiration. At the bottom is soil and below that, available soil water. In the soil are two plants with roots extending into the soil water. There are lines coming up from the soil representing evaporation from the soil. Lines from the plants represent transpiration from leaves. There is a line drawn around all of this, with the sun outside and humidity and temperature flowing in. An arrow from transpiration and evaporation leads to evapotranspiration.
Crop water use varies
If the ET rate of a crop determines the water use of that crop, we could expect water use of a single crop to vary in spatial patterns similar to evaporation rates. For example, if evaporation rates are very high in Arizona because of the hot, dry climate, you would expect ET rates to be higher for a given crop in that climate. ET is measured by the average depth of water that the crop uses, which is a function of the plant and of the weather conditions in the area. In cool, wet conditions, the plant will require less water, but under hot, dry conditions, the same plant will require more water.
Figure 4.1.4 shows a range of typical water use for crops in California. The graph shows how much water needs to be applied as irrigation to grow different crops. Notice how some crops, like alfalfa, almonds, pistachios, rice, and pasture grass, can require four feet or more of water application. Other crops, like grapes, beans, and grains, require only about one to two feet of water.
If we moved the plants in Figure 4.1.4 to a cooler and more humid climate, the rate of evaporation would be less and the crop water demand would decline as well. In a hot dry climate, you need to apply more water to the plant to keep it healthy and growing because more water is evaporating from both the soil and through the stomata on the plants’ leaves, so the plant is pulling more water out of the soil via its roots to replace the water transpiring from its leaves.
Figure 4.1.4.: Water use of California crops, as measured by water application depth. Note: The range shown reflects the first and third quartile of the water application depth for the period 1998-2010, and the white line within that range reflects the median application depth during that period. Credit: California Agricultural Water Use: Key Background Information, by Heather Cooley, Pacific Institute.
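To relate an application depth like those in the figure to an actual volume of water, a short conversion sketch helps. The field size below is assumed for illustration; the 4-foot depth is in the range the text cites for crops like alfalfa:

```python
# Convert an irrigation application depth (feet of water spread over a field)
# into acre-feet and gallons. One acre-foot is the volume covering one acre
# to a depth of one foot, about 325,851 gallons.
GALLONS_PER_ACRE_FOOT = 325_851

application_depth_ft = 4.0   # e.g., alfalfa, from the Figure 4.1.4 range
field_area_acres = 100.0     # assumed field size, for illustration only

acre_feet = application_depth_ft * field_area_acres
gallons = acre_feet * GALLONS_PER_ACRE_FOOT
print(f"{acre_feet:.0f} acre-feet, or about {gallons:,.0f} gallons")
```

A hundred acres of a 4-foot crop would require 400 acre-feet per season, on the order of 130 million gallons, which is why crop choice matters so much in dry regions.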
Where do plants get their water?
The source of water for most land plants is precipitation that infiltrates or soaks into the soil, but precipitation varies dramatically geographically. For example, we know that Florida gets a lot more precipitation per year than Arizona. Figure 4.1.5 below shows the average annual precipitation across the United States and around the globe. Notice on the map of the U.S. that the dark orange colors represent areas that get less than ten inches of precipitation per year. And, the darkest green to blue regions receive more than 100 inches or more than eight feet of precipitation per year!
Climate, including the temperature of a region and the amount of precipitation, plays an important role in determining what types of plants can grow in a particular area. Think about what types of plants you might see in a high water resource region versus a low water resource region. A low resource region with respect to water receives lower precipitation, so would have desert-like vegetation, whereas a higher resource region for water would have lusher native vegetation, such as the forests of the eastern US.
Regions that receive enough precipitation to grow crops without irrigation (i.e., those areas shaded green on the map below) would be considered high resource areas with respect to water. A high resource region is more likely to be a more resilient food production region. In contrast, a low resource region with respect to water would be regions on the map below in the orange shaded colors. In these regions, extra effort is needed to provide enough water for crops, such as through the development of an irrigation system.
Compare the crop water use values in Figure 4.1.6 with the average annual precipitation in Figure 4.1.5 and you'll see that there are parts of the US where there isn't enough precipitation to grow many crops. In fact, there is a rough line running down the center of the US at about the 100th meridian that separates regions that get more than about 20 inches of rain per year from regions that get less than 20 inches of rain per year. On the map in Figure 4.1.5, this line is evident between the orange colored areas and the green colored areas. Generally, west of the 100th meridian there is insufficient precipitation to grow many crops. If a crop's consumptive water use or ET is greater than the amount of precipitation, then irrigation of the crop is necessary to achieve high yields.
Figure 4.1.5.: a) Average annual precipitation in the United States 1961-1990 and b) Mean annual Global Precipitation (1961-1990) Credit a) USGS: The National Map and b)Terrascope: Rainwater Harvesting
Figure 4.1.6.: Water use of California crops, as measured by water application depth. Note: The range shown reflects the first and third quartile of the water application depth for the period 1998-2010, and the white line within that range reflects the median application depth during that period. Credit: California Agriculture Water Use: Key Background Information by Heather Cooley, Pacific Institute.
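The rule of thumb above, that irrigation is needed when a crop's consumptive use (ET) exceeds precipitation, can be sketched as a simple water balance. The ET and precipitation values below are illustrative, not figures from the text:

```python
# A simple water-balance sketch: if a crop's seasonal consumptive use (ET)
# exceeds precipitation, the difference must be supplied by irrigation.
def irrigation_requirement(crop_et_in, precipitation_in):
    """Depth of irrigation water needed (inches); zero if rain suffices."""
    return max(0.0, crop_et_in - precipitation_in)

# Illustrative values: west of the 100th meridian, ~15 inches of rain
# against a crop using 30 inches of ET leaves a 15-inch deficit.
print(irrigation_requirement(30.0, 15.0))
# East of the meridian, 40 inches of rain can fully supply the same crop.
print(irrigation_requirement(30.0, 40.0))
```

This ignores the timing of rainfall relative to the growing season, which, as the flashcards later note, also determines whether precipitation actually meets crop demand.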
How can we grow crops when there is insufficient precipitation?
In regions where precipitation is insufficient to grow crops, farmers turn to other sources of water to irrigate their crops. Irrigation is the artificial application of water to the soil to assist the growth of agricultural crops and other vegetation in dry areas and during periods of inadequate rainfall. These sources of water can be either surface water or groundwater. Surface water sources include rivers and lakes, and diversion of water from surface water sources often requires dams and networks of irrigation canals, ditches, and pipelines. These diversion structures and the resulting depletion in river flow can have significant impacts on our river systems, which will be covered in the next part of this module. Pumping of water for irrigation from aquifers also has impacts, which are also discussed in the next part of this module.
Water use for irrigation comprised about 80-90 percent of U.S. consumptive water use in 2005, with about three-quarters of the irrigated acreage being in the western-most contiguous states (from USDA Economic Research Service). For example, in the state of Colorado, irrigation comprised 89% of total water withdrawals in 2010 (Figure 4.1.7). Irrigated agriculture is also very important economically, accounting for 55 percent of the total value of crop sales in the US in 2007 (from USDA Economic Research Service). Globally only about 18 percent of cropland is irrigated, but that land produces 40 percent of the world's food and about 50 percent by value (Jones 2010).
Figure 4.1.7.: Colorado Total water withdrawals by water-use category Credit: Data from Maupin, M. A., Kenny, J.F., Hutson, S.S., Lovelace, J.K., Barber, N.L., and Linsey, K.S., 2014, Estimated use of water in the United States in 2010: U.S. Geological Survey Circular 1405, 56 pp.
Click for a text description of the Colorado Total Water withdrawals image.
A pie chart of Colorado total water withdrawals by water-use category in 2010 shows that almost 89% percent of water withdrawals were from irrigation. The remaining percentages are as follows (approximately): 8% public water supply, 1.18% industrial, 1.11% aquaculture, .70% thermoelectric power, .34% domestic fresh, .33% livestock, .26% mining.
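As a side calculation on the global figures cited above (about 18 percent of cropland irrigated, producing about 40 percent of the world's food), a rough sketch of the implied per-acre productivity ratio:

```python
# The text notes that ~18% of global cropland is irrigated but produces
# ~40% of the world's food. A quick ratio shows roughly how much more
# productive an irrigated acre is, on average, than a rainfed acre.
irrigated_land_share = 0.18
irrigated_food_share = 0.40

yield_irrigated = irrigated_food_share / irrigated_land_share        # food per unit land
yield_rainfed = (1 - irrigated_food_share) / (1 - irrigated_land_share)
ratio = yield_irrigated / yield_rainfed

print(f"An irrigated acre yields roughly {ratio:.1f}x a rainfed acre")
```

This back-of-the-envelope ratio (about 3x) is a crude average; it ignores that irrigated and rainfed land differ in crop mix, climate, and soil quality.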
Activate Your Learning
In this activity, you will employ geoscience ways of thinking and skills (spatial thinking and interpretation of the spatial data to characterize specific regions for the geographic facility).
Figure 4.1.1.: Mean annual lake evaporation in the conterminous United States, 1946-55. Credit: Data from U.S. Department of Commerce, 1968. From Hanson 1991.
Knowledge Check (flashcards)
Consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: Compare the evaporation rates in Figure 4.1.1 and the average annual precipitation in Figure 4.1.5 and discuss where you would expect irrigation demands to be highest. Consider the precipitation distribution for just the state of California from south to north. Which regions would be considered low resource regions from a water perspective? Which regions are high resource regions?
Back: The regions with highest irrigation demand are the areas with low precipitation and high evaporation, which includes the southwestern US, and most of the western states, except for the coastal areas of Washington, Oregon, and northern California. Precipitation in California varies from only 5 inches per year in the deserts of the southern part of the state up to about 70 inches in some of the mountainous and northern coastal regions of the state.
From a water perspective, regions with higher annual precipitation are high resource regions, and regions with lower annual precipitation are low resource regions. In general, the eastern US is a higher resource region from a water perspective, and the desert southwest is a very low water resource region.
Card 2:
Front: Identify the locations of the Imperial and Central Valleys in California, which are major agricultural regions. What is the average annual precipitation that falls in these regions? Which crops from Figure 4.1.6 could be grown without additional water?
Back: The precipitation rates in the Central and Imperial valleys are generally less than 15 inches per year and in some areas 5 inches and less per year. None of the crops in Figure 4.1.6 could be grown in those regions without additional water. At 15 inches of precipitation per year, some areas of the central valley might be able to grow safflower and some grains, but it depends on when the precipitation falls and if it is during the crops' growing season.
Card 3:
Front: What alternatives do farmers have when faced with insufficient precipitation?
Back: When faced with insufficient precipitation for crop growth, farmers can divert water from surface or groundwater for irrigation.
|
textbooks/eng/Biological_Engineering/Food_and_the_Future_Environment_(Karsten_and_Vanek)/02%3A_Environmental_Dynamics_and_Drivers/05%3A_Food_and_Water/5.01%3A_Water_resources_and_Food_Production/5.1.04%3A_Water_Sources_for_Crops.txt
|
The amount of water used for irrigation varies with the climate, the crop being grown, and the irrigation technique used. Just as in your garden or home landscaping, some sprinklers are more efficient than others. In many parts of the world, flood (or surface) irrigation is still used, in which water flows across a field and soaks into the soil.
Surface or flood irrigation is the least efficient form of irrigation. When a field is flooded, more water than the plants need is applied; some of it evaporates, and some seeps into the ground and percolates down to the groundwater, where it can be out of reach of the plants' roots. Another problem with flood irrigation is that the water is not always applied evenly to all plants: some plants may get too much water and others too little. On the other hand, flood irrigation tends to use the least energy of any irrigation system.
Furrow irrigation (Figure 4.1.8) is another type of surface irrigation in which water is directed through gated pipe or siphon tubes into furrows between rows of plants. When using furrow irrigation, water is lost to surface runoff, groundwater, and evaporation, and it can be challenging to get water evenly to an entire field.
Figure 4.1.8.: Furrow irrigation of an onion field in the Uncompahgre Valley, CO. Credits: Perry Cabot
More efficient methods of irrigation include drip irrigation (Figure 4.1.9), sprinklers (such as center pivots, Figure 4.1.10), and micro-spray irrigation (Figure 4.1.11). All of these methods, while more efficient, also require significant investments in equipment, pipes, infrastructure (e.g., pumps, Figure 4.1.9), and energy. In addition to the high cost, some soil types, irrigation networks, field sizes, and crops pose greater challenges to the implementation of more efficient methods of irrigation. For example, in the Grand Valley of western Colorado, the irrigation network is entirely gravity-fed, meaning that farmers can easily flood and furrow irrigate without the use of pumps. In addition, the fields are small and the soils are very clayey, all of which make using center pivots for row crops particularly challenging and expensive. But, in the same valley, the peach orchards have successfully used micro-spray and drip systems. A major advantage of more efficient irrigation, beyond reduced water consumption, is that crop yields are often higher because water can be applied directly to the plant when it is needed.
Figure 4.1.9.: Filtration and pumps for a drip irrigation system for onion and bean crops in the Uncompahgre Valley, CO. Credit: Gigi Richard
Figure 4.1.10: a) Center pivot sprinkler irrigation on an alfalfa crop in the San Luis Valley, CO and b) a hay crop for cattle feed in the Uncompahgre Valley, CO. Credit: Gigi Richard
Figure 4.1.11.: Micro-spray irrigation at a peach orchard in the Grand Valley, CO. Credit: Gigi Richard
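The trade-off between irrigation methods can be made concrete with a back-of-the-envelope calculation. The efficiency figures below are illustrative round numbers, not values from this text, since real application efficiencies vary with soil, climate, and system design:

```python
# Illustrative application efficiencies -- assumed round numbers,
# not measured values from this module.
EFFICIENCY = {
    "flood": 0.50,      # surface/flood irrigation: least efficient
    "furrow": 0.60,     # gated pipe or siphon tubes into furrows
    "sprinkler": 0.75,  # e.g., a center pivot
    "drip": 0.90,       # drip or micro-spray
}

def water_applied(crop_need_inches, method):
    """Inches of water that must be applied so the crop receives its full need."""
    return crop_need_inches / EFFICIENCY[method]

# For a crop needing 24 inches of water over the season:
for method in EFFICIENCY:
    print(f"{method:9s}: {water_applied(24, method):.1f} inches applied")
```

Under these assumptions, a flood-irrigated field must divert nearly twice as much water as a drip-irrigated one to deliver the same amount to the crop.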
Activate Your Learning
Table 4.1.1 presents data on the top 15 irrigated states in the United States. You can see how many acres of land are irrigated in each state and how much water is drawn for irrigation from both surface water and groundwater. Consider the relationship between the amount of irrigated land in a state, the type of irrigation used, and the amount of water used.
Table 4.1.1. Top 15 Irrigated States, 2010. Data from U.S. Geological Survey, 2014, Estimated Use of Water in the United States in 2010, Circular 1405, Washington, D.C., U.S. Department of the Interior. Irrigated land is reported in thousand acres by type of irrigation; withdrawals are in thousand acre-feet per year.

| State | Sprinkler | Micro-irrigation | Surface | Total irrigated land | Surface water withdrawals | % of irrigation water from surface water | Groundwater withdrawals | % of irrigation water from groundwater | Total irrigation withdrawal | % of total water withdrawals used for irrigation |
|---|---|---|---|---|---|---|---|---|---|---|
| California | 1790 | 2890 | 5670 | 10400 | 16100 | 62% | 9740 | 38% | 25840 | 61% |
| Idaho | 2420 | 4.57 | 1180 | 3600 | 11500 | 73% | 4280 | 27% | 15780 | 82% |
| Colorado | 1410 | 0.2 | 1930 | 3340 | 9440 | 87% | 1450 | 13% | 10890 | 88% |
| Arkansas | 518 | 0 | 4150 | 4670 | 1500 | 15% | 8270 | 85% | 9770 | 77% |
| Montana | 753 | 0.64 | 886 | 1640 | 7880 | 98% | 142 | 2% | 8022 | 94% |
| Texas | 3770 | 244 | 1910 | 5920 | 1940 | 25% | 5710 | 75% | 7650 | 27% |
| Nebraska | 6370 | 0.57 | 2360 | 8730 | 1520 | 24% | 4820 | 76% | 6340 | 70% |
| Oregon | 1210 | 97 | 594 | 1900 | 3750 | 64% | 2140 | 36% | 5890 | 78% |
| Arizona | 195 | 28.1 | 770 | 993 | 3220 | 63% | 1900 | 37% | 5120 | 75% |
| Wyoming | 184 | 4.12 | 892 | 1080 | 4410 | 90% | 490 | 10% | 4900 | 93% |
| Utah | 625 | 1.45 | 710 | 1340 | 3060 | 85% | 554 | 15% | 3614 | 72% |
| Washington | 1270 | 86.1 | 221 | 1580 | 2630 | 75% | 894 | 25% | 3524 | 63% |
| Kansas | 2840 | 18.1 | 217 | 3080 | 179 | 5% | 3230 | 95% | 3409 | 76% |
| Florida | 548 | 712 | 731 | 1990 | 1500 | 46% | 1770 | 54% | 3270 | 20% |
| New Mexico | 461 | 19.6 | 397 | 878 | 1640 | 54% | 1390 | 46% | 3030 | 86% |
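One quick way to explore Table 4.1.1 is to divide each state's total withdrawal by its irrigated acreage, giving acre-feet applied per acre. A minimal sketch with four rows of the table:

```python
# (total irrigated land in thousand acres, total withdrawal in
# thousand acre-feet per year), from Table 4.1.1.
states = {
    "California": (10400, 25840),
    "Idaho": (3600, 15780),
    "Colorado": (3340, 10890),
    "Nebraska": (8730, 6340),
}

# Acre-feet per acre: a rough intensity measure that mixes climate,
# crop mix, and irrigation efficiency.
for state, (acres, withdrawal) in states.items():
    print(f"{state:10s}: {withdrawal / acres:.2f} acre-feet per acre")
```

The contrast is striking: Idaho applies more than four acre-feet per irrigated acre, while Nebraska applies less than one.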
Knowledge Check (flashcards)
Based on the information in Table 4.1.1, consider how you would answer the questions on the cards below. Click "Turn" to see the correct answer on the reverse side of each card.
Card 1:
Front: Do the states that use the most water also irrigate the most land? Which states are an exception?
Back: Idaho and Colorado use the second and third most water, but irrigate considerably less land than four other states. Nebraska irrigates more than twice as much land with less than half of the water that Idaho uses and about 2/3 of the water that Colorado uses.
Card 2:
Front: Compare the data for Nebraska with Idaho. Nebraska's water withdrawals are much lower than Idaho's, for a larger acreage of land. What is the major source of Nebraska's irrigation water, surface water or groundwater? And which type of irrigation is used?
Back: Groundwater and center pivot sprinklers are common in Nebraska. In Idaho, by contrast, gravity-driven, surface-water irrigation is more common. Differences in application efficiencies account for wide variation in irrigation water withdrawals between regions.
Card 3:
Front: What are two reasons, in addition to differences in irrigation efficiencies, that a state might use more water to irrigate less land?
Back: Differences in climate (that is, in temperature and humidity) can influence evaporation rates and therefore affect crop water consumption. Also, different plants consume different quantities of water, so irrigation needs vary depending on which crops are grown.
|
textbooks/eng/Biological_Engineering/Food_and_the_Future_Environment_(Karsten_and_Vanek)/02%3A_Environmental_Dynamics_and_Drivers/05%3A_Food_and_Water/5.01%3A_Water_resources_and_Food_Production/5.1.05%3A_Irrigation_Efficiency.txt
|
How much water do you eat?
Water is essential to growing food: every bite of food we consume requires water to grow, process, and transport. The water necessary to grow, process, and transport food is often referred to as virtual water or embedded water. Virtual water is the entire amount of water required to produce all of the products we use, including our mobile phones and cotton t-shirts, but a global assessment of virtual water reveals that the majority of the water we consume is in the food we eat. If we total up all of the virtual water embedded in everything we use and eat, we can estimate our total water footprint. Water footprints provide insight into how much water is used every day in all of our activities, including producing our food. For example, Figure 4.1.12 shows the amount of water used per person around the globe associated with wheat consumption. When you eat food imported from another region, you are eating the water of that region. The apple from New Zealand, the grapes from Chile, and the lettuce from California all required water to grow, and by consuming those products you're "eating" that virtual water. The concepts of virtual water and water footprints can be powerful tools for businesses and governments to understand their water-related risks and for planning purposes (Water Footprint Network).
Figure 4.1.12.: Water footprint per capita related to consumption of wheat products in the period 1996–2005. Credit: Figure from Hoekstra, A.Y. and M.M. Mekonnen, 2012, The Water Footprint of Humanity, Proceedings of the National Academy of Sciences, vol. 109, no. 9
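A personal water footprint is just a weighted sum: each food's mass multiplied by its virtual water content. The per-kilogram values below are approximate ballpark figures in the spirit of published Water Footprint Network global averages, and the example diet is hypothetical:

```python
# Approximate virtual water content in liters per kilogram.
# These are illustrative ballpark figures, not data from this text.
VIRTUAL_WATER_L_PER_KG = {
    "beef": 15400,
    "rice": 2500,
    "bread": 1600,
    "apples": 820,
}

def daily_footprint(diet_kg):
    """Liters of virtual water 'eaten' in one day's diet."""
    return sum(VIRTUAL_WATER_L_PER_KG[food] * kg for food, kg in diet_kg.items())

# A hypothetical day's food, in kilograms:
day = {"beef": 0.15, "rice": 0.2, "bread": 0.25, "apples": 0.3}
print(f"{daily_footprint(day):.0f} liters")
```

Even this modest menu lands in the thousands of liters per day, dominated by the beef.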
Check Your Understanding
Scroll through this infographic explaining virtual water and then answer the questions below.
Knowledge Check (MC)
1) How many liters of water do you "eat" every day?
• 6.4 quart
• 3.2 gallons
• 3,496 liters
2) If there are 3.8 liters per gallon, how many 20-gallon aquariums is that?
• 3,496 liters is 920 gallons, which is 46 20-gallon aquariums of water that each of us "eats" per day.
• 3,496 liters is 1840 gallons, which is 92 20-gallon aquariums of water that each of us "eats" per day.
• 3,496 liters is 460 gallons, which is 23 20-gallon aquariums of water that each of us "eats" per day.
3) What percentage of the total water consumed on average per person per day is associated with the production of the food we consume?
• 12% of the water we use is in our food
• 62% of the water we use is in our food
• 92% of the water we use is in our food
4) How big would a wall of one-liter water bottles equivalent to 15,400 liters be? Convert the dimensions of the wall to feet.
• The wall of one-liter water bottles would be 2 meters by 10 meters, or about 6 feet by 33 feet.
• The wall of one-liter water bottles would be 15 meters by 60 meters, or about 49 feet by 197 feet.
• The wall of one-liter water bottles would be 8 meters by 40 meters, or about 26 feet by 131 feet.
5) Based on the graph of the amount of water needed to produce different food products, what sort of diet would you conclude uses the least or most water?
• The graph shows that in general plants require less water to produce per kilogram than animal products, except for sugar.
• The graph shows that in general plants require less water to produce per kilogram than animal products, except for coffee.
• The graph shows that in general plants require less water to produce per kilogram than animal products, except for rice.
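The unit conversions behind questions 1 and 2 can be checked directly (the 3,496-liter figure comes from the infographic; 3.8 liters per gallon is the rounded factor the question supplies):

```python
LITERS_PER_GALLON = 3.8   # rounded conversion factor given in question 2
AQUARIUM_GALLONS = 20

liters = 3496             # daily virtual water per person, from the infographic
gallons = liters / LITERS_PER_GALLON
aquariums = gallons / AQUARIUM_GALLONS
print(f"{gallons:.0f} gallons, or {aquariums:.0f} twenty-gallon aquariums")
```

That works out to 920 gallons, or 46 twenty-gallon aquariums, per person per day.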
|
textbooks/eng/Biological_Engineering/Food_and_the_Future_Environment_(Karsten_and_Vanek)/02%3A_Environmental_Dynamics_and_Drivers/05%3A_Food_and_Water/5.01%3A_Water_resources_and_Food_Production/5.1.06%3A_Virtual_Water.txt
|
Instructions
Please download the worksheet below for detailed instructions.
You will perform three activities in this assessment:
1. Watch the video below, Turning water into food, and answer the questions on the worksheet as you watch the video
2. Visit the water footprint calculator website, compare how your water footprint changes with varying levels of meat consumption, and answer the questions on the worksheet. This portion of the assessment will be included in the weekly discussion and not in the assessment quiz.
3. Perform a comparison of the virtual water embedded in different food products and answer questions on the worksheet.
Video: Turning water into food, Bruce Bugbee | TEDxUSU (16:32)
Click for a transcript of the turning water into food video.
This is my globe. I've had this globe for over thirty years to analyze the three-dimensional relationships among the continents and the water and the nations. Political boundaries have changed over the decades, but the fundamental relationships haven't changed. Like many globes like this, my globe has raised mountains. And I always thought those mountains were diminished on my globe so that it would make it easier to manufacture. Till one day, I looked up the height of Mount Everest and the diameter of the earth, and I got out my micrometers to check how much these were diminished. And to my amazement, they were embellished. They're considerably embellished. It was a very disturbing day for me. If the mountains are embellished, the oceans are similarly thin. And it turns out, if you take all the water on our blue planet, roll it up into a sphere, it comes out to the size of a ping-pong ball, a ping-pong ball!
But it doesn't stop there. Even though this is small, ninety-seven and a half percent of the water on our planet is saltwater. We can't drink it, we can't irrigate our crops with it. The two-and-a-half percent that's freshwater is the size of this small blue marble. Now, if I took this marble, I should put it up here on Greenland because 99% of our freshwater is frozen in glaciers, mostly Greenland and Antarctica. The 1% that's left is the size of a mustard seed. This mustard seed recycles and recycles and sustains life on the planet. We use about a gallon of water every day in the water we drink and in the food we eat. We use about another 20 gallons a day in washing things - washing our clothes and domestic use. But we use several hundred gallons of water every day, indirectly, in the food we eat. That amount dwarfs all the other uses.
In the United States, we dedicate 70% of our water resources to agriculture. I have spent much of my professional life studying how to improve water efficiency in agriculture and I'm joined in this effort by hundreds of colleagues around the world. The challenge is enormous. We can grow food without fossil fuels, but we cannot grow food without water. We think about our carbon footprint. We ought to be thinking about our water footprint, and even more importantly, we ought to be thinking about our global food print. The type of food that we eat has a bigger impact on the environment than the cars we drive. Eating a hamburger is equivalent in water use to taking an 80-minute shower.
To understand where water goes, it's useful to review the Earth's water cycle. As you can see from the globe, 70% of the planet surface is oceans, 30% is land. So the water cycle starts with one fundamental thing. The Sun shines on the oceans and water evaporates. This is an amazing process. All the salts are left behind. It’s distilled water coming out of the ocean. Anybody that has boiled a pot of water on their stove to dryness knows it takes an enormous amount of energy to evaporate water. The Sun does this every day for free, no fossil fuels, no fancy apparatus. Here's an amazing fact, more Sun shines on the earth in an hour than all of the people use in a year. So this water vapor from the ocean blows over to the land, falls on the land as rain, and soaks into the ground. It eventually runs back to the oceans in the rivers. We have a few thousand years of experience in ways to reuse this water. We built dams, we drill wells, we pump the water back up to the surface. It's still liquid water. The microbes in the soil have purified it. We drill more wells, we use it again. Eventually, it slips out of our grasp and runs back to the ocean. This is all liquid water. There’s two fates, the second one is shown here.
Now let's plant some seeds. The roots grow from the seeds and the water that used to go into the ocean is short cycled back to the roots of the plants. The Sun is hot. The same energy that falls on the ocean falls on the plant leaves. To stay cool and hydrated, they evaporate water. It goes into the air, back to the ocean, falls as rain, and becomes saltwater again. We have far less control over this water vapor than we do over the liquid water that we can reuse. Without a continuous supply of water vapor, the plants dehydrate and food production stops. We irrigate to keep the plants hydrated. We have developed an amazing array of instruments to precisely tell when and how much to irrigate crops. They get just what they need, no more, no less. In some older systems, 50% of the water evaporated from the soil surface and didn't get into the plants; it went back to the ocean. In some of our modern systems, we now have subsurface drip irrigation that can deliver 90% of the water right to the plants.
Every drop is precious. We call these efforts, more crop per drop. Even with our best efforts, we can't keep up, we can't grow the food we need to feed a hungry planet. So we access aquifers deep in the ground. These aquifers are called fossil aquifers because they formed a long time ago, they're difficult to recharge. We drill deep wells and pump that water up to the surface and irrigate the plants. These aquifers are being depleted far more rapidly than our fossil fuel reserves.
So how much crop can we get per drop? Let's take a look at these wheat plants over here. Wheat and rice are the biggest crops for direct human consumption on the planet. These two crops provide the vast majority of our calories. This wheat was developed here at Utah State University. My colleagues and I hybridized tall high yielding wheat with very short wheat to get a short high yielding wheat. We did this with NASA funding because we wanted to work with NASA to develop a life support system for space, so that we could grow our own food in space independent of the planet. We've grown this wheat many times on the International Space Station and some of the astronauts turned out to be amazing photographers. This is a picture of this wheat at harvest on the International Space Station. That picture in the background is not a photo-shopped image of my globe. We grow this wheat hydroponically and if you haven't ever seen hydroponic wheat, here it is, the roots absorbing the water, going up to the tops of the plant. And if you're a student in the lab, you know how much water this wheat takes every day. We developed this for a fast rate of development. This wheat is only three weeks old from transplanting to this tub. It'll be ready to harvest in five weeks. That's almost twice as fast as wheat in the field. Surprisingly, hydroponic wheat doesn't require any more water than field wheat. In fact, it's often less because there's no evaporation from the soil surface, there are no leaks, all the water goes through the plant. Even with perfect efficiency of every input, it still takes a hundred gallons of water to grow enough wheat to make a loaf of bread. A hundred gallons of water.
To emphasize this point, my students built this simulated hundred-gallon tank of water. If we put a faucet on this and dripped it from this tank onto a plot big enough to grow that wheat, it would be empty about the time the wheat was ready to harvest. This greatly exceeds all the other household uses, even when it's perfect. So why is this water use so enormous for plants? Plant physiology is a lot like human physiology. So let's consider breathing. We exhale water vapor to get oxygen. These plants lose water in order to get carbon dioxide. Every square millimeter of the surface of these plants is covered with tiny pores called stomata. The word stomata comes from the Greek word for mouth, so these stomata open to let carbon dioxide in, and they automatically lose water vapor. There's a hundred times more water vapor inside a leaf than there is carbon dioxide in the air and that's why the water use requirement is so enormous. Water has to come out to let the CO2 in. Saving water by closing the stomates is a lot like asking people to save water by stopping breathing. We can't do it. Humans have it easy. There is six hundred times more oxygen in the air than there is carbon dioxide, so that means plants need 600 times more water to grow.
For all the interest in global warming, carbon dioxide is a trace gas, point zero four percent. If we took the air molecules in this auditorium and made them fluorescent, we'd have a hard time finding the carbon dioxide molecules. There are only four carbon dioxide molecules for every 10,000 air molecules. It's one of the great wonders of the world that plants can find those carbon dioxide molecules and make our food, make high-energy food.
To better understand the effect of diet on the environment, let's analyze the land area required to grow the food for one person. So we're joined with this scientist, who has an advanced degree from the Playmobil Institute. And because of our studies with NASA, we've many times analyzed how much land he needs. This green felt represents the land area he needs to grow his own food. It's a small amount of land. If everything's perfect, he grows the food 365 days a year. He can sustain himself on this amount of land. Now we're going to send him into space. After all, we're trying to make a life support system for space. He's got to have some shelter, so we give him a house. But the house covers some of the land. Every photon is precious, so he's got to have a green roof on his house. Now he's ready, growing his own food. But he's going into the vacuum of space. So we're gonna give him a transparent dome, seal it up, recycle every drop of the water, grow the plants at just the right rate so the carbon dioxide and oxygen are in perfect balance, call up Morton Thiokol, put a big rocket under this, off it goes into space. He can go anywhere in the solar system and be self-sustaining, as long as he doesn't go too far away from the Sun. What if he gets up one morning and says, "If you please, I would like an egg for breakfast"? He can't do it. We need additional land area to feed this chicken, to give him the egg. What if he says, "I'd like a glass of milk for lunch"? We need even more land area to feed the cow. If he eats the equivalent of 25 percent of his calories from animal products, which is the national average, it more than doubles the land area.
We'll get up each day, my colleagues in animal science, my colleagues in plant science, and work to make water use efficiency in agriculture better, but small changes in our diets can have a much bigger effect than years of our research. Please think about your global food print the next time you think about putting food in the garbage disposal. Please think about that mustard seed and those fossil aquifers, and consider eating less meat. This is the diet for a small planet. Thank you.
Submitting Your Assignment
Please submit your assignment in Module 4 Formative Assessment in Canvas.
|
textbooks/eng/Biological_Engineering/Food_and_the_Future_Environment_(Karsten_and_Vanek)/02%3A_Environmental_Dynamics_and_Drivers/05%3A_Food_and_Water/5.01%3A_Water_resources_and_Food_Production/5.1.07%3A_Formative_Assessment_-_Turning_W.txt
|
Introduction
Agricultural food production impacts water resources by depleting quantities of both surface water and groundwater and by polluting surface and groundwater with pesticides and fertilizers. Module 4.2 includes a brief introduction to impacts of agriculture on water resources, followed by two case studies: Colorado River (flow depletions and salinity) and Mississippi River (nutrients, eutrophication and the hypoxic zone in the Gulf of Mexico).
In completing this module, you will be able to:
• Attribute major water pollutants to appropriate agricultural sources
• Summarize the major impacts of agriculture on water resources
• Relate nutrient loading from fertilizer use to the dead zone in the Gulf of Mexico
Agricultural production has significant impacts on both the quality and quantity of surface and groundwater resources around the globe. In this unit, we'll look at how agricultural activities can contribute to water pollution, and we'll also consider how the diversion of irrigation water from both surface and groundwater resources creates significant impacts on those water resources and the ecosystems they sustain. Some of the critical issues connecting agricultural activities with water resource quality and quantity are:
• Agricultural groundwater removal generally exceeds the natural recharge rate, and groundwater overpumping causes irreversible land settling and loss of aquifer storage capacity.
• Surface water diversion contributes to downstream ecosystem deterioration.
• Agricultural non-point source pollution is an important contributor to water quality degradation.
Impacts of Water Withdrawals
As discussed in the first part of Module 4, in regions where precipitation is insufficient to grow crops, irrigation water is drawn from lakes, rivers, and aquifers to supplement the insufficient or unreliable precipitation. Water diversions for irrigation can have impacts on both surface and groundwater resources.
We saw earlier in this module that the western US receives less precipitation than the eastern US. What does that mean for irrigation needs? The western US withdraws more water from lakes, rivers, and groundwater for irrigation than the eastern US (Figure 4.2.1). These water withdrawals are not without impacts, as we will see throughout the rest of this module. Figure 4.2.1 maps the water withdrawal data we explored in the previous unit. Do you remember the three states that diverted the most water for irrigation in the US? California, Idaho, and Colorado. But Nebraska irrigated more acres than both Idaho and Colorado. In the map in Figure 4.2.1, you can clearly see the states that use the most irrigation water. Next, we'll look at some of the impacts of surface and groundwater withdrawals.
Figure 4.2.1.: Irrigation water withdrawals, by State, 2005. The majority of withdrawals (85 percent) and irrigated acres (74 percent) were in the 17 conterminous Western States. The 17 Western States are located in areas where average annual precipitation typically is less than 20 inches and is insufficient to support crops without supplemental water. Credit: The USGS Water Science School
5.02: Impacts of Food Production on Water Resources
The storage and redistribution of water by dams, diversions, and canals has been a key element in the development of civilizations. Large-scale water control systems, such as on the Nile in Egypt or the Colorado River in the southwestern U.S. make it possible to support large cities and millions of hectares of agricultural land. As the population grows and water diversions increase, serious questions are being raised about the environmental costs of these large water management systems.
Agricultural water withdrawals are placing significant pressure on water resources in water-scarce regions around the globe (Figure 4.2.2). If more than 20 percent of a region's renewable water resources are withdrawn, the region is in a state of water scarcity and the water resources of the region are under substantial pressure. If the withdrawal rises to 40 percent or more, then the situation is considered critical and evidence of stress on the functions of ecosystems becomes apparent. More than 40% of the world's rural population lives in river basins that are physically water scarce, and some regions, such as parts of the Middle East, Northern Africa, and Central Asia, are already withdrawing water in excess of critical thresholds (FAO 2011).
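The FAO-style thresholds quoted above (more than 20 percent withdrawn means substantial pressure, 40 percent or more means critical) can be expressed as a simple classifier; the basin figures in the example are hypothetical:

```python
def water_stress(withdrawn, renewable):
    """Classify a region by its withdrawal-to-renewable-resources ratio,
    using the 20% and 40% thresholds described in the text."""
    ratio = withdrawn / renewable
    if ratio >= 0.40:
        return "critical"
    if ratio > 0.20:
        return "water scarce"
    return "not water scarce"

# Hypothetical basins (withdrawals and renewable resources in the same units):
for name, withdrawn, renewable in [("Basin A", 5, 100), ("Basin B", 30, 100), ("Basin C", 55, 100)]:
    print(name, water_stress(withdrawn, renewable))
```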
In order to divert water from rivers, diversion structures or dams are usually constructed and create both positive and negative effects on the diverted river system. Dams can provide a multitude of benefits beyond their contribution to storage and diversion for agricultural uses. Dams can contribute to flood control, produce hydroelectric power, and create recreational opportunities on reservoirs. Negative impacts of dams and agricultural diversions include:
• Habitat fragmentation – blocks fish passage
• Reduction in streamflow downstream, which then results in changes in sediment transport, and in floodplain flooding
• Changes in water temperature downstream from a dam
• Evaporation losses from reservoirs in hot, dry climates
• Dislocation of people
• Sedimentation behind dams fills reservoirs with sediment and reduces their useful lifespan
Figure 4.2.2.: Global Distribution of Physical Water Scarcity by Major River Basin (FAO 2011) Credit: © FAO 2011 The State of the World's Land and Water Resources for Food and Agriculture (SOLAW)
Click for a text description of the global distribution of physical water scarcity image
This world map shows that physical water scarcity is especially high in the southwestern United States and in large areas of Africa, the Middle East, and South Asia.
|
textbooks/eng/Biological_Engineering/Food_and_the_Future_Environment_(Karsten_and_Vanek)/02%3A_Environmental_Dynamics_and_Drivers/05%3A_Food_and_Water/5.02%3A_Impacts_of_Food_Production_on_Water_Resources/5.2.01%3A_Impacts_of_Surface_Wat.txt
|