Shallow and deep groundwater can be a major environmental obstacle for any geophysical surveying technique, especially those based on radio waves. Ground penetrating radar (GPR) is a mature technology with applications in many areas; see Daniels (2004) for an overview. Almost all applications are restricted to imaging the subsurface to a rather shallow depth, because large signal losses occur when propagating through materials with free ions. These conductive losses are determined by the soil conductivity. However, in environments where these losses are low, the depth penetration of GPR increases dramatically, allowing imaging to depths of several kilometres, for example through the polar ice on Mars (Jordan et al., 2009; Orosei et al., 2018) and Antarctica (Berthelier et al., 2005).
To extend the depth range of conventional GPR surveys, a radar-based imaging technology has been developed that measures atomic dielectric resonance (ADR) in the subsurface. ADR technology measures subsurface (i) dielectric permittivity; (ii) spectral content (energy, frequency and phase); and (iii) material resonance, from ground level without physically boring the ground. ADR is a patented investigative technique (Stove, 2005) involving the measurement and interpretation of the resonant energy responses of natural or synthetic materials to pulsed electromagnetic radio waves, in materials that permit the applied energy to pass through them. The technology can be trained on known geology to build up a reference database, which is then used to classify data collected at new locations.
ADR measurements can be presented in outputs resembling: (i) stratigraphy (like seismic imagery); (ii) information on rock characteristics (like well logs); and (iii) rock petrography (like cores).
Fundamentals of ADR Technology
ADR technology is based on the principle that different materials reflect and absorb electromagnetic radiation (radio waves) at specific frequencies and energy levels. The ADR geophysical system transmits a pulse of electromagnetic energy containing a multispectral wave packet that resonates and interacts with the subsurface materials. The reflections from the subsurface are recorded as a time-domain trace and provide information about the location and composition of the materials encountered (Stove and van den Doel, 2015).
ADR technology finds applications in a variety of different fields, including mineral, oil, and gas exploration, as well as water discovery and geotechnical purposes. The field survey equipment (Figure 1) consists of one transmitting antenna and one receiving antenna, the antennas gimbal platform, the receiver control unit, the transmitter control unit, and the data acquisition computer. Data acquisition is relatively quick as the ADR Scanner and equipment are small and mobile.
The ADR signal generator produces a broadband pulse that is fed to the transmitting antenna. The transmitting antenna conditions the signal into the desired wave packet using dielectric lenses and mirrors so that the transmitter and receiver appear to have much longer chambers than their actual physical size (Stove et al., 2012). Once the signal has been sent to the transmitting antenna, a signal is sent to the receiving control unit to synchronise collection of the subsurface reflection data which is detected by the receiving antenna from different subsurface rock layers and mineral structures. The receiving control unit collects the signal from the receiving antenna and converts it into a form that can be read and stored on the data logging computer (Stove and van den Doel, 2015).
There are three different types of scans that are typically performed in the field: (i) Profile Scan (P-Scan); (ii) Wide Angle Reflection and Refraction scan (WARR); and (iii) Stare scan (Stare).
For P-Scans, the two antennas are moved parallel to one another along the full scan line length. P-Scans are profile scans of the subsurface collected by the ADR system from ground level. This scan produces a two-dimensional cross-section image of the subsurface that supports structural and stratigraphic mapping.
For WARR scans, the receiver antenna is left permanently attached to a tripod platform, while the transmitting antenna is moved away from the stationary receiver along a full scan line length. WARRs are used for triangulating depths using techniques such as normal moveout and velocity spectrum analysis, similar to those employed in seismic data analysis.
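The depth triangulation mentioned above rests on the normal moveout relation: for a horizontal reflector, two-way travel time grows with transmitter-receiver offset as t(x) = sqrt(t0^2 + (x/v)^2). A minimal sketch of the idea follows; the velocity and depth values are illustrative assumptions, not parameters of an actual ADR survey.

```python
import math

# Normal moveout (NMO) for a horizontal reflector at depth d in a
# medium with wave speed v: t(x) = sqrt(t0^2 + (x/v)^2), t0 = 2*d/v.
# Illustrative values only:
v = 1.0e8    # m/s, roughly c/3 for a relative permittivity near 9
d = 100.0    # reflector depth, m
t0 = 2 * d / v

offsets = [0.0, 20.0, 40.0, 60.0]                       # offsets x, m
times = [math.sqrt(t0**2 + (x / v)**2) for x in offsets]

# Inverting the relation: a single non-zero-offset pick, together with
# the zero-offset time t0, recovers the velocity and hence the depth.
x1, t1 = offsets[2], times[2]
v_est = x1 / math.sqrt(t1**2 - t0**2)
d_est = v_est * t0 / 2
print(f"estimated v = {v_est:.2e} m/s, depth = {d_est:.1f} m")
```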
Stares are conducted by having the two antennas at a fixed point (Figure 2). The distance between the transmitter and receiver sensors varies depending on the depth of penetration sought; generally, deeper penetration is achieved by widening the sensor separation. A Stare scan involves a large number of wave packets, typically 100 000 or more, to increase the signal-to-noise ratio, giving high resolution and precision regarding the composition of the section it is penetrating.
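The statistical benefit of stacking that many wave packets is the standard suppression of uncorrelated noise: averaging N repeated traces leaves the coherent reflection unchanged while random noise shrinks by the square root of N. A minimal simulation of this effect (the signal and noise amplitudes are illustrative assumptions):

```python
import random
import statistics

random.seed(0)
SIGNAL = 1.0        # amplitude of the coherent reflection (assumed)
NOISE_SIGMA = 5.0   # per-shot noise level, much larger than the signal

def stacked_amplitude(n):
    """Average n noisy repeats of the same sample of the trace."""
    return statistics.fmean(
        SIGNAL + random.gauss(0.0, NOISE_SIGMA) for _ in range(n))

for n in (1, 100, 100_000):
    print(f"N={n:>7}: stacked amplitude {stacked_amplitude(n):+.3f} "
          f"(residual noise ~ {NOISE_SIGMA / n**0.5:.3f})")
```

With a single shot the reflection is buried in noise; at N = 100 000 the residual noise falls to about 0.016, and the unit-amplitude reflection stands out clearly.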
The transmitted ADR wave packet (see Figure 3 for an example) contains several frequency components in the range of 1-100 MHz, where the low frequencies achieve deep penetration and the higher frequencies enhance vertical resolution (van den Doel et al., 2014). When rocks of different compositions, dielectric properties, and petrographic textures are exposed to ADR wave packets, a range of energy and frequency responses is detectable by suitable receivers. The recorded data describe how rocks and minerals, including hydrocarbons, interact with the electromagnetic radiation as it passes through them, and suggest their composition. The technology measures the dielectric permittivity of the subsurface as well as characterizing the nature of the rock types based on analysis of both the spectroscopic and resonant energy responses.
ADR is a time-domain electromagnetic (TDEM) method, but it differs significantly from methods such as induced polarization and resistivity surveying. Those methods employ much lower frequencies and do not involve propagating waves; they rely on measuring currents and polarizations induced by (relatively) slowly varying electric or magnetic fields. ADR, on the other hand, uses propagating wave packets and derives subsurface properties from the changes in spectral content and energy measured in the reflections. As such, the data analysis resembles seismic methods more than the usual TDEM inversion techniques. However, ADR waves are electromagnetic and are governed by different physics than seismic pressure waves.
Ray tracing and finite-difference time-domain (FDTD) simulation software have been developed for numerical simulation of the ADR wave propagation through various subsurface materials (van den Doel and Stove, 2016). Simulated scans are used for preliminary feasibility studies and for experimental design of specific field scans using ground models based on known geology and/or borehole data, if available.
Experiments have been performed to quantify the depth penetration possible with the system, and to explain the results theoretically with a propagation model based on Maxwell’s equations coupled to a ground model.
Theoretical modelling and empirical field measurements show that the high-frequency components of the pulses transmitted into the ground penetrate very little, but the low-frequency component experiences very low losses. Results are analyzed to estimate the skin depth and interpreted in terms of a constitutive model incorporating Maxwell's equations with conductivity and polarization losses. In a separate experiment, an ADR field system successfully detected the reflection of the radar pulse from a body of water through 350 m of rock. A numerical simulation of the model confirmed that these results do not contradict theoretical expectations.
The model developed for electromagnetic wave propagation through the subsurface by van den Doel et al. (2014) results in the following system of partial differential equations:
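A plausible form of this system, assuming the standard single-pole Debye relaxation model implied by the variable list below (the exact coupling coefficients are an assumption of this reconstruction, and μ0 denotes the permeability of free space):

$$\nabla^2 \mathbf{E} - \mu_0 \varepsilon_0 \varepsilon_r \frac{\partial^2 \mathbf{E}}{\partial t^2} = \mu_0 \sigma \frac{\partial \mathbf{E}}{\partial t} + \mu_0 \frac{\partial^2 \mathbf{P}}{\partial t^2} \qquad (1)$$

$$\tau \frac{\partial \mathbf{P}}{\partial t} + \mathbf{P} = \varepsilon_0\,(\varepsilon_r - 1)\,\mathbf{E} \qquad (2)$$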
where ε0 is the permittivity of free space, P is the polarization, E is the electric field, σ is the conductivity, τ is the Debye relaxation time, and εr is the relative permittivity.
By using equations 1 and 2 with a correct application of the skin depth concept, it is possible to model (and measure in practice) deeper penetration of radio waves into the ground. For a given frequency in a uniform material, losses are proportional to distance travelled, whatever the attenuation mechanism: the signal amplitude decays exponentially as exp(-d/sd), where d is the distance through the medium and sd is the skin depth in metres, the distance over which the signal falls off by a factor of 1/e. Skin depth generally decreases with frequency, and penetration depth is proportional to skin depth; both depend strongly on conductivity, and in-situ conductivity values of the subsurface are generally unknown. In the experiments reported by van den Doel et al. (2014), the ADR-derived value of limestone conductivity was 0.075 mS/m. The value measured was lower than generally assumed, but well within the range of possible values (Jackson, 1998) (Figure 4); it would be desirable to confirm this value with independent measurements. Values for limestone conductivity reported in the literature vary widely: for example, Telford et al. (1990) quote a range of 10^-7 to 2×10^-2 S/m. The actual value depends on complicated and not fully understood details of how pore water is embedded in the rock, and which solutes are present in the solution; see, for example, Revil (2013). Furthermore, Schön (2004) quotes resistivities from 10^2 Ωm (wet) to 10^5 Ωm (dry) with permittivity values of εr = 11 (wet) to εr = 6 (dry), suggesting the limestone studied by van den Doel et al. (2014) had a rather low water content.
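To make these numbers concrete, the sketch below evaluates the skin depth implied by the reported limestone conductivity; the relative permittivity and operating frequency used here are illustrative assumptions, not values taken from the study.

```python
import math

EPS0 = 8.854e-12          # permittivity of free space, F/m
MU0 = 4e-7 * math.pi      # permeability of free space, H/m

def skin_depth(freq_hz, sigma, eps_r):
    """Skin depth (m) from the exact attenuation constant of a plane
    wave in a lossy dielectric with conductivity sigma (S/m)."""
    omega = 2 * math.pi * freq_hz
    eps = eps_r * EPS0
    loss_tangent = sigma / (omega * eps)
    alpha = omega * math.sqrt(MU0 * eps / 2) * math.sqrt(
        math.sqrt(1 + loss_tangent**2) - 1)
    return 1.0 / alpha

# Conductivity reported for limestone in the text; eps_r = 8 and
# f = 2 MHz are assumptions for illustration.
print(f"limestone: {skin_depth(2e6, 0.075e-3, 8.0):.0f} m")  # ~200 m
print(f"wet soil:  {skin_depth(2e6, 10e-3, 8.0):.1f} m")     # ~3.7 m
```

At the reported conductivity, the low-frequency component would lose only a factor of e over roughly 200 m, consistent with the 350 m water reflection described above; a typical wet soil, three orders of magnitude more conductive, would extinguish the same signal within metres.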
Practical application of ADR for groundwater detection
In November 2012, Adrok Limited performed a practical field experiment to try to identify the presence of known water aquifers beneath a site operated by Scottish Water in Terregles, Dumfries, Scotland (Figure 5 and Figure 6).
Scottish Water provided downhole logs drilled directly beneath the survey site, specifically detailing subsurface water flow and saturation.
The Dumfries Basin aquifer is one of the most important groundwater resources in Scotland. The Permian fill of the Dumfries Basin overlies steeply dipping Silurian mudstone and sandstone units and is bounded to the west and northeast by en echelon faulting (Robins and Ball, 2006). The aquifers are contained within the Permian Doweel Breccia and Locharbriggs Sandstone formations, which are present beneath much of the Dumfries Basin and are the main aquifers for the region.
The Doweel formation has low intergranular permeability and porosity but has high secondary permeability in the form of fractures. Consequently, horizontal permeability is more prevalent than vertical permeability and therefore water is transported through naturally occurring horizontal fractures and inter-layer breaks between sandstone and breccia units that exist throughout the basin, providing a high flow rate for water movement. This can be seen in the downhole logs provided by Scottish Water on the right-hand side of Figure 7.
According to the geological information provided by Scottish Water, the two main aquifers are located in multiple fractures between 58-68 m and 98-110 m. This means that the aquifer is contained within a less-permeable geological unit, confining the water to thin fracture networks. The stark difference between the water-rich fracture network and the enveloping non-permeable breccia provides a useful contrast for analysis of the ADR scanner results.
Adrok acquired three Stare scans (“virtual boreholes”) at Terregles: TS1, TS2 and TS3, positioned along a line and separated by 60 metres. The depths were obtained from the WARR scans at each Stare site.
Results from TS3 demonstrate that high Dielectrics and peaks in the Weighted Mean Frequency (Figure 7) could both be used to identify the location of the aquifers directly: both indicators matched, or closely matched, the fracture zones, and therefore the aquifers, at 58-68 m and 98-110 m, with an offset of about 1-2 m.
Figure 8 clearly demonstrates that the aquifers at 58-68 m and 98-110 m are being identified by the ADR scanner at the same depths, separately across the three ADR Stare measurements: TS1, TS2 and TS3.
The Terregles aquifer study demonstrates that ADR technology can identify aquifers using a combination of Dielectrics and Weighted Mean Frequency (Figure 7). This cheap, quick, reliable, and non-destructive technique can be used to identify the presence of water in the field. It could greatly reduce the drilling costs associated with water aquifer exploration and help map fracture pathways in the subsurface in great detail. Ultimately, a four-dimensional (4D) ADR sensing system could be developed with the capability to continually monitor water levels, and it may also be able to show flow rates within aquifers. This would have applications for early-warning flood defences and for companies wishing to explore for and monitor water aquifers.
Theoretical results, as well as those from field experiments and surveys, suggest that the exploration depth of pulsed radar can be increased significantly by including a low-frequency component. The data suggest the high losses of ground penetrating radar (GPR) in the 10-1000 MHz range are due to polarization effects rather than conductivity losses (van den Doel et al., 2014). Measurements of limestone conductivity indicate that the low-frequency component of an ADR wave packet could achieve a much greater skin depth than is possible with conventional GPR. If these results hold for other rock types, deeply penetrating radar scanning can potentially become an attractive geophysical exploration technique in environments where there is no highly conductive near-surface layer, or where this layer is thin enough to penetrate.
The ability to utilise ADR technologies to determine subsurface lithology and fluids before drilling would generate large cost savings, as well as environmental benefits, for monitoring subsurface groundwater and general geology.
The results of these experiments are encouraging and warrant further investigations. Of particular interest is the ADR’s ability to detect subsurface water at great depth.
About the Author(s)
Gordon D.C. Stove has over 15 years of experience in developing and applying geoscience technologies. He is co-founder and shareholder of Adrok Ltd., and since Adrok’s inception, Gordon has managed technology developments and the company’s global services business. Gordon is a Member of the Energy Institute, PESGB, EAGE, SEG, AAPG, the Scottish Oil Club, Institute of Directors and the Caledonian Club. Gordon supports young entrepreneurs as a Business Mentor for Business Mentoring Scotland, as well as for the Prince’s Trust Youth Business Scotland. Gordon holds a BSc (Hons) in Geography from the University of Edinburgh, is a PRINCE2 Registered Practitioner and completed the School of CEOs.
Berthelier, J. J., S. Bonaimé, V. Ciarletti, R. Clairquin, F. Dolon, A. Le Gall, D. Nevejans, R. Ney, and A. Reineix, 2005. Initial results of the Netlander imaging ground-penetrating radar operated on the Antarctic Ice Shelf: Geophysical research letters, vol. 32, L22305.
Daniels, D.J., 2004. Ground Penetrating Radar (2nd edition): The Institute of Electrical Engineers.
van den Doel, K., J. Jansen, M. Robinson, G. C. Stove, and G. D. C. Stove, 2014. Ground penetrating abilities of broadband pulsed radar in the 1-70 MHz range. SEG Technical Program Expanded Abstracts 2014: pp. 1770-1774. SEG Denver 2014 Annual Meeting.
van den Doel, K., G. Stove, 2016. Modelling and Simulation of a Deeply Penetrating Low Frequency Subsurface Radar System, proc. EAGE, Vienna.
Google Earth, 2013. Scotland: 56°34’42.50”N, 3°48’41.53”W, Eye alt 643.69 km. Imagery: SIO, NOAA, U.S. Navy, NGA, GEBCO.
Jackson, J. D., 1998. Classical Electrodynamics (3rd ed.). New York: John Wiley & Sons.
Jordan, R., G. Picardi, J. Plaut, K. Wheeler, D. Kirchner, A. Safaeinili, W. Johnson, R. Seu, D. Calabrese, E. Zampolini, A. Cicchetti, R. Huff, D. Gurnett, A. Ivanov, W. Kofman, R. Orosei, T. Thompson, P. Edenhofer, and O. Bombaci, 2009. The Mars express MARSIS sounder instrument: Planetary and Space Science, 57, 1975–1986.
Orosei, R., S. E. Lauro, E. Pettinelli, A. Cicchetti, M. Coradini, B. Cosciotti, F. Di Paolo, E. Flamini, E. Mattei, M. Pajola, F. Soldovieri, M. Cartacci, F. Cassenti, A. Frigeri, S. Giuppi, R. Martufi, A. Masdea, G. Mitri, C. Nenna, R. Noschese, M. Restano, and R. Seu, 2018. Radar evidence of subglacial liquid water on Mars: Science, doi:10.1126/science.aar7268.
Revil, A., 2013. Effective conductivity and permittivity of un-saturated porous materials in the frequency range 1 mHz-1GHz: Water Resources Research, 49, 306–327.
Robins, N. S., and D. F. Ball, 2006. The Dumfries Basin aquifer. British Geological Survey Research Report.
Schön J. H., 2004. Physical properties of rocks, volume 8: Fundamentals and principles of petrophysics: Elsevier.
Stove, G. C., 2005. Radar Apparatus for Imaging and/or Spectrometric Analysis and Methods of Performing Imaging and/or Spectrometric Analysis of a Substance for Dimensional Measurement, Identification and Precision Radar Mapping, USA Patent No.: 6864826, Edinburgh, GB: US Patent Office.
Stove, G. C., J. McManus, M. J. Robinson, G. D. C. Stove, and A. Odell, 2012. Ground penetrating abilities of a new coherent radio wave and microwave imaging spectrometer: International Journal of Remote Sensing, 34, 303–324.
Stove, G. D. C., and K. van den Doel, 2015. Large depth exploration using pulsed radar. In: ASEG-PESA Technical Program Expanded Abstracts 2015, Perth. 1–4.
Telford, W. M., L. P. Geldart, and R. E. Sheriff, 1990. Applied geophysics: Cambridge University Press.
Vanhala, H., P. Lintinen, and A. Ojala, 2009. Electrical Resistivity Study of Permafrost on Ridnitšohkka Fell in Northwest Lapland, Finland: Geophysica, 45(1–2), 103–118.
Even a casual observer will be impressed by the remarkable diversity, complexity and beauty of the natural world. A closer investigation will reveal a mathematical order and structure in what at first may appear random. This mathematical order is related to the Fibonacci Sequence of numbers and the resulting Golden Spiral, Golden Ratio and Golden Angle. These topics are discussed in greater detail in the section titled “Spirals in Mathematics”. To review: each number in the Fibonacci Sequence, 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55…, is the sum of the preceding two numbers. The Golden Ratio is the proportional relationship between two successive Fibonacci numbers (e.g. 21 to 34; dividing a Fibonacci number by the next one gives approximately 0.618, while dividing it by the previous one gives approximately 1.618), and the Golden Spiral is a graphic display of the Fibonacci Sequence.
First, let’s look at the number of petals on a variety of flowers.
You will notice that in each flower example, the number of petals corresponds to one of the Fibonacci Sequence numbers.
We can also see Fibonacci numbers associated with plant growth. New shoots of a plant commonly grow out at an axil, a point where a leaf has grown out of the main stem. A schematic diagram of a simple plant, the sneezewort, shows some interesting numbers. If we draw horizontal lines through each axil, you will notice that the number of leaves and the number of branches are Fibonacci Sequence numbers.
For illustration purposes, the schematic diagrams above are presented as if the plant were flat, but in fact a majority of plant leaves and shoots spiral around the main stem. Phyllotaxis or phyllotaxy refers to the pattern or arrangement of leaves on the stem or branch of a plant. It is fairly common for the spiral arrangement of leaves on a plant to be related to the Golden Ratio and the Golden Angle. You will recall that the Golden Ratio relationship between two consecutive Fibonacci numbers is approximately 0.618. If we apply the Golden Ratio to a circle, the 360° circle is proportionally divided into two arcs: 360° × 0.618 ≈ 222.5°, and the remaining arc is 137.5°.
The Golden Angle is 137.5 degrees. It is fairly common, in many types of plants, that this is the angle at which adjacent leaves are positioned around the stem.
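Both numbers are easy to verify from the definitions above; the short sketch below computes the ratio of successive Fibonacci numbers and the Golden Angle using nothing beyond plain arithmetic.

```python
import math

# The ratio of successive Fibonacci numbers converges to 0.618...,
# the reciprocal of the Golden Ratio phi = (1 + sqrt(5)) / 2.
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
print(round(fib[-2] / fib[-1], 6))    # 0.618034

phi = (1 + math.sqrt(5)) / 2
print(round(360 / phi, 1))            # larger arc, ~222.5 degrees
print(round(360 * (1 - 1 / phi), 1))  # the Golden Angle, ~137.5 degrees
```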
The positioning of leaves at this 137.5 degree angle around the stem minimizes the blockage of sunlight and rain falling down on the plant and therefore has a beneficial effect on the plant’s growth and survivability. It should be noted that the 137.5 degree positioning of leaves does not apply in all cases. These phyllotaxis observations are not considered “rules of nature” but rather as remarkable prevailing tendencies. The application of the Golden Angle observed here in botany will reappear in the section “Spirals in Architecture.”
Golden Ratio – www.pinterest.com
Calla Lily – www.goodstock.photos.com
Columbine – www.wallpaperswide.us.com
Black Eyed Susan – www.clipartkid.com
Field Daisy – www.hutui6.com
Branches/Leaves – http://britton.disted.camosun.bc.ca/fibslide/jbfibslide.htm
Golden Angle – www.gofiguremath.org
Golden Angle II – www.tinyurl.com/32ny6wt
The Baroque Era spanned approximately 1600 to 1750, and followed the Renaissance era of musical style. Baroque music was typically harder to perform than Renaissance music as it was written more.
In the Baroque era (about 1600 to 1750) this is less often the case, though paradoxically the man acknowledged as the greatest composer of the era, J. S. Bach, did in fact write works that are.
Music during the Renaissance: the new creativity during this era helped people abandon the stricter ways of the Medieval era. Born in Italy to a small group of nobles, poets, and composers named the Camerata, in Florence around 1575, it was believed that the Greek dramas had.
The Baroque Era Of Music - The Baroque Period, 1600-1750, marked another unique era of musical experimentation and evolvement. Named after the trendy, ornate architectural style of the time, the period witnessed a widespread change in composers' musical desires as they widely rebelled against the traditional styles that were prevalent during the Renaissance.
Differences And Similarities Between Baroque Music And Jazz. The Baroque period spanned from 1600 to 1750 and can be divided into three parts: early (1600-1640), middle (1640-1680), and late (1680-1750). Although today most people recognize the latest part as Baroque music, the earliest part was one of the most revolutionary.
Middle Ages vs. Renaissance Music Composers. Johann Hummel: born November 14, 1778; died October 17, 1837; Austrian composer and virtuoso pianist whose music reflects the transition from the Classical to the Romantic musical era. Mozart. Beethoven: born July 26, 1791; died July 29.
The Renaissance age began in the 1300s in the Late Medieval period and lasted into the 1600s and the beginning of Modern history. It was a time of great cultural, architectural, scientific, and educational advances. The Baroque period began in the.
The Baroque era originated in the times when the categorization between Roman Catholics and Protestants became the reason for the division of churches. Moreover, many creations from the Baroque period are modified, recreated versions of the most famous works of the Renaissance period (Carl, 2009). This is the reason why the two styles share many similarities while differing in.
First off, to many people, “classical music” is a vague term meaning roughly “mostly-instrumental music written by dead (or obscure modern) composers, typically performed by people in formalwear,” and when used in this loose sense, it generally in.
Societal influences shaped the Baroque and romantic musical ages. Ornamentation was the rule in music, fashion and art between 1600 and 1750, when men and women wore wigs and layers of lace. Baroque composers typically worked for churches or wealthy ruling classes. During the romantic times, musicians broke boundaries in finding their creative outlets. Musicians no longer borrowed from.
After the Renaissance period came the Baroque period, from 1600-1750. The Baroque period was broken up into two periods: the Early Baroque period, from 1600-1710, and the Late Baroque period, from 1710-1750. During the Early Baroque period, music from composers such as Vivaldi and Monteverdi emerged and became popular. Music became more light and airy.
In this essay I am going to look at the differences between Classical music and classical music. There are many differences between the two, one is an era and the other is a type of music. Classical is an era, it is from about 1730 to just after 1800. There are 5 different periods in time (for music), Renaissance, Baroque, Classical, Romantic and Modern (20th Century). At this moment in time.
Comparisons of the Renaissance and Baroque Periods Essay Sample. Comparisons of the Renaissance and Baroque Periods Western Governors University. Comparisons of the Renaissance and Baroque Periods It was the 14th century and Europe was shrouded in creative and intellectual “darkness” as a result of corrupt and oppressive religion. People.
Baroque music shares similar elements with Renaissance art, such as the intense application of counterpoint and polyphony. Nevertheless, the techniques differ. One of the significant divergences between the styles is the fact that Baroque music strives to create a greater degree of emotion than Renaissance melodies. In addition, the former.
Comparison of Renaissance vs Baroque Art (comparison essay). This research takes into consideration the major aspects of the musical culture of the specified ages and focuses on finding differences between the two. A) Introduction: Renaissance and Baroque. B) General Discussion: 1) The Historical Background and Key Features of the Renaissance; 2) The Baroque Age and its Characteristic Trends; 3)

The use of ovals and circles in Baroque architecture exhibited some of the other differences between these periods, with a perfect example in St Peter's Basilica, completed in 1626, in contrast to the sharp lines used in the Romantic Palais Garnier Opera House, completed in 1875. Another bold difference between the two art.

Both the Baroque and the Classical period in music produced great household-name composers, such as Johann Sebastian Bach and George Handel in the Baroque Era, and Wolfgang Amadeus Mozart and Josef Haydn in the Classical Period. To many listeners who are vaguely familiar with classical music, there is not much difference between these two musical periods.
The main reason why the Portuguese enslaved Africans was so they could have men to work on plantations. During the colonial period the demand for sugar, tobacco, cotton and other agricultural products increased. When this happened, so did the demand for workers to work on the plantations, especially in Brazil. Some of the most valued workers were ones that worked for free and were also immune to diseases from the New World; these people were African slaves. The slaves were the main workers of this time and there were many of them in Brazil, “about 812,000 Slaves,” (Robert Conrad pg.
After minimal schooling, he traveled around Latin America and eventually ended up in England. He embraced the ideas of the Pan African Movement. These ideas were the groundwork for the organization he founded, the UNIA. He attracted working class blacks, who formed a devoted following of the man and his ideas. Both of these leaders, of course, were interested in the betterment of their race, but their different visions in achieving their goals led to a division that became both philosophical and intensely personal.
How significant was the slave trade in the growth of the British Empire in the years c1680-1763? The slave trade also known as the transatlantic slave trade led to the greatest forced migration of a human population in history. Millions of Africans were transported to the Caribbean, North America and South America. It is accurate to say that the slave trade played a significant role in the years 1680-1763 due to the settlement of slaves in the colonies of the Americas. At the start of the eighteenth century Britain’s colonies relied heavily on the slave trade for their economic development.
Historical Analysis of Olaudah Equiano's “From the Interesting Narrative of Olaudah Equiano”, Ashley Williams, History 193, Professor Bravo, February 7, 2014. Olaudah Equiano was an African who fell into slavery. He was forced into it like many others during the 17th and 18th centuries. The short story about Olaudah Equiano tells about his life and what he went through as a slave. First, there was a lot of trading and bartering going on with the white slave owners. They would use their slaves as material items and not think of them as people.
Gary Nash discusses the impact of black people in a white people's colony. The first Negro people to come to America, in Virginia, were probably indentured servants who would receive some type of reward after their time of service was over, until 1660. After 1660, though, many of the Negroes that came to America were slaves, purchased as property. By the 1800s every colony in America had “slave codes” which stripped black people of every right they had and made them property. His biggest claim was that, “More than anything else it was sugar that transformed the African slave trade.” The slave trade became an extremely profitable enterprise for European nations once the sugar plantations reached the New World.
It was said the British were the best at keeping their slaves alive during the voyage to their destination. The Trans-Atlantic Slave Trade had many effects on Africa, all stemming from the participating countries, such as the New World colonies and England, which had major roles in the trade. It affected not only the population but also the government and daily life, with many new diseases brought over by the different traders and settlers. The Trans-Atlantic Slave Trade imported 12 million slaves over its four-century span, and this loss of people took a toll on the African community. The New World Traders and Travelers Newly
That's when, during the slavery era, the genres of spirituals (gospel) and blues were created by African-Americans. African-American music became the choice for celebrations because masters began inviting slaves to European festivities. African-Americans became recognized as great musicians. They dominated, and continue to dominate, the music and dancing of the community. Rap
Southerners grew tobacco, sugar and particularly cotton, the South upholding the name of the Cotton Kings and producing 10% of the USA's manufactured goods in the 1850s, in comparison to the North, which was industrialising rapidly and generating a much larger output, twice as much. The North was growing through industrialisation and the expansion of transportation, with railways and the development of steamboats revolutionising travel on the great rivers, the Mississippi and its tributaries. Cities were growing around the advanced factories, and slavery would not fit this type of economy, which is a clear difference, as the South's economy was based around slavery.
The African slaves were also a lot more versatile than the indentured servants. While a servant could work for a pretty good amount of time without taking a break, the average African slave could work for almost a full day without stopping. This resulted in a much smoother and quicker harvest, and fairly easy upkeep of crops. This was very important, especially since many of the staple crops of the Southern colonies, given the geographic region, were very labor-intensive, such as tobacco and sugar cane, and the speed with which the crops were harvested resulted in more money for the plantation owner. The quicker the crops got to
Kwanzaa and Juneteenth are two very important celebrations for African American culture, two of the most popular celebrations relating to the subject. They do differ, however: Kwanzaa is based on African heritage, resurrecting it for one week out of the year, while Juneteenth celebrates the delayed end of slavery in Texas. Both of these celebrations are celebrated not only by African Americans but by all ethnicities. Kwanzaa is a holiday during the Christmas season to celebrate African culture.
Training Our Future Scientists and Engineers For A Better Future!
For years educators have stressed that students learn better “by doing” – especially in the sciences. Students of all ages learn more science-content and skills when they engage in investigation and discovery using everyday materials and the basic equipment of science.
Our carefully designed inquiry-based lessons involve children with hands-on activities, capturing children’s natural curiosity, stimulating their interest in science, and teaching them important science topics along with critical thinking skills.
Club SciKidz Where Science and Technology Connect!
Our unique Science Club program is designed to meet the needs of children in grades K-6. Club SciKidz lessons help engage students in observation, measurement, identification of properties, and experimentation involving life, earth, and physical science concepts. The units are rigorously researched by science educators with the help of teachers and children. The results are lessons that students can enjoy and genuinely learn from.
2021-2022 Super Hero After School Program Descriptions
Save The Bees:
Bees are in trouble! Since 2006 entire hives have been dying from what scientists call “Colony Collapse Disorder.” Worker honeybees are disappearing. Without the worker bees, the hive, or colony, can’t survive. Scientists don’t yet know the exact cause of CCD. Diseases, parasites, loss of habitat, and poor nutrition are all thought to contribute to CCD. However, research is pointing to pesticides as the main cause of CCD.
Students will perform experiments in pollination, build a bee house, plant their own Bee friendly garden, create a beeswax kitchen wrap, and make a honeycomb candle. Can you help save the bees?
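Save The Rainforest: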
When you think of the rainforest, what do you imagine? Do you see tall trees, dense foliage, and diverse animals and insects? How about lots and lots of water? Rainforests are found all over the world, the most well-known one being the tropical rainforest in Brazil. Unfortunately, rainforests are in trouble! Half of the world's rainforests have been destroyed in the last 100 years by humans.
Students will build a model rainforest, create a solar oven, assemble and color a water cycle wheel, and create a rainforest-friendly bracelet.
Holiday Toy Box Science:
At Club SciKidz we really know how to celebrate the holidays-scientifically that is!
Science is everywhere, in your home, your school, your car…even your toys! Something as simple as a rubber ball demonstrates a scientific principle. In this part of our fall workshop, you will become an expert in the science behind the toy. You are going to experiment with gravity, energy, and other amazing scientific principles. We’ve even added some yummy treats!
Join us as we create Intergalactic Donuts, create a miniature galaxy painting, solve a Puzzle Cube, and experiment with our light-up rail twirler and a super bounding ball.
Save Our Oceans:
How many times have you been asked to “throw something away?” Lots of times, right? Have you ever stopped to think about just exactly where is this, “away”? Unfortunately, a lot of what is thrown “away” ends up in our oceans.
Our Save The Oceans box gives young marine biologists the opportunity to learn about how pollution impacts the oceans and the marine biology of our precious seas.
Students will make their own bioplastic, experiment with biodegradable packing peanuts, build a model of the ocean zones, learn about pressure and Boyle's Law and, of course, concoct some cool mermaid slime.
Join us on the high seas and help save our oceans!
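Everyday Earth Day: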
Every April, kids around the world show support for protecting our earth by celebrating Earth Day. The planet is facing a crisis; pollution, deforestation, and overuse of Earth’s natural resources are causing temperatures to rise. Fortunately, scientists called environmental engineers are working hard to fix these problems. Using their knowledge of physics, biology, and chemistry, these scientists come up with new ways to decrease pollution, improve recycling, and combat climate change. In this workshop, you will become an environmental engineer and learn how to make everyday Earth Day!
Students will build a clean water mini-filtration apparatus, create an Eco-Bag as an alternative to plastic and paper, grow a tree in our Forest Forever experiment, and participate in our Reduce, Reuse, and Recycle activity. Caution: this activity uses pine tree seeds.
Go Fly a Kite!
Humans have always been fascinated with flying like the birds. Kites may seem like they are just for fun, but history shows kites are much more than a toy. Ancient Chinese used kites in battle and exploration. 18th century scientists used kites to study weather and electricity. The Wright Brothers would not have been able to invent the airplane without first experimenting with kites. Kites were important tools of communication and observations in both world wars. Experimental kite materials led to improvements in parachutes and hang gliders. You are going to build a Kazoon Kite and do some experiments of your own. Get ready for some high-flying fun!
Students will build and fly a Kazoon Kite using tetrahedron shapes, construct a mini wind turbine, practice their aviation skills using a foam glider, and assemble their own anemometer.
NASA confirmed presence of ice on Moon using data from Chandrayaan-1 spacecraft
Using data from the Chandrayaan-1 spacecraft, which was launched by the Indian Space Research Organisation (ISRO) in 2008, NASA scientists have confirmed the presence of frozen water deposits in the darkest and coldest parts of the Moon's polar regions. Scientists used data from NASA's Moon Mineralogy Mapper (M3) instrument aboard the Chandrayaan-1 spacecraft to identify three specific signatures that definitively prove there is water ice at the surface of the Moon. According to the study published in the journal PNAS, the ice deposits are patchily distributed and could possibly be ancient. The study said that with enough ice sitting at the surface, within the top few millimetres, water would possibly be accessible as a resource for future expeditions to explore and even stay on the Moon. Most of the newfound water ice lies in the shadows of craters near the poles, where the warmest temperatures never reach above minus 156 degrees Celsius. Because of the very small tilt of the Moon's rotation axis, sunlight never reaches these regions.
The Yellow Sac Spider (Cheiracanthium inclusum) is also known as the Black-Footed Spider. The Yellow Sac Spider is one of a group of spiders in North America whose bites are generally considered to be medically significant. The Yellow Sac Spider is very common in most of the United States and is the cause of a lot of spider bites and other unwanted encounters.
Yellow sac spider Characteristics
Yellow Sac Spiders are light yellow to pale yellowish green, sometimes with an orange-brown stripe on top of the abdomen. The cephalothorax (fused head and thorax) of the Yellow Sac Spider is orange-brown to reddish and the abdomen is pale yellow to light grey. An adult female sac spider's body is typically 1/4 to 3/8 inches long and its leg span is up to 1 inch.
Males are more slender, with a slightly larger leg span. The first pair of legs is longer than the fourth. Yellow Sac Spiders have eight similarly-sized dark eyes arranged in two horizontal rows.
Yellow sac spider Habitat and Webs
Yellow Sac Spiders take shelter in flattened silk tubes during the day and move about to hunt during the night. Yellow Sac Spiders often live in houses and can frequently be found crawling on walls or other vertical surfaces. They construct a silken tube or sac in a protected area, such as within a leaf, under landscape timbers or logs, or at the junction of a wall and ceiling, and they use this sac as their daytime retreat. This is how the Yellow Sac Spider derives its common name, sac spider. These spiders do not build webs.
Yellow sac spider Diet
Yellow Sac Spiders are active hunters, emerging at twilight from their silken sac to seek out prey. They take a wide diet of arthropods, including spiders larger than themselves, and will even eat their own eggs. Outdoors, they often search among foliage, waving their first pair of legs in front of them as they rapidly climb among leaves and stems of plants. Because of their active searching habits, Yellow Sac Spiders often enter homes, particularly during early autumn when their food supply decreases.
Yellow sac spider Reproduction
After mating, female Yellow Sac Spiders produce around 5 egg sacs, each containing 30 to 48 eggs. The eggs are laid in a loose mass and covered with a thin coat of spun silk. The small, white, paper-like sacs are often found in easily overlooked locations: along ceilings and corners, or behind pictures and shelves. The female may guard these egg sacs until the eggs hatch. Eggs are usually laid in autumn and the spiderlings emerge during the following spring. The female may produce several egg masses during her lifetime. Adults can be found from April through November; however, in the hottest months small spiders make up the largest proportion of the population.
Yellow sac spider Venom
The chelicerae of Yellow Sac Spiders are very powerful and the fangs can penetrate human skin quite easily. Most bites on humans occur when people are gardening or performing other kinds of outdoor activities. The venom has mild, local cytotoxic (toxic to cells) and neurotoxic (poisonous to nerve tissue, such as the brain or spinal cord) effects. No fatal incidents from encounters with the Yellow Sac Spider have ever been recorded. It has been noted that a large number of bites attributed to the Brown Recluse spider may actually be the result of Yellow Sac Spider bites.
In a time when the majority of young children know more about a computer than agriculture, this project brings agriculture to them in a unique way. This presentation introduces a program that aims to connect elementary students with the vast agriculture industry in the digital age using appropriate technology. The project consists of teaching children in Eastern Kentucky and beyond about agriculture and land use through the virtual dissemination of relevant stories, facts, and activities. In this way children can learn how agriculture is vital to their everyday life. Featured topics include horticulture crops, agronomic crops, livestock, and the history of agriculture practices in Appalachia. The information is being shared through the Facebook social media platform using a page entitled “Little Aggies.” The project's goal is to foster learning about agriculture in an intriguing way, while encouraging reading and greater appreciation for plants, animals, and all living things.
Smith, Heather and Gritton, Joy, "Cultivating Gratitude And Learning: Virtual Agriculture Education For Elementary School Students" (2021). 2021 Celebration of Student Scholarship - Oral Presentations. 7.
Teaching children phonics, that is, to read by linking letters and sounds, is essential in early childhood education. Play-based strategies for supporting the development of oral language, and for increasing both phonological and phonemic awareness in young children, are effective across learning contexts, whether at home or in the classroom.
Evolve with QUTeX
QUTeX develops the practical skills of educational professionals wanting to improve their performance in the classroom and advance their careers. In undertaking this module, you'll develop the skills to teach phonics to children using play-based learning that keeps them engaged. The course has been designed to be relevant to most early childhood contexts worldwide and will develop your professional skills as an educator.
Who should participate?
Teaching Phonics in Early Childhood has been designed for early childhood educators and workers who want to improve their professional practice and learn how to teach phonics step by step. This course is also of interest to parents of young children who want to introduce their children to phonics prior to school.
For Australian educators, this module aligns with the following Australian Professional Standards for Teachers (APST):
2.1 Content and teaching strategies of the teaching area
2.5 Literacy and numeracy standards
6.1 Identify and plan professional learning needs
6.2 Engage in professional learning and improve practice
6.3 Engage with colleagues and improve practice
7.4 Engage with professional teaching networks and broader communities
This course aligns with the following National Quality Standards (NQS):
Element 1.2.1 Intentional teaching
Element 1.2.2 Responsive teaching and scaffolding
This online module is open for international enrolments.
Essay on blood and our body
As the stem cell matures, several distinct cells evolve. These include red blood cells, white blood cells, and platelets. Immature blood cells are also called blasts. Some blasts stay in the marrow to mature. Others travel to other parts of the body to develop into mature, functioning blood cells. Hemoglobin (Hgb) is an important protein in red blood cells that carries oxygen from the lungs to all parts of the body. The main job of white blood cells, or leukocytes, is to fight infection.
There are several types of white blood cells and each has its own role in fighting bacterial, viral, fungal, and parasitic infections. The types most important for helping protect the body from infection and foreign cells include those that help heal wounds, not only by fighting infection but also by ingesting matter such as dead cells, tissue debris, and old red blood cells. The main job of platelets, or thrombocytes, is blood clotting. Platelets are much smaller than the other blood cells. They group together to form clumps, or a plug, in the hole of a vessel to stop bleeding.
A CBC count is a measurement of the size, number, and maturity of the different blood cells in the blood sample. A CBC can be used to find problems with either the production or destruction of blood cells. Variations from the normal number, size, or maturity of the blood cells can indicate an infection or disease process. Often with an infection, the number of white blood cells will be elevated. Many forms of cancer can affect the production of blood cells.
For instance, an increase in the immature white blood cells in a CBC can be associated with leukemia.
Blood diseases, such as anemia and sickle cell disease, will cause an abnormally low hemoglobin. A CBC is ordered to aid in diagnosing anemia, other blood disorders, and certain cancers of the blood; to monitor blood loss and infection; to monitor response to cancer therapy, such as chemotherapy and radiation; and to evaluate bleeding and clotting disorders and to monitor anticoagulation (anticlotting) therapies.
Overview of Blood and Blood Components
What is blood? Blood is the life-maintaining fluid that circulates through the entire body. What is the function of blood?
Blood function and composition
The arteries have thicker smooth muscle and connective tissue than the veins to accommodate the higher pressure and speed of freshly-pumped blood. The veins are thinner walled as the pressure and rate of flow are much lower. In addition, veins are structurally different from arteries in that veins have valves to prevent the backflow of blood. Because veins have to work against gravity to get blood back to the heart, contraction of skeletal muscle assists with the flow of blood back to the heart.
Circulatory and pulmonary systems
According to Gordon, Z. In the fetus, it extends into the umbilical cord. The umbilical arteries supply deoxygenated blood from the fetus to the placenta.
There are usually two umbilical arteries present together with one umbilical vein in the umbilical cord. The umbilical arteries surround the urinary bladder and then carry all the deoxygenated blood out of the fetus through the umbilical cord. Inside the placenta, the umbilical arteries connect with each other at a distance of approximately 5 mm from the cord insertion in what is called the Hyrtl anastomosis. The umbilical arteries are actually the latter of the internal iliac arteries anterior division of that supply the hind limbs with blood and nutrients in the fetus.
Arterial blood has a uniform composition of gases in all parts of the body. Blood from the placenta is carried to the fetus by the umbilical vein. About half of this enters the fetal ductus venosus and is carried to the inferior vena cava, while the other half enters the liver proper from the inferior border of the liver. The branch of the umbilical vein that supplies the right lobe of the liver first joins with the portal vein.
The blood then moves to the right atrium of the heart. In the fetus, there is an opening between the right and left atrium the foramen ovale , and most of the blood flows through this hole directly into the left atrium from the right atrium, thus bypassing pulmonary circulation. The continuation of this blood flow is into the left ventricle, and from there it is pumped through the aorta into the body. Some of the blood moves from the aorta through the internal iliac arteries to the umbilical arteries, and re-enters the placenta, where carbon dioxide and other waste products from the fetus are taken up and enter the maternal circulation.
Some of the blood entering the right atrium does not pass directly to the left atrium through the foramen ovale, but enters the right ventricle and is pumped into the pulmonary artery. In the fetus, there is a special connection between the pulmonary artery and the aorta, called the ductus arteriosus, which directs most of this blood away from the lungs which aren't being used for respiration at this point as the fetus is suspended in amniotic fluid. According to Wang, Y. The blood pressure inside the umbilical vein is approximately 20 mmHg.
The umbilical veins bring the nutrient- and oxygen-rich blood from the placental villi via the umbilical cord to the embryo. Normally there exists only one umbilical vein in the umbilical cord: the unpaired umbilical vein.
How does blood work, and what problems occur?
At the caudal rim of the navel, though, it becomes connected to the two intraembryonic umbilical veins, which run laterally from the umbilical coelom to the heart and empty into the two sinus horns together with the omphalomesenteric veins that lie medial to them. As development proceeds, the umbilical veins quickly become incorporated into the developing liver, where they obtain a connection to the liver's capillary plexus. The blood from the left and right umbilical veins then reaches the sinus venosus both directly and via the anastomoses in the liver.
The extrahepatic part of the umbilical veins atrophies rather soon. The blood of the umbilical veins now reaches the sinus venosus mixed with the blood of the omphalomesenteric veins passing through the liver.
Essay on Blood: Top 6 Essays | Circulatory System | Human Physiology
The posthepatic part of the left omphalomesenteric vein atrophies and the right one takes over all of the blood flowing through the liver. In conclusion, mostly, arteries contain oxygenated blood, and veins contain carbon dioxide-rich and oxygen-poor blood; however, there is one exception. The artery leading to the lungs from the heart does not contain any oxygen, and the vein leading from the lungs to the heart does.
This is so the oxygen from the lungs is brought to the heart in order for it to work.
Everywhere else, the oxygenated vessels (arteries) carry blood to the rest of the body. Histologically, veins have a large, floppy, irregular lumen and a thinner wall. They also contain valves to prevent backflow when blood is being taken back up to the heart. Arteries usually have a round lumen and thick walls, but no valves. This discussion has outlined that, consistent with the circulation described above, blood in the umbilical artery contains less glucose and less oxygen, but more carbon dioxide and more urea, than blood in the umbilical vein, which carries the oxygen- and nutrient-rich blood from the placenta to the fetus.
Hidden underwater melt-off in the Antarctic is doubling every 20 years and could soon overtake Greenland to become the biggest source of sea-level rise, according to the first complete underwater map of the world’s largest body of ice.
Warming waters have caused the base of ice near the ocean floor around the south pole to shrink by 1,463 square kilometers—an area the size of Greater London—between 2010 and 2016, according to the new study published in Nature Geoscience.
The research by the UK Centre for Polar Observation and Modelling at the University of Leeds suggests climate change is affecting the Antarctic more than previously believed and is likely to prompt global projections of sea-level rise to be revised upward.
Until recently, the Antarctic was seen as relatively stable. Viewed from above, the extent of land and sea ice in the far south has not changed as dramatically as in the far north.
But the new study found even a small increase in temperature has been enough to cause a loss of five meters every year from the bottom edge of the ice sheet, some of which is more than 2 km underwater.
“What’s happening is that Antarctica is being melted away at its base. We can’t see it, because it’s happening below the sea surface,” said Professor Andrew Shepherd, one of the authors of the paper. “The changes mean that very soon the sea-level contribution from Antarctica could outstrip that from Greenland.”
The study measures the Antarctic’s “grounding line”—the bottommost edge of the ice sheet across 16,000 km of coastline. This is done by using elevation data from the European Space Agency’s CryoSat-2 and applying Archimedes’s principle of buoyancy, which relates the thickness of floating ice to the height of its surface.
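Archimedes's principle links what the satellite can see (the height of the floating ice surface above the sea) to what it cannot (the total thickness). A minimal sketch of that relation, with typical density values assumed for illustration:

```python
# Hydrostatic equilibrium for floating ice: the submerged fraction equals
# rho_ice / rho_water, so total thickness H relates to the visible
# freeboard h by  h = H * (1 - rho_ice / rho_water).
RHO_ICE = 917.0      # kg/m^3, typical glacial ice (assumed)
RHO_WATER = 1028.0   # kg/m^3, typical seawater (assumed)

def thickness_from_freeboard(h_m):
    """Estimate total ice thickness (m) from the measured freeboard (m)."""
    return h_m / (1.0 - RHO_ICE / RHO_WATER)

H = thickness_from_freeboard(50.0)
print(f"total {H:.0f} m, of which {H - 50.0:.0f} m below the waterline")
```

A 50 m freeboard thus implies roughly 460 m of ice, over 400 m of it hidden underwater, which is why changes at the grounding line are invisible from the surface.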
The greatest declines were seen in west Antarctica. At eight of the ice sheet’s 65 biggest glaciers, the speed of retreat was more than five times the rate of deglaciation since the last ice age. Even in east Antarctica, where some scientists—and many climate deniers—had previously believed ice might be increasing based on surface area, glaciers were at best stable and at worst in retreat when underwater ice was taken into account.
“It should give people more cause for concern,” said Shepherd. “Now that we have mapped the whole edge of the ice sheet, it rules out any chance that parts of Antarctica are advancing. We see retreat in more places and stasis elsewhere. The net effect is that the ice sheet overall is retreating. People can’t say ‘you’ve left a stone unturned’. We’ve looked everywhere now.”
The results could prompt an upward revision of sea-level rise projections. Ten years ago, the main driver was Greenland. More recently, the Antarctic’s estimated contribution has been raised by the Intergovernmental Panel on Climate Change. But its forecasts were based on measurements from the two main west Antarctic glaciers—Thwaites and Pine Island—a sample that provides an overly narrow and conservative view of what is happening when compared with the new research.
The study’s lead author, Hannes Konrad, said there was now clear evidence that the underwater glacial retreat is happening across the ice sheet.
“This retreat has had a huge impact on inland glaciers,” he said, “because releasing them from the sea bed removes friction, causing them to speed up and contribute to global sea level rise.” |
3D Animation Workshop: Lesson 2: Building an Object
Lesson 2 - Building an Object - Part 1
In Lesson 1, we were introduced to the basics of 3D space and began to feel our way around. We passed quickly over the idea that the objects we create are composed primarily of points in this space--that they are 3-D coordinates designated (x,y,z), for example (1,3,5) or (999,0,-222). This tutorial takes this concept deeper, as we learn to construct the simplest possible object out of points in 3-D space.
But before we go on, let's take a moment to consider an important issue.
The vast majority of people first approaching 3-D graphics are intimidated by the math and geometry concepts. This is especially true of artists. How much of this stuff do we really have to understand in order to create?
We have become accustomed to a sharp line drawn between the arts and the sciences, but this was not always the case. The great Renaissance artists were both artists and scientists, as they had to be to master their art. These great painters and sculptors learned anatomy from the dissection of corpses, alongside the medical students--yet their purpose was not to learn medicine, but to learn how to represent the human body in art. They studied the geometry of Euclid, and they built and experimented with devices to explore perspective. Those devices evolved into our modern-day camera centuries later, but the Renaissance artists were exploring the physics of sight for artistic rather than scientific reasons.
Now, at the birth of 3-D computer graphics as an artistic medium, those who wish to master the tools and produce the art must return to the spirit of the Renaissance masters who delved into every field they needed for their art, without fears or prejudices. In any case, mathematical ideas that may seem dry and forbidding can suddenly seem beautiful and exciting as they are used to create our 3-D art.
Back to our subject. The reader will remember the 3D axes created in Lesson 1.
We are looking at the origin (0,0,0) from above and somewhat over to the left so that we can see the whole scene unobstructed. The blue axis is the vertical one, called y. Positive y values are up and negative ones are down. Let's assume that our axes extend exactly 1 unit from the origin. Thus the point (0,1,0) is at the top end of the y (blue) axis, and (0,-1,0) is at the bottom end. Take a moment to be sure you can imagine this before you continue.
The yellow axis is the horizontal one, called x. (1,0,0) is at the right tip of this axis and (-1,0,0) is at the left tip as we look from the front. The green axis is z, the depth axis. (0,0,-1) is at the far end away from us, while (0,0,1) is at the tip nearest to us as we look from the front. Notice that depth increases in this way as the z value decreases. This creates what is called a "right-handed" coordinate system, and is the most common convention in use today.
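To make the convention concrete, here is a small aside, not part of the original lesson, listing the axis tips as (x, y, z) triples in Python (the names are invented for illustration):

```python
# Sketch: the axis endpoints from the lesson as (x, y, z) triples,
# using the right-handed convention described above (y up, x right,
# z increasing toward the viewer). Purely illustrative; not part of
# the original tutorial.

origin = (0, 0, 0)

axis_tips = {
    "y_top":    (0, 1, 0),   # top of the blue (vertical) axis
    "y_bottom": (0, -1, 0),  # bottom of the blue axis
    "x_right":  (1, 0, 0),   # right tip of the yellow axis
    "x_left":   (-1, 0, 0),  # left tip of the yellow axis
    "z_near":   (0, 0, 1),   # green axis tip nearest the viewer
    "z_far":    (0, 0, -1),  # far tip: depth increases as z decreases
}

for name, (x, y, z) in axis_tips.items():
    print(f"{name}: ({x}, {y}, {z})")
```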
Hebrew is a Semitic dialect or language that developed in Canaan between the Jordan River and Mediterranean Sea
during the latter half of the second millennium B.C. Biblical Hebrew was a conservative literary language, which coexisted with spoken languages and dialects until the Babylonian Exile (6th century B.C.). A distinction must be made between spoken Hebrew and Biblical Hebrew, because the spoken language was more susceptible to regional cultural influences, independent linguistic development, and dialect diversity. Within the literary form of Hebrew, scholars have noted that Biblical Hebrew before the Exile exhibited marked differences from the Biblical Hebrew of literary works after the Exile:
Classical Biblical Hebrew (Biblical Hebrew Proper) – This encompassed the Pentateuch
and other Old Testament books written before the Exile.
Based on the earliest pieces of Hebrew writing in our possession (such as the Gezer Calendar, dated to about 900 B.C.), there is evidence that it belonged to the Canaanite group of languages.
Late Biblical Hebrew – This described the literary language of the Old Testament books
after the Exile.
The morphology, phonology, and lexicon of Late Biblical Hebrew exhibit a significant Aramaic influence. For example, the Masoretes (6th century A.D.), responsible for establishing the vowel system for the consonant-only Hebrew text, used vowel features found in Aramaic.
After the Exile, Classical Biblical Hebrew disappeared from everyday life and was used primarily for literary,
liturgical, and administrative purposes until the fall of the Second Temple in 70 A.D.
Because so little data is available for study, limited largely to the Old Testament, the origin of Hebrew is unknown. Without the ability to examine primary sources, scholars have developed other theoretical approaches to the question of the origin and literary transmission of Hebrew, such as historical reconstruction, comparative studies with other local languages, and tracing dialect geography.
Identifying lexical anachronisms is one method that destructive critics use to deny Mosaic authorship of the
Pentateuch. Lexical anachronisms are uses of words that may not have existed during a certain period of time; their presence provides evidence of inconsistency with the dating of a manuscript. For example, if the word "computer" were found in a manuscript purportedly dated to 1600 A.D., this "late" word would refute that early dating of authorship. In a similar manner, it is alleged that some words found in the Pentateuch came into use after Moses' time.
While this approach does have merit, several problems affect the interpretation of the data and the conclusions drawn from it. Among them:
1. Current scholarship does not have an extensive record of Ancient Hebrew; thus, there is no
clear lexical basis for determining what is ancient or not. For example, Biblical Hebrew only has 8000 lexical terms
preserved in the Old Testament, which isn’t even enough to support a spoken language. Yet there is ample evidence
that much of Biblical Hebrew is indeed ancient.
Biblical Hebrew has names of places and people that are not found in Late Biblical Hebrew or mentioned
in any other ancient texts.
The popular etymologies of practically all of the patriarchal names are explained by synonyms rather than by their true roots, which indicates that their original meanings were unknown or forgotten.
Over a quarter of the words in the Bible appear just once, and 289 of them belong to root words
used only once in Biblical Hebrew. While the meanings of most were determined on the basis of Rabbinic Hebrew or Comparative Semitics, there are several for which only an approximate meaning can be discovered.
2. Some words found in the Pyramid Texts (2400 BC) disappear in usage until they were used in
writings of the Greco-Roman period (300-30 BC). If dating based on the presence of late words was used, then the
Pyramid Texts would be incorrectly dated at the later date.
3. Aramaic words, thought to be evidence of Late Biblical Hebrew created as the Jews were replacing Hebrew with Aramaic, have turned out to be either Hebrew, Phoenician, Babylonian, or Arabic words. Some of these words come from spoken languages that were concurrent with the time of Moses.
An example of this can be found in Genesis 31:47 where Jacob and Laban used different languages
to name a heap of stones. Jacob used the Hebrew term "Galeed," and Laban used the Aramaic term "Jegarsahadutha."
4. Scribal traditions during the second millennium B.C. are unknown. It is not known when scribal glosses, intended to clarify the text (or update archaic terms or grammar), were inserted as part of the text during the process of duplication. However, glosses introduced late words, which have been used to erroneously refute the early dating of the Pentateuch.
Late Spelling Patterns
In another approach, some scholars have studied the orthography of the Pentateuch to gain a better
understanding of how the Bible was written and transmitted. In general, the older the text, the greater the number of terms that will be spelled in an old-fashioned manner. This principle of conservative spelling is used in
all philological studies interested in determining the origin of any language.
For example, the pronoun "you" is spelled "thee" and "thou" in early English manuscripts dated hundreds of years earlier than today.
From objective studies, the Pentateuch has been found to be the most conservative of all the books of the Old
Testament. Within the Pentateuch, the books can be ranked in order of most to least conservative spelling: Exodus,
Leviticus, Numbers, Genesis, and Deuteronomy.
It has been determined that it is the priestly material that contains the most conservative
spelling. Exodus and Leviticus are dominated by priestly material followed by Numbers and Genesis.
As an example of more technical detail, Genesis has a common 3rd person singular pronoun form, -hw; Joshua and later works break this into masculine and feminine forms.
From another orthographic perspective, destructive critics have used spelling patterns as a basis to deny
Mosaic authorship. They point to Proto-Semitic inscriptions on Semitic mining camp huts and stone tablets found
in Southern Sinai, dated around 1800 B.C., which record 27 consonants. Destructive critics contend that since Hebrew is probably a descendant of ancient Northwest Semitic (Proto-Semitic), it could not possibly be the language of Moses' time, since it has only 22 consonants; thus, it had to be the product of later Hebrew authors.
1. Because the Semitic family of some 70 languages shares similar phonology, morphology, syntax, and vocabulary, various scholars have argued that this linguistic unity is the result of a common linguistic origin, including the possibility of a common race or peoples. Their historical approach includes a hypothetical family tree of Semitic languages, which presumes the existence of a series of proto-languages that is not supported by any evidence.
There is no archeological data to confirm these speculative presumptions, hypotheses and theories.
Archeological data can only take scholars back to a period in which there was already more diversity than unity, with distinct peoples across a wide area speaking languages that have certain elements in common while undergoing a variety of independent developments.
Because of the absence of archeological data, there is considerable skepticism toward scholars who use a family tree as part of their theory to explain the development of the Semitic languages.
2. Newer methods of investigation, such as dialect geography and the study of contact points with other cultures, show that the traditional classification of the Semitic languages into 5 principal languages (Akkadian, Canaanite, Aramaic, Arabic, and Ethiopic) is inadequate.
There is growing recognition that before the first millennium B.C., the Northwest Semitic languages may not be simply seen as two distinct language groups, Canaanite and Aramaic, but rather as a group of languages with various features in common.
How Hebrew began still remains a mystery. Scholars are discovering that early language development was more
complex than traditionally thought. The lack of archeological evidence has prompted some to impose hypothetical
historical reconstructions as valid theories, which some destructive critics have used to deny the Mosaic
authorship of the Pentateuch. At this time, the lexicon and orthography of the Pentateuch provide more positive evidence for an earlier dating in the second millennium B.C. than for a later dating in the first millennium B.C.; the evidence from lexical anachronisms is presently insufficient.
Clouds of sulfuric acid fill the sky as winds race around the dry planet. Although there are traces of water vapor in the atmosphere, the high temperature boils away any water near the surface. There are no beaches, forests, or cities, as this land is dead and inhospitable. If oceans ever existed, they have entirely evaporated by now. The temperature on a nice day is hot enough to melt lead: it's a blistering 400 degrees C (about 750 degrees F). The atmosphere, consisting of 95% carbon dioxide, is so heavy that it crushes our bones before we ever get the chance to explore the volcano-covered plains or fiery mountain ranges.
The scorching weather on Venus is attributed to the greenhouse effect. While energy from the sun penetrates the atmosphere, the heat is trapped on the surface. Carbon dioxide (CO2), a greenhouse gas, plays a large part in keeping the surface at such a high temperature, and it is a component that exists in our own atmosphere here on Earth. Fortunately, our atmosphere is not 95% carbon dioxide, but our CO2 levels are increasing at a faster rate than ever before in human history. Our concentration of CO2 is currently about 380 parts per million (ppm), an increase from 280 ppm prior to the Industrial Revolution.
The increase of CO2 levels - and therefore an increase in temperature - is not just a problem for scientists. In fact, our individual and societal lifestyles actually contribute to the increase of CO2, making global warming a problem for everyone - including the children in our classrooms today. The words "global warming" and "greenhouse effect" are part of our popular culture; however, if students listen to popular media, they receive information that often lacks scientific fact and is enhanced by the opinions of a misinformed public.
As English teachers, we are able to help students wade through misinformation and reach a deeper understanding of a specific topic. We help students recognize cause and effect, develop the skills to make predictions based on prior knowledge, apply subject-specific vocabulary, accurately separate fact from Crichton, and support a thesis with sound research. We are also able to nurture the poet and the story teller, to create a space where unique voices can be heard, and to provide the literary resources students need in order to make connections with the world outside of the classroom. Global warming is an issue that involves us all, existing within and without bells and school buses. Why shouldn't an English teacher teach global warming?
The science can be confusing, especially for those of us who - dedicated ruthlessly to our own field - stayed away from math and science in college, fearing that too many numbers would skew a blank verse poem. However, the urgency of global warming has encouraged the scientifically-minded to write books geared for the general public. Reading science does require a different kind of literacy than reading a novel - all the more reason for English teachers to teach it.
A Planet Worth a Thousand Words is a writing curriculum with a focus on global warming. It is primarily for high school seniors, although it could easily be adapted to fit other grade levels. Our New Mexico standards require that seniors research and present issues of public concern, and our school promotes a curriculum that encourages seniors to think about and contribute to the world outside of high school. Our English courses are year-long 50 minute classes; we emphasize writing for one semester and reading for the other, although both reading and writing are of course interrelated. The global warming curriculum will last for nine weeks (a teacher may adapt it for a shorter time-span) and explore poetry, short stories and speeches. Students will engage in journaling and 1-page essay writing throughout the quarter, and they will pursue a specific research question about global warming as they build and refine their writing skills.
As most of the course will cover the fundamentals of global warming, students will be able to use research to understand at least one aspect of the issue in depth. Rather than writing a typical research paper, students will write a creative story that is based on their findings. They will also work in groups to educate the community about one specific area of global warming. This final presentation will be a culmination of scientific knowledge, creative and comprehensive writing skills, and the ability to make a difference in one's community.
Before delving into specific aspects of global warming, it is important to understand the fundamentals. Although I have used books and articles as resources, the majority of scientific information presented in this unit is what I learned from Dr. Sabatino Sofia and my colleagues at the Yale National Initiative Global Warming seminar.
Fundamentals of Global Warming
Most people can easily identify changes in the weather. A bright, sunny day with a few thin clouds stretched across an otherwise blue sky can change into a cold day of rain and hail by tomorrow. The changes in climate, however, are too small to be detected by the daily weather-watcher. Climate is the average of weather over a long period of time. For example, a region may be typically humid, snowy, rainy or mild. According to the EPA (2004), climate is what you expect, and weather is what you get. Our climate temperature has increased about 1 degree F in the last century, warming at a faster rate over the last two decades.
When we speak about climate, we are speaking about the impact of solar energy. We know that the sun warms the air and that hot air rises. The air brings moisture from the sea, and as the air cools, the moisture condenses to make clouds. Some of the solar energy is re-radiated from Earth and back into space while other energy is trapped within our atmosphere.
The climate system is a link between the atmosphere, oceans, ice sheets, rocks and sediments, and living organisms (Buchdahl, 1999). Nothing within the system operates independently. For instance, the Gulf Stream in the North Atlantic contributes to the warm weather in Western Europe. The stream is like a conveyor belt with warm water on top and cold water on the bottom. The cold water is saltier and sinks to the bottom at five billion gallons per second, pulling the warm water in the opposite direction. As the ice melts, fresh water dilutes the salinity and decreases the amount of water that sinks, halting the conveyor belt flow (Motavalli, 2004). The last time the Gulf Stream ceased to flow, about 10,000 years ago, Europe endured an ice age that lasted about 1,000 years (Gore, 2006).
Climate models show that global warming produces extremes in temperatures, so not only do summers become hotter, but winters become colder. Global warming can actually cause an Ice Age in some regions. With this in mind, it is important to remember that climate change is not about bad weather changing into a nice day, or even about finding relief after a long summer with the long-awaited cool temperatures of autumn. Climate change is about altering temperatures for hundreds or thousands of years.
If we could strip Venus and Earth of their atmospheres and place them side by side, we might notice some similarities. They have the same origin, they are similar sizes, and they share a similar distance from the sun. Even as we add the primitive atmosphere of both planets, we notice what they have in common. Both atmospheres originally contained hydrogen, helium, methane, ammonia, nitrogen, neon, and a small amount of argon. A major difference between the two planets occurs when we note that Earth's original atmosphere has escaped. Our current atmosphere is influenced by volcanic eruptions, which contain large amounts of nitrogen, carbon dioxide, and water vapor.
The atmospheres of Venus and Earth both contain carbon dioxide. Fortunately, the rocks of our own planet were able to absorb a great deal of CO2, whereas the rocks on Venus were never cool enough to do so. Venus has always been slightly hotter than Earth. The principles of CO2, however, remain the same on both planets.
CO2 is a greenhouse gas, along with water vapor, methane, nitrous oxide, ozone, halogenated fluorocarbons, perfluoronated carbons, and hydrofluoronated carbons. To understand the effect of greenhouse gases, we can imagine walking into a greenhouse full of plants. Here, we can look through the transparent glass and see the sun and everything else outside of the greenhouse, and we can feel the heat that is trapped inside. It is true that we need greenhouse gases to sustain life. Of course, an extreme excess of such gases would create an uninhabitable planet, perhaps one that resembled Venus a bit more closely. Without its current greenhouse effect, the average temperature of Venus would be a balmy 20 degrees C (70 F) instead of the actual 400 C (750 F). Earth, in turn, would be -25 C without the greenhouse effect. In order for a planet to be habitable, the temperature must be between 0 and 100 degrees C. A healthy greenhouse balance gives Earth's climate an average temperature of 10 C (50 F).
The greenhouse effect acts as a kind of gateway for the sun's radiation. The high-energy short waves of radiation come from the sun and pass through our atmosphere. Earth re-radiates the absorbed portion of this energy back toward space as infrared radiation, or heat, but the greenhouse gases trap most of these long, low-energy waves in our atmosphere. This is why we can see visible light pass through the glass of a greenhouse while the heat stays inside. The windows in our homes also let in visible light while trapping infrared radiation. If they didn't, we'd have to seal up all of our windows during the winter.
Not all of the radiation makes it close enough to the surface to become re-radiated heat. Some of the radiation is reflected back into space by clouds and ice. This reflectivity is known as albedo. Fashion experts have an intrinsic understanding of albedo - after all, this is why people wear white (or light-colored) clothes in the summer and black (or dark-colored) clothes in the winter. While the white shirt reflects the radiation from the sun and keeps us cooler, the black shirt absorbs the radiation and keeps us warmer.
About 39% of radiation is reflected (the albedo is 0.39), while the remaining 61% is absorbed by the surface and emitted as infrared radiation. With an albedo of 0, our planet would be a perfect black body. In other words, Earth would absorb all of the radiation and reflect absolutely nothing. Our albedo depends on clouds, snow and ice. The problem with clouds is that while they contribute to the planet's albedo, they also contribute to warming the planet. Clouds trap infrared radiation. As the planet warms, water evaporates and forms clouds that decrease absorption of solar energy. On the other hand, as the planet warms, the ice melts, increasing the absorption of solar energy. As the ice melts, the albedo decreases. As the albedo decreases, the planet warms. As the planet warms, the ice continues to melt. This is part of the cycle we know as global warming.
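These percentages translate directly into an estimate of the planet's temperature in the absence of a greenhouse effect. Here is a minimal sketch, assuming the standard Stefan-Boltzmann energy balance and a solar constant of about 1361 W/m2 (both standard physics values, not figures given in this unit):

```python
# Minimal sketch: a planet's effective (no-greenhouse) temperature
# from its albedo, via a Stefan-Boltzmann energy balance. The solar
# constant and the formula are standard physics assumptions, not
# figures taken from this unit.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0  # sunlight at Earth's distance, W/m^2 (assumed)

def effective_temperature_c(albedo: float) -> float:
    """Absorbed sunlight, averaged over the sphere, is S * (1 - a) / 4.
    Setting this equal to the emitted radiation sigma * T**4 and
    solving for T gives the no-greenhouse temperature."""
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0
    t_kelvin = (absorbed / SIGMA) ** 0.25
    return t_kelvin - 273.15

# With the albedo of 0.39 quoted above, this gives roughly -27 C,
# close to the "-25 C without the greenhouse effect" figure earlier
# in this unit.
print(round(effective_temperature_c(0.39)))
```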
CO2, a greenhouse gas that traps heat, increases as our growing population depends more and more on the use of fossil fuels to run our machines. Not only is CO2 increasing, it is increasing exponentially. This is the difference between 10 x 2 and 10 raised to the 2nd power. CO2 is a waste product of fossil fuels such as coal, petroleum and gas (Flannery, 2005). We rely on the remains of decomposed plants and animals to generate electricity, heat our homes, and provide transportation. It is difficult to imagine a life without fossil fuels, but to continue as we are will turn short-term gratification into long-term devastation.
So far, we have relied on oceans and trees to absorb our CO2 emissions. Tim Flannery, author of The Weather Makers (2005), describes the problem occurring with our oceans today. The North Atlantic, he writes, contains almost a quarter of the carbon we've emitted, while it only constitutes 15% of the ocean's surface. He also notes that as the ocean warms, it has less ability to absorb CO2, much like a warm carbonated drink falls flat. The oceans also become more acidic as they take in more CO2. The more acidic they become, the less CO2 they can absorb.
Trees absorb CO2 through photosynthesis. However, as forests are burned, more CO2 is released into the air. This also occurs as trees die and decompose. Planting more trees is certainly better than burning them or cutting them down to rot, but this action will not resolve the long-term problem. While trees temporarily absorb CO2, they unfortunately decrease the albedo.
CO2 levels will double; it is not a matter of whether they will or won't, but a matter of when. If we take action now to decrease our use of fossil fuels, and therefore decrease the amount of CO2 that we release into our atmosphere, the levels will double in two centuries. If we do nothing at all, the CO2 levels will double in forty years.
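The two timelines imply very different growth rates. A small sketch, assuming simple exponential growth purely for illustration (this is arithmetic, not a climate model):

```python
# Sketch: the annual growth rate implied by a given doubling time,
# assuming simple exponential growth. An arithmetic illustration
# only, not a climate projection.
import math

def annual_growth_rate(doubling_years: float) -> float:
    """Exponential growth doubles when exp(r * t) = 2, so r = ln(2) / t."""
    return math.log(2) / doubling_years

for years in (40, 200):
    rate = annual_growth_rate(years)
    print(f"doubling in {years} years implies ~{rate * 100:.2f}% growth per year")
# doubling in 40 years implies ~1.73% growth per year
# doubling in 200 years implies ~0.35% growth per year
```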
While some of us may think of warmer temperatures as not having to scrape so much ice off our windshields, we must come to terms with the destructive ramifications to our planet as a whole. We must also remember that we are not talking about a few days of milder weather during winter, but about a major climatic shift that will considerably alter the planet in ways that span beyond our lifetime. It may be easy for certain members (and leaders) of our society to shrug apathetically when asked to consider the world we are leaving the children in our classrooms today, but as educators, we cannot afford such self-centered thinking.
To put it simply, the icecaps are melting, polar bears are drowning, people who depend on the ice are already struggling to maintain their lives, and animals are becoming extinct. We face rising sea levels, an increase in storms, and a change in rain distribution that could destroy the crops of Middle America along with other food-sources around the world. We are coming to a time when floods will displace large populations; survivors will migrate to drier areas while others drown or die from diseases that were once foreign to their homelands. Perhaps SUVs, like cockroaches, will persist, trampling over muddy lands where magnificent glaciers once stood.
It is overwhelming. It is also difficult to believe, especially when I can look out my window over high desert Santa Fe and see that the mountains are as brown as they ever were, or that the rain that never comes finally came days ago and is still splashing tufts of yellow grass into a bright shade of green. However, there is evidence today of the effects of global warming. Elizabeth Kolbert, author of Field Notes from a Catastrophe (2006), describes the gradual extinction of the golden toad due to climate change, the migration of birds and butterflies as temperatures increase, the thawing of permafrost just below the Arctic Circle, the acidity of oceans, the disappearance of sea ice, and the unavoidable relocation of villages due to melting ice and an increase of storms brought on by warmer weather. With all that we know, how is it possible to deny the existence of global warming?
There are uncertainties. Scientists know that human activity is increasing the level of greenhouse gases, but they do not know where the increase will stop. They know that global temperatures will continue to rise, but it is not clear whether the increase will be 1.4 or 5.8 degrees C, or somewhere in between. They know that warm water causes an increase in hurricanes, but it is not possible to tell if specific hurricanes today are due to global warming. They know that the sea levels will rise, but they don't know exactly how high. Although scientists know that global warming will impact wildlife, natural resources and human health, they cannot predict the exact outcome, especially when looking at specific local regions (EPA, 2002).
However, there is no doubt among scientists that global warming exists, and that human activity is mostly responsible. Scientists use ice core samples and trapped air bubbles in amber to determine the atmosphere of our past, allowing comparative measurements to be made today. CO2 measurements are made in the troposphere above a mountain in Hawaii, where the atmosphere is less contaminated. Scientific data gives us a low number and a high number for the range of possible outcomes. Even the lowest projected global temperature rise of 1.4 C by 2100 is a greater rise than seen in the last 10,000 years, according to the IPCC (EPA, 2002). There are degrees of uncertainty within each projection, but all scientific projections show some degree of global warming.
Some of the uncertainty involves the use of climate models. By their nature, climate models must simplify data. They are tested to determine past climate changes, and they are useful as estimation tools. Different researchers use different models to test different sets of data, and therefore create different results. Gradually, the climate models are beginning to converge, but for now they produce uncertainties. Something worth noting is that published scientific sources will include room for probable error. These are not mistakes; they are uncertainties. If a source does not include uncertainties, but insists that, say, oceans will rise 80 feet, it is probably not a valid scientific source.
Uncertainty about the degree to which global warming occurs, or about what impact it will have, is not the same as questioning whether global warming occurs or whether it will have any impact at all. This would be like not knowing what a friend had for lunch today and therefore arriving at the conclusion that this friend never eats lunch.
Uncertainties are sometimes used by non-scientists to confuse and mislead the public. According to some of our policy makers, there is too much confusion about whether our friend had a sandwich or bowl of soup, so that means our friend never eats lunch and never will, and global warming does not exist. On the other hand, it is just as unrealistic to assume that our friend will explode from too much food. It is useless and incorrect to say that we are doomed.
One of the drawbacks of being aware of the world around us is that it becomes difficult to live life as usual. We may think about CO2 levels as we fill our car with gas or drive by a power plant, but how can we turn our thoughts into actions - or feel confident that our actions make even the tiniest dent in global warming?
On an international level, the Kyoto Protocol, an amendment to the United Nations Framework Convention on Climate Change (UNFCCC), is an effort to mitigate the anthropogenic, or human created, greenhouse gases. Countries must commit to reducing their emissions of CO 2 and other greenhouse gases or engage in emissions trading. Countries that exceed the allotted emissions would have to buy emission credits from a country that stays below the set limitations.
Scientists have also considered renewable energy resources such as windmills, hydroelectric power, solar power, or nuclear power. Unfortunately, windmills require steady wind to work consistently, hydroelectric power involves risks in damming and redirecting waters, solar panels need reliable sunlight, and nuclear power produces nuclear waste. It is also difficult to transport wind, hydro and solar power over long distances. However, current development of renewable energy resources for large and small areas will help reduce our emissions of greenhouse gases.
As individuals, we have plenty of opportunities to do our small part. For instance, we can drive something other than SUVs. At home, we can use compact fluorescent lights rather than incandescent bulbs, as the latter bulb wastes more energy as heat and does not last as long as a compact fluorescent light. We can keep our thermostat slightly cooler in winter and warmer in summer in order to save energy, and reduce the amount of electricity and gas we use in general. Students and teachers can find more suggestions in the back of An Inconvenient Truth by Al Gore and in renewable energy books such as Smart Power by William H. Kemp. In fact, some students may find it interesting to pursue renewable energy resources as the focus of their research.
Why do students want to learn? Why do they study so hard and pursue their education with so much passion? I ask these questions out of neither naiveté nor luck, but because these questions force me to think about education in a very different way. I am no longer asking what I can do to "hook" or convince my students that something is worth their attention - if they can just hang in there for the next few days. Perhaps these are the questions we should be asking ourselves, as educators, in order to understand how to engage students.
Why do students want to learn? Most likely it's not to pass a test - unless they are preparing for a placement or exit exam. They get something from those tests: a chance to either skip ahead or move on. What do they get from most of the tests that they take? Tests, especially standardized tests in the business of education, have a lot to do with the school and very little to do with the child. Students have to take them, but that's not why they learn.
Learning is not always an intellectual pursuit. Robert Sylwester writes that emotion "drives attention, which drives learning and memory" (1995, p. 72). If we don't care about something, we are not likely to give it our attention. Or, in the case of feeling alienated from the learning experience, we may choose to avoid learning altogether. It is possible to feel alienated in at least two ways; one type of alienation has to do with the topic or subject. We missed a step somewhere and now we are not able to connect the words or numbers in a way that constructs meaning. The second form of alienation occurs when we are acutely aware that nothing in the immediate environment has anything to do with us. We may feel deliberately ignored or blindly excluded. The first case of alienation is easier to overcome.
When we learn, we feel a sense of connection to something outside of ourselves. Sometimes we can feel ourselves change as we learn. Maybe we don't feel it until later, but a part of us knows it's coming. There's a click or a flash, and suddenly we're somewhere we've never been before, but no matter what, we're connected. Imagine the time you were touched by a poem that you finally understood. Or imagine learning how to drive a car (a hybrid with high gas mileage, of course), or how to solve a computer problem on your own. You arrived on the other side, and you were connected to the poem or the car or the computer. We want to learn so we can be a part of the world.
Why do students want to learn about global warming? Students are already a part of global warming, as we all are. To not understand the connection is to not understand global warming. It is up to us to show students how they are connected. Not to persuade them or bribe them, but to show them. Students, like us, feel that they are a part of the world when they have something to contribute. Now they will be able to offer their knowledge to their communities and make very important changes to the world where they belong.
Of course, learning cannot depend on emotional connection alone. Once a connection is made, students must understand new information from expert sources, be able to apply their knowledge and practice new skills, and integrate the learning experience with their own lives well enough to make new connections. In other words, the new knowledge becomes prior knowledge. Students should be able to take what we teach them to a new level and hopefully pass this knowledge to others. This cycle follows Bernice McCarthy's 4Mat approach to learning and teaching.
The lessons in A Planet Worth a Thousand Words follow this cycle with a conscious attempt to honor various learning preferences. The act of blending science with language arts may already engage some students who are more scientifically inclined, but the unit is not limited to lectures, research papers, or worksheets. As Sylwester points out, "Doing worksheets in school prepares a student emotionally to do worksheets in life" (1995, p. 77). One purpose of the various lessons is to empower students to be able to take the issues of global warming into their own hands. Nonetheless, this is a curriculum unit for English, and writing is at the core. In one way, the science of global warming is a vehicle for the mastery of writing; in another way, writing is a springboard for the mitigation of global warming.
Gardner's Multiple Intelligences are considered throughout the unit. Assessment tools include the linguistic intelligence through essay writing, poetry, speeches, journals, interviews and reports; logical intelligence through making predictions and calculations, analyzing data, and understanding climate models; spatial intelligence through visuals, illustrations and maps; kinesthetic through hands-on experiments; naturalist through demonstration of environmental sensitivity and research of nature topics; intrapersonal through personal reflections and independent work; and interpersonal through cooperative learning (Chapman and King, 2005). Students may also use rhythmic intelligence in poetry, and they may choose to include musical compositions and recordings in a final presentation.
Students will learn about the fundamentals of global warming as they write poetry, a short story based on research, a speech, and finally a creative presentation for their peers and community members. Throughout the unit, students will keep a journal and write one-page reflection papers to evaluate their learning. They will know ahead of time that the weekly reflections will serve as notes for their unit essay exam: a culminating personal response paper that discusses what the student learned over the quarter.
The unit begins with the student. What does the student already know? How does global warming affect the life of the student? Why should he or she care? Rather than telling the student the answers to these questions, I begin the unit with a game of survival. The details of the game are outlined in the lesson plans below.
Once students make an initial connection, they receive expert knowledge about global warming (see Fundamentals of Global Warming) and learn specific writing skills. In each section, students apply what they've learned about global warming and practice new writing skills, beginning with climate and poetry.
What is the ideal temperature? According to the student, which state has the best weather or climate? Which place has the worst? What is so ideal or terrible about specific weather conditions? Students begin by journaling about their own life preferences and opinions about weather. Students should also learn the difference between climate and weather—a discussion about the different climate conditions in various countries and states would help to illustrate weather versus climate.
Students may use the knowledge of their own climate conditions in an "I Come From" poem. The details of this poem are included in the lesson plans below. What makes the climate? Why does temperature change? Why do we have seasons? Students survey parents, teachers and peers and report on their findings. This would be an excellent time to clarify that we do not have seasons due to the Earth's distance from the sun, but because of the tilt of the Earth on its axis. We are actually closest to the sun in January. Students should know that climate is a link between various elements on Earth, including clouds. An experiment that creates a cloud in a bottle will help students understand how clouds are made (this experiment can be found on several internet sites). Clouds do two major things that affect the temperature in opposite ways: they block out the rays of the sun (radiation) and they prevent the heat of the Earth (infrared radiation) from escaping.
Students expand their thoughts about nature in a lune and haiku. Lunes are three-line poems with three words in the first line, five words in the second, and three in the third. The words can be any length. Lunes are a great warm-up for a haiku, as lunes are similar but not as restrictive. Students begin a glossary of literary terms: they learn about personification by picking an object in nature and giving it human characteristics. They also include onomatopoeia by creating a sound that is connected to the object. Students are encouraged to use their five senses (imagery) to describe the object before they write the lune. They continue to expand their ideas about climate or nature as they write a haiku. Examples of the haiku may not always follow the five-seven-five syllable pattern, but they do tend to create a picture, and they often capture the beauty of a moment. Lunes and haiku teach students to be precise with their words.
At this point, students learn about the Gulf Stream. They learn about how it functions like a conveyor belt and how it affects the weather of Western Europe. Students predict what might happen if the water stopped flowing, and they learn how global warming can cause temperature to become more extreme: while summers become hotter, winters become colder. In their journals, students speculate what it might be like to live in an Ice Age. Viewing specific sections of movies like The Day after Tomorrow or Ice Age might help students visualize, although inaccuracies for the sake of entertainment should be addressed.
Students use a line from their journal to begin a pantoum. A pantoum demonstrates the use of a pattern: the second and fourth lines of the first stanza become the first and third lines of the second stanza, the second and fourth lines of the second stanza then become the first and third lines of the third stanza, and so on. It is an oral folk form from Malaysia, and it allows students to move forward by building on previous lines. Students can also explore the use of the end-stopped line, enjambment and caesura while writing about climate change.
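Because the pantoum's repetition scheme is purely mechanical, it can even be written out as a tiny program. The sketch below is a classroom illustration (the function name and sample lines are invented), not part of the original lesson:

```python
# Sketch of the pantoum's repetition scheme: lines 2 and 4 of each
# stanza return as lines 1 and 3 of the next, with new lines filling
# positions 2 and 4. Names and sample lines are invented.

def next_stanza(stanza, new_line_2, new_line_4):
    """Given a four-line stanza, build the stanza that follows it."""
    return [stanza[1], new_line_2, stanza[3], new_line_4]

stanza_1 = ["line A", "line B", "line C", "line D"]
stanza_2 = next_stanza(stanza_1, "line E", "line F")
print(stanza_2)  # ['line B', 'line E', 'line D', 'line F']
```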
The Story of Atmosphere, Fossil Fuels and Consequences
In this section, students work toward writing a short story about global warming. Unlike most short stories, these stories include a works cited page. The research they do may contribute to their final presentation, but for now they gather scientific information about global warming and use it as the foundation of their creative story. Hopefully, by writing a story instead of a typical research paper, students will see that the point of research is to develop further understanding, not simply to write a research paper for class.
Before beginning the story, students conduct another jar experiment - only instead of creating clouds, they create a greenhouse and an albedo. The details of this activity are included in the lesson plans. Why is Earth habitable while Venus is not? Students compare the atmospheres of Venus and Earth and learn about their origins and current similarities and differences. What are the different levels of the Earth's atmosphere? What elements of the periodic table make up the atmosphere? It may be useful for students to keep notes on these elements and some of their "practical" uses. Students also learn how windows let in the sun's radiation while trapping infrared radiation, and why white clothing is cooler in the summer than black clothing.
As students continue to learn about atmosphere, they examine the uses of plot, characterization, setting, dialogue, and theme by reading short story examples. Students use what they've learned about the effects of global warming to describe the setting of their original stories. They write down ideas for plot: exposition, inciting incident, rising action (conflict, complications, dramatic climax, and crisis), technical climax, falling action, and denouement.
Students take a carbon footprint survey and other online assessments that estimate the amount of carbon they are emitting into the atmosphere. Students learn what fossil fuels are and how levels of carbon dioxide increase due to the use of fossil fuels. Students also discuss how we might decrease our CO2 emissions.
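A classroom-scale version of such an estimate can be sketched in a few lines. The emission factors below are rough, hypothetical illustrations; real calculators use calibrated, locally specific figures:

```python
# Toy carbon-footprint estimate of the kind students meet in online
# calculators. The emission factors are rough, hypothetical values
# for illustration only.

EMISSION_FACTORS = {
    "gasoline_gallons": 8.9,    # kg CO2 per gallon burned (approximate)
    "electricity_kwh": 0.5,     # kg CO2 per kWh (varies widely by grid)
    "natural_gas_therms": 5.3,  # kg CO2 per therm (approximate)
}

def annual_footprint_kg(usage):
    """Sum each activity's annual usage times its emission factor."""
    return sum(amount * EMISSION_FACTORS[item] for item, amount in usage.items())

student_usage = {
    "gasoline_gallons": 400,    # a year of driving
    "electricity_kwh": 3000,    # household share of electricity
    "natural_gas_therms": 300,  # heating
}
print(f"~{annual_footprint_kg(student_usage):,.0f} kg CO2 per year")
```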
In the meantime, students create a character. They examine elements of characterization: developed characters (direct & indirect presentation) & stock characters (caricature, dynamic character, static character, protagonist, and antagonist) and add various characters to their story. They describe how their characters are affected by global warming. Following characterization, students focus on creating a theme for their story and including elements such as a flashback and foreshadowing. While a flashback may allow the reader to understand how a character ended up in a certain situation, foreshadowing allows a reader to make predictions. Both elements of writing may be connected to consequences.
As students learn about the consequences of global warming, they reflect on what individuals can do to help mitigate climate change. They describe climate change from the point of view of an animal, migrating person, plant, melting ice, or anything else that is affected by global warming. Students learn about different points of view (omniscient, limited omniscient, first person, and objective) and choose a point of view to use for their story. In another writing exercise, students write a dialogue between the character from the point of view exercise and another character who contributes to global warming. What would they say to each other? How would they say it? Students learn about colloquialism and dialect. They apply dialogue techniques to the characters in their story.
Students continue to research global warming as they engage in exercises that focus on specific elements of the short story. Once they have the information they need, they write a story from the beginning to end (according to the teacher's specifications, such as length and format). Students workshop the story with their peers and read excerpts to the class. The research they've completed is a step toward their final presentation. However, information is useless unless they know how to present it.
Speaking of Controversy and Mitigation
Why do some people deny that global warming exists? What might motivate their opinions? Some people believe we are doomed. What motivates the opinions of these people? Students begin again with their own opinions and experiences. They will understand that politicians do not always consult with scientists before forming opinions, and that the general public may also form opinions without having correct information. Students will look at the Kyoto Protocol and discuss the decisions of the United States and Australia not to ratify it. They will also look at the cities in the U.S. that have decided to "ratify" it in spite of the nation's decision.
Students will also look at climate models and interpret meanings. They will understand what a scientific error really means, and how it's been misconstrued in order to confuse the public. Students may come to a clearer understanding of how the increase of the average temperature is significant, even at its lowest projection, by thinking about how their bodies feel with only an increase of a couple of degrees. Although 99 and 101 are only two numbers apart, we can feel the difference in our bodies. An increase of a couple of degrees may determine whether or not we are able to get out of bed. Imagine if we projected an increase in body temperature that was even higher - perhaps a mere four or five degrees higher than 99 F. Who wants to feel that sick?
If possible, students view An Inconvenient Truth and analyze Al Gore's presentation techniques. They examine his rhetorical appeals: ethos (character of speaker), pathos (quality that stimulates pity or sorrow in the reader), logos (the speaker's use of logic), and nomos (the identification with the audience). Students also read example speeches. Speaking of Earth, edited by Alon Tal, is an excellent collection of environmental speeches. In order to practice giving speeches, students choose one speech from the book, analyze the rhetorical appeals, and determine how the speech should be presented. They work with a partner and practice reading the speech effectively. The teacher may present a rubric in order to clarify an effective presentation.
Once the students have had practice reading a speech out loud, they prepare to write a speech of their own that touches on aspects of the controversy or suggestions for mitigation. As they prepare to write their original speech, they learn about mitigation through books (see bibliography) and online resources. Students make individual changes in their own lives to help mitigate global warming, whether it involves changing light bulbs, using green energy, driving less, or other suggestions they find in their research. They work in groups to report on various suggestions for mitigation, and, through analysis of the benefits and drawbacks, determine which form of mitigation is best for the country and for the specific state. This activity may lead to a classroom debate. Students will have to use evidence to support their opinions.
Although students write and deliver individual speeches, they work with a partner as they did for the earlier practice speech. They may use note cards and visuals. This speech prepares students for the final presentation.
Climate Change Competition
The final presentation allows students to synthesize new information and show what they know by educating others. They should be thinking about what they will do throughout the unit, and understand that assignments along the way may be applied to the final presentation. However, rather than an overview of global warming, students should focus on one research question in order to increase the depth of their knowledge. Suggested research questions are included in the appendices and divided by sections. It may be beneficial to present the questions throughout the unit (for instance, present suggested questions on climate at the end of the Climatic Poetry section) so students can begin to refine their ideas.
The presentation is a collaborative effort: students work in teams to compete against others. It is similar to a poetry slam, although this competition has only one round. A group of judges - perhaps members of the community - evaluate each team's accuracy, clarity, and creativity. Students must use scientific facts as a foundation for their presentation. They must be able to communicate their message in a way that is understood by their audience, and they must do so in a creative and engaging manner.
Students may choose to narrate a short video, give a slide-show presentation, create a presentation through computer graphics, give a dramatic reading of original poetry with visual displays or art, or use any medium that combines verbal and visual skills. They should be encouraged to be as creative and original as possible.
Each team should focus on a different research question. This way, there will be no duplicate presentations and the competition as a whole will show a broader range of global warming issues. The competitive aspect of the presentation will allow students to get involved in something that goes beyond the classroom. Awards contributed by local business would involve the community and increase the incentive for the students.
In the end, students will understand how their own lives are intricately connected to global warming. They will be able to use writing to communicate coherently and creatively, and they will be able to take these skills with them as they exit the doors of high school. Our students will know that the Earth is their planet, too, and they will show the rest of us how to live on it.
Human Needs Game
What do I need in order to live?
I can, reacting to selections and reflecting on personal experience, develop an argument to support a position; I can explain my ideas in a clear, logical and comprehensive manner; and I can analyze concepts and perspectives and relate these to my own life.
The following activity is based on a leadership game I played when I was a student. Of all the things I learned in high school, this is one of the activities that I remember most. The game was originally geared to promote international relations, but I am adapting it to help students understand what they need to live on both a personal and a universal level. The game is simply a first step that leads to more in-depth discussion and exploration.
This exercise may do more than introduce students to the global warming curriculum, as it also promotes community-building in and out of the classroom. Students are asked to take care of their team mates while working together to achieve a common goal. If this activity is done at the beginning of the year, it may serve as an impetus for classroom bonding.
Goal: Everyone on the team must have enough food.
Obstacle: All teams are pre-equipped with food, utensils, plates, cups, etc., but the distribution is uneven. For instance, one team has all the cups but nothing to put in them. Another team has plenty of at least one kind of food, but no way to eat it. One team has very little of anything while another has more than they need.
Materials: Enough cups, beverages, plates, plastic-ware, napkins, and food for the class. It doesn't have to be anything fancy. It may be possible to arrange for a class potluck, where students can contribute. It is worth noting that symbolic items such as cards or chips do not elicit as much emotional attachment as food.
- The whole team must have enough food within 15 minutes.
- Only one person from each team may negotiate for food. The negotiator is responsible for communicating the needs of his or her team to the other negotiators.
- The desks are arranged in clusters and allotted food is distributed prior to the arrival of students. (In the case of a potluck situation, arrange for "Distributors" to distribute specific items to each team.)
- Students sit in teams.
- The teacher explains the rules and objectives.
- Students choose a negotiator and begin the game. The teacher helps students keep track of remaining time.
After the game, students discuss and assess what happened. Did everyone get enough food? In other words, were basic lunch (or snack) needs met? How do people feel after the experience? Did the negotiators have a hard time taking care of their team? If so, what were the difficulties? What changes could be made to make the experience better? The discussion will change based on the overall experience, as the results of the game will vary with different groups. Some participants will aggressively go after the food while others seem to easily give up their share. The class may want to observe how different teams, particularly the team that had more than they needed, responded to the experience. In some cases the team is generous. Usually, however, this team does not see a need to even negotiate with the others, as they already have what they want.
This game parallels different nations. The U.S. is typically represented by the team that has more than it needs. Later in the unit, during a lesson on the Kyoto Protocol, it would be wise to refer back to this activity. How does the U.S. handle international negotiations regarding global warming? Why might the U.S. respond as it does?
Ask students to describe what they need in order to live in a journal. We saw what teams needed in order to satisfy basic lunch needs. What do students need in their own lives? After they've written for a few minutes, ask students to pick one or two things and describe why they need them. What makes these things so important?
Ask volunteers to discuss what they need in order to live. As a class, discuss basic human needs. Mention things such as food, air, water, crops, sun and rain.
What if we knew that floods were going to wipe out certain cities while other places suffered droughts with no hope of rain, and even others froze in a sudden Ice Age? What if we knew that people were going to suffer from new diseases, that polar bears and other animals were going to become extinct, and that storms and hurricanes like Katrina were going to become more frequent? What if we knew all of this, and we knew that we could stop it, or at least slow it down? What would students do? What if they knew that they, like every other person in their city, were putting something in the air to make such a life a reality? What would they do then?
Explain that the world we just imagined illustrates the possible effects of global warming. Ask students what they know about global warming and examine any misconceptions they may already have. Let students know that global warming is caused by an increase of greenhouse gases, and that they will learn more about how to keep some of the basic things that we need in order to survive on this planet.
Assessment is based on discussion and journaling. The student shares what he or she already knows and perceives about essential elements of survival, community attitudes, and global warming.
"I Come From" Poem
How does my environment affect my identity?
I can reflect and respond to texts for complexity, self-significance, and cultural perspectives; I can apply appropriate metaphorical, grammatical and rhetorical devices to my writing; and I can evaluate how well I use facts, ideas, tone, and voice.
Students think of a weather condition (hurricane, snow, drought, hail, etc.) or a place with a specific climate that they are most like. They write, "I am like…." and finish the thought, creating a simile. Show how to change the simile to a metaphor by changing the line to "I am…" Students then write five lines, one for each sense, to show how they are like the weather condition. In this way, they are using imagery. They should be encouraged to use their imagination.
The class reads example poems about identity. "Where I Come From" poems can be found through an internet search. Poems about identity (along with nature and place poems) are also included in From Totems to Hip-Hop, edited by Ishmael Reed. Note any reference to the influence of music, language, food, family members, environment, history, or expectations of society. Students make a list of things that exist in their own lives.
The use of anaphora, repeating "I come from" at the beginning of each line, may help students write. They also use words and lines from both the metaphor exercise and their list to construct a longer poem. Once the poems are complete, students read them out loud while the teacher listens for specific literary elements such as alliteration, rhyme, or tone. The teacher may return to specific student poems to point out how various literary elements were used. Students take notes and begin to construct a glossary of terms.
Students use specific techniques to write a poem. They also describe the purpose of literary elements in a glossary of terms.
How does a greenhouse work?
I can identify and answer a research question; I can gather and analyze information and synthesize ideas; I can make a claim, list my reasons for supporting my claim, and support my claim with evidence.
The necessary materials include three glass jars, three thermometers, one black cloth, and one white cloth. Students put a thermometer in each jar and seal only the first and second jars, leaving the third open. They cover the bottom of the first sealed jar with the white cloth and the bottom of the second sealed jar with the black cloth. They then place all three jars under direct sunlight for ten minutes.
Students predict what will happen. Will the temperature of one jar be higher than another? Why? They write a lab report that includes a description of the exercise, the procedure they follow, and their observations. They also include an analysis of their results.
The jars' glass will be transparent to the sun's radiation, allowing visible light to pass through. Glass also traps infrared radiation, thus heating the inside of the jar, more so in the sealed jars than in the open jar, where the infrared radiation is able to escape.
The sealed jar with the black cloth on the bottom will trap more radiation than the others, as dark colors absorb more of the sun's radiation. If the jar were a perfect black body with an albedo of 0, it would absorb everything and reflect nothing. In this case, the jar does reflect some radiation, but not as much as the jar with the white cloth. The white cloth at the bottom of the other jar represents a high albedo: white or light colors reflect radiation, so less heat is absorbed.
The atmosphere of the Earth is very much like these jars. The greenhouse effect is like a sealed glass jar that is transparent to visible light while it traps infrared radiation. The Earth is warm due to the greenhouse effect. Oceans tend to function like the black cloth: they absorb more solar radiation and thus add to the heat. The ice, snow, and clouds are like the white cloth. They reflect the sun's radiation. The greenhouse effect increases as we emit more carbon dioxide, and as a result of this increase, the ice is melting. As the ice melts, the albedo decreases, and this contributes to global warming.
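To make the albedo effect quantitative, here is a minimal Python sketch, not part of the original lesson, assuming a bare surface with no greenhouse term that obeys the Stefan-Boltzmann law; the albedo values and function names are illustrative:

```python
# Minimal zero-dimensional energy balance: absorbed sunlight = emitted radiation.
# No greenhouse term is included, so this underestimates real surface
# temperatures; the gap is what the greenhouse effect supplies.
SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight arriving at the top of the atmosphere
SIGMA = 5.670e-8         # W/(m^2 K^4), Stefan-Boltzmann constant

def equilibrium_temperature(albedo):
    """Temperature (K) at which a surface radiates away what it absorbs."""
    absorbed = (1.0 - albedo) * SOLAR_CONSTANT / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

for albedo in (0.05, 0.3, 0.8):  # dark cloth, Earth's average, fresh snow/ice
    print(f"albedo {albedo:.2f} -> about {equilibrium_temperature(albedo):.0f} K")
```

An Earth-like albedo of 0.3 gives about 255 K, well below the observed average near 288 K; that gap is supplied by the greenhouse effect the sealed jars model.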
Students write a lab report that records their observations and findings. Students describe how the glass jars represent the Earth's atmosphere.
The Hadès was a short-range, road-mobile, solid-propellant ballistic missile. Before it was terminated, it represented an attempt to use a tactical missile as a strategic asset in the early years of France’s nuclear program. Originally designed for a range of 250 km (155 miles), the missile later had its range increased to 480 km (298 miles). 1 The missile’s great advantage was its rugged single-stage solid-propellant engine, making it readily deployable along the French borders to repel a possible Soviet attack.
Under Charles de Gaulle, France pursued a nuclear program independent of NATO, the goal of which was to function autonomously and provide France with the ability to escalate conflicts quickly. The threat of nuclear escalation on a tactical level was part of the French land-based deterrent, of which the Hadès, as both a tactical and a strategic system, was an important component. The project dates back to 1975, when it was designated as a replacement for the tactical road-mobile Pluton system. Development began in 1984, followed by flight testing in 1988. 2
The missiles were deployed on transporter-erector-launcher (TEL) vehicles, each carrying two missiles. Relatively small and light, the TELs could move easily along unimproved roads, making the Hadès an easily deployable weapon. The missiles themselves were only 7.5 m in length, 0.53 m in diameter, and 1,850 kg in weight. Despite its small size, the Hadès carried either a potent 80 kT nuclear warhead or a powerful high-explosive (HE) warhead. 3 Powered by a single-stage solid-propellant engine, the Hadès could reach a range of 480 km (298 miles), which meant that the missile could be used against strategic military targets, although it was insufficient to threaten Soviet cities and missile silos.
The Hadès used an inertial guidance system capable of making evasive maneuvers as it approached its target. Although the accuracy of the system remains unknown, reports indicate that a variant was being developed to destroy buried hard targets, using the Global Positioning System (GPS) and digital terminal guidance to achieve an accuracy of less than 5 m CEP. The Hadès trajectory was intentionally kept low so that the aerodynamic control fins at the rear of the missile could alter the trajectory and range during flight and make evasive maneuvers during the terminal phase. 4
In 1991, the Ministry of Defense decided against deploying the Hadès system operationally and limited production to 30 missiles and 15 TEL vehicles. The missiles were originally put into storage so that they could be reactivated given a military conflict in Europe. 5 In 1996, it was announced that the missiles were to be dismantled following France’s new policy of a sea-based nuclear deterrent. 6 In June 1997, the last of the Hadès missiles was destroyed. 7
Last Updated 9/12/2012
- Lennox, Duncan. “Hades.” Jane’s Strategic Weapon Systems (Obsolete Systems). October 13, 2011. (accessed September 12, 2012).
- Global Security. “Weapons of Mass Destruction (See France, IRBM).” July 24, 2011. http://www.globalsecurity.org/wmd/world/france/hades.htm (accessed September 12, 2012).
- Lennox, “Hades.” Jane’s Strategic Weapon Systems (Obsolete Systems).
- Ibid.
- Ibid.
- JAC Lewis, “All Change for France: How the Big Shake-Out Will Shape-Up,” Jane’s Defence Weekly, March 13, 1996.
- Global Security. “Weapons of Mass Destruction (See France, IRBM).”
Overstory #194 - Forest degradation and food security
Forests and the benefits they provide in the form of food, income and watershed protection have an important and often critical role in enabling people around the world to secure a stable and adequate food supply. Forests are important to the food insecure because they are one of the most accessible productive resources available to them.
Deforestation and forest degradation, however, are impairing the capacity of forests to contribute to food security and other needs. This article focuses on tropical forests, which are currently experiencing the highest rates of clearing and degradation. From 1980 to 1990, an estimated 146 million ha of natural forests in the tropics were cleared, with an additional loss of 65 million ha between 1990 and 1995 (FAO, 1997). The area of degraded forest (defined below) is estimated to be even greater (WRI, 1994).
Tropical forests are located in the areas of the world with the highest concentration of the food insecure. They are home to approximately 300 million people who depend on shifting cultivation, hunting and gathering to survive (FAO, 1996a); many are at risk of not consuming enough food to meet their daily energy requirement on a chronic, transitory or seasonal basis. In addition to these forest inhabitants, millions of people living adjacent to forest areas depend on forests for some aspect of their food security.
The full implications of the loss or deterioration of tropical forests for humankind as well as other life forms are not known. What is known, however, is that the loss of forest resources can lead to diminished income- and food-generating capacity for forest-dependent communities, higher rates of soil erosion and siltation of waterways, loss of species and genetic diversity and an increase in carbon emissions which contribute to global warming (Kaimowitz, Byron and Sunderlin, 1998).
It is important to recognize that in addition to these losses, deforestation and forest degradation may also generate benefits: profits from timber or other product sales, forest food products for consumption, or crop and livestock production for subsistence or market. In assessing the implications of forest degradation, it is important to consider how the value obtained compares with the costs incurred, taking into account the full implications for the global community, including non-human life forms.
Moreover, forest degradation represents a transfer of value among different groups. Thus it is necessary to identify how different groups with different risk of food insecurity are affected by the transfer. With this knowledge, better-informed choices can be made of trade-offs involved in forest management.
Defining forest degradation and food security
Food security has been defined by the Committee on World Food Security as economic and physical access to food by all people at all times (FAO, 1983). Embodied in the concept is the recognition that people's ability to consume food may be dependent on their own production as well as on their ability to purchase food, and that sufficiency, stability and continuity of supply are necessary to achieve food security. The definition also implies that food security entails meeting food requirements not only for current populations but also for future generations.
Forest degradation is a more complex and ambiguous concept. Its definition depends on the objectives for which the forest is managed. For example, if the objective is the complete protection of the forest ecosystem and all its components and functions, then economic harvesting of forest products could be considered degrading, even if it is managed "sustainably" - i.e. so as to provide a continuous and steady flow of economic benefits from harvested products. However, if the management objective is to obtain a sustainable yield of wood products from the forest, then harvesting would not be considered degrading.
The definition used in this article, adopted from the definition of forest health used by the United States Forest Service, is that degradation is a loss of a desired level of maintenance over time of biological diversity, biotic integrity and ecological processes.
Desired levels of ecosystem maintenance can vary significantly depending on the forest management objectives, e.g. provision of rural livelihoods, environmental services or recreational or aesthetic benefits. Disagreement over the objectives for which the forest should be managed is frequently a source of conflict between governments, forestry professionals, environmental groups, local communities, logging companies, indigenous groups and others. In some cases multiple management objectives are compatible, but in others they are not.
Forest degradation can arise from either human or natural causes. There is a link between the two: human action can also influence the vulnerability of the forest to degradation from natural causes such as fire, pests and diseases. Since forests are a renewable resource, some forms of degradation are reversible, although rehabilitation may take a considerable time. However, degradation is sometimes irreversible, resulting in an irretrievable loss of some forest ecosystem functions. In contrast to deforestation, which is defined as permanent conversion to other uses, degradation implies the existence of some forest cover but a reduced capacity of the ecosystem to function.
Distribution of costs of forest degradation and implications for food security
This section looks in detail at the costs of forest degradation to the food insecure and others. For the purposes of this article, costs are divided into on-site, local watershed and global scales of analysis. There is some crossover between these categories, and costs can be incurred at other scales of analysis, but these are the scales at which the most solid information is available and they will serve for a simple analysis of how forest degradation costs can be experienced by different groups.
Two important caveats need to be given before the costs associated with forest degradation are addressed. First, forest ecosystems vary in composition, functions and services, and the impacts and associated costs of degradation vary accordingly. Second, the technical and socio-economic impacts of forest degradation are not yet fully understood, so it is frequently difficult (if not impossible) to quantify and identify causal links.
On-site impacts of forest degradation
For the people who live in or near the forest, an obvious impact could be a decrease in biomass produced, i.e. a decrease in the future capacity of the forest to produce wood, fodder, fruit, medicinal plants and so on.
Forest products, including food and household items, and the income generated by them can be quite significant to the food security of local communities throughout the developing world, many of which are food insecure (Reddy and Chakravarty, 1999; Arnold and Townson, 1998; Townson, 1995; Hoskins, 1990; FAO, 1989; 1990). The poorest households generally have the highest degree of reliance on forest products for income and food, as they have the least access to cultivable land and so supplement their production with the gathering of forest products on common-property forest lands (lands that are owned and managed collectively) or open-access forest lands (lands that have no effective collective or private ownership status) (Reddy and Chakravarty, 1999; Jodha, 1990). The latter category is more vulnerable to overexploitation.
In addition, forest products also have an important role in food security as "buffer" foods, helping to meet dietary needs during periodic food shortages (Arnold and Townson, 1998; FAO, 1990). Even if forest products only constitute a small part of overall food consumption and income generation, their absence at a critical time can greatly increase the risk of food shortages. This loss of "consumption insurance" for food-insecure households can have further negative impacts through its effect on agricultural and natural resource investment strategies. Evidence has indicated that the risk of food insecurity results in low-risk and low-return investment patterns (Holden and Binswanger, 1998).
Forest degradation also influences food security through its impact on supplies of fuelwood, which is a major source of income to many poor households (Townson, 1995). Two in five people worldwide, or approximately 3 000 million people, rely on fuelwood or charcoal for heating or cooking, and approximately 100 million people are already facing a "fuelwood famine" (FAO, 1995). A decreased fuel supply creates constraints on food preparation which can lower nutritional values and increase risk of food-borne diseases (FAO, 1989). In many parts of the world women are responsible for collecting fuelwood, and the increased time required for collection of scarcer resources can impede women's ability to participate in household and agricultural labour and thus jeopardize the household's food security (FAO, 1987).
In addition to biomass, other benefits of forests to on-site users include regulation of soil and water flows as well as shade and windbreak protection. Forest degradation which involves the loss of ground cover exposes soil to rainfall and can result in increased erosion (Bruijnzeel, 1990; Chomitz and Kumari, 1996). The loss of nutrient-rich topsoil can result in significant decreases in agricultural productivity (Tengberg, Stocking and Dechen, 1998).
Local watershed impacts of forest degradation
Loss of ground cover in local watersheds can result in increased erosion leading to sedimentation of waterways which may have a negative impact on downstream irrigation, fishery and dam operations (Chomitz and Kumari, 1996). In some cases these impacts can be quite high, although they may occur only after a long time lag (Chomitz and Kumari, 1996; Hodgson and Dixon, 1988). Forest degradation can result in increased runoff and thus increased flooding potential within local watersheds (Chomitz and Kumari, 1996). Changes in the water table may occur as well, although the processes can affect the water table in opposite ways: reduced vegetative cover can lead to reduced water loss from evapotranspiration, while runoff is likely to increase, although it may or may not percolate into groundwater tables (Chomitz and Kumari, 1996).
Global impacts of forest degradation
Two important services provided by forest ecosystems which benefit the global community are carbon sequestration and storage and the conservation of biological diversity through the provision of habitat for highly diverse plant and animal species.
Global climate change is associated with rising levels of greenhouse gases (in particular carbon dioxide) in the atmosphere. Forest ecosystems, including above- and below-ground components, are major carbon sinks, taking up carbon from the atmosphere; thus they have an important role in mitigating climate change. The potential impacts of climate change are as yet poorly understood, but climate variability and increasing temperatures are likely to have a more severe effect on food security in the poorest areas of the world (FAO, 2000; Zilberman and Sunding, 1999). However, adjustment costs in response to changes brought about by global warming could be significant worldwide (Zilberman and Sunding, 1999).
Forests are the most species-diverse terrestrial habitat on a global level. Tropical moist forests are home to between 50 and 90 percent of the world's terrestrial species (WRI, 1999; FAO, 1999). The genetic resources of the forest provide the raw material for the improvement of food and cash crops, livestock and medicinal products. Genetic diversity in crop and livestock species may have positive benefits to producers, particularly in marginal production zones as insurance against production risks (Brush and Meng, 1998). Moreover, the conservation of genetic resources may prove to have significant future benefits that are currently unknown, e.g. in new medical treatments or resistance to future disease threats. The most frequently cited cause of genetic erosion is the destruction or degradation of forest and bush lands (FAO, 1996b). Much of this loss is irreversible, such as the extinction of species.
Forest degradation: a transfer of value
Forest ecosystems can be thought of as a type of "natural capital", which is defined as the functions, goods and services provided by the environment (Turner, 1999). They can provide a flow of benefits such as timber, non-wood forest products, carbon sequestration and wildlife habitat. The state of the forest is a reflection of the stock of this capital. As with other forms of capital, natural capital may be used or liquidated for current consumption purposes or for investments in alternative productive enterprises - which in turn may yield a flow of future benefits. This liquidation has an associated cost which is the loss of the value that could have been generated if the stock had not been used.
The impacts of degradation are widely felt, both geographically (in some cases globally) and in time (well into the future). Thus the costs of forest degradation are often borne not by those who caused it and benefit from it, but by others who do not benefit from it. The implications of this mismatch are twofold:
- Since the beneficiaries of the degradation do not pay the full costs, there are incentives to generate more degradation than is rational in the strict economic sense.
- Degradation results in a change in the distribution of wealth, which could lead to either a decrease or an increase in equity depending on who gains and who loses.
Logging, for example, can result in loss of soil and nutrients, reduced value of the forest as a habitat and temporarily reduced capacity to sequester carbon. However, logging companies do not pay for these lost services; therefore they do not enter into the pricing of timber products. Undervalued timber prices contribute to unsustainably high demand for wood products, which in turn results in increased incentives for logging. The result is that on a global scale more forest products are consumed than is economically rational.
In terms of equity, to the extent that logging results in a loss of forest services to populations dependent on them for food security, degradation in this case represents a transfer of benefits away from vulnerable groups to logging companies and the consumers of wood - e.g. a transfer away from the poor to the rich.
However, the equity implications will be quite different if the value obtained from logging is invested in such a way as to provide a future stream of income or food for food-insecure people. For example, if the logging company were a community-based enterprise which reinvested in community assets and provided a sustainable source of employment, then positive impacts on food security could be realized in the community in both the short and long term.
Populations that depend on the forest for food security may also be agents of forest degradation. Again, the instigators of the degradation do not pay the full costs associated with it, which may be borne by other members of the local or global community, who may not be vulnerable to food insecurity. In this instance, there is a transfer of benefits from food-secure to food-insecure groups.
A common example of this case is unsustainably managed shifting cultivation - "slash-and-burn" agriculture - which damages the forest ecosystem. To the extent that crop production contributes to farm household food security, the value of the forest degradation represents a transfer of value from all the potential beneficiaries of the forest ecosystem services to the food insecure.
Decreased future production capacity from forest degradation does not necessarily lead to a decrease in the future potential of the household to obtain food security. If the value obtained from forest degradation is used to invest in an alternative source of income generation, then both the present and future food security of the household can be increased - albeit at the expense of the forest ecosystem and the services it could have provided.
Poverty and forest degradation
Because forests are often located in remote areas, are under some form of collective or State ownership and are difficult to monitor, they are relatively accessible to groups that lack assets and are consequently food insecure. Incursions or settlements on forest lands may be the only means for smallholders or the landless, many of whom are food insecure, to gain access to land for agriculture. (For the same reasons, forest communities that traditionally depend on forest resources for their food security are vulnerable to incursions or expropriation from outside interests.) In addition, many of the products and services of the forest can be transformed into food or income without the need for a large capital investment; production can be obtained from forests at relatively low cost.
However, as has been mentioned above, populations who are dependent on the forest for food security are sometimes themselves agents of forest degradation. What drives the food insecure to compromise their own future production capacity?
First, a combination of outside "conditioning" factors narrows the population's opportunities for achieving a sustainable livelihood (Vosti and Reardon, 1997). Two of the most important conditioning factors are the degree of population pressure on both agricultural and forest lands and the productivity of land under production. These in turn are driven by population growth rates, migration and land utilization patterns (as well as soil type, topography, rainfall, climate, etc.). Increased population pressure on land resources can result in pressure for more extensive agricultural production on new (e.g. forest) lands, or more intensive use of existing production resources, including forests (WRI, 1999). Decreases in the availability of forest lands for food security-related production because of logging, migration settlements or the creation of forest reserves can also result in increased pressures on remaining forest lands.
Government policies sometimes exacerbate these pressures. For example, pricing or taxation policies may reduce the profitability of intensifying production on existing agricultural lands or may enhance the profitability of forest intrusions (Heath and Binswanger, 1998; Hecht and Cockburn, 1998). Policies for privatization of lands that are either openly accessible to the poor or managed under traditional communal property rights schemes can limit options available to the food insecure (Ascher, 1995; Das Gupta, 1996). Such measures can lead to forest degradation by putting pressure on food-insecure groups (both traditional forest users and groups driven to forest areas from other locations in search of food security) to intensify forest use or increase use of forest resources.
Second, poverty-specific factors such as lack of power and lack of assets reduce the population's capacity to respond to existing opportunities or new circumstances (Vosti and Reardon, 1997). Generally, the food insecure are among the least empowered groups in society, so their ability to influence policies to reflect their needs is limited. Lack of assets, whether in the form of production or financial resources or human capital (i.e. technological capacity, which could include both modern and traditional technologies as well as health status) can influence how the food insecure manage their limited resources; pressure to derive the greatest possible value from their resources in the present can prevent them from making investments which could generate future wealth. Given the lack of options available to them, managing the forest in a way that can have a negative impact on their own future food security may be the only alternative available.
Five main conclusions can be drawn on the relationship between forest degradation and food security which have implications for forest management planning and food security-enhancing interventions.
First, forests have an important role in contributing to the food security of a large portion of the world's food insecure, and this factor must be taken into consideration in decisions regarding forest management objectives as well as food security interventions. This does not mean that all forests must be managed for food security purposes; there may be some conflicts between economic uses of the forest and other services such as the protection of biodiversity or recreation. However, it should be recognized that forest management objectives implicitly involve transfers of welfare, and policies that result in increased vulnerability of food-insecure populations generate increased pressure on the resource base and are unlikely to be successful. Forest management plans that involve reduced human access to the forest must therefore include alternative means of achieving a sustainable livelihood for forest-dependent populations.
Second, forest degradation can result in a sustainable increase in food security if the value derived from the degradation is used to generate alternative sustainable food or income flows, and if these flows are accessible to the food insecure. Use of the forests in this manner may be an explicit or implicit policy decision on the part of governments, as in the case of resettlement schemes or lack of forest management enforcement. However, the full cost implications of forest degradation must be taken into account, particularly costs that are irreversible.
Third, many of the benefits from forest ecosystem services are realized by members of national or global society who at present receive these services free of charge. Avoiding forest degradation thus has a value to these groups which could be transferred to forest-dependent users in order to stimulate the adoption of use patterns compatible with the generation of such benefits. This is the idea behind emerging carbon trading programmes such as the Clean Development Mechanism. Such programmes are likely to be most successful where investment constraints currently prevent forest users from adopting management techniques that can contribute to their own welfare.
Fourth, undervalued prices of timber and wood products which exclude the value of external costs associated with increased food insecurity, watershed degradation and the loss of biodiversity and carbon sinks give rise to consumption patterns that drive forest degradation. More rational pricing policies are needed to achieve consumption levels that are sustainable.
Finally, situations in which the food insecure engage in forest degradation to secure short-term food security at the expense of their own future security call for policies that create viable and stable alternative mechanisms for obtaining income and food. Policies that attempt to preserve forests by excluding the access of poor groups may protect one area but, in doing so at the expense of food security, may create more damaging pressure elsewhere.
Arnold, M. & Townson, I. 1998. Assessing the potential of forest product activities to contribute to rural incomes in Africa. ODI Natural Resource Perspectives No. 37. London, UK, Overseas Development Institute.
Ascher, W. 1995. Communities and sustainable forestry in developing countries. San Francisco, California, USA, Institute for Contemporary Studies.
Bruijnzeel, L.A. 1990. Hydrology of moist tropical forests and effects of conversion: a state of knowledge review. Amsterdam, the Netherlands, Netherlands Committee for the International Hydrological Programme of UNESCO.
Brush, S. & Meng, E. 1998. The value of wheat genetic resources to farmers in Turkey. In R.E. Evenson, D. Gollin & V. Santaniello, eds. Agricultural values of plant genetic resources, p. 97-116. Wallingford, UK, CABI Publishing.
Chomitz, K. & Kumari, K. 1996. The domestic benefits of tropical forests. Policy Research Working Paper 1601. Washington, DC, USA, World Bank.
Das Gupta, P. 1996. An inquiry into well-being and destitution. Oxford, UK, Clarendon Press.
FAO. 1983. Report of the eighth session of the Committee on World Food Security, Rome, 13-20 April 1983. CL 83/10. Rome.
FAO. 1987. Restoring the balance: women and forest resources, by R. Clarke. Rome.
FAO. 1989. Forestry and food security. FAO Forestry Paper No. 90. Rome.
FAO. 1990. The major significance of minor forest products: the local use and value of forests in the West African humid forest zone, by J. Falconer. Community Forestry Note No. 6. Rome.
FAO. 1995. Forests, fuels and the future. Forestry Topics Report No. 5. Rome.
FAO. 1996a. Forestry and food security, by H. Gillman & N. Hart. Rome. (Pamphlet)
FAO. 1996b. State of the World's Plant Genetic Resources for Food and Agriculture. Rome.
FAO. 1997. State of the World's Forests 1997. Rome.
FAO. 1999. Special biodiversity for food and agriculture. SD Dimensions. www.fao.org/waicent/faoinfo/sustdev/Epdirect
FAO. 2000. Measuring the effect of climate change on developing country agriculture, by R. Mendelsohn. FAO Economic and Social Development Paper. Rome. (In preparation)
Heath, J. & Binswanger, H.P. 1998. Policy-induced effects of natural resource degradation: the case of Colombia. In E. Lutz, ed. Agriculture and the environment: perspectives on sustainable rural development, p. 22-34. Washington, DC, USA, World Bank.
Hecht, S. & Cockburn, A. 1998. Fate of the forest: developers, destroyers, and defenders of the Amazon. London, UK, Verso. (Third edition)
Hodgson, G. & Dixon, J.A. 1988. Logging versus fisheries and tourism in Palawan: an environmental and economic analysis. EAPI Occasional Paper No. 7. Honolulu, Hawaii, USA, East-West Center.
Holden, S.T. & Binswanger, H.P. 1998. Small-farmer decision-making, market imperfections, and natural resource management in developing countries. In E. Lutz, ed. Agriculture and the environment: perspectives on sustainable rural development, p. 50-70. Washington, DC, USA, World Bank.
Hoskins, M. 1990. The contribution of forestry to food security. Unasylva, 160: 3-13.
Jodha, N.S. 1990. Rural common property resources: contributions and crises. Economic and Political Weekly, 25.
Kaimowitz, D., Byron, N. & Sunderlin, W. 1998. Public policies to reduce inappropriate deforestation. In E. Lutz, ed. Agriculture and the environment: perspectives on sustainable rural development, p. 303-322. Washington, DC, USA, World Bank.
Lipper, L. & Wilmsen, C. 1999. Evaluation report for the Northern New Mexico Rural Agricultural Improvement and Public Affairs Project. Berkeley, California, USA. (Unpublished)
Reddy, S.R.C. & Chakravarty, S.P. 1999. Forest dependence and income distribution in a subsistence economy: evidence from India. World Development, 27(7): 1141-1149.
Tengberg, A., Stocking, M. & Dechen, S.C.F. 1998. Soil erosion and crop productivity research in South America. In H.P. Blume, H. Eger, E. Fleishchhauer, A. Hebel, C. Reij & K.G. Steiner, eds. Towards sustainable land use: furthering cooperation between people and institutions, Proceedings of the International Soil Conservation Organization, Bonn, Germany, 26-30 August 1996, Vol. 1. Advances in Geoecology, 31: 355-362. Reiskirchen, Germany, Catena Verlag GMBH.
Townson, I.M. 1995. Forest products and household incomes: a review and annotated bibliography. Tropical Forestry Papers No. 31. Oxford, UK, CIFOR and Oxford Forestry Institute.
Turner, R. 1999. Environmental and ecological economics perspective. In J.C.J.M. van den Bergh, ed. Handbook of environmental and resource economics, p. 1001-1036. Northampton, Massachusetts, USA, Edward Elgar.
Vosti, S. & Reardon, T. 1997. Poverty-environment links in rural areas of developing countries. In S. Vosti & T. Reardon, eds. Sustainability, growth and poverty alleviation, p. 47-65. Baltimore, Maryland, USA and London, UK, Johns Hopkins University Press.
Wilmsen, C. 1999. Sustained yield recast: the politics of sustainability in Vallecitos, New Mexico. Berkeley, California, USA, University of California. (Unpublished draft)
World Resources Institute (WRI). 1994. World resources 1994-1995. Oxford, UK, Oxford University Press.
WRI. 1999. World resources 1998-99 - A guide to the global environment. Oxford, UK, Oxford University Press.
Zilberman, D. & Sunding, D. 1999. Climate change policy and the agricultural sector. Berkeley, California, USA, University of California.
This article was excerpted with the kind permission of FAO from
Lipper, L. 2000. Forest degradation and food security. Unasylva 202. http://www.fao.org/docrep/x7273e/x7273e00.htm
About the author
Leslie Lipper is a natural resource economist working as a consultant in the Food Security and Agricultural Projects Analysis Service of FAO's Agriculture and Economic Development Analysis Division.
Related editions to The Overstory
- The Overstory #191--Edible Leaves
- The Overstory #183--Forestry interventions to reduce poverty
- The Overstory #169--Forestry and sustainable livelihoods
- The Overstory #147--Major Themes of Tropical Homegardens
- The Overstory #139--"Hungry season" food from the forests
- The Overstory #136--Underutilised Indigenous Fruit Trees
- The Overstory #128--Wild Foods in Agricultural Systems
- The Overstory #127--Food Security
- The Overstory #117--Between Wildcrafting and Monocultures
- The Overstory #109--Cultural Landscapes
- The Overstory #106--Hidden Bounty of the Urban Forest
- The Overstory #76--Ethnoforestry
- The Overstory #46--Human Health and Agroecosystems
- The Overstory #24--Sustaining Physical Health
Transmission of energy through a vacuum, using no medium, is accomplished by electromagnetic waves, caused by the oscillation of electric and magnetic fields. They move at a constant speed of 3E8 m/s. Often, they are called electromagnetic radiation, light, or photons.
Did you ever wonder what electromagnetic radiation is? The term is somewhat complicated, but you are in contact with electromagnetic radiation all the time. A diagram of the electromagnetic spectrum, of the kind that appears in many textbooks and websites, is summarized in the table below. Electromagnetic radiation is caused by the disturbance of an electromagnetic field.
Wavelengths are usually given as powers of 10, in metres. The regions do not have clear-cut boundaries, because there is considerable overlap. For example, the boundary between radio waves and microwaves is very vague, but public regulation of their applications (usage) is strict.
Electromagnetic waves are used to transmit long-wave, short-wave, and FM radio, as well as TV, telephone, and wireless signals or energy. They are also responsible for transmitting energy in the form of microwaves, infrared radiation (IR), visible light (VIS), ultraviolet light (UV), X-rays, and gamma rays. Each region of this spectrum plays an important part in our lives and in businesses involving communication technology. The regions listed above are in order of increasing frequency (or decreasing wavelength). Here again is the list of regions with their approximate wavelengths; for simplicity, frequencies are given only as orders of magnitude, log(f).
Region       Approx. wavelength   log(f)
Radio        600 m                6
FM           20 m                 7
TV           1 m                  8
Microwave    1 mm                 9-11
IR           0.1 mm               12-14
VIS          400-700 nm           ~15
UV           1e-9 m               ~15
X-rays       1e-12 m              ~20
Gamma rays   1e-15 m              ~23
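As a rough illustration of how these order-of-magnitude boundaries can be used, the following Python sketch names the spectral region of a given frequency. The cutoff values are approximations consistent with the table above, not regulatory definitions, and the UV/X-ray cutoff in particular is an assumed placeholder:

```python
# Approximate upper-frequency cutoffs (Hz) for each region, broadly consistent
# with the table above. Real boundaries overlap, so these are illustrative only.
REGION_CUTOFFS = [
    (1e8,  "radio/FM"),
    (1e9,  "TV"),
    (1e12, "microwave"),
    (4e14, "infrared"),
    (8e14, "visible"),
    (1e17, "ultraviolet"),
    (1e20, "X-rays"),
]

def region_of(frequency_hz):
    for cutoff, name in REGION_CUTOFFS:
        if frequency_hz < cutoff:
            return name
    return "gamma rays"

print(region_of(1e14))  # infrared, matching the worked example further below
print(region_of(5e14))  # visible
```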
Electromagnetic radiation is usually treated as a wave motion. The electric and magnetic fields oscillate in directions perpendicular to each other and to the direction of motion of the wave.
The wavelength, the frequency, and the speed of light obey the following relationship: speed of light = wavelength × frequency. The speed of light is usually represented by c, the wavelength by the lowercase Greek letter lambda (λ), and the frequency by the lowercase Greek letter nu (ν). In these symbols, the formula is:

c = λν
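Because c is constant, wavelength and frequency each determine the other; here is a minimal Python sketch of the two conversions (the function names are mine, chosen for illustration):

```python
C = 3.0e8  # speed of light in m/s, as used throughout these notes

def frequency_from_wavelength(wavelength_m):
    """nu = c / lambda"""
    return C / wavelength_m

def wavelength_from_frequency(frequency_hz):
    """lambda = c / nu"""
    return C / frequency_hz

print(frequency_from_wavelength(700e-9))  # red light: about 4.3E14 Hz
print(wavelength_from_frequency(8e14))    # violet edge: about 3.75E-7 m (375 nm)
```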
Electromagnetic radiation is the foundation for radar, which is used for guidance and for remote sensing in the study of the planet Earth.
Wavelengths of the visible region of the spectrum range from 700 nm for red light to 400 nm for violet light.
red     700 nm
orange  630 nm
yellow  550 nm
green   500 nm
blue    450 nm
violet  400 nm

There is no need to memorize these numbers, but knowing that the visible region has such a narrow range of 400-700 nm is handy at times when referring to certain light.
In his research on the radiation from a hot (black) body, Max Planck made a simple proposal. He suggested that light consists of photons. The energy, E, of each individual photon of a monochromatic light wave is proportional to the frequency, ν, of the light:

E = hν

where h is Planck's constant, 6.626E-34 J s.
For the convenience of your future study of electromagnetic radiation, you might want to know the units often used for it: frequency in hertz (Hz, i.e. s-1), wavelength in metres (m) or nanometres (nm), and photon energy in joules (J) or electron volts (eV).
Einstein learned of Planck's proposal, and he wanted to perform experiments to show whether the proposal was true. At that time, the photoelectric effect was known, and he measured the kinetic energy of electrons released by photons. He did find a linear relationship between the kinetic energy of the electrons and the frequency of the light used.
Furthermore, he found the minimum frequency of light needed to release electrons from a given metal to be constant; the corresponding energy must be overcome in order to take an electron out of the metal. This energy is called the threshold energy, W. The formula describing the photoelectron kinetic energy Ek is:

Ek = hν - W
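A short Python sketch of this relation follows; the threshold energy W used here is an assumed, purely illustrative value for an unspecified metal:

```python
H = 6.626e-34  # Planck's constant, J s

def photoelectron_kinetic_energy(frequency_hz, threshold_energy_j):
    """Ek = h*nu - W; returns None below threshold (no electron is ejected)."""
    ek = H * frequency_hz - threshold_energy_j
    return ek if ek > 0 else None

W = 3.0e-19  # J; an assumed threshold energy, for illustration only
print(photoelectron_kinetic_energy(1.0e15, W))  # about 3.6e-19 J
print(photoelectron_kinetic_energy(4.0e14, W))  # None: below threshold
```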
The limiting speed is 3E8 m/s. Nothing moves faster than the speed of light, which is 300 million (0.3 billion) meters per second.
Ultraviolet light has higher frequencies than violet light. UV light has frequencies greater than 8E14 Hz; IR frequencies are less than 4E14 Hz. Thus, radiation with a frequency of 1E14 Hz is in the IR region. (3E8 m/s)/(400E-9 m) = 8E14 Hz; (3E8 m/s)/(720E-9 m) = 4E14 Hz. Visible light lies in the 4E14 to 8E14 Hz region.
The visible radiation range lies between 8E14 and 4E14 Hz. Yellow light lies in the middle of the visible radiation range.
X-ray wavelengths are three orders of magnitude shorter than those of visible light.
E = hc/λ
This corresponds to 2.18E8 J or 2.18E5 kJ per mole of photons. A mole of photons is called an einstein.
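Converting the energy of a single photon into the energy of a mole of photons (an einstein) is a multiplication by Avogadro's number; a minimal Python sketch:

```python
H = 6.626e-34   # Planck's constant, J s
C = 3.0e8       # speed of light, m/s
N_A = 6.022e23  # Avogadro's number, photons per mole

def energy_per_photon(wavelength_m):
    """E = h*c / lambda, in joules."""
    return H * C / wavelength_m

def energy_per_einstein(wavelength_m):
    """Energy of one mole of photons (one einstein), in kJ/mol."""
    return energy_per_photon(wavelength_m) * N_A / 1000.0

print(energy_per_einstein(500e-9))  # green light: about 239 kJ per einstein
```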
The minimum frequency is called the threshold frequency.
λ = c/ν = 2.62E-7 m, or 262 nm. The minimum frequency implies the longest wavelength. The light is in the UV region.
E = hν. Make sure you use the right units.
Robotics is a project-based activity that is motivating and engaging for many students. It draws on, and develops, learning related to the disciplines of science, technology, engineering and mathematics (STEM).
I’m a teacher in Tasmania, Australia and have been using LEGO robotics with my students since 2001. I have mentored teams in RoboCup Junior and the FIRST Robotics Competition, and I teach an online robotics class called SmartBots. In 2010, I spent six months based at the Tufts Center for Engineering Education and Outreach (CEEO) in Boston, USA, and continue to work closely with the Center. I am currently the Content Editor for both LEGOengineering.com and LEGOeducation.com.au.
This article was first published by the Macquarie ICT Innovations Centre.
Why teach robotics?
To be honest, the main reason I started teaching robotics was that it was fun! I did, however, quickly come to the position that robotics education fosters these outcomes:
- Resilience and perseverance
- Problem-solving skills
- Communication skills
- Team work skills
- Imagination and creativity
- Planning skills
Although robotics is by no means the only activity that can lay claim to these outcomes, it addresses them quite well and I’d wager that any robotics teacher could cite examples of how their students have developed and demonstrated these characteristics. I think all these outcomes are still fully valid, but from my time at Tufts I think I better understand what it is about robotics education that helps to bring about these outcomes.
Why do I teach robotics now?
The engineering design process – Something that became very obvious from my reading as I prepared to go to Tufts was the CEEO’s commitment to engineering education. They are explicitly committed to raising the level of engineering literacy in their broader community. They figure that the best value for effort is going to come from targeting elementary school-aged children, and that robotics is one of the approaches that they use. In fact, engineering is explicitly part of the Massachusetts K-10 curriculum and the “engineering design process” is a significant part of this. Robotics is a fun way of introducing students to the engineering design process, and I created a version of it for use with my robotics classes.
STEM-based learning – Engineering design tasks provide a meaningful context for learning and assessing understanding in mathematics and science, so robotics is an ideal vehicle for STEM-based (Science, Technology, Engineering, and Mathematics) learning. Robotics challenges not only allow students to apply their STEM learning, but also help students to see the purpose of STEM learning.
Supply v demand – Dean Kamen, medical engineer, creator of the Segway and founder of the FIRST competition, talks about the so-called “education crisis” in terms of supply and demand. Politicians jump on it and make it a divisive issue – they throw money, testing, and computers at it, but does this solve it? Dean’s assertion is that it’s not an education problem – it’s a culture problem. It’s not a supply problem; it’s a demand problem. What if STEM was as highly valued as sport? What if engineers were as popular as sport stars? If there was a “cricket crisis” in the US (e.g. if cricket became part of the Olympics), how would it be solved? Would it be through curriculum, standards, and national high-stakes testing, or through role models and coaching?
Challenge-based learning – Robotics lends itself to project-based learning, or at the very least challenge-based learning, and is the approach I take with all my robotics classes. Challenges can be open or closed, but my favourite challenges have a low entry with a high ceiling and allow for multiple pathways to success. I’ve been a supporter of competitions such as RoboCup Junior and FIRST LEGO League for many years, and have helped to run numerous events, but I’m starting to see the place for more theme-based or exhibition-based challenges. For example, one of the most recent challenges that I’ve been giving students is to work together to create a robotic sideshow alley. Each team is asked to design and build a robotic game that will be fun to play.
A fresh approach to learning?
I think robotics brings a fresh perspective to the long-standing tension between traditional and progressive views of education. One of the big problems in mathematics education, for example, is that because it’s so easy to create lists of what content is in and what’s out, it’s very tempting to think that mathematics doesn’t change, and that it’s all been solved already. This is a very insidious perspective and one that I think gets in the way of students learning to think and act like mathematicians. Imagine an art class where the students never have the opportunity to create original pieces of art work! Unfortunately, maths classes are too often like this…. Engineering is necessarily messy and demands creative solutions to problems. Robotics (and engineering more broadly) hasn’t been schoolified…. yet!
Women and HIV/AIDS: What about Older Adults, Women of Color, and Cancer?
March 10, 2014 is National Women and Girls HIV/AIDS Awareness Day (NWGHAAD). NWGHAAD is a nationwide effort to help women and girls take action to protect themselves and their partners from HIV – through prevention, testing and treatment. The HIV epidemic is rapidly aging, with 17% of new HIV diagnoses in the U.S. occurring in those 50 and older. By 2015, the CDC expects half of the HIV-infected population to be over 50. Older Americans are more likely than younger Americans to be diagnosed with HIV at a later stage of the disease. This can lead to poorer prognoses and shorter HIV-to-AIDS intervals. And with HIV and age comes cancer.
Statistics – An Overview
- One in four people living with HIV infection in the U.S. are women.
- According to the CDC, 275,700 American women are living with HIV/AIDS.
- Women made up 20% (9,500) of the estimated 47,500 new HIV infections in the U.S. in 2010 with most (84%) of these new infections in women being from heterosexual contact.
- 4,014 women with an AIDS diagnosis died in 2010 and an estimated 111,940 women have died since the beginning of the epidemic.
- Only 41% of HIV positive women are retained in HIV related medical care and only 26% of HIV positive women achieve viral suppression. Viral suppression improves survival and reduces transmission to others.
Disproportionate Effect on Women of Color
- Black and Hispanic women continue to be disproportionately affected by HIV, compared with women of other ethnicities.
- While only 13% of the U.S. female population, Black women represent 64% of new female HIV infections.
- At some point in their lifetimes, an estimated 1 in 32 African American women will be diagnosed with HIV infection.
- Some good news: While African American women accounted for almost two-thirds of all estimated new HIV infections among women in 2010, there was a 21% decrease in new HIV infections between 2008 and 2010.
- By the end of 2010, Hispanic women had an HIV infection rate more than four times that of white women. Hispanic women represented 15% of new HIV infections among women and 19% of all women living with HIV.
Reasons Women are Affected by HIV
- Some women may be unaware of their male partner’s risk factors for HIV (such as injection drug use or having sex with other men) and may not use condoms.
- Women have a much higher risk for getting HIV during vaginal sex without a condom than men do.
- For older women, the physical changes of aging such as vaginal drying and the thinning of the vaginal wall due to a loss of estrogen can increase a woman’s susceptibility to HIV and other STDs.
- Some sexually transmitted infections, such as gonorrhea and syphilis, greatly increase the likelihood of getting or spreading HIV. STIs are much more prevalent in Black and Hispanic communities. For instance in 2011, Blacks had 17 times the reported gonorrhea rates of whites.
- Women may be afraid that their partner will leave them or even physically abuse them if they try to talk about condom use.
- Women who have been sexually abused may be more likely than women with no abuse history to engage in sexual behaviors like exchanging sex for drugs, having multiple partners, or having sex with a partner who is physically abusive when asked to use a condom.
- Some HIV infections among women are due to injection drug and other substance use—either directly (sharing drug injection equipment contaminated with HIV) or indirectly (engaging in high-risk behaviors while under the influence of drugs or alcohol).
- The higher proportion of people living with HIV in many Black and Hispanic communities means individuals in those communities face a greater risk of infection with every sexual encounter.
HIV and Cancer Risks
- People infected with HIV have a higher risk of some types of cancer than uninfected people.
- An HIV-weakened immune system; infection with other viruses such as HPV, Hepatitis B or C, and Epstein-Barr; and traditional risk factors such as smoking all contribute to this higher cancer risk.
- HIV treatments have greatly reduced the incidence of AIDS defining cancers such as Kaposi’s sarcoma and non-Hodgkin’s lymphoma as compared to the early years of the epidemic.
- However, various other types of cancers are much more likely to develop in people with HIV. These cancers include anal cancer (10+ times as likely); Hodgkin’s lymphoma (10-20x); head, neck and liver cancers (8-10x); cervical cancer (5-8x); and lung cancer (2.5-7.5x)
- Except for Hodgkin’s lymphoma, these cancers are diagnosed on average 10-15 years earlier in HIV+ people compared to the general population.
- The story is not all bad. People infected with HIV do not have increased risks of breast, colorectal, prostate, or many other common types of cancer.
- What can be done:
- Successful HIV treatment can reduce cancer rates up to 50%.
- Screening for HPV, anal cancer and cervical cancer.
- Tobacco use in people living with HIV/AIDS runs 2-3 times the national average. Reduce smoking to reduce rates of lung, throat, and mouth cancers.
- Diet and exercise as well as reduced alcohol and substance use.
- For those co-infected with Hepatitis B or C, successful Hepatitis treatment reduces liver cancer rates.
How to help
- Increase awareness of safe practices to prevent HIV infection.
- Encourage women to get tested and to know their status and the status of their partners. Find a place to get tested.
- Link HIV+ women to HIV medical care and help them overcome barriers to getting care.
- For those without health care, the Affordable Care Act makes it easier for HIV positive women to receive the care they need. Under the ACA, a woman with a pre-existing condition, such as HIV/AIDS, can no longer be denied insurance because of her health status. People with low and middle incomes may be eligible for tax subsidies that will help them buy coverage. Open enrollment for 2014 ends March 31. Apply for coverage now.
- Encourage women to stay in care and achieve “viral suppression” by using treatment to keep HIV at a level that helps individuals stay healthy and reduces the risk of transmitting the virus to others.
Patrick Aitcheson is the Communications and Logistics Administrator for the Diverse Elders Coalition (DEC). The opinions expressed in this article are those of the author and do not necessarily reflect those of the Diverse Elders Coalition.
Truman's Committee on Civil Rights:
December 5, 1946
The Federal Government is hampered by inadequate civil rights statutes. The protection of our democratic institutions and the enjoyment by the people of their rights under the Constitution require that these weak and inadequate statutes should be expanded and improved. —HST, December 5, 1946
In the fall of 1946, President Harry Truman's popularity had sunk to the low thirties, making him a serious liability for Democratic congressional candidates, who steadfastly avoided campaign contact with the Democratic incumbent in the White House. Not surprising to most political pundits of the day, the Democrats were soundly trounced during the midterm elections in November 1946, when the Republicans gained overwhelming control of both houses of Congress. 1 In the House, the GOP enjoyed a staggering fifty-seven-vote advantage, and Republicans controlled the Senate by a six-vote margin. As a former senator, Truman knew all too well that the GOP majority in the Senate would be bolstered by Southern Democratic senators any time the contentious civil rights issue was raised in the Congress. 2 Despite his party's overwhelming rejection by American voters—tired of meat shortages, labor strikes, and inadequate housing—Truman determined just weeks after the political humiliation of the 1946 election to pick up where Lincoln left off and embark on a predictably unpopular moral crusade—civil rights reform in a racist America.
On December 5, 1946, in a chaotic postwar environment, while the Truman administration was addressing a growing Soviet menace abroad and significant
There have been warning signs for years about plummeting insect populations worldwide; however, the extent of the potentially "catastrophic" crisis had not been well understood — until now. The first global scientific review of insect population decline was published this week in the journal Biological Conservation, and the findings are "shocking," its authors said. More than 40% of insect species are declining globally, and a third of species are endangered, concluded the peer-reviewed study, which analyzed 73 reports on insect population declines.
Chillingly, the total mass of insects is falling by 2.5% a year, the review's authors said. If the decline continues at this rate, insects could be wiped off the face of the Earth within a century. Scientists have predicted that a sixth mass extinction, driven by human activity, is now underway on Earth. Vertebrate species, both on land and under the sea, are threatened at a global scale due to human actions.
However, according to the new analysis, the proportion of insects in decline is currently twice that of vertebrates, and the insect extinction rate is eight times faster than that of mammals, birds, and reptiles. Insects play a profoundly important role in Earth's ecosystems. They are a food source for many animals, are essential pollinators, and recycle nutrients back into the soil.
In a November New York Times report about a possible "insect apocalypse," scientists were asked to imagine a world without insects. According to the new scientific review, habitat loss due to intensive agriculture is the top driver of insect population declines. The heavy use of pesticides, climate change, and invasive species were also pinpointed as significant causes.
Since it was drafted in 1787, the Constitution has been a major document that has shaped and molded the history of the United States. The Constitution has been cited time and time again in court cases, and it has been reviewed and amended 27 times (hey, our forefathers never claimed to be perfect). It was fought for by the Federalists before finally becoming the document on which, it is fair to say, our Supreme Court still relies today. It is a staple item in high school American History and U.S. History courses. But what are the most important things for students to take from the Constitution and its history?
1) The Bill of Rights. First and foremost, the first ten amendments to the Constitution are called the Bill of Rights. They were put into effect in 1791, after being written by James Madison to ensure the people kept their power and the states did not lose all leverage in comparison to the federal government. The Constitution was the answer to the Articles of Confederation, under which the federal government had almost no power. With the introduction of the Bill of Rights, the anti-Federalists could be assured that they would keep their rights as people and not become slaves to a federal government.
2) Prohibition. Most high school kids have heard of prohibition, or the outlawing of alcohol in the United States. However, if you mention to a student that prohibition was a part of the Constitution, they tend to give you a blank stare. Prohibition was the 18th amendment, and was later repealed by the 21st.
3) Slavery. Slavery is a major part of the history of the United States of America. Most students have heard of the Emancipation Proclamation and the Underground Railroad. However, it was the thirteenth amendment that abolished slavery, while the fifteenth prohibited denying the vote on the basis of race. While these amendments did not immediately change the treatment of African Americans in the South, they were a major stepping stone toward getting to where we are today.
4) Over a hundred countries have used the United States Constitution as a model for their own government.
5) Without the Constitution, many of the rights we enjoy now would be denied to us. Without the Constitution in place, one would not have the right to a speedy trial, and could sit in jail for years before ever being tried. There would be no right to an attorney, or right to remain silent. Troops could still be quartered in private homes in times of war. A police officer could search your home simply because he wanted to, no warrant required. Women could not vote. People of color could not vote. A national religion could be enforced. I could be arrested for saying I dislike our president or posting it online. Without the Constitution and the Bill of Rights, we the people would have no freedoms.
Scientific Name: Hibiscus coulteri
Common Names: Desert Rosemallow, Coulter's Hibiscus
Duration: Perennial, Deciduous
Growth Habit: Shrub, Subshrub
Arizona Native Status: Native
Habitat: Desert. This wildflower grows in foothill canyons and on rocky slopes.
Flower Color: Pale yellow, Cream
Flowering Season: Spring, Summer, Fall (early). It blooms sporadically throughout much of the year, but it blooms most heavily in the spring and then again in the late summer after the summer monsoon rains.
Height: To 4 feet (1.2 m) tall
Description: The flowers are up to 2 inches (5 cm) wide and have a ring of green, linear bracts and 5 fan-shaped petals that are either solid-colored or streaked with red at the base. The leaves are green, hairy, alternate, and 3-lobed. The leaf margins have only a few smaller lobes or a few large, rounded teeth. The stems are slender, upright, and covered with flattened hairs. The plants are often sparse, lanky, and difficult to spot unless blooming.
The similar, but less common Arizona Rosemallow (Hibiscus biseptus) has palmately 3-5-lobed leaves with heavily toothed margins.
Kingdom: Plantae – Plants
Subkingdom: Tracheobionta – Vascular plants
Superdivision: Spermatophyta – Seed plants
Division: Magnoliophyta – Flowering plants
Class: Magnoliopsida – Dicotyledons
Family: Malvaceae – Mallow family
Genus: Hibiscus L. – rosemallow
Species: Hibiscus coulteri Harv. ex A. Gray – desert rosemallow
Many highly-populated coastal regions around the globe suffer from severe drought conditions. In an effort to deliver fresh water to these regions while also considering how to produce it efficiently using clean energy resources, a team of researchers from MIT and the University of Hawaii has created a detailed analysis of a symbiotic system that combines a pumped hydropower energy storage system and reverse osmosis desalination plant to meet both of these needs in one large-scale engineering project. The researchers, who have shared their findings in a paper published in Sustainable Energy Technologies and Assessments, say this kind of combined system could ultimately lead to cost savings, revenues, and job opportunities.
The basic idea to use a hydropower system to also support a reverse osmosis desalination plant was first proposed two decades ago by Professor Masahiro Murakami of Kochi University of Technology, but was never developed in detail.
“Back then renewables were too expensive and oil was too cheap,” says the paper’s co-author Alexander Slocum, the Pappalardo Professor of Mechanical Engineering at MIT. “There was not the extreme need and sense of urgency that there is now with climate change, increasing populations and waves of refugees fleeing drought and war-torn regions.”
Recognizing the potential of such a concept now, Slocum and his co-authors—Maha Haji, Sasan Ghaemsaidi, and Marco Ferrara of MIT; and A Zachary Trimble of the University of Hawaii—developed a detailed engineering, geographic, and economic model to explore the size and costs of such a system and enable further analysis to evaluate its feasibility at any given site around the world.
Typically, energy and water systems are considered separately, but combining the two has the potential to increase efficiency and reduce capital costs. Termed an “Integrated Pumped Hydro Reverse Osmosis (IPHRO) system,” this approach uses a lined reservoir placed in high mountains near a coastal region to store sea water pumped up to it using excess power from renewable energy sources or nuclear power stations. When energy is needed by the electric grid, water flows downhill to generate hydroelectric power. With a reservoir elevation greater than 500 meters, the pressure is great enough to also supply a reverse osmosis plant and thus eliminates the need for separate pumps. An additional benefit is that the amount of water typically used to generate power is about 20 times the amount needed for creating fresh water, so the brine outflow from the reverse osmosis plant can be greatly diluted by the water flowing through the hydroelectric turbines before it discharges back into the ocean, which reduces reverse osmosis outflow system costs.
As part of their research, Slocum’s team has formulated an algorithm that weighs a location’s distance from the ocean and mountain height to explore areas around the world where IPHRO systems might be located. Additionally, they have identified possible IPHRO system locations with the potential for providing power and water—based on an American lifestyle of 50 kilowatt-hours per day of energy consumption and 500 liters of fresh water per day—to serve one million people. In this scenario, a reservoir at 500 meters height would only need to be one square kilometer in size and 30 meters deep.
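As a rough plausibility check, the arithmetic behind these sizing claims can be sketched in a few lines of Python. The seawater density and generation efficiency below are illustrative assumptions, not values from the team's paper; the per-person figures and reservoir dimensions are taken from the paragraph above.

```python
# Back-of-the-envelope check of the IPHRO reservoir figures quoted above.
# Assumed (not from the paper): seawater density ~1025 kg/m^3 and a 90%
# turbine/generator efficiency.

G = 9.81           # gravitational acceleration, m/s^2
RHO = 1025.0       # seawater density, kg/m^3 (assumption)
EFFICIENCY = 0.90  # generation efficiency (assumption)

head_m = 500.0              # reservoir elevation above sea level
volume_m3 = 1.0e6 * 30.0    # 1 km^2 surface area x 30 m depth
people = 1_000_000

# Recoverable potential energy of one full reservoir, in GWh.
stored_gwh = RHO * G * head_m * volume_m3 * EFFICIENCY / 3.6e6 / 1e6

# Daily demand at 50 kWh per person, in GWh.
demand_gwh = 50.0 * people / 1e6

# Hydrostatic pressure delivered by a 500 m column, in MPa. Seawater
# reverse osmosis typically needs roughly 5 to 8 MPa of feed pressure.
pressure_mpa = RHO * G * head_m / 1e6

print(f"Stored energy:         ~{stored_gwh:.0f} GWh")   # ~38 GWh
print(f"Daily demand (1M ppl):  {demand_gwh:.0f} GWh")   # 50 GWh
print(f"Feed pressure:         ~{pressure_mpa:.1f} MPa") # ~5 MPa
```

On these rough numbers, one reservoir fill stores close to a day of demand, and the static head lands in the pressure range seawater reverse osmosis requires, consistent with the claims above.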
Their analysis determined that in Southern California, all power and water needs could actually be met for 28 million people. An IPHRO system could be located in the mountains along the California coast or in Tijuana, Mexico, and would additionally provide long-term construction and renewable energy systems jobs for tens of thousands of people. Findings show that building this system would cost between $5,000 and $10,000 per person served. This would cover all elements of the system, including the renewable energy sources, the hydropower system, and the reverse osmosis system, to provide each person with all necessary renewable electric power and fresh water.
Working with colleagues in Israel and Jordan under the auspices of the MIT International Science and Technology Initiatives (MISTI) program, the team has studied possible sites in the Middle East in detail, as abundant fresh water and continuous renewable energy could be key elements in helping to bring stability to the region. An IPHRO system could potentially form the foundation for stable economic growth, providing local jobs and trade opportunities; and as hypothesized in Slocum’s article, IPHRO systems could possibly help mitigate migration issues as a direct result of these opportunities.
“Considering the cost per refugee in Europe is about 25,000 euros per year and it takes several years for a refugee to be assimilated, an IPHRO system that is built in the Middle East to anchor a new community and trading partner for the European Union might be a very good option for the world to consider,” says Slocum. “If we create a sustainable system that provides clean power, water, and jobs for people, then people will create new opportunities for themselves where they actually want to live, and the world can become a much nicer place.”
This work is now available as an open access article on ScienceDirect, thanks to a grant by the S.D. Bechtel, Jr. Foundation through the MIT Energy Initiative, which also supported the class from which this material originated. The class has also been partially supported by MISTI and the cooperative agreement between the Masdar Institute of Science and Technology and MIT.
Biomass and the Beast
How free and open data are fostering innovative applications in Africa
A SERVIR project put space-based rainforest data online. Now the data are giving great apes more space.
Web of Life
The ecological footprint of the world’s tropical rainforests is enormous. It’s estimated they support 50 percent of all terrestrial life, yet cover less than seven percent of the globe.
What’s more, continuing deforestation not only threatens the habitat of many species, but also contributes to carbon emissions. The reason? Rainforests, and their living biomass, store large amounts of the world's carbon, known as a forest’s “carbon stock”. When humans clear rainforests, there’s less biomass to store carbon.
Aiming to support the global conservation initiative called REDD+ (Reducing Emissions from Deforestation and forest Degradation), members of the SERVIR Applied Sciences Team (AST) focused on creating an open-source database that mapped the world’s tropical rainforest biomass. “Making data freely available is not only how science advances, but also how people can most effectively make use of knowledge and information for their various applications,” noted SERVIR AST member Scott Goetz. The team’s hope was that this intentionally shared data would ultimately spark application ideas for conserving areas of unprotected rainforest around the world.
First though, the team needed Earth observations to map these global oases.
Goetz, along with team members Patrick Jantz and Nadine Laporte, used field measurements, NASA lidar observations, and MODIS images from Aqua and Terra, to create a global map estimating the amount and distribution of aboveground rainforest biomass across the Earth’s tropics.
Next, they wanted to determine where conservation efforts were already protecting tropical rainforests. For this, the team downloaded 5,600 world protected areas from a global database. Many countries designate specific locations as protected in an effort to slow or stop rainforest loss. This preservation, however, can at times create other problems for the local ecology. The fragmented nature of these habitats can interrupt species’ migration routes, limit food and water availability, and impact biodiversity. Knowing this, the team assessed ways to link these protected areas to each other along their nearest highest-biomass corridors, which identified new tracts of land that conservation efforts could target.
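A toy version of that linking step might look like the sketch below, which treats the biomass map as a grid graph and finds a least-cost path between two protected cells, with traversal cost falling as biomass rises. The synthetic grid, the 1/(biomass + 1) cost function, and the networkx approach are illustrative assumptions, not the team's actual methodology.

```python
# Illustrative least-cost corridor between two protected areas over a
# biomass raster. High-biomass cells are cheap to cross, so the path
# follows the highest-biomass route; the cost function is an assumption.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
biomass = rng.uniform(0.0, 300.0, size=(40, 40))  # synthetic Mg/ha values

G = nx.grid_2d_graph(40, 40)                      # 4-connected grid of cells
for u, v in G.edges():
    # Edge cost: average inverse biomass of the two cells it joins.
    G.edges[u, v]["weight"] = 0.5 * (1.0 / (biomass[u] + 1.0) +
                                     1.0 / (biomass[v] + 1.0))

reserve_a, reserve_b = (2, 3), (35, 30)           # hypothetical reserve cells
corridor = nx.shortest_path(G, reserve_a, reserve_b, weight="weight")
total_biomass = sum(biomass[cell] for cell in corridor)
print(f"{len(corridor)} cells, ~{total_biomass:.0f} Mg/ha summed along path")
```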
The final analysis revealed 16,257 corridors— green pathways that could potentially connect thousands of isolated patches of rainforest around the world. These corridors collectively cover 3.4 million square kilometers and contain an estimated 51 gigatons of carbon.
A Pathway to Conservation
This open-source corridor data went online in 2016 and already inspired an application— guiding great ape conservation in Africa.
GRASP is the Great Apes Survival Partnership, a United Nations initiative for ensuring the long-term survival of gorillas, chimpanzees, bonobos, and orangutans and their habitats. In a GRASP-REDD+ mapping project, the UN worked with the Max Planck Institute for Evolutionary Anthropology to develop an online tool. This tool superimposes the SERVIR AST-identified biomass corridors with the distribution of Africa’s great apes. In 2016, GRASP-REDD+ launched the tool during a conservation meeting with nine West African countries in Monrovia, Liberia.
"You cannot protect apes in Africa or Asia without also protecting the forests in which they live." Doug Cress, GRASP
So how are these giant primates reaping the benefits of biomass?
“The carbon tool helps to identify areas where REDD+ investments could potentially generate biodiversity benefits, in our case for great apes. We looked at corridors which could potentially link great ape habitats, and where REDD+ could provide the necessary seed funding to protect these areas,” explained Johannes Refisch, GRASP Program Manager. “The government of Liberia has confirmed that it will use the carbon tool for its national REDD+ prioritization work.”
Harrison S. Karnwea, the managing director of Liberia’s Forestry Development Authority was excited about the application of this new data. “This will help us a great deal here in Liberia,” he said. “It will help us in determining which areas are important and should receive our highest priority. Conservation is a great resource, and applying it scientifically in this way is very innovative.”
For GRASP Program Coordinator Doug Cress, it is crucial that conservation efforts like this continue to work as a partnership— where rainforest preservation and species conservation go hand in hand. “You cannot protect apes in Africa or Asia without also protecting the forests in which they live,” he remarked. “This project does an excellent job of emphasizing the overlap.”
And Jantz says there is more great news in store for the great apes. “We are now in the process of supporting the creation of forest corridors in the Murchison-Semliki Landscape [in Uganda] to conserve eastern chimpanzee populations and looking at possible incentives such as funding through REDD+ to encourage farmers to conserve forest on their land.”
SERVIR is a NASA-USAID venture that fosters applications of Earth observations to help developing countries assess environmental conditions and changes to improve their planning, decisions, and actions. https://www.servirglobal.net
QUARTZ CRYSTAL, THE TIMING MATERIAL
Quartz is a piezoelectric material. A thin wafer of quartz, with electrodes attached to opposing surfaces, vibrates mechanically when voltage is applied to the two electrodes. Frequency of vibration is primarily a function of wafer dimensions. The wafers, called crystal resonators when suitably mounted with electrodes attached, have long been used for controlling the frequency of radio transmitters, and they have been essential components in telecommunications equipment, where their piezoelectric properties are used in filters, oscillators and other devices. Now quartz crystals time and coordinate signals for microprocessors, computers, programmable controllers, watches, and other digital equipment such as DSP systems.
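To give a feel for the dimension-frequency relationship just mentioned: for the common AT-cut, the fundamental thickness-shear frequency is roughly inversely proportional to wafer thickness, with a frequency constant of about 1.66 MHz·mm. The helper below is a rough sketch using that rule of thumb; exact values depend on cut angle and vibration mode.

```python
# Rough AT-cut rule of thumb: fundamental frequency ~ N / thickness,
# with N about 1.66 MHz*mm (approximate; varies with cut and overtone).

N_MHZ_MM = 1.66  # approximate AT-cut frequency constant

def fundamental_mhz(thickness_mm: float) -> float:
    """Estimated fundamental thickness-shear frequency of an AT-cut wafer."""
    return N_MHZ_MM / thickness_mm

for t_mm in (1.0, 0.166, 0.083):
    print(f"{t_mm:6.3f} mm wafer -> ~{fundamental_mhz(t_mm):5.1f} MHz")
# A ~10 MHz fundamental needs a wafer only ~0.17 mm thick, which is why
# higher frequencies are usually reached with overtone modes instead.
```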
Quartz is a crystalline form of silicon dioxide (SiO2). It is a hard, brittle, transparent material with a density of 2649 kg/m3 and a melting point of 1750° C. Quartz is insoluble in ordinary acids, but soluble in hydrofluoric acid and in hot alkalis. When quartz is heated to 573° C, its crystalline form changes. The stable form above this transition temperature is known as high-quartz or beta-quartz, while the stable form below 573° C is known as low-quartz or alpha-quartz. For resonator applications, only alpha-quartz is of interest, and unless stated otherwise the term quartz in what follows always refers to alpha-quartz. Quartz is an abundant natural material, but considerable labor is required to separate good-quality from poor-quality natural quartz. Although silicon (mainly in the form of the dioxide, generally as small quartz crystallites) comprises approximately one third of the Earth's crust, natural quartz of a size and quality suitable for use in devices employing its piezoelectric properties has been found principally in Brazil. Natural quartz is also costly to process because it occurs in random shapes and sizes. Moreover, some segments of poor-quality quartz are discovered only after partial processing. And widespread impurities in natural quartz often make cutting of small wafers impractical. The first major step in the development of cultured quartz came in 1936, when the US Army Signal Corps gave a contract to Brush Laboratories under the direction of Drs. Jaffe, Hale, and Sawyer. This was done due to the pending scarcity of natural quartz of good piezoelectric quality, customarily purchased from Brazil.
Today, quartz is grown artificially to specified dimensions. Crystal orientation is controlled, and purity is uniformly high. Standard sizes reduce the cost of cutting wafers, and impurities are widely dispersed, making possible small resonators requiring low driving power.
2. The Basic Process of Growing Cultured Quartz
Cultured quartz is grown in a large pressure vessel known as an autoclave (see the following schematic drawing). The autoclave is a metal cylinder, closed at one end, capable of withstanding pressures up to 30,000 pounds per square inch with internal temperature of 700 to 800° F. It usually stands from 12 to 20 feet high and 2 to 3 feet in diameter.
Small chips of pure but unfaced quartz (1 to 1.5 inches in size), called "lascas" or "nutrient", are placed in a wire mesh basket and lowered into the bottom half of the vessel. A steel plate with prearranged holes, called a "baffle", is set on top of the basket. The baffle is used to separate the growth (seed) region from the nutrient region, and to help establish a temperature differential between the two regions. Suitably oriented single-crystal plates (either natural or cultured), called "seeds", are mounted on a rack and suspended above the baffle in the upper half of the vessel. The autoclave is then filled with an aqueous alkaline solution (sodium carbonate or sodium hydroxide) to approximately 80% of its free volume to allow for liquid expansion, and it is sealed with a high-pressure closure. The autoclave is then brought to operating temperature by a series of resistive heaters attached to the exterior circumference of the cylinder. As the temperature increases, the pressure begins to build within the autoclave. A temperature of 700 to 800° F is attained in the lower half of the vessel, while the top half is maintained 70 to 80° F cooler than the bottom half.
At operating pressure and temperature, the lascas dissolve in the heated solution in the lower half of the vessel, and the silica-laden solution rises. As it reaches the cooler temperature of the upper part of the vessel, the solution becomes supersaturated, causing the dissolved quartz to re-crystallize onto the seeds. The cooled, spent solution then returns to the lower half of the vessel to repeat the cycle until the lascas are depleted and the cultured quartz stones have reached the desired size. This so-called "hydrothermal process" takes from 25 to 365 days, depending upon the desired stone size, the desired properties, and the process type (sodium hydroxide or sodium carbonate).
3. Symmetry, Twinning and Size of Quartz Crystal
Alpha-quartz belongs to the crystallographic class 32, and it is a hexagonal prism with six cap faces at each end. The prism faces are designated m-faces and the cap faces are designated R and r-faces. The R-faces are often called major rhomb faces and the r-faces are minor rhomb faces. Both left-hand and right-hand crystals occur naturally and can be distinguished by the position of the S and X faces.
As shown in the above schematic drawing, an alpha-quartz crystal has a single axis of three-fold symmetry (the trigonal axis), and it has three axes of two-fold symmetry (digonal axes) that are perpendicular to the trigonal axis. The digonal axes are spaced 120° apart and are polar axes, that is, a definite sense can be assigned to them. The presence of polar axes implies the lack of a center of symmetry and is a necessary condition for the existence of the piezoelectric effect. The digonal axes are also known as the electric axes of quartz (the x- and y-axes). In crystals with fully developed natural faces, the two ends of each polar axis can be differentiated by the presence or absence of the S and X faces. When pressure is applied in the direction of an electric axis, a negative charge is developed at the end of the axis modified by these faces. The trigonal axis, also known as the optic axis (z-axis), is not polar, since the presence of digonal axes normal to it implies that the two ends of the trigonal axis are equivalent. Thus no piezoelectric polarization can be produced along the optic axis. In the rectangular coordinate system, the z-axis is parallel to the m prism faces. A plate of quartz cut with its major surface perpendicular to the x-axis is called an X-cut plate. Rotating the cut 90 degrees about the z-axis gives a Y-cut plate with the y-axis now perpendicular to the major surface. Since a quartz crystal has six prism faces, three choices exist for the x- and y-axes. The selection is arbitrary; each behaves identically.
Quartz is an optically active material. When a beam of plane-polarized light is transmitted along the optic axis, a rotation of the plane of polarization occurs, and the amount of rotation depends on the distance traversed in the material. The sense of the rotation can be used to differentiate between the two naturally occurring forms of alpha-quartz, known as left quartz and right quartz. In left quartz the plane of polarization rotates anti-clockwise when seen by an observer looking towards the source of light, and in right quartz it rotates clockwise. Most cultured quartz produced is right quartz, whereas in natural quartz the left and right forms are about equally distributed. Either form can equally well be used in the manufacture of resonators, but material in which left and right forms are mixed, which is called optically twinned material, cannot be used. On the other hand, electrically twinned material is all of the same hand, but contains regions where the sense of the electric axis is reversed, thus reducing the overall piezoelectric effect. Such material is also not suitable for resonator applications. The presence of twinning and other defects in natural quartz crystal is the major reason for the shortage of suitable natural material, and the absence of significant twinning in cultured quartz constitutes one of its main advantages. When alpha-quartz is heated above 573° C, the crystalline form changes to that of beta-quartz, which has hexagonal rather than trigonal symmetry. On cooling down through 573° C, the material reverts to alpha-quartz, but in general will be found to be electrically twinned. By the same token, the application of large thermal or mechanical stresses can induce twinning, so it is necessary in resonator processing to avoid any such thermal or mechanical shocks.
After being removed from the autoclave in which they were produced, cultured quartz crystals are converted, by grinding, into so-called lumbered bars. These are long, rectangular bars, suitable for subsequent cutting into wafers for resonators. Lumbered bars are typically 6 to 8 inches long, but the usable length is about 5 to 6 inches because material near the ends is unusable. Longer bars can be grown, but these require longer seeds, the cost of which increases rapidly with length. The height of lumbered bars is generally approximately twice the width because two wafers are normally cut from each slice. Numerous standard-sized lumbered bars are available, and quartz can also be grown and ground to specified dimensions.
4. Chemical Impurities in Quartz Crystal
Both cultured and natural quartz contain chemical impurities that can affect resonator performance. Chemical impurities are those that form chemical bonds with the silicon and oxygen in quartz. Aluminum, iron, hydrogen and fluorine are typical chemical impurities. They are held to a much lower level in cultured quartz than is often found in natural quartz. However, chemical impurities are not evenly distributed in cultured quartz. The +x, -x, and z regions, and the so-called s regions that occasionally form, contain different levels of chemical impurities. The two z regions contain the least amount of impurities. The +x region contains more impurities than the z region, and the -x region has still more impurities. The density of impurities in the s regions, which are generally small, is between that in the z regions and that in the +x region. When wide seeds are used for culturing, the z regions of a lumbered bar are large and the +x and -x regions are small. When narrow, less expensive seeds are used, the z regions are smaller and the +x and -x regions larger. In general, chemical impurities can degrade resonator performance in areas such as radiation hardness, susceptibility to twinning, oscillator short-term and long-term stability, and filter loss.
5. Resonator Q and Crystal Q
The Q value of a crystal resonator is the ratio of energy stored to energy lost during a cycle: Q = 2π × (energy stored) / (energy dissipated per cycle).
The value is important because it is a measure of the power required to drive the resonator. The Q is primarily a function of the atmosphere in which a resonator operates, surface imperfections, mechanical attachments, and other factors resulting from processing and mounting the resonator.
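In practice, the loaded Q of a finished resonator is usually estimated from its measured resonance curve rather than from energies directly; dividing the resonant frequency by the half-power (-3 dB) bandwidth gives the standard approximation. A minimal sketch, with made-up measurement values:

```python
# Estimate resonator Q from a measured resonance: Q ~ f0 / bandwidth,
# where bandwidth is the -3 dB (half-power) width of the peak.
# The numbers below are hypothetical, for illustration only.

def q_from_bandwidth(f0_hz: float, bw_hz: float) -> float:
    """Loaded Q estimated from center frequency and -3 dB bandwidth."""
    return f0_hz / bw_hz

f0 = 10_000_000.0  # a 10 MHz resonator
bw = 100.0         # 100 Hz half-power bandwidth
print(f"Q ~ {q_from_bandwidth(f0, bw):,.0f}")  # ~100,000
```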
Quartz lumbered bars also are assigned a Q value, but the Q for a quartz bar is not based on a direct measurement of energy stored and energy lost. Instead, the Q of a quartz bar is a figure of merit based on impurities in the bar. Chemical impurities in cultured quartz are measured by directing infrared light through the z regions in a cross-section slice of a lumbered bar. The difference in transmittance at two specific wavelengths (3,500 nm and 3,800 nm) is measured, and the Q value is calculated from these data. Quartz having a high Q contains fewer impurities than quartz with a low Q, and "Infrared Q" measurements, per EIA Standard 477-1, are routinely used by quartz growers and users as an indicator of quartz quality.
The value of Q for a resonator generally is not identical to that for the quartz bar from which the resonator was cut. However, the Q of a resonator can be affected when the Q of the quartz bar is below a critical level. A Q value of 1.8 million or higher for cultured quartz is an indication that chemical impurities will not be a factor in the final Q of a resonator for most applications. Quartz having such values of Q is generally called electronic grade (Grade C). Premium grade quartz has a Q of 2.2 million (Grade B), and special premium has a Q of 3.0 million (Grade A). It is important to be aware that the Q value for cultured quartz is based on impurities in the z region only. Therefore, even where crystal Q is adequate for an application, resonator Q and frequency-vs-temperature behavior can be adversely affected where the active portion (between the electrodes) of a resonator includes +x, -x, or s region material.
Quartz crystal wafers containing only z-region material can be successfully cut only from bars grown from wide seeds, which are relatively expensive. Fortunately, electrodes rarely cover the entire surface area of a resonator wafer, and impurities contained in the +x, -x, or s regions do not adversely affect resonator operation when this material lies outside the active portion. Thus, resonators for most applications can use quartz grown from a relatively cheap narrow seed.
Piezoelectric quartz crystal, discovered in 1880 by the Curie brothers, Jacques and Pierre, and once obtained at high cost from rough-hewn natural crystal, is now grown artificially by a process that produces crystals of specified size and purity. This cultured quartz has lowered the cost and reduced the size of resonators critical to the timing of today's digital circuits.
A) Students will learn to safely execute 6 different partner balances
B) Create routine by choosing 4 of the 6 balances
C) Perform routines to class
1) We will review central idea and lines of inquiry
2) Revisit individual balances by practicing a few of them (bird, bridge, table, candle)
3) Students will be paired up
4) Discuss safety: How best can we ensure that we are safe while performing balances?
5) Using Smart Board, introduce partner balance pictures as exemplars.
6) Discuss key points such as strength and stability
7) Students practice each of these partner balances
8) Asking for feedback from peers/teacher stressed as very important in order to ensure that the balances are done properly
9) Students select 4 of the balances that they enjoyed the most and create a routine
10) Practice and refine
Formative Assessment Task
The students are required to draw their selected balances on the assessment sheet in the order that they will perform their routine. Space is provided for extra comments if they wish. Each pair must identify and show the teacher their favorite balance. The teacher will take a picture that will be included on an assessment sheet.
Lesson went very well. We were not able to perform the routines as we ran out of time; however, the students will be able to use the first part of next week's class to practice and perform.
31st March 1998
This new work will be included by Professor Priest in his talk, 'A startling new Sun', at the National Astronomy Meeting at the University of St Andrews on Wednesday 1st April.
The surface of the Sun has a temperature of only 6000 kelvin (about 5730 degrees C), but its outermost layer of tenuous gas - the corona, which is visible at a total solar eclipse - is surprisingly very much hotter. Its temperature is several million degrees. How the corona is heated represents one of the most important unsolved mysteries in astrophysics, one which has tantalized solar physicists for the past 40 years.
"But the coronal heating problem is a really tough and complex one to tackle" says Professor Priest. "The corona consists of several types of structure which may be heated by different mechanisms. There are huge magnetic loops arching high above the solar surface, tiny intense cores of emission called X-ray bright points, and dark regions, called coronal holes, where the nature of the magnetic field allows hot gas (plasma) to stream out into interplanetary space."
Two main theories have been proposed to explain the high temperature of the solar corona. One of them involves magnetic waves travelling upwards from the surface of the Sun. Like water waves, magnetic waves carry energy. The other, the 'magnetic reconnection' theory, involves the generation of intense electric currents to discharge the energy directly into the corona.
To test the wave theory, Dr Robert Walsh and Dr Jack Ireland at St Andrews used the Coronal Diagnostic Spectrometer (CDS) instrument on SOHO to search for magnetic waves with periods between 30 and 1000 seconds in an active region of the Sun's surface where the magnetic field is strong. The magnetic structure of the region was also mapped out using data from the Michelson Doppler Imager (MDI) instrument, also on SOHO. The results were startling: in the layers of gas nearest the visible surface of the Sun (the chromosphere), where the temperature is about 10,000 kelvin, there are clear wave-like motions with periods of about 300 seconds and 600 seconds; further up (the transition region), where the temperature is 200,000 kelvin, the waves can also be seen. But by the time the one-million-degree corona is reached, no such wave motions were detected. It appears that waves are travelling up some distance, but they are not getting far enough to heat the corona.
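The kind of period search described above can be illustrated with a simple periodogram: take an intensity time series, transform it, and look for peaks in the 30-1000 second band. The 300-second signal below is synthetic, invented for the sketch; it merely stands in for a real CDS intensity series.

```python
# Illustrative search for oscillation periods in an intensity time series.
# The signal is synthetic: a 300 s wave plus noise, sampled every 10 s.
import numpy as np

dt = 10.0                                   # sampling cadence, seconds
t = np.arange(0.0, 4000.0, dt)
rng = np.random.default_rng(1)
signal = np.sin(2.0 * np.pi * t / 300.0) + 0.5 * rng.normal(size=t.size)

power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)       # cycles per second

peak_hz = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin
print(f"Dominant period: ~{1.0 / peak_hz:.0f} s")  # expect roughly 300 s
```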
However, the St Andrews team discovered that intense coronal brightenings known as X-ray bright points are heated by magnetic reconnection. Observations from a rocket instrument called NIXT have shown that such bright points have a complex internal structure of interacting magnetic loops. This structure agrees very well with predictions made by Professor Priest, Dr Clare Parnell and Dr Sara Martin.
"Magnetic reconnection gives a unified explanation for many diverse observations from SOHO that all fall into place when viewed together" says Professor Priest. For example: - Recently, Karel Schrijver, Alan Title and colleagues at Lockheed Martin discovered from MDI observations that the solar surface consists of a "magnetic carpet", in which the magnetic structure is completely replenished every 40 hours. The mechanism for changing the magnetic connections so rapidly is the magnetic reconnection process. - With the SUMER instrument, rapid jets of plasma in explosive events have been discovered and these are also naturally produced by magnetic reconnection. - The discovery made with the CDS instrument of bright spots that have been called "blinkers" are an inevitable consequence of magnetic reconnection.
Said Professor Priest, "It is only now that we are beginning to analyse and digest the results from SOHO, but there are some amazing surprises that are revolutionising our understanding of the Sun - our closest star".
The SOHO satellite was launched 2 years ago as a major joint project between the European Space Agency (ESA) and NASA, with ESA as the major partner.
"Interpersonal dynamics" refers to the way in which a person's body language, facial expression and other nonverbal mannerisms support a verbal message in one-on-one, or interpersonal, communication. Accomplished professionals and leaders recognize the profound impact interpersonal dynamics have in motivating or persuading other people, and they work to develop effective nonverbal communication skills, explains About.Continue Reading
Posture, body movement and hand gestures are among the primary body language factors that contribute to interpersonal dynamics. A dynamic communicator stands tall but relaxed, moves naturally and calmly, and uses natural hand gestures to highlight certain points or to support action words. A smile, eye contact and a relaxed facial expression convey confidence. When a listener senses confidence from a message sender, it increases the likelihood the listener will buy into the message. Vocal expression is another nonverbal factor in interpersonal dynamics. An assertive tone, good volume, a relaxed pace, emphasis and inflection all impact a message. A person can change the entire construct of a sentence just by adjusting the points of inflection.
Another important element of interpersonal dynamics is the relationship between a person's words and nonverbal messages. Honest, clear and well-received communication occurs when words and body language closely match. When they don't, a listener may infer that a speaker is uncertain or dishonest.
The most effective sources of early learning are immediate, meaningful, and involve children’s discovery and choice. Bubbles not only involve children in learning, but they are fun, easy to use, and ever-changing.
- Fine motor skills. Kids have the opportunity to practice pinching the skinny wand, coordinating two hands to hold the bottle and dip, holding the blower with a pencil-like grasp, opening and closing the bottle, and using hands in different ways to pop the bubbles (poke with index finger, “squeeze” to grab bubbles with the whole hand, use two hands to clap the bubbles).
- Visual tracking skills. Follow where the bubbles go. Some are fast and some are slow. And some will even glow!
- Hand/eye coordination. It takes serious practice to link up what the eyes and hands are doing in order to accurately dip and blow with a wand.
- Sensory processing skills. Bubbles are wet. And slimy. And sticky. They feel funny. And the physical act of blowing can be a very effective sensory-based way to help children “organize”, calm, and focus their bodies.
- Gross motor skills. What an easy way to get kids to reach way up high, stand on their tippy toes, squat, jump, run, stomp, and kick.
- Following directions. You can give them directions on how to pop the bubbles with each turn (clap them, poke them, squeeze them, jump on them, etc.) either one at a time or by telling them a popping sequence (first poke, then squeeze, then clap). Or they can follow the directions to a turn-taking sequence (first Johnny pops, then Caitlin, then Danny). The possibilities for directions are endless.
- Identifying body parts. Pop with your finger, your elbow, your knee, or your nose!
Caution: Bubble solution is not suitable for children under 3 years. Do not drink; avoid contact with eyes. Adult supervision is recommended.
Like many fungi and one-celled organisms, Candida albicans, a normally harmless microbe that can turn deadly, has long been thought to reproduce without sexual mating. But a new study by Professor Judith Berman and colleagues at the University of Minnesota and Tel Aviv University shows that C. albicans is capable of sexual reproduction.
The finding, published online by Nature January 30, represents an important breakthrough in understanding how this pathogen has been shaped by evolution, which could suggest strategies for preventing and treating the often serious infections that it causes.
The most common fungus that infects humans, C. albicans is part of the large community of microorganisms that live for the most part harmlessly within the human gut. But unlike many of its neighbors, this one-celled yeast can also cause disease, ranging from thrush (an oral infection) and vaginal yeast infections to systemic blood infections that cause organ failure and death and usually occur in people with immune defects related to HIV/AIDS, organ transplantation or chemotherapy. C. albicans is responsible for 400,000 deaths annually.
Most single-celled organisms reproduce asexually by dividing, but others reproduce parasexually or via sexual mating. Scientists have long believed that C. albicans reproduces without mating.
Organisms that reproduce asexually or parasexually are diploid, which means they have two sets of chromosomes and thus can reproduce without a mate. Organisms that reproduce sexually are haploid, which means they have one set of chromosomes and need a mate to provide a second set. C. albicans was believed to be diploid, but this study shows that the yeast is sometimes haploid, and that these haploids are capable of sexual reproduction.
Sexual reproduction fuels the evolution of higher organisms because it combines DNA from two parents to create one organism. The haploid isolates discovered in Professor Berman's lab arise only rarely within a population, and have been detected following propagation in the lab or in a mammalian host. These haploids can mate with other haploids to generate diploid strains with new combinations of DNA, which may provide the diversity required for the fungus to evolve.
The haploid C. albicans isolates also pave the way for genetic studies of the pathogen, such as the construction of "libraries" of recessive mutant strains. In addition, the ability to perform genetic crosses between haploids will help produce modified diploid strains that should help scientists better understand interactions between the fungus and its host and how it transforms from a harmless microbe into a deadly pathogen.
Berman holds appointments and has laboratories at the University of Minnesota's College of Biological Sciences and Tel Aviv University.
The work was done in collaboration with researchers at Bowdoin College (Maine), Brown University (Rhode Island), A*STAR (Singapore) and at the Taipei Medical University (Taiwan) and was funded, in part, by the National Institutes of Health.
In order to fully appreciate the consequences of the Chesapeake Bay impact, we need to understand what the crater is like, and how we know it is there. It is the larger of two craters recently discovered on the US East Coast by Wylie Poag and his colleagues. Both were formed 35 million years ago in the late Eocene epoch of geological time. That's about half as old as the dinosaur extinction. The crater is located approximately 200 km southeast of Washington, D.C., and is buried 300-500 meters beneath the lower part of Chesapeake Bay, its surrounding peninsulas, and the inner continental shelf of the Atlantic Ocean. There is, however, much telltale geological evidence of the impact.
The first evidence of a bolide impact on the East Coast came to light in 1983. Wylie Poag was serving as Co-Chief Scientist on the drill ship Glomar Challenger during Leg 95 of the National Science Foundation's Deep Sea Drilling Project. At an offshore drill site 120 km east of Atlantic City, NJ, the scientific party of Leg 95 recovered a core containing sedimentary debris diagnostic of a bolide impact. This figure focuses on that discovery, and introduces some key terminology. Shown here in great exaggeration, is the Glomar Challenger drilling into the sedimentary beds that make up the seaward edge of the continental shelf. The continental shelf is represented as a stack of sedimentary beds, displayed on a seismic reflection profile. The seismic profile is a type of sea floor sonogram. The survey ship sends a series of sound waves into the sea floor. As each wave encounters the boundaries between individual beds, part of the wave is reflected back to a recording instrument. These reflections are digitized and processed by computer to produce the seismic profile. The profile shows the thickness, depth, and spatial orientation of each bed, and allows one to determine the best drill site for solving a particular geological problem. For example, we see here that the yellow bed is tilted seaward, and has been fractured. The eastern block has moved downward along the fracture plane relative to the western block. This fracture plane is called a fault. At the lower end of the drill pipe, the drill bit is located near the crest of a folded bed.
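The depth conversion implicit in such a profile is simple: each reflection arrives after a two-way travel time, so depth is half that time multiplied by the speed of sound in the sediments. A minimal sketch, assuming a single uniform sediment velocity (real processing uses layered velocity models):

```python
# Convert two-way travel time (TWT) on a seismic profile to approximate
# reflector depth: depth = v * t / 2. The uniform 2000 m/s velocity is
# an assumed average; actual values vary with lithology and depth.

SEDIMENT_VELOCITY_M_S = 2000.0  # assumed average P-wave speed

def reflector_depth_m(twt_s: float) -> float:
    """Depth of a reflector given its two-way travel time in seconds."""
    return SEDIMENT_VELOCITY_M_S * twt_s / 2.0

for twt in (0.1, 0.5, 1.0):
    print(f"TWT {twt:3.1f} s -> ~{reflector_depth_m(twt):6.0f} m")
```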
The drill bit has a hole in its center, about the diameter of a tennis ball. So as it grinds down through the sediments, a cylindrical core of sediment protrudes through that opening and up into the hollow drill pipe. From there, it can be recovered and sampled. A core from the red bed contains a 20-cm-thick layer, which includes diagnostic evidence of a bolide impact. The evidence consists of certain minerals, whose physical properties have been altered by the tremendous force of the impact shock, which can be tens of thousands of times greater than atmospheric pressure. Two of the most common alteration products are shown in the yellow circle. Tektites are millimeter-to-centimeter-size glass beads derived from sediment melted by the impact. Shocked minerals, especially quartz, show several sets of closely spaced, intersecting dark stripes when viewed microscopically. The lines represent tiny fracture planes oriented at specific angles to the main optical axis of the quartz crystal. No natural mechanism other than a bolide impact produces tektites and shocked quartz.
The sediments containing the tektite also contain fossilized remains of microorganisms (microfossils) that lived in the ocean when the tektites were deposited. These photomicrographs illustrate a variety of these microfossils (note the scale bars). The microfossils indicated that the tektite layer at Site 612 was deposited in the late Eocene epoch, 35 million years ago. This age was confirmed by determining the ratio of two isotopes of argon gas contained in the tektite glass.
The second indication of an East Coast bolide impact came three years later (1986), from cores drilled onshore in southeastern Virginia. There, the U.S. Geological Survey and the Virginia State Water Control Board were investigating the composition, thickness, and geological age of subsurface sedimentary beds and evaluating their potential as sources of fresh groundwater. They drilled four cores, two on each side of the lower bay. Let's examine some of the core from the Windmill Point and Exmore sites.
Here are parts of two different cores, cut up into two-foot sections for ease of storage. We can call this rock material a sandy rubble bed. Mixed within the sand are larger hand-size to person-size chunks (clasts) of clay, limestone, and sand. The clasts in the rubble bed change rapidly downcore in composition, size, color, and orientation. No one had ever seen such a rubble bed before in the subsurface of Virginia, but it is present in all four of our cores. The strangest aspect of the bed is not visible to the naked eye, however. We didn't discover it until we analyzed the microfossils. The upper clay bed contained the normal stacked succession of microfossils... youngest on top, getting progressively older downcore. But that's not the case in the rubble bed. For example, the dark, fractured clay interval in the Windmill Point core differs by 20 million years in age from the white limestone below it. But the limestone is not older, as it should be; it's 20 million years younger. And we found a random mixture of ages among all the other clasts, too. The clasts turn out to be mainly fragments ripped from all the surrounding sedimentary beds that underlie southeast Virginia. Small pieces of the granitic basement are also scattered throughout the rubble. All these fragments were mixed together and redeposited in a layer that covers twice the area of Rhode Island. But most important of all, the youngest microfossils in the rubble bed are the same group of species we had seen in the tektite layer off New Jersey. Clearly, some terrific force had torn apart the normal horizontally stacked layers in Virginia, and scrambled them all together, at the same time a bolide impact had deposited the tektites off New Jersey.
This suggested a common origin for the rubble bed and the New Jersey tektite layer. So we looked for shock-altered minerals in the rubble bed. Sure enough, we found trace amounts of shocked quartz and bits of melt-rock in the rubble bed at each core site. Now we had diagnostic evidence that the rubble bed resulted from a bolide impact. But we still could not pinpoint the location of the source crater.
The final piece of the puzzle was provided in 1993 (ten years after the tektite discovery off New Jersey) by Texaco, Inc. and Exxon Exploration Co. These companies were exploring beneath Chesapeake Bay for structures that might contain oil and gas. And as part of that search, they collected a network of seismic reflection profiles in the bay. These profiles showed clearly that a huge peak-ring impact crater is buried beneath the bay and centered near the town of Cape Charles, on Virginia's eastern shore. The crater is 90 km in diameter and 1.3 km deep. It covers an area twice the size of Rhode Island, and is nearly as deep as the Grand Canyon. The rubble bed, which we now realize is an impact breccia, fills the crater and forms a thin halo around it, called an ejecta blanket. Inadvertently, we had drilled two of the core holes mentioned previously into the breccia inside the crater. The other two cores were drilled just outside the rim, into the ejecta blanket. The seismic profiles show that the breccia is much thicker than the cores indicated, however, reaching more than a kilometer.
Here is a seismic profile which shows, in cross section, the structure of the outer rim of the crater. Along the base of the profile is a prominent reflection separating the purple bed from the brown bed. The purple bed is composed of granite and granite-like rocks, which we call crystalline basement. The basement rocks are much denser than the sedimentary layers above it, and this produces the strong basement reflection. The stack of horizontal reflections to the right, between the purple and blue layers, represent the normal sedimentary beds that existed here when the bolide struck. The top of the blue bed represents the ancient sea-floor at the time of the impact. As we look to the left on this profile, however, these horizontal reflections are truncated by a series of faults, and the orderly stacking of beds is disrupted. The blue units are large blocks that have slumped off the crater's outer wall, and have slid to the left into the annular trough. We can still see some organized reflections in these blocks; some remain horizontal, but others are diagonal, indicating that the blocks have rotated. The pink breccia section is characterized by disorganized or chaotic reflections caused by the jumble of clasts it contains. On top of the breccia are horizontal reflections from the youngest beds, which accumulated during the past 35 million years since the bolide struck.
We can put all the core and seismic data together and produce a two-dimensional cross section across the entire crater. A map view at the upper right shows the location of the cross section relative to the crater outline and the core sites. Outside the crater we see a stack of gently dipping sedimentary beds lying on the granitic basement. The bolide punched a deep hole through the sediments and into the basement (the inner basin), fractured it to depths of 8 km, and raised the peak ring around it. The sedimentary walls of the crater progressively slumped in, widened the crater, and formed a layer of huge blocks on the floor of the annular trough. The slump blocks were then covered with the breccia. The entire bolide event, from initial impact to the termination of breccia deposition lasted only a few hours or days. In geological perspective, the 1.2 km-thick breccia is an instantaneous deposit. The crater was then buried by additional sedimentary beds, which accumulated during the following 35 million years. The white perpendicular columns beneath the drill derricks indicate the beds that we cored.
The human geography of the Pacific Realm is, of course, rather complex. There are many aspects to it and there are many ways in which it differs from place to place within the realm.
In some ways, the human geography of this region is rather homogeneous. In this whole huge realm, there were three main cultures that came to occupy large numbers of islands. These were the Melanesian, the Polynesian, and the Micronesian groups. These groups were rather similar in many ways. They tended to be animistic peoples who lived in relatively small groups. This was different on the larger islands, but many of the islands of this realm were too small to support large populations. People on the larger islands typically had more complex and hierarchical societies while those on the smaller islands were less complex and more egalitarian.
The human geography changed greatly with the coming of Europeans. In many places, native populations died either due to fighting or to disease. European cultural influences destroyed native ways of life. Many children of mixed racial descent were born. These changes irrevocably altered the human and cultural geography of the islands of the Pacific Realm.
Obsessive Compulsive Disorder (OCD) is experienced by approximately 2.5% of the population, which means that one out of every 40 people suffers from this disorder. People with OCD generally experience both obsessions and compulsions. Obsessions are repetitive intrusive thoughts, images, or impulses that cause anxiety or distress. For example, a common obsession is a fear that an object is contaminated by germs. Compulsions (also called rituals) are behavioral or mental responses to obsessions which are used to prevent or neutralize anxiety. A typical compulsion might be to wash an object excessively to remove contaminants. Most individuals with Obsessive Compulsive Disorder experience both obsessions and compulsions.
Obsessive Compulsive Disorder is a neurobiological disorder which is often triggered by stress. OCD is chronic and tends to wax and wane throughout one’s lifetime. OCD affects men and women equally. For many people with OCD, some symptoms are noticed in childhood. The age of onset for males is most likely to occur between the ages of 13 and 15, and onset for females is most likely to occur between ages 20-24.
Although Obsessive Compulsive Disorder can be a debilitating disorder, both medication and cognitive behavior therapy (CBT) have been shown to be effective. In terms of medication, the type of antidepressants known as selective serotonin reuptake inhibitors (SSRIs) is generally used. Studies have found that between 40% and 60% of people who are treated with SSRIs experience significant improvement of their OCD symptoms. In terms of behavioral treatment, exposure and response prevention (E/RP) is the most effective treatment. Most studies show that approximately 75% of people who complete E/RP experience significant improvement in their OCD symptoms. Many people benefit from combining medication and E/RP.
The OCD Cycle
A vast majority of individuals who do not suffer from OCD experience intrusive, unpleasant, unwanted thoughts. However, for people without OCD the thoughts are less frequent and cause much less distress and impairment, which is what differentiates normal intrusive thoughts from obsessions. The cycle of OCD begins when an unwanted thought, idea, image, or impulse comes to mind spontaneously. For many people with OCD, the unwanted thought initially causes anxiety because of the way the thought is interpreted. For example, if someone had an intrusive thought about harming their sister, several interpretations are possible. One person might think, “I must be a truly horrible, violent person to be having thoughts about harming my sister all the time.” Whereas, another person might think, “What a strange thought” and then go about his or her day. Obviously, the first person would be more likely to experience anxiety as a result of his or her interpretation of the intrusive thought. This person would then most likely complete a ritual (compulsion) in an attempt to decrease the anxiety. In the short term, the ritual causes the anxiety to decrease; however, neutralizing the anxiety reinforces the idea that anxiety will only decrease if the ritual is performed.
The primary treatment for Obsessive Compulsive Disorder is cognitive behavior therapy (CBT) primarily using exposure and response prevention (E/RP). The focus of the treatment is on breaking this maladaptive cycle. By preventing ritualizing when anxiety is present, you learn that the distress will decrease without performing a ritual. In addition, many people with OCD can benefit from cognitive therapy, which helps people challenge their faulty interpretations of their intrusive thoughts.
Obsessive Compulsive Disorder Symptom Subtypes
There are several different types of obsessions and compulsions that are typically experienced by individuals with OCD. Common obsessional themes include contamination, aggression or harm, sex, religion, and orderliness or perfectionism. Common rituals include excessive washing or cleaning, checking, and repeating. Many individuals also perform mental rituals, such as counting, praying, or mentally reviewing events to undo or neutralize intrusive thoughts. It is common for people to experience several types of obsessions and compulsions at a given point in time, or throughout their life.
Contamination obsessions are often related to dirt, germs, bodily waste or secretions, contracting or spreading illness or disease, environmental contaminants, or chemicals. Some people fear becoming contaminated, and others fear spreading contamination to others. Compulsions include excessive or ritualistic hand washing, showering, or grooming; excessive cleaning of objects; and avoiding objects, people, and places that are perceived to be contaminated.
Many people with OCD are plagued with aggressive obsessions, which are intrusive thoughts and/or images of harming others or themselves. Sometimes people fear causing harm inadvertently (fear of being responsible for a fire or burglary due to personal carelessness) or through acting on an unwanted impulse (running someone over with your car, stabbing or strangling someone, etc.). Often people have violent or horrific images of harming others, often family members or close friends. Aggressive obsessions can be accompanied by checking rituals (checking locks and appliances, checking that you did not harm yourself or others unknowingly), mental rituals (praying, mentally reviewing or analyzing a situation, etc.), and avoidance of situations that provoke anxiety (avoiding being alone with others, avoiding knives or other potential weapons, etc.).
Sexual obsessions are another common type of obsession for people with OCD. People who suffer from sexual obsessions experience intrusive sexual thoughts, images, or impulses. These obsessions can include unwanted thoughts of molesting one’s children or other children, thoughts or worries about being homosexual, and thoughts or images of violent sexual behavior toward others. People with sexual obsessions, like those with unwanted aggressive thoughts or impulses, tend to engage in many mental rituals, such as mentally reviewing situations and checking for feelings of arousal, as well as seeking reassurance from others that they have not done something wrong and confessing if they believe they have done something wrong.
Additional information about sexual obsessions:
- “The Boy Who Didn’t Know Who He Was” by Fred Penzel, Ph.D.
- “How Do I Know I’m Not Really Gay?” by Fred Penzel, Ph.D.
Religious obsessions (also referred to as scrupulosity) can be particularly disturbing for people. Those who suffer from religious obsessions worry about having blasphemous or immoral thoughts or impulsively committing blasphemous acts, and they often fear being punished for these undesirable thoughts or actions. They may have very strict religious or moral standards that must be followed perfectly. Compulsions for religious obsessions may include saying prayers repetitively until they are done “perfectly,” seeking excessive reassurance from religious leaders, or mentally reviewing past thoughts or actions to determine if they were sinful.
Ordering obsessions often involve being preoccupied with exactness, balance, symmetry, and order. People with ordering obsessions are often concerned with completing tasks “perfectly” and having things in their environment ordered and arranged “just so.” Compulsions include unnecessarily arranging things in order, counting senseless things (books on a shelf, words in a sentence), and repeating sentences or words until they sound just right.
In the Northern Hemisphere, December through March is flu season. Every year 5% to 20% of us catch "the bug". This year, flu cases peaked around the end of February (see chart). Perhaps you've wondered why.
Hypotheses for flu season are numerous and include:
- Because people are indoors more often during the winter, they are in close contact more often, and this promotes transmission from person to person.
- Cold temperatures lead to drier air, which may dehydrate mucus, preventing the body from effectively expelling virus particles.
- The virus may linger longer on exposed surfaces (doorknobs, countertops, etc.) in colder temperatures.
- Increased travel and visitation due to the holiday season.
- Less sunlight promotes virus survival.
- Our immune systems work poorly during the cold weather. (From Wikipedia).
The influenza virus's lipid coat helps protect it from the elements, but it is only protective while it is tough and rubbery. In a study reported in Nature Chemical Biology, NIH researchers used a sophisticated magnetic resonance technique, developed and previously tested in NIAAA's Laboratory of Membrane Biochemistry and Biophysics, to create a detailed fingerprint of how the virus’s outer membrane responded to variations in temperature. At low temperatures, the lipid coat solidified into a gel. As the temperature approached 60 degrees Fahrenheit, the coat turned into a goopy mess.
We spread the flu from person to person when we cough and sneeze. In cold temperatures, the virus is better able to survive the elements and find a new host. Once the virus enters a host, the outer protective coat "melts like an M&M in your mouth", and enables the virus to enter the host's cells.
Dr. Joshua Zimmerberg, corresponding author of the study, suggested that people might better protect themselves against getting sick by remaining indoors at warmer temperatures than usual.
Ivan V. Polozov, Ludmila Bezrukov, Klaus Gawrisch, and Joshua Zimmerberg, "Progressive ordering with decreasing temperature of the phospholipids of influenza virus," Nature Chemical Biology 4, 248-255 (1 Apr 2008), doi:10.1038/nchembio.77.
Chart from the Centers for Disease Control.
In their search for the lost grave of King Richard III, archaeologists unearthed a skeleton from underneath a parking lot last August. Today researchers announced that the skeleton is indeed that of England’s king, dead for more than 500 years, and they have the DNA and radiocarbon dating to prove it.
Richard III is most famous for the Shakespeare play of the same name, which was written a century after his death. This English king reigned for just over two years, but his body was buried without record of its exact location. Researchers began digging up the vicinity of Greyfriars church in Leicester in 2011, and today’s announcement is the scientific evidence they needed to make their case for a definitive identification.
The parallels between the scientific findings and historical accounts of Richard III are many:
• Radiocarbon dating determined the body to have been buried in the late 15th or early 16th century. Historians say Richard III died fighting in the Wars of the Roses in 1485.
• Skeletal examination determined the body to have been in its 20s or early 30s at the time of death. Richard III was apparently 32.
• The skeleton showed no signs of the withered arm portrayed in Shakespeare’s account, but it did fit the descriptions from Richard III’s contemporaries: short, slim and with a very crooked spine.
• The body shows evidence of ten wounds, including two fatal blows to the skull. Richard III was thought to have died from a blow to the back of the head.
But Richard III was not the only 30-something with scoliosis in the Middle Ages. The real proof of the skeleton’s identity came from DNA analysis: mitochondrial DNA from the skeleton matched that of two living relatives descended from Richard’s maternal line.
Finally, then, the king can be put to rest in a proper burial spot – and for historians, a long winter of discontent comes to an end.
During 1966 and 1967 the National Aeronautics and Space Administration launched five Lunar Orbiter spacecraft to obtain photographs from orbit of the surface of the Moon. The reconstructed photographs and support data are now on file at the National Space Science Data Center (NSSDC), Goddard Space Flight Center, Greenbelt, Md. The purpose of this Atlas is to present a selection of these photographs which provides essentially complete coverage of the near side and far side of the Moon in greater detail than any publication now in existence.
A summary of the five missions is given in table 1 (p. 19). The first three spacecraft essentially satisfied the primary objective to obtain high-resolution photographs of proposed Apollo landing sites. The fourth spacecraft systematically photographed the near side of the Moon and the fifth spacecraft completed the far-side coverage. The primary emphasis was not only to support the Apollo program but also to provide more detail in many areas that have been studied from Earth-based observations. At the average spacecraft altitude of about 3000 km for the photographs contained herein, the resolutions of the two cameras were approximately 500 meters and 65 meters; whereas under favorable conditions, Earth-based photography of the Moon can reveal details only as small as 500 to 1000 meters.
All the Lunar Orbiter photographs have been reprocessed from the original video data tapes. Special attention was given to the Atlas photographs to insure high quality and uniformity of appearance. They are presented here as 300-line-per-inch halftone reproductions (plates 1 to 675). The halftone negatives were prepared by the Army Topographic Command (TOPOCOM). The Lunar Orbiter photographs have been referenced to the lunar surface by a complete set of index maps which permit identification of those photographs showing a particular site or area. The Apollo zone photographs and the Atlas photographs have also been referenced separately by two additional sets of index maps. The index maps were prepared by the Aeronautical Chart and Information Center (ACIC). An alphabetical listing of prominent lunar features is given which will aid in the location of these features within the Atlas. A bibliography has also been included to refer the interested reader to additional information on the results of the program.
Lunar Orbiter Spacecraft
The Lunar Orbiter spacecraft is shown in flight configuration in figure 1. Detailed information regarding the spacecraft can be obtained from documents cited in the bibliography. Since the photography cannot be fully interpreted without an understanding of its origin, the photographic subsystem is herein discussed in detail.
The primary elements of the photographic system (fig. 2) were a dual-lens camera, a film processor, and a readout system. The 80-mm focal-length lens provided an angular coverage of 44.4º by 38º. The 610-mm focal-length lens photographed a small area, centered within this field, with an angular coverage of 20.4º by 5.16º. (See fig. 3.) To distinguish between the two exposures, those made with the 610-mm focal-length lens are referred to as high-resolution frames (or H frames) and those made with the 80-mm focal-length lens, medium-resolution frames (or M frames).
The photographs were interlaced on a single strip of Kodak high-definition aerial film, type SO-243, 70 mm wide and 80 meters long, as shown in figure 3. The SO-243 film was selected because it is relatively insensitive to radiation and, although its aerial exposure index of 1.6 is slow compared with that of other emulsions, it has an extremely fine grain structure. At a contrast ratio of 3 to 1, the angular line-pair resolutions of the recovered photographs were 34 and 4.4 seconds of arc, respectively, for the medium- and high-resolution cameras. Prior to use, the edges of the film were preexposed with framelet numbers, a 9-level gray scale, and resolving power charts. A geometric pattern (fig. 4) was preexposed on the spacecraft film of Lunar Orbiters II to V at the same time as the edge data. This pattern aided in the detection of and compensation for distortion introduced by the processing, readout, and ground reproduction systems. The folding mirror in the optical path of the 610-mm focal-length lens caused reversal of the high-resolution images with respect to the medium-resolution images. This condition resulted in the edge data being turned over when the film was printed in reverse to give properly oriented pictures.
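As a rough consistency check (an editorial addition, not part of the original Atlas text), the angular specifications quoted above can be converted to ground distances at the 3000-km average altitude mentioned in the introduction. The sketch below assumes a nadir view of a flat surface, which is only an approximation:

import math

ARCSEC_PER_RAD = 206265  # arcseconds per radian

def ground_resolution_m(arcsec, altitude_km):
    # Ground distance subtended by a given angle at a given altitude (nadir view).
    return arcsec / ARCSEC_PER_RAD * altitude_km * 1000.0

def footprint_km(fov_deg, altitude_km):
    # Width of the ground footprint for a full field of view, flat-surface approximation.
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg) / 2.0)

h = 3000  # km, the average altitude quoted for the Atlas photographs

# Line-pair resolutions quoted above: 34 arcsec (80-mm lens), 4.4 arcsec (610-mm lens)
print(f"medium-resolution ground resolution: {ground_resolution_m(34, h):.0f} m")   # ~495 m
print(f"high-resolution ground resolution:   {ground_resolution_m(4.4, h):.0f} m")  # ~64 m

# Angular coverages quoted above: 44.4 by 38 deg (M frames), 20.4 by 5.16 deg (H frames)
print(f"M-frame footprint: {footprint_km(44.4, h):.0f} by {footprint_km(38, h):.0f} km")
print(f"H-frame footprint: {footprint_km(20.4, h):.0f} by {footprint_km(5.16, h):.0f} km")

The computed ground resolutions, approximately 495 and 64 meters, agree with the 500-meter and 65-meter figures given in the introduction.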
Both shutters opened essentially simultaneously at a fixed aperture of f/5.6. Timing lights which encoded the exposure time were recorded on the film. A between-the-lens shutter was used with the 80-mm focal-length lens; a double-curtain focal-plane shutter, with the 610-mm focal-length lens. Shutter speeds of 0.04, 0.02, and 0.01 second were selectable by transmitted commands. Photographs could be taken as single exposures or in 4-, 8-, or 16-exposure sequences, both the sequence and the time between successive exposures being selectable. Multiple-frame sequences gave an overlap in the direction of flight.
The film was held in the focal plane by film clamps and vacuum which held it flat against the platens during exposure. The platens moved the film during exposure to eliminate image smear caused by the rapid movement of the spacecraft over the lunar surface at low altitudes. The platen velocity, which provided the image motion compensation (IMC), was regulated by a mechanical linkage to an image-motion-sensing device, the velocity-height (V/H) sensor. The V/H sensor optically locked on to the image of the lunar surface through the high-resolution lens and caused the platens of each camera to move at the velocity of its image.
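The magnitude of the required compensation can be illustrated with rough numbers. In the sketch below, the altitude and ground speed are assumptions typical of low-altitude photographic passes; they are not taken from the mission records:

v = 2000.0   # m/s, assumed spacecraft ground speed (illustrative)
h = 50000.0  # m, assumed altitude during low-altitude photography (illustrative)

vh = v / h  # rad/s, the angular rate measured by the V/H sensor

for name, f_m in [("80-mm lens", 0.080), ("610-mm lens", 0.610)]:
    image_velocity = vh * f_m  # image speed at the focal plane, m/s
    smear_mm = image_velocity * 0.04 * 1000.0  # smear during the slowest (0.04-s) exposure
    print(f"{name}: image velocity {image_velocity * 1000:.1f} mm/s, "
          f"uncompensated smear {smear_mm:.2f} mm per 0.04-s exposure")

Under these assumptions, the image formed by the 610-mm lens moves at about 24 mm/s across the platen, so an uncompensated 0.04-second exposure would smear the image by nearly a millimeter, far coarser than the resolution of the film; this is why the moving platens were essential at low altitudes.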
In a normal photographic sequence, the spacecraft was oriented to the correct attitude, the lenses were uncovered by the opening of a thermal door on the spacecraft, the V/H sensor was activated, and the camera was turned on. After the "camera on" command, the cameras operated in an automatic sequence to: (1) clamp the film to the platen and draw it flat by differential pressure, (2) start moving the platens in synchronization with the image motion, (3) open the shutters for simultaneous exposures, (4) return the platens to the rest position, and (5) advance the film for the next exposure. This sequence was repeated until all photographs commanded were taken.
After exposure, the film was held in the camera storage looper. The storage looper (fig. 2) consisted of a series of fixed rollers in a stationary carriage and a series of rollers in a movable carriage which rode on a track. As film entered the looper, a spring caused the movable carriage to move away from the fixed carriage; thus, a storage capacity for up to 6 meters (~20 ft) of film was provided.
Upon completion of a photographic sequence, a processor dryer, on command, processed the film from the storage looper at a rate of 6.09 cm (2.4 in.) per minute. Processing was accomplished by pressing the film into contact with Kodak dry Bimat transfer film, type SO-111. Kodak Bimat film consists of a normal film base coated with a gelatin layer presoaked with a special monobath processing solution. The solution both developed and fixed the photographic image during the 3.4 minutes the exposed film and Bimat film were in contact on the processing drum. Processing temperature was closely controlled at 29.5º C.
The exposed film and Bimat film were then separated, the Bimat film going to a takeup spool and the developed film to a dryer drum. The film was in contact with the dryer drum for 11.5 minutes at a temperature of 35º C. Moisture driven from the film by the heat of the dryer drum was absorbed by special chemical salts in pads around the dryer; thus a controlled humidity environment was maintained in the photographic subsystem. After leaving the dryer, the film was transported through the readout storage looper and readout mechanism and stored on a takeup spool. The film was then ready for readout.
At the completion of all photography, the procedure was to cut the Bimat film and read out all the photographs by running the film in reverse and taking it up on the film supply reel. Because of limitations on the number of frames that could be scanned per orbit, this procedure required about 2 weeks. However, throughout the mission the readout storage looper provided the capability of reading the last four exposed frames for priority return of important data and for monitoring system performance.
The readout section (fig. 5) consisted of a line scan tube, a photomultiplier tube, and the associated optics and electronics. In the line scan tube, a spot of light, 112 microns in diameter, generated by the electron beam moved linearly across the face of a revolving phosphor drum. Rotation of the drum avoided local overheating of the phosphor, but it did not affect the orientation of the line. The spot was focused by the scanner lens and projected as a reduced image, 6.5 microns in diameter, onto the film where it moved 2.67 mm horizontally in one direction (the return trace was blanked out). The scanner lens moved continuously at right angles to the film edge. The result was a complete scan of a "framelet" consisting of 16 359 parallel scan lines, each 2.67 mm long, across 57 mm of the 70-mm film. At the completion of a framelet, the film was advanced 2.54 mm to allow for an overlap before making the next scan in the reverse direction across the film. A complete dual-exposure frame, 298 mm long, required 117 framelets.
The light passing through the film, modulated by image density, was sensed by a photomultiplier tube through the associated light-collector optics. An electrical signal proportional to the intensity of the transmitted light was generated, amplified, and transmitted to the ground receiving station. The received video signal was sent to the ground reconstruction electronics (GRE) where it was converted to a line scan on a kinescope tube. The variations in light intensity on this kinescope tube corresponded to the variations in image density on the spacecraft film.
The line on the kinescope tube was recorded on moving 35-mm Kodak television recording film, type SO-349. The image on the 35-mm film was 7.2 times the size of the image on the spacecraft film. After processing, this positive image film was run through a film cutter to remove excess film and the individual framelets were separated. The framelets were then laid side by side on stable-base polyester film to reconstruct the original photograph. Master negatives were made from the positives.
A full medium-resolution photograph was reconstructed from approximately 27 framelets and measured 47 cm by 40 cm. The high-resolution photograph consisted of approximately 86 framelets and measured 158 cm by 40 cm. Because this size was unwieldy, the practice was to assemble high-resolution frames into three sections. Photographic reassembly is illustrated in figure 6.
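The framelet arithmetic in the preceding paragraphs is self-consistent, as a short check shows (approximate, since the end framelets of a frame are generally partial):

framelet_pitch_mm  = 2.54    # film advance between framelets (0.13 mm of each 2.67-mm scan overlaps)
framelet_width_mm  = 57.0    # scanned width across the 70-mm film
lines_per_framelet = 16359
frame_length_mm    = 298.0   # length of a dual-exposure frame on the spacecraft film
gre_scale          = 7.2     # enlargement from spacecraft film to the reconstructed image

print(f"framelets per frame: {frame_length_mm / framelet_pitch_mm:.1f}")           # ~117
print(f"scan-line pitch: {framelet_width_mm / lines_per_framelet * 1000:.2f} um")  # ~3.5 um

for name, n in [("medium-resolution", 27), ("high-resolution", 86)]:
    length_cm = n * framelet_pitch_mm * gre_scale / 10.0
    width_cm = framelet_width_mm * gre_scale / 10.0
    print(f"reassembled {name} photograph: about {length_cm:.0f} by {width_cm:.0f} cm")

The computed values, 117 framelets per frame, a 3.5-micron scan-line pitch (so the 6.5-micron readout spot overlapped adjacent lines), and reassembled sizes of roughly 49 by 41 cm and 157 by 41 cm, agree closely with the dimensions quoted above.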
Certain imperfections may be observed in some of the photographs. These imperfections are directly traceable to the method of film development, the readout system, the video data, or the GRE system.
Most photographs are not perfectly rectangular. This distortion was caused by a misalignment of the line-scan tube with respect to the mechanical scan direction. When the projected line was not perpendicular to the scan direction during readout, and the kinescope trace in the GRE system was perpendicular to the edge of the 35-mm film, then a noticeable tilt could be observed when successive framelets were laid side by side to reconstruct a complete frame.
Many framelets appear to have light and dark stripes running parallel to their edges. This effect was due primarily to an inherent nonuniformity in the light output of the scan system in the spacecraft that caused a variation in light intensity and affected the video signal level during a scan across the width of a framelet. Ideally, the level should be constant for a constant film density.
In some photographs small-scale streaks appear as bright white lines (see plate 297) parallel to the framelet edge. This condition was caused by phosphor granularity in the GRE kinescope tube.
The Bimat technique introduced several development imperfections that are scattered throughout many of the frames. Bimat stop lines (shown in plate 75) and Bimat pull-off lines (shown in plate 144) result from anomalous development conditions which occurred at the entrance to and exit from the development system. Two oval-shaped spots (shown in plate 92) appear near the center of the film and are associated with the location of the Bimat stop line; they follow it by about 10.7 cm (4.2 in.). "Lace" (shown in plate 116) appears as a spotted area of unprocessed film arranged in a random manner. The areas vary in size and location on the film and do not follow any pattern. Because of overlapping photography, the amount of data lost by these processing defects is small; their main effect is the spoiling of the appearance of the photographs.
Various other minor imperfections are scattered throughout the photographs. Occasionally, momentary dropout of the video modulation on the transmitted carrier caused extremely fine white lines to appear in the framelets (shown in plate 573). In plate 2 there appears to be an area of double exposure. This condition was caused by a failure of the film to advance completely after a photograph was taken; as a result, a medium-resolution image overlaps a high-resolution image. A few photographs (such as plate 344) have a blurred or out-of-focus appearance that was a result of water vapor condensing on the camera window. Once the problem was recognized, it was eliminated by closer control of the window temperature.
Although the electronic nature of the photographic system introduced undesirable defects in the photographs, it also allowed for flexibility during reconstruction. An important example was adjusting for overexposure often evident in the bright areas of the medium-resolution photographs. By amplifying the video signal during reconstruction, photographic detail, lost by normal processing, was retrieved.
In addition to the electronic enhancement techniques often employed during reconstruction, photographic dodging techniques were used (in the production of the negatives) to compensate for large density variations within the photographs. The enhancement process, although it increases the information content of the photographs, distorts the photometric fidelity. It is therefore not advisable to draw conclusions based on a comparison of photographic density.
Lunar Orbiter Photographs
A listing of the support data required to analyze the Atlas photographs is contained in table 2. Additional information may be found in the references cited in the bibliography. The positional data are subject to possible future revision. The terms presented in the support data are defined in this section (in the order given in the table). Figure 7 illustrates the geometry of these parameters.
Although the primary purpose of this Atlas is to present a complete photographic coverage of the near and far sides of the Moon, an index of all Lunar Orbiter photographs has been included. Figures 8 and 9 (p. 8-13) include Mercator and polar projections which display the lunar surface outlines of photographs from the five missions. The concentration of photography in the Apollo landing zones required the front-side equatorial region to be displayed in greater detail to avoid confusion. Any photograph in this Atlas can be located by means of these maps, and the maps also show whether additional photographs are available for any specific area of interest.
An index of the photographs presented in this Atlas is given in figure 10. (A few Mission I and Mission IV photographs are not given, either because the photography is oblique, only part of the frame is available, or they are redundant; they are I-35, I-37, I-39, I-102, I-117, IV-39, IV-45, IV-46, IV-51, IV-54, IV-55, IV-56, IV-61, IV-99, IV-123, IV-178, IV-184, and IV-192.)
DEFINITION OF TERMS
To facilitate the location of the principal named lunar features, an alphabetical listing is provided in table 3 identifying the feature, the plate on which it can be found, and the corresponding Lunar Orbiter photograph number. The associated Lunar Aeronautical Chart (LAC) published by the Aeronautical Chart and Information Center, U.S. Air Force, St. Louis, Mo., is referenced. The LAC charts are based on telescopic observation and may be updated by use of Lunar Orbiter photographs.
These charts were of great assistance in locating features on Lunar Orbiter photographs. Where charts were not available, Kuiper's "Rectified Lunar Atlas" was used. Table 3 by no means covers all identifiable features, only about 450 of the most prominent features. Catalogs (see bibliography) prepared by the Lunar and Planetary Laboratory, University of Arizona, list approximately 7000 features and give selenographic coordinates and other pertinent data.
The areas covered by the high-resolution photographs are typically too small to include all, or even most, of a mare. Accordingly, the maria are not specifically identified and located on the high-resolution photographs; they are indicated only on the medium-resolution photographs. Since identification of the maria is helpful in obtaining the proper perspective, these areas are identified on many more photographs than are referenced in table 3.
PRESENTATION AND ARRANGEMENT OF ATLAS PHOTOGRAPHS
The photographs (plates 1 to 675) are reproduced in 300-line-per-inch halftones at 55 percent GRE scale. They are oriented with north at the top of the page. Because of different orientations of the spacecraft, the edge data may appear on either the right or left margin. Several of the support parameters useful in interpreting the photographs are given in the lower margin. Each high-resolution photograph is presented in three sections. Because of the unequal lengths of the sections, the center coordinates given at the bottom of the photographs are located only approximately in the center of the middle section (designated H2). An approximate scale is provided to help in estimating the size of the prominent lunar features; for comparison, the width of a framelet is approximately 1 cm. The alphanumeric coordinates of major features are given at the bottom of each photograph.
For quick reference as to location, a sketch of a lunar globe with a cartographic coordinate system is included with each photograph. The centers of medium-resolution photographs are indicated on the globe by a cross. The outline of the entire high-resolution photograph is drawn on the globe with the particular section darkened.
The sequence in which photographs are presented in this Atlas has no relationship to a specific mission or the order in which exposures were made on each mission. Rather, a plan similar to that used by ACIC in their LAC charts and by Kuiper in the "Rectified Lunar Atlas" was adopted. The Moon is viewed with north at the top. The near and far sides are treated separately. Beginning with the near side, the photographs are presented by starting at the northwestern limb, sweeping to the right to the northeastern limb, then moving southward, and repeating the procedure. The result is six bands running west to east. The same left-to-right procedure was used on the far side, although the coverage was not as orderly and symmetrical. The medium-resolution photograph is generally presented first, always on a left-hand page, followed by the three sections of the corresponding high-resolution photograph. The reference globe in the lower outside edge of the page permits rapid location of a plate showing a particular area.
In a few cases, medium-resolution photographs were significantly degraded and are not included. To maintain an orderly sequence, however, intentional blank pages have been substituted. No data are lost by the omission, since these areas are adequately covered in adjacent photographs.
AVAILABILITY OF LUNAR ORBITER PHOTOGRAPHS
The halftone prints within this Atlas do not reproduce all the detail in the original photographs. For some uses it may therefore be desirable to obtain prints from the original negatives. Prints may be obtained from the NSSDC in Greenbelt, Md. The standard format is approximately 50 x 60 cm (20 x 24 in.) (GRE scale); other formats, such as microfilm, are available. For further information on availability, inquiries should be addressed to:
National Space Science Data Center
Some ants are blind, but most are not.
More Info: Some ant species are blind, especially those that are subterranean, but not all. Many ant species have compound eyes, which they use to hunt for food. Some species have ocelli, three simple eyes, which detect light. Most ants have marginal eyesight and rely on other senses to assist in foraging for food, but some have excellent eyesight, such as members of the genus Myrmecia, commonly referred to as bull ants.
Blind Ant Species
Army Ants: This group is not a single species, but encompasses more than 200 species across 18 genera and 6 subfamilies. An ant is classified as an army ant due to its aggressive swarming predatory behavior. These ants do not have compound eyes and cannot see; instead they use their antennae and pheromones to smell, touch, and communicate.
Martialis heureka: This previously unknown ant species was discovered in a Brazilian rainforest in 2003. Among the most primitive ant species yet discovered, it is pale in color and eyeless. When its discoverer showed the ant to an expert at Harvard University’s Museum of Comparative Zoology, the expert exclaimed that the species might as well be from Mars, it was so different. The species was thus named Martialis heureka, which essentially translates to “From Mars! Wow!”
“Ant Anatomy.” Ask A Biologist, Arizona State University. Web. 2 Mar. 2012.
“Army ant.” New World Encyclopedia. Web. 2 Mar. 2012.
“Scientists Find One Specimen of Strange Ancient Ant.” The New York Times. Web. 2 Mar. 2012.
The Curious Tale of Z and Cedilla
Our modern cedilla symbol began life as a "scribal ligature", which in plainer words means two characters combined into one by scribes as a kind of shorthand. The sound the cedilla represents in Romance languages was at one stage written cz, and the mark acquired the Spanish nickname cedilla or zedilla - the little zeta. Originally the sound was probably "ts" and it later became a "sh" sibilant sound, partly due to the large number of Arabic and Hebrew speakers in Spain during the Islamic era.
It has been or is used in Spanish, Portuguese, Catalan, Friulian, and Occitan.
Beyond Europe it was adopted by several languages of the Turkic family when they switched from Arabic and Cyrillic to Roman alphabets: Osmanli (also known as Ottoman Turkish), Azeri, Tatar, Turkmen, and others.
While it usually represents a sibilant, one European language, Latvian, uses it to mark palatals and distinguish them from velars and nasals, so it appears under g, k, l, and n.
Its use was much more common before IPA fonts were available.
This is the VOA Special English HEALTH REPORT.
American researchers say the vaccine medicine that can prevent the disease chicken pox may also provide protection against a painful nerve condition called shingles.
Shingles is also known as herpes zoster. It is caused by the same virus that causes chicken pox. The virus remains in the body's nerve cells after the chicken pox disappears. Shingles develops if the virus becomes active again many years later.
It is not clear why this happens. Medical researchers think a temporary weakness in the body's defense system may permit the virus to move along nerves to the skin. Most people who suffer shingles are more than fifty years old. People with weakened defense systems against disease are also more likely than others to develop shingles.
The first sign of shingles is a burning pain on the skin. The skin becomes red after a few days and enlarged areas appear. These swollen areas become hard. Then they disappear after a few weeks. These skin blisters are not a problem unless they appear on the face near the eyes.
However, the pain continues after the skin is healed. The pain can continue for months or even years. This is why doctors consider shingles a serious health problem. As many as one million people in the United States develop shingles every year. Doctors treat it with pills or pain-killing substances placed on the skin. But these treatments are not always effective.
Chicken pox was common among American children until the vaccine was approved in nineteen-ninety-five. Researchers for the company that makes the vaccine say the virus used in the medicine appears less likely than the natural virus to remain in the body's nerve cells. This could mean that children who get the vaccine for chicken pox may be less likely than others to develop shingles later in life.
Some researchers think the vaccine seems to increase the ability of the body's defense system to suppress the virus. Now, a large study is taking place to test if a stronger chicken pox vaccine can prevent shingles in healthy adults, or at least reduce the pain. The study involves more than thirty-eight-thousand people over the age of sixty. The results are expected next year.
This VOA Special English HEALTH REPORT was written by Nancy Steinbach.
Chinese music, the classical music forms of China.
Origins and Characteristics
Chinese music can be traced back as far as the third millennium B.C. Manuscripts and instruments from the early periods of its history are not extant, however, because in 212 B.C., Shih Huang-ti of the Ch'in dynasty caused all the books and instruments to be destroyed and the practice of music to be stopped. Certain outlines of ancient Chinese music have nevertheless been ascertained. Of primary significance is the fact that the music and philosophy of China have always been inseparably bound; musical theory and form have been invariably symbolic in nature and remarkably stable through the ages. Ancient Chinese hymns were slow and solemn and were accompanied by very large orchestras. Chamber music was also highly developed. Chinese opera originated in the 14th cent. as a serious and refined art.
Tone and the Instruments
In Chinese music, the single tone is of greater significance than melody; the tone is an important attribute of the substance that produces it. Hence musical instruments are separated into eight classes according to the materials from which they are made—gourd (sheng); bamboo (panpipes); wood (chu, a trough-shaped percussion instrument); silk (various types of zither, with silk strings); clay (globular flute); metal (bell); stone (sonorous stone); and skin (drum). Music was believed to have cosmological and ethical connotations comparable to those of Greek music. The failure of a dynasty was ascribed to its inability to find the proper huang chung, or tone of absolute pitch.
The huang chung was produced by a bamboo pipe that roughly approximated the normal pitch of a man's voice. Other pipes were cut, their length bearing a definite mathematical ratio to it. Their tones were divided into two groups—six male tones and six female. These were the lüs, and their relationship approximated the Pythagorean cycle of fifths. Legend ascribes their origin to birdsong, six from that of the male bird and six from that of the female, and the tones of the two sets were always kept separate.
The lüs did not constitute a scale, however. The scale of Chinese music is pentatonic, roughly represented by the black keys on a piano. From it, by starting on different notes, several modes may be derived. The melody of vocal music is limited by the fact that melodic inflection influences the meaning of a word. Likewise, quantitative rhythms are not easily adaptable to the Chinese language.
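The cycle-of-fifths construction described above is easy to make concrete. The sketch below is an illustration of the principle, not a reconstruction of any particular historical tuning; the alternate assignment of pitches to male and female groups follows the traditional account. It generates twelve lüs by stacking pure 3:2 fifths and folding each pitch back into a single octave; the first five pitches of the cycle, sorted, form a pentatonic scale:

from fractions import Fraction

def lu_ratios(n=12):
    ratios, r = [], Fraction(1)
    for _ in range(n):
        ratios.append(r)
        r *= Fraction(3, 2)   # up a pure fifth
        while r >= 2:
            r /= 2            # fold back into the octave
    return ratios

lus = lu_ratios()
male, female = lus[0::2], lus[1::2]   # six "male" and six "female" tones, alternating
pentatonic = sorted(lus[:5])          # first five fifths of the cycle

print("male tones:  ", [str(r) for r in male])
print("female tones:", [str(r) for r in female])
print("pentatonic:  ", [str(r) for r in pentatonic])

The resulting pentatonic ratios (1, 9/8, 81/64, 3/2, 27/16) correspond roughly to the black-key pattern mentioned above, and the full twelve-tone cycle reproduces the Pythagorean relationship among the lüs.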
Several types of notation were used. Singers used the syllabic symbols for the five notes of the pentatonic scale, as did players of pipes. Players of the stone and bell chimes, which were tuned to the lüs, used symbols that represented the pitch names of the lüs. Players of flutes and zithers used a kind of tablature. None of this notation indicated rhythm.
Throughout the political and social turmoil following World War I, Western (classical and popular) and Japanese sources dominated Chinese music. At present, Western concepts of harmony are in active use but are generally applied to vocal genres, such as cantatas and music dramas, which have educational as well as musical value. The Beijing Opera has produced numerous new works since 1949, most of them concerning political topics. It is one of the few forums of traditional performance style, although there is an ongoing effort directed by the Beijing Institute of National Music to preserve the few remainders of ancient musical practice.
See J. H. Levis, Foundations of Chinese Musical Art (2d ed. 1964); E. Halson, Peking Opera (1966); bibliography by F. Lieberman (1970, 2d ed. 1979).
Describe the key features of the Stresemann foreign policy in the years 1923-1929 (6 marks).

In 1924 Stresemann and Charles G. Dawes created the Dawes Plan. The Dawes Plan reduced annual reparation payments to an affordable amount. It was also agreed that American banks would invest in German industry. This also improved the trust the Allies had in Germany, as they were reassured that they would get their reparation payments. In 1925 Stresemann signed the Locarno Pact. This was a treaty between Germany, Britain, France, Italy and Belgium. Under the Pact, Germany agreed to keep its borders with France and Belgium if Allied troops left the Rhineland and France promised peace. This opened talks about Germany joining the League of Nations, as the Allies began to see Germany as a friend instead of an enemy. In 1929 Stresemann signed the Young Plan. This reduced the total reparations debt to £2 billion, and Germany was given a further 59 years to pay. This helped Germany’s debt problems, as it no longer had the worry of being unable to afford the annual reparation payments.

Describe the key features of the Dawes Plan (6 marks).
In 1924 the Dawes Plan was created by Stresemann and Charles G. Dawes, an American banker. One feature of the Dawes Plan was that the annual reparation payments Germany had to make were reduced to an affordable amount. This meant there was less chance that an incident like the occupation of the Ruhr would happen again, as the annual payments were much more realistic given Germany’s financial state. Another feature of the Dawes Plan was that American banks would invest in German industry. This meant that Germany could rebuild its industry, increasing employment, which led to increased profits. One bad feature of the Dawes Plan was that it relied heavily on American banks. This would prove to be a bad idea after the Wall Street Crash, as America called in all their foreign loans,...
Smashing atoms together is no longer just science fiction — researchers have been doing just that on Science Hill for more than 40 years.
Since the late 1960s, Yale researchers have used the particle accelerator at the Wright Nuclear Structure Laboratory on Science Hill to examine properties of the atomic nucleus, physics professor Andreas Heinz said. By analyzing data from the experiments, researchers have helped explain how an atomic nucleus is held together, physics professor Volker Werner said.
Although the nucleus is a complex system, experiments have shown that it exhibits regular behaviors, such as sending out fixed amounts of radiation.
“We would assume that the nucleus would be an extremely chaotic system, but as it turns out, they exhibit very regular patterns,” Heinz said.
In order to study atomic nuclei and the force that holds them together, researchers must use probes on a similar scale to the nuclei. At the Wright Nuclear Structure Laboratory, beams of charged particles collide with nuclei at speeds as high as 60,000 kilometers per second, researchers said. Because positively charged nuclei repel each other, researchers use the particle accelerator to give the beam particles enough energy to overcome the repulsion.
“If you want to study the structure of a mosquito, don’t hit it with a truck,” Werner said.
In the accelerator, positively charged beams that leave the accelerator at 10 to 20 percent of the speed of light eventually collide with a nucleus, which causes it to emit gamma radiation, or high-energy light, Heinz said. Gamma radiation detectors, he added, keep track of the number, interval and energy level of the rays the nucleus emits. Researchers then use this data to determine properties of the nucleus, Heinz said. For example, if the emitted gamma rays have similar energy levels, scientists can figure out that the nucleus is spherical, Werner said.
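A back-of-envelope calculation (an editorial illustration with an assumed projectile-target pair; it does not describe any specific Yale experiment) shows why beam speeds of 10 to 20 percent of the speed of light suffice to overcome the Coulomb repulsion between nuclei:

AMU_MEV = 931.494   # rest energy of one atomic mass unit, in MeV
E2_MEV_FM = 1.44    # Coulomb constant e^2/(4*pi*eps0), in MeV*fm
R0_FM = 1.2         # nuclear radius constant, in fm (common textbook value)

def kinetic_energy_mev(beta, mass_amu):
    # Relativistic kinetic energy of a nucleus moving at speed beta * c.
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return (gamma - 1.0) * mass_amu * AMU_MEV

def coulomb_barrier_mev(z1, a1, z2, a2):
    # Barrier height when the two nuclear surfaces just touch.
    r_fm = R0_FM * (a1 ** (1.0 / 3.0) + a2 ** (1.0 / 3.0))
    return z1 * z2 * E2_MEV_FM / r_fm

print(f"O-16 kinetic energy at 0.2c: {kinetic_energy_mev(0.2, 16):.0f} MeV")             # ~307 MeV
print(f"O-16 on Sn-120 Coulomb barrier: {coulomb_barrier_mev(8, 16, 50, 120):.0f} MeV")  # ~64 MeV

At 20 percent of the speed of light, an oxygen-16 beam carries roughly 300 MeV of kinetic energy, several times the roughly 64-MeV Coulomb barrier it must climb to touch a heavy nucleus such as tin-120.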
However, the particle accelerator at the Wright Nuclear Structure Laboratory isn’t powerful enough to conduct experiments about the structure of protons and neutrons, said physics professor John Harris, who is the director of the particle accelerator but does not conduct his research there. Instead, he conducts his research at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in New York and at the Large Hadron Collider in Geneva, Switzerland.
But many experiments are still being conducted at the Wright Nuclear Structure Laboratory, physics professor Richard Casten GRD ‘67 said. The particle accelerator at the laboratory was used for 4,300 hours last year, Casten said.
So far, this research has no direct applications, said Mark Heinz, a postdoctoral researcher. But the equipment required for these studies pushes the boundary of technology, he added.
The Wright Nuclear Structure Laboratory was completed in 1964 with support from the National Science Foundation.
Hypoplastic Left Heart Syndrome
Graphic courtesy of the Centers for Disease Control and Prevention, National Center on Birth Defects and Developmental Disabilities
Hypoplastic left heart syndrome (HLHS) is a rare congenital heart defect that occurs when the left side of a newborn’s heart is underdeveloped. When an infant is born with HLHS, most of the structures on the left side of the heart (left ventricle, mitral valve, aorta and aortic valve) are small and fail to completely develop. In a healthy infant, the left side of the heart receives oxygen-rich blood from the lungs and pumps it out to the rest of the body. When a baby has HLHS the circulation of blood becomes critically low. HLHS is a critical defect which is fatal if not treated within the first week after birth.
The following are structures found in the left side of the heart. These structures are usually affected by HLHS.
The lower left chamber of the heart, called the left ventricle, receives oxygen-rich blood from the left atrium and pumps it into the aorta. The left ventricle is a strong and muscular chamber designed to pump blood to the rest of the body. In an infant with HLHS, this chamber is poorly developed, which means that critical oxygen-rich blood does not reach the rest of the body.
The mitral valve controls blood flow between the left atrium and left ventricle in the heart.
The aorta is the largest artery in the body. It carries oxygen-rich blood from the left ventricle out into the rest of the body.
The aortic valve regulates blood flow from the heart into the aorta.
A newborn with a hypoplastic left heart may appear normal at first. It is not until the first few hours of life that symptoms begin to appear. Some symptoms take up to a few days to develop. The following are symptoms associated with HLHS:
- Poor suckling and feeding
- Shortness of breath
- Rapid breathing
- Cold extremities
- Enlarged liver
- Poor pulse
- Pounding heart
- Bluish (cyanosis) or poor skin color
- Sudden death
Hypoplastic left heart syndrome is irreversible and cannot be completely cured. The only available treatments for this condition are a series of surgeries or in very serious cases, a heart transplant.
In some cases, heart transplantation is considered the best treatment for hypoplastic left heart syndrome. In most cases however, HLHS requires a series of three surgeries.
After the initial diagnosis, a newborn is transferred to the neonatal intensive care unit where a ventilator may be used to help with breathing. The series of surgeries then begins, usually within the first few days of life. The first procedure is called the Norwood operation. During this procedure the surgeon builds an aorta and inserts an artificial shunt that will maintain blood flow to the lungs.
The second procedure, called the Glenn shunt or Hemi-Fontan procedure, usually occurs when a baby is four to six months of age. During this procedure, surgeons remove the shunt placed during the Norwood operation and important artery connections are made to reduce the work of the right ventricle.
The final procedure is called the Fontan procedure. This surgery is usually performed before a child is three years old. The goal of this procedure is to connect the remaining blood vessels carrying blood from the body to the blood vessels carrying blood to the lungs.
A child born with HLHS will need long-term monitoring as their reconstructed heart grows. Some children will need to take heart medication. In some cases, patients will need additional surgeries in their 20s and 30s.
Studies have suggested that the use of certain antidepressants during pregnancy could raise the risk of heart defects like HLHS in newborns. Women who take antidepressants while pregnant are at a potentially higher risk of having babies with serious birth defects.
Baum Hedlund is investigating if there is a link between the use of antidepressants in pregnancy and birth defects like hypoplastic left heart syndrome.
The Arab slave trade refers to the practice of slavery in West Asia and East Africa. The trade mostly involved East Africans and Middle Eastern peoples (Arabs, Berbers, Persians, etc.), and to some extent Indians, while others such as the Chinese played a very small role. The Arab slave trade was not limited to people of a certain color, ethnicity, or religion. In the early days of the Islamic state, in the 8th and 9th centuries, most of the slaves were of Persian and Caucasian origins. Later, toward the 18th and 19th centuries, slaves mainly came from East Africa.
From a Western point of view, the subject merges with the Oriental slave trade, which followed two main routes in the Middle Ages:
- Overland routes across the Maghreb and Mashreq deserts (Trans-Saharan route)
- Sea routes to the east of Africa through the Red Sea and Indian Ocean (Oriental route)
The slave trade went to different destinations from the transatlantic slave trade, and supplied African slaves to the Muslim world, which at its peak stretched over three continents from the Atlantic (Morocco, Spain) to India and eastern China.
A recent and controversial topic
The history of the slave trade has given rise to numerous debates amongst historians. Firstly, specialists are undecided on the number of Africans taken from their homes; this is difficult to resolve because of a lack of reliable statistics: there was no census system in medieval Africa. Archival material for the transatlantic trade in the 16th to 18th centuries may seem more useful as a source, yet these record books were often falsified. Historians have to use imprecise narrative documents to make estimates which must be treated with caution: Luiz Felipe de Alencastro states that there were 8 million slaves taken from Africa between the 8th and 19th centuries along the Oriental and the Trans-Saharan routes. Olivier Pétré-Grenouilleau has put forward a figure of 17 million African people enslaved (in the same period and from the same area) on the basis of Ralph Austen's work. Paul Bairoch suggests a figure of 25 million African people subjected to the Arab slave trade, as against 11 million that arrived in the Americas from the transatlantic slave trade. Owen 'Alik Shahadah, author of the audio documentary African Holocaust, puts the figure at 10 million and argues that the trade only boomed in the 18th century; prior to this it was "a trickle trade", and exaggerated numbers have been put forward to de-emphasize the transatlantic trade.
Another obstacle to a history of the Arab slave trade is the limitations of extant sources. There exist documents from non-African cultures, written by educated men in Arabic, but these only offer an incomplete and often condescending look at the phenomenon. For some years there has been a huge amount of effort going into historical research on Africa. Thanks to new methods and new perspectives, historians can interconnect contributions from archaeology, numismatics, anthropology, linguistics and demography to compensate for the inadequacy of the written record.
In Africa, slaves taken by African owners were often captured, either through raids or as a result of warfare, and frequently employed in manual labor by the captors. Some slaves were traded for goods or services to other African kingdoms.
The Arab slave trade from East Africa is one of the oldest slave trades, predating the European transatlantic slave trade by hundreds of years. Male slaves were employed as servants, soldiers, or laborers by their owners, while female slaves, mostly from Africa, were long traded to the Middle Eastern countries and kingdoms by Arab and Oriental traders, some as female servants, others as concubines. Arab, African, and Oriental traders were involved in the capture and transport of slaves northward across the Sahara desert and across the Indian Ocean region into the Middle East, Persia, and the Indian subcontinent. From approximately 650 CE until around 1900 CE, as many African slaves may have crossed the Sahara Desert, the Red Sea, and the Indian Ocean as crossed the Atlantic, and perhaps more. The Arab slave trade continued in one form or another into the early 1900s. Historical accounts and references to slave-owning nobility in Arabia, Yemen and elsewhere are frequent into the early 1920s.
For some people, any mention of the slave-trading past of the Islamic world is rejected as an attempt to minimise the transatlantic trade. Yet a slave trade in the Indian Ocean, Red Sea, and Mediterranean pre-dates the arrival of any significant number of Europeans on the African continent.
Medieval Arabic and Persian sources
Ibn Battûta - first Arab geographer to visit sub-Saharan Africa
These are given in chronological order. Scholars from the Arab world had been travelling to Africa since the time of Muhammad in the 7th century.
- Al Masudi (died 957), Muruj adh-dhahab or Meadows of Gold, the reference manual for geographers and historians of the Muslim world. The author had travelled widely across the Arab world as well as the Far East.
- Ya'qubi (9th century), Book of Countries
- Al-Bakri, author of Book of Roads and Kingdoms, published in Cordoba around 1068, gives us information about the Berbers and their activities; he collected eye-witness accounts on Saharan caravan routes.
- Al Idrisi (died circa 1165), Description of Africa and Spain
- Ibn Battûta (died circa 1377), Moroccan geographer who travelled to sub-Saharan Africa, to Gao and to Timbuktu. His principal work is called Gift for those who like to reflect on the curiosities of towns and marvels of travel.
- Ibn Khaldun (died in 1406), historian and philosopher from North Africa. Sometimes considered as the historian of Arab, Berber and Persian societies. He is the author of Historical Prolegomena and History of the Berbers.
- Ahmad al-Maqrîzî (died in 1442), Egyptian historian. His main contribution is his description of Cairo markets.
- Leo Africanus (died circa 1548), author of a rare description of Africa.
- Rifa'a al Tahtawi (died in 1873), who translated medieval works on geography and history. His work is mostly about Muslim Egypt.
- Joseph Cuoq, Collection of Arabic sources concerning Western Africa between the 8th and 16th centuries (Paris 1975)
European texts (16th - 19th centuries)
- João de Castro, Roteiro de Lisboa a Goa (1538)
- James Bruce (1730-1794), Travels to Discover the Source of the Nile (1790)
- René Caillié (1799-1838), Journal d'un voyage à Tombouctou
- Henry Morton Stanley (1841-1904), Through the Dark Continent (1878)
Other sources
- African oral tradition
- Kilwa Chronicle (16th century fragments)
- Numismatics: analysis of coins and of their diffusion
- Archaeology: architecture of trading posts and of towns associated with the slave trade
- Iconography: Arab and Persian miniatures in major libraries
- European engravings, contemporary with the slave trade, and some more modern
- Photographs from the 19th century onward
Historical and geographical context of the Arab slave trade
A brief review of the region and era in which the Oriental and trans-Saharan slave trade took place should be useful here. It is not a detailed study of the Islamic world, nor of Africa, but an outline of key points which will help with understanding the slave trade in this part of the world.
The Islamic world
The religion of Islam appeared in the 7th century CE, and in the next hundred years it was quickly diffused throughout the Mediterranean area, spread by Arabs who had conquered North Africa after its long occupation by the Berbers; they extended their rule to the Iberian peninsula where they replaced the Visigoth kingdom. Arabs also took control of western Asia from the Byzantine Empire and from the Sassanid Persians. These regions therefore had a diverse range of different peoples, and their knowledge of slavery and a trade in African slaves went back to Antiquity. To some extent, these regions were unified by an Islamic culture built on both religious and civic foundations; they used the Arabic language and the dinar (currency) in commercial transactions. Mecca in Arabia, then as now, was the holy city of Islam and pilgrimage centre for all Muslims, whatever their origins.
It must be noted here that the conquests of the Arab armies, and the expansion of the Islamic state that followed, always resulted in the capture of war prisoners, who were subsequently set free or turned into slaves (raqeeq, رقيق) and servants rather than held as prisoners, as was the Islamic tradition in wars. Once taken as slaves, they had to be dealt with in accordance with Islamic law, which was the law of the Islamic state, especially during the Umayyad and Abbasid eras. According to that law, slaves were allowed to earn their living if they opted for that; otherwise it was the owner's (master's) duty to provide for them. They also could not be forced to earn money for their masters except by an agreement between the slave and the master. This concept is called "مخارجة" (mukharajah) in Islamic jurisprudence. If the slave agreed to this and wished the money he or she earned to be counted toward his or her emancipation, then this had to be written in the form of a contract between the slave and the master. This is called "مكاتبة" (mukatabah) in Islamic jurisprudence. Muslims believe that slave owners are strongly encouraged to perform mukatabah with their slaves, as directed by the Qur'an:
“And if any of your slaves ask for a deed in writing (to enable them to earn their freedom for a certain sum), give them such a deed if ye know any good in them: yea, give them something yourselves out of the means which Allah has given to you.” (Qur'an 24:33)
After the fall of the Umayyad dynasty (750), the Muslim world was divided into various political entities (caliphates, emirates, sultanates), often rivals of one another. In the 11th century, the arrival of the Turks from central Asia radically changed the geography of the Near East and of North Africa, with the establishment of the Ottoman Empire (1299-1922).
The framework of Islamic civilisation was a well-developed network of towns and oasis trading centres with the market (souk, bazaar) at its heart. These towns were inter-connected by a system of roads crossing semi-arid regions or deserts. The routes were travelled by convoys, and black slaves formed part of this caravan traffic.
Africa: 8th through 19th centuries
13th century Africa - simplified map of the main states, kingdoms and empires
In the 8th century CE, Africa was dominated by Arab-Berbers in the north: Islam moved southwards along the Nile and along the desert trails.
- The Sahara was thinly populated. Nevertheless, since Antiquity there had been cities living on a trade in salt, gold, slaves, cloth, and on agriculture enabled by irrigation: Tahert, Oualata, Sijilmasa, Zaouila, and others. They were ruled by Arab or Berber chiefs (Tuaregs). Their independence was relative and depended on the power of the Maghrebi and Egyptian states.
- In the Middle Ages, sub-Saharan Africa was called Sûdân in Arabic, meaning land of the Blacks. It provided a pool of manual labour for North Africa and Saharan Africa. This region was dominated by certain states: the Ghana Empire, the Empire of Mali, the Kanem-Bornu Empire.
- In eastern Africa, the coasts of the Red Sea and Indian Ocean were controlled by native Muslims, and Arabs were important as traders along the coasts. Nubia had been a "supply zone" for slaves since Antiquity. The Ethiopian coast, particularly the port of Massawa and Dahlak Archipelago, had long been a hub for the exportation of slaves from the interior, even in Aksumite times. The port and most coastal areas were largely Muslim, and the port itself was home to a number of Arab and Indian merchants.
Slaves in eastern Africa
(Illustration from the late 19th century)
The Solomonic dynasty of Ethiopia often exported Nilotic slaves from its western borderland provinces, or from newly conquered or reconquered Muslim provinces. Native Muslim Ethiopian sultanates exported slaves as well, such as the sometimes independent Adal Sultanate. On the coast of the Indian Ocean too, slave-trading posts were set up by Arabs and Persians. The archipelago of Zanzibar, along the coast of present-day Tanzania, is undoubtedly the most notorious example of these trading colonies. East Africa and the Indian Ocean continued as an important region for the Oriental slave trade up until the 19th century. Livingstone and Stanley were the first Europeans to penetrate to the interior of the Congo basin and to discover the scale of slavery there. The Arab trader Tippo Tip extended his influence there and made many people slaves. After Europeans had settled in the Gulf of Guinea, the trans-Saharan slave trade became less important. In Zanzibar, slavery was abolished late, in 1897, under Sultan Hamoud bin Mohammed.
- The rest of Africa had no direct contact with Muslim slave-traders.
Legacy of Arab slave trade
Islam, like Christianity, became the context for the spread of a dominant culture, in this case Arab culture: Arab names became Islamic names, and those who adopted Islam often adopted Arab culture as well in an attempt to become more Islamic. The Afro-Arab relationship was riddled with complexities rooted in this cultural nexus. Some Arabs were Arab linguistically but racially African (see the definition of Arab). Thus, the Arab trade in enslaved Africans was conducted not only by Asiatic and Caucasian Arabs, but also by African Arabs: Africans speaking Arabic as a first language and embracing Arab culture.
Focus on the Arab slave trade has previously been low, partly because most descendants of enslaved people today trace their ancestry to the transatlantic slave trade, so the impact of the Arab trade on the peoples of the Americas is negligible. Another reason is that the legacy of the Arab slave trade is less visible than that of the European trade in enslaved Africans: there are no ghettos or prison complexes in Arab lands overflowing with people of African descent, and the African diaspora in Arab lands has largely disappeared through intermarriage. Some argue that the resurgence of Islamophobia has brought this aspect of history to the foreground.
Africa and the Arab slave trade
People were captured, transported, bought and sold by some very different characters. The trade passed through a series of intermediaries and enriched some sections of the Muslim aristocracy.
Slavery fed on wars between African peoples and states, which gave rise to an internal slave trade. Those conquered owed tribute in the form of men and women reduced to captivity. Sonni Ali Ber (1464–1492), emperor of Songhai, waged many wars to extend his territory.
In the 8th and 9th centuries, the Caliphs had tried to colonise the African shores of the Indian Ocean for commercial purposes. But these establishments were ephemeral, often founded by exiles or adventurers. The Sultan of Cairo sent slave traffickers on raids against the villages of Darfur. In the face of these attacks, the people formed militias, building towers and outer defences to protect their villages.
Aims of the slave trade and slavery
Chained slaves in eastern Africa - 19th century
Economic motives were the most obvious. The trade resulted in large profits for those who were running it. Several cities became rich and prospered thanks to the traffic in slaves, both in the Sûdân region and in East Africa. In the Sahara desert, chiefs launched expeditions against pillagers looting the convoys. The kings of medieval Morocco had fortresses constructed in the desert regions which they ruled, so they could offer protected stopping places for caravans. The Sultan of Oman transferred his capital to Zanzibar, since he had understood the economic potential of the eastward slave trade.
There were also social and cultural reasons for the trade: in sub-Saharan Africa, possession of slaves was a sign of high social status. In Arab-Muslim areas, harems needed a "supply" of women.
Finally, it is impossible to ignore the religious and racist dimensions of this trade. Punishing bad Muslims or pagans was held to be an ideological justification for enslavement [citation needed]: the Muslim rulers of North Africa, the Sahara and the Sahel sent raiding parties to persecute infidels; in the Middle Ages, Islamisation was only superficial in rural parts of Africa [citation needed].
Racist opinions recurred in the works of historians and geographers: so in the 14th century CE Ibn Khaldun could write "...the Negro nations are, as a rule, submissive to slavery, because (Negroes) have little that is (essentially) human and possess attributes that are quite similar to those of dumb animals..."
However, Ibn Khaldun also wrote of the Arabs themselves: "they are the most savage human beings that exist. Compared with sedentary people, they are on a level with wild, untamable animals and dumb beasts of prey." He held that Arabs dominate only the plains, "because they are, by their savage nature, people of pillage and corruption" (The Muqaddimah).
In addition, there is debate over his ethnicity: some refer to him as Andalusian/Spanish (he grew up there and his parents were from there), some as Berber/North African (from his time in Tunis), and some as Arab (he traced his ancestors to Yemen); see the Ibn Khaldun article for details.
" In the same period, the Egyptian scholar Al-Abshibi wrote, "When he (a black man) is hungry, he steals, and when he is sated, he fornicates". (This is however a forged hadith which has been universally dismissed by hadith experts as false. See the book "An introduction to the science of Hadith" by Suhaib Hasan for further details of this quote)
Geography of the slave trade
Cowrie shells were used as money in the slave trade
Merchants of slaves for the Orient also stocked up in Europe. Danish merchants had bases in the Volga region and dealt in Slavs with Arab merchants. Circassian slaves were conspicuously present in the harems, and many odalisques from that region appear in the paintings of Orientalists. Non-Muslim slaves were valued in the harems for all roles (gate-keeper, servant, odalisque, musician, dancer, court dwarf). In 9th-century Baghdad, the Caliph Al-Amin owned about 7,000 black eunuchs (who were completely emasculated) and 4,000 white eunuchs (who were castrated). In the Ottoman Empire, the last black eunuch, a slave sold in Ethiopia named Hayrettin Effendi, was freed in 1918. The slaves of Slavic origin in Al-Andalus had been captured by the Varangians. They were placed in the Caliph's guard and gradually took up important posts in the army (becoming the saqaliba); some even seized taifas after the civil war led to the implosion of the Western Caliphate. Columns of slaves feeding the great harems of Córdoba, Seville and Granada were organised by Jewish merchants (mercaderes) from Germanic lands and parts of Northern Europe not controlled by the Carolingian Empire. These columns crossed the Rhône valley to reach the lands south of the Pyrenees.
Slaves were also brought into the Arab world via Central Asia. Many of these slaves went on to serve in the armies, forming an elite rank; it was from these troops that the Mamluks came.
- At sea, Barbary pirates joined in this traffic when they could capture people by boarding ships or by incursions into coastal areas.
- Nubia, Ethiopia and Abyssinia were also "exporting" regions: in the 15th century, there were Abyssinian slaves in India where they worked on ships or as soldiers. They eventually rebelled and took power (dynasty of the Habshi Kings in Bengal 1487-1493).
- The Sûdân region and Saharan Africa formed another "export" area, but it is impossible to estimate the scale, since there is a lack of sources with figures.
- Finally, the slave traffic affected eastern Africa, but the distance and local hostility slowed down this section of the Oriental trade.
Caravan trails, set up in the 9th century, went past the oases of the Sahara; travel was difficult and uncomfortable for reasons of climate and distance. Since Roman times, long convoys had transported slaves as well as all sorts of products to be used for barter. To protect against attacks from desert nomads, slaves were used as an escort. Any who slowed down the progress of the caravan were killed.
Historians know less about the sea routes. From the evidence of illustrated documents, and travellers' tales, it seems that people travelled on dhows or jalbas, Arab ships which were used as transport in the Red Sea. Crossing the Indian Ocean required better organisation and more resources than overland transport. Ships coming from Zanzibar made stops on Socotra or at Aden before heading to the Persian Gulf or to India. Slaves were sold as far away as India, or even China: there was a colony of Arab merchants in Canton. Chinese slave traders bought black slaves (Hei-hsiao-ssu) from Arab intermediaries or "stocked up" directly in coastal areas of present-day Somalia. Serge Bilé cites a 12th century text which tells us that most well-to-do families in Canton had black slaves whom they regarded as savages and demons because of their physical appearance. The 15th century Chinese emperors sent maritime expeditions, led by Zheng He, to eastern Africa. Their aim was to increase their commercial influence.
13th century slave market in the Yemen
Slaves were often bartered for objects of various different kinds: in the Sûdân, they were exchanged for cloth, trinkets and so on. In the Maghreb, they were swapped for horses. In the desert cities, lengths of cloth, pottery, Venetian glass beads, dyestuffs and jewels were used as payment. The trade in black slaves was part of a diverse commercial network. Alongside gold coins, cowrie shells from the Indian Ocean or the Atlantic (Canaries, Luanda) were used as money throughout black Africa (merchandise was paid for with sacks of cowries).
Slave markets and fairs
Enslaved Africans were sold in the towns of the Muslim world. In 1416, al-Maqrizi told how pilgrims coming from Takrur (near the Senegal River) had brought 1,700 slaves with them to Mecca. In North Africa, the main slave markets were in Morocco, Algiers, Tripoli and Cairo. Sales were held in public places or in souks. Potential buyers made a careful examination of the "merchandise": they checked the state of health of a person who was often standing naked with wrists bound together. In Cairo, transactions involving eunuchs and concubines happened in private houses. Prices varied according to the slave's quality.
Zanzibar - the Old Slave Market
Towns and ports implicated in the slave trade
- North Africa:
- Marrakesh (Morocco)
- Algiers (Algeria)
- Tripoli (Libya)
- Cairo (Egypt)
- Aswan (Sudan)
- Sub-Saharan Africa
- Timbuktu (Mali)
- East Africa:
- Massawa (Eritrea)
- Zeila (Somalia)
- Mogadishu (Somalia)
- Bagamoyo (Tanzania)
- Zanzibar (Tanzania)
- Sofala (Beira, Mozambique)
- Arabian peninsula
- Zabid (Yemen)
- Muscat (Oman)
- Aden (Yemen)
- Atlantic slave trade
- African slave trade
- Slave beads
- Slavery in antiquity
- Zanj Rebellion
- This article was initially translated from the featured French wiki article "Traite musulmane" on 19 May 2006.
- ^ Luiz Felipe de Alencastro, Traite, in Encyclopædia Universalis (2002), corpus 22, page 902.
- ^ Ralph Austen, African Economic History (1987)
- ^ Paul Bairoch, Mythes et paradoxes de l'histoire économique, (1994). See also: Economics and World History: Myths and Paradoxes (1993)
- ^ Mintz, S. Digital History Slavery, Facts & Myths
- ^ Catherine Coquery-Vidrovitch, in Les Collections de l'Histoire (April 2001), says: "la traite vers l'Océan indien et la Méditerranée est bien antérieure à l'irruption des Européens sur le continent" ("the slave trade towards the Indian Ocean and the Mediterranean long predates the arrival of Europeans on the continent").
- ^ Pankhurst, Richard. The Ethiopian Borderlands: Essays in Regional History from Ancient Times to the End of the 18th Century (Asmara, Eritrea: Red Sea Press, 1997), pp.416
- ^ Pankhurst. Ethiopian Borderlands, pp.432
- ^ Pankhurst. Ethiopian Borderlands, pp.59
- ^ "Myths regarding the Arab Slave Trade". "Owen 'Alik Shahadah".
- ^ Ibn Khaldun The Muqaddimah trans. F.Rosenthal ed. N.J.Dawood (Princeton 1967); see also Jacques Heers, Les négriers en terre d'islam, page 177.
- ^ François Renault, Serge Daget, Les traites négrières en Afrique, Karthala, p.56
- ^ Bernard Lewis, Race and Color in Islam (1979)
- ^ Serge Bilé, La légende du sexe surdimensionné des Noirs, éditions du Rocher, 2005, p.80: "la plupart des familles aisées de Canton possédaient des esclaves noirs [...] qu'elles tenaient néanmoins pour des sauvages et des démons à cause de leur aspect physique" ("most well-to-do families in Canton owned black slaves [...] whom they nevertheless regarded as savages and demons because of their physical appearance").
- Mintz, S., Digital History/Slavery Facts & Myths
Books in English
Dhows were used to transport African slaves to India
- The African Diaspora in the Mediterranean Lands of Islam (Princeton Series on the Middle East) by Eve Troutt Powell (Editor), John O. Hunwick (Editor)
- Edward A. Alpers, The East African Slave Trade (Berkeley 1967)
- Robert C. Davis, Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, the Barbary Coast, and Italy, 1500-1800 (Palgrave Macmillan, 2003)
- Allan G. B. Fisher, Slavery and Muslim Society in Africa, ed. C. Hurst (London 1970, 2nd edition 2001)
- Murray Gordon, Slavery in the Arab world (New York 1989)
- Bernard Lewis, Race and slavery in the Middle East (OUP 1990)
- Ibn Khaldun, The Muqaddimah trans. F.Rosenthal ed. N.J.Dawood (Princeton 1967)
- Paul E. Lovejoy, Transformations in Slavery: A History of Slavery in Africa (Cambridge 2000)
- Ronald Segal, Islam's Black Slaves (Atlantic Books, London 2002)
- Owen 'Alik Shahadah, African Holocaust Audio Documentary
Books and articles in French
- Serge Daget, De la traite à l'esclavage, du Ve au XVIIIe siècle, actes du Colloque international sur la traite des noirs (Nantes, Société française d'histoire d'Outre-Mer, 1985)
- Jacques Heers, Les Négriers en terre d'islam (Perrin, Pour l'histoire collection, Paris, 2003) (ISBN 2-262-01850-2)
- Murray Gordon, L'esclavage dans le monde arabe, du VIIe au XXe siècle (Robert Laffont, Paris, 1987)
- Bernard Lewis, Race et esclavage au Proche-Orient, (Gallimard, Bibliothèque des histoires collection, Paris, 1993) (ISBN 2-07-072740-8)
- Olivier Petré-Grenouilleau, Les Traites oubliée des négrières (la Documentation française, Paris, 2003)
- Jean-Claude Deveau, Esclaves noirs en Méditerranée in Cahiers de la Méditerranée, vol. 65, Sophia-Antipolis
- Olivier Petré-Grenouilleau, La traite oubliée des négriers musulmans in L'Histoire, special number 280 S (October 2003), pages 48-55.
- East African Slave-Trade
- African Holocaust/Arab Slave Trade
- Defining Legends
- Calling Trade Muslim Misleads
RESOURCES + MATERIALS
• PlayColor Sticks
• Black Sharpie
- Cut out template. Use an extra sheet of paper to cut out the template of the vase. Have students trace the vase on their paper with a black Sharpie. Older students may be able to draw their own vase, but this step will be helpful for the younger ones.
- Using the black Sharpie, have students draw the water line inside of the vase. Then, have them draw the line of the table in the background.
- Using the PlayColor Sticks, students can start adding in the flowers. Students can start with yellow as the base and then add in the other primary colors of red and blue. Once they have their Primary Petals completed, they can draw the stems and then color in the table.
- Finally, your students can fill in the rest of the white space with watercolors. They can add blue water to the vase and any color background they’d like. Just encourage them to fill the page with saturated colors.
BOULDER, Colo., Oct. 22 (UPI) — The world’s plants play a bigger role in cleansing the Earth’s atmosphere of common air-polluting chemicals than previously thought, U.S. researchers say.
Scientists at the National Center for Atmospheric Research in Boulder, Colo., used observations, gene studies and computer modeling to show that deciduous plants absorb about a third more of a class of air-polluting chemicals known as oxygenated volatile organic compounds than previously believed, ScienceDaily.com reported Friday.
These compounds form in the atmosphere from hydrocarbons and other chemicals emitted from both natural sources and human activities, and can have long-term impacts on the environment and human health, researchers say.
“Plants clean our air to a greater extent than we had realized,” research center scientist Thomas Karl, the lead author, says. “They actively consume certain types of air pollution.”
By measuring levels of the atmospheric compounds in a number of ecosystems in the United States and other countries, the researchers found that deciduous plants appear to be absorbing them at an unexpectedly fast rate — as much as four times more rapidly than previously estimated.
“This really transforms our understanding of some fundamental processes taking place in our atmosphere,” Karl says.
Animals of the Arctic
The Arctic is home to many different animals…
The Arctic’s beautiful and unique landscape is home to a diverse range of animals, several of which are not found anywhere else in the world. The effects of climate change on some animals – such as polar bears – are well known, but did you know that melting sea ice could also push walrus, and even some bird species, towards extinction?
Warmer weather means that animals from outside the Arctic could move north
Warmer weather means that many animals that currently live outside of the Arctic will migrate northwards, bringing new species, and in some cases new diseases, into the region. Some of the animals that are currently found all across the Arctic could suffer a major decline.
How does the extreme weather in the Arctic affect the animals who live there?
The Arctic climate is already very mixed and extreme. A sudden summer storm or freeze can wipe out an entire generation of young birds, thousands of seal pups or hundreds of caribou calves. Climate change is making the weather in the Arctic even more extreme, putting even more animals at risk.
Some Arctic land animals are struggling to stay alive
Land animals such as reindeer and caribou are expected to find it harder to find food and breeding grounds and their migration routes will have to change – thousands have already been killed by warm events in winter. Another problem is that more trees will be able to grow if the weather is warmer, and this will change the habitat to one which they are less adapted to.
Some types of fish could be harmed because of climate change…
The impact on marine animals will be mixed. Some fish species, including herring and cod, are likely to increase, but other marine life and their habitats could be harmed through increased oil extraction and shipping. Fish found in lakes and rivers (freshwater species) that are adapted to live in the Arctic will be badly affected with their numbers reducing. Arctic char, broad whitefish and Arctic cisco are among the fish threatened by a warming climate.
Why does this matter?
Arctic climate change will have an impact on biodiversity around the world because migratory species depend on breeding and feeding grounds in the Arctic. Our planet’s ecosystems are finely balanced and a small change in one region can have a big impact everywhere else.
British Rule in India
Sepoys: Indian soldiers who protected the British East India Company's interests in India
The Sepoy Mutiny: the first war of Indian independence
1 What were the immediate causes (Muslim and Hindu issues)?
Rifle cartridges were greased with cow and pig fat; the cow is sacred to Hindus and the pig is taboo to Muslims.
2 What happened when the Sepoys refused to load the rifles and what did fellow Sepoys do after the mutiny trials?
A group of Sepoys at an army post refused to load the rifles with the cartridges; the British charged them with mutiny and put them in prison. Other Sepoy troops then went on a rampage, killing 50 European men, women and children.
3 What were the numbers on each side, and who won? What undermined the numerically superior force?
Indian troops numbered about 230,000 against 45,000 British troops. The British won because the Indian troops were not well organized and the Hindu and Muslim troops were not working together.
4 What were the outcomes of the mutiny?
The British Parliament ended the East India Company's rule: a new Indian government was created under the Crown, Queen Victoria became Empress of India, and England took over India directly.
British colonial rule
1 How many British administered how many Indians?
A British viceroy with a staff of 3,500 ruled 300 million Indian people.
2 According to the textbook, what were the benefits of British rule?
British rule brought order and stability, an efficient government, a new school system, a postal service and trains.
3 What were the costs of British rule?
A huge economic cost: illegal taxes were collected, and 30 million Indians died of starvation because they had to grow cotton instead of food.
1 What were the characteristics of the first Indian nationalists?
The first Indian nationalists were upper class and English-educated. Some were lawyers or civil servants, and they wanted reform, not revolution.
2 What did they attempt to do at first, and what was a problem with that?
They preferred reform to revolution, but the slow pace of reform convinced many of them that it wasn't going anywhere.
3 How did the movement fracture?
The Indian National Congress that they formed had difficulties because of religious differences. The INC wanted independence for all Indians, regardless of class, but many of its leaders were Hindu, and Muslims wanted a separate Muslim League.
4 When Gandhi returned to India, what did he advocate or want for India?
Gandhi wanted independence, and he began a movement based on nonviolent resistance to try to make the British improve the lives of the poor.
Colonial Indian culture
1 How did some of the social and cultural exchange between Britain and India actually undermine British colonial rule in India?
The British brought education, the English language, trains, and modern government to India. As Indians became educated, they realized that they were no longer in charge of their own country, and they used that new education to fight the British and regain their independence. Many Britons who had lived in India for a long time came to like the Indians and felt they deserved to be free of British rule as well.
Gandhi wanted India to be independent of Britain, and he wanted a better life for the people of India. He was a lawyer who had worked in South Africa, and he was well versed in what bigotry looked like because he had been a victim of it there. Gandhi worked to end British rule in India using nonviolent resistance.
Tagore was loved by both the Indians and the British. He wrote books and plays, was an educator, and was an all-around Renaissance man. He wrote about British rule, and his writing and philosophy persuaded even some Britons that they should get out of India.
Both of these men helped India gain its independence from Britain: Tagore made the British people realize that they were doing the wrong thing, and Gandhi showed that civil disobedience against British rule could be very powerful. Working together, they won India's independence.
“Screencasting” is a method of capturing the actions performed on a computer, including mouse movements and clicks on web browser links, in the form of a video. Using online screencasting tools, the video can be shared via e-mail attachment or a web link, or be uploaded to a server for continual use. Screencasts may also contain audio narration, recorded simultaneously as the actions are performed on screen or added after the video is completed. Additionally, still images of the computer screen, or “screen shots,” may include captions, highlighting or call-out boxes to draw the user’s eye to a specific place on the image.
Screencasting is a quick and easy-to-use tool that can help you create slick demonstration tutorials in any subject area, using any computer application. The software allows you to record a movie of what you are doing on a computer. Along with your movie, you can record voice-over audio to provide a series of instructions.
Consider the possibilities. Students in math class can generate tutorials on how to solve problems. Students in Social Studies can create tours through the National Archives or any museum. Science students can be guided through simulation exercises. Teachers can demonstrate step-by-step instructions on how to get started with any software application. Screencasting can be used with any computer application and in any subject area.
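To make the idea concrete, here is a minimal sketch of what a screen recorder does under the hood. It assumes the third-party Python packages `mss` and `opencv-python` are installed; these are illustrative choices on my part, not tools named by this article, and audio narration would be captured separately.

```python
# Minimal screen-capture sketch using the third-party `mss` and
# `opencv-python` packages (illustrative assumptions, not tools from
# the article). Records about 5 seconds of the primary monitor to
# capture.mp4, without audio.
import time

import cv2
import mss
import numpy as np

FPS = 10
SECONDS = 5

with mss.mss() as sct:
    monitor = sct.monitors[1]                # the primary monitor
    width, height = monitor["width"], monitor["height"]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("capture.mp4", fourcc, FPS, (width, height))
    for _ in range(FPS * SECONDS):
        frame = np.array(sct.grab(monitor))  # BGRA screenshot of the desktop
        writer.write(cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR))
        time.sleep(1 / FPS)
    writer.release()
```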
One example: a young student using a screencast to explain proportions.
Once your screencast “movies” are recorded, they can be published in a variety of ways. They can be embedded in other media such as PowerPoint slides or iMovie. Content can be burned onto CDs that students can take home and share with their families.
Carr, A., & Ly, P. (2009). “More than words”: Screencasting as a reference tool. Reference Services Review, 37(4), 408-420.
The statistics are startling. The People’s Republic of China (PRC) constitutes 22 percent of the world’s population but only 7 percent of the world’s arable land. And 50 percent of that area has been severely eroded. Desertification is occurring in almost 30 percent of China and salinization affects 10 percent of the nation.
It is tempting to conclude that this situation has come about simply because of population pressure, but there are many other countries in the world with even higher population densities that don’t have the same problem. Rather, the resource degradation phenomena, particularly in western China, has come about through the imposition of a raft of policies beginning in 1949 with the Mao regime.
The requirement for grain self-sufficiency in each province pushed ill-suited lands into unsustainable agricultural production. This was exacerbated by an industrialization policy that located factories in the less populated and strategically secure western provinces, and an associated policy of forcibly moving people into the hinterland.
COLLECTIVISM AT WORK
Most significantly, agriculture was collectivized. State ownership of the land and the products of the land, right through to state monopoly control of the distribution process, meant that individual farmers had no financial incentive to be productive. Nor did they have an incentive to care about the condition of the natural resources—the soil and water—which were once central to the sustainability of their livelihoods.
Even the introduction of the Household Responsibility System (HRS) in 1978, which served as a key component to economic reform, was not able to stem the tide of resource degradation. Under the HRS, farmers were given rights to the products of their labor. These so-called “use rights” did not, however, extend farmers’ incentives to care for the natural resources they used. These incentives were further weakened by frequent compulsory seizure of land without compensation.
The degradation of China’s agricultural lands— particularly those in the western provinces with their fragile soils and harsh climate—presents a multitude of issues. Environmentally, the rivers are choked with silt, the air is polluted by sand storms on top of motor vehicle exhausts and factory wastes, plants and animals are being pushed to extinction, and the western landscape has taken on a lunar quality. Socially, the livelihoods of millions of farming families are being jeopardized by diminishing productivity and an increasing frequency of natural disasters such as floods and mud slides. Politically, the pressure of dissatisfied, poor, rural households seeking to relocate to wealthier urban areas is unnerving to the Central Council of the PRC.
GRAIN FOR GREEN
In recognition of these issues, in 1999 the Chinese Government instituted the Conversion of Cropland to Forest and Grassland Program (CCFGP) in an attempt to stabilize the soils of highly erodable areas. Under the program—often called the Grain for Green Program—farmers are paid a combination of cash and grain to plant tree seedlings or perennial grasses (provided free) on previously cropped or barren land.
The program has been enthusiastically embraced by more than 15 million farm households across 25 provinces. Between 1999 and 2005, 55 million acres were converted. The Chinese government has budgeted US$43.6 billion for CCFGP and it is expected to increase forest and grassland areas by 93 million acres by 2010.
A chief concern regarding the program is whether the land-use changes it has generated will be continued once the payments of grain and cash are stopped. For farmers to refrain from simply returning to their previous practices, the net income stream resulting from the newly established trees (from fruit and lumber production) and grass (from grazing enterprises) must be greater than the old patterns of annual cropping.
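The comparison the program hinges on can be framed as a discounted cash-flow calculation. The sketch below uses entirely hypothetical income figures and an assumed 8% discount rate (not the survey data reported in the book cited below) to show why a tree crop can out-earn annual cropping over ten years while still losing money during the establishment years.

```python
# Toy net-present-value comparison over a ten-year horizon.
# All figures are hypothetical illustrations, not survey results.
def npv(cash_flows, rate=0.08):
    """Discount a list of annual cash flows back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

wheat   = [300] * 10                      # steady income from annual cropping
orchard = [-150, -100, 0] + [600] * 7     # establishment losses, then fruit sales

print(f"wheat NPV:   {npv(wheat):8.0f}")
print(f"orchard NPV: {npv(orchard):8.0f}")
# The orchard wins over ten years, but the three loss-making
# establishment years are exactly where insecure land tenure bites.
```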
Surveys of farmers across four provinces in northwest China, reported in Environmental Protection in China: Land Use Management (2008), show that, in general, the profitability of the new land uses over a ten-year time frame, including tree crops (apples, apricots, and persimmons) and perennial pastures for tethered livestock, is superior to those generated by the continuation of old practices such as annual cropping of wheat and unconstrained grazing.
This result is good news for the sustainability of the program’s land use changes and the continuation of the environmental and social benefits they bring. However, it raises the question: if the alternative land uses offered better livelihoods, why didn’t farmers adopt the changes without the need for the CCFGP?
PIECING THE PUZZLE
The answer to this question is multifaceted. Undoubtedly, a lack of knowledge regarding the alternatives promoted the continuation of the status quo. Also, the costs of change—including the costs of tree seedlings and pasture seed as well as the years of foregone income during the establishment phase of tree crops and grazing enterprises—presented a barrier to many farmers without personal savings or access to credit.
Another important element of the answer relates to the weak definition and defense of the property rights held by farmers. Without clearly defined and defended rights to the land, farmers are constantly concerned that they will lose their access. Farmers can limit the risk this presents to their livelihood by reducing the time period between the planting effort and the harvest reward. Annual cropping is therefore a low-risk option compared to establishing pastures for grazing and, even more so, the planting of tree crops. A crop of wheat, for example, takes just one season to provide an income, but an apricot or an apple tree will not bear a financially rewarding crop for at least three years. Chinese farmers know that a lot can happen, politically, in three years that may cause them to lose access to the trees they planted.
What the CCFGP did for farmers was to provide an assured source of annual income in the period when the new land uses were being established and before they could yield readily accessible annual income streams. In essence, the program lowered the risks of land use change created by the insecure property rights.
The message relating to property rights has not been lost on the Chinese government. One component of the CCFGP has been an extension to 70 years of the use rights farmers have over their outputs if they enroll in the program. In 2007, the annual meeting of the Central Council discussed the prospect of instituting private property rights over more intensively used agricultural land. Should this occur, the extent of the government’s budgetary commitment to the continuation of the CCFGP, and numerous other resource protection schemes (including the Shelterbelt Development Program and the Sand Control Program for Beijing and Tianjin) could be reduced while more environmental protection is being achieved.
NOTE: This article is based on a project Bennett has been leading with the Chinese Forest Economics and Development Research Centre, with funding from the Australian Centre for International Agricultural Research. More details of the project can be found in Bennett, Wang and Zhang, Environmental Protection in China: Land Use Management (2008).
Use Quia Web to reinforce content vocabulary at the beginning or end of a unit, especially at home or during remediation sessions. Give a quiz on Quia Web and have results just minutes later that can be used to guide differentiation. Consider setting up stations with laptops or desktops in your classroom, and have students complete a different activity at each station. Be sure to include stations with manipulatives or small experiments to provide some variety and connect with different learning styles. Or, after a pre-assessment, assign Quia Web activities as an out-of-class opportunity to close learning gaps for students who are missing some of the necessary prior knowledge.
Quia Web is a platform for teaching and assessing students that basically offers an interactive way for kids to learn, study, and take quizzes. You provide the lesson material -- questions, vocabulary, etc. -- in any subject area, and Quia Web generates fun games and activities for them to do online.
Quia Web also hosts a collection of shared activities from other teachers and a question bank that you can use to create quick assessments.
Most of Quia Web's activities are interactive, and in some cases that interaction provides opportunities for students to grapple with new knowledge. For the most part, however, the activities support rote memorization of facts and definitions through repetition, and reward the student's ability to recall information. As a teacher, you can use these activities to build a good base of knowledge; however, they won't spark your students' imaginations or force them to think deeply. Consider supplementing Quia Web with hands-on experiences that push students' understanding.
MathScore EduFighter is one of the best math games on the Internet today. You can start playing for free!
Common Core Math Standards - 8th Grade
MathScore aligns to the Common Core Math Standards for 8th Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience.
The Number System
Know that there are numbers that are not rational, and approximate them by rational numbers.
1. Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number. (Repeating Decimals )
2. Use rational approximations of irrational numbers to compare the size of irrational numbers, locate them approximately on a number line diagram, and estimate the value of expressions (e.g., π²). For example, by truncating the decimal expansion of √2, show that √2 is between 1 and 2, then between 1.4 and 1.5, and explain how to continue on to get better approximations.
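A short sketch of both Number System standards, assuming only Python's standard library: the repeating-decimal conversion uses the classic 100x − x trick, and the √2 estimate uses interval bisection. Function names are illustrative.

```python
from fractions import Fraction

# Standard 1: a repeating decimal like 0.727272... converts exactly:
# if x = 0.72(repeating), then 100x - x = 72, so x = 72/99.
def repeating_to_fraction(repetend: str) -> Fraction:
    """Fraction for 0.(repetend) repeating, e.g. '72' -> 72/99 = 8/11."""
    digits = len(repetend)
    return Fraction(int(repetend), 10 ** digits - 1)

print(repeating_to_fraction("72"))   # 8/11

# Standard 2: squeeze sqrt(2) between ever-tighter rational bounds.
lo, hi = 1.0, 2.0
for _ in range(6):                   # each pass halves the interval
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid
    print(f"{lo:.4f} < sqrt(2) < {hi:.4f}")
```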
Expressions and Equations
Work with radicals and integer exponents.
1. Know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, 3² × 3⁻⁵ = 3⁻³ = 1/3³ = 1/27. (Negative Exponents Of Fractional Bases , Multiplying and Dividing Exponent Expressions , Exponent Rules For Fractions )
2. Use square root and cube root symbols to represent solutions to equations of the form x² = p and x³ = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that √2 is irrational. (Perfect Squares )
3. Use numbers expressed in the form of a single digit times a whole-number power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. For example, estimate the population of the United States as 3 × 10⁸ and the population of the world as 7 × 10⁹, and determine that the world population is more than 20 times larger.
4. Perform operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. Use scientific notation and choose units of appropriate size for measurements of very large or very small quantities (e.g., use millimeters per year for seafloor spreading). Interpret scientific notation that has been generated by technology. (Scientific Notation , Scientific Notation 2 )
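As a quick numeric check of the worked examples in standards 1 and 3-4 above (a sketch; it assumes ordinary Python floats are precise enough for estimation):

```python
# Standard 1: integer exponent rules; 3**2 * 3**-5 equals 1/27.
print(3**2 * 3**-5, 1 / 27)          # both print 0.037037...

# Standards 3-4: single-digit-times-power-of-ten estimates.
us_population    = 3e8               # ~3 x 10^8
world_population = 7e9               # ~7 x 10^9
print(world_population / us_population)   # ~23.3, "more than 20 times larger"
print(f"{us_population * 2.5:.1e}")  # result reported in scientific notation
```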
Understand the connections between proportional relationships, lines, and linear equations.
5. Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways. For example, compare a distance-time graph to a distance-time equation to determine which of two moving objects has greater speed.
6. Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; derive the equation y = mx for a line through the origin and the equation y = mx + b for a line intercepting the vertical axis at b. (Graphs to Linear Equations )
Analyze and solve linear equations and pairs of simultaneous linear equations.
7. Solve linear equations in one variable. (Single Variable Equations , Single Variable Equations 2 , Single Variable Equations 3 , Linear Equations )
a. Give examples of linear equations in one variable with one solution, infinitely many solutions, or no solutions. Show which of these possibilities is the case by successively transforming the given equation into simpler forms, until an equivalent equation of the form x = a, a = a, or a = b results (where a and b are different numbers).
b. Solve linear equations with rational number coefficients, including equations whose solutions require expanding expressions using the distributive property and collecting like terms. (Single Variable Equations , Single Variable Equations 2 , Single Variable Equations 3 , Linear Equations )
8. Analyze and solve pairs of simultaneous linear equations.
a. Understand that solutions to a system of two linear equations in two variables correspond to points of intersection of their graphs, because points of intersection satisfy both equations simultaneously.
b. Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection. For example, 3x + 2y = 5 and 3x + 2y = 6 have no solution because 3x + 2y cannot simultaneously be 5 and 6. (System of Equations Substitution , System of Equations Addition )
c. Solve real-world and mathematical problems leading to two linear equations in two variables. For example, given coordinates for two pairs of points, determine whether the line through the first pair of points intersects the line through the second pair. (Age Problems )
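A sketch of the algebraic case in standards 8a-8c above: solving a pair of linear equations by elimination (Cramer's rule), including the no-solution example from 8b. The function name is illustrative.

```python
# Solve the 2x2 system a1*x + b1*y = c1, a2*x + b2*y = c2 by
# elimination (Cramer's rule). Returns None when the determinant
# is zero, e.g. for 3x + 2y = 5 and 3x + 2y = 6 from standard 8b.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                  # parallel or coincident lines
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(solve_2x2(3, 2, 5, 3, 2, 6))   # None: the lines never intersect
print(solve_2x2(1, 1, 10, 1, -1, 2)) # (6.0, 4.0)
```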
Functions
Define, evaluate, and compare functions.
1. Understand that a function is a rule that assigns to each input exactly one output. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output.
2. Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a linear function represented by a table of values and a linear function represented by an algebraic expression, determine which function has the greater rate of change.
3. Interpret the equation y = mx + b as defining a linear function, whose graph is a straight line; give examples of functions that are not linear. For example, the function A = s² giving the area of a square as a function of its side length is not linear because its graph contains the points (1,1), (2,4) and (3,9), which are not on a straight line.
Use functions to model relationships between quantities.
4. Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph. Interpret the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or a table of values.
5. Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally.
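A sketch of standard 4 above: recovering the rate of change (slope) and initial value (intercept) of a linear function from two (x, y) pairs. Names are illustrative.

```python
# Recover the linear function y = mx + b through two given points.
def linear_from_points(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)        # rate of change
    b = y1 - m * x1                  # initial value at x = 0
    return m, b

m, b = linear_from_points((2, 7), (5, 16))
print(f"y = {m}x + {b}")             # y = 3.0x + 1.0
```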
Geometry
Understand congruence and similarity using physical models, transparencies, or geometry software.
1. Verify experimentally the properties of rotations, reflections, and translations:
a. Lines are taken to lines, and line segments to line segments of the same length.
b. Angles are taken to angles of the same measure.
c. Parallel lines are taken to parallel lines.
2. Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them.
3. Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates.
4. Understand that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar two-dimensional figures, describe a sequence that exhibits the similarity between them.
5. Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. For example, arrange three copies of the same triangle so that the sum of the three angles appears to form a line, and give an argument in terms of transversals why this is so.
Understand and apply the Pythagorean Theorem.
6. Explain a proof of the Pythagorean Theorem and its converse.
7. Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. (Pythagorean Theorem )
8. Apply the Pythagorean Theorem to find the distance between two points in a coordinate system.
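Standards 7 and 8 above reduce to the same computation, a² + b² = c²; a minimal sketch using Python's math.hypot:

```python
import math

def hypotenuse(a, b):
    return math.hypot(a, b)          # sqrt(a**2 + b**2)

def distance(p, q):
    """Distance between two coordinate points via the Pythagorean Theorem."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

print(hypotenuse(3, 4))              # 5.0: the classic 3-4-5 right triangle
print(distance((1, 2), (4, 6)))      # 5.0
```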
Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres.
9. Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems. (Cylinders )
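A sketch of the three volume formulas from standard 9, with illustrative values:

```python
import math

def cylinder(r, h): return math.pi * r**2 * h
def cone(r, h):     return math.pi * r**2 * h / 3   # one third of the cylinder
def sphere(r):      return 4 / 3 * math.pi * r**3

print(round(cylinder(2, 5), 1))      # 62.8
print(round(cone(2, 5), 1))          # 20.9
print(round(sphere(2), 1))           # 33.5
```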
Statistics and Probability
Investigate patterns of association in bivariate data.
1. Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive or negative association, linear association, and nonlinear association.
2. Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and informally assess the model fit by judging the closeness of the data points to the line.
3. Use the equation of a linear model to solve problems in the context of bivariate measurement data, interpreting the slope and intercept. For example, in a linear model for a biology experiment, interpret a slope of 1.5 cm/hr as meaning that an additional hour of sunlight each day is associated with an additional 1.5 cm in mature plant height.
4. Understand that patterns of association can also be seen in bivariate categorical data by displaying frequencies and relative frequencies in a two-way table. Construct and interpret a two-way table summarizing data on two categorical variables collected from the same subjects. Use relative frequencies calculated for rows or columns to describe possible association between the two variables. For example, collect data from students in your class on whether or not they have a curfew on school nights and whether or not they have assigned chores at home. Is there evidence that those who have a curfew also tend to have chores?
Learn more about our online math practice software.
When you think particle accelerator, you think big. CERN’s Large Hadron Collider, for example, spans two countries, features a tunnel more than 16 miles (27 km) long, employs thousands of scientists and requires a budget of $1 billion a year.
Turns out, particle accelerators need not be so massive. In fact, proton-smashing technologies initially developed to reveal the mysteries of the universe are being scaled down to solve less lofty, but no less important, problems related to environment, health and safety.
The Florida State University-headquartered National High Magnetic Field Laboratory is playing a role in a nationwide effort to make human-scale particle accelerators for a host of applications. With a $1 million grant from the U.S. Department of Energy, scientists at the lab’s Applied Superconductivity Center are developing a key component of these slimmed-down accelerators called radio frequency (RF) cavities.
“They can be used for everything from zapping cancer cells to curbing pollution to scanning cargo for contraband,” said Lance Cooley, an ASC scientist and professor at the FAMU-FSU College of Engineering who is leading the RF cavity research.
Whether made of protons, electrons or ions, beams generated by accelerators can break up unwanted molecules like coal flue gases or bacteria; catalyze processes helpful in industry and manufacturing; and identify nefarious stowaways hidden in shipping containers. The list of potential applications is long.
“They can be used anywhere you need a catalyst or an X-ray,” Cooley said.
But downsizing complex technologies to a size both portable and affordable is a massive challenge that requires solving lots of engineering problems.
Cooley is focused on the problem of designing an RF cavity that doesn’t require the fancy infrastructure used in large-scale accelerators and doesn’t break the bank.
RF cavities boost the speed of particles as they pass through them; in the Large Hadron Collider at CERN, for example, 16 cavities work to build up the particles’ velocity to close to the speed of light. When radio waves of just the right frequency are funneled into the cavities, they bounce around inside, creating oscillating electric and magnetic fields that, when timed just right, propel the particles forward.
It’s similar to what happens inside microwave ovens, but with much higher energy and, of course, a different objective. To minimize any loss of energy, the cavities are made of superconducting materials, which carry electricity with perfect efficiency.
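For a feel for the numbers involved, the resonant frequency of an idealized cylindrical "pillbox" cavity in its fundamental TM010 mode is f = 2.405·c/(2πR), where R is the cavity radius. The sketch below is a textbook approximation only; it is not the geometry or design of the cavities described in this article.

```python
# Resonant frequency of an idealized cylindrical "pillbox" cavity
# in its fundamental TM010 mode: f = 2.405 * c / (2 * pi * R).
# A textbook approximation, not the MagLab cavity design.
import math

C = 299_792_458          # speed of light, m/s
J01 = 2.405              # first zero of the Bessel function J0

def pillbox_tm010_freq(radius_m):
    return J01 * C / (2 * math.pi * radius_m)

print(f"{pillbox_tm010_freq(0.0885) / 1e9:.2f} GHz")  # ~1.30 GHz for R = 8.85 cm
```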
The superconducting material most often chosen to make these cavities is niobium. But it’s an expensive element, and RF cavities made out of the stuff fetch as much as a Ferrari, said Cooley, making it impractical for most applications.
That’s where the expertise of Cooley and his ASC colleagues comes in. Their contribution is finding a way to coat a cavity fashioned of less expensive copper with a superconducting layer of niobium-tin.
“For me it’s the logical approach to an affordable, industrial accelerator cavity,” Cooley said.
ASC has worked with niobium-tin for decades, refining superconducting wires for use in magnets at the National MagLab and other facilities.
“That’s why we can succeed where other groups might not,” Cooley said.
Traditional, low-temperature superconductors only perform at extremely cold temperatures that require the use of liquid helium, which complicates the engineering of a small-scale machine. Part of the project’s challenge lies in engineering a material that can perform at slightly warmer (in relative terms) temperatures that can take advantage of less complex, more compact cryogenic devices.
Niobium-tin offers that potential.
“We can take advantage of a portable cooling device with an ‘on/off’ switch, which lets the accelerator come to the application,” Cooley said.
Cooley’s collaborators on the project are ASC Associate Director Peter Lee; Professor Choong-un Kim of the University of Texas–Arlington; and John Buttles, CEO of DMS-South / Bailey Tool LLC.
Existing and Potential Applications for Small-Scale Particle Accelerators
- Irradiate food to kill bacteria and extend shelf life
- Sterilize medical devices
- Produce medical isotopes
- Convert smokestack gases from burning coal into useful products such as fertilizer and cement
- Safely reclaim biosolids from wastewater by deactivating drugs and carcinogens
- Treat drinking water to remove pharmaceuticals, nitrates, and non-filterable agents
- Treat tumors by proton and carbon therapy
- Detect nuclear materials hidden in cargo
- Refine crude oil
- Cure rubber and other polymers
- Transform asphalt for more durable roads
- Speed production of biofuels from waste
- Extract oil from algae
ANATOMY OF THE RESPIRATORY TRACT
The upper respiratory tract consists of the nose, the nasal cavities and the oropharyngeal space. The surface area of the nasal cavity side walls is increased considerably by the superior, middle and inferior conchae. The inspiratory air is warmed, moistened and filtered inside the nasal cavity. To enable it to do this, its large surface area is covered with a mucous membrane (the mucosa), which contains mucous glands and ciliated epithelial cells. A dense network of blood vessels on the epithelial layer warms the inspiratory air to body temperature. In the nose, moisture is added to the inspiratory air by the action of the mucous glands, and the mucus is also able to filter out dust particles. The cilia then transport the mucus towards the throat, where it can be swallowed together with the trapped dust particles and pathogens. This nasal filter system supports the deposition of particles larger than 10µm, while smaller particles are able to pass through the nose and reach the lower respiratory tract.
When a person breathes only through the mouth, this cleaning and warming function is bypassed.
After the nose and throat, the inspiratory air passes through the larynx into the lower respiratory tract, which includes the trachea, the bronchi and the bronchioles. The trachea is still made up of rings of cartilage, but the bronchioles are entirely without cartilage. Consequently, they are particularly prone to constriction, as happens in an asthma attack, for example (asthma is a common chronic inflammatory airway disease, characterized by variable and recurring symptoms, reversible airflow obstruction and bronchospasm). The bronchi divide into smaller and smaller branches until they terminate in about 300 million alveolar sacs. It is on the enormous surface of these alveoli (the terminal ends of the respiratory tree) that gas exchange actually takes place: oxygen (O2) is absorbed into the blood and carbon dioxide (CO2) is transferred to the expiratory air.
Diseases of the respiratory tract have conventionally been divided into two medical specialties - ENT and pulmonology - but the respiratory tract is one functional unit. It is unified particularly by the common mucous membrane, which covers all areas.
The functional connection between the upper and lower respiratory tracts becomes particularly apparent with the rhinitis-asthma link (rhinitis is an acute or chronic inflammation of the mucous membrane caused by infectious, allergic and pseudo-allergic mechanisms). If allergic rhinitis is not treated appropriately, the symptoms eventually spread to the lungs, and asthma is triggered in as many as 20 to 50% of all cases.
The term "united airways" is a vivid way to express this anatomical and functional unity. |
4 Answers
Mitosis is the process by which one parent cell reproduces into two identical daughter cells. It is divided into four phases: prophase, metaphase, anaphase, and telophase.
In prophase, the chromosomes condense into visible threads that grow thicker and shorter. Each chromosome consists of two sister chromatids, connected by a structure called the centromere. Centrioles become visible and move to opposite ends of the cell, remaining connected to each other by spindle fibers.
In metaphase the centromeres attach to the center of the spindle.
In anaphase the sister chromatids separate becoming identical chromosomes, and the spindles pull the chromosomes to the opposite ends of the cell.
In telophase, the cell splits at the equator, and the chromosomes uncoil and become invisible. Each daughter cell is an exact clone of the original cell.
Mitosis is a type of cellular reproduction. Unlike meiosis, mitosis is asexual (only one parent cell's DNA is needed). Because there is only one source of DNA, there is no new genetic variety. The end result is two daughter cells identical to their parent cell.
Mitosis is a way to grow for multicellular organisms such as ourselves. Mitosis can also be used for reproduction by unicellular eukaryotes. (Bacteria, which lack a nucleus, duplicate themselves by a similar but simpler process called binary fission.)
The acronym IPMAT is used for the various stages of mitosis: interphase, prophase, metaphase (sometimes preceded by prometaphase), anaphase, and telophase. After IPMAT, cytokinesis (splitting of the cytoplasm) occurs.
Mitosis is the process in which a cell divides into two genetically identical daughter cells.
|
A variety of plants grows wild in the swamps of Florida, from the northern Okefenokee Swamp to the Everglades in the southern part of the state. These swamp plants have adapted to wet conditions and can thrive in Florida gardens if given proper care. Swamp plants perform well in wetland gardens, near ponds or in saturated areas of the yard where other plants may not grow.
Taxodium distichum, commonly known as bald cypress, is a deciduous tree that can grow up to 150 feet tall. Young trees form a pyramidal shape but produce a flat top as they mature. The yellowish green, needle-like leaves give the tree a delicate, feathery look and turn orange in the fall before dropping. Round, rough cones follow inconspicuous flowers and the shaggy bark and growth habit provide winter interest. When growing in completely waterlogged soil, bald cypress produces woody growths called knees. This tree grows wild in swamps across Florida and prefers wet, acidic soil but will adapt to slightly dry conditions. Plant this cypress tree in sun or shade and avoid alkaline soil.
The rounded shrub Hydrangea quercifolia, commonly known as oakleaf hydrangea, grows up to 6 feet tall and 8 feet wide. Long clusters of white flowers bloom in summer and turn pinkish purple as they mature. The bold, deeply lobed leaves grow up to 8 inches long and turn red in the fall. Native to the swamps of northern Florida, this deciduous shrub performs best in nutrient-rich, porous, acidic soil and partial shade. Prune back after blooming to control the size and shape of this hydrangea.
Spanish moss, or Tillandsia usneoides, grows wild among the branches of oak and cypress trees in the swamps of southern Florida. This member of the bromeliad family does not have roots and absorbs nutrients and moisture from the air and rain instead of the soil. It grows as long as 15 feet and features greenish-gray, threadlike foliage and inconspicuous flowers in spring or fall. Spanish moss grows well in shaded, humid conditions, but tolerates sun. It is sensitive to air pollution and will not thrive in urban areas.
The shrubby, evergreen perennial Hibiscus coccineus, or swamp hibiscus, reaches up to 10 feet in height. The showy, funnel-shaped, bright red flowers grow 6 to 8 inches wide and feature prominent stamens that extend from the center of the flower. Each flower only lasts for one day, but new blooms open continually from summer through fall. The glossy foliage is divided into three to seven pointed lobes. Swamp hibiscus grows in wet or well-drained soil but requires regular water if planted in a dry location. This perennial tolerates partial shade but will flower best in full sun. |
Mental_Floss has compiled a list of six remarkable medical gadgets and how they were invented.
As a young medical-school student in 19th-century Paris, Rene Theophile Hyacinthe Laennec developed a knack for hearing and interpreting the different sounds made by the heart and lungs when he placed his ear on patients’ chests. This method only worked if the patient was sufficiently slender, of course. One afternoon, Laennec saw some children playing with wooden boards. One tyke would scratch or tap softly on one end, while another put his ear on the other end of the board to hear the sound. Laennec went back to his office – presumably after removing a splinter from the tyke’s ear – and constructed a long tube out of several pieces of rolled-up paper. By placing the end of the cylinder directly on a patient’s chest or back, he discovered that he could hear sounds much more clearly than before. After experimenting with different materials and designs, he came up with the stethoscope. In 1819, the medical community began to recognize the use of the gadget as a valuable diagnostic tool. |
Living without electricity is a reality for almost 60 million people in Indonesia – particularly in remote areas, where extension of the country’s national grid is held back by excessive costs. Many are forced to harvest firewood instead, depleting the country’s already fragile forest resources, or purchase kerosene, which represents a significant financial burden on most households.
Fortunately, communities have a largely untapped renewable source of energy on their doorsteps. That resource is bamboo. With a calorific value similar to wood, the plant is now being used to power small-scale generators, part of an initiative funded by the Millennium Challenge Corporation, an independent development agency established by the US Government.
Bamboo makes both economic and environmental sense, offering a sustainable source of energy for the estimated 100 million Indonesians who use biomass as their primary energy source. The plant is highly renewable, can grow up to one meter per day, and is harvested for use in only 3-6 years. In comparison, many tree species take much longer to become established.
It also produces fewer pollutants than either wood or petroleum, and its production helps reduce pressure on existing forests. This has a dual benefit: reducing deforestation and preventing the release of previously sequestered carbon into the atmosphere. Production would bring enormous benefits to Indonesia, where in 2012 alone, over 840,000 hectares of forest were cleared.
Serving ‘off-grid’ communities
The man behind the initiative, Jaya Wahono of Clean Power Indonesia, initiated a pilot project to electrify three villages on Siberut, an island 150 km off the coast of Sumatra. The small villages – Madobag, Matotonan, and Saliguma – are located in Siberut Biosphere Reserve and can only be reached by a four-hour boat ride.
Electricity here is produced through gasification – a process that involves burning biomass in special units that power an electricity-generating turbine. The initiative plans to generate approximately 14-50 kilowatts (kW) in small hamlets, and up to 100-300 kW in medium-sized villages. Feasibility studies show that two bamboo poles – each weighing approximately ten kilograms – can provide enough energy for a single family over a 24-hour period. To maximize impacts, the by-product – charcoal – will also be used for cooking and fertilizing soil.
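As a rough sanity check on that feasibility figure, here is a Python back-of-envelope sketch (the calorific value and conversion efficiency are assumptions supplied purely for illustration, not project data):

```python
# Back-of-envelope check of the two-poles-per-family-per-day claim.
# All figures below are illustrative assumptions, not project data.
poles_per_day = 2
kg_per_pole = 10                # "each weighing approximately ten kilograms"
calorific_mj_per_kg = 17.0      # assumed, "similar to wood"
biomass_to_electric = 0.15      # assumed overall gasifier + generator efficiency

thermal_mj = poles_per_day * kg_per_pole * calorific_mj_per_kg
electric_kwh = thermal_mj * biomass_to_electric / 3.6   # 3.6 MJ per kWh
print(f"~{electric_kwh:.0f} kWh of electricity per family per day")  # ~14 kWh
```

On these assumptions a family would receive on the order of 14 kWh per day, comfortably above typical rural household consumption, which makes the two-pole claim plausible.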
Securing a reliable supply of bamboo
During the start-up phase, the small-scale generators are supplied by harvesting wild bamboo. But large-scale use of this rural technology will require a stable and reliable supply of quality bamboo. The solution: properly managed bamboo plantations that provide a year-round supply of biomass energy.
The initiative encourages communities to grow bamboo themselves, supplying bamboo cuttings to small-scale power generators in exchange for electricity – a concept Wahono has called “Listrik Gotong Royong,” or ‘working together for electricity.’
In the project, Bambu Nusa Verde (BNV), a global supplier of tropical bamboo plants, has sent seedlings to the area. Each family received 100 bamboo culms, which producers will harvest and supply after a period of 3-4 years.
Achieving long-term sustainability
BNV also provides training on effective management techniques – including planting, maintenance, and harvesting. “It is not enough to simply supply bamboo,” says Marc Peeters, BNV Director. “Otherwise, we’d see many plants not being treated well and dying. The plants should be monitored closely during the first few years, fertilizer should be used, and the culms should not be harvested before they are two years old.”
Peeters predicts that the biomass supply from this scheme will exceed local demand within five years, and then the communities will have a choice: add gasification units for even more electricity, or use the excess bamboo to make high-end products like emergency housing, ply-bamboo, or bamboo pellets for export.
Long-term sustainability of small-scale bamboo electricity schemes also depends on communities taking responsibility for the generators – which requires additional training in maintenance and repair. Clean Power Indonesia will provide support for a ten-year period before handing the systems over completely to the communities involved.
Scaling up small-scale power generation
The initiative holds significant potential as a form of ‘off-grid’ power generation for remote communities elsewhere. In Indonesia alone, Peeters estimates there may be as many as 10,000 villages and hamlets without access to electricity. Worldwide, some 1.3 billion people currently live without electricity.
But the potential also has to be communicated effectively. One obstacle is perception: many Indonesians see bamboo only in terms of its traditional role as a raw material for handicrafts, neglecting its wider applications, including energy generation. Success in Siberut, however, might convince them otherwise.
For more information:
Siberut Biomass Power Plant Project: www.greenprosperitymentawai.com
Clean Power Indonesia: www.cleanpowerindonesia.com
Bambu Nusa Verde: www.bambunusaverde.com
Jaya Wahono: [email protected] / [email protected]
Marc Peeters: [email protected] |
Each year more than 1 million tonnes of mineral nitrogen fertiliser is applied to arable and grass crops in the UK. This pollutes waterways through nitrate run-off and the atmosphere from the release of ammonia and nitrogen oxides.
Nitrate pollution represents a significant health hazard and causes oxygen-depleted “dead zones” in our waterways and oceans. A recent study estimates that the cost of damage caused by nitrogen pollution across Europe is £60-£280 billion a year.
The world needs to unhook itself from the Haber-Bosch process of nitrogen fertiliser production. The energy, economic and environmental costs are too high. With global concerns about food security, sustainable production, and consumer worries about GMOs, an alternative is desperately required to enable more effective delivery of nitrogen.
The development of a seed inoculant based upon a naturally occurring bacterium called Gluconacetobacter diazotrophicus (Gd), which is able to fix nitrogen from the atmosphere, offers a genuine opportunity to deliver such a solution. The symbiotic relationship of crops such as peas and beans with nitrogen-fixing bacteria called rhizobia has long been known and exploited in agriculture. These bacteria are applied to seeds or the soil, where they are taken up by the growing plant into its root nodules, where they fix nitrogen.
The ability of such legumes to produce root nodules is essential to this relationship. Research is underway by other organisations to genetically modify crops to enable them to produce root nodules, but we are many years away from developing a practical solution.
However, we have discovered that under certain conditions Gd – a bacterium originally isolated from sugarcane – will colonise growing cells of the developing root of any crop plant, where it then fixes nitrogen. This means a practical solution to help reduce nitrogen fertiliser use by farmers will be available within a few years.
The approach is neither genetic modification nor bio-engineering. Rather, we used simple but effective means to encourage this naturally occurring bacterium to form a symbiotic, mutually beneficial relationship with the plant. The plant provides sugar used by the bacteria as an energy source in order to fix nitrogen, half of which is released to the plant for protein production and growth.
Through staining the bacteria and using microscopic techniques, it was possible to see the Gd within plant cells. Unlike rhizobia, which are confined to the root nodules of peas and beans, Gd moves throughout the whole of the plant – the roots, stems and leaves.
This research has taken more than 10 years, with extensive programmes on a range of crops in the laboratory, growth rooms and glasshouses.
It has been possible to grow plants in the laboratory in the total absence of any fixed nitrogen, so that they have to fix nitrogen from the air around them. This is the ultimate demonstration of the bacteria functioning in true symbiosis with the plant.
However, it is not an expectation or even necessary that the bacteria produce all of a plant’s nitrogen requirements. Rather, it should be sufficiently effective to allow a reduction in the use of synthetic nitrogen fertilisers. This alone will provide a cost saving to the farmer while maintaining or even increasing yields. It would also reduce the scale of the negative side effects of water and atmospheric pollution from nitrogen fertilisers.
The current research programme has moved into a product development phase via Azotic Technologies Ltd in order to determine how well the Gd inoculation works under normal crop growing conditions. Initial results are promising and it is anticipated that within two years, products based on Gd will be available for farmers, for use on a wide range of crops, on a global basis.
- theconversation.com, first published 7 August 2013 |
Upper Elementary Age 9 to 12
"Education between the ages of six to twelve is not a direct continuation of that which has gone before, though it is built upon that basis. Psychologically there is a decided change in personality, and we recognize that nature has made this a period for the acquisition of culture, just as the former was for the absorption of the environment." – Maria Montessori
The Upper Elementary program at Montessori Academy focuses on high academic achievement in a low-pressure environment. We provide an environment for each student to grow intellectually by using the principles and philosophy developed by Maria Montessori. Our knowledgeable and highly trained educators are there to present lessons, guide, and nurture the social, academic and emotional needs of each child.
Upper Elementary learning at Montessori Academy allows students age 9-12 to discover, research and share ideas in many detailed and often individual ways within the larger context of a 6-12 Elementary multi-classroom environment.
The Elementary curriculum is based upon the Five Great Lessons developed by Maria Montessori. These lessons provide the basis for the study of Mathematics, Language, Sciences and Arts. Children ages nine to twelve are eagerly invited to revisit these Great Lessons from a new perspective, with a greater amount of foundational knowledge from their Lower Elementary work. These lessons, and the work that follows throughout the school year, allow children to develop a deep understanding of academic concepts as well as cultures of the world.
- The Upper Elementary Curriculum meets and/or exceeds Common Core Standards.
- Daily PE is offered to all Elementary students.
- Weekly Art/Music/Drama lessons are presented which connect to the cultural curriculum of the classroom.
- Self-chosen research is encouraged. Students research and present their findings to the class.
- Responsibility, kindness, hard work and perseverance are skills taught and encouraged all throughout the school year.
Reading, Writing and Language Arts: Comprehension skills are fine tuned, while the class continues to break new ground in the areas of language mechanics and creative writing. The class orally communicates their ideas through presentations, reports and drama.
Mathematics: The study of mathematical systems, problem solving and geometry expands at the Upper Elementary level to include a wide range of concepts including statistics, formulas and pre-algebra.
History and Cultural Studies: World history, ancient civilizations and a broad view of significant historical events is combined with adventures into detailed exploration and research of specific events and time periods. History and geography are studied and tied to present day events. Thorough research and diverse sources are used to influence history projects and beyond.
Geography: Studying and understanding maps from a global perspective helps provide context for a wide variety of studies.
Environmental Studies: Global, local and personal perspectives on environmental studies help children deepen their understanding of society’s impact on the Earth and ways that they can impact their future.
Practical Life: In the Elementary classroom environments, practical life is taught through the daily care of the classroom – both indoors and outdoors. It is the responsibility of the children to prepare snack for the community, dust and straighten shelves, sweep, compost appropriate materials and care for our plants and animals. |
Understanding HIV/AIDS -- the Basics
What Causes HIV/AIDS?
HIV lives in human blood and sexual fluids (semen and vaginal secretions). The infection is spread from person to person when these body fluids are shared, usually during vaginal or anal sexual contact or when sharing injectable drugs. It can also be passed from dirty needles used for tattoos and body piercing. HIV does not live in saliva, tears, urine, or perspiration -- so HIV cannot be spread by casual contact with these body fluids. It can be spread through oral sex, although the risk is small.
HIV cannot survive for long outside the human body and dies quickly when the body fluid in which it is contained dries up. It is not spread by animals or insects and is not found on public surfaces. It's actually not as easy to get as other infectious diseases.
A mother can pass HIV to her child during birth when the child is exposed to the mother's infected blood. Breastfeeding does carry a risk for HIV infection, though in some areas of the developing world, breastfeeding is considered safer than feeding a newborn contaminated water.
There are two main types of HIV, called HIV-1 and HIV-2. HIV-2 is rarely found outside Africa and parts of Asia, so there is no need to test for it specifically -- unless a person has had contact with someone from an area of the world where HIV-2 is common.
Blood transfusions were once a concern, but all blood products used in the United States today are tested for several infectious diseases, including HIV. If signs of disease or other problems are found in donated blood, the person who donated the blood is notified to be re-tested by their health care provider and is not permitted to continue donating blood. Any donated blood that tests positive for HIV is disposed of and never makes it into the public blood supply. |
Reading comprehension and vocabulary activity
Read the text below and try to memorize the information and details about the atmosphere.
The atmosphere surrounding Earth is made up of gas mixtures. The most common are nitrogen, oxygen and carbon dioxide and their amounts change in different places on Earth.
The atmosphere puts pressure on the planet. This pressure becomes less and less the further away from the surface you are. When we think of the atmosphere, we mostly think of the part that is closest to us.
The atmosphere is divided into five layers. It is thickest near the surface and thinner as it merges with space.
The troposphere is the first layer above the surface and contains most of the mass of the Earth's atmosphere. It extends up from the surface of Earth for about 10 kilometers. This is the layer where airplanes fly. About three-fourths of our atmosphere’s air is found here, and at any moment in time its overall condition can change. These changes are what we know as weather.
Just above the troposphere is the stratosphere. It extends to about 30-40 kilometers above Earth’s surface. Most of the planet’s ozone layer is in this colder, drier layer. This gas helps keep some of the sun’s dangerous radiation from reaching us. Many jet aircraft also fly in the stratosphere because it is very stable.
If we continue upward, the next layer is the mesosphere, which extends up to about 50 kilometers above Earth’s surface. The mesosphere is extremely cold. It is within this layer that meteors or rock fragments burn up.
Next is the thermosphere, which extends to about 300 kilometers above the surface. Temperatures in the thermosphere can be over 1,500º Celsius. The thermosphere is the layer where auroras appear. It is also where the space shuttle orbits.
Finally we come to the extremely thin exosphere, where the atmosphere merges into space beyond 300 kilometers. This is the upper limit of our atmosphere.
Together, the layers of our atmosphere protect Earth and provide the conditions needed to support life.
Source: Cambridge University, Weather Channel, Wikipedia
Can you answer the following questions without going back to the passage? Check your guesses afterwards with the text.
1) Which layer of the atmosphere has most of the air?
2) If you were to send a rocket 25 kilometers up into the air, which layer of the atmosphere would it be in?
3) What are the most common gases in Earth’s atmosphere?
4) What important barrier is there in stratosphere? Why is it important?
5) What is the reason why many meteors do not reach the Earth?
6) What are the main characteristics of the exosphere?
7) Where can temperatures reach 1,500º Celsius?
8) Where is there more atmospheric pressure, in the mesosphere or in the stratosphere?
9) In which layer do airplanes fly?
10) Which layer is thicker, the troposphere or the stratosphere?
Earth's Atmosphere Vocabulary Challenge – Individual, pair, or group competition.
What do you call...?
a) the force resulting from a column of air pressing down on an area.
b) the invisible rays that are part of the energy that comes from the sun. They can burn the eyes, hair, and skin.
c) the transfer of energy through empty space; the way by which energy from the sun reaches Earth.
d) the process by which heat from the sun is trapped by gases in Earth's atmosphere, which can become dangerous.
e) a scientific instrument used in meteorology to measure atmospheric pressure.
f) a form of oxygen that has three oxygen atoms in each molecule. |
The University of Geneva’s Michel Mayor and his graduate student Didier Queloz were the first to discover a planet orbiting a distant star much like our own Sun. Meticulously ruling out, one after the other, alternative interpretations of their measurements, in October 1995 they announced the discovery of the planet designated 51 Pegasi b, now known as Dimidium, orbiting the star 51 Pegasi, since named Helvetia. Michel Mayor presented the discovery to an international assembly of astrophysicists in Florence, Italy.
The hunt for these exoplanets was inspired by advances in understanding of the formation of stars: it was becoming clear that the gases that were contracting to form new stars were somehow shedding the bulk of their energy of rotation, while new observations were revealing disks full of gas and dust, spinning around such forming stars, and containing a lot of energy of rotation. That dust could in principle coagulate to make planets whose orbiting motions could relieve the rotation problem of forming stars. But did it?
Several teams around the world convinced their sponsors of the need for more sensitive instruments, not to see the tiny, faint exoplanets directly, but to study their effects on the stars that they orbit. The mutual gravitational tug between a star and an exoplanet would cause the star to move back and forth just enough that its light should shift measurably in colour towards the red and then the blue and back as the exoplanet moved around it. Measuring that swing required unprecedented precision, a lot of observing time to cover enough of a planet’s orbital period, a large sample of stars (because no one knew how common exoplanets were), and a great deal of commitment from the team members to stick with it. Only a few groups pulled that off.
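To get a feel for the size of that swing, here is a minimal Python sketch using the standard radial-velocity semi-amplitude formula for a circular, edge-on orbit; the mass and period values are illustrative figures for a 51 Peg b-like system rather than the discovery team's own numbers:

```python
import math

# K = (2*pi*G / P)**(1/3) * m_p / (M_star + m_p)**(2/3)
# (radial-velocity semi-amplitude for a circular, edge-on orbit)
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
M_JUP = 1.898e27         # Jupiter mass, kg

P = 4.23 * 86400         # orbital period, s (~4.23 days, as for 51 Peg b)
m_planet = 0.47 * M_JUP  # illustrative minimum-mass estimate
m_star = 1.06 * M_SUN    # illustrative mass for a 51 Peg-like star

K = (2 * math.pi * G / P) ** (1 / 3) * m_planet / (m_star + m_planet) ** (2 / 3)
print(f"Stellar wobble semi-amplitude: {K:.0f} m/s")  # ~57 m/s
```

A wobble of a few tens of metres per second shifts the star's spectral lines by only about one part in several million, which is why a purpose-built, ultra-stable spectrograph was essential.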
Mayor and Queloz from the Observatory of Geneva in Switzerland had joined forces with colleagues in France to develop a new advanced spectrograph, an instrument that unravels starlight into its constituent colors. They had as primary goal to look for the smallest stars in orbit around Sun-like stars. The potential to discover exoplanets was on their mind also, but they were not hopeful. They expected they could find only large, heavy exoplanets and these were thought to be on orbits that would take many years to complete, and therefore they would need to observe similarly long to discover them.
But once the spectrograph was connected to a telescope at the Observatoire de Haute-Provence in France, they were lucky. Among the 142 Sun-like stars that they were monitoring, they found the exoplanet known as 51 Peg b. By the ideas of the time, it should not have been there – and that is why their colleagues were skeptical at first. It is a heavy planet, about half the mass of the giant Jupiter, but so close to its star that its orbit lasts only 4.2 days. The theory of the day held that no planet like that could form where it was.
The discovery was quickly confirmed by another team, however, and scientists subsequently found many other such “hot Jupiters”. We now estimate that there are over 100 billion planetary systems in our Galaxy alone. Our new understanding of the formation of such systems tells us how Dimidium most likely got to be where it is: it formed much further out from its star, but then its orbit contracted to end up close to its star.
Queloz checked his observations and computations many times over before the 1995 announcement. Then just entering the field of astrophysics, he realized that an erroneous discovery claim would abruptly end his career. Mayor was confident, however. His announcement in the meeting was met with a mix of skepticism and enthusiasm, but when he returned to his hotel room that same day, there was a pile of faxes already waiting for him from journalists around the world. Queloz and Mayor’s lives changed in that discovery, and the field of exoplanetary science rapidly took off. These discoveries help us understand how planetary systems form and evolve. In doing so, they also reveal what happened a long, long time ago when our own solar system formed, when the giant planets roamed around to find their final orbits, and how that affected everything else in the solar system, including Earth.
Featured image credit: “Artist impression of the exoplanet 51 Pegasi b” by European Southern Observatory. CC by 4.0 via Wikimedia Commons. |
Definition Virtual Circuit
A virtual circuit is a connection-oriented means of transporting data packets over a packet-switched computer network that gives the impression of a dedicated physical link between source and destination.
In a virtual circuit, a path is first defined and a connection is established between the source and the destination before any data transfer begins. Once the connection is set up, data packets are routed and delivered to their destination points on the basis of short virtual-circuit identifiers carried in each packet alongside its data. Virtual circuits thus eliminate the need to store a full, unique destination address in every packet: because the approach is connection-oriented, the route to the destination is fixed before the data transfer call is initiated. All packets are delivered in proper order, since they queue along the same pre-established path.
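A minimal sketch of the idea in Python (the ports and identifiers are hypothetical, invented purely for illustration): each switch along the pre-established path keeps a small table mapping an incoming (port, VC id) pair to an outgoing pair, so packets need only carry a short identifier instead of a full destination address.

```python
# Hypothetical per-switch virtual-circuit forwarding table, built during
# connection setup. All port and VC numbers are illustrative.
# Key: (incoming port, incoming VC id) -> Value: (outgoing port, outgoing VC id)
vc_table = {
    (1, 12): (3, 22),
    (2, 63): (1, 18),
    (3, 7):  (2, 17),
}

def forward(in_port: int, packet: dict) -> tuple[int, dict]:
    """Look up the short VC id carried in the packet header and rewrite it
    for the next hop, returning the outgoing port and the relabeled packet."""
    out_port, out_vc = vc_table[(in_port, packet["vc_id"])]
    return out_port, {**packet, "vc_id": out_vc}

# A packet arriving on port 1 with VC id 12 leaves on port 3, relabeled as VC 22.
port, pkt = forward(1, {"vc_id": 12, "data": b"payload"})
print(port, pkt["vc_id"])  # 3 22
```

The design trade-off is classic: every switch must hold per-connection state, but in exchange the headers stay small and packets arrive in order. |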
According to a new theory by Dr Stone of Sheffield University, skills such as flying are easy to refine because the innate ability of today's birds depends indirectly on the learning that their ancestors did, which leaves a genetically specified latent memory for flying.
The theory has been tested on simple models of brains called artificial neural networks, which can be made to evolve using genetic algorithms.
Whilst these networks do not fly, they do learn associations, and these associations could take the form of a skill such as flying. Using computer simulations, Stone demonstrates in a study, published in the open access journal PLoS Computational Biology, that the ability to learn in network models has two surprising consequences.
First, learning accelerates the rate at which a skill becomes innate over generations, so it accelerates the evolution of innate skill acquisition. For comparison, evolution is slow if a network simply inherits its innate ability from its parents, but is not allowed to learn in order to improve this innate ability. Second, learning in previous generations indirectly induces the formation of a latent memory in the current generation, and therefore decreases the amount of learning required. It matters how quickly learning occurs, because time spent learning is time spent not eating, or time spent being eaten, which incurs the ultimate penalty for slow learners. These effects are especially pronounced if there is a large biological 'fitness cost' to learning, where biological fitness is measured in terms of the number of offspring each individual has.
Crucially, the beneficial effects of learning depend on the unusual form of information storage in neural networks, a form common to biological and artificial neural networks. Unlike computers, which store each item of information in a specific location in the computer's memory chip, neural networks store each item distributed over many neuronal connections. If information is stored as distributed representations then evolution is accelerated. This may help explain how complex motor skills, such as nest building and hunting skills, are acquired by a combination of innate ability and learning over many generations.
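The flavour of such simulations can be conveyed by a toy model in the spirit of Hinton and Nowlan's classic 1987 experiment (a minimal Python sketch, not the code used in Stone's study; the genome length, population size, and fitness bonus are arbitrary choices): a skill only works when all genes are correct, some genes are left plastic and can be "learned" during a lifetime, and quick learners are rewarded, so correct genes gradually become innate.

```python
import random

L, POP, GENS, TRIALS = 10, 200, 60, 50

def lifetime_fitness(genome, can_learn):
    """Hinton & Nowlan (1987)-style toy: the skill works only if all L genes
    are correct (1); plastic genes (None) may be guessed during a lifetime."""
    if any(g == 0 for g in genome):
        return 1.0                               # a fixed wrong gene: hopeless
    unknown = genome.count(None)
    if unknown == 0:
        return 20.0                              # fully innate skill
    if can_learn:
        for trial in range(TRIALS):              # lifetime trial-and-error
            if random.random() < 0.5 ** unknown:
                return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def evolve(can_learn):
    pop = [random.choices([1, 0, None], weights=[1, 1, 2], k=L) for _ in range(POP)]
    for _ in range(GENS):
        fit = [lifetime_fitness(g, can_learn) for g in pop]
        nxt = []
        for _ in range(POP):                     # fitness-proportional mating
            mom, dad = random.choices(pop, weights=fit, k=2)
            cut = random.randrange(L)            # one-point crossover
            nxt.append(mom[:cut] + dad[cut:])
        pop = nxt
    return sum(g.count(1) for g in pop) / (POP * L)

random.seed(0)
print("fraction of innately correct genes, learning allowed:", round(evolve(True), 2))
print("fraction of innately correct genes, no learning:     ", round(evolve(False), 2))
```

With learning enabled, partially correct genomes earn partial credit, which smooths an otherwise needle-in-a-haystack fitness landscape and lets selection make steady progress toward the innate solution.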
The new theory has its roots in ideas proposed by James Baldwin in 1896, who made the counter-intuitive argument that learning within each generation could guide evolution of innate behaviour over future generations. It now seems that Baldwin may have been more right than he could have guessed, even though concepts such as artificial neural networks and distributed representations were not known in his time.
A previous version of this article appeared as an Early Online Release on June 8, 2007 (doi: 10.1371/journal.pcbi.0030147.eor).
|
Hemophilias are bleeding disorders due to deficiency in one of the factors present in the clotting cascade.1,2 The most common factor abnormalities are of factor VIII (hemophilia A) or factor IX (hemophilia B). von Willebrand's disease is a related defect of the von Willebrand factor.3
These hereditary bleeding disorders typically appear early in life, and adult patients will usually be able to relate a history of a bleeding problem. However, patients with mild forms of inherited disease may be unaware of a bleeding disorder until stressed by significant trauma or development of another hemostatic problem.
Systemic bleeding disorders should be suspected in patients with severe bleeding related to trivial trauma or minor surgery, or spontaneous bleeding, particularly when the bleeding occurs in joints or muscle. Unusual bleeding or bruising at multiple areas should also raise concern about a coagulopathy. Medications can be responsible for unmasking a mild bleeding diathesis.
The pattern of bleeding can suggest a likely cause. Patients with easy bruising, gingival bleeding, epistaxis, hematuria, GI bleeding, or heavy menses are more likely to have a deficiency or dysfunction of the platelets. Conversely, patients with spontaneous deep bruises, hemarthrosis, retroperitoneal bleeding, or intracranial bleeding are more likely to have a coagulation factor deficiency. In factor-deficient patients, bleeding associated with trauma may be delayed, due to inadequate fibrin clot formation that inadequately stabilizes the initial platelet thrombus. Patients with von Willebrand's disease may present with features of both platelet and clotting factor problems.
The genes that encode factors VIII and IX are located on the long arm of the X chromosome. A genetic mutation in the factor VIII gene produces hemophilia A, occurring in about 1 in 5000 male births in the United States. A mutation in the factor IX gene causes hemophilia B, affecting approximately 1 in 25,000 male births in the United States. Together, these two forms of hemophilia make up about 99% of patients with inherited coagulation factor deficiencies. Hemophilia A and B are clinically indistinguishable from each other, and specific factor testing is required to identify the type.
Because hemophilia A and B are X-linked disorders, hemophilia is overwhelmingly a disease of men, with women typically being asymptomatic carriers. Only rarely do women have severe disease. While these disorders are genetic and usually inherited, a family history of bleeding may be absent because approximately one third of new cases of hemophilia arise from a spontaneous gene mutation.
Bleeding manifestations in patients with all forms of hemophilia are directly attributable to the decreased plasma activity levels of either factor VIII or IX (Table 230–1). Those with factor activity levels of 0.3 to 0.4 IU/mL (30% to 40% of normal) may never be aware that they have hemophilia, or they might manifest unusual bleeding only after major ... |
Solar panels glimmering in the sun are an icon of all that is green. But while generating electricity through photovoltaics is indeed better for the environment than burning fossil fuels, several incidents have linked the manufacture of these shining symbols of environmental virtue to a trail of chemical pollution. And it turns out that the time it takes to compensate for the energy used and the greenhouse gases emitted in photovoltaic panel production varies substantially by technology and geography.
Once installed, your solar power system should produce electrical power for a large number of years with practically no further inputs to damage its green credentials, bar perhaps the occasional use of water to clean the modules, maintaining their efficiency.
In addition to being renewable, solar energy is typically labelled a “green” source of energy due to the lack of harmful environmental side effects associated with its use. While fossil fuels release greenhouse gases and other particles into our atmosphere, generating energy from solar panels is a zero-emissions process that can take place anywhere the sun shines.
The Environmental Effects of Manufacturing Solar Panels
Many people are concerned with the environmental effects of manufacturing solar panels. Like any manufactured product, making quality solar modules takes resources and energy, which means that solar energy production has at least some environmental impact. It is of course very difficult to measure the overall energy input into the manufacturing process of solar power systems in general. The good news is that this impact is minimal in comparison to the benefits of the zero-emissions energy produced with solar panels. Studies have shown that it only takes a few months for a solar panel producing energy to “cancel out” the impact of manufacturing it.
The environmental effect of producing solar panels is decreasing year after year with the introduction of better panel technologies and designs. For example, solar panel efficiency is increasing dramatically every year. This means that solar panels are becoming much better at converting sunlight into emissions-free energy, and the relative environmental cost of producing panels compared to the clean energy they generate is shrinking rapidly.
What Happens to Solar Panels at the End of Their Life?
The last few years have seen growing concern over what happens to solar panels at the end of their life. Perhaps the biggest problem with solar panel waste is that there is so much of it, and that’s not going to change any time soon, for a basic physical reason: sunlight is dilute and diffuse and thus requires large collectors to capture and convert the sun’s rays into electricity. Those large surface areas, in turn, require an order of magnitude more material — whether today’s toxic combination of glass, heavy metals, and rare earth elements, or some new material in the future — than other energy sources.
Hurricane Maria (an Atlantic hurricane of 2017) caused massive damage and destruction in Puerto Rico, resulting in a major humanitarian crisis. Many people around the world have heard and read about the disaster on the island, but few know much about the damage to its solar-panel electricity plants. The hurricane completely destroyed some of the island’s renewable energy projects, such as the solar park in Humacao. For the most part, the electricity grid was destroyed.
Solar panels can be recycled and the components within them repurposed, further lowering the overall environmental footprint of solar energy. Similar to panel efficiency improvements, panel recycling processes are continually getting better, further reducing the lifetime impact of solar energy.
Solar Panel Purchase Fee
One solution is to set a fee on solar panel purchases to make sure that the cost of safely removing, recycling or storing solar panel waste is internalized into the price of solar panels and not externalized onto future taxpayers. The fee would go into a decommissioning fund, from which money would later be dispensed to state and local governments to pay for the removal and recycling or long-term storage of solar panel waste. The advantage of this fund over extended producer responsibility is that it would make sure that solar panels are safely decommissioned, recycled, or stored over the long term.
|
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
On this page, we will provide you with a communication worksheet that will help you learn the basic communication styles.
What is a Communication Worksheet about?
Communication is the process of conveying and receiving information through various mediums, such as speaking, writing and non-verbal cues. Communication takes place in every social setting, be it at work, school, or social gatherings. In fact, there is no social interaction without communication; even choosing not to communicate sends a message. In this worksheet, we delve into the communication styles used to convey information to others.
How will the Communication Worksheet help?
A communication worksheet will help you learn more about communication styles. On this worksheet, you will be able to identify your personal communication style by reading and understanding the characteristics of each style.
Instructions on how to use the Communication Worksheet.
To use this worksheet, begin by reading the four communication styles.
Complete the worksheet by indicating the style you relate to.
You can download this worksheet here.
On this page, we provided you with a communication worksheet that we hope has helped you learn more about effective communication.
If you have any questions or comments, please let us know. |
Computers are used to solve problems, and the types of problems they solve depend on their algorithms and hardware, as well as on their capabilities and limitations. Imagine what would happen if you could lift these limitations once and for all. What would modern quantum computers be able to do then?
What has made quantum computers so exciting lately? Perhaps the fact that we are approaching the limits of the computing capabilities of the transistor-based machines we have long been using. They are limited by the laws of physics, which prevent us from packing ever more transistors onto a chip.
The old transistor computer …
Today’s processors consist of billions of transistors, each a few nanometers in size, crammed onto a very small surface. According to Moore’s law, the number of transistors in a microprocessor doubles every two years. Unfortunately, computational power increases have slowed down lately. This is because we are gradually reaching the technological boundaries of how many transistors can be “crammed” onto such small surfaces. The limit, which cannot be physically exceeded, is a transistor reduced to the size of a single atom, with a single electron used to toggle its state between 0 and 1.
… and its kid brother, the quantum computer
Quantum computing relies on intermediate states, thus moving beyond the scheme of two opposing values. The qubit (short for quantum bit), the quantum unit of information, can be in a combination of 0 and 1 simultaneously – in effect, any complex-weighted blend of the two basis states. This property is referred to as superposition. Only when a qubit is measured does it collapse to one of the two basic states, 0 or 1.
While the difference seems to be minor, a superpositioned qubit can take part in many calculation paths at once, helped by the fundamental laws of quantum physics. Physically, a qubit can be represented by any quantum system with two distinct fundamental states, such as an electron or atomic spin, two energy levels in an atom, or two levels of photon polarization – vertical and horizontal.
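A minimal numerical sketch of superposition and measurement in Python (the amplitudes below are arbitrary example values):

```python
import cmath
import random

# A qubit state |psi> = a|0> + b|1> must satisfy |a|^2 + |b|^2 = 1.
# Here: an equal superposition with an arbitrary relative phase.
a = 1 / 2 ** 0.5
b = cmath.exp(1j * cmath.pi / 4) / 2 ** 0.5

p0 = abs(a) ** 2                                # probability of reading 0
assert abs(p0 + abs(b) ** 2 - 1) < 1e-12        # normalization check

def measure() -> int:
    """Measurement collapses the superposition to a single classical bit."""
    return 0 if random.random() < p0 else 1

counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(counts)  # roughly [5000, 5000]: each outcome about half the time
```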
This completely unreal-sounding situation becomes reality in a quantum computer. A machine of this kind can process data hundreds of thousands, and theoretically millions, of times faster than devices relying on advanced silicon components! Ideal uses for such a machine include recognizing objects in a huge stock of photos, processing very large numbers, and encrypting and decrypting codes. On paper, the performance advantage of a quantum over a conventional computer can reach a factor of 18,000,000,000,000,000,000!
Creating quantum algorithms is a big challenge, as they must adhere to the laws of quantum mechanics. Algorithms executed by a quantum computer follow the laws of probability (hence they are referred to as probabilistic). This means that if the same algorithm is run twice on a quantum computer, the randomness of the process may cause it to produce different results. Simply put, to generate reliable results, one runs the computation several times and interprets the outcomes statistically.
Quantum computers are perfectly suited for specific, highly specialized calculations that rely on algorithms to harness their full power. Well-known examples of (classical) probabilistic algorithms include the Miller-Rabin primality test (which has extensive applications in cryptography) and Quicksort with random pivots, a fast sorting algorithm. All this means that quantum computers are unlikely to appear on every desktop or in every home any time soon.
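For a concrete taste of a (classical) probabilistic algorithm, here is a compact Miller-Rabin primality test in Python; a composite number can fool any single random round with probability at most 1/4, so repeating the rounds shrinks the error geometrically:

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: a composite n passes one random round with prob. < 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is definitely composite
    return True                   # very probably prime

print(is_probable_prime(2**61 - 1))  # True: a known Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```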
However, no matter how long it takes for a given algorithm to generate a result, we can easily imagine, even today, a scenario in which a quantum machine is needed to solve a specific problem.
Mathematics, physics, astronomy … and code-breaking
Quantum technologies can significantly affect various fields of science such as astronomy, mathematics, and physics. Quantum computers can sift through mountains of data almost instantly. This, in fact, may be the main reason why intelligence agencies and tech companies invest heavily in this technology. Quantum computers could be ideal for code-breaking: the asymmetric cryptographic algorithms that secure web-browser traffic and mobile/online banking transmissions could, in principle, be broken almost instantly. This is, potentially, the first technology that could threaten the cryptographic algorithms of blockchain networks and cryptocurrencies – or rather their cryptographic methods based on a pair of keys: public and private.
Despite the impressive computational power of quantum computers, they are not simply machines that can run existing software a billion times faster. Rather, quantum computers are good at solving specific types of problems.
Smarter artificial intelligence
The primary application of quantum computers could be to support artificial intelligence and, more specifically, machine learning. Neural networks, which underpin AI, need to be trained and taught specific behaviors on the basis of algorithms and huge data volumes. Put very simply, when deciding whether to take a specific action as a result of the calculations performed by an algorithm, neural networks are additionally guided by the probability that a specific desired outcome will occur. As they receive feedback on whether such an outcome is indeed desirable and correct, they automatically modify their algorithms to increase the chance of taking that desired, appropriate action.
This is an example of feedback-based machine learning. In a nutshell, the action model is based on the calculated probability of many possible choices. Artificial intelligence is ideal for quantum computing, where probabilities drive the operation of quantum computer algorithms.
A good example is the startup Rigetti, which designs neural networks using quantum computers. Rigetti has recently announced having designed a data clustering algorithm for quantum computers. In tests, the algorithm clustered data faster than algorithms run on classic computers. The feat was accomplished on a 19-qubit quantum machine. What speeds can more advanced computers achieve? When it comes to quantum computing, the increase in computational power between 19 and, say, 30 qubits is not linear but exponential. While many companies use quantum computers to run artificial intelligence algorithms, this example is one of the first successful attempts to make neural networks and quantum computers work together.
Quantum computers and artificial intelligence share another common feature: huge, exponential scalability. The power of quantum computers is measured in qubits, with the most advanced machines reaching about 50 qubits. With such power, they are equivalent to a single supercomputer. An increase to just 60 qubits would produce a machine that exceeds the collective computational power of all of the world’s supercomputers.
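The exponential scaling is easy to illustrate numerically (a quick Python sketch; the memory estimate assumes 16 bytes per complex amplitude): simulating n qubits classically means tracking 2^n amplitudes, so every added qubit doubles the cost.

```python
# A classical simulation of n qubits needs a state vector of 2**n complex
# amplitudes (~16 bytes each) -- which is why a handful of extra qubits
# overwhelms even supercomputers.
for n in (19, 30, 50, 60):
    amplitudes = 2 ** n
    tib = amplitudes * 16 / 2 ** 40   # bytes -> tebibytes
    print(f"{n:>2} qubits: {amplitudes:.3e} amplitudes, ~{tib:,.1f} TiB")
```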
Quantum machine learning is the latest field of research and emerging technology that attempts to harness the power of quantum computers to accelerate the performance of classic machine-learning algorithms. Even today’s AI systems with their machine learning algorithms are capable of processing incredible amounts of data. The process the algorithms used to search databases could be improved by employing quantum computing. Both such algorithms and quantum computer capabilities are expected to be available within a few years. Once they are here, the speed of neural networks will rise well beyond an ordinary surge. We will see it skyrocket, multiplied by a factor of millions.
Theoretically speaking, self-replicating artificial intelligence could scale itself with the expansion of hardware and cloud computing networks. This would allow artificial intelligence to create algorithms of a complexity that far exceeds any human creations. All to harness the full power of quantum computing.
An aerospace company is planning to use a quantum computer to test autopilot software used onboard aircraft. The latest models of behavior of the neural networks and algorithms used in such autopilots are too complex for conventional computers to handle. Quantum computers are also used to design software that can spot and mark autonomous vehicles.
We have already reached the point where AI creates new artificial intelligence without human involvement. All this thanks to quantum computers as well as the principles of quantum physics that underpin their operation.
Qubits harnessed for molecular modeling
Another example of the use of quantum computers is for precision modeling of molecular interactions to find the optimal configurations for chemical reactions. Quantum chemistry is so complex that only the simplest molecules and the simplest relationships among them can be analyzed by classical computers.
It is the computing power of computers in academic data centers that determines the accuracy with which various phenomena are simulated, often down to individual molecules. In highly complex systems, rather than relying exclusively on simplified presumptions to map interactions between molecules and molecule sets, as is the practice today, quantum computers will render such interactions mappable in environments that closely mimic real-life conditions. This will be possible thanks to the quantum nature of chemical reactions. By their very design and owing to their operating principles, quantum computers would have no difficulty simulating and evaluating even the most complex molecular processes. Molecular modeling is applied in nanotechnology, drug design, the exploration of biological molecules having known sequences but unknown structures and functions, learning about the dynamics and thermodynamics of chemical compounds, and materials research. The applications do not end there; in fact, there is a multitude of others. The main constraint on their development is the limited computing power of conventional computers.
In early 2018, scientists from the Institute of Quantum Optics and Quantum Information of the University of Innsbruck used a programmable quantum system to simulate interactions among protein molecules. As part of an experiment described in Nature, a team of researchers used a basic quantum computer to test the impact of external factors on particular ions of a molecule. In the experiment, the external factor was an attempted alteration of molecules to create a new chemical compound. The simulation showed it was possible to modify a real-life environment in such a manner. This means one can build new, stable particles. The quantum simulator worked perfectly in the experiment.
Fundamentals of cryptography for code-breaking
As quantum computers grow more powerful, common encryption algorithms become obsolete. Today, most data encryption security depends on the difficulty of factorizing (or breaking up) large numbers into primes.
To break a private key or crack an encryption method, factorization algorithms must painstakingly attempt to make divisions by successive numbers. While the task can be completed by today’s supercomputers, it would make no financial sense to use them. The estimated time that a conventional computer would need to break a 4096-bit RSA key would exceed the time that has passed since the formation of our galaxy!
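A small Python sketch makes the asymmetry vivid (the toy semiprime below is illustrative; real attacks use far faster sieving algorithms, but their running time still grows super-polynomially with key size):

```python
import math
import time

def smallest_factor(n: int) -> int:
    """Naive trial division: worst case ~sqrt(n) steps, i.e. about 2**(bits/2)."""
    if n % 2 == 0:
        return 2
    for f in range(3, math.isqrt(n) + 1, 2):
        if n % f == 0:
            return f
    return n  # n itself is prime

# A toy ~40-bit semiprime factors in a fraction of a second...
n = 999_983 * 1_000_003          # product of two ~20-bit primes
t0 = time.time()
print(smallest_factor(n), f"({time.time() - t0:.2f} s)")

# ...but the work grows roughly as 2**(bits/2): each extra bit of the modulus
# multiplies the effort by sqrt(2), so 2048- or 4096-bit RSA moduli are
# utterly out of reach for this approach.
```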
This makes breaking codes and keys costly and impractical. However, a quantum algorithm – Shor’s factoring algorithm is the canonical example – could, loosely speaking, explore all potential combinations simultaneously and arrive at the correct solution in a tiny fraction of the time. What this means is that today’s asymmetric encryption algorithms that rely on a public and private key would no longer be secure, and other methods of securing data, transactions or system access would have to be found.
We can nevertheless rest easy. Very little has been achieved so far in the practical application of quantum factorization. Besides, ever more asymmetric encryption methods are being developed that are designed to resist attacks by quantum computers.
Modern financial markets are among the world’s most complex ecosystems. Although many sophisticated mathematical tools have been developed to manage such markets, they are still far from efficient.
Asset managers responsible for investment funds can only dream of having a perfectly balanced portfolio designed for them. To rebalance their portfolios, change the proportions of their components and restore the original desired level of asset allocation, they either buy or sell assets. Let’s say that the original ratio of shares to bonds in the portfolio is 50/50. If shares in the portfolio perform well in a given period, their proportion could increase to 70%. To restore the desired 50/50 ratio, the asset manager may have to sell some shares and buy some bonds. This means incurring transaction costs. In a market where most funds only generate short-term single-digit profits, the loss of a few percent in transaction costs to rebalance the portfolio could be devastating. And the portfolio may have to be rebalanced multiple times during a reporting period.
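The arithmetic of that example looks like this in code (a Python sketch; the portfolio values and the 0.1% transaction-fee rate are hypothetical):

```python
def rebalance(stocks: float, bonds: float, target: float = 0.5,
              fee_rate: float = 0.001) -> tuple[float, float, float]:
    """Trade back to the target stock weight; return new values and fees paid."""
    total = stocks + bonds
    trade = stocks - target * total     # stock value to sell (buy if negative)
    fees = abs(trade) * fee_rate
    return stocks - trade, bonds + trade - fees, fees

# A 50/50 portfolio drifts to 70/30 after a good run for equities.
stocks, bonds, fees = rebalance(stocks=700_000.0, bonds=300_000.0)
print(f"stocks {stocks:,.0f}  bonds {bonds:,.0f}  fees {fees:,.0f}")
# stocks 500,000  bonds 499,800  fees 200
```

Each round trip quietly skims value off the portfolio, which is why an optimizer that can decide when and by how much to rebalance, the kind of combinatorial problem quantum hardware is hoped to accelerate, is so attractive.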
Quantum computers may optimize investment portfolios significantly faster than the algorithms used in classic computers, not to mention humans.
This is just one example of how quantum computers can handle the huge challenges faced by fund managers. A few years from now, quantum algorithms should be stable enough to replace people in both designing and managing investment portfolios.
At a financial conference of the Singularity University held in December 2017, the CEO of 1Qbit, Andrew Fursman, said that quantum computers, which rely on the most fundamental laws of nature, will appear sooner than we think. One of their key applications will be in quantum finance.
Quantum weather forecasting
NOAA (National Oceanic and Atmospheric Administration) Chief Economist Rodney F. Weiher claims that nearly 30 percent of the U.S. GDP is directly or indirectly affected by weather, impacting food production, transportation and retail trade. The ability to better predict the weather would have enormous benefits in many fields.
Although reliable weather forecasting has long been the goal of scientists, the equations governing the weather contain such a large number of variables and so much data that classic computer simulations are unable to perform the required calculations within reasonable time limits. Figuratively speaking, a current supercomputer might need three weeks to simulate the weather forecast for the next four days. This is not a problem of poor access to data or bad algorithms, but merely of computational power. As Seth Lloyd, a researcher who applies quantum computers to weather forecasting, pointed out: “Using a classic computer to perform such analysis might take much longer than it takes the actual weather to evolve”. The use of quantum computers would reduce processing time from weeks to hours.
While quantum physics in the form of quantum computers is already hugely impacting the areas listed above, you can easily imagine many other applications. Quantum technology and quantum algorithms are evolving. What will they bring? I hope a lot of good. |
By Lindsay Powell
The basic unit of the Roman army of the late Republic was the legion, derived from the Latin word legio, meaning “military levy.” Caius Marius is associated with several innovations of the unit in the run-up to the Battle of Vercellae. Among these was the transformation of the legion from one based on the maniple to one based on the cohort.
Organization of the Roman Legion
The manipular legion was made up of three lines of differently equipped troops. It was conceived around the time of the Samnite War in 315 bc. Lightly equipped hastati, generally younger men, formed the front line, which engaged the enemy first. Behind them stood the principes, who wore heavier armor and were more experienced. Behind them were the crack, battle-hardened triarii. The legion was made up of 20 maniples of hastati and 20 of principes, approximately 120 men each, and 20 half-strength maniples of triarii, making a total of 6,000 men.
The new cohort-based legion eliminated the three levels of troops and replaced them with uniformly equipped and trained legionarii. Eighty regular legionaries formed a centuria, commanded by a centurion, and six centuries made a cohort with its own standard. Nine regular cohorts plus a 10th cohort, perhaps of double size, made a total of 5,600-6,000 men in a legion. A legatus legionis commanded the legion, aided in order of rank by a senior tribune, a camp prefect, and a senior centurion called the primus pilus, or “first pilum.” There is no evidence that Marius was solely responsible for the change from the manipular to the cohort-based legion. It is likely the innovation was well in progress during his lifetime and had already taken place in many units, although some commanders in 101 BC may have preferred to retain the traditional manipular organization.
To supplement the ranks of the Roman legions at this time, noncitizen allies (socii) from the cities of Italy were recruited and formed their own legions, making up about half of total Roman forces. They seem to have been similarly equipped. The allies were particularly important for providing specialist troops and cavalry, which were referred to as extraordinarii.
Making a More Mobile Roman Legion
Reforms of the army firmly ascribed to Marius included greater mobility with less reliance on the baggage train, a modified design of pilum, and the adoption of the aquila or eagle standard as the unifying and iconic emblem of the Roman legion.
From contemporary sculptures such as the Aemilius Paulus Monument and the Altar of Domitius Ahenobarbus, it is evident that by the 1st century BC legionaries were uniformly equipped with iron chain-mail body armor (lorica hamata), bronze helmets, and large curved oval shields (scutum). Their weapons included the gladius Hispaniensis, the double-edged sword adopted from Spanish tribes, and the pilum, a uniquely designed javelin.
The Greek biographer Plutarch records that before Vercellae the iron shank of the javelin was attached to the wooden shaft with two iron nails. Just prior to the battle, Marius had his men replace one of the nails with a wooden peg. The design change meant that when the pilum struck the enemy’s shield, the wooden peg would snap, and the iron shank would bend or break off and remain stuck in the shield, rendering the shield unwieldy and the pilum unusable and nonreturnable. The opponent would then likely abandon his shield. Cimbri warriors carried large oval shields, and eliminating this defensive weapon would give the advantage to the Romans and improve their kill ratio.
Roman battle doctrine was to form up the cohorts in two or three lines in a checkerboard formation, with cavalry situated on the wings (alae). The legionaries would throw their pila in volleys and then charge as one body in a solid line or series of wedges, using their scuta to punch or knock down their opponents while stabbing them with their gladii. In a set-piece battle, this doctrine proved highly effective and was employed successfully by Roman legions for hundreds of years.
Neodymium magnets are a type of permanent magnet, also known as rare earth magnets because they contain one or more of the rare earth elements of the periodic table. Most are made of a metal alloy containing neodymium, iron, and boron. They are much stronger than most of the magnets people are accustomed to using, like refrigerator magnets. Because of the forces they generate, they can be dangerous, and can even cause fatal injury if not handled properly.
These magnets are the strongest permanent magnets available, and are able, in some cases, to hold up more than 1,000 times their own weight. They are manufactured in many different shapes and sizes, such as cubes, discs, spheres, plates, and rings, among others. Small ones are used in certain electronic devices, such as computer hard drives and headphones. They have also been found to be useful in the construction of engines for remote-controlled model aircraft.
The strength of neodymium magnets is denoted by the letter "N" followed by a number, with grades ranging from N24 to N55. An N64 magnet is theoretically possible but has not moved beyond that theoretical status. These magnets have some odd properties when they interact with certain other materials because of their impressive strength-to-size ratio.
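The grade number corresponds approximately to the magnet's maximum energy product in megagauss-oersteds (MGOe). A small sketch of the conversion, using the standard ideal-magnet relation (BH)max = Br^2 / (4*mu0); the N42 grade is just an example, not one singled out by this article.

```java
public class MagnetGrade {
    public static void main(String[] args) {
        int grade = 42;                    // e.g. an N42 magnet (example)
        double mgoeToJoulePerM3 = 7957.75; // 1 MGOe = 7957.75 J/m^3
        double mu0 = 4e-7 * Math.PI;       // vacuum permeability

        double bhMax = grade * mgoeToJoulePerM3;        // max energy product
        double remanence = Math.sqrt(4 * mu0 * bhMax);  // Br for an ideal magnet

        System.out.printf("N%d: (BH)max = %.0f kJ/m^3, Br = %.2f T%n",
                grade, bhMax / 1000, remanence);
    }
}
```

The result, about 1.3 tesla for N42, matches typical datasheet values, which is a useful sanity check on reading the grade as an energy product.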
One of these properties is known as magnetic braking, and it can be observed by dropping a neodymium magnet through a copper pipe. The magnet's fall will be very slow: the moving magnet induces eddy currents in the nonmagnetic but conductive copper, and by Lenz's law those currents create a magnetic field that opposes the magnet's motion. Immersing the copper pipe in liquid nitrogen is said to enhance this effect, since cooling raises the copper's electrical conductivity. A row of sufficiently strong neodymium magnets is powerful enough to affect the speed and angle of a steel bullet in flight.
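A toy model makes the slow fall quantitative: treat the eddy-current drag as proportional to velocity, F = k*v, so the magnet settles at a terminal velocity v = m*g/k. The drag coefficient below is an assumed illustrative value, not a measured one; k grows with the pipe's conductivity, which is why cooling the pipe slows the fall further.

```java
public class MagneticBraking {
    public static void main(String[] args) {
        double mass = 0.010;   // magnet mass in kg (assumed 10 g)
        double g = 9.81;       // gravitational acceleration, m/s^2
        double k = 0.5;        // eddy-current drag coefficient, kg/s (assumed)

        // With drag F = k*v, the fall settles at terminal velocity m*g/k.
        double vTerminal = mass * g / k;
        System.out.printf("Terminal velocity: %.3f m/s%n", vTerminal);

        // Cooling the pipe raises conductivity, and k scales with it:
        double kCold = 5 * k;  // assumed ~5x conductivity in liquid nitrogen
        System.out.printf("Cooled pipe: %.3f m/s%n", mass * g / kCold);
    }
}
```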
Most of the neodymium magnets in use are small, and even these can be dangerous if improperly handled. For example, if a child is left unattended and swallows two small magnets, the magnets can pinch internal organs together and cause fatal injuries or infections. Even more care must be taken with larger magnets, such as those as large as the palm of a person's hand. These magnets are strong enough to affect everything magnetic or electronic in a room, often with unpleasant results.
Manufacturers are unable to ship these larger magnets on aircraft because they are strong enough to interfere with a plane's navigation systems, especially its compass. The websites of many neodymium magnet retailers are replete with safety warnings regarding their handling. Despite these warnings, the magnets can prove very useful in scientific applications, both for demonstration and experimentation.
This course will highlight the most interesting experiments within the field of psychology, discussing the implications of those studies for our understanding of the human mind and human behavior. We will explore the brain and some of the cognitive abilities it supports, like memory, learning, attention, perception and consciousness. We will examine human development, both in terms of growing up and growing old, and will discuss the manner in which the behavior of others affects our own thoughts and behavior. Finally, we will discuss various forms of mental illness and the treatments that are used to help those who suffer from them. The fact of the matter is that humans routinely do amazing things without appreciating how interesting they are. However, we are also routinely influenced by people and events without always being aware of those influences. By the end of this course you will have gained a much better understanding and appreciation of who you are and how you work. And I can guarantee that you'll learn things you'll be telling your friends and family about, things that will fundamentally change the way you think of yourself and others. How can you resist that?!
This post is a compilation of our most viewed notes on Geography, which we think our readers should not miss.
Learn Geography: Must Read Articles
Geography can be broadly divided into Physical Geography and Human Geography. Both these streams can be studied under the verticals – Indian Geography and World Geography.
- Geomorphic Processes.
- Endogenic Forces and Evolution of Landforms.
- Volcanoes: Everything You Need To Know.
- Earthquakes: Everything You Need To Know.
- Exogenic Forces and their classification.
- Erosion and Deposition: Action of Running Water and Groundwater
- Erosion and Deposition: Action of Glaciers
- Erosion and Deposition: Action of Wind and Waves
- Major Landforms – Mountains, Plateaus, and Plains: Learn faster
- Interior of the Earth: Crust, Mantle and Core
- Earth’s Crust: Elements, Minerals and Rocks
- Ocean Floor: Everything you need to know
- Major Ocean Currents: How to learn faster?
- Movements of ocean water: Waves, Tides and Ocean Currents
- Composition and Structure of the Earth’s Atmosphere
- Insolation and Heat Balance of the Earth
- Clouds: How to Distinguish the Different Types of Clouds?
- Air masses: Origin and Classification
- Fronts: Types and Significance
- Winds: Classification and types
- Cyclones vs Anticyclones
- Jet streams: Characteristics, Types and Significance
- Rainfall: Different Types Explained in Layman’s Language
Applications of Physical Geography: World Geography and Indian Geography
- Countries of the World Listed By Continent.
- Interesting Facts and Figures Regarding World Geography.
- Soils of India: Classification and Characteristics.
- Different soil types in India: Understand the differences.
- Causes of Soil Degradation and Methods for Soil Conservation.
- Urban Heat Islands.
- Sectors of Economy: Primary, Secondary, Tertiary, Quaternary and Quinary
- Factors Responsible for the Location of Primary, Secondary and Tertiary Sector Industries in Various Parts of the World (Including India)
- Distribution of Major Industries: Location Factors
Applications of Human Geography: World Geography and Indian Geography
- Indian Agriculture: Farming Types, Features and Challenges
- Cotton Cultivation in India: Important Things You Should Know
- Biosphere Reserves of India: Names and Location
- Geographical Indication (GI) Tags in India: Memorize Faster
- Ramsar sites (Wetlands) in India: Memorize faster
- UNESCO’s World Heritage Sites: Names from India
Check the Geography notes category to read the complete article archives (from the latest posts to the oldest ones).
How to download ClearIAS Notes?
Every note published on ClearIAS.com has a print-pdf button at the bottom right of the post. Readers can download each note as a PDF for free using the ‘print-pdf’ option.
Alternatively, you can use the website ‘printfriendly.com’: enter the URL of any post on ClearIAS.com to download a clean, reader-friendly PDF.
Books to learn Indian Geography and World Geography
The best sources for studying Geography are the NCERT textbooks from Standard 6 to 12. Most of the topics are explained in very simple language in these basic textbooks, which is really helpful for the exam. Other useful books for Geography topics are:
- Certificate Physical And Human Geography – by Goh Cheng Leong
- Geography of India – by Majid Husain
- Oxford Student Atlas for India
Additional Reference Books
- World Geography – by Majid Husain
- Physical Geography – by Savindra Singh
- Human Geography – by Majid Husain
Learn Geography fast: The study plan
Geography is an important topic for IAS Preliminary as well as Main Exam. The discipline of Geography is broadly divided into Physical Geography and Human Geography. The concepts of Geography need to be applied to questions related to Indian Geography as well as World Geography.
Indian Geography In Brief: Topics To Cover
Indian Geography can be divided into three parts: Physical Geography, Economic Geography and Social Geography. The major sub-topics under physical Indian Geography are physiographic divisions, drainage, climate, vegetation, natural resources etc. Topics related to the environment, like wildlife, soil, flora etc., should be stressed as well. Economic and Social Geography aspects of Indian Geography should be studied in parallel with Physical Geography. Tip: NCERT books will turn out really handy for the preparation of Economic and Social Geography.
- Mountains (Himalayas).
- Northern Plains.
- Peninsular Plateau.
- Coastal Plains.
- Himalayan Rivers.
- Peninsular Rivers.
Climate + Four Seasons of India
- Hot Weather Season.
- Advancing Monsoon.
- Retreating Monsoon.
- Cold Weather Season.
- Tropical Rain-forest.
- Tropical Deciduous Forests (Monsoon Forests).
- Mountain Vegetation.
- Desert Vegetation.
- Marshy land Vegetation.
- Minerals including Petroleum and Natural Gas.
- Wild Life.
Human Geo: Economic Geography
Study in detail the economic activities related to Agriculture, Industries and Services in different areas of India. Learn how the geography of a region affects its economic prosperity.
Human Geo: Social Geography
Learn more about details like demographics, poverty, hunger, literacy rates, unemployment etc. from a geography perspective.
This paperwork of CMGT 557 Week 2 Individual Assignment Emerging Technology Timeline includes:
Pick one industry or field that has benefited from emerging technology in the last 100 years. List ten innovations that have helped shape the selected industry during this time. Identify the selected technology and, in the chart below, provide a brief description of the technology. Then, explain the relevance by describing how the innovation advanced the industry. Finally, describe how the technology was initially received. How long did it take to fully implement? Do not simply list advances in a specific technology, such as progressively faster computer processors. Instead, identify different technologies that changed the industry.
What sparks your curiosity? What sparks the curiosity of your children? I would venture to guess that nature and animals might rank pretty high on the list of interests for you and the children you work with every day. Using animals and nature with children is a wonderful opportunity to teach empathy, conservation and environmental stewardship. Fortunately, Nebraska Extension has an exciting early childhood resource to share with you this year around animals and their habitats. This year, we are thrilled to provide eight guides highlighting habitats such as the tundra, rainforest, and desert.
Nebraska Extension has created this great resource for parents, early childhood professionals, caretakers, grandparents, and anyone who loves to read with young children, and it ties directly into local libraries’ summer reading programs. Summer reading programs are taking place right now, and the theme across the state is Tails and Tales. Our STEM Imagination Guides are designed to provide several opportunities to connect with each year’s theme by featuring:
- Familiar storybook suggestions:
- The stories that have been selected for each guide are well-known stories and often children’s favorites. It is okay if your child has already heard the story prior to taking part in the lesson. Sharing a story multiple times helps children develop language and listening skills.
- Conversation starters:
- When a two-way conversation is initiated with children during story time, participation in dialogic reading is encouraged. Open-ended questions are provided in each lesson to foster dialogic reading which has tremendous academic and social-emotional benefits for young children.
- STEM connection experiments:
- Children love finding out how things work through fun, hands-on projects. The experiment included in each guide relates to the featured habitat and teaches a variety of STEM concepts that are engaging and educational.
- Sensory explorations:
- Sensory play stimulates children’s senses and is important for brain development. During the suggested sensory activities, children use multiple senses which allows them to learn more from their experiences and retain more information.
- Music and movement activities:
- Research shows that music ignites all areas of child development and enhances skills for school readiness. Not only is singing songs and playing games fun, but these activities also encourage self-expression and physical activity.
- Creative arts investigations:
- When children create pictures of stories that they have read, comprehension improves and often motivates children to want to read and interact with books even more. Art is an early form of communication. Creative art suggestions allow children to express themselves and make meaningful connections with the stories.
- Additional related readings:
- Since each of the Imagination Guides focus on a different habitat, children often have additional questions and are interested in learning MORE! Supplemental fiction and nonfiction books are suggested so children can expand their knowledge.
The STEM Imagination Guides can be utilized in a variety of ways. No need to panic if you do not have access to the featured storybook. Consider listening to the story online or sharing the story orally from memory. Each Imagination Guide has a variety of options and can be customized to meet the needs and interests of the children in your care. Incorporate all of the activities or just a few. It is up to you! The shared reading experience and creative play opportunities are sure to create an excitement for animals as well as foster a joy for reading.
All of these resources are free and available for download and print at https://go.unl.edu/imagination. This website also houses the previous year’s resources focusing on fairy tales. This website is like a treasure chest of great literacy and STEM resources right at your fingertips. All Imagination Guides, whether from this year or previous years, can be utilized at no cost. Enjoy this year’s habitat exploration!
SARA ROBERTS AND JACKIE STEFFEN, EXTENSION EDUCATORS | THE LEARNING CHILD
Peer Reviewed by Amy Napoli, Assistant Professor & Early Childhood Extension Specialist and LaDonna Werth and Lynn DeVries Extension Educators, The Learning Child
Education Gender Gap: Are Boys and Girls on Equal Ground?
The education landscape is changing. From evolving teaching strategies to coursework rooted in augmented reality, many aspects of how we remember school are shifting. But is it possible that these shifts are leaving some students behind? And if so, who?
The school-based gender gap refers to the disparity in achievement between genders in an educational environment.1 And there’s no denying these gaps exist in many instances. As the classroom landscape, coursework, and policies continue to change, the gaps seem to become more apparent—and perhaps in ways you might not realize. Below, we’ve outlined some of the ways in which boys and girls are losing (or gaining) footing in school.
- In nearly every U.S. school district, girls surpass boys in reading and writing.2
According to Stanford University's systematic study of gender achievement gaps—based on state accountability test data of third through eighth grade students from 2008–2015—girls outperform boys by nearly half a grade level in third grade. By the end of eighth grade, girls are almost a full grade ahead. And although it is theorized that this disparity exists because boys are statistically more likely to have a learning disability or because they feel pressured to conform to masculine norms—which do not prioritize reading—the study does not determine exactly why girls excel in reading and writing, just that this gap is consistent across the board.
- Though gender achievement gaps have narrowed, learning stereotypes are still reinforced—particularly in high-income populations.2
In 2015, math gaps were noticeably lower on average than in 2009. However, the gender stereotype of “boys are better at STEM” still exists and can greatly impact learning outcomes. According to Sean Reardon, a member of Stanford’s Center for Education Policy Analysis steering committee, "It may be easier for parents to reinforce stereotypical patterns in affluent places because they have more money to do so." This hypothesis provides potential insight into why boys from affluent communities still outperform girls in math, citing the difference money can make when it comes to stereotype reinforcement.
- Girls are more successful at implementing self-regulated learning strategies.3
Self-regulation is the ability to control oneself. This includes thoughts and behaviors, motivation, and overcoming distraction and procrastination. And the ability to self-regulate is vital to academic achievement. Students are set up for greater success when they demonstrate organizational skills, goal-setting and planning strategies, attentiveness, and impulse control—all aspects that are directly tied to high levels of self-regulating behaviors.
- Boys tend to struggle more with disciplinary issues.4
Believe it or not, how boys behave in school—and how that behavior is handled—plays a significant role in future educational outcomes. According to a study conducted by Brown University’s Watson Institute for International and Public Affairs, boys are less likely to learn and more likely to be held back in school. The study also found that boys enter school with more behavioral problems than girls—and are punished more often for them. This disparity is believed to impact learning outcomes and contribute to the widening gender gap when it comes to academic achievement.
- Socioeconomic status can affect gender achievement gaps.2
A review of test scores from 10,000 U.S. school districts found that gender gaps in English and math vary with community wealth and racial diversity. According to Stanford’s education study, boys in affluent, highly educated, and predominantly white districts outperformed girls in math, whereas girls in poorer, more racially diverse districts often outperformed boys in math. However, despite these striking differences in gender achievement for math, no correlation with local socioeconomic status or racial makeup was found in reading and writing. Unfortunately, Stanford’s research doesn’t provide evidence as to why these socioeconomic and racial conditions impact learning but is meant to encourage further research on the matter.
Learn About the Topics in Education That Matter Most at Walden University
If you’re considering enrolling in a graduate program for teachers or want to learn more about the social change topics affecting the education field, Walden can help. An accredited university, Walden offers online education degree and certificate programs that can help you gain the experience and knowledge you need to further your career. Walden offers an MS in Education (MSEd) program rooted in the latest teaching strategies, cutting-edge technologies, and best practices taught by industry experts. Choose from 14 specializations and explore online courses in curriculum design, instruction, and assessment.
Advance your teaching knowledge by earning your master’s degree in education at Walden. And because the program is offered on a convenient online platform, you can earn your degree from wherever you have internet access—no need to rearrange your schedule or commute to campus. Earn your MSEd degree to better impact the lives of your students while you continue to work full time.
Walden University is an accredited institution offering a suite of education programs online, including an MS in Education degree program. Expand your career options and earn your degree with a convenient, flexible format that fits your busy life.
Walden University is accredited by The Higher Learning Commission, www.hlcommission.org.
Written by: Dina Dechmann and Mariëlle van Toor
Straw-coloured fruit bats are found throughout most of the African continent. This large fruit bat is one of the most numerous fruit-eating animals (frugivores) in Africa, if not the most numerous. They live in colonies of thousands to millions of individuals.
Fruit bats sleep during the day, hanging upside down in the crowns of old trees, and become active at sunset when they set off in search of food – specifically nectar and fruit.
With their wingspan of up to 80cm, they are able to cover vast distances. When the colonies are very large and competition for food is stiff, they can fly up to 95km to suitable food trees and only return to their roosts the following morning. They defecate the seeds of the fruit they eat over an unusually long time period, even during flight. They can thus disperse seeds across huge areas as they go.
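A crude upper bound on how far a single seed can travel follows from multiplying flight speed by gut retention time. Both numbers below are assumed illustrative values, not measurements from this study.

```java
public class DispersalDistance {
    public static void main(String[] args) {
        double flightSpeedKmH = 30.0; // assumed commuting flight speed
        double gutRetentionH = 1.5;   // assumed seed retention time in hours

        // A seed eaten at a feeding tree can travel as far as the bat does
        // while it remains in the gut.
        double maxDistanceKm = flightSpeedKmH * gutRetentionH;
        System.out.printf("Potential dispersal distance: up to %.0f km%n",
                maxDistanceKm);
    }
}
```

Even with conservative assumptions, the bound lands in the tens of kilometres, consistent with the long commutes described above.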
The seeds transported in this way can end up far from the parent plant, in areas that are good for germination and establishment. Because these gigantic colonies migrate seasonally across Africa, following the rain and the ripening fruit, they help disperse the seeds of seasonal fruits even in places with only a few local frugivores.
The fruit bats therefore contribute to the species and genetic diversity of forests.
In 2019 we investigated the potential of these fruit bat colonies to reforest areas where trees had been lost in parts of Africa.
We tracked the movements of fruit bats in Ghana, Burkina Faso and Zambia by fitting them with small GPS loggers, which allowed us to follow their nightly movements to food trees. We also looked into how long they held food in their gut. We then scaled our findings up to entire colonies to see what services they provide in large numbers.
We found that, in a conservative estimate, a colony of 150,000 animals could disseminate more than 300,000 small seeds in a single night, and that a single colony of fruit bats could kickstart the regrowth of 800 hectares of forest.
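A back-of-the-envelope version of that colony-scale estimate can be sketched as follows. The per-bat dispersal rate, establishment fraction, seedling density and foraging nights are illustrative assumptions chosen to reproduce the order of magnitude reported above; they are not the authors' actual parameters.

```java
public class SeedDispersal {
    public static void main(String[] args) {
        int colonySize = 150_000;         // bats in the colony (from the study)
        double seedsPerBatPerNight = 2.0; // assumed small seeds per bat

        double seedsPerNight = colonySize * seedsPerBatPerNight; // ~300,000

        // Assumed: fraction of seeds that land on open ground and establish,
        // pioneer-tree density needed for regrowth, and foraging nights/year.
        double establishFraction = 0.001;
        double treesPerHectare = 75;
        double nightsPerYear = 200;

        double hectares = seedsPerNight * nightsPerYear * establishFraction
                / treesPerHectare;
        System.out.printf("Seeds/night: %.0f, hectares/year: ~%.0f%n",
                seedsPerNight, hectares);
    }
}
```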
They’ve likely often done so: a study using seed traps in deforested areas of Cote d’Ivoire found that 96 percent of the dropped seeds had been carried in by fruit bats.
Worryingly, fruit bats have started to disappear from forests everywhere. They are primarily at risk from hunting and persecution out of superstition, fear or simple annoyance due to the noise they make when they roost.
Their disappearance would not only mean a loss of biodiversity but would also have huge economic consequences, as fruit bats disperse the seeds of, and likely also pollinate, many economically valuable plants such as timber species and food-producing plants.
For our study, we used GPS transmitters to track the flight paths of the bats. We also measured the time it took them to excrete seeds after eating them. For this we took bats into captivity, fed them their natural food dyed with fluorescent dye, and then filmed when each food item was excreted. These trials showed that the animals excrete some of the seeds only after a relatively long time, thereby facilitating their dispersal over vast distances.
We were able to calculate the potential of an entire colony to disseminate seeds over long distances and to transport them to deforested areas.
Among other things, the straw-coloured fruit bat disperses fast-growing trees that are the first to colonise open ground, so-called pioneer trees, which are able to grow in bright sunlight and create the right environment for rainforest tree species to establish and grow.
The profit that the regrowth of this much forest generates for the population, for example through edible fruits, increased soil fertility and timber, has been estimated using the results of a study on the cost of deforestation in Ghana, under the assumption that all areas supplied with seeds by bats were allowed to reforest. Our estimate was in excess of 700,000 euros (about US$750,000). Because straw-coloured fruit bats migrate throughout Africa, many communities profit from their services.
Sadly, the population of straw-coloured fruit bats is in continuous decline. For example, a colony we monitor in Accra, Ghana, has gone from one million individuals a decade ago to fewer than 20,000 bats in the spring of 2022.
Given that each female gives birth to a single pup each year, this decline is going to lead to a population collapse. Logging of the large trees in which the animals roost is also threatening their populations. Often we return to a place where a thriving colony was previously observed, only to find the roost trees, and thus the bats, gone.
Straw-coloured fruit bats contribute to the conservation of African forests, so there is an urgent need to explain their importance to the human population. With the recent COVID outbreak and other diseases such as Ebola, bats have moved into the focus of the press, and thus of local communities. While it is important to inform people about how to co-exist safely with bats, there is currently no scientific evidence to support the rumour that straw-coloured fruit bats, or any bats, were involved in these outbreaks. The best way to ensure the health and safety of both bats and people is simply to stay away from them.
During our research, we met a local king in Kibi, a town in southern Ghana, who is leading by example. He has placed the straw-coloured fruit bat colony that has taken up residence in his garden under his own personal protection and calls them his babies.
An NGO we collaborate with closely, the Rwanda Wildlife Corporation, does exemplary work to help reverse the negative trend in fruit bat populations. They visit local communities, explain the benefits the bats provide and the threats they face, and recruit local volunteers to contribute to counts and observations. Many of these volunteers are children, who are our best ambassadors for a future where humans and bats can live side by side.
CBSE Class 11-science Biology Stem, Leaf, Inflorescence
- Define inflorescence.
- Why are tendrils not seen in tall trees?
- What is the difference between a dorsoventral and an isobilateral leaf?
- What is meant by acropetal and basipetal order?
- Define stem.
- What are nodes and internodes?
- Name the plants whose stems are modified to store food.
- State the function of stem tendrils.
- What are thorns? Give one example of a plant in which thorns are found.
- Define leaf.
Cosmic rays are continuously getting stronger and that could mean Earth should brace itself for a deep solar minimum, scientists have warned.
A decade ago, scientists noticed an all-time high in cosmic rays, which originate from deep space and are not to be confused with solar radiation, which comes from the Sun. Now, scientists have noticed that cosmic rays are back on the rise as the Sun goes deeper into a solar minimum.
The Sun follows an 11-year cycle in which it swings between a solar maximum and a solar minimum. During a solar maximum, the Sun gives off more heat and is littered with sunspots; a solar minimum brings less heat, the result of a decrease in the Sun's magnetic activity.
The Sun entered the current solar minimum roughly a year or so ago, when magnetic activity from our host star began to lessen.
With less magnetic activity coming from the Sun, cosmic rays find it easier to penetrate Earth’s atmosphere and are more noticeable to scientists.
While cosmic rays have little effect on our planet, one of the reasons scientists monitor them is to see when the Sun has entered a solar minimum.
Now, with cosmic rays almost reaching that all time high again, scientists know the Sun is about to enter a prolonged cooling period.
The last prolonged solar minimum was the Maunder Minimum, which began in 1645 and lasted through to 1715: seven decades of freezing weather during which sunspots were exceedingly rare.
During this period, temperatures dropped globally by 1.3 degrees Celsius, leading to shorter growing seasons and ultimately food shortages in what was called a “mini Ice Age”.
The space weather forecasting site Space Weather said that the solar minimum is getting deeper as the year progresses.
It reads: “As 2019 unfolds, Solar Minimum appears to still be deepening. Cosmic rays haven’t quite broken the Space Age record set in 2009-2010, but they’re getting close.”
Nathan Schwadron, a space physicist at the University of New Hampshire, said: “No one can predict what will happen next.
“However, the situation speaks for itself: We are experiencing a period of unusually weak solar cycles.”
see also: GWPF coverage of solar activity research
The name “bird’s eye” has been given to many companies and mapping tools. Who could forget Bird’s Eye vegetables? Many applications want you to associate the bird’s eye with high-quality products or software that can show map details for miles. “Bird’s-eye view” is a phrase often used for a high vantage point that allows someone to see the lay of the land for several miles. How much do we owe these wild animals?
Did you know that your eye and the bird’s eye share many of the same structural features? Both the human eye and the bird eye have a cornea, retina, iris, lens, anterior chamber and eyelids. Some of these structures function in the same way for both humans and wild or domesticated birds. The obvious difference is the size of the eye in comparison to the rest of the body: a starling’s eyes make up about 15% of its head mass, whereas a human’s eyes account for only about 1%.
The size of the eye depends on the bird species. Owls, for example, have huge eyes that allow them to take in more at once. For a wild bird that is important, because it means they can spot enemies and prey much more easily. Owls cannot move their eyes in their sockets, so they rotate their heads instead. How far do they rotate their heads? A whopping 270 degrees. Another difference lies in the number of eyelids a bird has compared to a human: the human eye has only an upper and a lower eyelid.
All species of birds have three eyelids: an upper lid, a lower lid, and a nictitating membrane that cleans and protects the eye. Another difference lies in the mobility and positioning of the eyes. Unlike humans and most mammalian predators, birds have eyes that are largely fixed in their sockets and cannot swivel very far to the sides. This is why birds turn their heads often.
The bird eye is fascinating. Though it is very similar to the human eye, it has enough differences to allow the bird to flourish and survive out in the wild. A wild bird without the right vision or eye positioning would face a very dire situation when protecting itself or finding food for survival. If you want to know more about the avian eye, go to your local library and check out a book about birds.
A pogrom is a violent riot aimed against a person or personal property based on the victim's ethnic, religious, or social background or lifestyle. It was originally used for race riots against Jewish communities, but has also been used since in reference to other groups. Historically many pogroms - particularly against Jews - were perpetrated with the tacit approval or even outright support of local rulers and non-Jewish businesspeople.
Examples of pogroms
- The Alexandrian riots of 38 CE were likely the first recorded pogrom in human history; the racial hatred towards the Jews was likely instigated by Aulus Avilius Flaccus, the Roman prefect of Egypt.
- The German Crusade of 1096 was inspired by the call for Christians across Europe to take Jerusalem from the Muslims; because the Jews were viewed as being as evil as the Muslims, the German crusaders took it upon themselves to drive them out.
- The Lisbon massacre of 1506 was a series of persecutions by Catholics against the "New Christians", recently converted Jews who were perceived to be secretly practicing Judaism.
- The Tsarist government of 19th-century Russia was notorious for its cruelty towards the Jewish population.
- The Adana massacre of resident Armenians in the Ottoman Empire in 1909; the massacre lasted for over a month and resulted in 30,000 deaths.
- The Ocoee massacre of 1920 in Ocoee, Florida, which resulted in 50 to 60 African Americans being killed.
- The Tulsa race riot of 1921, in which 55 to 300 African Americans were killed and the "Black Wall Street" district was destroyed.
- The 1929 massacre of Jewish residents and immigrants of Hebron in Mandatory Palestine by their Arab neighbors.
- Nazi Germany was also home to Jewish pogroms in the late 1930s; the most well known pre-war pogrom was Kristallnacht in 1938.
- The Iași pogrom of 1941 was launched by the fascist government of Romania and killed up to 13,226 Jews.
- The Lviv pogroms of 1941, perpetrated by Ukrainian nationalists against Jews in occupied Poland.
- The Kielce pogrom in Poland in 1946 was the result of a moral panic alleging that Jewish refugees were kidnapping children; this pogrom is especially notable because it happened a year after WWII, against survivors of the Holocaust.
- The 1984 anti-Sikh riots in India after the assassination of Indira Gandhi.
- The 1983 anti-Tamil riots in Sri Lanka by the Sinhalese; commonly called Black July, this event is considered to be the beginning of the Sri Lankan civil war.
- The Baku pogrom of 1990 against Armenians living in Azerbaijan.
- The Mława riot of 1991 in Poland against the Roma.
- In a historical irony, fanatical Jewish settlers in the West Bank have perpetrated pogroms against Palestinians every now and again.
- The Crown Heights riot of 1991, a three-day riot against Hasidic Jews in Crown Heights (Brooklyn, New York) by a Caribbean and American mob partially instigated to action by the Reverend Al Sharpton.
- The Gujarat pogrom of 2002 was a riot by Hindus against Indian Muslims; Indian Muslims retaliated by driving Hindus out of their communities. The Indian government was blamed for its complicity, and the hatred was believed to have been incited by Hindu nationalists like Narendra Modi.
- In 2004, Serbs were driven from their homes by Albanians in Kosovo.
- In recent years, Burma has been home to pogroms against its small Muslim community (the Rohingya) by Buddhist nationalists.
Lokrum is a green island, covered in fertile land and lush flora. However, what lies underneath that thin layer is clearly visible only at the shore: the whole island is made of thick layers of sedimentary rock – limestone and dolomite.
The sedimentary rocks hide an interesting geological story millions of years old – they used to be a part of the Adriatic-Dinaridic carbonate platform. The platform’s remains still make up the karst region in Croatia, from Karlovac, Gorski Kotar and Lika to Istria, the Croatian Littoral and Dalmatia, but its largest part has sunk under the Adriatic Sea.
The platform was an expansive shallow-water area with an irregular cluster of smaller and larger islands, beaches and shallows, separated by deeper lagoons. It was bordered by a reef built by organisms that could survive despite the strong waves and currents of the surrounding deep sea, such as golden anemones, moss animals and mollusks.
The inner area of the platform was dominated by quieter depositional environments, where the very warm, shallow, light-filled sea offered conditions for the growth of various plants and animals, i.e. conditions similar to those in the Bahamas today.
Only a small part of this lush life left a trace in the rocks, mostly organisms with mineral skeletons, such as snails, mollusks and corals. The remains of such organisms, i.e. fossils, help determine the age of the layers.
In the layers of sedimentary rock that Lokrum is made of, along with locally abundant mollusks, there are various tiny unicellular organisms visible only with a magnifier or a microscope: foraminifera, of which the most important are Moncharmontia apenninica and Scandonea samnitica, and plants, the algae Thaumatoporella parvovesiculifera and Aeolisaccus kotori. They indicate that the sediments were deposited during the Late Cretaceous, about 85 million years ago.
The slow deposition of limestone material (carbonate mud, skeletons of various organisms, fragments of older rock, etc.) throughout long geologic periods resulted in thick rock layers, as complex processes slowly transformed the carbonate mud and sand into hard rock: limestone.
About forty million years after the sedimentation of the rocks that make up the island of Lokrum, the Adriatic-Dinaridic carbonate platform ceased to exist due to the movement of large parts of Earth’s crust: the collision of the ancient supercontinents Gondwana, which incorporated present-day Africa and India, and Laurasia, which incorporated present-day Europe and Asia. That singular event had a huge impact on the whole region located between them, including the Adriatic-Dinaridic carbonate platform.
As a result of the collision, the mountain chain that stretches from the Alps to the Himalayas arose. The Dinaric Alps are part of that belt.
What used to be an undisturbed sequence of sediments, like a large cake with horizontal layers, underwent huge deformation, and the consequences are also visible on Lokrum. The bedding surfaces are more or less inclined and rippled, cut by numerous deep cracks or offset by faults.
The deposits between Portoč and Bora Cove, as well as the surroundings of the favorite Lokrum swimming spot, the Dead Sea (Mrtvo more), which is connected with the open sea by an underwater passage, are highly tectonically disturbed.
Later geological processes also strongly shaped the current look of the rocks: dolomitization (a process by which dolomite is formed from limestone when magnesium ions replace calcium ions in calcite) and karstification (the gradual dissolution of soluble carbonate layers by chemically aggressive water, which widens tectonic cracks irregularly and leaves an uneven rock surface).
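For reference, the ion exchange behind dolomitization is conventionally written as the following textbook reaction (a standard general form, not a detail taken from this article):

```
2 CaCO3 (calcite) + Mg2+  ->  CaMg(CO3)2 (dolomite) + Ca2+
```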
The island’s coast is constantly exposed to waves, which during occasional stronger storms slowly but surely erode the coastal area, especially in the southern part of the island.
The rocks that now sit at the very seaside were fairly far from the sea only 10-15 thousand years ago. During the glacial periods, the last of which was then at its culmination, so much water was locked up in the vast ice covers that the sea level was at times more than one hundred meters below its present level.
At that time, Lokrum was in fact not even an island: it was only a small hill. The final flooding of the earlier land by the sea took place approximately 6-10 thousand years ago, at the time when the most ancient civilisations were emerging.
Common core teachers, look no further! This 15 page document is the ONE resource you need to teach ALL of the Language 5.3 standards. This resource is part of my 5th grade language notebook bundle.
* L.5.3.a – Expand, Combine, & Reduce Sentences
* L.5.3.b – Varieties of English used in Stories, Dramas, or Poems
Each skill includes four to six pages of work, and follows the gradual release of responsibility method. Check out the general academic & domain-specific words FREE resources in my TpT store to fall in love with the format and content of these lessons.
Each comprehensive lesson includes:
• “I Can” statements for each standard
• Interactive cloze passages explaining each skill
• Direct teaching pages
• Mentor text examples
• Practice pages
• ‘You Try It’ sections, requiring written word and sentence responses
• Authentic writing opportunities
BUNDLE and SAVE with my 5th Grade Interactive Notebook
5th Grade Language Notebook
String interning is a method of storing only one copy of each distinct string value; the stored strings must be immutable.
Applying String.intern() to a set of strings ensures that all strings having the same contents share the same memory. For example, if the name ‘Amy’ appears 100 times, interning ensures that only one ‘Amy’ is actually allocated memory.
This can be very useful in reducing the memory requirements of your program. But be aware that the pool is maintained by the JVM; in older JVMs it lived in the permanent-generation memory pool, which is usually limited in size compared to the heap (since Java 7 it has been kept on the heap). You should therefore not intern strings unless you have many duplicate values.
intern() method: In Java, when we invoke the intern() method, it returns a canonical representation of the string object, drawn from a pool of strings maintained by the String class.
- When the intern() method is invoked, it checks whether a String equal to this String object is already in the pool.
- If it is available, then the string from the pool is returned. Otherwise, this String object is added to the pool and a reference to this String object is returned.
- It follows that for any two strings s and t, s.intern() == t.intern() is true if and only if s.equals(t) is true.
It is advised to use equals(), not ==, to compare two strings. This is because == operator compares memory locations, while equals() method compares the content stored in two objects.
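The code listing this first example refers to appears to have been lost when the page was extracted; below is a plausible reconstruction (the class name, variable names and the "GFG" literal are assumptions) consistent with the printed output and the line-by-line explanation that follows:

```java
public class GFG {
    public static void main(String[] args) {
        String s1 = new String("GFG");   // line-1: heap object + SCP entry
        String s2 = s1.intern();         // line-2: reference to the SCP entry
        String s3 = "GFG";               // line-3: reuses the SCP entry

        System.out.println(s1 == s2);    // false: heap vs pool reference
        System.out.println(s2 == s3);    // true: both point into the pool
        System.out.println(s1.intern() == s3); // true: interning s1 again
    }
}
```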
Output: false true true
Explanation: Whenever we create a String object with new, two objects are created, one in the heap area and one in the String constant pool (SCP), and the String reference always points to the heap object. When line-1 executes, it creates the two objects, with s1 pointing to the heap object. When line-2 executes, s2 refers to the object in the SCP. When line-3 executes, s3 refers to the same SCP object, because the content is already available in the SCP area; there is no need to create a new object.
If the corresponding String constant pool (SCP) object is not available, then the intern() method itself will create the corresponding SCP object.
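Again the listing is missing from the extracted page; here is a plausible reconstruction (names assumed) that matches the explanation below:

```java
public class GFG2 {
    public static void main(String[] args) {
        String s1 = "GFG";
        String s2 = s1.concat("GFG");  // line-2: "GFGGFG", heap object only
        String s3 = s2.intern();       // line-3: puts "GFGGFG" into the SCP
        String s4 = "GFGGFG";          // line-4: reuses the pooled object

        System.out.println(s2 == s3);  // true (on modern JVMs)
        System.out.println(s3 == s4);  // true
    }
}
```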
Explanation: We use the intern() method to get a reference to the corresponding SCP object. In this case, when line-2 executes, s2 holds the value “GFGGFG” and only one (heap) object is created. In line-3, interning s2 places “GFGGFG” in the SCP area, and s3 refers to it. s4 is also resolved from the SCP, so all the comparisons print true.
When viewed over the entire production-to-disposal lifecycle, compact fluorescent lamps (CFLs) are more efficient than other types of light sources. At least that is the claim of researchers at Empa, an interdisciplinary research and services institution for material sciences and technology development located in Switzerland.
Empa’s Technology and Society Laboratory examined several different lighting methods to find out which source of illumination is the most environmentally friendly. However, their investigation notably did not include LEDs. They investigated the classical incandescent bulb, halogen lamps, fluorescent tubes and CFLs. Researchers prepared a life cycle analysis for each kind that considered the raw material and energy consumption of a lamp during its complete life cycle, from the production and usage to final disposal.
Empa scientists say the proportion of the total environmental effects caused by the production of all the lamps was small. Using the Swiss electrical power mix as a basis, the manufacture of an incandescent bulb, for example, was responsible for just 1% of its total environmental effect. By comparison, the production share of an energy saving lamp, at 15% of the total, is significantly higher, because such lamps contain a built-in electronic ballast. Using the European power mix (which includes a significant fraction of electricity generated by coal-fired power stations) as a basis for calculation leads to much lower production shares for incandescent bulbs and energy saving lamps of 0.3% and 4% respectively.
Researchers say the method of disposal of the lamps at the end of their useful life is also not an important factor in the overall ecobalance calculation. In fact, in the case of CFLs the environmental effects fall by as much as 15% when they are recycled instead of being incinerated. But even when they are incinerated in a waste disposal facility, the much-criticized mercury release is quantitatively insignificant, because the overwhelming proportion of mercury in the environment is emitted by fossil-fuel-burning power stations.
Using the European power mix, which is produced mainly by fossil-fuel-powered generation plants, both incandescent lamps and CFLs reach their environmental break-even point quickly, after some 50 hours, because of the significantly higher power consumption of the tungsten filament bulb. With the Swiss power mix this point is reached after 187 hours. But with a typical lifetime of about 10,000 hours for a compact fluorescent energy saving lamp (compared to some 1,000 hours for an incandescent bulb), the purchase of such a lamp pays for itself quickly in an ecological sense, they say.
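The break-even figure can be reproduced with a simple two-term model: total impact = production impact + hours × wattage × impact per kWh. The eco-point values below are illustrative assumptions chosen to land in the reported range; they are not Empa's actual data.

```java
public class BreakEven {
    public static void main(String[] args) {
        double prodIncandescent = 0.5; // production impact, eco-points (assumed)
        double prodCfl = 4.0;          // CFL production impact (assumed)
        double wattsIncandescent = 60; // typical bulb wattages (assumed)
        double wattsCfl = 11;

        // Impact per kWh of electricity, eco-points (assumed):
        double swissMix = 0.4;         // cleaner hydro/nuclear-heavy mix
        double europeanMix = 1.5;      // more coal in the mix

        for (double mix : new double[] {swissMix, europeanMix}) {
            // Hours at which the CFL's total impact drops below the bulb's.
            double hours = (prodCfl - prodIncandescent)
                    / ((wattsIncandescent - wattsCfl) / 1000.0 * mix);
            System.out.printf("Break-even at %.0f hours (mix = %.1f)%n",
                    hours, mix);
        }
    }
}
```

With these assumed numbers the model lands near the 50 and 187 hour figures above, which is the point: the dirtier the electricity, the sooner the efficient lamp wins.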
Considerations when using unedited Wikipedia articles in class:
- They are often unevenly edited. Because different paragraphs, sentences or even words have been added by people from all walks of life and of different nationalities, the articles are not always good examples of correct English usage, and even when edited by advanced English speakers they may contain spelling or grammar errors. This may, of course, make for useful authentic material when teaching higher-level students or ESP courses for translators, with activities such as copyediting and/or correcting mistakes, etc.
- Even when they are well written, they are encyclopedia articles. While this is acceptable written English, it might not be a register that you would want your students to emulate under "normal" circumstances.
- Simple Wikipedia (http://simple.wikipedia.org) is written in simple English, similar to Plain English, which may be useful in some classes.
- Create a topical class using Wikipedia-sourced facts in a topical lesson.
- Wikipedia "Main Page" |
All the glue that has held the Arctic together for millennia is being dissolved by climate change, right down to the ground beneath millions of people’s feet.
Rising temperatures are melting frozen soil at an alarming clip, with the changes visible before our very eyes today. But the future promises an even more dramatic shift, according to a new study published Tuesday in Nature Communications. As the frozen ground turns to muck, millions of people could be left without homes or the infrastructure that makes living in one of the harshest environments on Earth possible. What’s more disconcerting is that even if the world slashes carbon emissions dramatically, these changes are essentially locked in.
The new findings offer what the authors call “an unprecedentedly high spatial resolution” look at how the melt of frozen soil, known as permafrost, will impact infrastructure. As permafrost melts, it essentially turns previously firm ground into a slurry of soil and water. Communities in the Arctic are already coping with the impacts the 1 degree Celsius (1.8 degrees Fahrenheit) of warming since the industrial revolution has wrought. Infrastructure is collapsing or at risk of it as are traditional ways of life.
To see what the future holds for the permafrost region, the authors overlaid data on where permafrost deposits exist in the Arctic with data on infrastructure and settlements. They then looked at a variety of climate scenarios to see how much of the surface layer of permafrost is likely to melt by mid-century, all at a resolution of 1 kilometer.
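Conceptually, the overlay is a per-cell join of a hazard layer and an infrastructure layer. Here is a toy version on a tiny grid; the hazard flags and asset counts are invented for illustration, and the real study works with 1 km raster data rather than hand-filled arrays.

```java
public class PermafrostOverlay {
    public static void main(String[] args) {
        // Each cell: does it have high thaw potential, and how many assets?
        boolean[] highThaw = {true, false, true, true, false};
        int[] assets      = {12,   3,     0,    7,    5};

        int atRisk = 0, total = 0;
        for (int i = 0; i < assets.length; i++) {
            total += assets[i];
            if (highThaw[i]) atRisk += assets[i]; // assets on hazardous ground
        }
        System.out.printf("%d of %d assets (%.0f%%) sit on high-thaw cells%n",
                atRisk, total, 100.0 * atRisk / total);
    }
}
```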
The findings show that a whopping 70 percent of infrastructure in the permafrost region—the equivalent of one-third of all Arctic infrastructure—sits on land that has a high potential for permafrost thaw by mid-century. That includes railroads, homes, and ironically oil and gas infrastructure that’s responsible for shipping carbon-emitting fossil fuels to market. Nearly 4 million people call the high-risk regions home. If the infrastructure isn’t adapted to the melting landscape, it could force these people to migrate to areas with more solid ground.
The worst impacts will be in Russia and northern Europe, places that are particularly dependent on the permafrost region. David Titley, the head of Penn State’s Center for Solutions to Weather and Climate Risk, told Earther in an email that roughly 20 percent of Russia’s population and its GDP comes from north of the Arctic Circle “so they will have some big bills coming up.”
What’s most harrowing about the study is that this is basically all due to warming already locked into the climate system. Because it takes the atmosphere so long to reach equilibrium with all the new carbon dioxide humans have added, the planet would continue to warm for decades even if all carbon emissions stopped tomorrow. These findings show humans have already disrupted the Arctic and point to the need to adapt to those changes ASAP.
At the same time, the study shows that reducing emissions now could provide meaningful benefits by century’s end, noting that meeting the Paris Agreement target of limiting global warming to 2 degrees Celsius (3.6 degrees Fahrenheit) “could stabilize risks to infrastructure after mid-century.”
“[This is] yet another study confirming the overwhelming evidence that our rapidly changing climate is impacting every facet of the globe, and every aspect of human civilization,” Titley said. “I like to say this is part of the carbon tax we are being forced to pay, whether you ‘believe’ in climate change or not. And p.s., the ice doesn’t actually care if you accept the science or think this is all a hoax — it just melts.”
Unit Two: Studying Africa through the Social Studies
Module Nine: African Economies
Case Study of Diversification, Specialization and Trade in South-Western Nigeria
Until approximately 500 years ago, the vast majority of the people in all regions of the world were directly involved in the production of food. Since the 15th century CE, innovations in agricultural production have gradually led to diversification and specialization within world economies. This process eventually led to industrialization and widespread urbanization.
Historians who study the history of economic change and development report that the process of economic diversification and specialization occurred at approximately the same time in a number of African societies as it did in Europe.
An example of economic specialization and diversification comes from the Yoruba and Edo societies of South-Western Nigeria (see map). We will return to a more detailed history of the Yoruba and other West African kingdoms in a later lesson. From Module Seven: African History, you will remember that as early as the 15th Century C.E. Yoruba-speaking peoples began to develop centralized political kingdoms along the boundary between the savannah and forest regions of Nigeria. The most famous of these early kingdoms were Benin and Oyo.
The development of these states was facilitated by changes in economic practice, particularly in agriculture. Agricultural productivity was assisted by new skills, new crops, and the introduction of stronger metal tools, which allowed for clearing and planting of larger areas of land. The increase in food production provided the opportunity for people to become more engaged in non-agricultural economic, social, and cultural activities.
The production of food surpluses helped to facilitate the development of economic specialization. Some non-agricultural activities, such as metal smithing, that had previously been limited to times of the year when people were less busy with farming could now be pursued on a full-time basis. With more spare time, people were able to concentrate on non-agricultural activities and greatly improve their skills. In the Yoruba societies, a number of specialized occupations developed, two of the most important being metal smithing and textile production.
Highly skilled metal smiths developed stronger and more efficient agricultural tools that further increased agricultural productivity. In addition, they also produced improved weapons, a factor which became important in the expansion of Yoruba kingdoms.
Yoruba textile production became an important economic and trade activity beginning as early as the 17th Century. Cotton was imported from the savannah regions to the north of the Yoruba areas. There were a number of specialized activities associated with this textile industry: spinning thread, dyeing, and weaving cloth. By the 19th Century, the Yoruba societies, along with other West African societies, had moved beyond self-sufficiency in textiles and were engaged in an active trade with other areas of West Africa.
Economic diversification, specialization, and trade are among the most important factors that led to the development of strong political kingdoms among the Yoruba. Module Ten: The Politics of Africa will look at the development of pre-colonial kingdoms in West Africa. For this lesson, it is important to draw the connection between economic development and political development, as illustrated by the Yoruba kingdoms of Benin and Oyo, and, in the 18th Century, Ibadan, Ife, and Abeokuta.
Diversification, specialization, and trade were essential to the growth of these kingdoms in four major ways.
- First, surplus production of food allowed for political specialization. That is, elders and other "traditional" leaders were not obligated to engage in subsistence farming. This allowed time for "statecraft," or the development of more specialized and complex political organization.
- Second, specialization increased the revenue (money collected by the state) that rulers were able to raise through various forms of taxation. Much of the taxation came in the form of tribute, gifts that citizens were required to give their kings. Tribute was often given in the form of food and specialized goods, such as textiles and metal tools and weapons. Yoruba rulers, much as rulers elsewhere in the world, used the revenue and tributes to increase the power of the state. For example, with tribute of food and weapons, Yoruba rulers were able to feed and equip armies that were used to expand the territory controlled by the rulers.
- Third, Yoruba rulers were able to control external trade between their kingdom and other peoples. Traders were required to give a small portion of the goods that they traded to the state. The revenues generated for the royal treasury through the taxation of trade greatly enhanced the power of the Yoruba Kingdoms.
- Fourth, agricultural surplus and specialization freed up the labor of some young men who could be recruited and trained as "professional" soldiers. Yoruba rulers used soldiers and revenues generated from economic development and trade to expand the borders of their kingdoms, further strengthening the political system.
Many societies and cultures produce wonderful art. The Yoruba cultures are no exception. As will be detailed in Module Twelve: African Art, Yoruba artisans have produced very sophisticated clay, wood, textile, and metal artifacts for many centuries. However, economic diversification and specialization provided the opportunity for the development of professional artisans.
Research and Writing Activity
Among the many fine examples of Yoruba art are beautiful Benin Bronze statues. The Yoruba artists who produced these statues used a special lost wax technique to produce their art. In the 19th Century, European traders and colonial officials were so impressed with the Benin Bronzes that they collected many of them and shipped them to Europe to be placed in museums. Over the past century, Benin Bronzes have greatly increased in value.
Using the World Wide Web, find out as much as you can about the Benin Bronzes and write a one-page report on your findings. Your report might include:
- the time period in which Benin Bronzes were made;
- information on the artists;
- information on the lost-wax method of sculpting;
- information on the use of Benin Bronze statues in Yoruba society;
- information on the export of the Bronzes to Europe;
- information on the movement to return Bronzes from Museums in Europe to Benin and Nigeria.
If you like to draw, use pictures of Benin Bronzes posted on the web to make your own drawing of a Benin Bronze.
Once your teacher has looked at your work, and you have made a final copy of your report, place it in your Exploring Africa Web Journal.
Here is a website to get you started.
If you don't have access to a computer to complete this assignment, you should go to your school library and use the encyclopedias and other resources available there to find information on the Benin Bronzes.
Go to Activity Four or
- Activity One: Engage (Wants and Needs)
- Activity Two: Explore (Food Production)
- Activity Three: Explore2 (Yoruba Case Study)
- Activity Four: Explain (Economics of Colonialism)
- Activity Five: Explain2 (Transportation)
- Activity Six: Expand (Case Study: Zambia/Northern Rhodesia)
- Activity Seven: Expand2 (Case Study: Mali/Soudan)
- Activity Eight: Expand3 (Post-Colonial Economies)
- Activity Nine: Expand4 (Globalization and African Economies)
- Activity Ten: Summary
Thank you to everyone who attended the webinar ‘Strategies for EMI/CLIL Success for Primary Learners’! During the webinar I defined EMI and CLIL and addressed a few strategies for applying the CLIL approach with primary learners.
EMI – English as a Medium of Instruction
All information communicated to the learner in the classroom is in English, which is not the learner's native language. This includes subject content, student materials and resources (textbooks and/or coursebooks), and lecture instructions.
CLIL – Content and Language Integrated Learning
CLIL refers to situations where subjects, or parts of subjects, are taught through a foreign language with dual-focused aims, namely the learning of content and the simultaneous learning of a foreign language.
[D. Marsh, 1994]
Strategy Focus for Primary Learners with CLIL – Use of Visuals and its Benefits
Visual aids are tools and instruments teachers use to encourage student learning by making the process easier, simpler, and more interesting for the learner. Using visual aids supports information acquisition by allowing learners to digest and comprehend knowledge more easily.
- Examples of visual aids include (but are not limited to): pictures, models, charts, maps, videos, slides, diagrams, flashcards, and classroom props.
Thank you all for your interesting questions! Here I will do my best to respond to a couple of those I could not answer during the webinar.
What challenges do students in EMI [classes] face?
A student’s stage in education (i.e. primary, secondary, etc.) results in different challenges. Overall, there are usually two main factors to consider in an EMI learning environment: first, the student’s native tongue is not English, and second, the acquisition of the subject content being taught. Since the learner is dealing with new information in a relatively new subject (challenges that are difficult on their own), a strong command of English becomes a prerequisite.
Without that language ability, challenges can include difficulty comprehending subject concepts or themes, struggles communicating with the teacher or classroom peers, and even trouble using materials such as textbooks, workbooks, or class resources.
I am not stating that a student must be 100% fluent in English for EMI to be successful, but since EMI classrooms do not focus solely on English language learning, an appropriate level of English is needed to help learners reach their goals.
Does CLIL overlap with the PPP approach?
I believe that CLIL and the PPP method can overlap. To clarify, the PPP methodology is a style of English teaching that follows the 3Ps: presentation, practice, production. This method deals with a set process for delivering content to an L2 student, then provides support for language usage and application. Though CLIL does not encompass or represent all learning styles, it provides a more flexible set of principles and guidelines. To paraphrase our previous definition, CLIL is established as a learning environment that satisfies the two goals of learning content and learning a foreign language equally. I like to think of the PPP method as a language delivery system. If an English teacher is teaching her L2 students science and writing skills, the PPP method can be used just as effectively as by a teacher teaching grammar to an L1 classroom.
Many of the questions were regarding the characteristics of a CLIL classroom/lesson. For that, I would like to recommend a short article for additional information.
The British Council has an article by Steve Darn that addresses CLIL’s framework and expectation in the classroom with supplemental resources: https://www.teachingenglish.org.uk/article/clil-a-lesson-framework. I also would like to recommend some other resources that I have found very helpful as well for CLIL and EMI in the classroom:
- Ball, P., Kelly, K., Clegg, J. (2015). Putting CLIL into Practice. Great Clarendon Street, Oxford: Oxford University Press.
- Deller, S., Price, C. (2007). Teaching Other Subjects Through English. Great Clarendon Street, Oxford: Oxford University Press.
Missed my webinar? Click the link below to watch the recording!
Interested in EMI and CLIL? Get practical recommendations from our experts with our position paper. Click here to download.
Joon Lee has been involved in the EFL and ESL educational community in the positions of Academic Director, Content and Curriculum Developer, and Academic Advisor. He has been fortunate to pursue his interests in developmental learning both in and out of the classroom. At OUP he is part of the Asia Educational Services team and shares his experiences providing teacher training and professional development workshops. He holds great respect for educators and administrators who show passion towards nurturing a learner’s path to success.
In a wake-up call to the American public, a Scientific Statement from the American Heart Association (AHA) states that most US children don’t eat a healthy diet, too many have elevated cholesterol and fasting blood sugar levels, and they exercise too little.
Published in the journal Circulation, the AHA states that many US children do not meet what they call the seven basic standards of good health. Those include:
- manage blood pressure
- manage cholesterol
- manage blood sugar
- healthy diet
- healthy weight
- no smoking
- physical activity
The statement’s primary author, Dr. Julia Steinberger, the director of cardiology at the University of Minnesota in Minneapolis, told Reuters Health, “The primary reason kids fell out of cardiovascular health is diet and physical activity,” and that approximately 9 out of 10 US children don’t have healthy diets, mainly “because they’re consuming sugary food and drinks.”
US Children Lacking Healthy Lifestyle Habits
Reuters reported that data from the AHA statement revealed that only about 50 percent of young children between ages 6 and 11 get the minimum recommended 60 minutes of physical activity per day. Teens between 16 and 19 are even less active.
The statement also shared that about a third of US teens say they have at least tried cigarettes. The new AHA statement indicates that children should never try or smoke cigarettes.
Regarding obesity, the statement estimated that 10 to 27 percent of US children and teens are obese. One of every three children and teens has high cholesterol. Almost a quarter of girls have high blood sugar, and more than a third of all boys between ages 12 and 19 have high blood sugar.
Good news came regarding blood pressure. About 90 percent of US children and teens have a healthy blood pressure.
According to the AHA guidelines, children should (a simple check of these thresholds is sketched after this list):
- Have a BMI below the 85th percentile
- Get 60 minutes of moderate to vigorous physical activity each day at a minimum
- Eat a healthy diet
- Maintain a total cholesterol lower than 170 mg/dL
- Maintain a blood pressure below the 90th percentile
- Maintain a fasting blood sugar below 100 mg/dL
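As a simple illustration, the sketch below (my own, not from the AHA statement) checks a set of numbers against the quantitative criteria above. The "healthy diet" criterion is qualitative and omitted, and the BMI and blood-pressure percentiles are assumed to come pre-computed from pediatric growth charts.

```python
# Check a child's metrics against the AHA criteria listed above.
# Percentile values (BMI, blood pressure) must come from pediatric
# growth charts; they are taken here as pre-computed inputs.

def unmet_aha_criteria(bmi_pct, activity_min, cholesterol, bp_pct, glucose):
    issues = []
    if bmi_pct >= 85:
        issues.append("BMI at or above the 85th percentile")
    if activity_min < 60:
        issues.append("less than 60 minutes of daily activity")
    if cholesterol >= 170:
        issues.append("total cholesterol of 170 mg/dL or more")
    if bp_pct >= 90:
        issues.append("blood pressure at or above the 90th percentile")
    if glucose >= 100:
        issues.append("fasting blood sugar of 100 mg/dL or more")
    return issues

# Hypothetical example child:
print(unmet_aha_criteria(bmi_pct=80, activity_min=45,
                         cholesterol=165, bp_pct=70, glucose=102))
# ['less than 60 minutes of daily activity',
#  'fasting blood sugar of 100 mg/dL or more']
```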
How to Help Children Improve Their Health
Steinberger also told Reuters that to keep children healthy, entire families and schools need to be involved.
The AHA shares some tips on how to help children develop healthy lifestyle habits:
- Be a role model. If children see their parents being active and eating healthy, they will be more likely to adopt these behaviors.
- Plan ways to move together as a family. Taking family walks or bike rides or playing outdoor games ensures everyone reaps the health benefits. Also, take time to find out what unique physical activities your children enjoy and encourage those activities.
- Limit screen time to 2 hours a day. When at a screen, most people are sitting for long periods of time and this goes against heart health.
- Go slow. Take small steps in changing your lifestyle habits so you and your children can ease into new behaviors in a positive and comfortable way.
- Get involved. Work with schools to serve better food choices. Have your healthcare providers check your child’s BMI, blood pressure, and cholesterol. “Make your voice heard.”
Further reading on healthy lifestyle habits:
- These 8 Careers are the Worst for Your Weight & Heart Health
- VIDEO: Why is Exercise So Hard?
- Acknowledge Your Progress So Far!
- 7 Diabetes Habits of People with A1Cs Under 7.0
- Sugar or Cigarettes: Which is Worse For You?
Photo Credit: FotoshopTofs and Steve Morissette on Pixabay
To draw grids using linear perspective, it is best to start with a square. The diagram above shows a perspective view of a room drawn in one-point perspective. After you have established a horizon line and a vanishing point, follow this step-by-step procedure to create a gridded interior.
- Begin by creating a square plane in perspective. Remember that when drawing objects in perspective, trust your observation. You will find that when you look at a square in perspective it appears to be a very flat form in space.
- Make marks at equal increments across the bottom edge of the rear wall rectangle. You can use a ruler to mark off equal increments or any regular measurement that you choose (my diagram uses 50 points for each segment).
- Draw a line from the vanishing point through each point until it reaches the edges of your drawing area. Notice that the points where the lines meet the front edge of the drawing area are further apart than the points at the bottom of the rectangle edge.
- Draw a horizontal line across these lines indicating a row of tiles along the back edge of the square space that we are subdividing. This can be done through observation or by using the method used in creating a square in perspective.
- Using the back right corner of the space as a starting point, draw a line from the back right corner through the front left corner of the small square and continue the line to the edge of the paper.
- Draw a horizontal line at each point that the diagonal crosses the radiating lines that subdivide the large square. This will create a grid pattern.
This grid system can be carried into walls and ceiling areas to create grids on all planes. This may make it possible to systematically place objects in a space by using the grid on all planes. A small numerical sketch of the floor-grid construction follows.
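Below is a minimal numerical sketch of the construction (my own addition, not part of the original lesson). The vanishing point, mark spacing, and first-row position are arbitrary example values, and the "small square" of step 5 is taken to be the tile in the back-right corner of the space. The code reproduces steps 2 through 6: orthogonals radiate from the vanishing point through the equally spaced marks, and the corner-tile diagonal locates each successive horizontal tile line.

```python
# One-point-perspective floor grid, following the construction in the text:
# orthogonals from the vanishing point through equal marks on the rear edge,
# then the corner-tile diagonal locates each successive horizontal tile line.

def intersect(p1, p2, p3, p4):
    """Intersection point of the (infinite) lines p1-p2 and p3-p4."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

vp = (300.0, 200.0)      # vanishing point on the horizon line
rear_y = 300.0           # bottom edge of the rear wall rectangle
marks = [(100.0 + 50.0 * i, rear_y) for i in range(5)]   # step 2: 50-pt marks

# Step 4: the first transversal (back row of tiles), placed by observation.
row1_y = 320.0

# Step 5: diagonal from the back-right corner of the space through the
# front-left corner of the small corner tile, extended across the floor.
back_right = marks[-1]
corner_tile_front_left = intersect(vp, marks[-2],
                                   (0.0, row1_y), (1.0, row1_y))

# Step 6: each crossing of that diagonal with an orthogonal gives the
# screen height of the next horizontal tile line.
row_ys = [row1_y]
for m in reversed(marks[:-2]):
    _, y = intersect(back_right, corner_tile_front_left, vp, m)
    row_ys.append(y)
print([round(y, 1) for y in row_ys])   # [320.0, 350.0, 400.0, 500.0]
```

Running it prints the screen heights of the tile rows; notice how the spacing between rows grows toward the viewer, exactly as the drawing method predicts.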
Food energy is chemical energy that animals derive from their food and molecular oxygen through the process of cellular respiration. Humans and other animals need a minimum intake of food energy to sustain their metabolism and to drive their muscles.
Organisms derive food energy from carbohydrates, fats and proteins as well as from organic acids, polyols, and ethanol present in the diet. Some diet components that provide little or no food energy, such as water, minerals, vitamins, cholesterol, and fiber, may still be necessary to health and survival for other reasons.
Using the International System of Units, researchers measure energy in joules (J) or in its multiples; the kilojoule (kJ) is most often used for food-related quantities. An older metric system unit of energy, still widely used in food-related contexts, is the "food calorie" or kilocalorie (kcal or Cal), equal to 4.184 kilojoules.
Fats and ethanol have the greatest amount of food energy per mass, 37 and 29 kJ/g (8.8 and 6.9 kcal/g), respectively. Proteins and most carbohydrates have about 17 kJ/g (4.1 kcal/g).
Conventional food energy is based on heats of combustion in a bomb calorimeter, with corrections that take into consideration the efficiency of digestion and absorption and the production of urine.
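To illustrate these factors, here is a minimal sketch (my own addition) that totals the energy of a food item from its macronutrient composition, using the per-gram values quoted above; the example composition is invented.

```python
# Food energy from macronutrient composition, using the per-gram values
# quoted above: fat 37 kJ/g, ethanol 29 kJ/g, protein/carbohydrate 17 kJ/g.

KJ_PER_GRAM = {"fat": 37.0, "ethanol": 29.0, "protein": 17.0, "carbohydrate": 17.0}
KJ_PER_KCAL = 4.184  # one food calorie (kcal) equals 4.184 kilojoules

def food_energy(grams):
    """Return (kJ, kcal) for a composition like {'fat': 10, 'carbohydrate': 30}."""
    kj = sum(KJ_PER_GRAM[nutrient] * g for nutrient, g in grams.items())
    return kj, kj / KJ_PER_KCAL

# Hypothetical snack: 10 g fat, 30 g carbohydrate, 5 g protein.
kj, kcal = food_energy({"fat": 10, "carbohydrate": 30, "protein": 5})
print(f"{kj:.0f} kJ  ({kcal:.0f} kcal)")   # 965 kJ (~231 kcal)
```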
A digital synthesizer is a synthesizer that uses digital signal processing (DSP) techniques to make musical sounds; like any electronic keyboard, it ultimately delivers its music as sound waves. The very earliest digital synthesis experiments were made with general-purpose computers, as part of academic research into sound generation. Early commercial digital synthesizers used simple hard-wired digital circuitry to implement techniques such as additive synthesis and FM synthesis, becoming commercially available in the early 1980s. Other techniques, such as wavetable synthesis and physical modeling, only became possible with the advent of high-speed microprocessor and digital signal processing technology. One of the earliest commercial digital synthesizers was the Synclavier.
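For a taste of what such digital techniques involve, here is a minimal software sketch of two-operator FM synthesis, the family of techniques used by instruments like the DX7 mentioned below. This is an illustration only, not any particular instrument's algorithm; the frequencies, modulation index, and envelope are arbitrary choices.

```python
# Minimal two-operator FM synthesis: a modulator sine wave varies the phase
# of a carrier sine wave, producing a bright, bell-like spectrum.
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION = 2.0          # seconds
CARRIER_HZ = 440.0      # carrier frequency (A4)
MOD_HZ = 220.0          # modulator frequency
INDEX = 3.0             # modulation index; higher = brighter timbre

samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE
    env = 1.0 - t / DURATION                      # simple linear decay envelope
    mod = INDEX * math.sin(2 * math.pi * MOD_HZ * t)
    y = env * math.sin(2 * math.pi * CARRIER_HZ * t + mod)
    samples.append(int(32767 * y))

with wave.open("fm_tone.wav", "wb") as f:
    f.setnchannels(1)                             # mono
    f.setsampwidth(2)                             # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```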
The Yamaha DX7 was the first commercially successful all-digital synthesizer. It became indispensable to many music artists of the 1980s.
Some digital synthesizers now exist in the form of "softsynth" software that synthesizes sound using conventional PC hardware, though they require careful programming and a fast CPU to match the latency response of their dedicated equivalents. To reduce latency, some professional sound card manufacturers have developed specialized digital signal processing hardware. Dedicated digital synthesizers frequently have the advantage of onboard accessibility, with switchable front panel controls for browsing their functions, whereas software synthesizers offer additional functionality at the cost of a mouse-driven control system.
With a focus on performance-oriented keyboards and digital computer technology, manufacturers of commercial electronic instruments created some of the earliest digital synthesizers for studio and experimental use, with computers handling the built-in sound synthesis algorithms.
Analog vs. digital
The main difference is that a digital synthesizer uses digital processors while analog synthesizers use analog circuitry. A digital synthesizer is essentially a computer with software used as an interface. An analog synthesizer is made up of sound-generating circuitry and modulators. Another difference is price. Digital synthesizers can be run from a MIDI controller, so the equipment can be much cheaper, depending on what quality of sound one is looking for; the price of the computer, its CPU speed, and the software interface also determine the money spent. Analog synthesizers contain their own controls on the circuit board, making them harder to learn but more capable of creating their own unique sound instead of using a preset from software.
Lesson Two: Primary and Secondary Sources
A primary source is evidence of history. Whether it is an object, text, or recording, a primary source was created at the time a particular event occurred or was created by someone with firsthand knowledge of an event.
A secondary source synthesizes or analyzes primary source material. Typically, researchers produce secondary sources after an historical event or era. They discuss or interpret evidence found in primary sources. Examples are books, articles, and documentaries.
Using materials from the Helen Keller Archive, students learn to identify and use primary sources in their research and historical writing. Students differentiate between primary and secondary sources and critically examine the authorship, purpose, and historical context of multiple primary sources.
- Define and differentiate between primary and secondary sources.
- Examine and analyze the contents of primary sources.
- What is a primary source?
- What is a secondary source?
- Where do I find primary sources?
- How do I read a primary source?
- Internet connection
- Projector or Smartboard (if available)
- Worksheets (provided, print for students)
Part 1: What is a Primary Source?
Ask and Discuss:
- Does anyone keep a diary? Write texts? Take photos? Create art?
- If a historian found your diary/emails/photos 100 years from now, what would they learn about your life? Family? School? Town?
- For example, in an archaeological dig, researchers might uncover your local landfill, including the empty toothpaste tube you threw out last week. Looking through an archive, a researcher might find my gradebook from this very year…including your last test score.
- These everyday products of your life are potentially primary sources. Historians use items like these from ten, a hundred, a thousand years ago to learn about the past.
Explain and Connect:
A Primary Source …
- Was created in the past, specifically at the time being researched.
- But just being “old” does not make something a primary source.
- Has firsthand knowledge or other direct evidence of the era or subject under research.
- Has provenance. Provenance means that the time and/or place of the production of a document or artifact can be reasonably believed to be true and provable.
- Needs to be evaluated based on its creators (who made it) and historical context (when and how it exists).
- Is found in an archive, museum, library/bookstore, or maybe in your backpack, right now.
- Define archive for students if necessary. See Definitions page.
- Explain that if your texts and videos are preserved, for example in an archive, library, or museum, scholars in the future may use your work to write a history of the early 21st century.
- Look at your last text conversation/email thread/search history. What could it show a historian about life in the 21st century?
Compare sources side-by-side, using worksheet at the end of this lesson plan.
- Read sources as a class.
- What is similar about these two sources? Different?
- Both of these documents are about Helen Keller and her advocacy. One was written 100+ years later by a historian, and one was written by Helen herself.
- The letter is a primary source.
- The biography is a secondary source.
A secondary source…
- Was written after the time under research.
- Brings together primary source material to tell a larger story.
- Some sources can be either a primary or a secondary source, depending on how they are used.
- For example: if someone in the 19th century is writing about the 17th century, that source is a secondary source for the 17th century and a primary source for the 19th century.
- Is found in classrooms, libraries/bookstores, movies, or new media.
Brainstorm Examples of Primary and Secondary Sources
Optional: Which of the following are primary sources? Secondary? Both?
- Your history textbook
- A diary written in 1940
- Leonardo’s The Last Supper
- A documentary on the life of Helen Keller
- Tax records
- A photograph of the attack on Pearl Harbor during World War II
- A musical about American history
- A history of the Roman Empire written in 1776
- Yesterday’s newspaper
Part 2: How Do I Use Primary Sources?
- Pull up the digital Helen Keller Archive.
- Explain that the class will be using primary sources found in the HKA, which collects documents and objects by and about Helen Keller.
- Navigate to the primary source used in the earlier exercise:
- Detail that this letter is preserved in material/physical format at the Helen Keller Archive facility.
Let’s Find Out More About this Primary Source.
- Explain that a digital archive includes metadata/source information that will allow researchers to analyze and contextualize the source.
- Highlight the Metadata section and explain the information available in metadata, including description, subject, date, original type, person to/from, place.
- This metadata tells us the 5W1Hs of the primary source: who, what, when, where, why, and how (a hypothetical example record is sketched below).
- Highlight the transcript section and explain that archivists and volunteers transcribe any text found in the document.
Transcription is important because:
- It helps us read unfamiliar handwriting or faded letters.
- It helps people with visual impairment use text-to-speech technology to read documents.
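To give students a concrete picture, here is a small illustrative metadata record (entirely invented, not an actual Helen Keller Archive entry) using the fields discussed above.

```python
# A hypothetical digital-archive metadata record using the fields
# discussed above (description, subject, date, type, to/from, place).
# This is an invented example, not a real Helen Keller Archive entry.
record = {
    "description": "Typewritten letter regarding employment of blind workers",
    "subject": ["Helen Keller", "advocacy", "employment"],
    "date": "1944-03-15",                # when (hypothetical)
    "original_type": "correspondence",   # what
    "person_from": "Helen Keller",       # who
    "person_to": "Unknown recipient",
    "place": "New York, NY",             # where
    "transcript": "Dear Sir: I write on behalf of ...",  # invented text
}

# Some of the 5W1Hs a researcher can answer from the metadata alone:
for field in ("person_from", "original_type", "date", "place"):
    print(f"{field}: {record[field]}")
```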
Ask and Discuss
- To analyze a primary source, start with the basics: Who, what, when, where, why, and how.
- Who wrote this letter? When?
- To what is the letter responding?
- What does the author say about the topic under consideration? What alternatives do they propose?
- What names or terms in this letter are unfamiliar? What additional information would you need to more fully understand this letter?
- Based on this letter, what can we infer about the economic position of blind Americans in the 1940s?
- Let’s refer back to the secondary source.
- Where does the author of the secondary source refer to the letter?
- How does she use the letter to prove a point? What is she trying to prove?
- What additional information does she provide to contextualize this letter?
– OR –
Students complete “Spotlight on Helen Keller” individually or in groups.
This Lesson Meets Curriculum Standards:
Cite specific textual evidence to support analysis of primary and secondary sources.
Determine the central ideas or information of a primary or secondary source; provide an accurate summary of the source distinct from prior knowledge or opinions.
Identify aspects of a text that reveal an author’s point of view or purpose (e.g., loaded language, inclusion or avoidance of particular facts).
Gather relevant information from multiple print and digital sources, using search terms effectively; assess the credibility and accuracy of each source; and quote or paraphrase the data and conclusions of others while avoiding plagiarism and following a standard format for citation.
The Quran is the most important book in Islam. It contains the teachings and story of the chief prophet of Islam, Muhammad. The Quran, whose name means "recitation" in Arabic, is the sacred text of Muslims and the highest authority in both religious and legal matters.
Muslims believe the Quran to be a flawless record of the angel Gabriel's revelations to Muhammad from 610 until his death in 632 AD. It is also believed to be a perfect copy of a heavenly Quran that has existed eternally.
The Quran's name is derived from Gabriel's initial command to Muhammad to "Recite!" Recitation is a fundamental concept associated with the Quran. The first followers of the prophet memorized his recitation in order to recite it to others, following an established Arabic method for preserving poetry.
The revelation was put in writing shortly after Muhammad's death to preserve the content from corruption, but it is still regarded as most authentic when recited aloud. Professional reciters of the Quran (qurra') are held in very high esteem, and have often been influential in deciding matters of doctrine or policy.
Contents of the Quran
The Quran is roughly the length of the Christian New Testament. It is divided into 114 surahs (chapters) of widely varying length, which, with the exception of the opening surah (fatihah), are generally arranged from longest to shortest. As the shortest chapters seem to date from the earlier period of Muhammad's revelation, this arrangement results in a reverse chronological order.
Each surah has a heading, which usually incorporates the following elements:
- A title (e.g. "The Bee," "The Cow") taken from a prominent word in the Surah, but one that does not usually represent its overall contents.
- The basmalah, a formula prayer (e.g. "In the name of God the Merciful, the Compassionate")
- An indication as to whether it was received at Mecca or Medina
- The number of verses in the Surah
- In 29 of the Surahs, fawatih, or "detached letters" of unclear significance. They may be abbreviations, initials of owners of early manuscripts, or have some esoteric meaning.
The verses (ayat, "signs") also vary in length, with the shortest usually found in the earlier surahs. In these verses, the form closely resembles the rhymed prose of the seers (kahins) of Muhammad's time. The later verses are more detailed and less poetic.
Most of the Quran is written in the first person plural, with Allah as the speaker. When Muhammad himself speaks, his words are introduced by "Say," to clarify he is being commanded by Allah to speak.
The vocabulary of the Qur'an is overwhelmingly Arabic, but some terms are borrowed from Hebrew and Syriac, cultures with which Muhammad was familiar. Such words include injil (gospel), taurat (law, Torah), Iblis (Devil), amana (to believe) and salat (prayer).
Go here for an index to the chapters of the Quran.
- Helmer Ringgren, "Qur'an." Encyclopaedia Britannica, Encyclopaedia Britannica Premium Service, 2004.
- "Qur'an." Merriam-Webster's Encyclopedia of World Religions.
- "Qur'an." The Oxford Concise Dictionary of World Religions. |
Avena’s website has links to new research and articles about the effects of sugar on the brain and behavior, and how this can influence body weight. Want to learn more about the adverse effects of sugar? Read Food Junkie, Dr. Avena’s blog on Psychology Today. One post that is particularly relevant: “Sugar Cravings: How sugar cravings sabotage your health, hormone balance & weight loss,” by Dr. Avena. Related chemistry lessons: “How Do Cold Packs Work? A Chemistry Lesson” and “The Chemistry of Cookies” by Stephanie Warren.
Heat and Temperature

Knowing the difference between heat and temperature is important; it can lead to a clearer understanding of energy. Picture an ice cube melting in a small dish: the ice, the water, the dish, and the surrounding air all experience heat exchanges and temperature changes. In this section we will define both heat and temperature and reach an understanding of how they are related, but not identical, ideas; the sketch below puts rough numbers on the ice-cube example. This page covers some introductory material about heat and temperature. (Related units: Fluids, Part 1; and Chapter 1: Matter (Solids, Liquids, and Gases), which introduces the idea that matter is composed of atoms and molecules that are attracted to each other and in constant motion.)
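The distinction is easy to quantify. Below is a minimal sketch (my own illustration, not part of the original lesson) that estimates the heat absorbed by a melting ice cube. The specific heats and the latent heat of fusion are standard textbook values; the cube's mass and temperatures are made-up inputs.

```python
# Energy bookkeeping for a melting ice cube: temperature change vs. phase change.
# Standard textbook constants; the 20 g cube and temperatures are example inputs.

C_ICE = 2.09      # specific heat of ice, J/(g*K)
C_WATER = 4.184   # specific heat of liquid water, J/(g*K)
L_FUSION = 334.0  # latent heat of fusion of ice, J/g

def heat_to_melt(mass_g, t_ice_c, t_final_c):
    """Total heat (J) to warm ice to 0 C, melt it, then warm the water."""
    warm_ice = mass_g * C_ICE * (0.0 - t_ice_c)       # raise ice to melting point
    melt = mass_g * L_FUSION                          # phase change at constant 0 C
    warm_water = mass_g * C_WATER * (t_final_c - 0.0) # raise meltwater to room temp
    return warm_ice + melt + warm_water

q = heat_to_melt(mass_g=20.0, t_ice_c=-10.0, t_final_c=20.0)
print(f"Heat absorbed: {q:.0f} J")
# During the melting step the cube absorbs heat while its temperature stays
# fixed at 0 C -- heat flow and temperature change are not the same thing.
```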
Discuss how much water the ocean contains. Display the MapMaker Interactive and make sure students can all identify which areas are land and which are ocean. Ask: Does the Earth have more land, or more ocean? Students should notice that there is more ocean than land. Explain that the ocean covers almost three-quarters of Earth’s surface and is very deep.

Monitoring the Earth from Space with SeaWiFS: Why study the oceans from space?

Before we jump into the main subject of this presentation, it might be helpful to understand how we have learned what we know about the oceans in the past and how we are doing this today. Being an oceanographer and working for NASA has more often than not seriously confused people. My standard reply when asked what an oceanographer is doing at NASA is to say that I study the oceans from space. At that point, people generally shake their heads, look at me in a very puzzled way, and ask, "Why do you have to launch a satellite to study the oceans? Wouldn't it just be easier to go out in a boat and look at it?" Most of what we have learned about the oceans over the years has come from people going out to sea in boats, tossing things over the side, and collecting whatever happens to get caught in their nets. Satellites, on the other hand, are wonderful for looking at very large areas of the world in a very short time.

More information: Chemistry Laboratory Safety Rules

Some rules are NOT made to be broken.
That is true of the rules used in a chemistry lab. They are really, truly for your safety and not your humiliation. Follow the instructions given by your instructor or lab manual. Don't start a lab until you know all of the steps, from start to finish. If you have questions about any part of a procedure, get the answer before starting. Do not pipette by mouth, ever. You say, "But it's only water." Even if it is, how clean do you think that glassware really is?

Chem lab resources: How To Keep a Lab Notebook, How To Write a Lab Report, Lab Report Template, Lab Safety Signs, Chemistry Pre Lab, and a Lab Safety Quiz. See also the Periodic Table of the Elements by WebElements.

Mars interactive panorama: What it looks like to be there

Imagine standing on Mars and looking around.
Or instead of imagining, check out this interactive panorama. Nearly 300 images taken by two Curiosity rover cameras over the course of 13 days were used to create the best-quality images we’ve ever seen of Mars.

Secret Worlds: The Universe Within (an interactive tutorial)

View the Milky Way at 10 million light-years from the Earth. Then move through space towards the Earth in successive orders of magnitude until you reach a tall oak tree just outside the buildings of the National High Magnetic Field Laboratory in Tallahassee, Florida. After that, begin to move from the actual size of a leaf into a microscopic world that reveals leaf cell walls, the cell nucleus, chromatin, DNA and, finally, the subatomic universe of electrons and protons. Once the tutorial has completely downloaded, a set of arrows will appear that allow the user to increase or decrease the view magnitude in Manual mode. Click on the Auto button to return to Automatic mode. Notice how each picture is actually an image of something that is 10 times bigger or smaller than the one preceding or following it. For scale: Earth = 12.76 × 10^6 = 12,760,000 meters wide (12.76 million meters).
Lesson Plan: Play a Painting
In this lesson plan, students explore the relationship between music and art.
Suggested Grade Level: 3-5
Estimated Time: One class period
Music is auditory, existing in time, but art is visual, existing in space. The process of examining music and art together can highlight the distinctive elements of each form. It can also demonstrate how their characteristics are interrelated. In this lesson, students create musical interpretations of two works of art.
- Learn to describe and analyze works of art
- Explore the relationship between music and the visual arts
- The Old Guitarist
- Improvisation No. 30 (Cannons)
- Classroom percussion instruments or instruments made from rubber bands, paper, blocks, etc.
- Encourage students to think about the sounds they can make on their own and with instruments. Begin discussion with the following questions:
- How many ways can you make sounds with your hands?
- How many different sounds can you make with your mouth and voice?
- What different sounds can be produced with handmade or percussion instruments?
- Look together at Pablo Picasso’s The Old Guitarist, late 1903-early 1904, and discuss it, asking:
- What do you see?
- What is the dominant color in the painting?
- What is the painting’s mood? What other elements of the composition contribute to this mood?
- What would the music played by the old man sound like?
- Have students describe the sounds in detail. Ask:
- Are the sounds loud, soft, long, short, high, low, smooth, rough?
- Encourage students to try to produce the sounds of the painting, first with their hands, mouths, or voices, and then with their instruments.
- Once students have finished interpreting an art object as sound, have them "play" a work of art. Look at Vasily Kandinsky’s Improvisation No. 30 (Cannons), 1913. Ask:
- What do you see? (real objects such as a cannon, formal elements such as colors, lines, etc.)
- Tell students that in his book Concerning the Spiritual in Art, Kandinsky associated color with the sounds of musical instruments: green with the cello, yellow with brass instruments, and white with silence. Ask them whether or not they agree with Kandinsky’s associations:
- How do you feel about Kandinsky’s associations?
- Are they valid? Are they arbitrary?
- Ask students to describe what sounds come to mind while looking at the painting and have them try to produce these sounds with their hands, mouths, voices, and instruments.
- Find out which parts of the painting students think they should play alone, as solos, and which parts they want to play together, in chorus. Ask them in what order they think they should play the parts and why. Encourage them to consider how their eyes move across the painting. Ask:
- What attracts your eyes first?
- What do you think the artist painted first?
- What elements of the painting are the most prominent (loud)?
- What elements seem to be in the background (soft)?
- Act as the composer/conductor for the first performance; then allow a series of students to take that role. Let the composer/conductors choose from the proposed sounds the ones that they want to use to present each visual element. Allow them to conduct the piece more than once if necessary to achieve the effects they desire.
Base students’ evaluation on their ability to identify elements of line, shape, space, color, texture, pattern, and mood in the visual arts while making varied and creative musical sounds inspired by these elements.
Illinois Learning Standards
Fine Arts: 25, 26
Toothbrush abrasion is a type of dental abrasion commonly seen in the mouth. It occurs most frequently at the junction where the teeth meet the gums (the gum line or gum margins) and on the root surfaces of teeth.
Toothbrush abrasion is the result of traumatic tooth brushing in a horizontal scrubbing movement rather than a vertical direction, and it appears as notches worn into the teeth near the gum margins, which can be made worse by abrasive dentifrices. Changes can be detected anywhere in the mouth, although the upper teeth are usually more involved than the lower teeth.
What causes toothbrush abrasion?
Hard bristles, worn bristles, pressure applied during brushing, improper brushing techniques and abrasiveness of dentifrices used can all influence the degree of toothbrush abrasion.
Besides external factors, toothbrush abrasion can occur where gum margins have receded, exposing the root surface of the tooth. Our gums recede as we get older and when the gums are inflamed, as seen in gum diseases. However, toothbrush abrasion itself can also cause gum recession.
Harmful effects of improper tooth brushing
Initially, toothbrush abrasion can cause trauma to the gums, which can appear as red, bruised, or depressed lesions. Long-term toothbrush abrasion can lead to gum recession or clefts in the gums. The receding gum margins expose the root surface of the tooth, which is thinner than the tooth enamel of the crown, and this can result in notches and sensitive teeth.
Will toothbrush abrasion heal?
Unfortunately toothbrush abrasion does not heal by itself as the tooth wear is permanent.
What to do once you have toothbrush abrasion
- Visit your attending dentist to discover the cause of the toothbrush abrasion and monitor the condition of your teeth.
- A minor notch, if detected early, may not need major treatment and can be managed with desensitizing agents.
- A deeper notch will require greater attention. If the notch is left exposed, food and bacteria can become trapped in hidden corners, and this will lead to tooth decay. Furthermore, the deeper notch can weaken the tooth and can result in tooth fracture if you bite too hard. Your dentist will repair the toothbrush abrasion with tooth-colored materials (composite / glass ionomer cements) that help fill the notch, improve appearance, and reduce teeth sensitivity.
- The receded gum margins can only be corrected with surgical procedures that reposition the gums back into place (flap surgery). If you are comfortable with the gum line, it can be left as it is.
How to prevent toothbrush abrasion
1. Choosing the right toothbrush
The size and style of the toothbrush are your personal decisions as there are many types available in the market. It is generally recommended to use soft-bristled brushes with small head size.
Soft bristles are less likely to cause damage to the gums or any exposed root surfaces of the teeth and they also adapt to the shape of the tooth better. The small head size enables the toothbrush to reach the areas in the back of the mouth for cleaning the teeth in the area without hurting the mouth.
A power toothbrush can be used to decrease brushing force and overzealous tooth brushing.
2. Replace worn toothbrush
Toothbrushes should be replaced as soon as the bristles show signs of wear, splay, fray or when the color indicator changes color. Generally, this is every 8 to 12 weeks. A worn toothbrush would not be able to clean as efficiently as a normal one.
3. Use a less abrasive toothpaste
4. Modify your tooth brushing method
There are several types of tooth brushing techniques. The Bass method (named after Dr. C. Bass) is the most commonly recommended technique. This proper tooth brushing method is effective in removing plaque and food debris directly beneath the gum margins. It is important to clean the gum margins to avoid gum diseases. There is also a modified Bass technique, whereby you add a rolling stroke after brushing along the inner part of the gum margins.
The Bass method
A. Place the toothbrush with the bristles directed straight into the gum sulcus at about a 45-degree angle to the tooth.
B. Press lightly so that the tips of the bristles go into the gum sulcus. Vibrate the brush back and forth with very short strokes. Approximately 10 gentle strokes should be completed without removing the bristle ends from the gum sulcus before proceeding to the next area. The brush head is moved to the next tooth or group of teeth by overlapping with the completed area.
C. Hold brush in a vertical position and use gentle back-and-forth strokes to clean the inner surfaces of the teeth.
D. Place the bristles on the top (chewing) surfaces of the teeth and move the brush in scrubbing or small circular motions.