An introduction to simplicial quantum gravity
We know today that Newtonian physics is really only an approximation that works well in almost every situation that we encounter in everyday life. We can use these rules to describe exactly how a baseball will travel when thrown. We know how to use Newton's laws so well that we can launch a rocket, and give it exactly the right amount of thrust so that it can coast to Jupiter, a planet only 90,000 miles across, but hundreds of millions of miles distant from Earth and orbiting at 30,000 miles per hour.
This approximation is not good enough in some situations. If I want to describe the behavior of things on a very small scale, such as the motion of an electron around an atom, I cannot use the same Newtonian rules that I use to describe the motion of a baseball thrown across the infield. For really small-scale physics, I must use a new set of rules proposed in the early 1900's. These rules are called quantum mechanics.
For objects moving at very high speeds or sitting near very large masses, Newton's rules again break down, and we must instead use Einstein's theory of relativity. Now, both quantum mechanics and the theory of relativity still accurately describe the motion of our baseball, but Newton's rules of mechanics and gravity work so well in this situation that we would never bother using these far more cumbersome theories in such a case.
Below is a chart showing when each of these three theories (Newtonian physics, quantum mechanics, and relativity) may safely be applied.
Note that there is still a square on the chart that is not covered by ANY of the theories we've mentioned so far. This means that even quantum mechanics and relativity are in a sense approximations that fail when we want to predict very small scale phenomena in the vicinity of very large masses. This will someday be the domain of a theory of quantum gravity.
To find out what we require from such a theory, let's first briefly examine what quantum mechanics and relativity tell us.
Now, let's see how quantum mechanics and relativity might be mixed to form a theory of quantum gravity. |
What we perceive to be Christmas tradition in the UK owes much to Dickens.
The following, including the image, is taken from Dr Jim Eckman’s article, ‘Charles Dickens and the Message of Christmas’, which can be read in full here.
For over 150 years Charles Dickens' story of the miserly, miserable Ebenezer Scrooge and his three ghosts has been a regular Christmas tradition throughout Western Civilization. Indeed, even Hollywood has fueled this tradition by producing more than 15 feature productions of "A Christmas Carol." Why is this story so powerful, so gripping and such a staple of the holiday season? The answer lies in understanding the author. Charles Dickens is arguably the most influential novelist in the English language, and it was his Christmas stories and his struggle with Christianity that dominated much of his life and permeated his writings.
The ghosts of Christmas past, present and future haunt Scrooge throughout Christmas Eve night, as they expose all of his sins and shortcomings. He comes to terms with his greed and selfishness as “the squeezing, wrenching, grasping, scraping, clutching, covetous” miser. In short, Scrooge is regenerated, born again, into a generous, compassionate, loving man who rescues Tiny Tim from death, and becomes one “who knew how to keep Christmas well.”
Christmas day becomes a reassuring antidote to the factory jobs and crowded cities of Victorian England. Today, we are far removed from Victorian England. But perhaps that is why we love the story so. We can identify with Scrooge in his miserliness, yet also long for his redemption. The message of Christmas is that God understands our miserly, selfish human condition and provides our redemption through His son, Jesus. The message of Christmas remains that the babe in the manger on Christmas morning was God’s “unspeakable gift” to the human race.
Dr Jim Eckman
|
Prosopagnosia is a neurological disorder characterized by the inability to recognize faces. Prosopagnosia is also known as face blindness or facial agnosia. The term prosopagnosia comes from the Greek words for “face” and “lack of knowledge.” Depending upon the degree of impairment, some people with prosopagnosia may only have difficulty recognizing a familiar face; others will be unable to discriminate between unknown faces, while still others may not even be able to distinguish a face as being different from an object. Some people with the disorder are unable to recognize their own face. Prosopagnosia is not related to memory dysfunction, memory loss, impaired vision, or learning disabilities. Prosopagnosia is thought to be the result of abnormalities, damage, or impairment in the right fusiform gyrus, a fold in the brain that appears to coordinate the neural systems that control facial perception and memory. Prosopagnosia can result from stroke, traumatic brain injury, or certain neurodegenerative diseases. In some cases it is a congenital disorder, present at birth in the absence of any brain damage. Congenital prosopagnosia appears to run in families, which makes it likely to be the result of a genetic mutation or deletion. Some degree of prosopagnosia is often present in children with autism and Asperger’s syndrome, and may be the cause of their impaired social development.
The focus of any treatment should be to help the individual with prosopagnosia develop compensatory strategies. Adults who have the condition as a result of stroke or brain trauma can be retrained to use other clues to identify individuals.
Prosopagnosia can be socially crippling. Individuals with the disorder often have difficulty recognizing family members and close friends. They often use other ways to identify people, such as relying on voice, clothing, or unique physical attributes, but these are not as effective as recognizing a face. Children with congenital prosopagnosia are born with the disability and have never had a time when they could recognize faces. Greater awareness of autism, and the autism spectrum disorders, which involve communication impairments such as prosopagnosia, is likely to make the disorder less overlooked in the future.
The National Institute of Neurological Disorders and Stroke (NINDS) conducts research related to prosopagnosia in its laboratories at the National Institutes of Health (NIH), and also supports additional research through grants to major medical institutions across the country. Much of this research focuses on finding better ways to prevent, treat, and ultimately cure disorders, such as prosopagnosia.
|
The foundation for healthy permanent teeth in children and teenagers is laid during the first years of life. Poor diet, poor habits of food intake and inadequate toothbrushing habits during the first 2 years of life have been shown in several studies to be related to tooth decay in children. The development of caries in primary teeth further increases the risk of developing caries in new permanent teeth.
Therefore it is essential to establish a proper oral hygiene routine early in life to help ensure the development of strong and healthy teeth. Parents, as consistent role models, are key to setting a daily routine and to helping their children understand the importance of oral hygiene. Toothbrushing should be presented as a habit and an integral part of the daily hygiene routine. Children are very sensitive to social stimuli such as praise and affection, and learn best by imitating their parents. A child's physiological and mental development also affects how their oral care should be approached.
Importance of the primary dentition
Primary teeth start to erupt in children from the age of six months. The primary dentition is complete by approximately two and a half years of age. The enamel of primary teeth is less densely mineralized than the enamel of permanent teeth, making them particularly susceptible to caries. Primary teeth are essential tools, both for chewing and learning to talk. They help to break up food into small pieces, thereby ensuring efficient digestion. A full set of teeth is an essential prerequisite in learning correct pronunciation. Primary teeth also play a vital role in the proper alignment and spacing of permanent teeth; it is therefore imperative that they are well cared for and preserved until normal exfoliation takes place. Establishing a proper oral care routine early on in life sets the foundation for the development of healthy and strong permanent teeth. In addition to good oral hygiene, diet also plays a key role in keeping teeth healthy. In this respect it is not only the quantity of sugar that is important, but also the frequency of consumption. As much as possible, the amount of sweets children eat between meals should be limited, especially in the evening or at night.
New permanent teeth
Although permanent teeth are already partly formed in children aged 0 to 3 years, eruption only occurs later in life (from about 6 years on) when the 32 permanent teeth (16 in the upper and 16 in the lower jaw) replace the 20 primary teeth. During this time root resorption and crown shedding of primary teeth take place. With the eruption of the first permanent teeth (from about 6 years on), the mouth contains a mixture of both primary and permanent teeth, which puts children at increased risk of caries. Often the eruption of the first permanent molar goes unnoticed by both the child and the parents, because it is positioned behind the last primary molar and does not replace any primary tooth. Although enamel is fully formed at eruption, the surface remains porous and is inadequately mineralized. Subsequently, a secondary mineralization occurs (second maturation), in which ions from the oral cavity penetrate hydroxyapatite and increase the resistance of the enamel against caries. Furthermore, any primary teeth with caries form reservoirs of bacteria, which can easily attack the immature enamel of the new permanent teeth. During the eruption, the occlusal surfaces of the new permanent teeth are on a lower level than those of the primary teeth. Toothbrushing becomes more difficult than before, given the coexistence of loose primary teeth, gaps and newly erupting permanent teeth. The jaw is also growing significantly, making space for more teeth. The cleaning of the narrower interdental spaces becomes more important with increasing numbers of permanent teeth.
Role of Parents
Parents have a key role in helping their children to develop a proper oral hygiene routine in the first years of their life. Parents should lead and supervise their children's toothbrushing for approximately the first 12 years, until motor and mental functions allow the child to routinely perform a proper toothbrushing technique alone. After brushing the teeth for their children for the first 2 years of life, parents will have to use playful motivation to encourage their children to brush their own teeth from about 3 years onwards, the time when children want to brush their teeth alone. Each time the child has finished brushing, parents should re-brush the hard-to-clean areas. At the age of around 6 years, children are able to brush their teeth using a proper brushing technique. In this phase, parents have to continue supervising the regular brushing efforts of their children. The special anatomical situation of the changing dentition means that parents still need to help their children with the daily toothbrushing task until eruption of the second molar (around the age of 12).
Development stages of children from the age 0-12
As soon as the first primary teeth erupt into the oral cavity, parents should begin brushing their children's teeth. From the age of two years, teeth should be brushed twice daily with less than a pea-sized amount of children's toothpaste. Small children tend to swallow a large amount of toothpaste, so there is a risk of developing dental fluorosis. Supervised application of the amount of toothpaste to the toothbrush is important. Due to the risk of fluorosis, the fluoride content of toothpaste for children up to the age of 5-7 years was reduced in most European countries (250 ppm to 750 ppm). Beginning with the eruption of the new permanent teeth, children should be switched from a low-fluoride children's toothpaste to a higher-fluoride toothpaste (1000 ppm to 1500 ppm). This ensures the best possible caries protection for their new permanent teeth.
Toothpaste with an age adapted content of fluoride is recommended
Primary teeth should be brushed by parents twice a day from the first tooth onwards. Parents should re-brush thoroughly after the child has brushed first. From the age of 6 years children have the ability to brush their teeth alone twice daily. However, parents must supervise the toothbrushing (until the age of 12) and check on the condition of the toothbrush. A worn toothbrush is also less effective at cleaning teeth. |
Comparing Fractions on a Number Line Math Worksheet Set, pinned by Misty Burgin
Common Core Fractions on a Number Line! This bundle comes with 12 ready-to-go printable worksheets. Use for assessment, homework, or classwork! Try the FREEBIE sheet first! Students will use both written fractions and models showing fractional parts to determine where to place the fraction on the number line. Blank number lines start at 0 and are already partitioned by the whole to scaffold toward independently placing tick marks. All fractions in this set are one whole or smaller. 40 plot-and-compare questions and 4 short answer. View the preview for full size! Also seen in: Fraction Bundle - Daily Math & More! - 3rd.
Aligned to 3rd grade Common Core standards, Number and Operations - Fractions:
- 3.NF.A.1 Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.
- 3.NF.A.2 Understand a fraction as a number on the number line; represent fractions on a number line diagram.
- 3.NF.A.2.A Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.
- 3.NF.A.3 Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size.
- 3.NF.A.3.A Understand two fractions as equivalent (equal) if they are the same size, or the same point on a number line.
- 3.NF.A.3.B Recognize and generate simple equivalent fractions, e.g., 1/2 = 2/4, 4/6 = 2/3. Explain why the fractions are equivalent, e.g., by using a visual fraction model.
- 3.NF.A.3.C Express whole numbers as fractions, and recognize fractions that are equivalent to whole numbers. Examples: Express 3 in the form 3 = 3/1; recognize that 6/1 = 6; locate 4/4 and 1 at the same point of a number line diagram.
- 3.NF.A.3.D Compare two fractions with the same numerator or the same denominator by reasoning about their size. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.
Make sure to leave feedback and earn 1 TpT Credit for every dollar spent! Redeem 20 points for $1 off any product on TpT!
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Comparing Fractions on a Number Line Math Worksheet Set is an educational infographic by Misty Burgin. It helps students in grade 3 practice the following standard: 3.NF.A.3.
1. 3.NF.A.3: Explain equivalence of fractions in special cases, and compare fractions by reasoning about their size. (Grade 3 expectations in this domain are limited to fractions with denominators 2, 3, 4, 6, and 8.)
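As a rough illustration of the number-line ideas in these standards, here is a small Python sketch (not part of the worksheet; the fractions used are arbitrary examples) that places a fraction on a number line partitioned into equal parts and compares two fractions with the same denominator.

```python
from fractions import Fraction

def position_on_number_line(frac, parts_per_whole):
    """Return how many tick marks from 0 a fraction sits when the
    interval [0, 1] is partitioned into parts_per_whole equal parts."""
    ticks = frac * parts_per_whole
    if ticks.denominator != 1:
        raise ValueError("fraction does not land exactly on a tick mark")
    return ticks.numerator

# 3.NF.A.2.A: partition [0, 1] into 4 equal parts; 3/4 sits 3 ticks from 0.
print(position_on_number_line(Fraction(3, 4), 4))   # -> 3

# 3.NF.A.3.D: same denominator, so compare numerators (same whole assumed).
a, b = Fraction(2, 6), Fraction(5, 6)
print(a < b)   # -> True: 2/6 is closer to 0 than 5/6 on the number line
```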
|
Roll-to-roll laser-induced superplasticity, prints metals at the nanoscale needed for making ultrafast electronic devices
The researchers at Purdue University developed a manufacturing technique that uses a process similar to newspaper printing to form smoother and flexible metals.
The technique is compatible with the existing tools used in industry for manufacturing metals on a large scale, but it applies the speed and precision of roll-to-roll newspaper printing to remove a couple of fabrication barriers that stand in the way of making electronics faster than they are today.
Induce superelastic behaviour
To combat the issues of roughness and low resolution of metal circuits, a fabrication method was developed to enable the formation of smooth metallic circuits at the nanoscale using conventional carbon dioxide lasers.
The fabrication method, called roll-to-roll laser-induced superplasticity, uses a rolling stamp like the ones used to print newspapers at high speed. By applying high-energy laser shots, the technique can induce superelastic behaviour in different metals for a brief period of time, which enables the metal to flow into the nanoscale features of the rolling stamp, circumventing the formability limit.
Printing tiny metal components this way, like newspapers, therefore makes them much smoother. This allows an electric current to travel better with less risk of overheating.
It is also said that in future, the roll-to-roll fabrication of devices could enable the creation of touch screens covered with nanostructures capable of interacting with light and generating 3D images as well as the cost-effective fabrication of more sensitive biosensors. |
A flower is a part of a plant. Flowers are also called the bloom or blossom of a plant. It sometimes has a stem – a thin pipe – to support the flower. Flowers have petals. The flower contains the part that produces seeds.
In many plants, a flower is its most colourful part. We say the plant 'flowers', 'is flowering' or 'is in flower' when this colourful part begins to grow bigger and open out. There are many different kinds of flowers in different areas in the world. Even in the coldest places, for example the Arctic, flowers can grow during a few months.
Structure of flowers
In botany, flowers are the key evolutionary advance made by flowering plants. To investigate the structure of a flower, it must be dissected, examined under a binocular microscope, and its structure summarised by a floral diagram or a floral formula. Then its family can be identified with the aid of a flora, which is a book designed to help you identify plants.
Flowers contain the reproductive organs of a plant. Some flowers are dependent upon the wind to move pollen between flowers of the same species. Many others rely on insects or birds to move pollen. The role of flowers is to produce seeds or fruit (fruits contain seeds). Fruits and seeds are a means of dispersal. Plants do not move, but wind, animals and birds spread the plants across the landscape.
Evolution of flowers
Flowers are modified leaves possessed only by the group known as the angiosperms, which are relatively late to appear in the fossil record.
The flowering plants have long been assumed to have evolved from within the gymnosperms, but gymnosperms form a clade which is distinct from the angiosperms. The two clades diverged some 300 million years ago.
Since the ovules are protected by carpels and integuments, it takes something special for fertilisation to happen. Angiosperms have pollen grains comprising just three cells. One cell is responsible for drilling down through the integuments, and creating a passage for the two sperm cells to flow down. The megagametophyte (a tiny haploid female plant which includes the egg) has just seven cells; of these, one fuses with a sperm cell, forming the nucleus of the egg itself, and another joins with the other sperm and dedicates itself to forming a nutrient-rich endosperm. The other cells take auxiliary roles. This process of "double fertilisation" is unique and common to all angiosperms.
Flowers for people
As decoration
Flowers have long been admired and used by humans. Most people think that flowers are beautiful. Many people also love flowers for their fragrances (scents). People enjoy seeing flowers growing in gardens. People also enjoy growing flowers in their backyards, outside their homes. People often wear flowers on their clothes or give flowers as a gift during special occasions, holidays, or rituals, such as the birth of a new baby (or a Christening), at weddings (marriages), at funerals (when a person dies). People often buy flowers from businesses called florists.
As a name
Some parents name their girl children after a flower. Some common flower names are: Rose, Lily, Daisy, Holly, Hyacinth, Jasmine, Blossom.
As food
People also eat some types of flowers. Flower vegetables include broccoli, cauliflower and artichoke. The most expensive spice, saffron, comes from the crocus flower. Other flower spices are cloves and capers. Hops flowers are used to flavor beer. Dandelion flowers are often made into wine.
Honey is flower nectar that has been collected and processed by bees. Honey is often named for the type of flower that the bees are using (for example, clover honey). Some people put flowers from nasturtiums, chrysanthemums, or carnations in their food. Flowers can also be made into tea. Dried flowers such as chrysanthemum, rose, jasmine are used to make tea.
Special meanings
Flowers were used to signal meanings in the time when social meetings between men and women were difficult. Lilies make people think of life. Red roses make people think of love, beauty, and passion. In Britain, Australia and Canada, poppies are worn on Remembrance Day as a mark of respect for those who served and died in wars. Daisies make people think of children and innocence.
List of common flowers
- Water Lily
- Morning glory
|
When mechanical engineer Ellen Kuhl, PhD, came to Stanford in 2007, she was studying the physical forces that affect how the heart functions. But some of her students began wondering how those same forces apply to the brain, particularly the way it creates those complex folds we’re used to seeing.
Kuhl told me, “A lot of people are trying to understand the biology, chemistry, and electricity of the brain, but nobody was really looking at the physical forces in the brain.”
Kuhl thought her students were on to something and got a seed grant through Stanford Bio-X to team up with Antonio Hardan, MD, a professor of psychiatry and behavioral sciences who had previously found that kids with autism have differences in the way certain regions of their brains had folded. Together, they thought if they could understand the way those folds form in the first place they might be able to improve diagnosis for autism or other disorders.
Since then, Kuhl has created computer models of the physical forces that push the outer portion of the brain to bend and flex into complex folds. The team hopes these same models can help explain what lies behind misfolded brain regions in kids with autism or other diseases.
The website PhD Comics recently created a short animation that explains Kuhl’s work and its implications for diseases like autism - see above.
Previously: New Stanford research offers hope for faster autism diagnosis; Girls with autism show behavior and brain differences compared to boys, Stanford study finds; and A new insight into the brain chemistry of autism |
Summary: Student teams design and build shoe prototypes that convert between high heels and athletic shoes. They apply their knowledge about the mechanics of walking and running as well as shoe design (as learned in the associated lesson) to design a multifunctional shoe that is both fashionable and functional.
An amazing amount of engineering goes into the design of shoes. Shoes must withstand a multitude of forces, stresses and strains on a daily basis, and must do so for the life of the shoe. The ability of a shoe to convert between different functions is a potential solution to the problem of high heels being uncomfortable to wear for extended periods of time, difficult to drive in, and a cause of ankle, foot and hip injuries.
After this activity, students should be able to:
- Identify the different parts of the engineering design process.
- Identify the different parts of the walking and running gaits.
- Explain how forces on different parts of the foot increase and decrease while walking or running, and how a shoe is built to accommodate these forces.
- Explain the difference between overpronation and underpronation, and how to fix gait misalignments with orthotics.
More Curriculum Like This
Students explore the basic physics behind walking, and the design and engineering of shoes to accommodate different gaits. They are introduced to pressure, force and impulse as they relate to shoes, walking and running. Students learn about the mechanics of walking, shoe design and common gait misalignments.
During this activity, students look at their own footprints and determine whether they have either of the two most prominent gait misalignments: overpronation (collapsing arches) or supination (high arches).
Students use the engineering design process to solve a real-world problem—shoe engineering! Working in small teams, they design, build and test a pair of wearable platform or high-heeled shoes, taking into consideration the stress and strain forces that it will encounter from the shoe wearer.
Students are introduced to prosthetics—history, purpose and benefits, main components, main types, materials, control methods, modern examples—including modern materials used to make replacement body parts and the engineering design considerations to develop prostheses. They learn how engineers and ...
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
- Evaluate a solution to a complex real-world problem based on prioritized criteria and trade-offs that account for a range of constraints, including cost, safety, reliability, and aesthetics, as well as possible social, cultural, and environmental impacts. (Grades 9 - 12)
- Established design principles are used to evaluate existing designs, to collect data, and to guide the design process. (Grades 9 - 12)
- Engineering design is influenced by personal characteristics, such as creativity, resourcefulness, and the ability to visualize and think abstractly. (Grades 9 - 12)
- A prototype is a working model used to test a design concept by making actual observations and necessary adjustments. (Grades 9 - 12)
- The process of engineering design takes into account a number of factors. (Grades 9 - 12)
Gather all materials in advance of the activity and allow students to choose from the pre-selected materials as they design. Alternatively, have students compile a materials list within a set budget after the research phase of the design process, and buy exactly what they request. Materials are available at hardware, office supply and fabric/craft stores.
Note: An infinite number of materials can be used for this activity. The wider the selection of materials available, the more elaborate and creative student shoe designs can be. Feel free to add anything to the supply list that looks interesting or useful. Other optional materials might include: balsa wood, foam, zippers, etc.
Each group needs:
- foam core board, 8-in x 11-in (20-cm x 28-cm) size (available at hardware and office supply stores)
- thin rubber sheets, 7-in x 11-in x 0.25-in (18-cm x 28-cm x .64-cm) size (look in the plumbing section of a hardware store)
- bendable, flat metal strip for support of high heel sole, 1-in x 6-in x 0.1-in (2.5-cm x 15-cm x .25 cm)
- wooden dowels for shoe heel, 1-3-in (2.5-cm - 7.6-cm) long, of varied thicknesses
- ½-in (1.27-cm) hinges
- material for the body of the shoe (look in the fabric store scrap pile)
- shoe laces
- button clasp
- other types of fasteners
- butcher block paper and pencils, for brainstorming ideas and design sketching
- Static Forces Worksheet, two per group
For the entire class to share:
- needles and thread
- duct tape
- rubber cement
- hot glue gun and glue sticks
- screwdrivers and screws
- (optional) computer with CAD program
- (optional) video camera to film group 30-second shoe commercials
How many pairs of shoes do you have? Could you name some of the different types of shoes that you have? (Listen to student descriptions.) Wouldn't it be nice if a pair of dress shoes with heels could double as a pair of running shoes? You could wear a nice pair of shoes to school and then you wouldn't need to bring an extra pair of running shoes for gym. It would decrease the amount of closet space needed for your shoes by a factor of two.
Designing a shoe is a complex process during which several factors must be considered. As an example, the pressure that a stiletto heel exerts on the ground while walking is greater than that of an elephant walking! And the heel also experiences a great deal of torque during the heel strike phase of the stride. A high-heel shoe must be designed to withstand these forces while also providing support for the foot since the entire foot never touches the ground at once. It generally requires a stiff sole to maintain the shape of the shoe and prevent the foot arch and entire shoe from collapsing during the midstride phase.
A running shoe, on the other hand, must be able to withstand the impact of hitting the ground every time the runner takes a stride and it must do so without slipping, so the runner can propel him/herself forward. Running shoes must provide enough support to pad the foot and prevent injuries, while still being flexible enough to allow the foot to flex through the stride.
What would happen if the wearer of the shoe had a knee injury resulting from either high arches or collapsing arches? Orthotics are often used to fix common gait misalignments caused by overpronation (sometimes known as collapsing arches) or supination (sometimes known as high arches). Orthotics can be designed and placed into a shoe to create a straight line between a person's ankle, knee and hip, fixing many common knee and foot injuries.
Is it possible to create a shoe that converts between these two very different functions while fixing a gait misalignment? That's our engineering project. Let's get designing.
force: Pushes or pulls; anything that causes an object to accelerate or change direction.
impulse: Average force x change in time or change in momentum. A measure of how "hard" a shoe hits the ground.
orthotic: An insert placed inside a shoe to correct either overpronation or supination.
overpronation: Excessive rolling inward movement of the foot when walking or running. Predisposes to lower extremity injuries (such as knee injuries). Causes heavier wear on the inner margin of shoes. Collapsing arches while walking.
pressure: Force per area.
supination: A rotation of the foot and leg in which the foot rolls outward with an elevated arch so that in walking the foot tends to come down on its outer edge. Leads to shoes wearing on the outer edge, and knee injuries. High arches. The opposite of pronation. Same as underpronation.
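To make these definitions concrete before students start the Static Forces Worksheet, here is a short illustrative Python sketch comparing the pressure under a stiletto heel with the pressure under an elephant's foot; the masses and contact areas are rough assumed values, not measurements from this activity.

```python
def pressure_pa(force_newtons, area_m2):
    """Pressure = force per unit area (Pa = N/m^2)."""
    return force_newtons / area_m2

g = 9.8  # m/s^2

# Assumed values for illustration only.
person_mass_kg = 60.0
heel_area_m2 = 1.0e-4          # ~1 cm^2 stiletto tip, all weight on one heel
elephant_mass_kg = 4000.0
elephant_foot_area_m2 = 0.10   # ~0.1 m^2 per foot, weight spread over 4 feet

p_heel = pressure_pa(person_mass_kg * g, heel_area_m2)
p_foot = pressure_pa(elephant_mass_kg * g / 4, elephant_foot_area_m2)

print(f"stiletto heel: {p_heel:.2e} Pa")   # ~5.9e6 Pa
print(f"elephant foot: {p_foot:.2e} Pa")   # ~9.8e4 Pa
```

With these assumed numbers the heel produces a pressure tens of times larger than the elephant's foot, which is the point behind the comparison in the introduction above.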
This open-ended activity can be done in several hours, or extended much longer. It is important to have several interim milestones for students to aim for within the engineering design process to ensure progress is being made.
Engineering Design Process
The engineering design process (sometimes called the engineering design loop) is a set of steps that engineers use to design and build a product. For this project, the steps are as follows:
- Recognize the need: High-heeled shoes are a societal norm but cause many back, knee and ankle problems for those who wear them. If a high-heeled shoe was able to convert easily to a shoe without a heel, then the wearer could switch between the two styles when appropriate (such as when driving a car), increasing comfort and reducing foot injuries.
- Define the problem: The foot's position in a high-heeled shoe is detrimental to leg alignment and should not be used for extended periods of time.
- Plan the project: What needs to be done? Determine how much time to spend brainstorming and researching the problem. How much time to spend constructing the first prototype? Will there be time to redesign the prototype?
- Gather information: Some things to consider: Has anyone else made a "convertible shoe?" How much padding is needed to provide support to an athletic shoe? How is padding distributed on the sole of an athletic shoe? How high can a heel be before causing health problems? Does the shape of the toe box of a shoe matter?
- Generate concepts: During this brainstorming phase, all ideas are good ideas. Encourage wild ideas. Do not judge ideas (at this stage). Write down all ideas. For example, ways to make a shoe convert between a high heel and an athletic shoe might include: a heel that can be turned under the shoe and stowed, a heel that can be removed completely, and a heel that can telescope into itself. These are just a few ideas.
- Evaluate the concepts: Determine what each concept would take to complete. List the pros and cons of each concept and determine whether they fit the project constraints and requirements.
- Select the most promising concept: Each project has certain constraints. This project is constrained by materials, time and people. The most promising solution fulfills all the requirements while working within the constraints.
- Build a prototype: A prototype is a working version of the final product. It is used to demonstrate the concept, and for this project, will be the final deliverable product. Prototypes are further used as a model to produce the final product.
- Test the prototype: The prototype must fulfill the requirements decided upon at the beginning of the project. For this case, the shoe must be wearable and convert between a high heel and an athletic shoe.
- Evaluate test results: Did it work? Does the heel retract? Can the shoe hold the weight of a person? Is it attractive?
- Redesign: Most engineers go through the design loop several times to reach a final product. If time is limited, then students can describe what they would change if given more time. If time permits, students can do a redesign and change certain aspects of their shoe to make it better.
Before the Activity
- Review with students the steps of the engineering design process.
- Decide on your approach to the activity. Either: 1) Gather a pre-selected supply of construction materials in advance of the activity from which students incorporate as they design, or 2) Have students compile a materials list within a set budget after the research phase of the design process, and buy what they request.
- Make two copies of the Static Forces Worksheet per group — one to complete at the beginning of the project, using the shoes from one team member, and another to complete at the end of the project using the team's prototype shoe.
- As needed, print out the attached Engineering Design Process Graphic for reference.
With the Students - Shoe Design
- Divide the class into teams of three or four students each.
- To make sure students understand the relationship between force, pressure and surface area, have each team complete the Static Forces Worksheet using one team member's shoe.
- Starting with the beginning of the design process, recognize a need: Identify a problem. For example, most shoes have only a single function. Or, most athletic shoes are not appropriate for a dressier occasion. Alternatively, discuss the health effects of wearing high-heeled shoes and brainstorm situations in which being able to remove heels quickly would be beneficial (such as driving a car, walking to the bus or sitting at a desk). As necessary, go through the steps as they apply to this project, as described in the Background section.
- As a class, brainstorm features required for both a high-heel shoe and an athletic shoe. A high heel needs structural support under the arch, as well as a sturdy way to keep the heel attached. An athletic shoe needs a flexible sole and a fastening system so the wearer can tighten the shoe.
- Working in their engineering design teams, have students compare their shoes for 10 minutes. Have them write down differences in construction and support features such as rigidity, materials used, fastening devices, shape, etc.
- Have each group hold a 15-minute brainstorming session during which they list all the possible features their convertible shoes might have and ways to convert between the two styles. Give each group a large sheet of butcher paper and encourage them to write down all ideas that come up. The conversion mechanism might be a hinge that folds the heel away or some way to easily remove the heel and its sole support.
- (If requiring students to use a pre-selected supply of materials, show students these materials at this time.) Have students spend 20-30 minutes designing their shoe on paper, working from the brainstorming session ideas. If possible, have them make CAD drawings of their designs.
- Design Peer Review: Have each group present their designs to the class. Make sure they explain key shoe features that make it a good athletic shoe and a good high heel. Make sure they explain the conversion mechanism clearly. Allow time for the other teams to critique the presenting group's design and offer feedback and suggestions for improvement.
- (If buying supplies that students specify, have them make a materials buy list at this time.)
With the Students - Building Shoes
- Have each group retrieve the materials required for their shoe design (either from the pre-selected materials provided or from supplies the instructor purchased from their buy list.)
- Allow about 60 minutes for students to build the soles of the shoes. For example, they might be made of foam core board cut to the shape of a foot and covered with a thin rubber sheet for traction. Create creases in the foam core board to correspond to the foot, toe and arch joints, which allows the shoe to take on a high-heeled shape.
- Next, have students build the removable or retractable heel. For example, the heel might be made of wooden dowels and attached with a hinge so it swings under the shoe and into a slit in the rubber base when not in use, or it can be removed completely using a button clasp. Allow 30-90 minutes, depending on the method used to stow or remove the heel.
- Use the flat metal piece to create the shape of the high heel. Attach it to the flexible sole of the shoe using Velcro and hot glue so it is removable. Allow 30 minutes for this step.
- Give students at least two hours to finish their shoes (or extend longer, depending on the intricacy of their designs). At this stage, have students build the upper part of the shoe using fabric that is attached by using hot glue or sewing. Have students add a fastening system to the shoes, possibly using laces, Velcro or buckles. Have students accessorize their shoes to make them unique.
With the Students - Marketing
- Have students write scripts for 30-second commercials selling their shoes, highlighting the technical features and customer benefits. Perform or video-tape the commercials to show the entire class.
- Have teams complete the Static Forces Worksheet for their prototype shoes, including calculating the pressures and forces experienced by their shoes in the different configurations.
- Conclude by conducting a class critique of each group's shoe prototypes. Have each group explain the unique features of each style of their shoe, as well as the conversion mechanism to switch between them. If this was a real project and they had more time, what improvements would they make? Remind students that real-world engineers cycle through the engineering design process many times before they have a completed design, ready to manufacture.
- Provide supervision when students use power tools and hot glue.
Gluing is always an experimental art. Be sure to have a wide variety of adhesives or other attachment options available.
Students tend to get caught up on various steps of the design process such as research or design. Often, students need to be forced to move from these phases into the prototype construction phase.
Force and Pressure: Have students complete the Static Forces Worksheet using one team member's shoe, to ensure they understand the relationship between force, pressure, and surface area. (This same worksheet may have already been completed if the associated lesson was conducted.)
Activity Embedded Assessment
Design Peer Review: Have each group present their designs to the class and ask for feedback. Have them explain the key features that make their shoes good athletic shoes and good high heels. Make sure they explain the conversion mechanisms clearly.
TV Time: Have each group write scripts and film (or perform) 30-second commercials selling their shoes. Make sure they highlight the key engineering features and customer benefits of their shoes.
Pressures: Have students calculate the pressures and forces experienced by their shoes in the different configurations. Hand out another Static Forces Worksheet for each team to complete the calculations for their prototype shoe.
Redesign: Conduct a class critique of each group's shoe prototypes. What improvements would they make in the next prototype version?
Have teams make additional, improved prototypes of their shoe designs.
Copyright© 2010 by Regents of the University of Colorado.
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government.
Last modified: July 18, 2017 |
Definition relative dating geology
With absolute age dating, you get a real age in actual years.
It’s based either on fossils which are recognized to represent a particular interval of time, or on radioactive decay of specific isotopes. Based on the Rule of Superposition, certain organisms clearly lived before others, during certain geologic times.
Each isotope is identified with what is called a ‘mass number’.
When ‘parent’ uranium-238 decays, for example, it produces subatomic particles, energy and ‘daughter’ lead-206.
These use radioactive minerals in rocks as geological clocks.
The atoms of some chemical elements have different forms, called isotopes.
Relative age dating also means paying attention to crosscutting relationships.
Yet, you’ve heard the news: Earth is 4.6 billion years old. That corn cob found in an ancient Native American fire pit is 1,000 years old. Geologic age dating—assigning an age to materials—is an entire discipline of its own.
In a way this field, called geochronology, is some of the purest detective work earth scientists do.
Geologists use radiocarbon to date such materials as wood and pollen trapped in sediment, which indicates the date of the sediment itself.
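As a hedged illustration of how a radioactive "clock" gives an age (a sketch, not a method described in this article), the following applies the standard exponential-decay relation to radiocarbon, assuming the commonly quoted half-life of about 5,730 years and a made-up measured fraction of remaining carbon-14.

```python
import math

def radiometric_age_years(remaining_fraction, half_life_years):
    """Age from the fraction of the parent isotope still present:
    N/N0 = (1/2)^(t / half_life)  =>  t = -half_life * log2(N/N0)."""
    return -half_life_years * math.log2(remaining_fraction)

# Example: a sample retains 25% of its original carbon-14 (assumed value).
print(radiometric_age_years(0.25, 5730))   # -> 11460.0 years (two half-lives)
```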
The table below shows characteristics of some common radiometric dating methods. |
The Met Office is a world leader in forecast accuracy, but due to the chaotic nature of weather there are unavoidable limitations to what we can predict. However, we can calculate the confidence in a weather forecast to give people a clear picture of any uncertainties.
This is particularly relevant when it comes to precipitation (usually rain, but also including drizzle, sleet, snow, hail, etc) because it can often vary a lot from place to place - especially when it falls as showers. This makes it difficult to be precise about whether precipitation will fall, and if so how much. To try and convey the uncertainty better, we have introduced a new element to the forecast, the Probability of Precipitation (PoP).
Here we explain what this part of the forecast means and how we produce it.
PoP is given in the five day forecasts for specific locations (there are now around 7000 locations to choose from around the UK). The forecast represents an hourly period for days one to two of the five day forecast and a three-hour period of the day for days three to five. The Precipitation Probability is given as a percentage (%) to the nearest 5%, which indicates how likely it is that any precipitation will fall during that period at the selected location. More precisely, by "any precipitation" we mean at least 0.1mm, which is about the smallest amount that we can measure. Note that this does not mean the probability that it will be raining, snowing, hailing etc for the whole of the period, only the probability that some precipitation will fall during that period.
So what does a PoP of 10% mean? This means that there is a 1 in 10 chance that precipitation will fall during this period. Another way of looking at this probability is that there is a 9 in 10 chance that it will stay dry. Similarly, a PoP of 80% means an 8 in 10 chance that precipitation will fall, and only a 2 in 10 chance that it will remain dry.
You can learn more about the precipitation risk by looking at the PoP alongside the weather symbol for the same period. This will indicate what type of precipitation is most likely (drizzle, rain, snow, showers etc.) and how heavy it might be. The symbol, for days three to five, represents the most likely weather during the three-hour period, so you may sometimes find that the symbol indicates no precipitation, but the PoP gives a low amount, say 20%, indicating that although it is most likely that it will be dry, there is still some risk.
Often people want to make a decision, such as whether to put out their washing to dry, and would like us to give a simple yes or no. However, this is often a simplification of the complexities of the forecast and may not be accurate. By giving PoP we give a more honest opinion of the risk and allow you to make a decision depending on how much it matters to you. For example, if you are just hanging out your sheets that you need next week you might take the risk at 40% probability of precipitation, whereas if you are drying your best shirt that you need for an important dinner this evening then you might not hang it out at more than 10% probability. PoP allows you to make the decisions that matter to you.
A weather forecast is an estimate of the future state of the atmosphere. It's created by observing the current state of the atmosphere and using a computer model to calculate how it may change over time. As the atmosphere is a chaotic system, small approximations in the way observations are analysed can lead to large errors in a weather forecast. We can't create perfect weather forecasts because we can never observe every detail of the atmosphere as it changes hour by hour and day by day.
To estimate the uncertainty in the forecast we use what are known as 'ensemble forecasts'. Here, we run our computer model many times from slightly different starting conditions. Initial differences are tiny so each run is equally likely to be correct, but the chaotic nature of the atmosphere means the forecasts can be quite different. On some days the model runs may be similar, which gives us a high level of confidence in the weather forecast; on other days, the model runs can differ radically so we have to be more cautious.
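A minimal sketch of how an ensemble could be turned into a Probability of Precipitation: count the fraction of ensemble members producing at least 0.1 mm in the period and round to the nearest 5%. The Met Office's actual post-processing is more sophisticated, and the rainfall values below are made-up examples.

```python
def probability_of_precipitation(member_rainfall_mm, threshold_mm=0.1):
    """Fraction of ensemble members with measurable precipitation,
    expressed as a percentage rounded to the nearest 5%."""
    wet = sum(1 for amount in member_rainfall_mm if amount >= threshold_mm)
    raw_percent = 100.0 * wet / len(member_rainfall_mm)
    return 5 * round(raw_percent / 5)

# Made-up rainfall totals (mm) from a 10-member ensemble for one period:
ensemble = [0.0, 0.0, 0.3, 0.0, 1.2, 0.0, 0.0, 0.05, 2.4, 0.0]
print(probability_of_precipitation(ensemble))   # -> 30 (3 of 10 members are wet)
```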
A single probability forecast is never right or wrong. We can only measure how good our probability forecasts are by looking at a large number of them over time. We do this by grouping together, for example, all of the 10% probability forecasts and checking that those weather events actually took place on one in 10 occasions as predicted.
Last updated: 8 August 2014 |
In this chapter fluids at rest will be studied. Mass density, weight density, pressure, fluid pressure, buoyancy, and Pascal's principle will be discussed. In the following, the symbol (ρ) is pronounced "rho."
Example 1 : The mass density of steel is 7.8 gr /cm3. A chunk of steel has a volume of 141cm3. Determine (a) its mass in grams and (b) its weight density in N/m3. Solve before looking at the solution. Use horizontal fraction bars.
Solution: (a) Since ρ = M / V ; M = ρV ; M = (7.8 gr / cm3) (141 cm3 ) ; M = 1100 grams.
Before going to Part (b), let's first convert (gr/cm3) to its Metric version (kg/m3). Use horizontal fraction bars.
7.8 gr / cm3 = 7.8 (0.001kg) / (0.01m)3 = 7800 kg/m3.
1 kg is equal to 1000gr. This means that 1 gr is 0.001kg as is used above.
Also, 1 m is 100cm. This means that 1cm is 0.01m. Cubing each results in: 1cm3 = 0.000001m3 as is used above. Now, let's solve Part (b).
(b) D = ρg ; D = [7800 kg /m3] [ 9.8 m/s2] = 76000 N /m3.
Not only should you write Part (b) with horizontal fraction bars, but you should also check the correctness of the units.
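A quick way to check the arithmetic and unit conversion in Example 1 is to script it; the following is just a verification sketch of the steps shown above.

```python
GRAM_TO_KG = 0.001      # 1 gr = 0.001 kg
CM3_TO_M3 = 0.01 ** 3   # 1 cm^3 = 1e-6 m^3
g = 9.8                 # m/s^2

rho_steel_gr_cm3 = 7.8
volume_cm3 = 141.0

mass_grams = rho_steel_gr_cm3 * volume_cm3              # (a) ~1100 gr
rho_kg_m3 = rho_steel_gr_cm3 * GRAM_TO_KG / CM3_TO_M3   # 7800 kg/m^3
weight_density_N_m3 = rho_kg_m3 * g                     # (b) D = rho * g

print(round(mass_grams), round(rho_kg_m3), round(weight_density_N_m3))
# -> 1100 7800 76440  (the text rounds the last value to 76000 N/m^3)
```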
Example 2 : A piece of aluminum weighs 31.75N. Determine (a) its mass and (b) its volume if the mass density of aluminum is 2.7gr/cm3.
Solution: (a) w = Mg ; M = w / g ; M = 31.75N / [9.8 m/s2] ; M = 3.2 kg ; M = 3200 grams.
(b) ρ = M / V ; V = M / ρ ; V = 3200gr / [2.7 gr /cm3] = 1200 cm3.
Example 3 : The mass densities of gold and copper are 19.3 gr/cm3 and 8.9 gr/cm3, respectively. A piece of gold that is known to be an alloy of gold and copper has a mass of 7.55kg and a volume of 534 cm3. Calculate the mass percentage of gold in the alloy assuming that the volume of the alloy is equal to the volume of copper plus the volume of gold. In other words, no volume is lost or gained as a result of the alloying process. Do your best to solve it by yourself first.
Solution: Two equations can definitely be written down. The sum of masses as well as the sum of volumes are given. The formula M = ρV is applicable to both metals. Mgold = ρgoldVgold and Mcopper = ρcopperVcopper . Look at the following as a system of two equations in two unknowns:
Mg + Mc = 7550 gr ; ρgVg + ρcVc = 7550 ; 19.3Vg + 8.9Vc = 7550
Vg + Vc = 534 cm3 ; Vg + Vc = 534 ; Vg = 534 - Vc
Substituting for Vg in the first equation yields:
19.3 (534 - Vc) + 8.9 Vc = 7550 ; 10306.2 -10.4Vc = 7550 ; 2756.2 = 10.4Vc ; Vc = 265 cm3.
Since Vg = 534 - Vc ; therefore, Vg = 534 - 265 = 269 cm3.
The masses are: Mg = ρgVg ; Mg = (19.3 gr/cm3) ( 269 cm3 ) = 5190 gr ; Mc = 2360 gr.
The mass percentage of gold in the alloy (7550 gr) is Mgold / Malloy = (5190/7550) = 0.687 = 68.7 %
Karat means the number of portions out of 24 portions. [68.7 / 100] = [ x / 24] ; x = 16.5 karat.
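The two-equation system in Example 3 can also be checked numerically; this short sketch simply repeats the substitution step from the text.

```python
rho_gold = 19.3     # gr/cm^3
rho_copper = 8.9    # gr/cm^3
total_mass_gr = 7550.0
total_volume_cm3 = 534.0

# From M_g + M_c = total mass and V_g + V_c = total volume,
# substitute V_g = total_V - V_c into rho_g*V_g + rho_c*V_c = total mass:
v_copper = (rho_gold * total_volume_cm3 - total_mass_gr) / (rho_gold - rho_copper)
v_gold = total_volume_cm3 - v_copper

m_gold = rho_gold * v_gold
gold_fraction = m_gold / total_mass_gr
karat = 24 * gold_fraction

print(round(v_copper), round(v_gold))                   # -> 265 269  (cm^3)
print(round(100 * gold_fraction, 1), round(karat, 1))   # -> 68.8 16.5
# (the text rounds the gold mass to 5190 gr, giving 68.7 % and 16.5 karat)
```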
Pressure is defined as force per unit area. Let's use lower case p for pressure; therefore, p = F / A. The SI unit for pressure is N/m2 called " Pascal." The American unit is lbf / ft2. Two useful commercial units are: kgf / cm2 and lbf / in2 or psi.
Example 4: Calculate the average pressure that a 120-lbf table exerts on the floor by each of its four legs if the cross-sectional area of each leg is 1.5 in2.
Solution: p = F / A ; p = 120lbf / (4x 1.5 in2) = 20 lbf / in2 or 20 psi.
Example 5: (a) Calculate the weight of a 102-gram mass piece of metal. If this metal piece is rolled to a square sheet that is 1.0m on each side, and then spread over the same size (1.0m x 1.0m ) table, (b) what pressure would it exert on the square table?
Solution: (a) w = Mg ; w = (0.102 kg)(9.8 m/s2) ; w = 1.0 N
(b) p = F / A ; p = 1.0N / (1.0m x 1.0m) ; p = 1.0 N/m2 ; p = 1.0 Pascal (1.0Pa)
As you may have noticed, 1 Pa is a small amount of pressure. The atmospheric pressure is 101,300 Pa. We may say that the atmospheric pressure is roughly 100,000 Pa, or 100 kPa. We will calculate this later.
Fluid Pressure: Both liquids and gases are considered fluids. The study of fluids at rest is called Fluid Statics. The pressure in stationary fluids depends on the weight density, D, of the fluid and the depth, h, at which pressure is to be calculated. Of course, as we go deeper in a fluid, its density increases slightly because at lower points there are more layers of fluid pressing down, causing the fluid to be denser. For liquids, the variation of density with depth is very small for relatively small depths and may be neglected. This is because liquids are incompressible. For gases, the density increase with depth becomes significant and may not be neglected. Gases are called compressible fluids. If we assume that the density of a fluid remains fairly constant for relatively small depths, the formula for fluid pressure may be written as:
p = hD or p = h ρg
where ρ is the mass density and D is the weight density of the fluid.
Example 6: Calculate (a) the pressure due to just water at a depth of 15.0m below the lake surface. (b) What is the total pressure at that depth if the atmospheric pressure is 101kPa? (c) Also find the total external force on a spherical research chamber whose external diameter is 5.0m. Water has a mass density of ρ = 1000 kg/m3.
Solution: (a) p = hD ; p = h ρg ; p = (15.0m)(1000 kg/m3)(9.8 m/s2) = 150,000 N /m2 or Pa.
(b) [p total] external = p liquid + p atmosphere ; [p total] external = 150,000Pa + 101,000Pa = 250,000Pa.
(c) p = F / A ; solving for F, yields: F = pA ; Fexternal = (250,000 N/m2)(4π)(2.5m)2 = 20,000,000 N.
F = 2.0x10^7 N (How many millions?!)
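The numbers in Example 6 can be verified with a few lines of code; this sketch only reproduces the calculation above.

```python
import math

g = 9.8                    # m/s^2
rho_water = 1000.0         # kg/m^3
depth_m = 15.0
p_atmosphere = 101000.0    # Pa
sphere_radius_m = 2.5      # half of the 5.0 m external diameter

p_water = depth_m * rho_water * g            # (a) p = h*rho*g
p_total = p_water + p_atmosphere             # (b) add atmospheric pressure
area = 4 * math.pi * sphere_radius_m ** 2    # sphere surface area
force = p_total * area                       # (c) F = p*A over the whole sphere

print(round(p_water), round(p_total), f"{force:.1e}")
# -> 147000 248000 1.9e+07
# (the text rounds these to 150,000 Pa, 250,000 Pa, and 2.0x10^7 N)
```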
Chapter 11 Test Yourself 1:
1) Average mass density, ρ, is defined as (a) mass of unit volume (b) mass per unit volume (c) a & b.
2) Average weight density, D is defined as (a) weight per unit volume (b) mass of unit volume times g (c) both a & b.
3) D = ρg is correct because (a) w = Mg (b) D is weight density and ρ is mass density (c) both a & b.
4) 4.0cm3 of substance A has a mass of 33.0grams, and 8.0cm3 of substance B has a mass of 56.0 grams. (a) A is denser than B (b) B is denser than A (c) Both A and B have the same density. click here
Problem: 1gram was originally defined to be the mass of 1cm3 of pure water. Answer the following questions by first doing the calculations. Make sure to write down neatly with horizontal fraction bars. click here
5) On this basis, one suitable unit for the mass density of water is (a) 1 cm3/gr (b) 1 gr/cm3 (c) both a & b.
6) We know that 1kg = 1000gr. We may say that (a) 1gr = (1/1000)kg (b) 1gr = 0.001kg (c) both a & b.
7) We know that 1m = 100cm. We may say that (a) 1m3 = 100cm3 (b) 1m3 = 10000cm3 (c) 1m3 = 1,000,000cm3.
8) We know that 1cm = 0.01m. We may write (a) 1cm3 = 0.000001m3 (b) 1cm3 = 0.001m3 (c) 1cm3 = 0.01m3.
9) Converting gr/cm3 to kg/m3 yields: (a)1gr/cm3 = 1000 kg/m3 (b)1gr/cm3 = 100 kg/m3 (c)1gr/cm3 = 10 kg/m3.
10) From Q9, the mass density of water is also (a) 1000 kg/m3 (b) 1 ton/m3, because 1ton=1000kg (c) both a & b.
11) Aluminum is 2.7 times denser than water. Since ρwater = 1000kg/m3 ; therefore, ρAlum. = (a) 2700kg/m3 (b) 27kg/m3 (c) 27000kg/m3. click here
12) Mercury has a mass density of 13.6 gr/cm3. In Metric units (kg/m3), its density is (a) 1360 kg/m3 (b) 13600 kg/m3 (c) 0.00136kg/m3.
13) The weight density of water is (a) 9.8 kg/m3 (b) 9800kg/m3 (c) 9800N/m3. click here
14) The volume of a piece of copper is 0.00247m3. Knowing that copper is 8.9 times denser than water, first find the mass density of copper in Metric units and then find the mass of the copper piece. Ans. : (a) 44kg (b) 22kg (c) 16kg.
Problem: The weight of a gold sphere is 1.26N. The mass density of gold is ρgold = 19300kg/m3.
15) The weight density, D, of gold is (a) 1970 N/m3 (b) 189000 N/m3 (c) 100,000 N/m3.
16) The volume of the gold sphere is (a) 6.66x10-6m3 (b) 6.66cm3 (c) both a & b. click here
17) The radius of the gold sphere is (a) 1.167cm (b) 0.9523cm (c) 2.209cm.
18) Pressure is defined as (a) force times area (b) force per unit area (c) force per length.
19) The Metric unit for pressure is (a) N/m3 (b) N/cm3 (c) N/m2. click here
20) Pascal is the same thing as (a) lbf / ft2 (b) N/m2 (c) lbf / in2.
21) psi is (a) lbf / ft2 (b) N/m2 (c) lbm / in2 (d) none of a, b, or c.
22) A solid brick may be placed on a flat surface on three different sides that have three different surface areas. To create the greatest pressure it must be placed on its (a) largest side (b) smallest side (c) middle-size side.
Problem: 113 grams is about 4.00 ounces. A 102-gram mass is 0.102kg. The weight of a 0.102kg mass is 1.00N. Verify this weight. If a 0.102kg piece of, say, copper that weighs 1.00N is hammered or rolled to a flat sheet (1.00m by 1.00m), how thin would that be? Maybe one tenth of 1 mm? Note that a (1m) by (1m) rectangular sheet of metal may be viewed as a rectangular box whose height or thickness is very small, like a sheet of paper. If you place your hand under such a thin sheet of copper, you will hardly feel any pressure. Answer the following questions:
23) The weight density of copper that is 8.9 times denser than water is (a) 8900N/m2 (b) 1000N/m3 (c) 87220N/m3.
24) The volume of a 0.102kg or 1.00N piece (sheet) of copper is (a) 1.15x10-5m3 (b) 1.15x105m3 (c) 8900m3.
25) For a (1m)(1m) = 1m2 base area of the sheet, its height or thickness is (a) 1.15x10-5m (b) 1.15x105m (c) 8900m.
26) The small height (thickness) in Question 25 is (a) 0.0115mm (b) 0.0115cm (c) 890cm.
27) The pressure (force / area) or (weight / area) that the above sheet generates is (a) 1N/1m2 (b) 1 Pascal (c) both a & b.
28) Compared to pressures in water pipes or car tires, 1 Pascal of pressure is (a) a great pressure (b) a medium pressure (c) a quite small pressure.
29) The atmospheric pressure is roughly (a) 100Pa (b) 100,000 Pa (c) 100kPa (d) both b & c.
30) The atmospheric pressure is (a) 14.7 psi (b) 1.0 kgf/m2 (c) 1.0 kgf/cm2 (d) a & c.
Gravity pulls the air molecules around the Earth toward the Earth's center. This makes the air layers denser and denser as we move from outer space toward the Earth's surface. It is the weight of the atmosphere that causes the atmospheric pressure. The depth of the atmosphere is about 60 miles. If we go 60 miles above the Earth's surface, air molecules become so scarce that we might travel one meter and not collide with even a single molecule (a good vacuum!). Vacuum establishes the basis for absolute zero pressure. Any gas pressure measured with respect to vacuum is called "absolute pressure."
Calculation of the Atmospheric Pressure:
The trick to calculating the atmospheric pressure is to place a 1-m long test tube filled with mercury inverted over a pot of mercury such that air cannot get into the tube. Torricelli (Italian) was the first to try this. The figure is shown above. In doing this, we will see that the mercury level drops to 76.0cm, or 30.0 inches, if the experiment is performed at ocean level. The top of the tube lacks air and does not build up air pressure. This device acts as a balance. If it is taken to the top of a high mountain, where there are fewer air layers above one's head, the mercury level goes down. This device can even be calibrated to measure elevation for us based on air pressure.
The pressure that the 76.0-cm column of mercury generates is equal to the pressure that a column of air of the same diameter generates, but with a length of 60 miles (from the Earth's surface all the way up to the no-air region).
Using the formula for pressure ( p = F / A ), the pressure of the mercury column or the atmospheric pressure can be calculated as follows:
patm = the mercury weight / the tube cross-sectional Area. ( Write using horiz. fraction bars).
patm = (VHg)(DHg) / A = (A)(hHg)(DHg) / A = hHgDHg. Note that the tube's volume VHg = (base area)(height) = (A)(hHg).
patm = hHgDHg (This further verifies the formula for pressure in a fluid).
In Torricelli's experiment, hHg = 76.0cm and DHg = 13.6 grf /cm3 ; therefore ,
patm = ( 76.0cm )( 13.6 grf /cm3 ) = 1033.6 grf / cm2
Converting grf to kgf results in patm = 1.0336 kgf / cm2
To 2 significant figures, this result is, by coincidence, a round number: patm = 1.0 kgf /cm2.
If you softly place a 2.2 lbf (or 1.0 kgf ) weight over your finger nail ( A = 1 cm2 almost), you will experience a pressure of 1.0 kgf / cm2 (somewhat painful) that is equivalent to the atmospheric pressure. The atmosphere is pressing with a force of 1 kgf = 9.8 N on every cm2 of our bodies and we are used to it. This pressure acts from all directions perpendicular to our bodies surfaces at any point. An astronaut working outside a space station must be in a very strong suit that can hold 1 atmosphere of pressure inside compared to the zero pressure outside and not explode.
Example 7: Convert the atmospheric pressure from 1.0336 kgf / cm2 to lbf / in2 or psi.
Solution: 1 kgf = 2.2 lbf and 1 in. = 2.54 cm. Convert and show that patm = 14.7 psi.
Example 8: Convert the atmospheric pressure from 1.0336 kgf / cm2 to N / m2 or Pascals (Pa).
Solution: 1 kgf = 9.8N and 1 m = 100 cm. Convert and show that patm = 101,300 Pa.
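Both conversions in Examples 7 and 8 are straight multiplications by conversion factors. The sketch below uses the same factors quoted in the examples (1 kgf = 2.2 lbf, 1 in = 2.54 cm, 1 kgf = 9.8 N, 1 m = 100 cm).

```python
p_atm = 1.0336  # kgf/cm^2

# Example 7: kgf/cm^2 -> lbf/in^2 (psi).  1 in^2 = 2.54^2 cm^2.
p_atm_psi = p_atm * 2.2 * 2.54**2
print(round(p_atm_psi, 1))   # about 14.7 psi

# Example 8: kgf/cm^2 -> N/m^2 (Pa).  1 m^2 = 100^2 cm^2.
p_atm_pa = p_atm * 9.8 * 100**2
print(round(p_atm_pa))       # about 101,300 Pa
```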
Example 9: The surface area of an average size person is almost 1m2. Calculate the total force that the atmosphere exerts on such person.
Solution: p = F / A ; F = pA ; F = ( 101,300 N/m2 )( 1 m2 ) = 100,000 N.
F = ( 1.0336 kgf / cm2 )( 10,000 cm2 ) = 10,000 kgf = 10 ton force.
Example 10: A submarine with a total outer area of 2200m2 is at a depth of 65.0m below ocean surface. The density of ocean water is 1030 kg/m3. Calculate (a) the pressure due to water at that depth, (b) the total external pressure at that depth, and (c) the total external force on it. Let g = 9.81 m/s2.
Solution: (a) p = hD ; p = h ρg ; p = (65.0m)(1030 kg/m3)(9.81 m/s2) = 657,000 N /m2 or Pa.
(b) [p total] external = p liquid + p atmosphere ; [ p total ]external = 657,000Pa + 101,000Pa = 758,000Pa.
(c) p = F / A ; solving for F, yields: F = pA ; F = (758,000 N/m2)(2200m2) = 1.67x109 N.
Buoyancy, Archimedes' Principle:
When a non-dissolving object is submerged in a fluid (liquid or gas), the fluid exerts an upward force onto the object that is called the buoyancy force (B). The magnitude of the buoyancy force is equal to the weight of displaced fluid. The formula for buoyancy is therefore,
B = Vobject Dfluid
Example 11: Calculate the downward force necessary to keep a 1.0-lbf basketball submerged under water knowing that its diameter is 1.0ft. The American unit for the weight density of water is Dwater = 62.4 lbf /ft3.
Solution: The volume of the basketball (sphere) is: Vobject = (4/3) π R3 = (4/3)(3.14)(0.50 ft)3 = 0.523 ft3.
The upward force (buoyancy) on the basketball is: B = Vobject Dfluid = (0.523 ft3)(62.4 lbf / ft3) = 33 lbf .
Water pushes the basketball up with a force of magnitude 33 lbf while gravity pulls it down with a force of 1.0 lbf (its weight); therefore, a downward force of 32 lbf is needed to keep the basketball fully under water. The force diagram is shown below:
A Good Link to Try: http://www.mhhe.com/physsci/physical/giambattista/fluids/fluids.html .
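Example 11's bookkeeping (buoyancy up, weight down, applied force making up the difference) can be written as a few lines of Python; this is a sketch in American units, and the tiny difference from 33 lbf in the text is only rounding of π and of the volume.

```python
import math

D_WATER = 62.4  # weight density of water, lbf/ft^3

# Example 11: a 1.0-lbf basketball of 1.0-ft diameter held under water
v_ball = (4 / 3) * math.pi * 0.50**3      # volume of the sphere, about 0.524 ft^3
buoyancy = v_ball * D_WATER               # B = V * D, about 33 lbf upward
hold_down_force = buoyancy - 1.0          # about 32 lbf of downward force needed
print(round(buoyancy), round(hold_down_force))
```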
Example 12: Calculate the necessary upward force to keep a (5.0cm)(4.0cm)(2.0cm)-rectangular aluminum bar from sinking when submerged in water knowing that Dwater = 1 grf / cm3 and DAl = 2.7 grf / cm3.
Solution: The volume of the bar is Vobject = (5.0cm)(4.0cm)(2.0cm) = 40cm3.
The buoyancy force is: B = Vobject Dfluid = (40cm3)(1 grf / cm3) = 40grf.
The weight of the bar in air is w = Vobject Dobject = (40cm3)(2.7 grf / cm3) = 110grf.
Water pushes the bar up with a force of magnitude 40. grf while gravity pulls it down with 110grf ; therefore, an upward force of 70 grf is needed to keep the bar fully under water and keep it from sinking. The force diagram is shown below:
Example 13: A boat has a volume of 40.0m3 and a mass of 2.00 tons. What load will push 75.0% of its volume into water? Each metric ton is 1000 kg. Let g = 9.81 m/s2.
Solution: Vobj = 0.750 x 40.0m3 = 30.0m3.
B =Vobject Dfluid = (30.0m3)(1000 kg /m3)(9.81 m/s2) = 294,000N.
w = Mg = (2.00 x 103 kg)(9.81 m/s2) = 19600N.
F = B - w = 294,000N - 19600N = 274,000N.
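A minimal Python check of Example 13, using ρwater = 1000 kg/m3 and g = 9.81 m/s2 as in the text:

```python
G = 9.81          # m/s^2
RHO_WATER = 1000  # kg/m^3

v_submerged = 0.750 * 40.0               # allowed submerged volume, m^3
buoyancy = v_submerged * RHO_WATER * G   # weight of displaced water, about 294,000 N
boat_weight = 2.00e3 * G                 # 2.00 metric tons, about 19,600 N
load = buoyancy - boat_weight            # about 274,000 N of load
print(round(buoyancy), round(boat_weight), round(load))
```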
An important and useful principle in fluid statics is Pascal's principle. Its statement is as follows: The pressure imposed at any point of a confined fluid transmits itself to all points of that fluid without significant losses.
One application of Pascal's principle is the mechanism in hydraulic jacks. As shown in the figure, a small force, f, applied to a small piston of area a imposes a pressure onto the liquid (oil) equal to f/a. This pressure transmits throughout the oil as well as onto the internal boundaries of the jack, especially under the big piston. On the big piston, the big load F pushes down over the big area A. This pressure is F/A. The two pressures must be equal, according to Pascal's principle. We may write:
f /a = F/A
For balance, the force that pushes down on the big piston is much greater in magnitude than the force applied to the small piston; however, the small piston must go through a large displacement in order for the big piston to go through a small displacement.
Example 14: In a hydraulic jack the diameters of the small and big pistons are 2.00cm and 26.00cm respectively. A truck that weighs 33800N is to be lifted by the big piston. Find (a) the force that has to push the smaller piston down, and (b) the pressure under each piston.
Solution: (a) a = π r2 = π (1.00cm)2 = 3.14 cm2 ; A = π R2 = π (13.00cm)2 = 530.66 cm2
f / a = F / A ; f / 3.14cm2 = 33800N / 530.66cm2 ; f = 200N
(b) p = f /a = 63.7 N/cm2 ; p = F / A = 63.7 N/cm2.
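Since Example 14 boils down to the single relation f/a = F/A, a short sketch can confirm both answers; the helper function below is only illustrative.

```python
import math

def small_piston_force(load_N, d_small_cm, d_big_cm):
    # Pascal's principle for a hydraulic jack: f/a = F/A.
    a = math.pi * (d_small_cm / 2) ** 2   # small piston area, cm^2
    A = math.pi * (d_big_cm / 2) ** 2     # big piston area, cm^2
    f = load_N * a / A                    # required force on the small piston, N
    p = load_N / A                        # common pressure under both pistons, N/cm^2
    return f, p

f, p = small_piston_force(33800, 2.00, 26.00)
print(round(f), round(p, 1))   # about 200 N and 63.7 N/cm^2
```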
Chapter 11 Test Yourself 2:
1) In Torricelli's experiment of measuring the atmospheric pressure at the ocean level, the height of mercury in the tube is (a) 76.0cm (b) 7.6cm (c) 760mm (d) a & c. click here
2) The space above the tube in the Torricelli's experiment is (a) at regular pressure (b) almost vacuum (c) at a very small amount of mercury vapor pressure, because due to vacuum, a slight amount of mercury evaporates and creates minimal mercury vapor pressure (d) both b & c.
3) The pressure inside a stationary fluid (liquid or gas) depends on (a) mass density and depth (b) weight density and depth (c)depth only regardless of the fluid type. click here
4) A pressure gauge placed 50.0m below ocean surface measures (a) a higher (b) a lower (c) the same pressure compared to a gauge that is placed at the same depth in a lake.
5) The actual pressure at a certain depth in an ocean on a planet that has an atmosphere is equal to (a) just the liquid pressure (b) the liquid pressure + the atmospheric pressure (c) the atmospheric pressure only. click here
6) The formula that calculates the pressure at a certain depth in a fluid is (a) p = h ρ (b) p = hD (c) p = h ρg (d) both b & c.
Problem: Mercury is a liquid metal that is 13.6 times denser than water. Answer the following questions:
7) The mass density of mercury is (a) 13600 kg/m3 (b) 13.6 ton / m3 (c) both a & b. click here
8) The weight density of mercury is (a) 130,000N/m3 (b) 1400N/m3 (c) 20,100N/m3.
9) In a mercury tank, the liquid pressure at a depth of 2.4m below mercury surface is (a) 213000N/m2 (b) 312000N/m3 (c) 312000N/m2. click here
10) In the previous question, the total pressure at that depth is (a) 412000 N/m3 (b) 212000N/m2 (c) 412000 N/m2.
Problem: This problem shows the effect of depth on liquid pressure. A long vertical and narrow steel pipe of 1.0cm in diameter is connected to a spherical barrel of internal diameter of 1.00m. The barrel is also made of steel and can withstand an internal total force of 400,000N! The barrel is gradually filled with water through the thin and long pipe on its top while allowing the air out of the tank. When the spherical part of the tank is full of water, further filling makes the water level in the thin pipe go up fast (Refer to Problem 7 at the very end of this chapter for a suitable figure). As the water level goes up, it quickly builds up pressure, p = hD. If the pipe is 15.5m long, for example, answer Questions 11 through 14.
11) The liquid pressure at the center of the barrel is (a) 151900N/m2 (b) 156800N/m2 (c) 16000kg/m2.
12) The total internal area of the barrel is (a) 0.785m2. (b) 3.141m2. (c) 1.57m2. click here
13) The total force on the internal surface of the sphere is (a) 246000N (b) 123000N (c) 492000N.
14) Based on the results in the previous question, the barrel (a) withstands the pressure (b) does not withstand the pressure. click here
15) The liquid pressure at a depth of 10m ( 33ft ) below water on this planet is roughly (a) 200,000Pa (b) 100,000Pa (c) 300,000Pa.
16) Since the atmospheric pressure is also roughly 100,000 Pa, we may say that every 10m of water depth or height is equivalent to (a) 1 atmosphere of pressure (b) 2 atmospheres of pressure (c) 3 atmospheres of pressure.
17) In the Torricelli's experiment, if the formula P = hD is used to calculate the pressure caused by 0.760m of mercury the value of atmospheric pressure becomes (a) 9800 N/m2 (b) 98000 N/m2 (c) 101,000 N/m2. Perform the calculation. ρmercury = 13600kg/m3. click here
18) To convert 101,000 N/m2 or the atmospheric pressure to lbf /in2 or psi, one may replace (N) by 0.224 lbf and (m) by 39.37 in. The result of the conversion is (a) 25.4psi (b) 14.7psi (c) 16.2psi. Perform the calculation.
19) To convert 101,000 N/m2 or the atmospheric pressure to kgf /cm2, one may replace (N) by 0.102 kgf and (m) by 100cm. The result of the conversion is (a) 1.0 kgf /cm2 (b) 2.0 kgf /cm2 (c) 3.0kgf /cm2. Perform the calculation.
20) Due to the atmospheric pressure, every cm2 of our bodies is under a force of (a) 1.0kgf (b) 9.8N (c) both a & b. click here
21) An example of an area approximately close to 1cm2 is the size of (a) a finger nail (b) a quarter (c) a dollar coin.
22) The formula that calculates the area of a sphere is Asphere = (a) πr2 (b) 2πr2 (c) 4πr2.
23) The force due to liquid pressure on a 5.0m diameter spherical chamber that is at a depth of 40.0m below ocean surface is (a) 3.14x106N (b) 3.08x107N (c) 6.16x106N. click here
24) Buoyancy for a submerged object in a non-dissolving liquid is (a) the upward force that the liquid exerts on that object (b) equal to the mass of the displaced fluid (c) equal to the weight of the displaced fluid (d) a & c.
25) The direction of the buoyancy force is (a) always downward (b) always upward (c) sometimes upward and sometimes downward. click here
26) The buoyancy on a cube 0.080m on each side and fully submerged in water is (a) 5.02N (b) 63N (c) 0.512N.
27) If the cube in the previous question is made of aluminum ( ρ = 2700 kg/m3), it has a weight of (a) 13.5N (b) 170N (c) 0.189N. click here
28) The force necessary to keep the cube in the previous question from sinking in water is (a) 107N (b) 8.5N (c) 7.0N.
Problem: A (12.0m)(50.0m)(8.0m-height)-barge has an empty mass of 1250 tons. For safety reasons and preventing it from sinking, only 6.0m of its height is allowed to go under water. Answer the following questions:
29) The total volume of the barge is (a) 480m3 (b) 60m3 (c) 4800m3. click here
30) The effective (safe) volume of the barge that can be submerged in water is (a) 3600m3 (b) 50m3 (c) 360m3.
31) The buoyancy force on the barge when submerged in water to its safe height is (a) 1.83x106N (b) 5.43x108N (c) 3.53x107N. click here
32) The safe load that the barge can carry is (a) Buoyancy + its empty weight (b) Buoyancy - its empty weight (c) Buoyancy - its volume.
33) The mass of the barge in kg is (a) 1.25x103 ton (b) 1.25x106 kg (c) a & b.
34) The weight of the barge in N is (a) 1.23x107 N (b) 2.26x107 N (c) neither a nor b. click here
35) The safe load in N that the barge can carry is (a) 3.41x107N (b) 2.3x107N (c) 2.53x107N.
36) The safe load in Metric tons is (a) 1370 ton (b) 2350 ton (c) 5000 ton.
37) According to Pascal's principle, a pressure imposed (a) on any fluid (b) on a confined fluid (c) on a mono-atomic fluid, transmits itself to all points of that fluid without any significant loss. click here
Problem: In a hydraulic jack the diameter of the big cylinder is 10.0 times the diameter of the small cylinder. Answer the following questions:
38) The ratio of the areas (of the big piston to the small piston) is (a) 10.0 (b) 100 (c) 50.0. click here
39) The ratio of the applied forces (on the small piston to that of the big piston) is (a) 1/100 (b) 1/10 (c) 1/25. click here
40) If the applied force to the small piston is 147.0N, the mass of the car it can lift is (a) 1200kg (b) 3500kg (c) 1500kg.
1) The mass density of mercury is 13.6 gr /cm3. A cylindrical vessel that has a height of 8.00cm and a base radius of 4.00cm is filled with mercury. Find (a) the volume of the vessel. Calculate the mass of mercury in (b) grams, (c) kg, and (d) find its weight both in N and kgf. Note that 1kgf = 9.81N.
2) A piece of copper weighs 49N. Determine (a) its mass and (b) its volume. The mass density of copper is 8.9gr/cm3.
3) The mass densities of gold and copper are 19.3 gr/cm3 and 8.9 gr/cm3, respectively. A piece of gold necklace has a mass of 51.0 grams and a volume of 3.50 cm3. Calculate (a) the mass percentage and (b) the karat of gold in the alloy assuming that the volume of the alloy is equal to the volume of copper plus the volume of gold. In other words, no volume is lost or gained as a result of the alloying process.
4) Calculate the average pressure that a 32-ton ten-wheeler truck exerts on the ground by each of its ten tires if the contact area of each tire with the ground is 750 cm2. 1 ton = 1000kg. Express your answers in (a) Pascal, (b) kgf/cm2, and (c) psi.
5) Calculate (a) the water pressure at a depth of 22.0m below ocean surface. (b)What is the total pressure at that depth if the atmospheric pressure is 101,300Pa? (c) Find the total external force on a shark that has an external total surface area of 32.8 ft2. Ocean water has a mass density of ρ = 1030 kg/m3.
6) A submarine with a total outer area of 1720m2 is at a depth of 33.0m below ocean surface. The mass density of ocean water is 1025 kg/m3. Calculate (a) the pressure due to water at that depth. (b) the total external pressure at that depth, and (c) the total external force on it. Let g = 9.81 m/s2.
7) In the figure shown, calculate the liquid pressure at the center of the barrel if the narrow pipe is filled up to (a) Point A, (b) Point B, and (c) Point C.
Using each pressure you find (in Parts a, b, and c) as the average pressure inside the barrel, calculate (d) the corresponding internal force on the barrel in each case.
If it takes 4.00x107N for the barrel to rupture, (e) at what height of water in the pipe will that happen?
A sphere = 4πR2 and g = 9.81m/s2.
8) In problem 7, why is it not necessary to add the atmospheric pressure to the pressure you find for each case?
9) A volleyball has a diameter of 25.0cm and weighs 2.0N. Find (a) its volume. What downward force can keep it submerged in a type of alcohol that has a mass density of 834 kg/m3 (b) in Newtons and (c) lb-force? Vsphere=(4/3)πR3.
10) What upward force is needed to keep a 560cm3 solid piece of aluminum completely under water avoiding it from sinking (a) in Newtons, and (b) in lbf.? The mass density of aluminum is 2700kg/m3.
11) A boat has a volume of 127m3 and weighs 7.0 x104N. For safety, no more than 67.0% of its volume should be in water. What maximum load (a) in Newtons, (b) in kgf, (c) in ton-force, and (d) in lbf can be put in it?
Answers:
1) 402cm3, 5470 grams, 5.47kg, 53.7N & 5.47kgf
2) 5.0kg, 562cm3
3) 72.3%, 17.4 karat
4) 420 kPa, 4.3 kgf/cm2, 61 psi
5) 222kPa, 323kPa, 969 kN
6) 332kPa, 433kPa, 7.45x108N
7) 176kPa, 225kPa, 255kPa; 2.00x107N, 2.55x107N, 2.88x107N; 36.1m above the barrel's center
8) For students to answer
9) 8.18x10-3m3, 64.9N, 14.6 lbf
10) 9.3N, 2.1 lbf
11) 765000 N, 78000 kgf, 78 ton-force, 170,000 lbf
An Act of the US Congress concerning slavery. Following the Mexican–American War, the Compromise of 1850 had allowed squatters in New Mexico and Utah to decide by referendum whether they would enter the Union as “free” or “slave” states. This was contrary to the earlier Missouri Compromise. The Act of 1854 declared that in Kansas and Nebraska a decision on slavery would also be allowed, by holding a referendum. Tensions erupted between pro- and anti-slavery groups, which in Kansas led to violence (1855–57). Those who deplored the Act formed a new political organization, the Republican Party, pledged to oppose slavery in the Territories. Kansas was to be admitted as a free state in 1861, and Nebraska in 1867.
The 1997 US film Men In Black followed two secret agents who protected humans from extraterrestrial aliens, and aliens from humans. They had access to several (fictional) gadgets — the most notable of which was the “neuralizer.” The neuralizer allowed an agent to flash a light into the victim’s eyes, which deleted their memory and allowed the agent to suggest a new memory to overwrite the old one.
A research team at University of Oxford, UK, developed a gene that could only be activated with a laser, a technology that at first glance bears the same mystique as Hollywood’s neuralizer. While the Oxford team’s development cannot affect memory, it may have beneficial medical uses.
With this idea in mind, it is possible to imagine using a laser to activate only the toxic genes in cancerous tumors to destroy them. What is meant by “toxic genes?” Cells contain genes that turn on and off cell-death scenarios in case the cell is compromised or damaged, like a controlled self-destruct sequence. This means that the tumor cells receive the light, which activates the self-destruct gene, and the surrounding healthy cells would be left alone! Light activated DNA can be regulated outside the body (so no need for multiple surgeries) and targeted (the light only goes to the place where it’s useful).
In biology, genes are pieces of DNA that store instructions for making proteins. Proteins are responsible for the major functions inside the cell: they break down food for energy, carry oxygen in blood cells, etc. The gene developed by the researchers at Oxford encodes a protein that creates a microscopic tunnel between two cells. This tunnel allows cells to send chemical signals to one another, but only when the laser is activated. In this study, the group used 3D-printed synthetic cells with this light-activated DNA inside that, when illuminated with a laser, could communicate with each other through the pore spaces using small electrical pulses.
When exposed to the laser, only the cells with the light-activated DNA would express this pore gene and create pores to cells without the light-activated DNA inside, such as tumor cells. In the future, doctors could place both the synthetic cells and a tumor-killing medicine inside a tumor, where the synthetic cells could first generate pores between the synthetic cells and the tumor cells. These pores would then allow the medicine to be released only into the tumor. What about healthy cells? This new method cannot yet discriminate between diseased and healthy cells and needs more research, so localizing these synthetic cells with light-activated DNA to a small area in a tumor, with a tightly regulated toxic gene, is paramount. This small gene is a huge step in the realm of targeted medicines!
This chapter is devoted to the major satellites of the giant planets: those large enough to have acquired a roughly spherical shape through self-gravity. There are 17 of these worlds (four at Jupiter, seven at Saturn, five at Uranus, and one at Neptune), ranging in diameter from 5,260 kilometers (Ganymede) to 400 kilometers (Mimas) (Figure 8.1, Table 8.1). They are astonishingly diverse, with surface ages spanning more than four orders of magnitude, and surface materials ranging from molten silicate lava to nitrogen frost. This diversity makes the satellites exceptionally interesting scientifically, illuminating the many evolutionary paths that planetary bodies can follow as a function of their size, composition, and available energy sources, and allowing researchers to investigate and understand an exceptional variety of planetary processes. However, this diversity also presents a challenge for any attempt to prioritize exploration of these worlds, as we move from initial reconnaissance to focused in-depth studies.
The sizes, masses, and orbits of all the large satellites are now well known and are key constraints on the origin of the planetary systems to which they belong. Additional constraints come from their detailed compositions, which scientists are just beginning to investigate. Several worlds have unique stories to tell us about the evolution of habitable worlds, by illuminating tidal heating mechanisms, providing planetary-scale laboratories for the evolution of organic compounds, and harboring potentially habitable subsurface environments. Many of these worlds feature active planetary processes that are important for understanding these bodies themselves as well as worlds throughout the solar system. These processes include silicate volcanism, ice tectonics, impacts, atmospheric escape, chemistry, dynamics, and magnetospheric processes.
While much can still be learned from ground-based and near-Earth telescopic observations, particularly in the temporal domain, and from analysis of existing data, missions to these worlds are required to produce new breakthroughs in understanding. During the past decade, understanding of these worlds has been substantially expanded by the Cassini spacecraft and its Huygens probe that descended to Titan in 2005. Data from Cassini continue to revise and expand what is known about Saturn’s moons. In addition, continued analysis from past missions such as Galileo has produced surprises as well as helping to inform the planning for future missions.
All three of the crosscutting science themes for the exploration of the solar system motivate further exploration of the outer planet satellites; their study is vital to addressing many of the priority questions in each of the themes. For example, in the building new worlds theme, the satellites retain chemical and geological records of the processes of formation and evolution in the outer solar system—records no longer accessible in the giant planets themselves. As such the satellites are key to attacking the question, How did the giant planets and their satellite systems accrete, and is there evidence that they migrated to new orbital positions? The planetary habitats
theme includes the question, What were the primordial sources of organic matter, and where does organic synthesis continue today? The surfaces and interiors of the icy satellites display a rich variety of organic molecules—some believed to be primordial, some likely being generated even today; Titan presents perhaps the richest planetary laboratory for studying organic synthesis ongoing on a global scale. Europa, Enceladus, and Titan are central to another key question in this theme: Beyond Earth, are there modern habitats elsewhere in the solar system with necessary conditions, organic matter, water, energy, and nutrients to sustain life, and do organisms live there now? Exhibiting a global methane cycle akin to Earth’s hydrologic cycle, Titan’s complex atmosphere is key to understanding the workings of the solar system theme and the question, Can understanding the roles of physics,
TABLE 8.1 Characteristics of the Large- and Medium-Size Satellites of the Giant Planets
| Primary | Satellite | Distance from Primary (km) | Radius (km) | Bulk Density (g/cm^3) | Geometric Albedo | Dominant Surface Composition | Surface Atmospheric Pressure (bars) | Dominant Atmospheric Composition | Notes |
|---|---|---|---|---|---|---|---|---|---|
| Jupiter | Io | 422,000 | 1,822 | 3.53 | 0.6 | S, SO2, silicates | 10^-9 | SO2 | Intense tidally driven volcanism, plumes, high mountains |
| Jupiter | Europa | 671,000 | 1,561 | 3.01 | 0.7 | H2O, hydrates | 10^-12 | O2 | Recent complex resurfacing, probable subsurface ocean |
| Jupiter | Ganymede | 1,070,000 | 2,631 | 1.94 | 0.4 | H2O, hydrates | 10^-12 | O2 | Magnetic field, ancient tectonism, probable subsurface ocean |
| Jupiter | Callisto | 1,883,000 | 2,410 | 1.83 | 0.2 | H2O, phyllosilicates? | | | Partially undifferentiated, heavily cratered, probable subsurface ocean |
| Saturn | Enceladus | 238,000 | 252 | 1.61 | 1.0 | H2O | | | Intense recent tectonism, active water vapor/ice jets |
| Saturn | Tethys | 295,000 | 533 | 0.97 | 0.8 | H2O | | | Heavily cratered, fractures |
| Saturn | Dione | 377,000 | 562 | 1.48 | 0.6 | H2O | | | Limited resurfacing, fractures |
| Saturn | Rhea | 527,000 | 764 | 1.23 | 0.6 | H2O | | | Heavily cratered, fractures |
| Saturn | Titan | 1,222,000 | 2,576 | 1.88 | 0.2 | H2O, organics, liquid CH4 | 1.5 | N2, CH4 | Active hydrocarbon hydrologic cycle, complex organic chemistry |
| Saturn | Iapetus | 3,561,000 | 736 | 1.08 | 0.3 | H2O, organics? | | | Heavily cratered, extreme albedo dichotomy |
| Uranus | Miranda | 130,000 | 236 | 1.21 | 0.3 | H2O | | | Complex and inhomogeneous resurfacing |
| Uranus | Ariel | 191,000 | 579 | 1.59 | 0.4 | H2O | | | Limited resurfacing, fractures |
| Uranus | Umbriel | 266,000 | 585 | 1.46 | 0.2 | H2O, dark material | | | Heavily cratered |
| Uranus | Titania | 436,000 | 789 | 1.66 | 0.3 | H2O | | | Limited resurfacing, fractures |
| Uranus | Oberon | 584,000 | 761 | 1.56 | 0.2 | H2O, dark material | | | Limited resurfacing |
| Neptune | Triton | 355,000 | 1,353 | 2.06 | 0.8 | N2, CH4, H2O | 10^-5 | N2, CH4 | Captured; recent resurfacing, complex geology, active plumes |
chemistry, geology, and dynamics in driving planetary atmospheres and climates lead to a better understanding of climate change on Earth? Finally, the giant-planet satellites exhibit an enormous spectrum of planetary conditions, chemistry, and processes—contrasting with those of the inner solar system and stretching the scientific imagination in addressing the question, How have the myriad chemical and physical processes that shaped the solar system operated, interacted, and evolved over time?
The planetary science community has made remarkable progress over the past decade in understanding the major satellites of the giant planets (Table 8.2), but despite this progress, important questions remain unanswered. The committee developed some specific high-level goals and associated objectives to guide the continued advancement of the study of planetary satellites. The goals cover the broad areas of origin and evolution, processes, and habitability. They are as follows:
• How did the satellites of the outer solar system form and evolve?
• What processes control the present-day behavior of these bodies?
• What are the processes that result in habitable environments?
Each of these goals is described in more detail in subsequent sections.
Understanding the origin and evolution of the satellites is a key goal of satellite exploration. Satellite composition and internal structure (particularly the state of differentiation) provide important clues to the formation of
TABLE 8.2 Major Accomplishments by Ground- and Space-Based Studies of the Satellites of the Giant Planets in the Past Decade
| Major Accomplishment | Mission and/or Technique |
|---|---|
| Discovered an active meteorological cycle on Titan involving liquid hydrocarbons instead of water | Cassini and Huygens; ground-based observations |
| Discovered endogenic activity on Enceladus and found that the Enceladus plumes have a major impact on the saturnian environment | Cassini |
| Greatly improved understanding of the origin and evolution of Titan’s atmosphere and inventory of volatiles and its complex organic chemistry | Theory and modeling based on Cassini and Huygens data |
| Major improvement in characterizing the processes, composition, and histories for all the saturnian satellites | Theory and modeling based on Cassini data |
| Developed new models improving understanding of Europa, Io, and the other Galilean satellites | Theory and modeling based on Galileo data; ground-based observations; and Cassini and New Horizons |
these worlds and their parent planet; of particular interest are the origin and evolution of volatile species. Orbital evolution, and its intimate connections to tidal heating, provide a major influence on satellite evolution. Tidal and other energy sources drive a wide range of geologic processes, whose history is recorded on the satellite surfaces.
Objectives associated with the goal of understanding the formation and evolution of the giant-planet satellites include the following:
• What were the conditions during satellite formation?
• What determines the abundance and composition of satellite volatiles?
• How are satellite thermal and orbital evolution and internal structure related?
• What is the diversity of geologic activity and how has it changed over time?
Subsequent sections examine each of these objectives in turn, identifying important questions to be addressed and future investigations and measurements that could provide answers.
What Were the Conditions During Satellite Formation?
The properties of the existing regular satellite systems provide clues about the conditions in which they formed. The regular satellites of Jupiter, Saturn, and Uranus orbit in the same planes as the planets’ equators, suggesting that the moons likely formed in an accretion disk in the late stages of planet formation.1 Neptune has one large irregular satellite, Triton, in an inclined and retrograde orbit (opposite from the direction of Neptune’s rotation). Triton may be a captured Kuiper belt object, and moons that might have formed in a neptunian accretion disk were probably destroyed during the capture. Each of the regular systems has unique characteristics. Jupiter has four large satellites (the Galilean satellites), the inner two of which are essentially rocky bodies while the outer two moons are rich in ice. The saturnian system has a single large satellite, whereas closer to Saturn there are much smaller, comparably sized icy moons. The regular uranian satellites lie in the planet’s equatorial plane that is tilted by 97° to the ecliptic (i.e., the plane of Earth’s orbit).
The outer planet satellites have also been modified by endogenic (e.g., internal differentiation and tides) and exogenic (e.g., large impacts) processes that have strongly influenced what is seen today. Although the present orbital dynamical, physical, and chemical states of the satellites preserve information about their origins, such information can have been hidden or erased by processes occurring during the evolution of the moons.
The Cassini mission has opened our eyes to the wonders of the saturnian satellites. Titan’s surface is alive with fluvial and aeolian activity,2 yet its interior is only partially differentiated,3 has no magnetic field, and probably has no metallic core. On tiny Enceladus, water vapor plumes have been discovered emanating from south polar
fissures, warmed by an unusual amount of internal heat.4 These observations together with the satellite’s density have important implications for the interior of Enceladus that in turn impose limitations on its formation and evolution. Iapetus is remarkably oblate for its size, and its ancient surface features a singular equatorial belt of mountains, providing unique constraints on its early history. Cassini observations of the other saturnian moons—Rhea, Dione, Mimas, and Tethys—have increased our knowledge of their surfaces, compositions, and bulk properties.
Measurements of volatile abundances are enabling the reconstruction of the planetesimal conditions at the time of accretion of the satellites, but those conditions are still far from understood. Titan’s dense atmosphere makes it especially interesting: The dominance of molecular nitrogen and the absence of the expected accompanying abundance of primordial argon are important results that constrain its origin.5
Some important questions concerning the conditions during satellite formation include the following:
• Why are Titan and Callisto apparently imperfectly differentiated whereas Ganymede underwent complete differentiation?
• Why did Ganymede form an iron-rich core capable of sustaining a magnetic dynamo?
• What aspects of formation conditions governed the bulk composition and subsequent evolution of Io and Europa?
• In what ways did the formation conditions of the saturnian satellites differ from the conditions for the jovian satellites?
• Is it possible to discern in the uranian satellites any evidence of a very different origin scenario (a giant impact on Uranus, for example), or is this satellite system also the outcome of a process analogous to processes by which the other giant-planet satellites originated?
• What features of Triton are indicative of its origin?
Future Directions for Investigations and Measurements
An investigation key to understanding the conditions during satellite formation is to establish the thermodynamic conditions of satellite formation and evolution by determination of the bulk compositions and isotopic abundances. These results would directly constrain conditions of formation, for example the radial temperature profile in the planetary accretion disk from which the regular satellites formed. Two other crucial areas of investigation are to better constrain the internal mass distributions of many of the satellites by measuring the static gravitational fields and topography and to probe the existence and nature of internal oceans by measuring tidal variations in gravity and topography and by measuring electromagnetic induction in the satellites at multiple frequencies. Internal oceans may date to a satellite’s earliest history, given that they can be difficult to re-melt tidally once frozen, and thus their presence constrains formation scenarios.
What Determines the Abundance and Composition of Satellite Volatiles?
Volatiles on the outer planet satellites are contained mainly in ices, although volatiles can also be retained in the rocky components (e.g., hydrated silicates on Europa or Io). Clathration (i.e., the incorporation of gas molecules within a modified water-ice structure) is a likely process for retention of many volatiles in satellite interiors, and it helps to explain the current composition of Titan.6 Alternatives like trapping of gases in amorphous ice have also been suggested.
The building blocks for the satellites may have originated from the solar nebula or formed in the planetary subnebula.7 In either case, the thermodynamic conditions and composition of the gas phase determine the formation conditions of ices.
Huygens probe results and Cassini results have motivated a great deal of modeling of the formation conditions for the Saturn system and Titan in particular. Planetesimal formation in the solar nebula with only modest subnebula
processing may be representative of the satellite formation process in the Saturn system,8 and clathration may have had an important role, presumably aided by collisions between planetesimals to expose “fresh” ice. In the Galilean satellites, by contrast, extensive processing in the jovian subnebula may have occurred. However, the formation conditions of the Galilean satellites are not well constrained at this time, due to the lack of measurements of volatiles for these satellites, including noble gases and their stable isotopes. The origin and evolution of methane on Titan are receiving much attention, with some workers favoring ongoing outgassing from the interior to balance the continual destruction over geologic time. The argon content of the atmosphere implies that nitrogen arrived as ammonia rather than as molecular nitrogen, yet how ammonia evolved into molecular nitrogen is not known.
Ground-based spectroscopy continues to expand our knowledge of inventories of volatiles on satellite surfaces, for instance with the discovery of carbon dioxide ice on the uranian satellites.9
Some important questions concerning the abundance and composition of satellite volatiles include the following:
• In what ways do the highly volatile constituents differ between Callisto and Ganymede?
• Are volatiles present at the surface or in the ice shell of Europa that are indicative of internal processing or resurfacing?
• How, and to what extent, have volatiles been lost from Io?
• What does the plume material from Enceladus tell us about the volatile inventory of that body?
• Why does Titan uniquely have an exceptionally thick atmosphere?
• What does the volatile inventory of Titan tell us about its history? In particular, how is the methane resupplied, given its rapid photochemical destruction in the upper atmosphere?
Future Directions for Investigations and Measurements
Investigations and measurements relevant to the abundance and composition of satellite volatiles include determination of the volatile composition of the ices, the stable isotope ratios of carbon, hydrogen, oxygen, and nitrogen, and the abundances of the noble gases to help untangle nebula and subnebula processes using highly precise remote and in situ determinations of atmospheric and surface compositions; improved observations of currently active processes of loss of volatiles; and improved understanding of the thermodynamics of volatiles and the efficiency of clathration of volatiles as a function of the formation conditions.
How Are Satellite Thermal and Orbital Evolution and Internal Structure Related?
Like those of planets, the structure and evolution of satellites are strongly affected by mass and composition. Unlike planets, satellites are very close to the central body and can therefore be greatly affected by tides and tidally mediated resonances (i.e., periodic mutual gravitational interactions).10 This leads to a rich diversity of outcomes (Figure 8.2), understanding of which can reveal the history of the system and a satellite’s internal structure. At least three bodies (Io, Europa, and Enceladus) are thought to be currently undergoing large tidal heating, and others (Ganymede, Triton, possibly Titan, and maybe more) may have been heated in this way in the past. Tidal effects are ultimately limited by orbital evolution and the energy budget this allows. Unlike heating caused by energy released from radioactive substances, the magnitude and the spatial and temporal variability of tidal heating are very sensitive to the structure of a satellite. The evolution of the internal structure of a satellite is also affected by the radiogenic heating of the rocky component, and this alone will guarantee convection in the ice-rich parts of the larger satellites.11 Convection can in turn drive surface tectonics and may cause outgassing or cryovolcanism.
Although Enceladus was already recognized at the time of the 2003 planetary science decadal survey as a likely location of tidal heating, it has emerged as an active body of great interest, primarily through Cassini observations. The plume activity and estimates of thermal emission imply a level of tidal heating that is unexpectedly high for a
body so small. Enceladus’s forced eccentricity and tidal heating may not, however, be constant through geologic time.12 Progress has also continued on a more complete understanding of Io and Europa, through continued analysis of Galileo data combined with ground-based and Earth orbit telescopic observations. Recent work appears to support the idea that Io is in thermal but not orbital equilibrium.13 Cassini gravity data suggest that Titan is not fully differentiated,14 perhaps like Callisto but unlike Ganymede. These data mainly elucidate formation conditions but might also inform researchers about tidal heating in Ganymede or the role of later impacts.
Some important questions about the thermal and orbital evolution of satellites and how it relates to their internal structure include the following:
• What is the history of the resonances responsible for the tidal heating, and how is this heating accomplished?
• How does this heat escape to the surface?
• How is this heat transfer related to the internal structure (thickness of an outer solid shell, or composition of the interior) and formation?
• How hydrostatic are the satellites?
There are also body-specific questions:
• Does Io have a magma ocean, and what is the compositional range of its magmas?
• What is the origin of the topography of Io?
• What are the magnitude and the spatial distribution of Io’s total heat flow?
• What are the thickness of Europa’s outer ice shell and the depth of its ocean?
• What is the magnitude of Europa’s tidal dissipation, and how is it partitioned between the silicate interior and the ice shell?
• What is the relationship between Titan’s surface morphology and its internal processes, particularly for the history of the methane budget and lakes or seas and possible replenishment of methane from the interior or subsurface?
• Does Titan have an internal liquid-water ocean?
• What is the spatial distribution of Enceladus’s heat output, and how has it varied with time?
• Does Enceladus have an ocean or some other means of providing large tidal dissipation, and to what extent is its behavior dictated by its formation conditions (e.g., presence or absence of a differentiated core)?
• What does the diversity of the uranian moons indicate about the evolution of small to medium-size icy satellites? What drove such dramatic endogenic activity on Miranda and Ariel?
• What powers past or possible ongoing activity on Triton, which currently has negligible tidal heating?
Future Directions for Investigations and Measurements
Many of the future investigations needed to understand satellite formation arise here as well because of the interplay of formation conditions and subsequent thermal evolution. A better understanding of the internal structure and thermal evolution of satellites requires measurements of static gravitational fields and topography to probe interior structure and of tidal variations in gravity and topography, as well as electromagnetic induction in the satellites at multiple frequencies to search for oceans. The presence and nature of intrinsic magnetic fields also constrain internal thermal evolution and initial conditions. Another needed key investigation is subsurface sounding (e.g., radar) to investigate the structure of the upper lithosphere. Heat flow can be sufficiently large to be detected through thermal infrared techniques. This provides a powerful constraint on the satellite’s thermal state. Improved maps of composition and geology of the satellite surfaces will constrain the extent and nature of transport of heat from the interior. Critical to interpretations from these investigations are improved laboratory determinations of the thermophysical and mechanical properties of relevant candidate materials to better constrain interior processes.
What Is the Diversity of Geologic Activity and How Has It Changed Over Time?
The surfaces of solar system bodies provide important clues to their history and evolution. Collectively, outer planet satellites show the scars of almost every surface process, including impact cratering, tectonic deformation, cryovolcanism, and aeolian and fluvial erosion. Many of these processes are still mysterious. Icy-satellite tectonism is often extensional,15 sometimes bringing interior materials up to the surface, but strike-slip faulting is also observed. Compressive tectonism is less evident on icy satellites; however, it is likely responsible for Io’s towering mountains.16 Much of Europa’s surface is disrupted by extensive and mysterious chaos regions.17 Solid-state convection is likely to be an important driver of icy-satellite geology, but details are unclear. The large range of ages and processes provides a valuable window into solar system history, constraining thermal and compositional evolution and allowing a better understanding of how planetary systems form and evolve.
The science return from the Cassini mission has been phenomenal. Multiple flybys of Titan have confirmed the presence of numerous methane lakes on the surface—the only bodies of surface liquid on any known world other than Earth—along with fluvial channels (Figure 8.3), and evidence for seasonal variations.18 Images of Enceladus reveal a long and complex geologic history that continues to the present day, and includes ridges that are morphologically similar to Europa’s ubiquitous double ridges.19 Wispy features on Dione and Rhea’s trailing hemisphere have been revealed to be huge cliffs, evidence of a tectonically active past.20 Images of Iapetus show
an ancient equatorial belt of mountains, a remnant of Iapetus’s early evolution. Cassini images have shown unusual impact crater morphologies on Hyperion. Continued analysis of Galileo images has constrained the population of primary and secondary impactors in the outer solar system21 and provides continued new insights into the remarkable geology of Europa.
Some important questions about the diversity of geologic activity and how it has changed over time include the following:
• One of the key missing pieces in the understanding of satellite surface geology is adequate knowledge of the cratering record in the outer solar system.22 What are the impactor populations in the outer solar system, and how have they changed over time, and what is the role of secondary cratering?
• What are the origins of tectonic patterns on Europa, including the ubiquitous double ridges (Figure 8.4) and chaos regions?
• How much non-synchronous rotation has Europa’s ice shell undergone, and how have the resulting stresses manifested at the surface?
• How is contraction accommodated on Europa?
• Has material from a subsurface Europa ocean been transported to the surface, and if so, how?
• What caused Ganymede’s surface to be partially disrupted to form grooved terrain, and is the grooved terrain purely tectonic or partly cryovolcanic in origin?
• Did Ganymede suffer a late heavy bombardment that affected its appearance and internal evolution?
• What is the age of Titan’s surface, and have cryovolcanism and tectonism been important processes? Have there been secular changes in the surface methane inventory?
• Why is Enceladus’s geology so spatially variable, and how has activity varied with time?
• What geologic processes have created the surfaces of the diverse uranian moons, particularly the dramatic tectonics of Miranda and Ariel?
• Has viscous extrusive cryovolcanism occurred on icy satellites, as suggested by features on Ariel and Titan?
• What geologic processes operate on Triton’s unique surface, how old is that activity, and what do its surface features reveal about whether Triton is captured?
Future Directions for Investigations and Measurements
Advancing understanding of the full range of surface processes operative on outer planet satellites requires global reconnaissance with 100-meter scale imaging of key objects, particularly Europa, Titan, and Enceladus as well as topographic data and high-resolution mapping (~10 meters/pixel) of selected targets to understand details of their formation and structure. In particular, understanding of tidally induced tectonics requires such global maps. Improved knowledge about subsurface structure is essential to constrain the nature and extent of endogenic geologic processes, for example the lithospheric thickness, fault penetration depths, porosity, thermal structure, and the presence of subsurface liquid. Maps of compositional variations at high spatial and spectral resolution and over a broad range of wavelengths are key to understanding how surface materials are emplaced and evolve.
Critical to accurate interpretation of such spacecraft data are better laboratory reflectance and emission spectra of materials relevant to the outer solar system (some of which do not exist at standard temperature and pressure). A comprehensive spectral database of ices and minerals covering a wide temperature range would have wide-ranging applications to outer solar system satellites.
Many planetary satellites are highly dynamic, alive with geologic and/or atmospheric activity, and even the more sedate moons have active chemical and physical interactions with the plasma and radiation environments that
surround them. Study of these active processes provides an invaluable opportunity to understand how planetary bodies work.
Important objectives include the following:
• How do active endogenic processes shape the satellites’ surfaces and influence their interiors?
• What processes control the chemistry and dynamics of satellite atmospheres?
• How do exogenic processes modify these bodies?
• How do satellites influence their own magnetospheres and those of their parent planets?
Subsequent sections examine each of these objectives in turn, identifying key questions to be addressed and future investigations and measurements that could provide answers.
How Do Active Endogenic Processes Shape the Satellites’ Surfaces and Influence Their Interiors?
Watching active geology as it happens provides unique insights into planetary processes that can be applied to less active worlds. Active endogenic geologic processes, both volcanic and tectonic, can be observed directly on Io and Enceladus, and Europa’s low crater density implies that ongoing activity is also plausible there. An isolated active region on Europa comparable in size to Enceladus’s south polar province could easily have been missed by previous missions. Evidence for ongoing endogenic activity on Titan has been suggested, and Triton’s plumes may be driven by ongoing endogenic processes.23
Cassini measurements have revealed active cryovolcanism on Enceladus, which provides a window into its interior structure and composition and provides a case study for tidal heating of icy satellites; associated tectonic and other resurfacing activity is seen along and near the tiger stripes (the active geologic features near the south pole) (Figures 8.5 and 8.6), shedding light on the origin of similar double ridges on Europa.24 Cassini images of Titan’s surface show many enigmatic features, some of which may result from active cryovolcanism. The source of the current atmospheric methane, which should be destroyed on geologically short timescales, remains problematic, and cryovolcanic supply remains plausible.25
At Jupiter, the Pluto-bound New Horizons spacecraft demonstrated the potential of high data rates and sensitive instrumentation for illuminating active volcanic processes on Io, capturing spectacular images and movies of its volcanic plumes (Figure 8.7).26
Much remains to be learned about active volcanic and tectonic processes. Some important questions include the following:
• What mechanisms drive and sustain Enceladus’s plumes and active tiger stripe tectonics?
• What are the magnitude, spatial distribution, temporal variability, and dissipation mechanisms of tidal heating within Io, Europa, and Enceladus?
• Is there active cryovolcanism on Titan?
• What are the eruption mechanisms for Io’s lavas and plumes and their implications for volcanic processes on early and modern Earth?
Future Directions for Investigations and Measurements
Key investigations and measurements into active tectonic and volcanic processes include (1) exploration of Io’s dynamic volcanism in the temporal domain at high spatial resolution, over timescales ranging from minutes (for the dynamics of active plumes) to weeks or decades (for the evolution of lava flows and volcanic centers), (2) global maps of Titan’s surface morphology and surface composition to search for evidence for present-day geologic activity, and (3) acquisition of higher-resolution thermal and visible imaging of the active south pole of Enceladus, including temporal coverage, to elucidate plume generation mechanisms. Other important objectives include a search for activity on other satellites such as Europa by looking for thermal anomalies, gas and dust plumes, or surface changes, as well as collection of additional in situ measurements of the composition of the endogenic materials lofted into the atmospheres or plumes of these satellites.
What Processes Control the Chemistry and Dynamics of Satellite Atmospheres?
Satellite atmospheres are exceptionally varied (see Table 8.1), and a great range of processes govern their structures, chemistries, and dynamics. Surface pressures range over 12 orders of magnitude, from picobars to
1.5 bar (~1.5 times Earth’s surface pressure). The thinnest atmospheres, including those of Europa, Ganymede, and probably Callisto, are created by sputtering (i.e., ejection of particles from the surface by plasma bombardment) and are dominated by oxygen molecules that are too sparse to interact significantly with each other.27 Io’s patchy atmosphere, dominated by sulfur dioxide, results from a combination of volcanic supply and surface frost evaporation,28 whereas Triton’s denser global molecular nitrogen-dominated atmosphere is supported entirely by the evaporation of surface frosts.
Ground-based observations have furthered understanding of the distribution of the atmosphere-supporting molecular nitrogen and methane frosts over the surface of Triton.29 Ground-based and Hubble Space Telescope observations have demonstrated that Io’s atmosphere is concentrated in the equatorial regions and shows stable 10-fold variations in density with longitude.30
By far the largest satellite atmosphere is Titan’s; dominated by molecular nitrogen, it dwarfs Earth’s atmosphere and originated from the outgassing of volatiles during Titan’s formation, continuing into at least the recent past. Titan’s atmosphere experiences a range of dynamical and chemical processes31 (Figure 8.8). The second most abundant constituent, methane, exists as a gas, a liquid, and a solid, and cycles from the surface to the atmosphere, with clouds, rain, and lakes. The temperature profile manifests greenhouse warming and “anti-greenhouse” cooling. The dynamics of Titan’s atmosphere range in scale from global circulation patterns to local methane storms. Titan’s atmospheric composition is affected primarily by the dissociation of methane and nitrogen by solar ultraviolet radiation and magnetospheric electrons, which leads to a complex chemistry that extends from the ionosphere down to the surface.
Measurements by Cassini and Huygens, complemented by ground-based observations, have revolutionized understanding of Titan’s atmosphere. Cassini and ground-based telescopes have begun to characterize the seasonal variations in Titan’s clouds and circulation patterns, for example recently observing the appearance of clouds at the beginning of northern spring, and Cassini has revealed surface terrains shaped by rain, rivers, and wind, which point to weather processes similar to those on Earth, with convection, evaporation, and rainfall. Measurements have also revealed that Titan’s ion chemistry and photochemistry produce a multitude of heavy organic molecules, likely containing amino acids and nucleotides.
Some important questions concerning the chemistry and dynamics of satellite atmospheres include the following:
• What is the temporal and spatial variability of the density and composition of Io’s atmosphere, how is it controlled, and how is it affected by changes in volcanic activity?
• What are the relative roles of sublimation, molecular transport, sputtering, and active venting in generating tenuous satellite atmospheres?
• Do the large organic molecules detected by Cassini in Titan’s haze contain amino acids, nucleotides, and other prebiotic molecules?
• What processes control Titan’s weather?
• What processes control the exchange of methane between Titan’s surface and the atmosphere?
• Are Titan’s lakes fed primarily by rain or by underground methane-ethane “aquifers”?
• How do Titan’s clouds originate and evolve?
• What is the temperature and opacity structure of Titan’s polar atmosphere, and what is its role in Titan’s general circulation?
• What is Triton’s surface distribution of molecular nitrogen and methane, and how does it interact with the atmospheric composition and dynamics?
Future Directions for Investigations and Measurements
Improved understanding of the chemistry and dynamics of Io’s atmosphere will require improved mapping of the spatial distribution and temporal variability of its atmosphere and associated correlations with local time and volcanic activity, as well as measurement of the diurnal variation in frost temperatures, and direct sampling of the atmosphere to determine composition. New advances in characterizing the tenuous atmospheres of the icy Galilean and saturnian satellites can be achieved by direct sampling during flybys and, where possible, by observation of their ultraviolet emissions.
Continued observations of seasonal changes on Titan will be vital to understanding the dynamics of its atmosphere and its interaction with the surface. Improved understanding of its organic chemistry will require in situ
atmospheric compositional measurements capable of characterizing complex organic molecules. New insights into atmosphere-surface interactions and energy balance on Titan will require global and regional morphological and compositional mapping of the surface as well as measurements of lake composition and evaporation processes. Future measurements of the vertical structure of Titan’s hazes and clouds, their densities, and particle sizes and shapes are needed to understand cloud and haze formation and evolution, particularly in the polar regions.
Advancing the exploration of Triton will require detailed surface compositional and temperature maps coupled with ultraviolet stellar and radio occultations, as well as direct samples of the atmosphere from spacecraft flybys.
New laboratory data on the spectroscopy of mixtures including molecular nitrogen, methane, ethane, and propane liquid and ice, as well as methane gas at large column abundances (~10^29 m^-2) and low temperature (~85 K), are critical to understand the volatile inventory on Titan and the composition of Triton’s surface and atmosphere.
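To give a rough sense of the scale implied by that figure, the short Python sketch below estimates Titan’s vertical methane column from approximate, assumed surface values (surface pressure, gravity, and a near-surface methane mole fraction of a few percent); it is an illustrative back-of-the-envelope estimate, not a value taken from this report.

    # Rough estimate of Titan's vertical methane column (illustrative only;
    # the values below are approximate assumptions, not report values).
    P      = 1.5e5       # surface pressure, Pa (~1.5 bar)
    g      = 1.35        # surface gravity, m/s^2 (approximate)
    amu    = 1.6605e-27  # atomic mass unit, kg
    m_mean = 28.0 * amu  # mean molecular mass of an N2-dominated atmosphere
    x_ch4  = 0.05        # assumed near-surface methane mole fraction (~5%)

    column_total = P / (g * m_mean)    # total molecules per square meter
    column_ch4   = x_ch4 * column_total
    print(f"total column   ~ {column_total:.1e} m^-2")   # ~2e30 m^-2
    print(f"methane column ~ {column_ch4:.1e} m^-2")     # ~1e29 m^-2

If these assumed values are representative, a laboratory column of ~10^29 m^-2 corresponds to roughly one full vertical column of methane in Titan’s atmosphere, which is why such extreme laboratory pathlengths are needed to interpret remote-sensing spectra.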
How Do Exogenic Processes Modify These Bodies?
Most of the large satellites are embedded in the hot corotating plasmas of their planets’ magnetospheres. The plasmas erode the surfaces of these satellites through ion sputtering and also chemically modify them through electron-induced radiolysis (i.e., radiation-driven chemistry).32 With the exception of Ganymede (which is protected by its own magnetic field), the trailing hemispheres of the satellites bear the brunt of the corotating plasma onslaught (Figure 8.9). Ion sputtering results in the formation of tenuous atmospheres and even circumplanetary ion and neutral tori (such as around the orbits of Io and Europa), and potentially allows orbital measurement of surface composition via sputtered products. Europa may lose around 2 centimeters of its surface to plasma sputtering every million years.33 Implantation of exogenic species can be significant (for instance, sulfur of likely ionian origin is found on Europa’s trailing side), and radiolytic processing generates reactive species such as molecular oxygen and hydrogen peroxide in surface ices, which might, in the case of Europa or Enceladus, deliver chemical energy to underlying bodies of liquid water in quantities sufficient to power biological activity.34
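To illustrate the magnitude of the quoted sputtering erosion rate, the following back-of-the-envelope Python sketch converts roughly 2 centimeters of ice lost per million years into a globally averaged mass-loss rate; the ice density and the assumption that the rate applies uniformly over Europa’s surface are illustrative simplifications, not figures given in this report.

    import math

    R_EUROPA = 1.5608e6          # Europa's radius, m (approximate)
    RHO_ICE  = 920.0             # density of water ice, kg/m^3 (approximate)
    EROSION  = 0.02              # m of ice removed per million years (quoted above)
    MYR      = 1.0e6 * 3.156e7   # seconds in one million years

    area      = 4.0 * math.pi * R_EUROPA**2        # surface area, m^2
    mass_rate = RHO_ICE * EROSION * area / MYR     # kg/s if applied globally
    print(f"globally averaged loss ~ {mass_rate:.0f} kg/s")   # roughly 18 kg/s

A loss rate of order tens of kilograms per second, sustained over geologic time, is consistent with the sputter-generated atmospheres and neutral tori described above.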
Micrometeoroids play a crucial role in regolith generation and in redistributing radiolytic products to the subsurface layers through impact gardening. Regolith thickness may be many meters. Impacts may eject surface dust
samples to altitudes where they can be analyzed by orbiting or flyby spacecraft. Macroscopic impacts are the major landform generators on many satellites, and are powerful probes of the structure and composition of the subsurface that they penetrate. Crater populations provide information on relative ages of surface units and on the population of projectiles over time.
Solar radiation also alters planetary surfaces. Extreme ultraviolet photolysis (i.e., photon-driven chemistry) modifies surface composition (though it is dominated by particle radiation on Jupiter’s moons), and solar ultraviolet radiation has a major influence on the atmospheric chemistry of Titan. Solar-driven frost sublimation is an important process in atmospheric support and the modification of surface albedo and composition.
Recent studies based on Cassini data indicate that in Saturn’s magnetosphere, the loss of surface material from plasma sputtering from the icy satellites is minimal (less than a few grams per second for all of the satellites). Cassini observations show that the rate of loss of heavy ions from Titan due to solar and magnetospheric effects is much larger than expected, and a mass as large as that of the present-day atmosphere may have been lost to space over the lifetime of Titan, a conclusion supported by evidence for significant nitrogen-15/nitrogen-14 fractionation.35 Analysis of Cassini data from Iapetus suggests that its long-mysterious extreme albedo dichotomy results from a combination of exogenic processes (infall of dark dust and the resulting sublimation and migration of water ice), while Enceladus’s plumes have influenced the albedos and the leading and trailing photometric asymmetries of the inner Saturn satellites.
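The logic connecting nitrogen isotope fractionation to atmospheric loss can be illustrated with a simple Rayleigh distillation model, sketched below in Python; the fractionation factor used is a placeholder assumption for illustration, not a measured Titan value.

    # Rayleigh fractionation of atmospheric nitrogen during escape (illustrative).
    # alpha < 1 means 15N escapes less efficiently than 14N, so the remaining
    # atmosphere becomes progressively enriched in 15N as nitrogen is lost.
    alpha = 0.8   # assumed relative escape efficiency of 15N (placeholder value)

    for frac in (0.5, 0.2, 0.1):                # fraction of the original N2 remaining
        enrichment = frac ** (alpha - 1.0)      # (15N/14N) relative to the initial ratio
        print(f"{frac:4.1f} of atmosphere left -> 15N/14N enriched by x{enrichment:.2f}")

In this framework, a measurable enrichment in nitrogen-15 relative to nitrogen-14 implies that a substantial fraction of the original nitrogen inventory has escaped, which is the reasoning behind the mass-loss inference quoted above.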
Some important questions concerning exogenic processes include the following:
• Is Io’s intense magnetospheric interaction responsible for its volatile depletion?
• How is the strong ionosphere of Triton generated?
• How do exogenic processes control the distribution of chemical species on satellite surfaces?
• How are potential Europa surface biomarkers from the ocean-surface exchange degraded by the radiation environment?
• What do the crater populations on the satellites reveal about the satellites’ histories and subsurface structure and about the populations of projectiles in the outer solar system and the evolution thereof?
Future Directions for Investigations and Measurements
Important investigations and measurements into exogenic processes include improved mapping of satellite surface composition to understand and separate the distributions of endogenic and exogenic materials. Because most of the exogenic materials are carried between the moons by plasma processes, in situ measurements of the field and plasma environments are required to understand the relative roles of exogenic and endogenic processes in defining the surface chemistries of the moons. These measurements may also be able to discover active venting from satellites. Improved remote sensing of impact structures, including topography and subsurface sounding (e.g., to reveal melt sheets and crustal thinning), will enhance understanding of impact processes and their effects on surface evolution. New laboratory studies should be performed to characterize the effects of irradiation on ices infused with exogenic and endogenic materials. Obtaining data on bulk ices and not just thin films is important because energetic electrons and photons often travel large distances before interacting with the contact material. More laboratory data are also needed to understand how the spectral characteristics of the icy satellites are modified by ion-induced sputtering, electron irradiation, micrometeoroid bombardment, and energetic photon bombardment in the cold, low-pressure environments of the icy satellites.
How Do Satellites Influence Their Own Magnetospheres and Those of Their Parent Planets?
The magnetospheres of Jupiter, Saturn, and Neptune (but not Uranus) derive a large fraction of their plasma and neutral content from their embedded satellites. In Jupiter’s magnetosphere, Io’s volcanoes deliver between
1 and 2 tons per second of material (mostly sulfur dioxide, sulfur, and oxygen) through Io’s atmosphere to the magnetosphere, and changes in plasma density may be related to changes in volcanic activity.36 Saturn’s magnetosphere is dominated by material from Enceladus, as detailed below.
The plasma in Neptune’s magnetosphere appears to be dominated by positive nitrogen ions derived mainly from the atmosphere of its moon Triton. Escape of electrically neutral particles from Triton supplies a neutral torus with a peak density of ~400 cm^-3 near the orbit of Triton.37
Ganymede’s magnetosphere derives its plasma from its own sputter-generated atmosphere and also captures plasma from the magnetosphere of Jupiter. The residence time of plasma is quite short, and the overall densities of charged particles are small.38
Ground-based telescopic observations of Io’s torus and the associated fast neutral nebula continue to improve understanding of how Io refills its torus and ultimately supplies plasma to Jupiter’s magnetosphere. Continued analysis of Galileo and Cassini data has highlighted Europa as another important source of plasma in Jupiter’s magnetosphere, revealing that a neutral atomic- and molecular-hydrogen torus is present near the orbit of Europa.39
Cassini has revealed that most of the material in Saturn’s magnetosphere, predominantly water, hydroxyl, and oxygen, is derived from the south polar plume of Enceladus.40 Unlike at Jupiter, this material is largely in a neutral rather than an ionized state. Saturn’s E-ring is continually resupplied by ice particles from the Enceladus plumes. Titan also loses a considerable amount of neutral material from its atmosphere, yet there is no evidence of the presence of plasma derived from Titan in the magnetosphere of Saturn.
Some important questions concerning how satellites influence their own magnetospheres and those of their parent planets include the following:
• Why is Jupiter’s magnetosphere dominated by charged particles whereas Saturn’s magnetosphere is dominated by neutral species?
• What fraction of the material in Jupiter’s magnetosphere originates from Europa and other icy satellites?
• Is the reconnection in Ganymede’s magnetosphere steady or patchy and bursty?
• How rapidly does Saturn’s magnetosphere react to the temporal variability of Enceladus’s plume?
• Do other saturnian icy satellites such as Dione and Rhea contribute a measurable amount of neutrals or plasma to Saturn’s magnetosphere?
• What is the nature of Triton’s inferred dense neutral torus?
Future Directions for Investigations and Measurements
Investigations and measurements important to advancing understanding of how satellites influence their own magnetospheres and those of their parent planets include (1) measurement of the composition of the jovian plasma and concurrent observations of Io’s volcanoes and plumes to understand the roles of Io and the icy satellites (especially Europa) in populating Jupiter’s magnetosphere and (2) simultaneous multiple spacecraft measurements of the jovian system to help to address the problem of temporal versus spatial change in Jupiter’s and Ganymede’s magnetospheres and to enhance understanding of how plasma populations move around in these magnetospheres. Also key are continued field and plasma measurements and monitoring of Enceladus’s plume to better elucidate the roles of Enceladus and other icy satellites in populating Saturn’s magnetosphere. A survey of the fields and plasmas of Neptune’s magnetosphere, supplemented by low-energy neutral-atom imaging of the magnetosphere, would dramatically improve understanding of Triton’s neutral torus.
The understanding of humanity’s place in the universe is a key motivation for the exploration of the solar system in general and planetary satellites in particular. Satellites provide many of the most promising environments for the evolution of extraterrestrial life, or for understanding the processes that led to the evolution of life on our own planet. Important objectives relevant to this goal include the following:
• Where are subsurface bodies of liquid water located, and what are their characteristics and histories?
• What are the sources, sinks, and evolution of organic material?
• What energy sources are available to sustain life?
• Is there evidence for life on the satellites?
Subsequent sections examine each of these objectives in turn, identifying key questions to be addressed and future investigations and measurements that could provide answers.
Where Are Subsurface Bodies of Liquid Water Located, and
What Are Their Characteristics and Histories?
A fundamental requirement for habitability is the presence of liquid water. Several of the larger satellites are thought to possess at least some liquid water in their interiors.41 In the coming decade, two key objectives will be to further characterize the known subsurface oceans, and to determine whether other bodies also possess such oceans.
One of the key results of the Galileo mission was the use of Jupiter’s tilted magnetic field to detect subsurface oceans via magnetic induction on Europa, Ganymede, and Callisto.42 However, neither the thickness nor the conductivity (and thus composition) of these oceans can be uniquely determined with the current observations.
The plumes on Enceladus include salt-rich grains, for which the most likely source is a salty subsurface body of liquid.43 The observed surface heat flux also suggests a global ocean, which would permit greater tidal flexing and heating of the ice shell; however, a regional “sea” beneath the south pole is also possible.
Because of Titan’s size and the likely presence of ammonia, a subsurface ocean is plausible44 and expected to be a long-lived feature.
Some important questions concerning the location and characteristics of subsurface bodies of liquid water include the following:
• What are the depths below the surface, the thickness, and the conductivities of the subsurface oceans of the Galilean satellites? The depth of the ocean beneath the surface is important because it controls the rate of heat loss from the ocean and the probability of material exchange with the surface. The thickness indicates the likely ocean lifetime, and for Ganymede and Callisto constrains the ocean temperature.
• Which satellites elsewhere in the solar system possess long-lived subsurface bodies of liquid water? Titan and Enceladus are obvious candidates, but other mid-size icy satellites, including those of Uranus and Neptune, could in theory have retained internal oceans to the present day.45 Triton in particular, with its geologically young surface and current geysering, is another interesting candidate.
• For all satellites, what is the lifetime of potential oceans? Ocean lifetime is a key to habitability. If Enceladus is only intermittently active, for instance, as suggested by several lines of evidence, and thus only intermittently supports liquid water, it is less attractive as a potential habitat.46
Future Directions for Investigations and Measurements
Important investigations and techniques for exploring subsurface liquid water include further characterization of the Galilean satellite oceans with satellite orbiters that can measure the induction response at both Jupiter’s spin frequency and the satellite’s orbital frequency. With two frequencies, both the ocean depth and conductivity (which constrains composition) can be solved for independently.47 For Saturn’s satellites, the negligible tilt of Saturn’s magnetic field precludes induction studies by flybys, but studies may be possible from satellite orbit by exploiting the satellite’s orbital eccentricity. A flyby detection of an ocean would be possible at Triton or the uranian satellites. Measurement of tidal flexing, for example at Europa, can provide strong constraints on the thickness of the overlying ice shell and the presence of an ocean. Geodetic studies of the rotation states of these bodies might provide additional constraints on ocean characteristics. Other important investigations and measurements for probing satellite interiors should include use of subsurface sounding from orbit (e.g., using radar) to investigate the presence of near-surface water and perhaps the ice-ocean interface on Europa. In the far term in situ measurements from the surface would provide additional information on the surface composition and environment and the subsurface structure (via seismology or magnetometry). Improved compositional measurements of gas and dust ejected from the Enceladus plume (and potential Europa plumes) would provide valuable insights into the presence of liquid water at the plume source.
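A rough sense of why magnetic induction sounding works, and why measurements at two driving frequencies help, comes from the electromagnetic skin depth of a conducting ocean. The Python sketch below uses an assumed, seawater-like conductivity and approximate driving periods at Europa purely for illustration; none of these numbers are results from this report.

    import math

    MU0 = 4.0e-7 * math.pi          # permeability of free space, H/m

    def skin_depth(sigma, period_s):
        """Skin depth (m) of an oscillating magnetic field in a uniform conductor."""
        omega = 2.0 * math.pi / period_s
        return math.sqrt(2.0 / (MU0 * sigma * omega))

    sigma_ocean = 2.75                      # S/m, assumed seawater-like conductivity
    synodic     = 11.23 * 3600.0            # s, Jupiter's rotation period seen from Europa
    orbital     = 3.55 * 86400.0            # s, Europa's orbital period

    for label, period in (("synodic (~11.2 h)", synodic), ("orbital (~3.55 d)", orbital)):
        print(f"{label}: skin depth ~ {skin_depth(sigma_ocean, period)/1.0e3:.0f} km")

Because the two periods penetrate to different depths (roughly 60 km and 170 km for the assumed conductivity), the amplitude and phase of the induced field at each frequency respond differently to ocean thickness and conductivity, which is why two-frequency measurements can in principle separate the two quantities.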
What Are the Sources, Sinks, and Evolution of Organic Material?
Life as we know it is made of organic material (i.e., complex carbon-based molecules). Organic molecules can be abiotically produced in the laboratory, and it is well known that the solar system and the interstellar medium are rich in nonbiological organics. The satellites have much to teach us about the formation and evolution of complex organics in planetary environments, with implications for the origin and evolution of terrestrial life.
Perhaps the clearest example of organic synthesis in the solar system is on Titan, where Cassini and Huygens have provided abundant new information.48 Methane and nitrogen in the atmosphere are decomposed by particle and solar radiation, starting a chemical reaction cycle that produces a range of gaseous organic molecules, with molecular weights up to and exceeding 5,000, and a haze of solid organics and liquid condensates.
Once on the surface, the organics accumulate and apparently are responsible for the huge dunes seen in Titan’s equatorial regions. The atmospheric organics probably also collect in the lakes seen in the polar regions.
Cassini has revealed that the plume of Enceladus hosts a rich organic chemistry, including methane and a diverse suite of hydrocarbons.49 The source of the organics is not clear. Possibilities include thermal decay of organics brought in with the accreting material, Fischer-Tropsch-type synthesis in a subsurface environment, rock-water reactions that can produce hydrogen, and finally, most speculatively, methanogenic microorganisms.
Europa may have organics on its surface, but this has not been conclusively demonstrated, and the radiation environment makes the survival of surface organics beyond a few million years uncertain.50 If organics are found on the surface of Europa, the next step would be to determine whether these organics may have derived from the underlying ocean and, if so, whether they might be biological in origin.
Some important questions about the sources, sinks, and evolution of organic material include the following:
• What is the nature of the atmospheric processes on Titan that convert the small organic gas-phase molecules observed in the upper atmosphere (such as benzene) into large macromolecules and ultimately into solid haze particles?
• What is the fate of organics on the surface of Titan and their interaction with the seasonally varying lakes of liquid hydrocarbons?
• Are organics present on the surface of Europa, and if so, what is their provenance?
• What is the source of the organic material in the plume of Enceladus?
Future Directions for Investigations and Measurements
Observations of the surface of Europa should include the capability to determine the presence of organics, for instance by reflectance spectroscopy or low-altitude mass spectrometry of possible outgassing and sputter products. Observations should also provide correlation of any surface organics with surface features related to the ocean and provide site selection for a future landed mission. Ultimately, however, a lander will probably be required to fully characterize organics on the surface of Europa. Studies of the organic processes on Titan in the atmosphere and on the surface will be best done with in situ platforms. The diversity of surface features on Titan related to organic solids and liquids suggests that long-range mobility is important. Measurements of the concentration of hydrogen and organics in the lower atmosphere and in surface reservoirs would allow for more quantitative determination of energy sources. Further studies of the high-elevation haze region would help provide a more complete picture of the formation of organic macromolecules. Finally, detailed investigations of the organic chemistry of the plume of Enceladus, with improved mass range and resolution compared to those provided by Cassini, are needed to determine the source of this material. Similar measurements would be important for any plumes that might be found on Europa.
What Energy Sources Are Available to Sustain Life?
On Earth, life derives the energy for primary productivity from two sources: sunlight and chemical redox couples (i.e., pairs of ions or molecules that can pass electrons back and forth). However, for sunlight to be an effective energy source, habitable conditions are required on the surface of a planet, with atmospheric shielding of solar ultraviolet and particle radiation. In the solar system, only Earth and Titan meet these requirements. Elsewhere in the solar system, the habitable zones, if they exist, are below the surface, cut off from sunlight. In these subsurface habitats, chemical redox couples are the most likely source of energy.
On Earth we have discovered three microbial ecosystems that survive without sunlight on redox couples that are produced geologically. Two of these ecosystems are based on hydrogen released by the reaction of water with basaltic rocks and on the subsequent reaction of this hydrogen with carbon dioxide.51 Such an energy source could be operative in the ocean on Europa or in a liquid-water system on Enceladus. The third system is based on the dissociation of water by natural radioactivity,52 which produces both oxidants and hydrogen. The oxidants generate sulfate that is then used, together with the hydrogen, by sulfate-reducing bacteria. These three systems provide an analog for energy sources suggested for Europa and Enceladus in which oxidants are produced on the surface by ionizing radiation and are carried to the water reservoirs below the surface.53
On Titan the availability of chemical energy is obvious. The atmospheric cycle of organic production results in the formation of organics such as acetylene and ethane, with less hydrogen per carbon than methane. These compounds, as well as the solid organic material, will react with atmospheric hydrogen to release energy in amounts that can satisfy the needs of typical Earth microorganisms.
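The scale of the chemical energy available from such hydrogenation reactions can be illustrated with standard free energies of formation at 298 K; Titan’s ~94 K surface conditions shift these values somewhat, so the Python sketch below is a rough, assumed-value estimate rather than a result from this report.

    # Standard Gibbs free energies of formation at 298 K, kJ/mol (approximate
    # textbook values; values at Titan's ~94 K surface temperature differ somewhat).
    dGf = {"C2H2": 209.2, "C2H6": -32.0, "CH4": -50.5, "H2": 0.0}

    # Hydrogenation of acetylene:  C2H2 + 3 H2 -> 2 CH4
    dG_acetylene = 2 * dGf["CH4"] - (dGf["C2H2"] + 3 * dGf["H2"])
    # Hydrogenation of ethane:     C2H6 +   H2 -> 2 CH4
    dG_ethane    = 2 * dGf["CH4"] - (dGf["C2H6"] + dGf["H2"])

    print(f"C2H2 + 3 H2 -> 2 CH4 : dG ~ {dG_acetylene:.0f} kJ/mol")   # about -310
    print(f"C2H6 +   H2 -> 2 CH4 : dG ~ {dG_ethane:.0f} kJ/mol")      # about -69

Yields of tens to a few hundred kilojoules per mole are comparable to, or larger than, the energy yields that support chemotrophic microorganisms on Earth, which is the sense in which these reactions could satisfy the needs of typical Earth microorganisms.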
On Europa and Enceladus there are clearly geothermal energy sources. But the availability of a biologically usable chemical energy source (methanogen or oxidant based) remains speculative though possible.
Some important questions about the available energy sources for sustaining life include the following:
• What is the nature of any biologically relevant energy sources on Europa?
• What are the energy sources that drive the plume on Enceladus? These may lead to understanding the possibilities for biologically relevant energy sources.
• On Titan, how is chemical energy delivered to the surface?
Future Directions for Investigations and Measurements
Important directions for future investigations relating to energy sources for life include (1) measurement of the oxidant content and studies to increase understanding of its formation mechanisms on the surface ice of Europa and Enceladus, (2) through remote sensing, efforts to improve understanding of geologic processes that might deliver surface oxidants to subsurface liquid water, and (3) for Titan, improved measurements of atmospheric and surface chemistry to increase understanding of the biological availability of chemical energy.
Is There Evidence for Life on the Satellites?
The search for evidence of life is an emerging science priority for the moons of the outer solar system. Organic material produced biologically is distinguishable from that produced abiotically.54 Studies of the plume of Enceladus and any organics on the surface of Europa (or in potential Europa plumes) may provide evidence of biological complexity even if the organisms themselves are no longer present or viable. Titan has a liquid on its surface—methane, not water—and there are speculations that it may be a suitable medium for organic life as well.55
The detection of organic material in the icy plume of Enceladus indicates the possibility of conditions suitable for biological processes, present or past. On Titan organic molecules are clearly present and interacting with liquids (certainly liquid hydrocarbons and possibly ammonia-water mixtures), but these interactions are not necessarily of biological origin.
Some important questions relevant to evidence for life on the satellites include the following:
• Does (or did) life exist below the surface of Europa or Enceladus?
• Is hydrocarbon-based life possible on Titan?
Future Directions for Investigations and Measurements
A key future investigation of the possibility of life on the outer planet satellites is to analyze organics from the interior of Europa. Such analysis requires either a lander in the far term or the discovery of active Enceladus-style venting, which would allow analysis from orbit with a mission started in the next decade. A detailed characterization of the organics in the plume of Enceladus is important to search for signatures of biological origin, such as molecules with a preferred chirality or unusual patterns of molecular weights. A major investigation should be to characterize the organics on Titan’s surface, particularly in liquids, to reveal any potentially biological processes occurring there.
Connections with Other Parts of the Solar System
The satellites of the outer planets embody processes that operate throughout the solar system. Io’s silicate volcanism provides living examples of volcanic processes that have been important now or in the past on all the terrestrial planets and the Moon. Eruptions seen in recent years are comparable to the largest terrestrial eruptions witnessed in human history. Io’s high heat flow provides an analog to the terrestrial planets shortly after their formation, and its loss of atmospheric mass illuminates mechanisms of the loss of volatiles throughout the solar system. Ganymede’s surprising magnetic field may help elucidate the dynamos in terrestrial planets, and the poorly differentiated interiors of Callisto and Titan constrain timescales for assembly of the solar system. An understanding of Titan’s methane greenhouse might improve understanding of anthropogenic greenhouse warming on Earth, or Venus’s greenhouse, and Titan’s organic chemistry illuminates terrestrial prebiotic chemical processes. Triton provides a valuable analog for large evolved bodies in the Kuiper belt such as Pluto and Eris.
In turn, studies of other bodies in the solar system help to advance understanding of the giant-planet satellites. The composition and internal structure of the giant planets constrain the raw materials and formation environments of the satellites, while the populations and compositions of primitive bodies illuminate the current and past impact environments of the satellites.
Connections with Heliophysics
There is much overlap between planetary satellite science goals and NASA solar and space physics goals,56 because many giant-planet satellites are embedded in their planetary magnetospheres and interact strongly with those magnetospheres, producing a rich variety of phenomena of great interest to both fields.
Connections with Extrasolar Planets
The first detections of extrasolar planetary satellites may not be far off (Kepler may detect satellite-induced planetary wobble via transit timings, for instance). When such satellites are found, our understanding of our own giant-planet satellite systems will be essential for interpretation of the data on extrasolar satellites, both for the direct understanding of those worlds and for their use as constraints on the evolution of their primary planets. Extrasolar satellite systems will provide more habitable environments than their primaries in many cases, and understanding of those environments will depend heavily on our understanding of satellites in the solar system.
In recent years, NASA’s research and analysis (R&A) activities for the outer solar system have been increased through the establishment of the Cassini Data Analysis Program, the Outer Planets Research Program, and the Planetary Mission Data Analysis Program. All of these programs have enabled growth in the understanding of outer solar system bodies and the training of new researchers. They are essential to harvesting the maximum possible science return from missions, whether past (Voyager, Galileo), present (New Horizons, Cassini), or future (Juno, Jupiter Europa Mission).
Satellite science will benefit from continued development of a wide range of instrument technologies designed to improve resolution and sensitivity while reducing mass and power, and to exploit new measurement techniques. Specific instrumentation requirements for the next generation of missions to the satellites of the outer planets include the following:
• In the immediate future, continued support for Europa orbiter instrument development. Europa instruments face unique challenges: they must survive not only unprecedented radiation doses, but also prelaunch reduction of microbial bioburden to meet planetary protection requirements. Instrumentation for future missions will also benefit from Europa instrument development, e.g., radiation-hardened technology for Io and Ganymede missions and the ability to survive microbial bioburden reduction for missions to Enceladus.
• Development of instruments for future Titan missions, particularly remote-sensing instruments capable of mapping the surface from orbit and in situ instruments, needed for detailed chemical, physical, and astrobiological exploration of the atmosphere, surface, and lakes, which must operate under cryogenic conditions.57,58
Aerocapture should be considered as an option for delivering more mass to Titan in future Titan flagship mission studies, and it is likely to be mission-enabling for any future Uranus or Neptune orbiter missions (Chapters 7 and 11 and Appendix D). Further risk reduction will be required before high-value, highly visible missions are allowed to use aerocapture techniques.
Plutonium power sources are essential for most outer planet satellite exploration, and completion of development and testing of the new Advanced Stirling Radioisotope Generators (ASRGs) is necessary to make the most efficient future use of limited plutonium supplies. However, maintenance of the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) technology is also required because such a device is better suited to use on a Titan hot-air balloon than is an ASRG.
Hot-air balloons at Titan will be of great utility for understanding the atmospheric processes and chemistry. There is currently a European effort to advance this technology.59,60 Titan aircraft provide a potential alternative to balloons if plutonium supplies are insufficient to fuel an MMRTG but sufficient for an ASRG.61
The identification of trajectories that enable planetary missions or significantly reduce their cost is an essential and highly cost-effective element in the community’s tool kit.62 The history of planetary exploration is replete with examples, and the Enceladus orbiter mission concept discussed in this report is an example of a mission enabled by advanced trajectory analysis. A sustained investment in the development of new trajectories and techniques for both chemical propulsion and low-thrust propulsion mission designs would provide a rich set of options for future missions.
A radiation effects risk reduction plan is in place and would be implemented as part of the Phase A activities for a Jupiter Europa Orbiter (JEO). Future missions to Io and Ganymede will benefit from this work, which will have to be sustained to ensure that the technology base is adequate to meet the harsh radiation environment that JEO and future missions will encounter.
The technology base for the thermal protection systems (TPS) used for atmospheric entry is fragile, yet it is important for satellite science applications, including aerocapture at Titan from heliocentric orbit and aerocapture at Neptune. The technology base that supports the thermal protection systems for re-entry vehicles was developed in the 1950s and 1960s, with small advances thereafter. The near loss of the TPS technology base endangered the development of Mars Science Laboratory, which required the use of phenolic impregnated carbon ablator (PICA). Although PICA is an old technology, its use for MSL was enabled by the significant investment in the Orion TPS project that was required to resurrect a technology base that had atrophied. One very important lesson learned in this process was that several years of intense and expensive effort can be required to implement even modest improvements in TPSs.63
The committee considered a wide range of potential mission destinations and architectures, guided by community input provided in white papers, with particular emphasis on the recommendations of the Outer Planets Assessment Group (OPAG).64 The committee evaluated their cost-effectiveness in addressing the goals and objectives discussed earlier. The feasibility of several missions studied was influenced by the availability of gravity assists from Jupiter or Saturn, and the necessary planetary alignments should be considered when developing long-term strategies for solar system exploration (Figure 8.10).
The challenges posed by the physical scale of the outer solar system and resulting long flight times, and the relative immaturity of current understanding of outer planet satellites, are best met with the economies of mission scale: large missions are the most cost-effective. Cassini has spectacularly demonstrated the value of large, well-instrumented missions, for instance in its multi-instrument discovery and detailed characterization of plume activity on Enceladus. A role for smaller missions remains, however, and mission studies prepared for the committee (Appendix G) demonstrated that scientifically exciting and worthwhile missions can be conducted for less than the cost of the flagship missions.
Previously Recommended Missions
Cassini Extended Mission
The Cassini spacecraft has been in Saturn orbit since 2004 and continues to deliver a steady stream of remarkable discoveries. Recent satellite science highlights have included direct observations of changing lake levels on Titan and high-resolution observations of the Enceladus plumes and their source regions that are refining understanding of plume composition and source conditions. The extension of the mission through northern hemisphere summer solstice in 2017—the Cassini Solstice mission—will provide major opportunities for satellite science.65 Seasonal change is key to understanding the dynamics of Titan’s atmosphere and interactions with the surface, and the mission extension will more than double the seasonal time base, including the critical period when the northern hemisphere lakes and polar vortex respond to major increases in insolation as spring advances. Twelve additional Enceladus flybys will map its gravity field, search for temporal and spatial changes in plume activity and composition, and provide unprecedented detail on the south polar thermal emission and heat flow. In addition, flybys of Rhea and Dione will probe their interiors and search for endogenic activity.
Europa Geophysical Explorer
Europa, with its probable vast subsurface ocean sandwiched between a potentially active silicate interior and a highly dynamic surface ice shell, offers one of the most promising extraterrestrial habitable environments, and a plausible model for habitable environments beyond our solar system. The larger Jupiter system in which Europa resides hosts an astonishing diversity of phenomena, illuminating fundamental planetary processes. While Voyager and Galileo have taught us much about Europa and the Jupiter system, the relatively primitive instrumentation of those missions, and the low data volumes returned, have left many questions unanswered, and it is likely that major discoveries remain to be made (Figure 8.11).
The Europa Geophysical Explorer mission was endorsed by the NRC’s 2003 planetary science decadal survey as its number one recommended flagship mission to be flown in the decade 2003-2013.66 That report states, in words that remain true today, “The first step in understanding the potential for icy satellites as abodes for life is a Europa mission with the goal of confirming the presence of an interior ocean, characterizing the satellite’s ice shell, and understanding its geological history. Europa is important for addressing the issue of how far organic chemistry goes toward life in extreme environments and the question of how tidal heating can affect the evolution of worlds. Europa is key to understanding the origin and evolution of water-rich environments in icy satellites” (p. 196). A Europa orbiter mission was subsequently given very high priority by the 2006 Solar System Exploration Roadmap67 and the 2007 NASA Science Plan,68 and it is the highest-priority large mission recommended by OPAG.69
The Europa Jupiter System Mission (EJSM), now under advanced study by NASA,70 takes the goals of the Europa Geophysical Explorer mission and adds Jupiter system science for an even broader science return. The
proposed mission will be a partnership with the European Space Agency (ESA) and will have two components, to be launched separately: a Jupiter Europa Orbiter (JEO), which will be built and flown by NASA, and a Jupiter Ganymede Orbiter (JGO), which will be built and flown by ESA and will accomplish numerous Callisto flybys before going into orbit around Ganymede (Figure 8.12). Both spacecraft will be in the jovian system at the same time, allowing for unprecedented synergistic observations.71 Even if ESA’s JGO does not fly, the NASA JEO mission will enable huge leaps in understanding of icy satellites, giant planets, and planetary systems, addressing a large fraction of the science goals outlined in this chapter.
The overarching goals of this mission are as follows, in decreasing priority order:
1. Characterize the extent of the ocean and its relation to the deeper interior.
2. Characterize the ice shell and any subsurface water, including their heterogeneity, and the nature of surface-ice-ocean exchange.
3. Determine global surface compositions and chemistry, especially as related to habitability.
4. Understand the formation of surface features, including sites of recent or current activity, and identify and characterize candidate sites for future in situ exploration.
5. Understand Europa’s space environment and interaction with the magnetosphere.
6. Conduct Jupiter system science (Jupiter’s atmosphere, magnetosphere, other satellites, and rings).
Launched in 2020, JEO would enter the Jupiter system in 2026, using Io for a gravity assist prior to Jupiter orbit insertion. This strategy increases the delivered mass to Europa by significantly decreasing the required Jupiter orbit insertion propellant in exchange for a modest increase in the radiation shielding of the flight system. The JEO mission design features a 30-month jovian system tour, which includes four Io flybys, nine Callisto flybys (including one near-polar), six Ganymede flybys, and six Europa flybys along with ~2.5 years of observing Io’s volcanic activity and Jupiter’s atmosphere, magnetosphere, and rings.
After the jovian tour phase, JEO would enter orbit around Europa and spend the first month in a 200-kilometer circular orbit before descending to a 100-kilometer circular orbit for another 8 months. The mission would end with impact onto Europa.
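For context on what the mapping orbit implies, the following Python sketch estimates the period of a 100-kilometer circular orbit around Europa; Europa’s mass and radius are approximate values assumed here for illustration.

    import math

    G        = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M_EUROPA = 4.80e22      # Europa's mass, kg (approximate)
    R_EUROPA = 1.5608e6     # Europa's radius, m (approximate)
    ALTITUDE = 100.0e3      # orbit altitude, m

    a = R_EUROPA + ALTITUDE                              # radius of the circular orbit
    period = 2.0 * math.pi * math.sqrt(a**3 / (G * M_EUROPA))
    print(f"orbital period ~ {period / 3600.0:.1f} h")   # roughly 2.1 h
    print(f"orbits per day ~ {86400.0 / period:.0f}")    # about 11-12

An orbital period of roughly two hours yields on the order of a dozen ground tracks per day, which is what makes global mapping plausible within the months-long orbital phase described above.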
Flagship-class missions historically have a greatly enhanced science return compared to that of smaller missions—the whole is greater than the sum of the parts—and so the higher cost of a flagship mission compared to a New Frontiers-class mission is well justified. Europa remains the highest priority for satellite exploration, and a Europa mission deserves sufficient resources to realize its phenomenal scientific potential. Therefore, a Europa mission should take precedence over smaller missions to outer solar system targets during the next decade. If ESA’s Jupiter Ganymede Orbiter also flies, then the science return will be even higher.
The intense jovian radiation environment remains the largest challenge for the JEO spacecraft and its instruments, although thanks to extensive study of the issue in the past decade, the risks and mitigation strategies are now well understood. This work has included characterization of the radiation hardness of key electronic components (including development of an “approved parts and materials list” for use by instrument developers), improved modeling of expected radiation fluxes, and detailed consideration of shielding strategies. NASA should continue to work closely with instrument developers to understand and mitigate the impact of radiation on JEO instruments, prior to final payload selection.
Io Observer
Io provides the ideal target to study tidal dissipation and the resulting variety of volcanic and tectonic processes in action, with fundamental implications for the thermal co-evolution of the Io-Europa-Ganymede system as well as for habitable zones around other stars. As such, an Io mission is of high scientific priority,72 as highlighted in the 2003 planetary science decadal survey73 and subsequent 2008 New Frontiers recommendations.74 An Io mission was studied in detail at the committee’s request. The study (Appendix G) and subsequent cost and technical evaluation (CATE) analysis (Appendix C) found this mission to be a plausible candidate for the New Frontiers program.
The science goals of the Io Observer mission include the following:
• Study Io’s active volcanic processes;
• Determine the melt fraction of Io’s mantle;
• Constrain tidal heating mechanisms;
• Study tectonic processes;
• Investigate interrelated volcanic, atmospheric, plasma-torus, and magnetospheric mass- and energy-exchange processes;
• Constrain the state of Io’s core via improved constraints on whether Io generates a magnetic field; and
• Investigate endogenic and exogenic processes controlling surface composition.
Two baseline options were studied; one used ASRGs and the other was solar powered. Each was a Jupiter orbiter carrying a narrow-angle camera, ion-neutral mass spectrometer, thermal mapper, and magnetometers
and performing ten Io flybys. A floor mission with reduced payload and six flybys was also studied, as was an enhanced payload including a plasma instrument. A high-inclination orbit (~45°) provides polar coverage to better constrain the interior distribution of tidal heating and significantly reduces accumulated radiation: the total radiation dose is estimated to be half that of the Juno mission. No new technology is required. All objectives are addressed by the floor mission and accomplished to much greater extent by the baseline and enhanced missions. This mission provides complementary science to that planned by JEO (which is limited by JEO’s low-inclination orbit, less Io-dedicated instrumentation, and small number—three—of Io science flybys, plus one non-science Io flyby).
The Io Observer mission could also, with the addition of suitable particles and fields instrumentation (perhaps funded separately), address some of the science goals of the Io Electrodynamics mission considered by the 2003 solar and space physics decadal survey.75
New Missions: 2013-2022
Further exploration of Titan is a very high priority for satellite science. White papers from the community provide strong support for Titan science,76,77,78,79 and OPAG endorsed a Titan flagship mission as its second-highest priority flagship mission as part of an outer planets program.80
Titan Saturn System Mission
Many Titan mission concept studies have been conducted over the past decade including the most recent outer planet flagship mission study.81 In that study, completed in 2009, NASA and ESA worked jointly to define a flagship-class mission that would achieve the highest priority science. The resulting concept is called the Titan Saturn System Mission (TSSM) and has three overarching science goals:
1. Explore and understand processes common to Earth that occur on another body, including the nature of Titan’s climate and weather and their time evolution, its geologic processes, the origin of its unique atmosphere, and analogies between its methane cycle and Earth’s water cycle.
2. Examine Titan’s organic inventory, a path to prebiotic molecules. This includes understanding the nature of atmospheric, surface, and subsurface organic chemistry, and the extent to which that chemistry might mimic the steps that led to life on Earth.
3. Explore Enceladus and Saturn’s magnetosphere—clues to Titan’s origin and evolution. This includes investigation of Enceladus’s plume for clues to the origin of Titan ices and a comparison of its organic content with that of Titan, and understanding Enceladus’s tidal heating and its implications for the Saturn system.
The purpose of Goal 1 is characterization of the physical processes, many of which are similar to those on Earth, that shape Titan’s atmosphere, surface, and evolution.
Goal 2 motivates investigation of Titan’s rich organic chemistry. An extensive study is particularly important because it will elucidate the chemical pathways that occur in two environments, the atmosphere and the surface, which may resemble those of early Earth. Measurements of the composition of the thermosphere will determine whether amino acids are made in the upper atmosphere. The chemical pathways that lead to these prebiotic molecules will be investigated to determine whether this formation mechanism is typical, and whether prebiotic molecules are common in irradiated methane- and nitrogen-rich atmospheres, perhaps typical of early Earth. Measurements of the surface will investigate the progress of Titan’s organic chemistry over longer time periods.
Goal 3 involves investigation of Enceladus, whose plumes provide a unique view of the composition and chemistry of the interior, which is likely representative of the same types of icy materials that formed Titan. This goal could possibly be addressed by a separate Enceladus mission as described below, but Enceladus science remains a high priority for a Titan mission, if Enceladus is not targeted separately. The TSSM mission design includes Enceladus flybys prior to Titan orbit insertion, but some Titan mission architectures, such as aerocapture directly
from heliocentric orbit, might preclude Enceladus science unless the spacecraft subsequently left Titan orbit for Enceladus; these trade-offs require additional study as mission concepts mature.
The study of such a complex system requires both orbital and in situ elements, and the TSSM concept includes three components—an orbiter, a balloon, and a lander (Figure 8.13).
The TSSM science was rated by both NASA and ESA science review panels as being on a level equivalent to the science of the Europa Jupiter System Mission. The science was rated as excellent and the science implementation as low risk, although the need for continued technology development for TSSM was noted. Based on technical readiness, a joint NASA-ESA recommendation in 2009 prioritized EJSM first, followed closely by TSSM. The multi-element mission architecture is appropriate because it enables complementary in situ and remote-sensing observations. The TSSM study demonstrated the effectiveness of such an approach for accomplishing the diverse science objectives that are high priorities for understanding Titan. However, the details of such an implementation are likely to evolve as studies continue.
Technology needs for Titan, including surface sampling, balloons, and aerocapture, which may enable delivery of additional mass to Titan, were prominent in OPAG’s technology recommendations.82 Technology development priorities for this mission are those needed to address the mission design risks identified by the outer planet flagship review panel.83 Specific components highlighted as requiring development include the following:
• In situ elements enabling extensive areal coverage. The Montgolfière (hot-air) balloon system proposed for TSSM is a promising approach,84,85 but an aircraft, which could use an ASRG rather than an MMRTG,86 might be more appropriate if there is a limited supply of plutonium-238;
• Mature in situ analytical chemistry systems that have high resolution and sensitivity; and
• Sampling systems that can operate reliably in cryogenic environments. (See Chapter 11 for additional details.)
Furthermore, mission studies have shown that any future mission to Saturn will require the use of suitable radioisotope power sources, thus placing a high priority on the completion of the ASRGs and the restart of the plutonium production program by the U.S. Department of Energy.
The committee commissioned a detailed study of a Titan Lake Probe (Appendixes D and G) for considering the mission and instrument capabilities needed to examine the lake-atmosphere interaction as set forth in the TSSM study report.87 In addition, the Titan Lake Probe study evaluated the feasibility and value of additional capability to directly sample the subsurface and lake bottom. The integrated floater/submersible concepts in that study were designed to make measurements at various lake depths and even sample the sediment on the bottom of
the lake. The findings indicated that a system including floater and submersible components and enhanced instrumentation would deliver significantly more science, but at significantly greater mass, than the simpler TSSM lake lander concept. Building on those results, further work is needed to refine lander concepts as part of a flagship mission. Stand-alone lake lander concepts, independent of TSSM, were also studied (Appendixes D and G) but were judged to be less cost-effective than a lake lander integrated with TSSM.
Enceladus Orbiter
Enceladus, with its remarkable cryovolcanic activity, including plumes that deliver samples from a potentially habitable subsurface environment, is a compelling target for future exploration,88,89,90,91 and OPAG recommended study of mid-size Enceladus missions for the coming decade.92 Mission studies commissioned by the committee indicated that a focused Enceladus orbiter mission is scientifically compelling and would cost less than the Europa or Titan flagship missions (Appendix C). Enceladus orbiters have been the subject of several previous mission studies, most recently in 2007 (Figure 8.14).93
The most important science goals for an Enceladus mission, in priority order, are the following:
1. What is the nature of Enceladus’s cryovolcanic activity, including conditions at the plume source, the nature of the energy source, delivery mechanisms to the surface, and mass-loss rates?
2. What are the internal structure and chemistry (particularly organic chemistry) of Enceladus, including the presence and chemistry of a global or regional subsurface ocean?
3. What is the nature of Enceladus’s geologic history, including tectonism, viscous modification of the surface, and other resurfacing mechanisms?
4. How does Enceladus interact with the rest of the saturnian system?
5. What is the nature of the surfaces and interiors of Rhea, Dione, and Tethys?
6. Characterize the surface for future landing sites.
The committee commissioned a broad study of possible mission architectures including flybys, simple and flagship-class orbiters, landers, and plume sample return missions, and concluded that a simple orbiter would provide compelling science (Appendix G). A follow-up detailed study (Appendix G) found that the above science goals could be addressed well using a simple orbiter with a payload consisting of a medium-angle camera, thermal mapper, magnetometer, mass spectrometer, dust analyzer, and radio science. Sophisticated use of leveraged flybys of Saturn’s mid-size moons before Enceladus orbit insertion was found to reduce delta-V requirements, and thus mass and cost, compared to previous studies.94 The mission requires plutonium for power, in the form of ASRGs, but requires little other new technology development. However, planetary protection is an issue for Enceladus because of the possibility of contamination of the probable liquid-water subsurface environment, and mission costs could increase somewhat if it proves necessary to sterilize the spacecraft to meet planetary protection guidelines.
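The cost leverage of such delta-V savings can be illustrated with the rocket equation; the specific impulse and delta-V values in the Python sketch below are assumed, illustrative numbers, not figures from the Enceladus orbiter study.

    import math

    G0  = 9.80665          # standard gravity, m/s^2
    ISP = 320.0            # s, assumed bipropellant specific impulse
    VE  = G0 * ISP         # effective exhaust velocity, m/s

    def propellant_fraction(delta_v):
        """Propellant mass as a fraction of initial spacecraft mass (rocket equation)."""
        return 1.0 - math.exp(-delta_v / VE)

    for dv in (2000.0, 1500.0):    # m/s, illustrative post-launch delta-V budgets
        print(f"delta-V = {dv:.0f} m/s -> propellant fraction ~ {propellant_fraction(dv):.2f}")

In this illustrative case, trimming 500 meters per second from the required delta-V reduces the propellant fraction from roughly 47 percent to roughly 38 percent of the spacecraft’s initial mass, freeing mass for payload, structure, or margin; this is the sense in which leveraged flyby tours translate into lower mission mass and cost.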
The exploration of the uranian satellites could potentially be accomplished by the Uranus Orbiter and Probe mission discussed in Chapter 7. The proposed satellite tour (Appendix G), which includes two targeted flybys of each of the five major satellites, would help to fill a major gap in understanding of planetary satellites, because the sides opposite to those seen by Voyager 2 would be illuminated, flybys would be closer than those of Voyager (for instance potentially enabling magnetic sounding of satellite interiors), and because of instrumentation improvements relative to Voyager. Neptune orbiter and flyby missions (Appendixes D and G) could potentially address many science goals for Triton.
Rationale for Prioritization of Missions and Mission Studies
The committee’s decision to give higher priority to the Jupiter Europa Orbiter than to the Titan Saturn System Mission was made as follows. The likely science return from both the Europa and Titan missions would be very
high and comparable in value, but the Jupiter Europa Orbiter mission was judged to have greater technical readiness. The technical readiness of the Europa mission results from a decade of detailed study dating back to the original Europa Orbiter concept, for which an Announcement of Opportunity was issued in 1999. The biggest technical issue for the Europa mission, the high radiation dose, remains challenging but has been mitigated by the extensive preparatory work. The Titan Saturn System Mission concept is considerably less mature, with more potential for the emergence of unanticipated problems. Also, the Titan mission is much more dependent for achievement of its
science goals on integration with non-U.S. mission components. Although the Jupiter Europa Orbiter is intended as an element of a multi-spacecraft mission, operating in tandem with the ESA-supplied Jupiter Ganymede Orbiter, the missions are launched and flown separately, and so integration issues are relatively minor. Also, the majority of the Jupiter Europa Orbiter science goals are achieved independently of the Jupiter Ganymede Orbiter. In contrast, many key science objectives of the Titan mission rely on the balloon and lander elements, which are in an early stage of development. The use of three spacecraft elements at Titan (orbiter, lander, and balloon) also increases the complexity of spacecraft integration and mission operations, and thus the associated risk.
For these and other reasons, the NASA-sponsored evaluation of the 2008 flagship mission studies rated the Europa mission as having mission implementation and cost risk lower than those for the Titan mission. Costs to NASA as estimated for the decadal survey (Appendix C) were lower for the Jupiter Europa Orbiter ($4.7 billion in FY2015 dollars) than for the Titan Saturn System Mission (at least $5.7 billion, after subtraction of the estimated $1 billion cost of the ESA-supplied balloon and lake lander from the $6.7 billion estimate for an all-NASA mission, and addition of any potential costs associated with dividing the mission between NASA and ESA). Finally, the outer planets community, as represented by the Outer Planets Assessment Group (OPAG), ranked the Europa mission as its highest-priority flagship mission, followed by the Titan mission.
JEO is also given higher priority than the Enceladus Orbiter for two primary reasons. JEO’s flagship-class payload will return a greater breadth and volume of science data than would the more focused payload of the Enceladus Orbiter (see the discussion of mission size above). Also the severe limitations of the Galileo data set, due to Galileo’s low data rate and the older technology of its instrument payload, mean that knowledge of Europa and the Jupiter system is now poorer than knowledge of Enceladus and the Saturn system, giving a particularly high potential for new discoveries by JEO at Europa and throughout the Jupiter system.
Among the smaller missions studied by the panel, the Enceladus Orbiter was given highest priority because of the breadth of science questions that it can address (with the potential for major contributions to understanding the chemistry, active geology and geophysics, and astrobiological potential of Enceladus), coupled with its relatively simple implementation, requiring little new technology. The Io Observer was chosen as a New Frontiers candidate because of its compelling science and because it was the only outer planet satellite mission studied for which cost estimates placed it plausibly within the New Frontiers cost cap. Of the other satellite missions studied (Appendix D) the stand-alone Titan Lake Lander was rated lower priority because of its relatively narrow science focus and relatively challenging technology requirements. The Ganymede Orbiter was rated as lower priority for a NASA mission because of the probability that ESA’s planned Jupiter Ganymede Orbiter will achieve most of the same science goals.
Other stand-alone Titan mission concepts that could achieve a subset of the goals of the TSSM mission are also possible. However, implementation of such stand-alone missions is challenging, as evidenced by the fact that only one additional mission that could replace an element of TSSM was proposed in any of the community white papers submitted to the decadal survey: a stand-alone Titan airplane.95 This concept is intriguing, and is noted above as a possible alternative to a balloon as an element of a flagship mission. However, high data rates are required to obtain full benefit from the remote sensing that would be a key measurement goal of an aircraft or a balloon. High data rates are difficult to achieve without the use of a relay spacecraft, making aircraft or balloons less attractive as stand-alone mission candidates than the lake lander chosen for detailed study. One additional stand-alone mission, the Titan Geophysical Network, was proposed in a white paper96 but was not chosen for detailed study because the science goals, which go beyond those of TSSM, were judged to be of lower priority, and the required low-power radioisotope power supplies would entail significant additional development. A stand-alone Titan orbiter without the in situ elements might also be considered, but was not chosen for study because it was not proposed by community white papers, and because of the advantages of an integrated orbiter and in situ elements both for delivery to Saturn and for data relay.
To achieve the primary goals of the scientific study of the satellites of the giant-planet systems as outlined in this chapter, the following actions are needed.
• Flagship missions—The planned continuation of the Cassini mission through 2017 is the most cost-effective and highest-priority way to advance understanding of planetary satellites in the near term. The highest-priority satellite-focused missions to be considered for new starts in the coming decade are, in priority order: (1) Jupiter Europa Orbiter component of EJSM as described in the Jupiter Europa Orbiter Mission Study 2008: Final Report97 and refined subsequently (including several Io science flybys); (2) Titan Saturn System Mission, with both Titan-orbiting and in situ components; and (3) Enceladus Orbiter. JEO is synergistic with ESA’s JGO. However, JEO’s priority is independent of the fate of ESA’s JGO. The Uranus Orbiter and Probe mission discussed in Chapter 7 would return very valuable satellite science, but it is not prioritized here relative to the satellite-focused missions discussed in this chapter.
• New Frontiers missions—An Io Observer, making multiple Io flybys from Jupiter orbit, is the high-priority medium-size mission. The Ganymede Orbiter concept studied at the committee’s request (Appendixes D and G) was judged to be of lower priority for a stand-alone NASA mission.
• Technology development—After the development of the technology necessary to enable JEO, the next highest priority goes to addressing the technical readiness of the orbital and in situ elements of TSSM. Priority areas include the balloon system, low-mass and low-power instruments, and cryogenic surface sampling systems.
• International cooperation—The synergy between the JEO, JGO, and the Japan Aerospace Exploration Agency’s (JAXA’s) proposed Jupiter Magnetospheric Orbiter is great. Continued collaboration between NASA, ESA, and JAXA to enable the implementation of all three components of EJSM is encouraged. Also encouraged is the NASA-ESA cooperation needed to develop the technologies necessary to implement a Titan flagship mission.
1. P.R. Estrada, I. Mosqueira, J.J. Lissauer, G. D’Angelo, and D.P. Cruikshank. 2009. Formation of Jupiter and conditions for accretion of the Galilean satellites. Pp. 27-58 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
2. R. Jaumann, R.L. Kirk, R.D. Lorenz, R.M.C. Lopes, E. Stofan, E.P. Turtle, H.U. Keller, C.A. Wood, C. Sotin, L.A. Soderblom, and M.G. Tomasko. 2009. Geology and surface processes on Titan. Pp. 75-140 in Titan from Cassini-Huygens (R. Brown, J-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
3. L. Iess, N.J. Rappaport, R.A. Jacobson, P. Racioppa, D.J. Stevenson, P. Tortora, J.W. Armstrong, and S.W. Asmar. 2009. Gravity field, shape, and moment of inertia of Titan. Science 327:1367.
4. J.R. Spencer, A.C. Barr, L.W. Esposito, P. Helfenstein, A.P. Ingersoll, R. Jaumann, C.P. McKay, F. Nimmo, C.C. Porco, and J.H. Waite. 2009. Enceladus: An active cryovolcanic satellite. Pp. 683-724 in Saturn from Cassini-Huygens (M. Dougherty, L. Esposito, and T. Krimigis, eds.). Springer, Heidelberg, Germany.
5. J. Lunine, M. Choukroun, D. Stevenson, and G. Tobie. 2009. The origin and evolution of Titan. Pp. 75-140 in Titan from Cassini-Huygens (R. Brown, J.-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
6. S. Atreya, R. Lorenz, and J.H. Waite. 2009. Volatile origin and cycles: Nitrogen and methane. Pp. 177-199 in Titan from Cassini-Huygens (R. Brown, J.-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
7. O. Mousis, J.I. Lunine, M. Pasek, D. Cordier, J.H. Waite, K.E. Mandt, W.S. Lewis, and M.-J. Nguyen. 2009. A primordial origin for the atmospheric methane of Saturn’s moon Titan. Icarus 204:749-751.
8. J. Lunine, M. Choukroun, D. Stevenson, and G. Tobie. 2009. The origin and evolution of Titan. Pp. 75-140 in Titan from Cassini-Huygens (R. Brown, J.-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
9. W.M. Grundy, L.A. Young, J.R. Spencer, R.E. Johnson, E.F. Young, and M.W. Buie. 2006. Distributions of H2O and CO2 ices on Ariel, Umbriel, Titania, and Oberon from IRTF/SpeX observations. Icarus 184:543-555.
10. G. Schubert, H. Hussmann, V. Lainey, D.L. Matson, W.B. McKinnon, F. Sohl, C. Sotin, G. Tobie, D. Turrini, and T. Van Hoolst. 2010. Evolution of icy satellites. Space Science Reviews 153:447-484.
11. G. Schubert, H. Hussmann, V. Lainey, D.L. Matson, W.B. McKinnon, F. Sohl, C. Sotin, G. Tobie, D. Turrini, and T. Van Hoolst. 2010. Evolution of icy satellites. Space Science Reviews 153:447-484.
12. J. Meyer and J. Wisdom. 2008. Tidal evolution of Mimas, Enceladus, and Dione. Icarus 193:213-223.
13. V. Lainey, J.-E. Arlot, O. Karatekin, and T. van Hoolst. 2009. Strong tidal dissipation in Io and Jupiter from astrometric observations. Nature 459:957-959.
14. L. Iess, N.J. Rappaport, R.A. Jacobson, P. Racioppa, D.J. Stevenson, P. Tortora, J.W. Armstrong, and S.W. Asmar. 2009. Gravity field, shape, and moment of inertia of Titan. Science 327:1367.
15. L. Prockter and G.W. Patterson. 2009. Morphology and evolution of Europa’s ridges and bands. Pp. 237-258 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
16. P.M. Schenk and M.H. Bulmer. 1998. Origin of mountains on Io by thrust faulting and large-scale mass movements. Science 279:1514.
17. G. Collins and F. Nimmo. 2009. Chaotic terrain on Europa. Pp. 237-258 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
18. R. Jaumann, R.L. Kirk, R.D. Lorenz, R.M.C. Lopes, E. Stofan, E.P. Turtle, H.U. Keller, C.A. Wood, C. Sotin, L.A. Soderblom, and M.G. Tomasko. 2009. Geology and surface processes on Titan. Pp. 75-140 in Titan from Cassini-Huygens (R. Brown, J-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
19. J.R. Spencer, A.C. Barr, L.W. Esposito, P. Helfenstein, A.P. Ingersoll, R. Jaumann, C.P. McKay, F. Nimmo, C.C. Porco, and J.H. Waite. 2009. Enceladus: An active cryovolcanic satellite. Pp. 683-724 in Saturn from Cassini-Huygens (M. Dougherty, L. Esposito, and T. Krimigis, eds.). Springer, Heidelberg, Germany.
20. R. Jaumann, R.N. Clark, F. Nimmo, A.R. Hendrix, B.J. Buratti, T. Denk, J.M. Moore, P.M. Schenk, S.J. Ostro, and R. Srama. 2009. Icy satellites: Geological evolution and surface processes. Pp. 636-682 in Saturn from Cassini-Huygens (M. Dougherty, L. Esposito, and T. Krimigis, eds.). Springer, Heidelberg, Germany.
21. E.B. Bierhaus, C.R. Chapman, and W.J. Merline. 2005. Secondary craters on Europa and implications for cratered surfaces. Nature 437:1125-1127.
22. L. Dones, C.R. Chapman, W.B. McKinnon, H.J. Melosh, M.R. Kirchoff, G. Neukum, and K.J. Zahnle. 2009. Icy satellites of Saturn: Impact cratering and age determination. Pp. 613-635 in Saturn from Cassini-Huygens (M. Dougherty, L. Esposito, and T. Krimigis, eds.). Springer, Heidelberg, Germany.
23. S.D. Wall, R.M. Lopes, E.R. Stofan, C.A. Wood, J.L. Radebaugh, S.M. Hörst, B.W. Stiles, R.M. Nelson, L.W. Kamp, M.A. Janssen, and R.D. Lorenz. 2009. Cassini RADAR images at Hotei Arcus and western Xanadu, Titan: Evidence for geologically recent cryovolcanic activity. Geophysical Research Letters 36:L04203.
24. F. Nimmo, J.R. Spencer, R.T. Pappalardo, and M.E. Mullen. 2007. Shear heating as the origin of the plumes and heat flux on Enceladus. Nature 447:289-291.
25. E.B. Bierhaus, C.R. Chapman, and W.J. Merline. 2005. Secondary craters on Europa and implications for cratered surfaces. Nature 437:1125-1127.
26. J.R. Spencer, S.A. Stern, A.F. Cheng, H.A. Weaver, D.C. Reuter, K. Retherford, A. Lunsford, J.M. Moore, O. Abramov, R.M.C. Lopes, J.E. Perry, et al. 2007. Io volcanism seen by New Horizons: A major eruption of the Tvashtar volcano. Science 318:240.
27. M.A. McGrath, E. Lellouch, D.F. Strobel, P.D. Feldman, and R.E. Johnson. 2004. Satellite atmospheres. Pp. 457-483 in Jupiter: Planet, Satellites, and Magnetosphere (F. Bagenal, T. Dowling, and W. McKinnon, eds.). Cambridge University Press, Cambridge, U.K.
28. M.A. McGrath, E. Lellouch, D.F. Strobel, P.D. Feldman, and R.E. Johnson. 2004. Satellite atmospheres. Pp. 457-483 in Jupiter: Planet, Satellites, and Magnetosphere (F. Bagenal, T. Dowling, and W. McKinnon, eds.). Cambridge University Press, Cambridge, U.K.
29. W. Grundy and L. Young. 2004. Near-infrared spectral monitoring of Triton with IRTF/SpeX I: Establishing a baseline for rotational variability. Icarus 172:455-465.
30. L.M. Feaga, M. McGrath, and P.D. Feldman. 2009. Io’s dayside SO2 atmosphere. Icarus 201:570-584.
31. D.F. Strobel, S.K. Atreya, B. Bézard, F. Ferri, F.M. Flasar, M. Fulchignoni, E. Lellouch, and I. Müller-Wodarg. 2009. Atmospheric structure and composition. Pp. 234-258 in Titan from Cassini-Huygens (R. Brown, J-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
32. O. Grasset, M. Blanc, A. Coustenis, W. Durham, H. Hussmann, R. Pappalardo, and D. Turrini, eds. 2010. Satellites of the outer solar system: Exchange processes involving the interiors. Space Science Reviews 153(1-4):5-9.
33. C. Paranicas, J.F. Cooper, H.B. Garrett, R.E. Johnson, and S.J. Sturner. 2009. Europa’s radiation environment and its effects on the surface. Pp. 529-544 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
34. C.F. Chyba. 2000. Energy for microbial life on Europa. Nature 403:381-382.
35. R.E. Johnson, O.J. Tucker, M. Michael, E.C. Sittler, H.T. Smith, D.T. Young, and J.H. Waite. 2009. Mass loss processes in Titan’s upper atmosphere. Pp. 373-391 in Titan from Cassini-Huygens (R. Brown, J.-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
36. P.A. Delamere, A. Steffl, and F. Bagenal. 2004. Modeling temporal variability of plasma conditions in the Io torus during the Cassini era. Journal of Geophysical Research (Space Physics) 109:10216.
37. J.D. Richardson, J.W. Belcher, A. Szabo, and R. McNutt. 1995. The plasma environment of Neptune. Pp. 279-340 in Neptune and Triton (D. Cruikshank, ed.). University of Arizona Press, Tucson, Ariz.
38. M.G. Kivelson, F. Bagenal, W.S. Kurth, F.M. Neubauer, C. Paranicas, and J. Saur. 2004. Magnetospheric interactions with satellites. Pp. 513-536 in Jupiter: Planet, Satellites, and Magnetosphere (F. Bagenal, T. Dowling, and W. McKinnon, eds.). Cambridge University Press, Cambridge, U.K.
39. C. Paranicas, J.F. Cooper, H.B. Garrett, R.E. Johnson, and S.J. Sturner. 2009. Europa’s radiation environment and its effects on the surface. Pp. 529-544 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
40. T.I. Gombosi, T.P. Armstrong, C.S. Arridge, K.K. Khurana, S.M. Krimigis, N. Krupp, A.M. Persoon, and M.F. Thomsen. 2009. Saturn’s magnetospheric configuration. Pp. 203-255 in Saturn from Cassini-Huygens (M. Dougherty, L. Esposito, and T. Krimigis, eds.). Springer, Heidelberg, Germany.
41. G. Schubert, H. Hussmann, V. Lainey, D.L. Matson, W.B. McKinnon, F. Sohl, C. Sotin, G. Tobie, D. Turrini, and T. Van Hoolst. 2010. Evolution of icy satellites. Space Science Reviews 153:447-484.
42. K.K. Khurana, M.G. Kivelson, K.P. Hand, and C.T. Russell. 2009. Electromagnetic induction from Europa’s ocean and the deep interior. Pp. 571-586 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
43. F. Postberg, S. Kempf, J. Schmidt, N. Brilliantov, A. Beinsen, B. Abel, U. Buck, and R. Srama. 2009. Sodium salts in E-ring ice grains from an ocean below the surface of Enceladus. Nature 459:1098-1101.
44. C. Sotin, G. Mitri, N. Rappaport, G. Schubert, and D. Stevenson. 2009. Titan’s interior structure. Pp. 61-73 in Titan from Cassini-Huygens (R. Brown, J.-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
45. H. Hussmann, F. Sohl, and T. Spohn. 2006. Subsurface oceans and deep interiors of medium-sized outer planet satellites and large trans-neptunian objects. Icarus 185:258-273.
46. J.R. Spencer, A.C. Barr, L.W. Esposito, P. Helfenstein, A.P. Ingersoll, R. Jaumann, C.P. McKay, F. Nimmo, C.C. Porco, and J.H. Waite. 2009. Enceladus: An active cryovolcanic satellite. Pp. 683-724 in Saturn from Cassini-Huygens (M. Dougherty, L. Esposito, and T. Krimigis, eds.). Springer, Heidelberg, Germany.
47. K.K. Khurana, M.G. Kivelson, K.P. Hand, and C.T. Russell. 2009. Electromagnetic induction from Europa’s ocean and the deep interior. Pp. 571-586 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
48. F. Raulin, C. McKay, J. Lunine, and T. Owen. 2009. Titan’s astrobiology. Pp. 215-233 in Titan from Cassini-Huygens (R. Brown, J.-P. LeBreton, and J.H. Waite, eds.). Springer, Heidelberg, Germany.
49. J.H. Waite, Jr., W.S. Lewis, B.A. Magee, J.I. Lunine, W.B. McKinnon, C.R. Glein, O. Mousis, D.T. Young, T. Brockwell, J. Westlake, M.-J. Nguyen, et al. 2009. Liquid water on Enceladus from observations of ammonia and 40Ar in the plume. Nature 460:487-490.
50. K.P. Hand, C.F. Chyba, J.C. Priscu, R.W. Carlson, and K.H. Nealson. 2009. Astrobiology and the potential for life on Europa. Pp. 589-629 in Europa (R. Pappalardo, W. McKinnon, and K. Khurana, eds.). University of Arizona Press, Tucson, Ariz.
51. F.H. Chapelle, K. O’Neill, P.M. Bradley, B.A. Methé, S.A. Ciufo, L.L. Knobel, and D.R. Lovley. 2002. A hydrogen-based subsurface microbial community dominated by methanogens. Nature 415:312-315.
52. L.-H. Lin, P.-L. Wang, D. Rumble, J. Lippmann-Pipke, E. Boice, L.M. Pratt, B. Sherwood Lollar, E.L. Brodie, T.C. Hazen, G.L. Andersen, T.Z. DeSantis, D.P. Moser, D. Kershaw, and T.C. Onstott. 2006. Long-term sustainability of a high-energy, low-diversity crustal biome. Science 314:479-482.
53. C.F. Chyba. 2000. Energy for microbial life on Europa. Nature 403:381-382.
54. See, for example, National Research Council, Exploring Organic Environments in the Solar System, The National Academies Press, Washington, D.C., 2007, pp. 11-19.
55. C.P. McKay and H.D. Smith. 2005. Possibilities for methanogenic life in liquid methane on the surface of Titan. Icarus 178:274-276.
56. National Research Council. 2003. The Sun to the Earth—and Beyond: A Decadal Research Strategy in Solar and Space Physics. The National Academies Press, Washington, D.C.
57. P.M. Beauchamp, W. McKinnon, T. Magner, S. Asmar, H. Waite, S. Lichten, E. Venkatapathy, T. Balint, A. Coustenis, J.L. Hall, M. Munk, et al. 2009. Technologies for Outer Planet Missions: A Companion to the Outer Planet Assessment Group (OPAG) Strategic Exploration White Paper. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
58. D. Schultze-Makuch, F. Raulin, C. Phillips, K. Hand, S. Neuer, and B. Dalton. 2009. Astrobiology Research Priorities for the Outer Solar System. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
59. A. Coustenis. 2009. Future in situ balloon exploration of Titan’s atmosphere and surface. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
60. J. Nott. 2009. Advanced Titan Balloon Design Concepts. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
61. L. Lemke. 2009. Heavier Than Air Vehicles for Titan Exploration. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
62. N. Strange. 2009. Astrodynamics Research and Analysis Funding. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
63. E. Venkatapathy. 2009. Thermal Protection System Technologies for Enabling Future Outer Planet Missions. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
64. W. McKinnon. 2009. Exploration Strategy for the Outer Planets 2013-2022: Goals and Priorities. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
65. L. Spilker. 2009. Cassini-Huygens Solstice Mission. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
66. National Research Council. 2003. New Frontiers in the Solar System: An Integrated Exploration Strategy. The National Academies Press, Washington, D.C.
67. NASA. 2006. Solar System Exploration: 2006 Solar System Exploration Roadmap for NASA’s Science Mission Directorate. CL#06-1867-A. Jet Propulsion Laboratory, Pasadena, Calif. Available at http://solarsystem.nasa.gov/multimedia/downloads/SSE_RoadMap_2006_Report_FC-A_med.pdf.
68. NASA. 2007. Science Plan for NASA’s Science Mission Directorate 2007-2016. Available at http://science.nasa.gov/media/medialibrary/2010/03/31/Science_Plan_07.pdf.
69. W. McKinnon. 2009. Exploration Strategy for the Outer Planets 2013-2022: Goals and Priorities. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
70. K. Clark et al. 2009. Jupiter Europa Orbiter Mission Study 2008: Final Report—The NASA Element of the Europa Jupiter System Mission (EJSM). Jet Propulsion Laboratory, Pasadena, Calif.
71. The possibility of adding a third spacecraft, the Japan Aerospace Exploration Agency-supplied Jupiter Magnetospheric Orbiter, has been discussed.
72. D. Williams. 2009. Future Io Exploration for 2013-2022 and Beyond, Part 1: Justification and Science Objectives. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
73. National Research Council. 2003. New Frontiers in the Solar System: An Integrated Exploration Strategy. The National Academies Press, Washington, D.C.
74. National Research Council. 2008. Opening New Frontiers in Space: Choices for the Next New Frontiers Announcement of Opportunity. The National Academies Press, Washington, D.C.
75. National Research Council. 2003. The Sun to the Earth—and Beyond: A Decadal Research Strategy in Solar and Space Physics. The National Academies Press, Washington, D.C.
76. A. Coustenis. 2009. Future In Situ Balloon Exploration of Titan’s Atmosphere and Surface. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
77. J. Lunine. 2009. The Science of Titan and Its Future Exploration. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
78. J.H. Waite. 2009. Titan Lake Probe. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
79. M. Allen. 2009. Astrobiological Research Priorities for Titan. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
80. W. McKinnon. 2009. Exploration Strategy for the Outer Planets 2013-2022: Goals and Priorities. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
81. K. Reh. 2009. Titan Saturn System Mission Study Final Report on the NASA Contribution to a Joint Mission with ESA. JPL D-48148. Jet Propulsion Laboratory, Pasadena, Calif.
82. P.M. Beauchamp. 2009. Technologies for Outer Planet Missions: A Companion to the Outer Planet Assessment Group (OPAG) Strategic Exploration White Paper. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
83. C. Niebur. 2009. Outer Planet Satellites Review Panel Report. Summary Presentation available at http://www.lpi.usra.edu/opag/march09/presentations/02Niebur.pdf.
84. A. Coustenis. 2009. Future In Situ Balloon Exploration of Titan’s Atmosphere and Surface. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
85. J. Nott. 2009. Advanced Titan Balloon Design Concepts. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
86. L. Lemke. 2009. Heavier Than Air Vehicles for Titan Exploration. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
87. K. Reh. 2009. Titan Saturn System Mission Study Final Report on the NASA Contribution to a Joint Mission with ESA. JPL D-48148. Jet Propulsion Laboratory, Pasadena, Calif.
88. P. Tsou. 2009. A Case for Life, Enceladus Flyby Sample Return. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
89. J. Lunine. 2009. The Science of Titan and Its Future Exploration. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
90. T. Hurford. 2009. The Case for Enceladus Science. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
91. T. Hurford. 2009. The Case for an Enceladus New Frontiers Mission. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
92. W. McKinnon. 2009. Exploration Strategy for the Outer Planets 2013-2022: Goals and Priorities. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
93. A. Razzaghi. 2007. Enceladus Flagship Mission Concept Study. NASA Goddard Space Flight Center, Greenbelt, Md.
94. A. Razzaghi. 2007. Enceladus Flagship Mission Concept Study. NASA Goddard Space Flight Center, Greenbelt, Md.
95. L. Lemke. 2009. Heavier Than Air Vehicles for Titan Exploration. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
96. R. Lorenz, T. Hurford, B. Bills, F. Sohl, J. Roberts, C. Sotin, and H. Hussmann. 2009. The Case for a Titan Geophysical Network Mission. White paper submitted to the Planetary Science Decadal Survey, National Research Council, Washington, D.C.
97. K. Clark et al. 2009. Jupiter Europa Orbiter Mission Study 2008: Final Report—The NASA Element of the Europa Jupiter System Mission (EJSM). Jet Propulsion Laboratory, Pasadena, Calif. |
Insufficient diets are a fact of everyday life for hundreds of millions of children. The signs of malnutrition are so common – a short child or a child who has lost some weight – that we often don't see these children as sick or suffering. But they are. Malnutrition is not merely the result of too little food. It is a pathology caused principally by a lack of essential nutrients, which not only causes growth to falter but also increases susceptibility to common diseases. This is why a common cold or bout of diarrhea can kill a malnourished child.
Most of the damage caused by malnutrition occurs in children before they reach their second birthday. This is the critical window of opportunity when the quality of a child's diet has a profound, sustained impact on his or her health and physical and mental development. Breast milk is the only food babies need for the first six months. After this time, breastfeeding alone is not sufficient, and the types of foods introduced into the diet are of paramount importance. Diets that do not provide the right blend of energy, including high-quality protein, essential fats, and carbohydrates, as well as vitamins and minerals, can impair growth and development, increase the risk of death from common childhood illness, or result in life-long health consequences. The fortified cereals currently distributed through food aid do not meet this minimal standard.
UNICEF estimates that there are nearly 195 million children suffering from malnutrition across the globe. Malnutrition plays a huge role in child mortality because the immune systems of these children are less resistant to common childhood diseases. In fact, malnutrition contributes to at least one-third of the eight million annual deaths of children under five years of age. Despite the vast numbers of preventable deaths worldwide, international assistance over the past decade has amounted to an estimated $350 million annually out of $11.8 billion the World Bank estimates is required to adequately combat malnutrition in 36 high-burden countries, where 90 percent of the malnourished children live today. MSF is advocating for an additional $700 million, identified by the World Bank study, as the amount of funds needed to reach the 32 countries with the highest prevalence of malnutrition among their child population under five.
And current approaches to address malnutrition have serious limitations. In places where families have little or no access to highly-nutritious foods, behavior change approaches to malnutrition that focus on education about proper food choices, hand-washing, and breastfeeding are not enough to address the problem. Such strategies are insufficient because in the world’s “malnutrition hotspots,” the Horn of Africa, the Sahel, and South Asia, many families simply cannot afford more expensive nutritious food. They need access to energy-dense, nutrient-rich foods, including animal-source foods like milk, meats, and fish to provide the 40 essential nutrients a young child needs to grow and be healthy. Exclusive breastfeeding meets nutritional needs until six months of age, and beyond that, young children need the addition of dairy, eggs, meat, or fish.
Tested strategies to address malnutrition are effective and are showing promising results in many countries. Some, including Mexico, Thailand, and Brazil, have reduced early childhood malnutrition through direct nutrition programs that ensure infants and young children from even the poorest families have access to quality foods, such as milk and eggs. Through such programs, substantial progress has been made towards freeing children from the consequences that come with malnutrition at an early age. At the same time there is growing political will in Asian and African countries to replicate successful programs.
Unfortunately, most current food aid programs for developing countries rely almost exclusively on the fortified cereal blend of corn and soy that may relieve a young child's hunger, but does not provide proper nourishment. International donors must end this double standard. They should only support programs that respect the minimal nutritional needs of infants and young children, and work with countries most affected by the crisis to put access to nutrient-rich foods at the center of their efforts to tackle childhood malnutrition. |
Spring Cleaning With Clifford the Big Red Dog
These activities teach children the importance of working together to care for our environment.
- Grades: PreK–K, 1–2
About this book
Woof! Woof! This lesson teaches children about the importance of working together to care for our surroundings and the environment through participation in cooperative learning and classroom projects.
Clifford’s Big Idea: Work Together
Opportunities for a child to share a common goal with other children provide important lessons for growing up. Being with others and establishing positive relationships within a group can enrich a child's experiences and give that child confidence.
The following activities nurture essential skills:
- language and literacy skills
- social and emotional skills
- environmental awareness skills
- science and discovery skills
- Clifford's Spring Clean-Up by Norman Bridwell
- Three large, sturdy, empty boxes
- Labels for the boxes: Plastic, Paper, and Aluminum
Step 1: Discuss the concept of working together, also called teamwork.
Step 2: Share illustrations from Clifford's Spring Clean-Up by Norman Bridwell. Ask children to guess, or predict, what they think will happen when Clifford decides to help his friends spring clean.
Step 3: Read the story aloud to the class.
Step 4: Discuss the things that Emily Elizabeth's family did to clean their home and yard. What happened when Clifford pitched in to help? Help children identify that it was Clifford's help that turned the vacant lot into a beautiful garden.
Step 5: Talk to children about ways people spring clean: washing windows, sweeping, dusting, raking leaves, clearing vegetation, taking out trash, pruning plants, tilling garden soil, etc.
Step 6: To end the activity, have children cooperatively compose oral sentences about Clifford and Emily Elizabeth's spring clean-up adventure.
Step 1: Give children an opportunity to experience recycling by introducing a classroom project. Send home notes informing parents and caretakers of the project. Encourage children to participate by bringing small items from home each day to contribute.
Step 2: Discuss the importance of recycling.
Step 3: Ask children to find classroom items that could be used in the project. Help children become aware of by-products, or things made from items normally thrown in the trash:
- Plastic: clothing, dog houses, playground equipment, vitamin bottles, rulers, video cassettes
- Paper: egg cartons, tissues, cereal boxes, newsprint, greeting cards, paper bags
- Aluminum: beverage containers, bicycles, pie plates, candy wrappers
Step 4: Place three empty boxes in the classroom. Label the boxes plastic, paper, and aluminum. For safety reasons, omit glass.
Step 5: Every day, help children sort the items for recycling. Set goals and reward the entire class at the project's end.
The more we learn, the more we can understand why recycling is so important!
- Before taking full boxes to the recycling facility, ask someone knowledgeable in the process of recycling to come and present. Then prepare a classroom presentation for your students to inspire other classes to work together and recycle.
These books support Clifford’s Big Ideas and reinforce valuable early literacy skills:
- Great Trash Bash by L. Leedy
- Recycle That! by Fay Robinson and Allan Fowler
- Clifford's Good Deeds by Norman Bridwell
Also check out the Clifford the Big Red Dog Book List.
- Inspire children with enthusiasm
- Build self-esteem by providing activities in which children excel
- Communicate, reinforce, and review expectations
- Modify instruction to accommodate special needs
- Provide a structured day, while allowing time for resting and refueling young minds and muscles
- Use positive reinforcement
- Create a stimulating, safe environment that fosters creativity
- Involve parents and caretakers |
Most transformers change one level of voltage to another, but the size of the transformer depends on the amount of overall power it must transfer. The formula P = IE means power (P) equals current in amps (I) times volts (E). Since power equals volts times amps, some manufacturers, engineers or technicians use the term "volt amps" in place of "watts." For most practical purposes, the terms are interchangeable. To find the volt amp rating of a transformer for home appliances, you need to include the ratings of all the appliances the transformer will power.
Inspect each appliance to find out its power consumption. Each appliance should have a tag that tells what voltage it operates at and how much current it draws.
Multiply volts times amps on each appliance. For example for a fax machine that draws 1 amp at 120 volts the volt amps is 120 (120 x 1 = 120). For a TV set that draws 3 amps, the volt amps is 360 (120 x 3 = 360). For a washing machine that draws 7 amps, the volt amps is 840 (120 x 7 = 840).
Add up all the volt amp ratings of all the appliances that you will power with the transformer. In the above case, the total volt amps is 1,320 (120 + 360 + 840 = 1,320).
Select a transformer with a volt amp rating equal to or greater than the total.
To calculate the current on the input of your transformer, simply divide the total volt amps by the input voltage of the transformer. In the above example, for an input of 120 volts - such as with an isolation transformer - the input current would be 11 amps (1,320 / 120 = 11). But for a transformer that steps 240 volts down to 120, the input current would be 5.5 amps (1,320 / 240 = 5.5).
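The same arithmetic is easy to script. Here is a minimal sketch in Python, using the appliance ratings from the example above:

```python
# Nameplate ratings (volts, amps) for the appliances in the example above
appliances = {
    "fax machine": (120, 1),
    "TV set": (120, 3),
    "washing machine": (120, 7),
}

# Volt amps for each appliance, summed to get the total the transformer must supply
total_va = sum(volts * amps for volts, amps in appliances.values())
print("Total volt amps:", total_va)  # 1320

# Input current for two possible input voltages
for input_voltage in (120, 240):
    print(f"Input current at {input_voltage} V: {total_va / input_voltage} A")
# 11.0 A at 120 V, 5.5 A at 240 V
```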
Using a transformer with a volt amp rating lower than the calculated sum will cause the transformer to overheat and eventually burn out. Make sure your transformer's input and output voltages match your application. |
Creating a classroom environment in which students can hear and comprehend what a teacher or other students are saying makes for more effective learning.
“Classroom Acoustics,” a guide put together by the Acoustical Society of America, provides many suggestions for improving the listening conditions for students and teachers:
Reducing the reverberation time (how quickly sound decays in a room) can improve acoustics. Decreasing the volume of a classroom or increasing the amount of sound absorption will reduce the reverberation time.
“Adding a suspended ceiling of sound-absorbing tile can significantly improve the acoustics by simultaneously decreasing the volume and increasing absorption,” the guide says.
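To see why shrinking the volume and adding absorption both shorten the reverberation time, here is a rough sketch using the standard Sabine approximation. The room dimensions and absorption coefficients below are illustrative assumptions, not values taken from the guide:

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time in seconds from room volume (m^3) and
    a list of (area_m2, absorption_coefficient) pairs for each surface."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 7 m x 9 m x 3 m classroom
surfaces = [
    (63.0, 0.02),  # tile floor (reflective)
    (96.0, 0.05),  # painted walls (reflective)
    (63.0, 0.70),  # suspended sound-absorbing tile ceiling
]
print(round(sabine_rt60(7 * 9 * 3, surfaces), 2), "seconds")  # about 0.6 s
```

Swapping the absorbing ceiling for a hard one, or doubling the ceiling height, makes the estimated reverberation time climb well above recommended classroom values.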
Noise from mechanical equipment can detract from a student's ability to concentrate and learn. Avoid placing any major mechanical equipment “inside, above, below or adjacent to classrooms.”
Windows may allow unwanted sound into a classroom.
“To provide noise reduction, windows must be well sealed,” the guide says. “Double-paned glass provides better sound reduction than single-paned glass.”
- Wall construction
In general, the thicker the wall, the greater the noise reduction.
“However, a thick, solid wall is usually too expensive and heavy and wastes valuable floor space,” the guide says. “An effective compromise is to construct a wall of a layer of heavy material, an airspace, and another layer of heavy material.”
- Sound system
Some classrooms can benefit from sound amplification; a teacher wears a microphone, and speakers amplify the speech.
“This can be useful in a room with a moderate amount of mechanical noise that would otherwise be difficult or expensive to silence,” the guide says. “However, such systems also have their limitations.”
For instance, amplified sound in a room with a high reverberation time may remain unintelligible, and such systems usually do not provide amplification for students. Using a combination of reflective and absorptive materials will provide the most effective acoustical conditions.
The guide also includes a table of sound-pressure levels (in decibels) for common sound sources, from sources in the 50 to 70 dB range up to a jet engine heard from 75 feet away.
Source: “Classroom Acoustics,” Acoustical Society of America |
Find the Letter Q: Five Little Ducks
Five little ducks went out to play, but when their mom called them with a "quack, quack, quack, quack," only four came back! Maybe your child can solve this duck mystery. But first, let's count the Q's in the nursery rhyme! This worksheet offers practice recognizing and naming capital and lowercase letters of the alphabet and exposes kids to nursery rhymes and poetry. Kids completing this worksheet also practice counting. |
The Roman empire in the time of Hadrian (ruled 117-38 AD), showing the network of main Roman roads.
Roman roads were built by the Roman Empire for the swift transport of material from one location to another, whether on foot, with cattle, or by vehicle. They were essential for the growth of the empire: they enabled the Romans to move armies and trade goods and to communicate news. The Roman road system spanned more than 400,000 km of roads, including over 80,500 km of paved roads.
The Romans became adept at constructing roads, which they called viae. A via was intended for carrying material from one location to another, and it was permitted to walk along it, or to drive cattle, vehicles, or traffic of any description along the path.
Examples of Roman Milestones
Scattering is the process of light bouncing off small particles. There are many kinds of scattering. However, short wavelengths of light (blue) are more effectively scattered because long wavelengths simply pass over particles the same way large ocean waves ignore pebbles.
This turns out to be an amazingly complex problem. If short wavelengths of light are scattered more than long wavelengths, then violet light should be scattered most of all, so why isn't the sky violet?
The answer depends on many factors. First, the sun emits the largest amount of light in the green part of the spectrum. There is less blue light than green and less violet light than blue. Second, our eyes have evolved to see the light emitted by the sun, so our eyes are most sensitive in the green part of the spectrum rather than to blue and violet. So we see blue better than violet, and there's more of it to see.
Next, scattering increases toward the violet end of the spectrum, so even though there's more green light in sunlight, less of it gets scattered from the open sky. Violet light gets scattered the most, but it also gets absorbed the most, so some of it never reaches our eyes.
Finally, there's the way our eyes respond to mixtures of colors. Sunlight is strongest in the green part of the spectrum, but we don't see the sun as green. We see that mixture of colors as white with a hint of yellow. If we illuminate something with a mix of green, blue and violet light, our brains will process that mix as blue.
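One way to get a feel for how strongly scattering favors short wavelengths is the Rayleigh rule that scattering strength varies as 1/wavelength^4. A small sketch, using round illustrative wavelengths for each color:

```python
# Representative wavelengths in nanometers (illustrative round numbers)
wavelengths_nm = {"violet": 400, "blue": 450, "green": 550}

# Rayleigh scattering strength scales as 1 / wavelength**4
green4 = wavelengths_nm["green"] ** 4
for color, wl in wavelengths_nm.items():
    relative = green4 / wl ** 4
    print(f"{color}: scattered {relative:.1f}x as strongly as green")
# violet: about 3.6x, blue: about 2.2x, green: 1.0x
```

So violet is indeed scattered more than blue, but the weaker violet output of the sun, the eye's lower sensitivity to violet, and the way we perceive color mixtures all tip the balance toward blue.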
On the average, a photon of light from blue sky has been scattered just once. If particles are numerous, however, the light may be scattered many times, a phenomenon called multiple scattering.
When light has undergone multiple scattering, our eyes get a random mixture of colors from all over, and we see white light. Virtually everything that is white owes its color to multiple scattering: clouds, snow, paper, sugar, salt, beach sand, milk, white paint.
Also, since light is scattered so much, the chances of a photon making it all the way through in a straight line are very small. Small amounts of multiple scattering, as in thin fog, wash out colors and reduce visibility of distant objects. Larger amounts of multiple scattering, as in thick clouds, make the object completely opaque. This is why clouds look dark when they are not sunlit. In the photo at lower right above, even though the salt, sugar and milk are made entirely of substances that are intrinsically transparent, they all cast distinct shadows because scattering prevents light from getting through.
Side comment: one material that is very good at scattering light is titanium dioxide. It is a principal ingredient in white paint. Surprisingly, it's also an ingredient in black ink, because any light that gets through the ink is scattered by the titanium dioxide. It helps make the ink opaque.
When the sun is setting, it looks red because it is traveling through a long distance of air, and all the shorter wavelengths of light are absorbed or scattered before they reach your eye.
The atmosphere also bends or refracts light, so much so that when the setting sun appears to be on the horizon, it has actually set. This effect is greatest the closer the sun is to the horizon, so that the setting sun appears flattened because light from the bottom of the sun is refracted more than light from the top.
The fact that sunlight is reddened and refracted by the earth's atmosphere means that during an eclipse of the moon, reddened sunlight refracted through the earth's atmosphere still strikes the moon.

Only once has an eclipse of the sun by earth been photographed from the moon. This astonishing picture, surprisingly, is rarely seen. Taken in 1966 by one of the Surveyor lunar landers, it is the only picture ever taken of a lunar eclipse as seen from the Moon. The Earth is totally eclipsing the sun, but sunlight scattered by the Earth's atmosphere creates a bright ring around the Earth.
Not the latest comic book (sorry, graphic novel) super-hero, and not an urban legend as some people claim. It's real. The atmosphere bends different wavelengths differently, just like a prism. This effect is called dispersion. If you look at a very bright star just above the horizon with binoculars, you may see it flicker with different colors because of dispersion.
If the sky is extremely clear at sunset, then the refraction of sunlight may bend red and yellow light into the ground before it gets to your eye, while blue and violet light are scattered or absorbed before they reach you. The last pinpoint of the setting sun may appear green for a second or two.
This is rarely seen because it's rare for the horizon to be that clear and unobstructed. You need a very flat horizon, otherwise the sun won't be low enough for dispersion to be effective. A sea horizon would be best, but even on very clear days at sea there is enough haze on the horizon to redden the sun too much. The green flash is often seen in the Arctic and Antarctic because the cold air is very dry and because the sun sets at a grazing angle to the horizon, making the effect last longer. (Around here, try northern Door County after the Bay is frozen.)
The green flash above actually appeared as a bright blue-white star on the horizon for a few seconds. Blue is even less common than green and requires extremely clear air. (Small consolation if you want to see green! The sky is blue.) This picture was taken at Palmer Station, Antarctica.
After the sun has set, the sky overhead still looks blue because it is still sunlit and still scattering sunlight. But in the east, the atmosphere is no longer illuminated by the sun. We see the boundary as a rather sharp line between bright, still-illuminated sky and dark blue sky. This boundary, which is actually the shadow of the earth cast on the atmosphere, is sometimes called the Girdle of Venus.
The earth's shadow is seen here as a blue band grading sharply into pink. Note that the full moon is very close to the boundary, hence exactly opposite the Sun. So it should be no surprise that there was an eclipse of the moon a few hours later.

From an airliner, the earth's shadow can be seen sloping down toward the sun. The curvature of the earth's horizon is just barely visible from an airliner at 10 kilometers altitude.

On high mountain peaks, not only is the earth's shadow visible, but the shadow of the mountain itself.
"Crepuscular" actually means "pertaining to twilight." I heard a talk on mountain lions once that described them as crepuscular. Fortunately, just because you see crepuscular rays doesn't mean there's a mountain lion lurking nearby. Usually. These rays are most dramatic near sunrise or sunset, hence the name.
These wonderfully picturesque rays are due to scattering. Dust or water droplets in the air are illuminated by the sun and scatter sunlight while shadowed areas do not. Light also scatters through the thin edges of clouds, making them bright white, while the same scattering prevents light from penetrating through the thick parts of the clouds, leaving them darkly silhouetted.
The rays are actually parallel but appear to radiate away from the sun because of perspective.

Crepuscular rays over water are sometimes said to be the sun drawing water up into the clouds. Nonsense.
Now here's some nicely confused terminology. Strictly speaking, anti-crepuscular should mean the opposite of twilight - in other words, rays you see at noon. However, you can only see these from the ground when the anti-solar point is close to the horizon, that is, near sunrise or sunset. Under the right conditions, you can see them from an airplane any time of day.
Actually, these are crepuscular rays that cross the entire sky to converge on the opposite horizon. Again, the rays are parallel and the convergence is a perspective effect. Shadows can penetrate hundreds of miles across the atmosphere, so these rays can be created by clouds or distant mountains far beyond the horizon, up to hundreds of kilometers away. They can occur in a completely cloudless sky.

A spectacular set of anti-crepuscular rays.

Anti-crepuscular rays seen from an airplane looking down into a hazy sky.
The poet Keats once lamented that science had explained the rainbow and robbed it of its mystery:
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an angel’s wings. (Lamia. Part ii)
As a scientist, I can look at a rainbow and see all the beauty that Keats saw, plus I can see things he could not. For one thing, every single rainbow is unique and no two people ever see exactly the same rainbow. For that matter, you never see the exact same rainbow for more than a split second.
The only thing that matters with a rainbow is the angle between the sun, the drop and your eye. Distance doesn't enter into it at all. So it's possible to see a continuous rainbow formed by droplets from a distant rainstorm, a nearer waterfall, and a nearby fountain.
Rainbow over Niagara Falls

Even though a rainbow isn't a real physical object, this one can be seen faintly reflected in the water. The explanation is below.

In theory a rainbow should make a complete circle. Inverted bows are rarely seen from commercial airliners. To see one, you'd have to be flying through sunlit rain. Here a rain shower is illuminated just right to see that the rainbow also has a lower half.

When the sun is high in the sky, rainbows are low. If the sun is higher than 42 degrees, the rainbow is below the horizon.

The drops can be any distance away and it is common to see rainbows in front of distant objects.
The whole pot of gold business is a reflection of the fact that the rainbow is a purely optical effect and has no real physical location. So you can never get to the end of one.
A rainbow isn't a physically real object, merely a cone of drops that refracts light back to our eyes at a particular angle. Nevertheless, it's possible to see rainbows reflected in water under the right conditions. How can that be?
The answer is that it's different drops that produce the reflected rainbow. These drops refract light on a path that would normally miss you completely, except that it's reflected to your eye.
We often see double rainbows, with a fainter bow outside the primary bow and its colors reversed. The secondary bow is formed by light reflected twice within the drops, and is always fainter. After all, any light that exits to form the primary bow is no longer available to form a secondary bow.
The fact that the two bows are close together is misleading. Actually, the right side of the primary bow corresponds to the left side of the secondary, and vice versa. That's why the colors are reversed.
So is it possible to see a third order or tertiary rainbow? The surprising answer is that the tertiary bow wouldn't be just outside the secondary bow as you'd expect, but back in the direction of the sun. Light from the primary bow gets bent through 137.5 degrees, light from the secondary bow gets bent 231 degrees. Since we look opposite the sun (180 degrees away) to see a rainbow, the primary bow has a radius of 180 - 137.5 = 42.5 degrees and the secondary bow has a radius of 231 - 180 = 51 degrees. That's why they appear close together. Light from the tertiary bow gets bent 317.5 degrees, so the bow is only 42.5 degrees from the sun. It's in a completely different part of the sky.
To see a tertiary bow, you'd have to be looking through rain toward the sun, and there's a lot of glare in that direction. I've tried on several occasions without success. There are a few claimed sightings but no photographs. To make matters worse, the fourth order bow overlaps the third order bow but with reversed colors, meaning anything you do see will be washed out.
High order "rainbows" can be observed in the lab. Not the entire bow, but rays of light that have been internally reflected many times. A drop of water and a laser will do it.
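The geometry described above can be checked with a few lines of arithmetic. Here is a sketch using the deflection angles quoted in the preceding paragraphs:

```python
# Total deflection of light, in degrees, for each rainbow order (quoted above)
deflections = {"primary": 137.5, "secondary": 231.0, "tertiary": 317.5}

for order, deflection in deflections.items():
    # Angular distance of the bow from the antisolar point (opposite the sun)
    from_antisolar = abs(deflection - 180)
    if from_antisolar <= 90:
        print(f"{order} bow: {from_antisolar:.1f} degrees from the antisolar point")
    else:
        print(f"{order} bow: {180 - from_antisolar:.1f} degrees from the sun")
# primary: 42.5 degrees from the antisolar point, secondary: 51.0 degrees,
# tertiary: 42.5 degrees from the sun
```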
In principle, water droplets in clouds should produce rainbows. In practice, when particles are very tiny, light experiences a process called diffraction that produces colored halos around each droplet. These colored halos smear out any rainbow colors, so all you can see is a diffuse white arc.
At left center is a fogbow on distant low clouds. The arc is sharply defined but lacks color. The nearest part of the fog bank is maybe half a mile away, but the horizon is many miles away. Nevertheless, the bow extends right to the horizon.

Coronas are fairly simple. They are colored rings around the sun or moon, caused when water droplets or ice crystals diffract light passing through them. The bend angle isn't large so the colors are always very close to the sun or moon.

The droplets don't have to be far away to create the effect. A fogged up window pane will do it nicely.
Ice can both reflect and refract light and the range of optical effects produced by ice is enormous. Ice crystals can vary in shape and can produce special effects if the air is still and the ice crystals have some specific orientation. Some of the more common effects are shown below. Halos are due to refraction through the ice crystals. Sun pillars and parhelic circles are due to reflection off vertical and horizontal faces of the ice crystals. These appear when the air is calm because any turbulence would cause the ice crystals to be randomly oriented.
This picture, tinted green by window glass, shows a 22 degree halo and a rarely seen small halo. The cause of the small halo is not certain but may be due to unusual crystalline forms of ice that form at extremely low temperatures.

Here we have a very intense 22 degree halo and sun dog plus a sun pillar. These are due to wind-blown snow and thus end not far above the horizon.

A spectacular sun pillar seen from an aircraft.

Flying above a cirrus cloud layer may present the opportunity to observe a halo in the clouds below.

Sometimes, thin cirrus layers below your plane will have ice crystals all with similar orientations that reflect the sun like a mirror. Sometimes an anti-sun will be so bright it can even have its own sundogs.
Air travelers commonly see the shadow of their airplane surrounded by colored rings. This phenomenon, the glory, is fairly complex. When light hits a water droplet at a grazing angle, some of it is refracted into the droplet and some travels along the surface. The refracted light bounces off the back of the droplet and exits on the other side, where it encounters the light traveling along the surface. If the light waves are in step, they add up (called constructive interference). Other waves are out of step and cancel out (called destructive interference). The wavelengths that experience constructive interference show up as colored rings.
We can see from this that:
If your plane is far from the clouds, it no longer casts a shadow. As seen from the clouds, your plane is silhouetted against the sun, but doesn't cover it. Nevertheless, the glory still appears as a series of colored rings.
In this photo, the glory appears under the wing near the left edge and a faint cloud bow appears extending vertically below the rightmost wing pod.
Before air travel, the only way to see this effect was to be on a mountain peak with the sun casting your shadow onto clouds. It was rarely seen. If a group of people observed this effect, each person saw the glory around his own shadow. Some people think the haloes in religious art were inspired by this effect. A mountain in Germany, the Brocken, is a place where this effect occurs frequently because of its foggy weather, and the combination of the observer's shadow and glory is sometimes called the Specter of the Brocken.
It sounds simple but the Specter of the Brocken is rarely seen. You need bright sun behind you and thick fog in front. This is the only time in my life I have ever photographed it.
Large volcanic eruptions eject large amounts of sulfuric acid droplets into the stratosphere. These photos were all taken after the great eruption of Mount Pinatubo in 1991, although smaller eruptions can produce short episodes of these effects. One effect is long-lasting sunset colors extending high into the sky. Often the colors are light purple, so this effect is called the purple light.
Intense sunset colors can persist long after sunset.
Bright glare around the sun often grades into the blue sky in a diffuse purplish or brownish halo, called Bishop's Ring.
Would you like to learn how to write computer programs? Would you like to learn how to become a software developer? Would you like to learn how to make a computer do cool things? Python is a computer language, and by learning Python you can do all three of these things. It is super easy to get started, so let's go!
The traditional program that people start with in any computer language like Python is called the "Hello World!" program. You write a program that causes the computer to say, "Hello World!". Here is how you do this in Python:
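The whole program is essentially a single line:

    print("Hello World!")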
To run this program, click the run button, which is shaped like a triangle. You will see the program's output, which is the words "Hello World!"
You can change the program. Have it say "Hello John!", or add in your name, or have the program say anything you like. Try it! Change the program and run it again. You can't hurt anything. Worst case scenario - if everything blows up, just reload the page in the browser and it will reset to the original form.
There, you have written your first computer program! You can now say to your friends, "I am a software developer!" because you have developed some software right here. It is very simple software, but software nonetheless.
What you want to do now is learn to write more and more advanced software, so that you can make the computer do more and more interesting things for you.
So let's say that you want the computer to write the words "Hello World!" 5 times. One way to do it would be to repeat the "print" command 5 times, like this:
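That would be five identical lines:

    print("Hello World!")
    print("Hello World!")
    print("Hello World!")
    print("Hello World!")
    print("Hello World!")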
Now run this program by clicking the run button (the small triangle). It works! It prints "Hello World!" 5 times.
That does work, but if you wanted to print "Hello World!" a hundred times, you would get tired of retyping the same command over and over again. There is a better way to do it. You can use a command that tells the computer to repeat something. So the new program would look like this:
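With Python's for loop, the whole thing shrinks to something like:

    for x in range(0, 5):
        print("Hello World!")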
That's great - It printed "Hello world!" five times just like the other program, but it only took two lines of code to do it. You used something called a "for loop" to make the computer repeat the print statement 5 times. The "range" part tells the for loop how many times to repeat.
Even better is this: if we change the number 5 to 20 in the program, it will print "Hello World!" 20 times. Like this:
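Only the number in the range needs to change, for example:

    for x in range(0, 20):
        print("Hello World!")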
How do you know there are really 20 copies of "Hello World!" there when you run the program? You could count them by hand, but who has time for that? Let's have the computer number the lines for us. Here's what that program looks like:
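One way to write it, using the placeholder style explained just below:

    for x in range(0, 20):
        print("%d Hello World!" % (x))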
Run the program and you will see that the lines now have numbers. It worked!
But, if you are a normal person, and you are new to computer programming, you are thinking, "Wait a minute - what the heck is that '%d' thing doing in there, and the '% (x)' thing???" It is not intuitively obvious at all. This is one thing about computer programming that you will come to understand soon. Sometimes, to write computer software, it looks a little weird. You have to write the software the way the language wants you to write it or your program will not run. In this case, this is how the Python language does it, so you write your program to make Python happy. For now, understand that the "%d" is a placeholder in the line for a number that you want to print, and the "% (x)" is the value of the number that you want to print.
You might also be thinking, "I don't want it to print the number 0 up to the number 19. I want it to go from 1 to 20 like normal." There are two ways you could make that happen. One way would be to change the numbers in the range to go from 1 to 21. That will work. The other way would be to change "% (x)" to "% (x+1)". Try both now. You can edit the code above. Change the code yourself and run your new program.
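Spelled out, the two fixes look something like this:

    # Fix 1: change the range so x itself runs from 1 to 20
    for x in range(1, 21):
        print("%d Hello World!" % (x))

    # Fix 2: keep the range starting at 0 but print x + 1 instead of x
    for x in range(0, 20):
        print("%d Hello World!" % (x + 1))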
What you will find, as you learn to write computer software, is that you see an example like this, and you remember it. Here you saw an example of how to number the lines in your output. The next time you want to number the lines like this, you will either have that example in your head, or you will come back here and look at this little piece of code in order to remind yourself how to do it.
Play around with this code a little. Edit it to print "Hello World!" a hundred times. Change the text from "Hello World!" to something else. Put the line number at the end of the line instead of the front of the line. Get comfortable changing the code and playing around.
What happens if you do something wrong? For example, what if you misspell the word "for" as "forr", misspell the word "print" as "prnt" by leaving out the i, or type "Print" with a capital P instead of a small p? If you make a mistake like that, Python will look at your code and it will not understand what you mean. Sometimes it will give you an error message, and sometimes it will simply refuse to run your program. Try it. Misspell the word "print". You will see the error message. Then fix the error and your program will run again.
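For example, a misspelled print looks like this (the exact wording of the message depends on the Python version):

    prnt("Hello World!")   # misspelled on purpose
    # Running this stops with an error message along the lines of:
    #   NameError: name 'prnt' is not defined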
In the next tutorial, we will learn how to get some input from the user... |
A driverless car (sometimes called a self-driving car, an automated car or an autonomous vehicle) is a robotic vehicle that is designed to travel between destinations without a human operator. To qualify as fully autonomous, a vehicle must be able to navigate without human intervention to a predetermined destination over roads that have not been adapted for its use.
Companies developing and/or testing driverless cars include Audi, BMW, Ford, Google, General Motors, Volkswagen and Volvo. Google's test involved a fleet of self-driving cars -- six Toyota Prii and an Audi TT -- navigating over 140,000 miles of California streets and highways. A single accident occurred during one of the infrequent occasions when a human was driving. Another test of over 1000 miles was completed successfully with no human intervention.
Here’s how Google’s cars work:
- The “driver” sets a destination. The car’s software calculates a route and starts the car on its way.
- A rotating, roof-mounted LIDAR (Light Detection and Ranging - a technology similar to radar) sensor monitors a 60-meter range around the car and creates a dynamic 3-D map of the car’s current environment.
- A sensor on the left rear wheel monitors sideways movement to detect the car’s position relative to the 3-D map.
- Radar systems in the front and rear bumpers calculate distances to obstacles.
- Artificial intelligence (AI) software in the car is connected to all the sensors and has input from Google Street View and video cameras inside the car.
- The AI simulates human perceptual and decision-making processes and controls actions in driver-control systems such as steering and brakes.
- The car’s software consults Google Maps for advance notice of things like landmarks and traffic signs and lights.
- An override function is available to allow a human to take control of the vehicle.
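Taken together, the list above amounts to a continuous sense-plan-act loop. A purely illustrative Python sketch of that loop might be organized like this; none of the names, values, or behaviors come from Google's actual software.

    # Purely illustrative sense-plan-act loop; nothing here reflects Google's real code.
    def plan_route(destination):
        # stand-in for the routing step performed when the "driver" sets a destination
        return ["Elm Street", "Oak Avenue", destination]

    def obstacle_ahead(segment):
        # stand-in for the fused LIDAR/radar/camera picture of the road ahead
        return segment == "Oak Avenue"

    def drive_to(destination):
        for segment in plan_route(destination):
            if obstacle_ahead(segment):
                print("Obstacle detected on %s: braking and replanning" % segment)
            print("Driving along %s" % segment)
        print("Arrived at %s" % destination)

    drive_to("the office")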
Proponents of systems based on driverless cars say they would eliminate accidents caused by driver error, which is currently the cause of almost all traffic accidents. Furthermore, the greater precision of an automatic system could improve traffic flow, dramatically increase highway capacity and reduce or eliminate traffic jams. Finally, the systems would allow commuters to do other things while traveling, such as working, reading or sleeping.
Sebastian Thrun, who helped develop Google’s cars, discusses self-driven cars at TED and shows a demonstration:
The history of driverless cars goes back much further than most people realize -- Leonardo da Vinci designed the first prototype around 1478. Leonardo’s car was designed as a self-propelled robot powered by springs, with programmable steering and the ability to run pre-set courses.
Self-driving vehicles are not yet legal on most roads. In June 2011, Nevada, US became the first jurisdiction in the world to allow driverless cars on public roadways.
See also: Global Positioning System (GPS)
Continue reading about driverless cars:
> Wikipedia’s entry about driverless cars
> From Google’s blog: What we’re driving at
> The New York Times reports on Google’s driverless car tests |
Tapping more of the sun’s energy using heat as well as light
January 24, 2014
A new approach to harvesting solar energy, developed by MIT researchers, could improve efficiency by using sunlight to heat a high-temperature material whose infrared radiation would then be collected by a conventional photovoltaic cell.
This technique could also make it easier to store the energy for later use, the researchers say.
In this case, adding the extra step improves performance, because it makes it possible to take advantage of wavelengths of light that ordinarily go to waste. The process is described in a paper published this week in the journal Nature Nanotechnology.
A conventional silicon-based solar cell "doesn't take advantage of all the photons," associate professor of mechanical engineering Evelyn Wang explains. That's because converting the energy of a photon into electricity requires that the photon's energy match a characteristic energy of the photovoltaic (PV) material called the bandgap. Silicon's bandgap responds to many wavelengths of light, but misses many others.
Collecting a broad spectrum of light
To address that limitation, the team inserted a two-layer absorber-emitter device — made of novel materials including carbon nanotubes and photonic crystals — between the sunlight and the PV cell. This intermediate material collects energy from a broad spectrum of sunlight, heating up in the process. When it heats up, as with a piece of iron that glows red hot, it emits light of a particular wavelength, which in this case is tuned to match the bandgap of the PV cell mounted nearby.
This basic concept has been explored for several years, since in theory such solar thermophotovoltaic (STPV) systems could provide a way to circumvent a theoretical limit on the energy-conversion efficiency of semiconductor-based photovoltaic devices. That limit, called the Shockley-Queisser limit, imposes a cap of 33.7 percent on such efficiency, but Wang says that with TPV systems, “the efficiency would be significantly higher — it could ideally be over 80 percent.”
There have been many practical obstacles to realizing that potential; previous experiments have been unable to produce an STPV device with an efficiency greater than 1 percent. But the researchers have already produced an initial test device with a measured efficiency of 3.2 percent, and they say with further work they expect to be able to reach 20 percent efficiency — enough, they say, for a commercially viable product.
The design of the two-layer absorber-emitter material is key to this improvement. Its outer layer, facing the sunlight, is an array of multiwalled carbon nanotubes, which very efficiently absorbs the light’s energy and turns it to heat. This layer is bonded tightly to a layer of a photonic crystal, which is precisely engineered so that when it is heated by the attached layer of nanotubes, it “glows” with light whose peak intensity is mostly above the bandgap of the adjacent PV, ensuring that most of the energy collected by the absorber is then turned into electricity.
In their experiments, the researchers used simulated sunlight, and found that its peak efficiency came when its intensity was equivalent to a focusing system that concentrates sunlight by a factor of 750. This light heated the absorber-emitter to a temperature of 962 degrees Celsius.
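As a rough, back-of-the-envelope check (not part of the MIT work), treating the absorber-emitter as an ideal blackbody at that temperature and applying Wien's displacement law puts its peak emission in the short-wave infrared, at photon energies of roughly half an electronvolt, which is the general range served by low-bandgap thermophotovoltaic cells:

    # Rough illustrative estimate only; treats the emitter as an ideal blackbody.
    T = 962 + 273.15                 # absorber-emitter temperature in kelvin (~1235 K)
    b = 2.898e-3                     # Wien's displacement constant, metre-kelvin
    peak_wavelength = b / T          # metres
    h, c, eV = 6.626e-34, 2.998e8, 1.602e-19
    photon_energy = h * c / peak_wavelength / eV
    print("peak emission near %.2f micrometres, photon energy about %.2f eV"
          % (peak_wavelength * 1e6, photon_energy))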
This level of concentration is already much lower than in previous attempts at STPV systems, which concentrated sunlight by a factor of several thousand. But the MIT researchers say that after further optimization, it should be possible to get the same kind of enhancement at even lower sunlight concentrations, making the systems easier to operate.
Such a system, the team says, combines the advantages of solar photovoltaic systems, which turn sunlight directly into electricity, and solar thermal systems, which can have an advantage for delayed use because heat can be more easily stored than electricity. The new solar thermophotovoltaic systems, they say, could provide efficiency because of their broadband absorption of sunlight; scalability and compactness, because they are based on existing chip-manufacturing technology; and ease of energy storage, because of their reliance on heat.
Some of the ways to further improve the system are quite straightforward. Since the intermediate stage of the system, the absorber-emitter, relies on high temperatures, its size is crucial: The larger an object, the less surface area it has in relation to its volume, so heat losses decline rapidly with increasing size. The initial tests were done on a 1-centimeter chip, but follow-up tests will be done with a 10-centimeter chip, they say.
The work was funded by the U.S. Department of Energy through MIT’s Solid-State Solar Thermal Energy Conversion (S3TEC) Center, as well as the Martin Family Society, the MIT Energy Initiative, and the National Science Foundation.
Abstract of Nature Nanotechnology paper
The most common approaches to generating power from sunlight are either photovoltaic, in which sunlight directly excites electron–hole pairs in a semiconductor, or solar–thermal, in which sunlight drives a mechanical heat engine. Photovoltaic power generation is intermittent and typically only exploits a portion of the solar spectrum efficiently, whereas the intrinsic irreversibilities of small heat engines make the solar–thermal approach best suited for utility-scale power plants. There is, therefore, an increasing need for hybrid technologies for solar power generation. By converting sunlight into thermal emission tuned to energies directly above the photovoltaic bandgap using a hot absorber–emitter, solar thermophotovoltaics promise to leverage the benefits of both approaches: high efficiency, by harnessing the entire solar spectrum; scalability and compactness, because of their solid-state nature; and dispatchablility, owing to the ability to store energy using thermal or chemical means. However, efficient collection of sunlight in the absorber and spectral control in the emitter are particularly challenging at high operating temperatures. This drawback has limited previous experimental demonstrations of this approach to conversion efficiencies around or below 1%. Here, we report on a full solar thermophotovoltaic device, which, thanks to the nanophotonic properties of the absorber–emitter surface, reaches experimental efficiencies of 3.2%. The device integrates a multiwalled carbon nanotube absorber and a one-dimensional Si/SiO2 photonic-crystal emitter on the same substrate, with the absorber–emitter areas optimized to tune the energy balance of the device. Our device is planar and compact and could become a viable option for high-performance solar thermophotovoltaic energy conversion. |
Biology Symmetry Help
Orderly Patterns Of Body Form Within Invertebrates
Besides lacking a backbone or vertebral column, the invertebrates display other orderly patterns of body form. One fundamental pattern is symmetry ( SIM -et-ree). The term, symmetry, exactly translates from the Ancient Greek to mean, “the process of measuring together.” In modern times, however, we are not really referring to measuring anything (as with a ruler). Rather, we are carefully looking at the shape and size of two things, compared to each other. Symmetry is said to be present, then, whenever there is a rough balance or equality of body shape and size, on either side of some dividing line. Since invertebrates lack a stiffening backbone, their bodies tend to be much more flexible than those of vertebrates. Thus, symmetry has become an important organizing influence or pattern for their survival.
Bilateral Symmetry (Mirror-Image)
There are several specific kinds of symmetry (see Figure 10.2). The most familiar to most people is bilateral (buy- LAT -er-al) or mirror-image symmetry. In this type of symmetry, an imaginary line subdivides the body into two equal halves. The right half of the body is then considered a mirror image of the left half of the body. Consider, for example, a line drawn lengthwise through the middle of a lobster (Figure 10.2, A). The lobster body often has a high degree of bilateral (mirror-image) symmetry, because the right side of the body is a mirror image of the left side, and vice versa. (By mirror image, it is meant that both sides of the body have the same shape and size, but that right and left are reversed.)
[Study suggestion: Get up out of your chair and go look at your reflection in a full-length mirror. To what degree do your body and its reflection show the characteristic of bilateral or mirror-image symmetry?] An overhead view of an automobile also often reveals a high degree of bilateral (mirror-image) symmetry.
Another important type of rough balance is called radial ( RAY -dee-al) symmetry . In radial symmetry, there is a rough balance of various parts or “rays” ( radi ) that come out from the same center or axis. [ Study suggestion: Picture the sun and its rays, which make a radial symmetry.] Consider, for example, the identical tendrils or arms of a jellyfish, which seem to radiate ( RAY -dee- ate ) out from a central axis in the middle of its body (Figure 10.2, B). The jellyfish, therefore, has radial symmetry. This makes it somewhat resemble a wheel with a central hub, around which a series of spokes radiate.
Anatomy Of The Bilateria Versus The Radiolarians
Animals with a bilateral symmetry body plan are technically called the bilateria ( buy -lah- TEER - ee-uh), or “two-sided” animals. The bilateria (bilateral animals) include most vertebrates, as well as many invertebrates. They have more to their body plan than just left and right sides. Bilateria have a head or anterior (an- TEER -ee-or) end, that lies in “front” ( anteri -). And they have a tail or posterior (pahs- TEER -ee-or) end, that follows “behind” ( posteri -). Since the bilateria have a “head” or cephalic (seh- FAL -ik) end to their bodies, we say that they show the characteristic called cephalization ( sef -uh-luh- ZAY -shun). By cephalization, it is meant that an animal has a definite head end to its body, usually containing the main collection of its sensory organs (such as the brain and eyes and sound detectors).
Further, the bilateria have an upper or dorsal ( DOOR -sal) side in their “back” ( dors ), as well as a lower or ventral ( VEN -tral) side on their “belly” ( ventr ).
Another main group of invertebrates are the radiolarians ( ray -dee-oh- LAIR -ee-uns) – creatures with “little rays” ( radiol ) or spines projecting out-ward from their bodies. The jellyfish with its many radiating arms, of course, is a typical radiolarian. In these animals, there is no head or rear end, nor left or right side. They do not show the characteristic of cephalization. There is, however, both a superior ( soo-PEER -e-or) portion of the animal lying “above” ( superi ) most of the body, and an inferior ( in-FEER -e-or) portion lying “below” ( inferi ). (Review Figure 10.2 to see these terms of relative body position.)
Scientists have discovered massive landforms lurking under Antarctica – some as tall as the Eiffel Tower – and they’ve been actively carving deep channels into the ice flow above.
These landforms, which are five times bigger than those left behind by former ice sheets in Scandinavia and North America, are now thought to be contributing to the thinning of the Antarctic ice shelves, and that could have big consequences for the region’s stability.
Thanks to ancient ice sheets in the Northern Hemisphere that have long since retreated, scientists knew that landforms can grow for many metres below the surface.
The Scandinavian Ice Sheet was one of the largest glacial masses of the Pleistocene epoch (2,588,000 to 11,700 years ago), and in its prime, covered most of northern Europe in ice spanning about 6.6 million square kilometres (2.5 million square miles).
Below the ice sheet, which was about 3,000 metres thick (9,800 feet), various landforms began to take shape, and for thousands of years, they mediated a complex cycle of evaporation and precipitation to keep the ice cycling through the ocean.
Now that Scandinavia’s ancient ice sheet has retreated, these strange landforms – called eskers – have been exposed.
Scientists have long suspected that similar features could be lurking under our current ice sheets, but no one could have prepared researchers from the Université libre de Bruxelles in Belgium and the Bavarian Academy of Sciences in Germany for what they would find under Antarctica.
At five times the scale of the landforms left behind by the Scandinavian Ice Sheet, the landforms that lie beneath the Antarctic ice sheet dwarf any structures of their kind found on Earth, and they’ve helped scientists piece together what’s actually going on down there.
They were identified during a survey of three subglacial locations at the Roi Baudouin Ice Shelf in Dronning Maud Land, Antarctica, where meltwater is let out into the surrounding ocean.
Using a combination of satellite imagery and airborne and ground-based radar data, the team identified distinct “radar reflectors” below the ice sheet.
Those reflections appear to be indications of large, ridge-shaped protrusions cutting into the ice flow above that look a whole lot like the ancient eskers of Scandinavia – except that these ones are seriously oversized.
So how did these massive landforms come to be?
The researchers investigated features known as subglacial conduits, which form under large ice sheets and funnel meltwater out towards the ocean, and found that they become considerably wider the closer they get to the ocean.
This widening allows for the accumulation of sediments over millennia, which give rise to the formation of eskers, they suggest.
“As the conduits widen, the outflow velocity of the subglacial water decreases, which leads to increased sediment deposition at the conduit’s portal. Over thousands of years, this process builds up giant sediment ridges – comparable in height with the Eiffel tower – below the ice,” the team explains in a press release. |
Snow is in the forecast at Woodland Park Zoo in Seattle, WA, with the hatching of a Snowy Owl chick on June 13! The chick is the first offspring of the mom, estimated to be 22 years old, and the father, 14 years old. Its gender has not been determined.
“At this time the chick isn’t visible to visitors because the mom is sitting on the nest and providing very good care,” curator Jennifer Pramuk said in a statement from the Zoo. “Our expert zookeepers are monitoring the owlet, which appears to be in good health. It’s growing very quickly, so visitors should be able to spot it in a week or two.”
In zoos, the Snowy Owl population experienced a dramatic decline due to West Nile virus, which is spread by infected mosquitoes to birds. Owls and hawks were especially susceptible to the virus, causing acute death. Few zoos have been successful in breeding Snowy Owls within recent years.
The fluffy white Snowy Owl is the heaviest North American Owl and one of the largest in overall size. Males are nearly pure white and the female's white plumage is highlighted with dark brown bars and spots. The Snowy Owl prefers open areas for its breeding range, including tundra and grasslands. During winter it seeks treeless habitat to the south, including prairies, marshes or shorelines.
This species of owl is migratory and nomadic. Every seven to 10 years, the Arctic-dwelling Snowy Owl appears in Washington state during winter months in large numbers, known as an "irruption," a period when young owls leave their breeding range in search of food.
Well adapted to live in harsh tundra environments, Snowy Owls face few threats and their populations are stable. This changes, however, as these raptors migrate south and come into contact with human civilization. Although these losses do not threaten the species as a whole, individual Snowy Owls die from flying into utility lines, wire fences, automobiles, airplanes (at airports) and other human structures. Some owls are even killed by hunters, and changes in the Arctic climate may also be a looming threat for this species. Owls in general are in decline because of habitat loss, introduced disease and poisoning from improperly used rodent poison.
Many raptor species are facing decline due to human-imposed activities. Raptors provide many benefits. "Raptors are top predators of the food chain so they're an indicator species of their overall ecosystems," explained Pramuk. "They consume many animals that humans consider as pests, such as rodents and destructive insects, and they help keep animal populations in balance. It's hard to imagine a world without these majestic birds of prey." |
Within national parks we preserve the continuation of natural processes. This includes allowing bears to roam, letting rivers run free of dams and diversions, and understanding the beneficial roles that fire can play. A dark night is a resource integral to many natural processes. Many of the darkest night skies in the country are found within national park boundaries. With the loss of night sky quality over the last five decades to light pollution, this resource has become nationally significant.
We generally think of dark night skies as a scenic resource, valued by amateur astronomers as well as casual stargazers. Sometimes forgotten is the importance of natural darkness for wildlife. Nearly half the species on Earth are nocturnal—active at night instead of during the day. The absence of light, natural or otherwise, is a key element of their habitat. Many species rely on natural patterns of light and dark to navigate, nest, mate, hide from predators, and cue behaviors.
Adding artificial light to natural habitat may result in substantial impact to certain species (Rich & Longcore, 2006). For example, migrating passerine birds reference stars to fly at night and can be disoriented by city lights and towers. Sea turtle hatchlings orient toward the brightest light on the beach, but instead of being drawn to the safety of sparkling waves on the ocean, they are often drawn toward roads and parking lots, where they quickly perish. And amphibians, with vision far more sensitive than that of humans, are prone to be disoriented by light. Changes to cave environments can have a similarly disruptive effect. Research into the ecological consequences of artificial night lighting is revealing numerous connections between light pollution and species disruption.
Dark night skies are also considered an air quality related value under the 1977 Clean Air Act Amendments, and air quality in turn affects the quality of the night sky. Just as air pollution decreases the visibility during the day, hazy air at night dims the stars and scatters more light from cities, resulting in a gray appearance to the night sky instead of sparkling stars on a black canvas.
Rich, C., & Longcore, T. (2006). Ecological Consequences of Artificial Night Lighting. Island Press. |
A learning disability (or LD) is a specific impairment of academic learning that interferes with a specific aspect of schoolwork and that reduces a student’s academic performance significantly. An LD shows itself as a major discrepancy between a student’s ability and some feature of achievement: the student may be delayed in reading, writing, listening, speaking, or doing mathematics, but not in all of these at once. A learning problem is not considered a learning disability if it stems from physical, sensory, or motor handicaps, or from generalized intellectual impairment (or mental retardation). It is also not an LD if the learning problem really reflects the challenges of learning English as a second language. Genuine LDs are the learning problems left over after these other possibilities are accounted for or excluded. Typically a student with an LD has not been helped by teachers’ ordinary efforts to assist the student when he or she falls behind academically—though what counts as an “ordinary effort,” of course, differs among teachers, schools, and students. Most importantly, though, an LD relates to a fairly specific area of academic learning. A student may be able to read and compute well enough, for example, but not be able to write.
LDs are by far the most common form of special educational need, accounting for half of all students with special needs in the United States and anywhere from 5 to 20 percent of all students, depending on how the numbers are estimated (United States Department of Education, 2005; Ysseldyke & Bielinski, 2002). Students with LDs are so common, in fact, that most teachers regularly encounter at least one per class in any given school year, regardless of the grade level they teach.
Defining Learning Disabilities Clearly
With so many students defined as having learning disabilities, it is not surprising that the term itself becomes ambiguous in the truest sense of “having many meanings.” Specific features of LDs vary considerably. Any of the following students, for example, qualify as having a learning disability, assuming that they have no other disease, condition, or circumstance to account for their behavior:
- Albert, an eighth-grader, has trouble solving word problems that he reads, but can solve them easily if he hears them orally.
- Bill, also in eighth grade, has the reverse problem: he can solve word problems only when he can read them, not when he hears them.
- Carole, a fifth-grader, constantly makes errors when she reads textual material aloud, either leaving out words, adding words, or substituting her own words for the printed text.
- Emily, in seventh grade, has terrible handwriting; her letters vary in size and wobble all over the page, much like a first- or second-grader.
- Denny reads very slowly, even though he is in fourth grade. His comprehension suffers as a result, because he sometimes forgets what he read at the beginning of a sentence by the time he reaches the end.
- Garnet’s spelling would have to be called “inventive,” even though he has practiced conventionally correct spelling more than other students. Garnet is in sixth grade.
- Harmin, a ninth-grader, has particular trouble decoding individual words and letters if they are unfamiliar; he reads conceal as "concol" and alternate as "alfoonite."
- Irma, a tenth-grader, adds multiple-digit numbers as if they were single-digit numbers stuck together: 42 + 59 equals 911 rather than 101, though 23 + 54 correctly equals 77.
With so many expressions of LDs, it is not surprising that educators sometimes disagree about their nature and about the kind of help students need as a consequence. Such controversy may be inevitable because LDs by definition are learning problems with no obvious origin. There is good news, however, from this state of affairs, in that it opens the way to try a variety of solutions for helping students with learning disabilities.
Assisting Students with Learning Disabilities
There are various ways to assist students with learning disabilities, depending not only on the nature of the disability, of course, but also on the concepts or theory of learning guiding you. Take Irma, the girl mentioned above who adds two-digit numbers as if they were one-digit numbers. Stated more formally, Irma adds two-digit numbers without carrying digits forward from the ones column to the tens column, or from the tens to the hundreds column. Figure 5-2 shows the effect that her strategy has on one of her homework papers. What is going on here and how could a teacher help Irma?
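To make the pattern concrete, here is a small illustrative sketch in Python (not from the textbook) of the strategy Irma appears to be using, compared with ordinary addition:

    # Illustrative contrast between Irma's column-by-column strategy and correct addition.
    def irma_add(a, b):
        # Treat each column as its own one-digit problem and write down every digit
        # of each column sum, with no carrying between columns.
        tens = (a // 10) + (b // 10)
        ones = (a % 10) + (b % 10)
        return int(str(tens) + str(ones))

    for a, b in [(42, 59), (23, 54)]:
        print("%d + %d: Irma's strategy gives %d, the correct sum is %d"
              % (a, b, irma_add(a, b), a + b))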
Behaviorism: Reinforcement for Wrong Strategies
One possible approach comes from the behaviorist theory discussed in Chapter 2. Irma may persist with the single-digit strategy because it has been reinforced a lot in the past. Maybe she was rewarded so much for adding single-digit numbers (3+5, 7+8, etc.) correctly that she generalized this skill to two-digit problems—in fact overgeneralized it. This explanation is plausible because she would still get many two-digit problems right, as you can confirm by looking at Figure 5-2. In behaviorist terms, her incorrect strategy would still be reinforced, but now only on a "partial schedule of reinforcement." As I pointed out in Chapter 2, partial schedules are especially slow to extinguish, so Irma persists seemingly indefinitely in treating two-digit problems as if they were single-digit problems.
From the point of view of behaviorism, changing Irma’s behavior is tricky since the desired behavior (borrowing correctly) rarely happens and therefore cannot be reinforced very often. It might therefore help for the teacher to reward behaviors that compete directly with Irma’s inappropriate strategy. The teacher might reduce credit for simply finding the correct answer, for example, and increase credit for a student showing her work—including the work of carrying digits forward correctly. Or the teacher might make a point of discussing Irma’s math work with Irma frequently, so as to create more occasions when she can praise Irma for working problems correctly.
Metacognition and Responding Reflectively
Part of Irma's problem may be that she is thoughtless about doing her math: the minute she sees numbers on a worksheet, she stuffs them into the first arithmetic procedure that comes to mind. Her learning style, that is, seems too impulsive and not reflective enough, in the sense discussed in Chapter 4. Her style also suggests a failure of metacognition (remember that idea from Chapter 3?), which is her self-monitoring of her own thinking and its effectiveness. As a solution, the teacher could encourage Irma to think out loud when she completes two-digit problems—literally get her to "talk her way through" each problem. If participating in these conversations was sometimes impractical, the teacher might also arrange for a skilled classmate to take her place some of the time. Cooperation between Irma and the classmate might help the classmate as well, or even improve overall social relationships in the classroom.
Constructivism, Mentoring, and the Zone of Proximal Development
Perhaps Irma has in fact learned how to carry digits forward, but not learned the procedure well enough to use it reliably on her own; so she constantly falls back on the earlier, better-learned strategy of single-digit addition. In that case her problem can be seen in constructivist terms, like those that I discussed in Chapter 2. In essence, Irma has lacked appropriate mentoring from someone more expert than herself, someone who can create a "zone of proximal development" in which she can display and consolidate her skills more successfully. She still needs mentoring or "assisted coaching" more than independent practice. The teacher can arrange some of this in much the same way she encourages Irma to be more reflective, either by working with Irma herself or by arranging for a classmate or even a parent volunteer to do so. In this case, however, whoever serves as mentor should not only listen, but also actively offer Irma help. The help has to be just enough to ensure that Irma completes two-digit problems correctly—neither more nor less. Too much help may prevent Irma from taking responsibility for learning the new strategy, but too little may cause her to take the responsibility prematurely.
- United States Department of Education. (2005). 27th Annual Report to Congress on the implementation of the Individuals with Disabilities Education Act. Washington, D.C.: Author.
- Ysseldyke, J. & Bielinski, J. (2002). Effect of different methods of reporting and reclassification on trends in test scores for students with disabilities. Exceptional Children, 68(2), 189-201. |
Four Simple Steps to Small-Group Guided Writing
Grades: K – 2
Lesson Plan Type: Standard Lesson
Estimated Time: Three 30-minute sessions
- Learn how to support their own engaged, sustained, and fluent writing by choosing and reflecting on one narrow but interesting focus and by rereading their own text when necessary for writing fluency
- Create appropriate titles for their writing of information-based text by considering the readers' viewpoints and developing an intent to interest potential readers
- Write brief texts of their own on topics of interest, including enough details for clarity, by choosing and reflecting on a narrow, interesting focus for writing and giving consideration to the full set of information needed for readers' interest and understanding
Guided writing lessons are taught in four steps: (1) brief shared experience and discussion, (2) discussion of strategic behavior for writing, (3) time to write a new text each day with immediate teacher guidance, and (4) sharing. Each of these steps is implemented in each session of this lesson.
These sessions should be taught at a brisk pace. Guided writing lessons are intervention lessons with a tight focus on improving each child's ability to use a small, specific set of cognitive strategies. They do not take the place of whole-class instruction. Your students should have ample opportunities in other contexts to write longer texts over an extended timeframe, discuss mentor texts with you and their peers, and observe your modeling of good writing behavior during whole-class lessons. Be direct and clear in the information you give to students during guided writing lessons and encourage active participation. Focus your instruction on strategic behavior for writing rather than on the accuracy and correctness of the writing product alone.
Guided writing lessons usually occur while other students are writing independently and can be adapted to any topic.
Session 1: Strategies for Sustained Engagement in Writing
1. Once your class has settled into an independent activity, gather two to six students who need extra support at a table. Explain to them that first you will discuss interesting information about bats and that they will then each write a story.
2. Engage in a rich discussion with your students about bats (10 minutes). Stimulate this discussion by reading aloud and discussing pages 17-19 from Bat Loves the Night. Information books do not need to be read in their entirety to students; instead, they are often best shared through short sections of text. Look at pictures of bats and talk through one or two of the Fast Facts from the National Geographic Kids: Amazing Bats of Bracken Cave website with your group. Be sure that every student has opportunities to talk about this information, enabling them to develop an expanded language base for this specific topic.
3. Introduce the Keep Writing! strategy list to students and give them the opportunity to orally rehearse a sentence they might write about bats (5 minutes). Tell students that one of the best ways to write more is to choose a very interesting idea about which to write. Summarize the previous discussion about bats clearly and directly, and ask your students to start thinking about what they will write: "We learned how one bat eats moths, and what kinds of things other bats eat. Now think about which part you want to write about. Mark, tell us what your first sentence will be."
Model the sentences volunteered by students and have the full group rehearse one or two of these sentences aloud.
4. Students are now ready to write their own brief but complete stories (10 minutes). Before handing out paper (or journals) and pencils, remind students that if they get stuck they can reread what they have already written to think about what to write next. Students should write their own stories as independently as possible.
5. Support individual writing during this time by monitoring students' engagement and success. "Lean in" when someone needs assistance and provide feed forward (not feedback to correct errors) for the writer, such as: "Cindy, your information about what bats eat is very interesting. What can you tell your readers next?" Every student should be actively writing the entire time (e.g., thinking about their ideas, writing ideas down, rereading their writing so far, and consulting with you or a peer).
6. After about 10 minutes of sustained, independent writing, ask students to finish the last portion of their writing. Ask each student to read their first (or favorite) sentence to the group and discuss how he or she chose a truly interesting fact about bats about which to write (5 minutes).
7. Collect each of these stories, or students' journals, for assessment.
Session 2: Understanding the Function of Titles for Information Text
1. Expand your students' linguistic resources for the topic before asking them to write (10 minutes). Read pages 6-8 of Bat Loves the Night aloud, focusing your discussion on how bats look, what body parts they have, or how they fly. Play the bat sounds from the Naturesongs: Other Animal Sounds website. This should be an inspiring, instructional conversation that builds students' confidence and interest in the topic prior to writing. Provide students with practice using the academic, content-rich language about bats that they can then include in their own writing.
2. Present the Write a Title! strategy list to students. Lead a brief but active discussion of ways in which students can integrate this strategy into their own writing: "Here's a new list for you to use. Think of a title that will tell what your story is about. How do you think you'll be able to do that?" (5 minutes).
Provide a think-aloud to your students for this strategy, such as: "I'll think about a title for my story. I'm going to write about the way that bats sound because I think that's very interesting. I can describe what they sound like and how bats make those sounds. So my title will be ‘How Bats Sound' because that will tell my readers what I wrote about. Susan, what title are you thinking about for your story?" As soon as everyone has had a chance to say their title aloud and rehearse it with the group, remind students that if they get stuck, they can reread what they have already written to help themselves think of a title and keep writing.
3. Students in your group should now write their own stories as independently as possible (10 minutes). These stories will be new, rather than extensions of those constructed in the last session. Help students continue to write by guiding them with the following:
4. Ask students to share elements of their new stories with the group (5 minutes). You might, for example, ask them to read their title and first sentence to the group. Then ask students, "Did you hear Sean's title? 'Bats Have Wings.' That's very interesting. It makes me want to know what their wings look like! What did you tell readers in your story, Sean?"
5. Collect each of these stories, or students' journals, for assessment.
Session 3: Adding Useful and Important Details about Your Topic
1. View and discuss the Cave Life Gallery and Bat Echolocation Station on the California Underground: Bat Echolocation Station website. Read aloud and discuss pages 12-14 from Bat Loves the Night. Remember that you are working to expand each student's linguistic resources, supporting his or her ability to write about bats with confidence and interest (10 minutes). Summarize carefully, such as: "We have a lot of great information about bats! We know how echolocation works, and we even heard what it can sound like! Now it's time to write even more about what we know."
2. Remind students that they already know how to choose an interesting idea so that they can keep writing, and how to create a title that tells readers the content of the story. Ask one student to volunteer a title. For example, "Echolocation is Cool!" Talk together about the kinds of details that will help readers understand the title well, such as: "What do you need to tell your readers about echolocation so they'll know how cool it is?" Present the Write Lots of Details! strategy list to students and discuss how they can do this in their own stories (5 minutes).
3. Observe students as they write independently (10 minutes). Continue to provide assistance to each student as soon as he or she begins to struggle: "Mark, you have a good start with your title. What do you need to tell your readers about echolocation first?" Provide enough assistance around the table so that each student is writing successfully, building confidence and understanding.
4. Ask students to share their new stories with the group (5 minutes). Ask each student to silently read his or her own story from beginning to end and choose one sentence that he or she thinks has good information in it. (You may want to demonstrate this process with a think-aloud from a model story that you have written.) As each student reads just that one sentence to the group, discuss the details that have been included, such as: "Listen again to Jose's sentence: 'Bats' wings are delicate and you can see through them.' What details did Jose write about in this sentence? What else do you think he could tell his readers about bats' wings?"
5. Collect each of these stories, or students' journals, for assessment.
- Observe one student in particular each day during writing. Either during the lesson or soon after, write brief notes describing how that student went about his or her writing that day. You may want to observe how engaged and enthusiastic this student appeared to be during writing, for example, or note that he or she often got “stuck” during writing and needed lots of teacher support to continue. These observations should help you to modify your instruction and provide more support for a student’s area of needed improvement. For a student who frequently gets stuck, for example, useful prompts might be:
"Could your next sentence start with "Bats wings are…?" "Think about everything you know about bats.” "What books or information about bats could you look at to help yourself get ideas?”
- Complete the Analytic Assessment form (Fearn & Farnan, 2001) for each student and for each session. (You may find it useful to analyze one set of writing samples produced before these sessions as well.) Determine a short list of those skills and strategies that have been taught recently and examine students’ texts for those specific factors. Assess each student’s writing fluency, active engagement, use of an appropriate title, and inclusion of related details. This assessment should affect your decisions regarding which strategies to teach and the type of assistance to offer during writing, from one session to the next. Based on this analysis you may decide, for example, to teach more sessions on strategies for engaged, sustained writing rather than moving on to Sessions 2 and 3 right away. Plan your teaching so that each student’s drafts improve in specific, targeted ways from one session to the next. |
Clearly one of the most important features distinguishing humans from all other mammals is the size of our brain in comparison to the rest of our body. While it is certainly true that other mammals have larger brains, scientists recognize that larger animals must have larger brains simply to control their larger bodies. An elephant, for example, has a brain that weighs 7500 grams, far larger than our 1400 gram brain. So making comparisons about brain power or intelligence just based on brain size is obviously futile. Again, it's the ratio of brain size to total body size that attracts scientists' interest when considering the brain's functional capacity. An elephant's brain represents 1/550 of its body weight, while the human brain weighs 1/40 of the total body weight. So our brain represents about 2.5% of our total body weight, as opposed to the large-brained elephant whose brain is just 0.18% of its total body weight.
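Those percentages follow directly from the figures above; here is a quick illustrative check (the implied body weights are just consequences of the stated fractions, not independent data):

    # Quick check of the brain-to-body ratios quoted above (weights in grams).
    human_brain, human_fraction = 1400.0, 1.0 / 40       # brain is 1/40 of body weight
    elephant_brain, elephant_fraction = 7500.0, 1.0 / 550
    human_body = human_brain / human_fraction            # about 56,000 g (56 kg)
    elephant_body = elephant_brain / elephant_fraction   # about 4,125,000 g (roughly 4 tonnes)
    print("human brain: %.1f%% of body weight" % (100 * human_brain / human_body))
    print("elephant brain: %.2f%% of body weight" % (100 * elephant_brain / elephant_body))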
But even more important than the fact that we are blessed with a lot of brain matter is the intriguing fact that, gram for gram, the human brain consumes a disproportionately huge amount of energy. While representing only 2.5% of our total body weight, the human brain consumes an incredible 22% of our body's energy expenditure at rest. Relative to body weight, that is about 350% more energy consumption than in other anthropoids like gorillas, orangutans and chimpanzees.
So it takes a lot of dietary calories to keep the human brain functioning. Fortunately, the very fact that we've developed such a large and powerful brain has provided us with the skills and intelligence to maintain adequate sustenance during times of scarcity and to make provisions for needed food supplies in the future. Indeed, the ability to conceive of and plan for the future is highly dependent upon the evolution not only of brain size, but of other unique aspects of the human brain.
It is a colorful image to conceptualize early Homo sapiens migrating across an arid plain amongst carrion of animals less able to survive as they lacked our advantageous clever brain. But our earliest ancestors had one other powerful advantage compared to even our most closely related primates. The human brain has developed a unique biochemical pathway that proves hugely advantageous during times of food scarcity. Unlike other mammals, our brain is able to utilize an alternative source of calories during times of starvation. Typically, we supply our brain with glucose provided by our daily food consumption. We continue to supply our brains with a steady stream of glucose (blood sugar) between meals by breaking down glycogen, a storage form of glucose primarily in the liver and muscles. But relying on glycogen stores provides only short term availability of glucose. As glycogen stores are depleted, our metabolism shifts and we are actually able to create new molecules of glucose, a process aptly termed gluconeogenesis. Gluconeogenesis involves the construction of new glucose molecules from amino acids harvested from the breakdown of protein primarily found in muscle. While this adds needed glucose to the system, it does so at the cost of muscle breakdown, something less than favorable for a starving hunter-gatherer.
But human physiology offers one more pathway to provide vital fuel to the demanding brain during times of scarcity. When food is no longer available, after about 3 days, the liver begins to use body fat to create chemicals called ketones. One ketone in particular, beta-hydroxybutyrate (beta-HBA), actually serves as a highly efficient fuel source for the human brain, allowing humans to function cognitively for extended periods during food scarcity. Our unique ability to power our brains using this alternative fuel source helps reduce our dependence on gluconeogenesis and therefore spares amino acids and the muscles from which they are derived. Reducing muscle breakdown provides obvious advantages for the hungry Homo sapiens in search of food. It is this unique ability to utilize beta-HBA as a brain fuel that sets us apart from our nearest animal relatives and has allowed humans to remain cognitively engaged and therefore more likely to survive the famines ever present in our history.
This metabolic pathway, unique to Homo sapiens, may actually serve as an explanation for one of the most hotly debated questions in anthropology, the disappearance of our Neandertal relatives. Clearly, when it comes to brains, size does matter. Why then, with a brain some 20% larger than our own, did Neandertals suddenly disappear in just a few thousand years between 40,000 and 30,000 years ago? The party line among scientists remains fixated on the notion that the demise of Neandertals was a consequence of their hebetude. As William Calvin described Neandertals in his book,
A Brain for All Seasons: "Their way of life subjected them to more bone fractures; they seldom survived until forty years of age; while making tools similar to overlapping species, there was little inventiveness that characterizes behaviorally-modern Homo sapiens" (p. 306).
While it is convenient and almost dogmatic to accept that Neandertals were wiped out by clever Homo sapiens, many scientists now believe that food scarcity may have played a more prominent role in their disappearance. Perhaps Neandertals, lacking the biochemical pathway to utilize beta-HBA as a fuel source for brain metabolism, simply lacked the mental endurance to persevere. Relying on gluconeogenesis to power their brains would have led to more rapid breakdown of muscle tissue, ultimately compromising their ability to stalk prey or migrate to areas where plant food sources were more readily available. Their extinction may not have played out in direct combat with Homo sapiens but rather manifested as a consequence of a simple biochemical inadequacy.
Our ability to utilize beta-HBA as a brain fuel is far more important than simply a protective legacy of our hunter-gatherer heritage. As Harvard Medical School professor George F. Cahill stated, "Recent studies have shown that beta-hydroxybutyrate, the principal ketone, is not just a fuel, but a superfuel, more efficiently producing ATP energy than glucose. It has also protected neuronal cells in tissue culture against exposure to toxins associated with Alzheimer's or Parkinson's" (Cahill GF Jr, Veech RL. Ketoacids? Good medicine? Trans Am Clin Climatol Assoc. 2003;114:149-61; discussion 162-3).
Indeed, well beyond serving as a brain superfuel, Dr. Cahill and other researchers have determined that beta-HBA has other profoundly positive effects on brain health and function. Essentially, beta-HBA is thought to mediate many of the positive effects of caloric restriction and fasting on the brain including improved antioxidant function, increase in mitochondrial energy production with increase in mitochondrial population, increased cellular survival, and increased levels of BDNF leading to enhanced growth of new brain cells (neurogenesis).
"I fast for greater physical and mental efficiency. –Plato (428-348 B.C.)
Clearly, the idea of substantially reducing your daily calorie intake will not appeal to many people, despite the fact that it is a powerful approach not only to brain enhancement, but for overall health as well.
What seems to be more appealing to most people is the idea of intermittent fasting: a complete restriction of food for a defined period of time at regular intervals. Research has clearly demonstrated that many of the same health-promoting and brain-enhancing genetic pathways activated by caloric restriction are similarly engaged by fasting, even for relatively short periods of time.
Not only does fasting turn on the genetic machinery for the production of BDNF, but the Nrf2 pathway is also powered up, leading to enhanced detoxification, reduction of inflammation, and increased production of brain-protective antioxidants. Fasting causes the brain to shift away from using glucose as a fuel to a metabolism that consumes a special fuel manufactured in the liver from fat, called ketones. When the brain is metabolizing ketones as a fuel, even the process of cell suicide (apoptosis) is reduced, while mitochondrial genes are turned on, leading to mitochondrial replication. Thus fasting shifts the brain's basic metabolism and specifically targets the feminine DNA of the mitochondria, enhancing energy production and paving the way for not only better brain function and clarity, but also a deeper connection with the divine healing feminine energy.
As my colleague Gabriel Cousens explained,
"I often observe in fasting participants that concentration seems to improve, creative thinking expands, depression lifts, insomnia stops, anxieties fade, the mind becomes more tranquil, and a natural joy begins to appear. It is my hypothesis that when the physical toxins are cleared from the brain cells, mind-brain function automatically and significantly improves, and spiritual capacities expand." –Gabriel Cousens, M.D. (Founder, Tree Of Life Rejuvenation Center, Patagonia, AZ)
The expansion of spiritual capacities referred to by Dr. Cousens is a manifestation of the opening of the gates to the divine universal energy by the expansion of mitochondrial number and function brought about by the shift in brain metabolism that fasting imparts. It is through this functionally enhanced and increased population of mitochondria that the eternal energy is imbibed into our being. As Paramahansa Yogananda eloquently put it in his collection of essays entitled Man's Eternal Quest: "Through fasting, let your mind depend on its own power. When that power manifests, the life force in the body becomes increasingly reinforced with the eternal energy continually flowing into the brain and spine from the cosmic energy around the body."
Indeed, the utility of fasting in spiritual quests is an integral part of human religious history. All major religions to this day retain fasting as far more than simply a traditional ceremonial act. It remains a fundamental part of spiritual practice to gain enlightenment, as in the Muslim fast of Ramadan and the Jewish fast of Yom Kippur. Yogis practice austerity with their diets, and shamans fast during their vision quests.
Father Thomas Ryan, Director of the Catholic Paulist Society's North American Office for Ecumenical and Interfaith Relations summarized the gifts offered by fasting by stating, "Fasting as a religious act increases our sensitivity to that mystery always and everywhere present to us. It is an invitation to awareness, a call to compassion for the needy, a cry of distress, and a song of joy. It is a discipline of self-restraint, a ritual of purification, and a sanctuary for offerings of atonement. It is a wellspring for the spiritually dry, a compass for the spiritually lost, and inner nourishment for the spiritually hungry."
"Fasting is the master key to mental and spiritual unfoldment and evolution."–Dr. Arnold Ehret (1866-1922; German Father of Naturopathy, a.k.a. Naturopathic Medicine)
Efforts to implement standardization in education have increased over the years. This return to "a single standard of achievement and a one-dimensional definition of the common will…result in severe injustices to the children …" (Greene, 1995, p. 173). In schools today, the phrase "high standards" implies that every student is expected to reach a predetermined bar. The inevitable consequence becomes failure when teachers are forced to implement these standards without regard to the needs and experiences of the students. Teachers and administrators blame the state standards and assessments for the decline of multicultural education (Bohn & Sleeter, 2001). "Standards can…make explicit what students will be tested on, a detail that may help parents and community leaders at least know what the 'game' is and what the students will be judged on" (Bohn & Sleeter, 2001, p. 2). Many students' educations are in jeopardy because they are oblivious to this game or "culture of power" (Delpit, 1995). Standardized approaches to curricula and pedagogy are inadequate when considering the needs of culturally diverse classrooms. Delpit (1995) argues that "children who may be gifted in real-life settings are often at a loss when asked to exhibit knowledge solely through decontextualized paper-and-pencil exercises" (p. 173). In order to provide a culturally relevant environment for teaching, no one pedagogy should control the classroom by excluding all others.
We have been involved in conversations with our peers in different school districts and find that many teachers are supposed to be on specific lessons on specific days – that the lessons are fairly scripted and that deviation from the timeline is not permitted. No room is allowed for individuality or creativity in presenting lessons; only the officially sanctioned timeline of lessons is to be seen if an administrator walks into the classroom. Delpit (2003) details excellent teachers who have been reprimanded for not following standardized curricula, including one who was "ordered to give up his thoughtful, imaginative, and effective practice in order to conform to following the script of the mandated Open Court (SRA/McGraw Hill, 2002) literacy program. Although he protested that his students were already achieving beyond the district's expectations, he was told he had to fall in line" (p. 15). We concur with Delpit that this practice is dangerous, treating students as products on an assembly line, with no thought given to their emotional and intellectual growth, only to their results on a standardized test.
While we find the experiences of the teachers in these districts appalling, the missing factor for us in all the talk about standards is the student. Where does the student fit into this assembly-line model of education? What experiences does the student bring with her when she enters the classroom? What are the cultural expectations and mores that the student encounters at home? What kinds of biases are inherent in the curriculum that prevent student learning? We believe that through responsive teachers, schools can make a difference and begin to address these questions and the impact of education on the lives of students (Banks, Cookson, Gay, et al., 2001). As educators, we understand that there is more to education than "covering" the objectives that are issued to us from our state departments of education; there is also the need to cultivate relationships with our students, to allow students the opportunity to express their ideas without fear of reprisal or humiliation in our classrooms, and to make the curriculum relevant to all. By making teaching culturally relevant, strategies such as constructing multicultural representations in the classrooms will help bridge the gap between students, their diverse experiences, and what the school curriculum requires (Banks, Cookson, Gay, et al., 2001).
In an effort to find ways to build bridges between the home and school as well as academic and lived experiences of students, we performed a literature review on cultural biases in education and pedagogical practices to help in our exploration to find strategies that will counteract or eliminate these biases. In order to make the literature come alive for us, we began to look for examples within our own classrooms that represented culturally responsive pedagogy in action. In this article you will find some understandings we culled from our literature review, followed by anecdotes that represent for us theory in practice. The anecdotes you find here represent just the beginning of this exercise, as we have also begun collecting real world examples from our peers. We hope to illustrate how theory can be moved into practice, with narratives where culturally responsive pedagogical practices have been implemented in our classrooms.
Educational Importance for Culturally Relevant/Responsive Teaching
The 2000 census shows a growing Hispanic and African American population in the United States, and figures show that the non-White population of the United States is approaching 25% (Grieco & Cassidy, 2001). Meanwhile, a recent study in our state shows that the teaching force in Georgia remains over 80% White, with a trend of higher turnover for White teachers at schools that serve predominantly minority students (Freeman, Scafidi, & Sjoquist, 2002). As "No Child Left Behind" reforms and the consequential "Adequate Yearly Progress" reports that specifically track performance of ethnic groups emerge, the focus for teachers ought to move away from what curriculum will be tested to how to engage both teachers and students in appropriate pedagogical practices for diverse populations. The importance of multicultural education as well as culturally appropriate pedagogical practices becomes even clearer.
Literature has been devoted to the empowerment of the majority culture and the consequent disabling of minority students (e.g. Bowman, 1994; Cotton, 1991; Cummins, 1985; Dimitriadis, 2001; Hale, 2001; Ladson-Billings, 1994; Lipka, 2002; Perry & Delpit, 1998; Salzer, 1998; Steele, 1992). "Many of the…marginalized are made to feel distrustful of their own voices…yet they are not provided alternatives that allow them to tell their stories or shape their narratives or ground new learning in what they already know" (Greene, 1995). In order for students to "buy into" education, they must be able to find a personal connection with education and the learning process. As teachers we ought to become fluid in our curricula presentation so as to allow students opportunities to connect. Using Gloria Ladson-Billings' term, culturally relevant teaching, we ought to initiate actions to integrate the students' cultural backgrounds into the classroom (1994, 1995). The objective of a culturally relevant classroom is to use this connection between culture and curriculum, home and school, to promote academic achievement.
Ladson-Billings (1994) and others (e.g. Gay, 2000; Howard, 2003; Klug & Whitfield, 2002; Townsend, 2002) have taken the research a step further with the development of culturally relevant/responsive pedagogy, a theoretical framework for education “that attempts to integrate the culture of different racial and ethnic groups into the overall academic program” (http://www.emstac.org/registered/topics/disproportionality/models.htm). By involving students in a culturally responsive classroom, they learn different ways of knowing, understanding, and presenting information. Because diverse views are allowed, students are introduced to new and diverse interpretations and perspectives. The different views challenge and broaden the students’ boundaries. In the classrooms students are allowed to use their strengths which in turn facilitates the development of new skills. Moreover, associations are made between the school culture and home culture.
What exactly is meant when we say culturally relevant pedagogy? Gay (2000) actually prefers the term culturally responsive as opposed to relevant, and one will find that we use the terms interchangeably. From Gay, we find that “Culturally responsive teaching can be defined as using the cultural knowledge, prior experiences, frames of reference, and performance styles of ethnically diverse students to make learning encounters more relevant and effective for them” (p. 29). Gay notes that improving academic achievement is far from the only goal, as a culturally responsive approach to teaching helps students of color “maintain identity and connection with their ethnic groups and communities; develop a sense of community, camaraderie, and shared responsibility; and acquire an ethic of success” (p. 30). Further, culturally responsive/relevant teaching can be described as multidimensional. While it does address curriculum content, culturally relevant teaching also includes “learning context, classroom climate, student-teacher relationships, instructional techniques, and performance assessments” (p. 31). An important understanding of this multidimensionality is to realize that
To do this kind of teaching well requires tapping into a wide range of cultural knowledge, experiences, contributions, and perspectives. Emotions, beliefs, values, ethos, opinions, and feelings are scrutinized along with factual information to make curriculum and instruction more reflective of and responsive to ethnic diversity. However, every conceivable aspect of an ethnic group’s culture is not replicated in the classroom. Nor are the cultures included in the curriculum used only with students from that ethnic group. Cultural responsive pedagogy focuses on those elements of cultural socialization that most directly affect learning. (Gay, 2000, pp. 31-32)
Consequently, culturally responsive teaching can be considered transformative, as “it recognizes the existing strengths and accomplishments of these students and then enhances them further in the instructional process” (Gay, 2000, p. 33) and emancipatory, “in that it releases the intellect of students of color from the constraining manacles of mainstream canons of knowledge and ways of knowing” (Gay, 2000, p. 35). Finally, “cooperation, community, and connectedness are also central features of culturally responsive teaching. Students are expected to work together and are held accountable for one another’s success” (Gay, 2000, p. 36).
Even with the aforementioned research, as educators we still find an ever-growing divide between theory and practice. While there is a movement within teacher education programs to include more instruction about multicultural education, and even a "bold proposal" to certify new teachers in culturally responsive pedagogy (Townsend, 2002), there remains a large segment of the practicing teaching population who are unexposed to and unaware of the philosophy behind the movement, or, for that matter, unaware of the need for culturally relevant teaching practices in the face of the standards movement.
The Need for Visualization: Translating Theory Into Practice
While appreciating the literature on culturally relevant pedagogy, we felt a need to visualize the practices in classrooms that are very culturally diverse as well as classrooms that are predominantly White. This visualization is important for several reasons. One author is an African American woman who teaches in a culturally diverse middle school, while the other author is a White woman who taught in a predominantly White elementary school. Our experiences are vastly different, but we both believe that every child in our classroom has the right to be successful, and the right to be affirmed as an important human being regardless of the ethnic makeup of our classes – culturally responsive pedagogy is one way to accomplish this task. The conversations we have had about what culturally responsive pedagogy looks like in our classrooms have been invaluable in helping each other make deeper connections. Another reason why we have become engrossed in this visualization is in an effort to make the practice more accessible to our colleagues and present our students with opportunities for academic success while guiding them to become more caring, concerned, and humane individuals.
In both of our personal experiences as teachers in a K-12 setting, we have learned from experience that the use of narratives in the classroom not only helps students to achieve a greater understanding of curriculum, but also allows our students to find culturally relevant ways of applying the curriculum to previous knowledge. Theoretically and methodologically we took a narrative approach as we monitored our own classrooms. This study began as a project for our doctoral class; however, we quickly realized the impact culturally relevant pedagogy had on our students and the classroom environment. Narrative inquiry encouraged us to examine our roles as practitioners as we listened to and learned from the stories of our students. Through the storytelling we were allowed to understand perspectives of others and develop rich connections as we gained knowledge academically and socially. Using the same line of thinking, we as teachers achieve a greater understanding of culturally relevant practice by listening to one another. Our own experiences of listening to each other about events in our classrooms have solidified this belief for us. Stories of successful pedagogical practice with diverse populations help to reinforce the ideas behind culturally relevant pedagogy as it translates from theory into practice.
Narratives from our classrooms
Our scripts are based on life experiences, students' needs, and required curriculum. Through narratives, we have attempted to move the curriculum from the noun to the verb form by restoring fluidity to the curriculum. By utilizing a concept of fluidity, the curriculum becomes reflexive instead of linear. Students are able to accept diversity and develop and enhance critical thinking skills, whereas those in a linear environment settle for dichotomous understanding. Worded differently, when standards are interpreted as a curriculum that insists upon specific objectives and pages covered, the inability to make connections through teachable moments reduces the fluid nature of dialogue within the classroom, and the understandings that can emerge about one another and the curriculum. As teachers we need to realize that narratives are powerful tools that can positively or negatively affect our students. Through narratives we are presented with a safe environment to experience, explore, and understand with a different openness. The following narratives represent some of our own personal explorations with culturally responsive pedagogy in the classrooms that are most familiar to us – our own.
Recently my literature classes read a short story about two pre-teens (a boy and a girl) who are in conflict with their family, cultural tradition, and community. The children were preparing to endure a cultural ritual that takes place when the boy becomes a warrior and the girl becomes a woman. In the introductory discussion the students and I talked about traditions. In each class, at least one student expressed that the traditions of the featured culture were "stupid."
I asked my students two questions that took our discussion to a new level. The questions: Why do you think this tradition is stupid? How would you feel if someone called your family tradition stupid? I realized these students were, in some instances, speaking from learned stereotypes. They honestly did not know why they spoke so negatively about differences.
After our class discussions, I could tell some of the students realized the pain their closed-mindedness and negative words could cause. I believe this storytelling also allowed the students to see the conflict from the "other" point of view. The students were able to understand how it feels to be torn between following family tradition and how others perceive your difference.
In one class a student shared a narrative that served as the perfect bridge into our literature story. This student began with her feelings of fear of an Indian woman in her neighborhood who wore clothing that covered all but her eyes. She thought this woman was deformed. Her mother was friends with this lady and arranged for the Indian woman to tell her daughter of her culture and beliefs. The student was even allowed to try on one of the woman's garments. Quickly she clarified that she did not try on a garment but a Jhumn (head and body covering) and a Ghago (dress). She eventually completed a social studies project on the culture of her neighbor and got an A. This student was beaming with excitement as she told this story to the class. While she talked, her peers stared and clung to every part of her story. By gaining exposure to a different culture, history, and literature, the students gained experience, learned respect for difference, and developed empathy and a greater understanding of self and others.
The above narrative displays an instance in which the traditional curriculum has stepped apart from the rigid steps-and-rules process of teaching. The students had the opportunity to experience "otherness" and use that unique experience to see the literature story through a different lens. Lessons that incorporate narratives, like the one above, encourage students to be autonomous, find a connection, and allow for personal reflection on diversity. "The purpose of critical reflection is not to indict teachers…but to improve practice, rethinking, philosophies, and become more effective for today's ever-changing population" (Howard, 2003).
Reflection Through Critically Relevant Pedagogy Enhances Education
One component of narrative teaching is the opportunity for self-reflection. The value of reflection in education has been documented since the era of John Dewey. Dewey (1933) viewed reflection as a form of problem solving that uses experience as part of a deliberate cognitive process. This philosophical process is interwoven with many contemporary approaches to education. In order for this process to be beneficial in a diverse society, we should be cognizant of the moral, political, and ethical concepts presented.
Ladson-Billings (1994) argues along the same lines as Dewey that a reflective approach provides students with an authentic belief that culturally diverse students are capable learners. She argues that if students are treated as morally and ethically competent, they will demonstrate these expectations through actions. To become culturally relevant, we as teachers need to engage in honest, critical reflection and discourse that challenges our colleagues and students to reevaluate their positionality. "Good intentions and awareness are not enough to bring about the changes needed in educational programs and procedures to prevent academic inequities among diverse students" (Gay, 2000, p. 13). This discourse will explore the areas of race, culture, social class, and gender and how these characteristics shape our learning, thinking, and understanding of our curriculum, society, and the world.
Poetry is another unit placed within the literature curriculum for K-12 students. Within the thematic unit and standardized curriculum, students are required to read and explain from a literary canon to which they cannot connect. As with most topics, students come alive when they are able to read, learn, and write about something that is of interest to them. Ladson-Billings (1995) asserts, "if the students' home language is incorporated into the classroom, students are more likely to experience academic success" (p. 159). In many instances musicians speak the home language of our students. Many required poetry units feature traditional works that are considered foreign to students who live in the popular culture era. Narratives by those outside the school environment can be used to find that connection so that students are capable of relating to the unit of study.
The first day of the poetry unit the students entered my room with sad faces and despair. The question on many of their minds was how long we had to study poetry. I waited until I had everyone's attention and began to name the poets that would introduce our unit. I told the students that I hoped they would recognize some of the names and began to call out the names of the rap artists and singers they recognize from radio and television. Within two minutes my class was electric. The students understand and listen intently to popular rap songs, and this was the needed connection to re-ignite the spark for poetry.
Rap music is a global phenomenon that my students understand. I decided to use the genre of the hip-hop culture to introduce poetry and get my students excited about poetry. The idea behind rap as poetry is to encourage the students to remove the melody of a song they know and like and realize that once the words are spoken, there is a poem. Once the excitement developed, it was an easy transition from rap to the required poetry for the curriculum.
One important feature of culturally relevant pedagogy is that the teacher does not teach as if the student has deficits of knowledge (Howard, 2003, ¶ 14). In this instance, the students were affirmed and their knowledge of popular culture was treated as worth knowing. In addition, this introduction to poetry further helped to explicitly make a "connection between culture and learning" (Howard, 2003, ¶ 15) as well as "incorporate a wider range of dynamic and fluid teaching practices" (Howard, 2003, ¶ 15), both of which are important understandings of what culturally relevant pedagogy entails.
"Culturally relevant teaching requires that teachers attend to students' academic needs, not merely make them ‘feel good.’ The trick of culturally relevant teaching is to get students to ‘choose’ academic excellence" (Ladson-Billings, 1995, p. 160).
Just this week I witnessed one of those moments where students chose academic excellence. The students had been working diligently as we studied the elements of mythology. To keep the excitement high, I decided to incorporate an art activity. The students chose partners or small groups. The objective was to create a mythic cartoon that had characterization, foreshadowing, a mythic hero, and a moral. While meeting the students' academic needs, I saw culturally relevant teaching pay off as a bond developed between two of my most 'unlikely' students.
J is an African American male student who was recently mainstreamed into the regular classroom from the Behavior Disorder class. He is opinionated, loud, and very independent. A is a White male student who comes from a very conservative environment. He is the typical child with the need to please. A is independent (when he believes the teacher wishes it). I assume from A's mannerisms and reactions to the other students and to me that he has limited interactions with minorities outside the school environment.
Both boys love sports and wanted to use that love as the framework for their mythic cartoon. They had the same idea and very reluctantly decided to work together. A is an excellent writer with a keen sense of creativity. J is an excellent artist with a flair for drawing what others request. The boys worked diligently and actually produced the best product of all the classes that completed the assignment. Their classmates were in awe of the final project. Three days later, I noticed them talking in the hall and sitting together during lunch. J (the behavior problem) is much calmer and contributes positively to the class. I watched A voice an opinion in a discussion, and he did not back down when other students disagreed. What an amazing transformation for both boys. I am almost certain that these two students would have been in the same class for the entire year and never talked. However, their love for football and art created an unlikely friendship and connection.
The above narrative is an interesting example of choosing excellence. Not only did the assignment allow the students to succeed while using their strengths, but it also created a bridge between cultures – showing each student that the other has strengths to bring to the table, focusing on positive interaction rather than negative interaction.
Culturally Responsive Pedagogy in Majority White Schools
Through narratives, teachers are able to look inward. This allows teachers to know themselves and their students. Palmer (1998) asserts that "we teach who we are." Hard questions are then asked: Are the students who fail unlike us?
We feel that culturally responsive pedagogy is not only important for schools that have a large representation of diverse populations, but also important in schools that do not have a large diverse population. Ladson-Billings (1994) states, "Culturally relevant teaching is about questioning (and preparing students to question) the structural inequality, the racism, and the injustice that exist in society" (p. 128). Nieto (2000) elaborates on this point further:
Racism is seldom mentioned in school (it is bad, a dirty word) and therefore is not dealt with. Unfortunately, many teachers think that simply having lessons in getting along or celebrating Human Relations Week will make students nonracist or nondiscriminatory in general. But it is impossible to be untouched by racism, sexism, linguicism, heterosexism, ageism, anti-Semitism, classism, and ethnocentrism in a society characterized by all of them….Therefore, part of the mission of the school becomes creating the space and encouragement that legitimates talk about racism and discrimination and makes it a source of dialogue. (p. 307)
This element is why we include stories that show the importance of examining and affirming cultures in the classroom when the majority of the faces are White. Not only do these teachers have a responsibility to the students in their rooms who are not part of the dominant culture, they also have a responsibility to all of their students to learn to accept and appreciate one another. “We must cultivate in ourselves a capacity for sympathetic imagination that will enable us to comprehend the motives and choices of people different from ourselves, seeing them not only as forbiddingly alien…but as sharing many problems and possibilities with us” (Nussbaum, 1997, p. 85). There are so many differences in society that it makes acceptance of different people, diverse cultural beliefs, or various points of view harder. Through the development of understanding and sympathy we can hopefully learn to collaborate across cultural boundaries.
In one of my classes recently, I had a class discussion based on something I found entitled "If the world were a village of 100 people" (some of this information, found below, can be found at http://www.pratyeka.org/library/text/100people.html). The following represents some of the information that we discussed:
- 70 would be nonwhite, 30 would be white
- 61 would be Asian, 13 African, 13 from North and South America, 12 Europeans, and the remaining one from the South Pacific.
- 33 would be Christians, 19 believers in Islam, 13 would be Hindus, and 6 would follow Buddhist teachings. 5 would believe that there are spirits in the trees and rocks and in all of nature. 24 would believe in other religions, or would believe in no religion.
- 17 would speak Chinese, 9 English, 8 Hindi and Urdu, 6 Spanish, 6 Russian, and 4 would speak Arabic. That would account for half the village. The other half would speak Bengal, Portuguese, Indonesian, Japanese, German, French, or some other language.
- In such a village with so many sorts of folks, it would be very important to learn to understand people different from yourself and to accept others as they are. But consider this. Of the 100 people in this village,
- 20 are undernourished, 1 is dying of starvation, while 15 are overweight.
- Of the wealth in this village, 6 people own 59% (all of them from the United States), 74 people own 39%, and 20 people share the remaining 2%.
- Of the energy of this village, 20 people consume 80%, and 80 people share the remaining 20%.
- 75 people have some supply of food and a place to shelter them from the wind and the rain, but 25 do not. 17 have no clean, safe water to drink.
- If you have money in the bank, money in your wallet and spare change somewhere around the house, then you are among the richest 8.
- If you have a car, you are among the richest 7.
- Among the villagers, 1 has a college education. 2 have computers. 14 cannot read.
I decided that for this activity I would present this information by writing it on the board, but that I wanted my students to copy it. Groans of how “mean” I am followed from some students, but I believe that by having the students copy the information, the possibility of them absorbing what was presented was greater. It certainly allowed for more introspection.
After the students had finished copying most of the information, I asked them how this relates to perspective, our focus of discussion. One child said that we are very fortunate. Many of them had never thought that a college education could be considered rare – after all, most of their parents have college educations. Somehow in the conversation, someone said that this is a reminder to treat others the way that you want to be treated, and I asked in return “Is it really?” This brought us into a discussion of how different cultures have different social mores, and the way that people within that culture want to be treated may be very different from the way that “I” want to be treated. As I reflected back on how the conversation progressed in both classes, I thought to myself that more students need to be engaged in conversations like this one, and how good it felt to be engaged in dialogue that was on such a high level with nine and ten year olds. Moments like this one are one of the reasons that I teach.
One of the reasons we include this particular narrative is that this activity shows a White teacher who is becoming more comfortable with discussing race, class, gender, and other differences in her classroom. This step is important, as Howard (2003) notes:
As the teaching profession becomes increasingly homogeneous, given the task of educating an increasingly heterogeneous student population, reflections on racial and cultural differences are essential. In order to become a culturally relevant pedagogue, teachers must be prepared to engage in a rigorous and oftentimes painful reflection process about what it means to teach students who come from different racial and cultural backgrounds than their own. (¶ 18)
I was leading my fifth-graders in a discussion of current events, which is a daily practice in my classroom. One of our first events was a discussion of Shirin Ebadi, who is the recent recipient of the Nobel Peace Prize, and the first Muslim woman to receive this prestigious award. One of my students, M, immediately raised his hand and said, “WHY in the WORLD would they give the Nobel Peace Prize to a MUSLIM?” Another student, T, immediately erupted from his chair and said, “Excuse me, Ms. H., but I really need to deal with this.” He faced the students and entered into a heartfelt speech on prejudice and racism. He told M that he still liked him very much as a person, but was deeply offended by what M had said. He said, “How dare you judge an entire group of people based on what a few people from one religion have done?”
What ensued was a class debate—totally unplanned! The students explored racism, prejudice, and the many factors that shape our perspectives. When one student stated that the world would be boring if we were all alike, I asked him to look around our classroom. There were actually gasps when they realized that we were sitting in a class with no diversity whatsoever. This led into a discussion of how it can be detrimental to assume that we are the “norm.” The students also discussed how sometimes people do judge an entire group of individuals because they are different from themselves or because of what a few of those individuals have done.
At the end of this profound debate, the original student who had made the comment about the Nobel Peace Prize asked to make a statement. He told the class that his perspective had totally changed and that he now fully understood the merit of this amazing Muslim woman. He actually thanked the student who had helped to open his eyes.
When I later shared this event with colleagues, I stated that this was one of the most amazing teaching moments of my career. When students are given the freedom and the forum to discuss relevant issues in our world, we are ALL enriched. (K. Harrell, personal communication, November 12, 2003)
Activities such as these in a predominantly White setting also show culturally relevant pedagogy. Ladson-Billings (1995) states that in order to have culturally relevant teaching as pedagogy, three criteria must be in place: "(a) Students must experience academic success, (b) Students must develop and/or maintain cultural competence, and (c) Students must develop a critical consciousness where they challenge the status quo of the current social order" (p. 160). In an environment where the students are exposed to monoculture instead of multiculture, students need to develop a broader sense of sociopolitical consciousness to realize the cultural norms, values, and mores of others. These students need to know that race matters (West, 1994). It always has, and it always will. It matters in whose version of history is presented, what canon is adopted, and in who is seen as invisible.
Perspectives Presented Through Narratives Enhance Inquiries
Just as the use of narratives in a classroom (and narratives about classrooms) can enhance learning and help make important connections for our students, the use of narratives has not been lost upon educational researchers. Educational researchers are utilizing a variety of different approaches that include the use of narratives: personal narrative and narrative inquiry (e.g. Josselson, 1996; Connelly and Clandinin, 1990), narrative multiculturalism (e.g. Phillion, 2002; He, 2002), reflexive ethnography (Pink in Mullen, 2001); portraiture (e.g. Garcia, 1999); fiction as inquiry (Morrison, 1970); and cultural inquiry (e.g. Gee, 1999), just to name a few. Many of these approaches have common characteristics. The above-mentioned researchers each seek to discover cultural roots to encourage participation from students of diverse cultures in the classroom. The importance of culturally relevant teaching is immeasurable. Ethically speaking, all students should receive equal educational opportunities and enrichment experiences. One thread that runs through each of these inquiries is the attempt to connect the researchers with personal and cultural experiences of a diverse population. This knowledge allows the researchers to develop an understanding of complications, concerns, and cultural issues that affect the diverse student population. The emphasis on "narrative," or storytelling, in each of these modes of inquiry is important when translating theory into practice. "We see; we hear; we make connections" (Greene, 1995, p. 186).
In our classrooms, we realize that each person has a story to tell. Some of these stories may actually show the prejudices that our students have developed. Many times we would not have the opportunities to act as multicultural educators if we did not have opportunities to discuss such prejudices, just as illustrated in the story about the Muslim woman winning the Nobel Peace Prize. As one reviewer of this article commented, “The stories many students will tell draw from racist, sexist and misinformed accounts which dominate popular culture. These are easier for students to find the words to speak…narratives show how unplanned pedagogical interventions can give students access to subjugated stories – and enable other stories to be told and heard.” We encourage the use of personal narratives in our classes for this reason, as well as to help students make connections with the curriculum just as we encourage the telling of personal narratives so we can better understand the cultural backgrounds and heritage that our students bring to the classroom. “Cultural background…plays a part in shaping identity; but it does not determine identity” (Greene, 1995, p. 163). These stories remain dormant if the focus is on curriculum, not on culturally relevant pedagogical practices. Relationships that are fostered through the use of narrative, then, become important for both the student and the teacher in maneuvering through the curriculum.
Ohanian (1999) makes numerous arguments against the use of education standards. Drawing on extensive research, she asserts that current educational reform proceeds from a business framework and excludes the voices of teachers and students. Schools would benefit if the curriculum expanded to provide options, so that the diverse student population could engage with the knowledge presented and find opportunities for self-reflection. Ohanian (1999) believes there is a need for more accountability, but standardization without inclusion of diversification is not the answer.
Ignoring cultural differences may also lead to unfair testing practices. For example, the illustrations, wording, and contextual information given on standardized tests may reflect the experiences and languages of a particular group. This privilege will benefit some students while penalizing many who are not aware of the cultural differences. Through narratives, students gain exposure to, and perception of, diverse cultures. For students who are not a "participant in the culture of power, being told explicitly the rules of that culture makes acquiring power (understanding) easier" (Delpit, 1995, p. 24).
The proliferation of standardized testing and standardized curriculum ignores the beauty of diverse cultures and divergent thinking that can be fostered in our classrooms. It ignores the power of the teachable moment that occurs when we stop and listen to each other’s stories, focusing instead on a measurable product. Delpit (2003) well sums up the difference between a focus on standardization and culturally responsive teaching:
…we can educate all children if we truly want to. To do so, we must first stop attempting to determine their capacity. We must be convinced of their inherent intellectual capability, humanity, and spiritual character. We must fight the foolishness proliferated by those who believe that one number can measure the worth and drive the education of human beings, or that predetermined scripts can make for good teaching. Finally, we must learn who our children are – their lived culture, their interests, and their intellectual, political and historical legacies. (p. 20)
Every class will have a unique cultural make-up. Whatever commonalities exist, it is important to realize that no two schools, classrooms, or students will be exactly alike. Therefore, examinations of the impact of narratives as well as the narratives themselves will be a continual and ever changing process.
A superior academic education is dialogical in nature – it is not standardized. It initiates all students in the art of participating in a creative interplay between different cultural perspectives. A dialogue between cultures through narratives alerts them to personal biases - a gain in itself - and enables them to reduce such biases in a non-threatening way.
Culturally relevant pedagogy is an empty phrase without action, and transformation of the standardized educational system cannot happen without action. For this transformation to succeed, we must include action and reflection within new pedagogy. One element needed to begin this transformation is dialogue of teachers with their colleagues, teachers with students, and teachers with parents and community representatives. Dialogue adds awareness and perspective as teachers and students begin to broaden their view. From the dialogue will come the stories from teachers on how culturally relevant pedagogy emerged in their classrooms. Thus we have developed an appropriate step for bridging theory into practice. As our population becomes more diverse, and teachers of all races and genders are seeking ways of meeting the challenge of increased standardization, successful practices need to be chronicled and shared with others, in turn reducing the biases that are inherent in the standardized curriculum. The narratives presented here from classrooms familiar to us represent only the beginning of our endeavor to chronicle these stories.
Paula Booker Baker
Georgia Southern University
Paula Baker received her doctorate in Curriculum Studies from Georgia Southern University. Her areas of interest are critical narrative inquiry, culturally relevant pedagogy, and the academic effects of identity exploration, with a particular focus on linguistically and culturally diverse student education. Her dissertation study focused on critical narrative identity exploration and resiliency and the effects on African American women with doctorates in the field of education.
Lee Woodham Digiovanni
Georgia College and State University
Lee Woodham Digiovanni is an Assistant Professor of Early Childhood Education at Georgia College and State University. Her research interests focus on the application of multicultural and feminist understandings to education. She completed her Ed.D in Curriculum Studies from Georgia Southern University. Her dissertation study focused on White female elementary teachers who have internalized the importance of teaching for diversity.
Banks, J., Cookson, P., Gay, G., Hawley, W., Irvine, J., Nieto, S., Schofield, J., & Stephan, W. (2001). Diversity within unity: Essential principles for teaching and learning in a multicultural society [Electronic version]. Phi Delta Kappan, 83(3), 196-203. Retrieved October 11, 2003, from Academic Search Premier.
Bohn, A., & Sleeter, C. (2001). Will multicultural education survive the standards movement? [Electronic version]. Education Digest, 66 (5), 17-24. Retrieved October 11, 2003, from Academic Search Premier.
Bowman, B. (1994). Cultural diversity and academic achievement. (Order no. UMS-CD094). Oak Brook, IL: Urban education program. Urban monograph series. (ERIC Documentation service no. ED382757)
Connelly, F. & Clandinin, D. (1990). Stories of experience and narrative inquiry. Educational Researcher, 19(5), 2-14.
Cotton, K. (1991). Fostering intercultural harmony in schools: Research findings. School improvement research series. (Contract RP91002001). Portland, OR: Northwest regional educational laboratory. NWREL developer. Retrieved on October 1, 2003 from www.nwrel.org/scpd/sirs/8/topsyn7.html
Cummins, J. (1985). Empowering minority students: A framework for intervention. In L. Weis & M. Fine (Eds.) (1993), Beyond silenced voices: Class, race, and gender in United States schools (pp. 101-117). Albany, NY: State University of New York Press.
Delpit, L. (1995). Other people’s children: Cultural conflict in the classroom. New York: The New Press.
Delpit, L. (2003). Educators as “seed people’ growing a new future. Educational Researcher, 32(7), 14-21.
Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. Boston: D.C. Heath and company.
Dimitriadis, G. (2001). Performing identity/performing culture: Hip hop as text, pedagogy and lived practice. New York: Peter Lang Publishing.
Freeman, C., Scafidi, B., & Sjoquist, D. (December 2002). Racial segregation in Georgia public schools 1994-2001: Trends, causes and impact on teacher quality. Retrieved October 6, 2003, from http://www.gsu.edu/~wwwsps/news/release/segregated_schools.htm
Garcia, E. (1999). Student cultural diversity: Understanding and meeting the challenge (2nd ed.). Boston: Houghton Mifflin.
Gay, G. (2000). Culturally responsive teaching: Theory, research and practice. New York: Teachers College Press.
Gee, J. (1999). Literacy, discourse, and linguistics: Introduction. Journal of Education, 171, 5-17.
Greene, M. (1995). Releasing the imagination: Essays on education, the arts, and social change. San Francisco: Jossey-Bass.
Grieco, E., & Cassidy, R. (2001). Overview of race and Hispanic origin 2000. Retrieved on October 6, 2003 from http://www.census.gov/prod/2001pubs/c2kbr01-1.pdf
Hale, J. (2001). Learning while Black: Creating educational excellence for African American children. Baltimore, MD: John Hopkins University Press.
He, M. F. (2002). A narrative inquiry of cross-cultural lives: Lives in China. Journal of Curriculum Studies, 34(3), 301-321.
Howard, T. C. (2003). Culturally relevant pedagogy: Ingredients for critical teacher reflection [Electronic version]. Theory into Practice, 42(3), 195-202. Retrieved October 5, 2003, from Academic Search Premier.
Josselson, R. (1996). Ethics and progress in the narrative study of lives (Vol. 4). Thousand Oaks, CA: Sage.
Klug, B., & Whitfield, P. (2002). Widening the circle: Culturally relevant pedagogy for American Indian children. New York: RoutledgeFalmer.
If the world were a village of 100 people. (n.d.). Retrieved September 13, 2003, from http://www.pratyeka.org/library/text/100people.html
Ladson-Billings, G. (1995). But that’s just good teaching! The case for culturally relevant pedagogy [Electronic version]. Theory into Practice, 34(3), 159-165. Retrieved October 5, 2003, from Academic Search Premier.
Ladson-Billings, G. (1994). The dreamkeepers: Successful teachers of African American children. San Francisco: Jossey-Bass Publishers.
Lipka, J. (2002, January). Schooling for self-determination: Research on the effects of including native language and culture in the schools. (Report No. ED-99-CO-0027). Charleston,WV: Clearinghouse on Rural education and small schools. (ERIC Document reproduction Service no. ED459989)
Morrison, T. (1970). The bluest eye. New York: A Plume Book.
Mullen, L. (2001, December). Review Note: Sarah Pink (2001). Doing Ethnography: Images, Media and Representation in Research [8 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 3(1). Retrieved October 10, 2003, from http://www.qualitative-research.net/fqs-texte/1-02/1-02review-mullen-e.htm
Nieto, S. (2000). Affirming diversity: The sociopolitical context of multicultural education (3rd ed.). New York: Longman.
Nussbaum, M. C. (1997). Cultivating humanity: A classical defense of reform in liberal education. Cambridge, MA: Harvard University Press.
Ohanian, S. (1999). One size fits few: The folly of educational standards. Portsmouth, NH: Heinemann.
Palmer, P. (1998). The courage to teach. San Francisco: Jossey-Bass.
Perry, T., & Delpit, L. (1998). The real ebonics debate: Power, language, and the education of African-American children. Boston: Beacon Press.
Phillion, J. (2002). Narrative multiculturalism. Journal of Curriculum Studies, 34(3), 265-279.
Salzer, J. (1998, February). Regents board says little is expected from poor, minority students. Online Athens: Athens Banner Herald.
Steele, C. (1992, April). Race and the schooling of Black Americans. The Atlantic Monthly, 269(4). Retrieved August 26, 2003.
Townsend, B. L. (2002). Leave no teacher behind: A bold proposal for teacher education [Electronic version]. International Journal of Qualitative Studies in Education, 15(6), 727-738. Retrieved October 5, 2003, from Academic Search Premier.
West, C. (1994). Race matters. New York: Vintage.
Harnessing the kinetic energy of the wind does not produce any carbon dioxide and provides clean, reliable electricity.
How does it work?
The wind forces the rotor of a turbine to turn and, with it, the low-speed shaft inside the nacelle. This is linked to a gearbox, which in turn increases the rotational speed in the high-speed shaft, which spins inside the generator's static magnets to generate electricity. This process starts when the wind speed is sufficient, and shuts down when it reaches the maximum for which the turbine has been designed. The anemometer monitors the wind speed. The wind vane detects the wind's direction and regularly directs the nacelle to 'yaw' into the wind. If the nacelle has turned two and a half times in the same direction, it will stop and untwist itself automatically.
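The start/stop behaviour described above can be sketched in a few lines of code. This is only an illustration: the cut-in and cut-out wind speeds below are assumed values roughly typical of small turbines, not figures stated for the Beaufort Court machine.

```python
# Minimal sketch of the start/stop logic, with assumed thresholds.
CUT_IN_M_PER_S = 4.0    # assumed: below this there is too little energy to generate
CUT_OUT_M_PER_S = 25.0  # assumed: above this the turbine shuts down for safety

def should_generate(wind_speed_m_per_s: float) -> bool:
    """Generate only while the anemometer reading sits between the two limits."""
    return CUT_IN_M_PER_S <= wind_speed_m_per_s <= CUT_OUT_M_PER_S

for wind in (2.0, 8.0, 30.0):
    state = "generating" if should_generate(wind) else "stopped"
    print(f"wind {wind:4.1f} m/s -> {state}")
```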
The turbine at Beaufort Court was manufactured in 1995 and bought second-hand in 2003. It is 36m to the top of the nacelle, and the diameter of the rotor is 29m. This turbine can generate sufficient electricity for 30-40 homes. It has variable-pitch blades: at lower wind speeds, their 'pitch' adjusts to maximise the wind energy captured, whilst at very high wind speeds they can limit it. When the turbine shuts down, its blades will 'feather', or pitch out of the wind. It has a dual fixed-speed rotor running at 30 or 40 rpm. The generator's speed is limited by the grid's frequency (50Hz), so an increase in wind speed does not affect the shaft or rotor speed, but instead adds to the torque on the generator shaft, which translates into a higher electrical output. The generator speed is 760-1000 rpm.
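The figures quoted above are enough to work out two quantities the paragraph implies: the speed of a blade tip at each rotor speed, and the overall step-up ratio of the gearbox. The sketch below is illustrative arithmetic only, and it assumes the lower rotor speed pairs with the lower generator speed, which the text does not state explicitly.

```python
import math

# Figures quoted in the text for the Beaufort Court turbine.
ROTOR_DIAMETER_M = 29.0                 # rotor diameter
ROTOR_SPEEDS_RPM = (30.0, 40.0)         # dual fixed rotor speeds
GENERATOR_SPEEDS_RPM = (760.0, 1000.0)  # quoted generator speed range (assumed pairing)

def tip_speed_m_per_s(diameter_m: float, rpm: float) -> float:
    """Linear speed of a blade tip: circumference times revolutions per second."""
    return math.pi * diameter_m * rpm / 60.0

for rotor_rpm, gen_rpm in zip(ROTOR_SPEEDS_RPM, GENERATOR_SPEEDS_RPM):
    tip = tip_speed_m_per_s(ROTOR_DIAMETER_M, rotor_rpm)
    ratio = gen_rpm / rotor_rpm  # implied overall gearbox step-up ratio
    print(f"rotor {rotor_rpm:4.0f} rpm -> tip speed {tip:4.1f} m/s, "
          f"gearbox ratio ~{ratio:.0f}:1 (generator {gen_rpm:.0f} rpm)")
```

On these numbers the gearbox steps the shaft speed up by roughly 25:1, and the blade tips move at about 46-61 m/s.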
Is wind energy reliable?
Yes. Wind turbines will generate electricity most of the time (70-85%) for 20-25 years. Their actual output varies with wind speed, but any variability in the supply of electricity this creates is managed by our national grid, which is already designed to cope with far bigger variations both in supply (e.g. when a large power station trips) and demand (e.g. during the World Cup final). The output from UK onshore wind farms is consistent over the course of a year: around 30% of their theoretical maximum (compared with 50% for power stations). This 'capacity factor' is sometimes confused with 'efficiency' (how much of the wind's energy a turbine will capture), which is not relevant in a system where the fuel, wind, is free.
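A short worked example makes the 'capacity factor' figure concrete. The 1 MW rating below is hypothetical and chosen purely for illustration; the 30% and 50% values are the averages quoted above, not measurements of any particular installation.

```python
HOURS_PER_YEAR = 8760  # hours in a non-leap year

def annual_output_mwh(rated_mw: float, capacity_factor: float) -> float:
    """Expected annual generation: rated power x hours per year x capacity factor."""
    return rated_mw * HOURS_PER_YEAR * capacity_factor

# Hypothetical 1 MW machine; 30% is the UK onshore average quoted above,
# 50% the figure quoted for conventional power stations.
rated_mw = 1.0
for cf in (0.30, 0.50):
    print(f"capacity factor {cf:.0%}: "
          f"{annual_output_mwh(rated_mw, cf):,.0f} MWh per year "
          f"out of a theoretical maximum of {rated_mw * HOURS_PER_YEAR:,.0f} MWh")
```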
Are wind turbines noisy?
At a distance of 300 meters, a typical modern wind turbine is no noisier than a kitchen refrigerator or a moderately quiet room. Strict rules are applied by local authorities to ensure that wind turbines are far enough from nearby houses to avoid causing disturbance. Sensitive siting and layout design can keep noise impact to a minimum. |
The early period
The antiquity of Beirut is indicated by its name, derived from the Canaanite name of Beʾerōt (Wells), referring to the underground water table that is still tapped by the local inhabitants for general use. Although the city is mentioned in Egyptian records of the 2nd millennium BCE, it did not gain prominence until it was granted the status of a Roman colony, the Colonia Julia Augusta Felix Berytus, in 14 BCE. The original town was located in the valley between the hills of Al-Ashrafīyah and Al-Muṣayṭibah. Its suburbs were also fashionable residential areas under the Romans. Between the 3rd and 6th centuries CE, Beirut was famous for its school of law. The Roman city was destroyed by a succession of earthquakes, culminating in the quake and tidal wave of 551 CE. When the Muslim conquerors occupied Beirut in 635, it was still mostly in ruins.
Arab and Christian rule
Beirut was reconstructed by the Muslims and reemerged as a small, walled garrison town administered from Baalbek as part of the jund (Muslim province) of Damascus. Until the 9th or 10th century, it remained commercially insignificant and was notable mainly for the careers of two local jurists, al-Awzāʿī (d. 774) and al-Makḥūl (d. 933). A return of maritime commerce to the Mediterranean in the 10th century revived the importance of the town, particularly after Syria passed under the rule of the Fāṭimid caliphs of Egypt in 977. In 1110 Beirut was conquered by the military forces of the First Crusade and was organized, along with its coastal suburbs, as a fief of the Latin kingdom of Jerusalem.
As a crusader outpost, Beirut conducted a flourishing trade with Genoa and other Italian cities; strategically, however, its position was precarious because it was subject to raids by the Druze tribesmen of the mountain hinterland. Saladin reconquered Beirut from the crusaders in 1187, but his successors lost it to them again 10 years later. The Mamlūks finally drove the crusaders out in 1291. Under Mamlūk rule, Beirut became the chief port of call in Syria for the spice merchants from Venice.
Beirut, along with the rest of Syria, passed under Ottoman rule in 1516, shortly after the Portuguese had rounded the African continent (1498) to divert the spice trade of the East away from Syria and Egypt. The commercial importance of Beirut declined as a consequence. By the 17th century, however, the city had reemerged as an exporter of Lebanese silk to Europe, mainly to Italy and France. Beirut at the time was technically part of the Ottoman province (eyalet) of Damascus, and after 1660 of Sidon. Between 1598 and 1633, however, and again between 1749 and 1774, it fell under the control of the Maʿn and Shihāb emirs (feudal suzerains and fiscal agents) of the Druze and Maronite mountain hinterland. From the mid-17th to the late 18th century, Maronite notables from the mountains served as French consuls in Beirut, wielding considerable local influence. During the Russo-Turkish War of 1768–74, the town suffered heavy bombardment by the Russians. Subsequently it was wrested from the Shihāb emirs by the Ottomans, and it soon shrank into a village of about 6,000.
The growth of modern Beirut was a result of the Industrial Revolution in Europe. Factory-produced goods of the Western world began to invade the markets of Ottoman Syria, and Beirut, starting virtually from nought, stood only to profit from the modern industrial world. The occupation of Syria by the Egyptians (1832–40) under Muḥammad ʿAlī Pasha provided the needed stimulus for the town to enter on its new period of commercial growth. A brief setback came with the end of the Egyptian occupation; by 1848, however, the town had begun to outgrow its walls, and its population had increased to about 15,000. Civil wars in the mountains, culminating in a massacre of Christians by Druzes in 1860, further swelled Beirut’s population, as Christian refugees arrived in large numbers. Meanwhile, the pacification of the mountains under an autonomous government guaranteed by the Great Powers (1861–1914) stabilized the relationship between the town and its hinterland. In 1888 Beirut was made the capital of a separate province (vilâyet) comprising the whole of coastal Syria, including Palestine. By the turn of the century, it was a city of about 120,000.
Meanwhile, Protestant missionaries from Great Britain, the United States, and Germany and Roman Catholic missionaries mainly from France became active in Beirut, particularly in education. In 1866 American Protestant missionaries established the Syrian Protestant College, which later became the American University of Beirut. In 1881 French Jesuit missionaries established St. Joseph University. Printing presses, introduced earlier by Protestant and Roman Catholic missionaries, stimulated the growth of the city’s publishing industry, mainly in Arabic but also in French and English. By 1900 Beirut was in the vanguard of Arabic journalism. A class of intellectuals sought to revive the Arabic cultural heritage and eventually became the first spokesmen of a new Arab nationalism.
Beirut was occupied by the Allies at the end of World War I, and the city was established by the French mandatory authorities in 1920 as the capital of the State of Greater Lebanon, which in 1926 became the Lebanese Republic. The Muslims of Beirut resented the inclusion of the city in a Christian-dominated Lebanon and declared loyalty to a broader Pan-Arabism than most Christians would support. The resultant conflict became endemic. The accelerated economic growth of Beirut under the French mandate (1920–43) and after produced rapid growth of the city’s population and the rise of social tensions. These tensions were increased by the influx of thousands of Palestinian refugees after 1948. The political and social tensions in Beirut and elsewhere in Lebanon, coupled with Christian-Muslim tensions, flared into open hostilities in 1958 and even more violently in 1975–90. In the violence that marked those years, Beirut became a divided city, depleted of the large, varied, and long-established community of foreigners that had once enriched its social and cultural life and given it the distinctive cosmopolitan character for which it was famed.
West Beirut was largely destroyed by heavy fighting between Israeli forces and members of the Palestine Liberation Organization (PLO) in 1982, when Israel launched a full-scale attack on PLO bases operating in the city. Israeli troops surrounded West Beirut, where most PLO guerrilla bases were located, and a series of negotiations brought about the evacuation of PLO troops and leaders from Lebanon to other Arab countries.
Divisive sectarian loyalties only increased after the Israeli withdrawal. Neither the continued Syrian military presence nor the formation of coalition governments could defuse the violence. The shelling persisted, and much of the population fled.
In early 1984, following a failed attempt by the Christian-led Lebanese army to consolidate its control in West Beirut by force, the division between the two sides of the city became complete. In East Beirut, order continued to be maintained until 1990 by the army, working in cooperation with the unified Christian militia of the Lebanese Forces (LF). In West Beirut, however, the situation drifted to near total anarchy, as the different Muslim militias repeatedly clashed with one another in the streets to settle sectarian or partisan scores. Security collapsed under these circumstances, and many Lebanese and resident foreigners were taken hostage by different political groups, to remain their prisoners for months or years. Some of the hostages were even killed by groups determined to make a political point. In 1986, leaders in West Beirut pleaded with the Syrians to reenter the city in force and bring the situation under control. Three years later, in 1989, the Lebanese Army in East Beirut subjected West Beirut to months of heavy shelling, ostensibly to liberate the Muslim parts of the capital from Syrian occupation. In the last stage of the civil war, large parts of East Beirut and its Christian suburbs were destroyed or heavily damaged when the Lebanese army clashed with the LF. The issues leading to the estrangement of these former allies and their eventual confrontation involved the question of whether or not the Ṭāʾif Accord, arrived at in 1989 to restore peace to Lebanon, was acceptable to the Christian side. Unlike the LF and other Christian Lebanese leaders, General Michel Aoun, the Lebanese army commander, maintained that the accord was totally unacceptable in principle. The clashes between the troops under his command and the LF were finally stopped by Syrian military intervention, followed by the forced departure of General Aoun to France.
In the years after the end of the civil war, a major effort was begun to reconstruct Beirut’s devastated infrastructure. The city developed a plan to modernize its transport facilities, restore many of its historic buildings, and revive its economic sectors. After surviving more than a decade of civil war, Beirut hosted the Pan-Arab Games in 1997 and entered the 21st century with a sense of renewed optimism. |
Washington, Jan 24 : A team of US researchers has recovered the first section of a massive Antarctic ice core, which might provide the most detailed record of Earth's climate history over the past 100,000 years, including greenhouse gases.
The recovered ice core is 580 meters (1,900 feet) long and is believed to be the first section of what is hoped to be a 3,465-meter (11,360-foot) column of ice, which might also provide a precise year-by-year record of the last 40,000 years.
The discovery was made by a team of scientists, engineers, technicians, and students from multiple US institutions, who are working as part of the National Science Foundation's West Antarctic Ice Sheet Divide (WAIS Divide) Ice Core Project.
While other ice cores have been used to develop longer records of Earth's atmosphere, the record from WAIS Divide will allow a more detailed study of the interaction of previous increases in greenhouse gases and climate change.
This information will improve computer models that are used to predict how the current unprecedented high levels of greenhouse gases in the atmosphere caused by human activity will influence future climate.
According to Kendrick Taylor, the chief scientist for the project, "The dust, chemicals, and air trapped in the two-mile-long ice core will provide critical information for scientists working to predict the extent to which human activity will alter Earth's climate."
The WAIS Divide core is also the Southern Hemisphere equivalent of a series of ice cores drilled in Greenland beginning in 1989, and it will provide the best opportunity for scientists to determine if global-scale climate changes that occurred before human activity started to influence climate were initiated in the Arctic, the tropics, or Antarctica.
The new core will also allow investigations of biological material in deep ice, which will yield information about biogeochemical processes that control and are controlled by climate, as well as lead to fundamental insights about life on Earth.
"We are very excited to work with ancient ice that fell as snow as long as 100,000 years ago. We read the ice like other people might read a stack of old weather reports," said Taylor. |
A new study by researchers at Lawrence Berkeley National Laboratory, the first to use a global model to study the question, has found that implementing cool roofs and cool pavements in cities around the world can not only help cities stay cooler but can also cool the world, with the potential of canceling the heating effect of up to two years of worldwide carbon dioxide emissions.
Because white roofs reflect far more of the sun’s heat than black ones, buildings with white roofs will stay cooler. If the building is air conditioned, less air conditioning will be required, thus saving energy. Even if there is no air conditioning, the heat absorbed by a black roof both heats the space below, making it less comfortable, and is carried into the city air by wind, raising the ambient temperature in what is known as the urban heat island effect. Additionally, there is a third, less familiar way in which a black roof heats the world: it radiates energy directly into the atmosphere, where it is absorbed by clouds and ends up trapped by the greenhouse effect, contributing to global warming.
For the northern hemisphere summer, they found that increasing the reflectivity of roof and pavement materials in cities with a population greater than 1 million would achieve a one-time offset of 57 gigatons (1 gigaton equals 1 billion metric tons) of CO2 emissions: 31 Gt from roofs and 26 Gt from pavements. That is double the worldwide CO2 emissions in 2006 of 28 gigatons. The 57 gigatons is equivalent to the emissions of roughly 300 million cars (about the number of cars in the world) over 20 years.
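As a quick sanity check of the car comparison, the implied per-car emission rate can be backed out from the figures quoted above (this rate is derived from the article's own numbers, not an independently sourced value):

```python
offset_tonnes = 57e9    # one-time offset: 57 gigatons of CO2, in metric tons
cars = 300e6            # roughly 300 million cars
years = 20

implied_tonnes_per_car_per_year = offset_tonnes / (cars * years)
print(round(implied_tonnes_per_car_per_year, 1))   # ~9.5 tons of CO2 per car per year
```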
Roofs and pavements cover 50 to 65 percent of urban areas. Because they absorb so much heat, dark-colored roofs and roadways create what is called the urban heat island effect, where a city is significantly warmer than its surrounding rural areas. This additional heat also eventually contributes to global warming. More than half of the world’s population now lives in cities; by 2040 the proportion of urbanites is expected to reach 70 percent, adding urgency to the urban heat island problem.
Gravitational deflection of light
- An article by Steven S. Shapiro and Irwin I. Shapiro
Theories of the deflection of light by mass date back at least to the late 18th century. At that time, the Reverend John Michell, an English clergyman and natural philosopher, reasoned that were the Sun sufficiently massive, light could not escape from its surface. The pioneer of a mathematical description of gravity, Sir Isaac Newton, apparently wrote nothing about the effect of mass on the path of light rays, other than to note at the end of his treatise, "Opticks," published in 1704, that light particles should be affected by gravity in the same way as is ordinary matter.
The first calculation of the deflection of light by mass was published by the German astronomer Johann Georg von Soldner in 1801. Soldner showed that rays from a distant star skimming the Sun's surface would be deflected through an angle of about 0.9 seconds of arc, or one quarter of a thousandth of a degree. This angle corresponds to the apparent diameter of a compact disc (CD) viewed from a distance of about 30 kilometers (nearly 20 miles). Soldner's calculations were based on Newton's laws of motion and gravitation, and the assumption that light behaves like very fast moving particles. As far as we know, neither Soldner nor later astronomers attempted to verify this prediction, and for good reason: Such an attempt would have been far beyond the capability of early 19th century astronomical instruments.
Light deflection in general relativity
Over a century later, in the early 20th century, Einstein developed his theory of general relativity. Einstein calculated that the deflection predicted by his theory would be twice the Newtonian value.
The following image shows the deflection of light rays that pass close to a spherical mass. To make the effect visible, this mass was chosen to have the same value as the Sun's but to have a diameter five thousand times smaller (i.e., a density 125 billion times larger) than the Sun's.
According to general relativity, a light ray arriving from the left would be bent inwards such that its apparent direction of origin, when viewed from the right, would differ by an angle (α, the deflection angle; see diagram below) whose size is inversely proportional to the distance (d) of the closest approach of the ray path to the center of mass.
A graph of α as a function of d for this compact spherical mass is shown below:
The yellow area in the center indicates the spatial extent of the spherical mass. The curves to each side show the dependence of the deflection angle α on the distance d. The deflection angle is largest when the light rays pass closest to the mass; α becomes smaller as d becomes larger. For the Sun, the curves look similar, but the predicted value of α is five thousand times smaller for rays that skim the surface of the Sun than for rays that skim the surface of this "pseudo" Sun.
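The text above describes the 1/d dependence qualitatively; the standard weak-field formula behind it, α = 4GM/(c²d), is not quoted explicitly in the article, so the short sketch below should be read as a supplementary illustration. Evaluating it for a ray grazing the Sun reproduces the famous 1.75-arcsecond prediction (the Newtonian value discussed earlier is half of this):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m (closest approach d for a grazing ray)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)       # general-relativistic deflection angle
alpha_arcsec = math.degrees(alpha_rad) * 3600

print(round(alpha_arcsec, 2))   # ~1.75 arcseconds; the Newtonian value is ~0.87 arcseconds
```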
An observer on the Earth can detect the deflection by the Sun of the light from a (distant) star by the change with time of year in the star's apparent position in the sky. The following animation shows light from a star on the left and an observer on the right, and the change in apparent position of the star caused by the presence of a mass in between:
The inset shows a magnified view of how the light rays reach the observer. In the absence of a mass, the light follows a straight line from the star to the observer. In the presence of the mass, the light ray is bent, and the light reaches the observer from a slightly different direction. This direction defines a star's apparent position in the sky.
Such shifts in position - although far smaller than in the images above - should be visible to a properly equipped optical observer on the Earth for a star near the Sun's path in the sky. However, under ordinary conditions sunlight causes the atmosphere to be so bright that it is not feasible to observe from the Earth with optical telescopes any star whose light passes near the Sun. For the first tests of Einstein's predictions, astronomers therefore used solar eclipses, occasions on which the moon is between the Earth and the Sun, and blocks all sunlight from reaching the vicinity of the Earth and brightening its atmosphere.
Measuring light deflection
1919 saw the first successful attempt to measure the gravitational deflection of light. Two British expeditions were organized and sponsored by the Royal Astronomical Society and the Royal Society. Each of the two groups took photographs of a region of the sky centered on the Sun during the May 1919 total solar eclipse and compared the positions of the photographed stars with those of the same stars photographed from the same locations in July 1919 when the Sun was far from that region of the sky. The results showed that light was deflected, and also that this deflection was consistent with general relativity but not with "Newtonian" physics. The subsequent publicity catapulted Einstein to world fame, and led to his having the only ticker-tape parade ever held for a scientist on Broadway in New York City.
With repetitions of eclipse measurements over the next half century, astronomers were able to improve on the accuracy of these first results by only about a factor of two, yielding a confirmation of general relativity to within about ten percent. The breakthrough came in 1967 with the realization that simultaneous measurements with a set of radio telescopes (especially, "Very Long Baseline Interferometry") could be used to measure light deflection with much greater accuracy.
In addition to providing the means to test general relativity to high accuracy, the fact that mass deflects light has been a great boon to studies of the universe. Masses acting as gravitational lenses have now become a standard tool of astronomy. They allow astronomers to infer the masses of cosmic objects, and the structure and size scale of the universe (with some caveats). Through their magnifying effect, gravitational lenses have also been used to observe the properties of very distant galaxies and quasars, as well as to search for planets around distant stars.
Related Spotlights on relativity can be found in the section General Relativity. |
SMEDEREVO FORTRESS – ONE OF THE LARGEST LOWLAND FORTRESSES IN EUROPE
Smederevo fortress is situated at the confluence of the Danube and the Jezava, on an area of about 10 hectares. Its greatest value lies in its original appearance, which has been preserved to the present day. As an example of Serbian medieval architecture preserved to this extent, it is unique in Europe. The fortress was built by the despot Djuradj Brankovic in the second quarter of the 15th century.
Smederevo fortress is a lowland fortification, which was not characteristic of fortifications in this area; it was built on this site because of its good transport links and its military advantages for defense. Two sides of its walls are bounded by the Danube and the Jezava at their confluence, while the third side is protected by an artificial moat that connects the two rivers. These features give the fortress its triangular shape. The lack of stone was one of the major challenges of the construction, so stone was brought from distant settlements, tens of kilometers away. The idea of developing a city within the walls of the castle that would have an exclusively military role was never realized. Today, Smederevo fortress represents the highest achievement of medieval Serbian military architecture.
Building Molecules from Atoms
Association - two concepts with an unnamed link between them; two concepts connected by a generic link such as is related to.
Bidirectional - a characteristic of relations between two concepts, in that they can be expressed in the two opposing directions, from Concept A to concept B and vice versa, and often require different names in each direction (as in 'Mary has parent John / John parent of Mary').
Capillary Action - the drawing of a substance along an adherent surface, like water along a glass tube, despite the force of gravity.
Category - a grouping, class, or set of objects that have certain features in common; defined by prototypical members; categories often have a complex structure ranging from most prototypical to least prototypical members.
Chemical Reaction - a process in which substances interact to form new substances with different properties.
Classification - the systematic grouping of organisms into categories based upon shared characteristics or traits; organized into hierarchies with largest groups at the top and smaller subdivisions below; classes inherit traits from highest categories to lowest; a system for organizing, interpreting, and learning about organisms; based on many lines of evidence including morphology, anatomy, embryology, DNA patterns, etc.
Cohesion - the characteristic of identically structured molecules to stick to each other.
Concept - a general idea or class of objects or events designated by a label; the basic unit that is linked to other concepts by relations in a concept map.
Dissolve - the process by which atoms of a molecule dissociate into ions after being placed in a solvent.
Electronegativity - the characteristic of an atom to take up electrons from another atom in an attempt to fill its outer shell.
Hydrogen Bond - a special type of intermolecular interaction whereby the hydrogen of one molecule is attracted to the oxygen, nitrogen, or fluorine of another molecule (if the molecule is large enough to fold on itself, this attraction can be between the hydrogen of one molecule and an oxygen, nitrogen or fluorine of the same molecule). This interaction will increase the overall stability of the substance or molecule.
Instance - two concepts linked by a bidirectional named relation, as in dog has part tail / tail part of dog (our style is to underline concepts and show relations in italics).
Ionic Bond - a linking together of atoms in such a fashion that one atom gives up one or more electrons to the other; the resulting oppositely charged ions are held together by electrostatic attraction.
Macromolecules - large single chemical entities such as pieces of DNA or RNA.
Molecules - entities composed of two or more atoms bonded together.
Organic Molecules - carbon containing substances.
Partial Negative Charge - the resulting characteristic of a specific part of a molecule which has the tendency to take away electrons from other parts of the molecule. In water, for example, the electrons of the hydrogen atoms will spend more time orbiting the oxygen atom. This leaves oxygen with a partial negative charge.
Partial Positive Charge - the resulting characteristic of a specific part of a molecule which has the tendency to give away electrons to other parts of the molecule. In water, for example, the electrons of the hydrogen atoms will spend more time orbiting the oxygen atom. This leaves hydrogen with a partial positive charge.
Properties - characteristics.
Relation - a word or phrase that describes a way in which two concepts are inter-related; usually described by a verb or verb phrase.
Solution - any solvent to which solute has been added. For example, a salt solution may consist of table salt (solute) and water (solvent).
Specific Heat - the amount of energy required to raise the temperature of 1.0 grams of a substance by 1.0 degree Celsius.
Substances - a large group of identically structured molecules. |
'Designer' graphene makes its debut
Mar 15, 2012
Researchers in the US have created the first artificial samples of graphene with electronic properties that can be controlled in a way not possible in the natural form of the material. The samples can be used to study the properties of so-called Dirac fermions, which give graphene many of its unique electronic properties. The work may also lead to the creation of a new generation of quantum materials and devices with exotic behaviour.
Graphene is a single layer of carbon atoms organized in a honeycomb lattice. Physicists know that particles, such as electrons, moving through such a structure behave as though they have no mass and travel through the material at near light speeds. These particles are called massless Dirac fermions and their behaviour could be exploited in a host of applications, including transistors that are faster than any that exist today.
The new "molecular" graphene, as it is has been dubbed, is similar to natural graphene except that its fundamental electronic properties can be tuned much more easily. It was made using a low-temperature scanning tunnelling microscope with a tip – made of iridium atoms – that can be used to individually position carbon-monoxide molecules on a perfectly smooth, conducting copper substrate. The carbon monoxide repels the freely moving electrons on the copper surface and "forces" them into a honeycomb pattern, where they then behave like massless graphene electrons, explains team leader Hari Manoharan of Stanford University.
Described by Dirac
"We confirmed that the graphene electrons are massless Dirac fermions by measuring the conductance spectrum of the electrons travelling in our material," says Manoharan. "We showed that the results match the two-dimensional Dirac equation for massless particles moving at the speed of light rather than the conventional Schrödinger equation for massive electrons."
The researchers then succeeded in tuning the properties of the electrons in the molecular graphene by moving the positions of the carbon-monoxide molecules on the copper surface. This has the effect of distorting the lattice structure so that it looks as though it has been squeezed along several axes – something that makes the electrons behave as though they have been exposed to a strong magnetic or electric field, although no actual such field has been applied. The team was also able to tune the density of the electrons on the copper surface by introducing defects or impurities into the system.
"Studying such artificial lattices in this way may certainly lead to technological applications, but they also provide a new level of control over Dirac fermions and allow us to experimentally access a set of phenomena that could only be investigated using theoretical calculations until now," adds Manoharan. "Introducing tunable interactions between the electrons could allow us to make spin liquids in graphene, for instance, and observe the spin quantum Hall effect if we can succeed in introducing spin-orbit interactions between the electrons."
He adds that molecular graphene is just the first of this type of "designer" quantum structure and hopes to make other nanoscale materials with such exotic topological properties using similar bottom-up techniques.
The work is reported in Nature 483, 306.
About the author
Belle Dumé is a contributing editor to nanotechweb.org |
Herpes is also commonly referred to as HSV (Herpes Simplex Virus). Herpes Simplex Virus is grouped into two classes:
– HSV-1 (Herpes Simplex Virus – Type 1) which is also commonly referred to as oral herpes.
– HSV-2 (Herpes Simplex Virus – Type 2) which is also commonly referred to as genital herpes.
HSV-1 triggers sores around the mouth, nose and lips. Because of these sores, oral herpes is also commonly known as fever blisters or cold sores. Whereas HSV-1 has been known to trigger genital herpes, the majority of cases of genital herpes are triggered by HSV-2. With herpes simplex virus type 2, an infected person usually displays symptoms such as sores around the genital area. Even though herpes simplex virus type 2 sores can occur in different areas of the body, they are most commonly found below the waist.
What Triggers HSV and Outbreaks?
HSV-1 is primarily transmitted from one person to another via oral secretions and sores on the skin. A person can contract herpes simplex virus type 1 through sexual intercourse, oral sex, kissing and the sharing of items such as towels, toothbrushes, spoons and other utensils. In essence, a person generally contracts herpes simplex virus type 2 through unprotected sexual intercourse with a carrier. It is important to point out, however, that one can contract the infection even when sores are not present in the genital area.
Since genital herpes can be passed from the mother to the child during birth, it is important that a pregnant woman with genital herpes consult a physician so that the chances of infection are minimized during the delivery process. More often than not, the virus stays dormant for prolonged periods of time. However, the virus can be reactivated when the immune system is weakened as a result of fatigue, menstruation, emotional or physical stress, strain on the affected region (especially during sexual intercourse), or general illness.
Signs and Symptoms of Herpes Simplex Virus
The Herpes Simplex Virus is usually typified by lesions and blisters that appear on the mouth, lips and nose. These symptoms are normally associated with oral herpes. Genital herpes is typified by blisters, lesions and soreness in the vaginal area, penis and scrotum.
How is Herpes Simplex Virus Diagnosed?
Usually, the appearance of HSV is distinctive and testing is not required in order to verify the diagnosis. In instances where the health provider is not sure about the presence of the herpes simplex virus, it can be diagnosed through laboratory tests.
Herpes Simplex Virus Treatment
Even though there is no cure for herpes, there are treatments that can ease the symptoms. Prescribed medications can ease the aches associated with an outbreak, reducing the time taken for complete healing. Medications have also been known to decrease the number of outbreaks. Examples of medications used to treat the herpes simplex virus include, but are not limited to, Valtrex. If you have genital sores, the pain can be eased by taking warm baths.
Is Herpes Simplex Virus Painful?
There are people who experience only mild forms of genital herpes, and others who do not experience symptoms at all. More often than not, people with genital herpes do not even know that they have the disease. Nonetheless, when symptoms are triggered, they are usually painful. Pain is most severe among people who experience an outbreak for the first time. Herpes outbreaks are usually characterized by pain around the genital area, burning sensations and itching. Some people also experience pain while urinating and discharge from the vagina or penis.
Oral herpes (Herpes Simplex Virus Type 1) normally triggers a tingling feeling and burning sensation just before the formation of blisters.
Personality disorders are specific diagnoses that typically indicate severe disturbances across several areas of functioning. These disorders involve disturbances in behavior, personal and social functioning and perception of self, others and the world. Additionally, personality disorders are considered to be chronic and pervasive patterns of coping that, by adulthood, have become inflexible. There is a great deal of literature and research that traces these diagnoses back to early developmental years. Because personality disorders reveal themselves in their most pronounced forms within relationships, they are generally thought to be consequences of early attachment and bonding experiences that were interrupted or dysfunctional. For example, many individuals with personality disorders have experienced erratic parenting and childhood abuse and/or neglect. Further, such individuals typically have a series of relationship problems in adulthood that are characterized by the same behaviors, feelings, thoughts and beliefs that are disruptive and distressful in relationships.
While the diagnosis of a personality disorder is usually reserved for individuals in late adolescence and adulthood, early family experiences are often addressed in treatment in order to correct current dysfunctional behavior, distorted thinking and perception and distressful emotion caused by these disorders. Treatment is considered to be a long-term endeavor.
To illustrate the nature of personality disorders, their characteristics and their pervasive dysfunction in relationships, the Antisocial Personality Disorder is used here as an example. This disorder is one in which there is a pattern of disregard for and violation of the rights of others. Some of the diagnostic criteria for Antisocial Personality Disorder are:
• failure to conform to social norms
• unlawful behavior
• deceitfulness, lying, use of aliases or conning others
• impulsivity– acting without consideration of consequences
• disregard for safety of self and others
• failure to honor one’s responsibilities and obligations
• lack of remorse
These criteria clearly describe dysfunctional ways of behaving in relationship to others. To further illustrate how relationship issues are related to personality disorders, there is strong evidence that individuals who have had erratic, abusive or neglectful parents are vulnerable to developing this disorder. Also, the treatment for individuals with this disorder (and other personality disorders) attempts to correct one’s relationship to others. For example, treatment of Antisocial Personality Disorder often seeks to repair some of the missing moral development of early life such as the experience of having empathy for others.
Antisocial Personality Disorder is just one example of personality diagnoses that address the dysfunctional styles of behaving and coping within relationship to others. Other personality disorders are listed here along with a brief description of relationship problems found in each:
• Borderline Personality Disorder– relationships are governed by fear of abandonment by others
• Histrionic Personality Disorder– there is an intense need for the attention of others and relationships are thought to be more intimate than they really are
• Narcissistic Personality Disorder– there is a belief that one is more special and more entitled than others
• Avoidant Personality Disorder—there is avoidance of interpersonal contact and fear of being disliked
• Dependent Personality Disorder– there is an excessive reliance upon others
• Obsessive-Compulsive Personality Disorder– there is rigidity that interferes with relationships
• Paranoid Personality Disorder—there is a belief that others cannot be trusted
• Schizoid Personality Disorder– there is no enjoyment or desire for close relationships
• Schizotypal Personality Disorder– there is acute discomfort in relationships and an inability to maintain them
All of these personality disorders describe distorted interpersonal boundaries and a distortion in the ability to be appropriately intimate with others. Relationships in families of substance use and addiction can display many of the characteristics of all of these personality disorders and are traditionally thought of as causing intimacy problems for individuals who grow up in these sorts of families. In particular, codependent traits and behaviors have become identified as especially prominent in such families and while codependency is not identified as a formal diagnosis of personality, it does speak of specific dysfunctional personality characteristics, behavior and coping that are enduring and patterned as are those in personality disorders.
Codependency became an issue of concern in substance treatment long before the term was coined. An earlier concept, co-alcoholism, embodied the many traits, characteristics, behaviors and dynamics of codependency, but was used specifically to describe relationships involving alcohol. Co-alcoholics were considered to be family members of alcoholics — primarily spouses — who were impacted by the alcoholism of their loved one in dysfunctional ways. Co-alcoholic and enabler were used interchangeably to describe a person who took on the responsibilities of the alcoholic and made amends and/or accommodations for the alcoholic’s behavior. Denial of the alcoholism was seen as a core characteristic of co-alcoholics and enablers. The behavior of co-alcoholics was also seen as contributing to the progression of the disease of alcoholism.
Treatment of alcoholism grew to include family members in hopes of intervening in co-alcoholic behavior and thus improving the chances of alcoholics maintaining recovery. Addressing family system dynamics was considered to be essential in the recovery efforts of substance dependent individuals and continues to be today. In the 1980s, the adult child and codependency self-help movements– particularly among 12 Step groups– further defined and popularized these concepts.
There is a significant body of literature about relationships and family dynamics that involve substance use. Some early authors who have addressed these issues are:
• Melody Beattie, Codependent No More and Beyond Codependency
• Claudia Black, It Will Never Happen to Me and It’s Never Too Late to Have a Happy Childhood
• John Bradshaw, Healing the Shame That Binds You; Homecoming: Reclaiming and Championing Your Inner Child; Family Secrets: the Path to Self-Acceptance and Reunion
• Pia Melody, Facing Codependence; Breaking Free and Facing Love Addiction
Such literature has helped define in depth the difficulties and dynamics of codependency as well as the process of recovering from such relationship dysfunction. Further, such works have identified codependency as usually beginning in childhood in one’s family of origin and to be characterized by behavior in relationships as well as by certain types of thoughts, feelings and perceptions of one’s self in relationship to others. Some of the hallmarks of codependency are:
• Behavior of codependency
o attempts to control others
o attempts to please others
o assuming responsibility for others
Such behaviors are common in relationships of addiction and are classic examples of the behavior seen in the family members of addicts and alcoholics. These behaviors are typically used to lessen the impact of the loved one’s addiction.
• Emotions of codependency
The emotions of codependence are direct results of living with another’s addiction and attempting to solve the problems surrounding the addiction as well as attempting to solve the problems of the addict.
• Beliefs of codependency
o self-worth depends upon a relationship
o good feelings depend upon others’ approval
o self-esteem depends upon solving others’ problems
o well-being depends upon the status of the relationship
Such beliefs underlie the behaviors and feelings of codependency. Another person is the focus and good feelings about the self are derived solely from the relationship. Co-dependency is based upon an inherent “no-win” situation because codependents involve themselves with dysfunctional people who “need to be fixed”.
There are several ways that codependency can be manifested within relationships. Some of these are:
• becoming submissive
These characteristics are typically caused by anxiety and fear of loss and abandonment. These are individuals that we typically call “people pleasers” who seek approval and act from a motivation to be liked. The threat of losing their relationship causes these types of codependents to lose themselves as they seek to understand and demonstrate what the other person wants. Their chief motivation is to avoid abandonment.
Other codependents are more concerned with a fear of losing control. This is manifested in relationships by:
• being demanding
• being manipulative
• using emotional blackmail
• creating turmoil, chaos and drama
Still other codependents live their lives through others by:
• making personal sacrifices
• being focused on the well-being of another while disregarding one’s own well-being
• being focused on the goals of another while dismissing one’s own
• being focused on the thoughts and feelings of another while dismissing one’s own
Codependents display a variety of behaviors that can, at various times, include a unique mix of the many types listed here. The common theme is, however, a focus upon accommodating and maintaining a relationship with someone who is impaired. Traditionally, codependency has been in relationship to an individual with a substance disorder, however, these characteristics can be found in relationship to people with other dysfunction such as mental illness and criminal behavior. Typically, codependents have learned these behaviors as children and continue them throughout adulthood to some degree in all their relationships. For this reason, codependents, like individuals who have personality disorders, display fixed patterns of coping and interaction within relationships. Along with their behaviors there are also fixed beliefs about self, others and the world that create poor coping and distress in daily life.
Self-help groups such as Codependents Anonymous use the 12 Steps of Alcoholics Anonymous to address codependency and recovery. The first step of Codependents Anonymous is the only step of the 12 which differs from the original 12 Steps used in Alcoholics Anonymous. The first step of Alcoholics Anonymous says:
We admitted we were powerless over alcohol and that our lives had become unmanageable.
In contrast, the first step of Codependence Anonymous states:
We admitted we were powerless over others and that our lives had become unmanageable.
Codependents Anonymous suggests several points to consider when determining if one has codependency. Among these are:
• patterns of denial that cause difficulty in identifying feelings
• patterns of low self-esteem that cause difficulty in making decisions, getting one’s needs met and perceiving oneself as lovable or worthwhile
• patterns of compliance that honor others’ opinions, feelings, desires and wishes, while dismissing one’s own
• patterns of control in which one behaves so as to be needed and approved of, to care for others, and to control others
As in the diagnoses of personality disorders, these patterns of codependency are considered to be chronic and fixed patterns of behavior and coping. Recovery implies the need to change behavior and coping strategies in relationships. Codependents Anonymous offers a definition of codependency recovery in its preamble stating:
Co-Dependents Anonymous is a fellowship of men and women whose common purpose is to develop healthy relationships…share with each other in a journey of self-discovery — learning to love the self.
Recovery pursuits for the codependent involve becoming aware of one’s own feelings, values and beliefs; improving interpersonal boundaries in which one’s own needs and wishes are honored; tolerating true intimacy; developing assertiveness and improving self-care. Self-help groups are valuable recovery resources for codependency and many find that psychotherapy and counseling are as well. Often therapy will address current issues, along with problems from earlier stages of life such as those in one’s family of origin where codependent beliefs and behaviors were learned. In many ways, treatment for codependency is similar to treatment for formal personality disorders in which dysfunctional beliefs and perceptions must be replaced with more appropriate ones. |
CHAPTER 05.02: DIRECT METHOD OF INTERPOLATION: Cubic Interpolation: Part 1 of 2
In this segment, we'll talk about cubic interpolation. Again, the cubic interpolation we're going to be doing will be under the direct method. So in this case, cubic interpolation, again you might be given several data points, tens, or hundreds, but in cubic interpolation what you're going to do is you're going to choose four data points. You're going to choose four data points like this, and then draw the cubic interpolant there. So through these four data points you are choosing a cubic interpolant, which will be, let's suppose, f3 of x, and the form of that will be a0, plus a1 x, plus a2 x squared, plus a3 x cubed. So the bottom line about cubic interpolation is to find out what these four coefficients are, and those four coefficients will be found out from the values which you will have at these four data points, and then you can find out the value of the interpolant at any point between the lowest value and the highest value of x which you have for those four data points. So let's go ahead and take an example and see how this all works. Let's suppose somebody gives me the upward velocity of a rocket as a function of time. So they've given me six data points, 0, 0, 10, 227.04, 15, 362.78, 20, 517.35, 22.5, 602.97, 30, 901.67. So we're given six data points, and what we want to do is we want to be able to find out what the value of the velocity at 16 is by using cubic interpolation. So if you are trying to find the value of the velocity at 16, you've got to first choose four data points to be able to find the interpolant, so you want to choose the data points which are closest to 16. So 15 is 1 away, 20 is 4 away, 22.5 is 6.5 away, and 10 is 6 away, so what you have to do is you have to choose the four closest points to 16 on this time column, and these are 6 away, 1 away, 4 away, and 6.5 away; the other ones are further away, so you're not going to choose those, and at the same time, this number 16 has to be bracketed between 10, 15, 20, and 22.5, which it is, because 16 is between 15 and 20. So what you're going to do is you're going to take these four values here to find your cubic interpolant. So you have your velocity now given as a0, plus a1 t, plus a2 t squared, plus a3 t cubed, so you've got a cubic interpolant now. So you're going to put the value of the velocity at 10, which is 227.04, which is a0, plus a1 times 10, plus a2 times 10 squared, plus a3 times 10 cubed. Then you've got the value of the velocity at 15, which is 362.78, and that gives you the second equation. Then we've got the value of the velocity at 20, which is 517.35, which will be a0, plus a1 times 20, plus a2 times 20 squared, plus a3 times 20 cubed, so that gives you the third equation. Now, the fourth equation is going to come from the value of the velocity at 22.5. So all you are doing is substituting the value of time in this cubic expression at these different times of 10, 15, 20, and 22.5, and setting up the equations. So this is going to give you four equations, four unknowns. And as homework, what I will ask you to do is go ahead and set it up in the matrix form. I'm not going to show you how to set it up in the matrix form in this segment, but go ahead and set it up in the matrix form, and then go ahead and solve it by using Gauss elimination, or the LU decomposition method, or just simply your high school algebra method. And what's going to happen is when you have four equations, four unknowns, you'll be able to find out what the values of a0, a1, a2, and a3 are.
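As a sketch of the homework step described above (setting up the four equations in matrix form and solving them), the following NumPy snippet uses the four chosen data points; it is an illustration added here, not part of the original lesson:

```python
import numpy as np

# Four data points chosen around t = 16 s (time in s, velocity in m/s)
t = np.array([10.0, 15.0, 20.0, 22.5])
v = np.array([227.04, 362.78, 517.35, 602.97])

# Each row of A is [1, t, t^2, t^3], so that A @ [a0, a1, a2, a3] = v
A = np.vander(t, N=4, increasing=True)

a0, a1, a2, a3 = np.linalg.solve(A, v)
print(a0, a1, a2, a3)   # approximately -4.3810, 21.289, 0.13065, 0.0054606
```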
So this is what I get after I solve these four equations, four unknowns: I get a0 equal to -4.3810, a1 equal to 21.289, a2 equal to 0.13065, and a3 equal to 0.0054606. So those are the values of the constants I get for my third-order interpolant there, which basically means that I can substitute these values in the coefficients of the third-order polynomial, which is a0, plus a1 t, plus a2 t squared, plus a3 t cubed. So that means it is -4.3810, plus a1, which is 21.289, times t, plus 0.13065 times t squared, plus a3, which is 0.0054606, times t cubed. And keep in mind that this particular interpolant is now valid between the values of time t equal to 10 and 22.5, because those are the lowest value of t and the highest value of t which you used to calculate this third-order interpolant. However, we are interested in calculating the value of the velocity at 16. When I substitute the value of time equal to 16 there, I'm going to get 392.06 meters per second. All I'm doing is substituting the value of time equal to 16 here, 16 squared here, 16 cubed here, and I'll get 392.06 meters per second. Now, how do I know how many significant digits I can trust in this solution? That should be based on what I got from the previous interpolation. So I'm going to write that down: the value of the velocity at 16 which I got from quadratic interpolation was 392.19, and the value of the velocity at 16 which I got from cubic interpolation is 392.06. So, in order to be able to calculate my relative approximate error, I can choose this as my present approximation and this as my previous approximation. So in that case, what I do is I get the relative approximate error to be the current approximation, 392.06, which is from the cubic interpolation, minus 392.19, which is from my quadratic interpolation, divided by 392.06, which is from the cubic interpolation, and I multiply it by 100 in order to get the value in terms of percentages, and the absolute value of this turns out to be 0.033 percent. So what does this tell me? It is less than or equal to 5 percent, it is less than or equal to 0.5 percent, and it's also less than or equal to 0.05 percent; I cannot go beyond that, I cannot go to 0.005 percent. So this tells me that since this is less than or equal to 0.05 percent, at least three significant digits are correct. So it's telling me that at least three significant digits are correct, so in the number 392.06, I can trust 3, I can trust 9, and I can trust 2; those are the three significant digits which I can trust in that solution. And that's the end of this segment.
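Continuing the sketch above, evaluating the interpolant at t = 16 and forming the relative approximate error against the quadratic-interpolation value quoted in the transcript:

```python
# Evaluate the cubic interpolant at t = 16 s (uses a0..a3 from the previous snippet)
t_eval = 16.0
v_cubic = a0 + a1*t_eval + a2*t_eval**2 + a3*t_eval**3   # ~392.06 m/s

# Relative approximate error against the quadratic-interpolation value of 392.19 m/s
v_quadratic = 392.19
abs_rel_error_percent = abs((v_cubic - v_quadratic) / v_cubic) * 100

print(round(v_cubic, 2), round(abs_rel_error_percent, 3))   # ~392.06 m/s, ~0.033 %
```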
...the mouth of a baby gray whale!
- It has an upper and a lower jaw. It looks like a beak.
Fact: Whales need to be streamlined to move through the water easily. Baleen whales (gray whales are baleen whales) need large mouths to hold the large volumes of mud and water they take in in order to catch lots of the tiny crustaceans, which they eat for food.
- No protruding lips; instead, the mouth opening is rounded inward. No teeth, but look closely to see edges of baleen.
Fact: When the calf grows up and is weaned it will get food by filtering large volumes of mud and water through short yellow-white colored baleen plates that hang from its upper jaw. Baleen acts like a large strainer, catching the tiny crustaceans in the hair-like fringe as water and mud are pushed out of the mouth. The young whale will use its huge tongue to lick the food off its baleen.
Fact: The skin feels like wet rubber, which makes a good waterproof jacket for the baby! The skin easily stretches as the baby gains a layer of blubber under it to keep warm.
- The snout is "pickled," or dented and dimpled.
Fact: Hairs grow from those dents! The upper jaw has a number of regularly spaced indents containing tactile hair follicles, each indent with a hair a half inch to an inch long. On one calf, scientists* counted 56 of these hairs on each side of the upper jaw. (Whales are mammals, and mammals always have some hair.)
- The skin has white spots, and darker and lighter shades of gray.
Fact: When born, gray whales are a deep slate gray color with many white to light gray patches and flecks. Newborns are quickly infested with whale lice, small crustaceans that live in the creases of their skin and feed on dead skin. Many species of whales and dolphins naturally slough visible quantities of skin that can appear light-colored, ragged, and torn. As the calf grows, its skin will also become deeply embedded with host-specific sessile barnacles. These barnacles and whale lice give older gray whales their mottled appearance.
- Skin has some scars or torn-looking places.
Fact: Some scarring or skin sloughing may happen when calves bump against the sandy bottom of the lagoons or rub against the barnacles on their mothers.
- How does the baby nurse with a mouth like that?
Fact: A whale baby does not suck, but opens its mouth as the mother pumps in milk. Author **John Heyning describes it: "Calves nurse just as human babies do. Nursing in the ocean poses some challenges. It is not easy to drink under water, and cetacean mothers often nurse their calves while swimming. To overcome such difficulties, cetaceans have evolved special muscles around the mammary glands to squirt milk quickly into the calf's mouth, which has a special tongue designed not to spill!"
* L. Eberhardt and Kenneth S. Norris, "Observations of newborn Pacific gray whales on Mexican calving grounds," Journal of Mammalogy (Vol. 45).
** John E. Heyning, Masters of the Ocean Realm: Whales, Dolphins, & Porpoises (Seattle: University of Washington Press, 1995).
Assess, Plan, Act
IUCN and the Species Survival Commission (SSC) approach conservation efforts by taking into account three essential steps: Assess, Plan and Act.
While Assess is about assessing the conservation status of species and Plan is the part of the process that enables the development of conservation strategies to protect species, it is only in Act, the last step of the process, that we are able to deliver conservation action on the ground that saves species from extinction.
IUCN currently has a variety of different programmes and initiatives that implement and guide species conservation action around the world. Reverse the Red, IUCN Save Our Species, the Integrated Tiger Habitat Conservation Programme and others make use of a carefully balanced network that involves different stakeholders such as non-governmental organisations, scientists, government agencies and local communities. By collaborating together, we are able to have a tangible, positive impact on biodiversity and threatened wildlife.
Forging strong partnerships between donors and the conservation community is therefore essential to lay the groundwork for effective species conservation action. If we want to reverse the trend of biodiversity loss and increase species population numbers, this is where we should focus our efforts. |
Includes eBook and worksheets for this title
For years, the best-selling Learn to Read series has provided teachers with engaging stories for their emergent readers. Each book in the series provides maximum support for the child by using repetitive, predictable story lines and illustrations or photos that match the text. Now, to supplement the Learn to Read books, Creative Teaching Press offers accompanying practice pages that have been written and developed to reinforce and extend the skills introduced in each book. For each book you will find the following:
Sight Words (pink) - Allows students to practice reading and writing sight words found in the story as well as other commonly used sight words.
Phonics (blue) - Provides practice with the phonics skills introduced in each story, such as reading and writing words with blends, short and long vowels, and word families.
Vocabulary (green) - Reinforces content-based and high-frequency vocabulary words found in each story.
Skill (orange) - Allows students to develop an understanding of other important language arts concepts, such as compound words, opposites, contractions, spelling patterns, and more. Each skill page also extends the content learning reflected in the story.
Activity (purple) - Allows students to create their own personal mini books to practice phrasing and fluency. Each mini book reinforces the content and skills presented in the corresponding Learn to Read story.
Author: Rozanne Lanczak Williams
The size of a small car, Curiosity is much larger than previous Mars rovers and carries 10 science instruments.
POWER Dust can cover solar panels on Mars, so Curiosity generates its own power. Eleven pounds of plutonium dioxide generates heat, which is converted to electricity and used to recharge two lithium-ion batteries.
VISION Extending 7 feet above the ground, a mast holds Mastcam, a pair of high-definition cameras, and ChemCam, which can measure the composition of rock after shooting it with a laser.
DRIVE Each of the 20-inch aluminum wheels has its own motor.
REACH The rover’s 7-foot arm carries several tools, including a camera, an X-ray spectrometer, and a drill, brush and scoop for collecting samples.
ANALYSIS The rover’s body holds experiments for detecting ground water, measuring naturally occurring radiation and analyzing soil and rock samples delivered by the robotic arm.
A map (not reproduced here) showed the landing site and the path of Curiosity’s planned peregrinations as a red line.
Nearsightedness, technically known as myopia, is a condition which causes difficulty focusing on objects at a distance, while near vision remains normal. Myopia is one of the most common vision problems worldwide and it is on the rise.
Myopia Signs and Symptoms
People with myopia are usually able to see well up close, but have difficulty seeing objects at a distance. Due to the fact that they may be straining or squinting to see into the distance, they may develop headaches, eye fatigue or eye strain.
Myopia is a refractive error that affects the way light is focused on the retina. For clear vision, light should come to a focus point directly on the retina. In myopia, the eyeball is longer than usual (or the cornea is too steeply curved), so light comes to a focus in front of the retina, causing distant objects to appear blurry while close objects can still be seen clearly.
Myopia typically has a genetic component as it often appears in multiple members of a family and it usually begins to show signs during childhood, often getting progressively worse until stabilizing around age 20. There may also be environmental factors that contribute to myopia such as work that requires focusing on close objects for an extended period of time and spending too much time indoors.
Diagnosis of Myopia
Myopia is diagnosed by an eye examination with a qualified optometrist. During the exam the optometrist will determine the visual acuity of the eye in order to prescribe eyeglasses or contact lenses. A prescription for myopia will be a negative number, such as -1.75.
Treatment for Myopia
Myopia is typically treated with corrective eyeglasses or contact lenses, and in certain cases refractive surgery such as LASIK or PRK is an option. Surgery is the riskiest treatment, as it permanently changes the shape of the cornea. Other options include implanting a lens inside the eye, called a phakic intra-ocular lens, or vision therapy. A treatment called ortho-k, in which the patient wears corneal reshaping contact lenses at night in order to see without correction during the day, can be another option.
While some people require vision correction throughout the day, others may need it only during certain tasks such as driving, watching television or viewing a whiteboard in school. The type of treatment depends on the overall health of your eye and on your eye and vision needs.
What are the formulas for double angles?
Using the cosine double-angle identity. The cosine double angle formula tells us that cos(2θ) is always equal to cos²θ-sin²θ.
What is double angle formula for sin?
The double-angle formulas are a special case of the sum formulas, where α = β. Deriving the double-angle formula for sine begins with the sum formula, sin(α+β) = sin α cos β + cos α sin β. Setting both angles equal gives sin(θ+θ) = sin θ cos θ + cos θ sin θ, so sin(2θ) = 2 sin θ cos θ.
How do you prove tan2x?
The formula for tan2x identity is given as:
- tan 2x = 2 tan x / (1 − tan²x)
- tan 2x = sin 2x / cos 2x
Is cos3x the same as 3cosx?
FAQs on Cos3x: Cos3x is a triple angle identity in trigonometry. It can be derived using the angle addition identity of the cosine function. The identity is cos 3x = 4cos³x − 3cos x.
How do you prove the double angle formulas?
The double-angle formulas are proved from the sum formulas by putting β = α. We have sin 2α = 2 sin α cos α and cos 2α = cos²α − sin²α . . . (1). This is the first of the three versions of cos 2α. To derive the second version, in line (1) use the Pythagorean identity sin²α + cos²α = 1.
How do you prove the double angle identities of sin and cos?
We can prove the double angle identities using the sum formulas for sine and cosine. From these formulas, we also have the following identities: sin²x = ½(1 − cos 2x), cos²x = ½(1 + cos 2x), sin x cos x = ½ sin 2x, and tan²x = (1 − cos 2x) / (1 + cos 2x).
How do you find the RHS of a double angle formula?
Different forms of the Cosine Double Angle Result: by using the result sin²α + cos²α = 1 (which we found in Trigonometric Identities), we can write the RHS of the above formula as cos²α − sin²α = (1 − sin²α) − sin²α = 1 − 2sin²α.
How to get the cosine of a double angle formula?
It is useful for simplifying expressions later. Using a similar process, we obtain the cosine of a double angle formula: start from the sum formula cos(α+β) = cos α cos β − sin α sin β and once again replace β with α on both the LHS and RHS, giving cos 2α = cos²α − sin²α. By using the result sin²α + cos²α = 1 (which we found in Trigonometric Identities), we can rewrite the RHS as 1 − 2sin²α or, equivalently, 2cos²α − 1.
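As a quick sanity check on the identities discussed above, the short script below (an illustration added here, not part of the original Q&A) evaluates both sides of each formula at a few sample angles.

```python
import math

# Numerically spot-check the double- and triple-angle identities discussed above
# at a few sample angles. This is only an illustration, not a proof.
angles = [0.3, 1.1, 2.5, 4.0]  # radians

for t in angles:
    assert math.isclose(math.sin(2 * t), 2 * math.sin(t) * math.cos(t))
    assert math.isclose(math.cos(2 * t), math.cos(t) ** 2 - math.sin(t) ** 2)
    assert math.isclose(math.cos(2 * t), 1 - 2 * math.sin(t) ** 2)
    assert math.isclose(math.tan(2 * t), 2 * math.tan(t) / (1 - math.tan(t) ** 2))
    assert math.isclose(math.cos(3 * t), 4 * math.cos(t) ** 3 - 3 * math.cos(t))

print("The double- and triple-angle identities hold at every sample angle.")
```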
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Does it feel like grazing management information is shrouded in acronyms and terms that boggle the mind on first glance? Do you struggle to decipher terms like animal unit equivalents? And how does one go about calculating AUMs and then applying those numbers? Be reassured, you’re not alone! There’s a lot going on when sorting through the finer points of grazing management and figuring out how to work through the many calculations.
A good starting point is defining a grazing animal in terms of how much forage it requires to meet its nutritional demands. We know that grazing animals’ forage needs differ depending on class, weight, age and stage of production. And in order to account for those differences, it’s helpful to create a baseline in order to quantify forage demand.
Carrying capacity, also known as grazing capacity, is the amount of forage available for grazing animals in a specific pasture or field. Calculating the correct carrying capacity will help you determine a proper stocking rate that maintains productivity of both your animals and forage while encouraging the sustained health of the grassland resources.
Stocking rate is the number of animals on a pasture for a specified time period and is usually expressed in Animal Unit Months (AUMs) per unit area.
One way to determine carrying capacity is to obtain past stocking rates and grazing management information and assess the condition of the pasture. But what if the historical stocking rate data is not available or you are unsure of its accuracy and reliability?
Carrying capacity can be calculated using several different techniques. All of them depend on some trial and error as they are monitored and adjusted over time. When calculating carrying capacity, it boils down to three questions:
How much forage is available?
How much of that forage can be used by grazing animals?
How many animals can graze on that piece of land and for how long?
The BCRC Carrying Capacity Calculator provides a road map for answering these questions using two separate methods: 1) forage estimates based on provincial guides and 2) field-based sampling, also known as the clip and weigh method. Each method contains four steps.
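To make the arithmetic behind those three questions concrete, here is a minimal sketch. The yield, utilization rate, forage-per-AUM figure and herd numbers are placeholder assumptions for illustration only; they are not values from the BCRC calculator or provincial guides.

```python
# Simplified carrying-capacity sketch. All input values are placeholder
# assumptions for illustration; substitute your own field data or guide values.

FORAGE_PER_AUM_LB = 790  # assumed forage one animal unit eats in a month (lb)

def carrying_capacity_aum(acres: float,
                          forage_yield_lb_per_acre: float,
                          utilization_rate: float) -> float:
    """Total grazeable forage expressed in animal unit months (AUMs)."""
    usable_forage_lb = acres * forage_yield_lb_per_acre * utilization_rate
    return usable_forage_lb / FORAGE_PER_AUM_LB

def grazing_days(total_aum: float, herd_size: int, animal_unit_equivalent: float) -> float:
    """How long a herd of a given size and AUE can stay on the pasture."""
    herd_aum_per_month = herd_size * animal_unit_equivalent
    return (total_aum / herd_aum_per_month) * 30.4  # average days per month

if __name__ == "__main__":
    aums = carrying_capacity_aum(acres=160,
                                 forage_yield_lb_per_acre=1500,
                                 utilization_rate=0.40)           # assumed 40% utilization
    days = grazing_days(aums, herd_size=75, animal_unit_equivalent=1.3)  # e.g. 1,300 lb cows
    print(f"~{aums:.0f} AUMs available; ~{days:.0f} grazing days for this herd")
```

Swapping in field-sampled yields (the clip and weigh method) or guide-based estimates simply changes the inputs; the structure of the calculation stays the same.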
This article written by Dr. Reynold Bergen, BCRC Science Director, originally appeared in the May 2021 issue of Canadian Cattlemen magazine and is reprinted on the BCRC Blog with permission of the publisher.
Pasture plants are generally classified as decreasers, increasers and invaders. Decreaser species are the plants you want to see and your cattle prefer to eat, so they face the most grazing pressure. Increaser plants tend to thrive when the decreaser species are challenged by overgrazing, drought or other sub-optimal conditions. Invaders (weeds) proliferate when increasers and the remaining decreasers are so weakened by overgrazing or environmental extremes that they have a hard time competing for nutrients, water and sunlight.
Healthy, productive pastures are dominated by decreasers. The composition of the decreaser community in healthy native rangelands was shaped by thousands of years of natural selection and environmental pressures. In tame pastures, humans take the wheel from Mother Nature as we seek to establish and maintain a stand of tame decreaser species that can be productive and long-lived in our particular soil and climate conditions. In both native and tame pastures, good grazing managers adjust stocking densities, grazing intensities, grazing and rest period length and frequency, etc. based on annual and seasonal variations in growing conditions to maintain pasture health and optimize long-term forage and animal productivity.
Are you managing a new-to-you pasture and you need to determine how to stock it? Perhaps it has been recently purchased or rented, or you simply don’t trust the information provided on historical stocking rates.
The first principle of pasture management is to balance the available forage supply with livestock demand. Carrying capacity (also known as grazing capacity) is the amount of forage available for grazing animals in a specific pasture or field. A substantial amount of Canada’s rangeland is in some form of public ownership (e.g. grazing leases, forest grazing allotments) and has carrying capacity data available. With privately owned or recently acquired land however, there may not be any information on historical forage production and carrying capacity.
Carrying Capacity is defined as the average number of livestock and/or wildlife that may be sustained on a pasture that fits the management goals. Site characteristics, such as soil, water, plant, and topography of the pasture, can impact carrying capacity. Forage production and availability for grazing can also affect carrying capacity. Source: Society for Range Management, 1998.
Carrying capacity can be calculated using a variety of techniques. All of them depend more or less on trial and error as they are monitored and adjusted over time, since the carrying capacity for an individual year varies from the long-term average for the pasture. The effectiveness of each method depends on the kind of grazing land, but a combination of methods is generally required.
Adaptive grazing herd management applies to grazing practices that are developed with careful consideration to the specific conditions that exist on individual farms and ranches. When it comes to adaptive grazing management, it’s all about using the resources you have available and incorporating different techniques depending on where you live, says rancher and consultant Sean McGrath. McGrath spoke about the value of being flexible but also the importance of making a plan and measuring success, during a BCRC webinar last winter.
Managing the movement of cattle through pastures or paddocks will help producers achieve energy efficiency. “Plants are solar panels and to make them efficient, we need to make sure there are solar panels there to start with,” McGrath said. He pointed out that it is much cheaper for cattle to graze than it is to manually feed them and understanding the key principles of grazing management is vital for adaptive management (skip ahead to 15:05).
Producers should manage herd movement to prevent overgrazing, which is defined as a plant being grazed before it has recovered from the previous grazing event. “We would never cut a hay field on the first of June and come back and hay it on June 10. A pasture is no different,” McGrath reasoned.
Editor’s note: The following is part 2 of two-part series. See part 1.
Photo supplied by Ryan Boyd
The secret — if it is a secret — to pasturing cattle on alfalfa is to follow a few simple management steps to reduce the risk of bloat, say producers from across the country, who for years claim good success by including the forage legume in pasture mixes.
Straight alfalfa stands can be managed quite well, but most producers today are favouring alfalfa/grass forage blends. They are very productive, produce excellent rates of gain on cattle, help to reduce the bloat risk, and also provide important biodiversity. Biodiversity benefits the cattle in providing a range of crops that mature at different times and can handle varying growing conditions, as well as biodiversity to benefit soil health.
The main “not to do” message is don’t turn somewhat hungry cattle into a pre-bloom high percentage stand of alfalfa and leave them to selectively graze the lush leaves. If there is a heavy dew or rain as well, it creates a perfect storm for bloat.
The key “to do” messages include making sure cattle move onto alfalfa pastures with a full gut and the forage stand is dry. Introduce them to lusher forage gradually by limiting the amount of area they have access to in a day, and force them to eat the whole plant including stems and not just leaves. Other “to do” strategies that some producers use — supply a bloat-control agent in cattle drinking water, make some dry hay available as well, as the fibre in hay reduces the risk of gas build up in the rumen, and include low-bloat forage legumes such as sainfoin in the pasture mix.
It is important to apply some basic management principles to capitalize on the benefits of having alfalfa in a grazing program. As grazing research summarized in Part 1 has confirmed over the years, not including alfalfa in pasture mixes can be like leaving money on the table.
Here is what producers from across the country had to say about how alfalfa is managed in their grazing programs.
Last week during National Black History Month, ground was broken on the National Mall in Washington, DC, for what will become the National Museum of African American History and Culture. In his remarks at the ceremony, President Obama mentioned that he wanted his daughters to see the famous African Americans like Harriet Tubman not as larger-than-life characters, but as inspiration of “how ordinary Americans can do extraordinary things.”
Image: This original photo of Harriet Tubman in the handbook lists the many roles she played in addition to being a “conductor” on the Underground Railroad, including nurse, spy and scout for the Union army during the Civil War. Her quote: “I looked at my hands to see if I was the same person now I was free. There was such a glory over everything… I felt like I was in heaven.” Source: The Underground Railroad: Official National Park Handbook.
One of the most dramatic areas of African American history is the story of the fight against slavery and the profile in courage represented by the ordinary people who did extraordinary things while participating in the Underground Railroad.
The National Park Service (NPS) has produced a number of exemplary publications about it, with three of them available today from the U.S. Government Bookstore, including the
- Underground Railroad: Official Map and Guide,
- the Discovering the Underground Railroad: Junior Ranger Activity Book, and
- the Underground Railroad: Official National Park Handbook, a perpetual bestseller.
That these items are not your typical guidebooks about a single historic site is due to the fact that the Underground Railroad itself is not a typical American national park.
Congress and the National Park Service act to preserve the legacy of the Underground Railroad
Back in 1990, Congress instructed the National Park Service to perform a special resource study of the Underground Railroad, its routes and operations in order to preserve and interpret this aspect of United States history.
Following the study, the National Park Service was mandated by Public Law 105-203 in 1998 (you can read the law on GPO’s FDSys site) to commemorate and preserve this history through a new National Underground Railroad Network to Freedom Program to “educate the public about the importance of the Underground Railroad in the eradication of slavery, its relevance in fostering the spirit of racial harmony and national reconciliation, and the evolution of our national civil rights movement.”
What was the Underground Railroad?
What was called the Underground Railroad was neither “underground” nor a “railroad,” but was instead a loose network of aid and assistance by antislavery sympathizers and freed blacks across the country that may have helped as many as one hundred thousand enslaved persons escape their bondage from before the American Revolution through the Civil War.
Image: NY State historical marker in Albany for the UGRR along the American Trails UGRR bicycle route.
Describing one of the most significant internal resistance movements ever, the National Park Service said in a 1996 press release that:
The Underground Railroad was perhaps the most dramatic protest against human bondage in United States history. It was a clandestine operation that began during colonial times, grew as part of the organized abolitionist movement, and reached a peak between 1830 and 1865. The story is filled with excitement and triumph as well as tragedy –-individual heroism and sacrifice as well as cooperation to help enslaved people reach freedom.
Where did the term “Underground Railroad” come from?
Historians cannot confirm the origins of the name, but one of the stories reported by the Park Service has the term coming out of Washington, DC, in 1839, when a recaptured fugitive slave allegedly claimed under torture that his escape plan instructions were to send him north, where “the railroad ran underground all the way to Boston.” However it came about, the term was widely in use by 1840, and is often shortened to “UGRR” by “those in the know.”
Image: An 1837 newspaper ad about a runaway slave from the book “The Underground Railroad from Slavery to Freedom” By Wilbur Henry Siebert, 1898.
Underground Railroad Routes
Another byproduct of the UGRR special resource study was that the National Park Service carried out an analysis of slavery and abolitionism and identified the primary escape routes used on the UGRR.
The map below, included in the Underground Railroad: Official Map and Guide and produced by the National Park Service cartographic staff at Harpers Ferry Center, shows the general direction of escape routes. Contrary to popular belief, Canada was not the only destination for freedom-seeking slaves (some fled to Mexico, Florida and the Caribbean), but it was the primary destination as the efforts to catch fugitives increased.
Image: Selected Routes of the Underground Railroad from the Underground Railroad: Official Map and Guide.
Additional outputs of the resource study and the subsequent research are the following three excellent Underground Railroad publications from the National Park Service.
Underground Railroad: Official National Park Handbook
The first book in our trio of publications is the Underground Railroad: Official National Park Handbook. It is comprised of a series of fascinating articles by top Underground Railroad historians that weave together a thorough view of the amazing stories behind the legend, illustrated with many drawings, court records, letters, paintings, photos, and other pictorial representations that help make this history come alive for the reader.
The handbook is broken into 3 major sections and 5 chapters:
- Part I: An Epic in United States History:
- 1- Myth and Reality by Larry Gara. This introductory chapter reviews and evaluates the truths vs. the legends that grew about the Underground Railroad.
- Part II: From Bondage to Freedom:
- 2- Slavery in America by Brenda E. Stevenson. In this chapter, the author details the rise of the institution of slavery in America and the harsh realities of life for the people who suffered under it. An interesting segment discusses the inequities of life for enslaved women vs. men.
- 3- The Underground Railroad by C. Peter Ripley. This fascinating chapter tells the courageous and often harrowing stories of freedom seekers and those who aided them, including Harriet Tubman who made nearly 20 trips to lead 300 slaves to freedom and Henry “Box” Brown who shipped himself in a 2-by-3-foot wooden crate from Richmond to Philadelphia to escape. Sprinkled throughout are cameos of famous former fugitives such as Sojourner Truth and Frederick Douglass, and abolitionists William Lloyd Garrison and Gerrit Smith, among many others.
- Part III: Tracking the Past:
- 4- Tracking the Past by the National Park Service. This chapter outlines some of the work done by the National Park Service and others to discover, verify and catalog the important people, places and artifacts related to the Underground Railroad.
- 5- Further Reading by Marie Tyler McGraw. In this final chapter, the author provides a recommended reading list for interested researchers of the authoritative works related to different aspects of the Underground Railroad story.
Underground Railroad: Official Map and Guide
This map and guide includes drawings, blurbs, maps and chronologies about different aspects of the slave trade and the Underground Railroad.
Included in this fold-out map and guide are the escape routes map shown earlier, vignettes of key figures from key “conductors” on the Railroad to abolitionists, and even a short glossary of terms related to the UGRR.
Discovering the Underground Railroad: Junior Ranger Activity Book
The final item in our trio of publications is the Discovering the Underground Railroad: Junior Ranger Activity Book.
Many National Parks offer visitors the opportunity to join the National Park Service Family as Junior Rangers. Interested students complete a series of activities during their park visit, share their answers with a park ranger, and receive an official Junior Ranger badge or patch and Junior Ranger certificate.
Since there is no one national park site for the Underground Railroad, the National Park Service came up with a different process with this activity book. Aspiring Underground Railroad Junior Rangers have to complete different numbers of activities in the book pertaining to their particular age level, then send the completed booklet in to the National Park Service’s Omaha office. There, “a ranger will go over your answers and then return your booklet along with an official Junior Ranger Badge for your efforts.”
This fun booklet includes activities appropriate from ages 5 to 10 and older, from word finders and mazes to essays and historical fact matching.
How can you get these Underground Railroad publications?
- Buy them online 24/7 at GPO’s Online Bookstore:
- Browse our entire Black History Month collection of Federal publications at the GPO Online Bookstore.
- Buy them at GPO’s retail bookstore at 710 North Capitol Street NW, Washington, DC 20401, open Monday-Friday, 9am to 4pm, except Federal holidays, (202) 512-0132.
- Find them in a library.
- Find some of the information online at the National Park Service’s National Underground Railroad Network to Freedom website.
About the Author: Michele Bartram is Promotions Manager for GPO’s Publication and Information Sales Division and is responsible for online and offline marketing of the US Government Online Bookstore (Bookstore.gpo.gov) and promoting Federal government content to the public.
Three mummified animals from ancient Egypt have been digitally unwrapped and dissected by researchers, using high-resolution 3D scans that give unprecedented detail about the animals’ lives – and deaths – over 2000 years ago.
The three animals – a snake, a bird and a cat – are from the collection held by the Egypt Centre at Swansea University. Previous investigations had identified which animals they were, but very little else was known about what lay inside the mummies.
Now, thanks to X-ray micro CT scanning, which generates 3D images with a resolution 100 times greater than a medical CT scan, the animals’ remains can be analysed in extraordinary detail, right down to their smallest bones and teeth.
The team, led by Professor Richard Johnston of Swansea University, included experts from the Egypt Centre and from Cardiff and Leicester universities.
The ancient Egyptians mummified animals as well as humans, including cats, ibis, hawks, snakes, crocodiles and dogs. Sometimes they were buried with their owner or as a food supply for the afterlife.
But the most common animal mummies were votive offerings, bought by visitors to temples to offer to the gods, to act as a means of communication with them. Animals were bred or captured by keepers and then killed and embalmed by temple priests. It is believed that as many as 70 million animal mummies were created in this way.
Although other methods of scanning ancient artefacts without damaging them are available, they have limitations. Standard X-rays only give 2-dimensional images. Medical CT scans give 3D images, but the resolution is low.
Micro CT, in contrast, gives researchers high resolution 3D images. Used extensively within materials science to image internal structures on the micro-scale, the method involves building a 3D volume (or ‘tomogram’) from many individual projections or radiographs. The 3D shape can then be 3D printed or placed into virtual reality, allowing further analysis.
The team, using micro CT equipment at the Advanced Imaging of Materials (AIM) facility, Swansea University College of Engineering, found:
- The cat was a kitten of less than 5 months, according to evidence of unerupted teeth hidden within the jaw bone.
- Separation of vertebrae indicate that it had possibly been strangled
- The bird most closely resembles a Eurasian kestrel; micro CT scanning enables virtual bone measurement, making accurate species identification possible
- The snake was identified as a mummified juvenile Egyptian Cobra (Naja haje).
- Evidence of kidney damage showed it was probably deprived of water during its life, developing a form of gout.
- Analysis of bone fractures shows it was ultimately killed by a whipping action, prior to possibly undergoing an ‘opening of the mouth’ procedure during mummification; if true this demonstrates the first evidence for complex ritualistic behaviour applied to a snake.
Professor Richard Johnston of Swansea University College of Engineering, who led the research, said:
“Using micro CT we can effectively carry out a post-mortem on these animals, more than 2000 years after they died in ancient Egypt.
With a resolution up to 100 times higher than a medical CT scan, we were able to piece together new evidence of how they lived and died, revealing the conditions they were kept in, and possible causes of death.
These are the very latest scientific imaging techniques. Our work shows how the hi-tech tools of today can shed new light on the distant past.”
Dr Carolyn Graves-Brown from the Egypt Centre at Swansea University said:
“This collaboration between engineers, archaeologists, biologists, and Egyptologists shows the value of researchers from different subjects working together.
Our findings have uncovered new insights into animal mummification, religion and human-animal relationships in ancient Egypt.”
Header Image Credit: Swansea University
Kids, like adults, are spending more time online. At some point during the COVID-19 pandemic, many children attended school via Zoom and completed assignments online. The trend toward more screen time — whether playing games or being in touch with friends — is likely to continue even after everyone returns to the classroom.
We already know that prolonged screen time can cause digital eye strain as well as dry eye symptoms, among other problems in children and adults. There is some indication that extended exposure to blue light may impact the development of retinal cells. However, studies on actual subjects still need to be done to establish a clear connection.
Spending a long time in front of screens can impact how quickly our tears evaporate, because we blink around 66% less when using a computer compared to other daily activities. When tears evaporate too quickly and aren’t replenished with blinking our eyes start to feel dry and gritty. So remember to blink every few seconds to prevent your eyes from drying out!
Blue Light Exposure
Screens, such as those that appear on computers, phones and tablets emit blue light. Recent studies have shown that overexposure to blue light can damage the retinal cells at the back of your eyes. This may increase the risk of vision issues such as age-related macular degeneration which eventually leads to permanent loss of vision.
Excess blue light has also been shown to disrupt the circadian rhythms that regulate our sleep patterns, as it tricks your internal clock into thinking that it is the middle of the day. This may lead to difficulty in falling asleep, insomnia, and daytime fatigue.
Digital Eye Strain
Nearly 60% of people who routinely use computers or digital devices experience symptoms of digital eye strain — also called computer vision syndrome. Symptoms of eye strain include eye fatigue and discomfort, dry eye, headaches, blurred vision, neck and shoulder pain, eye twitching, and red eyes.
Taking frequent breaks from your screen can help reduce eye strain and neck, back and shoulder pain during your workday.
It is recommended to take at least one 10-minute break every hour. During these breaks, stand up, move about and stretch your arms, legs, back, neck and shoulders to relieve tension and muscle aches.
Also, look away from your computer at least every 20 minutes and gaze at a distant object at least 20 feet away for at least 20 seconds. This relaxes the focusing lens inside the eye to prevent fatigue.
How to Make Virtual Learning Safer For Your Child
The following tips can lessen the impact of screens on your child’s eyes:
- Reduce overall screen time
- Encourage frequent breaks
- Use accessories that filter blue light (for example, blue light glasses)
- Schedule regular eye exams
Children need comprehensive eye exams to assess the health of their eyes, correct their vision and spot potential problems which can affect learning and behavior.
To schedule a pediatric eye exam near you, call our optometrist in Fayetteville today!
Q&A With Our Eye Doctor in Fayetteville, North Carolina
Blue light glasses, also known as computer glasses, effectively block the transmission of blue light emitted from devices and computer screens. They often include a coating to reduce glare to further reduce eye strain. These glasses can be purchased with or without a prescription.
If you find yourself gazing at screens all day, whether your computer, smartphone, iPad or television, you’re at risk of experiencing eye strain. So make sure you schedule frequent breaks from your screen and follow the 20-20-20 rule; every 20 minutes, look at something 20 feet away for 20 seconds. And while you’re at it, use this time to get up, walk around, and stretch.
With rising awareness of global warming and effects of emissions on the environment, corporates and individuals alike are rising to tackle environmental issues. Carbon footprinting, the first step to reduce carbon emissions, is the total set of greenhouse gas emissions caused directly or indirectly by an individual, organization, event or product. The main reason for calculating a carbon footprint is to inform decisions on how to reduce the climate change impact of a company, service or product. Carbon footprints are measured by undertaking a greenhouse gas assessment. Once the size of a carbon footprint is known, a strategy can be devised to reduce it.
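As a minimal sketch of how such a greenhouse gas assessment is usually structured, emissions can be estimated as activity data multiplied by emission factors and then summed. The factors and activity figures below are rough illustrative numbers, not authoritative values.

```python
# Minimal greenhouse-gas assessment sketch: footprint = sum(activity * emission factor).
# Emission factors are rough illustrative values (kg CO2e per unit), not official figures.

EMISSION_FACTORS = {
    "electricity_kwh": 0.5,   # kg CO2e per kWh (varies widely by grid)
    "natural_gas_m3": 1.9,    # kg CO2e per cubic metre
    "diesel_litre": 2.7,      # kg CO2e per litre
    "flight_km": 0.15,        # kg CO2e per passenger-km
}

def carbon_footprint(activities: dict) -> float:
    """Return total footprint in tonnes of CO2-equivalent for the given activity data."""
    kg = sum(amount * EMISSION_FACTORS[name] for name, amount in activities.items())
    return kg / 1000.0

if __name__ == "__main__":
    yearly = {"electricity_kwh": 12_000, "natural_gas_m3": 1_800,
              "diesel_litre": 2_500, "flight_km": 8_000}
    print(f"Estimated footprint: {carbon_footprint(yearly):.1f} t CO2e per year")
```

Real assessments use published emission factors and, depending on the footprint type described below, different organizational and life-cycle boundaries; only the bookkeeping idea is shown here.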
Why Carbon Footprint?
Growing public awareness about climate change and global warming has resulted in an increasing interest in ‘carbon footprinting’. The global community now recognizes the need to reduce greenhouse gas emissions to mitigate climate change. The most popular methods to reduce carbon footprint include use of alternative energy, reforestation, waste reduction and energy efficiency. Population, economic output, primary energy mix and carbon intensity are the major parameters in determining the carbon footprint of a particular country.
Carbon footprint is the foremost indicator of environmental responsibility and helps to identify climate impacts and lower them cost-effectively by strategic and operative planning, constructing a climate policy, environmental reporting etc. In addition, carbon footprint promotes positive, environmentally conscious company image and can boost the marketing of an organization and its products.
Types of Carbon Footprint
There are different types of carbon footprint, e.g. for organisations, individuals, products, services, and events. Different types of carbon footprint have different methods and boundaries. The various approaches and types of greenhouse gas assessment are discussed below.
- Product Carbon Footprint is suited for organizations which have distinct products and services. It delivers a view of GHG emissions specific to a single product or service. This can then be scaled up to the entire organization. Product Carbon Footprint can be assessed to capture either business-to-business view (cradle-to-gate) or business-to-consumer view (cradle-to-grave).
- Corporate Carbon Footprint is suited for organisations wishing to take an overview of the carbon footprint of the entire organization. The process starts off by identifying the business goals for the GHG inventory, setting up suitable organizational boundaries, selecting an appropriate baseline period; data collection and finally preparing plan for data quality management.
- Value-Chain Carbon Footprint includes activities associated with the product or services of an organization over entire value chain. This accounts for emissions arising from raw material procurement to the end of product life. Value-chain carbon footprint provides an aggregate view of all the products and services of the company.
Carbon Footprint in the Middle East
The world’s dependence on Middle East energy resources has caused the region to have some of the largest carbon footprints per capita worldwide. Oil and gas industry, electricity production, transportation, industrial heating and air-conditioning are responsible for most of the carbon emissions from the region. Qatar, Kuwait, UAE, Bahrain and Saudi Arabia figure among the world’s top-10 per capita carbon emitters. In fact, carbon emissions from Qatar are approximately 50 tons per capita, which is more than double the US per capita footprint of 19 tons per year.
Mathematically, power factor can be expressed as the following:
Power Factor = Real Power (Watts) / Apparent Power (VA)
Looks harmless, doesn’t it! So, why is this an important ENERGY STAR and IEC 61000-3-2 requirement?
Power distribution and transmission corporations are severely penalized for low power factor in their systems, and these expenses are usually passed on to the end user. This makes designing for a high power factor very important in the context of IEC 61000-3-2 and ENERGY STAR. Apart from non-compliance with regulatory standards and increasing utility bills, low power factors can result in wear and tear of motors, transformers and other costly equipment due to overheating.
In AC power conversion systems, loads aren’t always our forgiving resistive kinds. A reactive load (think coils, capacitors, transformers, motors) alternately stores and returns energy rather than consuming it all, so part of the current does no useful work, resulting in a less efficient power system. Ideally, the power factor of a power system would be 1 (or 100%), and some switch mode power supplies (SMPS) have succeeded at meeting or exceeding the 0.9 (or 90%+) mark.
Inductive loads, for example, cause the load current to lag the load voltage by a certain phase angle. If the voltage and current waveforms are sinusoidal, the cosine of that phase angle is another representation of the power factor. Since switch mode power supplies typically produce waveforms that are not sinusoids (see Fig. 1), an accurate measurement of power factor would require a measurement system to compute real and apparent power.
Fig.1 – Load voltage and current waveforms of a switch mode power supply measured by the MSO5104B oscilloscope
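To illustrate the "real power divided by apparent power" computation described above, here is a small numeric sketch. It simulates a sinusoidal load with the current lagging the voltage by 30 degrees; it is an illustration only and is not tied to the DPOPWR software or any particular instrument.

```python
import numpy as np

# Compute true power, apparent power and power factor from sampled waveforms.
# Illustrative only: the "measurement" here is a simulated sinusoidal load with
# the current lagging the voltage by 30 degrees.

fs = 100_000                        # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)       # 100 ms of data = 5 full cycles at 50 Hz
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)                  # volts
i = 5 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t - np.radians(30))   # amps, lagging

true_power = np.mean(v * i)                                        # average of v*i (W)
apparent_power = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))   # Vrms * Irms (VA)
power_factor = true_power / apparent_power

print(f"P = {true_power:.1f} W, S = {apparent_power:.1f} VA, PF = {power_factor:.3f}")
# For a pure 30-degree phase shift, PF should come out near cos(30 deg) ~ 0.866.
```

For the non-sinusoidal waveforms a real SMPS produces, the same two lines (the mean of v·i, and Vrms times Irms) still give true and apparent power; only the sampled waveforms would change.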
This process of compensation is known as power factor correction: the low power factor caused by an inductive load is offset by adding a capacitive load, and vice versa. A typical switch mode power supply, however, requires a more complex power factor correction circuit.
Power factor also manifests itself in current harmonics measurements: the waveform distortion that accompanies a low power factor shows up as poor current harmonics. Some oscilloscopes can provide insight into whether the harmonics meet standards. Shown below in Fig. 2 is the power analysis software (DPOPWR) on a Tektronix MSO5104B oscilloscope indicating that the harmonics “pass” IEC 61000-3-2 requirements. The gray bars display the standard limits, and the green bars indicate the actual measurements.
Fig. 2 – Current harmonics measurements using DPOPWR on an MSO5104B oscilloscope
In fact, this scope can also measure the power factor of an SMPS. Fig. 3 below indicates a power factor of 0.7715 or 77.15%. You can verify that the scope measures the true and apparent powers before making this calculation.
Fig.3 – Power quality analysis using DPOPWR on an MSO5104B oscilloscope
Also notice that Crest Factor, True Power and Apparent Power and several other measurement options are readily available on DPOPWR. Feel the need to brush up on some of these power supply related measurements? Refer to our Fundamentals of AC Power Measurements Primer or watch our webinar on Testing Power Supplies for Energy Efficiency. As always, be sure to post up any questions or thoughts in the comments section below.
When NASA launched the WISE satellite in 2009, astronomers hoped it would be able to spot loads of cool, dim objects known as brown dwarfs. Bigger than a planet, a brown dwarf is not quite a star, either—it is too small to sustain the nuclear fusion reactions that turn hydrogen to helium. But it may burn to some degree, using a heavy isotope of hydrogen called deuterium as fusion fuel.
Because brown dwarfs are so dim, it is entirely possible that some of them lie very close to the sun—as close as any known star—and have yet to be discovered. But more than three years after WISE (short for the Wide-Field Infrared Survey Explorer) launched, the map of the sun’s immediate vicinity has remained largely unchanged. Until now.
In a study to appear in the Astrophysical Journal Letters (pdf), Kevin Luhman, an associate professor of astronomy and astrophysics at Pennsylvania State University, announced that he has located a previously unknown denizen of the sun’s neighborhood. Using data from WISE, Luhman has identified a pair of brown dwarfs, bound into a binary system, just 6.5 light-years away. That is nearer to the sun than all but two known star systems, both of which were located more than 95 years ago: the Alpha Centauri triple star system (about 4.3 light-years away) and Barnard’s Star (six light-years).
“I think it is a spectacular find,” WISE principal investigator Edward Wright of the University of California, Los Angeles, wrote in an email, adding that the distance measurement appears robust. “So while this is the third-nearest star (nearly tied for second with Barnard's star), these are definitely the two nearest known brown dwarfs.”
By extrapolating the orbit back in time for the binary brown dwarf system, known as WISE 1049-5319, Luhman was able to find archival images from other telescopes that registered the object as a moving speck of light as far back as 1978. And he gathered some new imagery of his own—at the Gemini South Telescope in Chile, Luhman caught a glimpse of the object that revealed the speck to be not one but two brown dwarfs locked in a tight orbital dance. Separated by about three times the distance between Earth and the sun, Luhman estimates that the two brown dwarfs circle each other every 25 years or so.
WISE 1049-5319 would make an excellent target for exoplanet hunters, Luhman notes. At such close proximity, any planets that might orbit the brown dwarfs would offer astronomers the rare opportunity to photograph exoplanets and study their properties directly, rather than simply inferring the presence of planets through their influence on the stars that host them. A word about the possibility of extraterrestrial life in such a planetary system: although it is theoretically possible for life to exist on a planet orbiting a brown dwarf, such a world would “suffer a number of critical habitability issues,” according to a 2012 study. Such issues include strong tidal effects from a brown dwarf on the planets near enough to feel its feeble heat and the gradual cooling of the brown dwarf as it ages. |
By Rachelle Kean, a teacher and writer
Subjects: Science, Math, Health, Fitness
Duration: Three class periods
Grade level: 10–12 (adaptable for a younger audience)
- understand the ways in which nutritional food labels are read and used on common foods
- determine the number of calories in a peanut (or the amount of fat in potato chips) so that comparisons to other foods can be made
- understand the detrimental effects of fats on the body and their relationship to heart disease, diabetes, and obesity
- increase awareness of healthy food choices for themselves and their classmates by designing and conducting a scientific experiment using observational skills and data analysis
Few will disagree that fast foods are a staple in the diets of many Americans. Even our nation’s schools feature vending machines full of foods that are high in calories, short on nutrition, and all too easy to buy. With busy lifestyles and complicated schedules, what are the long term effects of a diet high in saturated fats? What about all the “good carbs” and “bad carbs” we have been hearing so much about?
In this three part lesson, students will examine nutrition labels for caloric intake using various snack foods. Then, they will determine the number of calories in a food item. Finally, they will conduct a research project in which they examine the food choices of their classmates.
Part I: Pre-lesson Activity
- Separate students into two groups. Give one group the story “Fast Food Nation” and the second group “Fighting Fat.” Students should record the main points of the article in a notebook or journal and prepare a brief synopsis for the class. A brief debate is a great way to get them interested. Some possible debate topics include –
- Should “characters” be used for advertising when the target audience is often very young?
- Should schools limit or eliminate access to vending machines and soda machines in schools? Why or why not?
- Is it a school’s responsibility to notify parents and/or students when a student is seriously overweight considering all of the ill health effects?
- What are some ways that American life leads to obesity and what can be done about it?
- Should gym class be made harder? Why or why not? Do you think students would approve?
- Should grocery stores and convenience stores make junk food less visible? And should the prices be higher for high fat/low nutrition foods?
- Should nutrition content of school lunches be made easily available to students who want to make better choices?
- What impact do you think food labels have on the choices Americans make with their foods? Do you think they should have warning labels similar to the Surgeon General’s warning on cigarettes?
- Using the stories and the Web, students should define the following:
- Trans fats
- %DV (percent daily value)
Part II: Nutrition Labels
- Many people do not know how to read a nutrition label properly. Although nutrition labels are in place to help the consumer know exactly what is in their foods, many simply don’t understand the caloric requirements of the human body as it relates to nutrition labels. Go to the U.S. Food and Drug Administration Web site at http://www.cfsan.fda.gov/~dms/foodlab.html to find more information and sample nutrition labels.
- Divide the class into groups of 2-4 students. Distribute several boxes or small bags of snack foods to the students. Ask students to get out 1 serving of the food. They should not look at the nutrition label, but are to take a good guess at what they would consider one serving. Then, have each student measure the actual amount of the snack that they have withdrawn and record this in a journal.
- Investigate the parts of the food label with the students. In particular, demonstrate the number of calories in the food items, serving size, percent daily value, and the 2,000–2,500 calorie diet on which the label is based. Then have students calculate the actual amount of fat and calories in their sample size (a short example of this calculation follows this list). Discuss their reactions. They should clearly see that the serving sizes are often very low. In addition, discuss the caloric intake needs of a teenager or have students use an online calorie calculator.
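The calculation students perform in this step is simple proportional scaling of the label values to the portion they actually poured out. The label numbers in the sketch below are made up purely for illustration.

```python
# Scale nutrition-label values to the portion a student actually poured out.
# The label values below are made-up examples, not from a real product.

label = {"serving_size_g": 28, "calories": 150, "fat_g": 10}   # hypothetical chips label

def scale_to_portion(label: dict, portion_g: float) -> dict:
    """Return calories and fat for the portion actually eaten."""
    factor = portion_g / label["serving_size_g"]
    return {"calories": label["calories"] * factor, "fat_g": label["fat_g"] * factor}

portion = scale_to_portion(label, portion_g=85)   # the "one serving" a student guessed
print(f"85 g of chips: ~{portion['calories']:.0f} kcal and ~{portion['fat_g']:.0f} g of fat")
```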
Part III: Calories and Fat
- This section has two alternative labs based on material availability. One is from the Wyoming Energy Curriculum and needs simpler materials. It is aimed at younger students. The second is a higher level chemistry laboratory aimed at older students in a true chemistry lab.
- Print the student and teacher materials for the lab of your choice. Complete the labs and discuss the amount of fat and calories in average foods. Concentrate on student reactions to the amount of fat and calories in those foods. (A sketch of the calorimetry arithmetic behind the peanut lab follows this list.)
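The two linked labs are not reproduced here, but a typical "burn a peanut under a container of water" calorimetry lab rests on the relation q = m·c·ΔT. The sketch below shows that arithmetic with hypothetical measurements; real setups lose much of the heat, so treat the numbers as illustrative, not as the exact procedure of either lab.

```python
# Simple calorimetry arithmetic for a "burn a peanut under water" style lab.
# Assumes all heat from the burning peanut goes into the water (a big simplification).

WATER_SPECIFIC_HEAT = 4.184  # joules per gram per degree Celsius

def food_calories(water_mass_g: float, temp_rise_c: float) -> float:
    """Energy absorbed by the water, converted to food Calories (kcal)."""
    joules = water_mass_g * WATER_SPECIFIC_HEAT * temp_rise_c
    return joules / 4184.0   # 1 food Calorie (kcal) = 4184 J

if __name__ == "__main__":
    # Hypothetical measurement: 100 g of water warms from 22 C to 41 C.
    print(f"~{food_calories(100, 41 - 22):.1f} Calories transferred to the water")
```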
Part IV: School Surveys
- Students should now have a great idea of how fat and calories play a role in their lives. But what about their classmates? Students will put together a “secret survey” in which they will watch their classmates at lunch and determine a nutrition index of the foods that are chosen.
- Divide the class into groups of 2-3 students. Obtain a lunch menu for the week and distribute copies to each student. Using colored pencils or markers, students will determine a scale rating of 1-5 for the food choices. 1 will be a very high fat, high calorie, low nutrition food item, while a 5 will be a healthy whole food like apples, real chicken breasts, and so on. They should make the key and list example foods at each level. For instance, broccoli, although a good food source, once loaded with cheese or butter can go lower on the scale. Other foods such as snack cakes would be a 1.
- Students must then make a data table in which to record their experimental results or use the worksheet provided. They will spend one day in the cafeteria watching the foods that others eat. They will record this in their data table. Together, they will determine the ratings of the foods that are chosen.
- For an even greater challenge, have students design their own observational experiments. For instance, how much money is spent on average on the snack machines each day? How many students choose apples over apple pies? And so on. It is a great way to get students to design their own experiments and record their results. A complete lab write-up would be appropriate for older students to turn in.
- Watch the movie “Super Size Me” and have students comment on the facts presented in the film.
- Research and compare the marketing budgets of several large fast food or soft drink companies and ask students to debate the topic.
- Have students create a comparative timeline of the advent of fast food popularity and the growing trends in obesity. |
Every day we can observe how rapidly the modern world is changing. New technologies are created all the time, and this process keeps getting faster. One such innovation is Artificial Intelligence, one of the most important branches of computer science, which focuses on the development of machines that think like humans.
“Artificial Intelligence sounds a little bit scary, but the truth is that this technology is truly amazing. It is crucial to many spheres, including Medicine, Finance, Economics, Game development, as well as Education.”
Artificial Intelligence is changing education at lightning speed, and in this article you can find out more about how that is happening.
How Does AI Affect Education?
Artificial Intelligence is one of the most discussed topics these days and a very important technology. It can be widely used in many areas, and it is especially useful in education. How does it help to make the educational system better? Here are just a few outstanding ways in which AI has improved education:
- AI gives students access to learning resources no matter where the students are. This is especially helpful for those who can’t attend a school or just prefer education at home. Now, you don’t have to go to an actual school if you don’t like it: you can just study with the help of the internet.
- The grading system will be fairer. There are many situations in which teachers often grade students based on the teacher’s opinions. AI will automate this process and it will be impossible for a teacher to evaluate students’ works unfairly. Students will be treated equally in a neutral environment.
- Students will be able to use online homework help. If you feel like you have a lot of tasks to do and don’t know how to manage all of them, you can always use the assistance of professional custom thesis writing services like customwritings or hire an online tutor.
- AI will be able to adjust to the needs and progress of all students. It can be hard for a teacher to watch all the students and prepare a special program for each of them. AI will be able to build a program for each student, see their improvements, and adapt the curriculum as needed.
- Many tasks will be automatic. Teachers often have to take care of a lot of paperwork, and because of that, they can’t put all of their efforts into helping students. AI will be able to do that kind of work instead of teachers, and the educators will have a chance to spend more time assisting students.
- Voice assistants will become extremely useful in the classroom. Such assistants will not be able to substitute a teacher, but they will definitely be able to help students learn new materials faster. Instead of handbooks and textbooks, students could use voice assistants, which will be much cheaper and will save more trees that could have been cut for the production of a textbook.
Tips on How to Study Online
“With the help of Artificial Intelligence and the Internet, studying is now so much easier than we could have ever imagined”
Still, there are a few important rules and things you should keep in mind. To study more effectively and successfully, use these tips that will help become a productive online learner:
Make sure you have a good Internet connection. It sounds obvious, but it is still very important. Otherwise, the studying process will take a lot of time because of constant lags.
Set goals. No matter what kind of learning path you choose, it is important to determine what your goals are and what you would like to eventually achieve. This is how you will know what works for you and what does not.
Make a plan. It is critical to build a plan since it will help you study more effectively. One of the best ways to create a good plan is to use to-do lists, calendars and checklists, schedule your time and lessons, and set time limits. With the help of these techniques, you will see how much better the quality of the studying process becomes.
Find a nice study place. It will be hard for you to focus on something if there is a lot of noise around you. Pick a quiet workplace where no one will be able to distract you and study there.
Don’t Be Afraid of New Technologies
The world is changing every day, and that is not something that can be stopped. The only thing we can do is embrace the changes and let them improve our lives. Artificial Intelligence is still in the early stages of development, but it has already influenced education a great deal. Make sure you get the most out of the new technologies, and you will see how much better the learning process will become.
PPT – Roman Mosaic Powerpoint Presentation Year 3 geography. A fantastic Year 3 powerpoint presentation to give information on what Roman Mosaic is in a child-friendly way. Nine fantastic slides support learning by providing a true visual understanding of the craft of Roman Mosaic tiles and designs. Key information is written to engage and increase levels of understanding. This teaching resource is complemented with Roman Mosaic related lesson PDF files, which are available to download separately in the Year 3 Humanities Geography category or related items below.
An abstract from this resource: “The craftsmen would plan a design first, then work on a small area at a time. The tiny pieces were stuck to the floor with a type of cement called ‘mortar’.”
Apple for the teacher resources have been designed by experienced teachers to assist Home Educators, Parents, SEN and Primary Teachers including NQT’s, with assessment, lesson ideas and educational teaching resources.
Please join our growing communities of Apple for the Teacher members at one of our social media groups to link and share ideas with other educational professionals and parents.
Apple for the teacher Ltd Educational resources – putting children at the core of their learning |
It may not tell you whether Francis Bacon really authored Shakespeare's plays, but a common computer program designed to compress large files can sort out who wrote what with greater than 90% accuracy.
To a computer, Hamlet's first soliloquy is just a string of characters--but that string still contains information. Just how much information is what determines the string's "entropy," essentially the minimum number of bits needed to encode the string. Unless a string is infinitely long, it's impossible to calculate its exact entropy. But a program that compresses files provides a convenient estimate: the length of the compressed file containing the string. By estimating entropy, sophisticated compression programs can identify the language and even the author of unfamiliar prose. Now, mathematicians Dario Benedetto and Emanuele Caglioti and physicist Vittorio Loreto of the University of Rome have shown that freely available, off-the-shelf software can do the trick too.
The researchers employed a common program called gzip. Gzip replaces the original file with a catalog of building blocks a few characters long, and instructions for putting the blocks back together. The trick to sleuthing texts is to compress a file containing a longer known text followed by a shorter unidentified text. If the known and unidentified texts are similar, such as a Shakespeare play and a sonnet, gzip will do a slightly better job of compressing the composite file because both require roughly similar building blocks.
To test the program, the researchers collected 90 texts by 11 Italian authors and measured their length when compressed. They used a short piece of one text as the "unidentified" sample. Then they appended this sample to each of the other 89 files and measured the compressed length of each. When the length of a composite file changed little from that of its original compression, the file was "recognizing" the unidentified text, as it needed relatively few additional building blocks. The researchers repeated this process 90 times, taking the "unidentified" sample from a different text each time, and in 93% of the cases, the method correctly revealed whether the same author had written both the known and unidentified texts, the researchers report in the 28 January issue of Physical Review Letters.
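A minimal sketch of the same idea is shown below using Python's built-in zlib module, which implements the same DEFLATE compression as gzip. The file names and the exact scoring are placeholders for illustration, not the researchers' actual procedure.

```python
import zlib

# Attribute an unknown text by seeing which known author's text "helps" compress it most.
# Same idea as the gzip experiment described above; zlib uses the same DEFLATE algorithm.

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def extra_bytes_needed(known: bytes, unknown: bytes) -> int:
    """How much the compressed size grows when the unknown text is appended."""
    return compressed_len(known + unknown) - compressed_len(known)

def best_match(corpora: dict, unknown: str) -> str:
    """Return the candidate author whose corpus leaves the least 'extra' to encode."""
    u = unknown.encode("utf-8")
    return min(corpora, key=lambda name: extra_bytes_needed(corpora[name].encode("utf-8"), u))

if __name__ == "__main__":
    # Placeholder file names; in practice these would hold long texts by each author.
    corpora = {"Author A": open("author_a.txt").read(),
               "Author B": open("author_b.txt").read()}
    print("Closest match:", best_match(corpora, open("unknown.txt").read()))
```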
The results demonstrate the power of compression programs to classify language, says William Teahan, a computer scientist at the University of Wales in Bangor, U.K. Compression programs might someday serve as the basis for software that automatically categorize huge numbers of documents or accurately mine enormous troves of data--such as the World Wide Web--for documents discussing a particular topic, Teahan says.
Rocket artillery is a type of artillery which utilizes rockets as a projectile. The use of rocket artillery dates back to medieval China where devices such as fire arrows were used (albeit mostly as a psychological weapon). Fire arrows were also used in multiple launch systems and transported via carts. By the late nineteenth century, due to improvements in the power and range of conventional artillery, the use of early military rockets declined; they were finally used on a small scale by both sides during the American Civil War. Modern rocket artillery was first employed during World War II, in the form of the German Nebelwerfer family of rocket ordnance designs, the Soviet Katyusha series and numerous other systems employed on a smaller scale by the Western allies and Japan. In modern use, the rockets are often guided by an internal guiding system or GPS in order to maintain accuracy.
The use of rockets as some form of artillery dates back to medieval China where devices such as fire arrows were used (albeit mostly as a psychological weapon). Fire arrows were also used in multiple launch systems and transported via carts. Devices such as the Korean hwacha were able to fire hundreds of fire arrows simultaneously. The use of medieval rocket artillery was picked up by the invading Mongols and spread to the Ottoman Turks who in turn used them on the European battlefield.
The use of war-rockets is well documented in Medieval Europe. In 1408 Duke John the Fearless of Burgundy used 300 incendiary rockets in the Battle of Othée. The city dwellers coped with this tactic by covering their roofs with dirt.
Metal-cylinder rocket artillery
The earliest successful utilization of metal-cylinder rocket artillery is associated with the Kingdom of Mysore, South India. Tipu Sultan's father Hyder Ali successfully established the powerful Sultanate of Mysore and introduced the first iron-cased metal-cylinder rocket. The Mysorean rockets of this period were innovative, chiefly because of the use of iron tubes that tightly packed the gunpowder propellant; this enabled higher thrust and longer range for the missile (up to 2 km range). Tipu Sultan used them against the larger forces of the British East India Company during the Anglo-Mysore Wars, especially during the Battle of Pollilur (1780). Although the rockets were quite primitive, they had a demoralizing effect on the enemy due to the noise and bursting light.
According to Stephen Oliver Fought and John F. Guilmartin, Jr. in Encyclopædia Britannica (2008):
Hyder Ali, prince of Mysore, developed war rockets with an important change: the use of metal cylinders to contain the combustion powder. Although the hammered soft iron he used was crude, the bursting strength of the container of black powder was much higher than the earlier paper construction. Thus a greater internal pressure was possible, with a resultant greater thrust of the propulsive jet. The rocket body was lashed with leather thongs to a long bamboo stick. Range was perhaps up to three-quarters of a mile (more than a kilometre). Although individually these rockets were not accurate, dispersion error became less important when large numbers were fired rapidly in mass attacks. They were particularly effective against cavalry and were hurled into the air, after lighting, or skimmed along the hard dry ground. Hyder Ali's son, Tipu Sultan, continued to develop and expand the use of rocket weapons, reportedly increasing the number of rocket troops from 1,200 to a corps of 5,000. In battles at Seringapatam in 1792 and 1799 these rockets were used with considerable effect against the British.
British experience of Tipu Sultan's rockets, including Munro's book of 1789, eventually led the Royal Arsenal to begin a military rocket R&D program in 1801. Several rocket cases were collected from Mysore and sent to Britain for analysis. The development was chiefly the work of Col. (later Sir) William Congreve, son of the Comptroller of the Royal Arsenal, Woolwich, London, who set up a vigorous research and development programme at the Arsenal's laboratory; after development work was complete, the rockets were manufactured in quantity further north, near Waltham Abbey, Essex. He was told that "the British at Seringapatam had suffered more from the rockets than from the shells or any other weapon used by the enemy". "In at least one instance", an eyewitness told Congreve, "a single rocket had killed three men and badly wounded others".
It has been suggested that Congreve may have adapted iron-cased gunpowder rockets for use by the British military from prototypes created by the Irish nationalist Robert Emmet during Emmet's Rebellion in 1803. But this seems far less likely given the fact that the British had been exposed to Indian rockets since 1780 at the latest, and that a vast quantity of unused rockets and their construction equipment fell into British hands at the end of the Anglo-Mysore Wars in 1799, at least 4 years before Emmet's rockets.
Congreve introduced a standardised formula for the making of gunpowder at Woolwich and introduced mechanical grinding mills to produce powder of uniform size and consistency. Machines were also employed to ensure the packing of the powder was perfectly uniform. His rockets were more elongated, had a much larger payload and were mounted on sticks; this allowed them to be launched from the sea at a greater range. He also introduced shot into the payload, which added shrapnel damage to the incendiary capability of the rocket. By 1805 he was able to introduce a comprehensive weapons system to the British Army.
The rockets had a "cylindro-conoidal" warhead and were launched in pairs from half troughs on simple metal A-frames. The original rocket design had the guide pole side-mounted on the warhead; this was improved in 1815 with a base plate featuring a threaded hole. They could be fired up to two miles, the range being set by the degree of elevation of the launching frame, although at any range they were fairly inaccurate and had a tendency for premature explosion. They were as much a psychological weapon as a physical one, and they were rarely or never used except alongside other types of artillery. Congreve designed several different warhead sizes from 3 to 24 pounds (1.4 to 10.9 kg). The 24 pound (11 kg) type with a 15 foot (4.6 m) guide pole was the most widely used variant. Different warheads were used, including explosive, shrapnel and incendiary. They were manufactured at a special facility near the Waltham Abbey Royal Gunpowder Mills beside the River Lea in Essex.
These rockets were used during the Napoleonic Wars against the city of Boulogne, and during the naval bombardment of Copenhagen, where over 25,000 rockets were launched causing severe incendiary damage to the city. The rockets were also adapted for the purpose of flares for signalling and battlefield illumination. Henry Trengrouse utilized the rocket in his life-saving apparatus, in which the rocket was launched at a shipwreck with an attached line to help rescue the victims.
After the rockets were successfully used during Napoleon's defeat at the Battle of Waterloo, various countries were quick to adopt the weapon and establish special rocket brigades. The British created the British Army Rocket Brigade in 1818, followed by the Austrian Army and the Russian Army.
One persistent problem with the rockets was their lack of aerodynamic stability. The British engineer William Hale designed a rocket with a combination of tail fins and directed nozzles for the exhaust. This imparted a spin to the rocket during flight, which stabilized its trajectory and greatly improved its accuracy, although it sacrificed some of the maximum range. Hale rockets were enthusiastically adopted by the United States, and during the Mexican–American War in 1846 a volunteer brigade of rocketeers was pivotal in the surrender of Mexican forces at the Siege of Veracruz.
By the late nineteenth century, due to improvements in the power and range of conventional artillery, the use of military rockets declined; they were finally used on a small scale by both sides during the American Civil War.
World War II
Modern rocket artillery was first employed during World War II, in the form of the German Nebelwerfer family of rocket ordnance designs and the Soviet Katyusha series. The Soviet Katyushas, nicknamed "Stalin's Organ" by German troops because of their visual resemblance to a church organ and the sound of the weapon's rockets, were mounted on trucks or light tanks, while the early German Nebelwerfer ordnance pieces were mounted on a small wheeled carriage which was light enough to be moved by several men and could easily be deployed nearly anywhere, while also being towed by most vehicles. The Germans also had self-propelled rocket artillery in the form of the Panzerwerfer and Wurfrahmen 40, which equipped half-track armoured fighting vehicles. An oddity in the subject of rocket artillery during this time was the German "Sturmtiger", a vehicle based on the Tiger I heavy tank chassis that was armed with a 380 mm rocket mortar.
The Western Allies of World War II employed little rocket artillery. During later periods of the war, British and Canadian troops used the Land Mattress, a towed rocket launcher. The United States Army built and deployed a small number of turret-mounted T34 Calliope and T40 Whizbang rocket artillery tanks (converted from M4 Sherman medium tanks) in France and Italy. In 1945, the British Army also fitted some M4 Shermans with two 60 lb RP3 rockets, the same as used on ground attack aircraft and known as "Tulip".
In the Pacific, however, the US Navy made heavy use of rocket artillery on their LSM(R) transports, adding to the already intense bombardment by the guns of heavy warships to soften up Japanese-held islands before the US Marines would land. On Iwo Jima, the Marines made use of rocket artillery trucks in a similar fashion as the Soviet Katyusha, but on a smaller scale.
The Japanese Imperial Army deployed the naval Type 4 20 cm (8 in) Rocket Launcher and army Type 4 40 cm (16 in) Rocket Launcher against the United States Marines and Army troops at Iwo Jima and Okinawa, and United States Army troops during the Battle of Luzon. Their deployment was limited relative to other mortar types and the projectiles on the 40 cm launcher were so large and heavy that they had to be loaded using small hand-operated cranes, but they were extremely accurate and had a pronounced psychological effect on opposing troops, who called them "Screaming Mimis", a nickname originally applied to the German Nebelwerfer tube-launched rocket mortar series in the European Theater of Operations. They were often used at night to conceal their launching sites and increase their disruptiveness and psychological effectiveness. The Japanese 20 cm rockets were launched from tubes or launching troughs, while the larger rockets were launched from steel ramps reinforced with wooden monopods.
The Japanese also deployed a limited number of 447mm rocket launchers, termed 45 cm Rocket Mortars by United States personnel who test-fired them at the close of the war. Their projectiles consisted of a 1,500 lb cylinder filled with propellant and ballistite sticks detonated by black powder, which produced a blast crater approximately the size of an American 1,000 lb bomb. In effect, this made the 447mm projectile a type of surface-to-surface barrel bomb. While these latter weapons were captured at Luzon and proved effective in subsequent testing, it is not clear that they were ever used against American troops, in contrast to the more common 20 and 40 cm types, which clearly contributed to the 37,870 American casualties sustained at Luzon.
Post-World War II
Israel fitted some of their Sherman tanks with different rocket artillery. An unconventional Sherman conversion was the turretless Kilshon ("Trident") that launched an AGM-45 Shrike anti-radiation missile.
The Soviet Union continued its development of the Katyusha during the Cold War, and also exported them widely.
Modern rocket artillery such as the US M270 Multiple Launch Rocket System is highly mobile and is used in a similar fashion to other self-propelled artillery. GPS and inertial navigation terminal guidance systems have been introduced.
During the Kargil War of 1999, the Indian Army pressed the Pinaka MBRL into service against Pakistani forces. Although the system was still under development, it performed successfully, after which the Indian Army showed interest in inducting it into service.
Rocket artillery vs gun artillery
- Rockets produce little or no recoil, while conventional gun artillery systems produce significant recoil. Unless firing within a very small arc, at the risk of wrecking a self-propelled artillery system's vehicle suspension, gun artillery must usually be braced against recoil. In this state they are immobile and cannot change position easily. Rocket artillery is much more mobile and can change position easily. This "shoot-and-scoot" ability makes the platform difficult to target. A rocket artillery piece could, conceivably, fire on the move. Rocket systems produce a significant amount of backblast, however, which imposes its own restrictions: launchers may be given away by the firing arcs of their rockets, and the backblast can damage the launchers themselves or neighbouring vehicles.
- Rocket artillery cannot usually match the accuracy and sustained rate of fire of conventional gun artillery. It may, however, be capable of very destructive strikes by delivering a large mass of explosives simultaneously, thus increasing the shock effect and giving the target less time to take cover. Modern computer-controlled conventional artillery has recently begun to gain the ability to do something similar through MRSI (multiple rounds simultaneous impact), but it is an open question whether MRSI is really practical in a combat situation. On the other hand, precision-guided rocket artillery demonstrates extreme accuracy, comparable with the best guided gun artillery systems.
- Rocket artillery typically has a very large fire signature, leaving a clear smoke trail showing exactly where the barrage came from. Since the barrage does not take much time to execute, however, the rocket artillery can move away quickly.
- Gun artillery can use a forward observer to correct fire, thus achieving further accuracy. This is usually not practical with rocket artillery.
- Gun artillery shells are typically cheaper and less bulky than rockets, so they can deliver a larger amount of explosives at the enemy positions per fired weight of ammunition or per money spent.
- While gun artillery shells are smaller than rockets, the gun itself must be very large to match the range of rockets. Therefore, rockets typically have longer range while the rocket launchers remain small enough to mount on mobile vehicles. Extremely large guns like the Paris Gun and the Schwerer Gustav have been rendered obsolete by long range missiles.
- Rate of fire: if the artillery barrage is intended as preparation for an attack (as it usually is), a short but intense barrage gives the enemy less time to prepare by, for instance, dispersing or entering prepared fortifications such as trenches and bunkers.
- The higher accuracy of gun artillery means that it can be used to attack an enemy close to a friendly force. This, combined with the higher capacity for sustained fire makes gun artillery more suitable than rocket artillery for defensive fire.
- The accuracy of gun artillery and its ability to be rapidly laid to engage targets makes it the system of choice for the engagement of moving targets and to deliver counter-battery fire.
- Many multiple rocket launcher vehicles now have the capability to fire guided rockets, eliminating the accuracy disadvantage.
- Nicholas Michael and G A Embleton (1983). Armies of Medieval Burgundy 1364–1477. Osprey. p. 21.
- Frederick C. Durant III; Stephen Oliver Fought; John F. Guilmartin, Jr. "Rocket and missile system". Encyclopædia Britannica. Archived from the original on 18 November 2011. Retrieved 19 December 2011.
- "YouTube". www.youtube.com. Archived from the original on 2015-12-09.
- Baker D (1978) The Rocket, New Cavendish Books, London
- Von Braun W, Ordway III F. I. History of rocketry and space travel, Nelson
- Ley E (1958) Rockets, missiles, and space travel, Chapman & Hall, London
- Patrick M. Geoghegan (2003). Robert Emmet: A Life, p. 107, McGill-Queen's University Press, Canada
- Roddam Narasimha (2 April 1985). "Rockets in Mysore and Britain, 1750-1850 A.D." (PDF). National Aerospace Laboratories, India. Archived (PDF) from the original on 3 March 2012. Retrieved 19 December 2011.
- Van Riper, A. Bowdoin (2007). Rockets and Missiles: The Life Story of a Technology. JHU Press. Archived from the original on 2013-12-02. Retrieved 2013-02-07.
- "British Rockets". nps.gov. Archived from the original on 2007-08-14.
- Lewis, Jim (2009) From Gunpowder to Guns: the story of the two Lea Valley armouries, Hendon: Middlesex University Press, ISBN 978-1-904750-85-7, page 68
- "Japanese Defense 04". battleofmanila.org. Archived from the original on 2015-03-31.
- "ARDE's Pinaka rocket system generates Rs 2,500 crore business". sakaaltimes.com. Archived from the original on 2015-04-03. |
In this lesson, students learn how to create their own youth consumer magazine or Internet site.
In this lesson, students will produce a 20 minute news broadcast.
In this lesson, students explore how magazines are developed to reach specific target markets.
In this lesson, students examine the visual codes used on television and in movies through an exploration of various camera techniques. Students begin with a discussion about camera-subject distance, and review various film techniques that are used to create visual meaning.
This lesson encourages children to explore the differences between their real families and TV families by imagining how their own families might be portrayed on a television show.
This lesson develops a beginning awareness by students of how they feel towards, and respond to, different sports, and how the media represents athletics.
This is the second of five lessons designed to teach students to think critically about the way aboriginal peoples and visible minorities are portrayed in the press.
In this lesson, students will write a news article for the school newspaper.
In this lesson students develop an awareness of the ways in which public perceptions regarding young people have been affected by media portrayals of youth violence and youth crime.
This lesson is based on an article, which ran in the January 21, 1995 issue of the London Free Press. |
A new review highlights that tiny nanoparticles provide major potential in detecting and treating disease.
Exosomes can be produced by cells (left), altered before production or after purification (middle), and made in the laboratory (right) depending on their final use. CREDIT: Dr Marta I. Oliveira, INL, Portugal
Exosomes are tiny biological nanoparticles capable of transferring information between cells. These nanoparticles provide noteworthy potential in detecting and treating disease. The new review is considered to be the most comprehensive overview of this field of research conducted to date.
According to Dr Steven Conlan from Swansea University, Dr Mauro Ferrari of Houston Methodist Research Institute in Texas, and Dr Inês Mendes Pinto from the International Iberian Nanotechnology Laboratory in Portugal, regenerative medicine and cancer treatment are the areas that could benefit. Their commissioned paper, titled Exosomes as Reconfigurable Therapeutic Systems, was published on June 22nd, 2017, by Cell Press in Trends in Molecular Medicine.
Exosomes are particles developed by all cells present in the body and range from 30-130 nm in size - a nanometer refers to one-billionth of a meter. Exosomes behave as biological signaling systems, carrying proteins, lipids, RNA and DNA and communicating between cells. They are capable of driving biological processes, from transmitting information through breast milk to modulating gene expression.
Although exosomes were discovered in 1983, their full potential is only slowly being revealed. The researchers show that the possible medical benefits of the nanoparticles fall into three broad categories:
- Activating immune responses to enhance immunity
- Detecting disease, by acting as disease-specific biomarkers
- Treating diseases, by acting as the vehicle for drugs, for instance bearing cancer therapies as their payload to target tumors
Exosomes have a number of useful properties, one of which is their ability to cross barriers such as the plasma membrane of cells or the blood/brain barrier. This makes them ideal for delivering therapeutic molecules in an extremely targeted manner.
The promising benefits of exosomes are discussed in a number of research projects - cited in the paper - under way or already completed, in the following areas:
- A small-cell lung cancer trial
- Regeneration of tissue and muscle
- Enhanced testing for prostate cancer
- Stem cell-derived exosomes strengthening heart muscles
The team points out that there is more to be done before research into exosomes translates into new treatments and techniques. It is essential to consider the side effects, and a standardized approach for isolating, characterizing and then storing exosomes will have to be developed.
Researchers will also have to make sure that the properties of exosomes do not cause any harm; for example, they can transfer drug resistance and also suppress the immune system.
However, the potential indeed is extremely clear, with the Researchers describing exosomes as "increasingly promising".
Professor Steve Conlan of Swansea University Medical School, one of the authors of the paper, said,
"Our survey of research into exosomes shows clearly that they offer enormous potential as a basis for detecting and treating disease.
Further studies are necessary to turn this research into clinical outcomes, but researchers and funders should be very encouraged by our findings. Our own research in Swansea is investigating the use of exosomes and exosome-like synthetic nanoparticles in combatting ovarian and endometrial cancer.
Progress in this field depends on partnership. As the authorship of our own paper illustrates, researchers in different countries are increasingly working together in nanohealth. Swansea University has wider links with Houston and Portuguese based researchers in the field.
It's also important to build partnerships outside academia, in particular with government and companies in this fast-growing sector." |
Listen to this post as a podcast:
This post contains Amazon Affiliate links. When you make a purchase through these links, Cult of Pedagogy gets a small percentage of the sale at no extra cost to you.
If you were to start singing “The Itsy-Bitsy Spider” right now, I bet you’d have a hard time keeping your hands still. That’s because most of us who know the song learned it with gestures, and things we learn with physical movement tend to stick.
We can apply that same principle to classroom learning, using movement to enhance learning from preschool all the way through college. Let’s take a look at what the research says about movement-based learning, then explore six different ways you can add more movement to your instruction.
The Research on Movement
The concept of “learning styles” has overwhelmingly been labeled a myth by researchers, so attempting to determine which of your students are kinesthetic learners will not be a good use of your time. What is worth your time is using movement when working with all learners, because plenty of research backs that up.
- In general, people learn better when information is presented in more than one way (Sankey, Birch, & Gardiner, 2010). In other words, if we take in information through more than one sense, we’re more likely to encode it in long-term memory. This would include visual, verbal, and kinesthetic modes of learning.
- Specifically, the use of gestures results in more enduring learning than learning without gestures (Cook, Yip, & Goldin-Meadow, S, 2010). So even the addition of a few small hand gestures can have an impact on how well students remember material.
- Study after study shows that physical activity activates the brain, improves cognitive function, and is correlated with improved academic performance (Donnelly & Lambourne, 2011). This means any kind of physical activity, not just movement associated with the material we’re learning, can benefit students academically.
Six Ways to Add Movement to Instruction
1. Total Physical Response
Developed for use with second-language learners in the 1960’s (Asher, 1966), Total Physical Response simply has students act out physical gestures to represent vocabulary words. Shown to be highly effective with both children (Singh, 2011) and adults (Carruthers, 2010), TPR can also be used to help learners remember new vocabulary terms in their native language. In other words, it can be used in any content area, with any student.
This video from the Teacher Toolkit shows Texas teacher Michael Rowland using TPR with his third grade students, who are English learners.
And here is Craig Gaslow, another Texas teacher who teaches AP Human Geography, demonstrating how he uses TPR to teach three different models of diffusion:
Finally, here are Scott Causer’s high school students demonstrating a few earth science processes using TPR:
What strategies support learning? Here’s high school science using TPR to recall processes in Earth Science -thanks Mr. Causer and team for modeling! @ESMSchoolDist @cultofpedagogy pic.twitter.com/njoAaVdBB9— Naomi Trivison (@NaomiTrivison) March 30, 2019
2. Tableaux
In this strategy, students create a physical "snapshot" with their bodies, a still picture that represents an idea.
This example, demonstrated by 4th grade teacher Stefannie Cundiff, comes from Teacher Toolkit:
Another example is explained in this Slides presentation by Tyler Jacobs, an Idaho-based high school ELA and social studies teacher. Click the image below to view the slideshow in a new window:
3. Simulations
In these, students demonstrate a concept with some kind of motion or interactivity. They could represent non-human components, like in the examples below, or they might actually take on the role of humans in a re-enactment of an event.
The first example comes from the University of British Columbia, where electrical engineering professor Matthew Yedlin has students simulate the difference between linear growth and exponential growth:
The second example, a simulation of the circulatory system, comes from Michelle Cliche, who teaches grades 4 and 5 in London, Ontario, Canada.
Ss learned about the complexity of the circulatory system by acting out delivery of blood through red & blue bean bags. S in middle is ❤️ directing blood flow to different organs. Later, we did this on playground climbers because heart has to pump ‘up’. pic.twitter.com/dhzTqRTEOp— Michelle Cliche (@Clicheteach) March 29, 2019
Finally, here’s a different kind of simulation demonstrated by Amy Tepperman, where participants show fractions by placing different body parts on the floor, Twister-style:
An important note about simulations: If you are doing simulations about historical periods or events, proceed with caution. Many, many teachers have ended up traumatizing students with these types of simulations. This article from Teaching Tolerance explores the topic in depth.
4. Songs with Movement
Songs are another powerful way to teach concepts to students, and if the songs also incorporate movement, even better.
Dan Adler, a 6th grade science teacher in Lawrence, MA, regularly uses songs with physical movements to help his students remember challenging concepts. In this video, students demonstrate “Bodak Particles,” a song about phase change sung to an instrumental version of Cardi B’s song “Bodak Yellow.” (Get a copy of the lyrics here.)
Check out our science scholars rapping about phase changes! pic.twitter.com/uO3sJQubgw— UP Academy Leonard (@up_leonard) November 8, 2018
5. Virtual and Augmented Reality
The experiences offered by virtual and augmented reality allow students to move around in and interact with virtual objects and spaces in ways that would be difficult if not impossible to pull off in the real world.
Augmented Reality layers digital enhancements on top of objects in the real, physical world. Using a device, like a smartphone, loaded with AR software, users point it at a picture or physical object, and the software brings up some kind of digital element like a 3D animation, text, or a video. (Pokémon Go is an example of an AR game.)
One set of AR tools that have tremendous learning potential comes from a company called Merge. Their Merge Cube is a handheld cube that can “become” a variety of objects when paired with the Merge Goggles, as shown below:
Virtual Reality immerses the user in a 360-degree environment, a computer-generated simulation, viewable through a VR headset, and allows them to move through and interact with that environment.
One outstanding source for VR experiences is Google Expeditions, which offers tours to over 500 different locations: historical landmarks, national and state parks, underwater sites, and up-close studies of scientific phenomena.
And if you don’t find the exact tour you want, you and your students can even create your own expeditions with Google Tour Creator.
6. Brain Breaks
I left this one for the end because it’s the easiest to implement. This brain breaks guide from the University of Texas summarizes the research on the connection between movement and academic achievement and offers dozens of ideas for brain breaks that can be put into action immediately.
Brain break videos are super easy to find on YouTube, especially for younger kids. My daughter came home from first grade years ago excited to demonstrate this one for me:
But what about older kids? High school math teacher David Sladkey has written a book of brain breaks that are great for all ages, even middle and high school kids who may not be as comfortable with the Tooty Ta. In this video, students demonstrate a simple toe-tapping brain break:
Tips for Getting Started
- Don’t overdo it. Attempting to add a movement component to every concept students learn would not only be too time-consuming, it could also get old, shifting from a fun novelty to something that causes students to slump over in their seats and say, Not THAT again!!
- Get input from students. When coming up with gestures or movements, you don’t have to rely solely on your own brain. Ask students to help you come up with ideas. Involving students in the process will help them remember concepts better and will increase the likelihood that you’ll end up with really effective movements.
- Use movement for retrieval practice. Rather than using movement in a single lesson, apply the principles of retrieval practice by repeating the movements in short practice sessions, spacing them out over time, and interleaving concepts with one another.
- For younger students, meaningful gestures matter more. While adults learn from both iconic gestures (those that have some meaning in relation to the concept they are connected to) and beat gestures (those that have no inherent meaning but are just tied to particular “beats” in a sentence or phrase), younger children learned more from iconic gestures (So, Sim Chen-Hui, & Low Wei-Shan, 2012). So when working with younger students, try to make the gestures match the concepts in some way.
Asher, J. J. (1969). The total physical response approach to second language learning. The modern language journal, 53(1), 3-17.
Carruthers, S. W. (2010). The total physical response method and its compatibility to adult ESL-learners. Retrieved from http://tesolteachers.net/t.pdf
Cook, S. W., Yip, T. K., & Goldin-Meadow, S. (2010). Gesturing makes memories that last. Journal of memory and language, 63(4), 465-475.
Donnelly, J. E., & Lambourne, K. (2011). Classroom-based physical activity, cognition, and academic achievement. Preventive medicine, 52, S36-S42.
Sankey, M., Birch, D., & Gardiner, M. (2010). Engaging students through multimodal learning environments: The journey continues. In Proceedings ASCILITE 2010: 27th annual conference of the Australasian Society for Computers in Learning in Tertiary Education: Curriculum, technology and transformation for an unknown future (pp. 852-863). University of Queensland.
Singh, J. P. (2011). Effectiveness of total physical response. Academic Voices: A Multidisciplinary Journal, 1, 20-22.
So, W. C., Sim Chen-Hui, C., & Low Wei-Shan, J. (2012). Mnemonic effect of iconic gesture and beat gesture in adults and children: Is meaning in gesture important for memory recall?. Language and Cognitive Processes, 27(5), 665-681. |
Sterling Brown, a black poet and scholar active primarily in the 1930s, parsed out seven black stereotypes propagated primarily by white writers. Most of these are instantly recognizable today. Brown points out that whether such stereotyping is racist or not, it is bad writing. However, a character's struggle against being stereotyped, as in stories such as "Not Your Singing, Dancing Spade" by Flannery O'Connor, made and still makes excellent literature.
The Contented Slave
From the inception of slavery in America, its supporters needed to find ways to rationalize its existence. One way was through the stereotyped "contented slave," a black person so lazily happy with his lot that he saw no reason for struggle. The contented slave was always paired with the "good master," a white owner who treated the slave as a lesser person but still with humanity and respect. This stereotype was used well into the 1940s and can frequently be spotted in 1930s literature and movies including "Gone with the Wind" and "The Little Colonel."
The Wretched Freeman
The "wretched freeman" was set up as a counterpoint to the contented slave. He was the embodiment of the slavery supporter's argument that a slave was never intended to be free. Freedom itself makes him miserable, and when freed, he desires nothing so much as to once again be a slave.
The Comic Negro
The "comic Negro" was a mainstay of minstrel shows for more than a century. This character was more of a caricature, with his personal and physical traits grossly exaggerated for the sake of humor. The comic Negro was never a main character but always a sidekick -- the comic relief. He laughed at himself just as everyone laughed at him. Topsy, a character in Uncle Tom's Cabin, is a comic Negro.
The Tragic Mulatto
The "tragic mulatto" is perhaps the only one of these stereotypes that has died out. Usually female, she has so many white ancestors she could "pass" for white. The tragic mulatto was first used by abolitionists to bring home the reality of slavery -- in which a girl as white as many in their audience was degraded and owned. Worse, however, was the implication that the white blood in a black slave was what gave that slave the impetus to escape, and that the black blood in that person tied them to the savagery and lack of control stereotypically associated with blacks. When slavery was ended, the tragic mulatto often tried to pass as a white person, always with terrible social and legal consequences; a frequent plot had her giving birth to a baby who was clearly of black origin. Most of these stories died out in the 1950s, at about the same time the Civil Rights Movement encouraged blacks to be proud of their heritage.
The Local Color Negro
These stereotyped characters were usually found in groups, like a Greek chorus, and took on whichever other stereotype was appropriate to the story. Most of the time, they were "contented slaves," though they might be "exotic primitives" in Africa or Barbados. These characters were treated as scenery, there only to give flavor and color to the setting of a story. You might see the same thing today in a novel about voodoo, where the blacks at a voodoo ritual are there only to set the scene.
The Exotic Primitive
This is perhaps the most offensive yet most lingering of the literary stereotypes listed. This stereotyped character embodies all the cliches about the "primitive African." Lust, sexual prowess, a wildly uncontrolled desire for drinking and drugs and often a lifestyle of casual violence mark this character. Its origins were in the "savage inheritance" that white writers believed belonged to descendants of Africans.
The Brute Negro
Early in the history of American literature, the "brute Negro" did not exist. His brutishness had been tamed out of him, domesticated through slavery. In fact, this alleged civilizing factor was one of the arguments in support of slavery. While historical figures such as Nat Turner brought up the specter of this stereotype, it did not gain traction until Reconstruction, when freed slaves competed with Southern whites still in shock at the upturning of their world. The brute Negro, manipulated and controlled by cunning Yankee carpetbaggers, became the literary repository of everything evil. |
A gear train is a mechanical system formed by mounting gears on a frame so the teeth of the gears engage. Gear teeth are designed to ensure the pitch circles of engaging gears roll on each other without slipping, providing a smooth transmission of rotation from one gear to the next. The implementation of the involute tooth yielded a standard gear design that provides a constant speed ratio.
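To make the constant speed ratio concrete, here is a minimal Python sketch (an illustration added here, not part of the original article) that computes the overall ratio of a gear train from tooth counts and shows how an input speed and torque are transformed; the tooth counts, speed, and torque values are assumptions chosen only for the example.

```python
# Overall ratio of a simple gear train from tooth counts.
# Each stage is a (driver_teeth, driven_teeth) pair; the overall ratio is the
# product of driven/driver counts for every meshing pair in the train.

def train_ratio(stages):
    """Return the overall gear ratio (input speed divided by output speed)."""
    ratio = 1.0
    for driver_teeth, driven_teeth in stages:
        ratio *= driven_teeth / driver_teeth
    return ratio

stages = [(12, 36), (15, 45)]               # two 3:1 stages in series -> 9:1 overall
ratio = train_ratio(stages)

input_rpm, input_torque_nm = 1800, 10.0     # assumed motor output
output_rpm = input_rpm / ratio              # speed is divided by the ratio (200 rpm)
output_torque_nm = input_torque_nm * ratio  # ideal torque is multiplied (90 N*m, losses ignored)

print(f"ratio {ratio}:1, output {output_rpm} rpm, {output_torque_nm} N*m")
```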
Gears are a crucial part of many motors and machines. Gears help increase torque output by providing gear reduction, and they adjust the direction of rotation, as with the driveshaft to the rear wheels of automotive vehicles. Here are some basic types of gears and how they are different from each other.
Spur gears are mounted in series on parallel shafts to achieve large gear reductions.
The most common gears are spur gears, which are used in series for large gear reductions. The teeth on spur gears are straight, and the gears are mounted on parallel shafts. Spur gears are used in washing machines, screwdrivers, windup alarm clocks, and other devices. They are particularly loud, because each gear tooth engages and collides with its mate all at once. Each impact makes loud noises and causes vibration, which is why spur gears are not used in machinery like cars. A normal gear ratio range is 1:1 to 6:1.
Helical gears have a smoother operation due to the angle twist creating instant contact with the gear teeth.
Helical gears operate more smoothly and quietly than spur gears due to the way the teeth interact. The teeth on a helical gear are cut at an angle to the face of the gear. When two of the teeth start to engage, the contact is gradual, starting at one end of the tooth and maintaining contact as the gear rotates into full engagement. The typical range of the helix angle is about 15 to 30 deg. The thrust load varies directly with the tangent of the helix angle. Helical gears are the most commonly used gears in transmissions. They also generate large amounts of thrust and use bearings to help support the thrust load. Helical gears can be used to adjust the rotation angle by 90 deg. when mounted on perpendicular shafts. The normal gear ratio range is 3:2 to 10:1.
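As a quick illustration of that thrust relationship, the sketch below (an added example, not from the article) estimates the axial load on a helical gear as the tangential load times the tangent of the helix angle; the 500 N tangential load is an assumed figure.

```python
import math

def axial_thrust(tangential_force_n, helix_angle_deg):
    """Axial (thrust) load W_a = W_t * tan(helix angle)."""
    return tangential_force_n * math.tan(math.radians(helix_angle_deg))

w_t = 500.0                      # assumed tangential load in newtons
for angle_deg in (15, 20, 30):   # typical helix angles mentioned above
    print(f"{angle_deg} deg -> {axial_thrust(w_t, angle_deg):.0f} N of thrust")
```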
The image above shows two different configurations for bevel gears: straight and spiral teeth.
Bevel gears are used to change the direction of a shaft’s rotation. Bevel gears have teeth that are available in straight, spiral, or hypoid shape. Straight teeth have similar characteristics to spur gears and also have a large impact when engaged. Like spur gears, the normal gear ratio range for straight bevel gears is 3:2 to 5:1.
Spiral teeth operate the same as helical gears. They produce less vibration and noise when compared to straight teeth. The right hand of the spiral bevel is the outer half of the tooth, inclined to travel in the clockwise direction from the axial plane. The left hand of the spiral bevel travels in the counterclockwise direction. The normal gear ratio range is 3:2 to 4:1.
Hypoid gears are a type of spiral gear in which the shape is a revolved hyperboloid instead of conical shape. The hypoid gear places the pinion off-axis to the ring gear or crown wheel. This allows the pinion to be larger in diameter and provide more contact area.
The pinion and gear are almost always of opposite hand, and the spiral angle of the pinion is usually larger than the angle of the gear. Hypoid gears are used in power transmissions due to their large gear ratios. The normal gear ratio range is 10:1 to 200:1.
Worm gears are used in large gear reductions. Gear ratio ranges of 5:1 to 300:1 are typical. The setup is designed so that the worm can turn the gear, but the gear cannot turn the worm. The angle of the worm is shallow and as a result the gear is held in place due to the friction between the two. The gear is found in applications such as conveyor systems in which the locking feature can act as a brake or an emergency stop. |
United States Holocaust Memorial Museum:
Women in the Third Reich
Women played a vital role in Adolf Hitler’s plan to create an ideal German Community (Volksgemeinschaft). Hitler believed a larger, racially purer population would enhance Germany’s military strength and provide settlers to colonize conquered territory in eastern Europe. The Third Reich’s aggressive population policy encouraged “racially pure” women to bear as many “Aryan” children as possible. |
Sore throats occur when an infection causes the lining of the throat to swell, which in turn makes it sore and difficult to swallow. They are normally caused by either a viral or bacterial infection.
A sore throat is commonly spread by sharing drinks, kissing, coughing, nose blowing and sneezing.
The majority of sore throats are caused by a viral infection. They tend to be more common in winter when we spend more time indoors in contact with other people, enabling germs to spread rapidly.
Viral infections usually last only a few days, as the body is normally able to fight off the infection by itself.
Sore throats caused by a bacterial infection are often more troublesome and can lead to conditions such as tonsillitis or ear infections. The bacteria that cause the majority of sore throats is called streptococcus. In most cases antibiotic treatment prescribed by a GP will be the only successful treatment.
Other reasons for a sore throat include pollution, changes in temperature, smoking and overuse of the vocal cords.
The symptoms associated with a sore throat can include:
- Painful red throat.
- Swollen tonsils.
- Difficulty in swallowing.
- Neck stiffness.
Sore throats caused by a virus often clear up quickly on their own whereas a bacterial infection may require you to consult your GP.
A wide range of over the counter products are available which are useful for relieving the pain of a sore throat including lozenges, gargles and throat sprays.
Lozenges often contain an antibacterial agent and a local anaesthetic. The large amounts of saliva produced when sucking the lozenge along with the ingredients help to soothe the throat by lubricating the throat and washing off the infective organisms.
Throat sprays often contain anaesthetic, which act directly on the throat to provide immediate relief.
Other suggestions to help relieve a sore throat include:
- Taking a painkiller such as paracetamol or aspirin (16 plus).
- Drinking plenty of non-alcoholic fluids.
- Gargling with warm salty water.
- Gargling with soluble aspirin (16 plus).
- Drinking honey or lemon tea.
- Don’t smoke.
If your symptoms persist for longer than three days then speak to your doctor or pharmacist.
The information provided on this website does not replace medical advice.
If you want to find out more, or are worried about any medical issue or symptoms that you may be experiencing, please contact our pharmacist or see your doctor. |
Does my child's nursery use the EYFS curriculum?
All children in England who go to nurseries, pre-schools and reception classes will be following a curriculum called the Early Years Foundation Stage (EYFS).
Children who are with registered childminders will also be doing the EYFS curriculum. Remember though, that this is a play-based curriculum, so your child will be learning mainly through play and traditional activities such as cooking or having stories read to them.
What is play-based learning?
For many years, it has been recognised that when children play, they are actually learning. Play seems to be a practical way of helping children to develop skills and knowledge.
For play to be effective, children need different types of play opportunities. Play also needs to be organised by adults in a range of ways. |
Chest Wall Tumors
(See also Overview of Lung Tumors.)
Chest wall tumors, which may be cancerous or noncancerous, are tumors of the rib cage and its muscles, connective tissues, and nerves, that can interfere with lung function.
Tumors of the chest wall may develop in the chest wall (called a primary tumor) or spread (metastasize) to the chest wall from a cancer located elsewhere in the body. Almost half of chest wall tumors are noncancerous (benign).
The most common noncancerous chest wall tumors are osteochondroma, chondroma, and fibrous dysplasia.
A wide range of cancerous (malignant) chest wall tumors exist. Over half are cancers that have spread to the chest wall from distant organs or from nearby structures, such as a breast or a lung. The most common cancerous tumors arising from the chest wall are sarcomas.
Chondrosarcomas are the most common primary chest wall sarcoma and arise from the cartilage of the anterior tract of the ribs and, less commonly, of the sternum, scapula, or clavicle. Bone tumors include osteosarcoma and small-cell malignant tumors (such as Ewing sarcoma or Askin tumor).
The most common soft-tissue primary cancerous tumors are fibrosarcomas (desmoids and neurofibrosarcomas) and malignant fibrous histiocytomas. Other primary tumors include chondroblastomas, osteoblastomas, melanomas, lymphomas, rhabdomyosarcomas, lymphangiosarcomas, multiple myeloma, and plasmacytomas.
People with chest wall tumors require imaging tests, such as chest x-ray, computed tomography (CT), magnetic resonance imaging (MRI), and sometimes positron emission tomography (PET)–CT to determine the original site and extent of the tumor and whether it developed in the chest wall or is a metastasis from a tumor elsewhere in the body. A biopsy may be done to confirm the diagnosis.
Most chest wall tumors are removed surgically. If needed, the chest wall is then reconstructed, sometimes with tissues from elsewhere in the body. |
The use of additive manufacturing (AM) techniques to produce medical devices is growing at a rapid pace, and it’s easy to see why. Additive manufacturing allows manufacturers to lower production costs by reducing waste and decrease time to market by simplifying (or eliminating) tooling and equipment. Employing AM technologies also makes it possible to produce patient-specific designs and devices with complex geometries, and to do so at a lower cost than traditional manufacturing methods.
The terms “rapid prototyping” and “additive manufacturing” are often used interchangeably. But rapid prototyping can refer to a variety of methods – including additive manufacturing – by which models are produced for demonstration, proof of concept, and feasibility testing. The term additive manufacturing specifically refers to a production method that involves “building up” a part by adding layers of material.
It’s important to note that medical devices produced via additive manufacturing techniques can be validated, but the method and requirements for validation should be addressed during the initial product development.
There are many additive manufacturing technologies used for the development and production of medical devices, and each technology is designed for specific metals or resins. The combination of the AM process and its suitable materials often drives the decision of which technology to use for a given device or purpose. But when choosing which additive manufacturing technology to use, or even whether AM is appropriate for a given product, it’s important to consider the implications that it will have on the entire product development and release process, from prototyping to production.
Processes and Materials
Stereolithography (SL) is an additive manufacturing technique in which an ultraviolet (UV) laser cures, or hardens, a liquid plastic. The stereolithography apparatus (SLA) creates the part, layer by layer, by submerging a metal platform into a vat of photopolymer (liquid plastic that is hardened with light). The platform is lowered by an amount equal to the thickness of the first layer, and the UV laser traces the pattern of the initial layer onto the surface of the liquid. The platform is then lowered by the thickness of the next layer, and the laser again traces the pattern of that layer on the surface of the resin, curing it and joining it to the layer below. It’s important to note that for objects with thin walls or other delicate structures, the SL process requires supports during manufacturing. These supports are removed after the completion of the build.
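As a rough feel for the layer-by-layer arithmetic described above, the short Python sketch below estimates layer count and build time for an SLA part; the part height, layer thickness, and per-layer time are illustrative assumptions only, not figures from the article or from any particular machine.

```python
import math

# The platform drops by one layer thickness per pass, so the number of layers
# follows directly from the part height; total time scales with layer count.
part_height_mm = 48.0
layer_thickness_mm = 0.1      # an assumed, commonly quoted SLA layer height
seconds_per_layer = 8.0       # assumed average trace/recoat time per layer

layers = math.ceil(part_height_mm / layer_thickness_mm)    # 480 layers
build_time_min = layers * seconds_per_layer / 60.0          # 64 minutes

print(f"{layers} layers, about {build_time_min:.0f} minutes of build time")
```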
There are three resins that are particularly well suited for producing medical devices with stereolithography, and each one has unique benefits for specific applications.
- BioClear: This resin produces parts that are clear, strong, and water-resistant. BioClear is USP VI certified, and is primarily used for medical applications that are non-implant and have limited contact with the body, such as device prototypes or dental drill guides.
- Watershed XC 11122: The unique feature of Watershed XC 11122 is that it creates parts that are nearly colorless, with an appearance and mechanical properties similar to commonly used clear thermoplastics such as polycarbonate. Watershed has excellent water resistance and is dimensionally stable.
- ProtoGen O-XT 18420: This is an ABS-like photopolymer that has excellent chemical resistance and can withstand a wide range of temperature and humidity levels. Its white color makes it easy to see details, making it preferable for applications such as surgical planning models.
Stereolithography has relatively high accuracy and feature definition, combined with isotropic material properties. It is widely used in the development process to evaluate form, fit, and function. It is also a good choice for patterns to support low-volume production methods such as resin casting and investment casting.
Selective Laser Sintering
In the selective laser sintering (SLS) process, a laser heats a powdered material – usually plastic, ceramic, or glass – to just below its melting point, hardening and bonding it to create a 3D structure. Similar to the stereolithography technique, SLS builds the part on a platform, which lowers incrementally as the laser create layers that form the part. In contrast to stereolithography, SLS typically does not require support structures, even for thin, fragile parts.
A common use of SLS technology in medical device manufacturing is to produce patient-specific surgical cutting blocks (also referred to as surgical cutting guides). While SLS can be used with a variety of materials, nylon is often the material of choice in these types of applications. SLS produces high-precision parts and geometries that are nearly impossible to obtain by other methods, and nylon provides a robust part with a high melting point.
Electron Beam Melting and Direct Metal Laser Sintering
The electron beam melting (EBM) process manufactures parts by melting powdered metal with an electron beam (unlike SLS, which heats the powdered material to just below its melting point). Manufacturing with EBM technology takes place in a vacuum to avoid scattering of the electron beam, and the vacuum environment helps eliminate impurities trapped in the metal. Parts produced with EBM technology are also stress-free and have high yield strength.
While EBM technology is suitable for a wide range of metals and alloys – aluminum, stainless steel, titanium and cobalt chrome – the most commonly used materials for medical devices are titanium and stainless steel. The US Food and Drug Administration (FDA) has approved titanium implantable devices produced via EBM.
Another additive manufacturing technology that uses metal to produce parts is direct metal laser sintering (DMLS). This process is essentially the same as selective laser sintering, but DMLS builds the part from powdered metal rather than plastic, ceramic, or glass. Like EBM technology, DMLS can be used with stainless steel and titanium, making it a suitable option for medical devices.
Choose the Right Partner
In 2016, the FDA addressed the topic of producing medical devices through additive manufacturing technology, with its draft guidance document, Technical Considerations for Additive Manufactured Devices. While this document covers two main areas – “Design and Manufacturing Considerations” and “Device Testing Considerations” – it is not meant to be a comprehensive guide.
As a manufacturer, navigating the possibilities for additive manufacturing of medical devices can be overwhelming. But a development partner who is comfortable with all the materials and technologies available can help you make the best decisions regarding technical and performance requirements, validation methods, and time-to-market objectives.
Vaupell, Inc., a division of Sumitomo Bakelite (TYO: 4203) provides complete product development services for medical device manufacturers. Our team can assist in design for manufacturability, project engineering, 3D printing, and injection molding expertise for medical device projects from concept to commercialization. Our innovation center located in Hudson, NH can assist in concept, feasibility, and short-run production builds. Scale-up and commercial launch are supported in our ISO 13485 registered facilities in MA and MI.
Vaupell has in-house capability for SLA, SLS, DMLS, metals and plastics machining, prototype tooling building, and injection molding. Connect with us to learn more about how we can assist you in your next line extension or product development project. |
Cannibalism: Archaeological and Anthropological Studies
Is it True that We Are All Descended from Cannibals?
By K. Kris Hirst. Updated April 13, 2019.
European colonial imagination of cannibalism in Brazil, painted by Jan van Kessel in 1644. CREDIT: Corbis via Getty Images
Cannibalism refers to a range of behaviors in which one member of a species consumes the parts or all of another member. The behavior occurs commonly in numerous birds, insects, and mammals, including chimpanzees and humans.
Key Takeaways: Cannibalism
- Cannibalism is a common behavior in birds and insects, and primates including humans.
- The technical term for humans eating humans is anthropophagy. The earliest evidence for anthropophagy is 780,000 years ago, at Gran Dolina, Spain.
- Genetic and archaeological evidence suggests it may have been a relatively common practice in the ancient past, perhaps as part of an ancestor worship ritual.
Human cannibalism (or anthropophagy) is one of the most taboo behaviors of modern society and at the same time one of our earliest cultural practices. Recent biological evidence suggests that cannibalism was not only not rare in ancient history, it was so common that most of us carry around genetic evidence of our self-consuming past.
Categories of Human Cannibalism
Although the stereotype of the cannibal's feast is a pith-helmeted fellow standing in a stew pot, or the pathological antics of a serial killer, today scholars recognize human cannibalism as a wide variety of behaviors with a wide range of meanings and intentions. Outside of pathological cannibalism, which is very rare and not particularly relevant to this discussion, anthropologists and archaeologists divide cannibalism into six major categories, two referring to the relationship between consumer and consumed, and four referring to the meaning of the consumption.
- Endocannibalism (sometimes spelled endo-cannibalism) refers to consumption of members of one's own group
- Exocannibalism (or exo-cannibalism) refers to the consumption of outsiders
- Mortuary cannibalism takes place as part of funerary rites and can be practiced as a form of affection, or as an act of renewal and reproduction
- Warfare cannibalism is the consumption of enemies, which can be in part honoring brave opponents or exhibiting power over the defeated
- Survival cannibalism is consumption of weaker individuals (very young, very old, sickly) under conditions of starvation such as shipwreck, military siege, and famine
Other recognized but less-studied categories include medicinal, which involves the ingestion of human tissue for medical purposes; technological, including cadaver-derived drugs from pituitary glands for human growth hormone; autocannibalism, eating parts of oneself including hair and fingernails; placentophagy, in which the mother consumes her new-born baby's placenta; and innocent cannibalism, when a person is unaware that they are eating human flesh.
What Does it Mean?
Cannibalism is often characterized as part of the "darker side of humanity", along with rape, enslavement, infanticide, incest, and mate-desertion. All of those traits are ancient parts of our history which are associated with violence and the violation of modern social norms. Western anthropologists have attempted to explain the occurrence of cannibalism, beginning with French philosopher Michel de Montaigne's 1580 essay on cannibalism, which treated it as a matter of cultural relativism. Polish anthropologist Bronislaw Malinowski declared that everything in human society had a function, including cannibalism; British anthropologist E.E. Evans-Pritchard saw cannibalism as fulfilling a human requirement for meat.
Everybody Wants to be a Cannibal
American anthropologist Marshall Sahlins saw cannibalism as one of several practices that developed as a combination of symbolism, ritual, and cosmology, and Austrian psychoanalyst Sigmund Freud saw it as reflective of underlying psychoses. Serial killers throughout history, including Richard Chase, committed acts of cannibalism. American anthropologist Shirley Lindenbaum's extensive compilation of explanations (2004) also includes Dutch anthropologist Jojada Verrips, who argues that cannibalism may well be a deep-seated desire in all humans, and that the accompanying anxiety about it persists even today: modern cravings for cannibalism are met by movies, books, and music, as substitutes for our cannibalistic tendencies. The remnants of cannibalistic rituals could also be said to be found in explicit references, such as the Christian Eucharist (in which worshipers consume ritual substitutes of the body and blood of Christ). Ironically, the early Christians were called cannibals by the Romans because of the Eucharist, while Christians called the Romans cannibals for roasting their victims at the stake.
Defining the Other
The word cannibal is fairly recent; it comes from Columbus' reports from his second voyage to the Caribbean in 1493, in which he uses the word to refer to Caribs in the Antilles who were identified as eaters of human flesh. The connection with colonialism is not a coincidence. Social discourse about cannibalism within a European or western tradition is much older, but almost always as an institution among "other cultures": people who eat people need or deserve to be subjugated. It has been suggested (as described in Lindenbaum) that reports of institutionalized cannibalism were always greatly exaggerated. The English explorer Captain James Cook's journals, for example, suggest that the preoccupation of the crew with cannibalism might have led the Maori to exaggerate the relish with which they consumed roasted human flesh.
The True "Darker Side of Humanity"
Post-colonial studies suggest that some of the stories of cannibalism by missionaries, administrators, and adventurers, as well as allegations by neighboring groups, were politically-motivated derogatory or ethnic stereotypes. Some skeptics still view cannibalism as never having happened, a product of the European imagination and a tool of the Empire, with its origins in the disturbed human psyche. The common factor in the history of cannibal allegations is the combination of denial in ourselves and attribution of it to those we wish to defame, conquer, and civilize. But, as Lindenbaum quotes Claude Rawson, in these egalitarian times we are in double denial: denial about ourselves has been extended to denial on behalf of those we wish to rehabilitate and acknowledge as our equals.
We are All Cannibals?

Recent molecular studies have suggested, however, that all of us were cannibals at one time. The genetic propensity that makes a person resistant to prion diseases (also known as transmissible spongiform encephalopathies or TSEs, such as Creutzfeldt-Jakob disease, kuru, and scrapie)—a propensity that most humans have—may have resulted from ancient human consumption of human brains. This, in turn, makes it likely that cannibalism was once a very widespread human practice indeed.

More recent identification of cannibalism is based primarily on the recognition of butchering marks on human bones, the same kinds of butchering marks—long bone breakage for marrow extraction, cutmarks and chop marks resulting from skinning, defleshing and evisceration, and marks left by chewing—as those seen on animals prepared for meals. Evidence of cooking and the presence of human bone in coprolites (fossilized feces) have also been used to support a cannibalism hypothesis.

Cannibalism through Human History

The earliest evidence for human cannibalism to date has been discovered at the Lower Paleolithic site of Gran Dolina (Spain), where about 780,000 years ago six individuals of Homo antecessor were butchered. Other important sites include the Middle Paleolithic sites of Moula-Guercy, France (100,000 years ago), Klasies River Caves, South Africa (80,000 years ago), and El Sidrón, Spain (49,000 years ago). Cutmarked and broken human bones found in several Upper Paleolithic Magdalenian sites (15,000–12,000 BP), particularly in the Dordogne valley of France and the Rhine valley of Germany, including Gough's Cave, hold evidence that human corpses had been dismembered for nutritional cannibalism, but the treatment of skulls to make skull-cups also suggests possible ritual cannibalism.

Late Neolithic Social Crisis

During the late Neolithic in Germany and Austria (5300–4950 BCE), at several sites such as Herxheim, entire villages were butchered and eaten and their remains thrown into ditches. Boulestin and colleagues surmise that a crisis occurred, an example of the collective violence found at several sites at the end of the Linear Pottery culture. More recent events studied by scholars include the Anasazi site of Cowboy Wash (United States, ca. 1100 CE), the Aztecs of 15th-century CE Mexico, colonial-era Jamestown, Virginia, Alferd Packer and the Donner Party (both 19th-century USA), and the Fore of Papua New Guinea (who stopped cannibalism as a mortuary ritual in 1959).

Sources

- Anderson, Warwick. "Objectivity and Its Discontents." Social Studies of Science 43.4 (2013): 557–76. Print.
- Bello, Silvia M., et al. "Upper Palaeolithic Ritualistic Cannibalism at Gough's Cave (Somerset, UK): The Human Remains from Head to Toe." Journal of Human Evolution 82 (2015): 170–89. Print.
- Cole, James. "Assessing the Calorific Significance of Episodes of Human Cannibalism in the Palaeolithic." Scientific Reports 7 (2017): 44707. Print.
- Lindenbaum, Shirley. "Thinking About Cannibalism." Annual Review of Anthropology 33 (2004): 475–98. Print.
- Milburn, Josh. "Chewing over in Vitro Meat: Animal Ethics, Cannibalism and Social Progress." Res Publica 22.3 (2016): 249–65. Print.
- Nyamnjoh, Francis B., ed. "Eating and Being Eaten: Cannibalism as Food for Thought." Mankon, Bamenda, Cameroon: Langaa Research & Publishing CIG, 2018. Print.
- Rosas, Antonio, et al. "Les Néandertaliens d'El Sidrón (Asturies, Espagne). Actualisation d'un nouvel échantillon." L'Anthropologie 116.1 (2012): 57–76. Print.
- Saladié, Palmira, et al. "Intergroup Cannibalism in the European Early Pleistocene: The Range Expansion and Imbalance of Power Hypotheses." Journal of Human Evolution 63.5 (2012): 682–95. Print.
How Do Magnets Work? - We Did Extensive Research!
Magnets are objects we interact with daily, from moving a refrigerator magnet to hold a note in place to swiping a credit card at the grocery store. Yet despite how often we use them, most people don't know much about the science or the delicate nature behind magnets.
How Do Magnets Work? A magnet forms when the electrons in an object line up in the same direction, creating a magnetic field. This magnetic field can attract or repel the particles of other objects, which is what allows magnets to pull objects toward them or push them away.
Magnets are simple objects, yet some aspects of how they work are still not fully understood. If you want to learn more about what makes magnets work and how they keep working even after years of use, keep reading.
How Do Magnets Work?
Before we can understand how magnets work, we need to know what kinds of magnets there are. We can then figure out how these different types of magnets react to other objects and each other.
Types of Magnets
There are three main types of magnets, all of which we will be discussing in this article. They are classified as permanent, temporary, and electrical magnets.
Permanent magnets are the ones we use on our refrigerators or use in the science classrooms. They work without being introduced to a different magnetic field or having an electric current run through it.
In other words, permanent magnets are the most basic types of magnets that you probably come in contact with on a daily basis.
Temporary magnets, on the other hand, function like permanent magnets while they are in a magnetic field; however, once the object leaves the field, it loses its magnetic charge.
As an example of this concept, think of a paper clip that has been rubbed on a refrigerator magnet. For a while, the paper clip will be magnetized, but eventually, it will lose its magnetic charge.
This is exactly what a temporary magnet is. You can think of it as essentially the opposite of the permanent magnet that was first described in this section.
Finally, an electrical magnet, or electromagnet, is a magnet that functions when electricity is running through it. It is a piece of iron with a conducting coil wrapped around the metal core. The coil receives the electric current and magnetizes the core.
These magnets can also be made of a variety of materials, which are either paramagnetic or ferromagnetic. A paramagnetic material produces a weaker magnetic field than a ferromagnetic material, though.
Paramagnetic materials include magnesium and lithium, while ferromagnetic materials, which are more commonly used because they can form a stronger magnetic field, include metals such as nickel, iron, and cobalt.
While reading through this article, assume that the magnet being referred to is a permanent magnet made from ferromagnetic materials.
A permanent magnet, such as a refrigerator magnet, is something more familiar to the average consumer, and ferromagnetic materials, as mentioned before, are more common when it comes to what makes up a magnet.
So, let’s get into a deeper explanation of how magnets actually work when they are reacting to each other as well as other objects. Take a look at the list down below to get a few main points on the topic, and keep reading to get all of the details.
How Magnets Work:
- Electrons of the magnet are lined up correctly
- The magnet’s electrons push or pull objects and other magnets
As previously stated, a magnet works because the electrons of the magnet are lined up correctly.
An electron is a negatively charged subatomic particle. Every atom contains electrons; however, a magnetic field is only created when the electrons (and their tiny magnetic moments) line up in the same direction.
We know that a magnet’s electrons are what pushes or pulls an object. Scientists have two theories that explain how the magnetic field produced by these lined up electrons communicate with the particles of other objects.
These two theories are the large-scale classical theory and the quantum mechanics small scale theory.
Since it would be difficult to understand the concepts of these theories by just reading the names, we have done some in-depth research in order to put together a detailed explanation of each one, which you will find in the data table down below.
Take a quick look over this information, which will provide you with the main ideas for both of these theories, and keep reading to get all of the details that we’ve managed to dig up.
|Magnetic Theory||Large-Scale Classical Theory||Quantum Mechanics Small Scale Theory|
|Reaction of Magnetic field:||The Magnetic Field Creates Clouds Of Energy||The Magnetic Field Gives Off Invisible Particles|
|How The Magnetic Theory Works:||The Clouds Interact With Other Objects Through Their Electrons||The Particles Interact With The Electrons Of Other Objects|
|How The Objects Are Attracted:||The Clouds Push And Pull The Electrons Of The Other Object||The Invisible Particles Tell The Electrons Of The Object To Move (Forward or Backward)|
When you are trying to understand how magnets work, it is important to get a good grasp on the scientific concept that is behind these reactions. These concepts lie within the Large-Scale Classical Theory and Quantum Mechanics Small Scale Theory.
The large-scale theory is the first concept that we will be going over in this section. It basically suggests that the magnetic field creates clouds of energy within it.
These clouds, as a result, will interact with the electrons that exist within the other objects, pushing or pulling the electrons.
The small-scale theory, on the contrary, states that the electrons in a magnetic field give off invisible particles.
In short, these particles tell the electrons of the object to move forward, which would be toward the magnet, or backward, meaning away from the magnet.
The most basic differences between these two theories occur in the reaction of the magnetic field. They either create clouds of energy or invisible particles, but ultimately these components will manipulate the electrons of the object in question in order to attract or repel it.
However, there are some elements of magnets that science can’t explain. To better understand these factors, take a look at the list down below.
Unexplainable Elements Of Magnets:
- The Polarity (North and South Poles)
- Electric Field Emissions (Why They Do It)
The first is how magnets always have a north and south pole. These poles essentially determine what a magnet will be attracted to and how it will execute its pull. Through all of the related scientific research, there is no conclusive evidence of why this is.
Additionally, scientists are not sure why the particles of a magnet emit an electric field at all. In other words, while we know that magnets have different poles and give off electric fields, we don’t know why. Time will tell if we will ever find the answers to these questions, though.
How Magnets React to Each Other
In middle school science classrooms across the globe, students are introduced to magnets and how they work. You might even remember these lessons from your own school-aged years.
A common and straightforward experiment that helps children learn about the power of magnets includes turning a needle floating on a piece of paper in a cup of water into a compass.
When completed, this type of experiment would show you exactly how magnetic fields and their poles work.
The compass experiment and other similar experiments help in understanding magnetic poles, which is the main idea of what we will be going over in this section.
Seeing magnetic poles work will help in understanding how magnets act and change whenever they are near each other. For visualization, think of a magnet like a globe with its northern and southern poles that keep the planet in rotation.
Magnets can either attract or repel each other, depending on their poles. To begin this discussion, let’s touch on a few main points before we get too in-depth about the topic.
- North and South Pole (one on each side)
- Attract and repel the magnet from other objects/magnets
- Attract = come together /Repel = move apart
- Correct poles must be lined up in order to bring two magnets together
The poles on a magnet are classified as North and South poles, with one on each respective side. Each pole will determine whether the magnet will be attracted or repelled from other objects or magnets.
These terms are basically exactly what they sound like: if they are attracted to each other, then the magnets will be pulled together.
Contrarily, if they repel each other, you will find that when you hold them next to each other, their respective magnetic fields will keep them from touching each other.
If the opposite poles of two magnets come in contact, then they will be attracted to each other. However, if the magnets are touching at the same poles, then they will repel each other. The pairings are listed below, followed by a short code sketch of the rule; continue reading to get all of the details on magnets reacting to each other.
How Magnets React To Each Other (In Terms Of Poles):
- North to North: Repel
- South to South: Repel
- North to South: Attract
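To make the pole rule concrete, here is a minimal Python sketch (not part of the original article; the function name and structure are illustrative assumptions) that maps any pairing of facing poles to "attract" or "repel."

```python
def pole_interaction(pole_a: str, pole_b: str) -> str:
    """Return 'attract' or 'repel' for two facing magnet poles.

    Poles are given as 'N' or 'S'. Opposite poles attract; like poles repel.
    """
    valid = {"N", "S"}
    if pole_a not in valid or pole_b not in valid:
        raise ValueError("poles must be 'N' or 'S'")
    return "attract" if pole_a != pole_b else "repel"

# The pairings described in the list above:
print(pole_interaction("N", "N"))  # repel
print(pole_interaction("S", "S"))  # repel
print(pole_interaction("N", "S"))  # attract
```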
Some additional factors can affect magnets' reactions to each other, such as their size and the type of material used.
For instance, a smaller magnet made of paramagnetic material will have a weaker attraction and repulsion than a larger magnet made up of ferromagnetic materials.
- Magnets have North and South Poles
- The poles on each side of the magnet determine what they will be attracted to (including other magnets)
- Opposite poles attract, while the same poles repel each other
Do Magnets Ever Stop Working?
Although their name might imply otherwise, permanent magnets can still lose charge or magnetism. In other words, permanent magnets are not necessarily permanent.
You can test this by applying a magnet to a vertical metal object. If the magnet slides down or is easily removed, you have a weakened magnet.
The best way to know how to prevent a magnet from weakening, or just to understand how this process works in general, is by knowing what causes weakened magnets.
Leadings Causes For Weakened/Demagnetized Magnets:
- Temperature Change (Extreme Heat)
- Another Magnet (Stronger)
- Shock (Strong Physical Force)
There are three ways that a magnet can become weakened or demagnetized: temperature change, another magnet, or shock.
The first way a magnet can lose its magnetism is by coming into contact with excessive heat. More specifically, this occurs when a magnet is heated to its "Curie point," the temperature at which the magnet loses its magnetism.
The Curie point varies depending on the material the magnet is made of, so a magnet can have a high or low Curie point depending on its composition.
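As a rough illustration, the sketch below (an example added here, not part of the original article) stores approximate Curie temperatures for three common ferromagnetic metals and checks whether a magnet at a given temperature has passed its Curie point. The temperature values are approximate textbook figures.

```python
# Approximate Curie temperatures in degrees Celsius (textbook values).
CURIE_POINT_C = {
    "iron": 770,
    "nickel": 358,
    "cobalt": 1115,
}

def above_curie_point(material: str, temperature_c: float) -> bool:
    """Return True if the temperature exceeds the material's Curie point,
    meaning the magnet would lose its permanent magnetism."""
    if material not in CURIE_POINT_C:
        raise ValueError(f"unknown material: {material!r}")
    return temperature_c >= CURIE_POINT_C[material]

print(above_curie_point("nickel", 400))  # True: a nickel magnet demagnetizes
print(above_curie_point("iron", 400))    # False: iron keeps its magnetism
```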
The second way a magnet can lose its magnetic strength is through a demagnetizing field.
A demagnetizing field is created when a stronger magnet with an opposite pole is applied to the magnet being demagnetized. If the magnet is being demagnetized on its north side, then the stronger magnet will be applying its south side to complete this action.
The third way a magnet can be demagnetized is through shock. Shock does not mean an electrical shock, though; it means being hit with a very strong physical force.
If the properly working magnet is struck hard enough, then the inner workings of the magnet will be misaligned. This misalignment will cause the electrons to no longer form a magnetic field.
Most magnets, when weakened, will return to their original charge eventually when given time. Permanent magnetic field loss only occurs when the magnet has been exposed to significant demagnetization for an extended period of time.
Can You Fix a Magnet That Loses Its Strength?
After reading the information about demagnetization in the previous section, you might be wondering if it is possible for the force to be recovered once this is done. The answer to this question is yes: magnets can be re-magnetized when they lose their strength.
Here are the ways that a magnet can be strengthened after it has lost its force:
- Shock (extreme force, not electrical)
- Magnetization through a stronger magnet
- Direct contact with weaker magnets (i.e., stacking)
- Freezing temperatures
There are four ways that magnets can be strengthened again: through shock, a stronger magnetic field, direct contact with other magnets, and reaching extremely low temperatures.
The first way that the magnetism or strength of a magnet can be recouped is through shock. Just like with losing magnetism, the shock isn’t electrical, but extreme force.
While this does seem counter-intuitive, the theory is that if the first blow misaligned the interior of the magnet then striking the magnet again will allow the electrons to line up again. If done correctly, the magnet can work just as well as it did before.
The second way to re-magnetize a magnet is through rubbing a stronger magnet against the weaker magnet. Some of the magnetic charges will transfer to the weakened magnet, therefore allowing it to regain some or all of its force.
Think of the compass experiment mentioned earlier. If you run the needle along the length of a magnet, the needle will absorb some of the magnetic charges.
For a compass, this is enough. If you need to recharge something more substantial than a needle for a compass, make sure you have a stronger magnet.
The third way that a magnet can regain its original power is by coming in direct contact with another weak magnet. The joint charge of these two magnets will make them stronger.
The fourth and final way that a magnet can become re-magnetized is through reaching freezing temperatures. Just as allowing the magnet to become too hot can disrupt the integrity of the magnet, placing it in cold temperatures instead will counteract these effects and restore the magnetic field.
The fact that several of the ways to restore a magnet's charge mirror the ways that charge can be lost shows just how adaptable and resilient magnets can be. If a magnet is not restored through the methods mentioned above, chances are it has been permanently demagnetized.
Magnets seem to be a source of endless curiosity, whether you're a Ph.D.-wielding physicist working on the Large Hadron Collider or a three-year-old toddler just discovering the magic of magnets on the refrigerator.
While it is true that there are a lot of aspects about magnets and the way they work that are non-conclusive as far as scientific research goes, there are a lot of things to take away from this lesson.
With so many things to learn about magnets, we are still searching for new ways to play with and understand them. |
Arcs and Inscribed Angles
Central angles are probably the angles most often associated with a circle, but by no means are they the only ones. Angles may be inscribed in the circumference of the circle or formed by intersecting chords and other lines.
- Inscribed angle: In a circle, this is an angle formed by two chords with the vertex on the circle.
- Intercepted arc: Corresponding to an angle, this is the portion of the circle that lies in the interior of the angle together with the endpoints of the arc.
In Figure 1, ∠ ABC is an inscribed angle and arc AC is its intercepted arc.
Figure 2 shows examples of angles that are not inscribed angles.
Refer to Figure 3 and the example that accompanies it.
Notice that m ∠3 is exactly half the measure of its intercepted arc, and m ∠4 is likewise half the measure of its intercepted arc. ∠3 and ∠4 are inscribed angles, and the arcs they cut off are their intercepted arcs, which leads to the following theorem.
Theorem 70: The measure of an inscribed angle in a circle equals half the measure of its intercepted arc.
The following two theorems directly follow from Theorem 70.
Theorem 71: If two inscribed angles of a circle intercept the same arc or arcs of equal measure, then the inscribed angles have equal measure.
Theorem 72: If an inscribed angle intercepts a semicircle, then its measure is 90°.
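Because the original figures are not reproduced here, the brief worked computation below (with arc measures invented purely for illustration, not taken from the figures) shows how Theorems 70 through 72 are typically applied.

```latex
% Hypothetical arc measures, chosen only for illustration.
\begin{align*}
\text{Theorem 70: } & m\angle = \tfrac{1}{2}\, m(\text{intercepted arc}) = \tfrac{1}{2}(80^\circ) = 40^\circ \\
\text{Theorem 71: } & \text{any other inscribed angle intercepting the same } 80^\circ \text{ arc also measures } 40^\circ \\
\text{Theorem 72: } & m\angle = \tfrac{1}{2}(180^\circ) = 90^\circ \text{ when the intercepted arc is a semicircle}
\end{align*}
```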
Example 1: Find m ∠ C in Figure 4 .
Example 2: Find m ∠ A and m ∠ B in Figure 5 .
Example 3: In Figure 6, QS is a diameter. Find m ∠ R. Because ∠ R is inscribed in a semicircle, m ∠ R = 90° (Theorem 72).
Example 4: In Figure 7 of circle O, the measure of the marked arc is 60° and m ∠1 = 25°.
Find each of the following.
- m ∠ CAD
- m ∠ BOC
- m ∠ ABC |
Plasma recombination is a process by which the positive ions of a plasma capture free (energetic) electrons, or combine with negative ions, to form new neutral atoms.
Recombination usually takes place in the whole volume of a plasma (volume recombination), although in some cases it is confined to a special region of it. Each kind of reaction is called a recombining mode, and the individual rates are strongly affected by the properties of the plasma, such as its energy (heat), the density of each species, and the pressure and temperature of the surrounding environment. An everyday example of rapid plasma recombination occurs when a fluorescent lamp is switched off. The low-density plasma in the lamp (which generates the light by bombardment of the fluorescent coating on the inside of the glass wall) recombines in a fraction of a second after the plasma-generating electric field is removed by switching off the electric power source.
Hydrogen recombination modes are of vital importance in the development of divertor regions for tokamak reactors. In fact, they will provide a good way of extracting the energy produced in the core of the plasma. At the present time, it is believed that the most likely plasma losses observed in the recombining region are due to two different modes: electron-ion recombination (EIR) and molecular activated recombination (MAR).
It’s picnic time! Time to bring the fun of grilling and outdoors to your literacy centers! This picnic literacy centers pack includes 9 literacy and writing activities.
Each student recording sheet is offered in color and black and white.
These are appropriate for first grade and second grade, depending on student skill level.
The packet includes:
Picnic ABC Order – Students will select six of the eighteen cards to put in ABC order and write them on their recording sheet. They will then select six more and record in order. They will then use the last six cards and record them in ABC order.
Short and Long Vowel Sounds – Students will sort cards into short and long vowel sounds piles and record on sheet.
Picnic Sentence Scramble – Students will unscramble sentence cards and write the sentences on the student recording sheet. They must also determine the correct capitalization and punctuation.
Fact or Opinion? – Students will read picnic-themed sentences, determine if they are fact or opinion, and circle the correct answer. They will then write one original sentence about picnics and determine if their sentence is a fact or opinion.
How to Plan a Picnic – Students will write six steps to planning a picnic.
Compare and Contrast – Students will use a Venn diagram to compare ants and butterflies.
How Many Words? – Students will write as many words as they can using the letters in the words “Grilling Hamburgers.” At the bottom of the sheet are optional letters to cut out to arrange as desired.
Can-Have-Are – Students will complete the Can-Have-Are chart for ants.
Memory Game – Picnic-themed pictures in a memory game. |
Teaching Preschool- and Kindergarten-Age Children to Hold Pencils Correctly So They Can Write More Easily
Teaching children to write can be a fun and rewarding experience, but sometimes gripping a pencil properly gets in a child's way of mastering the procedure. The challenge can be even more difficult if the person doing the teaching is right-handed and the child is left-handed, or vice versa. Therefore, when teaching children to hold their pencils properly, the first thing helpers must establish is whether the children are right-handed or left-handed. If the child favors the opposite hand of the person doing the teaching, the teacher must think in opposites while instructing proper pencil-holding procedures.
Show The Child How To Hold The Pencil By Gripping A Pencil Yourself
When you hold a pencil yourself, children can examine its position in your hand and copy your grip. This examination gives children a good idea of how and where they are supposed to place their own pencils between their own fingers. It may be extremely useful to children prone to learning by "sight" as opposed to "hearing." However, the demonstration is likely to help any child with the process.
Tell Children To Copy Your Pencil Holding Position
Once children have examined the pencil in your hand, have them pick up a pencil and imitate your pencil holding position. Sometimes, verbal pencil holding assistance does the trick, especially when children are used to holding crayons in different manners for coloring pictures and such. Other times, however, helpers may need to assist children with verbal instructions as well as assisting them physically by placing pencils between their fingers.
If the child is a preschooler, and in some cases even a kindergartener, bear in mind that their little fingers may be nimble; however, their dexterity may not be fully developed. For this reason, expect to see them readjust the pencil with their other hand quite a bit until they get the hang of picking up the pencil and adjusting it with the same hand.
More Pencil Holding Grip Strategies
When physically placing pencils for children, try to get them to concentrate on using their thumb to squeeze the pencil against their second and third fingers. Oftentimes, children will wrap all their fingers around the pencil. This type of grip prevents flexibility while writing, and thus helpers should encourage children to avoid it.
Sometimes persons teaching pencil holding tend to be rigid in their beliefs that all writers must hold pencils the same exact way. This is simply not true. The main thing is that children are able to hold their pencils in comfortable positions that allow them to write with flexibility at realistic speeds.
When teaching children to hold their pencils properly, commend them for their efforts as well as for any letters, numbers, shapes, or “whatever” they manage to write. Remember they are just beginning to write and will more than likely relish repeating the task when helpers view their efforts favorably. |
The Solar Energy Trainer introduces the fundamentals of a solar cell (photovoltaic cell) and the conversion of the sun's photons into electrical energy. This can be monitored with a built-in voltmeter and ammeter to measure the voltage and current produced. The Trainer allows the study of the characteristics and applications of solar energy and the charging of batteries using solar energy.
- Calculation of voltage and current of solar cells
- Calculation of voltage and current of solar cells in parallel
- Study of V – I curve and power curve of solar cells to find the maximum power point (MPP) and efficiency of a solar cell (a worked sketch follows this list)
- Calculation of solar cell efficiency
- Application of solar cells in domestic use to charge a battery operated lamp, fan and radio |
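As a companion to the V–I curve experiment above, here is a minimal Python sketch (not part of the trainer's documentation; the sample data, irradiance, and cell area are invented for illustration) that locates the maximum power point from measured voltage–current pairs and estimates cell efficiency as the maximum electrical power divided by the incident solar power (irradiance times cell area).

```python
# Hypothetical V-I measurements from a small solar cell (volts, amps).
measurements = [
    (0.0, 0.52), (0.1, 0.51), (0.2, 0.50), (0.3, 0.48),
    (0.4, 0.44), (0.45, 0.38), (0.5, 0.25), (0.55, 0.0),
]

# Find the maximum power point (MPP): the V-I pair with the largest product.
voltage_mpp, current_mpp = max(measurements, key=lambda vi: vi[0] * vi[1])
p_max = voltage_mpp * current_mpp  # watts

# Efficiency = electrical output / incident solar power.
irradiance = 1000.0   # W/m^2, assumed standard test condition
cell_area = 0.0025    # m^2 (a 5 cm x 5 cm cell), assumed
incident_power = irradiance * cell_area

efficiency = p_max / incident_power
print(f"MPP: {voltage_mpp} V x {current_mpp} A = {p_max:.3f} W")
print(f"Estimated efficiency: {efficiency:.1%}")
```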
Space weather, conditions in space caused by the Sun that can affect satellites and technology on Earth as well as human life and health. As modern civilization has become more dependent on continent-sized electric power distribution grids, global satellite communication and navigation systems, and military and civilian satellite imaging, it has become more susceptible to the effects of space weather.
Space weather phenomena
Earth is surrounded by a magnetic field that extends far out into space in a teardrop-shaped cavity called the magnetosphere. The magnetosphere is compressed on the dayside and stretched out into a long “magnetotail” on the nightside by interaction with the solar wind. The solar wind is a flux of charged particles that flows at supersonic velocity from the Sun’s outer atmosphere (the corona) and carries with it the solar magnetic field. The solar wind and solar magnetic field (named the interplanetary magnetic field [IMF] when it is observed away from the Sun) expand throughout the entire solar system, extending to more than three times the distance of Neptune’s orbit before the solar wind is slowed by the interstellar medium. The cavity that contains the solar wind and the IMF is called the heliosphere; it is analogous to Earth’s magnetosphere.
As can be seen from the description above, humanity lives within the region affected by the dynamic atmosphere of the Sun. The most visible manifestation of the interaction of the Sun’s outer atmosphere and Earth’s space magnetospheric environment is the aurora. The aurora borealis, or northern lights, in the Northern Hemisphere and the aurora australis, or southern lights, in the Southern Hemisphere are visible light emissions caused by the collision of charged particles (ions and electrons) from the solar wind with the upper atmosphere of Earth. Auroral emissions typically occur at altitudes of about 100 km (60 miles) and are often green, white, or reddish in colour depending on what species (atomic oxygen, molecular oxygen, or nitrogen, respectively) is primarily emitting light. The auroras appear in oval regions near the North and South Poles owing to the influence of Earth’s dipole magnetic field. Charged particles from space easily move along geomagnetic field lines and intercept the upper atmosphere at high latitudes (that is, toward the poles) because that is where the field lines originate. The energy responsible for accelerating charged particles into the polar regions comes from the interaction of the magnetized solar wind flowing by Earth’s magnetic field. This interaction can lead to large disturbances of Earth’s magnetosphere called geomagnetic storms, which are the main manifestation of severe space weather.
The amount of energy, mass, and momentum flowing from the Sun through the heliosphere and into Earth’s magnetosphere and ionosphere is variable over a number of timescales. Chief among these timescales is the 11-year solar cycle, defined by the waxing and waning of solar activity as seen in the number of sunspots. Within the solar cycle, solar storms such as flares and coronal mass ejections (CMEs) are most numerous within a several-year period known as the solar maximum. Between solar maxima there is a several-year period, called the solar minimum, when the Sun’s activity can be extremely low. The solar minimum that began in approximately 2007 and reached its lowest point in December 2008 was the deepest minimum in at least a century. The next solar maximum is expected to begin in 2013.
The primary physical mechanism responsible for much of this energy, mass, and momentum flow is magnetic reconnection, which can explosively convert magnetic energy into kinetic energy of the magnetospheric plasma and disconnect or break parcels of magnetic flux. On Earth’s dayside, magnetic reconnection takes place at the intersection of solar magnetic field lines with those of Earth’s magnetic field. In this process, solar plasma can enter Earth’s magnetosphere, where the plasma is accelerated and energized. On Earth’s nightside, magnetic reconnection happens in the magnetotail, where magnetic flux tubes originally disconnected on the dayside are reconnected. Electric currents are one way that this energy from magnetic reconnection is carried through the system. These currents can connect regions of the magnetosphere through Earth’s ionosphere. The electrical currents flowing in the ionosphere in turn induce voltages and currents in the ground and in long telephone or power transmission lines.
Shortly after the first telegraph wires were strung in the 19th century, geomagnetic storms began to show technological effects. The largest storm on record, which occurred on Sept. 2, 1859, was accompanied by auroras visible in the tropics; it also caused fires as the enhanced electric current flowing through telegraph wires ignited recording tape at telegraph stations. British astronomer Richard Carrington noted the coincidence (but did not claim a direct connection) between the auroras and a solar flare he had observed the day before, thus prefiguring the discipline of space weather research.
During the space age, which began with the launch of Sputnik in 1957, the effects of space weather have multiplied. Today many vital technological systems on the ground, in the air, and in space are susceptible to space weather.
Effects on satellites
There are two main space weather concerns for Earth-orbiting satellites: radiation exposure and atmospheric satellite drag. Radiation exposure is the interaction of charged particles and electromagnetic radiation with a spacecraft’s surfaces, instruments, and electronic components. Satellite drag can have a serious impact on the orbital lifetime of low-Earth-orbiting satellites.
Satellites in Earth orbit are exposed to significant amounts of high-energy electromagnetic radiation and charged particles that do not reach Earth’s surface on account of its protective atmosphere. The space environment around Earth is filled with energetic charged particles that are trapped in the Van Allen radiation belts. The spatial extent, the energy, and the amount of radiation in the Van Allen belts are controlled by space weather, with large increases in their size and amount of radiation occurring during large geomagnetic storms. Although satellites usually do not orbit directly in the Van Allen belts, these charged particles have a significant impact on the design of spacecraft and space instrumentation.
For example, high-energy electrons can penetrate spacecraft and deposit their charge in the dielectric (insulating) material of electronic circuit boards. If enough charge is built up, a discharge can break down the material, causing the electronic component to fail. This can have catastrophic consequences if the damaged electronic circuit controls a critical component of the spacecraft.
Atmospheric satellite drag
Though the uppermost layer of Earth’s atmosphere, the thermosphere, is extremely tenuous compared with the dense lower layer at the surface, it is not a perfect vacuum. Indeed, the density of the gas a few hundred kilometres above Earth’s surface is appreciable enough that over time it can lower the altitude of an orbiting satellite. Since the satellite’s velocity and the neutral gas density increase with decreasing altitude, the amount of drag quickly increases, causing a satellite to reenter Earth’s atmosphere and either burn up or crash to the surface. The density of the upper atmosphere at any given altitude varies with the amount of solar radiation it receives, and the amount of solar radiation in turn varies either day-to-day depending on solar activity or over the 11-year solar cycle. Between solar minimum and solar maximum, the temperature of the thermosphere roughly doubles. The upper atmosphere extends farther during solar maximum, and its density at any given altitude increases. In general, a satellite must have an altitude of at least 200 km (120 miles); otherwise, the high thermospheric density will prevent the satellite from completing more than a few orbits. Even the Hubble Space Telescope and the International Space Station (ISS), which orbit at altitudes of about 600 and 340 km (370 and 210 miles), respectively, would eventually reenter Earth’s atmosphere if they were not continuously reboosted to their original orbits.
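To give a feel for the numbers, the sketch below is a rough illustration (added here, not from the article) of the standard drag-acceleration formula a = ½ρv²CdA/m applied to a satellite in low Earth orbit. The density value, drag coefficient, cross-sectional area, and mass are all invented for illustration, and real thermospheric densities vary strongly with solar activity, as described above.

```python
import math

# Standard drag acceleration: a = 0.5 * rho * v^2 * Cd * A / m
def drag_acceleration(rho, velocity, drag_coeff, area, mass):
    """Return the deceleration (m/s^2) a satellite feels from atmospheric drag."""
    return 0.5 * rho * velocity**2 * drag_coeff * area / mass

# Illustrative (invented) values for a small satellite near 400 km altitude.
MU_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6     # m
altitude = 400e3      # m
v_orbit = math.sqrt(MU_EARTH / (R_EARTH + altitude))  # circular orbital speed

rho = 1e-12           # kg/m^3, a rough thermospheric density guess
a_drag = drag_acceleration(rho, v_orbit, drag_coeff=2.2, area=1.0, mass=100.0)

print(f"Orbital speed: {v_orbit / 1000:.2f} km/s")
print(f"Drag deceleration: {a_drag:.2e} m/s^2")
```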
Effects on manned spaceflight
One major hazard of manned planetary exploration is high-energy radiation, for the radiation that affects the electronic components of satellites can also damage living tissue. Radiation sickness, damage to DNA and cells, and even death are space weather concerns for astronauts who would make flights to the Moon or the multiyear journey to Mars. Solar energetic particles and cosmic rays are difficult to predict or protect against. Large solar storms, such as from flares and CMEs, can produce lethal radiation environments on the Moon or in interplanetary space. Shielding of spacecraft and surface laboratories on the Moon and Mars would be a critical component for any such human spaceflight effort. Even in low Earth orbit within the magnetosphere, astronauts on the ISS receive a dose of radiation equivalent to about 5–10 chest X-rays per day, which causes an increased risk of cancer.
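To put the chest X-ray comparison above into rough numbers, here is a small Python sketch (added here, not from the article). The assumption that a single chest X-ray delivers about 0.1 millisievert is a commonly quoted approximation, and the mission length is invented for illustration.

```python
# Rough, illustrative conversion of the "5-10 chest X-rays per day" figure.
MSV_PER_CHEST_XRAY = 0.1   # approximate effective dose of one chest X-ray, mSv

def mission_dose_msv(xrays_per_day: float, mission_days: int) -> float:
    """Estimate total effective dose (mSv) for a stay in low Earth orbit."""
    return xrays_per_day * MSV_PER_CHEST_XRAY * mission_days

six_months = 180
low = mission_dose_msv(5, six_months)    # ~90 mSv
high = mission_dose_msv(10, six_months)  # ~180 mSv
print(f"Six-month ISS dose estimate: {low:.0f}-{high:.0f} mSv")
```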
Effects on satellite communications and navigation
Communication from the ground to satellites is affected by space weather as a result of perturbations of the ionosphere, which can reflect, refract, or absorb radio waves. This includes radio signals from Global Positioning System (GPS) satellites. Space weather can change the density structure of the ionosphere by creating areas of enhanced density. This modification of the ionosphere makes GPS less accurate and can even lead to a complete loss of the signal because the ionosphere can act as a lens or a mirror to radio waves traveling through it. Because the ionosphere has a different refractive index from the layers above and below it, radio waves are “bent” (refracted) as they pass from one layer to another. Under certain conditions and broadcast frequencies, the radio waves can be absorbed or even completely reflected. Sharp and localized differences (or gradients) in the density of the ionosphere also contribute significantly to the effects of space weather on satellite communication and navigation. These gradients become most pronounced during geomagnetic storms.
Effects on Earth’s surface
The greatest potential damage caused by space weather in economic terms would be the destruction of infrastructure required for continent-sized power distribution systems. The electric currents driven by the coupling of the solar wind and the interplanetary magnetic field with the geomagnetic field—which during the 19th century flowed through telegraph wires—can now find their way into electric power transmission lines with potentially devastating consequences. The enhanced currents can damage or destroy electrical transformers, causing a cascade of power failures across a large portion of the electric grid. For example, a storm during the 1989 solar maximum caused a massive power outage in Canada when transformers failed in Quebec. It has been estimated that if a geomagnetic storm like that of 1859 hit today, a large fraction of the North American power grid could be disabled, with estimated recovery times of months to years and financial losses of hundreds of billions of dollars.
The U.S. government has developed a Space Weather Prediction Center (SWPC) as part of the National Oceanic and Atmospheric Administration. The SWPC is based in Boulder, Colo., and observes the Sun in real time from both ground-based observatories and satellites in order to predict geomagnetic storms. Satellites stationed at geosynchronous orbit and at the first Lagrangian point measure charged particles and the solar and interplanetary magnetic fields. Scientists can combine these observations with empirical models of Earth’s space environment and thus forecast space weather for the government, power companies, airlines, and satellite communication and navigation providers and users from around the world. |
Most basic Word training includes a brief introduction to using the Replace command, and it usually goes a little something like this:
"You've written a long report and referred to your boss, Kathy Jones, as Cathy Jones all the way through it. Instead of reading the entire report to locate and manually change occurrences of 'Cathy,' you can use the Replace command to quickly fix the error throughout the document."
While that's a great starting point for showing a class how handy the Replace command can be, simple text replacement is barely the tip of the search-and-replace iceberg. To make the tool truly useful to your students as they tackle day-to-day word processing chores, you might consider sharing some of these additional Replace command tricks:
- Replace a specific format throughout a document.
- Eliminate extra spaces after punctuation.
- Target and fix specific grammar problems.
Let's look at some simple exercises you can use to demonstrate these tricks to your Word students.
Replacing a format
Suppose you're working on a document that includes a few dozen underlined book titles, which you instead need to italicize.
- Choose Replace from the Edit menu. In the Find And Replace dialog box, click More to expand the Replace tab.
- With the insertion point in the Find What text box, click the Format button and choose Font from the drop-down list.
- In the Find Font dialog box, select Single from the Underline drop-down list and then click OK to return to the Find And Replace dialog box.
- Click in the Replace With text box, click Format, and choose Font again.
- Select None from the Underline drop-down list and choose Italic from the Font Style list box, then click OK. Word will display your specifications below the Find What and Replace With text boxes, as shown in Figure A.
- Click Replace All, and Word will substitute italics for the underline format throughout the document.
|Word will show your formatting specifications in the Find And Replace dialog box.|
Be sure your students know that Word retains the formatting specifications for the Find What and Replace With text boxes for the duration of the current Word session. If they need to perform a different replacement operation that doesn't involve those formats, they'll need to click in each text box and click the No Formatting button to remove the specifications.
Removing extra spaces after punctuation
This chore comes up a lot for users who work with documents created by two-space adherents—those who still insist on typing two spaces between sentences. To eliminate the extra spaces and ensure uniformity in the text, users can run the Replace command on the document. A regex-based alternative for plain text files is sketched after the steps below.
- Choose Replace from the Edit menu. (Clear the formatting left over from the preceding exercise, if necessary.)
- In the Find What text box, type a period and two spaces.
- In the Replace With text box, type a period and one space.
- Click Replace All.
- Repeat the process to remove extra spaces after question marks and exclamation points, substituting each character for the period in each replacement operation.
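For users who need the same cleanup outside Word (for example, in a plain text file), here is a minimal Python sketch using a regular expression. This is an alternative technique, not part of the article's Word-based steps, and the sample text is invented for illustration.

```python
import re

def collapse_double_spaces(text: str) -> str:
    """Replace two or more spaces after sentence-ending punctuation with one."""
    return re.sub(r"([.?!]) {2,}", r"\1 ", text)

sample = "This is one sentence.  Here is another!  And a third?  Done."
print(collapse_double_spaces(sample))
# -> "This is one sentence. Here is another! And a third? Done."
```

Unlike the Word steps, which handle each punctuation mark in a separate pass, the character class in the pattern covers periods, question marks, and exclamation points at once.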
Using Replace for document cleanup
The final example is a bit different from the first two. Instead of globally fixing a problem, it allows users to zero in on recurring errors and correct them on a case-by-case basis. If they know they tend to make a particular mistake when they’re in a hurry, they can use this trick to catch and fix the problem without laboriously rereading the entire document or running a full-blown grammar check.
We'll demonstrate this technique by searching for the use of "it's" where it should be "its," but you might encourage your students to come up with their own lists of the errors they commonly make.
- Choose Replace from the Edit menu.
- In the Find What text box, type it's.
- In the Replace With text box, type its.
- Click Find Next. Word will jump to the first occurrence of “it’s”, as shown in Figure B.
|We clicked Find Next to jump to the first instance of the term ”it’s” in this document.|
- If the word is used incorrectly (as in Figure B), simply click Replace to insert the possessive version (its) and jump to the next occurrence. If the word is used correctly, click Find Next to leave it as is and jump to the next “it’s.”
- Repeat the process to work your way through the document, finding and fixing the misused terms in the text.
Jody Gilbert has been writing and editing technical articles for the past 25 years. She was part of the team that launched TechRepublic and is now senior editor for Tech Pro Research. |
Hydroponics means growing plants in water and a nutrient solution instead of soil. There are several advantages to growing plants in this manner. The crop requires less space and no soil. There are also no soil diseases or insects to infect the crop. Plants require less water and no cultivation. Organic and biological control of disease and insects are less costly and easier because the system is enclosed. Best of all, there is no weeding.
Hydroponic growing is divided into two groups, water-based and medium-based. In a medium-based system, a growing medium--such as rockwool, perlite or vermiculite--is used at the root of the plant to help hold the nutrition. Media-based hydroponics is generally considered to be more expensive because of the cost of the media. However, media-based systems are capable of surviving short power outages because the media keeps the roots moist.
Water-based systems require a continuous flow of both water and nutrient solution, since there is no media to hold and store liquid. Water-based hydroponics where the nutrient liquid is collected back into the reservoir are known as recovery systems. If the nutrient liquid is not reused, it is a non-recovery system. Plants in water-based hydroponics are vulnerable during a power loss or equipment failure because no water can flow to the roots.
The nutrient solution is an important part of the hydroponic system, says the Virginia Cooperative Extension. Because the plants in a hydroponic system do not have access to the food provided by soil, they need to be fed through a nutrient solution. The Extension recommends home gardeners purchase a nutrient fertilizer that is specific for hydroponics and follow the dilution rate on the packaging. Test the pH of the solution often to be sure it is between 5 and 6.
Other Growth Requirements
Plants in a hydroponic system have other growth requirements. In addition to nutrients, the plants need oxygen. An air stone and air pump can oxygenate the liquid nutrient. Plants also need light to grow. The Virginia Cooperative Extension suggests that metal halide lamps, gro-lights, and fluorescent lights can provide adequate lighting if natural sunlight is not an option. Air circulation is also important, as it prevents disease.
Choosing a System
Choose the system that works best for the space available, the time you have to invest and what you want to grow. Media-based systems cost more money to maintain. Water-based systems require more time to maintain, although some of the work can be completed through timers. Systems that use a wick to move the nutrient feed are generally not suited to large plants or long rooted plants; instead use drip systems or flood-and-drain type systems. Any of the hydroponic systems are good for greens, oriental vegetables, herbs and edible flowers. Ebb-and-flow systems or drip systems work well for peppers and tomatoes. |
AMD virtualization (AMD-V) is a virtualization technology developed by Advanced Micro Devices. AMD-V technology takes some of the tasks that virtual machine managers perform through software emulation and simplifies those tasks through enhancements in the processor’s instruction set.
The enhanced parallel port (EPP) is an old, but still widely used, standard input/output (I/O) interface that connects peripheral devices, such as a printer or a scanner, to a PC. The four standard parallel ports are the parallel port (PS/2), standard parallel port (SPP), EPP and extended capabilities port (ECP). The EPP is quicker than older ports and can transmit more data while allowing channel direction switching. This port is appropriate for portable hard drives, data acquisition and network adapters. The EPP is used mainly for PCs that support eight-bit bidirectional communication at Industry Standard Architecture (ISA) bus speeds. EPP introduced advanced performance with backward SPP compatibility. The EPP is about 10 times faster than the older port modes.
A parallel port was first used in 1981 to provide a physical interface between a PC and a printer. The original parallel port was called the normal port or SPP, and it soon became a de facto standard for most PCs. By 1987, the PS/2 was introduced. This port was a lot faster and had bidirectional capabilities; the PS/2 could read data from a peripheral device to the host. The bidirectional EPP was developed in 1994 to provide a high-performance interface. This mode was implemented as part of the Institute of Electrical and Electronics Engineers (IEEE) 1284 standard. The bidirectional ECP was also introduced in 1994 by Microsoft and Hewlett Packard for use with printers and scanners. It features direct memory access (DMA), first in/first out (FIFO), data compression, and channel addressing.

The original standard parallel port (SPP) was unidirectional (one direction) and could transfer eight-bit data. The PS/2 parallel port introduced an eight-bit bidirectional data port that was two times faster. Both the SPP and PS/2 transferred data at a rate of 50 to 150 KBps. Each new parallel port design helped improve the performance and speed of data transfer.

Both the EPP and ECP support an eight-bit bidirectional port. Usually, EPP is used for newer models of printers and scanners, whereas ECP is used for non-printer peripherals, such as network adapters or disk drives. Although EPP and ECP are quite different, there are modern products that support both EPP and ECP collectively.
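As a rough back-of-the-envelope illustration of the speed difference described above, here is a small Python sketch (added here, not from the article) that estimates how long a file transfer would take at an SPP-class rate versus an EPP-class rate roughly ten times faster; the rates are taken loosely from the figures quoted in the text, and the file size is invented.

```python
def transfer_time_seconds(file_size_kb: float, rate_kbps: float) -> float:
    """Time to move a file at a given sustained rate (kilobytes per second)."""
    return file_size_kb / rate_kbps

file_size_kb = 1024          # a 1 MB file, chosen for illustration
spp_rate = 150.0             # KBps, upper end of the SPP/PS2 range in the text
epp_rate = spp_rate * 10     # the text says EPP is roughly 10 times faster

print(f"SPP: {transfer_time_seconds(file_size_kb, spp_rate):.1f} s")
print(f"EPP: {transfer_time_seconds(file_size_kb, epp_rate):.1f} s")
```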
tough, inured to extremes of heat and cold, content with meager rations, spending most of their waking hours in the saddle, hitting hard and suddenly, they could be defeated only by well-disciplined troops operating under first-rate commanders. The Germans, in spite of their bravery as individuals, could offer no effective resistance to the Huns, and the wedge of nomad invaders drove through the strongest Germanic peoples into the heart of Europe. Many of the Germans became subjects or tributaries of the Huns; those who escaped this fate milled around frantically looking for a place of safety. The most obvious refuge was behind the fortified lines of the Roman frontier, and tremendous pressure built up all along the border. From the Rhine delta to the Black Sea the Germans were on the move, and the Roman government could do nothing to stop them.
Since the Germans could not be stopped, the obvious move was to regularize the situation by admitting them as allies serving in the Roman army. This policy was followed with the Visigoths, the first group to cross the frontier. It was not entirely successful, since the Visigoths became annoyed at being treated as a subject people and repeatedly revolted, asking for more land, more pay, and higher offices for their leaders. They defeated a Roman army at Adrianople in 378; they pillaged the western Balkans and moved into Italy, where they sacked Rome in 410. Then they were persuaded to continue their migration to Spain, where they drove out another group of invaders and set up a Visigothic kingdom. In spite of these excesses, the bond between the Visigoths and the Roman government was never entirely broken. They served the Empire occasionally in wars with other Germanic peoples and one of their kings died, fighting for Rome, in a great battle against the Huns in 451.
Meanwhile, the push across the frontiers continued. The Vandals marched from central Germany, through Gaul and Spain, to North Africa. The Burgundians occupied the valley of the Rhone. A mercenary army in Italy set up a king of their own in 476, the |
According to the National Education Association, kindergarten is the bridge between early childhood care and elementary school. As a result, it is a very important year for students and requires some special knowledge from teachers.
Kindergarten teachers have a lot of the same responsibilities of other teachers but also have to take into account that many of their students will be experiencing school for the first time. They must teach students basic skills in reading and writing in addition to showing them how to behave in the classroom and play nicely with others.
Kindergarten teachers need to be able to assess where students are academically and emotionally in order to set goals for the class and individual students. With the exception of extracurricular activities such as physical education, art class, or lunch, all lessons are typically conducted in one classroom.
In order to teach, a bachelor’s degree is required in addition to a teaching certificate. Kindergarten teachers are usually certified to teach any grade between kindergarten and fifth or sixth grade. For teachers who have a strong desire to teach kindergarten, it may be wise to have practical experience or coursework in early childhood education.
A Day in the Life
- Morning: Kindergarten teachers will start the day early by preparing the day's lessons and getting the classroom ready for students. She may greet students and parents when they get dropped off and instruct students to put their things away upon entering the classroom. Some schools will offer students breakfast or a snack in the morning, and the teacher will be responsible for distributing that.
- Mid-Morning: By mid-morning, the teacher may have students learning while sitting on a rug, or she might take a more hands-on approach to some lessons and have them work in groups at tables.
- Lunch: A teacher assistant might take the children out for lunch and recess. This gives the teacher time to look at homework, clean the classroom, and prepare for the afternoon lessons.
- Afternoon: The teacher may have additional time to plan lessons while the students take a nap after recess. After nap time, instruction resumes. The teacher may have the option to put the teacher assistant in charge of part of the class so that students can work in small groups or individually and still get the attention they need.
- After Work: Kindergarten teachers, especially in the first few years, will likely have work to do after the students leave. This can include planning lessons, calling parents, meeting with school administrators, or checking homework.
A bachelor’s degree is the minimum education required for kindergarten teachers across the nation, but many states reward teachers who have a master’s degree with a higher salary. Public school teachers have to receive state certification that shows they are capable of teaching any elementary school grade. This is done usually through a combination of certification classes, standardized tests, and a practicum. Our state certification pages have detailed information about what each state requires for kindergarten teachers.
Areas of Specialization
Kindergarten teachers have the same options for specialization as any elementary school teacher. They can receive additional training to teach music, art, or physical education. They may also consider earning a master’s degree in special education in order to increase their salary potential and be able to effectively teach students with learning differences.
Previous and Next Steps
Before landing a job as a kindergarten teacher, many people worked as interns in schools and other educational organizations. Working as a substitute teacher often does not require a credential and is a good way to get experience while going to school for a degree or certificate. Administrators will also sometimes look to the substitute pool when trying to decide who to hire for a full-time position.
Kindergarten teachers can build on their experience over time and make their skills work in other positions. For instance, some schools have grade level chair positions, which means one kindergarten teacher would supervise the other kindergarten teachers. Another option is to spend a year as a teacher on special assignment. These teachers can focus on a specific area where the school might be struggling such as literacy, technology, or discipline, and mentor teachers on how to improve in that area. There are many options for kindergarten teachers to further their careers if they are willing to gain additional experience and education.
For average salary information for kindergarten teachers (and several other early childhood education-related positions), go to our Jobs page and select a state. |
What is Congressional redistricting?
Redistricting is the process of redrawing district boundaries when a
state has more representatives than districts.
When does redistricting occur?
Redistricting occurs every ten years, with the national census.
Why does the US House have to be redistricted?
The United States Constitution requires congressional seats to be reapportioned among the states after each decennial census. Because the Supreme Court in the 1960s interpreted the Constitution to require that each US House district have equal numbers of people, any state with more than one district must adjust its district lines.
Who is in charge of redistricting?
State legislators and governors re-draw the boundaries of the US House districts, although Congress has the right to regulate and modify the process.
Evidence for a changing climate continues to accumulate but I'm well aware that many people remain skeptical about the role of humans in this process. In this column I thought to review one of the several pieces of evidence for human involvement in the present period of global warming.
Over the past several hundred thousand years our planet has experienced climatic cycles oscillating between thousands of years of warmth with small polar icecaps and thousands of years of cold with large areas of the Earth's higher latitudes covered by thick glaciers and sea ice. These cycles have had a rough periodicity, with the glacial maximums occurring about every hundred thousand years. The record of these glacial maximums and the intervening warm epochs has been found to be preserved both in marine sediments and in ice cores from Greenland and Antarctica. The cause of these historical climate cycles is considered to be variations in the amount of solar radiation reaching the Earth, arising from well-known variations in the Earth's orbit around the sun and in the inclination of the Earth's axis with respect to the sun.
Specifically, the data from marine sediments and ice cores indicate that during the past several hundred millennia it has taken about 10,000 years for the planet to warm up during a warming spell and 50,000 or more years to cool back down, while the warm spell between these periods of change lasts around 30,000 years. In synchrony with these changes in global temperature, the level of carbon dioxide in the atmosphere rose and fell, and during these climatic cycles rising temperatures usually preceded the rising atmospheric carbon dioxide.
The present warm period began about 15,000 years ago and by the beginning of the last millennium the amount of carbon dioxide in the atmosphere had reached its usual historical levels for warm periods, which was 250-280 parts per million (ppm). Then, coincident with the start of the Industrial Age in the mid-Eighteenth Century, something quite unusual began to happen. The carbon dioxide level in the atmosphere started going up again, not only well before there was any evidence of a further rise in global temperature but, more strikingly, at rates many times the rate it had risen in the past. The concentration of atmospheric carbon dioxide at the present time is close to 390 ppm and it is rising at a rate of more than 2 ppm per year. Given that we have not experienced a prolonged period of massive volcanism during the last 200 years the most likely sources for much of this excess carbon dioxide are, of course, the burning of fossil fuels and the clearing of forests. Now, some 200 years after this unusually rapid rise in atmospheric carbon dioxide began, we've begun to experience a rise in the global mean surface temperature, fulfilling the decades-old predictions of climate scientists.
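As a back-of-the-envelope illustration of the numbers quoted above (roughly 390 ppm today, rising a little over 2 ppm per year, against a pre-industrial warm-period level of 250-280 ppm), here is a small sketch. The constant growth rate and the 450 ppm milestone are my own simplifying assumptions for the example, not figures from the column, and the real growth rate has not been constant.

```python
# Back-of-the-envelope extrapolation of the figures quoted in the column.
# Assumes a constant 2 ppm/year rise, which is a deliberate simplification.

current_ppm = 390        # approximate concentration cited above
rate_ppm_per_year = 2    # approximate annual rise cited above
preindustrial_ppm = 280  # upper end of the usual warm-period range (250-280 ppm)

target_ppm = 450         # illustrative milestone, chosen for this example only
excess_ppm = current_ppm - preindustrial_ppm
years_to_target = (target_ppm - current_ppm) / rate_ppm_per_year

print(f"Excess over pre-industrial levels: {excess_ppm} ppm")                     # -> 110 ppm
print(f"Years to reach {target_ppm} ppm at a constant 2 ppm/yr: {years_to_target:.0f}")  # -> 30
```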
Is this sufficient evidence that humans are playing a significant role in global warming? |
Monday, May 18, 2009
Spelling and Technology Integration
Those of us who use word study activities in conjunction with spelling units appreciate the effectiveness of this approach in helping students see and find patterns in their words. As I teach spelling units, I've enjoyed the games and activities suggested through the Spell It Write program, yet I've been challenged to integrate technology into the spelling program itself. Below are some of the technology tools that I've used to help enhance my spelling program.
The most VALUABLE resource I've used is Spellingcity.com. Each time I teach a spelling unit (I do one every OTHER week in sixth grade) I upload the list of 24 words onto spellingcity.com. This resource allows the students access to their lists at home and provides them with games, audio spelling quiz, handwriting worksheets and other review activities to use with their personalized spelling list.
A second tool that I've implemented to help with spelling word study is this customizable word template. After the students have finished their word sorts, I use this template on the Smartboard and allow students to come up and slide the word slips around to share with the class how they sorted their words. I also toggle between the template and a random name generator called fruitmachine to select the students who will come to the front and share! You can find this at www.classtools.net
One final resource that I've found helpful for spelling is this link that provides customizable PowerPoint templates. Spelling review games featuring suffixes, prefixes, root words, etc. can be created using these templates. http://jc-schools.net/tutorials/gameboard.htm
The Teacherscorner.net is also a valuable resource for creating customized worksheets. I use it to create a spelling scramble (or other puzzle board) based on the weekly spelling words.
Posted by Mary Howard |
BC: !!THE END!! Thank You for reading our Mixbook! We hope that you have understood and appreciated our efforts at showing the five themes of geography.
FC: By: Sung Ho Park Moses Lee Timmy Park
1: Location: A place of settlement, activity, or residence. We used a map of Korea because it shows longitude and latitude. Showing the longitude and latitude makes the map an example of absolute location. An example of relative location would be something like the Eiffel Tower, which is a famous landmark in Paris.
3: Movement: The first picture represents the movement of people such as the Pilgrims who were forced from their lands. In the bottom right picture, the spreading of religions such as Christianity show that ideas move as well. The final picture shows the spreading of goods. For example, the gun was spread throughout the world by the Europeans long ago.
5: Place: Place includes the physical and the human characteristics of geography. Wildlife, such as plants and animals, and the terrain are all included in the theme, Place. Place affects how people eat, how they dress, and what they build.
7: Interaction: The girl in this picture is currently at a resort in Mexico. As you can see, she has adapted her style of clothing to a cooler outfit, thus showing interaction between her and the environment. The other picture on the right shows a bridge that crosses a rocky chasm. The type of interaction in this picture is the modification of nature to suit man's needs. The final type of interaction is human dependence on the environment. Things such as the Nile River in the final picture are needed for water and transportation.
9: Region: There are two main kinds of region that are used in geography. The first is shown in the map of Great Britain. This is the formal type of region and it is defined as the same government or same language in a certain area. The second region is the Vernacular region. The picture of the United States shows the division of the country depending on the perception of people. |
Biological diversity provides a basis for the supply of a multitude of ecosystem services and an opportunity to utilise the various benefits of nature. Furthermore, biodiversity supports the functioning and restorability of ecosystems, for example the ability to withstand damage and reduce the threat of alien species.
Biodiversity relies on the ecological network
Safeguarding and promoting biodiversity is an integral part of forestry at Metsähallitus. The majority of the area in forestry use consists of multiple-use forests which are suited as habitats for most of our forest species.
Multiple-use forests include special sites of the ecological network with the purpose of maintaining valuable habitats characteristic of the area as well as their species, some of which are quite demanding. The sites of the ecological network are excluded from forest management activities, or they are treated carefully considering their special nature.
Valuable habitats are totally excluded from forest operations
The core of the ecological network is made up of protected areas, valuable habitats as well as occurrences of certain species.
Valuable habitats include herb-rich forests, stands of broadleaved deciduous species, old-growth forests, heathland forests with plenty of decaying wood, sunlit slopes on sandy esker ridges, wooded heritage biotopes, wooded cliffs and bluffs, fertile mires as well as the surroundings of small water bodies such as springs, brooks, rivulets and small ponds. Some of these valuable habitats are defined in the Forest Act and the Nature Conservation Act.
Ecological corridors and buffer zones enhance the spread of species
The core areas excluded from commercial forestry are linked by ecological corridors and stepping stones. Besides these, the ecological network includes various buffer zones such as environmentally valuable forests and biodiversity enhancement areas. Recreational and landscape sites also support the ecological network. The planning and treatment of special sites are tailored processes. The management of forests adjacent to protected areas is planned in co-operation with experts from Metsähallitus Natural Heritage Services.
Metsähallitus has many kinds of special sites established by its own decision. Site-specific planning is a way to safeguard their special environmental values. The ecological value of small protected areas or concentrations of valuable habitats may be enhanced by locating special sites in their vicinity.
Retention trees are important for species dependent on decaying wood
Many threatened forest species need decaying wood. Leaving retention trees is a way to secure a sufficient supply of deadwood also in future phases of stand development. Retention trees are also very significant to the landscape.
Ideal retention trees include exceptionally large and prominent individuals as well as cavity trees. Groups of retention trees should preferably consist of large-diameter broadleaves, especially larger aspens and goat willows. Large-diameter pines are also important as potential nest trees for large birds of prey.
Groups of retention trees are placed by ecological and landscape criteria. Regarding species dependent on decaying wood, larger concentrations of retention trees are more effective than single trees or groups of a few trees.
Active management of habitats
Metsähallitus promotes biodiversity in multiple-use forests by actively managing important habitats. Habitat management is generally practised on a small scale, and focused on key sites in terms of biodiversity. Active habitat management includes prescribed burning, burning of retention tree groups, restoration of mires as well as management of sun-exposed esker habitats and herb-rich forests. Metsähallitus is involved in the restoration of brooks and streams in joint projects with other actors.
The appendix is a small, finger-like structure attached to the large intestine in the lower right side of the abdomen. Sometimes, this structure becomes blocked, causing swelling. The appendix can then quickly become the habitat of a bacterial infection. This inflammation and infection of the appendix is referred to as appendicitis, and it is considered a medical emergency.
While anyone can develop appendicitis, the condition is more common in people between the ages of 10 and 30. Once appendicitis is detected and promptly treated, most patients recover without further complications or long-term consequences. However, if treatment is delayed, the appendix can rupture, causing infection to spread into the abdomen. A ruptured appendix is very serious, as it can result in a potentially fatal infection.
This site has hundreds of basic multiplication activities. Printables include multiplication games, quizzes, word problem worksheets, cut-and-glue activities, flashcards, math mystery pictures, and much, much more.
Decimal Addition & Subtraction. Find sums and differences for pairs of decimals on these worksheets. These practice pages have decimals in tenths, hundredths, and thousandths.
3-Digit Addition. These printable worksheets and games have addition problems with 3-digit addends. Includes math riddles, a magic digit game, math crossword puzzles, and column addition worksheets. (Approx. Level: 2nd grade, 3rd Grade).
We have multiplication sheets for timed tests or extra practice, as well as flashcards and games. Most resources on this page cover basic multiplication facts 0-10. If you are teaching all the way up to 12, you might want to jump to the Basic Multiplication 0-12 page. For 2-digit and 3-digit multiplication, head on over to the Multi-Digit Multiplication page. |
The Doomsday Clock was created in a publication called the Bulletin of the Atomic Scientists in 1947 and was intended as a stark graphical representation of how close the planet Earth is to nuclear annihilation. The minute hand shows the relative time remaining for life on Earth. This is measured in “minutes to midnight.” The minute hand is moved once per year.
At the start of the Cold War (1947), the clock was set at seven minutes to midnight. In 1991, just after the end of the Cold War, the clock showed 17 minutes to midnight.
This year the Bulletin of the Atomic Scientists issued a shocking announcement that The Doomsday Clock was moved forward to 2½ minutes to midnight, the closest to disaster the clock has been since 1953. (At that time, it was set at two minutes to midnight due to a U.S. decision to pursue the hydrogen bomb.)
[As part of the reason] for moving the Doomsday Clock forward to “two and a half minutes to midnight,” The Bulletin cited the North Korean situation.
North Korea has made great strides in short-range and intermediate-range missiles, and is working toward an intercontinental ballistic missile (ICBM), that could reach Los Angeles and much of the rest of the United States from their territory. North Korea also has a store of plutonium and highly-enriched uranium (HEU) that can be converted into nuclear weapons. It has made progress in the miniaturization and ruggedization of those weapons so they can be converted to warheads and placed on the missiles. The only remaining element of the nightmare scenario is intent. |
Vitamin D is found naturally in cod liver oil, swordfish, salmon, tuna, eggs and Swiss cheese. Many foods have been fortified with vitamin D, like cereal, yogurt, milk and margarine. Sun exposure helps many patients meet their vitamin D requirements as well. The recommended daily value of vitamin D is 400 IU for children 4 and over and adults. Many people supplement to increase their levels of this vitamin because of a diet lacking in vitamin D and less sun exposure than necessary. Rickets and osteomalacia are the most common diseases associated with vitamin D deficiency. Vitamin D deficiencies are usually the result of dietary inadequacy, impaired absorption and use, increased requirement, or increased excretion. A vitamin D deficiency can occur when usual intake is lower than recommended levels over time, exposure to sunlight is limited, the kidneys cannot convert 25(OH)D to its active form, or absorption of vitamin D from the digestive tract is inadequate. Vitamin D-deficient diets are associated with milk allergy, lactose intolerance, ovo-vegetarianism, and veganism.
Figure 9 shows an insulated coil whose terminals are connected to a sensitive galvanometer (G). There is a bar magnet, AB, close to the coil. When the bar magnet is suddenly moved towards the coil to position A′B′, there is a deflection in the galvanometer. This deflection in the galvanometer lasts as long as there is relative motion of the magnet with respect to the coil; that is, as long as the flux linking with the coil changes.
Figure 10 shows that when the magnet is withdrawn, the deflection of the galvanometer is in the direction opposite to the above case. This deflection exists so long as the bar magnet is in relative motion to the coil; that is, the flux linking with the coil changes.
Figure 9 Generation of Induced emf
Figure 10 Magnet Withdrawn
In both cases, the deflection is reduced to zero when the bar magnet becomes stationary. The flux linked with the coil increases as the bar magnet approaches the coil in the first case, while in the second case the flux linked with the coil decreases when the bar magnet is withdrawn.
It is clear from above that deflections in the two cases are in different directions. The deflection in the galvanometer indicates that there is an induced current produced in the coil. In the first case, the induced current flows through the coil in an anticlockwise direction as seen from the bar magnet. This indicates that the face of the coil is the N pole. To move the bar magnet towards the coil, a force from outside must be given to the bar magnet. When the magnet is withdrawn, the current flows in a clockwise direction, as seen from the bar magnet. This indicates that the face of the coil is the S pole. So once again force must be supplied to the bar magnet from outside to take it away from the coil. Here, the principle of conservation of energy is fully satisfied.
From the above results, Faraday proposed two laws known as Faraday’s first and second laws.
- First law: Whenever there is variation of magnetic flux linked with a coil, an emf is induced in it. Or an emf is induced in a conductor whenever it cuts the magnetic flux.
- Second law: The magnitude of this induced emf is the rate of change of flux linkage.
Let the initial flux be Φ1 and the final flux be Φ2 in time t. Let the turns of the coil be N. Therefore, the initial flux linkage is NΦ1 and the final flux linkage is NΦ2. The magnitude of the average induced emf is therefore

e = (NΦ2 − NΦ1) / t = N(Φ2 − Φ1) / t        …(1)

In differential form, we can put Equation (1) as

e = N (dΦ/dt)
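As a quick numerical illustration of Equation (1), here is a short sketch. The number of turns, flux values, and timing below are invented for the example and are not taken from Figures 9 and 10.

```python
# Average induced emf from Faraday's second law: e = N * (phi2 - phi1) / t
# Values below are illustrative only.

N = 100          # number of turns in the coil
phi1 = 0.002     # initial flux linking the coil, in webers
phi2 = 0.010     # final flux after the magnet is moved closer, in webers
t = 0.5          # time taken for the change, in seconds

e_avg = N * (phi2 - phi1) / t   # average induced emf in volts
print(f"Average induced emf: {e_avg:.2f} V")            # -> 1.60 V

# Withdrawing the magnet reverses the sign of the flux change, so the induced
# emf (and hence the galvanometer deflection) reverses direction, as described
# for Figure 10.
e_withdraw = N * (phi1 - phi2) / t
print(f"Induced emf on withdrawal: {e_withdraw:.2f} V")  # -> -1.60 V
```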
Lenz's law states that the induced current will flow in such a direction that it will oppose the cause that produces it. It is explained as follows:
The current through coil 1 is varied by the rheostat, as shown in Figure 11. An emf is induced in coil 2 due to the variation of flux in coil 1. The induced emf between P and Q will be such that the current will flow from P to Q via the resistance R so that it opposes the flux that is linked with it. In this case P will have a positive polarity and Q will have a negative polarity. The induced emf is given by e = −N (dΦ/dt), the negative sign expressing the opposition described by Lenz's law.
Figure 11 Example of Lenz’s Law |
The Chinese language is unfamiliar territory for many whose first language uses an alphabetic system such as English. Instead of combining letters to form words, Chinese uses strokes to form characters. The commonly used Chinese characters alone number around 3,000. Not only that, Chinese is also a tonal language with four different tones, commonly marked with accent marks, while an unmarked syllable represents the neutral tone. The accent marks are only used in PinYin to represent the pronunciation. The word shi, for instance, can have different meanings depending on its tone. The first tone shi can mean poetry, wet, or teacher. The second tone shi can mean ten, time, or true. The third tone shi can mean history, to start, or to cause. The fourth tone shi can mean yes, room, or matter. In essence, there are many similar sounds with different meanings. As a matter of fact, the 20th-century Chinese linguist Zhao Yuanren composed a 10-line classical Chinese poem using only the sound shi.
Next, it isn't always possible to guess the pronunciation of a character. The character for wood, for instance, is pronounced mu. The character for forest, which is composed of two "wood" characters, is pronounced lin. Although in this example the pronunciations are not related, the meanings of the characters are. On the contrary, when the pronunciations are related because of a shared root character, the meanings aren't necessarily related.
PinYin itself, although alphabetized, is not pronounced the same way as the alphabetic sounds. There are unfamiliar sounds, such as u with an umlaut (ü), which sounds like a blend of i and u. Like all things unfamiliar, this can result in uncertainty and anxiety. Thus, knowing the challenges learners face is the first step in devising effective learning strategies that directly affect their language achievement.
The Strategies Used to Learn Chinese Characters
Prof Ko-Yin Sung of Chinese Language Study at Utah State University conducted a study amongst non-heritage, non-Asian Chinese language learners and uncovered interesting results which could help future learners in forming a complete study plan. Her study revolves around the most often used Chinese character learning strategies and how those strategies affect the learners' ability in understanding and producing the sound and the writing of the Chinese characters.
The study finds that among the twenty most regularly used strategies, eight of them relate to when a character is first introduced to learners. These include:
1. Repeating the character several times aloud or silently.
2. Writing the character down.
3. Noting how the character is used in context.
4. Noting the tone and associating it with pinyin.
5. Observing the character and stroke order.
6. Visualising the character.
7. Learning the explanation of the character.
8. Associating the character with previously learned characters.
The next six strategies are used to increase learners' understanding of the newly introduced character.
9. Converting the character into the native language and finding an equivalent.
10. Looking in the textbook or dictionary.
11. Checking if the new character has been used elsewhere.
12. Trying to discover how the characters are used in conversation.
13. Using the character in sentences orally.
14. Asking how the character could be used in sentences.
However, the number of strategies diminishes beyond those two learning stages. There are only three strategies used in memorising newly learned characters.
15. Saying and writing the character at the same time.
16. Saying and picturing the character in the mind.
17. Given the sound, visualising the character shape and meaning.
And there is only one strategy applied in practising new characters.
18. Making sentences and writing them out.
And new characters are reviewed with just two strategies.
19. Writing the characters many times.
20. Reading over notes, example sentences and the textbook.
Of the twenty strategies mentioned above, four are found to be most significant in increasing learners' skills in speaking, listening, reading, and writing the new characters. The four strategies are:
Writing the characters down
Observing the stroke order
Making association with a similar character
Saying and writing the character repeatedly
A study by Stice in 1987 showed that students retained only 10% of what they learned from what they read, 26% from what they heard, and 30% from what they saw. When learning modes are combined, a significant improvement in learning retention is noted. Learning retention jumped to 50% when seeing and hearing were combined, rose to 70% when students said the materials they were learning, and reached its highest, at 90%, when students said the materials as they did something with them. Simply reading the characters is not enough. Learners must associate the sound with the characters, make a connection with the characters to make them memorable, and practice recalling the newly learned characters.
Studies show that recalling newly learned characters improves learning retention and reinforces learning. One way to practice this is by using an app such as The Intelligent Flashcards. This flashcard app is designed for the New Practical Chinese Reader textbooks, making it convenient to review characters based on chapters. Not only does it show stroke order animation, it is also accompanied by native speaker sound files, making this app more convenient than another app such as Anki. With Anki, although the user can add personal notes, the sound file is not available and must be imported through another app.
Another important learning strategy to incorporate is observing how the characters are used in context. This can be practiced by observing real-life conversations to supplement the textbook and audio conversations. It is interesting to note that the learners studied in the abovementioned research were unwilling to adopt the learning strategies recommended by their instructors, such as watching Chinese TV shows or listening to Chinese songs. There could be many reasons for this. The style of the shows or songs may not appeal to the learners.
Access to these programs is also not as convenient. And even when the shows can be accessed online, they are rarely subtitled in both Chinese and English, which would make them more useful to beginner learners in acquiring the language. Also, many of the most popular Chinese TV shows fall into the historical genre, which is a favourite among the Chinese, for instance The Empress of China. However, the language spoken in this type of TV show is much more complex than contemporary spoken Chinese.
Fly By Wire
Fly-by-wire is a system that replaces the conventional manual-mechanical flight controls of an aircraft with an electronic interface. A computer system is interposed between the pilots and the final actuators and control surfaces. The term "fly-by-wire" implies a purely electrically-signalled control system.
The movements of the flight controls (as commanded by the pilots) are converted to electronic signals, and these signals are transmitted by wires to the flight control computers, which determine how to move the actuators at each control surface.
Fly-by-wire aircraft incorporate a flight-envelope protection system into the flight control software (inside the flight control computer); it is used in all modern commercial fly-by-wire aircraft. The flight envelope protection system prevents the pilot of an aircraft from making control commands that would force the aircraft to exceed its structural and aerodynamic operating limits.
The flight data used by the flight computers to prevent dangerous actions and maneuvers, stabilize the plane, and control the plane are:
- Pitch, yaw, roll rate and linear acceleration
- Angle of attack and sideslip
- Airspeed, pressure, altitude and radio altimeter indications
- Sidestick-Yoke and pedal demands
- Other cabin commands such as landing gear condition, thrust lever position etc.
Fly By Wire – Basic Operation
- When the pilot moves the sidestick to command the flight control computer to make a certain action, such as pitching the aircraft up or rolling to one side, this demand is first transduced into an electrical signal in the cabin and sent to the flight control computer; the signal is sent through multiple wires (channels) to ensure that it reaches the computers.
- The flight control computer also receives data signals concerning the flight conditions and the servo-valve and actuator positions. The flight control computer then evaluates and calculates (in accordance with the position of the aircraft, airspeed, actuator positions, pressure, altitude, landing gear condition, and the signals coming from the pilot's sidestick/yoke and pedal demands) what control surface movements will cause the plane to perform that action, chooses the best action, and converts it to electrical signals.
- These signals are then sent to command the electronic controllers for each surface. The controllers at each surface receive these commands and then move the actuators attached to the control surfaces until it has moved to where the flight control computer commanded it to.
- The controllers measure the position of the flight control surface with potentiometers in the actuators and then send a signal back to the flight computer (usually a negative voltage) reporting the position of the actuator. When the actuator reaches the desired position the two signals (incoming and outgoing) cancel each other and the actuator stops moving (completing a feedback loop).
This process is repeated continuously as the aircraft is flying.
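A deliberately simplified sketch of this command-and-feedback loop is shown below. The gain, tolerance, and surface values are invented for illustration; real flight-control software is far more elaborate (redundant channels, voting between computers, rate and envelope limits).

```python
# Highly simplified sketch of the surface-position feedback loop described above.
# The flight control computer commands a surface position; the controller drives
# the actuator until the measured (fed-back) position cancels the command.

def drive_surface(commanded_deg, measured_deg, gain=0.4, tolerance=0.05, max_cycles=200):
    """Drive the actuator toward the commanded position, one control cycle at a time."""
    position = measured_deg
    cycles = 0
    # Keep correcting until the feedback signal effectively cancels the command.
    while abs(commanded_deg - position) > tolerance and cycles < max_cycles:
        error = commanded_deg - position   # incoming command vs. reported actuator position
        position += gain * error           # controller nudges the actuator toward the command
        cycles += 1
    return position, cycles

# Example: the computer commands a (hypothetical) elevator deflection of 5 degrees.
final_pos, cycles = drive_surface(commanded_deg=5.0, measured_deg=0.0)
print(f"Surface settled at {final_pos:.2f} deg after {cycles} control cycles")
```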
Fly By Wire – Advantages
- The flight-envelope protection software automatically helps to stabilize the aircraft and prevent unsafe actions.
- Turbulence suppression and consequent decrease of fatigue loads and increase of passenger comfort
- Drag reduction by an optimised trim setting
- Easier interfacing to auto-pilot and other automatic flight control systems
- Maintenance reduction
- Reduction of airlines' pilot training costs (flight handling becomes very similar across a whole aircraft family)
- Flight-control computers continuously "fly" the aircraft, so pilots' workloads can be reduced
- Fly-by-wire control systems also improve economy in flight because fly-by-wire aircraft can be lighter: they eliminate the need for many heavy mechanical flight-control mechanisms and cables, and, apart from the hydraulic systems, the remaining equipment takes up less space, is less complex, and is more reliable.
Differences between Airbus and Boeing in using Fly By Wire system
The Airbus A320 was the first commercial aircraft to incorporate full flight-envelope protection into its flight-control software. Airbus gives full authority to the FBW controls; the flight envelope protection cannot be overridden completely, although the crew can fly beyond flight envelope limits by selecting an alternate "control law". Boeing, in the 777, has taken a different approach by allowing the crew to override flight envelope limits using excessive force on the flight controls. Airbus used full-authority FBW controls first; Boeing followed with the 777 and later designs (so far the B777 and B787). Boeing uses a yoke on the B777 with fly-by-wire primary flight controls, while Airbus does the same with a sidestick. Both provide full envelope protection.
Newborn jaundice - also called icterus neonatorum (ancient Greek icteros = jaundice) - describes the yellowing of the skin and of the whites of the eyes (sclera) of newborns. This yellow coloration is caused by the accumulation of breakdown products of the red blood pigment (hemoglobin). The breakdown product responsible for this is called bilirubin.
Jaundice in the first days of life is usually a physiological, harmless process that occurs in around 60% of newborns.
It is an expression of the replacement of the fetal red blood pigment (hemoglobin) by the adult pigment of the newborn.
Newborn jaundice that lasts more than two weeks after birth is called icterus prolongatus.
The jaundice often reaches its full extent around the 5th day of life, after which it usually heals on its own and without consequences. Only rarely are bilirubin concentrations so high that threatening complications can occur (kernicterus or bilirubin encephalopathy).
Newborn jaundice can have a variety of causes, but first of all a distinction must be made between physiological, harmless jaundice and jaundice due to congenital or acquired metabolic disorders of bilirubin breakdown.
Physiological, harmless newborn jaundice is caused by an increased breakdown of the prenatal red blood pigment (fetal hemoglobin), which is replaced by adult hemoglobin after birth. Because the responsible enzymes in the liver are still immature and not fully active, the bilirubin cannot be broken down as quickly as it is produced and is deposited in the skin and the sclera.
Newborn jaundice due to disturbances in the bilirubin metabolism, or due to an excess amount of red blood pigment beyond the normal hemoglobin change after birth, can in turn have numerous causes. These include, for example, bruising that occurred during childbirth and needs to be broken down, biliary stasis due to congenital narrowing or obstruction of the bile duct, inflammation of the liver (hepatitis), or blood cell breakdown (hemolysis) because of an incompatibility between the child's and the mother's blood groups during pregnancy (Rh factor incompatibility, or morbus haemolyticus neonatorum).
Prolonged neonatal jaundice can also be a sign of congenital hypothyroidism or a neonatal infection.
Often - depending on the severity of the jaundice - there is only a visible yellowing of the skin and the sclera of the newborn, without any further symptoms. The yellow coloration itself is not felt by the child. This is usually the case with physiological, harmless neonatal jaundice.
However, if for various reasons massive amounts of bilirubin accumulate and cannot be broken down and excreted, the bilirubin can penetrate nerve cells in the brain and lead to cell death there (kernicterus). A wide variety of symptoms, mainly neurological, can then occur.
These include strikingly poor drinking and tiredness or apathy of the newborn, weakened newborn reflexes, high-pitched crying, cramping of the neck and back muscles (opisthotonus), and a downward gaze of the eyes when the eyelids are opened (sunset phenomenon).
Newborn jaundice occurs in over 50% of all newborns in the first few weeks of life. The yellowing of the skin at this age is often due to completely natural processes. The bilirubin level is a marker for the severity of newborn jaundice. Bilirubin is the yellow breakdown product of the red blood pigment hemoglobin. An increase in bilirubin above age-typical values must be further clarified and treated. Heavily increased bilirubin values can lead to serious damage in the newborn. The bilirubin determination can be done non-invasively through the skin: the degree of yellowness of the skin is determined via a light signal and compared with age-appropriate standard values. For a more precise assessment of increased values, the total bilirubin in the blood is usually determined.
In the first week of life, in the sense of normal (physiological) neonatal jaundice, the total bilirubin must not exceed 15 mg/dl. Everything above that is pathological, i.e. it has disease value. On the first day of life, the total bilirubin value must not exceed 7 mg/dl. If it does, one speaks of premature newborn jaundice (icterus praecox). In contrast, neonatal jaundice that persists beyond the usual period is called icterus prolongatus. To find the cause, in addition to the total bilirubin, a further breakdown in the blood into direct and indirect bilirubin must be made.
Appropriate therapy is initiated depending on the level of the values. For babies born at term, phototherapy is initiated from a value of more than 20 mg/dl. In premature babies, the indication for phototherapy is usually made earlier, as even lower values can lead to damage. A blood exchange transfusion must be initiated in babies born at term with a value above approximately 25 mg/dl.
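The thresholds quoted above for babies born at term can be summarised in a small decision sketch. This merely restates the numbers in this section, is not a clinical tool, and, as noted, premature babies are treated at lower values.

```python
# Sketch summarising the bilirubin thresholds quoted above for infants born at term.
# Purely illustrative -- not a clinical decision tool.

def assess_term_newborn(total_bilirubin_mg_dl, day_of_life):
    if day_of_life == 1 and total_bilirubin_mg_dl > 7:
        return "icterus praecox: pathological, needs further work-up"
    if total_bilirubin_mg_dl > 25:
        return "exchange transfusion indicated"
    if total_bilirubin_mg_dl > 20:
        return "phototherapy indicated"
    if day_of_life <= 7 and total_bilirubin_mg_dl <= 15:
        return "within the range of physiological newborn jaundice"
    return "elevated for age: further clarification needed"

print(assess_term_newborn(total_bilirubin_mg_dl=12, day_of_life=5))  # physiological range
print(assess_term_newborn(total_bilirubin_mg_dl=22, day_of_life=6))  # phototherapy indicated
```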
Read about this too: Kernicterus
Physiological, harmless newborn jaundice usually begins in the first days of life (approximately day 3 to 6), often reaches its peak around the 5th day of life, and then gradually resolves without consequences by about the 10th day of life.
However, if children are already born with neonatal jaundice, or if it occurs within the first 24-36 hours, one speaks of early jaundice (icterus praecox), which is usually based on a blood group incompatibility between mother and child (morbus haemolyticus neonatorum). If the mother has a different blood group characteristic (Rhesus factor) from the child, it can happen that the mother forms antibodies against the "foreign" blood cells of the child and these antibodies enter the child's blood system. This can lead to a destruction of the red blood cells in the child and an increased release of the red blood pigment. If the neonatal jaundice lasts longer than two weeks, contrary to the rule, it is referred to as prolonged jaundice (icterus prolongatus).
Consequences / late effects
A physiological, harmless neonatal jaundice of mild to moderate intensity usually heals on its own without consequences. So there are no (late) consequences.
However, if the bilirubin concentration in the blood exceeds a certain threshold value (icterus gravis = more than 20 mg/dl), there is a risk of bilirubin "crossing" into the brain and thus of kernicterus with destruction of nerve cells. Cell destruction occurs preferentially in the so-called basal ganglia. These are brain structures that are of great importance for the regulation of movement and for information and emotion processing.
If a newborn is threatened by kernicterus, adequate therapy must be initiated as soon as possible (usually from bilirubin concentrations of > 15 mg/dl) to prevent irreversible brain damage.
Otherwise there can be serious long-term consequences for the child, marked by mental and motor developmental delays, epileptic seizures, movement disorders (spasticity in the context of infantile cerebral palsy), and deafness.
Is Newborn Jaundice Contagious?
The causes of physiological neonatal jaundice are not due to infection, so there is no risk of infection. Pathological neonatal jaundice can in rare cases be triggered by infectious hepatitis; depending on the type of hepatitis, infection is then potentially possible.
Since physiological neonatal jaundice usually heals on its own after one to one and a half weeks without consequences, no therapy is actually necessary.
However, if the bilirubin concentration in the newborn's blood rises too high, suitable therapy is carried out, primarily to prevent the dreaded complication of kernicterus.
The two most common therapy options are phototherapy on the one hand and so-called exchange transfusion on the other.
In phototherapy, artificial light in the blue range (430-490 nm wavelength) is used to irradiate the newborn. As a result, the bilirubin is converted from its previously insoluble ("unconjugated") form into a water-soluble ("conjugated") form and can thus be excreted in the bile and urine. Phototherapy thus takes over the step that the immature enzymes of the child's liver cannot yet perform at full activity.
However, strict attention must be paid to adequate protection of the eyes from the radiation and adequate hydration during phototherapy, as the newborns lose fluids through increased sweating.
If the phototherapy does not produce a satisfactory result, an exchange transfusion can be tried as a further therapeutic means, especially in the context of jaundice due to blood group incompatibility between mother and child. This usually happens when the mother has a rhesus negative and the child has a rhesus positive blood group, so that the mother forms antibodies against the child's blood group characteristic, which then leads to the destruction of the child's red blood cells.
In exchange transfusions, blood is taken from the newborn via the umbilical vein and rhesus-negative blood is administered until all of the newborn's blood has been exchanged. This should prevent further breakdown of blood cells and an increase in bilirubin levels.
Homeopathy for newborn jaundice
The remedies used in homeopathic therapy or for the prevention of newborn jaundice include various substances: on the one hand, phosphorus can be given as the main remedy.
Furthermore, China, a homeopathic remedy made from cinchona bark that is often used in cases of blood group incompatibility, can be used, as well as Lycopodium (pollen of the club moss) and Aconite (monkshood).
Sensorineural Hearing Loss occurs when the sensory part of the ear (the inner ear and outer hair cells) or the neural part of the ear (nerve that runs from the ear up to the brain) lose their ability to function normally. This happens to many people as they age and from the “wear and tear” that we put on our hearing. It can also be congenital or occur from illness.
Conductive Hearing Loss occurs when there is something physically obstructing the ear canal or middle ear and sound cannot be effectively conducted through the canal and middle ear into the inner ear. Oftentimes conductive hearing loss is temporary, but there are some medical conditions where it is permanent. A few examples of conductive hearing loss are: having wax impacted in the outer ear canal, having fluid in the middle ear (from an ear infection), or having bony growths around or on the ossicles (middle ear bones).
Sudden Hearing Loss occurs all at once. There is no gradual loss of hearing; rather, people wake up in the morning and find that they cannot hear out of one of their ears at all. Some diseases of the ear present with this symptom, but not all causes of sudden hearing loss are known.
There are classes of drugs that can harm the hair cells in the inner ear and cause hearing loss. Certain mycin drugs, as well as chemotherapy drugs, can damage one’s hearing, especially in the high frequency range. Many doctors will do hearing testing to monitor their patients’ hearing throughout a cycle of chemotherapy.
Noise Induced Hearing Loss (NIHL) occurs when people work or play around loud noises for extended periods of time. Most often the damage occurs in the high frequencies, while the low frequencies remain within the normal range. Some examples of noisy jobs are: factory work, presses, machinery, farm equipment, tools, pressure guns/tools. Some examples of noisy hobbies are: hunting, shooting, boating, working with engines/cars, concerts/music.
A one-time blast or very loud noise can permanently damage one’s hearing.
If there is physical damage to the ear or skull, hearing loss can be permanent or temporary due to the damages inflicted to the structures in the ear or brain. |