With changing lifestyles and sedentary habits, the world is witnessing new diseases and disorders. Eating disorders are on the rise, with more and more people falling prey to them. But do eating disorders have a role to play in your mental health? Let's try to figure it out.

Know More About Eating Disorders

Eating disorders are a range of psychological disorders characterized by abnormal or disturbed eating habits. Put simply, they are illnesses in which people experience severe disturbances in their eating behaviors that relate directly to their thoughts and emotions. There are five types of eating disorders.

- Anorexia Nervosa: Anorexia is diagnosed when the patient weighs at least 15% less than the normal weight expected for their height. Limited food intake, fear of being "fat" and problems with body image are common symptoms.

- Bulimia Nervosa: Although they diet frequently and exercise rigorously, patients can be slightly underweight, normal weight, overweight or even obese. They binge eat frequently, consuming large amounts of food in a short time. After a binge, they may have stomach ache and a fear of gaining weight, so they may vomit up the food or take laxatives in order to get rid of it.

- Binge Eating Disorder: The patient may regularly eat too much food and feel a lack of control over their eating. The patient may eat quickly, eat more food than intended, eat when not hungry, and continue eating even after they are full. This is accompanied by a feeling of guilt, disgust or shame about the amount of food eaten.

- Rumination Disorder: This is characterized by repeatedly and persistently regurgitating food after eating, though not due to a medical disorder. Food is brought back up to the mouth without nausea or gagging, and the regurgitation may not be intentional.

- Avoidant/Restrictive Food Intake Disorder: This is characterized by failing to meet daily nutritional requirements. Patients may have no interest in eating or may avoid foods with particular sensory qualities. They are also concerned about the consequences of eating, such as a fear of choking. No fear of gaining weight is involved.

Now that we know about the various eating disorders, let's try to understand their effect on patients' behavior. Anyone can develop an eating disorder regardless of their age or gender. Eating disorders usually lead to stress, and people with eating disorders usually hide or deny that they are stressed, so symptoms may be hard to detect. But there are some common hallmarks that can be observed:

- Constant intrusive thoughts about food
- Negative body image and lack of confidence and self-esteem
- Constant thoughts about binging or excessive exercising, and regularly checking one's body image
- Inability to focus
- Inability to have meals with others
- Significant anxiety and physiological responses to beliefs about food or body image
- Nutritional instability

These symptoms give a clear indication that eating disorders are strongly linked to the mental state of the patient.

Eating Disorders and Mental Health

"If you don't eat, your brain won't function properly!" This is something parents often tell children who don't eat properly. And perhaps it's true: brain function is often impaired in people suffering from eating disorders. This is because we train our brains to think and react in a certain way.
In the case of teenagers, for example, the thought of food and exercise, or discussion of them, can act as triggers. They may often think that they are ugly and overweight, which leads to negative body image and extremely low self-esteem. Also, when the brain does not get enough food, it constantly thinks about what it needs, i.e. food. So there is a decrease in focus, in the ability to pay attention, and in the ability to think rationally. In this way, constant thoughts about food, weight and body image take hold.

In the case of children or even adults, the eating disorder might become a "best friend". This means that the person becomes protective of the disorder, and reacts with great resistance when parents try to help. Sometimes, parents may fail to understand this reaction. But eating disorders usually keep people from getting better by clouding their thinking. Understand that people with eating disorders can think quite clearly about other aspects of their life; they just can't rationalize their thoughts about food, weight, body image and calories. With proper medical assistance and unconditional support from family and friends, these disorders can be overcome. You can read more about eating disorder recovery and treatment here.
Deforestation is suspected to have contributed to the mysterious collapse of Mayan civilization more than 1,000 years ago. A new study shows that the forest-clearing also decimated carbon reservoirs in the tropical soils of the Yucatan peninsula region long after ancient cities were abandoned and the forests grew back. The findings, published in the journal Nature Geoscience, underscore how important soils and our treatment of them could be in determining future levels of greenhouse gases in the planet's atmosphere.

The Maya began farming around 4,000 years ago, and the spread of agriculture and building of cities eventually led to widespread deforestation and soil erosion, previous research has shown. What's most surprising in the new study is that the soils in the region haven't fully recovered as carbon sinks in over a millennium of reforestation, says McGill University geochemist Peter Douglas, lead author of the new paper.

Ecosystem 'fundamentally changed'

"When you go to this area today, much of it looks like dense, old-growth rainforest," says Douglas, an assistant professor of Earth and Planetary Sciences at McGill. "But when you look at soil carbon storage, it seems the ecosystem was fundamentally changed and never returned to its original state."

Soil is one of the largest storehouses of carbon on Earth, containing at least twice as much carbon as today's atmosphere. Yet scientists have very little understanding of how soil carbon reservoirs change on timescales longer than a decade or so. The new study, along with other recently published research, suggests that these reservoirs can change dramatically on timescales spanning centuries or even millennia.

To investigate these long-term effects, Douglas and his co-authors examined sediment cores extracted from the bottom of three lakes in the Maya Lowlands of southern Mexico and Guatemala. The researchers used measurements of radiocarbon, an isotope that decays with time, to determine the age of molecules called plant waxes, which are usually stored in soils for a long time because they become attached to minerals. They then compared the age of the wax molecules with that of plant fossils deposited with the sediments.

The team — which included scientists from Yale University, ETH Zurich, the University of Florida and the University of Wisconsin-Superior — found that once the ancient Maya began deforesting the landscape, the age difference between the fossils and the plant waxes went from being very large to very small. This implies that carbon was being stored in soils for much shorter periods of time.

The project stemmed from research that Douglas had done several years ago as a PhD student at Yale, using plant-wax molecules to trace past climate change affecting the ancient Maya. At the same time, work by other researchers was indicating that these molecules were a good tracer for changes in soil-carbon reservoirs. "Putting these things together, we realized there was an important data-set here relating ancient deforestation to changes in soil carbon reservoirs," Douglas explains.

Protecting old-growth tropical forests

"This offers another reason — adding to a long list — to protect the remaining areas of old-growth tropical forests in the world," Douglas says.
“It could also have implications for how we design things like carbon offsets, which often involve reforestation but don’t fully account for the long-term storage of carbon.” (Carbon offsets enable companies or individuals to offset their greenhouse-gas emissions by purchasing credits from environmental projects, such as tree-planting.)

The technique used by the researchers has been developed only recently. In the years ahead, “it would be great to analyze tropical forests in other regions of the world to see if the same patterns emerge — and to see if past human deforestation and agriculture had an impact on soil carbon reservoirs globally,” Douglas says. “I’m also very interested in applying this technique to permafrost regions in Canada to see what happened to carbon stored in permafrost during previous periods of climate change.”

Source: sciencedaily.com
According to Lassalle, wages cannot fall below subsistence level because without subsistence, laborers will be unable to work for long. However, competition among laborers for employment will drive wages down to this minimal level. This followed from Malthus' demographic theory, according to which the growth rate of population was an increasing function of wages, reaching zero at a unique positive value of the real wage rate, called the subsistence wage. Assuming the demand for labor to be a given monotonically decreasing function of the real wage rate, the theory then predicted that, in the long-run equilibrium of the system, labor supply (i.e. population) will be equated to the numbers demanded at the subsistence wage. The justification for this was that when wages are higher, the supply of labor will increase relative to demand, creating an excess supply and thus depressing market real wages; when wages are lower, labor supply will fall, increasing market real wages. This would create a dynamic convergence towards a subsistence-wage equilibrium with constant population.

As David Ricardo first noticed, this prediction would not come true as long as new investment or some other factor caused the demand for labor to increase at least as fast as population: in that case the equality between labor demanded and supplied could in fact be maintained with real wages higher than the subsistence level, and hence an increasing population. In most of his analysis, however, Ricardo kept Malthus' theory as a simplifying assumption. During the mid-1800s, when Lassalle articulated his theory, wages for both manufacturing laborers and agricultural workers were in large part quite close to subsistence level. Furthermore, not only did Ricardo believe that the market price of labor could long exceed the subsistence or natural wage, he also claimed that the natural wage was not what was needed to physically sustain the laborer, but depended on "habits and customs".

Socialist critics of Lassalle and of the alleged Iron Law of Wages, such as Karl Marx, argued that although there was a tendency for wages to fall to subsistence levels, there were also tendencies which worked in opposing directions. Marx criticized the Malthusian basis for the Iron Law of Wages. According to Malthus, humanity is largely destined to live in poverty because an increase in productive capacity results in an increase in population. Marx also criticized Lassalle for misunderstanding David Ricardo. Ludwig von Mises argues that if one adopts this reasoning in order to demonstrate that in the long run no rise in the average wage rate above the minimum is possible, one must also imply that no fall in the average rate can occur.
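The long-run mechanism described above can be made concrete with a toy simulation. This is a minimal sketch under invented assumptions: the inverse demand curve and the 5% population-adjustment speed are illustrative, not drawn from Lassalle, Malthus, or Ricardo.

```python
# Toy Malthusian wage dynamic: labor demand falls with population, and
# population grows when the market wage exceeds subsistence and shrinks
# when it falls short. All functional forms are illustrative assumptions.

W_SUB = 1.0  # subsistence wage (normalized)

def market_wage(population):
    """Inverse labor-demand curve: more workers bid the real wage down."""
    return 3.0 / population

def simulate(population=1.0, years=61):
    for year in range(years):
        w = market_wage(population)
        # Malthusian response: the growth rate is increasing in the wage,
        # and zero exactly at the subsistence wage.
        population *= 1 + 0.05 * (w - W_SUB)
        if year % 10 == 0:
            print(f"year {year:3d}: population={population:6.3f}  wage={w:5.3f}")

simulate()
```

Whatever the starting point, the simulated wage converges to the subsistence level (the "iron law"), and the convergence breaks exactly as Ricardo noted: if the demand curve shifts outward at least as fast as population grows (say, the numerator 3.0 rises each year through new investment), the wage stays above the subsistence level indefinitely.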
Secondary Teaching Pack

Download this comprehensive pack of seven curriculum-linked lesson plans full of exciting and innovative ways to teach human rights to children aged 11-16. The pack contains all the resources you need to make a Human Rights Day, or just one lesson, engaging and memorable. This piece of classroom origami will help young people explore human rights themes in fiction with questions designed to promote discussion and critical thinking.

New Online Course

Amnesty International's new free online course launches on 13th April and will explore the Universal Declaration of Human Rights and how it empowers you to know, claim and defend rights. Students will also learn about inequality and how change happens. Amnesty has fifteen human rights courses online, including an introduction to Amnesty, digital security and human rights, and the death penalty, that can be used for older students. Please check each course to make sure it's appropriate for your students.

Human Rights and Fiction

Use these freely available books and resources from the CILIP Carnegie and Kate Greenaway shortlists to explore fiction and human rights. Find teaching resources and free audio versions at the bottom of the page, and use our story explorer to enrich reading.
Biological soil crusts are found throughout the world and play important roles in the ecosystems in which they occur. In arid regions, these living soil crusts are dominated by cyanobacteria and also include soil lichens, mosses, green algae, microfungi and bacteria. In the high deserts of the Colorado Plateau (which includes parts of Utah, Arizona, Colorado and New Mexico), these knobby black crusts are extraordinarily well-developed, and may represent 70 to 80 percent of the living ground cover.

What Are Cyanobacteria?

Cyanobacteria, previously called blue-green algae, are some of the oldest known life forms. It is thought that these organisms were among the first colonizers of Earth's early land masses, and played an integral role in the formation and stabilization of early soils. The earliest cyanobacteria fossils found, called stromatolites, date back more than 3.5 billion years. Extremely thick mats of these organisms converted Earth's original carbon-dioxide-rich atmosphere into one rich in oxygen and capable of sustaining life.

Unfortunately, many human activities are incompatible with the presence and well-being of biological soil crusts. The fibers that confer such tensile strength to these crusts are no match for the compressional stress placed on them by footprints or machinery, especially when the crusts are dry and brittle. Air pollutants, both from urban areas and coal-fired power plants, also harm these crusts. Tracks in continuous strips, such as those produced by vehicles or bicycles, are especially damaging, creating areas that are highly vulnerable to wind and water erosion. Rainfall carries away loose material, often creating channels along these tracks, especially when they occur on slopes. Wind not only blows pieces of the pulverized crust away, thereby preventing reattachment to disturbed areas, but also disturbs the underlying loose soil, often covering nearby crusts. Since crustal organisms need light to photosynthesize, burial can mean death. When large sandy areas are impacted during dry periods, previously stable areas can become a series of shifting sand dunes in just a few years.

Impacted areas may never fully recover. Under the best circumstances, a thin veneer of biological soil crust may return in five to seven years. Damage done to the sheath material, and the accompanying loss of soil nutrients, is repaired slowly during up to 50 years of cyanobacterial growth. Lichens and mosses may take even longer to recover.

Don't Bust the Crust
Historic Loire Valley

The Loire Valley has been the site of both aristocratic grandeur and thousands of years of territorial conflict. Now a UNESCO World Heritage Site with the 15th century Château de Chenonceau as its most famous landmark, the Loire Valley's beauty endures through its historic towns, architectural monuments, and amazing landscapes. Odyssey Traveller organises walking tours to the Loire Valley. Let's look back at the history of this chateaux-studded French countryside.

Romans Rule the Valley

The Loire Valley is an important location for trade and transportation as it sits on the banks of the Loire, France's longest river. The valley has been inhabited since the Iron Age by the Cenomani, a Celtic tribe of the Cisalpine Gauls. Their descendants and the Druids unsuccessfully repelled Julius Caesar and his troops marching into the valley, leading to the Romans taking over the region in 52 BC. The towns of Angers, Le Mans, Orléans, and Tours were formed under the reign of Rome's first emperor, Augustus. These settlements were modelled after Roman cities, complete with baths, forums, and theatres. The Romans also began planting vineyards, a practice continued by the monastic orders and one which led to the birth of world-class French wine from the region. Christianity was introduced to the area, and by the 4th century, the entire region had converted to the new religion.

The French Age

Roman armies thwarted the invading troops of Attila the Hun in 451, but they themselves were usurped by Clovis, the pagan King of the Franks who later converted to Christianity. Seen as the founder of France, a derivation of his name, Louis, became the principal name of France's kings. As France was still a decentralised state, the power of Loire's local nobility rivalled the power of the French throne. The church itself had a more cohesive power than the throne, and the Loire nobles often turned to the church instead of the crown to mediate their disputes. Charlemagne held territory in the valley, and his sons inherited his land upon his death in 814 and established the dukedoms of Anjou and Blois. Henry Plantagenet, count of Anjou and duke of Normandy and Aquitaine, led the invasion of England and became King Henry II in 1154, worsening the territorial conflict as England claimed ownership not only of the valley, but of the French Crown itself.

Hundred Years' War

The Hundred Years' War was a series of conflicts waged from 1337 to 1453 by the House of Plantagenet, rulers of the Kingdom of England, against the French House of Valois. There were two points of conflict: one, the duchy of Guyenne (or Aquitaine) belonged to the kings of England but remained a fiefdom of the French crown, and the English kings wanted exclusive ownership; two, as the closest relatives of the last direct king of the House of Capet (Charles IV, who died in 1328), the kings of England from 1337 claimed the crown of France. Charles IV had a daughter, but as females were denied succession to the French throne, the House of Capet ended. England's Edward III, son of Charles IV's sister Isabella, was Charles's closest male relative by blood, but the French nobility ruled that Isabella, who herself did not possess the right to inherit, could not transmit this right to her son. Charles IV was succeeded by his cousin Philip, who belonged to the House of Valois, which became embroiled in the long war with Edward III and the succeeding English kings.
The Loire region became the unfortunate focus of the Hundred Years' War when the English besieged Orléans. One of the heroes that emerged from the battle was the young Joan of Arc, whose military campaigns led to the lifting of the siege of Orléans in 1429 and the coronation of the French king Charles VII. In 1431, Joan of Arc was captured by the English-allied Burgundian troops and was burned at the stake, dying at the age of 19. Joan's victory was a major turning point in the war: it boosted French morale and led to French troops recapturing territory from the English. Travellers can visit Maison de Jeanne d'Arc, a 1960s reconstruction of the 15th century house Joan of Arc stayed in during the siege of Orléans. The Maison de Jeanne d'Arc houses exhibits and a research centre dedicated to her life, with a 15-minute film (in French or English) about her origins and impact as its centrepiece.

French Renaissance and the Religious Wars

With the Loire Valley firmly back in French hands, Francois I, his successors, and other French nobility began to build magnificent chateaux in the valley. Francois I resided in the Château de Chambord, now a major tourist attraction. The 15th century also saw the beginning of the French Renaissance, as France invaded Italy and came into contact with the region's art and architecture. In 1516, Francois I invited the great Italian painter and inventor Leonardo da Vinci to the Loire Valley, providing him with the Château du Clos Lucé, then called Château de Cloux. Da Vinci arrived at the valley with his paintings, including the timeless Mona Lisa, which now hangs in the Louvre.

The beauty and developments brought by the French Renaissance came hand-in-hand with the violent religious wars between the Protestants and Catholics. In the 16th century, more and more French people converted to Protestantism and joined the Reformed Church of France. Members of the church were called Huguenots and were at first treated with tolerance. It didn't take long for this tolerance to turn to hostility. In 1572, thousands of Huguenots were murdered in Paris by Catholics in what became known as the St. Bartholomew's Day massacre. A peace compromise was reached in 1576, but this ended in 1584 when Huguenot leader Henry of Navarre became heir to the French throne. This led to the War of the Three Henrys, fought between the Protestant Henry of Navarre, the moderate King Henry III, and the ultra-Roman Catholic Henry I, Duke of Guise. The war ended when the dying Henry III acknowledged Henry of Navarre as his heir. Catholic forces, backed by Spain, continued to oppose Henry of Navarre's ascension until he converted to Roman Catholicism. Crowned Henry IV, France's Huguenot-turned-Catholic king promulgated the Edict of Nantes in 1598, which guaranteed religious liberties to Protestants and effectively ended the religious wars.

17th Century and Modern-Day Loire Valley

Loire's economy accelerated during the 17th century with growth in agriculture and textile production. But the weakened French monarchy shifted its political focus from the Loire to Paris, leading to the decline of the region. In 1789, the French Revolution overthrew the monarchy, ended feudalism, and established a republic. War entered the Loire region once again in the 20th century, when the region was occupied by the Germans in 1940. The region did not become prosperous again until after World War II.
Many of the chateaux were destroyed during the Reign of Terror and much of the region was bombed during the Second World War, but restoration in the 1960s and the public opening of the chateaux led to the growth of the Loire region's tourism industry. Travellers can now go on escorted tours of the 300 chateaux, 22 of which are in the Grands Sites du Val de Loire (Major Sites of the Loire Valley) collective. The Loire Valley, between Sully-sur-Loire and Chalonnes, was listed by UNESCO as a World Heritage Site in 2000.

Odyssey Traveller's Loire Valley walking tour includes a tour of the famous Château de Chenonceau, also known as Château des Dames (Castle of the Ladies), as the castle was overseen by powerful women throughout its history. This charming castle is located literally on the River Cher, as it was constructed over a bridge. Thomas Bohier, Chamberlain to the king of France (Charles VIII), purchased the property in 1512 and built a Renaissance-style castle on it. The work was overseen by his wife, Katherine Briçonnet. In 1535, the property was seized by the Crown due to unpaid debts, and in 1547 it was offered by Henry II, then married to Catherine de Medici, as a gift to his mistress, Diane de Poitiers. After Henry's death, Catherine forced Diane out to make Château de Chenonceau her residence. After Catherine's death, ownership of the castle passed to other families. In the 18th century, under the ownership of the Dupin family, it became a venue for artists and intellectuals to gather and talk, hosting the likes of Voltaire, Rousseau, Montesquieu and Buffon. The castle was purchased in 1913 by the Menier family, who own it to this day. In the 20th century, it was used as a hospital in World War I and served as an escape route in World War II, as it sits on the border between the Nazi-occupied territory north of the River Cher and the free territory to the south. Travellers can now visit the chateau and roam its gardens.

Other places of interest included in the tour are the Château d'Ussé, originally a stronghold from the Middle Ages transformed into an elegant residential palace, and the 17th century Chateau Gaudrelle, where travellers can view the chateau's vast vineyards and sample their fine Vouvray wines.

If you'd like to learn more about the history of the Loire Valley and see firsthand the merging of medieval architecture with modern developments in the region, sign up for Odyssey Traveller's 18-day walking tour of the Loire Valley, especially designed for the active senior. The tour is designed so that participants can do all of the proposed walks to visit iconic landmarks, or opt out on days when they'd rather stay in or explore places near their accommodation. We hope to see you there!

About Odyssey Traveller

We specialise in educational small group tours for seniors, typically groups of between six and 12 people from Australia, New Zealand, the USA, Canada and Britain. Our maximum number of people on a tour is 18 mature-aged travellers. Typically, our clients begin travelling with us from their mid-50s onward. But be prepared to meet fellow travellers in their 80s and beyond! Both couples and solo travellers are very welcome on our tours. We have some 150 tours and 300 scheduled departures each year. Odyssey has been offering this style of adventure and educational programs since 1983. Odyssey Traveller is committed to charitable activities that support the environment and cultural development of Australian and New Zealand communities.
Odyssey Traveller scholarship for Australian and New Zealand university students

We are also pleased to announce that since 2012, Odyssey has been awarding $10,000 Equity & Merit Cash Scholarships each year. We award scholarships on the basis of academic performance and demonstrated financial need, and we award at least one scholarship per year. We are supported through our educational travel programs, and your participation helps Odyssey achieve its goals. Students can apply for the scholarship by clicking on this link to find out more details.

Join our loyalty program when you join an international small group tour. Every international small group tour taken typically contributes to your membership level in our Loyalty Program for regular travellers. Membership of the alumni program starts when you take your first international small group tour with Odyssey Traveller; discounts in tour pricing for direct bookings accrue from your third tour with Odyssey Traveller. To see the discounts and benefits of being a Bronze, Silver, Gold, or Diamond alumni member with us, please see this page. For more information on Odyssey Traveller and our educational small group tours, visit and explore our website, and remember to visit these pages in particular:
The World's Largest Earwig

The largest earwig species ever recorded was the Saint Helena earwig, which could grow up to 3.3 inches in length. The species had a long, dark brown-black colored body with reddish colored legs. It had six legs located on the front portion of its body and was characterized by a large, forked tail, often referred to as a pincher. The Saint Helena earwig, also known as the giant earwig or the Saint Helena striped earwig, could be found on the island of Saint Helena in certain forests, plains, and near seabird colonies on rocky outcroppings. Researchers identified three specific areas that this species inhabited: the Prosperous Bay plain, the Horse Point plain, and the dry areas of the eastern region of the island. Informally, the Saint Helena giant earwig is sometimes called the "dodo of the dermapterans," a name that refers to the order of insects to which the species belonged.

Earwigs are an interesting insect species due to their social behavior. Unlike most insects, which are often solitary creatures that care for and provide for themselves, earwigs exhibit familial recognition. This behavior is most noteworthy in earwig mothers, who care for their young in a number of ways, including keeping and protecting the nest of eggs, cleaning the nest and eggs, assisting baby earwigs during the hatching process, feeding young earwigs, and sleeping with baby earwigs in a communal nest. Earwigs typically construct their shelters underground in long and deep tunnels. Researchers report that most earwig species only leave these underground shelters after long periods of rain, and can be spotted on the ground at night.

Discovery of the World's Largest Earwig

A Danish entomologist was the first scientist to collect this species, in 1798, on Saint Helena, a tropical island in the Atlantic Ocean. Despite its record-breaking size, the Saint Helena earwig did not receive attention again until 1913, and then in 1962, when it drew the notice of two ornithologists. The Saint Helena earwig did not receive its scientific name, L. herculeana, until 1965, when it was discovered that the specimen had previously been confused with the L. loveridgei species. Some researchers speculate that this species was largely ignored due to a broad disinterest in earwig species, its confusion with another earwig species, and because it was an endemic species that could only be found on the island of Saint Helena. An increasing interest in the areas of nature, biodiversity, zoology, and environmental conservation led to a slightly increased interest in this species. Beginning in the 1960s, more researchers began searching for the Saint Helena earwig; however, these efforts were largely unsuccessful. The last sighting of the species was recorded in 1967. In an attempt to publicize its conservation status, the local government designed and released a collectible stamp with an image of the giant earwig in 1982. Just six years later, the London Zoo financed an exploration project in what was one of the final attempts at obtaining a live specimen. Subsequent searches were conducted in 1993 and 2003.

Factors Leading to Extinction of the World's Largest Earwig

In November of 2014, the International Union for the Conservation of Nature (IUCN) categorized the Saint Helena giant earwig on its Red List as officially extinct. While the organization acknowledges that the insect species may still exist in a very remote location on the island, it stated that all scientific evidence indicates that the insect is extinct.
Experts in the field of entomology believe that two factors ultimately led to the extinction of the world's largest earwig: invasive species and habitat destruction. An invasive species is one introduced to an ecosystem where it is not native. Many invasive species are introduced to ecosystems by humans, either intentionally or unintentionally, while others independently migrate to and colonize a new home. Scientists believe that the Saint Helena giant earwig was forced to compete for survival against several invasive species, including spiders, centipedes, mice, and rats, all of which are believed to have relied on the giant earwig as a dietary source. In particular, the Scolopendra morsitans centipede was likely the biggest challenge to the giant earwig, providing competition for food and habitat.

Another common factor in the endangerment and extinction of a vast number of animal species is habitat destruction. Habitat destruction occurs when external forces render an ecosystem unfit for the life it once sustained. It may be caused by natural occurrences, like flooding or storms, or by human activity, like deforestation and agriculture. The Saint Helena giant earwig was threatened by two specific instances of habitat destruction, both caused by humans. The first was the deforestation of the gumwood forests it was known to inhabit. These forests were destroyed to make way for agricultural endeavors and to clear space for sprawling urbanization. Additionally, the construction industry harvested rocks from coastal areas of the island in order to keep up with increasing demands in development. These coastal rocks housed colonies of both seabirds and the Saint Helena giant earwig.

Lack of Attention Surrounding Its Extinction

Most people can cite at least a few animal species that are either endangered or already extinct, the vast majority of which will be charismatic species, meaning they are easily recognizable and often considered attractive to look at. Some examples of charismatic endangered species include pandas, tigers, and elephants. The general public is less likely to recognize insects or consider their potential extinction. Given this lack of attention to insect species, the extinction of the Saint Helena giant earwig was covered in only a few media reports, and many individuals are still unaware of its conservation status. Critics claim that conservation groups also tend to ignore the plight of insects, focusing instead on bird and mammal species. In fact, the IUCN has recorded the conservation statuses of approximately 100% of identified mammal and bird species in the world but less than 1% of global insect species. However, insects are important in maintaining the balance within ecosystems.
What is a cerebral aneurysm?

A cerebral aneurysm (also called an intracranial aneurysm or brain aneurysm) is a bulging, weakened area in the wall of an artery in the brain, resulting in an abnormal widening or ballooning. Because there is a weakened spot in the artery wall, there is a risk of rupture (bursting) of the aneurysm. A cerebral aneurysm generally occurs in an artery located in the front part of the brain, which supplies oxygen-rich blood to the brain tissue. A normal artery wall is made up of three layers. The aneurysm wall is thin and weak because of an abnormal loss or absence of the muscular layer of the artery wall, leaving only two layers.

The most common type of cerebral aneurysm is called a saccular, or berry, aneurysm, accounting for 90 percent of cerebral aneurysms. This type of aneurysm looks like a "berry" with a narrow stem. More than one aneurysm may be present at the same time. Two other types of cerebral aneurysms are fusiform and dissecting aneurysms. A fusiform aneurysm bulges out on all sides (circumferentially). Fusiform aneurysms are generally associated with atherosclerosis. A dissecting aneurysm may result from a tear in the inner layer of the artery wall, causing blood to leak into the layers. This may cause a ballooning out on one side of the artery wall, or it may block off or obstruct blood flow through the artery. Dissecting aneurysms may occur with traumatic injury. The shape and location of the aneurysm may affect what treatment is performed.

Most cerebral aneurysms (90 percent) are present without any symptoms and are small in size (less than 10 millimeters, or about two fifths of an inch, in diameter). Smaller aneurysms may have a lower risk of rupture. Although a cerebral aneurysm may be present without symptoms, the most common initial symptom of a cerebral saccular aneurysm is a subarachnoid hemorrhage (SAH). SAH is bleeding into the subarachnoid space (the space between the brain and the membranes that cover the brain). A ruptured cerebral saccular aneurysm is the most common cause (80 percent) of SAH. SAH is a medical emergency and may be the cause of a hemorrhagic (bleeding) stroke. Hemorrhagic strokes occur when a blood vessel that supplies the brain ruptures and bleeds. When an artery bleeds into the brain, the brain cells and tissues do not receive oxygen and nutrients. In addition, pressure builds up in surrounding tissues, and irritation and swelling occur. About 20 percent of strokes are caused by hemorrhagic bleeding. Increased risk of rupture is associated with aneurysms that are greater than 10 millimeters in diameter, a particular location (circulation in the back portion of the brain), and/or previous rupture of another aneurysm. A significant risk of death is associated with the rupture of a cerebral aneurysm.

What causes a cerebral aneurysm?

Currently, the cause of cerebral aneurysms is not clearly understood. The formation of cerebral saccular aneurysms has been associated with predominantly two factors: an abnormal degenerative (breaking down) change in the wall of an artery, and the effects of pressure from the pulsations of blood being pumped forward through the arteries in the brain. Certain locations of an aneurysm may create greater pressure on the aneurysm, such as at a bifurcation (where the artery divides). The forming of a cerebral aneurysm has also been linked to risk factors that are inherited or that develop later in life (acquired risk factors).
Inherited risk factors associated with aneurysm formation may include, but are not limited to, the following:
• alpha-glucosidase deficiency – a complete or partial deficiency of the lysosomal enzyme alpha-glucosidase. This enzyme is necessary to break down glycogen and to convert it into glucose.
• alpha 1-antitrypsin deficiency – a hereditary disease that may lead to hepatitis and cirrhosis of the liver or emphysema of the lungs
• arteriovenous malformation (AVM) – an abnormal connection between an artery and a vein
• coarctation of the aorta – a narrowing of the aorta (the main artery coming from the heart)
• Ehlers-Danlos syndrome – a connective tissue disorder (less common)
• family history of aneurysms
• female gender
• fibromuscular dysplasia – an arterial disease, cause unknown, that most often affects the medium and large arteries of young to middle-aged women
• hereditary hemorrhagic telangiectasia – a genetic disorder of the blood vessels in which there is a tendency to form blood vessels that lack capillaries between an artery and a vein
• Klinefelter syndrome – a genetic condition in men in which an extra X sex chromosome is present
• Noonan's syndrome – a genetic disorder that causes abnormal development of many parts and systems of the body
• polycystic kidney disease (PCKD) – a genetic disorder characterized by the growth of numerous fluid-filled cysts in the kidneys. PCKD is the most common medical disease associated with saccular aneurysms.
• tuberous sclerosis – a type of neurocutaneous syndrome that can cause tumors to grow inside the brain, spinal cord, organs, skin, and skeletal bones

Acquired risk factors associated with aneurysm formation may include, but are not limited to, the following:
• age (greater than 40 years)
• alcohol consumption (especially binge drinking)
• atherosclerosis – a build-up of plaque (made up of deposits of fatty substances, cholesterol, cellular waste products, calcium, and fibrin) in the inner lining of an artery
• current cigarette smoking
• use of illicit drugs such as cocaine or amphetamine
• hypertension (high blood pressure)
• trauma (injury) to the head

A risk factor is anything that may increase a person's chance of developing a disease. It may be an activity, such as smoking, diet, family history, or many other things. Different diseases have different risk factors. Although these risk factors increase a person's risk, they do not necessarily cause the disease. Some people with one or more risk factors never develop the disease, while others develop the disease and have no known risk factors. Knowing your risk factors for any disease can help guide you to appropriate actions, including changing behaviors and being clinically monitored for the disease.

What are the symptoms of a cerebral aneurysm?

The presence of a cerebral aneurysm may not be known until the time of rupture. However, occasionally there may be symptoms that occur prior to an actual rupture, due to a small amount of blood that may leak into the brain ("warning leaks"). The symptoms of an unruptured cerebral aneurysm include, but are not limited to, the following:
• eye pain
• vision deficits (problems with seeing)

The first evidence of a cerebral aneurysm may be a subarachnoid hemorrhage (SAH), due to rupture of the aneurysm.
Symptoms that may occur at the time of SAH include, but are not limited to, the following:
• initial sign – rapid onset of the "worst headache ever in my life"
• stiff neck
• nausea and vomiting
• changes in mental status, such as drowsiness
• pain in specific areas, such as the eyes
• dilated pupils
• loss of consciousness
• hypertension (high blood pressure)
• motor deficits (loss of balance or coordination)
• photophobia (sensitivity to light)
• back or leg pain
• cranial nerve deficits (problems with certain functions of the eyes, nose, tongue, and/or ears that are controlled by one or more of the 12 cranial nerves)

The symptoms of a cerebral aneurysm may resemble other problems or medical conditions. Always consult your physician for a diagnosis.

How is a cerebral aneurysm diagnosed?

A cerebral aneurysm is often discovered after it has ruptured, or by chance during diagnostic examinations such as computed tomography (CT scan), magnetic resonance imaging (MRI), or angiography that are being done for other conditions. In addition to a complete medical history and physical examination, diagnostic procedures for a cerebral aneurysm may include:
• digital subtraction angiography (DSA) – provides an image of the blood vessels in the brain to detect a problem with blood flow. The procedure involves inserting a catheter (a small, thin tube) into an artery in the leg and passing it up to the blood vessels in the brain. A contrast dye is injected through the catheter, and x-ray images are taken of the blood vessels.
• computed tomography scan (CT or CAT scan) – a diagnostic imaging procedure that uses a combination of x-rays and computer technology to produce cross-sectional images (often called slices), both horizontally and vertically, of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs. CT scans are more detailed than general x-rays, and may be used to detect abnormalities and help identify the location or type of stroke.
• magnetic resonance imaging (MRI) – a diagnostic procedure that uses a combination of large magnets, radiofrequencies, and a computer to produce detailed images of organs and structures within the body. An MRI uses magnetic fields to detect small changes in brain tissue that help to locate and diagnose a stroke.
• magnetic resonance angiography (MRA) – a noninvasive diagnostic procedure that uses a combination of magnetic resonance technology (MRI) and intravenous (IV) contrast dye to visualize blood vessels. Contrast dye causes blood vessels to appear opaque on the MRI image, allowing the physician to visualize the blood vessels being evaluated.

What is the treatment for a cerebral aneurysm?

Specific treatment for a cerebral aneurysm will be determined by your physician based on:
• your age, overall health, and medical history
• the extent of the condition
• your signs and symptoms
• your tolerance for specific medications, procedures, or therapies
• expectations for the course of the condition
• your opinion or preference

Depending on your situation, the physician will make recommendations for the intervention that is appropriate. Whichever intervention is chosen, the main concern is to decrease the risk of a subarachnoid hemorrhage, either initially or from a repeated episode of bleeding. Many factors are considered when making treatment decisions for a cerebral aneurysm.
The size and location of the aneurysm, the presence or absence of symptoms, the patient's age and medical condition, and the presence or absence of other risk factors for aneurysm rupture are all considered. In some cases, the aneurysm may not be treated, but the patient will be closely followed by a physician. In other cases, surgical treatment may be indicated. There are two primary surgical treatments for a cerebral aneurysm:

• open craniotomy (surgical clipping) – This procedure involves the surgical removal of part of the skull. The physician exposes the aneurysm and places a metal clip across the neck of the aneurysm to prevent blood flow into the aneurysm sac. Once the clipping is completed, the skull is sutured back together.

• endovascular coiling or coil embolization – Endovascular coiling is a minimally invasive technique, which means an incision in the skull is not required to treat the cerebral aneurysm. Rather, a catheter is advanced from a blood vessel in the groin up into the blood vessels in the brain. Fluoroscopy (a special type of x-ray, similar to an x-ray "movie") is used to assist in advancing the catheter to the head and into the aneurysm. Once the catheter is in place, very tiny platinum coils are advanced through the catheter into the aneurysm. These tiny, soft, platinum coils, which are visible on x-ray, conform to the shape of the aneurysm. The coiled aneurysm becomes clotted off (embolization), preventing rupture. This procedure is performed under either general or local anesthesia.
If you arrived here, it probably means that you have heard of something called a protocol. The first thing to know is that there are many protocols, not just one, and that every protocol contains a set of rules.

Let's take a real-world example: driving your car from one point to another. Everything we do between entering the car and exiting it on the other side of the city is nothing more than a set of rules we follow to complete the whole action. If you know how to drive a car, you have learned the protocol of driving. Where's the brake, where are the gears, where do I turn the lights on, how do I throttle? Without all these little rules we couldn't drive the car, and with that knowledge, anyone can drive any car in the world. But is this enough to be able to drive from point A to point B? Not really. We have the protocol for driving the car, and everyone can use that protocol to drive their own car, but what happens when all these people come onto the road together and meet, for example, at a crossroads? All of them will need another set of rules to decide who goes first and passes through the crossroads safely. Knowing the traffic signs is the second protocol. The story doesn't end there, but I will stop here. It should now be clear that every system, including a networking system, needs different sets of rules, which form sets of protocols and give the system the ability to function properly. This is not a young drivers' school, though, so let's get back to networking and the Internet.

You've heard of several protocols now, and the story continues with the protocols that make the Internet work. The most important protocols on today's networks are TCP, the Transmission Control Protocol, and IP, the Internet Protocol. They usually work together, so in discussion you will often see TCP/IP written as if it were one protocol. These two protocols are in charge of defining the rules for transporting packets through the network, providing the common language that lets computers understand each other. One of the most common examples of an application-layer network protocol on the Internet is HTTP, the hypertext transfer protocol, which is what we use to view web sites through a browser. This is why we have "http://" at the front of a web page address. On the other hand, if you wish to download something from a web page, you just click on the link and the computer asks you where to save the file. This is the moment when the computer starts to use FTP, the file transfer protocol, to load the file from the server onto your computer's hard disk. Protocols like these, and a whole bunch of others, create the framework within which all devices must operate to be part of the Internet. An IP address is the unique name for every computer on the network. With computers named by IP address, each of them can easily find any other computer by following the IP protocol.
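To make the layering concrete, here is a minimal sketch in Python: TCP/IP supplies a reliable byte stream between two named computers, and HTTP defines what the bytes travelling over that stream mean. The host name is just an illustrative choice.

```python
# TCP/IP carries the bytes; HTTP gives those bytes meaning.
import socket

HOST = "example.com"  # illustrative host that serves plain HTTP on port 80

# The application-layer message, written out by hand per the HTTP protocol.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# The transport layer: TCP/IP opens a reliable connection to port 80.
with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = sock.recv(4096)

# The first line of the reply is HTTP again, e.g. "HTTP/1.1 200 OK".
print(reply.decode("ascii", errors="replace").splitlines()[0])
```

Notice that the socket code never needs to know it is carrying HTTP, and the HTTP text never mentions packets or IP addresses: each protocol minds its own set of rules, just like the driving skills and the traffic signs.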
MODULE 1 - WIND
What is wind? How do you measure it? Students construct anemometers to measure wind speed, collect data, and compare wind velocity in different locations. This module contains the very basis of sailing knowledge - an awareness of wind and from which direction it is blowing.

MODULE 2 - BUOYANCY
What makes a boat float? What is the basis for a sound hull design? Students test their creative and cooperative skills in this group module which is as fun as it is educational. The fun and learning continues as they put this concept to practical use when sailing on a 420 or a J-22 in our protected marina.

MODULE 3 - SAIL AREA
What shape are sails and why? How do we calculate the area of a triangular sail? What is the perimeter of a sail? Students learn a practical application of the Pythagorean Theorem (see the sketch after this list) and a more in-depth understanding of a sail, which is actually a three-dimensional object that creates lift when wind flows around it.

MODULE 6 - MARINE DEBRIS
In this module, students spend time observing and collecting debris on a nearby beach. What appears to be a pristine seashore from a distance actually contains more man-made objects than one can imagine! Students participate in the Rozalia Project by cleaning up and recording marine debris for this national project.

MODULE 7 - UPWIND SAILING ANGLES
Students learn the advantages of tacking a boat at 90 degrees and how they can apply this principle to racing or sailing a boat upwind in optimum fashion.

MODULE 8 - LAND AND SEA BREEZES
Students learn the difference between a land and sea breeze and what causes each to occur. The San Francisco Bay is a perfect venue for this discussion.
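Module 3's arithmetic fits in a few lines. A minimal sketch, assuming a right-triangular mainsail whose two legs are the luff (vertical edge) and foot (bottom edge); the dimensions are illustrative, not from any course material:

```python
# Sail area and perimeter for a right-triangular sail.
import math

luff = 9.0  # meters (vertical edge) - illustrative value
foot = 3.0  # meters (bottom edge) - illustrative value

area = 0.5 * luff * foot        # area of a triangle: half base times height
leech = math.hypot(luff, foot)  # Pythagorean theorem gives the third edge
perimeter = luff + foot + leech

print(f"area = {area:.1f} m^2")          # 13.5 m^2
print(f"leech = {leech:.2f} m")          # 9.49 m
print(f"perimeter = {perimeter:.2f} m")  # 21.49 m
```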
Tunnel Diodes and Quantum Tunneling

What is a tunnel diode?

A tunnel diode (also called the Esaki diode) is a diode that is capable of operating into the microwave frequency range. This is made possible by the phenomenon of tunneling, a quantum mechanical effect in which a particle tunnels through a very thin barrier that the classical (everyday) laws of physics say it cannot pass through or over.

Who discovered it?

The physicist Leo Esaki invented the tunnel diode in August 1958, when he was with Sony. He was awarded the Nobel Prize in Physics in 1973 for discovering the tunneling phenomenon in semiconductors. The prize was shared with Ivar Giaever, who discovered tunneling in superconductors, and Brian Josephson, who predicted the properties of a supercurrent through a tunnel barrier.

Why is tunneling important?

Tunneling can be a parasitic effect in microchips, where it can be a source of current leakage, resulting in substantial power drain and heating effects that plague high-speed devices. This limits how small computer chips can be made. As the fabrication process of silicon chips shrinks every 18 months (Moore's Law), the barriers of insulation that are supposed to keep electrons in become so thin that the rules of classical physics no longer apply; the rules of quantum mechanics take over, and tunneling starts to occur. A European research project has turned the unwanted effects of tunneling to its advantage by creating a field effect transistor in which the gate channel is controlled by quantum tunneling. This results in a reduction in gate voltage from 1 volt to approximately 0.2 volts, reducing power consumption by two orders of magnitude. If future computer processors were to use this transistor, Moore's law may be able to continue for another 4 years, when the fabrication process reaches 5 nm in about the year 2021. Tunneling also has applications in precision measurements of voltages and magnetic fields, as well as in creating super-efficient solar cells. Tunneling has also been used to model radioactive decay processes such as electron capture, in which an electron is absorbed into the nucleus. Because of its negative resistance, the tunnel diode is used in oscillators, amplifiers and switching circuits.

[Figure: multijunction solar cell. Notice the tunnel diode!]

Now back to tunnel diodes... Under normal forward bias operation, the voltage begins to increase, and electrons begin to tunnel through the 10 nm p-n junction of the diode. As voltage increases further, the current drops - this is called negative resistance, because current decreases with increasing voltage. As voltage increases still further, the diode begins to operate as a normal diode again. Here are some common IV curves, followed by the IV curve of a tunnel diode:

Using the Analog Discovery as an IV Curve Tracer

The Analog Discovery can be used as an IV curve tracer because it is capable of plotting in XY mode. In the Waveforms software press the "Add XY" button. Construct the circuit to the left on a breadboard. In the Waveforms oscilloscope window the X value is C1 (the voltage across the diode, Vd) and the Y value is the diode current (M1) in amperes. This current is simply the voltage across the resistor R divided by the resistance R (M1 = C2/R). The input signal of AWG1 should start as a sine or triangular wave with a frequency of about 10 Hz, 2.5 V amplitude, and 2.5 V offset. Using these settings, you should be able to view a normal diode's IV curve.
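The XY-mode math is simple enough to reproduce offline. Here is a hedged sketch of what the M1 = C2/R channel computes, using synthetic samples in place of a real Waveforms capture; the 100-ohm resistor and the crude 0.7 V diode model are assumptions for illustration.

```python
# Reconstructing an IV curve from "scope" samples: X = diode voltage (C1),
# Y = diode current (C2 / R). Real data would come from the Analog Discovery.
import numpy as np
import matplotlib.pyplot as plt

R = 100.0  # ohms - assumed series resistor value

t = np.linspace(0, 0.2, 2000)                  # two periods of the 10 Hz drive
awg1 = 2.5 + 2.5 * np.sin(2 * np.pi * 10 * t)  # 2.5 V offset, 2.5 V amplitude

# Stand-in for the capture: an idealized diode that clamps at ~0.7 V.
c1 = np.clip(awg1, 0.0, 0.7)  # voltage across the diode
c2 = awg1 - c1                # voltage across the resistor
m1 = c2 / R                   # diode current, as Waveforms computes it

plt.plot(c1, m1)
plt.xlabel("Diode voltage C1 (V)")
plt.ylabel("Diode current M1 = C2/R (A)")
plt.title("Simulated IV curve (ideal 0.7 V diode)")
plt.show()
```

A tunnel diode capture traces the same way, except that the negative-resistance region makes the curve fold back on itself, which is why persistence mode helps when viewing it.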
Here is a list of tunnel diodes you can purchase: For my test I used a 1N3717 manufactured by GE. Observing the IV curve of a tunnel diode requires a bit of adjustment; look at the values set in this picture. By right-clicking on the graph, turn on persistence to view the negative resistance region. You should get an IV curve that looks like this:

I think it's pretty cool that you are basically observing an effect that was discovered more than 50 years ago, and it still has applications and implications in electronics today and in the future. For example, take this graphene transistor that utilizes a tunneling transistor design. Also, in the near future, super-fast networks (30 Gbps) will use terahertz technology built on resonant tunneling diodes!
This week, students will be working on projects that will further their understanding of various aspects of the Civil War. Students are working in heterogeneous teams on projects that will examine battles of the war, generals of the war, women in the war, inventions before and during the war that affected its outcome, and the ending of the war and Reconstruction. Together, the projects should give students a deeper insight into this most deadly of American conflicts. Homework for this week will have students examining aspects other than their area of focus. For example, students working on inventions should consider looking up information on women or generals of the Civil War. They will then write a short paragraph (five sentences) about what they learned about that aspect of the war. Those paragraphs are due on Tuesday. If you have any questions regarding the homework or classwork projects, please feel free to drop me an e-mail. Rubrics should be up by Monday and should be in each student's drive.
As we all know, a Zener diode is used for regulating voltage in practical circuit applications. Though we know the importance of this device, have we ever wondered how it was invented? If you have had that question in mind, reading further will give you a thorough knowledge of the discovery of the Zener diode and its properties. You shall also be introduced to the great scientists who worked on this device.

Before going into the invention story, let us have a brief note on what a Zener diode is. This semiconductor device permits the flow of current in one direction. Provided with sufficient voltage, it allows the flow of current in the opposite direction as well. The reverse voltage required for reversing the direction of current flow is termed the breakdown voltage or Zener voltage. The major role of a Zener diode is to function as a voltage regulator, and it is widely employed in many electrical and electronic tools and equipment. Zener diodes come in a range of packages based on how they are mounted: most are either surface-mount or through-hole components. Surface-mount Zener diodes are mounted directly on a printed circuit board; in the through-hole technique, components are attached through holes in the board by their wire leads. The codes that represent Zener diodes always begin with the letters BZX or BZY.

The early history of the Zener diode

Only when the demand for semiconductor materials grew did the push to develop a device like the Zener diode spread deeper and wider. Many early inventions were made up to 1905, yet more focused work on semiconductor devices started only at the time of the Second World War. It was Clarence Melvin Zener who first elaborated on the advantageous properties of this diode.

Clarence Melvin Zener

Clarence Zener was a professor at Carnegie Mellon University in the department of Physics. His interests were focused on solid-state physics. He graduated from Stanford University in 1926 and received his doctorate from Stanford by 1929. He developed the Zener diode in 1950, and it was employed in modern computer circuits. Clarence Zener published a paper explaining the breakdown of electrical insulators in 1934. He was recognized across the world for introducing the field of internal friction, the subject on which most of his studies were focused.

Principle behind the invention of the diode

The very basic principle that paved the way for the invention is the unidirectional flow of current. The very first diode, by Thomas Alva Edison, was a light bulb with certain modifications. Edison noticed that an additional electrode, connected to the positive side, facilitated current flow from the filament across the empty space. Though he observed the effect, Edison was not at all sure about the physics behind it. Joseph John Thomson, who identified the electron in 1897, explained the reason behind it and was later awarded the Nobel Prize. This led to the invention of vacuum tube diodes. You can read the incredible story behind the invention of vacuum tubes to know more about the people behind them.

Pre-Zener diodes

Many other scientists were interested in finding alternative uses of this principle. John Ambrose Fleming tried using this valve for converting radio waves into signals that could be measured with a galvanometer. The Fleming valve is recognized as the first true electronic device. In 1906, Greenleaf W. Pickard invented another new diode.
From his earlier studies, Pickard knew that current flows in only one direction through certain minerals, such as silicon. Placing a piece of silicon between a metal base and a fine metal wire, he developed a valve that could be used to detect radio waves. It was named the cat's-whisker diode after the fine wire employed in it. H. C. Dunwoody patented a further-developed form of this diode that employed carborundum.

Limitations of the cat's-whisker diode: Though these diodes were an alternative to electron tubes, they had certain limitations that curtailed their use. The first was that they were fragile and prone to misalignment, so they required very careful adjustment. For this reason the use of cat's-whisker diodes declined, despite their advantage of working even at very high frequencies. During the Second World War, Bell Laboratories developed another new type of diode using semiconductors (silicon and germanium). Russell Ohl, a metallurgist at Bell Laboratories, developed a diode from a silicon crystal that produced electricity in response to light, effectively converting solar energy to electricity. It was found that the silicon piece that had been cut contained a large amount of impurity. The area where the impurity met the silicon was termed the junction. Years later, it was found that the impurities were responsible for the crystal's response to sunlight. This led to the invention of semiconductor-based PN-junction diodes. You can read the amazing series of events that occurred at Bell Labs during the invention of the PN-junction diode to learn more. Even before the reason was identified, production of these solar converters had begun. The use of diodes thus slowly expanded over time. Electron tube diodes are now used very rarely, while diodes made of silicon have found far wider application. They help detect high-frequency electromagnetic waves, and they serve as components for converting energy from the Sun into electrical signals. Inside many electronic devices, such as televisions and computers, diode systems are employed for converting alternating current (AC) to direct current (DC) and for regulating the voltage level. Automobiles carry high-power diodes whose role is similar: converting AC to DC. Incandescent lamps are increasingly being replaced with light-emitting diodes (LEDs), and the lights used in car headlights and other light bulbs are also very likely to be replaced with LEDs.
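To make the voltage-regulator role concrete, here is a minimal sketch of how the series resistor for a simple shunt Zener regulator is typically sized. All component values below are illustrative assumptions, not figures from the article.

```python
# Sizing the series resistor for a simple shunt Zener regulator.
# All component values are illustrative assumptions.

V_IN = 12.0      # unregulated supply voltage (V)
V_Z = 5.1        # Zener (breakdown) voltage (V)
I_LOAD = 0.020   # worst-case load current (A)
I_Z_MIN = 0.005  # minimum Zener current to stay in breakdown (A)

# The series resistor must drop V_IN - V_Z while carrying the
# load current plus the minimum Zener holding current.
r_series = (V_IN - V_Z) / (I_LOAD + I_Z_MIN)

# Worst-case Zener dissipation occurs when the load is removed
# and the full resistor current flows through the diode.
p_zener_max = V_Z * (V_IN - V_Z) / r_series

print(f"Series resistor: {r_series:.0f} ohms")        # 276 ohms
print(f"Max Zener dissipation: {p_zener_max:.2f} W")  # 0.13 W
```

In practice you would pick the nearest standard resistor value below the computed one and check the diode's power rating against the no-load case.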
What are Digital Filters and Why Are They Required in Today's Audio DACs? by Resonessence Labs Technical Staff

Aliases are not present in the 'older' analog recording formats: tape and vinyl records capture continuous signals, and do not create these artifacts. Perhaps your first introduction to aliasing due to a finite sample rate came when you watched cowboy movies in the sixties and seventies: sometimes the wheels of the wagon trains would appear to be going the wrong way, or even slowing and reversing direction despite the wagon clearly continuing to move. This was due to the camera used to make the movie: it was sampling the scene at 24 frames per second, but the wheel spokes were moving much faster than that. Between frames, a wheel could rotate by more than one spoke spacing, and the camera therefore generated artifacts in the playback that showed the wheels moving at the wrong rate. This effect occurs every time something is represented in a non-continuous fashion. Physicists first discovered this phenomenon when they looked at vibrations in crystals. Something very odd was happening as the frequency of the vibration increased: the energy was coming out in the wrong place! It took some very clever physicists to realize that the crystal was made up of discrete atoms, all the same distance apart, and the vibrations (called phonons) passing through the crystal were being sampled by those equally spaced atoms. So, because these pieces of crystal were made of a finite number of atoms all the same distance apart, when the phonon frequency was such that the wave advanced more than one cycle in the distance between atoms (equivalent to the wagon wheel moving more than one spoke-distance between frames), the phonon frequency was changed: it was wrong. Léon Brillouin, a French physicist, was among the first to figure out what was going on, and what are called "Brillouin zones" define how a crystal creates phonon aliases. He figured this out in the 1920s. Our problem in the audio world is much simpler than Brillouin's, because it is only one-dimensional, and engineers are used to thinking of the Brillouin zones as just certain frequencies that cannot be exceeded before there is "a problem". The frequency where problems start to occur is at half the sampling rate. So, for example, in digital music recorded on a CD, the studio has sampled the signal at 44.1kS/s, and what physicists would call the first Brillouin zone ends at half this rate: 22.05kHz. Engineers just call this the half sample rate, or sometimes the Nyquist limit (after Harry Nyquist, who in 1928 recognized that there was a limit to the rate at which information could travel down a telegraph line, and later defined the maximum frequency that could be encoded into discrete samples). If we ask the studio to encode a sound of 30kHz onto the CD, 30kHz will not come out when we play it back. Rather, 14.1kHz will come out. You can perhaps see where the 14.1kHz comes from: it is the difference between the 30kHz we applied and the 44.1kS/s we used to sample the signal. Nothing is wrong, nothing is faulty, in this scenario: each element is operating at mathematical perfection. It is just that a signal of 30kHz cannot be captured into a series of samples taken at 44.1kS/s, because it exceeds half the sample rate: it exceeds 22.05kHz. How can we cope with this? What if the music content has a cymbal sound with components above 22.05kHz in it?
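Before looking at their answer, the aliasing arithmetic above is easy to verify numerically. The short sketch below (Python with NumPy, offered only as an illustration) samples a 30kHz cosine at 44.1kS/s and shows that the samples are indistinguishable from those of a 14.1kHz cosine.

```python
import numpy as np

FS = 44_100          # CD sample rate (samples per second)
n = np.arange(64)    # sample indices

# Sample a 30 kHz tone: above the 22.05 kHz Nyquist limit.
above_nyquist = np.cos(2 * np.pi * 30_000 * n / FS)

# Sample the predicted alias: 44.1 kHz - 30 kHz = 14.1 kHz.
alias = np.cos(2 * np.pi * 14_100 * n / FS)

# The two sets of samples are identical: once sampled, the
# 30 kHz tone cannot be told apart from a 14.1 kHz tone.
print(np.allclose(above_nyquist, alias))  # True
```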
The answer assumed by the clever engineers at Philips and Sony who first came up with the CD was to argue as follows: since the human ear cannot hear above about 20kHz, let us make an analog filter that removes everything above 20kHz; then there can be no problem, because 20kHz is just below 22.05kHz and no aliases will be created, since there is no signal above 20kHz. You may ask why they did not simply increase the sample rate to, say, 100kS/s, so that the first problem would not occur until 50kHz. The answer is that they could not do that, because it would have more than doubled the number of samples on the CD, and the CD had to play for at least 45 minutes so that it could capture one whole vinyl album. In other words, there were commercial considerations that dictated that the sample rate be as slow as possible. Not good for us audiophiles, and it has taken us years to break this constraint: now we can finally get 24-bit, 192kHz-sampled music without compromise. But let's return to what Philips and Sony had to do in the 1970s to make CDs viable. The problem they have is that any signal above 22.05kHz will alias (some engineers use the term "fold back") into the audio domain, and so there has to be a filter, an analog filter, that removes all the sounds above 22.05kHz (in fact they chose 20kHz) to prevent this problem. This is not trivial: they are asking the analog designer to make a filter that lets through 20kHz but blocks all signals above 22.05kHz! Any analog designer would tell you this is far from trivial: 22.05kHz is much too close to 20kHz. "Can you not give me a break, and say let through 20kHz and block off, say, 50kHz?" the analog designer would ask, to which the company has to reply, "No. If you can't do this, we can't fit an album on a CD, and who would buy that?"
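The designer's complaint can be quantified. As a rough illustration (assuming SciPy is available; the 1dB passband and 60dB stopband figures are illustrative assumptions, not the CD specification), the sketch below asks how many poles a classic Butterworth filter would need to pass 20kHz yet block 22.05kHz:

```python
from math import pi
from scipy.signal import buttord

# Analog anti-alias filter spec from the article's scenario:
# pass 20 kHz (here within 1 dB), block 22.05 kHz (here by 60 dB).
wp = 2 * pi * 20_000   # passband edge (rad/s)
ws = 2 * pi * 22_050   # stopband edge (rad/s)

order, _ = buttord(wp, ws, gpass=1, gstop=60, analog=True)
print(f"Butterworth order required: {order}")  # roughly 80 poles
```

An analog filter needing on the order of 80 poles is exactly why the designer in the dialogue above asks for "a break".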
Franklin was a statesman and diplomat for the newly formed United States, as well as a prolific author and inventor. Franklin helped draft, and then signed, the Declaration of Independence in 1776, and he was a delegate to the Constitutional Convention in 1787. As a civic leader, he initiated a number of new programs in Philadelphia, including a fire company, fire insurance, a library, and a university. A map of the Gulf Stream that appears in a book by Benjamin Franklin dates from 1769; in it, the Gulf Stream is depicted as a dark gray swath running along the east coast of what is now the United States. The amount of water carried in the Gulf Stream is almost 100 million cubic meters per second, which is nearly 100 times the combined flow of all the rivers on Earth! The speed of the Gulf Stream can be as high as 5 knots. Now you can see why ships heading north and eastward across the North Atlantic tried to stay in the current: it would nearly double their speed, so they could complete their voyages more quickly.
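Two of the figures above can be cross-checked with simple arithmetic. A small sketch, assuming a period sailing ship making about 5 knots through the water (an assumption for illustration, not a figure from the text):

```python
KNOT_MS = 0.514           # one knot in meters per second

# "Nearly 100 times the combined flow of all the rivers on Earth"
gulf_stream_flow = 100e6  # cubic meters per second, from the text
implied_river_flow = gulf_stream_flow / 100
print(f"Implied flow of all rivers combined: {implied_river_flow:.0e} m^3/s")

# Riding a 5-knot current nearly doubles a 5-knot ship's progress.
ship_speed = 5.0          # assumed ship speed through the water (knots)
current = 5.0             # Gulf Stream speed, from the text (knots)
total = ship_speed + current
print(f"Speed over ground: {total:.0f} knots ({total * KNOT_MS:.1f} m/s)")
```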
Well before young children arrive for the first day of school, their brains have undergone an extraordinary process of development. At birth, the brain weighs about 400 grams and has 100 billion neurons. By the age of 2, at 1100 grams, it is about 80% the size of an adult brain. At some early stages of development, the brain is adding up to a half-million neurons per minute, and by age 3 it will have 1000 trillion neuron connections. Today, technological advances such as functional Magnetic Resonance Imaging (fMRI) are giving researchers a window onto the deepest structures and most complex functions of the brain. That is giving neuroscientists a detailed new understanding of how a stimulating environment and a sense of security in early childhood can be critically important for healthy brain development. Research is also creating new insights into what can go wrong. At a Capitol Hill briefing organized by AAAS, neuroscientists described how conditions associated with poverty—including a lack of nurturing and high levels of stress—can set off a cascade of neural and hormonal responses that disrupt brain development and have negative impacts on language, learning, and attention. The new research is shedding a different light on the effects of poverty, and raising critically important questions for communities and policymakers in areas ranging from education and health to social welfare and juvenile justice. Scientists have long known that poor nutrition or exposure to lead could affect brain development, but could a lack of books or an excess of stress be a similar cause for concern? "Where a child grows up in impoverished conditions… with limited cognitive stimulation, high levels of stress, and so forth, that person is more likely to grow up with compromised physical and mental health and lowered academic achievement," said Martha Farah, director of the Center for Neuroscience and Society at the University of Pennsylvania. "The promise of neuroscience is to understand how this works," Farah added, "so that you can intervene" to help families and communities provide a better environment for children. The 90-minute briefing, held 26 June, drew an audience of over 100 people, including congressional staff, federal scientists, and journalists. It was held with the support of U.S. Representatives Chaka Fattah (D-Pennsylvania) and Brian Bilbray (R-California), and was the first in a series of three Capitol Hill briefings on neuroscience organized by the AAAS Office of Government Relations and underwritten by The Dana Foundation. In addition to Farah, the panel included James Griffin, director of the Early Learning and School Readiness Program at the National Institutes of Health, and Annapurni Jayam-Trouth, chair of the Department of Neurology at Howard University, who served as a discussant. The briefing was moderated by Alan I. Leshner, the AAAS chief executive officer and executive publisher of the journal Science. Leshner, a neuroscientist, said that much research remains to be done to understand the interplay of factors shaping childhood brain development.
But he called the new insights “exciting” and said they provide a valuable example “of how an emerging body of scientific literature really can in fact help us to develop far more effective social programs and programs intended to better the lives of all children.” Fattah is the author of the Fattah Neuroscience Initiative, signed into law last November by President Barack Obama to establish a high-level, interagency working group to coordinate and advance federally funded neuroscience. In remarks that introduced the researchers’ presentations, he cited the potential of neuroscience to create understanding and treatments for conditions ranging from mental illness to traumatic brain injury. “It’s critically important that we focus in a much more robust way to move this research area forward,” Fattah said. “It portends a great deal for our country and our world.” The End of “Nature vs. Nurture” American culture venerates the ideas of free will and self-determination, and tends to view poverty as a mark of flawed character and moral weakness. That has been the backdrop for a long-running debate in science, and throughout much of American social and political culture, over whether child development is shaped more by genetic factors or environmental factors. But at the AAAS briefing, findings discussed by the researchers challenge some old paradigms. Put simply: It’s not nature vs. nurture—it’s a constant interplay between the two. Genetics may shape the response to environmental conditions, and stimulus from the environment can influence genetic responses for better or for worse. Understanding that is essential to understanding early brain development, Farah and Griffin agreed. In its earliest growth, Griffin said, the human brain is developing its basic components and functions—for example, the primal “fight or flight” response to a perceived threat. Even from an early age, infants show signs of awareness, recognition of ambiguity, and an ability to solve problems. But brain areas such as the prefrontal cortex that govern more complex functions—language, problem-solving, self-regulation, and social bonding—tend to develop later, between the ages of 1½ and 4 years. “This really is a crucial period in brain development,” Griffin explained. “We know we need to… take full advantage of what we can do for children (at that age) so they reach their full potential.” Separate from neurons, neural connections have their own significance. “It’s not just the neurons,” Griffin said. “It’s how they connect [with chemical neurotransmitters] and fire together that we’ve learned is the most important” to assisting brain functions—and impeding them. Between birth and the age of 3, he said, the brain produces an excess of neurons and neural connections. But waves of neural growth are followed by periods of “pruning” in which neurons and connections that are not being used are, in effect, taken off-line. “It’s a use-it-or-lose-it process,” he explained. “If something isn’t being used, if it doesn’t become part of the circuitry, it basically is pruned off. Those neurons go away. Connections aren’t made.” The Powers of Nurturing and Stress That process of building connections points to the importance of cognitive nurturing. When parents or caregivers spend time talking or reading with a child, the intellectual activity stimulates the brain, helps to activate and build neural connections that link different parts of the brain. Without engagement, those connections may be diminished or lost. 
But stress is a sort of counterpoint to nurturing. Stress, too, is a sort of stimulation, and as a part of everyday life, it isn’t always bad. But it can be disruptive to both parent-child relationships and children’s learning, especially if it is overwhelming and chronic. “Stressful lives can cause parents to engage less with children,” Farah said. Sometimes stress results from the lack of appropriate stimulation, like boredom, or it can arise from excess or inappropriate stimulation. In some homes, Griffin added, there are few books, perhaps because they’re too expensive. But “the TV is on all the time, at really loud levels. It’s inappropriate content for a child, but it’s literally so loud that the child pays attention to it anyway, even though they have no idea what’s going on.” “Remember all those primitive things, like ‘fight or flight’?” he asked. “When you’re in a stressful overload situation and you’re a very young child, it just becomes overwhelming. And when you don’t have a parent to help mediate the stress, it’s even more so.” In those conditions, learning—and neural connections—may be disrupted. Stress causes the body to produce the hormone cortisol, and at high levels the stress can become toxic, Griffin said. That can have a direct impact on development of the pre-frontal cortex, which is the seat of attention, judgment, and self-control. The effect, Griffin said, is “disregulating.” Such stress leaves a physical signature that can be seen on functional brain imaging. Farah said her lab and others are finding that higher income levels are associated with greater volume in the prefrontal cortex and in the hippocampus, a center for memory and learning. Related studies suggest disparities in brain function between low-income and higher-income children. Farah cited “highly robust, sizeable differences” in the functions of these areas, affecting language, self-regulation, and working memory. Other research, Farah said, shows that nurturing can offset the effects of stress, in effect making the child more resilient. Consider one study on relationships among mother rats and their pups: Those “reared by extremely solicitous, nurturing mama rats, grow up to have better memory and better stress responses,” Farah told the briefing. “They’re better able to handle stress. And this is true especially when rat pups are stressed.” For rats as well as humans, the implications are that levels of nurturing can have a direct bearing on the hippocampus, which in turn has an impact on memory and learning. Those two systems are crucial to a person’s long-term well-being. There’s a specific issue for policymakers that arises from these findings. As Griffin noted: “The psychological stress associated with growing up in poverty can impair early learning abilities, affecting school readiness skills.” Solutions in Parenting and Policy Both Griffin and Farah emphasized that the brain can recover from effects of childhood poverty. But, they said, preventing harm is more efficient than repairing it. “We continue to learn throughout our lives,” Griffin said. “The brain is also able to make new connections after trauma…. At any age, you can learn a new language, you can learn a new skill…. This is one of the things we know with education: If you don’t master things early on, trying to learn it later requires even more—and even more costly—efforts, and even remediation. 
That’s another reason why intervening early makes all the sense in the world.” Early experiences promote healthy brain development in areas that affect language, memory, learning, judgment, and emotional well-being. “Research says that everyday experiences with parents and other adults can optimize development in these areas,” Griffin said. “The experiences that children have with their environment, and with people in their environment, especially their parents and other caregivers, really shape this process.” The simple answer, then, lies in parenting skills practiced at home and in early education: talking, reading, nurturing. Griffin described the familiar picture of a parent reading to a child. Even when the child is very young, a parent might sit and read from a floppy cloth book. “The infant doesn’t know it’s a book—they don’t know what you’re doing is reading,” he explained. “At most, the infant may mouth the book.” But the infant is learning motor control and language; from a parent’s modeling, the infant or child is learning what a book is, how to hold it, and the joy of reading. As the child grows older, she may begin to turn the pages, describe the pictures, and pretend to read. From repetition of this process, a sense of mastery begins to emerge. And with mastery comes a desire to learn more. The process of this “scaffolding,” or supporting learning, he said, results in a brain that’s healthy and ready to learn more. For children, the future “depends on us understanding really, fully, what is the science… and how do we put it into practice,” said Trouth, the chair of neurology at Howard University. “The science must be transmitted via many channels of education to the general public, especially to low-income communities, so intervention can begin early, at home.” Clearly, however, the new research has important policy implications, for helping to address poverty-related issues and, perhaps, for helping to break the cycles that sustain poverty. Turning the new insights into productive policies is almost certain to cause political objections. In Farah’s view, that’s where neuroscience can play a constructive role in helping to engage policymakers and the public. “At this point,” she explained, “we are able to study and understand the development in the brain of things like executive function, memory, and language, and we can begin to apply it in domains as diverse as early childhood education, juvenile justice, and policy for poverty alleviation.” Could neuroscience create more neutral ground for discussing such policy? That’s her hope. “The science is fascinating. I think it can engage people with the awesomeness of brain development… and how it emerges from the interplay of genes and environment. It can renew people’s interest in finding solutions. It also helps to replace the morally fraught concepts of effort, trying harder, pulling yourself up by your bootstraps.” In place of those traditional views is the more dispassionate recognition that neurons “develop differently under different circumstances,” Farah said. That could help shift the policy discussion away from moral judgments to a public health orientation. Ultimately, Farah suggested, the new science might even bring people together. The insights “affect everybody,” she said. “We’re not just talking about helping the bottom 20% of children below the poverty line. 
Understanding the effects of stress and cognitive stimulation on brain development can help everybody's children to fulfill their potential."
Here's a quick look at the network theorems introduced in this tutorial: Norton's theorem, the maximum power transfer theorem, the substitution theorem, the reciprocity theorem, Millman's theorem, the compensation theorem, Tellegen's theorem, and the star-delta transformation. We will also go through a number of problems based on these network theorems.

Norton's theorem: In a linear, active, bilateral network consisting of active sources, passive elements, and a load resistor RL, the circuit can be replaced by a single current source of magnitude IN in parallel with a resistor RN and the load, where IN is the short-circuit current through the points where the load is connected and RN is the equivalent resistance seen from the terminals where the load is connected.

Maximum power transfer theorem: In an active, linear, bilateral network, maximum power is delivered to the load when the load resistance is equal to the equivalent resistance looking back into the network from the terminals where the load is connected. The value of the maximum power is V²/(4RL), where V is the open-circuit (Thevenin) voltage. In simpler terms, maximum power is delivered from a source to a load when the load resistance is equal to the source resistance, assuming that the load resistance is a variable.

Substitution theorem: In any linear, bilateral, active network, any branch within the circuit may be replaced by an equivalent branch, provided the replacement branch has the same current through it and the same voltage across it as the original branch.

Reciprocity theorem: In a linear, passive, bilateral network, the excitation and the corresponding response may be interchanged. The ratio of excitation to response remains constant in a reciprocal network with respect to an interchange between the points of application of the excitation and the measurement of the response.

Millman's theorem: Used to simplify circuits having several parallel voltage sources.

Compensation theorem: In any linear, bilateral, active network, if any branch carrying a current I has its impedance Z changed by an amount ΔZ, the resulting changes that occur in the other branches are the same as those which would have been caused by the injection of a voltage source of (−I × ΔZ) in the modified branch.

Tellegen's theorem: The sum of the powers in the elements of a circuit is zero at any instant of time.
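As a numerical check of the maximum power transfer theorem, the sketch below sweeps the load resistance for an assumed source (the 10 V, 50 Ω values are illustrative) and confirms that the delivered power peaks at RL = RS with value V²/(4RL):

```python
import numpy as np

V_S = 10.0   # assumed source (Thevenin) voltage, volts
R_S = 50.0   # assumed source (Thevenin) resistance, ohms

# Sweep the load resistance and compute the power delivered to it:
# P = V^2 * RL / (RS + RL)^2
r_load = np.linspace(1, 200, 2000)
p_load = V_S**2 * r_load / (R_S + r_load)**2

best = np.argmax(p_load)
print(f"Power peaks at R_L = {r_load[best]:.1f} ohms")  # ~50 ohms = R_S
print(f"P_max = {p_load[best]:.3f} W")                  # V^2/(4*R_S) = 0.5 W
```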
Imagine studying science in a lab that is thousands to trillions of miles away from you. You can’t touch anything, and you can’t control the experiments – all you can do is watch the events unfold. This science is called astronomy. The timescales on which the Universe builds, grows, and destroys its structures are stupefying – thousands of years to cough up a planetary nebula, tens of millions of years to make a star, billions of years to make a habitable planet…as humans, we simply cannot expect to ever see, first hand, these events unfold in their entirety. Thankfully, the Universe is large enough that we can piece together a timelapse of the events in stages, as instances of it are happening to different systems across the Cosmos. All astronomers use information carried on beams of light to learn how the Universe’s cosmic experiments are coming along. (The term light is just an abbreviation for a huge range of electromagnetic radiation given off by nearly everything in the Universe.) The most plentiful form of light that our cosmic experiments give off is radio wavelength light. It is also the weakest of the energies given off by the Universe’s diverse experiments. A typical radio wave carries a billionth the energy of an optical one. Even cell phone signals swamp cosmic signals. However, that has not stopped us from inventing state-of-the-art technologies for receiving these fascinating cosmic broadcasts. With them we see the invisible Universe, the experiments of beginnings, endings, order, and mayhem. Structure of the Universe We live on a planet that orbits a star, the Sun. Our Sun is one of billions of stars that orbit around a disk-shaped structure called a galaxy. Our Milky Way Galaxy is one of several dozen in the Local Group of galaxies. The Local Group is a rural member of the Virgo Supercluster, a collection of 100 similar galaxy groups. Millions of superclusters give structure to the infinite Universe. Mapped across space, the superclusters sketch out a kind of spherical webbing, like a snapshot of jumbled soap bubbles, with major galaxy clusters hanging out where the bubbles merge, and minor clusters, like the Local Group, residing farther into the voids. How did the Universe get this bubbly shape? The Big Bang The Big Bang was the primeval event that brought all space and time, all matter and energy, into being. For several hundred thousand years immediately thereafter, the Universe was a very hot soup of particles and radiation. Light was trapped inside this mayhem. As the Universe cooled, the first hydrogen and helium atoms began to form, assembling from those free-floating particles. When the particles became occupied in atoms, light finally had a chance of finding a path around them and out into space, giving us our first look at the young Universe. Images taken by the Wilkinson Microwave Anisotropy Probe (WMAP) surveyed the entire microwave-length radio sky to map this very distant escape of light. WMAP showed us that indeed, the Universe was starting to take shape even in those early times. But were the clumps the seeds that grew into the first galaxies? Some astronomers believe the universe was built with small pieces, such as gas clouds and star clusters, that merged over time to form galaxies and clusters of galaxies. 
Others theorize that the early Universe broke first into colossal clumps that contained enough building materials to make structures on the grandest scale — great walls and sheets of millions of galaxies — that fragmented into increasingly smaller gas clouds, ultimately resulting in individual galaxies. Astronomers would like to understand how density fluctuations in a sea of subatomic particles could have formed the great variety of galaxy shapes and sizes that make up the Universe as we see it today. And understanding galaxy evolution is necessary for addressing the even more fundamental questions about the expansion of space and the ultimate fate of the Universe. Astronomers using ALMA are observing galaxies in their early phases, as they were 10 billion years ago, and will establish the star-forming history in near and distant galaxies.
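The earlier claim that a typical radio photon carries roughly a billionth the energy of an optical one follows directly from the relation E = hf. A quick check, assuming an AM-band radio photon near 500 kHz and a visible-light photon near 5×10^14 Hz (both frequencies are illustrative choices):

```python
H_PLANCK = 6.626e-34   # Planck's constant (J*s)

f_radio = 5e5      # assumed AM-band radio frequency (Hz)
f_optical = 5e14   # assumed visible-light frequency (Hz)

e_radio = H_PLANCK * f_radio      # photon energy E = h * f
e_optical = H_PLANCK * f_optical

print(f"Radio photon:   {e_radio:.2e} J")
print(f"Optical photon: {e_optical:.2e} J")
print(f"Ratio: {e_radio / e_optical:.1e}")  # ~1e-9, "a billionth"
```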
A polymeric solid can be thought of as a material that contains many chemically bonded parts or units which are themselves bonded together to form a solid. The word polymer literally means "many parts." Two industrially important polymeric materials are plastics and elastomers. Plastics are a large and varied group of synthetic materials which are processed by forming or molding into shape. Just as there are many types of metals, such as aluminum and copper, there are many types of plastics, such as polyethylene and nylon. Elastomers, or rubbers, can be elastically deformed a large amount when a force is applied to them and can return to their original shape (or almost) when the force is released. Polymers have many properties that make them attractive to use in certain conditions. Many polymers:
- are less dense than metals or ceramics,
- resist atmospheric and other forms of corrosion,
- offer good compatibility with human tissue, or
- exhibit excellent resistance to the conduction of electrical current.
Plastics can be divided into two classes, thermoplastics and thermosetting plastics, depending on how they are structurally and chemically bonded. Thermoplastic polymers include the four most important commodity plastics (polyethylene, polypropylene, polystyrene, and polyvinyl chloride) as well as a number of specialized engineering polymers. The term 'thermoplastic' indicates that these materials melt on heating and may be processed by a variety of molding and extrusion techniques. In contrast, 'thermosetting' polymers cannot be melted or remelted. Thermosetting polymers include alkyds, amino and phenolic resins, epoxies, polyurethanes, and unsaturated polyesters. Rubber is a naturally occurring polymer. However, most polymers are created by engineering the combination of hydrogen and carbon atoms and the arrangement of the chains they form. The polymer molecule is a long chain of covalently bonded atoms; secondary bonds then hold groups of polymer chains together to form the polymeric material. Polymers are primarily produced from petroleum or natural gas raw materials, but the use of organic substances is growing. The super-material known as Kevlar is a man-made polymer. Kevlar is used in bullet-proof vests, strong lightweight frames, and underwater cables said to be 20 times stronger than steel.
A computer is a system comprising several electronic items that work together to achieve a task in a period of time much shorter than if the task were done manually. It receives information (text, pictures, audio files, ...), processes it, and puts it in a format that is easy to understand and use. When used properly, a computer can handle large amounts of information and can therefore be very efficient. Computers may be used to process, store, transform, retrieve, generate, analyze, and present large amounts of information. The most important electronic component in a computer is the central processing unit (CPU), which in turn comprises the Arithmetic Logic Unit (ALU) that does the actual arithmetic and logical computations. Data and instructions are held in memory, ready to be transported through the control and data buses to and from the CPU, and also to the input/output peripherals such as monitors, printers, scanners, digital cameras, and speakers.
Fig. 1 - Computer Architecture
The CPU uses the address bus to carry the addresses of the memory locations or I/O devices it is accessing. The control bus is used by the CPU to carry control signals between the CPU and other components, such as the input/output peripherals. The hardware is made up of all the physical parts, such as the CPU, memory, monitor, and printer; the software is the set of all programs (sets of instructions) that help the computer do what it is supposed to do. All operations in a computer are done on binary numbers represented using 1s and 0s only. Physically, a 1 is represented by a switch that is "ON" and a 0 by a switch that is "OFF". These switches are electronic devices that work at very high speed. From the keyboard, we input letters and numbers in decimal form, but all these are translated into streams of 1s and 0s using codes and then processed.
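That translation from typed characters to streams of 1s and 0s can be seen directly in a few lines of Python, shown here purely as an illustration:

```python
# A keyboard character is stored as a number, and that number
# is held in hardware as a pattern of ON/OFF switches (bits).
char = 'A'
code = ord(char)                 # character -> decimal code (65)
bits = format(code, '08b')       # decimal -> 8-bit binary string

print(f"'{char}' -> {code} -> {bits}")   # 'A' -> 65 -> 01000001

# The reverse translation: a stream of bits back to a character.
print(chr(int('01000001', 2)))   # A
```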
"Seeing those caribou marching single-file across the tundra puts what we're doing here in the Arctic into perspective," said Miller, principal investigator of the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE), a five-year NASA-led field campaign studying how climate change is affecting the Arctic's carbon cycle. "The Arctic is critical to understanding global climate," he said. "Climate change is already happening in the Arctic, faster than its ecosystems can adapt. Looking at the Arctic is like looking at the canary in the coal mine for the entire Earth system." Aboard the NASA C-23 Sherpa aircraft from NASA's Wallops Flight Facility, Wallops Island, Va., Miller, CARVE Project Manager Steve Dinardo of JPL and the CARVE science team are probing deep into the frozen lands above the Arctic Circle. The team is measuring emissions of the greenhouse gases carbon dioxide and methane from thawing permafrost -- signals that may hold a key to Earth's climate future. What Lies Beneath Permafrost (perennially frozen) soils underlie much of the Arctic. Each summer, the top layers of these soils thaw. The thawed layer varies in depth from about 4 inches (10 centimeters) in the coldest tundra regions to several yards, or meters, in the southern boreal forests. This active soil layer at the surface provides the precarious foothold on which Arctic vegetation survives. The Arctic's extremely cold, wet conditions prevent dead plants and animals from decomposing, so each year another layer gets added to the reservoirs of organic carbon sequestered just beneath the topsoil. Over hundreds of millennia, Arctic permafrost soils have accumulated vast stores of organic carbon - an estimated 1,400 to 1,850 petagrams of it (a petagram is 2.2 trillion pounds, or 1 billion metric tons). That's about half of all the estimated organic carbon stored in Earth's soils. In comparison, about 350 petagrams of carbon have been emitted from all fossil-fuel combustion and human activities since 1850. Most of this carbon is located in thaw-vulnerable topsoils within 10 feet (3 meters) of the surface. But, as scientists are learning, permafrost - and its stored carbon - may not be as permanent as its name implies. And that has them concerned. "Permafrost soils are warming even faster than Arctic air temperatures - as much as 2.7 to 4.5 degrees Fahrenheit (1.5 to 2.5 degrees Celsius) in just the past 30 years," Miller said. "As heat from Earth's surface penetrates into permafrost, it threatens to mobilize these organic carbon reservoirs and release them into the atmosphere as carbon dioxide and methane, upsetting the Arctic's carbon balance and greatly exacerbating global warming." Current climate models do not adequately account for the impact of climate change on permafrost and how its degradation may affect regional and global climate. Scientists want to know how much permafrost carbon may be vulnerable to release as Earth's climate warms, and how fast it may be released. CARVing Out a Better Understanding of Arctic Carbon Enter CARVE. Now in its third year, this NASA Earth Ventures program investigation is expanding our understanding of how the Arctic's water and carbon cycles are linked to climate, as well as what effects fires and thawing permafrost are having on Arctic carbon emissions. 
CARVE is testing hypotheses that Arctic carbon reservoirs are vulnerable to climate warming, while delivering the first direct measurements and detailed regional maps of Arctic carbon dioxide and methane sources and demonstrating new remote sensing and modeling capabilities. About two dozen scientists from 12 institutions are participating. "The Arctic is warming dramatically - two to three times faster than mid-latitude regions - yet we lack sustained observations and accurate climate models to know with confidence how the balance of carbon among living things will respond to climate change and related phenomena in the 21st century," said Miller. "Changes in climate may trigger transformations that are simply not reversible within our lifetimes, potentially causing rapid changes in the Earth system that will require adaptations by people and ecosystems." The CARVE team flew test flights in 2011 and science flights in 2012. This April and May, they completed the first two of seven planned monthly campaigns in 2013, and they are currently flying their June campaign. Each two-week flight campaign across the Alaskan Arctic is designed to capture seasonal variations in the Arctic carbon cycle: spring thaw in April/May, the peak of the summer growing season in June/July, and the annual fall refreeze and first snow in September/October. From a base in Fairbanks, Alaska, the C-23 flies up to eight hours a day to sites on Alaska's North Slope, interior and Yukon River Valley over tundra, permafrost, boreal forests, peatlands and wetlands. The C-23 won't win any beauty contests - its pilots refer to it as "a UPS truck with a bad nose job." Inside, it's extremely noisy - the pilots and crew wear noise-cancelling headphones to communicate. "When you take the headphones off, it's like being at a NASCAR race," Miller quipped. But what the C-23 lacks in beauty and quiet, it makes up for in reliability and its ability to fly "down in the mud," so to speak. Most of the time, it flies about 500 feet (152 meters) above ground level, with periodic ascents to higher altitudes to collect background data. Most airborne missions measuring atmospheric carbon dioxide and methane do not fly as low. "CARVE shows you need to fly very close to the surface in the Arctic to capture the interesting exchanges of carbon taking place between Earth's surface and atmosphere," Miller said. Onboard the plane, sophisticated instruments "sniff" the atmosphere for greenhouse gases. They include a very sensitive spectrometer that analyzes sunlight reflected from Earth's surface to measure atmospheric carbon dioxide, methane and carbon monoxide. This instrument is an airborne simulator for NASA's Orbiting Carbon Observatory-2 (OCO-2) mission to be launched in 2014. Other instruments analyze air samples from outside the plane for the same chemicals. Aircraft navigation data and basic weather data are also collected. Initial data are delivered to scientists within 12 hours. Air samples are shipped to the University of Colorado's Institute for Arctic and Alpine Research Stable Isotope Laboratory and Radiocarbon Laboratory in Boulder for analyses to determine the carbon's sources and whether it came from thawing permafrost. Much of CARVE's science will come from flying at least three years, Miller says. "We are showing the power of using dependable, low-cost prop planes to make frequent, repeat measurements over time to look for changes from month to month and year to year." 
Ground observations complement the aircraft data and are used to calibrate and validate them. The ground sites serve as anchor points for CARVE's flight tracks. Ground data include air samples from tall towers and measurements of soil moisture and temperature to determine whether soil is frozen, thawed or flooded. A Tale of Two Greenhouse Gases It's important to accurately characterize the soils and state of the land surfaces. There's a strong correlation between soil characteristics and release of carbon dioxide and methane. Historically, the cold, wet soils of Arctic ecosystems have stored more carbon than they have released. If climate change causes the Arctic to get warmer and drier, scientists expect most of the carbon to be released as carbon dioxide. If it gets warmer and wetter, most will be in the form of methane. The distinction is critical. Molecule per molecule, methane is 22 times more potent as a greenhouse gas than carbon dioxide on a 100-year timescale, and 105 times more potent on a 20-year timescale. If just one percent of the permafrost carbon released over a short time period is methane, it will have the same greenhouse impact as the 99 percent that is released as carbon dioxide. Characterizing this methane to carbon dioxide ratio is a major CARVE objective. There are other correlations between Arctic soil characteristics and the release of carbon dioxide and methane. Variations in the timing of spring thaw and the length of the growing season have a major impact on vegetation productivity and whether high northern latitude regions generate or store carbon. CARVE is also studying wildfire impacts on the Arctic's carbon cycle. Fires in boreal forests or tundra accelerate the thawing of permafrost and carbon release. Detailed fire observation records since 1942 show the average annual number of Alaska wildfires has increased, and fires with burn areas larger than 100,000 acres are occurring more frequently, trends scientists expect to accelerate in a warming Arctic. CARVE's simultaneous measurements of greenhouse gases will help quantify how much carbon is released to the atmosphere from fires in Alaska - a crucial and uncertain element of its carbon budget. The CARVE science team is busy analyzing data from its first full year of science flights. What they're finding, Miller said, is both amazing and potentially troubling. "Some of the methane and carbon dioxide concentrations we've measured have been large, and we're seeing very different patterns from what models suggest," Miller said. "We saw large, regional-scale episodic bursts of higher-than-normal carbon dioxide and methane in interior Alaska and across the North Slope during the spring thaw, and they lasted until after the fall refreeze. To cite another example, in July 2012 we saw methane levels over swamps in the Innoko Wilderness that were 650 parts per billion higher than normal background levels. That's similar to what you might find in a large city." Ultimately, the scientists hope their observations will indicate whether an irreversible permafrost tipping point may be near at hand. While scientists don't yet believe the Arctic has reached that tipping point, no one knows for sure. "We hope CARVE may be able to find that 'smoking gun,' if one exists," Miller said. 
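The one-percent claim above is worth checking with the article's own numbers. On the 20-year timescale, where methane is quoted as 105 times more potent, the arithmetic looks like this (a sketch using only the potency figures quoted above):

```python
GWP_CH4_20YR = 105   # methane potency vs. CO2, 20-year timescale (from text)
GWP_CO2 = 1

methane_fraction = 0.01   # 1% of released carbon emitted as methane
co2_fraction = 0.99       # the remaining 99% emitted as carbon dioxide

impact_ch4 = methane_fraction * GWP_CH4_20YR
impact_co2 = co2_fraction * GWP_CO2

print(f"Methane share of warming impact: {impact_ch4:.2f}")  # 1.05
print(f"CO2 share of warming impact:     {impact_co2:.2f}")  # 0.99
# The two shares are nearly equal: 1% methane matches the other 99%.
```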
Other institutions participating in CARVE include City College of New York; the joint University of Colorado/National Oceanic and Atmospheric Administration Cooperative Institute for Research in Environmental Sciences, Boulder, Colo.; San Diego State University; University of California, Irvine; California Institute of Technology, Pasadena; Harvard University, Cambridge, Mass.; University of California, Berkeley; Lawrence Berkeley National Laboratory, Berkeley, Calif.; University of California, Santa Barbara; NOAA's Earth System Research Laboratory, Boulder, Colo.; and University of Melbourne, Victoria, Australia. For more information on CARVE, visit: http://science.nasa.gov/missions/carve/
News Media Contact: Alan Buis, Jet Propulsion Laboratory, Pasadena, Calif.
A phosphodiester bond consists of two ester bonds linking the phosphorus atom of a phosphate group to two other molecules, forming a strong covalent bridge between them. Phosphodiester bonds are central to all life on Earth, as they make up the backbone of the strands of DNA. In DNA and RNA, the phosphodiester bond is the linkage between the 3' carbon atom of one sugar (deoxyribose in DNA, ribose in RNA) and the 5' carbon of the next. The phosphate groups in the phosphodiester bond have a pKa near 0, so they are negatively charged at pH 7. The resulting repulsion forces the phosphates to take opposite sides of the DNA strands; it is neutralized by proteins (histones), metal ions, and polyamines. In order for the phosphodiester bond to be formed and the nucleotides to be joined, the triphosphate or diphosphate forms of the nucleotide building blocks are broken apart to give off the energy required to drive the enzyme-catalyzed reaction. When a single phosphate, or a pair of phosphates known as pyrophosphate, breaks away, the energy released drives the formation of the phosphodiester bond. Hydrolysis of phosphodiester bonds can be catalyzed by the action of phosphodiesterases, which play an important role in repairing DNA sequences.
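The charge claim follows from the standard Henderson-Hasselbalch relation: with a pKa near 0 (the value quoted above), essentially every phosphate group is deprotonated, and therefore charged, at pH 7. A small sketch:

```python
def deprotonated_fraction(ph: float, pka: float) -> float:
    """Fraction of an acidic group in its charged (deprotonated) form.

    From Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa).
    """
    ratio = 10 ** (ph - pka)
    return ratio / (1 + ratio)

# pKa near 0, physiological pH 7: effectively fully deprotonated.
print(f"{deprotonated_fraction(ph=7.0, pka=0.0):.7f}")  # ~0.9999999
```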
Magna Carta was accepted by King John of England in 1215 and granted certain rights to English noblemen. Although the rights it gave, and the number of people to whom it gave them, were few, Magna Carta became a symbol for subjecting powerful rulers to law and fundamental rights. It holds an historic legacy as "the Great Charter of the Liberties" and its influence still endures today. The Original Document and Pints Magna Carta has been described as a "major constitutional document" and "the banner, the symbol, of our liberties". But most of the provisions of the original Magna Carta concerned the property of English nobles, who forced King John to seal (agree to) the document at Runnymede in 1215. Most of the original provisions are no longer in force, because they are not really relevant to today's world. One interesting provision, although not still law, is responsible for the beloved pint, as it provided for ale to be served in a standard measure, called the 'London quarter' (2 pints). The Words of Magna Carta – "Pure Gold" Certain provisions of the Magna Carta which remain valid contain some of the most important rules in the history of English law. Sir Edward Coke, a 17th century English lawyer and politician, described these provisions as "pure gold". Lord Bingham, a judge of the UK's highest court, proclaimed that the words still "have the power to make the blood race". Clause 39 of the Magna Carta states: No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any way, nor will we proceed with force against him, or send others to do so, except by the lawful judgment of his equals or by the law of the land. Magna Carta's Clause 40 states: To no one will we sell, to no one deny or delay right or justice. (Slightly amended, these two clauses make up Article 29 of a consolidated version of Magna Carta, which was issued in 1225.) Together, these provisions form the origins of what became the right to freedom from arbitrary detention and the right to a fair trial. Although originally intended only for a small class of people (the English nobility), these important provisions paved the way for our modern human rights laws, under which the State must respect and protect the rights and liberties of people within their jurisdiction. Magna Carta and Human Rights Today Magna Carta recognised three great constitutional ideas, which we still see today. First, fundamental rights can only be taken away or interfered with by due process and in accordance with the law. Second, government rests upon the consent of the governed, which is reinforced by our right to free and fair elections. Third, government, as well as the governed, is bound by the law, so the Human Rights Act 1998 makes it clear that public authorities can't infringe our rights. Advances in basic rights protection are rarely achieved without criticism or attempts to undermine them. After Magna Carta was sealed, Pope Innocent III declared it "illegal, unjust, harmful to royal rights and shameful to the English people". Still, Magna Carta remains important today. The UK's highest court in January 2017 declared that it contained the 'most long-standing and fundamental' rights. In the intervening 800-odd years, the ideas behind Magna Carta have been exported throughout the world.
The United Nations’ Universal Declaration of Human Rights (UDHR) was hailed by Eleanor Roosevelt, chair of the drafting committee, as “the international Magna Carta of all men everywhere”. The rights contained in the UDHR had great influence on the Human Rights Convention, which takes effect in UK law through the Human Rights Act 1998. The links between these laws and the values which underlie Magna Carta are clear; modern human rights laws show our commitment to the ancient ideas that power must be subjected to the rule of law and must not be allowed to violate fundamental freedoms. For more information: - Read about the history of the Universal Declaration of Human Rights. - Learn more about the Human Rights Convention. - Find out about other important human rights protections.
Can we predict volcanic eruptions? Scientists map underground magma flows. By measuring the electric and magnetic fields beneath Mount Rainier, scientists could see the journey that molten rock takes from deep inside the Earth to the volcano's magma chamber. Molten rock travels a long road before it spews from volcanoes during deadly eruptions. Mapping out the journey could help scientists better understand how volcanoes work and improve early warnings of oncoming blasts, but tracking down blobs of magma deep within the Earth's crust is no easy task. Now, at Washington's Mount Rainier and Mount St. Helens, two of the most dangerous volcanoes in the United States, researchers are getting their best look yet at magma's underground path via a pair of new scientific studies. The first study, published today (July 16) in the journal Nature, clearly illustrates how magma is produced deep beneath Mount Rainier. With the second study, which is just getting underway, researchers hope to generate similarly revealing results for Mount St. Helens. Birth of the Cascades: Mount Rainier and Mount St. Helens are two of scores of snow-capped volcanoes that march up the West Coast, from Northern California to British Columbia, Canada. If Mount Rainier erupts, its glaciers could melt and trigger lethal mudflows called lahars that would race through the Seattle-Tacoma metropolitan area. Similar lahars scoured the landscape when Mount St. Helens erupted in 1980. The Cascade volcanoes belch and smoke because of a collision between two tectonic plates — the pieces of crust that shift and slide on Earth's surface. One plate, the Juan de Fuca, is sliding eastward and descending below the westward-moving North American Plate. This collision between the two plates is called a subduction zone. Subduction zones birth volcanoes because the sinking crust is wet — it's been soaking at the bottom of the ocean for millions of years. As the Juan de Fuca plate inches downward, the temperature and pressure on the plate rise, altering the rocks in the subducting crust. Water locked in minerals in the rocks escapes as the heat and pressure increase, and the water slowly rises toward the surface. Adding a little water to the rocks above the subduction zone lowers their melting point, creating magma. In 2006, researchers measured variations in magnetic and electrical fields beneath Mount Rainier to see how this process of subduction feeds magma to Washington's volcanoes. Magnetic and electric conductivity fluctuates with changes in geologic structures underground, and water and molten rocks show up especially clearly with this method, said lead study author Shane McGary, a geophysicist at the College of New Jersey in Ewing. A seismic study done at the same time as the magnetotelluric survey helped the researchers resolve the boundaries between solid and molten rock. The results clearly illuminate the route molten rocks take from their underground birthplace in the subduction zone to the magma chamber beneath Mount Rainier. "The most striking thing is we can clearly see the slab to surface path," McGary said of the results. Here's how Mount Rainier's magma forms, according to the study. Water escapes from the top of the Juan de Fuca plate about 50 miles (80 kilometers) below the volcano.
The fluids come up and trigger melting in the overlying rock, and this mix of water and magma rises like an elevator straight toward the surface. (Water squeezed out at shallower depths of 25 miles (40 km) also travels over and joins this ascending mix.) For unknown reasons, the elevator shaft is on the coastal side of Mount Rainier, not directly underneath the volcano. Within 12 miles (20 km) of the Earth's surface, the magma slush shifts eastward toward Mount Rainier. "I don't think anyone knows why volcanoes don't form directly above [the rising magma], but this seems to be the characteristic of subduction zones," McGary said. Soon, however, scientists may solve the puzzle of what's happening with the shifting magma. This summer, a horde of volunteers is helping researchers set off small explosions all over Mount St. Helens to peer into the volcano's depth. The explosions are much smaller than the earthquakes that rock the volcano daily, and present no risk of setting off an eruption, according to the project scientists. The energy from the explosions will be recorded on thousands of portable seismometers, or earthquake monitors, placed by volunteers. The experiment will provide the clearest picture yet of the geology beneath Mount St. Helens. The explosions are part of a $3 million, multiyear project called iMUSH, for Imaging Magma Under St. Helens. "We conceived of the study because we have a decent idea of what's happening in the upper crust [underneath Mount St. Helens], but we've had trouble looking deeper," said John Vidale, director of the University of Washington-based Pacific Northwest Seismic Network, and one of the leaders of the project. "This will tell us where the pathways of the magma are, and the geologic structures through which they're moving." In addition to the temporary seismometers, scientists will expand the permanent seismic listening network at the volcano and conduct a magnetic and electrical survey even larger than the Mount Rainier experiment. The overall goal is to probe Mount St. Helens' depths and see how the volcano connects to its neighbors. For instance, does its magma pool in a giant underground reservoir that connects to Mount Rainier and Mount Adams? Or does each volcano have its own supply? And does the molten rock rise in fits and starts, or is there a speedy route to the surface? "We know there's magma underneath these volcanoes, but if we can image the source and understand the relationship between them, it could tell us important things about this area," said Adam Schultz, a geophysicist at Oregon State University in Corvallis, who is also helping lead the project. The answers will also help researchers understand how volcanoes fill their tanks after eruptions. Earlier this year, the U.S. Geological Survey announced that Mount St. Helens was showing signs of slowly filling again with magma.
Pulley paradox discussion. So why are crowned pulleys necessary for proper tracking of a flat belt? I've seen some internet sites that suggest it has something to do with centrifugal forces. These belts operate at low speeds, and the tracking behavior we wish to explain is easily observed at very low speeds where inertial effects play no role. One side of the belt on a tapered-diameter pulley has greater tension. Can it be that this gives rise to greater friction, and that friction pulls the belt up the slope? Unfortunately for this hypothesis, the friction acts down the slope on a belt that is moving up the slope. In the textbook Elements of Mechanism, Third Ed. by Schwamb, Merrill and James (Wiley, 1921), we find a rather insightful answer, at least as applies to a leather or slightly stiff, but still somewhat elastic, belt. The belt in contact with the truncated cone lies on the pulley relatively flat and undistorted in shape. But along the dotted line at a the belt moving upward makes first contact with the pulley. Just below a the belt has a slight sidewise bend. But the important thing is that as the belt moves from a to b without slipping, it moves along the dotted line to a point farther up the incline of the cone, and this process continues until the belt rides onto the apex. Now let's look at the case of a flat belt running over two cylindrical pulleys whose axles are misaligned. Will the belt crawl to the right (where the belt tension will be higher) or to the left (where the tension is lower)? If the belt around the cylindrical pulleys does not slip, there's no geometric reason why the belt would move sidewise on either pulley. A different reason must be sought than the one we found above. Quoting the textbook referenced above: This passage is a little murky at first reading. The essential difference between the two cases can be seen in the diagrams. In Fig. 1 the belt makes contact with the pulley at line a. Note that the upper edge of the pulley (line b) makes an angle with line a. Also, note that line a is perpendicular to the incoming portion of the belt. In Fig. 2 the belt makes contact with the pulley at line a. Note that the upper edge of the cylindrical pulley (line b) is parallel to line a. Also note that line a is not perpendicular to the incoming portion of the belt. This is the important difference between the two cases. In Fig. 1, each new piece of belt coming onto the pulley is carried, without slipping, to a point higher (to the right in Fig. 1) on the slope. In Fig. 2, each new piece of belt coming onto the pulley is "laid onto" the pulley at a point slightly lower (to the left in Fig. 2) on the slope, and is carried around without slipping. What I like about this puzzle is that (1) the behavior on the crowned pulley is counter-intuitive; (2) most of the initial hypotheses you make will turn out to be wrong; and (3) some explanations of the crowned pulley seem so "right" until you apply the same reasoning to the parallel shaft problem, then it's "back to the drawing-board". I like puzzles that have several levels of apparent paradox and counter-intuitive features. They teach us not to trust our intuition, which is a good thing. Intuition can sometimes be a part of the problem solving process, but at some point, it must give way to "sweating the details" and being ruthlessly critical of "plausible-sounding" answers. Fig. 3 shows rubber-band models made with steel construction set parts and wooden pulleys. The model on the left has a wooden file handle as a pulley.
This serves to illustrate how the rubber band will migrate along the wooden pulley from the narrowest part at the left to its largest diameter. The file handle has both convex and concave profiles. Placed in the concave part, the rubber band will quickly rise up the slope and will stabilize at the largest radius, even if that is the very narrow portion near the right end. In the misaligned cylindrical pulley model, on the right, the rubber band moves to the left side, where the pulleys are closer together, contrary to most people's expectations. It's better to use wooden dowels, or cover the metal cylinder surfaces with something like cloth tape to prevent belt slippage.
Researchers at Cornell University have developed a simple silicon device for speeding up optical data. The device incorporates a silicon chip called a “time lens,” lengths of optical fiber, and a laser. It splits up a data stream encoded at 10 gigabits per second, puts it back together, and outputs the same data at 270 gigabits per second. Speeding up optical data transmission usually requires a lot of energy and bulky, expensive optics. The new system is energy efficient and is integrated on a compact silicon chip. It could be used to move vast quantities of data at fast speeds over the Internet or on optical chips inside computers. Most of today’s telecommunications data is encoded at a rate of 10 gigabits per second. As engineers have tried to expand to greater bandwidths, they’ve come up against a problem. “As you get to very high data rates, there are no easy ways of encoding the data,” says Alexander Gaeta, professor of applied and engineering physics at Cornell University, who developed the silicon device with Michal Lipson, associate professor of electrical and computer engineering. Their work is described online in the journal Nature Photonics. The new device could also be a critical step in the development of practical optical chips. As electronics speed up, “power consumption is becoming a more constraining issue, especially at the chip level,” says Keren Bergman, professor of electrical engineering at Columbia University, who was not involved with the research. “You can’t have your laptop run faster without it getting hotter” and consuming more energy, says Bergman. Electronics have an upper limit of about 100 gigahertz. Optical chips could make computers run faster without generating waste heat, but because of the nature of light–photons don’t like to interact–it takes a lot of energy to create speedy optical signals. The new ultrafast modulator gets around this problem because it can compress data encoded with conventional equipment to ultrahigh speeds. The Cornell device is called a “time telescope.” While an ordinary lens changes the spatial form of a light wave, a time lens stretches it out or compresses it over time. Brian Kolner, now a professor of applied science and electrical and computer engineering at the University of California, Davis, laid the theoretical groundwork for the time lens in 1988 while working at Hewlett-Packard. He made one in the early 1990s, but it required an expensive crystal modulator that took a lot of energy. The Cornell work, Kolner says, is “a sensible engineering step forward to reduce the proofs of principle to a useful practice.”
Each orbital in an atom is specified by a set of three quantum numbers (n, ℓ, m) and each electron is designated by a set of four quantum numbers (n, ℓ, m and s). Principal quantum number (n) - It was proposed by Bohr and denoted by n. - It determines the average distance between the electron and the nucleus, i.e., it denotes the size of the atom. - It determines the energy of the electron in the orbit where the electron is present. - The maximum number of electrons in an orbit is given by 2n². No energy shell in atoms of the known elements holds more than 32 electrons. - It gives the information of the orbit: K, L, M, N, and so on. - Angular momentum can also be calculated using the principal quantum number. Azimuthal quantum number (ℓ) - The azimuthal quantum number is also known as the angular quantum number. It was proposed by Sommerfeld and denoted by ℓ. - It determines the subshell or sublevel to which the electron belongs. - It tells about the shape of the subshells. - It also orders the energies of the subshells: s < p < d < f (increasing energy). - For a given principal shell n, ℓ takes the values 0, 1, 2, ..., (n − 1), so there are n possible values of ℓ in total. - It represents the orbital angular momentum, which is equal to (h/2π)√(ℓ(ℓ + 1)). - The maximum number of electrons in a subshell is 2(2ℓ + 1): s subshell → 2 electrons, p subshell → 6 electrons, d subshell → 10 electrons, f subshell → 14 electrons. Magnetic quantum number (m) - It is denoted by 'm'. - It gives the number of permitted orientations of the subshells. - The value of m varies from −ℓ to +ℓ through zero. - It accounts for the splitting of spectral lines in a magnetic field, i.e., this quantum number explains the Zeeman effect. - For a given value of n, the total number of values of m is n². - For a given value of ℓ, the total number of values of m is (2ℓ + 1). - Degenerate orbitals: orbitals having the same energy are known as degenerate orbitals, e.g., px, py, pz of a p subshell. - The s subshell contains only one orbital, so it has no degenerate orbitals. Spin quantum number (s) - It was proposed by Goudsmit and Uhlenbeck and is denoted by the symbol s. - The value of s is +1/2 or −1/2, which signifies the spin or rotation or direction of the electron on its axis during movement. - The spin may be clockwise or anticlockwise. - The spin angular momentum is equal to (h/2π)√(s(s + 1)). - Maximum spin of an atom = 1/2 × number of unpaired electrons. - This quantum number is not a result of the solution of the Schrödinger equation as solved for the H-atom. Graphical Representation of Allowable Combinations of Quantum Numbers
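For readers who want to check the counting rules above, here is a minimal Python sketch (an illustration added here, not part of the original notes) that enumerates every allowed (n, ℓ, m, s) combination for a shell and confirms the 2n² capacity rule:

def allowed_states(n):
    states = []
    for l in range(n):                # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):    # m = -l, ..., 0, ..., +l
            for s in (+0.5, -0.5):    # two spin orientations per orbital
                states.append((n, l, m, s))
    return states

for n in (1, 2, 3, 4):
    print(n, len(allowed_states(n)))  # prints 2, 8, 18, 32: exactly 2n² each time

The final count of 32 for n = 4 matches the statement that no known shell holds more than 32 electrons.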
Math Word Problems Gaining proficiency with mathematical word problems is a crucial skill for your students to master. By Greg Harrison Quite often, when math students hear the phrase "word problems," all sorts of negative thoughts enter their heads, then a blank stare appears on their faces as they look at the problem on the paper. I've seen many students who were terrific at math shut down when faced with a word problem that required two or more steps to solve. Why is this? Part of the reason is that word problems require reading, and many students who are incredibly good at math are not as confident or proficient when it comes to language arts. Another part of the reason is that many students have not been taught strategies they can utilize when it comes to solving a word problem. And so, when a word problem comes up on a homework assignment, a quiz, or a standardized test, they often feel defeated before they even get started on it. Like anything else in mathematics, in order to become confident and successful problem solvers, students must be taught problem-solving strategies and given many opportunities to put those strategies to use. It's all about the practice! Thus, I have made problem solving a regular part of my daily math class. When math class begins, my students take out their Daily Math books. The students divide a sheet of paper in their books into four sections by drawing a vertical line down the middle and a horizontal line across the middle of the page. I do the same thing up on the whiteboard. Each of the sections contains a math problem. Anything goes! The problems can range from simple calculation problems, to Roman Numerals, to number patterns, to geometry concepts, to beginning algebra problems . . . anything at all. I always reserve the fourth box for a word problem. Here's an example: I want to put wood tiles on part of the classroom floor. The space is 12 feet long by 10 feet wide. The square-foot tiles come in packages of 25. How many packages will I need to buy in order to cover the space? This is a classic example of a "two-step" word problem. It requires "two steps" because it takes two separate calculations to solve it. Here are the strategies I teach my students to solve a problem like this: 1) Take three deep breaths to help you relax, and send lots of oxygen to your brain. (This may sound strange to you, but trust me . . . it works!) 2) Read the problem twice. 3) Circle all of the numbers in the problem. 4) Underline the sentence that tells you what you have to figure out. 5) Decide which operations you have to use to solve each step. 6) Perform your calculations and solve the problem. Most students will realize that they have to figure out how many square feet (how big) the space is that I want to tile. If they multiply 12 X 10, they will come up with 120 square feet and will know that they need 120 tiles to do the job. Now they have some options. Some students will count by 25's to see how many packages are needed to cover the floor. They can count up - 25, 50, 75, 100, 125. Voila! Five packages will be needed. Other students will use division: 120 divided by 25 is 4.8, so a full 5 packages will be needed. Two-step problems are, of course, more difficult than one-step problems. In the beginning of the year, I would have made this problem a one-step problem by simply asking them how many square feet of floor I wanted to tile. As the year goes on, and as your students get more comfortable with solving word problems, you simply add more steps.
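Here is how the two calculation steps look in code, a minimal Python sketch of the tile problem (my illustration, not part of the original lesson; the variable names are invented):

import math

length_ft, width_ft = 12, 10                          # the numbers students circled
tiles_per_package = 25

area_sq_ft = length_ft * width_ft                     # step 1: 12 x 10 = 120 square feet
packages = math.ceil(area_sq_ft / tiles_per_package)  # step 2: 120 / 25 = 4.8, round up
print(area_sq_ft, packages)                           # prints: 120 5

The math.ceil call does the same rounding up that counting by 25s accomplishes by hand, and adding further steps to the problem simply adds lines to the calculation.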
To make this example into a 3-step problem, you could tack on the sentence: If the packages of tiles cost $18.00 each, how much money will I have to spend in order to tile the floor? In this simple way you can incorporate word problems into your daily math lesson plans for your students. You will find that as their confidence grows, they will begin saying "Give us a hard problem!" After all, math word problems are like riddles, and we all know how much children love a good riddle. Additionally, math word problems are a great example of how we, as adults, use math in "real-life" situations - which is another reason why it's so important to give your students guidance and practice in this critical area of mathematics. Here are some other lesson plans which will give you some terrific ideas for teaching and incorporating mathematical word problems into your daily lessons. Mathematical Word Problem Lesson Plans: Word problem activities don't have to be boring! Most everyone loves pumpkins, and here is a lesson that should pique your students' interest. They read and analyze a chart about ten of the biggest pumpkins in the world, determine what math operation needs to be used to solve word problems, and complete a worksheet in the plan. A very fine lesson! In this motivating lesson, students take turns acting as "math coaches" who assist other students in solving word problems by identifying key words that usually indicate specific mathematical operations. This lesson speaks to a very important concept used for problem solving - recognizing the words that often give them clues as to how to proceed when deciding on an operation to use, and solving the problem. Once your students become more confident and proficient with their problem solving, this very clever lesson will give them some higher-level practice. Students access the Disaster Math website, and select from the following Disaster Math games: Hurricane, Tornado, Wild Fire, Winter Storm, or Flood Math. They work in pairs and attempt to solve a variety of word problems associated with each disaster. This lesson is geared for upper elementary/middle school students, and provides an excellent opportunity to solve word problems regarding a favorite food for most kids - cereal! Students utilize the nutrition labels on a variety of cereal boxes to solve word problems and, eventually, to create their own cereal. A masterful lesson!
A tessellation is simply a set of figures that can cover a flat surface leaving no gaps. To explain it in simpler terms – consider the floor of your house. That is a flat surface – called a "plane" in mathematical terms. And you'll notice that the floor is covered with some tiles or marbles of different shapes. That is a good example of a "tessellation". The one difference here is that technically a plane is infinite in length and width, so it's like a floor that goes on forever. Of course, when we are talking about floors, the shapes used to cover it are mostly rectangles or squares (in fact, the word "tessellation" comes from the Latin word tessella – which means "small square"). The word "tiling" is also commonly used to refer to "tessellations". There are different kinds of tessellations – the ones of most interest are tessellations created using polygons. If you use only one kind of polygon to tile the entire plane – that's called a "Regular Tessellation". As it turns out, there are only three possible polygons that can be used here. There are only three rules to be followed when doing a "regular tessellation" of a plane: - The tessellation must cover a plane (or an infinite floor) without any gaps or any overlaps. - All the tiles must be the same shape and size and must be regular polygons (that means all sides are the same length). - Each vertex (the points where the corners of the tiles meet) should look the same. Of course, you would have guessed that one is a square. What are the other two? They are triangles and hexagons. Let me show you examples of these two here. You may wonder why other shapes won't work. Let's try with pentagons and see what shape we come up with. You can see that there is a gap and that's not allowed. So what's unique to those 3 shapes (triangle, square and hexagon)? As it turns out, the key here is that the internal angle of each of these three is an exact divisor of 360 (the internal angle of a triangle is 60, that of a square is 90, and for a hexagon it is 120). The mathematics to explain this is a little complicated, so we won't look at it here. If you use a combination of more than one regular polygon to tile the plane, then it's called a "semi-regular" tessellation. If you look at the rules above, only rule 2 changes slightly for semi-regular tessellations. All the other rules are still the same. For example, you can use a combination of triangles and hexagons as follows to create a semi-regular tessellation. There are eight such tessellations possible. There are many other types of tessellations, like edge-to-edge tessellation (where the only condition is that adjacent tiles should share sides fully, not partially), and Penrose tilings. Each of these has many fascinating properties which mathematicians are continuing to study even today. Tessellations are also used in computer graphics where objects to be shown on screen are broken up like tessellations so that the computer can easily draw them on the monitor screen.
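The divisor-of-360 rule is easy to verify; here is a minimal Python sketch (my addition, not from the original article) that tests each regular polygon in turn:

for sides in range(3, 13):
    interior = 180 * (sides - 2) / sides    # interior angle of a regular polygon
    fits = (360 / interior).is_integer()    # a whole number of corners must meet at each vertex
    print(f"{sides} sides: interior angle {interior:.1f}, regular tessellation: {fits}")

Only 3 sides (60 degrees), 4 sides (90 degrees) and 6 sides (120 degrees) pass, matching the triangle, square and hexagon above.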
May Day occurs on May 1 and refers to any of several public holidays. In many countries, May Day is synonymous with International Workers' Day, or Labour Day, which celebrates the social and economic achievements of the labor movement. As a day of celebration, however, the holiday has ancient origins and can relate to many customs that have survived into modern times. Many of these customs are due to May Day being a cross-quarter day, meaning that it falls approximately halfway between an equinox and a solstice. May Day can refer to various labour celebrations conducted on May 1 that commemorate the fight for the eight-hour day. May Day in this regard is called International Workers' Day, or Labour Day. The choice of May 1st was a commemoration by the Second International for the people involved in the 1886 Haymarket affair in Chicago, Illinois. As the culmination of three days of labor unrest in the United States, the Haymarket incident was a source of outrage and admiration from people around the globe. In countries other than the United States and Canada, residents sought to make May Day an official holiday and their efforts largely succeeded. For this reason, in most of the world today, May Day has become an international celebration of the social and economic achievements of the labour movement. Although May Day received its inspiration from the United States, the U.S. Congress designated May 1 as Loyalty Day in 1958 due to the day's appropriation by the Soviet Union. Alternatively, Labor Day traditionally occurs sometime in September in the United States. Some view this as an effort to isolate American workers from the worldwide community. Yes, this holiday also had religious origins. However, unlike Christmas or Halloween, I'd say it has fully evolved. May Day has no (or at least fewer) religious remnants, while one might argue Christmas is still a very religious holiday. Further, its current form serves a purpose greater than wasting money on costumes and potentially inflating your dentist's bill (i.e., Halloween). If you're going to celebrate only one "Western" holiday, my vote goes to May Day!
Power Bites: Personal Power by Margaret A. Hill Got a little brother with energy to spare? Maybe he could drive your CD player instead of driving you nuts! New inventions that convert human power into electricity make that notion a very real possibility. Think about walking or jogging. The pressure of your foot striking the pavement is a form of mechanical energy. Several engineering groups are successfully building “heel-strike” mechanisms that will capture that mechanical energy and convert it into electrical energy. The key to this technology? Electroactive polymers. These human-made plastics generate an electric charge when they are compressed or bent. When placed inside the heel of a boot or shoe, electroactive materials become power generators as each step stretches and squashes them into performance. Making these electricity generators function inside the boot heels of soldiers is the aim of engineers at SRI International in Palo Alto, CA, and at the Massachusetts Institute of Technology (MIT) in Cambridge, MA. The trick has been to develop a durable material capable of sufficient energy output that can be electronically interfaced with the devices needing power. SRI International has created a rugged electricity-generating material that meets these needs. When engineered into boot heels, the system is expected to generate enough electricity from eight hours of walking to power the wearer's communication device, GPS (global positioning system), and night vision goggles. Engineers at MIT are working on their own microscale version of an energy-harvesting system. It uses a mini-hydraulic mechanism to collect energy from foot pressure. This mechanism then supplies the degree of pressure needed to activate an electroactive material with high electrical output. Both the SRI International and the MIT boot generators are months away from field-testing, but prospects look good. This new technology will eliminate batteries, lighten pack loads, and extend power availability. Not to mention—maybe?—be applied to little brothers with excess energy! - What has been one of the challenges facing researchers developing uses for electroactive polymers? [anno: Answers may vary but could include that developing an electroactive polymer that is durable enough to withstand the repeated pressures from walking while still generating enough energy.] - Researchers are hoping to develop these materials and eliminate battery packs. If an electroactive polymer in a boot heel is collecting energy to power personal devices, how would the devices get the power? Think about what you learned about circuits in Lesson 2. Draw a diagram that shows a person wearing a pair of boots with electroactive polymers in the heels. The person wants to power a GPS unit worn on the belt. How would the boots power the GPS unit? Show how the energy would flow in this circuit. Label the power source, conductor, and the object using the current. [anno: Diagrams will vary but should show a device in the boot labeled as the power source, a wire running between the boot and the GPS unit labeled as the conductor, and the GPS unit as the object using the current.]
What on Earth is Greater Than? Students compare things that are greater than, less than, or equal to, compare the Earth to other planets, and list planets from greatest size to smallest size. See similar resources: Locating Fractions Greater than One on the Number Line Supplement your lesson on improper fractions with this simple resource. Working on number lines labeled with whole numbers between 0 and 5, young mathematicians represent basic improper fractions with halves and thirds. The fractions... 3rd - 5th Math CCSS: Designed Greater Than, Less Than or Equal To: Math Challenge Take number comparison problems to the next level. The class must add first to determine the numbers in each set and then use the appropriate symbol to show which number is greater than, less than, or equal. There are 10 problems and 1... 2nd - 4th Math Find the Area of Polygons with More than 4 Sides Strange and unique shapes are found everywhere in the world around us. The final video in this series teaches young learners to break complex shapes into rectangles when finding their area. This process is clearly modeled as the... 2nd - 4th Math CCSS: Designed Greater Than or Less Than With "Mr. Great" Those tricky symbols for greater than and less than have stumped young mathematicians for generations. Mr. Great is a paper plate cut into a Pac-Man shape that can be used to keep track of which direction the symbols should face. This... K - 5th Math CCSS: Adaptable Generate a Scale Drawing Using Scale Factors Greater Than and Less Than One Different scale factors produce different results. Show your learners the difference between scale factors greater than and less than one and their different results. The video emphasizes using division to find the scale factor between... 6th - 8th Math CCSS: Designed Understand that Inequalities Have More than One Solution Using a Number Line To go on a field trip, Mrs. Robinson's class needs to raise at least 80 dollars. Given four possible solutions, it is up to your number crunchers to use a number line to figure out which solutions would allow Mrs. Robinson's class to go... 5th - 7th Math CCSS: Designed
Hepatitis C – What You Need to Know What is hepatitis C? Hepatitis C virus infection is the most common chronic bloodborne infection in the United States; approximately 3.2 million persons are chronically infected. It is the leading cause of cirrhosis and liver cancer and the most common reason for liver transplantation in the United States. Approximately 8,000–10,000 people die every year from hepatitis C related liver disease. Hepatitis C is a contagious liver disease that results from infection with the hepatitis C virus. It can range in severity from a mild illness lasting a few weeks to a serious, lifelong illness. Hepatitis C is usually spread when blood from a person infected with the hepatitis C virus enters the body of someone who is not infected. Hepatitis C can be either "acute" or "chronic." Types of hepatitis C Acute hepatitis C virus infection is a short-term illness that occurs within the first 6 months after someone is exposed to the hepatitis C virus. For most people, acute infection leads to chronic infection. Chronic hepatitis C virus infection is a long-term illness that occurs when the hepatitis C virus remains in a person's body. Hepatitis C virus infection can last a lifetime and lead to serious liver problems, including cirrhosis (scarring of the liver), liver cancer or death. Contact and Spread Hepatitis C is spread when blood from a person infected with the hepatitis C virus enters the body of someone who is not infected. Today, most people become infected with the hepatitis C virus by sharing needles or other equipment to inject drugs. Before 1992, when widespread screening of the blood supply began in the United States, hepatitis C was also commonly spread through blood transfusions and organ transplants. People can become infected with the hepatitis C virus during such activities as: - Sharing needles, syringes, or other equipment to inject drugs - Needlestick injuries in health care settings - Being born to a mother who has hepatitis C Less commonly, a person can also get hepatitis C virus infection through: - Sharing personal care items that may have come in contact with another person's blood, such as razors or toothbrushes - Having sexual contact with a person infected with the hepatitis C virus Hepatitis C virus is not spread by sharing eating utensils, breastfeeding, hugging, kissing, holding hands, coughing, or sneezing. It is also not spread through food or water. Hepatitis C virus has not been shown to be transmitted by mosquitoes or other insects. If symptoms occur, the average time is 6–7 weeks after exposure, but this can range from 2 weeks to 6 months. However, many people infected with the hepatitis C virus do not develop symptoms. Even if a person with hepatitis C has no symptoms, he or she can still spread the virus to others. Symptoms of acute hepatitis C, if they appear, can include: - Loss of appetite - Abdominal pain - Dark urine - Clay-colored bowel movements - Joint pain - Jaundice (yellow color in the skin or eyes) Symptoms of chronic hepatitis C: Most people with chronic hepatitis C do not have any symptoms. However, if a person has been infected for many years, his or her liver may be damaged. In many cases, there are no symptoms of the disease until liver problems have developed. Diagnosis and Testing Since many people with hepatitis C do not have symptoms, the disease is often detected during routine blood tests to measure liver function and liver enzyme (protein produced by the liver) levels.
Talk to your doctor about getting tested for hepatitis C if any of the following are true: - You are a current or former injection drug user, even if you injected only one time or many years ago. - You were treated for a blood clotting problem. - You received a blood transfusion or organ transplant before July 1992. - You are on long-term hemodialysis treatment. - You have abnormal liver tests or liver disease. - You work in health care or public safety and were exposed to blood through a needlestick or other sharp object injury. - You are infected with HIV. Acute hepatitis C There is no medication available to treat acute hepatitis C infection. Doctors usually recommend rest, adequate nutrition, and fluids. Chronic hepatitis C Each person should discuss treatment options with a doctor who specializes in treating hepatitis. People with chronic hepatitis C should be monitored regularly for signs of liver disease and evaluated for treatment. The treatment most often used for hepatitis C is a combination of two medicines, interferon and ribavirin. However, not every person with chronic hepatitis C needs or will benefit from treatment. In addition, the drugs may cause serious side effects in some people. In May 2011, the Food and Drug Administration approved 2 drugs for chronic hepatitis C. The first one is boceprevir and the other is telaprevir (Incivek). Both drugs block an enzyme that helps the virus reproduce. The drugs are intended to improve on standard treatments using the injected drug pegylated interferon alpha and the pill ribavirin. There is no vaccine for hepatitis C. The best way to prevent hepatitis C is by avoiding behaviors that can spread the virus: - Don't share needles or syringes. - Practice "safer" sex. Hepatitis C can be spread through sexual contact, but the risk of transmission from sexual contact is believed to be low. The risk increases for those who have multiple sex partners, have a sexually transmitted disease, engage in rough sex, or are infected with HIV. - Don't share razors, toothbrushes or nail clippers. - If you ever tested positive for the hepatitis C virus (or hepatitis B virus), experts recommend never donating blood, organs, or semen because this can spread the infection to the recipient. - If you are getting a tattoo or body piercing, make certain that the artist or piercer sterilizes needles and equipment, uses disposable gloves, and washes hands properly. Transmission of hepatitis C (and other infectious diseases) is possible when poor infection-control practices are used during tattooing or piercing. Since tattoo instruments come in contact with blood and bodily fluids, infection is possible if instruments are used on more than one person without being sterilized or without proper hygiene. Licensed, commercial tattooing facilities are not known to spread hepatitis C, but unregulated tattooing and piercing, as found in prisons and other informal settings, does increase the risk of transmission. From the Palm Beach County Health Dept. Epidemiology & Disease Control.
Outlining (Beethoven's Sonata #1) Outlining is a method for accelerating the learning process by simplifying the music. It is a simplifying process just like HS practice or practicing in short segments. Its main characteristic is that it allows you to maintain the musical flow or rhythm, and to do this at the final speed almost immediately, with a minimum of practice. This enables you to practice the musical content of the piece long before that segment can be played satisfactorily or at speed. It also helps you to acquire difficult technique quickly by teaching the larger playing members (arms, shoulders) how to move correctly; when this is accomplished, the smaller members often fall into place more easily. It also eliminates many pitfalls for timing and musical interpretation errors. The simplifications are accomplished by using various devices, such as deleting "less important notes" or combining a series of notes into a chord. You then get back to the original music gradually by progressively restoring the simplified notes. Whiteside has a good description of outlining on P.141 of the first book, and P.54-61, 105-107, and 191-196 of the second book, where several examples are analyzed; see the Reference section. For a given passage, there are usually many ways to simplify the score or to restore notes, and a person using outlining for the first time will need some practice before s/he can take full advantage of the method. It is obviously easiest to learn outlining under the guidance of a teacher. Suffice it to say here that how you delete notes (or add them back in) depends on the specific composition and what you are trying to achieve; i.e., whether you are trying to acquire technique or whether you are trying to make sure that the musical content is correct. Note that struggling with technique can quickly destroy your sense of the music. The idea behind outlining is that, by getting to the music first, the technique will follow more quickly because music and technique are inseparable. In practice, it requires a lot of work before outlining can become useful. Unlike HS practice, etc., it cannot be learned so easily. My suggestion is for you to use it initially only when absolutely necessary (where other methods have failed), and to gradually increase its use as you become better at it. It can be especially helpful when you find it difficult to play HT after completing your HS work. Even after you have partly learned a piece, outlining can be used to increase the precision and improve the memorizing. I will demonstrate two very simple examples to illustrate outlining. Common methods of simplification are (1) deleting notes, (2) converting runs, etc., into chords, and (3) converting complex passages into simpler ones. An important rule is that, although the music is simplified, you generally should retain the same fingering that was required before the simplification. Chopin's music often employs tempo rubato and other devices that require exquisite control and coordination of the two hands. In his Fantaisie Impromptu (Op. 66), the six notes of each LH arpeggio (e.g., C#3G#3C#4E4C#4G#3) can be simplified to two notes (C#3E4, played with 51). There should be no need to simplify the RH. This is a good way to make sure that all notes from the two hands that fall on the same beat are played accurately together. Also, for students having difficulty with the 3-4 timing, this simplification will allow play at any speed with the difficulty removed.
By first increasing the speed in this way, it may be easier to pick up the 3-4 timing, especially if you cycle just half a bar. The second application is to Beethoven's Sonata #1 (Op. 2, No. 1). I noted in the Reference that Gieseking was remiss in dismissing the 4th movement as "presenting no new problems" in spite of the difficult LH arpeggio which is very fast. Let's try to complete the wonderful job Gieseking did in getting us started on this Sonata by making sure that we can play this exciting final movement. The initial 4 triplets of the LH can be learned by using parallel set exercises applied to each triplet and then cycling. The first triplet in the 3rd bar can be practiced in the same way, with the 524524 fingering. Here, I have inserted a false conjunction to permit easy, continuous cycling, in order to be able to work on the weak 4th finger. When the 4th finger is strong and under control, you can add the real conjunction, 5241. Here, TO is absolutely required. Then you can practice the descending arpeggio, 5241235. You can practice the ensuing ascending arpeggio using the same methods, but be careful not to use TU in the ascending arpeggio, since this is very easy to do. Remember the need for supple wrists for all arpeggios. For the RH, you can use the rules for practicing chords and jumps (sections 7.e and 7.f above). So far, everything is HS work. In order to play HT, use outlining. Simplify the LH so that you play only the beat notes (starting with the 2nd bar): F3F3F3F3F2E2F2F3, with fingering 55515551, which can be continually cycled. These are just the first notes of each triplet. Once this is mastered HS, you can start HT. The result should be much easier than if you had to play the full triplets. Once this becomes comfortable, adding the triplets will be easier than before, and you can do it with much less chance of incorporating mistakes. Since these arpeggios are the most challenging parts of this movement, by outlining them, you can now practice the entire movement at any speed. In the RH, the first 3 chords are soft, and the second 3 are forte. In the beginning, practice mainly accuracy and speed, so practice all 6 chords softly until this section is mastered. Then add the forte. To avoid hitting wrong notes, get into the habit of feeling the notes of the chords before depressing them. For the RH octave melody of bars 33-35, be sure not to play with any crescendo, especially the last G. And the entire Sonata, of course, is played with no pedal. In order to eliminate any chance of a disastrous ending, be sure to play the last 4 notes of this movement with the LH, bringing it into position well before it is needed. For technique acquisition, the other methods of this book are usually more effective than outlining which, even when it works, can be time consuming. However, as in the Sonata example above, a simple outlining can enable you to practice an entire movement at speed, and including most of the musical considerations. In the meantime, you can use the other methods of this book to acquire the technique needed to "fill in" the outlining.
Welsh king. Son of Cadwallon who devastated Northumbria before being killed by King Oswald in 634. Cadwaladr himself suffered a serious defeat by the West Saxons at Pinhoe near Exeter in 658. His death in 664–5 seems to have marked the end of British hopes of recovery from the Saxon invasion. Though his deeds are not recorded, he is a significant figure in later prophetic poems, becoming, like Arthur, a semi‐mythical hero, who would rise again and lead his people to victory.
PART A: "INQUIRY" APPROACHES TO TEACHING SCIENCE Definition of "inquiry" The essence of the inquiry approach is to teach pupils to handle situations Which they encounter when dealing with the physical world by using Techniques, which are applied by, research scientists. Inquiry means that Teachers design situations so that pupils are caused to employ procedures Research scientists use to recognise problems, to ask questions, to apply Investigational procedures, and to provide consistent descriptions, Predictions, and explanations, which are compatible with shared experience Of the physical world. "Inquiry" is used deliberately in the context of an investigation in Science and the approach to teaching science described here. "Enquiry" will Be used to refer to all other questions, probes, surveys, or examinations Of a general nature so that the terms will not be confused. "Inquiry" should not be confused with "discovery". Discovery assumes a Realist or logical positivist approach to the world, which is not Necessarily present in "inquiry". Inquiry tends to imply a constructionist Approach to teaching science. Inquiry is open-ended and on going. Discovery concentrates upon closure on some important process, fact, Principle, or law, which is required by the science syllabus. How to teach using an inquiry approach There are a number of teaching strategies, which can be classified as Inquiry. However, the approaches have a number of common aspects. The Rationale for the inquiry approach has strong support from constructionist Psychology. The teacher applies procedures so that: (a) There is a primary emphasis on a hands-on, problem-centred approach; (b) the focus lies with learning and applying appropriate investigational or analytical strategies (This does not have anything to do with the use of the so-called "scientific method".); (c) memorising the "facts" of science which may arise is not as important as development of an understanding of the manner of...
Treatment of diplegia can involve physical therapy (e.g., strength training), electrical stimulation of muscles, use of assistive devices (e.g., walkers, wheelchairs, braces), medication, and surgery. A common medication used for treatment of diplegia is botox (Botulinum toxin) because it helps to decrease muscle contractions. Surgical approaches commonly involve lengthening of the hamstrings, which improves flexion and motion. Hamstrings are a group of muscles at the back of the thigh that bend the knee and swing the leg backwards from the thigh. If the cause of diplegia is reversible (e.g., infection) then treatment of the cause (e.g., with antibiotic medication) may reverse the condition. Diplegia is also known as double hemiplegia. Diplegia comes from the Greek word "di," meaning "two," and the word "plege," meaning "stroke." Put the two words together and you have "two stroke," referring to the two sides of the body affected by the paralysis. Because strokes sometimes lead to loss of movement and/or sensation in parts of the body, the word "plegia" is used to refer to such conditions. Other types of "plegias" include quadriplegia, hemiplegia, and paraplegia. Diplegia is paralysis (loss of muscle function) of the same or similar body parts on both sides of the body. Examples of diplegia would be when both hands are paralyzed or when two similar parts of the face on both sides of the body are paralyzed. The face, arms, or legs are commonly affected in diplegia. In children, a common cause of diplegia is cerebral palsy. Cerebral palsy is a type of brain damage that occurs during pregnancy, during birth, during infancy, or during early childhood that causes the child to have difficulties with movement and posture. Most people with cerebral palsy have diplegia in the legs but the arms are sometimes affected as well. Other types of brain injury (such as strokes) can result in diplegia. A stroke is a burst artery (a type of blood vessel that carries blood away from the heart) or a blockage of an artery in the brain. Infectious diseases, toxic exposures, and metabolism disturbances affecting the brain or spinal cord can also cause diplegia. Metabolism is the chemical actions in cells that release energy from nutrients or use energy to create other substances.
In 1826 the French monk Prévost recorded that if he moved a piece of paper across a bright sunbeam in a darkened room, he perceived hints of the colors purple and yellow. Twelve years later, the German philosopher, physicist and psychologist Gustav Fechner published a paper on what he called "subjective colors," which were visible when a black and white disk was rotated. While others would advance his work, this paper was so seminal that almost 200 years later his name remains associated with the phenomenon. In 1895, Charles Benham popularized this effect by selling spinning tops with a black-and-white pattern on them that produced colors when spun. The toy became extremely successful and in so doing introduced millions of people to Fechner's phenomenon. Benham's top is also referred to as Benham's disk. When spun, colors such as maroon, gray-green, hints of pink and dark blue can be observed. The order in which the colors appear depends on the direction the disk spins. The brightness and depth of color are affected by the brightness and quality of the light illuminating the disk. Fluorescent light intensifies blues; incandescent light strengthens reds. Bright light makes brighter colors, but light that is too bright, such as direct sunlight, can wash them out. The black-and-white patterns also affect color production. Some produce bright colors in a broad range. Others produce none at all. Some people are more receptive to seeing Fechner colors, others less so. However, this may be affected by the pattern on the disk and viewing conditions. Someone who usually doesn't see colors on one disk may do so if a different disk is used. The colors are always very subdued and take some experience to spot. (Sight is the most complex and least understood of all the senses. The explanation offered below is consistent with accepted theories but employs simplifications which, while not crossing the lines of accuracy, bend them slightly in the interests of brevity and clarity.) The human eye sees when light-sensitive cells in the eye's retina are excited by light. These cells are divided into rods, which respond to brightness, and cones, which respond to colors. Cones are further divided into three groups: one for red, one for green and one for blue. When one of these cones is excited by light to which it is designed to respond, it sends an electro-chemical signal to the brain. The vision center in the brain combines this information with that of all the other cones and from this data decides what color to paint on that part of the image we're seeing. The two important points are that (1) the process of seeing color is based on a chemical reaction in the cone and (2) the actual color we see is created in the mind not just by the input of one set of cones but from the blend of signals from all of them. Because seeing is based on a chemical reaction, it can't be instantaneous. It takes a very short, but still finite, amount of time for one of the three sets of cones to respond to seeing its particular color, and equally, to respond when that color stimulation is removed. Every chemical reaction progresses based on the nature of the chemicals involved. Touch a match to a piece of nylon rope and it will start to burn very slowly. Touch a match to a pile of gunpowder and that reaction will progress significantly faster. The same is true for the eye's cones.
Because each of the three sets of cones has a slightly different chemical reaction that corresponds to its color, each set of cones responds to the sudden appearance, or absence, of its color at a slightly different speed. If they are all being stimulated by white light and the light suddenly goes out, it turns out that because the red chemical reaction stops faster than the blue and green ones, as the white light fades there will be slightly more blue electro-chemical signal being sent to the brain than red. The result is that the brain will interpret this as the white light looking slightly blue or green as it fades. The same happens in reverse. When a bright white light suddenly appears, red reacts faster than the blue and green, so the brain's initial interpretation will be that the white light looks slightly reddish. (A small numerical sketch of this timing argument follows below.) The color illusion perceived depends on the purity of the initial white light, whether it's turning on or off and how fast it does so. Normally, these processes happen so rapidly that they are impossible to notice. However, in the case of Fechner's spinning disks or Benham's tops the brief hint of color repeats every time the disk rotates. When the rotational speed is optimal, these minute flashes in the brain build up enough persistence to be noticed. It's important to note that these colors aren't really being seen. The eyes are looking at a disk with zones that flash white and black. But the time limitations of the chemical reactions that send signals to the brain fool the brain into thinking it's seeing colors. This is why no one can photograph Fechner colors. There aren't any colors to photograph. It would be more accurate to refer to them as Fechner's color illusion. What none of the references I found could explain was why different colors appear at different radii from the center of the disk and why the order of the colors reverses when the direction of rotation reverses. Age, health and individual uniqueness affect the reaction speeds of the cones, which is why some people can't see Fechner colors. Since many of these factors change over time and circumstances, someone who can't see the colors on one day may be able to do so on a different day. The pattern on the disk as well as the rotation speed also plays a major role. Finally, the colors are so subdued that a person may be seeing them but not recognizing them. It's a little like the 3-D hidden image pictures. They're easy once you get the hang of them, but frustratingly difficult until then. The most common colors are a dark red or maroon, a pale gray-green and dark blue. Under the proper conditions orange and a dirty yellow will appear. Occasionally the dark blue deepens to purple. Most Fechner disks are divided into two distinct semicircles. One consists of (usually) concentric black arcs against a white background. The other semicircle is typically solid black. The purpose of the solid black semicircle is to increase contrast so the faint colors are easier to see. This is the same effect that makes black opals appear to have brighter colors than crystal opals. The color-producing areas may be equally bright in both gems but the black background of the black opal makes the colors stand out more. To discover which black-and-white disk patterns worked the best, I searched for as many different patterns as possible, printed them on card stock, cut them out and attached them one at a time to the face of a disk sanding attachment in an electric drill.
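Before the disk-by-disk results, here is the promised numerical sketch of the timing argument, written in Python (the time constants are invented for illustration and are not measured values):

import numpy as np

t = np.linspace(0, 0.1, 6)   # seconds after a white light suddenly turns on
tau = {"red": 0.015, "green": 0.025, "blue": 0.030}   # assumed response times per cone type

for name, tc in tau.items():
    signal = 1 - np.exp(-t / tc)   # each cone signal rises toward full response
    print(name, np.round(signal, 2))

Early in the rise the red signal leads the green and blue signals, so the blend the brain receives is briefly biased toward red, exactly the transient described above; swapping the rise for a decay reverses the bias when the light goes out.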
Testing each in turn, I was able to determine which produced the strongest color illusions and which did not. Along the way I also learned that for the most part the rotational speed needs to be between 80 and 300 revolutions per minute (rpm). Faster, and the spinning pattern blurs into shades of gray. Slower, and the color illusion doesn't appear. The following list presents what I found, starting with the least interesting: I could not get the Fechner color illusion from any of the four disks above no matter what light was used, how fast the disk rotated, or in which direction. This is particularly odd in the case of the disk that's second from the right because that is the design many references cite as being what Fechner used. Displays very weak hints of blue one-third from the center when rotating 120 rpm counter-clockwise. When rotating 120 rpm clockwise the weak blue zone moves to two-thirds out from the center. This is a poorly performing disk. Hints of dirty yellow over gray halfway out from the center in both rotational directions. Not very interesting. Clockwise rotation at 150 rpm produces a brown-yellow streak halfway out from the center with a pale blue streak close to it but slightly further out. A second similar pair shows up two-thirds of the way out. With counter-clockwise rotation the brown-yellow and pale blue streaks are reversed but their distances from the center are the same. The color illusion is weak and not very satisfying. Counter-clockwise rotation at 180 rpm produces a faint green one-third from the center, a dirty yellow halfway out and a brownish-red streak near the edge. Reversing the rotation yields hints of green near the center and a moderately solid red-brown streak halfway out. At higher speeds the red-brown streak holds up well, turning into an almost solid arc. The outer three arcs produced hints of light pink when rotating counter-clockwise at 150 rpm. Clockwise, at the same speed, I saw a hint of pink in the center and two very faint gray-green lines slightly further out. This pattern is interesting because it works best at very low speeds, typically 80 rpm. Clockwise, one-third the way out from the center shows three faint pink lines followed by three pale gray-green lines and finally three bluish lines. Reversing the direction of rotation reverses the order of colors. This is one of the more interesting Fechner disks. At 100 rpm, counter-clockwise, the central four arcs appear strongly purple while the next three look pale blue. Beyond that the disk has a slightly yellow tint. Increasing the speed to 180 rpm changes the center arcs to blue, the middle arcs light green and the outer three arcs a dark red-brown. Driven clockwise, the colors are reversed, but the purple is almost impossible to perceive in the outermost ring. The red of the center four arcs is stronger but the rpms have to be kept slower for the colors to be as strong. Considering that this disk is almost identical to the one two spots earlier, it's surprising how much better it works. Making sure the arcs are concentric appears to be important. At 120 rpm clockwise there is a solid maroon circle near the center, a very pale but solid gray-green circle next and finally an extremely dark blue, almost black, arc near the edge. More interestingly, this is one of the few Fechner disks that holds colors at rpms up to 600. At that speed the colors have a lot of gray in them but from center outward are clearly violet, brown-red, pale green and very dark blue.
Reversing the direction of rotation reverses the colors. However, now the outer ring, which should be violet, is blue. This disk is identical to the previous one except that the solid arcs have been divided into three fat lines. At 100 rpm clockwise, the center three arcs appear maroon and all the other arcs look dark blue. At 150 rpm the center turns more reddish, the next set of lines yellow-green, then gray-green, then blue. At 600 rpm the center looks gray-violet, then dull, faint gray-red, gray-green and finally dark blue. The colors reverse order when the direction of rotation is changed. This is an excellent disk for showing the effect of speed on color density. At low rpms the colors appear solid and more saturated, but they flicker. At higher rpms the colors look washed out but appear to be continuous circles all the way around the disk. Next we have the solid-arc disk again but this time the arcs are divided into four thin lines. At 120 rpm clockwise the center arcs now look decidedly red, not maroon. At higher speeds the violet in the central violet zone is stronger and the next set of arcs outward has a hint of pink in it. This pinkish zone sometimes hints at orange. Otherwise it performs the same as the previous disk. Of all the Fechner disks tested I'd rate this one as the best. This last disk is the one used in Benham tops and is by far the most complex and also the most frustrating. It produces some of the brightest, clearest reds but the spiral-like pattern of the arcs makes them appear to be moving inward and outward so there is never one place you can focus on to study them. It's like looking at meteors. Just when you spot one, it's moved on to somewhere else. Clockwise rotation at 150 rpm produces two streaks of red moving apart inside each wedge shape defined by the fine lines. Underneath these red lines is a hint of green while outside is a line of very dark blue. This Benham disk appears to produce much clearer colors when rotating clockwise than counter-clockwise. For my money I think the following is the best. It's been enlarged in case anyone wishes to copy it to create their own Fechner disk. Making sure that the disk rotates exactly about its center is important for achieving maximum color saturation. If the disk rotates off center the colors will be more muted. Slower rpms create comet-like streaks of colors that look deeper and richer but flicker. Higher rpms produce more even bands of colors that are less saturated. Why Don't Videos Capture Fechner Colors? As with still photos, there are no real colors to record. As to why a video of a spinning Fechner disk doesn't create the illusionary colors, the on-and-off refreshing of computer monitors interferes with the flickering of Fechner disks and in so doing blocks the Fechner illusion from being created in the viewer's mind. In the following video I can't detect any color, yet as I shot the video I could clearly see a wide range of strong colors when I looked directly at the disk. I've previewed a dozen other YouTube videos and while a few of them displayed some beige lines, none of them created anything like the full color illusion. The following video shows a simulation of what the Fechner disk above looks like when viewed in person. In actual use the colors are slightly more intense, particularly the blue, which is a deep, rich midnight blue. Fechner colors are real, but illusionary. They don't exist in the external world but do exist within our brains.
While they may be subdued compared to real colors, the fact that they can spring from pure black and white is nothing short of miraculous. I used family members as test subjects and they could all see the colors. I suspect the claim that some people can't see them is more the product of their not being trained to spot the colors than some physical or mental deficiency.

The best patterns for producing colors appear to be those with just a few sets of concentric arcs. Very complicated designs tend to wash out to gray even at low rpms. Non-concentric arcs produce weaker colors but sometimes create the rare pink. I experimented with a Fechner cylinder, a tube marked with a pattern similar to the most effective disk pattern, but no matter what rotational speed I used I could not create Fechner colors with it.

Fechner disks are inexpensive and easy to make and are great science projects for school or personal interest. There is surprisingly little detailed information available on which disk designs produce the best colors, so the field is ripe for exploration and discovery. I sincerely hope you found this page helpful and wish you the best of luck in your study of this interesting phenomenon.
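For anyone building their own disk, the arithmetic linking rotation speed to flicker rate is simple. The sketch below is illustrative only: the pattern-repeat count is a hypothetical example value, not a measurement of any disk described above, and 30 fps merely stands in for a typical video frame rate, consistent with why videos fail to capture the colors.

```python
# Illustrative arithmetic only: how disk speed translates into flicker rate.
# repeats_per_rev is a hypothetical pattern-repeat count, not taken from
# any specific disk described above.

def flicker_hz(rpm: float, repeats_per_rev: int) -> float:
    """Black/white alternations per second seen at one spot on the disk."""
    return (rpm / 60.0) * repeats_per_rev

# The useful window reported above was roughly 80-300 rpm.
for rpm in (80, 120, 150, 300):
    print(f"{rpm:3d} rpm -> {flicker_hz(rpm, 4):5.1f} Hz flicker")

# A typical 30 frames-per-second video samples far too coarsely to reproduce
# such flicker faithfully, consistent with videos failing to show the colors.
camera_fps = 30
print(f"camera samples at {camera_fps} fps")
```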
Atoms looser than expected

Single-electron merry-go-round measures universal atomic force

All the atoms in the universe just got looser, at least in the eyes of humans. No, the laws of physics didn't change overnight, but our knowledge of how strongly atoms are held together did have to be readjusted a bit in light of a new experiment conducted at Harvard University.

By studying how a single electron behaves inside an electronic bottle, Gerald Gabrielse and his colleagues at Harvard were able to calculate a new value, six times more precise than previous measurements, for a number called the fine structure constant. This constant specifies the strength of the electromagnetic force, which holds electrons inside atoms, governs the nature of light and provides all the electric and magnetic effects we know, from a flash of lightning to a magnet on a refrigerator. Knowledge of these fundamentals helps scientists and engineers design new kinds of electronic devices, and obtain more profound details on the workings of the universe. Gabrielse sums up the experiment this way: "Little did we know that the binding energies of all the atoms in the universe were smaller by a millionth of a percent--a lot of energy given the huge number of atoms in the universe."

Electrons are the outermost part of every atom. When detached from their home atoms, electrons constitute the electricity that flows through all powered machines. By studying an individual electron in isolation from any other particle, scientists can eliminate many of the complications of measuring a single object too small to see with even the most powerful microscopes.

The Harvard scientists achieved extraordinary conditions of isolation for their individual electron. First of all, the inside of their trap apparatus is pumped free of almost all other particles, establishing a vacuum comparable to that in interplanetary space. And it's ultra-frigid inside: the apparatus is chilled to millionths of a degree above absolute zero, a temperature far colder than the surface of Pluto.

The lone electron and its surrounding cage constitute a sort of gigantic atom. Combined electric and magnetic forces in the trap keep the electron in its circular orbit. In addition to this circular motion, the electron wobbles up and down in the vertical direction, the direction of the magnetic field. It's like a giant merry-go-round, with the electromagnetic trap as the carousel and the electron as the lone horse.

The circuitry used to activate the electrodes keeping the electron pretty much centered in the trap is so sensitive that the system knows when the electron is bobbing upwards and approaching one of the electrodes. A feedback effect using the combined electric and magnetic forces, supplied by electrodes and coils, restricts the motion of the electron. This allows the electron's energy to be measured with great precision.

By measuring the electron's properties so meticulously, physicists could improve their calculation of the fine structure constant, the number that determines the strength of the electromagnetic forces that hold all atoms together. The new value for the constant is slightly smaller than the best previous value (revealing atoms to be just a tiny bit looser) and six times more accurate.

The Harvard work with the special electron trap has taken more than twenty years and has produced more than a half dozen PhD theses, all centering on a single electron. These results appear in two papers in the July 21, 2006 issue of the journal Physical Review Letters (prl.aps.org).
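For readers curious what the fine structure constant actually is, here is a minimal sketch using the CODATA values bundled with SciPy. It reproduces the familiar value of roughly 1/137 from the constant's defining combination of other constants; it is not the Harvard measurement itself.

```python
# Compute the fine structure constant from its defining combination of
# constants, using CODATA values shipped with SciPy (not the Harvard result).
from math import pi
from scipy.constants import e, epsilon_0, hbar, c, fine_structure

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)  # dimensionless coupling strength
print(f"alpha       = {alpha:.9e}")
print(f"1/alpha     = {1 / alpha:.6f}")         # ~137.036
print(f"scipy value = {fine_structure:.9e}")    # CODATA reference for comparison
```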
A procedure is a series of steps followed in a regular, definite order to achieve a specified result. The goal of a written procedure is to enable a user to carry out an action with which he or she might not be familiar. Procedures save the writer time, transfer expertise, ensure consistency, and prevent errors and accidents. Procedures may amount to a single sheet for assembling a table, a lengthy manual of operating routines for a nuclear reactor, or a computer manual full of routines for using an operating system like UNIX or DOS.

A procedure is generally organized as follows:

An important aspect of procedures is their extensive use of chunking and step-syntax. Chunking is the sorting of parallel elements out of prose sentences and into discrete elements that are easily located on the page. Step-syntax is the use of special imperative sentences to identify the action in each step of the procedure. A typical imperative begins with the action first, as follows:

Cut the end of the cable, as shown in Figure 2-1, removing any sharp wire ends that protrude from the jacket.

Most instructions contain one or more safety elements. A warning is given before any step that may present an element of harm to the individual performing the step. A caution is given before any step that could present some risk to equipment. A note is included before or after any step that may need some additional explanation.

Assembling the Interference Cable Seal

Warning: Do not work with live cables, which may electrocute you.

1. Ensure that the cable is not attached to a power source.
2. Determine whether or not the optional seal boot will be used and assemble the seal parts (parts list, Section X).
3. Cut the end of the cable, as shown in Figure 2-1, removing any sharp wire ends that protrude from the jacket.

Caution: Do not use lubricants that may degrade the cable sheath.

4. Apply a light coating of silicone compound, such as DC-55, to several inches of the cable.
5. Slide the retaining washer over the cable and push it back out of the way.
6. Slide the boot (if used) over the cable and push it back out of the way.
7. Slide the seal packing onto the cable.
8. Thread the cable the desired amount through the housing bore and out the small opening at the end of the housing.
9. Lubricate the outer surface of the packing with silicone compound.

Note: If the packing has been lubricated, the next step is easily accomplished with thumb pressure only. If additional pressure is required, use a blunt rod to squeeze the packing into the housing.

10. Gently squeeze the packing and slide it into the annular space of the housing bore.
11. Slide the boot (if used) and retaining washer into place and fasten them with the retaining ring.
12. Prepare the end of the cable for making electrical connections.
Images from radio, infrared, optical, ultraviolet and X-ray observatories have been combined to create this unique, comprehensive view of the Crab Nebula: the result of a star that exploded almost 1000 years ago. Image credit: NASA, ESA, G. Dubner (IAFE, CONICET-University of Buenos Aires) et al.; A. Loll et al.; T. Temim et al.; F. Seward et al.; VLA/NRAO/AUI/NSF; Chandra/CXC; Spitzer/JPL-Caltech; XMM-Newton/ESA; and Hubble/STScI.

“The origin and evolution of life are connected in the most intimate way with the origin and evolution of the stars.” -Carl Sagan

The Crab Nebula is one of the most interesting and compelling objects in the entire night sky. In the year 1054, a supernova went off in the constellation of Taurus, where it became brighter than anything other than the Sun and Moon in the sky. Some 700 years later, astronomers discovered the remnant of that supernova: the Crab Nebula.

An optical composite/mosaic of the Crab Nebula as taken with the Hubble Space Telescope. The different colors correspond to different elements, and reveal the presence of hydrogen, oxygen, silicon and more, all segregated by mass. Image credit: NASA, ESA, J. Hester and A. Loll (Arizona State University).

For nearly a millennium, it’s been expanding at 0.5% the speed of light, and the nebula now spans more than 11 light years across. With a neutron star at its core and a shell with incredibly intricate structures, it’s one of our greatest cosmic clues to where the Universe’s enriched, heavy elements came from.

The VLA view of the Crab Nebula showcases a view of this supernova remnant unlike any other we’ve seen. Image credit: NRAO/AUI/NSF.

With the advent of a new, five-wavelength composite, we’re seeing this nebula as never before, and closing in on the last of this supernova’s puzzles.
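As a quick sanity check, a back-of-the-envelope calculation (illustrative only) ties the quoted expansion speed to the quoted size:

```python
# Back-of-the-envelope check: ejecta moving at ~0.5% of light speed since the
# year 1054 should span a size comparable to the ~11 light years quoted above.
speed_fraction_of_c = 0.005   # 0.5% the speed of light
years_elapsed = 2017 - 1054   # "nearly a millennium"

radius_ly = speed_fraction_of_c * years_elapsed  # light years traveled
print(f"radius   ~ {radius_ly:.1f} ly")          # ~4.8 ly
print(f"diameter ~ {2 * radius_ly:.1f} ly")      # ~9.6 ly, same order as ~11 ly
```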
Are you ready to have your mind blown by the awesomeness of science, and the grandeur of the universe? If so, read on.

Let’s begin in 2009, when space shuttle Atlantis flew to the Hubble Space Telescope and added several improvements, including the Wide Field Camera 3. This camera added the capability for the telescope to see into the near-infrared portion of the spectrum — light with a slightly longer wavelength than that of visible light. This light, it turns out, is very good for studying distant galaxies.

Using this new Hubble camera, in 2012 a large team of astronomers identified a number of interesting objects they wanted to study further with ground-based telescopes, which although they do not have the benefit of being located above Earth’s blurring atmosphere, do have larger mirrors. On two nights in April some University of Texas and Texas A&M astronomers got time on one of the two largest telescopes in the world, the Kecks, in Hawaii. They patiently took data on about 40 objects that the Hubble data indicated might be galaxies that formed in the early universe.

To the surprise of the astronomers, only one of them was a hit. “We were excited and disappointed at the same time,” said Steve Finkelstein, an astronomer at the University of Texas who made the observations.

But oh, what a hit. It turns out that the object — catchily named z8_GND_5296 — was a galaxy nearly 30 billion light years from Earth. What they were seeing is the galaxy as it existed about 700 million years after the Big Bang. A paper describing the discovery is published today in Nature (see abstract). For a universe that is 13.8 billion years old, that’s the baby years. In fact, astronomers have never observed a galaxy from an earlier point in the universe’s history. Finding objects this old, when galaxies and stars were only first beginning to form, is essential if scientists are to piece together the story of how our universe developed.

What’s surprising is that only one of the 40 objects on their list turned out to be a distant galaxy. Where were the others? The answer may have to do with a cosmic fog — pervasive hydrogen gas that existed throughout the universe during the first 500 million to 700 million years of the universe’s existence. This neutral hydrogen effectively shrouds most early galaxies from existing telescopes.

So what burned off this cosmic fog? It was actually the first really big and bright stars — about 20 to 100 times the size of our sun. These stars emitted very energetic light particles that collided with the electrons zipping around hydrogen atoms. When these photons have enough energy, as those do from very bright stars, they strip off the electrons. The time when this process occurred is called reionization, and the discovery of this galaxy is helping astronomers to understand when and how it occurred. The universe we live in today remains ionized. Here’s a graphic that shows when and how some of these processes occurred.

What’s with the Aggies and Longhorns working together, anyway? Turns out astronomers from the two departments work closely with one another, and are collaborating with other institutions to build the Giant Magellan Telescope. When it opens, as early as 2018, this will be the largest telescope in the world — 2.5 times the size of the Kecks. And it will allow Texans to see even further back into the early days of the universe. If everything’s bigger and better in Texas, astronomy ought to be no exception.
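The quoted distance and epoch both follow from the galaxy's redshift. As a sketch (assuming the redshift z ≈ 7.51 reported for z8_GND_5296 in the literature, which is not quoted above, and a standard Planck cosmology; exact numbers shift slightly with the cosmology chosen), astropy can reproduce both figures:

```python
# Sketch: derive "nearly 30 billion light years" and "~700 million years after
# the Big Bang" from a redshift. z = 7.51 is the value reported for
# z8_GND_5296 in the literature (an assumption here; it is not quoted above).
from astropy.cosmology import Planck13
import astropy.units as u

z = 7.51
d_now = Planck13.comoving_distance(z).to(u.Glyr)  # distance to the galaxy today
age_then = Planck13.age(z).to(u.Myr)              # universe's age when light left

print(f"comoving distance: {d_now:.1f}")    # ~29-30 billion light years
print(f"age at emission  : {age_then:.0f}")  # ~700 million years
```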
Area and Perimeter

Here we will learn how to find the area and perimeter of plane figures. The perimeter is used to measure boundaries and the area is used to measure the regions enclosed.

The length of the boundary of a closed figure is called the perimeter of the plane figure. The units of perimeter are the same as those of length, i.e., m, cm, mm, etc.

A part of the plane enclosed by a simple closed figure is called a plane region, and the measurement of the plane region enclosed is called its area. Area is measured in square units. The units of area and the relation between them are given below: 1 cm² = 100 mm²; 1 m² = 10,000 cm²; 1 km² = 1,000,000 m².

The formulas for the area and perimeter of different geometrical shapes are discussed below, with examples:

Perimeter and Area of Rectangle:
● Perimeter of rectangle = 2(l + b)
● Area of rectangle = l × b; (l and b are the length and breadth of the rectangle)
● Diagonal of rectangle = √(l² + b²)

Perimeter and Area of the Square:
● Perimeter of square = 4 × S
● Area of square = S × S
● Diagonal of square = S√2; (S is the side of the square)

Perimeter and Area of the Triangle:
● Perimeter of triangle = a + b + c; (a, b, c are the 3 sides of the triangle)
● Area of triangle = √(s(s - a)(s - b)(s - c)); (s is the semi-perimeter of the triangle)
● s = 1/2 (a + b + c)
● Area of triangle = 1/2 × b × h; (b is the base, h is the height)
● Area of an equilateral triangle = (a²√3)/4; (a is the side of the triangle)

Perimeter and Area of the Parallelogram:
● Perimeter of parallelogram = 2 × (sum of adjacent sides)
● Area of parallelogram = base × height

Perimeter and Area of the Rhombus:
● Area of rhombus = base × height
● Area of rhombus = 1/2 × length of one diagonal × length of other diagonal
● Perimeter of rhombus = 4 × side

Perimeter and Area of the Trapezium:
● Area of trapezium = 1/2 (sum of parallel sides) × (perpendicular distance between them) = 1/2 (p₁ + p₂) × h; (p₁, p₂ are the 2 parallel sides)

Circumference and Area of Circle:
● Circumference of circle = 2πr = πd, where π = 3.14 or π = 22/7, r is the radius of the circle and d is the diameter of the circle
● Area of circle = πr²
● Area of ring = Area of outer circle - Area of inner circle
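To make the formulas concrete, here is a small Python sketch implementing a few of them (the function names are our own, chosen for illustration):

```python
# A few of the formulas above as small Python helpers, with worked examples.
import math

def rectangle(l: float, b: float):
    """Perimeter 2(l + b), area l x b, diagonal sqrt(l^2 + b^2)."""
    return {"perimeter": 2 * (l + b), "area": l * b,
            "diagonal": math.sqrt(l**2 + b**2)}

def triangle_heron(a: float, b: float, c: float):
    """Heron's formula using the semi-perimeter s = (a + b + c) / 2."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return {"perimeter": a + b + c, "area": area}

def circle(r: float):
    """Circumference 2*pi*r and area pi*r^2."""
    return {"circumference": 2 * math.pi * r, "area": math.pi * r**2}

print(rectangle(3, 4))          # diagonal 5.0
print(triangle_heron(3, 4, 5))  # area 6.0
print(circle(7))                # circumference ~43.98, area ~153.94
```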
This worksheet can be used in speech and language therapy when utilizing the Expanding Expression Tool Kit™. At the top of the worksheet, students can draw a picture of the object they are describing. This helps with visualization when providing details and describing the object below. Students color each circle to match the corresponding bead from the EET and then write out each description in the box to the right. My lower students dictate what they want to say, and I write their text in yellow highlighter for them to trace or, if time is limited, write it out for them.
1911 Encyclopædia Britannica/Busiris

BUSIRIS, in a Greek legend preserved in a fragment of Pherecydes, an Egyptian king, son of Poseidon and Lyssianassa. After Egypt had been afflicted for nine years with famine, Phrasius, a seer of Cyprus, arrived in Egypt and announced that the cessation of the famine would not take place until a foreigner was yearly sacrificed to Zeus or Jupiter. Busiris commenced by sacrificing the prophet, and continued the custom by offering a foreigner on the altar of the god. It is here that Busiris enters into the circle of the myths and parerga of Heracles, who had arrived in Egypt from Libya, and was seized and bound ready to be killed and offered at the altar of Zeus in Memphis. Heracles burst the bonds which bound him, and, seizing his club, slew Busiris with his son Amphidamas and his herald Chalbes. This exploit is often represented on vase paintings from the 6th century B.C. onwards, the Egyptian monarch and his companions being represented as negroes, and the legend is referred to by Herodotus and later writers.

Although some of the Greek writers made Busiris an Egyptian king and a successor of Menes, about the sixtieth of the series, and the builder of Thebes, those better informed by the Egyptians rejected him altogether. Various esoterical explanations were given of the myth, and the name, not found as a king, was recognized as that of the tomb of Osiris. Busiris is here probably an earlier and less accurate Graecism than Osiris for the name of the Egyptian god Usiri, like Bubastis, Buto, for the goddesses Ubasti and Uto. Busiris, Bubastis, Buto, more strictly represent Pusiri, Pubasti, Puto, cities sacred to these divinities. All three were situated in the Delta, and would be amongst the first known to the Greeks. All shrines of Osiris were called P-usiri, but the principal city of the name was in the centre of the Delta, capital of the 9th (Busirite) nome of Lower Egypt; another one near Memphis (now Abusir) may have helped the formation of the legend in that quarter.

The name Busiris in this legend may have been caught up merely at random by the early Greeks, or they may have vaguely connected their legend with the Egyptian myth of the slaying of Osiris (as king of Egypt) by his mighty brother Seth, who was in certain aspects a patron of foreigners. Phrasius, Chalbes and Epaphus (for the grandfather of Busiris) are all explicable as Graecized Egyptian names, but other names in the legend are purely Greek. The sacrifice of foreign prisoners before a god, a regular scene on temple walls, is perhaps only symbolical, at any rate for the later days of Egyptian history, but foreign intruders must often have suffered rude treatment at the hands of the Egyptians, in spite of the generally mild character of the latter.

See H. v. Gartringen, in Pauly-Wissowa, Realencyclopädie, for the evidence from the side of classical archaeology.
These rights, however, do not necessarily give the monarch any such power. So, what power does the Queen actually have? The royal prerogative.

The royal prerogative is, in a nutshell, a collection of executive powers and privileges held by a reigning (not ruling) monarch. Most of these powers are now exercised by Her Maj’s ministers in government, who implement the law of the day under the power of the Crown, hence the phrase ‘Her Majesty’s Government’. Over the last century, these executive powers have been scarcely used, making many question why they still exist at all (and yes, it’s the republicans asking these questions). The answer to this question is simple. Prerogative powers remain a way to protect British democracy and ensure that nobody, including the monarch and ruling government (in practice), can seize power.

The Queen’s prerogative powers vary greatly and fall into a plethora of long definitions and practices. Here’s the condensed version:

Summoning/suspending parliament: The Queen has the power to suspend and summon the elected parliament.

Declaring war: She can declare war against another country, but really, nowadays, this falls on the ruling prime minister, who can exercise the royal prerogative without counsel from the government of the day.

Appointing the elected prime minister: The Queen is responsible for appointing the prime minister after a general election or resignation. She does this by choosing the candidate with the most support from the House of Commons. If a prime minister resigns, she will seek advice before naming a successor. If she did it without advice, uproar would ensue.

The issue and control of passports: Issuing and withdrawing passports falls under the royal prerogative, which means every British passport is issued in the Queen’s name. This power is used by ministers on behalf of Her Majesty. Oh, and because of this, the Queen herself does not need a passport.

The monarch is above the law: As Queen, Elizabeth II cannot be prosecuted, as the law is carried out in her name.

Appoint/remove ministers: Her Majesty has the power to appoint and remove ministers representing the Crown (i.e. her).

Royal assent: It is the Queen’s responsibility to the nation and her peoples to approve bills from parliament by signing them into law. She can also refuse a bill if she believes it will harm the country.

Head of the armed forces: The Queen is head of the British armed forces – all who join the military must swear an oath of allegiance to her.

Commissioning officers: Elizabeth II can commission officers into the armed forces and also remove them.

Peerages: The Queen may create a life or hereditary peerage for anyone. All honours are given under the authority of the Crown. Because of this, the Queen has the last word on all knighthoods, peerages etc.

Want more royally awesome facts? Here are 17 things you didn’t know about the Queen!
Water Strand Overview

A scientific, model-based understanding of water is necessary for citizens to be able to explain and predict the pathways that water and substances in water will follow through natural and engineered systems, and the implications of these pathways for local and global water supplies. For example, if a local developer proposes to build a new shopping center, citizens should be able to use scientific knowledge to evaluate experts’ claims about the impact of runoff from the roofs and parking lots of the shopping center on stream flows, local flooding, groundwater supplies and water quality. Citizens should also be able to participate in public discussion about how these potential impacts should be managed. Students often learn school science narratives in which they memorize terminology and places, but they often cannot apply their understandings to real-world problems.

A scientific, model-based account of water describes the structure of natural and human-engineered components of surface, atmospheric, soil/groundwater, and biotic systems; the processes of evaporation, infiltration, runoff, and transpiration that move water; the forces that drive water through these systems (e.g., gravity, pressure, thermal energy); and the factors that constrain the rate and direction of water moving through systems (e.g., permeability, topography). A model-based account of substances in water includes the nature of the electrochemical properties of substances, whether substances move in suspension or solution, and the processes that mix and separate substances from water as it moves through natural and human components of systems. Importantly, a scientific, model-based account of water includes how water moving through these pathways impacts water supplies and how human decisions and actions impact these pathways.

The water systems learning progression has four levels of achievement, from informal force-dynamic accounts to scientific, model-based accounts.

Level 4: Scientific Model-based Accounts – Accounts acknowledge driving forces and constraining factors on pathways for water and substances in water.

Level 3: Incomplete School Science Accounts – Accounts provide detailed, although frequently incomplete, stories of water pathways. These stories include hidden and invisible aspects of systems.

Level 2: Force-dynamic Accounts with Mechanisms – Accounts rely on actors or perceived natural tendencies of water to explain water movements or changes in water quality.

Level 1: Human-Centric Force-dynamic Accounts – Accounts identify water in visible, familiar contexts, focus on human uses and experiences with water and rely on humans to move water or change water quality.

Our project has developed the following resources for teachers and researchers:

- Assessments – Open-response assessment items and an annotated key of student performance based on the learning progression
- School Water Pathways – A week-long inquiry science unit focused on supporting students in developing more model-based accounts of water moving along multiple pathways through natural and human-engineered systems. The unit is focused around the question, “How much water falls on our schoolyard during a year and where does it go?” This unit includes teacher and student pages, formative assessment prompts, and student tools for reasoning.
- Substances in Water Unit – A week-long inquiry science unit focused on supporting students in understanding how substances mix and move with water through natural and human-engineered systems. This unit includes teacher and student pages, formative assessment prompts, and student tools for reasoning.
- Formative Assessment Packages – Four formative assessment packages that include student prompts and interpretative materials for teachers.
- Tools for Reasoning – Graphic organizers to support students in considering driving forces and constraining factors for tracing water and substances in water along multiple pathways.

Water Professional Development Materials

Materials were developed and used with K-12 teachers in California, Colorado, Maryland, and Michigan from 2008-2013. These materials are directed at both content knowledge and pedagogy, with materials focusing on how to use learning progressions while teaching about water in the classroom.

Journal Articles, Book Chapters, Conference Presentations, and Papers

- Covitt, B.A., Syswerda, S.P., Caplan, B., and Cano, A.A. 2014. Teachers’ Use of Learning Progression-Based Formative Assessment in Water Instruction. Presentation from the Annual Meeting of the National Association for Research in Science Teaching, Pittsburgh, PA. Powerpoint. Paper.
- Gunckel, K.L., Covitt, B.A., and Salinas, I. 2014. Teachers’ Uses of Learning Progression-Based Tools for Reasoning in Teaching about Water in Environmental Systems. Presentation from the Annual Meeting of the National Association for Research in Science Teaching, Pittsburgh, PA. Powerpoint. Paper.
- Caplan, B., Gunckel, K. L., Warnock, A., & Cano, A. (2013). Investigating water pathways in schoolyards. Green Teacher, 98(Winter), 28-33. (download Paper)
- Salinas, I., Covitt, B. A., & Gunckel, K. L. (2013). Sustancias en el Agua: Progresiones de Aprendizaje para Diseñar Intervenciones Curriculares. Educacion Quimica, 24(4), 391-398.
- Covitt, B.A. & Gunckel, K. L. (2012, March). Using a water systems learning progression to design and test formative assessments and tools for reasoning. Paper presented at the 2012 Annual International Conference of the National Association for Research in Science Teaching, Indianapolis, IN. (download Powerpoint, Paper)
- Gunckel, K. L., Covitt, B. A., Salinas, I., & Anderson, C. W. (2012). A Learning Progression for Water in Socio-Ecological Systems. Journal of Research in Science Teaching, 49(7), 843-868. doi: 10.1002/tea.21024 (abstract)
- Gunckel, K. L., Mohan, L., Covitt, B. A., & Anderson, C. W. (2012). Addressing challenges in developing learning progressions for environmental science literacy. In A. Alonzo & A. W. Gotwals (Eds.), Learning progressions in science (pp. 39-76). Rotterdam, The Netherlands: Sense Publishers. Book
- Caplan, B., and Covitt, B.A. (2011, September). Tracing water and substances in water through pathways in the schoolyard: A new perspective on teaching the water cycle. Sustaining the Blue Planet Global Water Education Conference, Bozeman, MT. (download)
- Covitt, B.A., Gunckel, K. L., Salinas, I., & Anderson, C. W. (2011, September). Learning Progression Based Reasoning Tools for Understanding Water Systems. Sustaining the Blue Planet Global Water Education Conference, Bozeman, MT. (download)
- Covitt, B. A., Gunckel, K. L., & Anderson, C. W. (2010, April). A learning progression for understanding water in socio-ecological systems. Poster presented at the 91st Annual Meeting of the American Educational Research Association, Denver, CO.
(download)
- Gunckel, K. L., Covitt, B. A., & Anderson, C. W. (2010, March). Teacher responses to assessments of understanding of water in socio-ecological systems: A learning progressions approach. Paper presented at the 2010 Annual International Conference of the National Association for Research in Science Teaching, Philadelphia, PA. (download Paper)
- LaDue, N., Covitt, B., Gunckel, K. 2010. Exploring Teacher and Student Conceptions of Groundwater through Drawings. Conference of the North American Association for Environmental Education, Buffalo. (download)
- Covitt, B. A., Gunckel, K. L., & Anderson, C. W. (2009). Students’ developing understanding of water in environmental systems. Journal of Environmental Education, 40(3), 37-51. Abstract
- Gunckel, K. L., Covitt, B. A., & Anderson, C. W. (2009, June). Learning a secondary Discourse: Shifts from force-dynamic to model-based reasoning in understanding water in socio-ecological systems. Paper presented at the Learning Progressions in Science (LeaPS) Conference, Iowa City, IA. (download Paper Powerpoint)
- Gunckel, K. L., Covitt, B. A., Dionese, Dudek, & Anderson, C. W. (2009, April). Developing a learning progression for student understanding of water in environmental systems. Paper presented at the 2009 Annual International Conference of the National Association for Research in Science Teaching, Garden Grove, CA. (download Paper, Poster)

Development of these materials was supported by a grant from the National Science Foundation: Targeted Partnership: Culturally relevant ecology, learning progressions and environmental literacy (NSF-0832173). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
They keep quantum particles cool

Creating junctions inside future quantum processors is a very important step in their design. Actually, some may argue that this is the most important step, as, without a pathway to carry information from one place to the other inside a processor, all other innovations in the area are useless. However, the main problem with junctions is that they heat ions – single atoms, stripped of their electrons – to the point where they lose their precious quantum properties and become useless.

Researchers at the National Institute of Standards and Technology (NIST) have now devised a new type of ion-trap junction that successfully manages to keep ions very cool during transit. In addition to this remarkable feature, the innovation also allows ions to pass through at high enough speeds to set the basis for future quantum computer architectures. A single ion can move through the junction in less than 20 microseconds, and from one area of a processor to another in 50 to 100 microseconds. This speed makes processing large-scale information possible, NIST physicists announce. Thus, a major source of potential computational errors and processing slowdowns is almost neutralized, and its potential effects are brought to a minimum.

The trap itself, a small rectangular device of about 5 x 2 millimeters, is made from alumina (aluminum oxide) and is laser-machined for maximum precision. Its 46 electrodes are created from its gold plating, and it contains 18 ion-trapping zones. A unique, X-shaped bridge makes the junction between the electrodes. The trapping zones are necessary in order to group ions in the pairs they need to be in for the quantum computer to efficiently process information.

In all lab tests conducted thus far, which have used single beryllium ions, the success rate has been over 99.99 percent. In addition, the experts noticed that the temperature of the ions was about a million times lower than that recorded in other junction bridges. The electrically charged atoms (ions) move through these junctions at very high speeds and bump into each other, which generates a tremendous amount of heat. By ensuring that the heat was not created in the first place, the experts basically set the basis for the creation of a very effective information-transmitting system for future quantum computers.
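Those transit times imply a respectable operation budget. A quick, illustrative calculation using only the figures quoted above shows how many moves would fit into one second:

```python
# Illustrative arithmetic from the transit times quoted above.
junction_transit_s = 20e-6   # < 20 microseconds per junction pass
zone_move_s = 100e-6         # 50-100 microseconds between processor regions

print(f"junction passes per second : {1 / junction_transit_s:,.0f}")  # ~50,000
print(f"zone-to-zone moves per sec : {1 / zone_move_s:,.0f}")         # ~10,000 at the slow end
```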
Scientists have finally identified how sugar feeds cancer, in a new research paper which has been hailed as a ‘breakthrough’. The study explains why cancer cells rapidly break down sugars without producing much energy – a phenomenon discovered in 1920 and dubbed the ‘Warburg effect’. Until now, it hasn’t been clear whether the effect was a symptom of cancer or a cause. But a nine-year joint research project conducted by a coalition of Dutch universities has shown that sugar naturally connects with a gene called ‘ras’, which is essential to each cancer cell’s ability to survive. This connection traps cancer so forcefully that cells are powerless to expel it, creating a ‘vicious cycle’ that stimulates the cancer and persistently metabolizes the sugar.
Data from the Meteosat satellite, 36,000 km from Earth, has been used to measure the temperature of lava at the Nyiragongo lava lake in the Democratic Republic of Congo. An international team compared data from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on board Meteosat with data collected at the lava lake with thermal cameras. Researchers say the technique could be used to help monitor volcanoes in remote places all over the world, and may help with the difficult task of anticipating eruptions.

Data from the Meteosat satellite has been used to measure the temperature of lava at a remote volcano in Africa. The scientists compared data from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) on board Meteosat with ground data from a thermal camera, to show the temperature of the lava lake at Nyiragongo, in the Democratic Republic of Congo. The technique was pioneered in Europe, and the researchers say it could be used to help monitor volcanoes in remote places all over the world.

“I first used the technique during a lava fountain at Mt Etna in August 2011,” says Dr. Gaetana Ganci, who worked on the study with colleagues Letizia Spampinato, Sonia Calvari and Ciro Del Negro from the Istituto Nazionale di Geofisica e Vulcanologia (INGV) in Italy. “The first time I saw both signals I was really surprised. We found a very similar radiant heat flux curve — that’s the measurement of heat energy being given out — from the ground-based thermal camera placed a few kilometres from Etna and from SEVIRI at 36,000 km above the Earth.”

Transferring the technique to Nyiragongo was important — partly because the exposed lava lake can yield data important for modelling shallow volcanic systems in general, but more importantly because advance warning of eruptions is necessary for the rapidly expanding city of Goma nearby. The research, published in the Journal of Geophysical Research: Solid Earth, is the first in which Nyiragongo’s lake has been studied using ground-based thermal images in addition to satellite data to monitor the volcano’s radiative power record.

Dr. Ganci and her colleagues developed an algorithm they call HOTSAT to detect thermal anomalies in the Earth’s surface temperature linked to volcanoes. They calculate the amount of heat energy being given out in a target area based on analysis of SEVIRI images. Combining the frequent SEVIRI images with the more detailed but less frequent images from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), they showed that temperature anomalies could be observed from space before an eruption is underway. They believe that space-based observations can be a significant help in the difficult task of predicting volcanic eruptions, but that providing advance warning will never be easy.

“Satellite data are a precious means to improve the understanding of volcanic processes. There are cases of thermal anomalies being observed in volcanic areas just before an eruption,” says Ganci. “Combining different kinds of data from the ground and from space would be the optimal condition — including infra-red, radar interferometry, seismic measurements etc. But even in well-monitored volcanoes like Mt. Etna, predicting eruptions is not a trivial thing.”

The team developed HOTSAT with a view to making an automatic system for monitoring volcanic activity. They are now developing a new version of HOTSAT.
This should allow the processing of all the volcanic areas that can be monitored by SEVIRI in near-real time. Continuing ground-based observations will be needed for validation.

“For remote volcanoes, such as Nyiragongo, providing reliability to satellite data analysis is even more important than in Europe. Thanks to ground-based measurements made by Pedro Hernández, David Calvo, Nemesio Pérez (ITER, INVOLCAN, Spain), Dario Tedesco (University of Naples, Italy) and Mathieu Yalire (Goma Volcanological Observatory), we could make a step in this direction,” says Ganci.

“This study shows the range of science that can be done with Meteosat,” says Dr. Marianne Koenig, EUMETSAT’s atmospheric and imagery applications manager for the Meteosat Second Generation satellites, “and opens up the possibility of monitoring isolated volcanoes.”

Note: The above story is based on materials provided by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT).
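The core physical idea behind radiative-power estimates of this kind can be sketched with the Stefan-Boltzmann law. The snippet below is a minimal illustration with made-up example numbers; it is not the actual HOTSAT algorithm, which works from calibrated SEVIRI radiances.

```python
# Minimal sketch of estimating radiative power from a hot surface's
# temperature via the Stefan-Boltzmann law. Example numbers are invented.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_power(temp_k: float, area_m2: float, emissivity: float = 0.95) -> float:
    """Total power radiated by a surface of the given temperature and area."""
    return emissivity * SIGMA * area_m2 * temp_k**4

# e.g. a hypothetical 200 m x 200 m patch of lava-lake surface at ~1300 K
print(f"{radiative_power(1300.0, 200.0 * 200.0):.3e} W")
```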
According to the World Health Organization (WHO), the Ebola virus causes an acute, serious illness which is often fatal if untreated. Ebola virus disease (EVD) first appeared in 1976 in 2 simultaneous outbreaks, one in Nzara, Sudan, and the other in Yambuku, Democratic Republic of Congo. The latter occurred in a village near the Ebola River, from which the disease takes its name.

The current outbreak in West Africa (first cases notified in March 2014) is the largest and most complex Ebola outbreak since the Ebola virus was first discovered in 1976. There have been more cases and deaths in this outbreak than all others combined. It has also spread between countries, starting in Guinea then spreading across land borders to Sierra Leone and Liberia, by air (1 traveler) to Nigeria and the USA (1 traveler), and by land to Senegal (1 traveler) and Mali (2 travelers).

The most severely affected countries, Guinea, Liberia and Sierra Leone, have very weak health systems, lack human and infrastructural resources, and have only recently emerged from long periods of conflict and instability. On 8 August 2014, the WHO Director-General declared the West Africa outbreak a Public Health Emergency of International Concern under the International Health Regulations (2005).

The virus family Filoviridae includes three genera: Cuevavirus, Marburgvirus, and Ebolavirus. There are five species that have been identified: Zaire, Bundibugyo, Sudan, Reston and Taï Forest. The first three, Bundibugyo ebolavirus, Zaire ebolavirus, and Sudan ebolavirus, have been associated with large outbreaks in Africa. The virus causing the 2014 West African outbreak belongs to the Zaire species.
Ottoman Period in Jerusalem

|Hasan Bey Mosque from Jaffa|

When the Ottoman Turks defeated the Mameluke forces in 1517, Palestine came under the rule of a new empire that was to dominate the entire Near East for the next 400 years. At the outset, particularly during the reign of Sultan Suleiman, known in Turkish as Kanuni, "the Lawgiver," but better known as Suleiman the Magnificent, Jerusalem flourished. Walls and gates, which had lain in ruins since the Ayyubid period, were rebuilt. The ancient aqueduct was reactivated and public drinking fountains were installed. After Suleiman's death, however, cultural and economic stagnation set in, and Jerusalem again became a small, unimportant town. For the next 300 years its population barely increased, while trade and commerce were frozen; Jerusalem became a backwater.

Although the renewal of Jerusalem's Jewish community is attributed to the activity of Nahmanides, who arrived in the city in 1267, the community's true consolidation occurred in the 15th and 16th centuries, with the influx of Jews who had been expelled from Spain.

The 19th century witnessed far-reaching changes, along with the gradual weakening of the Ottoman Empire. Political change in Jerusalem, and indeed throughout the country, was accelerated as part of a policy of Europeanization. European institutions in Jerusalem, particularly those of a religious character, enjoyed growing influence. Foreign consulates, merchants and settlers grew in numbers and in power. These foreigners brought in their wake many innovations: modern postal systems run by the various consulates; the use of the wheel for modes of transportation; stagecoach and carriage, the wheelbarrow and the cart; and the oil-lantern. These were among the first signs of modernization in the city. By mid-century the first paved road ran from Jaffa to Jerusalem; by 1892 the railroad had reached the city.

The Wall and the Damascus Gate

|The Wall and the Damascus Gate|

The wall that encloses the present-day Old City of Jerusalem was built in the sixteenth century by the Ottoman ruler Suleiman the Magnificent. Originally it had seven gates; an eighth, aptly named New Gate, was added in the late nineteenth century in the wall's northwest sector. The largest and most splendid of the portals is Damascus Gate. Located on the wall's northern side, it is adjacent to ruins attesting that this has been the site of the city's main entrance since ancient times. The gate's defenses include slits for firing at attackers, thick doors, and an opening from which boiling oil could be spilled on assailants below.

|A Mosque in the Dunavat District|

The relationship between the Ottoman State and Albania began after 1325. Albania became an Ottoman territory during the reign of Murad II (1421-1451). The country was divided into "timars" in accordance with the Ottoman economic system. The first known and published Ottoman document is a sanjak-i defter dated 835/1431 (H. Inalcik, Survey of suret-i defter-i sanjak-i Arnavid, Ankara, 1954). Albania, occupying a much larger region than today, became an independent State in 1912.

|Halwati Tekke 1782|

|BeylerBeyi Palace from Algeria|

Algeria occupies almost the same territories today as it did during the Ottoman period. The renowned Barbarossa Brothers, Arudj and Khayr al-din, volunteered in 922/1516 for Ottoman sovereignty in order to protect Algeria from the Spanish attacks in the Mediterranean Sea.
After Khayr al-din became the Admiral of the Ottoman Fleet, Algeria was governed first by beylerbeys, then by pashas sent from the Capital, until the 17th century. A sort of regency was then established, first under aghas of military origin, then under the dominance of deys, who ruled autonomously with the help of beys until the French conquest of Algeria in 1837.

Examples of Ottoman architectural works can still be found in Algiers, Constantine, Tlemçen and Ouhran, including mainly mosques, mausoleums, palaces, fortresses, barracks, bridges, fountains and aqueducts. This architecture is characterised by clean white exterior walls and block-like volumes common to North Africa. However, the centralised plan of the mosques reflects the innovations of the Capital. Furthermore, certain arch forms, bricks covering roofs, compositions in plasterwork made of flowers stemming from vases and the use of tiles to decorate palaces indicate to what extent the modes and styles of Istanbul penetrated into the regional architecture of Algeria.
What is Hansen’s disease?

Hansen’s disease, also known as leprosy, is a chronic disease caused by a bacterium called Mycobacterium leprae. It mainly affects the nerves, skin, eyes, and lining of the nose. In spite of its reputation, Hansen’s disease is not easily spread to others and can be cured with antibiotics.

Who gets Hansen’s disease?

Most people are naturally immune to Hansen’s disease. Those at greatest risk for the disease are people who have close contact for months with a person who has the disease but is not being treated. Hansen’s disease is rare in the United States and in most countries in the world.

How is Hansen’s disease spread?

Although the mode of transmission has not been proven, the major source of the bacteria is probably nasal secretions from patients with untreated disease, probably spread through respiratory droplets. Transmission is not achieved through casual contact, like sitting next to someone with Hansen’s disease or sharing a meal. Some armadillos are naturally infected with the bacteria that cause Hansen’s disease, but the risk to most people who come in contact with armadillos is very low.

What are the symptoms of Hansen’s disease?

The symptoms of Hansen’s disease can be very different depending on the type of Hansen’s disease and what part of the body is affected. The first signs of Hansen’s disease are usually pale or slightly red areas or a rash on the skin. Other symptoms can include loss of feeling in the hands and feet, muscle weakness, nodules on the body and a blocked/stuffy nose. If left untreated, Hansen’s disease can lead to more severe symptoms, such as paralysis, blindness, and chronic ulcers.

How soon after exposure do symptoms appear?

The bacteria grow very slowly. It can take from a few weeks up to 30 years (average of 3-10 years) for symptoms to develop after a person has been exposed to the bacteria.

How long can an infected person spread Hansen’s disease?

In most cases, a person will not be able to infect others after receiving a few days of treatment.

How is Hansen’s disease diagnosed?

Hansen’s disease is diagnosed by examining a biopsy of the skin or nerve.

What is the treatment for Hansen’s disease?

Specific antibiotics can be prescribed by a doctor. Treatment involves taking multiple drugs for a long time (i.e., 1-2 years). It is very important for a patient to fully complete the treatment.

How can Hansen’s disease be prevented?

The best way to prevent the spread of Hansen’s disease is early diagnosis and treatment of people who are infected. Household and other close contacts should be seen by a doctor as soon as possible, and then every year for five years after contact with a person who has the disease.

How can I get more information about Hansen’s disease?

- If you have concerns about Hansen’s disease, contact your healthcare provider.
- Call your local health department. A directory of local health departments is located at https://www.vdh.virginia.gov/local-health-districts/.
- Visit the Centers for Disease Control and Prevention website at https://www.cdc.gov/leprosy/.
By Sarah “Steve” Mosko

You’d think that finding far less plastic pollution on the ocean’s surface than scientists expected would be something to cheer about. The reality, however, is that this is likely bad news, for both the ocean food web and humans eating at the top. Ingestion of tiny plastic debris by sea creatures likely explains the plastics’ disappearance and exposes a worrisome entry point for risky chemicals into the food web.

Except for a transient slowdown during the recent economic recession, global plastics consumption has risen steadily since plastic materials were introduced in the 1950s and subsequently incorporated into nearly every facet of modern life. Annual global consumption is already about 300 million tons, with no foreseeable leveling off as markets expand in the Asia-Pacific region and new applications are conceived every day.

Land-based sources are responsible for the lion’s share of plastic waste entering the oceans: littering, wind-blown trash escaping from trash cans and landfills, and storm drain runoff when the capacity of water treatment plants is exceeded. Furthermore, recent studies reveal an alarming worldwide marine buildup of microplastics (defined as a millimeter or less) from two other previously unrecognized sources. Spherical plastic microbeads, no more than a half millimeter, are manufactured into skin care products and designed to be washed down the drain, but escape water treatment plants not equipped to capture them. Plastic microfibers from laundering polyester fabrics find their way to the ocean via the same route.

Given that plastics do not biodegrade within any meaningful human time-scale, it’s been assumed that the quantity of plastic pollution measured over time on the surface waters of the ocean will mirror global plastics production and hence should be rising. However, regional sampling over time indicates that plastic debris in surface waters has been rather static since the 1980s.

In a report just published in the Proceedings of the National Academy of Sciences, an international cadre of scientists calculated that the total load of plastic debris on the surface of the world’s oceans should weigh roughly a million tons, based on a combination of production figures since the 1970s, estimates of the fraction of plastics released into coastal waters that reach open ocean, and the 50 percent of plastics known to be buoyant. However, extrapolating from actual trawl samples taken worldwide during a recent global circumnavigation study (Malaspina 2010 expedition), the scientists discovered that the total weight of surface water plastics is only somewhere between 10,000 and 40,000 tons. This means that only a tiny fraction (1-4 percent) of buoyant plastics at sea is accounted for.

Insight into where the rest might have gone emerged from an analysis of the size distribution of the remaining floating debris. From previous research it was already known that plastic fragments no bigger than a half centimeter outnumber larger debris on the ocean’s surface, a phenomenon attributed to the fact that weathering continually breaks up plastics into ever smaller fragments. Thus the scientists were surprised to find a striking paucity of debris in the one millimeter and smaller size range, the opposite of what would be expected from progressive fragmentation. This indicates that microplastics are being selectively removed from the surface. The scientists posit that zooplankton-eating fish likely account for the loss in surface microplastics.
The missing microplastics are the same size as zooplankton, thus easily mistaken for food. Furthermore, zooplankton eaters that live deep in the ocean rise to the surface at night to feed. This explanation is supported by the fact that plastic debris found in the stomachs of the fish that live off zooplankton is the same size as the missing surface debris, and the same size plastics are also commonly found in the stomachs of larger fish that feed on the plankton eaters.

The number of marine wildlife species known to ingest plastic waste is already in the hundreds. In recent decades, disturbing autopsy images have surfaced in larger creatures – like whales, dolphins, turtles, fish and seabirds – illustrating stomach/intestinal blockage or perforation from ingesting often recognizable plastic items such as plastic bags, fishing line and bottle caps. However, a spate of recent studies has also documented ingestion of microplastics in the millimeter and micrometer range by smaller sea life at lower tiers throughout the ocean food web, everything from zooplankton at the web’s very base to sandworms, barnacles and small crustaceans.

One recent study, finding that micrometer-sized microplastics ingested by the tiniest zooplankton show up rapidly in the intestines of zooplankton one step up the food chain, underscores the potential for upward transfer of plastic debris from one tier to the next. Similarly, transfer to shore crabs from eating common mussels which fed on microplastics is also known to occur.

Humans’ selfish fears about the uptake of plastic materials throughout the food web stem largely from potential chemical threats which could be delivered up the chain. Hazardous chemicals are manufactured into various plastics, like known endocrine disruptors (e.g. phthalate plasticizers and bisphenol-A) or carcinogens (e.g. vinyl chloride and brominated flame retardants). Also, plastics are oily materials and, as such, concentrate oily contaminants from the surrounding seawater, like PCBs (polychlorinated biphenyls) and the breakdown products of the banned pesticide DDT.

Researchers have shown that toxic chemicals within or on the surface of ingested plastic debris can transfer to the tissues of wildlife (e.g. seabirds), and the accumulation, and even bio-magnification, of toxins in wildlife as you go up the food chain, when those toxins are not readily metabolized, is likely the greatest threat to humans.

A study finding microplastics in the soft tissues of oysters and mussels cultured specifically for human consumption, just published in the journal Environmental Pollution, is also unwelcome news for humans. The authors estimated that a shellfish lover could already be ingesting over 10,000 microplastic particles in a year. Add to this recent laboratory evidence of tangible health consequences of ingesting chemicals associated with marine plastics. For example, altered expression of genes signaling endocrine system disruption was recently documented in both male and female fish after eating a diet containing small amounts of microplastics which had first been exposed for a few months to seawater in the San Diego Bay.

Microplastics are generally believed to represent a greater chemical threat than macroplastics because the larger relative surface area of smaller debris allows for more adsorption of toxins from seawater. Thus far, scientists have focused primarily on plastics in the visible millimeter-plus size range.
They express worry, however, that as plastics fragment further into the micrometer and even the nanometer range (100 times smaller than the width of a human hair), the risks to the food web could multiply, not just because of increasing surface area but also because the tinier the debris, the more diverse the wildlife able to ingest it.

Scientists have not ruled out that other factors might also contribute to the disappearance of surface water microplastics, but the evidence thus far points to ingestion as the main one. For instance, plastic debris will sink once biofouling (colonization by micro-organisms) causes it to lose buoyancy, but field experiments show that defouling occurs rapidly after the debris is submerged, allowing it to float back to the surface.

Some historians already refer to the current era as The Age of Plastics. Just as runaway global warming looms as an unexpected consequence of the wanton burning of fossil fuels, the poisoning of the ocean food web could be the lasting legacy of the plastics era. Non-profit marine protection organizations, like the Algalita Marine Research Foundation in Long Beach, the Santa Monica-based 5 Gyres Institute and the Ocean Conservancy in Washington, D.C., are working to draw attention to the urgent need to stem the flow of further plastic debris into the oceans.

There is general agreement that schemes to clean up plastic debris out in the mid-ocean are impractical, no matter how well-intended, as any after-the-fact approach is akin to trying to push back against water blasting from a fire hose. Better to turn off the deluge at the nozzle. To fully address the global problem of plastic ocean pollution, the plastics industry must ultimately reformulate its products, though this will obviously take time. In the interim, we already know two relevant facts: that rivers are a major source of plastic waste entering the oceans, and that a sizeable fraction of plastic debris at sea is eventually deposited on shorelines. Thus, directing resources now to both developing devices to capture waste in rivers before it reaches open ocean and cleaning up waste littering the shorelines makes the best sense.

Consumers can also do their part now through simple behavior changes, like using reusable shopping bags and opting for products packaged in non-plastic alternatives, like glass or paper. And of course everyone is welcome to pitch in on the annual International Coastal Cleanup. On the Sept. 20, 2014 cleanup day, volunteers in 92 countries picked up over 12 million pounds of trash. Contact the Ocean Conservancy to locate a beach cleanup in your area for next year’s event.
Hi Readers! Are you searching for some information on rectal cancer? Then here's what you are looking for, in just a few seconds of reading. Know all about rectal cancer, its origin, and statistics reflecting its incidence. And much more!

Rectal cancer is a disease in which malignant (cancer) cells form in the tissues of the rectum. The rectum is part of the body's digestive system. The digestive system removes and processes nutrients like vitamins, minerals, carbohydrates, fats, proteins, and water from foods and helps pass waste material out of the body. All your food goes here for processing! The digestive system is made up of the esophagus, stomach, and the small and large intestines. The large bowel, or colon, is the first 6 feet of the large intestine. The rectum and the anal canal constitute the last 6 inches. The anal canal ends at the anus (the opening of the large intestine to the outside of the body).

That's about the digestive system. Now let's look into the risk factors! Age and family history can affect the risk of developing this disease. A change in bowel habits or blood in the stool are possible signs of rectal cancer.

Three Important Factors! The treatment and prognosis of this cancer depend on the stage of the cancer, which is determined by the following three considerations:
- How deeply the tumor has invaded the wall of the rectum
- Whether the lymph nodes appear to have cancer in them
- Whether the cancer has spread to any other locations in the body, such as the liver and the lungs

Radiation treatments are given daily, five days a week, for up to 6 weeks. Each treatment lasts only a few minutes and is completely painless; it is similar to having an x-ray film taken. The main side effects of radiation therapy for rectal cancer include rectal or bladder irritation, mild skin irritation, diarrhea and fatigue. These side effects usually resolve soon after the treatment is complete. Chemoradiation is often given for stage II and III rectal cancer. Preoperative chemoradiation is sometimes performed to decrease the size of the tumor.

Statistics!! The following statistics relate to the incidence of rectal cancer:
- According to the Cancer Facts and Figures, American Cancer Society: 23,220 new male cases of rectal cancer in the US in 2004
- According to the Cancer Facts and Figures, American Cancer Society: 18,400 new cases in women in the US in 2002
- According to the Cancer Facts and Figures, American Cancer Society: 17,350 new female cases of rectal cancer in the US in 2004

The term 'prevalence' of rectal cancer usually refers to the estimated population of people who are managing rectal cancer at any given time. The term 'incidence' of rectal cancer refers to the annual diagnosis rate, or the number of new cases of rectal cancer diagnosed each year. Hence, these two statistics can differ greatly.
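The relationship between the two is easy to see with a small worked example. A common epidemiological approximation for a stable disease is prevalence ≈ incidence × average disease duration; the numbers below are purely hypothetical and are not actual rectal cancer figures:

```python
# Hypothetical illustration of incidence vs. prevalence (made-up numbers):
# if 40,000 new cases are diagnosed each year and people live with the
# disease for an average of 5 years, far more people are managing it at
# any given time than are newly diagnosed in any single year.
incidence_per_year = 40_000     # hypothetical new diagnoses per year
average_duration_years = 5      # hypothetical time living with the disease

prevalence = incidence_per_year * average_duration_years
print(prevalence)  # 200000 people managing the disease at a given time
```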
This history of the Chinese Battle of Zhuolu is significant to Israelite history because it provides information concerning the migration of Israelite descendants into Southern China/Southeast Asia. Chiyou, the ancestor of the Miao-Hmong peoples of southern China, was a formidable opponent in the Battle of Zhuolu, and he is worshipped as a god of war.

"The Battle of Zhuolu (traditional Chinese: 涿鹿之戰) was the second battle in the history of China as recorded in the Records of the Grand Historian, fought between the Yellow Emperor (Huang Di) and Chiyou. The battle was fought in Zhuolu, near the present-day border of Hebei and Liaoning. The victory for the Yellow Emperor here is often credited as history, although almost everything from that time period is considered legendary." – Wikipedia, Battle of Zhuolu

The Yellow Emperor (Huang Di) and the Red Emperor (Yandi/Shennong) were forced to join forces in order to deal with Chiyou. The coalition between the Red and Yellow Emperors and their subsequent conquest of the fertile Yellow River central plains resulted in the creation of the Huaxia people. The Red and Yellow Emperors are still honored by the colors of the Chinese national flag of the Huaxia people of northern China: red for the Flame Emperor and yellow for the Yellow Emperor.

"In the pre-Qin era, present-day Luoyang and its nearby areas were considered the "Center of the World", as the political seat of the Xia Dynasty was located around Songshan and the Yi-Luo river basin." – Wikipedia, Central Plain (China)

The fertility of the central plains allowed the huge population of the Huaxia to be sustained, but it must be remembered that the Huaxia was a nation composed of two separate nations, one belonging to Yandi and the other belonging to Huang Di.

"Geographically speaking, the central plain is the heart of China, similar to the Ganges plain in India, and is the traditional power seat of China. The Xia, Shang, and Zhou dynasties were established in and around these areas, so for them, a unified China consists of these areas under control. They are also known as the nine provinces." – Pakistan Defense

It must also be stated again that, without the amalgamation of these formerly two separate groups, the Battle of Zhuolu would probably not have been won by the Huaxia. So, what made Chiyou and his people such formidable warriors? Fire!

Chiyou's relationship with fire is the key to solving the mystery behind his association with war. It should first be noted that Chiyou is a descendant of Shennong. Shennong, the founder of Chinese traditional herbal medicine, has been identified as Yandi. Yandi was known as the Red Emperor or Flame Emperor because of his relationship with fire.

"Yan literally means "flame", and K. C. Wu speculates that this appellation may be connected with the fire used to clear the fields in slash and burn agriculture. In any case, it appears that agricultural innovations by Shennong and his descendants contributed to some sort of social success that led them to style themselves as di (Chinese: 帝; literally: "emperors"), rather than hou (Chinese: 侯; literally: "lord"), as in the case of lesser leaders… The Zuo Zhuan states that in 525 BC, the descendants of Yan were recognized as long having been masters of fire and having used fire in their names." – Wikipedia, Yan Emperor

Chiyou, as a descendant of the Flame Emperor, was also a master of fire, as were his people. However, Yandi is known as the first emperor of China, and the mastery of fire was developed long before China.
Both Shennong and the mastery of fire can trace their origins to Phoenicia/Israel/Syria by way of Africa. We will focus on Africa for now. Chinese and African foreign relations are very ancient, and much trade and travel has occurred between them. Transoceanic trade between East Africa and China predates the voyages of Zheng He and was dominated by the people from whom Zheng He's surname is derived – the Zanji. The Zanji are an East African Bantu people. We have already identified the Bantu as the ancient Israelites (CLICK HERE).

Identifying the Bantu as the Israelites allows us to see the correlation with East African Bantu bull worship, which originated in ancient Egypt. This bull veneration was diffused throughout the African continent during the Bantu Expansion by Bantu African Israelites, who had also built a golden calf after their Exodus from Egypt. Many of the Israelites could not depart from the sinful culture and religion they learned in Egypt and, for this reason, were not allowed to enter into the Promised Land. However, they entered into many other lands. We see the bull worship of Egypt reflected by many different Bantu tribes in Africa.

"So I turned and came down from the mount, and the mount burned with fire: and the two tables of the covenant were in my two hands. And I looked, and, behold, ye had sinned against the Lord your God, and had made you a molten calf: ye had turned aside quickly out of the way which the Lord had commanded you." – Deuteronomy 9:15-16

"They have turned aside quickly out of the way which I commanded them: they have made them a molten calf, and have worshipped it, and have sacrificed thereunto, and said, These be thy gods, O Israel, which have brought thee up out of the land of Egypt." – Exodus 32:8

And we see that same Egypto-Phoenician bull worship of the Bantu Africans reflected in the various depictions of Shennong's descendant Chiyou and of Chiyou's descendants, the Miao-Hmong peoples of southern China (the biblical land of Sinim, H5515).

"And they made a calf in those days, and offered sacrifice unto the idol, and rejoiced in the works of their own hands." – Acts 7:41

"Putting aside the horns, which have at the same time assumed the aspect of a fork, we cannot but be struck by the resemblance of this symbol to those of the Phœnician Caducei, where the Disk seems to be supported by a conical stem." – The Origins of the Caduceus, The Migration of Symbols, by Goblet d'Alviella, PG 232

"They have set up kings, but not by me: they have made princes, and I knew it not: of their silver and their gold have they made them idols, that they may be cut off. Thy calf, O Samaria, hath cast thee off; mine anger is kindled against them: how long will it be ere they attain to innocency? For from Israel was it also: the workman made it; therefore it is not God: but the calf of Samaria shall be broken in pieces." – Hosea 8:4-6

These bull's horns worn upon the heads of the Miao-Hmong peoples – who are descended from Shennong, the Flame Emperor, through his descendant Chiyou – and the slash-and-burn agricultural technique used in China are very strong evidence supporting the ancient origins of China in Africa. Along with the slash-and-burn technique, the terraced rice field agricultural technique was introduced to Asia by Bantu Israelite Africans. The terracing effect is based upon the same techniques with which the pyramids/step pyramids were constructed.
The Israelites introduced this agricultural skill based on the skills they used to build Egypt and its treasure cities (CLICK HERE). This technique allows the energy descending from heaven (the rain) to be equally distributed upon the area of earth that has been terraced/stepped. This can also be compared to the construction of Borobudur Temple in Indonesia. Read more about the dissipation of energy by clicking THE TEMPLE'S ANTENNA.

Moreover, the Chinese, seeking to prove through DNA analysis an independent origin for the Chinese people, actually found evidence to the contrary and validated the African Bantu/Israelite origins of the Chinese people. With this established, it will be easier to understand the association between Chiyou, fire and warfare.

Again, we find that ancient trans-oceanic trade between China and Africa was common. The east coast of Africa, the land of the Zanji Bantu Israelites, was the location of many coastal trade towns. The coastal trade was supplied by the resource-rich interior of Africa and transported across various land trade routes. One of the most important routes from the east coast to the interior was the Luba and Lunda trade network, which moved from the east coast of Africa (the Tanzania area) to the west coast of Angola through the Congo region.

"The Mbudye tradition states that all of the rulers of the Luba Empire traced their ancestry to Kalala Ilunga, a mystical hunter credited with toppling the cruel ruler known as Nkongolo. This figure is also credited with the introduction of advanced iron forging techniques to the Luba peoples. Luba traders linked the Zaire forest to the north with the mineral-rich region in the center of modern Zambia known as the Copperbelt. The trade routes passing through Luba territory were also connected with wider networks extending to both the Atlantic and Indian Ocean coasts… The ruling class held a virtual monopoly on trade items such as salt, copper, and iron ore. This allowed them to continue their dominance in much of Central Africa." – Wikipedia, Kingdom of Luba

This particular trade network was known for its copper and iron ore, which the Chinese eagerly traded for to mint coinage for their economy and to build agricultural tools and weapons.

"The 600 year old coin that proves China was trading with East Africa BEFORE Europeans arrived. [The] copper coin, which has a square hole in the center so it could be worn on a belt, was issued by Emperor Yongle of China, who reigned from 1403-1425 during the Ming Dynasty." – Daily Mail

The agricultural tools and weapons were needed to feed and defend the vast populations of China, which was the main causal factor behind the Zhuolu War. The growing populations and the demand for food and land between competing tribes created a situation very reminiscent of the circumstances behind the Bantu Expansion. The Bantu Expansion was the process of the migration and conquest of sub-Saharan Africa by the Bantu peoples. Their knowledge of animal husbandry and agriculture, acquired by the Israelite Bantus while in Egypt, allowed the nomadic tribes to settle and move from subsistence farming to permanent dwellings with farms of a much larger scale. This abundance and advanced agriculture would not have been possible without the Bantu's mastery of fire. Controlling fire allowed the Bantu to make great advances in urban design, as they were able to make iron in clay furnaces they constructed.
"In 1978 anthropology professor Peter Schmidt and professor of engineering Donald Avery, both of Brown University, announced to the world that, between 1,500 and 2,000 years ago, Africans living on the western shores of Lake Victoria, in Tanzania, had produced carbon steel. The Africans had done this in pre-heated forced-draft furnaces, a method that was technologically more sophisticated than any developed in Europe until the mid-19th century. "We have found," said Professor Schmidt, "a technological process in the African Iron Age which is exceedingly complex… To be able to say that a technologically superior culture developed in Africa more than 1,500 years ago overturns popular and scholarly ideas that technological sophistication developed in Europe and in Africa." – The Lost Sciences of Africa, by Ivan Van Sertima

Recent research and scholarship have shown that various Bantu tribes were able to produce even high-quality steel in these same clay furnaces, 2,000 years before steel was able to be produced in Europe (and European steel, at that, was not of equal quality to the African).

"The issue of innovation features prominently in studies of precolonial African metalworking. In the 1980s, Schmidt and Avery (1983) advanced the hypothesis that East African iron smelters practiced the technique of preheating the air in the furnace before smelting began, thereby raising temperatures to produce high-carbon steels. This preheating technique is central to the blast furnace method prevalent today, and its postulation emphasized the high technical skill of traditional African metallurgists." – The Oxford Handbook of African Archaeology, edited by Peter Mitchell and Paul Lane, PG 140

"The Haya (northern Tanzania) carbon steel pre-heated furnace discovered by Prof. Peter Schmidt and professor of engineering Donald H. Avery of Brown University, reported in the Science Magazine of September 22, 1979, showed that Africans produced steel 2,000 years before Europe. The furnace reached temperatures of 1800°C, some 200°C to 400°C higher than the highest reached in a European cold-blast bloomery." – Shaping the Society: Christianity and Culture, Volume 2, by Pastor Stephen Kyeyune, PG 111

The whole film can be viewed in parts 1-6, but it is dubbed in European Spanish. Even if you don't understand Spanish, it is still interesting to watch, and you are certain to learn a lot from the visuals alone. This is, by far, the largest and most beautiful furnace I've seen.

The ability to produce steel and iron tools and weapons gave the Bantu the ability to dominate the continent of Africa quickly. The mastery of fire allowed the Bantu to master other elements, such as the earth, through their farming tools and slash-and-burn African savannah agricultural techniques, and to master other tribes of people through their development of stronger metals and weaponry. As a result of their mastery of fire and agriculture, Bantu communities grew quickly, and more resources and food were required to sustain their populations. As tribal families grew, new territory was sought to provide for the increased demands, which often brought conflict between lesser and greater nations. The competition for resources during the Bantu Expansion was fierce, and many battles were fought for the domination of trade networks and territory for their respective kingdoms.
This constant warfare and the need for increasingly efficient agricultural tools provided an environment in which some of the most advanced blacksmithing and metallurgical knowledge developed. The Bantu Expansion even created the circumstances for the diffusion of this knowledge into other countries, as Bantu men sought more land to provide for their ever-increasing populations.

"According to the Song dynasty history book Lushi (路史), Chiyou's surname was Jiang (姜), and he was a descendant of Yandi. According to legend, Chiyou had a bronze head with metal foreheads." – Chiyou, Wikipedia

We can conclude that the description of Chiyou in the Song Dynasty history book, the Lushi, is that of a Negro. It is already known that the Bantu, who are Negroes, were present in ancient China. However, the description of Chiyou's head as being bronze sounds like the biblical description of the most famous Negro in the universe – Christ, the Messiah.

"And his feet like unto fine brass, as if they burned in a furnace; and his voice as the sound of many waters" – Revelation 1:15

Beyond the burnished bronze head of Chiyou, the description also speaks of metal foreheads (plural, probably describing the people with Chiyou). The metal foreheads are probably indicative of what we see depicted on the Olmec heads of Meso-America. Ivan Van Sertima, in his book They Came Before Columbus, provides ample evidence in support of West Africans sailing across the Atlantic and forming colonies and trade alliances with some Native American populations. Moreover, the people depicted in the heads of the Olmec statues have also been associated with a maritime warrior dynasty that also travelled to China. The metal forehead protectors and helmets are evidence of a highly militarized group who were extremely familiar with warfare and weapons forging. These people were no strangers to the battlefield and introduced the warrior culture to the Far East. In China they are known as the Xi or Xia people. The Xi/Xia (Chinese for "hero") people preceded the Shang Dynasty of China, who were also described as Negroes.

"The Olmec are known as the Xi People, a group that migrated from Africa. Another group of people who joined the Olmecs were the Black Xia of China. According to historians such as Wayne B. Chandler (African Presence in Early America), two of China's earliest dynasties, the Shang and the Shia, were both heavily Black African/Black Oceanic dynasties, with Mongol Chinese as well. They dominated China from about 2800 B.C. to 1100 B.C. As early as 2200 B.C., members of the Black Shia began migrating out of China after they were replaced by the Black Shang Dynasty. The book A History of the African-Olmecs presents many references from Chinese sources to support the fact of Black civilizations in ancient China. About 1100 B.C., migrants from northern China predominated by Mongoloids called Chou invaded the Chang Kingdom and described the Chang as "black and oily skinned." During that period many of the Black Chang migrated to Southern China, Indo-China and the Pacific Islands. Others went to the Americas, where they met an established Black Mende culture in Mexico." – A History of African Olmecs, by Paul Barton

"In conclusion, the Olmec people were called Xi. They did not speak a Mixe-Zoque language; they spoke a Mande language, which is the substratum language for many Mexican languages." – Africans Came Before Columbus, Evidence of Africans in Ancient America, by Clyde A. Winters
It is also interesting to note that the Olmec/Xi people, who travelled from Africa to the Americas and China, spoke Mande (the language of the Manding people of West Africa). Mande is also said to be the substratum language for many Mexican languages as well as for Chinese.

"Wiener (1922) and Lawrence (1961) have maintained that the Olmec writing was identical to the Manding writing used in Africa. Wiener believed that the Tuxtla statuette was engraved with Mande signs… This affinity between Olmec and Mande signs supported the hypothesis of Wiener that the Tuxtla statuette was written in a Mande/Malinke-Bambara language." – Is Olmec Syllabic Writing African, Chinese or Mixe?, by Clyde A. Winters

The connection between the African Olmecs' Mande language and Chinese can be found on the ancient Chinese oracle bones.

"The first evidence of Chinese writing appeared around 2000 B.C.; pottery found from that period clearly marks the shell and oracle bone characters, which represented writing. According to archeological studies, the Shang symbols are closely comparable with ancient Manding symbols. Though they show a slight difference in contemporary pronunciations, these symbols have the same meaning and shape. This suggests a genetic relationship between these scripts… this cognation of scripts supports the proposed Dravidian and Manding migration and settlement of ancient China during Xia times." – Africans in the Americas: Our Journey Throughout the World, by Sabas Whittaker, M.F.A., PG 79

The connection between the Mande language and another eastern language derived from Chinese, Japanese, is also made by the Jewish historian, linguist, veteran and engineer Joseph Eidelberg in his book The Biblical Hebrew Origin of the Japanese People.

"Later in his engineering career, while managing large Israeli overseas engineering operations, including Iran and Ivory Coast, the author discovered some interesting linguistic and cultural similarities between the Hebrew language and Bambara, spoken by an African tribe in Mali. Joseph, by then fluent in seven languages, began exploring ancient cultures, customs, symbols and words with great interest and unlimited curiosity. It soon became clear to him that through traces of culture, language and symbolic similarities between Hebrew, African languages and Japanese, he may have the right tools and the key to explore what were for him the most fascinating mysteries of Jewish history…" – The Biblical Hebrew Origin of the Japanese People, by Joseph Eidelberg, PG xi-xii

All of these languages – Japanese, ancient Chinese and Manding – are connected by Hebrew. Hebrew was the language spoken by the Israelites, who were scattered to the four corners of the earth and began that dispersion even before the Exodus. It is through these migrations that they started colonies throughout the world and maintained communication through trade networks. It should be quite evident that the Bantu imported iron forging into China, and that through the iron furnaces technologies were developed that advanced Chinese culture and civilization. Israelites arriving by land from Syria along the Silk Road, as well as by sea (Bantu/Phoenicians) from Africa and India, introduced this technology to China. In fact, in reference to the Phoenician West Africans who travelled to the Americas (as the Olmecs) and to China (as the Xi/Xia), the Phoenician Israelites seem to have founded China and are the origin of the name 'China'.
The name China seems to be derived from the Greek word for Canaan, Xvã (Chi-n-ã). Canaan is said to be the first Phoenician, so perhaps the Greeks were referring to the Phoenicians (Israelites) when they spoke of Xvã (China/Canaan).

"In historical times the Phoenicians called themselves Canaanites, and their land Canaan, "the lowlands," the latter applying equally to the coast, and the inland highlands, which the Israelites occupied." – The American Catholic Quarterly Review, Volume 30, PG 333

Josephus cites Herodotus (Histories 2.104), who stated that "the Phoenicians and the Syrians of Palestine were circumcised…" Josephus further states that "there were no inhabitants of Palestine that are circumcised except the Judaeans [the Israelites]."

Besides Phoenicia, which possessed a large Israelite population and was itself called 'China' by the Greeks, another connection between the name China and the Israelite Phoenicians of the land of Canaan can be found in the Hebrew word Kinah/Qiynah (H7015). Kinah was the name of a town in the tribe of Judah as well as a word for the color purple.

"The Hebrew word 'kinah,' referenced in the Shekinah pillar, means 'purple.' Thus, the Shekinah literally means 'she-purple.' The Biblical land of Canaan has the same 'kinah' etymology, making it the 'land of purple.' As a result, the merchants that came from Canaan were called the purple merchants and even traded in a powdered purple pigment." – Purple Shekinah, by Richard Merrick

Shennong and his descendant Chiyou were both descendants of the Bantu Israelites, and it is for this reason that they were known as the Flame Emperor and the masters of fire. The weapons created around the time of the Zhuolu War are evidence of their mastery of fire.

"The Sword of Goujian (越王勾踐劍) is an archaeological artifact of the Spring and Autumn period (771 to 403 BC) found in 1965 in Hubei, China. Renowned for its sharpness and resistance to tarnish, this historical artifact of ancient China is currently in the possession of the Hubei Provincial Museum. The sword was found sheathed in a wooden scabbard finished in black lacquer. The scabbard had an almost air-tight fit with the sword body. Unsheathing the sword revealed an untarnished blade, despite the tomb being soaked in underground water for over 2,000 years. On one side of the blade, two columns of text are visible. Eight characters are written in an ancient script. The script was found to be Bird-worm seal script (literally "birds and worms characters", owing to the intricate decorations of the defining strokes), a variant of seal script. Initial analysis of the text deciphered six of the characters, "King of Yue" (越王) and "made this sword for [his] personal use" (自作用剑)… It is likely that the chemical composition, along with the almost air-tight scabbard, led to the exceptional state of preservation." – Sword of Goujian, Wikipedia

"The Spear of Fuchai (吳王夫差矛) is purportedly the spear of King Fuchai of Wu, the arch-rival of King Goujian of Yue. It was unearthed in Jiangling, Hubei in November 1983. The script on it is a kind of script used only in the states of Wu, Yue and Chu, called 鸟虫文 or bird-and-worm script, a variant of seal script. The inscription mirrors the text of King Goujian's sword, except changing the name of the owner and the type of weapon.
In this case, the text reads, "吴王夫差自作用矛" or "[Belonging to] King Fuchai of Wu, made for his personal use, this spear."" – Spear of Fuchai, Wikipedia

The Bantu Israelites' mastery of fire, and the use of fire in the names of the descendants of Yandi and Chiyou, is the origin of another epithet of the people – the fire nation. Given that the Japanese writing system is derived from Chinese, we can make the association between the Chinese masters of fire and the Japanese kanji that describes the fire nation.

THE FIRE NATION

The Japanese kanji 黒, meaning 'black', is derived from two separate kanji; think of it as a compound word. Interestingly enough, the same kanji, 黑, also means pot/kettle, which is related to the Kettle (Kitchen) God.

"The Kitchen God, also known as the Stove God, named Zao Jun, Zao Shen, or Zhang Lang, is the most important of a plethora of Chinese domestic gods that protect the hearth and family. The Kitchen God is celebrated in Vietnamese culture as well." – Kitchen God, Wikipedia

A separate but related fire god is also found in Iran. Haji Piruz is a Persian character symbolic of the ancient Zoroastrian fire-keeper. Notice that he wears blackface makeup, hinting at the appearance of the original Persian fire-keepers.

"Hāji Piruz or Hajji Firuz (Persian: حاجی پیروز), in the language of literature and satire also Haji or Hajji (Persian: هاجى, a satire maker), is the traditional herald of Nowruz, the Persian New Year. He oversees celebrations for the new year, perhaps as a remnant of the ancient Zoroastrian fire-keeper. His face is covered in soot and he is clad in bright red clothes and a felt hat." – Blackface, Wikipedia

"It appears that Haji Firuz represents the red-dressed fire keepers of the Zoroastrians, who at the last Tuesday of the year, was sent by the white-dressed priests to spread the news about the arrival of the Nowruz. The fire-keeper's second duty was to call on the people to burn their old items in the fire, and to renew their life and regain health by obtaining the solved energy of the fire… Mehrdad Bahar opined that the figure of the Haji Firuz is derived from ceremonies and legends connected to the epic of prince Siavash… He speculates that the name Siyāwaxš might mean "black man" or "dark-faced man" and suggests that the black part of the name may be a reference either to the blackening of the faces of the participants in the afore-mentioned Mesopotamian ceremonies, or to the black masks that they wore for the festivities." – Haji Firuz, Wikipedia

As mentioned earlier, the kanji for surname and fire (里 (nation/surname) + 灬 (fire)) were added together to create the kanji symbolizing the nation/people of fire. It is said that the descendants of China's Yan (flame) Emperor also used fire as their surname and national symbol of identification.

"A long debate has existed over whether or not the Yan Emperor was the same person as the legendary Shennong. An academic conference held in China in 2004 achieved general consensus that the Yan Emperor and Shennong were the same person… No written records are known to exist from the era of Yan's reign. However, he and Shennong are mentioned in many of the classic works of ancient China. Yan literally means "flame", and K. C.
Wu speculates that this appellation may be connected with the fire used to clear the fields in slash and burn agriculture… The Zuo Zhuan states that in 525 BC, the descendants of Yan were recognized as long having been masters of fire and having used fire in their names. Yandi was known as "Emperor of the South"… Both Huangdi and Yandi are considered in some sense ancestral to Chinese culture and people. Also, the tradition of associating a certain color with a particular dynasty may have begun with the Flame Emperors. According to the Five Elements, or Wu Xing model, red (fire) should be succeeded by yellow (earth) – or Yandi by Huangdi." – Yan Emperor, Wikipedia

Another nation of fire can also be found in India: the Aryans.

"After the poems of the Rig Veda, a story emerges over several centuries: the tale of tribes moving across north India, led by the god of fire, burning forest looking for new land. The leaders of these tribes spoke Sanskrit [and] the Rig Veda shows that they fought battles among themselves, and they called themselves Aryans."

It is said that only Brahmins are allowed to learn to read the Rig Veda in the ancient Sanskrit language, according to the documentary The Story of India, episode 1, by Michael Wood. The Aryans were in fact a branch of the Scythians (Saka/Sacae).

"The Scythians, another Aryan group, also moved north from the Caucasus into Europe where their name was changed by the Romans to distinguish between them and other peoples. The sacred emblems of the Scythians included the serpent, the Ox (Nimrod/Taurus), fire (the Sun, knowledge), and Tho or Theo, the god the Egyptians called Pan." – The Biggest Secret, by David Icke, PG 61

"A fourth people, related to these Aryan tribes, who appear at this time in the narrative of Herodotus, are the Scythians." – H. G. Wells, In the Days of the Comet (H. G. Wells Complete 41 Novels)

"These Scyths called themselves Aryas, the "noble" or "illustrious", a title such as is common among primitive peoples; and they became the ancestors of the Medes, the Persians, and the Indian Aryas [Aryans]." – James Kennedy (1919), "XV. The Aryan Invasion of Northern India: an Essay in Ethnology and History," Journal of the Royal Asiatic Society of Great Britain & Ireland (New Series), 51, pp. 493-529

"There is no doubt that the Saka tribes living in Iran belong to the Iranian stock and were from the Aryan race." – Iranian History at a Glance, by Dr Reza Shabani, PG 65

To further confirm that the Scythians were indeed Aryans, we only have to look towards one of the ancient Scythian homelands: Sistan. Sistan, also known as Sakastan (land of the Sakas/Scythians), was within the territory known as Ariana (Aryan).

"Sistan derives its name from Sakastan which, on its part, derives from the name of the Saka tribes. The Saka (known as Scythians in Greek sources) began to settle in this region during the Parthian era… Sīstān, also known as Scythia, Sijistān (Arabic: سجستان), and Sākāstān (Persian/Baloch/Pashto: ساكاستان; literally "land of the Saka or Scythians"), is a historical region in present-day eastern Iran (Sistan and Baluchestan Province), southern Afghanistan (Nimruz, Kandahar, and Zabul Province), and the Nok Kundi region of Balochistan (western Pakistan). At times, the Saka territory encompassed areas as far east as Minnagara on the Indus River, in southwestern Sindh province of present-day Pakistan. Sistan was a part of the region of ancient Ariana.
Sistan was once the homeland of the Saka, a Scythian tribe of Iranian origin." – Sistan, Wikipedia

"The Greek term Arianē (Latin: Ariana) is based upon an Iranian word found in Avestan Airiiana- (especially in Airiianəm Vaēǰō, the name of the Iranian peoples' mother country)… The names Ariana and Aria, and many other ancient titles of which Aria is a component element, are connected with the Sanskrit term Arya-, the Avestan term Airya-, and the Old Persian term Ariya-, a self designation of the peoples of Ancient India and Ancient Iran, meaning "noble", "excellent" and "honourable"." – Ariana, Wikipedia

"The term Iranian is derived from the Old Iranian ethnical adjective Aryana, which is itself a cognate of the Sanskrit word Arya. The name Iran is from Aryānām; lit: "(Land) of the Aryans". The old Proto-Indo-Iranian term Arya, per Thieme meaning "hospitable", is believed to have been one of the self-referential terms used by the Aryans, at least in the areas populated by Aryans who migrated south from Central Asia. Another meaning for Aryan is "noble"." – Iranian peoples, Wikipedia

"The religious beliefs of the Scythians were a type of pre-Zoroastrian Iranian religion and differed from post-Zoroastrian Iranian thought. Foremost in the Scythian pantheon stood Tabiti, who was later replaced by Atar, the fire-pantheon of Iranian tribes, and Agni, the fire deity of Indo-Aryans." – History of Humanity: From the Seventh Century B.C. to the Seventh Century A.D., edited by Sigfried J. de Laet and Joachim Herrmann, PG 182

The connections between fire, the Aryan-Scythians and the Israelites are numerous. This is a very interesting fact, because the Scythians were Israelites, and there was another Fire Nation of Israelites who arrived in China by sea. They also introduced fire technology and slash-and-burn agriculture to China under the leadership of Shennong/Yandi and, later, Chiyou. In fact, the red color on the red and yellow Chinese flag is symbolic of the Flame Emperor and his tribesmen – the Fire Nation. We remember that his descendants were known to be 'masters of fire', and this mastery was mainly attributed to their blacksmithing abilities. According to Chinese legend, it was Chiyou who introduced the craft of the blacksmith to the Chinese, much like Azazel introduced warfare to mankind before the flood.

"The Miáo are an ancient Chinese people; according to legend they are descended from the tribe of Chīyóu. Chiyou is said to have had bull's horns growing from his head and to have been the inventor of the jian (sword), ji (halberd), and other traditional Chinese weapons. He is still worshiped by the Miao peoples as their ancestor, and has been worshiped since ancient times by the Han Chinese as a god of war. Even today the Miao people attribute their martial arts to the teachings of Chiyou." – Miao People, Wikipedia

HMONG MIAO MARTIAL ARTS

Notice the mitre/turban worn by the Miao men, with the crown of the head exposed. Even the women have fight in them. The ancient Hmong kicking game resembles the leg kicking of Muay Thai. The more ancient style, Muay Boran, was developed by the same Hmong-Khmer people who also developed Hmong martial arts from Chiyou. It is for these reasons that Chiyou was such a formidable foe during the War of Zhuolu. Like Hephaestus of the Greeks, Chiyou and his tribesmen crafted the finest weapons and tools. Chiyou also taught the people the martial arts and the techniques for efficiently utilizing the weapons and tools they had crafted.
These martial techniques and the knowledge of iron/steel forging in blast furnaces were derived from the Israelites/Bantu, of whom Chiyou was a descendant. Chiyou was a fierce warrior in ancient times, and his descendants originating in southern China continue with the same warrior spirit today. The Vietnamese were never defeated in the Vietnam War, and the United States had to withdraw because of the rigors of jungle warfare and the high battle morale of the people. The whole world has grown familiar with Muay Thai kickboxing as an essential element of cross-training for any mixed martial artist. The eternal fire of the Israelites, by way of Yandi, Chiyou, and their descendants, is still burning to this day.

"He led his tribe in battle against the coalition of tribes under Huang Di (the Yellow Emperor), who is considered the father of the Han (Chinese) ethnic group. After losing to Huang Di at the Battle of Zhuolu (26th century BC) his tribe was forced south. Since that time the Miao have been pushed south, with some settling in each area while others moved on. Over time the Miao fragmented into different tribes, each with their own local customs… Today Miao tribes can be found from Hunan province in the north, west to Sichuan and Yunnan provinces, and south to Laos and Burma." – Martial Arts of the Miao People, YouTube

It is interesting to note the similarities between the Miao headdress and those found on the Easter Island statues and among the Seminole Indians of North America. Are these correlations further evidence to support the Phoenician connections to the Far East by way of the African and American continents?

For more information on Israelites in China, the following blogs are relevant:

OTHER RELATED TOPICS
Phoenicians in the Far East
Lost Tribes on The Silk Road

Questions, comments, concerns?
An ecosystem involves all the living and non-living aspects of an area. Desert ecosystems are unusual because they are very dry and have specifically evolved plants and animals that can survive the local climate. Learning about desert ecosystems can be fun when doing educational activities and projects about their different aspects. Keep reading to learn more about the desert for kids and adults alike!

Describe Desert Climate

If we are going to define a desert, we look at the amount of rainfall. Deserts are often hot during the day and cool at night, but there is some variation. A fun way to learn about the desert for kids is to make a temperature and rainfall map. Start with a map of the world with the desert areas outlined. Have kids research the temperatures of each of the deserts and categorize them by temperature. Color code the deserts according to temperature. Give them a clear sheet, such as an overhead projector page, and have them draw patterns over the deserts based on the average yearly rainfall.

The animals that live in the desert are specifically adapted to the environment. One desert ecosystem learning activity involves animal projects. You can tell kids about the adaptations of different animals for the desert or have them read and research them on their own. Then ask them to design their own desert-dwelling animal. Kids can apply the information they learned to create their animal and then explain why their animal would do well in the desert environment.

Certain plants are also adapted for desert life. They have evolved to live on very little water in very hot climates. Learning about the desert for kids can start with caring for a desert plant like a cactus. This can be a project for an entire class or just one kid. Extensively research the requirements for the plant, and set up an area for it that has the right amount of sun or light. Have children make a calendar for watering the plant, describe the desert plant's needs, and note the amount of water needed. The most important part of this project is the planning. You can then compare the needs of the desert plant to those of a rain forest plant.

A desert ecosystem includes not only the climate and living things, but the soil and sand as well. For kids, learning about soil can be very boring, because it's all about the types of materials in the soil. One way to make this more interesting is to create small bowls of different things that can be found in desert soil, such as sand and small amounts of dead plant matter. You can set it up in proportions so that kids can see how much of one material there is compared to another. Have kids describe desert soil/sand vs. forest soil. They can then see what's in the soil and feel the materials. After looking at them individually, kids can combine the materials to create their own desert soil.

About the Author

Halley Wilson started publishing in 2003 with Niner Online at the University of North Carolina, Charlotte. She has a Bachelor of Arts in Japanese with a minor in anthropology from the University of North Carolina at Chapel Hill and is currently enrolled in a Master of Arts program for general linguistics there.
Energy is generally defined as the potential to do work or produce heat. This definition makes the SI unit for energy the same as the unit of work: the joule (J). The joule is a derived unit of energy, named in honor of James Prescott Joule and his experiments on the mechanical equivalent of heat. In more fundamental terms, 1 joule is equal to:

1 J = 1 kg·m²/s²

Since energy is a fundamental physical quantity used across many branches of physics and engineering, there are many units of energy in use.

Kilowatt-hour (unit: kWh)

The kilowatt-hour is a derived unit of energy. It is used to measure energy, especially electrical energy in commercial applications. One kilowatt-hour equals one kilowatt of power produced or consumed for one hour (kilowatts multiplied by the time in hours). Electric utilities commonly use the kilowatt-hour as a billing unit for energy delivered to consumers.

1 kW·h = 1 kW × 3600 s = 3600 kW·s = 3600 kJ = 3,600,000 J

One kilowatt-hour corresponds to the heat required to evaporate 1.58 kg of liquid water at 100°C. A 100-watt radio that operates for 10 hours continuously consumes one kilowatt-hour.

- 1 kWh = 3.6 × 10⁶ J
- 1 kWh = 8.6 × 10⁵ cal
- 1 kWh = 3412 BTU
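As a quick sanity check of the conversions above, here is a minimal sketch in Python; the constants 1 cal = 4.184 J and 1 BTU ≈ 1055.06 J are standard reference values rather than figures taken from this article.

```python
# Convert kilowatt-hours to joules, calories and BTU using the relations
# quoted above (1 kWh = 3.6e6 J; 1 cal = 4.184 J; 1 BTU ~= 1055.06 J).
J_PER_KWH = 1000 * 3600   # 1 kW sustained for 3600 s
J_PER_CAL = 4.184
J_PER_BTU = 1055.06

def kwh_to_joules(kwh):
    return kwh * J_PER_KWH

def kwh_to_calories(kwh):
    return kwh_to_joules(kwh) / J_PER_CAL

def kwh_to_btu(kwh):
    return kwh_to_joules(kwh) / J_PER_BTU

print(kwh_to_joules(1.0))           # 3600000.0 -> 3.6 x 10^6 J
print(round(kwh_to_calories(1.0)))  # ~860421   -> ~8.6 x 10^5 cal
print(round(kwh_to_btu(1.0)))       # ~3412 BTU
```

The 100-watt radio example checks out the same way: 0.1 kW × 10 h = 1 kWh.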
3. What punishment do you consider to be the least / most severe?

Exercise 2. Match the following English words and expressions with their Ukrainian equivalents:
2. corporal punishment
3. confinement in jail
5. as well as

Exercise 3. Match the words and their transcription, read and translate the words:

Exercise 4. Read the text to understand what information on criminal punishment is of primary importance or new for you.

Types of Punishment

Criminal punishment is a penalty imposed by the government on individuals who violate criminal law. People who commit crimes may be punished in a variety of ways. Offenders may be subject to fines or other monetary assessments, the infliction of physical pain (corporal punishment), or confinement in jail or prison for a period of time (incarceration). In general, societies punish individuals to achieve revenge against wrongdoers and to prevent further crime – both by the person punished and by others contemplating criminal behaviour. Some modern forms of criminal punishment reflect a philosophy of correction, rather than (or in addition to) one of penalty. Correctional programs attempt to teach offenders how to substitute lawful types of behaviour for unlawful actions.

Throughout history and in many different parts of the world, societies have devised a wide assortment of punishment methods. In ancient times, societies widely accepted the law of equal retaliation (known as lex talionis), a form of corporal punishment that demanded "an eye for an eye." If one person's criminal actions injured another person, authorities would similarly maim the criminal. Certain countries throughout the world still practice corporal punishment. For instance, in some Islamic nations officials exact revenge-based corporal punishments against criminals, such as amputation of a thief's hand. Monetary compensation is another historic punishment method. In England during the early Middle Ages, payments of "blood money" were required as compensation for death, personal injury, and theft. Although some societies still use ancient forms of harsh physical punishment, punishments have also evolved along with civilization and become less cruel.

Contemporary criminal punishment also seeks to correct unlawful behaviour, rather than simply punish wrongdoers. Certain punishments require offenders to provide compensation for the damage caused by their crimes. There are three chief types of compensation: fines, restitution, and community service.

A fine is a monetary penalty imposed on an offender and paid to the court. However, fines have not been widely used as criminal punishment because most criminals do not have the money to pay them. Moreover, fining criminals may actually encourage them to commit more crimes in order to pay the fines.

The term restitution refers to the practice of requiring offenders to financially compensate crime victims for the damage the offenders caused. This damage may include psychological, physical, or financial harm to the victim. In most cases, crime victims must initiate the process of obtaining restitution from the offender. Judges may impose restitution in conjunction with other forms of punishment, such as probation (supervised release to the community) or incarceration. Alternatively, restitution may be included as a condition of an offender's parole program. Prisoners who receive parole obtain an early release from incarceration and remain free, provided they meet certain conditions.
Offenders sentenced to community service perform services for the state or community rather than directly compensating the crime victim or victims. Some of the money saved by the government as a result of community service work may be diverted to a fund to compensate crime victims.

The most serious or repeat offenders are incarcerated. Criminals may be incarcerated in jails or in prisons. Jails typically house persons convicted of misdemeanours (less serious crimes), as well as individuals awaiting trial. Prisons are state or federally operated facilities that house individuals convicted of more serious crimes, known as felonies. The most extreme form of punishment is death. Execution of an offender is known as capital punishment. Like corporal punishment, capital punishment has been abolished in Ukraine.

Exercise 1. Read the statements. Are they true or false?
1. Criminal punishment is imposed by the individuals who violate criminal law.
2. A fine is a kind of monetary assessment.
3. Confinement in jail or prison for a period of time is called incarceration.
4. The only reason to punish offenders is to achieve revenge against wrongdoers.
5. At present, societies widely accept the law of equal retaliation.
6. No societies use forms of harsh physical punishment nowadays.
7. Community service is one of the three types of compensation for the damage caused by crimes.
8. Fines are often used as criminal punishment.
9. Restitution may be included as a condition of an offender's parole program.
10. The most serious or repeat offenders are incarcerated.
11. Criminals may be incarcerated in courts or police offices.
12. Both corporal and capital punishment have been abolished in Ukraine.

Exercise 2. Match the parts of the sentences.
A. Corporal punishment
C. Lex talionis
H. Community service
I. Capital punishment
1) supervised release to the community
2) less serious crimes
3) a monetary penalty imposed on an offender and paid to the court
4) the practice of requiring offenders to financially compensate crime victims for the damage the offenders caused
5) the infliction of physical pain
6) performing services for the state or community
7) execution of an offender
8) confinement in jail or prison for a period of time
9) obtaining an early release from incarceration while remaining free, provided an offender meets certain conditions
10) more serious crimes
11) the law of equal retaliation, a form of corporal punishment that demanded "an eye for an eye"

III. VOCABULARY STUDY

Exercise 1. Match the words with their definitions and with the crimes committed.
Definitions:
- remain in one's home for a certain period of time
- spend the rest of one's life in prison with no chance of going back into society
- a prison for a young offender who is waiting to go to court
- driving rights are removed for a certain period of time
- leaves marks on driving record / involves paying a fine
- pay money as punishment
- do volunteer work such as teaching children about crime or cleaning up garbage
- life in prison
- spend a certain amount of months or years locked away from society
Crimes committed:
- assault
- hunting out of season
- a youth that steals a car for the first time
- minor / petty crime

Exercise 2. Complete the text with the words from the box.

The major driving force underlying all punishment is _____, also referred to as retribution. The word retribution derives from a Latin word meaning "to pay back." In retaliation for _____, societies seek to punish individuals who violate the rules.
Criminal punishment is also intended as a deterrent to future criminality. Offenders who are _____ may be deterred from future wrongdoing because they fear additional punishment. Others who contemplate _____ may also be deterred from _____ behaviour. Societies also _____ punishments in order to incapacitate dangerous or unlawful individuals by restricting their liberty, and to _____ these wrongdoers and correct their behaviour.

Exercise 3. Make up sentences from the words.
1) from society / or incarceration / crime prevention / Isolating criminals / is the most direct method of / through confinement.
2) penalize wrongdoers / seeks to / and transform their behavior, / rather than / correct criminals / merely / Contemporary criminal punishment.
3) harsh physical punishment, / some societies / punishments have also / Although / evolved along with civilization / and become less cruel / still use ancient forms of.
4) contemporary punishments / In most industrialized societies, / are / or / either fines / or both / terms of incarceration.
5) refers to / requiring offenders / to financially compensate / for the damage / the offenders caused / The term restitution / the practice of / crime victims.
6) or / are incarcerated / The most serious / repeat offenders.
7) certain undesirable individuals, / such as / Some societies / with banishment or exile / criminals and political and religious dissidents, / punish.
8) capital punishment / Opponents of / barbaric and degrading / see it as / to the dignity of the individual.

Exercise 4. Give the English equivalents for the following word combinations:

IV. GRAMMAR FOCUS

Exercise 1. Look at the list of the connectors and match them with their synonyms.
as well as
what is more

Exercise 2. Point out sentences with these connectors in the text and explain their use.

Exercise 1. Role-play. Student A is a police officer and student B is a suspect. Make up a dialogue. The replies below will help you.

Questions from law breakers or suspected criminals:
− Why did you pull me over?
− Have I done something wrong?
− Is this illegal?
− What are my rights?
− Can I call a lawyer?
− Where are you taking me?
− Can I make a phone call?

Questions police may ask a suspected criminal:
− Are you carrying any illegal drugs?
− Do you have a weapon?
− Does this belong to you?
− Whose car is this?
− Where were you at eight last night?

Informing someone of laws and police procedures:
− You are under arrest.
− Put your hands on your head.
− I am taking you to the police station.
− Please get in the police car.
− You will have to pay a fine for this.
− I will give you a warning this time.
− I'm going to write you a ticket.
− We'll tow your car to the station.
− Smoking in restaurants is illegal in this country.
Diabetes, a chronic medical condition, develops when there is an excessive level of glucose in the blood because of the body's inability to move glucose from the bloodstream into its cells. Under normal circumstances, the absorption of glucose into the body's cells is undertaken with the help of a hormone called insulin, produced by the pancreas. However, in the case of diabetes, the pancreas is unable to produce any, or an adequate amount of, the insulin required to absorb glucose into the body's cells. In some cases the insulin that is produced is not able to work properly (known as insulin resistance).

- Glucose is a sugar that is the ultimate source of energy for all human beings. It works as a fuel for energy, allowing us to work, play and live our lives.
- While glucose is produced from carbohydrates in our diet, its absorption into the body's cells is carried out by insulin, a hormone produced by the pancreas.
- In the case of diabetes, the glucose in the body is not absorbed into its cells; it continues to build up in the blood itself and cannot be used as fuel.
- There are three major types of the disease: Type 1, Type 2, and gestational diabetes.

ONE of every THREE people with diabetes is unaware that they have it. Might you or a loved one be one of them? Read on to see if your risk of having diabetes is high.

Who Gets Diabetes

Before proceeding to the diabetes prevention tips, let us discuss the risk factors for diabetes. There are many factors that increase your risk. To find out about yours, note each item on this list that applies to you (a simple way to tally the answers in code is sketched after this list):

- I am 45 years of age or older.
- My current waist size is above normal. You can measure your waist circumference by placing a tape measure around your body at the top of your hipbone and above your belly button. A measurement of more than 35 inches for women or more than 40 inches for men is considered above normal.
- I have a parent or sibling with diabetes.
- I had diabetes while I was pregnant, or I gave birth to a baby weighing more than 9 pounds.
- I have been told that my blood glucose (blood sugar) levels are higher than normal.
- I am fairly inactive: I am physically active less than three times a week.
- I have been told that I have polycystic ovary syndrome (PCOS).

If any of the points above apply to you, be sure to talk with your health care team about your risk for diabetes and whether you should be tested.
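As promised above, here is a minimal sketch (purely illustrative, not a medical tool) that tallies the self-screening checklist; the questions paraphrase the list, and the example answers are hypothetical:

```python
# Tally the self-screening checklist above. True = "this applies to me".
RISK_QUESTIONS = [
    "45 years of age or older",
    "waist size above normal (>35 in for women, >40 in for men)",
    "parent or sibling with diabetes",
    "diabetes during pregnancy, or a baby weighing more than 9 pounds",
    "told that blood glucose levels are higher than normal",
    "physically active less than three times a week",
    "polycystic ovary syndrome (PCOS)",
]

def count_risk_factors(answers):
    """Count 'yes' (True) answers to the checklist."""
    return sum(bool(a) for a in answers)

answers = [True, False, True, False, False, True, False]  # hypothetical
n = count_risk_factors(answers)
if n > 0:
    print(f"{n} risk factor(s) apply - discuss testing with your doctor.")
else:
    print("No listed risk factors apply.")
```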
Fasting glucose test: To prepare for this blood test, you should not eat anything for 8 hours before taking it. The results (in mg/dL) can be interpreted using the following guidelines:
- If your blood sugar is less than 100 – normal
- If your blood sugar is 100-125 – prediabetes
- If your blood sugar is 126 or higher – diabetes

Oral glucose tolerance test: This blood test follows the fasting plasma glucose test. After the fasting test, you drink a sugary solution; two hours later, your blood sugar is measured again. The results can be interpreted using the following guidelines:
- If your blood sugar is less than 140 after the second test – normal
- If your blood sugar is 140-199 after the second test – prediabetes
- If your blood sugar is 200 or higher after the second test – diabetes

Hemoglobin A1C (average blood sugar) test: This blood test shows your average blood sugar level over roughly the past 3 months. Doctors can use it to diagnose prediabetes or diabetes or, if you already know you have diabetes, to check whether it is under control. The results can be interpreted using the commonly cited guidelines:
- Below 5.7% – normal
- 5.7-6.4% – prediabetes
- 6.5% or higher – diabetes

Diabetes prevention & management – steps we can take
Whatever your risks are, there is a lot you can do for diabetes prevention.

Quit the sedentary lifestyle: One of the major drivers of the growing number of diabetes cases is the increasing prevalence of obesity caused by sedentary living. Diabetes is no longer confined to middle-aged and elderly people; it is increasingly common among young people and even children. It is therefore imperative that measures to prevent or delay the development of diabetes be deployed urgently.

Get physically active: Physical activity should be another key component of your diabetes prevention and management plan. Working out makes your muscles use sugar (glucose) for energy, and regular, consistent exercise also helps your body use insulin more productively. The more active your workout regime, the longer the positive effect lasts. Even light activities, such as housework, gardening or being on your feet for extended periods, can improve your blood sugar level. Together, these factors lower your blood sugar and contribute to diabetes prevention.

Keep your weight within or near a healthy range: Watching your weight is also very useful for diabetes prevention, especially for people who are overweight or obese. Losing 5-10% of body weight can help reduce and stabilize blood sugar levels; over time, this improvement may also allow a reduction in diabetes medication.

Eat a balanced diet: The right meal plan helps improve your blood glucose, blood pressure and cholesterol numbers, and also helps keep your weight on track. There is no single perfect diet for diabetes prevention; rather, different foods should be combined into a nutritious and balanced regime. The diet should include a healthy mix of foods rich in fibre, with adequate vitamins, minerals and proteins, for example in the form of pulses. It is also vital to keep meal portions moderate: large meals can spike blood sugar, while moderate meals help keep it within normal levels.

No smoking, no alcohol: People with diabetes who smoke have higher blood sugar levels, and less control over them, than non-smokers with diabetes.
Smoking affects circulation by increasing heart rate and blood pressure and by narrowing small blood vessels. It allows dangerous fatty material to accumulate in blood cells and blood-vessel walls, making the vessels sticky and narrow. This can lead to several diseases, including cardiovascular ailments, stroke and blood-vessel disease. Young adult smokers with diabetes are at further risk: they are two to three times more likely to fall ill than non-smokers with diabetes.

People with diabetes should also watch their alcohol consumption closely. The liver normally releases stored sugar to counteract falling blood sugar levels, but if your liver is busy metabolising alcohol, your blood sugar may not get the boost it needs. This is essentially why, shortly after drinking, the body can experience low blood sugar: the liver is engaged in removing alcohol from the bloodstream rather than regulating blood sugar levels.

Epidemiological studies show a significant burden of type 2 diabetes in India. This can be attributed to high genetic risk, with other risk factors such as age, obesity, abdominal adiposity and a high percentage of body fat also playing a significant role. For a given BMI, Indians have higher abdominal obesity, an important parameter in several metabolic-syndrome diseases. A large proportion of urban Indian adults have the metabolic syndrome, which increases the risk of both diabetes and cardiovascular disease. Indians develop diabetes at a lower body mass index (BMI) and waist circumference than people in Western countries. There are also huge regional variations in the prevalence of diabetes in India, with low occurrence in rural areas and high occurrence among urban subjects. The disease is more prevalent in the southern regions than in the northern and eastern parts of the country.
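To see how the diagnostic cut-offs listed above fit together, here is a minimal sketch in Python. The function names and structure are my own illustration, not part of any medical guideline, and a real diagnosis always requires a clinician.

```python
def classify_fasting(mg_dl: float) -> str:
    """Interpret a fasting blood glucose reading (mg/dL) per the guidelines above."""
    if mg_dl < 100:
        return "normal"
    if mg_dl <= 125:
        return "prediabetes"
    return "diabetes"

def classify_ogtt_2h(mg_dl: float) -> str:
    """Interpret a 2-hour oral glucose tolerance test reading (mg/dL)."""
    if mg_dl < 140:
        return "normal"
    if mg_dl <= 199:
        return "prediabetes"
    return "diabetes"

print(classify_fasting(118))   # prediabetes
print(classify_ogtt_2h(205))   # diabetes
```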
According to the duration of information storage in the brain, psychologists distinguish short-term memory (STM), long-term memory (LTM) and working memory (WM). A person who has been given the task of remembering new information draws, in order to complete the task successfully, not only on previously acquired knowledge (LTM) but also on the specific instructions that help realize the intended goal (WM).

What is it?
In the middle of the twentieth century, scientists identified the ongoing operational transformations that take place in short-term memory while an individual carries out cognitive processes. During complex mathematical calculations or problem solving, intermediate results are kept in the head for as long as the person is operating with them; afterwards, some of them are forgotten. Information that is superfluous for further work is forced out of memory, and the period of its storage is determined by the task the individual has set. Data can be retained from a few seconds to several minutes, and removing unnecessary facts makes room for the assimilation of new information. Researchers defined this process as follows.

Working memory is the retention in the brain of the initial data necessary to perform a specific act. Its main characteristic is the memorization and reproduction of the information needed to carry out a particular operation in the current activity. Thus, working memory in psychology is an intermediate link between short-term and long-term memorization: it operates when an individual performs actual operations over a short period, maintaining a trace of the image needed to complete the current task. Its functioning is associated with strong neuropsychic tension, owing to the interaction of a number of opposing centers of excitation. Working memory can hold no more than about two variable factors while a person operates with objects whose state is changing.

A person understands complex, long phrases thanks to good working memory: comprehension of a text occurs through short-term memorization of some of its elements. A mathematical problem can be solved because the necessary numbers are retained in memory for some time. Working memory is characterized by selectivity, superficiality and short duration. For example, when preparing a presentation, a student remembers his report down to the smallest detail, even the choice of intonation at the moment of speaking. After the presentation, the essence of the material remains in long-term storage, while the small details and subtleties of delivering the presentation itself are erased.

How to develop it?
Any modern computer user knows that upgrading a computer's RAM means testing its bandwidth and latency and adding extra capacity. Can a person likewise train this function of the brain to increase the capacity of this type of memory? It is possible to improve the productivity of working memory through sustained load; without load, this mental function weakens and ages faster. Mechanical rote memorization of study material does not allow knowledge to pass from the operational store into the long-term archive, and rote memorization of poetry and prose does not develop working memory either.
What does bring benefit is memorizing rhymed lines and prose meaningfully, replenishing one's vocabulary, learning foreign languages, solving problems, and doing crosswords and puzzles. Working memory can be strengthened provided daily training exercises are performed. Examples of tasks a person can set themselves:
- keeping a daily diary and recording interesting information received during the day;
- speaking aloud any newly perceived information;
- retelling to relatives, friends and acquaintances the plots of films watched and the content of books read;
- describing in writing the architectural features of a building one has seen;
- composing a verbal portrait of a passerby or a randomly met person;
- taking an interest in learning the functions of new smartphones and other electronic equipment;
- recalling each night the emotions and events of the past day, mentally recreating the smallest details in reverse order.
Memory can only be trained by action. Solving simple problems and doing mental arithmetic without a calculator is excellent training for working memory.

Division into groups
All incoming information should be structured; dividing it into blocks markedly improves the final result. Reading in syllables or groups of words helps organize the memorized text into a coherent whole. To memorize a 9-digit number quickly, it is advisable to break it into groups of 3 digits. Say you need to remember the document number 314365404. The first group of digits can be associated with the number "pi"; the second group is remembered as the number of days in a year; and the last group is associated with the message "Error 404 Not Found" that appears on a blank screen when a page cannot be located on the Internet. When the learned digit groups are combined, reproducing the entire number is quick and easy.

Information of any complexity is well remembered with mnemonics aimed at figurative perception of the material. The technique involves creating associative links and heightened concentration of attention. Incoming information is mentally encoded by generating images, drawing on smells, tastes, sounds, music, pictures and a wide variety of emotions. To memorize important information, you do not need to write it down in a notebook at once; first create an image, a vivid association, in your imagination. The more absurd the associative chain, the easier it is to remember the material.

Reproducing a shopping list from memory is easy using the method of binding items to objects encountered on the way from the apartment to the supermarket. For example, a person must buy tomatoes, sugar, canned corn, slippers, toothpaste, a children's jacket and a fresh issue of their favorite magazine. When memorizing, one should deliberately exaggerate the images, bring the objects to life, modify them, and focus on a specific detail of each object. Not a single purchase will be missed if the person imagines the route to the shop. Leaving the apartment, he stumbles in his imagination upon a huge mountain of fresh, bright red tomatoes. Passing the mailboxes, the would-be buyer sees snow-white granulated sugar pouring quickly out of their slots. The exit to the street is blocked by a giant can of corn taller than a person.
Brightly colored slippers stick out of the trash bin standing near the bench at the entrance; tubes of toothpaste are scattered casually on the bench. A lilac bush is draped with a children's jacket sparkling in the neon lighting, and the path is littered with colorful magazines.

Involving pieces of music in the memorization process helps associate specific information with particular sounds. Later, the sound sequences allow you to retrieve the necessary information from the brain without extra effort. The advantage of using music to memorize facts is that it relaxes the psyche and gives active attention a respite, relieving stress at the same time.

Proper nutrition, an adequate supply of vitamins, a healthy lifestyle, good sleep, daily walks and physical exercise all help working memory function better. It is advisable to avoid negative emotions and stressful situations. Smokers and heavy drinkers show a marked decline in the ability to memorize any information, so it makes sense to give up these habits.
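The grouping trick from the "Division into groups" section above is easy to express programmatically. Here is a minimal sketch (my own illustration, not from the source) of splitting a digit string into fixed-size chunks, as with 314365404:

```python
def chunk_digits(number: str, size: int = 3) -> list[str]:
    """Split a digit string into fixed-size groups for easier memorization."""
    return [number[i:i + size] for i in range(0, len(number), size)]

print(chunk_digits("314365404"))  # ['314', '365', '404']
```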
Red Sea Slave Trade
Jonathan Miran, Western Washington University

Together with the trans-Saharan and Indian Ocean slave trades, the Red Sea slave trade is one of the arenas that comprise what is still referred to as the "Islamic," "Oriental," or "Arab" slave trades, which involved the transfer of enslaved people from sub-Saharan Africa to different parts of the Muslim world. It arguably represents one of the oldest, most enduring, and most complex multidirectional patterns of human flow. It animated a series of routes and networks that moved enslaved Africans mainly to Arabia, the eastern Mediterranean, the Gulf, Iran, and India. The Red Sea and Gulf of Aden slave trade also constituted part of a broader commercial system that comprised, in varying degrees, the greater Nile Valley trade system, through which enslaved people from the northeast African interior were moved via overland routes to Egypt and beyond.

Unlike the Atlantic slave trade system, where slave cargoes were commonplace, enslaved people were most often shipped across the Red Sea on regular sailing boats carrying a variety of other commodities. At the peak of the trade during the nineteenth century, a large majority of enslaved people exported through the Red Sea were in their teens, and the sex ratio heavily favored females. Enslaved individuals from northeast Africa were exploited in a host of occupations, ranging from "luxury" slaves (eunuchs and concubines) to domestic servants to labor-intensive work as pearl divers, masons, port laborers, and workers on agricultural plantations. Others were employed in urban economies in transportation, artisanship, and trade.

Estimates based on a notoriously weak evidentiary base (for most periods) put Red Sea slave exports for the entire period between 800 CE and around 1900 CE at just under 2,500,000. The heyday of the Red Sea trade was the nineteenth century, with an estimated 500,000 enslaved people exported during that period.

The abolition and suppression of the slave trade proper in the Red Sea region took a century to accomplish. It is infamous as one of the most enduring slave trades in the world; only in the mid-20th century, when slavery was legally abolished in Yemen and in Saudi Arabia (both in 1962), was illicit slave smuggling across the sea choked off. Legal abolition, however, has not ended various forms and practices of human trafficking, smuggling, forced labor, debt bondage, commercial sex trafficking, and in some cases enslavement. These persist in the third decade of the 21st century in most of the modern countries bordering the Red Sea and, as in the past, with a reach that extends far beyond the region proper.
Taxonomies of Educational Objectives
The First Taxonomy of Educational Objectives: Cognitive Domain; The Affective Domain; Revision of the Taxonomy

Educational objectives describe the goals toward which the education process is directed–the learning that is to result from instruction. When drawn up by an education authority or professional organization, objectives are usually called standards. Taxonomies are classification systems based on an organizational scheme. In this instance, a set of carefully defined terms, organized from simple to complex and from concrete to abstract, provides a framework of categories into which one may classify educational goals. Such schemes can:
- Provide a common language about educational goals that can bridge subject matter and grade levels
- Serve as a touchstone for specifying the meaning of broad educational goals for the classroom
- Help to determine the congruence of goals, classroom activities, and assessments
- Provide a panorama of the range of possible educational goals against which the limited breadth and depth of any particular educational curriculum may be contrasted

The First Taxonomy of Educational Objectives: Cognitive Domain
The idea of creating a taxonomy of educational objectives was conceived in the 1950s by Benjamin Bloom, then assistant director of the University of Chicago's Board of Examinations. Bloom sought to reduce the extensive labor of test development by exchanging test items among universities, and believed this could be facilitated by a carefully defined framework into which items measuring the same objective could be classified. Examiners and testing specialists from across the country were assembled into a working group that met periodically over a number of years. The result was a framework with six major categories and many subcategories for the most common objectives of classroom instruction–those dealing with the cognitive domain. To facilitate test development, the framework provided extensive examples of test items (largely multiple choice) for each major category. Here is an overview of the categories that make up the framework:
- 1.0. Knowledge
- 1.1. Knowledge of specifics
- 1.1.1. Knowledge of terminology
- 1.1.2. Knowledge of specific facts
- 1.2. Knowledge of ways and means of dealing with specifics
- 1.2.1. Knowledge of conventions
- 1.2.2. Knowledge of trends and sequences
- 1.2.3. Knowledge of classifications and categories
- 1.2.4. Knowledge of criteria
- 1.2.5. Knowledge of methodology
- 1.3. Knowledge of universals and abstractions in a field
- 1.3.1. Knowledge of principles and generalizations
- 1.3.2. Knowledge of theories and structures
- 2.0. Comprehension
- 2.1. Translation
- 2.2. Interpretation
- 2.3. Extrapolation
- 3.0. Application
- 4.0. Analysis
- 4.1. Analysis of elements
- 4.2. Analysis of relationships
- 4.3. Analysis of organizational principles
- 5.0. Synthesis
- 5.1. Production of a unique communication
- 5.2. Production of a plan, or proposed set of operations
- 5.3. Derivation of a set of abstract relations
- 6.0. Evaluation
- 6.1. Evaluation in terms of internal evidence
- 6.2. Judgments in terms of external criteria

The categories were designed to range from simple to complex and from concrete to abstract. Further, it was assumed that the taxonomy represented a cumulative hierarchy, so that mastery of each simpler category was prerequisite to mastery of the next, more complex one.
A meta-analysis of the scanty empirical evidence available, described in the Lorin Anderson and David Krathwohl taxonomy revision noted below, supports this assumption for Comprehension through Analysis. The data were ambiguous, however, with respect to the location of Knowledge in the hierarchy and the order of Evaluation and Synthesis.

The taxonomy has been used to analyze a course's objectives, an entire curriculum, or a test in order to determine the relative emphasis on each major category. The unceasing growth of knowledge exerts constant pressure on educators to pack more and more into each course, and these analyses repeatedly show a marked overemphasis on Knowledge objectives. Because memory for most knowledge is short, in contrast to learning in the other categories, such findings raise important questions about learning priorities.

Along the same lines is the taxonomy's use to assure that objectives, instructional activities, and assessment are congruent (aligned) with one another. Even when instruction emphasizes objectives in the more complex categories, the difficulty of constructing test items to measure such achievement often results in tests that emphasize knowledge measurement instead; alignment analyses highlight this inconsistency. The taxonomy has also commonly been used in developing a test's blueprint, providing the detail that guides item development and assures adequate and appropriate curriculum coverage. Some standardized tests show how their test items are distributed across taxonomy categories.

The Affective Domain
In addition to devising the cognitive taxonomy, the Bloom group later grappled with a taxonomy of the affective domain–objectives concerned with interests, attitudes, adjustment, appreciation, and values. This taxonomy consisted of five categories arranged in order of increasing internalization. Like the cognitive taxonomy, it assumed that learning at a lower category was prerequisite to attainment of the next higher one. Here is an overview of the categories:
- 1.0. Receiving (Attending)
- 1.1. Awareness
- 1.2. Willingness to receive
- 1.3. Controlled or selected attention
- 2.0. Responding
- 2.1. Acquiescence in responding
- 2.2. Willingness to respond
- 2.3. Satisfaction in response
- 3.0. Valuing
- 3.1. Acceptance of a value
- 3.2. Preference for a value
- 3.3. Commitment
- 4.0. Organization
- 4.1. Conceptualization of a value
- 4.2. Organization of a value system
- 5.0. Characterization by a value or value complex
- 5.1. Generalized set
- 5.2. Characterization

In addition, Elizabeth Simpson, Ravindrakumar Dave, and Anita Harrow developed taxonomies of the psychomotor domain.

Revision of the Taxonomy
A forty-year retrospective of the impact of the cognitive taxonomy by Lorin Anderson and Lauren Sosniak in 1994 (dating back to its preliminary edition in 1954) resulted in renewed consideration of a revision, prior efforts having failed to come to fruition. In 1995, Anderson and Krathwohl co-chaired a group to explore this possibility, and the group agreed on guidelines for attempting a revision. Like the original group, they met twice yearly, and in 2001 they produced A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, hereinafter referred to as the revision.
Whereas the original was unidimensional, the revision has two dimensions, based on the two parts of objectives: (1) nouns describing the content (knowledge) to be learned, and (2) verbs describing what students will learn to do with that content–that is, the processes they use in producing or working with knowledge.

The Knowledge dimension. The Knowledge category of the original cognitive taxonomy included both a content aspect and the action aspect of remembering. These were separated in the revision, so that the content aspect (the nouns) became its own dimension with four categories:
- A. Factual Knowledge (the basic elements students must know to be acquainted with a discipline or solve problems in it)
- a. Knowledge of terminology
- b. Knowledge of specific details and elements
- B. Conceptual Knowledge (the interrelationships among the basic elements within a larger structure that enable them to function together)
- a. Knowledge of classifications and categories
- b. Knowledge of principles and generalizations
- c. Knowledge of theories, models, and structures
- C. Procedural Knowledge (how to do something, including methods of inquiry and criteria for using skills, algorithms, techniques, and methods)
- a. Knowledge of subject-specific skills and algorithms
- b. Knowledge of subject-specific techniques and methods
- c. Knowledge of criteria for determining when to use appropriate procedures
- D. Metacognitive Knowledge (knowledge of cognition in general, as well as awareness and knowledge of one's own cognition)
- a. Strategic knowledge
- b. Knowledge about cognitive tasks, including appropriate contextual and conditional knowledge
- c. Self-knowledge

The Process dimension. In the revision, the concepts of the six original categories were retained but changed to verbs for the second (process) dimension. The action aspect of Knowledge was retitled Remember; Comprehension became Understand; and Synthesis, replaced by Create, became the top category. The subcategories, all new, consist of verbs in gerund form. In overview, the dimension's categories are:
- 1.0. Remember (retrieving relevant knowledge from long-term memory)
- 1.1. Recognizing
- 1.2. Recalling
- 2.0. Understand (determining the meaning of instructional messages, including oral, written, and graphic communication)
- 2.1. Interpreting
- 2.2. Exemplifying
- 2.3. Classifying
- 2.4. Summarizing
- 2.5. Inferring
- 2.6. Comparing
- 2.7. Explaining
- 3.0. Apply (carrying out or using a procedure in a given situation)
- 3.1. Executing
- 3.2. Implementing
- 4.0. Analyze (breaking material into its constituent parts and detecting how the parts relate to one another and to an overall structure or purpose)
- 4.1. Differentiating
- 4.2. Organizing
- 4.3. Attributing
- 5.0. Evaluate (making judgments based on criteria and standards)
- 5.1. Checking
- 5.2. Critiquing
- 6.0. Create (putting elements together to form a novel, coherent whole or make an original product)
- 6.1. Generating
- 6.2. Planning
- 6.3. Producing

The Taxonomy Table
With these two dimensions one can construct a taxonomy table in which to locate the junction of the classifications of an objective's verb and noun. Consider the objective: "The student should be able to recognize the facts and/or assumptions that are essential to an argument." The opening phrase, "The student should be able to," is common to objectives; the remainder is the unique part of the objective that we classify.
The verb is "recognize" and the noun is really a noun clause: "the facts and assumptions that are essential to an argument." First, it must be determined what is meant by "recognize." Initially, the term appears to belong to the category Remember, because Recognizing is Remember's first subcategory. But that subcategory refers to something learned before, which is not its meaning here. Here it means that, on analyzing the logic of the argument, the student teases out the facts and assumptions on which the argument depends. The correct classification is Analyze.

The noun clause, "the facts or assumptions that are essential to an argument," appears to include two kinds of knowledge. "The facts" is clearly Factual Knowledge, and "the assumptions"–as in assuming an argument's facts are true–may also be Factual Knowledge. But assuming a principle or concept as part of an argument (e.g., evolution) would be classified as Conceptual Knowledge. So this objective falls into two cells of the taxonomy table–the junction of Analyze with Factual Knowledge and with Conceptual Knowledge, as shown by the X's in Figure 1.

Just as objectives can be classified in a table, so can the classroom activities used to attain them. Likewise, one can construct a table for assessment tasks and test items. If goals, activities, and assessments are aligned, the X's should fall in identical cells in all three tables. To the extent that they do not, the goals may be only partially attained and/or measured, and steps can be taken to restore alignment. Comments inserted into classroom vignettes in the revision explain the classification of objectives, activities, and assessments as they lead to three completed taxonomy tables. The three tables are then compared to show the alignment, or lack of it, in each vignette. The six vignettes cover different subject matters in elementary and secondary education.

Alternative Classification Frameworks
Since the publication of the original framework, numerous alternatives have appeared–intended to supplement, improve upon, or replace it. Chapter 15 of the revision analyzes nineteen such frameworks in relation to the original and revised taxonomies. Eleven are unidimensional, while eight include two or more dimensions. Some use entirely new terms, and a few include the affective domain. For example, in 1981 Robert Stahl and Gary Murphy provided these new headings: Preparation, Observation, Reception, Transformation, Information Acquisition, Retention, Transfersion, Incorporation, Organization, and Generation; the Organization heading bridges to the affective domain. David Merrill, in 1994, devised a framework similar to the revised taxonomy, using two dimensions, each with four categories, to form a Performance-Content matrix with a student-performance dimension (Remember-Instance, Remember-Generality, Use, and Find) and a subject-matter dimension (Fact, Concept, Procedure, and Principle). The 1977 framework of Larry Hannah and John Michaelis is even more similar. Alfred DeBlock (1972) and others have developed frameworks with more than two dimensions, while Dean Hauenstein's 1998 framework provided taxonomies for all three domains. Marzano's taxonomy (2001) proposes a combination of three kinds of knowledge: Information (often called declarative knowledge), Mental Procedures (procedural knowledge), and Psychomotor Procedures.
Marzano also develops a processing model of actions that successively flow through three hierarchically related systems of thinking: first the Self system, then the Metacognitive system, and finally the Cognitive system (which includes Retrieval, Comprehension, Analysis, and Knowledge Utilization).

See also: CURRICULUM, subentry on SCHOOL.

ANDERSON, LORIN W., and KRATHWOHL, DAVID R., eds. 2001. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.
ANDERSON, LORIN W., and SOSNIAK, LAUREN A., eds. 1994. Bloom's Taxonomy: A Forty-Year Retrospective. Ninety-third Yearbook of the National Society for the Study of Education. Chicago: University of Chicago Press.
BLOOM, BENJAMIN S., ed. 1956. Taxonomy of Educational Objectives: The Classification of Educational Goals; Handbook I, Cognitive Domain. New York: David McKay.
DAVE, RAVINDRAKUMAR H. 1970. "Psychomotor Levels." In Developing and Writing Behavioral Objectives, ed. Robert J. Armstrong. Tucson, AZ: Educational Innovators Press.
DEBLOCK, ALFRED, et al. 1972. "La Taxonomie des Objectifs pour la Discipline du Latin." Didactica Classica Gandensia 17:12–13, 119–131.
FLEISHMAN, EDWIN A., and QUAINTANCE, MARILYN K. 1984. Taxonomies of Human Performance: The Description of Human Tasks. Orlando, FL: Academic Press.
HANNAH, LARRY S., and MICHAELIS, JOHN U. 1977. A Comprehensive Framework for Instructional Objectives: A Guide to Systematic Planning and Evaluation. Reading, MA: Addison-Wesley.
HARROW, ANITA J. 1972. A Taxonomy of the Psychomotor Domain: A Guide for Developing Behavioral Objectives. New York: David McKay.
HAUENSTEIN, A. DEAN. 1998. A Conceptual Framework for Educational Objectives: A Holistic Approach to Traditional Taxonomies. Lanham, MD: University Press of America.
KRATHWOHL, DAVID R.; BLOOM, BENJAMIN S.; and MASIA, BERTRAM B. 1964. Taxonomy of Educational Objectives: The Classification of Educational Goals; Handbook II: The Affective Domain. New York: David McKay.
MARZANO, ROBERT J. 2001. Designing a New Taxonomy of Educational Objectives. Thousand Oaks, CA: Corwin Press.
MERRILL, M. DAVID. 1994. Instructional Design Theory. Englewood Cliffs, NJ: Educational Technology Publications.
SIMPSON, BETTY J. 1966. "The Classification of Educational Objectives: Psychomotor Domain." Illinois Journal of Home Economics 10 (4):110–144.
STAHL, ROBERT J., and MURPHY, GARY T. 1981. The Domain of Cognition: An Alternative to Bloom's Cognitive Domain within the Framework of an Information-Processing Model. ERIC Document Reproduction Service No. ED 208511.

DAVID R. KRATHWOHL
LORIN W. ANDERSON
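As a supplement to the taxonomy table discussion above, here is a minimal sketch in Python of the revision's two-dimensional grid. This is my own construction, not anything published with the revision: it treats the table as Knowledge × Process cells and reproduces the worked "recognize the facts and/or assumptions" example, including the alignment check across goals, activities, and assessments.

```python
# Simplified category lists for the revision's two dimensions.
KNOWLEDGE = ["Factual", "Conceptual", "Procedural", "Metacognitive"]
PROCESSES = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

def cells(process: str, knowledge_types: list[str]) -> set[tuple[str, str]]:
    """Return the (knowledge, process) cells that an objective occupies."""
    assert process in PROCESSES and all(k in KNOWLEDGE for k in knowledge_types)
    return {(k, process) for k in knowledge_types}

# The example objective classifies as Analyze x {Factual, Conceptual}.
goal = cells("Analyze", ["Factual", "Conceptual"])

# Alignment: goals, activities, and assessments should occupy identical cells.
activity = cells("Analyze", ["Factual", "Conceptual"])
assessment = cells("Analyze", ["Factual"])
print(goal == activity)    # True
print(goal == assessment)  # False -> alignment needs restoring
```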
The lost continent, which is mostly submerged and includes all of New Zealand, is about half the size of Australia. By drilling deep into its upper layer, the new scientific expedition could provide clues about how the diving of one of Earth's plates beneath another, a process called subduction, began in the region. The expedition could also reveal how that Earth-altering event changed ocean currents and the climate.

Expedition co-chief scientist Gerald Dickens, professor of Earth, environmental and planetary science at Rice University, said in a statement: "This expedition will answer a lot of questions about Zealandia."

The drill ship JOIDES Resolution leaves Townsville on Friday for a two-month research voyage with up to 55 scientists on board. Professor Neville Exon of the ANU Research School of Earth Sciences also agreed that the expedition would help scientists understand the "global tectonic configuration that started about 53 million years ago."

"Zealandia, including today's Lord Howe Rise, was largely part of Australia until 75 million years ago, when it started to break away and move to the north-east. That movement halted 53 million years ago," he added.

Zealandia was named by US geophysicist Bruce Luyendyk in 1995 but was not considered continent material until February last year, when a team of international scientists declared that satellite technology and gravity maps of the sea floor showed Zealandia to be a unified area large enough to qualify as a continent.
Intestinal bacteria from healthy infants prevent food allergy

New research shows that healthy infants have intestinal bacteria that prevent the development of food allergies. Researchers from the University of Chicago, Argonne National Laboratory and the University of Naples Federico II in Italy found that when gut microbes from healthy human infants were transplanted into germ-free mice, the animals were protected from an allergic reaction when exposed to cow's milk. Gut microbes from infants allergic to milk did not offer the same protection; mice receiving these bacteria suffered an allergic reaction when given cow's milk. Cow's milk allergy is the most common food allergy affecting children.

The study, published this week in Nature Medicine, also identifies a specific bacterial species that protects against allergic responses to food. "This study allows us to define a causal relationship and shows that the microbiota itself can dictate whether or not you get an allergic response," said Cathryn Nagler, Ph.D., the Bunning Food Allergy Professor at UChicago and senior author of the study.

The research, funded in part by the National Institute of Allergy and Infectious Diseases, is the result of a long collaboration between Nagler and Roberto Berni Canani, MD, Ph.D., Chief of the Pediatric Allergy Program and CEINGE Advanced Biotechnologies at the University Federico II of Naples, Italy. In 2015, the two worked together on a project that found significant differences in the gut microbiomes of healthy infants and those with cow's milk allergy. That eventually led them to ask whether those differences somehow contributed to the development of the allergy.

The researchers transplanted gut microbes from each of eight infant donors–four healthy and four with cow's milk allergy–into groups of mice via fecal samples. The mice had been raised in a completely sterile, germ-free environment, meaning they had no bacteria of their own. The mice were fed the same formula as the infants, providing the same sources of nutrients to help the bacteria colonize properly.

Mice that received bacteria from allergic infants suffered anaphylaxis, a life-threatening allergic reaction, when exposed to cow's milk for the first time. Germ-free control mice that were not given any bacteria also experienced this severe reaction. Those that received healthy bacteria, however, appeared to be completely protected and did not suffer an allergic reaction.

"These findings demonstrate the critical role of the gut microbiota in the development of food allergy and strongly suggest that modulating bacterial communities is relevant to stopping the food allergy disease burden," Canani said. "These data are paving the way for innovative interventions for the prevention and treatment of food allergy that are under evaluation at our Centers."

The researchers also studied the composition of microbes in the intestinal tract of the mice and analyzed differences in gene expression between the healthy and allergic groups. This allowed them to pinpoint a particular species, Anaerostipes caccae, that appears to protect against allergic reactions when present in the gut.

A. caccae belongs to Clostridia, a class of bacteria that Nagler and her colleagues identified in a 2014 study as protecting against nut allergies. These bacteria produce butyrate, a short-chain fatty acid that previous research has shown to be a crucial nutrient for establishing a healthy microbial community in the gut.
This suggests that this class of butyrate-producing bacteria provides more general protection against other common food allergies as well. These bacteria or their metabolites could be used as part of biotherapeutic drugs to prevent or reverse other common food allergies.

"What we see with this work again is how, in the context of all of the different types of microorganisms inhabiting the gastrointestinal tract, one single organism can have such a profound effect on how the host is affected by dietary components," said Dionysios Antonopoulos, Ph.D., Microbial Systems Biologist at Argonne, Assistant Professor of Medicine at UChicago and a co-author of the study. "We also get a new appreciation for the distinct roles that each of these members play beyond the generalization that the 'microbiome' is involved."

The findings from this study are already helping scientists create new kinds of treatments for food allergies. Nagler is the president and co-founder of ClostraBio, a startup company that is developing biotherapeutics based on synthetic microbial metabolites. With the help of UChicago's Polsky Center for Entrepreneurship and Innovation, ClostraBio recently raised $3.5 million in funding to continue testing these products in animal models before starting human trials.
Scientists Spy On Bees, See Harmful Effects Of Common Insecticide

A team of researchers peered inside bumblebee colonies and spied on insects individually labeled with a tiny tag to figure out exactly how exposure to a common insecticide changes their behavior in the nest. They found that the insecticide — from a controversial group called neonicotinoids — made the bees more sluggish and antisocial, spending more time on the periphery of the nest. It also made them less attentive parents, according to research published Thursday in the journal Science.

Neonicotinoids, commonly known as "neonics," are near-ubiquitous in farming in many countries. They're commonly applied to the seeds of crops such as corn or soy before planting. The plant then carries traces of the insecticide as it grows, even showing up in the pollen, which scientists believe is one way bees are exposed. As NPR's Dan Charles has reported, "neonicotinoid residues also have been found in the pollen of wildflowers growing near fields and in nearby streams." A growing body of research points to their deleterious effects on bees, which serve an important role in pollinating crops. Scientists have previously found that the insecticides can impair a bee's ability to forage and limit the growth of a colony.

"There's a whole slew of important behaviors happening within the nest that aren't associated with foraging directly, and so how these compounds might be affecting those behaviors, we really haven't understood so well," Harvard University biologist James Crall, the study's lead author, said in an interview with NPR. He says scientists think the chemical disrupts the insect's central nervous system, which can change bee behavior in subtle ways — such as how bees regulate the temperature of their young.

Typically, a colony does a good job of maintaining its temperature within a very narrow range, Crall says. But one experiment showed that in colonies exposed to neonicotinoids, "that ability was impaired, so they were less good at maintaining temperature in that narrow preferred range." Bumblebees also typically build a kind of wax blanket over the developing young to insulate them from the cold. "Actually in our control colonies, in the outdoor conditions we're putting them in, almost all of our colonies built some amount of that sort of insulating wax canopy," Crall says. But none of the colonies exposed to the insecticide built that protective layer.

Christian Krupke, a Purdue University entomologist not involved in the research, told NPR that he finds this the most interesting finding of the study. He said some stressed bee colonies have been known to succumb during the winter. The fact that insecticide exposure appears to keep bees from building an insulating layer "presents a mechanism for that observation that we've seen in various sorts [of bees] ... that winter is the time of greatest hazard."

In another experiment, the scientists exposed nine colonies to a common type of neonic; another nine colonies were not exposed. They then used a robotic arm to take video inside all the colonies, tracking the individually tagged bees. "We can map out things like where they are, who they're interacting with, and how much nursing they're doing," Crall said. That technology marks a step forward in a long history of humans tracking bee behavior, he added, because it lets them keep tabs on "every bee in a colony at the same time, which is basically impossible for a human to do."
They saw changes in bee behavior colony-wide — but the magnitude of the effect changed based on time of day, becoming stronger at night. "Not only are we seeing these kind of effects, but they actually have some kind of interaction with the natural circadian rhythm of the colony," Crall said.

Bayer, a prominent maker of neonicotinoids, has previously questioned scientific research suggesting the chemicals have harmful impacts on bees. "While we haven't yet had the opportunity to review [this] full study, it appears to confirm what is already known about neonicotinoids and bees: Exposures to higher doses can cause differences in bee behavior, whereas lower doses are well tolerated by bees," the company said in an emailed statement. It did not specify what it views as "higher" or "lower" doses. The scientists who conducted this bee study disagree with the company's characterization, saying their research shows that "field-realistic levels" of the pesticide impact social interactions within the nest.

Earlier this year, the European Food Safety Authority concluded that "most uses of neonicotinoid pesticides represent a risk to wild bees." That prompted the European Commission to tighten existing restrictions on neonicotinoid use in the EU.
Wetlands are among the most complex and productive ecosystems in the world, comparable to rainforests and coral reefs. They can host an immense variety of species of microbes, plants, insects, amphibians, reptiles, birds, fish and mammals. All these species are closely linked to wetlands and to each other, forming a life cycle and a complex set of interactions. For this reason, protecting wetland habitats is essential for maintaining global and national biodiversity. This course introduces you to the Ramsar Convention, which was the first treaty to recognize that wetlands are among the most productive sources of ecological support on Earth. It will take you approximately 1 hour to complete the course, excluding additional materials.

Note on Partners: The InforMEA Initiative brings together Multilateral Environmental Agreements (MEA) to develop harmonized and interoperable information systems for the benefit of Parties and the environment community at large. It is facilitated by UN Environment and financially supported by the European Union. This course was developed with the Secretariat of the Convention on Wetlands (Ramsar Convention).
The cornea is the clear tissue at the front of the eye that transmits light to the retina at the back of the eye. The cornea is covered by an epithelium and surrounded by a narrow band of tissue known as the limbus. The limbus has two important roles in maintaining a healthy corneal epithelium. First, stem cells for the corneal epithelium reside at the limbus and not in the cornea. Second, the limbus acts as a barrier separating the clear avascular corneal epithelium from the surrounding vascular conjunctival tissue. A failure of these limbal functions can result in the painful and blinding disease of limbal stem cell deficiency. In this disease, the corneal epithelium cannot be maintained by the stem cells, and the corneal surface becomes replaced by hazy conjunctival tissue. There are many causes of limbal stem cell deficiency, such as burns to the eye, inflammatory diseases, and hereditary diseases. Current understanding of the pathophysiology of the disease is discussed here. In particular, understanding whether the limbal stem cells are lost or become dysfunctional or indeed whether the limbal microenvironment is disturbed is important when developing appropriate management strategies for the disease.
What is the most important biomolecule and why?
Lipids are responsible for energy storage in a cell and are the major component of the cell membrane. Among all these biomolecules, I would pick nucleic acids as the most important for life. There are two types of nucleic acids: DNA (deoxyribonucleic acid) and RNA (ribonucleic acid).

What is the most important biomolecule to the human body?

Why is protein the most important biomolecule?
Proteins are the most diverse biomolecules on Earth, performing many functions required for life. Protein enzymes are biological catalysts, maintaining life by regulating where and when cellular reactions occur. Structural proteins provide internal and external support to protect and maintain cell shape.

What biomolecule is most important for a healthy person to eat?
Proteins are the building blocks of life. Every cell in the human body contains protein. The basic structure of protein is a chain of amino acids. You need protein in your diet to help your body repair cells and make new ones.

Why are biomolecules important to life?
Biomolecules are important for the functioning of living organisms. These molecules perform or trigger important biochemical reactions in living organisms. By studying biomolecules, one can understand the physiological processes that regulate the proper growth and development of a human body.

Are biomolecules living things?
Biomolecules, also called biological molecules, are any of numerous substances produced by cells and living organisms. Biomolecules have a wide range of sizes and structures and perform a vast array of functions. The four major types of biomolecules are carbohydrates, lipids, nucleic acids, and proteins.

What are the 4 types of biomolecules?
There are four major classes of biological macromolecules (carbohydrates, lipids, proteins, and nucleic acids); each is an important component of the cell and performs a wide array of functions.

Which biomolecule is the main source of energy?
Carbohydrates, broken down into glucose, are the body's main source of energy.

Are vitamins biomolecules?
Yes. A diverse range of biomolecules exists, including small molecules such as lipids, fatty acids, glycolipids, sterols and monosaccharides, as well as vitamins.

What are the 13 vitamins your body needs?
There are 13 essential vitamins: vitamins A, C, D, E, K, and the B vitamins (thiamine, riboflavin, niacin, pantothenic acid, biotin, B6, B12, and folate). Vitamins have different jobs to help keep the body working properly.

Why are vitamins named A, B, C and K?
The "e" at the end of the original word "vitamine" was later removed when it was recognized that vitamins need not be amines. The letters (A, B, C and so on) were assigned to the vitamins in the order of their discovery. The one exception was vitamin K, which was assigned its "K" from "Koagulation" by the Danish researcher Henrik Dam.

Where are vitamins found?
Water-soluble vitamins (vitamin C, the B vitamins and folic acid) are mainly found in:
- fruit and vegetables
- grains
- milk and dairy foods

Which foods are rich in vitamin D?
Good sources of vitamin D:
- oily fish – such as salmon, sardines, herring and mackerel
- red meat
- egg yolks
- fortified foods – such as some fat spreads and breakfast cereals

Which fruit has which vitamin?
Vitamin content of fruit and vegetables:

| Fruit | Vitamins |
| --- | --- |
| Apple | Vitamin A, B1, B2, B6, C, folate (folic acid) |
| Banana | Vitamin A, B1, B2, B6, C, folate (folic acid) |
| Blackberries | Vitamin A, B1, B2, B6, C, folate (folic acid) |

How many vitamins are there in total?
In humans there are 13 vitamins: 4 fat-soluble (A, D, E, and K) and 9 water-soluble (8 B vitamins and vitamin C). Water-soluble vitamins dissolve easily in water and, in general, are readily excreted from the body, to the degree that urinary output is a strong predictor of vitamin consumption.

What does a human body need daily?
Macronutrients include water, protein, carbohydrates, and fats. Keep reading for more information about where to find these nutrients, and why a person needs them. The six essential nutrients are vitamins, minerals, protein, fats, water, and carbohydrates.

Which is the most important vitamin?
Vitamin D is arguably the most important vitamin you could take. Vitamin D is actually a hormone; it's not even a vitamin, and it affects our entire body.

How many vitamins can I take a day?
"Most people think it's fine to take as much as they want," says Rosenbloom. "I know people who take 10,000 mg a day." However, the tolerable upper limit for vitamin C is 2,000 mg a day. "People at risk for kidney stones can increase that risk; people also can get diarrhea."

What is the number 1 supplement in the world?

Is it OK to take many vitamins at once?
Combining supplements will not normally interfere with the way they work, and in some cases may be beneficial; for example, vitamin C helps iron absorption. However, certain supplements may interact with each other.

Are vitamins a waste of money?
"Vitamins, supplements have no added health benefits," one study contends. A new report says taking supplements could be a waste of money and may even be harmful to your health.

What vitamins are not good for you?
Potential risks of taking too many vitamins:
- Vitamin C. Although vitamin C has relatively low toxicity, high doses can cause gastrointestinal disturbances, including diarrhea, cramps, nausea, and vomiting.
- Vitamin B3 (niacin).
- Vitamin B6 (pyridoxine).
- Vitamin B9 (folate).

Do vitamins really work?
The vitamin verdict: researchers concluded that multivitamins don't reduce the risk of heart disease, cancer, cognitive decline (such as memory loss and slowed-down thinking) or an early death.

Which vitamins are worth taking?
According to nutritionists, these are the 7 ingredients your multivitamin should have:
- Vitamin D. Vitamin D helps our bodies absorb calcium, which is important for bone health.
- Magnesium. Magnesium is an essential nutrient, which means we must get it from food or supplements.
- Vitamin B-12.

How do you know if a vitamin is good quality?
Which vitamins and herbal supplements can you trust?
- LOOK FOR "USP," "NSF," or "Consumer Lab" on the bottle.
- DO YOUR RESEARCH.
- KNOW THE PERCENTAGE OF THE KEY INGREDIENT TO LOOK FOR.
- KNOW WHICH PART OF THE PLANT YOU WANT.
- DON'T JUST GO BY THE COUNTRY.
- DON'T BELIEVE THE HYPE.
- DON'T BUY THE CHEAPEST SUPPLEMENT… OR THE MOST EXPENSIVE.
- BUT IS MY MULTIVITAMIN SAFE?

Are daily vitamins worth taking?
If you take a multivitamin, it's probably because you want to do everything you can to protect your health. But there is still limited evidence that a daily cocktail of essential vitamins and minerals actually delivers what you expect. Most studies find no benefit from multivitamins in protecting the brain or heart.

What is the best brand of vitamins?
Nature Made offers several vitamin and supplement products that are both USP certified and the #1 pharmacist-recommended brand. The company's range of supplements includes:
- multivitamins
- prenatal vitamins

What vitamins help sexually?
Vitamin C.
Vitamin C helps improve blood flow. Blood flow affects your erectile function, so vitamin C may help sexual function. Vitamin C can't be stored in the body, so you need to eat enough foods rich in vitamin C every day.

Are vitamins bad for your liver?
Even in high doses, most vitamins have few adverse events and do not harm the liver. Many vitamins are normally concentrated in, metabolized by and actually stored in the liver, particularly the fat-soluble vitamins.

Should I take vitamin D every day?
If you're taking a vitamin D supplement, you probably don't need more than 600 to 800 IU per day, which is adequate for most people. Some people may need a higher dose, however, including those with a bone health disorder and those with a condition that interferes with the absorption of vitamin D or calcium, says Dr.
Part Two: Time, Space and Motion
5. Revolution in physics

Two thousand years ago, it was thought that the laws of the universe were completely covered by Euclid's geometry. There was nothing more to be said. This is the illusion of every period. For a long time after Newton's death, scientists thought that he had said the last word about the laws of nature. Laplace lamented that there was only one universe, and that Newton had had the good fortune to discover all its laws. For two hundred years, Newton's particle theory of light was generally accepted, as against the theory, advocated by the Dutch physicist Christiaan Huygens (1629-95), that light was a wave. Then the particle theory was negated by the Frenchman Augustin Jean Fresnel (1788-1827), whose wave theory was experimentally confirmed by Jean B. L. Foucault (1819-68). Newton had predicted that light, which travels at 186,000 miles per second (around 300,000 km per second) in empty space, should travel faster in water. The supporters of the wave theory predicted a lower speed, and were shown to be correct.

The great breakthrough for wave theory, however, was accomplished by the outstanding Scottish scientist James Clerk Maxwell in the latter half of the 19th century. Maxwell built, in the first instance, on the experimental work of Michael Faraday, who discovered electromagnetic induction and investigated the properties of the magnet, with its two poles, north and south, involving invisible forces stretching to the ends of the earth. Maxwell gave these empirical discoveries a universal form by translating them into mathematics. His work led to the discovery of the field, on which Einstein later based his general theory of relativity. One generation stands on the shoulders of its predecessors, both negating and preserving earlier discoveries, continually deepening them, and giving them a more general form and content.

Seven years after Maxwell's death, Heinrich Rudolf Hertz (1857-94) first detected the electromagnetic waves predicted by Maxwell. The particle theory, which had held sway ever since Newton, appeared to be annihilated by Maxwell's electromagnetics. Once again, scientists believed themselves in possession of a theory that could explain everything. There were just a few questions to be cleared up, and we would really know all there was to know about the workings of the universe. Of course, there were a few discrepancies that were troublesome, but they appeared to be small details which could safely be ignored. However, within a few decades, these "minor" discrepancies proved sufficient to overthrow the entire edifice and effect a veritable scientific revolution.

Waves or particles?
Everyone knows what a wave is. It is a common feature associated with water. Just as waves can be caused by a duck moving over the surface of a pond, so a charged particle, an electron for example, can cause an electromagnetic wave when it moves through space. The oscillatory motion of the electron disturbs the electric and magnetic fields, causing waves to spread out continuously, like the ripples on the pond. Of course, the analogy is only approximate. There is a fundamental difference between a wave on water and an electromagnetic wave: the latter does not require a continuous medium, like water, through which to travel. An electromagnetic oscillation is a periodic disturbance that propagates itself through the electrical structure of matter. Still, the comparison may help to make the idea clearer.
The fact that we cannot see these waves does not mean that their presence cannot be detected, even in everyday life. We have direct experience of light waves and radio waves, and even X-rays. The only difference between them is their frequency. We know that a wave on water will cause a floating object to bob up and down faster or slower, depending on the intensity of the wave—the ripples caused by the duck, as compared to those provoked by a speedboat. Similarly, the oscillations of the electrons will be proportionate to the intensity of the light wave.

The equations of Maxwell, backed up by the experiments of Hertz and others, provided powerful evidence to support the theory that light consisted of waves, which were electromagnetic in character. However, at the turn of the century, evidence was accumulating which suggested that this theory was wrong. In 1900 Max Planck showed that the classical wave theory made predictions that were not verified in practice. He suggested that light came in discrete particles or “packets” (quanta). The situation was complicated by the fact that different experiments proved different things. It could be shown that an electron was a particle by letting it strike a fluorescent screen and observing the resulting scintillations; or by watching the tracks made by electrons in a cloud chamber; or by the tiny spot that appeared on a developed photographic plate. On the other hand, if two holes are made in a screen and electrons are allowed to flood in from a single source, they cause an interference pattern, which indicates the presence of a wave.

The most peculiar result of all, however, was obtained in the celebrated two-slot experiment, in which a single electron is fired at a screen containing two slots, with a photographic plate behind it. Which of the two holes did the electron pass through? The interference pattern on the plate is quite clearly a two-hole pattern. This proves that the electron must have gone through both holes, and then set up an interference pattern. This is against all the laws of common sense, but it has been shown to be irrefutable. The electron behaves both like a particle and a wave. It is in two (or more than two) places at once, and in several states of motion at once! However, as Banesh Hoffmann comments:

“Let us not imagine that scientists accepted these new ideas with cries of joy. They fought them and resisted them as much as they could, inventing all sorts of traps and alternative hypotheses in vain attempts to escape them. But the glaring paradoxes were there as early as 1905 in the case of light, and even earlier, and no one had the courage or wit to resolve them until the advent of the new quantum mechanics. The new ideas are so difficult to accept because we still instinctively strive to picture them in terms of the old-fashioned particle, despite Heisenberg's indeterminacy principle. We still shrink from visualising an electron as something which, having motion, may have no position, and having position, may have no such thing as motion or rest.” 1

Here we see the negation of the negation at work. At first sight, we seem to have come full circle. Newton's particle theory of light was negated by Maxwell's wave theory. This, in turn, was negated by the new particle theory, advocated by Planck and Einstein. Yet this does not mean going back to the old Newtonian theory, but a qualitative leap forward, involving a genuine revolution in science.
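To make the wave arithmetic concrete, here is a small numerical sketch (not part of the original text) of the classical two-source interference pattern described above. The wavelength, slot separation, and screen distance are illustrative values chosen only for the example; quantum mechanically, single electrons or photons build up this same fringe pattern statistically, one spot at a time.

```python
import numpy as np

# Classical two-source interference: two coherent openings separated by d,
# viewed on a screen at distance L. In the small-angle approximation the
# intensity varies as cos^2(pi * d * x / (wavelength * L)).
# All numbers are illustrative, not taken from the text.
wavelength = 500e-9    # 500 nm light, in metres
d = 10e-6              # separation between the two slots, in metres
L = 1.0                # distance from the slots to the screen, in metres

x = np.linspace(-0.05, 0.05, 9)                          # screen positions, metres
intensity = np.cos(np.pi * d * x / (wavelength * L)) ** 2  # normalized fringes

for xi, I in zip(x, intensity):
    print(f"x = {xi * 100:+.2f} cm   relative intensity = {I:.2f}")
```

Running this prints the alternating bright and dark fringes (1.00, 0.50, 0.00, 0.50, 1.00, ...) that characterize a two-hole pattern.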
All of science had to be overhauled, including Newton's law of gravitation. This revolution did not invalidate Maxwell's equations, which still remain valid for a vast field of operations. It merely showed that, beyond certain limits, the ideas of classical physics no longer apply. The phenomena of the world of subatomic particles cannot be understood by the methods of classical mechanics. Here the ideas of quantum mechanics and relativity come into play. For most of the present century, physics has been dominated by the theory of relativity and quantum mechanics, both of which were initially rejected out of hand by a scientific establishment that clung tenaciously to the old views. There is an important lesson here. Any attempt to impose a “final solution” on our view of the universe is doomed to fail.

The development of quantum physics represented a giant step forward in science, a decisive break with the old stultifying mechanical determinism of “classical” physics. (The “metaphysical” method, as Engels would have called it.) Instead, we have a much more flexible, dynamic—in a word, dialectical—view of nature. Beginning with Planck's discovery of the existence of the quantum, which at first appeared to be a tiny detail, almost an anecdote, the face of physics was transformed. Here was a new science which could explain the phenomenon of radioactive transformation and analyse in great detail the complex data of spectroscopy. It led directly to the establishment of a new science—theoretical chemistry, capable of solving previously insoluble questions. In general, a whole series of theoretical difficulties were eliminated once the new standpoint was accepted. The new physics revealed the staggering forces locked up within the atomic nucleus. This led directly to the exploitation of nuclear energy—the path to the potential destruction of life on earth—or the vista of undreamed-of and limitless abundance and social progress through the peaceful use of nuclear fusion.

Einstein's theory of relativity explains that mass and energy are equivalent. If the mass of an object is known, multiplying it by the square of the speed of light gives its energy equivalent. Einstein (1879-1955) showed that light, hitherto thought of as a wave, behaved like a particle. Light, in other words, is just another form of matter. This was borne out in 1919, when it was shown that light bends under the force of gravity. Louis de Broglie later pointed out that matter, which was thought to consist of particles, partakes of the nature of waves. The division between matter and energy was abolished once and for all. Matter and energy are…the same. Here was a mighty advance for science. And from the standpoint of dialectical materialism, matter and energy are the same. Engels described energy (“motion”) as “the mode of existence, the inherent attribute, of matter.” 2

The argument that dominated particle physics for many years, over whether subatomic particles like photons and electrons were particles or waves, was finally resolved by quantum mechanics, which asserts that subatomic particles can and do behave both like a particle and like a wave. Like a wave, light produces interference, yet a photon of light also bounces off electrons, like a particle. This goes against the laws of formal logic. How can “common sense” accept that an electron can be in two places at the same time? Or even move, at incredible speeds, simultaneously, in different directions?
For light to behave both as a wave and as a particle was seen as an intolerable contradiction. Attempts to explain the contradictory phenomena of the subatomic world in terms of formal logic lead to the abandonment of rational thinking altogether. In his conclusion to a work dealing with the quantum revolution, Banesh Hoffmann is capable of writing:

“How much more, then, shall we marvel at the wondrous powers of God who created the heaven and the earth from a primal essence of such exquisite subtlety that with it he could fashion brains and minds afire with the divine gift of clairvoyance to penetrate his mysteries. If the mind of a mere Bohr or Einstein astound us with its power, how may we begin to extol the glory of God who created them?” 3

Unfortunately, this is not an isolated example. A great part of modern literature about science, including a lot written by scientists themselves, is thoroughly impregnated with such mystical, religious or quasi-religious notions. This is a direct result of the idealist philosophy which a great many scientists, consciously or unconsciously, have adopted.

The laws of quantum mechanics fly in the face of “common sense” (i.e., formal logic), but are in perfect consonance with dialectical materialism. Take, for example, the conception of a point. All traditional geometry is derived from a point, which subsequently becomes a line, a plane, a cube, etc. Yet close observation reveals that the point does not exist. The point is conceived as the smallest expression of space, something that has no dimension. In reality, such a point consists of atoms—electrons, nuclei, photons, and even smaller particles. Ultimately, it disappears in a restless flux of swirling quantum waves. And there is no end to this process. No fixed “point” at all. That is the final answer to the idealists who seek to find perfect “forms” which allegedly lie “beyond” observable material reality. The only “ultimate reality” is the infinite, eternal, ever-changing material universe, which is far more wonderful in its endless variety of form and processes than the most fabulous adventures of science fiction. Instead of a fixed location—a “point”—we have a process, a never-ending flux. All attempts to impose a limit on this, in the form of a beginning or an end, will inevitably fail.

Disappearance of matter?

Long before the discovery of relativity, science had discovered two fundamental principles—the conservation of energy and the conservation of mass. The first of these was worked out by Leibniz in the 17th century, and subsequently developed in the 19th century as a corollary of a principle of mechanics. Long before that, early man had discovered in practice the principle of the equivalence of work and heat, when he made fire by means of friction, thus translating a given amount of energy (work) into heat. At the beginning of this century, it was discovered that mass is merely one of the forms of energy. A particle of matter is nothing more than energy, highly concentrated and localised. The amount of energy concentrated in a particle is proportional to its mass, and the total amount of energy always remains the same. The loss of one kind of energy is compensated for by the gain of another kind of energy. While constantly changing its form, energy nevertheless always remains the same. The revolution effected by Einstein was to show that mass itself contains a staggering amount of energy.
The equivalence of mass and energy is expressed by the formula E = mc², in which c represents the velocity of light (about 186,000 miles per second), E is the energy contained in the stationary body, and m is its mass. The energy contained in the mass m is equal to this mass multiplied by the square of the tremendous speed of light. Mass is therefore an immensely concentrated form of energy, the power of which may be conveyed by the fact that, in an atomic explosion, less than one tenth of one per cent of the mass is converted into energy. Normally this vast amount of energy locked up in matter is not manifested, and therefore passes unnoticed. But if the processes within the nucleus reach a critical point, part of the energy is released as kinetic energy.

Since mass is only one of the forms of energy, matter and energy can neither be created nor destroyed. The forms of energy, on the other hand, are extremely diverse. For example, when protons in the sun unite to form helium nuclei, nuclear energy is released. This may first appear as the kinetic energy of motion of the nuclei, contributing to the heat energy of the sun. Part of this energy is emitted from the sun in the form of photons, particles of electromagnetic energy. The latter, in turn, is transformed by the process of photosynthesis into the stored chemical energy in plants, which, in turn, is acquired by man by eating the plants, or animals which have fed upon the plants, to provide the warmth and energy for muscles, blood circulation, brain, etc.

The laws of classical physics in general cannot be applied to processes at the subatomic level. However, there is one law that knows no exception in nature—the law of the conservation of energy. Physicists know that neither a positive nor a negative charge can be created out of nothing. This fact is expressed by the law of the conservation of electric charge. Thus, in the process of producing a beta particle, the transformation of the neutron (which has no charge) gives rise to a pair of particles with opposed charges—a positively charged proton and a negatively charged electron. Taken together, the two new particles have a combined electrical charge equal to zero. In the opposite process, when a proton emits a positron and changes into a neutron, the charge of the original particle (the proton) is positive, and the resulting pair of particles (the neutron and positron), taken together, is likewise positively charged. In all these myriad changes, the law of the conservation of electrical charge is strictly maintained, as are all the other conservation laws. Not even the tiniest fraction of energy is created or destroyed. Nor will such a phenomenon ever occur.

When an electron and its anti-particle, the positron, annihilate each other, their mass “disappears”; that is to say, it is transformed into two light-particles (photons), which fly apart in opposite directions. However, these have the same total energy as the particles from which they emerged. Mass-energy, linear momentum and electric charge are all preserved. This phenomenon has nothing in common with disappearance in the sense of annihilation into nothing. Dialectically, the electron and positron are negated and preserved at the same time. Matter and energy (which are merely two ways of saying the same thing) can neither be created nor destroyed, only transformed. From the standpoint of dialectical materialism, matter is the objective reality given to us in sense perception.
That includes not just “solid” objects, but also light. Photons are just as much matter as electrons or positrons. Mass is constantly being changed into energy (including light—photons) and energy into mass. The “annihilation” of a positron and an electron produces a pair of photons, but we also see the opposite process: when two photons meet, an electron and a positron can be produced, provided that the photons possess sufficient energy. This is sometimes presented as the creation of matter “from nothing”. It is no such thing. What we see here is neither the destruction nor the creation of anything, but the continuous transformation of matter into energy, and vice versa. When a photon hits an atom, it ceases to exist as a photon. It vanishes, but causes a change in the atom—an electron jumps from one orbit to another of higher energy. Here too, the opposite process occurs. When an electron jumps to an orbit of lower energy, a photon emerges. The process of continual change that characterises the world at the subatomic level is a striking confirmation of the fact that dialectics is not just a subjective invention of the mind, but actually corresponds to objective processes taking place in nature. This process has gone on uninterruptedly for all eternity. It is a concrete demonstration of the indestructibility of matter—precisely the opposite of what it was meant to prove.

“Bricks of matter”?

For centuries, scientists have tried in vain to find the “bricks of matter”—the ultimate, smallest particle. A hundred years ago, they thought they had found it in the atom (which, in Greek, signifies “that which cannot be divided”). The discovery of subatomic particles led physics to probe deeper into the structure of matter. By 1928 scientists imagined that they had discovered the smallest particles—protons, electrons and photons. All the material world was supposed to be made up of these three. Subsequently, this picture was shattered by the discovery of the neutron, the positron, the deuteron, and then a host of other particles, ever smaller, with an increasingly fleeting existence—neutrinos, pi-mesons, mu-mesons, k-mesons, and many others. The life span of some of these particles is so brief—maybe a billionth of a second—that they have been described as “virtual particles”—something utterly unthinkable in the pre-quantum era. The tauon lasts only a trillionth of a second before breaking down into a muon, and then into an electron. The neutral pion is even more fleeting, breaking down in less than one quadrillionth of a second to form a pair of gamma rays. However, these gammas live to a ripe old age compared to others, which have a life of only one hundredth of a microsecond.

In the 1960s, even this was overtaken by the discovery of particles so evanescent that their existence could only be inferred from the necessity of explaining their breakdown products. The half-lives of these particles, known as resonance particles, are in the region of a few trillionths of a trillionth of a second. And even this was not the end of the story. Over a hundred and fifty new particles were later discovered, which have been called hadrons. The situation was becoming extremely confused. An American physicist, Dr. Murray Gell-Mann, in an attempt to explain the structure of subatomic particles, postulated still other, more basic particles, the quarks, which were yet again heralded as the “ultimate building-blocks of matter”.
Gell-Mann theorised that there were six different kinds of quarks and that the quark family was parallel to a six-member family of lighter particles known as leptons. All matter was now supposed to consist of these twelve particles. Even these, the most basic forms of matter so far known to science, still possess the same contradictory qualities we observe throughout nature, in accordance with the dialectical law of the unity of opposites. Quarks also exist in pairs, and possess positive and negative charges, although these are, unusually, expressed in fractions.

Despite the fact that experience has demonstrated that there is no limit to matter, scientists still persist in the vain search for the “bricks of matter”. It is true that such expressions are the sensational inventions of journalists and of some scientists with an over-developed flair for self-promotion, and that the search for ever smaller and more fundamental particles is undoubtedly a bona fide scientific activity, which serves to deepen our knowledge of the workings of nature. Nevertheless, one certainly gets the impression that at least some of them really do believe that it is possible to reach a kind of ultimate level of reality, beyond which there is nothing left to discover, at least at the subatomic level. The quark is supposed to be the last of the twelve subatomic “building blocks” that are said to make up all matter. Dr. David Schramm was reported as saying: “The exciting thing is that this is the final piece of matter as we know it, as predicted by cosmology and the Standard Model of particle physics. It is the final piece of that puzzle.” 4

So the quark is the “ultimate particle”. It is said to be fundamental and structureless. But similar claims were made in the past for the atom, then the proton, and so on and so forth. And in the same way, we can confidently predict the discovery of still more “fundamental” forms of matter in the future. The fact that the present state of our knowledge and technology does not permit us to determine the properties of the quark does not entitle us to affirm that it has no structure. The properties of the quark still await analysis, and there is no reason to suppose that this will not be achieved, pointing the way to a still deeper probing of the endless properties of matter. This is the way science has always advanced. The supposedly unbreachable barriers to knowledge erected by one generation are overturned by the next, and so on down the ages. The whole of previous experience gives us every reason to believe that this dialectical process of the advance of human knowledge is as endless as the infinite universe itself.

1. Hoffmann, B. The Strange Story of the Quantum, p. 147.
2. Engels, F. Dialectics of Nature, p. 92.
3. Hoffmann, B. op. cit., pp. 194-5.
4. Financial Times, 1/4/94, our emphasis.
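As a quick numerical footnote to the mass-energy relation E = mc² discussed in the "Disappearance of matter?" section above, a few lines of Python convey the scale involved. The constants are standard physical values; the explosion comparison is an order-of-magnitude illustration, not a figure from the text.

```python
# Mass-energy equivalence, E = m * c**2: how much energy is locked up in
# one gram of matter? (Standard constants; purely illustrative.)
c = 2.998e8              # speed of light, m/s (about 186,000 miles per second)
m = 0.001                # one gram, expressed in kilograms

E = m * c**2             # energy equivalent in joules
print(f"1 gram of mass is equivalent to about {E:.1e} J")   # ~9.0e13 J

# For scale: a 15-kiloton atomic explosion releases roughly 6e13 J, i.e. the
# energy equivalent of well under one gram of converted mass -- consistent
# with the text's point that far less than one tenth of one per cent of the
# material involved is actually converted into energy.
```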
by CAROLYN L. MAZLOOMI

Why this exhibit at this time?

On January 8, 2018, President Donald Trump signed into law the 400 Years of African-American History Commission Act, appointing a commission to arrange celebration of the 400th anniversary of the arrival of the first Africans in the English colonies. This year-long celebration recognizes and highlights the resilience and cultural contributions of Africans and African Americans over 400 years in America. 2019 serves as a year to acknowledge the painful impact that slavery, and the laws that enforced racial discrimination, had on the United States and its African American citizens. This year America will celebrate the contributions of African Americans. The history of African Americans is filled with tragedies that have shaped the black experience in America; however, African Americans have contributed to the social, cultural, economic, academic, and moral well-being of this nation.

Colonel Charles Young is a little-known, unsung hero whose life offers an example of courage and perseverance during an extremely difficult time in our nation's history for its African American citizens. The life of Col. Charles Young most certainly highlights the resilience and contributions of African Americans in a significant way. The exhibited narrative quilts provide a visual diary of Col. Charles Young's life.

Young's life is filled with extraordinary accomplishments. He graduated at sixteen at the top of his high school class; could speak several languages, including Spanish, Greek, German and Latin; taught high school and college; was the third African American to graduate from West Point; received the NAACP Spingarn Medal for his service as an attaché in Liberia, where he helped build the country's infrastructure; was the first African American U.S. Park Service superintendent of Sequoia and Grant National Parks; led the Buffalo Soldier Regiment; led his squadron to defeat Pancho Villa's forces in Mexico without losing a single soldier; became the second honorary member of Omega Psi Phi Fraternity; and was a talented musician and composer.

In 1903, Captain Charles Young became the first African American to be appointed to serve as superintendent of a national park. Col. Young led five hundred men from the Buffalo Soldier Regiment to drive timber thieves and poachers from Sequoia and Yosemite National Parks. Under Young's command, the soldiers enforced the law, protected the tourists, cut new trails, and protected natural resources. Young's incredible contributions to American history and the U.S. National Park Service prompted President Obama to choose his home in historic Wilberforce, Ohio, as the site of the Charles Young Buffalo Soldiers National Monument. The monument was once the private home of Col. Young and his family.

Why use quilts to tell Col. Young's story?

Quilting is one of America's most powerful art forms because of its widespread appeal and association with comfort, warmth, and healing. Quilts and quilt making are important to America, and to Black culture in particular, because the art form was historically one of the few mediums accessible to marginalized groups to tell their own story, to provide warmth for their families, and to empower them with a voice through cloth.
Choosing quilts as the visual medium for this exhibit accentuates the intersections of African American contributions to American cultural production, while also informing others about the art form and its role in African American history. Story quilts are great vehicles for telling the African American story because they link us to ancestral traditions and help us to appreciate the value of various forms of oral history. For the African American viewer, Yours for Race and Country is a validating expression of cultural genius. For viewers external to the culture, it is an awakening to the unknown and uncelebrated contributions of an extraordinary man, Col. Charles Young.

Women of Color Quilters Network and Friends are proud to provide an unprecedented visual learning experience by intersecting art with African American history and underscoring their importance to our common and shared American reality. It is this often unknown and underappreciated shared reality that must be voiced if we are ever to truly value the unique contributions diverse groups make to the fabric of our nation. We empower the memory and accomplishments of Col. Charles Young with voice through cloth as we continue to tell the story of African Americans.

This exhibition is supported in part by an award from the National Endowment for the Arts. Additional support provided by the Sara and Michelle Vance Waddell Fund.
Science Buddies' kindergarten science projects are the perfect way for kindergarten students to have fun exploring science, technology, engineering, and math (STEM). Our kindergarten projects are written and tested by scientists and are specifically created for use by students in kindergarten. Students can choose to follow the science experiment as written or put their own spin on the project. For a personalized list of science projects, kindergarten students can use the Science Buddies Topic Selection Wizard. The wizard asks students to respond to a series of simple statements and then uses their answers to recommend age-appropriate projects that fit their interests.

Some people have a photographic memory and can memorize anything they see almost instantly! Wouldn't that make homework easy? Other people can remember almost anything they hear. Try this experiment to see which type of memory you have.

A day at the beach is a wonderful way to spend time with your family and friends. You can swim, play games, and build sand castles. But have you ever thought about how all of that sand got there and wondered why the shoreline weaves in and out of the ocean? In this science project, you will investigate how ocean waves build beaches by making a model of the beach and shoreline. All you need is a tiny surfer and a beach volleyball court for your model, and you can imagine that you are in…

Have you ever been swimming at the beach and gotten some water in your mouth by mistake? Then you know that the ocean is very salty. But what about other bodies of water? How much salt do they have compared to the ocean?

When you picture video games, you probably picture realistic figures, a lot of color, and a lot of detail, right? Those descriptions do not really describe video games from the early 1980s. So why do video games today look better than video games from the '80s? One major change between then and now is the number of pixels, or dots on the screen, used to represent video game objects (see the quick pixel-count comparison after this list). When Nintendo® first introduced the Super Mario Bros game for the Nintendo Entertainment System (NES) in…

Have you ever looked through a magnifying lens? Why do things look bigger when you look at them through the magnifying lens? Even though the object appears to get larger, it really stays the same size. Each lens has its own unique power of magnification, which can be measured with a ruler. How powerful is your lens?

Why is your grandmother always wondering if you are drinking enough milk? Our bones contain calcium, a mineral found in milk, and drinking milk can help build strong, healthy bones. What about other animals? What are their bones made of? What kind of bones do they have? Are there animals without bones? Are endoskeletons and exoskeletons made out of the same materials?

Are you good at remembering addresses and phone numbers? How many numbers do you think you can remember? Try this experiment to test your digit span, the maximum number of digits that you can remember.

Are you really picky about food? Or do you know someone who is? It might be because he or she is a supertaster! To supertasters, the flavors of foods are much stronger than to average tasters. Are you a supertaster? Find out with this tongue-based science fair project!
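To put some numbers behind the pixel comparison in the video-game project above, here is a tiny back-of-the-envelope calculation. The figures are commonly cited ones (the NES renders a 256 x 240 image; "Full HD" is 1920 x 1080) and are used here only for illustration.

```python
# Comparing screen resolutions: an early-1980s console vs. a modern display.
nes_pixels = 256 * 240          # NES output resolution -> 61,440 pixels
full_hd_pixels = 1920 * 1080    # Full HD resolution    -> 2,073,600 pixels

print(f"NES:     {nes_pixels:,} pixels")
print(f"Full HD: {full_hd_pixels:,} pixels")
print(f"A Full HD screen has about {full_hd_pixels / nes_pixels:.0f}x as many pixels")
```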
Coconuts, especially as a trade commodity, offer a startling illustration of global interconnections that have existed for over two thousand years. Thanks to this extensive history, coconuts offer many examples that can assist museums in “decolonizing” — that is, in pushing back against traditional national-temporal contexts as the primary model for organizing various cultural histories that have been built upon violent forms of theft and appropriation. Through coconuts, we can see Europeans seeking out South Asian health food from the Roman era into the Middle Ages. When Europeans introduced coconuts to colonies in the global colonial period, coconut utensils and instruments exerted resistance to cultural extermination even as indigenous peoples were forced to accommodate and adopt European customs. As a kind of catalog for an exhibit that hasn’t taken place, Coconuts: A Tiny History considers primarily material examples of coconut history: cups, recipes, and ‘ukuleles. Europeans began to craft coconut shells into fine stemware in the Middle Ages, and continued this tradition well into the twentieth century. During the global colonial period they introduced these cups and their drinking traditions to colonies around the world, where the cups underwent unique transformations. Meanwhile coconuts were undergoing their own revolution, from food to industrial commodity. In the nineteenth century, as the US developed an especially coconut-rich cuisine, England turned coconut oil into an important ingredient in the industrial production of candles and soap. Coconut ‘ukuleles offer a final case study that illustrates many of the historical strands traced throughout this book. ‘Ukuleles developed in the Kingdom of Hawai’i from small guitars played by Portuguese migrant laborers. After the Kingdom was taken over by the US, indigenous Hawai’ians and Portuguese-Hawai’ians built small ‘ukuleles made from a unique hardwood, the coconut shell. How each of these inventors designed and marketed their coconut ukes tells us much about coconuts, and even more about indigenous strategies of resistance and assimilation to colonization.
During October and November 2018, certain private sector laboratories reported an increase in Salmonella cases in KwaZulu-Natal Province. Salmonella was also the most likely cause of two recent foodborne disease outbreaks reported from eThekwini Municipality.

Salmonella bacteria normally live in the guts of animals, including chickens, cattle, pigs and reptiles. People can become infected with Salmonella through eating contaminated food products such as meat, chicken, dairy and eggs. Less commonly, fruit and vegetables can become contaminated with Salmonella bacteria when coming into contact with manure, livestock or untreated water. Sometimes, a person can become infected after being in contact with someone who has diarrhoea. Infection with Salmonella leads to symptoms of fever, body aches and pains, nausea, abdominal cramps, diarrhoea and vomiting; these usually start 12-72 hours after being infected. Illness is usually self-limiting and resolves in a few days, but can occasionally be severe enough to require admission to hospital for rehydration and antibiotic treatment. Certain people are at higher risk of developing severe disease, including very young children, the elderly, and those with weak immune systems (including people living with HIV). Salmonella illness is more common in spring and summer, when warmer weather and unrefrigerated foods create ideal conditions for the bacteria to grow.

The NICD is assisting the district and provincial health department outbreak response teams’ ongoing investigations. Teams are gathering additional information about the outbreaks and other reported cases, and also collecting food and environmental samples to investigate the source(s) of the outbreaks. The NICD has received isolates from private and public laboratories, and is conducting additional tests to determine whether the Salmonella bacteria isolated from ill people across the province are related. Health-promotion and food safety education is ongoing.

The public is encouraged to practise the World Health Organization’s Five Keys to Safer Food, which include:
- Wash hands and surfaces before, and regularly during, food preparation
- Separate raw and cooked food, and don’t mix utensils and surfaces when preparing food
- Cook food thoroughly – bacteria are killed above 70 °C
- Keep food at safe temperatures – either simmering hot, or in the fridge
- Use safe water and safe ingredients to prepare food.
Censuses are taken by governments to establish the numbers and characteristics of a population. Census and land records allow you to trace your ancestors through each generation of a family tree. You may be able to follow where a relative lived through each decade, or discover when they moved house or started a new job, and how their family evolved through births, deaths, and marriages. Furthermore, your ancestors’ households will often reveal the names of their siblings, which would be difficult to trace using the birth indexes alone.

More than 678 million records are included in the United States and Canada census category. The first U.S. census was taken in 1790 and a new census has been recorded every ten years since. The taking of a census every decade is a legal obligation of the federal government, outlined in the United States Constitution. The counting of every man, woman, and child is required in order to determine the number of delegates each state may send to the U.S. House of Representatives, where representation is based on population. Up until 1940, enumerators, or counters, hired by the U.S. government went door to door in order to count every person. If someone wasn't home, the enumerator returned. However, as thorough as some enumerators were, there were still occasional errors due to issues with literacy, embellishments about age or time of naturalization, and simple misspelling of names. Due to the sensitive nature of census information, every census is held to a 72-year privacy rule before it is released to the public. Included in this category is the full U.S. census collection, from 1790 to 1940.

The first full government census of Ireland was taken in 1821, with further censuses every ten years from 1831 through to 1911. No census was taken in 1921, due to the War of Independence, and after that, the first census of the population of the Irish Free State was taken in 1926. After 1946, censuses were taken around every five years, apart from 1976, when the census was cancelled to save money. For various reasons, few Irish census records survive today. The original returns for 1861 and 1871 were destroyed soon after they were taken. The 1881 and 1891 returns were pulped during World War I, probably due to the paper shortage. In addition, the returns for 1821 to 1851 were, apart from a few survivals, destroyed in the 1922 fire at the Public Record Office at the start of the Irish Civil War. The fragments of nineteenth-century Irish censuses that do survive can be explored in the record set Irish census 1821-1851. The census search forms for 1841 and 1851 also provide great details from the early returns. Today, the 1901 and 1911 censuses of Ireland are held by the National Archives of Ireland and are searchable on their website.

We provide census substitutes to help you fill in the missing gaps, including Griffith’s Valuation 1847-1864, one of the most widely used resources to trace your Irish family in the 1800s. It contains information about households that lived through the Famine period until the start of civil registration in 1864. In addition, the Landed Estate Court Rentals offer a wealth of information about land occupation in mid-19th-century Ireland. Originally published to organise the sale of bankrupt estates, these records contain information about tenants, rented lots, tenancy terms, and boundary maps. Over 500,000 tenants are included in this collection, and it deals with more than 8,000 estates around the country.
We also have a range of local censuses available to search. Electoral registers and other official documents also offer another avenue to pursue when using census substitutes to build your Irish family tree and a range of these can be found on Findmypast. Included in this category are more than 342 million local and national census records, rate books, electoral registers, and land tax documents. The earliest censuses only recorded statistical details for England, Wales & Scotland. The first to record the details of individuals was the 1841 census, although it contains less information than the censuses that followed. The 1841 census enumerators were instructed to round down a person’s age to the nearest multiple of five. A person aged 69, for example, was enumerated as 65. Due to the personal nature of census information, a 100-year secrecy rule is in place. This means the most recent census available is the 1911 census, which was released early (with some information redacted) after an appeal lodged under the Freedom of Information Act. On Findmypast, you can view the previously hidden information in the 'infirmity' column of the 1911 census for England, Wales & Scotland. If your ancestors filled in this column, you'll be able to see information about your family's health in 1911. This collection contains over 6.7 million records available from Australia and New Zealand, including the ever-important electoral rolls – a foundation source of genealogy. There are records spanning every state and territory in Australia, as well as several important New Zealand collections, from a period between the 1840s to the mid-1900s. Using electoral rolls forms an important part of the family history research process. As enrolment is compulsory for all eligible voters (with the exception of Norfolk Island), there is a strong chance that one of your ancestors can be located through these records. Electoral rolls contain valuable information such as name, address, occupation and polling place. Because census data is not always available, electoral rolls often make an informative alternative to census data.
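The "72-year rule" described above lends itself to a small worked example. The helper below is a hypothetical function written purely for illustration (it is not part of any genealogy site's API); it lists which US censuses are public in a given year, using the 1790-1940 range covered by the collection.

```python
def public_censuses(as_of_year, first=1790, last=1940):
    """US census years whose 72-year privacy embargo has lapsed by as_of_year."""
    return [year for year in range(first, last + 1, 10) if year + 72 <= as_of_year]

print(public_censuses(2011))   # ends at 1930: the 1940 census was still sealed
print(public_censuses(2012))   # now includes 1940 (1940 + 72 = 2012)
```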
Common Core Math Vocabulary Are you as smart as a fourth grader? Test your basic Common Core Math vocabulary and find out! Hangmoon Algebra Vocab It will give you the definition. You have to guess the word. Reverse Distributive Property Find the equivalent expressions. Solving 1 and 2 Step Equations Intro Solving 1 and 2 step equations TriviaFriendzie: Math, Reading, Science, Social Studies Something like Jeopardy, used to study for tests. Number Spellings 11 - 20 Spell the numbers by clicking on the letters below. Commas, Elements, Math, and more! play with your friends. have fun learning how to use a comma and some other things Gill's Angles: Angles, Triangles, Parallel Lines, Transversals Angles, Triangles, Parallel Lines, Transversals Pythagorean Theorem: Midpoint, Perimeter and Area Unit 5 Lesson 2: We will work on finding the midpoint, the Pythagorean theorem and finding the area and perimeter of triangles and rectangles. Match an Integer to Its Opposite Matching integers and their opposites
Polyphase Motor Design

Chapter 10 - Polyphase AC Circuits

Perhaps the most important benefit of polyphase AC power over single-phase is the design and operation of AC motors. As we studied in the first chapter of this book, some types of AC motors are virtually identical in construction to their alternator (generator) counterparts, consisting of stationary wire windings and a rotating magnet assembly. (Other AC motor designs are not quite this simple, but we will leave those details to another lesson.)

If the rotating magnet is able to keep up with the frequency of the alternating current energizing the electromagnet windings (coils), it will continue to be pulled around clockwise. (Figure above) However, clockwise is not the only valid direction for this motor’s shaft to spin. It could just as easily be powered in a counter-clockwise direction by the same AC voltage waveform, as in Figure below.

Notice that with the exact same sequence of polarity cycles (voltage, current, and magnetic poles produced by the coils), the magnetic rotor can spin in either direction. This is a common trait of all single-phase AC “induction” and “synchronous” motors: they have no normal or “correct” direction of rotation. The natural question should arise at this point: how can the motor get started in the intended direction if it can run either way just as well? The answer is that these motors need a little help getting started. Once helped to spin in a particular direction, they will continue to spin that way as long as AC power is maintained to the windings.

Where that “help” comes from for a single-phase AC motor to get going in one direction can vary. Usually, it comes from an additional set of windings positioned differently from the main set, and energized with an AC voltage that is out of phase with the main power. (Figure below) These supplementary coils are typically connected in series with a capacitor to introduce a phase shift in current between the two sets of windings. (Figure below) That phase shift creates magnetic fields from coils 2a and 2b that are equally out of step with the fields from coils 1a and 1b. The result is a set of magnetic fields with a definite phase rotation. It is this phase rotation that pulls the rotating magnet around in a definite direction.

Polyphase AC motors require no such trickery to spin in a definite direction. Because their supply voltage waveforms already have a definite rotation sequence, so do the respective magnetic fields generated by the motor’s stationary windings. In fact, the combination of all three phase winding sets working together creates what is often called a rotating magnetic field. It was this concept of a rotating magnetic field that inspired Nikola Tesla to design the world’s first polyphase electrical systems (simply to make simpler, more efficient motors). The line current and safety advantages of polyphase power over single-phase power were discovered later.

What can be a confusing concept is made much clearer through analogy. Have you ever seen a row of blinking light bulbs such as the kind used in Christmas decorations? Some strings appear to “move” in a definite direction as the bulbs alternately glow and darken in sequence. Other strings just blink on and off with no apparent motion. What makes the difference between the two types of bulb strings? Answer: phase shift! Examine a string of lights where every other bulb is lit at any given time, as in Figure below. When all of the “1” bulbs are lit, the “2” bulbs are dark, and vice versa.
With this blinking sequence, there is no definite “motion” to the bulbs’ light. Your eyes could follow a “motion” from left to right just as easily as from right to left. Technically, the “1” and “2” bulb blinking sequences are 180° out of phase (exactly opposite each other). This is analogous to the single-phase AC motor, which can run just as easily in either direction, but which cannot start on its own because its magnetic field alternation lacks a definite “rotation.”

Now let’s examine a string of lights where there are three sets of bulbs to be sequenced instead of just two, and these three sets are equally out of phase with each other, as in Figure below. If the lighting sequence is 1-2-3 (the sequence shown in Figure above), the bulbs will appear to “move” from left to right. Now imagine this blinking string of bulbs arranged into a circle, as in Figure below. The lights now appear to be “moving” in a clockwise direction because they are arranged around a circle instead of a straight line. It should come as no surprise that the appearance of motion will reverse if the phase sequence of the bulbs is reversed.

The blinking pattern will either appear to move clockwise or counter-clockwise depending on the phase sequence. This is analogous to a three-phase AC motor with three sets of windings energized by voltage sources of three different phase shifts, as in Figure below. With phase shifts of less than 180° we get true rotation of the magnetic field. With single-phase motors, the rotating magnetic field necessary for self-starting must be created by way of capacitive phase shift. With polyphase motors, the necessary phase shifts are there already. Plus, the direction of shaft rotation for polyphase motors is very easily reversed: just swap any two “hot” wires going to the motor, and it will run in the opposite direction!

- AC “induction” and “synchronous” motors work by having a rotating magnet follow the alternating magnetic fields produced by stationary wire windings.
- Single-phase AC motors of this type need help to get started spinning in a particular direction.
- By introducing a phase shift of less than 180° to the magnetic fields in such a motor, a definite direction of shaft rotation can be established.
- Single-phase induction motors often use an auxiliary winding connected in series with a capacitor to create the necessary phase shift.
- Polyphase motors don’t need such measures; their direction of rotation is fixed by the phase sequence of the voltage they’re powered by.
- Swapping any two “hot” wires on a polyphase AC motor will reverse its phase sequence, thus reversing its shaft rotation.

Published under the terms and conditions of the Design Science License
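The rotating magnetic field described in this chapter can also be demonstrated numerically. The sketch below is an illustrative model (not from the original chapter): it places three windings 120° apart in space, drives them with currents 120° apart in time, and shows that the vector sum is a field of constant magnitude whose direction rotates once per electrical cycle.

```python
import numpy as np

# Three windings spaced 120 degrees apart in space, carrying currents that are
# 120 degrees apart in time. Summing their field contributions gives a net
# field of constant magnitude that rotates with the supply. Idealized model;
# real motor geometry is more involved.
spatial_angles = np.deg2rad([0, 120, 240])       # winding axis directions
unit_axes = np.stack([np.cos(spatial_angles), np.sin(spatial_angles)], axis=1)

for wt_deg in range(0, 301, 60):                 # step through one cycle
    wt = np.deg2rad(wt_deg)
    currents = np.cos(wt - spatial_angles)       # phase A, B, C currents
    field = currents @ unit_axes                 # net field vector (x, y)
    mag = np.hypot(*field)
    ang = np.rad2deg(np.arctan2(field[1], field[0])) % 360
    print(f"wt={wt_deg:3d} deg  |B|={mag:.2f}  direction={ang:5.1f} deg")
```

Every printed row shows the same magnitude (1.50) with the direction advancing in step with the supply angle. Changing `np.cos(wt - spatial_angles)` to `np.cos(wt + spatial_angles)` reverses the phase sequence, and the direction then steps backwards: the code analogue of swapping two "hot" wires.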
Drainage from the ear; Otorrhea; Ear bleeding; Bleeding from ear

Ear discharge is drainage of blood, ear wax, pus, or other fluid from the ear. Most of the time, any fluid leaking out of an ear is ear wax.

A ruptured eardrum can cause a white, slightly bloody, or yellow discharge from the ear. Dry, crusted material on a child's pillow is often a sign of a ruptured eardrum. The eardrum may also bleed.

Causes of a ruptured eardrum include:
- Foreign object in the ear canal
- Injury from a blow to the head, foreign object, very loud noises, or sudden pressure changes (such as in airplanes)
- Inserting cotton-tipped swabs or other small objects into the ear
- Middle ear infection

Other causes of ear discharge include:

Caring for ear discharge at home depends on the cause.

When to Contact a Medical Professional

Call your health care provider if:
- The discharge is white, yellow, clear, or bloody.
- The discharge is the result of an injury.
- The discharge has lasted more than 5 days.
- There is severe pain.
- The discharge is associated with other symptoms, such as fever or headache.
- There is loss of hearing.
- There is redness or swelling coming out of the ear canal.
- There is facial weakness or asymmetry.

What to Expect at Your Office Visit

The provider will perform a physical exam and look inside the ears. You may be asked questions, such as:
- When did the ear drainage begin?
- What does it look like?
- How long has it lasted?
- Does it drain all the time or off-and-on?
- What other symptoms do you have (for example, fever, ear pain, headache)?

The provider may take a sample of the ear drainage and send it to a lab for examination. The provider may recommend anti-inflammatory or antibiotic medicines, which are placed in the ear. Antibiotics may be given by mouth if a ruptured eardrum from an ear infection is causing the discharge.

Bauer CA, Jenkins HA. Otologic symptoms and syndromes. In: Flint PW, Haughey BH, Lund V, et al, eds. Cummings Otolaryngology: Head & Neck Surgery. 6th ed. Philadelphia, PA: Elsevier Saunders; 2015: chap 156.
Brant JA, Ruckenstein MJ. Infections of the external ear. In: Flint PW, Haughey BH, Lund V, et al, eds. Cummings Otolaryngology: Head & Neck Surgery. 6th ed. Philadelphia, PA: Elsevier Saunders; 2015: chap 137.
Lee DJ, Roberts D. Topical therapies for external ear disorders. In: Flint PW, Haughey BH, Lund V, et al, eds. Cummings Otolaryngology: Head & Neck Surgery. 6th ed. Philadelphia, PA: Elsevier Saunders; 2015: chap 138.
O'Handley JG, Tobin EJ, Shah AR. Otorhinolaryngology. In: Rakel RE, Rakel DP, eds. Textbook of Family Medicine. 9th ed. Philadelphia, PA: Elsevier; 2016: chap 18.

Last reviewed on: 5/17/2018
Reviewed by: Josef Shargorodsky, MD, MPH, Johns Hopkins University School of Medicine, Baltimore, MD. Also reviewed by David Zieve, MD, MHA, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team.
Neuroscientists at the University of California at Berkeley have developed a technique for creating digital images that correspond with neural activity in the brain. This represents one of the first steps toward a computer being able to tap directly into what our brain sees, imagines, and even dreams. (Link to video)

Every image that we see activates photoreceptors in the retina of the eye. The information is fed through the optic nerve to the back of the brain. There, the information is assembled and interpreted by increasingly higher-level processes of the brain.

In this experiment, subjects watched clips of movie trailers while an fMRI machine scanned their brains in real time. The computer mapped activity throughout millions of “voxels” (3D pixels). The computer gradually learned to associate qualities of shape, edges, and motion occurring in the film with corresponding patterns of brain activity. It then built “dictionaries” by matching video images with patterns of brain activity, and predicting patterns that it guessed would be created by novel videos, using a palette of 18 million seconds of random clips taken from the internet. Over time, the computer could crunch all this data into a set of images that played out alongside the original video.

If I understand the process correctly, the images we’re seeing on the right side (“clips reconstructed from brain activity”) are actually running averages created by blending a hundred or so random YouTube clips that met the computer’s predictions of what images would match the patterns it was monitoring in the brain. In other words, the right-hand image is generated from existing clips, not from scratch. In this video (link), you can see the novel video that's causing the brain activity in the upper left of the screen, and some of the samples (strung out in a line) that the computer guesses must be causing that kind of brain activity. That would explain the momentary ghostly word fragments that pop up in the images, as well as the strange color and shape-shifts from the original.

The result is a moving image that looks a bit like a blurry version of the original video, but one that has been a bit generalized based on the available palette of average images. Evidently, the perception of faces triggers the brain in very active ways, judging from the relative clarity of the computer’s generated images compared to other kinds of images.

I wonder what would happen if you set this system up in a biofeedback loop, so that the brain activity and image generation could play off against each other? It might be like a computer-aided hallucination.

Article on Gizmodo
Thanks, Christian Schlierkamp
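For readers who want a feel for the "dictionary" approach described above, here is a toy sketch in Python. It is emphatically not the Berkeley group's actual pipeline: the linear "encoding model", the dimensions, and the random "clips" are all stand-ins. It only demonstrates the core trick as the post describes it: predict the brain pattern each library clip would evoke, score those predictions against the measured pattern, and blend the best matches into a reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dimensions: 200 voxels, 50 video features, 1000 library clips.
n_voxels, n_features, n_clips = 200, 50, 1000
encoding = rng.normal(size=(n_features, n_voxels))      # features -> voxel activity
clip_features = rng.normal(size=(n_clips, n_features))  # the clip "palette"

# Pretend clip 42 is the novel video; "measure" noisy brain activity for it.
true_clip = clip_features[42]
measured = true_clip @ encoding + 0.1 * rng.normal(size=n_voxels)

# Predict the activity each library clip would evoke, then correlate each
# prediction with the measurement.
predicted = clip_features @ encoding
z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
zm = (measured - measured.mean()) / measured.std()
scores = z @ zm / n_voxels                              # Pearson r per clip

top = np.argsort(scores)[-100:]                         # 100 best-matching clips
reconstruction = clip_features[top].mean(axis=0)        # the blurry "running average"
print("true clip among the top 100 matches:", 42 in top)
```

Blending a hundred near-matches rather than picking one is exactly why the published reconstructions look like ghostly, generalized versions of the original footage.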
In the dark the element selenium is a poor conductor of electricity. When light shines on it, however, its conductivity increases in direct proportion to the light’s intensity. Selenium can also convert light directly into electricity. The Swedish chemist Jöns Jakob Berzelius discovered selenium in 1817, but the element’s photosensitivity was not known until 50 years later. Selenium was used in an early form of the telephone and also contributed to experiments that led to motion pictures. Selenium has since been used in photoelectric devices in solar cells, in traffic-control lights, and in photographic exposure meters.

Selenium is used in rectifiers because it can convert alternating electric current to pulsating direct current. When incorporated in small amounts into glass, the selenium compound cadmium selenide serves as a decolorizer; in larger quantities it gives glass a clear red color that is useful in making signal lights. The element is also used to make red enamels for ceramics and steel ware as well as in the vulcanization of rubber to increase resistance to abrasion. In xerography, selenium is used for reproducing documents.

As an element selenium is a member of the oxygen family in the periodic table and is closely allied to sulfur and tellurium. It is nontoxic to humans and is considered to be an essential trace element. Some selenium compounds, however, such as hydrogen selenide are extremely toxic. The presence of selenium in some soils makes plants growing in those soils poisonous to animals. Combined with oxygen to form selenium dioxide, selenium is used as an oxidizing agent. Selenium oxychloride is a powerful solvent. Selenium is part of the residue that collects as a by-product of copper refining. It is recovered by roasting the residue with soda or sulfuric acid or by smelting with soda and niter.

| Property | Value |
| --- | --- |
| Group in periodic table | 16 (VIa) |
| Boiling point | 1,265 °F (685 °C) |
| Melting point | 423 °F (217 °C) |
Every child has a right to a quality education, yet many children are unable to realize this right due to the impact of natural disasters. The Asia-Pacific is the most disaster-prone region in the world, and disasters have accounted for the loss of half a million lives in this region during the last ten years. Unfortunately, children bear the brunt of these emergencies. In the coming decades, 200 million children will have their lives severely affected by disasters, and it will be the deprived and marginalized children who are the most vulnerable.

Educational inequities are intensified because schools are damaged or destroyed (due to poor site selection, design, or construction), schools are used as evacuation centres, and disaster risk reduction (DRR) policies are not adequately resourced or prioritized through the different levels of government down to the community level. As a result, disasters cause significant and prolonged disruption to children’s education. Given this situation, we need to make education safe from disaster in order to achieve much-needed progress in the field of global education.

Education Safe from Disasters (ESD) is an Asia-Pacific regional initiative from Save the Children, aimed at strengthening risk reduction, emergency response and resilience building to ensure that all children benefit from a quality basic education at all times, including emergencies. It aims to defend two fundamental child rights: the right to safety and survival, and the right to education. Our 0/0 goal: zero children killed or injured at school during a disaster, and zero days of education lost due to disaster.

ESD links emergency preparedness and disaster relief actions in order to have a lasting effect. All projects consist of three pillars:

Creating Safe Learning Facilities - This includes selecting safe school sites, disaster-resilient design and construction, and successive quality controls.

Establishing School Disaster Management - Along with national and local stakeholders (including children and parents), and in order to plan for educational continuity, risk assessments are conducted and participatory warning committees are created.

Implementing Risk Reduction and Resilience Education - In order to develop a culture of safety, we advocate the prioritization of disaster risk reduction in national curricula, while providing community-based training and materials.
The Hubble Space Telescope celebrates its 30th birthday in orbit around Earth this month! It’s hard to believe how much this telescope has changed the face of astronomy in just three decades. It had a rough start — an 8-foot mirror just slightly out of focus in the most famous case of spherical aberration of all time. But subsequent repairs and upgrades by space shuttle astronauts made Hubble a symbol of the ingenuity of human spaceflight and one of the most important scientific instruments ever created. Beginning as a twinkle in the eye of the late Nancy Grace Roman, the Hubble Space Telescope’s work over the past thirty years changed the way we view the universe, and more is yet to come!

We’ve all seen the amazing images created by Hubble and its team of scientists, but have you seen Hubble yourself? You actually can! Hubble’s orbit — around 330 miles overhead — is close enough to Earth that you can see it at night. The best times are within an hour after sunset or before sunrise, when its solar panels are angled best to reflect the light of the Sun back down to Earth. You can’t see the structure of the telescope, but you can identify it as a bright star-like point, moving silently across the night sky. It’s not as bright as the Space Station, which is much larger and whose orbit is closer to Earth (about 220 miles), but it’s still very noticeable as a single steady dot of light, speeding across the sky. Hubble’s orbit brings it directly overhead for observers located near tropical latitudes; observers further north and south can see it closer to the horizon. You can find sighting opportunities using satellite tracking apps for your smartphone or tablet, and dedicated satellite tracking websites. These resources can also help you identify other satellites that you may see passing overhead during your stargazing sessions.

NASA has a dedicated site for Hubble’s 30th anniversary at bit.ly/NASAHubble30. The Night Sky Network’s “Why Do We Put Telescopes in Space?” activity can help you and your audiences discover why we launch telescopes into orbit, high above the interference of Earth’s atmosphere, at bit.ly/TelescopesInSpace. Amateur astronomers may especially enjoy Hubble’s images of the beautiful objects found in both the Caldwell and Messier catalogs, at bit.ly/HubbleCaldwell and bit.ly/HubbleMessier.

As we celebrate Hubble’s legacy, we look forward to the future, as there is another telescope ramping up that promises to further revolutionize our understanding of the early universe: the James Webb Space Telescope! Discover more about the history and future of Hubble and space telescopes at nasa.gov.

[Image: Hubble’s “first light” image. Even with the not-yet-corrected imperfections in its mirror, its images were generally sharper than photos taken by ground-based telescopes at the time. Image credit: NASA]

This article is distributed by NASA Night Sky Network. The Night Sky Network program supports astronomy clubs across the USA dedicated to astronomy outreach. Visit nightsky.jpl.nasa.gov to find local clubs, events, and more!
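As a side note to the sighting tips above: the quoted ~330-mile altitude is enough to estimate how quickly Hubble crosses the sky. The sketch below assumes a circular orbit and uses standard values for Earth's gravitational parameter and radius; the result (roughly a 95-minute orbit at about 7.6 km/s) is why it sweeps overhead in just a few minutes.

```python
import math

# Rough circular-orbit figures for the ~330-mile altitude quoted above.
# GM and Earth's radius are standard values; the result is approximate.
GM = 3.986e14                 # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6             # mean Earth radius, m
altitude = 330 * 1609.34      # 330 miles, converted to metres

r = R_earth + altitude
period = 2 * math.pi * math.sqrt(r**3 / GM)   # Kepler: T = 2*pi*sqrt(r^3/GM)
speed = math.sqrt(GM / r)                     # circular orbital speed

print(f"Orbital period ~ {period / 60:.0f} minutes")  # roughly 95 minutes
print(f"Orbital speed  ~ {speed / 1000:.1f} km/s")    # roughly 7.6 km/s
```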
Animal ABC + 123 Wooden Blocks Set

20 solid-wood blocks feature letters and numbers, plus counting dots and pictures of familiar objects to illustrate each one. It's a classic learning manipulative that will lead to hours of counting, sorting, and building fun, in a modern color palette that today's families will love!

Extension Activities: More Ways to Play and Learn:
- Ask the child to make a tall tower, stacking as many blocks as possible until the tower tumbles.
- Ask the child to sort the blocks by color. Then challenge the child to sort and group the blocks in other ways, such as gathering the letters in his or her own name, or sorting sea animals from land animals.
- Help the child to count all the blocks in a given group. Then help the child add two groups together, counting all the combined blocks.
- Ask the child to select a number block. Help the child name the number, trace the number with a finger, count the counting dots, then make a tower of the same number of blocks.
- Gather a group of several blocks and arrange them so that all but one block has an animal facing up. Ask the child to identify the block that is different and turn it to make the group complete. Repeat with numbers, capital letters, lowercase letters, and counting dots.

Dimensions: 9.75" x 8" x 2.25" Packaged
Petrochemicals are chemical products derived from petroleum. Some chemical compounds made from petroleum are also obtained from other fossil fuels, such as coal or natural gas, or from renewable sources such as corn or sugar cane. The two most common petrochemical classes are olefins (including ethylene and propylene) and aromatics (including benzene, toluene and xylene isomers). Oil refineries produce olefins and aromatics by fluid catalytic cracking of petroleum fractions. Chemical plants produce olefins by steam cracking of natural gas liquids like ethane and propane. The catalytic reforming of naphtha produces aromatics. Olefins and aromatics are the building blocks for a wide range of materials such as solvents, detergents, and adhesives. Olefins are the basis for polymers and oligomers used in plastics, resins, fibers, elastomers, lubricants, and gels. Primary petrochemicals are divided into three groups depending on their chemical structure: olefins, aromatics, and synthesis gas.
Alternative Names
diet for cystic fibrosis, cystic fibrosis, nutritional considerations

Definition
Cystic fibrosis (CF) is a genetic disease. CF occurs in 1 in 2500 births in Australia and is the most common genetic disease in Caucasian Australians. CF prevents the body from absorbing enough nutrients, which makes it difficult for people with CF to meet their increased nutrient needs. As a result, people with CF may need to eat an enriched diet with more kilojoules and take extra vitamins and enzymes.

Information
Most people with CF are diagnosed by the age of 3, and doctors and dieticians follow them closely. The dietician handles the complex nutritional issues that come up in the care of these individuals. CF affects the mucus-producing glands in the pancreas, lungs and intestines. It causes thick mucus to build up and clog the lungs, which can lead to life-threatening infections. The mucus can also block the pancreas, the gland that makes many of the hormones and enzymes needed for the digestion of food. The mucus build-up can cause malabsorption of nutrients: nutrients from foods are not absorbed, but are instead passed out in the stool. Because of this, people with CF must eat a lot more food to receive enough kilojoules and nutrients to maintain normal weight. Children with CF, whose bodies are using kilojoules and nutrients to grow, must sometimes consume up to six times as many kilojoules as a healthy child in order to grow properly.

Enzymes are proteins made in our bodies. They spark various reactions, including those involved in the breakdown of food. Often a person with CF does not produce enough of a fat-digesting enzyme called lipase, and may need to take specially formulated enzyme supplements with each meal to aid digestion. Because people with CF lose large amounts of salt (sodium chloride) in their sweat, they also need salt in larger amounts, especially during hot weather when there is increased sweating. Drinking plenty of fluids is important to avoid dehydration, or low body fluid levels. Higher amounts of vitamins and minerals may also be needed: because the body cannot absorb many nutrients, a person with CF needs about twice the Recommended Dietary Intake (RDI) for the fat-soluble vitamins A, D, E and K, and supplements are often needed.

Further reading: www.vg.edu.au/vdw/cflink.htm
The intercept method, also known as the Marcq St. Hilaire method, is an astronomical navigation method of calculating an observer's position on Earth. It was originally called the azimuth intercept method because the process involves drawing a line which intercepts the azimuth line. This name was shortened to intercept method, and the intercept distance was shortened to "intercept". The method yields a line of position (LOP) on which the observer is situated. The intersection of two or more such lines will define the observer's position, called a "fix".

Sights may be taken at short intervals, usually during hours of twilight, or they may be taken at an interval of an hour or more (as in observing the Sun during the day). In either case, the lines of position, if taken at different times, must be advanced or retired to correct for the movement of the ship during the interval between observations. If observations are taken at short intervals, a few minutes at most, the corrected lines of position by convention yield a "fix". If the lines of position must be advanced or retired by an hour or more, convention dictates that the result is referred to as a "running fix".

The intercept method is based on the following principle. The actual distance from the observer to the geographical position (GP) of a celestial body (that is, the point where it is directly overhead) is "measured" using a sextant. The observer has already estimated his position by dead reckoning and calculated the distance from the estimated position to the body's GP; the difference between the "measured" and calculated distances is called the intercept.

Geometrically, the zenith distance of a celestial body is equal to the angular distance of its GP from the observer's position. The rays of light from a celestial body are assumed to be parallel (unless the observer is looking at the Moon, which is too close for such a simplification). The angle at the centre of the Earth that the ray of light passing through the body's GP makes with the line running from the observer's zenith is the same as the zenith distance, because they are corresponding angles. In practice it is not necessary to use zenith distances, which are 90° minus altitude, as the calculations can be done using observed altitude and calculated altitude.

Taking a sight using the intercept method consists of the following process:
- Observe the altitude above the horizon Ho of a celestial body and note the time of the observation.
- Assume a certain geographical position (lat., lon.); it does not matter which one, so long as it is within, say, 50 NM of the actual position (even 100 NM would not introduce too much error). Compute the altitude Hc and azimuth Zn with which an observer situated at that assumed position would observe the body.
- If the actual observed altitude Ho is smaller than the computed altitude Hc, this means the observer is farther away from the body than the observer at the assumed position, and vice versa. Each minute of arc corresponds to one NM, so the difference between Hc and Ho expressed in minutes of arc (which equal NM) is termed the "intercept". The navigator has now computed the intercept and azimuth of the body.
- On the chart he marks the assumed position AP and draws a line in the direction of the azimuth Zn. He then measures the intercept distance along this azimuth line, towards the body if Ho > Hc and away from it if Ho < Hc.
At this new point he draws a perpendicular to the azimuth line, and that is the line of position LOP at the moment of the observation. The reason that the chosen AP is not important (within limits) is that if a position closer to the body is chosen, then Hc will be greater, but the distance will be measured from the new AP, which is closer to the body, and the resulting LOP will be the same.

Suitable bodies for celestial sights are selected, often using a Rude Star Finder. Using a sextant, an altitude is obtained of the Sun, the Moon, a star or a planet. The name of the body and the precise time of the sight in UTC are recorded. Then the sextant is read and the altitude (Hs) of the body is recorded. Once all sights are taken and recorded, the navigator is ready to start the process of sight reduction and plotting.

The first step in sight reduction is to correct the sextant altitude for various errors and corrections. The instrument may have an error, IC or index correction (see the article on adjusting a sextant). Refraction by the atmosphere is corrected for with the aid of a table or calculation, and the observer's height of eye above sea level results in a "dip" correction (as the observer's eye is raised, the horizon dips below the horizontal). If the Sun or Moon was observed, a semidiameter correction is also applied to find the centre of the object. The resulting value is the "observed altitude" (Ho).

Next, using an accurate clock, the observed celestial object's geographic position (GP) is looked up in an almanac. That is the point on the Earth's surface directly below it (where the object is at the zenith). The latitude of the geographic position is called declination, and the longitude is usually called the hour angle.

Next, the altitude and azimuth of the celestial body are computed for a selected position (assumed position or AP). This involves resolving a spherical triangle: given the three magnitudes local hour angle (LHA), observed body's declination (dec), and assumed latitude (lat), the altitude Hc and azimuth Zn must be computed. The local hour angle, LHA, is the difference between the AP longitude and the hour angle of the observed object. It is always measured in a westerly direction from the assumed position. The relevant formulas (derived using the spherical trigonometric identities) are:

sin(Hc) = sin(lat) · sin(dec) + cos(lat) · cos(dec) · cos(LHA)
cos(Z) = (sin(dec) − sin(lat) · sin(Hc)) / (cos(lat) · cos(Hc))

where:
- Hc = computed altitude
- Zn = computed azimuth (the azimuth angle Z measured from north, resolved to 0°–360° according to whether the body lies east or west of the observer's meridian)
- lat = latitude
- dec = declination
- LHA = local hour angle

These computations can be done easily using electronic calculators or computers, but traditionally there were methods which used logarithm or haversine tables. Some of these methods were H.O. 211 (Ageton), Davies, haversine, etc. The relevant haversine formula for Hc is

hav(Hc′) = hav(lat − dec) + cos(lat) · cos(dec) · hav(LHA)

where Hc′ is the zenith distance, the complement of Hc: Hc′ = 90° − Hc. The corresponding haversine formula for the azimuth angle Z is

hav(Z) = (hav(90° − dec) − hav(Hc′ − (90° − lat))) / (cos(lat) · sin(Hc′))

When using such tables or a computer or scientific calculator, the navigation triangle is solved directly, so any assumed position can be used. Often the dead reckoning DR position is used. This simplifies plotting and also reduces any slight error caused by plotting a segment of a circle as a straight line. With the use of celestial navigation for air navigation, faster methods needed to be developed, and tables of precomputed triangles were created. When using precomputed sight reduction tables, selection of the assumed position is one of the trickier steps for the fledgling navigator to master. Sight reduction tables provide solutions for navigation triangles of integral degree values.
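As an illustration, here is how the altitude and azimuth formulas above translate into code. This is a minimal Python sketch under the stated conventions (LHA measured westward, azimuth from true north); the function and variable names are mine, not part of any navigation standard.

```python
import math

def sight_reduction(lat, dec, lha):
    """Solve the navigation triangle for an assumed position.

    lat: assumed latitude in degrees (south negative)
    dec: body's declination in degrees (south negative)
    lha: local hour angle in degrees, measured westward
    Returns (Hc, Zn): computed altitude and azimuth in degrees.
    """
    lat_r, dec_r, lha_r = map(math.radians, (lat, dec, lha))
    # sin(Hc) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(LHA)
    sin_hc = (math.sin(lat_r) * math.sin(dec_r)
              + math.cos(lat_r) * math.cos(dec_r) * math.cos(lha_r))
    hc_r = math.asin(sin_hc)
    # cos(Z) = (sin(dec) - sin(lat)*sin(Hc)) / (cos(lat)*cos(Hc))
    cos_z = ((math.sin(dec_r) - math.sin(lat_r) * sin_hc)
             / (math.cos(lat_r) * math.cos(hc_r)))
    z = math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))
    # Resolve azimuth angle Z to 0-360: body east of the meridian -> Zn = Z.
    zn = z if math.sin(lha_r) < 0 else 360.0 - z
    return math.degrees(hc_r), zn

# Example: lat 50°N, dec 20°N, LHA 30° -> Hc ≈ 51.7°, Zn ≈ 229°
print(sight_reduction(50.0, 20.0, 30.0))
```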
When using precomputed sight reduction tables, such as H.O. 229, the assumed position must be selected to yield integer degree values for LHA (local hour angle) and latitude. West longitudes are subtracted from, and east longitudes are added to, GHA to derive LHA, so APs must be selected accordingly. When using precomputed sight reduction tables, each observation and each body will require a different assumed position.

Professional navigators are divided in usage between sight reduction tables on the one hand, and handheld computers or scientific calculators on the other. The methods are equally accurate; it is simply a matter of personal preference which one is used. An experienced navigator can reduce a sight from start to finish in about 5 minutes using nautical tables or a scientific calculator.

The precise location of the assumed position has no great impact on the result, as long as it is reasonably close to the observer's actual position. An assumed position within 1 degree of arc of the observer's actual position is usually considered acceptable.

The calculated altitude (Hc) is compared to the observed altitude (Ho, the sextant altitude Hs corrected for various errors). The difference between Hc and Ho is called the "intercept" and is the observer's distance from the assumed position. The resulting line of position (LOP) is a small segment of the circle of equal altitude, and is represented by a straight line perpendicular to the azimuth of the celestial body. When plotting this small segment of the circle on a chart it is drawn as a straight line; the resulting tiny errors are too small to be significant. Navigators use the memory aid "computed greater away": if Hc is greater than Ho, the observer is farther from the body's geographic position, and the intercept is measured from the AP away from the azimuth. If Hc is less than Ho, then the observer is closer to the body's geographic position, and the intercept is measured from the AP toward the azimuth direction.

The last step in the process is to plot the lines of position LOP and determine the vessel's location. Each assumed position is plotted first. Best practice is to then advance or retire the assumed positions to correct for vessel motion during the interval between sights. Each LOP is then constructed from its associated AP by striking off the azimuth to the body, measuring the intercept toward or away from the azimuth, and constructing the perpendicular line of position.

To obtain a fix (a position), this LOP must be crossed with another LOP, either from another sight or from elsewhere, e.g. a bearing of a point of land or crossing a depth contour such as the 200 metre depth line on a chart.

Until the age of satellite navigation, ships usually took sights at dawn, during the forenoon, at noon (meridian transit of the Sun) and at dusk. The morning and evening sights were taken during twilight, while the horizon was visible and the stars, planets and/or Moon were visible, at least through the telescope of a sextant. Two observations are required to give a position accurate to within a mile under favourable conditions; three are always sufficient. A fix is called a running fix when one or more of the LOPs used to obtain it is an LOP advanced or retired over time. In order to get a fix the LOPs must cross at an angle, the closer to 90° the better. This means the observations must have different azimuths.
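A quick aside on the arithmetic: the "computed greater away" rule described above reduces to a few lines of code. This is my own minimal sketch, taking one minute of arc as one nautical mile.

```python
def intercept(ho_deg, hc_deg):
    """Intercept from observed (Ho) and computed (Hc) altitudes in degrees.

    Returns (distance_nm, direction): "toward" the azimuth when Ho > Hc,
    "away" when Hc > Ho ("computed greater away").
    """
    nm = (ho_deg - hc_deg) * 60.0  # 1 minute of arc = 1 nautical mile
    return abs(nm), ("toward" if nm > 0 else "away")

# Ho = 45° 20', Hc = 45° 28' -> (8.0, 'away')
print(intercept(45 + 20 / 60, 45 + 28 / 60))
```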
During the day, if only the Sun is visible, it is possible to get an LOP from the observation but not a fix, as another LOP is needed. What may be done is take a first sight, which yields one LOP, and, some hours later, when the Sun's azimuth has changed substantially, take a second sight, which yields a second LOP. Knowing the distance and course sailed in the interval, the first LOP can be advanced to its new position, and its intersection with the second LOP yields a running fix. Any sight can be advanced and used to obtain a running fix. It may be that, due to weather conditions, the navigator could obtain only a single sight at dawn. The resulting LOP can then be advanced when, later in the morning, a Sun observation becomes possible. The precision of a running fix depends on the error in distance and course, so naturally a running fix tends to be less precise than an unqualified fix, and the navigator must take into account his confidence in the exactitude of distance and course to estimate the resulting error. Determining a fix by crossing LOPs, and advancing LOPs to get running fixes, are not specific to the intercept method and can be used with any sight reduction method or with LOPs obtained by any other method (bearings, etc.).

- Celestial navigation
- Circle of equal altitude
- Haversine formula
- Longitude by chronometer
- Nicholls's Concise Guide, Volume 1, by Charles H. Brown F.R.S.G.S. Extra Master
- Norie's Nautical Tables, edited by Capt. A.G. Blance
- The Nautical Almanac 2005, published by Her Majesty's Nautical Almanac Office
- Navigation for School and College, by A.C. Gardner and W.G. Creelman
There are many causes of water scarcity, but we must first understand the term itself. Water scarcity refers to a situation where the supply of potable, unpolluted water in a region is lower than the demand. According to recent reports, nearly 1.2 billion people lack access to clean drinking water, and water shortages can cause a variety of illnesses ranging from food poisoning to cholera.

Typically, water scarcity is driven by two important factors: the increasing use of freshwater and the depletion of usable freshwater resources. Scarcity can be of two types: physical water scarcity, which arises when natural water resources cannot meet a region's demand, and economic water scarcity, which is caused by the mismanagement of a sufficiently available water resource. However, there are many more causes of water shortage:

Major Causes of Water Scarcity
- Climate change
- Natural calamities such as droughts and floods
- Increased human consumption
- Overuse and wastage of water
- A global rise in freshwater demand
- Overuse of aquifers and their consequent slow recharge

When an individual is water-stressed, it means they lack sufficient access to potable water. An estimated 1.1 billion people are under water stress. In many African countries, a large percentage of people have no easy access to fresh water, and one of the most common methods of acquiring it is digging holes in riverbeds. Scarcity of water can also cause water pollution: if inadequate water is available for sanitation, water becomes polluted through the introduction of disease-causing pathogens. An estimated 88% of all water-borne diseases are caused this way. Furthermore, water scarcity can unbalance ecosystems: food chains are affected, and biodiversity is harmed.
These sites have information, pictures, and videos about roller coasters. Design your own roller coaster online or follow the hands-on instructions to make one in the classroom. Learn how physics concepts such as potential energy, kinetic energy, and inertia apply to these rides. There are links to eThemes Resources on simple machines and the scientific method.

Set your own speed, mass, loop, gravity, friction, and hills to see how a roller coaster works. This interactive design process allows students to determine the factors that go into designing roller coasters. NOTE: Site has links to external websites. NOTE: Applet requires Java.

This site explains how roller coasters run. The second page has an animation that shows the point when potential energy is converted to kinetic energy. Other topics include forces, gravity, inertia, and acceleration. NOTE: The site includes advertising.

Presents interactive models of machines such as levers and pulley systems. Explains basic concepts of physics such as mechanics, force, and motion. Includes sites on the mechanics of sports and Leonardo da Vinci's inventions.

Learn basic concepts of physics. Learn about force, motion, and friction using interactive simulations where you can manipulate the variables. Includes a link to an eThemes resource on Simple Machines, Magnets, and Gravity.

Learn about different forms of energy, including renewable and nonrenewable energy. Includes definitions, videos, online games, and quizzes. Also read tips on how to conserve energy. Includes links to an eMINTS WebQuest and eThemes Resources on heat, light, the electromagnetic spectrum, sound, and electricity.
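As a classroom companion to these simulations, the core potential-to-kinetic energy idea can be sketched in a few lines of Python. This is my own illustrative example, assuming a frictionless track so that all potential energy becomes kinetic energy:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_after_drop(drop_height_m, initial_speed_ms=0.0):
    """Energy conservation on a frictionless track:
    0.5*m*v0^2 + m*g*h = 0.5*m*v^2  (the mass m cancels out)."""
    return math.sqrt(initial_speed_ms ** 2 + 2 * G * drop_height_m)

# A coaster cresting a 60 m first hill at 2 m/s reaches about 34 m/s.
print(f"{speed_after_drop(60, 2):.1f} m/s")
```

Real coasters come out slower because friction and air drag bleed off energy, which is one reason each successive hill must be lower than the last.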
Study Guide for Chapter 23 - Nationalism Triumphs in Europe

Terms and People to Know
Sec 1. Rhine Confederation, German Confederation, Zollverein, Frankfurt, Frederick William IV, Otto von Bismarck, Hohenzollerns, Schleswig and Holstein, Danish War, Seven Weeks War (Austro-Prussian War), Franco-Prussian War, North German Confederation, Napoleon III, Ems Dispatch, kaiser, chancellor, Second Reich, Bundesrat, Reichstag
Sec 2. Krupp Steel, August Thyssen, Kulturkampf, Social Democratic Party, William II
Sec 3. Giuseppe Mazzini, Count Camillo Cavour, Young Italy Movement, The Red Cross, Risorgimento, Victor Emmanuel II, Giuseppe Garibaldi, Red Shirts, anarchists
Sec 4. Francis I of Austria, Francis Deak, Francis Joseph, the Balkans, the Ottoman Empire, "dying man or sick man of Europe", "Balkan powder keg", "Orthodoxy, autocracy, and nationalism", Alexander II, Crimean War, Emancipation Decree, zemstvos, People's Will, Alexander III, Constantine Pobedonostsev, pogroms, refugees, Nicholas II, Trans-Siberian Railway, Vladimir Ulyanov "Lenin", Russo-Japanese War, Father George Gapon, St. Petersburg's Winter Palace, Bloody Sunday, the Revolution of 1905, October Manifesto, Duma, Peter Stolypin, Rasputin

Review Questions
• Explain how early German nationalism paved the way for German unity.
• How was Bismarck able to unify Germany?
• What economic changes occurred in Germany in the mid-1800s?
• What did Bismarck do to adjust to the industrial and socialist changes in Germany?
• How was Italy able to unify, and what problems did the Italians face after 1861?
• Describe and explain the nationalistic conflicts in the Balkans and the formation of the Dual Monarchy.
• Describe and explain Russia's attempts at reform and industrialization.
• Outline the causes and results of the Revolution of 1905.

Essay Questions (give specific examples to support your statements)
1. Describe and explain the unification of Germany. How was Bismarck able to unify Germany, and what problems occurred after Germany was united? (Be specific.)
2. Describe and explain the unification of Italy. How were Italian nationalists able to unify Italy, and what problems did they encounter after unification? (Be specific; support your statements with facts.)
3. Describe and explain the events that took place in the Austrian Empire and how the nationalist movement in the empire was a force that caused disunity as opposed to unity. What compromises were reached, and why were they not effective? (Be specific.)
4. Describe and explain the reforms and changes in Russia after the Congress of Vienna. Explain the causes and results of the Revolution of 1905. (Be specific.)
Central Arizona Project
The Central Arizona Project (CAP) brings water from the Colorado River to urban areas in central Arizona by way of a massive canal. The 336-mile canal carries 1.5 million acre-feet of water from Lake Havasu City to points east, terminating 14 miles south of Tucson. Water from the canal reaches municipal users (the cities of Mesa, Phoenix, and Scottsdale, for example), agricultural irrigation districts such as the Maricopa-Stanfield Irrigation District, and twelve American Indian communities. Water is also conveyed to groundwater recharge facilities for storage underground. The mission of the Central Arizona Water Conservation District (CAWCD), administrator of the Central Arizona Project, is "to deliver the full allocation of Colorado River water to central Arizona in a reliable, cost effective and environmentally sound manner." This prodigious water delivery system was created in response to two issues at the very heart of Arizona's water use: the first is to address a 2.5 million acre-foot groundwater overdraft; the second is to allow Arizona to draw its full allocation from the Colorado River, a whopping 2.8 million acre-feet annually.

Central Arizona Project History
The Central Arizona Project Association was formed to lobby Congress and educate citizens in 1946, just two years after Arizona signed the Colorado River Compact. In 1944, Arizona's hard-won 2.8 million acre-foot allocation of the Colorado River addressed a large portion of its water needs. However, all that water wasn't needed in the western districts; that is, Arizona was not using its full river allocation. Meanwhile, Arizona needed water for the middle of the state, where groundwater demands from urban growth and agriculture were dropping the water table at such an alarming rate that the ground was slowly subsiding. For the next two decades, Arizona tussled with California over Colorado River allotments. In 1963, a decision by the U.S. Supreme Court decreed that despite California's use of Arizona's allotment, the doctrine of prior appropriation would not stand, and Arizona would receive its allocated 2.8 million acre-feet of river water. Five years later, legislation to build the CAP canal passed through the U.S. Congress as part of the Colorado River Basin Project Act of 1968. The bill provided for the Bureau of Reclamation, part of the Department of the Interior, to fund and construct CAP, and for another entity to repay the federal government for certain costs of construction once the system was complete. The Central Arizona Water Conservation District was formed in 1971. This entity manages and operates CAP and repays the federal government for reimbursable construction costs. In 1973, canal construction began at Lake Havasu. Twenty years later the canal was complete, ending just south of Tucson.

The Central Arizona Groundwater Replenishment District
Artificial groundwater recharge comprises a major portion of Central Arizona Project (CAP) water delivery. Six recharge facilities store almost 400,000 acre-feet in underground reservoirs. This is the same amount of water that industry in Arizona uses. Recharge is a long-established and effective water management tool that allows renewable surface water supplies to be stored underground now for recovery later during periods of reduced water supply. From an environmental perspective, there are many benefits to this practice. Underground storage minimizes evaporation and creates a "reserve" of water that can be used during periods of prolonged drought.
Water that seeps into the aquifer undergoes a natural cleansing process, eliminating the need for additional water treatment plants. In fact, the quality of recharged surface water is improved by filtration through underlying sediments in a process known as soil aquifer treatment. Most importantly, recharged water can begin to alleviate a portion of Arizona’s groundwater debt and can actually raise the levels of some area aquifers. Learn more about CAP's Recharge Program. Consequences of Groundwater Overdraft The state of Arizona is suffering from a 2.5 million acre-foot groundwater overdraft. This means that 2.5 million acre-feet of groundwater are being removed from the ground faster than nature can replace it. The loss of such a volume causes a reduction in the levels of the water table, or aquifer. When groundwater levels drop, water becomes harder to access. Wells and pumps that initially had to penetrate 50 or 100 feet underground for water must now go deeper. Engineering and construction costs rise, as does the volume of materials and the complexity of the process needed to raise water to the surface. Drastic drops in groundwater levels cause basin subsidence. Underground water provides structural support for the surface of the earth. If that water is removed rapidly from a vulnerable area, the ground may slowly subside. Land subsidence across Arizona has been occurring since the early 1900s. Areas in Maricopa and Pinal Counties have subsided more than eighteen feet over the last century. These drops in surface level, generally measured both in centimeters and feet, can cause serious structural damage to homes, agricultural lands and industry. The Arizona Department of Water Resources offers information on ground subsidence and an interactive map of active land subsidence areas. In severe cases, a crack in the ground, a phenomenon known as an “earth fissure,” will appear. Earth fissures are associated with basin subsidence that accompanies extensive ground water mining. In Arizona, fissures were first noted near Eloy in 1929; fissures have been identified in Cochise, Maricopa, Pima, and Pinal Counties. Their physical appearance varies greatly, but they may be more than a mile in length, up to 15 feet wide, and hundreds of feet deep. During torrential rains they erode rapidly, presenting a substantial hazard to people and infrastructure. Moreover, fissures provide a ready conduit to deliver runoff and contaminated waters to basin aquifers. Rapid population growth in southern Arizona is increasingly juxtaposing population centers and fissures. The Arizona Geological Survey provides information on earth fissures to the State Land Department and the public.
Black History Month is a time for Americans to pay close attention to our history. Fortunately, there are authors and illustrators who have done an exceptional job of researching and telling stories that compel children to ask questions and learn more about a part of our history that should not be forgotten. Here is a list of award-winning books, resources, and ideas to use in your classroom:

Freedom in Congo Square by Carole Boston Weatherford
This noteworthy book leads the reader through the days of the week as lived by slaves in New Orleans, Louisiana. Weatherford welcomes the youngest of readers to learn about slavery by using an engaging poetic pattern and a predictable text structure. The foreword, written by author Freddi Williams Evans, presents a detailed account of Congo Square's history. Use the "Understanding Text Structure Task Cards" to explore the poetic format the author uses to teach readers about slavery.

Two Friends: Susan B. Anthony and Frederick Douglass by Dean Robbins and Sean Qualls
Did you know that Susan B. Anthony and Frederick Douglass occasionally met to share ideas about establishing equal rights for women and African-Americans? Author Dean Robbins writes an intriguing story by employing a text structure that compares two extremely brave people. Create a Venn diagram with students to compare and contrast these two significant difference-makers in American history.

Pink and Say by Patricia Polacco
Learn about the Civil War through the eyes of two young boys in this truly remarkable piece of literature. The heartrending story offers a vast amount of literary teaching points for units of study. Children can practice questioning, inferring, and understanding author's purpose as they connect with a story told in an authentic voice that you can almost hear as you read. Explore the extension activity in the Pink and Say Super Pack to support readers as they compare and contrast soldiers who were only boys.

Henry's Freedom Box: A True Story from the Underground Railroad by Ellen Levine
What is a freedom box? What is the Underground Railroad? Why does the boy on the cover of the book look lost? These are just some of the questions children will wonder about when they study the beautifully illustrated cover of this book. As answers to their questions unfold in the story, children will learn about the tremendous hardships slaves endured. Preparing a minilesson focused on questioning is easy with this ready-to-go lesson plan from BookPagez.

Under the Quilt of Night by Deborah Hopkinson
Imagine running for your freedom in the dead of night, hypersensitive to the sounds of the dark, hoping not to get caught. You trust an obscured map sewn into a quilt to help guide you to safety. This is a story, told in verse, about how one young slave desperately ran for her life to find freedom. Pair this book with "Henry's Freedom Box: A True Story from the Underground Railroad" to compare the different ways in which brave African-Americans escaped slavery. Download the reading comprehension lessons from the Under the Quilt of Night Super Pack to help readers gain a deeper understanding of the story outlined in this picture book.

The Book Itch: Freedom, Truth & Harlem's Greatest Bookstore by Vaunda Micheaux Nelson
There once was a bookstore like no other in our country's history. The store was opened around 1930 by Lewis Henri Michaux, who sold books written by and about African Americans. Michaux wanted everyone to understand the power of knowledge.
He encouraged his neighbors to read books, ask questions and learn more about events in American history that should be remembered. Pair this book with "The Boy Who Harnessed the Wind" by William Kamkwamba, a perfect example of why reading for knowledge is powerful. Use the Retelling and Summarizing Task Cards for both books to practice using an essential reading comprehension strategy.

Seeds of Freedom: The Peaceful Integration of Huntsville, Alabama by Hester Bass
When brave people plant seeds to make a difference, it takes courageous people to make the seeds grow. This story chronicles the peaceful, yet powerful, movements that occurred in Huntsville, Alabama in the 1960s. The events in this book will provoke children to ask questions about a time in history that was not too long ago. Pair this book with the Retelling and Summarizing lessons in The Other Side Super Pack to practice recounting the historical events that happened in both books.

Martin's Big Words by Doreen Rappaport
Author Doreen Rappaport has an extraordinary way of writing stories around important phrases spoken and written by historical figures. Marrying Rappaport's talent with Martin Luther King's quotes makes for an outstanding book about the Civil Rights Movement. Pair this book with other books written by Rappaport for a study in text structure and illustration. BookPagez also has lessons for working more closely with understanding the author's purpose.

Brown Girl Dreaming by Jacqueline Woodson
This is a book you will want to take your time with. You'll find yourself appreciating and rereading passages that are rich with voice. Woodson's writing style compels readers to think deeply about what it was like growing up as an African-American in the 1960s and 1970s. BookPagez offers valuable resources to go along with every chapter of this book.

Charlie Parker Played Be Bop by Chris Raschka
Have you had the pleasure of reading this oldie but goodie aloud to a group of children? You will feel like you are playing right along with Charlie Parker and his jazz band. The words create music all on their own, with a beat that makes you want to snap your fingers and tap your feet. Pair this book with another musical adventure entitled "This Jazz Man" by Karen Ehrhardt, a book that will make you want to dance as you read about notable jazz players, including Charlie Parker.

Little Melba and Her Big Trombone by Katheryn Russell-Brown
Melba Liston loved playing the trombone, and she was an exceptionally talented player. Some didn't see her that way, though. The color of her skin and her gender bothered people. They treated Liston unfairly, and she almost quit playing the trombone. Fortunately, her friends convinced her to play wherever she could find an audience, and her travels took her around the world. Pair this book with the Amazing Grace Super Pack to make connections with the literature.
Bordered by three immense lakes, Huron, Erie and Ontario, this county map of Ontario is from the 1874 Mitchell World Atlas. Hand colored and reproduced here in its original size of 12" x 15", archivally printed on acid-free paper.

In Ontario, the nineteenth century saw a shift of religious power from the Tory elite to middle-class merchants and professionals. The leadership of the Anglican Church, once unquestioned partly because of intimate networks of patron-client relations, faded gradually. Its power declined with the introduction of more modern ideals based on merit. By the 1870s, the new middle class was firmly in control and the old elite had all but vanished.

Beginning in the late 1870s, the Ontario Woman's Christian Temperance Union banded together with the objective of incorporating "scientific temperance" into schools' curricula. This course of study reinforced moralistic temperance messages with the study of anatomy. When this proved unsuccessful, the Union moved to dry up Ontario through government action. This succeeded in eliminating alcohol in many rural areas and towns, but not in the larger cities. The sale and consumption of liquor, wine, and beer today are still controlled by the government, though to a lesser extent. This ensured strict community standards and preserved the revenue the government derives from alcohol.

The year 1813 brought the Battle of Lake Erie during the War of 1812. A year before, when the war broke out, the British had immediately seized control of the lake. But in 1813 the United States Navy defeated and captured the British Royal Navy squadron on the lake, ensuring American control for the rest of the war. This in turn allowed American forces to recover Detroit and win the Battle of the Thames (Ontario). The battle proved to be one of the biggest naval conflicts of the war.

Archival reproduction print from a high-resolution scan. 12" x 15"
Investigations is a complete mathematics program for grades K-5. Students using Investigations in Number, Data, and Space are expected to learn arithmetic, basic facts and much more. The focus of instruction is on mathematical thinking and reasoning. Students using the complete Investigations curriculum develop an understanding of:
- number, operations, and early algebraic ideas
- geometry and measurement
- data analysis and probability
- patterns, functions, and the math of change, which provide foundations for algebra

Investigations is based on our goals and guiding principles, years of work with real teachers and students, and research on how children learn mathematics. It is carefully designed to invite all students into mathematics and to help them develop a deep understanding of fundamental mathematical ideas.

"Understanding refers to a student's grasp of fundamental mathematical ideas. Students with understanding know more than isolated facts and procedures. They know why a mathematical idea is important and the contexts in which it is useful. Furthermore, they are aware of many connections between mathematical ideas. In fact, the degree of students' understanding is related to the richness and extent of the connections they have made." (2002, Helping Children Learn Mathematics, p. 10.)

As a natural part of their everyday mathematics work, Investigations students:
- explore problems in depth.
- find more than one way to solve many of the problems they encounter.
- reason mathematically and develop problem-solving strategies.
- examine and explain mathematical thinking and reasoning.
- communicate their ideas orally and on paper, using "clear and concise" notation.
- represent their thinking using models, diagrams, and graphs.
- make connections between mathematical ideas.
- prove their ideas to others.
- develop computational fluency: efficiency, accuracy, and flexibility.
- choose from a variety of tools and appropriate technology.
- work in a variety of groupings: whole class, individually, in pairs, and in small groups.
As microprocessors get smaller and more powerful, the search for a low-cost, low-power method of keeping them cool intensifies. Traditional methods using fans and heat sinks are limited, and next-generation devices might be impossible without improved cooling technology. At the Georgia Institute of Technology (www.gatech.edu), engineers have developed synthetic jet arrays that produce two to three times the cooling of a fan while using two-thirds less energy. The jets resemble tiny speakers, with electromagnetic or piezoelectric drivers vibrating a diaphragm at 100 to 200 Hz. This sucks air into a cavity and expels it, creating pulsating jets of air that can be precisely directed. Though the jets move 70% less air than fans of comparable size, the airflow contains tiny vortices, which make the flow turbulent. Turbulent air mixes more efficiently with ambient air, breaking up thermal boundary layers and increasing heat transfer. The jets can be scaled to suit applications and turned on and off to meet changing thermal demands.
Severe Weather 101

What we do: Read more about NSSL's hail research here.

- What is hail? - Hail is a form of precipitation that occurs when updrafts in thunderstorms carry raindrops upward into extremely cold areas of the atmosphere, where they freeze into balls of ice. Hail can damage aircraft, homes and cars, and can be deadly to livestock and people.

- How does hail form? - Hailstones grow by colliding with supercooled water drops. Supercooled water will freeze on contact with ice crystals, frozen raindrops, dust or some other nuclei. Thunderstorms that have a strong updraft keep lifting the hailstones up to the top of the cloud, where they encounter more supercooled water and continue to grow. The hail falls when the thunderstorm's updraft can no longer support the weight of the ice or the updraft weakens. The stronger the updraft, the larger the hailstone can grow. Hailstones can have layers like an onion if they travel up and down in an updraft, or they can have few or no layers if they are "balanced" in an updraft. One can tell how many times a hailstone traveled to the top of the storm by counting the layers. Hailstones can begin to melt and then re-freeze together, forming large and very irregularly shaped hail.

- How does hail fall to the ground? - Hail falls when it becomes heavy enough to overcome the strength of the updraft and is pulled by gravity towards the earth. How it falls depends on what is going on inside the thunderstorm. Hailstones bump into other raindrops and other hailstones inside the thunderstorm, and this bumping slows down their fall. Drag and friction also slow their fall, so it is a complicated question! If the winds are strong enough, they can even blow hail so that it falls at an angle. This would explain why the screens on one side of a house can be shredded by hail while the rest are unharmed!

- How fast does hail fall? - We really only have estimates of the speed at which hail falls. One estimate is that a 1 cm hailstone falls at 9 m/s, while an 8 cm stone weighing 0.7 kg falls at 48 m/s (about 173 km/h). However, a hailstone is not likely to reach terminal velocity, due to friction, collisions with other hailstones or raindrops, wind, the viscosity of the air, and melting. Also, the formula to calculate terminal velocity assumes a perfect sphere, and hail is generally not a perfect sphere!

- What areas have the most hail? - Though Florida has the most thunderstorms, Nebraska, Colorado, and Wyoming usually have the most hailstorms. The area where these three states meet, "hail alley," averages seven to nine hail days per year. The reason this area gets so much hail is that the freezing level (the part of the atmosphere at 32 degrees Fahrenheit or less) in the high plains is much closer to the ground than it is at sea level, where hail has plenty of time to melt before reaching the ground. Other parts of the world that have damaging hailstorms include China, Russia, India and northern Italy. When viewed from the air, it is evident that hail falls in paths known as hail swaths. They can range in size from a few acres to an area 10 miles wide and 100 miles long. Piles of hail in hail swaths have been so deep that a snow plow was required to remove them, and occasionally, hail drifts have been reported.

- How large can hail get? - Hail is usually pea-sized to marble-sized, but big thunderstorms can produce big hail. The largest hailstone recovered in the U.S.
fell in Vivian, SD on June 23, 2010 with a diameter of 8 inches and a circumference of 18.62 inches. It weighed 1 lb 15 oz. - Estimating Hail Size - Hail size is estimated by comparing it to a known object. Most hail storms are made up of a mix of sizes, and only the very largest hail stones pose serious risk to people caught in the open. - Pea = 1/4 inch diameter - Marble/mothball = 1/2 inch diameter - Dime/Penny = 3/4 inch diameter - Nickel = 7/8 inch - Quarter = 1 inch — hail quarter size or larger is considered severe - Ping-Pong Ball = 1 1/2 inch - Golf Ball = 1 3/4 inches - Tennis Ball = 2 1/2 inches - Baseball = 2 3/4 inches - Tea cup = 3 inches - Grapefruit = 4 inches - Softball = 4 1/2 inches What we do: NSSL's mPING project collects reports from the public about hail and other weather phenomena in their vicinity via a free mobile app. This data is used to refine radar algorithms that detect hail, and to enhance climatological information about hail in the U.S. Similarly, the Severe Hail Verification Experiment (SHAVE) collected data on hail by making phone calls to the public along the path of selected storms.
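The terminal-velocity estimate mentioned above can be sketched in code. This is a rough, idealized model of my own, assuming a smooth solid-ice sphere (917 kg/m³), a drag coefficient of 0.5, and near-surface air density of 1.2 kg/m³; because real hailstones are neither smooth, solid, nor spherical, its output differs somewhat from the estimates quoted above.

```python
import math

RHO_AIR = 1.2    # kg/m^3, near-surface air (assumed)
RHO_ICE = 917.0  # kg/m^3, solid ice (assumed; real hail is less dense)
CD = 0.5         # drag coefficient of a smooth sphere (assumed)
G = 9.81         # m/s^2

def terminal_velocity(diameter_m):
    """Speed at which air drag on a falling ice sphere balances its weight."""
    r = diameter_m / 2.0
    mass = RHO_ICE * (4.0 / 3.0) * math.pi * r ** 3
    area = math.pi * r ** 2  # cross-sectional area facing the airflow
    return math.sqrt(2.0 * mass * G / (RHO_AIR * CD * area))

for d_cm in (1, 4, 8):
    print(f"{d_cm} cm stone: ~{terminal_velocity(d_cm / 100.0):.0f} m/s")
```

Because mass grows with the cube of the diameter while drag area grows only with the square, terminal velocity scales with the square root of the diameter, which is why large hail is so much more destructive than small hail.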
The fragrant blossoms and tasty fruit of citrus trees (Citrus spp.) can be a welcome addition to gardens in U.S. Department of Agriculture plant hardiness zones 9 through 11. Young citrus trees begin producing fruit within five years of being grafted or budded. Don't be alarmed if a citrus tree drops some of its green fruit during the summer; the tree will likely have more than enough fruit come autumn and winter.

Fruit grown from the seeds of citrus fruit will not be the same as the fruit produced by the parent tree. To guarantee fruit quality, small branches from citrus trees that produce high-quality fruit are grafted or budded onto rootstock chosen for its strength and efficient absorption of water and nutrients rather than for its own fruit. The fruit produced will match the grafted variety, not the rootstock. Grafting also shortens the time a citrus tree requires before producing fruit: citrus trees begin producing within five years of being grafted, depending on the type of fruit.

More Than Enough
Citrus trees produce an overabundance of fragrant blossoms. The tree will not have the resources or energy to ripen all the fruit if every flower is pollinated. Rather than risking immature or low-quality fruit, citrus trees drop green fruit several weeks after the spent flowers fall, keeping only what the tree can ripen. Seeing unripe fruit falling can concern gardeners, but it is a perfectly normal part of life for the citrus tree.

Pick Over Time
Once picked, citrus fruit does not continue to ripen or produce sugars. Instead, the fruit should be left on the tree until it is completely ripe, or it will be sour and dry. Citrus fruit does not have to be harvested all at once when it is ripe. Fruit that is left on the tree will continue to develop sugars and become even sweeter and tastier, so it can be picked and enjoyed over several weeks.

Temperatures have a big impact on fruit quality. High heat in the summer develops sugar in citrus fruits. While citrus trees have low tolerance for freezing, cold winter temperatures lower the acid levels in citrus fruits. The tastiest fruit is grown where summers are long and hot and winter temperatures drop. Citrus trees need to be well watered in the hot summer months, however, or the fruit will split and be ruined. Splitting often occurs soon after rain when the tree has been too dry.
Urban forests help to improve our air quality. Heat from the earth is trapped in the atmosphere by high levels of carbon dioxide (CO2) and other heat-trapping gases that prevent it from escaping into space, creating the phenomenon known as the "greenhouse effect." Roughly half of the greenhouse effect is caused by CO2. Trees help by removing (sequestering) CO2 from the atmosphere during photosynthesis, using the carbon to form the carbohydrates that make up plant structure and returning oxygen to the atmosphere as a byproduct. In this way, trees act as carbon sinks, alleviating the greenhouse effect.

Light is a key factor in the rate of photosynthesis. When a light source is nearer to a plant, more of the plant's surface area comes into contact with the light, so more photosynthesis occurs and more oxygen is produced. Plants deprived of the light they require cannot produce starch.

Photosynthesis is a naturally occurring process by which green plants, algae and certain bacteria use the energy of light to convert carbon dioxide and water into the simple sugar glucose, releasing oxygen as a byproduct. (Some bacteria perform anoxygenic photosynthesis, in which no oxygen is released.) Light energy is thus converted to chemical energy: photosynthesis is the ultimate source of all the calories living things consume and of all the carbon stored in the organic compounds of living matter. It also helps maintain normal levels of carbon dioxide in the atmosphere, absorbing the CO2 expelled by animals, humans and the burning of hydrocarbons.

Cellular respiration is the counterpart of photosynthesis: glucose is ultimately broken down to yield carbon dioxide and water, releasing energy. Together, the two processes keep atmospheric carbon dioxide and oxygen in balance. Human activity threatens that balance: burning hydrocarbons adds CO2 to the atmosphere faster than plants can sequester it, and the resulting pollution harms both plant and human health.
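The overall balanced equation for oxygenic photosynthesis summarizes the exchange described above; cellular respiration runs the same reaction in reverse. A minimal LaTeX rendering:

```latex
% Simplified overall equation for oxygenic photosynthesis:
% carbon dioxide + water, driven by light, yield glucose + oxygen.
\[
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
```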
When you see the Moon way up in the sky, it's hard to get a sense of perspective about how big the Moon really is. Just how big is the Moon compared to Earth?

Let's take a look at the diameter first. The diameter of the Moon is 3,474 km. Now, let's compare this to the Earth. The diameter of the Earth is 12,742 km. This means that the Moon's diameter is approximately 27% of the Earth's.

What about surface area? The surface area of the Moon is 37.9 million square kilometers. That sounds like a lot, but it's actually smaller than the continent of Asia, which covers about 44.4 million square km. The surface area of the whole Earth is 510 million square km, so the area of the Moon compared to Earth is only 7.4%.

How about volume? The volume of the Moon is 21.9 billion cubic km. Again, that sounds like a huge number, but the volume of the Earth is about 1.08 trillion cubic kilometers. So the volume of the Moon is only 2% of the volume of the Earth.

Finally, let's take a look at mass. The mass of the Moon is 7.347 × 10²² kg. But the Earth is much more massive. The mass of the Earth is 5.97 × 10²⁴ kg. This means that the mass of the Moon is only 1.2% of the mass of the Earth. You would need 81 objects with the mass of the Moon to match the mass of the Earth.

You can listen to a very interesting podcast about the formation of the Moon from Astronomy Cast, Episode 17: Where Did the Moon Come From?
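The percentages above are simple ratios of the quoted figures; here is a quick Python check (the Earth volume constant below uses the standard 1.08 trillion km³ figure):

```python
# Recompute the Moon/Earth comparisons from the figures quoted above.
moon  = {"diameter_km": 3_474, "area_km2": 37.9e6,
         "volume_km3": 21.9e9, "mass_kg": 7.347e22}
earth = {"diameter_km": 12_742, "area_km2": 510e6,
         "volume_km3": 1.08e12, "mass_kg": 5.97e24}

for quantity in moon:
    ratio = 100.0 * moon[quantity] / earth[quantity]
    print(f"{quantity}: {ratio:.1f}% of Earth")  # 27.3, 7.4, 2.0, 1.2

print(round(earth["mass_kg"] / moon["mass_kg"]))  # ~81 Moons per Earth mass
```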
10 Insane Ancient Achievements that Science Can't Explain

Out-of-place artifact (OOPArt) is a term coined by American naturalist and cryptozoologist Ivan T. Sanderson for an object of historical, archaeological, or paleontological interest found in a very unusual or seemingly impossible context that could challenge conventional historical chronology. The term "out-of-place artifact" is rarely used by mainstream historians or scientists; its use is largely confined to cryptozoologists, proponents of ancient astronaut theories, and paranormal enthusiasts.

1. Tiwanaku and Puma Punku
Tiwanaku (Spanish: Tiahuanaco and Tiahuanacu) is an important Pre-Columbian archaeological site in western Bolivia, South America. Pumapunku, also called "Puma Pumku" or "Puma Puncu", is part of a large temple complex or monument group that belongs to the Tiwanaku site. Tiahuanaco is an example of engineering so monumental that it dwarfs even the work of the Aztecs. Stone blocks on the site weigh many tons. They bear no chisel marks, so the means by which they were shaped remains a mystery. The stone itself came from two different quarries. One supplied sandstone and was situated 10 miles away; it shows signs of having produced blocks weighing up to 400 tons. The other supplied andesite and was located 50 miles away, raising the question of how the enormous blocks were transported in an age before the horse had been introduced to South America. Close examination of the structures shows an unusual technique behind their building: the stone blocks were notched, then fitted together so that they interlocked in three dimensions. The result was buildings strong enough to withstand earthquakes.

2. Nazca Lines
The high desert of Peru holds one of the most mystifying monuments of the known world: the massive-scale geoglyphs known as the Nazca Lines. The "lines" range from geometric patterns to "drawings" of different animals and stylized human-like forms. The ancient lines can only be truly taken in, their forms discerned, from high in the air, leaving generations mystified as to how these precise works could have been completed long before the documented invention of human flight. Who built them, and what was their purpose? Are the lines signs left by an alien race? Ancient "crop circles"? Landing strips for alien gods/astronauts? Relics of an ancient people far more advanced, and capable of human flight, than previously imagined? Or perhaps a giant astronomical calendar?

3. Sacsayhuamán
Sacsayhuamán (also known as Sacsahuaman) is a walled complex near the old city of Cusco, at an altitude of 3,701 m (about 12,000 feet). The site is part of the City of Cuzco, which was added to the UNESCO World Heritage List in 1983. It comprises three parallel walls built at different levels from limestone blocks of enormous size. The boulders used for the first, or lowest, level are the biggest; there is one that is 8.5 m (28 ft) high and weighs about 140 metric tons. Those boulders classify the zigzagging walls as cyclopean or megalithic architecture. There are no other walls like these. They are different from Stonehenge, different from the pyramids of the Egyptians and the Maya, different from any of the other ancient monolithic stoneworks. Scientists are not certain how these huge stones were transported and dressed to fit so perfectly that no blade of grass or steel can slide between them.
There is no mortar. The stones often join in complex and irregular surfaces that would appear to be a nightmare for the stonemason.

4. Stonehenge

Stonehenge is a megalithic monument on the Salisbury Plain in Southern England, composed mainly of thirty upright stones (sarsens, each over ten feet tall and weighing 26 tons) aligned in a circle, with thirty lintels (6 tons each) perched horizontally atop the sarsens in a continuous circle. There is also an inner circle composed of similar stones, likewise constructed in post-and-lintel fashion. Gerald Hawkins, a professor of astronomy, concluded that Stonehenge was a sophisticated astronomical observatory designed to predict eclipses (Stonehenge Decoded). The positioning of the stones provides a wealth of information, as does the choice of the site itself. If you can see the alignment, general relationship, and use of these stones, then you will know the reason for the construction. Hawkins and other astronomers discovered the 56-year cycle of eclipses by decoding Stonehenge. Moving marker stones once each year from an initial fixed position allows one to predict accurately every important lunar event for hundreds of years; this “computer” would need resetting only about once every 300 years, by advancing the stones one space. Mankind has generally used the cycle of the Moon as a unit of timekeeping.

5. Costa Rica Stone Spheres

One of the strangest mysteries in archaeology was discovered in the Diquís Delta of Costa Rica. Since the 1930s, hundreds of stone balls have been documented, ranging in size from a few centimetres to over two meters in diameter. Some weigh 16 tons. Almost all of them are made of granodiorite, a hard igneous stone. These objects are monolithic sculptures made by human hands.

6. Trilithon at Baalbek

The mysterious ruins of Baalbek are one of the great power places of the ancient world. For thousands of years its secrets have been shrouded in darkness, or bathed in an artificial light by those who would offer us a simplistic solution to its mysteries. The Temple of Jupiter is one of the most impressive temples in Baalbek: it measures 88 x 48 meters, stands on a podium 13 meters above the surrounding terrain and 7 meters above the courtyard, and is reached by a monumental stairway. One of the most amazing engineering achievements is the podium itself, which was built with some of the largest stone blocks ever hewn. On the west side of the podium is the “Trilithon”, a celebrated group of three enormous stones weighing about 800 tons each. Some archaeologists might well wish that Baalbek had been buried forever, for it is here that we find the largest dressed stone block in the world: the infamous Stone of the South, lying in its quarry just a ten-minute walk from the temple acropolis. This huge stone weighs approximately 1,000 tons, almost as heavy as three Boeing 747 aircraft.

7. Great Pyramid of Giza

The Great Pyramid of Giza (also called Khufu’s Pyramid, the Pyramid of Khufu, or the Pyramid of Cheops) is the oldest and largest of the three pyramids in the Giza Necropolis bordering what is now Cairo, Egypt, and is the only one of the Seven Wonders of the Ancient World that survives substantially intact. It is believed the pyramid was built as a tomb for the Fourth Dynasty Egyptian king Khufu (Cheops in Greek) and constructed over a 20-year period concluding around 2560 BC. The Great Pyramid was the tallest man-made structure in the world for over 3,800 years.
Originally the Great Pyramid was covered by casing stones that formed a smooth outer surface; what is seen today is the underlying core structure. Some of the casing stones that once covered the structure can still be seen around the base. There have been varying scientific and alternative theories regarding the Great Pyramid’s construction techniques. The most widely accepted construction theories are based on the idea that it was built by moving huge stones from a quarry and dragging and lifting them into place.

8. Shroud of Turin

The Shroud of Turin is reputedly Christ’s burial cloth. It has been a religious relic since the Middle Ages. To believers it was divine proof that Christ was resurrected from the grave; to doubters it was evidence of human gullibility and one of the greatest hoaxes in the history of art. No one has been able to prove that it is the burial cloth of Jesus of Nazareth, but its haunting image of a man’s wounded body is proof enough for true believers. The Shroud of Turin, as seen by the naked eye, is a negative image of a man with his hands folded. The linen is 14 feet 3 inches long and 3 feet 7 inches wide. The shroud bears the image of a man with wounds similar to those suffered by Jesus. One theory is simply that the Shroud is a painting; it has been proposed that it was painted using iron oxide in an animal-protein binder. The STURP scientists, however, concluded from their studies that no paints, pigments, dyes, or stains make up the visible image. Could the image have been produced by a burst of radiation (heat or light) acting over a short period of time, which would have scorched the cloth? Scientists have not been able to duplicate the characteristics of the Shroud by this method either, just as with the painting hypothesis. Moreover, the color and ultraviolet characteristics of the Shroud body image and of a scorch are different: the body image does not fluoresce under UV light, whereas scorches, such as the burns from the fire of 1532, do fluoresce. Thus many scientists rule out the radiation theory.

9. Star Child Skull

In the 1930s, in a small rural village 100 miles southwest of Chihuahua, Mexico, at the back of a mine tunnel, two mysterious sets of remains were found: a complete human skeleton and a smaller, malformed skeleton. In late February 1999, Lloyd Pye was first shown the Starchild skull by its owners. Nameless then, it was a highly anomalous skull. The long-standing Star Being legends of Central and South America provide, its proponents argue, a plausible mechanism for how a skull so abnormal relative to humans might have been biologically created rather than genetically or congenitally malformed, or physically manipulated by deliberate deformation (binding). The terrain of the bone in the eye sockets contains incredibly subtle indentations and ridges that are perfectly symmetrical in both sockets, which, the argument goes, must have been formed by genetic instructions rather than by deformation.

10. The Antikythera Mechanism

The device, made of bronze and encased in wood, was found by divers off the Mediterranean island of Antikythera in 1900. “This device is just extraordinary, the only thing of its kind,” says Mike Edmunds (Cardiff University, Wales), one of the scientists investigating this amazing artefact. “The design is beautiful. The astronomy is exactly right. The way the mechanics are designed just makes your jaw drop.” Nothing like this instrument is preserved elsewhere; nothing comparable to it is known from any ancient scientific text or literary allusion.
On the contrary, from all that we know of science and technology in the Hellenistic Age, we should have felt that such a device could not exist. Some historians have suggested that the Greeks were not interested in experiment because of a contempt for manual labor, perhaps induced by the institution of slavery. On the other hand, it has long been recognized that in abstract mathematics and in mathematical astronomy they were no beginners but rather “fellows of another college” who reached great heights of sophistication. Many of the Greek scientific devices known to us from written descriptions show much mathematical ingenuity, but in all cases the purely mechanical part of the design seems relatively crude. Gearing was clearly known to the Greeks, but it was used only in relatively simple applications: they employed pairs of gears to change angular speed or mechanical advantage, or to apply power through a right angle, as in the water-driven mill.
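Since the passage hinges on what a pair of gears actually does, a minimal sketch of the underlying arithmetic may help. This illustrates generic gear-train math only; the tooth counts are invented for the example and are not a reconstruction of the Antikythera mechanism’s actual gearing:

```python
# Basic gear-train arithmetic: meshing gears trade speed for torque in
# proportion to their tooth counts. Tooth counts here are illustrative only.
def output_speed(input_rpm: float, pairs: list[tuple[int, int]]) -> float:
    """Speed after a chain of meshed gear pairs (driver_teeth, driven_teeth).

    Each pair multiplies angular speed by driver/driven and reverses
    the direction of rotation (hence the sign flip).
    """
    rpm = input_rpm
    for driver, driven in pairs:
        rpm *= -driver / driven  # minus sign: meshed gears counter-rotate
    return rpm

# Example: a two-stage train that slows the input shaft down 12x overall.
print(output_speed(60.0, [(20, 60), (15, 60)]))  # -> 5.0 rpm, same direction
```

Chaining pairs like this is how a single input shaft can drive several outputs at different rates, which is the basic trick any geared astronomical calculator relies on.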
Max Planck was a German physicist who lived from 1858 to 1947. The theories he developed changed our understanding of what happens inside an atom, and his work later gave rise to the field of quantum physics, which studies how energy behaves inside atoms. Planck also believed that we have no control over the laws of nature: we can observe and try to understand them, but we can't change them.