The Truth is one and the same always. Though ages and generations pass away—one generation goes and another comes; yet the Word, and the Power, and the Spirit of the Living God endures for ever and changes not. – Margaret Fell Fox, A Brief Collection of Remarkable Passages Relating to..., 1710 The seventeenth century in England was a time of great political, economic, and religious vitality, ferment, and upheaval. This century saw the English Civil War and the beheading of Charles I (1649), the Puritan Commonwealth under Oliver Cromwell, and the restoration of the monarchy under Charles II (1660). In religion, it was a period of bitter struggle between the established (Anglican) Church, the remnants of the Roman Catholic Church from which it had separated in 1534, the Puritans, and other Protestant sects. The printing press and newly allowed English translations of the Bible helped intensify this struggle. By the time of George Fox’s birth in 1624, more than a century had passed since the beginning of Luther’s reformation. The Protestant movement had begun in an attempt to instill a new spiritual life into Christianity, but it had often fallen victim to the rigidity it had earlier criticized in the Roman Catholic Church. Friends were not the first to protest that dogmatic belief had replaced living faith and authoritative Scriptures had replaced direct revelation; many religious groups throughout Europe were discontented with established religion and searched for a living faith. In this violent, seeking world George Fox in 1652 initiated a vigorous spiritual movement, later called the Religious Society of Friends, that stood in protest against a Christianity that many felt had idolized its forms and lost its inner spiritual life. As it grew and took hold, this movement drew into its fellowship many of those already involved in struggles with organized religions: many Seekers and members of the group called Diggers, and others like Elizabeth Hooten, Isaac Penington, and John Lilburne, the leader of the Levellers, who found many of their egalitarian aspirations shared among Friends.
Being allergic means being extremely sensitive to one or more ordinary substances. These can be foods, chemicals, pharmaceuticals, or even common house dust or particles shed by a cat, dog, or horse. Substances that cause allergies are called allergens. Why do allergens affect some people but not others? Medicine has not yet explained this. It is known today that heredity plays a particular role in the development of allergies: in many families, grandparents, parents, and children are susceptible to the same substance. Sometimes, however, only one family member is allergic. Emotions are also believed to play a role in the occurrence of allergies; fear, anger, or worry seems to intensify an allergic attack. What happens in the body when someone has an allergy? It is believed that an allergen, once it enters the body, stimulates cells to produce antibodies. Antibodies are part of the body's defenses, but in allergies they lead to adverse reactions. The allergen, together with the antibodies whose production it has triggered, is believed to cause the body to release a chemical called histamine. Histamine acts on the blood vessels and the lungs, producing allergic symptoms. Either the amount of histamine is very small, or histamine remains confined to the affected parts of the body, because it is not found in the blood of people suffering allergic attacks. As you can see, there is still much to discover about allergy.
Trace the milestones of our species’ fascinating history in African Hall's Human Odyssey—and discover why the 7 billion people on Earth today are far more alike than you might think. The human species' path hasn't been an easy one; in fact, Homo sapiens nearly went extinct some 70,000 – 90,000 years ago. Learn more about that close call—and the milestones that dot our species’ evolutionary history—while exploring the origins of humankind. Examine the skull casts of three early hominid species, then watch as their faces appear, thanks to “Pepper’s Ghost” optical illusion technology. Compare the distinctive gaits of a chimpanzee, Australopithecus afarensis, and a modern human. Use touchscreen stations to trace our species’ migration from its African origins, then see how Academy research is helping us better understand our human journey.
Atoms are composed of electrons moving around a central nucleus to which they are bound. The electrons can be torn away, overcoming the confining force of their nucleus, using the powerful electric field of a laser. Half a century ago, the theorist Walter Henneberger wondered whether it was possible to free an electron from its atom with the laser field, yet still make it stay near the nucleus. Many scientists considered this hypothesis impossible. However, it was recently confirmed by physicists from the University of Geneva (UNIGE), Switzerland, and the Max Born Institute (MBI) in Berlin, Germany. For the first time, they managed to control the shape of the laser pulse to keep an electron both free and bound to its nucleus, and at the same time to regulate the electronic structure of this atom dressed by the laser. What’s more, they also made these unusual states amplify laser light. They also identified a no-go area, nicknamed “Death Valley”, where physicists lose all their power over the electron. These results shatter the usual concepts related to the ionisation of matter. Since the 1980s, many experiments have tried to confirm the hypothesis advanced by the theorist Walter Henneberger: an electron can be placed in a dual state that is neither free nor bound. Trapped in the laser, the electron would be forced to pass back and forth in front of its nucleus, and would thus be exposed to the electric field of both the laser and the nucleus. This dual state would make it possible to control the motion of electrons exposed to both fields, and would let physicists create atoms with a “new” electronic structure, tunable by light. But is this really possible?

Leveraging the Natural Oscillations of the Electron

The more intense a laser is, the easier it should be to ionise the atom—in other words, to tear the electrons away from the attracting electric field of their nucleus and free them into space. “But once the atom is ionised, the electrons don’t just leave their atom like a train leaves a station, they still feel the electric field of the laser”, explains Jean-Pierre Wolf, a professor in the applied physics department of the UNIGE Faculty of Sciences. “We thus wanted to know if, after the electrons are freed from their atoms, it is still possible to trap them in the laser and force them to stay near the nucleus, as the hypothesis of Walter Henneberger suggests”, he adds. The only way to do this is to find the right shape of laser pulse to apply, imposing oscillations on the electron that are exactly identical, so that its energy and state remain stable. “The electron does naturally oscillate in the field of the laser, but if the laser intensity changes, these oscillations also change, and this forces the electron to constantly change its energy level and thus its state, even leaving the atom. This is what makes seeing such unusual states so difficult”, adds Misha Ivanov, a professor in the theoretical department of the MBI in Berlin.

Modulating Laser Intensity to Avoid Death Valley

The physicists tested different laser intensities so that the electron freed from the atom would undergo steady oscillations. They made a surprising discovery. “Contrary to the natural expectation that the more intense a laser is, the more easily it frees the electron, we discovered that there is a limit to the intensity, at which we can no longer ionise the atom”, observes Misha Ivanov.
“Beyond this threshold, we can control the electron again.” The researchers dubbed this limit “Death Valley”, following the suggestion of Professor Joe Eberly from the University of Rochester.

Confirming an Old Hypothesis to Revolutionize Physics Theory

By placing the electron in a dual state which is neither free nor bound, the researchers found a way to manipulate these oscillations as they like. This enables them to work directly on the electronic structure of the atom. After several adjustments, the physicists from UNIGE and MBI were able, for the first time, to free the electron from its nucleus and then trap it in the electric field of the laser, as Walter Henneberger suggested. “By applying an intensity of 100 trillion watts per cm2, we were able to go beyond the Death Valley threshold and trap the electron near its parent atom in a cycle of regular oscillations within the electric field of the laser”, Jean-Pierre Wolf says enthusiastically. For comparison, the intensity of the sun on the earth is approximately 100 watts per m2. “This gives us the option of creating new atoms dressed by the field of the laser, with new electron energy levels”, explains Jean-Pierre Wolf. “We previously thought that this dual state was impossible to create, and we’ve just proved the contrary. Moreover, we discovered that electrons placed in such states can amplify light. This will play a fundamental role in the theories and predictions on the propagation of intense lasers in gases, such as air”, he concludes.
Introduction to Graphic Displays
- Displaying categorical data in simple graphical formats such as bar charts and pie charts
- Displaying quantitative variable data in simple graphical formats such as dot plots, frequency histograms, and stem-and-leaf plots
- Using box plots to display numerical measures of data
- Interpreting graphic displays to make conclusions about the distribution of the variable
- Understanding scatter plots

Pie charts and bar charts are graphic displays of data for categorical variables. Dot plots, stem-and-leaf plots, histograms, and box-and-whisker plots are graphic displays of data for numerical variables. As an example, consider the yearly expenditures of a college undergraduate. After collecting her data (expense records) for the past year, she finds the expenditures shown in Table 1. These figures, although presented in categories, do not allow for easy analysis. The reader must expend extra effort to compare amounts spent or to relate individual proportions to the total. For ease of analysis, these data can be presented pictorially.
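To make the distinction concrete, here is a minimal Python sketch using matplotlib. The expenditure figures below are hypothetical stand-ins, since the actual values of Table 1 are not reproduced here; the point is simply that the same categorical data feed both displays.

```python
# A minimal sketch of the two categorical displays described above.
# The dollar amounts are hypothetical; Table 1's actual values are not shown.
import matplotlib.pyplot as plt

categories = ["Tuition", "Housing", "Food", "Books", "Other"]
amounts = [12000, 6000, 3000, 900, 1100]  # hypothetical yearly expenditures

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar chart: direct comparison of the amounts spent per category
ax1.bar(categories, amounts)
ax1.set_ylabel("Dollars spent")
ax1.set_title("Yearly expenditures (bar chart)")

# Pie chart: each category as a proportion of the total
ax2.pie(amounts, labels=categories, autopct="%1.1f%%")
ax2.set_title("Yearly expenditures (pie chart)")

plt.tight_layout()
plt.show()
```

The bar chart makes amounts easy to compare directly, while the pie chart relates each category to the total: exactly the two readings that, as noted above, raw figures make laborious.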
The generation of high-quality random numbers is desirable for a number of real world applications, most notably cryptography. The strength of many cryptographic protocols relies upon the quality of random numbers. Put simply, the higher the quality of random numbers, the stronger the resulting cryptography will be, and true random numbers are required for ultra-secure quantum communication.1,2 In practice, the generation of true random numbers is notoriously difficult. By true, we mean that the random numbers must be completely unpredictable and unreproducible. They must be bias-free bits (‘0’ and ‘1’) generated with equal probabilities. At the same time, a random number produced earlier cannot reliably be reproduced later, even by the same generator under exactly the same conditions. These requirements dictate that a computer algorithm cannot generate true random numbers because its output is always deterministic and reproducible. A true random number generator (RNG) therefore has to be hardware based, making use of the intrinsically unpredictable outcomes of a physical process. Quantum mechanical uncertainty is an ideal source of randomness in that it has the potential to offer the perfect randomness guaranteed by the laws of quantum physics. As shown in Figure 1(a), a quantum RNG can be realized by exploiting the particle-like nature of photons. In this elegant scheme, the source of randomness is the passive path selection of a photon hitting a semitransparent beam splitter. In theory, it should produce perfectly random numbers. However, random numbers obtained this way will be biased due to the inevitable imbalance in photon detection rates between the two detectors. This bias, then, has to be removed through mathematical post-processing (which is common for a physical RNG) in order to improve random number quality. Post-processing has been required for all quantum RNGs except for the one we have designed, and it was not known, prior to our work, whether a true quantum RNG could be purely physical. Figure 1. Two quantum RNG schemes based on the (a) particle-like or (b) wave-like nature of photons. In our design, we avoid the photon detection rate imbalance by exploiting the wave-like nature of photons.3 A long-lasting photon will collapse into a time window defined by measurement, and random wave function collapse is a source for true randomness (provided that each measurement time window is much smaller than the photon coherence time). As shown in Figure 1(b), a quantum RNG exploiting the wave-like nature of photons can be constructed simply with a source and detector. We use a laser diode as the source with a coherence time of 1ms, while the detector is gated at 1GHz with each detection window 1000 times smaller than the coherence time. Such a large disparity in time ensures an equal detection probability between any two adjacent detection gates. By assigning a bit value ‘0’ or ‘1’ according to a detection event at an even or odd clock cycle, a bias-free random number is readily obtainable. Figure 2. Photon count rate as a function of incident light intensity recorded for a self-differencing indium gallium arsenide single-photon APD. A maximum photon count rate of 497MHz is measured, which is very close to the value expected from a theoretical calculation (black line) assuming zero detector dead time. To realize this scheme, it is crucial for the detector to have certain characteristics. 
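Before turning to those detector characteristics, the parity-based bit assignment itself can be illustrated with a short simulation. This is only a sketch under stated assumptions, not the authors' hardware or code: detections are modeled as independent random events in each gate, with the 1GHz gate rate taken from the article and the mean count rate chosen arbitrarily for illustration.

```python
# Illustrative simulation of the even/odd clock-cycle bit assignment.
# Assumption: detections occur independently in each detection gate with a
# small, uniform probability (valid when the gate window is much shorter
# than the photon coherence time, as in the scheme described above).
import random

GATE_RATE_HZ = 1e9      # detector gated at 1 GHz (from the article)
COUNT_RATE_HZ = 5e6     # assumed mean photon detection rate (illustrative)

def random_bits(n_bits: int) -> list[int]:
    """Map each detection event to a bit via the parity of its gate index."""
    p_detect = COUNT_RATE_HZ / GATE_RATE_HZ   # detection probability per gate
    bits = []
    gate = 0
    while len(bits) < n_bits:
        gate += 1
        if random.random() < p_detect:        # a detection in this gate
            bits.append(gate % 2)             # even gate -> 0, odd gate -> 1
    return bits

bits = random_bits(10_000)
print("fraction of ones:", sum(bits) / len(bits))   # close to 0.5
```

Because adjacent gates are equiprobable when the gate window is far shorter than the coherence time, the gate-index parity yields essentially unbiased bits; the tiny residual bias in this toy model scales with the per-gate detection probability.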
Ideally, the detector should be operated in gated mode in order to allow unambiguous bit-value assignments. Moreover, to achieve high bit rates that are free of bias, the detector must be able to handle high photon rates and possess a negligible counting dead time. Semiconductor avalanche photodiodes (APDs) are well suited to this task. Operated under a self-differencing mode4 that was originally devised by us for megabit-per-second (Mb/s) secure key-rate quantum key distribution,1,2 an analog telecommunication APD can be converted into a high-speed single-photon detector. Using this system, we achieved a record photon count rate of ~500MHz and an ultra-short dead time of less than 2ns (see Figure 2).5 Figure 3. Byte correlation pattern of 500M bits generated by our 52Mb/s quantum RNG. Incorporating a self-differencing APD, we initially realized a quantum RNG with a random bit stream of 4Mb/s.3 Importantly, the quantum randomness has survived in this physical realization, and the random number outputs are intrinsically free of bias and do not require mathematical post-processing to pass random number statistical tests. It is the first time that any quantum RNG—and perhaps any physical RNG—has not required post-processing to pass stringent randomness tests. Furthermore, by using finer photon timing, the random bit rate can be increased by over an order of magnitude to 52Mb/s with no degradation in randomness.6 Figure 3 shows a visualization of the random output from our 52Mb/s RNG. Despite this state-of-the-art performance, the bit rate must be improved to serve demanding applications, such as high bit-rate quantum key distribution.1,2 Presently, the random bit rate is limited by the photon recording electronics, which can manage photons at only 5 million per second. In future work, we aim to design electronics to cope with a photon rate of 1 billion per second; with such technology available, we expect to see the bit rate surpass 100Mb/s. Using finer timing, a bit rate of multiple gigabits per second is in sight. Finally, as this RNG is based on semiconductor components, we envisage high-level integration to obtain a compact and robust high-speed RNG with randomness of the highest quality. Zhiliang Yuan, James Dynes, Andrew Shields Cambridge Research Laboratory Toshiba Research Europe Ltd Cambridge, United Kingdom Zhiliang Yuan leads the quantum key distribution project at Toshiba Research Europe Ltd. James Dynes obtained his PhD in strong laser field interactions in 2005. After a post-doc at the London Centre for Nanotechnology, he began his current position as a research scientist at Toshiba Research Europe Ltd. His research interests are quantum key distribution and single photon detection. Andrew Shields leads the quantum information group at Toshiba Research Europe Ltd. His interests include semiconductor and photonic approaches to quantum information processing and quantum communications.
The earliest known existence of modern humans, or Homo sapiens, was previously dated to be around 200,000 years ago. It’s a view supported by genetic analysis and dated Homo sapiens fossils (Omo Kibish, estimated age 195,000 years, and Herto, estimated age 160,000 years), both found in modern-day Ethiopia, East Africa. But new research, published today in two Nature papers, offers a fresh perspective. The latest studies suggest that Homo sapiens spread across the entire African continent more than 100,000 years earlier than previously thought. This evidence pushes back the origins of our species to 300,000 years ago, and supports the idea that important changes in our biology and behaviour had already taken place across most of Africa by that time.
Darwin defined sexual selection as the effects of the "struggle between the individuals of one sex, generally the males, for the possession of the other sex". It is usually males who fight each other. Traits selected by male combat are called secondary sexual characteristics (including horns, antlers, etc.) and are sometimes referred to as 'weapons'. Traits selected by mate choice are called 'ornaments'. Females often prefer to mate with males with external ornaments—exaggerated features of morphology. Genes that enable males to develop impressive ornaments or fighting ability may simply show off greater disease resistance or a more efficient metabolism—features that also benefit females. This idea is known as the 'good genes' hypothesis. Sexual selection is still being researched and discussed today.

Modern views
Ernst Mayr said:
- "Since Darwin’s days it has become clear that this kind of selection includes a far wider realm of phenomena, and instead of sexual selection it is better referred to as selection for reproductive success... genuine selection, not elimination, is involved, unlike survival selection. Considering how many new kinds of selection for reproductive success are discovered year after year, I am beginning to wonder whether it is not even more important than survival selection, at least in certain higher organisms".

Competition between members of the same species
Today, biologists would say that certain evolutionary traits can be explained by competition between members of the same species. Competition can occur before or after sexual intercourse.

Competition between males and females
- Before copulation, intrasexual selection – usually between males – may take the form of male-to-male combat. Intersexual selection, or mate choice, occurs when females choose between male mates. Traits selected by male combat are called secondary sexual characteristics (including horns, antlers, etc.), which Darwin described as "weapons", while traits selected by mate (usually female) choice are called "ornaments".
- After copulation, male–male competition may take the form of sperm competition, the competition between the sperm of two different males to fertilize an ovum, first described by Geoffrey Parker in 1970. More recently, interest has arisen in cryptic female choice, in which a female gets rid of a male's sperm without his knowledge. This occurs in a wide range of species.

Sexual conflict
Finally, sexual conflict is said to occur between breeding partners, sometimes leading to an evolutionary arms race between males and females. It is based on the simple fact that the interests of males and females in reproduction are fundamentally different. Males: their interest is to mate with a large number of completely faithful females, thus spreading their genes widely in the population. Females: their interest is to mate with a large number of fit males, thus producing a large number of fit and varied offspring. Female mating preferences are widely recognized as being responsible for the rapid and divergent evolution of male secondary sexual traits: features such as elaborate ornaments, or the disease resistance and metabolic efficiency they signal, are likely to be inherited by offspring.
References
- Darwin, C. 1871. The Descent of Man, and Selection in Relation to Sex. John Murray, London.
- Cronin, Helena 1991. The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today. Cambridge University Press.
- Mayr, Ernst 1997. The objects of selection. Proc. Natl. Acad. Sci. USA 94: 2091–94.
- Campbell, N.A. & Reece, J.B. 2005. Biology. Benjamin Cummings. ISBN 0-8053-7146-X.
- Parker, Geoffrey A. 1970. Sperm competition and its evolutionary consequences in the insects. Biological Reviews 45: 525–567.
- Eberhard, W.G. 1996. Female Control: Sexual Selection by Cryptic Female Choice. Princeton University Press, Princeton, NJ.
- Arnqvist, G. & Rowe, L. 2005. Sexual Conflict. Princeton University Press, Princeton, NJ.
- Schilthuizen, Menno 2001. Frogs, Flies and Dandelions: The Making of Species. Oxford University Press. ISBN 019850392X.
- Crudgington, H. & Siva-Jothy, M.T. 2000. Genital damage, kicking and early death. Nature 407: 855–856.
- Andersson, M. 1994. Sexual Selection. Princeton University Press, Princeton, NJ.
Back in 1979, amateur astronomer Gus Johnson discovered a supernova about 50 million light years away from Earth, when a star about 20 times more massive than our Sun collapsed. Since then, astronomers have been keeping an eye on SN 1979C, located in M100 in the Virgo cluster. With observations from the Chandra telescope, the X-ray emissions from the object have led astronomers to believe the supernova remnant has become a black hole. If so, it would be the youngest black hole known to exist in our nearby cosmic neighborhood and would provide astronomers the unprecedented opportunity to watch this type of object develop from infancy. “If our interpretation is correct, this is the nearest example where the birth of a black hole has been observed,” said astronomer Daniel Patnaude during a NASA press briefing on Monday. Patnaude is from the Harvard-Smithsonian Center for Astrophysics and is the lead author of a new paper. SN 1979C belongs to a class of supernova explosions called Type II linear, or core collapse supernovae, which make up about 6% of known stellar explosions. While many new black holes in the distant universe have previously been detected in the form of gamma-ray bursts (GRBs), SN 1979C is different because it is much closer and core collapse supernovae are unlikely to be associated with a GRB. Theory says that most black holes should form when the core of a star collapses without producing a gamma-ray burst, but this may be the first time that this way of making a black hole has been observed. There has been debate over what size star will create a black hole and what size will create a neutron star. The 20 solar mass size is right on the boundary between the two, so astronomers are not completely sure whether this is a black hole or a neutron star. But since the X-ray emissions from this object have been steady over the past 31 years, astronomers believe it is a black hole, because the X-ray emissions of a neutron star fade as it cools. This animation shows how a black hole may have formed in SN 1979C. The collapse of a massive star is shown, after it has exhausted its fuel. A flash of light from a shock breaking through the surface of the star is then shown, followed by a powerful supernova explosion. The view then zooms into the center of the explosion. Credits: NASA/CXC/A.Hobart. However, as a caveat, co-author Avi Loeb noted that it really takes a lot longer than 31 years to see big changes, but he said the fact that the illumination has been steady gives evidence for a black hole. Although the evidence does point to a newly formed black hole, there are a few other possibilities for what it could be. Some have suggested the object could be a magnetar or a blast wave, but the evidence suggests those two options are not very probable. Another intriguing possibility is that a young, rapidly spinning neutron star with a powerful wind of high energy particles could be responsible for the X-ray emission. This would make the object in SN 1979C the youngest and brightest example of such a “pulsar wind nebula” and the youngest known neutron star. The Crab pulsar, the best-known example of a bright pulsar wind nebula, is about 950 years old. “I’m excited about this discovery regardless of whether it turns out to be a black hole or a pulsar wind nebula,” said astrophysicist Alex Filippenko, who participated in the briefing.
“A pulsar wind nebula would be interesting because it would be the youngest known in that category.” “What is really exciting is that for the first time we know the exact birth date of this object,” said Kim Weaver, an astrophysicist from Goddard Space Flight Center. “We know it is very young and we want to watch how the system evolves and changes, as it grows into a child and becomes a teenager. More importantly, we’ll be able to understand the physics. This is a story of science in action.” The age of the possible black hole is, of course, based on our vantage point. Since the galaxy is 50 million light years away, the supernova actually occurred 50 million years ago; the light from the explosion only reached us 31 years ago. Source: NASA TV briefing, NASA
Keeping Watch on Coral Reefs
This activity identifies and explains the benefits of and threats to coral reef systems. Students read tutorials, describe the role of satellites, analyze oceanographic data and identify actions that can be undertaken to reduce or eliminate threats to coral reefs. As a culminating activity, students prepare a public education program.
"The Night of Broken Glass" (Kristallnacht, 9-10 November 1938)
On November 9, 1938, the Nazis unleashed a wave of pogroms against Germany's Jews. In the space of a few hours, thousands of synagogues and Jewish businesses and homes were damaged or destroyed. This event came to be called Kristallnacht ("Night of Broken Glass") for the shattered store windowpanes that carpeted German streets. The pretext for this violence was the assassination of a German diplomat in Paris, Ernst vom Rath, on the 7th of November 1938, by Herschel Grynszpan, a Jewish teenager whose parents, along with 17,000 other Polish Jews, had recently been expelled from the Reich. Though portrayed as spontaneous outbursts of popular outrage, these pogroms were calculated acts of retaliation carried out by the SA, SS, and local Nazi party organizations. Stormtroopers killed at least 91 Jews and injured many others. For the first time, Jews were arrested on a massive scale and transported to Nazi concentration camps. About 30,000 Jews were sent to Buchenwald, Dachau, and Sachsenhausen, where hundreds died within weeks of arrival. Release came only after the prisoners arranged to emigrate and agreed to transfer their property to "Aryans." Kristallnacht was the culmination of the escalating violence against Jews that had begun during the incorporation of Austria into the Reich in March 1938. It also signalled the fateful transfer of responsibility for "solving" the "Jewish Question" to the SS. There are important lessons to be drawn from Kristallnacht, for it served as a bridge experience for both Jews and Nazis. For the Jews, there was the terrifying realization that political anti-Semitism can lead to violence, even in Western countries. It also demonstrated that apathy can still pervade the world when the lives of Jews or other minorities are threatened. For the Nazis, Kristallnacht taught that while the world might condemn their pogroms, it would not actively oppose them. World opinion did, however, teach the Nazis the value of secrecy in carrying out future actions against Jews. Together with the complaints of Germans offended by the random violence of Kristallnacht, this set the stage for the "Final Solution"--the organized, bureaucratically efficient genocide of 6,000,000 men, women, and children. In retrospect, Kristallnacht was more than the shattering of windows and illusions. It portended the physical destruction of European Jewry. As such, we must remember and commemorate Kristallnacht as a memorial and as a warning.
The order Condylarthra is one of the most characteristic groups of Paleocene mammals, and it illustrates well the evolutionary level of the Paleocene mammal fauna. Compared to the mammal fauna of today, condylarths are relatively unspecialized placental mammals. However, in comparison to their insectivorous ancestors, members of the Condylarthra show the first signs of being omnivores or even herbivores. Since larger herbivores were absent on land after the extinction of the dinosaurs, this shift in diet triggered the tremendous evolutionary radiation of the condylarths that we can observe throughout the Paleocene. The results of this radiation are the different groups of ungulates (or "hoofed mammals") that form the dominant large herbivores in most Cenozoic animal communities on land, except on the island continent of Australia. The term ungulate refers here to a subgroup of placental mammals (the Ungulata) that are descendants of a common ancestor, the most primitive condylarth. Among recent mammals, the even- and odd-toed ungulates, hyraxes, elephants, aardvarks, sea cows and whales are traditionally regarded as members of the Ungulata (but see the discussion at the end of this article). Besides condylarths, several extinct groups must be added to the Ungulata, especially the endemic South American orders of ungulates. Although many ungulates have hoofs, this feature does not define the Ungulata. Some condylarths indeed have small hoofs on their feet, but the most primitive forms are clawed. On the other hand, hoofs were independently acquired by groups that do not belong to the Ungulata, such as the extinct Pantodonta. The majority of condylarths are known from North America, the continent with the most complete record of Paleocene mammals. Only a few Paleocene mammal faunas have been discovered in Europe, but these show that condylarths were equally important in this part of the world and were often represented there by close relatives of North American forms. Surprisingly, only a few questionable condylarths have been described from the much richer Paleocene faunas of Asia, where the ecological role of plant-eating mammals was taken over by groups like the Anagalida and the Pantodonta. Yet these were hunted by carnivorous descendants of the condylarths, members of the order Mesonychia. South America is the only continent in the southern hemisphere with a substantial record of Paleocene mammals, and the growing number of South American condylarths establishes an important link to the northern faunas. The most primitive known condylarth is the rat-sized Protungulatum ("before-ungulate") from the United States and Canada. Besides its occurrence in clearly early Paleocene sediments, remains of this genus are also found together with the teeth of dinosaurs. It was therefore originally thought that Protungulatum (and some other types of mammals) appeared in the latest Cretaceous and co-existed with the last dinosaurs and with typically Cretaceous mammals in the so-called "Bugcreekian" faunas. In this scenario, early ungulates like Protungulatum would have been in competition with the herbivorous dinosaurs and could have contributed to the demise of these reptiles. However, a more recent interpretation is that the "Bugcreekian" assemblages are a mixture of early Paleocene and latest Cretaceous fossils, the latter having been eroded by rivers from older sediments and redeposited together with fresh animal remains in the early Paleocene.
The issue is not finally resolved, but this new interpretation fits the idea that mammals filled the vacant ecological niches after a catastrophic extinction of the dinosaurs, which may have been caused by the impact of a large meteorite. Whatever the precise age of the fossils, the status of Protungulatum as a prototype for the later ungulates has not been challenged since its first description in 1965. Although still quite primitive, the dentition of Protungulatum foreshadows the typical trends observed in ungulates: its teeth are low and have increased potential for crushing and grinding food, which probably allowed the animal to feed on soft plants and fruits as well as some insects. Few bones beyond the jaws and dentition are known for these earliest condylarths, so much remains to be learnt about their biology. Yet it is significant progress that potential ancestors of Protungulatum have recently been identified in the late Cretaceous of Uzbekistan. These small placental animals, called zhelestids, are not strictly classified as ungulates, but their teeth may show us a first stage in the evolution of an ungulate-like dentition. Protungulatum is traditionally regarded as a member of the Arctocyonidae, a family that was diverse and abundant in the Paleocene of North America and Europe. The arctocyonids are the least herbivorous group of condylarths. In fact their skulls look superficially like those of carnivores, with large canines and relatively sharp teeth, but arctocyonids had no specialized teeth for slicing meat and were probably omnivores. The limbs of arctocyonids were relatively short and showed none of the specializations that we typically associate with ungulates, such as reduction of the side digits, fusion of bones or the possession of hoofs. Many arctocyonids are known only from their teeth, which show much individual variation, so the taxonomy of arctocyonid genera and species is highly unstable. Figure 1: Reconstruction of the agile climber Chriacus, a small arctocyonid from the early Paleocene to early Eocene of North America. Length including tail about 1m. From Cox (1988). See also the reconstruction in a middle Paleocene forest. With an estimated body mass of 5 to 10 kg, Chriacus from the early Paleocene to early Eocene of North America is an example of the smaller arctocyonids. A nearly complete skeleton has been found in early Eocene rocks of Wyoming. It is essentially primitive in structure and similar to early Paleocene arctocyonids like Loxolophus that are less well known. As this skeleton shows, Chriacus was equally adept in the trees and on the ground. Among today's mammals it can best be compared to members of the raccoon family and to the civets. Like most climbing mammals, Chriacus had powerful limb musculature, very mobile joints and feet bearing five digits with claws. The tail of Chriacus was long and robust. It was well adapted for balancing and may even have formed a prehensile organ. As the design of the forelimb suggests, the animal was capable of digging. Chriacus may have eaten fruits, insects and other small animals. Interestingly, its lower incisors formed a kind of tooth-comb that was probably used for grooming. Although we can presume that larger arctocyonids like Arctocyon and Claenodon were still able to climb trees, they obviously spent most of their time on the forest floor. Ecologically, these animals may have been comparable to bears.
They may have partly taken over the role of large predators in Paleocene faunas, together with two other families of ungulates that evolved from early arctocyonids, the triisodontids and mesonychids. The latter two groups were traditionally also classified as condylarths but are now regarded as members of the order Mesonychia. All these carnivore-like ungulates are treated in a separate article on early predatory mammals. Besides the arctocyonids, members of the Periptychidae were the dominant condylarths of early Paleocene faunas in North America (the so-called Puercan fauna). Less than one million years after the end of the Cretaceous they already spanned a wide range of sizes, from squirrel-sized forms like Anisonchus to the sheep-sized Ectoconus, one of the largest Puercan mammals. A complete skeleton of Ectoconus ditrigonus has been found in New Mexico, U. S. A., making this species the best known mammal of the Puercan fauna. Incidentally, the skeleton shows a severely diseased elbow joint, which must have made the animal lame. The general body plan of Ectoconus agrees in many points with that of the larger arctocyonids, such as the massive build, the small braincase of the skull, the stout short limbs and the long heavy tail. However, the five wide-spreading digits on its feet carried small hoofs similar to those of recent tapirs, in contrast to the clawed feet of the arctocyonids. Although Ectoconus is a nearly ideal approximation of the generalized ungulate body plan, the dentition of the periptychids is specialized in a peculiar way, which disqualifies them from being potential ancestors of the modern ungulates. Most periptychids have enlarged, bulbous premolars that probably served for shredding tough plant material. Strong grooves run down from the top of these teeth in the last surviving genus, the early to late Paleocene Periptychus, an animal a little smaller than Ectoconus but with a larger head. The characteristic premolars of Periptychus are strikingly similar to those of some pigs, and in fact a pig- or peccary-like diet, mainly vegetarian, has been proposed for the periptychids. Figure 2: Reconstruction of the sheep-sized periptychid Ectoconus from the early Paleocene of New Mexico, U. S. A. From Williamson & Lucas (1993). A third family of condylarths, the Hyopsodontidae, became important later in the Paleocene, when the periptychids had already passed the zenith of their evolution. Hyopsodontids were typically small animals that had little in common with our general idea of an ungulate and were more like insectivores in appearance. In some cases the resemblance to insectivores even extends to details of the teeth, and there is much debate about whether some putative hyopsodontids like Litomylus are actually insectivores related to the hedgehogs. The best known hyopsodontid is Hyopsodus, a highly successful genus that occurs in the early Eocene all over the northern hemisphere and survived far into the Eocene as one of the last condylarths - although this is of course a question of how that group is defined. Rare fossils from North America could indicate that Hyopsodus first appeared there in the latest Paleocene. Hyopsodus was a rat-sized, gracile animal with shortened limbs and clawed feet. Its skeleton suggests that it lived partly in trees. A European subfamily of hyopsodontids is centered around Louisina from the later Paleocene of Germany and France.
The related genus Paschatherium must have been extremely numerous in the latest Paleocene and earliest Eocene of Europe, since it makes up the majority of all mammal fossils at some fossil sites. Ankle bones of this form suggest that Paschatherium, too, was versatile in the trees and may have been similar to squirrels in its adaptations. Figure 3: Reconstruction of Hyopsodus, a rat-sized animal from the early to middle Eocene and possibly latest Paleocene. Hyopsodus is the only hyopsodontid for which the skeleton is adequately known. From Savage & Long (1986). Mioclaenus and allied genera from the Paleocene of North America have long been considered another hyopsodontid subfamily. Today they are regarded as a separate family of condylarths, the Mioclaenidae. Mioclaenids grew slightly larger than the hyopsodontids, up to hare size. As in the Periptychidae, enlarged, bulbous premolars are typical of the mioclaenid dentition. They were probably helpful in processing tougher, more fibrous plants. The middle Paleocene genus Mioclaenus carries the enlargement of the premolars to the extreme. In 1888 the famous American paleontologist Edward Drinker Cope placed an impressive number of twenty-six different species in Mioclaenus. Twenty-three of these have since been removed from the genus, which makes Mioclaenus one of the most abused genus names in the history of mammalian paleontology. The remainder of Cope's species are today all regarded as belonging to the type species, Mioclaenus turgidus. Although no typical mioclaenids are known from Europe, Pleuraspidotherium from the late Paleocene of France may be an aberrant member of this family. Pleuraspidotherium, an animal about 60 cm in head and body length, is one of the most abundant mammals in the river deposits near Cernay, which have even been called the Pleuraspidotherium beds. Large herds of this condylarth must have browsed on the vegetation at the riverside. The cheek teeth of Pleuraspidotherium were surprisingly advanced for a Paleocene mammal: they have a so-called selenodont pattern of crescent-shaped ridges, similar to that developed much later by even-toed ungulates like deer and camels. Equipped with such a dentition, Pleuraspidotherium must have been one of the most exclusive herbivores of its time. Concerning its locomotion, however, the animal was still a generalist. Its skeleton is similar to that of arctocyonids like Chriacus but lacks the tree-climbing adaptations of the latter. Figure 4: Skull of Pleuraspidotherium aumonieri, an advanced herbivore that is abundant in the late Paleocene of Cernay, France. Length about 13cm. From Russell (1964). Fossils of mioclaenids were long restricted to the northern hemisphere, but exciting discoveries have recently shown that these condylarths colonized South America and had their greatest success on that continent. Up to now, South American mioclaenids have only been identified in faunas from the earlier Paleocene, like Molinodus and Tiuclaenus from the important locality of Tiupampa in Bolivia. However, an endemic South American family of condylarths, the Didolodontidae, has long been known from later faunas. Didolodontids and mioclaenids are very similar in their dentition, which suggests that the didolodontids evolved from mioclaenids in South America. Unfortunately, didolodontids are only represented by scanty fossil remains such as teeth, jaws and isolated ankle bones.
They were mainly animals of small size, such as Ernestokokenia from the late Paleocene to early Eocene of Argentina, but also included larger forms like the middle Paleocene Lamegoia from Brazil. In their dentition the didolodontids are almost indistinguishable from the most primitive members of the Litopterna, an important order of endemic South American ungulates that later produced horse-like and camel-like forms. In comparison to the didolodontids, the major innovation in primitive litopterns is an advanced structure of the limbs, especially of the ankle. These changes are probably linked to a leaping type of locomotion in the small early litopterns; they allowed later litopterns to become adapted for fast running. Evidence from the dentition strongly suggests that the litopterns, like the didolodontids, have mioclaenid ancestors. To formalize these relationships, it has recently been proposed to define a new order of mammals, the Panameriungulata, for mioclaenids, didolodontids and litopterns. This would at least create a safe home for primitive forms with unknown ankle structure that cannot be confidently assigned to either didolodontid condylarths or litopterns, such as Asmithwoodwardia, a minute animal known from the middle Paleocene of Brazil and the early Eocene of Argentina. Back in the northern hemisphere, another family of condylarths, the Phenacodontidae, may include the ancestors of a more familiar ungulate order: the odd-toed ungulates or Perissodactyla, represented by horses, rhinos and tapirs in the recent fauna. Historically, phenacodontids form the core of the Condylarthra. Well-preserved skeletons are known for the type genus Phenacodus, which is a good model of an ancestral ungulate with beginning adaptations for running. Unlike arctocyonids, periptychids or mioclaenids, the phenacodontids are not part of the first wave of condylarths that populated North America. They first appear with the fox-sized Tetraclaenodon in the middle Paleocene of that continent. The appearance of the more advanced phenacodontids Phenacodus and Ectocion marks the beginning of late Paleocene time in North America. The type genus Phenacodus covers the large end of the phenacodontid size range and includes roughly sheep-sized animals. Members of the genus Ectocion were usually smaller, with a body mass of only 3 kg in the smallest species, but there is some overlap in size between the two genera. Phenacodontids were the dominant mammals in the latest Paleocene of North America and account for up to 50% of all mammal specimens in faunas of that age. Both Phenacodus and Ectocion survived until the middle Eocene, but phenacodontids became less common after the end of the Paleocene. Remarkable exceptions are local mass occurrences of the dog-sized phenacodontid Meniscotherium, which forms real bonebeds in some places. Meniscotherium is mainly early Eocene in age, although the first individuals may already have been present in the latest Paleocene. Phenacodus spread into Europe in the early Eocene as part of a major faunal exchange between the Old and New World, but it never became an important component of the European fauna. Figure 5: Reconstruction of the late Paleocene to middle Eocene Phenacodus, a sheep-sized herbivore with improved capabilities for running. From Savage & Long (1986).
The skeleton of phenacodontids is generally primitive, especially in the long, heavy tail, but some similarities to perissodactyls are evident in Phenacodus: its limbs are longer than in primitive condylarths and have five hoofed digits, the first and fifth digits being reduced in size. This foreshadows the early Eocene horse Hyracotherium (sometimes called "Eohippus"), which has completely lost the first digit of the hand and the first and fifth digits of the foot. The limb design indicates that Phenacodus was adapted to running to some degree. Only a few limb remains are known for Ectocion, but they suggest that these smaller animals may even have been somewhat better runners than their larger relatives. The skull of phenacodontids is long and has a small braincase. In Phenacodus intermedius the nasal bones are retracted, as in recent tapirs, which may indicate that this species of Phenacodus had a short trunk. As the dentition shows, at least the later phenacodontids were herbivores: their cheek teeth have low cusps that sometimes tend to become joined into crests, similar to early perissodactyls like Hyracotherium. In Meniscotherium these crests become crescent-shaped, a precocious adaptation for an early Tertiary mammal that can only be compared to the selenodont dentition of Pleuraspidotherium. Although the two were regarded as closely related in the past, later studies have concluded that no special relationship exists between Pleuraspidotherium and Meniscotherium and that the latter is in fact a highly specialized phenacodontid. As the preceding survey shows, the order Condylarthra contains forms that were adapted to a wide diversity of ways of life, ranging from squirrel-like tree-dwellers to sheep-sized herbivores and large bear-like omnivores. Although all condylarths share a common ancestor somewhere near Protungulatum, they are united only by their primitiveness in comparison to the more advanced ungulates. For this reason the order Condylarthra has been criticized as a "wastebasket" for primitive forms of ungulates, and it has been proposed to abandon the concept of this group altogether. Several alternative orders have been proposed to harbour members of the Condylarthra, most recently the Panameriungulata for the branch that led to the South American litopterns, but no consensus about a new systematic arrangement has emerged up to now. On the other hand, we have to admit that the role of condylarths as universal ancestors of the more modern ungulates is not yet sufficiently documented by the available fossils. In many cases the search for the "missing links" is still ongoing. The Litopterna may be the group of advanced ungulates that is most convincingly rooted in a specific group of condylarths, the mioclaenids. The case of the similarly adapted odd-toed ungulates or Perissodactyla of the northern hemisphere is less clear. If they did indeed evolve from phenacodontid condylarths, perissodactyls must have branched off from a primitive form like Tetraclaenodon, but transitional forms have not yet been found. In addition, an enigmatic skull with strikingly perissodactyl-like teeth has recently been described from the late Paleocene of China under the name Radinskya. Although no limb remains are known for Radinskya, this animal presents a plausible alternative for the origin of the odd-toed ungulates. The relationships of Radinskya to other Paleocene mammals are not clear, but it could belong to an endemic Asian family of herbivores called Phenacolophidae - also of uncertain affinities.
In other cases molecular studies have questioned traditional views. Molecular biologists have recently removed elephants, hyraxes, aardvarks and sea cows from the Ungulata and placed them in a new group, the Afrotheria, an African radiation of placental mammals that also includes tenrecs, golden moles and elephant shrews (finally united with the name-giving pachyderms). If this is confirmed by future research, living ungulates, and thus still existing descendants of the Condylarthra, would be limited to the even- and odd-toed ungulates and the whales. Nevertheless, the recent study of a partial condylarth skeleton shows how a single fortunate discovery can revive old ideas that had already been abandoned. Judging from the teeth, the specimen from the middle Paleocene of New Mexico represents a small arctocyonid close to Chriacus. However, instead of the tree-climbing adaptations known from skeletons of Chriacus, limb fragments of the new specimen indicate initial adaptations for running, strikingly similar to early members of the even-toed ungulates or Artiodactyla. This provides new support for the hypothesis, originally based on dental evidence, that artiodactyls are the descendants of arctocyonid condylarths. At the same time, this surprising discovery demonstrates that Paleocene mammals were much more diverse in their adaptations than their jaws and teeth can tell us.

Major sources for this article:
Archibald, J. D. 1996: Fossil Evidence for a Late Cretaceous Origin of "Hoofed" Mammals. Science 272, 1150-1153.
Archibald, J. D. 1998: Archaic ungulates ("Condylarthra"). In: Janis, Ch. M., Scott, K. M. & Jacobs, L. L. (eds.): Evolution of Tertiary Mammals of North America. Volume 1: Terrestrial Carnivores, Ungulates, and Ungulatelike Mammals. Cambridge University Press, 292-331.
Carroll, R. L. 1988: Vertebrate Paleontology and Evolution. Freeman and Company.
Cifelli, R. L. 1983: The Origin and Affinities of the South American Condylarthra and Early Tertiary Litopterna (Mammalia). American Museum Novitates 2722, 1-49.
Cox, B. (ed.) 1988: Illustrated Encyclopedia of Dinosaurs and Prehistoric Animals. Macmillan London Limited.
Godinot, M., Smith, T. & Smith, R. 1996: Mode de vie et affinités de Paschatherium (Condylarthra, Hyopsodontidae) d'après ses os du tarse. Palaeovertebrata 25 (2-4), 225-242.
Madsen, O. et al. 2001: Parallel adaptive radiations in two major clades of placental mammals. Nature 409, 610-614.
McKenna, M. C., Minchen, C., Ting, S. Y. & Zhexi, L. 1989: Radinskya yupingae, a perissodactyl-like mammal from the late Paleocene of China. In: Prothero, D. R. & Schoch, R. M. (eds.): The Evolution of Perissodactyls. Oxford University Press, New York, 24-36.
de Muizon, C. & Cifelli, R. L. 2000: The "condylarths" (archaic Ungulata, Mammalia) from the early Palaeocene of Tiupampa (Bolivia); implications on the origin of the South American ungulates. Geodiversitas 22 (1), 47-150.
Murphy, W. J. et al. 2001: Molecular phylogenetics and the origins of placental mammals. Nature 409, 614-618.
Kurtén, B. 1971: The Age of Mammals. Columbia University Press, New York.
Lofgren, D. L. 1995: The Bug Creek Problem and the Cretaceous-Tertiary Transition at McGuire Creek, Montana. University of California Publications in Geological Sciences 140, 1-185.
Matthew, W. D. 1937: Paleocene faunas of the San Juan Basin, New Mexico. Transactions of the American Philosophical Society 30, 1-510.
Rose, K. D.
1981: The Clarkforkian land-mammal age and mammalian faunal composition across the Paleocene-Eocene boundary. University of Michigan Papers on Paleontology 26, 1-197.
Rose, K. D. 1987: Climbing Adaptations in the Early Eocene Mammal Chriacus and the Origin of Artiodactyla. Science 236, 314-316.
Rose, K. D. 1996: On the origin of the Artiodactyla. Proc. Natl. Acad. Sci. USA 93, 1705-1709.
Russell, D. E. 1964: Les mammifères paléocènes d'Europe. Mémoires du Muséum National d'Histoire Naturelle, Série C, 8, 1-324.
Savage, R. J. G. & Long, M. R. 1986: Mammal Evolution. An Illustrated Guide. British Museum (Natural History).
Thewissen, J. G. M. 1990: Evolution of Paleocene and Eocene Phenacodontidae (Mammalia, Condylarthra). University of Michigan Papers on Paleontology 29, 1-107.
Thewissen, J. G. M. 1991: Limb osteology and function of the primitive Paleocene ungulate Pleuraspidotherium with notes on Tricuspiodon and Dissacus (Mammalia). Geobios 24 (4), 483-495.
Van Valen, L. 1978: The beginning of the Age of Mammals. Evolutionary Theory 4, 45-80.
Williamson, T. E. & Lucas, S. G. 1992: Meniscotherium (Mammalia, "Condylarthra") from the Paleocene-Eocene of western North America. Bulletin of the New Mexico Museum of Natural History and Science 1, 1-75.
Williamson, T. E. & Lucas, S. G. 1993: Paleocene vertebrate paleontology of the San Juan Basin, New Mexico. Bulletin of the New Mexico Museum of Natural History and Science 2, 105-135.
In the previous post, we learned basic terminology about circles. We continue this series by understanding the meaning of the circumference of a circle. The circumference of a circle is basically the distance around the circle itself. If you want to find the circumference of a can, for example, you can take a measuring tape and wrap it around the can. The animation below shows the meaning of circumference. As we can see, the circle with diameter 1 has circumference π, or approximately 3.14. Note: If you want to know where π came from, read Calculating the Value of Pi.

What is the perimeter of a circle with diameter 1 unit?

The formula for finding the circumference of a circle is C = πd, with circumference C and diameter d. So, C = π(1) = π ≈ 3.14 units.

Find the circumference of a circle with radius 2.5 cm.

The circumference of a circle with radius r is C = 2πr. Therefore, the circumference of a circle with radius 2.5 cm is C = 2(3.14)(2.5) = 15.7 cm.

Find the radius of a circle with a circumference of 18.84 cm. Use π = 3.14.

From C = 2πr, we have 18.84 = 2(3.14)r = 6.28r. Dividing both sides by 6.28, we have r = 3. Therefore, the radius of a circle with circumference 18.84 cm is 3 cm.

Mike was jogging in a circular park. Halfway around the circle, he went back to where he started through a straight path. If he traveled a total distance of 514 meters, what is the total distance if he jogged around the park once? (Use π = 3.14.)

The distance traveled by Mike is equal to half the circumference of the circular park plus its diameter. Since the circumference of a circle is C = 2πr and the diameter is equal to 2r, the distance D traveled by Mike is D = πr + 2r. Substituting, we have 514 = πr + 2r. Factoring out r, we have 514 = r(π + 2) = r(3.14 + 2) = 5.14r. Dividing both sides by 5.14, we get r = 100. Now, we are looking for the distance around the park (the circumference of the circle). That is, C = 2πr = 2(3.14)(100) = 628 meters.

In the next post, we will discuss how to calculate the area of a circle.
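As a quick check of the arithmetic in the examples above, here is a small Python sketch (an illustrative addition, not part of the original post), using π ≈ 3.14 throughout as the post does:

```python
# Verify the worked circle examples, with pi = 3.14 as used in the post.
PI = 3.14

# Circumference from diameter: C = pi * d
print(PI * 1)            # diameter 1 unit        -> 3.14

# Circumference from radius: C = 2 * pi * r
print(2 * PI * 2.5)      # radius 2.5 cm          -> 15.7

# Radius from circumference: r = C / (2 * pi)
print(18.84 / (2 * PI))  # circumference 18.84 cm -> 3.0

# Mike's jog: half circumference plus diameter, pi*r + 2*r = 514
r = 514 / (PI + 2)       # -> 100.0
print(2 * PI * r)        # full lap around the park -> 628.0
```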
Now that you’ve learned what formative assessment is, why you should use formative assessment in your classroom, and six practical tips, it’s time to take what you’ve learned and apply it to your class. These five formative assessment activities can be used across content areas to quickly gather information about how well your students are mastering content or concepts.

Use this activity to assess individual students’ understanding of the lesson or a particular focus skill. Give your students approximately 3 minutes to write a summary of the day’s learning/lesson. Next, ask them to highlight or circle 10 words or phrases that best represent the learning. (If a student doesn’t have that many, ask them to choose 5 or so.) Lastly, students take those chosen words and write a one-sentence summary using the words.

This activity also gauges individual students’ understanding of the lesson or a particular focus skill. Give each student an index card or ½ sheet of paper, or use our 3-2-1 summarizer sheet. Ask students to write 3 things they learned. Ask students to write 2 things they found interesting. Ask students to write 1 question they still have about the lesson or topic.

Use the fishbowl activity to evaluate your classroom’s understanding of the lesson and find out what further explanation is needed. Give each student an index card or ½ sheet of paper. Each student writes a question about the lesson or topic. It might be something to which they do or do not know the answer. (No names are added.) Put all questions into a container, such as a fish bowl. Have your students pair up and give each pair two questions. Pairs then discuss possible answers and write an answer on the back. (They may leave it blank if they do not know an answer.) Once completed, all papers go back in the container. Pull as many index cards as time permits. Read the question and answer. Students respond with a thumbs up or down. Have your class decide on the correct answer, and tell them if a correct answer isn’t on the card.

This activity evaluates students’ (individual and whole-group) understanding of the lesson and also helps you discover what further explanation is needed.
- Think – tell students to ponder a question or look for an example of a given skill in their writing.
- Pair students up – students discuss their answer or share their writing. During this step students may wish to revise or alter their original idea or revise their writing with the skill in mind.
- Share – a few students are called upon to share with the rest of the class. (Assessment occurs as you walk around listening to the discussions as well as during the sharing portion. It is important to circulate and hold students accountable for their discussions.)

Dry Erase Boards
Use this activity for any number of skill checks and to assess individual students’ understanding of the lesson or a particular focus skill. For example, tell students a simple sentence to write on their boards, e.g. “The children run.” Then direct your students to revise the sentence using any grammar skill you would like to assess. Examples: Revise by adding an adjective or adverb. Revise by changing run to a stronger verb. Change the word children to a noun with a regular plural. Change the word children and use proper nouns instead. Change children to a pronoun. Expand the sentence and make it compound. Use a simile to explain how fast the children run. You can quickly check around the classroom to note who has mastered the skill and who has not.
What Is Stormwater Runoff? (Conservation Currents, Northern Virginia Soil and Water Conservation District) Stormwater is water from precipitation that flows across the ground when it rains or when snow and ice melt. Some of the water seeps into the ground. Water that is not absorbed and thus flows across the surface to storm drains, streams, and rivers is called runoff. Runoff can pose a threat to the quality of water in our streams and rivers because of the pollution it carries. Every drop of precipitation that strikes the soil loosens particles that wash away and end up as sediment in streams. Sediment and other debris clog fish gills, damage fish habitat, and block the light needed for plants to survive. In addition to sediment, runoff carries other pollutants it encounters on the ground and pavement. Common pollutants include oil, gasoline, and antifreeze dripped by cars and trucks; chemicals used on lawns and gardens; litter from improperly disposed trash; and livestock and pet waste. What can an individual do to reduce the pollutants in stormwater runoff? - Landscape with grass, shrubs, trees, and other plants to hold the soil together, lessening the chance of erosion. - Retain and maintain natural wooded areas, including the forest floor, to filter runoff. - Use mulch and other soil amendments to increase absorption of runoff. - Pick up pet waste and dispose of it with your household garbage. - Have your soil tested every three years and fertilize accordingly. - Recycle used motor oil, and keep your car in good repair. - And always remember that storm drains are not trash cans. They lead directly to streams, depositing runoff and all of the pollutants carried with it. Please don’t dump in storm drains!
For this, the first of a series of posts on the literate child, we would like to share with you a diagram that the Unit uses in our trainings to illustrate the development of a confident reader. Like all houses, the foundations have to be strong to support the rest of the building. The development of language lays the foundation for reading. Talking and listening skills must come first. When a child can communicate effectively, that is, they can listen to others and engage in conversation about what they are doing, they are practicing important skills that are needed as they learn to read and write. If your child is constantly chatting to you, sharing their ideas and observations, they are practicing these skills. When you engage in conversation with your child they develop a bank of words, a vocabulary, to enable them to respond to you, their peers and the other adults in their lives. They learn the rules of engagement of conversation, how to respect the other person in the conversation by listening to what they have to say and when to take their turn to speak. This is a skill for life and is established in the early years.

The development of language links with the next level of our house, exploration and experiences. Young children need to be exposed to books, books that they can choose to enjoy when they want to, for as long as they want to. Reading to your child from an early age and discussing the story and illustrations starts children on the pathway to understanding that print and pictures have meaning. Reading to your child promotes a healthy relationship as you sit together and enjoy the story, discussing the characters and predicting what might come next. Children love to turn the pages, taking a peek at how the story develops. As they grow older, they will have favourite stories that they want read over and over until they can rote read the story with you. You will tire of the story long before your child does! It's important that your child is exposed to lots of stories as well as developing favourites. This is when your child will develop a love of reading, and this will provide one of the most important factors in children learning to read: motivation.

When a child enjoys books, they will want to know what the words mean and want to be able to read the book for themselves. This begins with their rote reading of the story with you and, as the child becomes more familiar with a range of books, you will notice them making up the story using the illustrations as a guide. This is very important as it aids their comprehension. They may read to their teddy or to their friends or to you. Your encouragement at this time will ensure that your child wants to read to you and is motivated in their attempts.

We will continue this entry soon and share with you more information and suggestions on how you can support your child to become a confident reader by looking at the rest of our house and the stage at which these levels develop. If you have any questions or comments to make regarding this entry, please do so in the comments section.
The pollen count is a number that indicates the amount of airborne allergens present per cubic meter of air. These numbers are compiled by volunteers at institutions such as universities, medical centers, research facilities, and clinics, and have been an important part of the weather forecast since the beginning of the 20th century. The higher the pollen count, the more severely allergy sufferers will feel their symptoms.

Volunteer agencies collect samples of the air on clear, sticky surfaces. The sample is viewed under a microscope, and each grain of pollen is counted and identified. The pollen count includes all the possible allergens in the air during the past 24 to 72 hours. Based on these numbers, projections are made as to how much pollen will be in the air for up to the next four days. Pollen counts are given on a scale of zero to 12.

Pollen grains are microscopic particles that are light and dry; easily carried by the wind, a single type of pollen can impact a large area. Typically, pollen counts are given for city-wide or county-wide areas, as the process is a time-consuming one that can involve hours of work. Air sampling devices are often placed on rooftops or in other open spaces, allowing for a collection sample from a wide area. There are more than 1,200 plants, trees, flowers, and weeds that can potentially cause allergies, and these are all measured in the pollen count.

The pollen in the air depends on the season, although related plants can cause similar reactions in sensitive people. While exact pollen counts vary from year to year, those who follow them will learn when to expect a spike in the pollens that impact their health the most. Pollen counts are never exact, and numbers taken in the same city can be quite different. Weather conditions can have a drastic impact on the numbers, as a sample collected after a rainstorm will have a considerably lower pollen count than one taken in the same city before the rain. The location of the air samplers can also result in different readings, especially if one is located in the middle of a developed area and the other is on the outskirts of town, near a park, or in close proximity to more vegetation.
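Although exact procedures vary from station to station, the arithmetic behind the reported number is straightforward: the grains identified under the microscope are divided by the volume of air that passed over the sampling surface. A minimal sketch, with made-up numbers (nothing here comes from any real monitoring station):

```python
# Hypothetical pollen-count calculation: grains counted per cubic meter of air.
grains_counted = 120   # pollen grains identified on the sticky sample surface
air_sampled_m3 = 2.5   # cubic meters of air that passed over the sampler

count_per_m3 = grains_counted / air_sampled_m3
print(f"Pollen count: {count_per_m3:.0f} grains per cubic meter")  # 48
```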
What Health Educators and Community Health Workers Do

Health educators and community health workers educate people about the availability of healthcare services. Health educators teach people about behaviors that promote wellness, and they develop and implement strategies to improve the health of individuals and communities. Community health workers provide a link between the community and health educators and other healthcare workers; they also develop and implement strategies to improve the health of individuals and communities, and they collect data and discuss health concerns with members of specific populations or communities. Although the two occupations often work together, the responsibilities of health educators and community health workers are distinct.

Health educators typically do the following:
- Assess the needs of the people and communities they serve
- Develop programs and events to teach people about health topics
- Teach people how to cope with or manage existing health conditions
- Evaluate the effectiveness of programs and educational materials
- Help people find health services or information
- Provide training programs for other health professionals or community health workers
- Supervise staff who implement health education programs
- Collect and analyze data to learn about their audience and improve programs and services
- Advocate for improved health resources and policies that promote health

Community health workers do the following:
- Provide outreach and discuss healthcare concerns with community members
- Educate people about the importance and availability of healthcare services, such as cancer screenings
- Collect data
- Report findings to health educators and other healthcare providers
- Provide informal counseling and social support
- Conduct outreach programs
- Ensure that people have access to the healthcare services they need
- Advocate for individual and community needs

The duties of health educators, who are sometimes called health education specialists, vary with their work settings. Most work in healthcare facilities, colleges, public health departments, nonprofits, and private businesses. Health educators who teach health classes in middle and high schools are considered teachers. For more information, see the profiles on middle school teachers and high school teachers.

In healthcare facilities, health educators may work one-on-one with patients and their families. They teach patients about their diagnoses and about any necessary treatments or procedures. They may be called patient navigators because they help consumers find out about their health insurance options and direct people to outside resources, such as support groups and home health agencies. They lead hospital efforts in community health improvement. Health educators in healthcare facilities also help organize health screenings, such as blood pressure checks, and health classes on topics such as installing a car seat correctly. They also create programs to train medical staff to interact better with patients. For example, they may teach doctors how to explain complicated procedures to patients in simple language.

In colleges, health educators create programs and materials on topics that affect young adults, such as smoking and alcohol use. They may train students to be peer educators and supervise the students’ delivery of health information in person or through social media. Health educators also advocate for campus-wide policies to promote health.
In public health departments, health educators administer public health campaigns on topics such as emergency preparedness, immunizations, proper nutrition, or stress management. They develop materials to be used by other public health officials, and during emergencies they may provide safety information to the public and the media. Some health educators work with other professionals to create public policies that support healthy behaviors and environments. They may also oversee grants and grant-funded programs to improve the health of the public. Some participate in statewide and local committees dealing with topics such as aging.

In nonprofits (including community health organizations), health educators create programs and materials about health issues for the communities that their organizations serve. They help organizations obtain funding and other resources. Many nonprofits focus on a particular disease or audience, so health educators in these organizations limit programs to that specific topic or audience. For example, a health educator may design a program to teach people with diabetes how to better manage their condition, or a program for teen mothers on how to care for their newborns. In addition, health educators may educate policymakers about ways to improve public health and work on securing grant funding for programs to promote health and disease awareness.

In private businesses, health educators identify common health problems among employees and create programs to improve health. They work with management to develop incentives for employees to adopt healthy behaviors, such as losing weight or controlling cholesterol. Health educators recommend changes to the workplace, such as creating smoke-free areas, to improve employee health.

Community health workers have an in-depth knowledge of the communities they serve. They identify health-related issues that affect a community, collect data, and discuss health concerns with the people they serve. For example, they may help eligible residents of a neighborhood enroll in programs such as Medicaid or Medicare, explaining the benefits that these programs offer. Community health workers address barriers to care and provide referrals for needs such as food, housing, education, and mental health services. They report their findings to health educators and healthcare providers so that the educators can create new programs or adjust existing programs and events to better suit the demands of their audience. Community health workers also advocate for the health needs of community members. In addition, they conduct outreach to engage community residents, assist residents with health-system navigation, and improve care coordination.
The Guatemalan black howler is known to occur in six protected areas: Cockscomb Basin Wildlife Sanctuary, Guanacaste and Monkey Bay National Parks (Belize); Rio Dulce and Tikal National Parks (Guatemala); and Palenque National Park (Mexico) (2). Additionally, a community-based conservation organization in Belize called the Community Baboon Sanctuary (this species is called 'baboon' in the local Creole dialect) has protected land along the Belize River, ensuring that this howler’s food trees are not destroyed to make way for pasture (10). Over 200 private landowners here in seven villages, stretching over 20 square miles, have voluntarily pledged to conserve their land for the protection of the Guatemalan black howler, and many of them will consequently benefit from ecotourism. Indeed, one of the main aims of the Community Baboon Sanctuary is to help address habitat destruction by promoting sustainable tourism as an attractive alternative to destructive land management practices. At the same time, the Sanctuary conducts conservation research and educates the local community and visitors about the importance of biodiversity (8). Other conservation measures implemented by the Sanctuary include creative initiatives like building bridges made of rope and sticks that allow the monkeys to pass between gaps in the forest, and relocating a number of individuals to the Cockscomb Basin Wildlife Sanctuary (8). If similar efforts were made in Mexico and Guatemala, and ecotourism was promoted as a viable means of profiting from protected forest habitats, the Guatemalan black howler would perhaps have a much higher chance of long-term survival.
Scientists have decoded the genome of the axolotl, the Mexican amphibian with a Mona Lisa smile. It has 32 billion base pairs, which makes it ten times the size of the human genome, and the largest genome ever sequenced. The axolotl, endangered in the wild, has been bred in laboratories and studied for more than 150 years. It has the remarkable capacity to regrow amputated limbs complete with bones, muscles and nerves; to heal wounds without producing scar tissue; and even to regenerate damaged internal organs. This salamander can heal a crushed spinal cord and have it function just like it did before it was damaged. This ability, which exists to such an extent in no other animal, makes its genes of considerable interest. Now researchers, using one genetic sequencing technique to do their analysis and then another to “proof read” it, have provided researchers with the tools to study and manipulate the genes of the axolotl. Their study appears in Nature. “The techniques in this paper are all at the cutting edge,” said Ryan Kerney, a biologist at Gettysburg College who has published widely on amphibian genetics but was not involved in this study. “And the data they generated are incredibly thorough for any genome, much less one this large.” This is the first salamander genome ever sequenced. The reason it took so long is that it has so many repetitive parts, according to Elly M. Tanaka, a senior scientist at the Research Institute of Molecular Pathology in Vienna and senior author of the new study. The study was a huge computational effort, requiring techniques developed expressly for the purpose. “We want to understand the huge changes in the RNA and proteins that the cells produce to change from an adult cell to a stem cell,” Dr. Tanaka said. “How does an injury cause such a huge change? We can’t understand that without knowing how different parts of the genome are used to change how cells behave.” The researchers have identified some of the genes involved in regeneration, and some genes that exist only in the axolotl, but there is much work still to be done. “The adventure is just starting,” Dr. Tanaka said. “Completing the genome will open up a wealth of opportunities in studying how organisms regenerate. We’re just as excited as people were when they first decoded the human genome.”
What is Unemployment, How is it Measured, and Why Does the Fed Care? In this lesson, students read and interpret choropleth maps, which contain unemployment data. They compare verbal descriptions of the labor market from the Federal Reserve’s Beige Book with the mapped data. In addition, students compare unemployment data for different years. Students access or observe how to access this data online, using GeoFRED®.
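GeoFRED itself is an interactive mapping tool, but the same underlying series can also be pulled programmatically from the St. Louis Fed's FRED API, which may be useful when preparing lesson materials. The sketch below is one possible approach rather than part of the lesson itself; it uses the third-party requests library, the API key is a placeholder, and `UNRATE` (the monthly U.S. civilian unemployment rate) stands in for whatever regional series a class might map:

```python
import requests

# Fetch recent unemployment-rate observations from the FRED API.
# Replace YOUR_API_KEY with a free key from the St. Louis Fed.
url = "https://api.stlouisfed.org/fred/series/observations"
params = {
    "series_id": "UNRATE",            # U.S. civilian unemployment rate, monthly
    "api_key": "YOUR_API_KEY",
    "file_type": "json",
    "observation_start": "2020-01-01",
}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

# Print the six most recent (date, value) pairs returned.
for obs in response.json()["observations"][-6:]:
    print(obs["date"], obs["value"])
```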
Mosquitoes, like other insects, have a body consisting of 3 sections: the head, the thorax, and the abdomen. Only female mosquitoes bite. Mosquitoes that bite remove a small portion of the victim's blood, which is used for the development of eggs inside the female. Actually, mosquitoes do not bite, because they cannot open their jaws. Mosquitoes actually stab through the victim's skin with 6 tiny, needle-like projections called stylets. Mosquitoes can easily suck blood because the insect's saliva keeps it from clotting.

The mosquito's life is observed in four stages: the egg, the larva, the pupa, and the adult. The female lays between 100 and 300 eggs at one time, depending upon species. She can lay as many as 3,000 eggs in her lifetime. Most varieties lay their eggs in water, although there are some species that do not. Mosquito eggs must have moisture in order to hatch. The hatchlings resemble a caterpillar or worm and are often called wrigglers. This is the larva stage of development. The larvae grow quickly, molting four times in 4 to 10 days. Following the final molt, larvae change into pupae. The pupa is comma-shaped. The pupae of many insects cannot move, but all kinds of mosquito pupae can swim. The pupa, however, does not eat. The adult female attracts the male by emitting a high-pitched sound made by her wings. Females of some species must sip blood before they can lay eggs that will hatch. Adult males may live only 7 to 10 days, while females can live over 30 days.
CULTURAL AND RACIAL STEREOTYPES ON THE MIDWAY

Josh Cole

[Josh Cole is a graduate student in History from Tuscola, Illinois, and a member of Phi Alpha Theta. He wrote this paper for Dr. Lynne Curry in History 4950, Industrial America, in the Spring of 2006.]

In 1893 the World's Columbian Exposition opened in Chicago. This fair was not the first of its kind, but it was the first one to take place in the Midwest.[1] Chicago was viewed as the industrial capital of the region, so it made perfect sense for it to host one of the most extravagant spectacles of the nineteenth century. The fair itself was divided into two separate parts—the White City and the Midway Plaisance. The White City represented white, middle-class American society, while the Midway served a completely different purpose: it was both an "educational" example of supposedly inferior civilizations and an amusement for the paying spectators. By examining the numerous exhibits and racialized "others" on the Midway, one is able to see exactly how strong feelings of Anglo-Saxon superiority were on a national scale. Racial and cultural stereotypes were widespread on the Midway, and they were upheld by nearly all of the spectators and organizers. The Midway therefore serves as an excellent case study in both social and cultural history.

[1] The battle for the privilege of hosting the exposition had been between the East and the West; Chicago had faced strong competition from New York. David F. Burg, Chicago's White City of 1893 (Lexington: The University Press of Kentucky, 1976), 42-43.

The Organizers and Visual Culture

Frederic Ward Putnam of Harvard University headed the fair's anthropology section, with the assistance of Franz Boas. Their primary intent for the Midway was to make it educational for the spectators, and they hired a young man by the name of Sol Bloom to provide the Midway with this focus. Bloom frustrated Boas and Putnam, opting instead to bring to the Midway entertainment based on what he had seen in Paris in 1889. While Putnam and the fair board treated race in a traditional hegemonic manner, placing the advanced technology of the Anglo-Saxon race at the center of the fairgrounds and the displays of the "lesser" races farther away from the center, Bloom's Midway relished the opportunity to make money from its commercialized exoticism.[2]

Bloom organized and coordinated the exhibits in accordance with racial and cultural stereotypes of the time. He knew that white Americans would be shocked and appalled by foreign and mysterious races and cultures, while also being intrigued by their own curiosities. Exhibits were a very popular form of amusement during the era of industrialization. People did not just want to read about other lifestyles and exotic places; they wanted to experience them with their own senses. This notion was captured by Jacob Riis' How the Other Half Lives, in which Riis visually detailed the slums of New York with photographs as well as words. David Leviatin, the most recent editor of Riis' masterpiece, believes that it offered its readers "a spellbinding glimpse of America's future." By skillfully combining word and picture, Riis managed to capture a view of society that revealed modern America's vision of progress, one that established Anglo-Saxons at the top of civilization.[3] The transformation of the urban daily newspaper, the astonishing success of George Eastman's line of Kodak products, the growth of advertising, the popularity of the mail-order catalog, and the rise of the national illustrated magazine all helped to create an exciting new visual culture saturated with graphic images, as Riis' work demonstrated. These new realistic images were designed to entice the eye—to make viewers stop, look, and buy.[4] The world's fair in Chicago was no exception, and Bloom wanted to amaze and shock the visitors with something that they had never witnessed before—a visual hierarchy of race with Anglo-Saxons at the top of civilization. He planned to hit the jackpot with the notion that public spectacles such as the circus-like Midway were cultural phenomena, and he became rich as a direct result of it.[5]

[2] Robert Rydell, John E. Findling, and Kimberly D. Pelle, Fair America: World's Fairs in the United States (Washington: Smithsonian Institution Press, 2000), 38-39.
[3] Jacob A. Riis, How the Other Half Lives, ed. David Leviatin (New York: Bedford Books, 1996), 9-10.
[4] Ibid., 24-25.

Imperialism

The overriding view of American foreign policy before the world's fair was one that stressed isolation. People in power wanted to concentrate on American economics and politics at home rather than getting involved in other countries' affairs. The world's fair was the result of a change in the mindset of the American public. The citizens of the United States realized that their economy would never live up to its full potential if their aims were not set beyond their national borders and overseas. Only then would America be recognized as a true "world power." Social historian Robert Rydell believes that the Midway offered legitimacy to an imperialistic view of the world, one that would recognize America as a world power. It allowed Americans, whether elites or lower-class citizens, to establish their cultural hegemony as whites. Its anthropologically validated racial hierarchy served several purposes during this time. It legitimized racial exploitation at home and the creation of an empire abroad. Spectators identified the exhibited peoples as primitives who could be conquered by the superior Anglo-Saxon race. The "inferior" races could only be improved by association with white Americans, and the foreign peoples would surely be open to the adoption of an industrially superior and civilized religion, government, and culture. Rudyard Kipling referred to this yearning as the "white man's burden," and Teddy Roosevelt bought into the idea as well. As Roosevelt saw it, the United States was engaged in a millennial drama of manly racial advancement, in which American men "enacted their superior manhood" by asserting imperialistic control over races of inferior manhood, to prove their virility as a race and a nation. American men needed to take up the "strenuous life" and strive to advance civilization—through imperialistic warfare and racial violence if necessary.[6] I agree with Rydell that imperialism played a major role on the Midway, for all of the evidence already mentioned.
Carefully designed exhibits of the nonwhites on the Midway illustrated ideas that had been used to justify the political and economic repression of Native Americans, African-Americans, and Asian-Americans. These ideas of conquering, nurturing, and ultimately exploiting less "civilized" peoples were then used to validate American imperial policy overseas. The emphasis on white supremacy created a combined sense of nationalism and racism for Anglo-Saxons.[7] The world's fair was a direct result of these feelings of nationalism and racism, and they were utilized more dramatically after 1893. The United States went to war with Spain in 1898, and the victory gave America a position in the western Pacific (the Philippines) which made it a sort of Asiatic colonial power. However, America was not satisfied, and used force to eventually make Hawaii a United States territory in February of 1900. Secretary of State John Milton Hay's "Open Door" policy indicated that America wished to have an influence in China, and this was reinforced by the United States sending 2,500 troops to the international army sent to restore order there in 1900.[8] Teddy Roosevelt also acted upon his imperialistic desires, helping Panama to declare independence from Colombia in exchange for control of the Panama Canal Zone.

[6] Paul Kennedy, "The United States as New Kid on the Block, 1890-1940," in Major Problems in the Gilded Age and Progressive Era, ed. Leon Fink (New York: Houghton Mifflin Company, 2001), 287.
[7] Ibid., 235-236.
[8] Ibid., 276-277.

In the Name of Science and Education

Americans did not want the Midway to be simply a symbol of imperialistic desires, so all of its exhibits were classified under the supervision of the Department of Ethnology and Archaeology, which gave them an air of scientific respectability; instruction and entertainment complemented each other on the Midway. The exhibits were organized to include "the civilized, the half civilized, and the savage worlds" in a racial hierarchy leading to a future utopia. Visitors passed between the walls of medieval villages, between mosques and pagodas, past the dwellings of colonial days, and past the cabins of South Seas Islanders and the Javanese, among them hints of ruder and more barbaric environments.[9] The Midway was hailed as a great object lesson in anthropology by leading anthropologists in America, and it provided visitors with an ethnological, scientific sanction for the stereotypical American view of the nonwhite world as barbaric and childlike.[10]

At the fair in Chicago, visitors could witness and even take part in the scientific observation of, and research on, the racial characteristics of the exhibited peoples on the Midway. Phrenology, craniology, physiognomy, and anthropometry shared the assumption that the inner character—of different races, but also of criminals, prostitutes, and deviants—was manifest in the outward shape and physical appearance of the body. These "scientific" methods are generally understood to be unacceptable in modern society, yet they held significant credibility during the late nineteenth century. The outward shape of the subject's body had to be measured and mapped meticulously, and the results of these findings were collected, measured, classified, and filed at the fair's laboratory. At the same time this was occurring in Chicago, anthropological societies and museums of natural history around the nation accumulated tens of thousands of skulls of exotic peoples in an effort to prove that these "scientific" practices clearly illustrated differences between the superior Anglo-Saxon race and inferior foreign races by physical measurements.[11]

[9] John R. McRae, "Oriental Verities on the American Frontier: The 1893 World's Parliament of Religions and the Thought of Masao Abe," Buddhist-Christian Studies 11 (1991): 11-12.
[10] Robert Rydell, All the World's a Fair: Visions of Empire at American International Expositions, 1876-1916 (Chicago: University of Chicago Press, 1984), 40.

Evolutionary Scale of Civilization

The World's Columbian Exposition in Chicago was not the first fair to use cultural and racial stereotypes as inspiration for its exhibits, whether for scientific experiments or ethnological observations; the world's fair in Paris was a major inspiration for the Midway in Chicago. Robert Rydell recognized the significance of these fairs. "These events," Rydell says of the expositions, "were triumphs of hegemony as well as symbolic edifices." As symbolic universes, the fairs legitimated the "world order" they created. For Rydell, this symbolic universe, represented in ethnological exhibits, was the product of a union between "Darwinian theories about racial development and utopian dreams about America's material and national progress." In the exhibits fairgoers could walk from white civilization to dark barbarity, and experience the notion of social Darwinism for themselves. The Midway emphasized the inferiority of exotic and primitive races, while the White City showed the evident superiority of the industrialized white American race.[12]

Amusements on the Midway were based on the practices of the "inferior" ethnic groups or segments of a colony's population. For instance, all Native Americans might be thought to wear feather bonnets, or all the inhabitants of French West Africa to be like the Dahomeyans shown at so many other exhibitions. These impressions were reinforced if the people on display were housed in structures associated with only one group—the wigwam, the igloo, the grass hut, the Indochinese temple, or the West African mud stockade. Organizers relied on these stereotypes of exotic peoples to attract more visitors who wanted to see differing and primitive lifestyles.[13] The different lifestyles of colonial natives became standard fare at many expositions for the "education" and entertainment of westerners. The spectators and natives figured as categories in what Raymond Corbey considers western representations of "Self," or characters in the story of the ascent to civilization, depicted as "the inevitable triumph of higher races over lower ones and as progress through science and imperial conquest." Ethnologist Charles Rau, who observed the Midway in Chicago, stated that "the extreme lowness of our remote ancestors cannot be a source of humiliation; on the contrary, we should glory in our having advanced so far above them, and recognize the great truth that progress is the law that governs the development of mankind."[14] This statement serves as evidence that these sentiments of white superiority over exotic and foreign civilizations were shared by both the common people and the elites in American and western society.

[11] Raymond Corbey, "Ethnographic Showcases, 1870-1930," Cultural Anthropology 8, no. 3 (August 1993): 354-355.
[12] Meg Armstrong, "'A Jumble of Foreignness': The Sublime Musayums of Nineteenth-Century Fairs and Expositions," Cultural Critique 23 (Winter 1992-1993): 207.
Lower- and middle-class citizens did not feel that their racist views were immoral or bigoted, because they were supported by some of the most famous and established intellectuals in nineteenth-century society. The living exhibits on the Midway were typically organized on a scale from civilized to barbaric so that the lower- and middle-class citizens could easily see the distinction between civilization and barbarism. The lower a people, or race, was deemed to be by white America, the further removed it was from the "Indian school" that marked the civilized pole of the scale, and thus the further it was from the White City. Philippine Igorots and African Pygmies were situated near the pole of barbarity at the other end of the scale, referred to as the ultimate bottom dwellers on the evolutionary ladder, and placed furthest away from the White City. These peoples were presented in all of their "uncivilized" horror, to be jeered and hissed at by the paying customers.[15] Social historian Robert Muccigrosso also notes this particular assemblage of foreign villages clustered along the Midway. He and many other critics of the Midway assert that the arrangement of these Midway settlements exhibited racial and ethnic biases and was consciously designed to proclaim the superiority of white culture. They charge that officials intentionally arranged for non-Western exhibits to be closer to the "black" city (Chicago) and farthest from the White City. According to these critics, this represented a ranking of cultural achievements, a microcosm of the world of imperialism that exalted westerners over non-westerners.[16] I agree with Muccigrosso's assessment that the non-western exhibits were deliberately placed closer to the "black" city, because this is exactly what Sol Bloom had in mind. The exotic and unknown peoples were arranged this way in order to represent their complete backwardness compared to the elegance of the White City. White City officials were not comfortable with the Midway's close proximity due to its denigrating characteristics, so the two were placed as far away from each other as possible in order to maintain the purity and innocence of the industrialized White City.

[13] Burton Benedict, "International Exhibitions and National Identity," Anthropology Today 7, no. 3 (June 1991): 8.
[14] Corbey, "Ethnographic Showcases, 1870-1930," 341-342.

Harlow N. Higinbotham, president of the board of directors of the Columbian Exposition, rationalized the Midway in his official report, published five years after the fair concluded. His description of the fair's organization reinforced this separation of the Midway from the White City for moral reasons. He argued that "the eye and mind need[ed] relief" from the Court of Honor in the White City. The Midway granted the "opportunity for isolating...special features, thus preventing jarring contrasts between the beautiful buildings and the illimitable exhibits on the one hand, and the amusing, distracting, ludicrous, and noisy attractions" on the other. The low or popular culture of exotic and foreign peoples, in his mind, must not violate the sanctity of high white, industrialized culture.[17] Higinbotham, as well as other elite board members, is the reason why the Midway was allowed to racially and culturally stereotype other groups of people and profit heavily from it. The Midway institutionalized the concept of Anglo-Saxon racial supremacy and the uninterrupted progress of Western civilization, and its organizers transferred this ideology to the organization of the fairgrounds.

[15] Ibid., 345.
[16] Robert Muccigrosso, Celebrating the New World: Chicago's Columbian Exposition of 1893 (Chicago: Ivan R. Dee, Inc., 1993), 164.

African American Characters

Although the exhibited peoples were isolated due to their "low culture," as Higinbotham described, they served several functions on the Midway. The American firm William Foote & Co. exploited a show with African-Americans appearing—as the letterhead of the firm stated—as "Savages, Slaves, Soldiers, and Citizens." Crafts, hunting techniques, rituals, dances, and songs were among the activities staged, as well as stereotypical "authentic" performances like warfare, cannibalistic acts, and head-hunting. Igorots from the Philippines could be seen eating dog meat, a food taboo in the west, while African Pygmies illustrated decapitation. The Dahomey "Amazons," heavily armed, simulated fights for the amusement of the white visitors. Aborigines from Queensland, Australia, were described on posters as cannibals and bloodthirsty monsters, further fueling the stereotype of them and other black peoples as animalistic abhorrences of nature.[18]

Egyptian and Dahomeyan Women, and the Media's National Influence

These exhibits were meant to entertain the public, and the Midway certainly was full of amusements. It contained sensational spectacles such as exotic dance shows and racialized "others" performing their daily tasks. One of the most popular stops on the Midway was Little Egypt, which offered the exotic female as an object of sexual desire, clearly reflected in the form of the "hootchy-cootchy" dance.[19] The women who performed these dances were linked to the ancient sphinx in the printed media of the time, whether through writing or photography. The sphinx was not only a creature thought to be half-woman and half-beast, but also timeless and lifeless. Popular white media sources did not stop there with their depictions, however. They also compared the modern Egyptian woman to Cleopatra, reinforcing the perception of these women as unchanging and timeless. This linking of the modern Egyptian woman with ancient monuments and thousand-year-old Cleopatras imprisoned her in a time capsule. Her clothing and expression were seen as part of her bondage, at the same time that the stereotypical statements of the press served only to seal her in a civilization of the past. Meg Armstrong believes that this portrayal of the Egyptian woman made her less individualistic and more mythical, while also making her more "masculine" as her powerlessness was unveiled to the white, superior public.[20]

While observing these exotic women on the Midway, white spectators also viewed them through their own definition of Anglo-Saxon beauty. Dahomeyan women had "dusky beauty" and "savagery" and were commonly depicted in Midway Types, a widely circulated Chicago Times portfolio of the exhibited people on the Midway. In addition to being viewed as the "savage tigresses" of the Midway, Dahomeyan women were depicted as lacking the beauty and grace ascribed to favored races; one Amazon was ridiculed for carelessly dangling her legs over the edge of a hammock that was carrying her along the Midway.
Impressionable young white men were warned that the dances and songs performed by these women, who enacted risqué caricatures of feminine allure, would, as one guidebook held, "deprive you of a peaceful night's rest for months to come."[21] The intricacies and precision of these exotic dances and rituals by these foreign seductresses mesmerized many male spectators.[22] Beauty on the Midway was determined by the eye of the white beholder, and this beauty was used as a common measure of civilization at the exposition. Dahomeyan women were gaped at in curiosity and awkward amusement as representatives of a non-industrial, non-Christian society during the era of social Darwinism.[23] However, they still managed to become real, living persons through the personalized use of their names in the newspapers.[24] As propaganda to bolster white claims to racial superiority, the newspapers served to convince fairgoers of what they might have missed at their first observation and, aided by racist narratives, how they were supposed to think when confronting the image of any exotic visitor.[25] The "Chinese Beauty" received little to no commentary in the media, while the "Javanese Beauty" was of a "people who were favorite types of study for all who visited them, their small stature, gentle ways and marked air of contentment winning the liking of all who saw them."[26] The status of nations as savage or civilized was determined by white superiors. The whiter a particular race was, the more beautiful its people were represented to be as a whole. Beautiful people were stereotyped as more civilized and closer to assimilating American ideals, as compared to darker-skinned Africans, for example.

[17] Ibid., 154.
[18] Corbey, "Ethnographic Showcases, 1870-1930," 347.
[19] Armstrong, "A Jumble of Foreignness," 209.
[20] Ibid., 213-214.
[21] Judy Sund, "Columbus and Columbia in Chicago, 1893: Man of Genius Meets Generic Woman," The Art Bulletin 75, no. 3 (September 1993): 450-451.

Reporter and journalist Marian Shaw went through the entire fairgrounds and noted her own personal feelings, particularly on the backwardness of the Chinese and other races and cultures exhibited on the Midway. She visited Old Cairo Street, where camels roamed around the campgrounds or donkeys were "driven by barefooted, yelling little Arabs, who, clad in long, dirty white garments resembling night gowns, scream and hoot and pummel the long-suffering little beasts with their sticks." She regarded these Arabs as wretches who made shrilling sounds from their "barbarous little throats."[27] Her articles expose the nineteenth-century belief in the progress of the Anglo-Saxon race in America and Europe as contrasted with other, "primitive" races. This distinction of the races was neatly packaged in the separation of the highly symbolic White City, with its white, Italian neoclassic buildings, from the Midway, with its living exhibits from Java, Samoa, Egypt, and other exotic countries. Shaw, like other fair visitors, looked to the Midway as a kind of living timeline showing the advances of the races.[28]

These people are they, who, in the mad race of nations for power and self, seem to have been left far behind, and, compared with the nations of today, are like untutored children. From the Bedouins of the desert and the South Sea Islanders, one can here trace, from living models, the progress of the human race from savagery and barbarism through all the intermediate stages to a condition still many degrees removed from the advanced civilization of the nineteenth century.[29]

These remarks clearly show that Shaw viewed herself as the social and cultural superior of these exhibited foreigners, but she did not view all of the peoples on display as uncivilized and heathen.

[22] Chicago Daily Herald (University of Illinois at Urbana-Champaign Library; Chicago: Stuart Paddock, July 9, 1893, text-fiche), 25.
[23] Chicago Tribune (University of Illinois at Urbana-Champaign Library; Chicago: David Hiller, August 16, 1893, text-fiche), 1.
[24] Chicago Herald (University of Illinois at Urbana-Champaign Library; Chicago: Stuart Paddock, August 19, 1893, text-fiche), 2.
[25] Chicago Evening Post (University of Illinois at Urbana-Champaign Library; Chicago: Melville E. Stone, September 16, 1893, text-fiche), 5.
[26] Armstrong, "A Jumble of Foreignness," 220-221.

African-American Reactions to the Portrayal of Dahomeyans and Samoans

African-Americans wanted to be very involved in the organization of the Midway, and they wanted full creative control of their own exhibits so that the darker-skinned natives that Shaw described would not be misrepresented by white organizers. African-Americans also wanted to show the more prominent and civilized features of black America, but they would be denied this active role. These people, who had vainly fought an unstated but effective color barrier in the exposition's planning phase, were angered by the organizers' ongoing display of Samoan and Dahomeyan male "savages" who were only capable of breaking bones and hunting animals. These groups of people were viewed as unable to achieve independent status in their own land due to their own inability to industrialize, which in turn meant becoming civilized. Frederick Douglass was offended by these representations of black peoples on the Midway. He protested that the warriors of the Dahomey Village perpetuated the stereotype of blacks as primitive savages, but the Samoan Islanders on the Midway fared even worse. Billed as people "so recently rescued from cannibalism," the Samoans sang and danced but impressed visitors more with their size and reputed appetite for human flesh. The prevailing stereotype of people of color as barbarous and bloodthirsty brutes was simply too much for the Samoans to overcome with their more civilized and entertaining displays. Americans were not impressed with things familiar to them, and they wanted to see these "inferior black brutes" in all of their ferocity and vileness. White audiences still appreciated their playing of "Yankee Doodle" on drums and gongs at the end of their staged presentation, but they were more impressed by their exoticism.[30]

[27] Shaw, World's Fair Notes, 58-59.
[28] Ibid., 88-89.
[29] Ibid., 56.

Aside from these exotic portrayals of Africans, blacks were not favorably represented either in the industrialized White City or on the Midway. In fact, African-Americans were banned from participating in and organizing the fair. Ida B. Wells and Frederick Douglass joined forces and compiled a booklet, "The Reason Why the Colored American Is Not Represented in the World's Columbian Exposition." Some 10,000 copies of it were distributed during the fair. Wells and Douglass would disagree on the merits of the Colored Jubilee Day at the fair, a day specifically arranged in order to show the national contributions of African-Americans.

[30] Muccigrosso, Celebrating the New World, 164-165.
Douglass saw it as a small victory for blacks, while the idealistic Wells scoffed at the notion of simply having one day to acknowledge all the successes of her people. The celebrated day did pass without Wells's participation, but Douglass' superb handling of the event changed Wells's perception of it from a belittling occasion to an enlightening experience.[31]

[31] Christopher Robert Reed, "All the World Is Here!": The Black Presence at White City (Bloomington: Indiana University Press, 2000), 152-205.

Interactions/Reactions of the Observers and Exhibited People

While the exhibits frustrated and angered African-Americans, white visitors were amused by the exotic dances and rituals. It was also quite usual for them to physically interact with the exhibited natives. They even threw money to Dahomeyan performers, who were made to beg for it. Clearly, the intent of the promoters of the Midway attractions was to simultaneously turn a profit while presenting the world in all its diversity, based on observation and actual interaction.[32] Of course, the exhibited peoples' behavior and movements were strictly controlled in order to preserve the safety of the paying customers. The peoples on display were represented as "different" from the spectators and forced to behave in a manner that clearly demonstrated their inferiority to the Anglo-Saxon visitors. It was unthinkable that they should mingle spontaneously with the spectators in almost all situations, and there were few opportunities for contact between the two parties. The living exhibits had to stay in a certain circumscribed part of the exhibition space, which represented their world; a boundary lay between this world and that of the citizens visiting and inspecting them, between wilderness and civility, nature and culture, which had to be respected unconditionally. All signs of acculturation were avoided as long as the natives were on show, because they were to appear clearly heathen peoples compared to Anglo-Saxons.[33]

[32] "The Fair as Educator," Harper's Weekly 37 (University of Illinois at Urbana-Champaign Library; New York: J. & J. Harper, June 10, 1893, text-fiche), 543.

One key question that has to be asked is how these exhibited individuals themselves, often more or less coerced into participation, experienced and coped with the confining exhibits and the sometimes obnoxious spectators who viewed them as abhorrences of civilization. Many of the exhibited natives had to battle homesickness, emotional confusion, difficulties of adjustment to the climate and food, and vicious infections. They often actively resisted the roles that were forced on them, for instance by running away, and they could be put back in harness only by force. The reality of the situation was that these "inferior" exhibited peoples basically had no means to escape their servitude to the organizers. It is obvious that they did not enjoy their time on the Midway, and they received no real benefits from doing so. They were forced to display their "inferior" racial and cultural identities in order to entertain a "superior" white civilization.[34]

Native Americans

Native Americans were one of the groups that had significant exposure on the Midway Plaisance so that they could demonstrate their "inferior" status. Although these "Indians" lived on American soil, they were still viewed as barbarians by a majority of white Americans at the end of the nineteenth century. On the Midway, they were set up in teepees while going about their daily native customs, such as cooking over a fire or making bead necklaces. Of course, the organizers insisted on the teepees as their habitats, although the majority of the native participants did not use teepees as their natural living quarters. Sometimes the Native Americans performed certain rituals for the public. Marian Shaw, a newspaper reporter, noted one of the spectacles: the performers were "artistically painted in chrome yellow, vermilion and green, with feathers, knives, tomahawks and all of the horrid accoutrements of savage warfare."[35] She despised their war dances and ceremonial music because they were primitive in her mind, as well as in the minds of most white observers.

[33] Corbey, "Ethnographic Showcases, 1870-1930," 344-345.
[34] Ibid., 348.

One sideshow that also constituted effective racism toward Native Americans was Buffalo Bill Cody's Wild West Show and its cleverly named "Congress of Rough Riders." Indians were portrayed as murderous and warlike savages in these shows, and civilized white cowboys played the part of the courageous heroes and victors over the uncivilized heathens. They were very popular forms of entertainment and enjoyed enormous profits. At the opening ceremonies of the fair, several recently defeated Sioux chiefs (the Wounded Knee massacre had occurred only three years before) were made to appear at the climax of the festivities as the chorus was singing "My Country 'Tis of Thee." These ceremonies and shows symbolized the triumph of white civilization over the "inferior" Indian nations, through both the portrayal of whites as military victors and the willingness of Native Americans to represent themselves as a conquered and obedient race.[36]

Native Americans were portrayed as uncivilized savages, but there was also an effort on the Midway to show that they could be assimilated into white, mainstream society. This desirability of "civilizing" North American Indians was an important theme in the late nineteenth century. The American emphasis on educating and assimilating Native Americans and other dependent peoples was tempered by ideas of racial and social evolution which placed darker-skinned people much lower on an evolutionary scale than white civilization.[37] Commissioner Morgan of the Bureau of Indian Affairs envisioned an Indian exhibit which, in spite of a large dose of traditional flavor, would convince American citizens that the US government was making "United States citizens out of American savages."[38] Morgan turned to Carlisle's Richard Henry Pratt, the nation's best-known Indian educator, to organize and supervise a Native American youth school on the Midway. Pratt refused, arguing that Buffalo Bill's Wild West Show would provide ample illustration of Indian ways and that the government should not degrade itself by "illustrating in any way the old Indian camp life."[39] By all accounts the Wild West performances, featuring plenty of mounted warriors, were a great success, enjoyed by a public obviously more interested in the Indians of old. The school did eventually open on the Midway, showing Native American youths performing arithmetic and choral singing in a classroom.

[35] Marian Shaw, World's Fair Notes: A Woman Journalist Views Chicago's 1893 Columbian Exposition (Chicago: Pogo Press, Incorporated, 1992), 59.
[36] McRae, "Oriental Verities on the American Frontier," 12-13.
[37] Benedict, "International Exhibitions and National Identity," 7.
However, the former stereotype of the Indian as savage warrior prevailed, and the school itself was rarely attended and a financial failure.

The Far East, China in Particular

Americans' attitudes toward the Japanese in 1893 were demeaning and patronizing. The Japanese were portrayed as "cousins" of the Chinese, and visitors to the Japanese Village on the Midway were invited to view "part and parcel of the home life of the little brown men." The possibility that these foreigners might become full citizens of the beautiful American utopia was increasingly problematic for Anglo-Saxons. If, however, the Japanese were given at least a little respect in 1893, the Chinese were seen as replicas of the old stereotypes of the shrewd, cunning, and threatening "John Chinaman." References to "almond-eyed" and "saffron-colored Mongolians" abounded throughout the entire nation. Hubert Howe Bancroft, who in the 1880s had written that "as a progressive people we reveal a race prejudice intolerable to civilization," looked disdainfully upon the Chinese theatre for the "oddity of the performance and for the nature of its themes." He stated that China "is a country where the seat of honor is the stomach; where the roses have no fragrance and women no petticoats; where the laborer has no Sabbath and the magistrate no sense of integrity."40

Charles Stevens' Uncle Jeremiah was a fictional story about a black family visiting the world's fair in Chicago. Uncle Jeremiah, the main character of the book, notes his reactions to the displayed people on the Midway. Through the dialogue and actions of Uncle Jeremiah, Stevens reveals his own feelings about "inferior" people from the Far East. His character expressed pleasure that a few "decent-looking Chinamen" who did not "look like rats and whose fluent English proclaims their long stay in 'Flisco'" were serving tea at the entrance to the theatre, but he also stated his suspicion that the nearby temple probably contained the opium banks of the morally backward and drug-addicted Chinese actors. Stevens' use of Uncle Jeremiah's criticism and distrust of the Chinese is significant because it empowers African-Americans in a white-dominated society. Although African-Americans were severely oppressed in late nineteenth-century America, Stevens believes that they were able to find some comfort in the fact that other people, the Chinese in this case, were less civilized and acculturated than they were. Stevens' portrayal placed the Chinese below blacks on the scale of civilization.

Harper's Weekly, in an article on the Fourth of July parade staged by the villagers of the Midway Plaisance, had its own mixed view of the Chinese: "[They] are a meek people, but seem anxious to apologize and make atonement for their humility by the extraordinarily aggressive dragons and devils which they contrived. The dragon did much to raise the standing of the Midway Chinese among other more savage and not half so ingenious races."41 The Chinese were thus viewed as creative and half-intelligent, which was much more than could be said for numerous groups of people on display.

38 Robert A. Trennert Jr., "Selling Indian Education at World's Fairs and Expositions, 1893-1904," American Indian Quarterly 11, No. 3 (Summer 1987): 205.
39 Ibid., 205-211.
40 Rydell, All the World's A Fair, 49-51.
41 Ibid., 51-52.
Harper's Weekly, Uncle Jeremiah, and other printed sources popularized national images of the "Chinaman" as timid but cunning and uncivilized, placing the Chinese on the lower end of the evolutionary scale, though still exhibiting superior traits compared with more "barbaric" people such as the Javanese and Dahomeyans. Jacob Riis agreed with this portrayal of the Chinese as cunning and manipulative, as well as a "constant and terrible menace to society, wholly regardless of their influence upon the industrial problems which their presence confuses."42

The Javanese

The unknown both amused and frightened spectators, who favored their own cultures but worried that these racialized "others" might taint white civilization's progress with their own inadequacies. The Javanese village on the Midway serves as a good example of this fear. Although the Javanese lived in bamboo houses surrounded by tropical palm trees, spectators found the houses awe-inspiring for their strength, their imperviousness to rain, and a lightness so extreme as to leave them unaffected by earthquakes. The houses were built on stilts to protect people from the snakes that infested their native soil in Java. The Javanese themselves entertained the visitors with jugglery, dancing, fencing, wrestling, and snake-charming. The "wajang-wong," a Javanese pantomime, greatly impressed Shaw as well.43 Although the Javanese appeared to be a primitive people, Marian Shaw thought that they were very efficient and civilized in how they lived their simple lives. It was this easygoing and romantic lifestyle that made Americans worry that their own superior lifestyles were being challenged by foreigners.

These Javanese were generally referred to as "Brownies" by the visitors, a term that was reinforced by popular newspapers and journals of the day. "About the shade of a well-done sweet potato," the Popular Monthly reported, "the Javanese holds the position closest to the American heart of all the semi-civilized races." The Javanese men were described as industrious workers, while the women were viewed as tireless domestic matriarchs. Described as cute and frisky, mild and inoffensive, but childlike above all else, the Javanese were allowed to entertain white Anglo-Saxons as long as they remained in their evolutionary niche.44 They were just a step above the Dahomeyans on this racialized hierarchical ladder of civilization.

The Lasting Legacy

The World's Columbian Exposition in Chicago was designed to celebrate the four-hundredth anniversary of Christopher Columbus' landing in the New World.45 The fair was meant to celebrate the progress of Anglo-Saxon civilization. Industrialization was the driving force of this progress of superior society, and this was represented in all of the technology and machinery on display in the White City. However, social and cultural progress was evident solely on the Midway Plaisance, where a hierarchy of races was on exhibit for spectators of all classes in American society. Most of these white observers judged the exhibited racialized "others" based on national stereotypes of exotic and foreign races as primitive, inferior, backward, and in need of white guidance and nurturing. Through the new visual culture of America, whites viewed themselves as superior to darker-skinned Africans, warrior-like Dahomeyans, feminine Egyptians, and sly and odd-looking Chinese and Japanese.

42 Riis, How the Other Half Lives, 126-127.
43 Rydell, All the World's A Fair, 57-58.
Imperialism required Americans to view themselves as the ultimate, civilized world society that was destined to dominate and influence lesser civilizations, and nowhere was this more evident than on the Midway in Chicago. As one observer noted, "To the layman not interested in the arts and sciences it will remain the great attraction of the fair. One leaves it with a delightful feeling of having seen the one spot on the globe which gives in a very comprehensive way an idea of the world's nationalities with their various customs and manners in surprising detail."46

44 Rydell, All the World's A Fair, 65-66.
45 Delays meant that it actually took place a year after the anniversary.
46 Frank Leslie's Illustrated Weekly (University of Illinois at Urbana-Champaign Library; New York: Stuart Paddock, June 25, 1893, text-fiche), 25.
Sustainable Practices in Waste Management

Humans generate a lot of waste, much of which now affects the air we breathe, the water we drink, and the land on which we live. According to the United Nations, about 11.2 billion tonnes of solid waste are collected worldwide each year, nearly all of it the product of human activity. We therefore not only need to manage this waste but also come up with strategies for managing it sustainably. This article will discuss sustainable waste management, why it matters, and the best strategies for achieving it.

Waste management refers to the practice of collecting, transporting, processing or disposing of, managing and monitoring various waste materials. It is important to observe sustainability in this area so that every bit of waste can be managed efficiently rather than simply dumped in landfills.

- What is Sustainable Waste Management?
- 4 Ways to Create an Efficient Waste Management Plan
- Why Is Sustainable Waste Management Important?
- Best Solutions for Sustainable Waste Management

What is Sustainable Waste Management?

Sustainable waste management refers to the collection, transportation, valorization and disposal of the various types of waste in a manner that does not jeopardize the environment, human health or future generations. It includes every activity involved in the organization of waste management, from production to final treatment.

It is important to note that there are various types of waste, such as municipal waste, which includes household, commercial and demolition waste; electronic or e-waste, which includes computer parts; and radioactive waste, among many other forms.

The goal of sustainable waste management is to reduce the amount of natural resources consumed, reuse the materials taken from nature as much as possible, and create as little waste as possible. It is our responsibility to maintain sustainability for the benefit of our environment as well as future generations. A well-functioning sustainable waste management system should incorporate feedback loops, focus on processes, embody adaptability and divert waste from disposal.

Sustainable waste management is a key concept of the circular economy and offers many opportunities and benefits to the economy, society and the environment. It involves collecting, sorting, treating and recycling waste and, when properly facilitated, providing a source of energy and resources. It therefore creates jobs, improves waste management methods, and lessens the impact of human activities on the environment, thereby improving air and water quality. It also reduces food wastage, keeps heavy environmental costs at bay, and prevents some human health conditions, thereby improving overall human life.

4 Ways to Create an Efficient Waste Management Plan

You can create an efficient plan for waste management in your facility in the following 4 ways:

1. Considering Sustainable Materials Management

Don't treat waste management as a last resort; take the approach of sustainable materials management instead. The former asks you to look at all the waste that is generated and think of different methods by which you can recycle or reuse it. The latter allows you to make deliberate and informed decisions about how materials should flow at different manufacturing stages so as to generate less waste.
2. Planning at Every Stage

Planning for waste management is not a one-time event but a process consisting of various stages that come together to help you achieve your goals. Follow and track your plan at every stage. By employing strategic planning, you get the opportunity to deliver sustainable improvements to local waste management practices, since strategic planning can respond to the ever-changing markets for waste and recovered materials.

3. Collaborating Whenever Possible

Collaborate with organizations and companies that share the same goal. Public-Private Partnerships for Service Delivery (PPPSD) is one such approach that promotes sustainable and self-supporting partnerships between businesses and local governments. This kind of collaboration helps stimulate improved cooperation between public, private and citizen stakeholders. It also helps minimize the adverse effects of waste in poor communities, contributes to the sustainable improvement of recycling and solid waste management, and improves the livelihoods of people and businesses in rural and urban communities alike.

4. Aiming to Avoid the Landfills

Aim to divert waste from landfills as much as possible. Civic bodies must make an effort to operate under the various legislative requirements that set specific diversion goals. Determine the actual diversion rate at the different stages of your recycling programs: you must know the quantity of materials that ended up being used in the production of recyclable products.

Importance of Accurate Weighing in Recycling

Everyone in the recycling industry stresses the importance of accurately weighing materials. In fact, the recycling industry depends on weighing recycling waste accurately, regardless of whether you are a buyer or a seller. By incorporating weight scales such as truck scales, forklift scales, floor scales, bench scales, etc., you can ensure that every waste material, no matter what it is made of, is weighed accurately, so that you know exactly how much is being recycled, reduced and sent to landfills. It also helps in getting the right amount of money for the exact quantity you are selling or buying.

Why Is Sustainable Waste Management Important?

1. It creates space

If waste were never managed, it would end up on land, either scattered or centralized in a landfill somewhere. Landfills are big and can use up a lot of space. In confined areas, you have to control and manage your waste sustainably so that you make the best use of the land available. The best example is Singapore, which measures roughly 700 square kilometers and is home to over 5.5 million people. The country already faces land constraints, which is why its National Environment Agency understands the need to reuse waste as well as properly dispose of it.

2. It saves and also makes money

Once you reuse or recycle anything, you will not need to buy another of the same; you save money that would otherwise be spent replacing an item that could have been recycled or reused. It also means the agencies that take care of our trash will not be forced to manage it as constantly, as there will be less of it. Increasing recycling can cut disposal costs and improve the bottom line.

Sustainable waste management can also generate money for some establishments. For instance, municipal administrations that collect garbage can charge collection and recycling fees, making money in the process.
Such fees will also put pressure on institutions that generate a lot of waste, pushing them to become more sustainable and more responsible toward the environment.

3. It enhances sustainability

Managing waste, energy and water, and doing it more efficiently, is at the core of sustainability. Improving our individual, business, government or organizational sustainability can boost our image as individuals, businesses, governments and organizations, respectively, attracting more quality tenants, clients and customers to our establishments. It also positively engages our employees, volunteers and citizens.

4. It controls pollution

Each type of waste we dump has a particular effect on the environment. For instance, pharmaceutical waste poisons our water, and waste food invites flies and rodents. Sustainable waste management helps us understand our waste and how best to deal with it. Pharmaceutical waste should be returned to its original manufacturer for proper disposal, such as incineration; food waste should be composted; and plastics can be recycled. These measures and more will help control pollution: pharmaceutical waste will not poison the water, plastics will not choke marine life, and food waste will not invite rodents.

5. It is the core of environmental conservation

The greatest enemy of the environment is humans. We produce trash at an incredibly quick rate, and our waste management methods are still poor. Sustainable waste management is therefore at the core of environmental conservation, as it will help preserve and improve the environment, not only for us but also for other species and future generations. It conserves resources such as trees, metals and water, reduces the greenhouse gas emissions that contribute to global warming, and improves existing resources, for example by providing composted waste that nourishes the soil.

6. It makes us better and more responsible inhabitants of the earth

Humans cannot live without generating waste. Sustainable waste management will help us become better and more responsible citizens of the planet by carefully, effectively and sustainably managing our waste. We will come up with better ways of managing waste, new technologies for dealing with it, and the best alternatives for each waste stream. For instance, food remains and fruit can be composted, plastics recycled, and paper incinerated instead of dumping them all in a landfill.

Best Solutions for Sustainable Waste Management

1. Go paperless

Despite the world becoming more technologically advanced, most businesses still use paper and ink, which form one of the biggest waste categories. To become sustainably responsible for the environment, we need to cut down the amount of paper and ink we use. Get rid of paper as far as possible, and instead implement policies that allow individuals and businesses to go digital, go online and use cloud storage. Only print when it is absolutely necessary, and when doing so, print on both sides of the paper and decrease the margins to reduce the number of sheets used. Print in 'draft mode' to cut down on ink consumption, and in the bathroom, switch to hand dryers to eliminate paper towels.

2. Incinerate waste

Incineration is a technique that transforms waste through fire. Waste combustion generates heat and electricity, although it pollutes the air. Incineration works well where you do not want to store the waste in a central location and where the waste cannot be used for any other purpose.
Singapore, for instance, incinerates about 8,200 tons of garbage daily, reducing its waste volume by 90%. Its incineration plants, in turn, produce over 2,500 MWh of energy daily, enough to support 900 homes. Although incineration pollutes the air, Singapore's incineration programs recover reusable metals that can be sold for profit, and the nation is also expanding its recycling programs.

3. Donate anything useful

Not everything that has been used should be thrown in the trash bin. Donate some of the stuff you no longer use or need, as it will benefit those who receive it. For instance, restaurants, hotels and grocery stores should donate extra, perishable and prepared food to homeless shelters and food banks. Donate soaps, toiletries, shampoos and skincare products; old computers, printers, hardware and other electronics; and old furniture like desks and chairs to those who might need them.

4. Reduce, reuse, recycle

Recycling saves energy, keeps materials out of landfills and incinerators, and provides raw materials for new products. Have more bins for collecting recyclables like paper, glass, plastics and many more, which can then be recycled. Where possible, reuse products like plastic bottles instead of throwing them away after one use. Reusing keeps these and other products out of the garbage bin, conserving the environment. Also, minimize your use of disposable products. For instance, instead of having takeout food in buckets and cups that will end up in the bin, go into the restaurant and have the food there, on plates that will be cleaned and used again and again.

5. Compost your lunches

Composting is a green and great way of disposing of waste. At home, put the food waste in a bag; at work, do it in the break room or cafeteria. You can compost excess fruit, tea bags, eggshells, coffee filters, greasy pizza boxes, and much more. Tightly seal the composting bin or bag to minimize odors and fruit flies. Also, be sure to use compostable bags so you can easily transfer the waste to your own composting pile. The compost pile will become a good addition to your office or home garden, as it will nourish the soil. Composting, in general, converts and recovers organic matter into stabilized, hygienic, soil-like products that are rich in the humic compounds that enrich soil.

6. Anaerobic digestion of waste

This is a process similar to composting, although it does not use oxygen. Anaerobic digestion allows the treatment of organic waste and sludge by fermentation, in the absence of oxygen. The materials are sealed off and the bacteria live off the organic matter itself. It is a slower process than composting, but it can have far more useful results. Anaerobic digestion of waste produces methane, the key component of biogas, which is a renewable source of energy that can be used for cooking, generating heat and even producing electricity for the home.

7. Waste collection

Waste collection should not be left to the municipal authorities alone but should be the responsibility of every individual, business, organization and government. The collection of household waste is done by garbage trucks, which go to each point of garbage production to collect the garbage. These systems should be streamlined or centralized so that all waste is classified and collected the same way. What can be recycled should be recycled, and what cannot should not contaminate the recyclable waste.
This way, it is possible to enjoy some of the benefits of waste, like energy recovery, which can generate fuel or electricity as discussed above.

8. Educating the masses

We inherently know that we need to take care of our waste. However, not everyone knows how to do it sustainably. It is therefore important to understand the amounts and types of waste we produce, as well as how to manage them effectively and sustainably. We also need to know how to reduce hauling costs and how to negotiate for waste and recycling services that fit our needs. Once we have this knowledge, we need to share it with others, through books, videos, articles, seminars and all other available means.

Kevin Hill heads up the marketing efforts and provides technical expertise to the sales and service teams at Quality Scales Unlimited in Byron, California. He enjoys everything mechanical and electronic, computers, the internet and spending time with family.
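Returning to the diversion-rate advice in tip 4 of the planning section: the arithmetic is simple enough to sketch. The following is a minimal illustration, not from the article itself; the function name and tonnage figures are made up, and it assumes your scales report tonnages for each stream:

```python
# Minimal sketch: computing a waste diversion rate from weighed tonnages.
# All names and figures here are illustrative, not from the article.

def diversion_rate(recycled_t: float, composted_t: float,
                   landfilled_t: float) -> float:
    """Fraction of total waste kept out of the landfill."""
    diverted = recycled_t + composted_t
    total = diverted + landfilled_t
    return diverted / total if total else 0.0

# e.g. 12 t recycled, 5 t composted, 20 t landfilled in a quarter:
print(f"{diversion_rate(12, 5, 20):.0%} diverted")  # prints "46% diverted"
```

Tracking this one number per stage, quarter by quarter, is an easy way to see whether a recycling program is actually moving material away from the landfill.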
What are the three constitutional safeguards?

The constitutional safeguards are broadly grouped into five categories:
- Social safeguards.
- Economic safeguards.
- Political safeguards.
- Service safeguards.
- Educational and cultural safeguards.

What are constitutional safeguards?

The Fourth Amendment – protection from 'unreasonable search and seizure' and restrictions on warrants. The First Amendment – protects freedom of religion, press, speech and peaceable assembly, and ensures that citizens have the right to ask the government to redress grievances.

What constitutional amendments are relevant in criminal procedure?

The most important amendments that apply to criminal law are the Fourth, Fifth, Sixth, and Eighth Amendments. All of these constitutional rights must be ensured in criminal legal cases in the United States of America.

What two amendments are safeguards to people accused of a crime?

The Sixth Amendment guarantees the rights of criminal defendants, including the right to a public trial without unnecessary delay, the right to a lawyer, the right to an impartial jury, and the right to know who your accusers are and the nature of the charges and evidence against you.

What is Article 342?

Article 342 provides for the specification of tribes or tribal communities, or parts of or groups within tribes or tribal communities, which are deemed to be, for the purposes of the Constitution, the Scheduled Tribes in relation to that State or Union Territory.

What is Article 341?

(1) The President may, with respect to any State or Union territory, and where it is a State, after consultation with the Governor thereof, by public notification, specify the castes, races or tribes or parts of or groups within castes, races or tribes which shall for the purposes of this Constitution be deemed …

What are the safeguards of the Indian Constitution?

Article 324 of the Constitution provides for the Election Commission, its powers and functions for maintenance of the Electoral Roll and the conduct of elections in a free and fair manner.

Why does the government create safeguards?

To protect the rights of the people. These guidelines are used to help decide when individual rights interfere with other important rights and interests, including the rights of other individuals.

How many states are there in the Indian Constitution?

The State List or List-II is a list of 61 items. Initially there were 66 items in the list in the Seventh Schedule to the Constitution of India. The legislative section is divided into three lists: the Union List, the State List and the Concurrent List.

What are the 3 most important amendments?

Freedom of religion, speech, the press, assembly, and petition.

What are the 5 sources of criminal procedure?

These include the U.S. Constitution, the U.S. Supreme Court, state constitutions and courts, federal and state statutes, rules of criminal procedure, the American Law Institute's Model Code of Pre-Arraignment Procedure, and the judicial decisions of federal and state courts.

What 4 amendments protect the rights of the accused?

These amendments include the Fourth, Fifth, Sixth, Eighth, and Fourteenth Amendments. Their purpose is to ensure that people are treated fairly if suspected of or arrested for crimes. The Fourth Amendment protects people from unreasonable searches and seizures without a warrant.
MANIFESTO STOP FOOD WASTE

Access to food is a universal human right, legally binding on all states that have ratified the Universal Declaration of Human Rights. The current food system's model of production, supply, distribution and consumption has been unable to resolve the issues of food security and food sovereignty for communities worldwide. Almost 900 million people are poorly and inadequately fed, while obesity affects one third of the industrialised nations' population.

It remains a tragic reality that throughout the globe huge amounts of food fail to reach our plates, while millions of people are in need of food assistance. Half of the current food losses could feed all the hungry people in the world.

Food waste occurs at every stage of the supply chain. Estimates from the United Nations Food and Agriculture Organisation (FAO) show that one-third of food production is lost from farm to fork. In developing countries food waste is caused more by inefficiencies in agricultural production and post-harvest storage, while in industrialised economies waste is present in agro-industrial production, distribution and end consumption.

Food losses weaken the economy, make firms less competitive and increase household expenditures, forcing public administrations to finance the management of surpluses and food losses that could have been avoided. The environmental effects of food waste span from the reduction of available fertile land, loss of biodiversity, and over-use of water and energy, to emissions of greenhouse gases. For these reasons food waste is increasingly appearing on the political agenda as the public administration and private sector begin to look for ways to take action.

The individuals, organisations, and public and private institutions signatory to this manifesto are aware that both the eradication of food waste and the assurance of food security require continuous efforts that go beyond specific action, and commit to collaborate on the following strategies:

- Raise social awareness about food wastage while promoting education on proper food handling at all steps of the food chain and in households.
- Support research and innovation oriented to developing skills for the full use of food.
- Promote better transparency of information on food usage and food waste.
- Promote the legislative and regulatory changes necessary to encourage full use of food while ensuring food safety and liability protection.
- Promote economic and fiscal measures aimed at preventing food losses and at managing food waste responsibly according to the "Waste Hierarchy".
- Facilitate donations of food and direct gleaning activities as a community safety net for social NGOs and disadvantaged groups.
- Collaborate in halving food wastage by 2025, as recommended by the European Parliament resolution of January 19th, 2012.

Platform for Resourceful Food Use
In connection with the ceremonies marking the transition from colonial rule to independence on June 30, 1960, King Baudouin I of Belgium came to Léopoldville (Kinshasa). Here he was welcomed at the airport by Congo's first president, Joseph Kasavubu (left), and the country's first prime minister, Patrice Lumumba (center).

After General Mobutu carried out a coup in the fall of 1960, arrested Prime Minister Lumumba and handed him over to the rebel Moïse Tshombe in Katanga (where he was assassinated), Mobutu was formally appointed commander of the armed forces by President Kasavubu in January 1961. From 1965 until he was toppled in May 1997, Mobutu was president of the Congo, which he renamed Zaire in 1971. In the photo from 1961, Kasavubu stands in the middle and Mobutu on the left.

Congo became independent on June 30, 1960, ruled by a coalition government with Lumumba as prime minister and Kasavubu as president. The army was still led by Belgian officers, and mutiny broke out in the armed forces. Belgium immediately sent troops to put down the rebellion, officially to protect Belgian citizens. The majority of the European population fled the country in panic.

Protected by Belgian soldiers, Moïse Tshombe declared Katanga's independence on July 11, 1960, with himself as head of state. This happened in agreement with Belgium, South Africa and the United States, and with the support of the mining companies, which had great interests in the province's mineral wealth. In August, the province of Kasaï also broke away and declared its independence, but this rebellion was put down.

The secession of Katanga weakened Lumumba's strategy for a united Congo, and the prime minister brought the crisis to the United Nations, which sent a peacekeeping force, the Opération des Nations Unies au Congo (ONUC), in 1960–1964, with Norwegian participation among others. With ONUC in place, the Belgian forces were eventually sent home. The UN operation became one of the most comprehensive and demanding ever, and the UN force was involved in direct combat operations, including against some of the many mercenaries fighting on both sides.

The conflict took place at the height of the Cold War, and Lumumba received support from the Soviet Union after the United States would not assist him in securing Congo's unity. The conflict was first and foremost a struggle between those who wanted real independence for the Congo, represented especially by Lumumba, and those who wanted a neo-colonial solution, such as Moïse Tshombe, under which the West would still control the raw materials. As a result, Belgium and the United States, among others, opposed Lumumba, who was considered a dangerous demagogue and portrayed as a Communist.

President Kasavubu deposed Prime Minister Lumumba over his handling of the Katanga crisis, but the National Assembly refused to approve the dismissal. It supported Lumumba, who then deposed the president. In this political crisis, the military seized power on September 14, 1960, under the command of Colonel Joseph-Désiré Mobutu. All political activity was suspended and Mobutu set up a commission to govern the country. At the same time, some of Lumumba's ministers formed a new government led by Antoine Gizenga in Stanleyville (Kisangani) and claimed to be the only legal one. Lumumba was arrested and beaten several times until he was murdered on January 17, 1961.
The killing was carried out in Katanga on the basis of Mobutu's decision and Tshombe's order, by Katangan soldiers and Belgian police officers in Katanga's service. The US intelligence agency, the CIA, had also long planned to kill the prime minister. The bodies of Lumumba and two other nationalists who were killed with him were dismembered and dissolved in acid to erase all traces. It was only in 2002 that Belgium acknowledged its involvement in the killing and assumed a moral responsibility for the murder of the elected Congolese prime minister.

The killing of Lumumba was a setback for both Congolese nationalism and continental pan-Africanism, and the political development in Congo after his death consisted of decades of internal strife, civil war and foreign military interference. Lumumba's supporters were persecuted, and several revolts against Mobutu and Kasavubu were fought; several thousand people were killed in the first half of the 1960s. In 1966, Lumumba was proclaimed a national hero, while Mobutu went on to launch his policy of national "authenticity".

In 1961, UN Secretary-General Dag Hammarskjöld died when his plane crashed over Northern Rhodesia while traveling in an attempt to mediate in the conflict.

Lumumba's political goal of uniting the vast country, much of which is inaccessible, into a unified state was not achieved. The absence of a strong central government, a clear policy and a democratic tradition in the first years after independence has been a major cause of Congo's problems since. Another cause is the country's wealth, which has led to widespread corruption and foreign interference.

The Congolese parliament met again in 1961 and appointed a new government led by Cyrille Adoula, still with Kasavubu as president, and with Mobutu as the strong man. At two conferences in 1961, it was decided to establish a confederation of Congolese states, without this being implemented. Under pressure from the central government, Tshombe expressed his willingness to give up Katanga's independence, but did not give up his position until he left for exile in 1963, when Katanga also ceased to exist as a state.

The following year, Tshombe was invited to form a government of national unity, and faced military opposition to the government in several parts of the country. In 1964, he summoned Belgian forces to support the fight against the uprising led by the Marxist Conseil National de Libération (CNL), the so-called Simba rebellion, established in 1963 by, among others, the later president Laurent-Désiré Kabila. The government also engaged a large number of foreign mercenaries. The rebels were supported by countries such as Algeria and Egypt, which provided weapons and advisers, and Ugandan troops entered eastern Congo to support them. This scenario was repeated three decades later, during the next civil war in Congo, when an uprising in the east led Uganda, among others, to intervene, and the authorities hired mercenaries to strengthen a weak government army.

A new election in 1965 was won by Tshombe's Convention Nationale Congolaise (CONACO). The opposition gathered in the Front Démocratique Congolais (FDC). Supported by the FDC, President Kasavubu went against the majority in parliament and appointed Évariste Kimba as prime minister. This deadlocked parliamentary situation ended with the military, led by General Mobutu, seizing power in November 1965. Insurgencies in the Kwilu, Kivu and Katanga regions were defeated.
Rebellions in Kisangani in 1966 and 1967 were initiated by followers of Tshombe, who in 1967 was sentenced to death in absentia for treason. After the coup, Mobutu himself took over as head of state and soon banned all party political activity. In 1967, he formed his own state party, the Mouvement Populaire de la Révolution (MPR). A new constitution from 1967 gave the president executive power. In Kivu and Katanga there were new revolts. A force from Angola entered Katanga in 1967 in an attempt to reinstate Tshombe. One of the CNL's rebel leaders, Pierre Mulele, was executed in 1968. Tshombe died of natural causes in 1969.
Smoke from a summer wildfire is more than just an eye-stinging nuisance. It is a poison to the lungs and hearts of the people who breathe it in, and a dense blanket that hampers firefighting operations.

There's an atmospheric feedback loop, says University of Utah atmospheric scientist Adam Kochanski, that can lock smoke in valleys in much the same way that temperature inversions lock the smog and gunk in the Salt Lake Valley each winter. But understanding this loop, Kochanski says, can help scientists predict how smoke will impact air quality in valleys, hopefully helping both residents and firefighters alike.

Kochanski and colleagues' study appears in the Journal of Geophysical Research-Atmospheres. The work was funded by grants from the U.S. Department of Agriculture and from NASA.

Watch a video of a smoke simulation here.

In 2015, firefighters battling wildfires in northern California noticed that smoke accumulating in valleys wasn't going away. The smoke got so bad that air support had to cancel flights, slowing down the firefighting effort. "That raised the question," Kochanski says. "Why is that? Why all of a sudden, is the smoke so persistent and what keeps this very thick layer of smoke in those valleys for such a long period of time?"

Kochanski and his colleagues, including researchers from the Desert Research Institute, the University of Colorado, Boulder and the Harvard & Smithsonian Center for Astrophysics, set out to answer those questions. Their first clue came from measurements of temperature both below and above the smoke layer. The air, they found, was warmer above the smoke than below.

Residents of the Salt Lake Valley and other bowl-shaped valleys will recognize the pattern of warmer air above colder valley air as an inversion, a reversal of the normal cooling of air with altitude. The atmospheric chemistry of Salt Lake's wintertime inversions is a little different from that of a fire inversion, but the physics are the same: warm air rises, cold air sinks, and the inversion puts a warm lid over a valley, trapping all the valley air below. The main difference is that in the case of smoke inversions, the smoke layer makes the inversion even stronger.

The feedback loop

Kochanski and his colleagues modeled the dynamics of air and smoke in valley terrain and found a feedback loop that reinforces the atmospheric inversion conditions. "A key to this situation is the moment when the smoke appears in the atmosphere and it just does enough to block incoming solar radiation," Kochanski says. With the sun's energy blocked, the air at the ground begins to cool. "The near-surface cooling limits the mixing, enhances local inversions and leads to even higher smoke concentrations, that in turn block more solar radiation and make smoke even more persistent."

The cooling also weakens winds in the smoke-filled valley and stabilizes the atmosphere, impeding wind from breaking through and clearing out the stagnant air.

Kochanski says there are three ways out of a fire inversion. One is the settling of smoke once the fire is extinguished, allowing more light and heat to reach the ground. Another is a sufficiently large wind that can push through and mix the layers of air. A third is precipitation, as falling rain can scrub the air clean of aerosols.
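The day-by-day logic of the loop is easy to sketch in code. Below is a deliberately crude toy model, not the coupled fire-atmosphere model used in the study; the emission and ventilation coefficients are invented purely to show the qualitative behavior (smoke blocks sunlight, weakening the mixing that would otherwise vent the valley):

```python
# Toy sketch of the smoke-inversion feedback loop. All coefficients are
# invented for illustration; this is not the study's model.

def simulate_smoke(days=10, emission=1.0, base_ventilation=0.5, alpha=0.4):
    """Each day the fire adds smoke; denser smoke blocks more sunlight,
    cools the valley floor, and weakens the ventilation that removes it."""
    smoke = 0.0
    history = []
    for _ in range(days):
        smoke += emission                                     # fire keeps burning
        ventilation = base_ventilation / (1 + alpha * smoke)  # weaker mixing
        smoke *= (1 - ventilation)                            # some smoke vents out
        history.append(smoke)
    return history

for day, s in enumerate(simulate_smoke(), start=1):
    print(f"day {day}: smoke index {s:.2f}")
```

Run it and the smoke index climbs day after day, mirroring the persistence the firefighters saw in 2015: the denser the smoke, the less of it the valley can shed.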
"If it's business as usual and day by day you have nice sunny weather without any wind or precipitation events, well, this positive feedback loop leads to more smoke in the valleys than could be expected just based on the fire behavior alone," Kochanski says.

A new kind of forecast

Understanding the conditions that create the feedback loop helps researchers predict how and when it might form or dissipate. Fire inversions will still remain a problem for firefighters, but Kochanski says that researchers will now be able to put together more accurate smoke forecasts. "We can better tell where, how dense and how persistent the smoke is going to be," he says. "That's something that wasn't available before." Weather models will also be able to forecast air quality effects from smoke in ways they couldn't before, he adds. The results of this study are already being integrated into the National Predictive Services Program. "When I'm talking about applications," Kochanski says, "it's not 10 years from now. It's something that we will start working on within the next couple of months."

Find the full study here.

Banner image: California Air National guardsmen from the 129th Rescue Wing perform precision water bucket drops Aug. 26, 2013, in support of the Rim Fire suppression operation at Tuolumne County near Yosemite, California. Credit: Staff Sgt. Ed Drew/USAF (https://commons.wikimedia.org/wiki/File:California_Wildfires_(9627512379).jpg)
Feb. 4, 2019 – Climate change is causing significant changes to phytoplankton in the world's oceans, and a new MIT study finds that over the coming decades these changes will affect the ocean's color, intensifying its blue regions and its green ones. Satellites should detect these changes in hue, providing early warning of wide-scale changes to marine ecosystems.

Writing in Nature Communications, researchers report that they have developed a global model that simulates the growth and interaction of different species of phytoplankton, or algae, and how the mix of species in various locations will change as temperatures rise around the world. The researchers also simulated the way phytoplankton absorb and reflect light, and how the ocean's color changes as global warming affects the makeup of phytoplankton communities.

The researchers ran the model through the end of the 21st century and found that, by the year 2100, more than 50 percent of the world's oceans will shift in color due to climate change. The study suggests that blue regions, such as the subtropics, will become even more blue, reflecting even less phytoplankton — and life in general — in those waters, compared with today. Some regions that are greener today, such as near the poles, may turn even deeper green, as warmer temperatures brew up larger blooms of more diverse phytoplankton.

"The model suggests the changes won't appear huge to the naked eye, and the ocean will still look like it has blue regions in the subtropics and greener regions near the equator and poles," says lead author Stephanie Dutkiewicz, a principal research scientist at MIT's Department of Earth, Atmospheric, and Planetary Sciences and the Joint Program on the Science and Policy of Global Change. "That basic pattern will still be there. But it'll be enough different that it will affect the rest of the food web that phytoplankton supports."

Dutkiewicz's co-authors include Oliver Jahn of MIT, Anna Hickman of the University of Southampton, Stephanie Henson of the National Oceanography Centre Southampton, Claudie Beaulieu of the University of California at Santa Cruz, and Erwan Monier of the University of California at Davis.

The ocean's color depends on how sunlight interacts with whatever is in the water. Water molecules alone absorb almost all sunlight except for the blue part of the spectrum, which is reflected back out. Hence, relatively barren open-ocean regions appear deep blue from space. If there are any organisms in the ocean, they can absorb and reflect different wavelengths of light, depending on their individual properties. Phytoplankton, for instance, contain chlorophyll, a pigment which absorbs mostly the blue portions of sunlight to produce carbon for photosynthesis, and less of the green portions. As a result, more green light is reflected back out of the ocean, giving algae-rich regions a greenish hue.

Since the late 1990s, satellites have taken continuous measurements of the ocean's color. Scientists have used these measurements to derive the amount of chlorophyll, and by extension, phytoplankton, in a given ocean region. But Dutkiewicz says chlorophyll doesn't necessarily reflect a sensitive signal of climate change. Any significant swings in chlorophyll could very well be due to global warming, but they could also be due to "natural variability" — normal, periodic upticks in chlorophyll due to natural, weather-related phenomena.
"An El Niño or La Niña event will throw up a very large change in chlorophyll because it's changing the amount of nutrients that are coming into the system," Dutkiewicz says. "Because of these big, natural changes that happen every few years, it's hard to see if things are changing due to climate change, if you're just looking at chlorophyll."

Modeling ocean light

Instead of looking to derived estimates of chlorophyll, the team wondered whether they could see a clear signal of climate change's effect on phytoplankton by looking at satellite measurements of reflected light alone. The group tweaked a computer model that it has used in the past to predict phytoplankton changes with rising temperatures and ocean acidification. This model takes information about phytoplankton, such as what they consume and how they grow, and incorporates this information into a physical model that simulates the ocean's currents and mixing.

This time around, the researchers added a new element to the model, one that has not been included in other ocean modeling techniques: the ability to estimate the specific wavelengths of light that are absorbed and reflected by the ocean, depending on the amount and type of organisms in a given region.

"Sunlight will come into the ocean, and anything that's in the ocean will absorb it, like chlorophyll," Dutkiewicz says. "Other things will absorb or scatter it, like something with a hard shell. So it's a complicated process, how light is reflected back out of the ocean to give it its color."

When the group compared results of their model to actual measurements of reflected light that satellites had taken in the past, they found the two agreed well enough that the model could be used to predict the ocean's color as environmental conditions change in the future. "The nice thing about this model is, we can use it as a laboratory, a place where we can experiment, to see how our planet is going to change," Dutkiewicz says.

A signal in blues and greens

As the researchers cranked up global temperatures in the model, by up to 3 degrees Celsius by 2100 — what most scientists predict will occur under a business-as-usual scenario of relatively no action to reduce greenhouse gases — they found that wavelengths of light in the blue/green waveband responded the fastest. What's more, Dutkiewicz observed that this blue/green waveband showed a very clear signal, or shift, due specifically to climate change, taking place much earlier than what scientists have previously found when they looked to chlorophyll, which they projected would exhibit a climate-driven change by 2055.

"Chlorophyll is changing, but you can't really see it because of its incredible natural variability," Dutkiewicz says. "But you can see a significant, climate-related shift in some of these wavebands, in the signal being sent out to the satellites. So that's where we should be looking in satellite measurements, for a real signal of change."

According to their model, climate change is already changing the makeup of phytoplankton, and by extension, the color of the oceans. By the end of the century, our blue planet may look visibly altered. "There will be a noticeable difference in the color of 50 percent of the ocean by the end of the 21st century," Dutkiewicz says. "It could be potentially quite serious. Different types of phytoplankton absorb light differently, and if climate change shifts one community of phytoplankton to another, that will also change the types of food webs they can support."
This research was supported, in part, by NASA and the Department of Energy.
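To make the blue-versus-green mechanism concrete, here is a toy sketch of the idea, not the MIT model itself; the absorption coefficients are invented for illustration only:

```python
import math

# Toy sketch: chlorophyll absorbs blue light strongly and green light
# weakly, so more chlorophyll lowers the blue:green balance of the light
# reflected back out of the water. Coefficients are invented.

def blue_green_ratio(chlorophyll_mg_m3: float) -> float:
    blue = math.exp(-0.8 * chlorophyll_mg_m3)   # strong absorption of blue
    green = math.exp(-0.1 * chlorophyll_mg_m3)  # weak absorption of green
    return blue / green

for chl in (0.05, 0.5, 5.0):  # subtropical gyre -> mid-ocean -> coastal bloom
    print(f"chl = {chl:>4} mg/m^3 -> blue:green ratio {blue_green_ratio(chl):.2f}")
```

A barren subtropical gyre (low chlorophyll) keeps a high blue:green ratio and looks deep blue; a coastal bloom drives the ratio down and the water looks green. The article's point is that climate-driven shifts show up in such wavebands sooner than in the noisier chlorophyll estimates themselves.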
The widespread COVID-19 pandemic has changed the course of our daily living and is now the biggest challenge the world is facing. You're probably feeling overwhelmed by the situation, but take heart: there are preventive measures to beat the virus. Before learning the basic prevention, here are some facts about COVID-19:

- It is a respiratory illness with symptoms that include fever, dry cough, fatigue, sore throat, and shortness of breath, and it is passed from person to person.
- It is caused by a new type of coronavirus, SARS-CoV-2, a relative of the original SARS virus (Severe Acute Respiratory Syndrome) that broke out in 2002-2003.
- The disease originated in Wuhan, China in late December 2019 and has since spread throughout the world to become a pandemic.

Interesting tips on how to beat the virus

Ultraviolet rays can disinfect the virus. According to an infectious disease expert, Daniel Kuritzkes, "Direct sunlight can help rapidly diminish infectivity of viruses on surfaces." Another group of experts has recently said that "The warmer [the place] is, the less contagious the coronavirus becomes."

Preventive Measures for COVID-19

Despite its threat, there are preventive measures we can take to combat coronavirus. Here are the basics:

Practice good hygiene, eat healthy food, and have a good rest

- Boosting your immune system is the best defense you have against the virus.
- Use alcohol-based sanitizers (preferably 70% isopropyl alcohol).
- Cover your mouth and nose with a tissue when coughing or sneezing, and dispose of used tissues immediately.
- Drink at least 12-15 glasses of water to keep yourself hydrated.
- Eat fresh fruits and vegetables; they contain antioxidants and prebiotics that help keep your immune system strong.
- Sleep for 7-8 hours in total darkness; the hormone melatonin increases our immunity.

Clean and disinfect frequently-used surfaces

- Regularly clean shared, frequently-touched surfaces at home such as doorknobs, tables, light switches, desks, faucets, toilets, handrails, remote controls (yes, since you are at home with everyone else now!), and the like.
- Make sure to welcome fresh air while at home. Open your windows and adjust air conditioning usage; proper ventilation is important inside the house.
- Sanitize your mobile phones and other devices.
- Make sure laundry items are in their proper places. If possible, do not shake dirty laundry, to minimize the possibility of dispersing the virus through the air (especially if you have been outside for a while).

Observe social distancing

As much as possible, we don't want to go out of our houses, to avoid contact with other people. But in the event that we need to (like grocery shopping), we must be ever-aware of social distancing measures.

- Keep a 1-meter distance from other people when you're outside.
- Use a surgical mask (or any mask you prefer) when outside.
- Avoid handshaking and other physical greetings.
- As much as possible, avoid physical greetings with your loved ones once you arrive home. Wash your hands, arms, and face first with soap and water the moment you arrive. (I'm sure they'll understand.)

Stay at home

While our frontliners are doing their best to treat and take care of COVID-19 patients, the best thing you can do to help curb the spread of the infectious disease is to stay at home. Never attempt to disobey the rules under the quarantine policy issued by the national and local government.
Make the most of your time by being productive: spend time with your family and do your work from the comfort of your home. As we continue to combat COVID-19, it's important to keep a "this will be gone soon" attitude while following the necessary preventive measures. Before you know it, the pandemic will be over. Stay safe!
Faraday's Second Law

It states that, "When the same quantity of electricity is passed through different electrolytes, the masses of different ions liberated at the electrodes are directly proportional to their chemical equivalents (equivalent weights)," i.e.,

$$\frac{W_1}{W_2} = \frac{E_1}{E_2}$$

Thus the electrochemical equivalent (Z) of an element is directly proportional to its equivalent weight (E), i.e.,

$$E \propto Z, \quad \text{or} \quad E = F \times Z$$

where F is the Faraday constant.

So, 1 faraday = 1 F = the electrical charge carried by one mole of electrons:

$$1\ F = \text{charge on an electron} \times \text{Avogadro's number} = (1.6 \times 10^{-19}\ \text{C}) \times (6.02 \times 10^{23}\ \text{mol}^{-1}) \approx 96500\ \text{C}$$

Number of faradays passed by a charge Q:

$$\text{Number of faradays} = \frac{Q\ (\text{in coulombs})}{96500}$$
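As a quick worked illustration (added here; the figures are standard textbook values rather than part of the original passage), pass the same charge Q = 9650 C (0.1 F) through silver nitrate and copper sulphate cells, with equivalent weights E(Ag) = 108 and E(Cu) = 63.5/2 = 31.75:

$$\frac{W_{\text{Ag}}}{W_{\text{Cu}}} = \frac{E_{\text{Ag}}}{E_{\text{Cu}}} = \frac{108}{31.75}$$

$$W_{\text{Ag}} = \frac{E_{\text{Ag}} \times Q}{96500} = \frac{108 \times 9650}{96500} = 10.8\ \text{g}, \qquad W_{\text{Cu}} = \frac{31.75 \times 9650}{96500} \approx 3.18\ \text{g}$$

The deposited masses stand in the ratio of the equivalent weights, exactly as the second law states.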
What are hybrid cars and how do they work? Do they play a role in achieving a green, sustainable future? Find the answers to help you decide whether it's worth replacing your fuel-powered car with a hybrid.

What is a hybrid car?

Hybrid cars are vehicles that use both a small internal combustion engine (ICE) and an electric motor. The combined propulsion from both mechanisms generates maximum power and increased fuel efficiency with minimum emissions. Put simply, hybrid cars combine a diesel or petrol engine with an electric motor and a battery.

How do hybrid cars work?

Hybrid cars generate an electrical current that's stored in a large battery and used to help drive the car. The electrical energy is produced by a regenerative braking system. Essentially, as the driver applies the brakes, the electric motor works as an electricity generator, sending electricity into the battery for future use. Hybrid cars can also conserve energy by turning off the petrol or diesel engine when the car is parked, idle at a traffic light, or in neutral and at a standstill. Hybrid cars can also maintain their speed when there's enough energy from the electric motor to drive the vehicle without support from the combustion engine.

What's the difference between standard hybrids and plug-in hybrids?

The advantage of hybrid cars is that they recharge their own batteries 'on the move'. As explained above, a hybrid car will even recharge itself when you hit the brakes. Standard hybrids (known as parallel and series hybrids) do not need to be powered at charging points, which are mainly designed for electric vehicles (EVs). Standard hybrid cars need fuel (i.e. petrol or diesel), even if operating in an electric-only mode.

Plug-in hybrid cars (PHEVs), which have a larger battery than standard hybrids, can be charged at EV charging points. Charging a PHEV extends the distance its electric motor can drive without having to start the ICE, which substantially increases the vehicle's fuel efficiency.

Neither standard nor plug-in hybrid cars require you to plug them into an EV charging point; with a plug-in hybrid, however, you have the option to do so. Drivers of PHEVs should also find it easier to locate a local charging point, as broadband providers such as Virgin Media will be installing them at their broadband street cabinets, normally used for the company's cable broadband network.

"The time is right for electric cars - in fact the time is critical." - Carlos Ghosn

Do hybrid cars hold the key to a green future?

One of the greatest advantages of hybrid cars over gasoline-powered ones is that they consume less petrol or diesel and get better gas mileage. However, not all hybrids are equally eco-friendly. Standard hybrid cars still release greenhouse gases, since they burn regular gasoline; plug-in hybrids consume less petrol and emit fewer emissions, so they're more environmentally friendly. The use of plug-in hybrid cars thus marks a further step towards a net-zero carbon economy compared with standard hybrids. Plug-in models may well hold the key to a greener, renewable future.
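To put a rough number on the regenerative-braking idea described above, here is a back-of-envelope sketch; the vehicle mass and the 60% recovery efficiency are assumed, illustrative figures, not data from any particular model:

```python
# Back-of-envelope sketch: energy a hybrid might recover in one stop.
# Mass and recovery efficiency are assumed, illustrative values.

def recoverable_energy_wh(mass_kg: float, speed_kmh: float,
                          recovery_efficiency: float = 0.6) -> float:
    """Kinetic energy at speed_kmh times the assumed recovered fraction,
    converted from joules to watt-hours."""
    v = speed_kmh / 3.6                  # km/h -> m/s
    kinetic_j = 0.5 * mass_kg * v ** 2   # kinetic energy in joules
    return kinetic_j * recovery_efficiency / 3600.0

# A roughly 1,500 kg hybrid braking to a stop from 50 km/h:
print(f"{recoverable_energy_wh(1500, 50):.1f} Wh recovered per stop")  # ~24 Wh
```

A couple of dozen watt-hours per stop sounds small, but over hundreds of urban stops it adds up to energy a conventional car simply sheds as brake heat.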
As hybrid cars develop and gain popularity, we’ll see more infrastructure being built and even more manufacturers coming to the market with innovative and affordable hybrid models. Looking ahead, greater access to EV charging points will help hybrid drivers to improve the fuel efficiency of their cars, substantially benefiting the environment at the same time. 2018 saw the number of alternative fuel passenger cars registered reach 141,000 in the UK. Figures had increased fourfold since 2013, making it the greatest growth of any fuel type. If you'd like to learn more about the pros and cons of 100% electric vehicles, then be sure to read through our Electric Cars guide. Sources & further reading - “Hybrid cars and HOV lanes” - Transportation Research Part A: Policy and Practice - “Combining hybrid cars and synthetic fuels with electricity generation and carbon capture and storage” - Energy Policy - “Understanding the fuel savings potential from deploying hybrid cars in China” - Applied Energy - “About hybrid cars” - The Switch - “Electric Cars, The Pros and Cons” - Mossy Earth
What Are Solar Panels Made Of? Since 2000 the solar power industry has seen an average annual growth rate of 48%. This increase is because of incentive programs that allow homeowners to take advantage of alternative energy at a fraction of the cost. Solar panels have many benefits. They don't burn fossil fuels like more traditional energy sources, which means solar panels greatly reduce air pollution. But what are solar panels made of? In this article, we will learn more about solar panels: how they work and what they are made of. What Are Solar Panels Made Of? Solar panels are made up of solar cells created from silicon wafers. Silicon is a hard, brittle substance that is plentiful on the earth's surface. Have you ever noticed the tiny dark specks in the sand at the beach? Those specks are silica, a compound of silicon. Silicon is pretty amazing because it converts sunlight into electrical energy all by itself. Even more amazing, silicon crystals can be grown in a lab, making the material widely available. The thin slices cut from these lab-grown crystals are called silicon wafers, and that's what solar panels are made out of. However, it's important to remember that silicon-wafer solar cells can't provide the energy you need to power your home all by themselves. The cells must be mounted in a frame, encased in glass, and wired together so that the electrons they release can flow out and supply the necessary power for your home or business. Here are all the parts used to make a basic solar panel: - Solar cells made from silicon - Aluminum frame - Glass casing - Standard 12-volt wire - Bus wire What Is The Best Material To Use? Silicon comes in several different cell structures: monocrystalline (single-crystal), polycrystalline, and amorphous silicon, the last of which is mostly used for a type of panel known as "thin-film" solar panels. So, which type of silicon is best for making solar panels? Monocrystalline solar panels are considered the most efficient because they are made from the highest-grade silicon. They are the highest-quality solar panels on the US market today. What Process Is Used To Make Panels? The materials used in manufacturing solar cells for clean renewable energy are only one part of the solar panel. If they are the ingredients, the solar panel manufacturing process makes up the detailed instructions for the recipe. The solar panel manufacturing process includes: - Cutting silicon wafers and attaching them to a solar panel - Connecting the electrical systems - Applying an anti-reflective coating to each cell - Creating a metal and glass casing to house the system How Do Panels Work? We have learned what solar panels are made of and how they are manufactured, but how exactly do they work? A solar panel works by allowing photons (light particles) to free electrons from the bonds of atoms, generating flowing electricity (a rough output estimate appears at the end of this article). Due to the structure of silicon, its outer electron shell is only "half full", so it needs to "fill up" by sharing electrons with nearby atoms. This process creates a crystalline structure. Scientists boost the electricity silicon creates by adding impurities that contribute free charge carriers, a process called doping. The resulting silicon cells are used to make the surface of solar panels. Below that resides an opposite form of silicon with one less electron per atom. The setup creates an imbalance, and when sunlight hits the cell, the electrons start moving between the layers. This continuous movement is what generates electricity. Can You Make Your Own DIY Solar Panels? You can build your own solar power system!
Some experienced individuals who often complete DIY projects can even build their own solar panels. However, it’s important to remember that while you might be able to DIY a small amount of solar power for your home, it won’t be as efficient or produce as much power as a solar power system installed by professionals. For those wishing to install DIY solar power, don’t forget to consider: - Overall cost Hopefully, this article has helped explain what panels are, what they are made of, and how they can be used to reduce air pollution and provide meaningful renewable energy sources for the future. If you’re considering installing solar power for your home or business, you may want to speak with a professional installer. Asking questions, getting professional quotes, and weighing the pros and cons will help you to determine if solar is the right choice for you. It will also allow you to assess whether you might be able to DIY a small amount of solar power for yourself or if you should call in the professionals for more efficient power. If you want to learn more about solar financing options, check out our blog!
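As a back-of-the-envelope companion to the "How Do Panels Work?" section above, the short Python sketch below estimates a panel's output. The panel area, irradiance, efficiency, and sun-hours are assumed values for illustration, not specifications of any real product.

```python
# Back-of-the-envelope solar panel output estimate.
# Area, irradiance, efficiency, and sun-hours are assumed values.

panel_area_m2 = 1.7        # typical residential panel, assumed
irradiance_w_m2 = 1000     # standard "peak sun" irradiance at the surface
efficiency = 0.20          # ~20%, plausible for monocrystalline cells

power_w = panel_area_m2 * irradiance_w_m2 * efficiency
peak_sun_hours = 4.5       # assumed daily average for a sunny site
daily_energy_kwh = power_w * peak_sun_hours / 1000

print(f"Peak output: {power_w:.0f} W")
print(f"Energy per day: ~{daily_energy_kwh:.1f} kWh")
```

Multiplying the daily figure by the number of panels gives a first idea of system size, which is exactly the kind of estimate worth checking with a professional installer.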
It's hard to keep track of a business as it grows. It can get even more confusing when you try to understand how the structure has changed over time through, for example, expansion or administrative changes. Process mapping helps management see the changes by way of a diagram. In order for the diagram to be accurate, though, certain rules need to be followed in building a process map. Define the Chart Symbols Every process map has a set of symbols that represent different tasks. Before you begin building a process map, these symbols need to be defined. Ovals, for example, show input at the start of the process or output at the end of the process. Boxes or rectangles show tasks or activities that take place during the process. Arrows show the direction of flow, and diamonds show points in the process where questions are asked or a decision is needed. Define the Process Determine where the process begins and where it ends. Process mapping is usually done when you're changing the way people do their work, for instance adding an automated process, when your company is merging with another company, when you're introducing a new product line and need to understand the impact it will have on your staff, tasks and technologies, or when you're trying to cut costs and improve efficiency. Determine what you're trying to do, when it begins and when it ends, and name your process map accordingly. List the Steps The steps can show just enough information or an abundance of detail. Whichever path you choose, keep the wording simple. Write each step in "verb-object" form, such as "plan action." Create a Sequence Using Post-It notes or index cards, map the steps from left to right in the form of a diagram. Don't worry about drawing arrows or figures just yet. That happens once you have a visual idea of what your map will look like. Draw the Diagram Draw the symbols based on the rules you already outlined for each shape: ovals represent input and output, for example. After the symbols are in place, draw the arrows. If a shape calls for more than one arrow, you may need to place a decision diamond there so that when you reach that step, you'll know there are alternatives that need to be considered. In drawing the chart, use a systems model approach, where every step is linked to the next step or outcome by an arrow. Make sure the model is complete and includes pertinent information.
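If you would rather draw the diagram programmatically than with Post-It notes, the sketch below is one way to do it in Python with the graphviz package (an assumption; any diagramming tool will do). It follows the symbol rules defined above: ovals for input and output, boxes for tasks, diamonds for decisions. The order-handling steps are invented purely as an example, and rendering requires the Graphviz binaries to be installed.

```python
# A minimal process map drawn with the Python "graphviz" package,
# following the symbol rules above: ovals = input/output,
# boxes = tasks, diamonds = decisions. The steps are invented examples.
from graphviz import Digraph

dot = Digraph(comment="Order handling process")
dot.attr(rankdir="LR")                       # map the steps left to right

dot.node("start", "Receive order", shape="oval")
dot.node("check", "Check stock", shape="box")
dot.node("decide", "In stock?", shape="diamond")
dot.node("ship", "Ship order", shape="box")
dot.node("reorder", "Reorder item", shape="box")
dot.node("end", "Order closed", shape="oval")

dot.edge("start", "check")
dot.edge("check", "decide")
dot.edge("decide", "ship", label="yes")
dot.edge("decide", "reorder", label="no")
dot.edge("reorder", "check")
dot.edge("ship", "end")

dot.render("process_map", format="png")      # writes process_map.png
```

Sketching on cards first and only then encoding the map, as the article suggests, tends to surface missing decision points before they are baked into the diagram.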
Highlighting Our History: American Revolution Read-alouds PLUS for the Common Core As K-5 teachers go about the work of making instructional shifts in their teaching to meet the demands of the Common Core and new initiatives around STEM (STEAM) education, it can feel like “something has to give.” Many of you report that social studies has taken a back seat, and fear that students may lose a sense of our history and heritage. In this installment, TeachersFirst will show you how to leverage the power of daily read-alouds to practice some Common Core Standards for the English Language Arts while infusing some social studies content, specifically the Revolutionary period. Part of our job as teachers is to share books that students would not necessarily read on their own. The books featured on this list (both literature and informational text) will help to expand students' knowledge of U.S. history and provide teachers with some springboards for working with the Writing and Speaking-and-Listening standards of the Common Core. Many will appeal to students because they help to answer the question “What role did children play in the war?” Or “How did the Revolution impact children and families?” Additional titles can be found on this TeachersFirst CurriConnects list for the Colonial American period.
- 1 How can I teach myself math? - 2 How do you do math correctly? - 3 How can I learn math quickly? - 4 How do you solve a math problem? - 5 Why is math so hard? - 6 How can I be brilliant in maths? - 7 Do you multiply first if no brackets? - 8 What are the four rules of maths? - 9 Is Bodmas wrong? - 10 How can I be smarter in math? - 11 How can I learn math fun? - 12 Why do students fail mathematics? - 13 How do you simplify? - 14 Who invented math? - 15 What are the 7 hardest math problems? How can I teach myself math? How to Teach Yourself Math - Step One: Start with an Explanation. The first step to learning any math is to get a first-pass explanation of the topic. - Step Two: Do Practice Problems. - Step Three: Know Why The Math Works. - Step Four: Play with the Math. - Step Five: Apply the Math Outside the Classroom. How do you do math correctly? The correct order of operations Always perform the operations inside parentheses first, then do exponents. After that, do all the multiplication and division from left to right, and lastly do all the addition and subtraction from left to right. A popular way of remembering the order is the acronym PEMDAS. How can I learn math quickly? How to Learn Math Fast - Engage With the Subject. - Start From the Basics. - Develop Number Sense Rather Than Memorizing. - Have a Goal in Mind. - Answering Practice Questions Is Crucial. - Keep Track of Math Vocabulary. - Tricks and Tips to Learn Math Easily. - Master Problem Solving. How do you solve a math problem? Here are four steps to help solve any math problems easily: - Read carefully, understand, and identify the type of problem. - Draw and review your problem. - Develop the plan to solve it. - Solve the problem. Why is math so hard? Math is a very abstract subject. For students, learning usually happens best when they can relate it to real life. As math becomes more advanced and challenging, that can be difficult to do. As a result, many students find themselves needing to work harder and practice longer to understand more abstract math concepts. How can I be brilliant in maths? 10 Tips for Math Success - Do all of the homework. Don't ever think of homework as a choice. - Fight not to miss class. - Find a friend to be your study partner. - Establish a good relationship with the teacher. - Analyze and understand every mistake. - Get help fast. - Don't swallow your questions. - Basic skills are essential. Do you multiply first if no brackets? Just follow the rules of BODMAS to get the correct answer. There are no brackets or orders, so start with division and multiplication. 7 ÷ 7 = 1 and 7 × 7 = 49. What are the four rules of maths? The four basic mathematical rules are addition, subtraction, multiplication, and division. Is BODMAS wrong? Not quite, but applied carelessly it gives the wrong answer. Its letters stand for Brackets, Order (meaning powers), Division, Multiplication, Addition, Subtraction. Take an expression with no brackets, powers, division, or multiplication, for example 10 − 4 + 2. If we follow BODMAS literally and do the addition followed by the subtraction, we get 10 − 6 = 4. This is erroneous: addition and subtraction have equal priority and are worked left to right, so the correct answer is (10 − 4) + 2 = 8 (see the short Python check at the end of this article). How can I be smarter in math? Study Smarter Reading the lecture material before each day's class, testing yourself with problems in the textbook and trying to understand why you missed problems on homework or tests can all improve your math study skills, leading to higher grades on future assignments. How can I learn math fun? 15 Fun Ways to Practice Math - Roll the dice. Dice can be used in so many different ways when it comes to math. - Play math bingo.
- Find fun ways to teach multiplication. - Turn regular board games into math games. - Play War. - Go online. - Make your own deck of cards. - Make a recipe. Why do students fail mathematics? Self-Doubt - Due to a lack of understanding, students often experience self-doubt when they are solving math problems. This fear is one reason why some students fail in mathematics. Failure to Pay Proper Attention - During math lectures, some students are easily distracted and fail to pay attention. How do you simplify? To simplify any algebraic expression, the following are the basic rules and steps: - Remove any grouping symbol such as brackets and parentheses by multiplying factors. - Use the exponent rule to remove grouping if the terms contain exponents. - Combine the like terms by addition or subtraction. - Combine the constants. Who invented math? Beginning in the 6th century BC with the Pythagoreans, the Ancient Greeks began a systematic study of mathematics as a subject in its own right. Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem, and proof. What are the 7 hardest math problems? The problems are the Birch and Swinnerton-Dyer conjecture, Hodge conjecture, Navier–Stokes existence and smoothness, P versus NP problem, Poincaré conjecture, Riemann hypothesis, and Yang–Mills existence and mass gap.
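As promised in the BODMAS answer above, here is a quick Python check of the order-of-operations rules. Python's arithmetic follows the same conventions, so it is a handy way to verify an answer.

```python
# Python follows the standard order of operations, so it can be used
# to check BODMAS/PEMDAS reasoning.
print(2 + 3 * 4)     # 14: multiplication before addition
print((2 + 3) * 4)   # 20: brackets first
print(7 / 7 * 7)     # 7.0: division and multiplication, left to right
print(10 - 4 + 2)    # 8: subtraction and addition, left to right
                     # (doing the addition first would wrongly give 4)
```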
If you've painted a ceiling, you know the annoyance of drips: an initially smooth coat of paint collects in spots that dribble onto the floor. The drips come from a gravity-driven instability in the paint, which physicists have modeled for various liquids and surface tilts. Now Ruben Tomlin of Imperial College London and colleagues have added an electric field into the mix. Their mathematical modeling and simulations show that a strong enough field could stop the drips, an effect that might be used for thin-film cooling or to make uniform coatings in precision manufacturing. The team's starting point is a flat slab coated on its underside with a viscous dielectric liquid like oil. In 2015, researchers studied such a setup by tilting the slab and letting the liquid run, finding they could predict the tilt angles at which drips would occur. The Imperial team approaches this problem anew. They derived a set of equations for the film's surface contour that incorporate not only surface tension and gravity, as in the 2015 study, but also the effect of an electric field applied parallel to the substrate. This field introduces stresses at the film's surface, which have a stabilizing effect akin to that of surface tension. Based on their equations, they determined that drips appear roughly when the liquid surface transitions from being "convectively" unstable (in which a disturbance in the liquid ripples away) to being "absolutely" unstable (in which the disturbance remains localized). The researchers predicted that a relatively high field strength would be needed to prevent drips for a 2.5-mm-thick film tilted 60° below horizontal. But they think lower values might suffice in other cases that are of technological interest. This research is published in Physical Review Fluids. Jessica Thomas is the Editor of Physics.
Source: US State Dept. History archive Youtube documentary: ‘Star Wars’ 1981-1988 THE STRATEGIC DEFENSE INITIATIVE (SDI): STAR WARS The Strategic Defense Initiative (SDI), also known as Star Wars, was a program first initiated on March 23, 1983 under President Ronald Reagan. The intent of this program was to develop a sophisticated anti-ballistic missile system in order to prevent missile attacks from other countries, specifically the Soviet Union. With the tension of the Cold War looming overhead, the Strategic Defense Initiative was the United States' response to possible nuclear attacks from afar. Although the program seemed to have no negative consequences, concerns were raised that it contravened the Anti-Ballistic Missile (ABM) Treaty that had emerged from the Strategic Arms Limitation Talks years before. For this reason, in conjunction with budgetary constraints, the Strategic Defense Initiative was ultimately set aside. The nickname "Star Wars" may have been attached to the program for some of its abstract and far-fetched ideas, many of which included lasers. Furthermore, the previously released science fiction movie titled "Star Wars" made it easy for the public to associate the program with new and creative technologies. "The weapons required included space- and ground-based nuclear X-ray lasers, subatomic particle beams, and computer-guided projectiles fired by electromagnetic rail guns—all under the central control of a supercomputer system." By using these systems, the United States planned to intercept intercontinental ballistic missiles while they still flew high above the Earth, minimizing their effects. However, the power requirements for these types of weapons were so vast that nuclear power was the method of choice. Thus, as the reality of creating numerous nuclear plants diminished, so did the ambitious designs. By the end of SDI, the weapons design group's primary focus was "land based kinetic energy weapons." These weapons were essentially guided missile projectiles. By the end of the Strategic Defense Initiative, thirty billion dollars had been invested in the program and no laser and mirror system was ever used, on land or in space. The Strategic Defense Initiative was eventually abandoned, and after a few years it was nothing more than a short chapter in history books. Despite its bold intentions and the hope of a revolutionary, nearly impenetrable defense system, the Strategic Defense Initiative was destined to fail from the start because of domestic and international political pressure combined with budgetary conflicts. Fear of Soviet retaliation over violations of the ABM Treaty from the first SALT talks was a primary factor in the international pressures, but United States legislators and congressmen also argued that creating a large anti-ballistic missile system would raise tensions between the two nations and could potentially spark a conflict. Because striking first in a nuclear war would be advantageous, both nations were already on edge, and so it was decided that any project which could jeopardize the balance would be discarded. The treaties established by the SALT talks remained in effect for nearly 30 years, and it was not until 2001 that President George W. Bush cited Article 15 of the ABM Treaty and withdrew America from it. By this point, SDI was long past, and relations with Russia, no longer the Soviet Union, were vastly improved.
1981–1988: The Presidency of Ronald W. Reagan The principal foreign policy framework for the Ronald Reagan administration rejected acquiescence in the Cold War status quo that had emerged during the Nixon, Ford, and Carter presidencies. Reagan objected to the implied moral equivalency of détente, insisting instead on the superiority of representative government, free-market capitalism, and freedom of conscience over what he viewed as godless, collectivist Communism. This more confrontational approach eventually came to be labeled the “Reagan Doctrine,” which advocated opposition to Communist-supported regimes wherever they existed, as well as a willingness to directly challenge the Soviet Union on a variety of fronts. Often referred to as “the great communicator,” Reagan utilized his rhetorical skills to frame the Cold War contest as a fundamental clash between good and evil. In his first inaugural address on January 20, 1981, the new President described the “enemies of freedom” as doomed to fail when faced with the “will and moral courage of free men and women.” Later that year, in an address at Notre Dame University, he stated that “the West won’t contain Communism, it will transcend Communism.” In 1983, he famously characterized the Soviet Union as an “evil empire.” In 1987, standing in front of the Berlin Wall constructed by the East German communist regime a quarter-century earlier to stem the flow of East Germans to the West, Reagan challenged the patron of the East German regime, Soviet leader Mikhail Gorbachev, to “tear down this wall.” Reagan’s many memorable public addresses served as an important tool to galvanize support for policies that often sparked considerable controversy. The Reagan administration advocated a wide array of initiatives that heightened confrontation with the U.S.S.R. and its allies. Reagan engineered a significant increase in U.S. defense spending designed to modernize existing forces and achieve technological advances the Soviet Union could not match. For example, the administration advocated building a much larger navy with enhanced technical capabilities, deployment of intermediate-range nuclear missiles in Europe, development of terrain-hugging cruise missiles difficult both to detect and to shoot down, and the Strategic Defense Initiative, which held out the prospect of seizing the ultimate “high ground”—outer space—by preventing intercontinental nuclear missile warheads from reaching their targets. During his two terms in office, Reagan successfully advocated increasing the Defense Department budget by 35%. The United States supported Afghan resistance organizations opposing the Soviet-backed regime in Kabul, anti-communist forces in Angola, and the Contras in Nicaragua. In 1983 American forces invaded Grenada to forestall installation of a Marxist regime. The administration also greatly increased spending on the U.S. Information Agency, especially Voice of America and Radio Free Europe/Radio Liberty, signaling the importance placed on challenging Soviet ideology throughout the world. Conversely, heightened tensions also led Reagan administration officials to attempt conciliatory measures designed to reduce the threat of direct confrontation, especially nuclear war. In 1982, Reagan broached the idea of substantially decreasing nuclear weapon stockpiles, which eventually resulted in the landmark Strategic Arms Reduction Treaty (START).
The emergence of Mikhail Gorbachev as the principal Soviet leader provided Reagan with a partner willing to engage in substantive negotiations. A series of summit meetings ensued, which reduced tensions and produced concrete results, such as the 1987 Intermediate-Range Nuclear Forces (INF) Treaty that eliminated the deployment of theater-level nuclear missiles in Europe. The Reagan administration dealt with many other foreign policy issues as well. Affairs in and around Southwest Asia continued to present multiple challenges, including a Lebanese civil war, strained relations with Iran, and increasing tensions after the bombing of Libya in retaliation for a state-sanctioned terrorist attack in Berlin. A (third) joint communiqué with the People’s Republic of China reiterated both parties’ commitment to improved relations, while not resolving all issues related to Taiwan. Trade disputes with Japan required constant attention. The United States had to determine what course to steer amid hostilities between allies during the Falklands Crisis. A variety of global issues also rose to prominence. After a decade of negotiation, the administration opted not to support Senate ratification of a comprehensive Law of the Sea treaty out of concerns about the potential internationalizing of seabed mining operations and potential impingement on U.S. Navy prerogatives. The United States increasingly promoted its enforcement preferences regarding drug trafficking, resulting in a landmark 1988 treaty. It became apparent that the AIDS crisis represented an international public health issue of consequence. With the increasing rapidity of international communications, enhanced global trade, and the rising worldwide movement of people, many issues previously considered “domestic” became subject to diplomatic negotiation. The Secretary of State occupied a prominent position in Reagan’s approach to creating and implementing foreign policy. Alexander Haig first occupied the chair, but an inability to exert as much influence as he desired caused him to resign after only 18 months in office. George Shultz served as Secretary for the remainder of Reagan’s two terms, and is generally regarded as an effective bureaucratic manager and influential policy leader. Caspar Weinberger served as Secretary of Defense for almost seven years and William Casey as Director of Central Intelligence for six years. Both played key roles in the foreign policy arena. The position of National Security Adviser was downgraded somewhat during the Reagan administration. The six individuals who occupied the position exercised relatively less influence than their predecessors in the Carter, Ford, or Nixon White House. Perhaps to an unusual degree, the accomplishments of this administration are viewed in the light of events that occurred after Ronald Reagan left office in January 1989. Within a year, the Berlin Wall fell, and by the end of 1991 the Soviet Union had collapsed, signaling the end of the Cold War. Historians and other analysts continue to debate the extent of their influence, but there is no question that Ronald Reagan and his foreign policy advisers played key roles in this remarkable turn of events. Source: BBC History The end of the Cold War In 1979, the Soviet Union invaded Afghanistan to try to prop up the communist government there, which was being attacked by Muslim Mujaheddin fighters. This immediately caused a rift with America, which boycotted the 1980 Olympics. In 1980, Ronald Reagan became president of the USA.
As a strong anti-communist, he called the Soviet Union the “evil empire” and increased spending on arms. The US military developed the neutron bomb, cruise missiles and a Star Wars defence system using space satellites. By 1985, the Soviet Union was in trouble. In 1985, Mikhail Gorbachev became leader of the USSR.
Grief is a natural response to a death or a loss, such as a divorce, an end to a relationship, or a move away from friends. Grief may produce physical, mental, social, or emotional reactions. Physical reactions can include change in appetite, headaches or stomach aches, sleeping problems, and illness. Emotional reactions can include anger, guilt, sadness, worry, and despair. Social reactions can include withdrawal from normal activities and the need to be near or apart from others. The grief process also depends on the situation surrounding the death or loss and the relationship with the person who died. Grief is normal, but when the symptoms are very intense or last a long time, professional help may be needed. Signs & Symptoms Children who are grieving may display many symptoms that impact their functioning. Some examples include: - Thumb sucking - Clinging to adults - Exaggerated fears - Temper tantrums - Physical symptoms (headaches, stomach aches, sleeping, and eating problems) - Mood swings - Feelings of helplessness and hopelessness - Increase in risk-taking and self-destructive behaviors - Anger, aggression, fighting, oppositional behavior - Withdrawal from adults and/or peers and activities they enjoyed prior to the loss - Depression, sadness - Lack of concentration To learn more about grief, read the Children’s Mental Health Matters Grief Fact Sheet.
Chromosomal crossover (or crossing over) is the process by which two chromosomes, paired up during prophase I of meiosis, exchange some portion of their DNA. Crossing over is specifically initiated in pachytene, before the synaptonemal complex develops, and is not completed until near the end of prophase I. Crossover usually occurs when matching regions on matching chromosomes break and then reconnect to the other chromosome. The result of this process is an exchange of genes, called genetic recombination. Crossing over was first described, in theory, by Thomas Hunt Morgan. The physical basis of crossing over was first demonstrated by Harriet Creighton and Barbara McClintock in 1931. Meiotic recombination initiates with double-stranded breaks that are introduced into the DNA by the Spo11 protein. One or more exonucleases then digest the 5’ ends generated by the double-stranded breaks to produce 3’ single-stranded DNA tails. The meiosis-specific recombinase Dmc1 and the general recombinase Rad51 coat the single-stranded DNA to form nucleoprotein filaments. The recombinases catalyze invasion of the opposite chromatid by the single-stranded DNA from one end of the break. Next, the 3’ end of the invading DNA primes DNA synthesis, causing displacement of the complementary strand, which subsequently anneals to the single-stranded DNA generated from the other end of the initial double-stranded break. The structure that results is a cross-strand exchange known as a Holliday junction. The Holliday junction is a tetrahedral structure which can be 'pulled' by other recombinases, moving it along the four-stranded structure. In most eukaryotes, a cell carries two copies of each gene, each referred to as an allele. Each parent passes on one allele to each offspring. An individual gamete inherits a complete haploid complement of alleles on chromosomes that are independently selected from each pair of chromatids lined up on the metaphase plate. Without recombination, all alleles for the genes linked together on the same chromosome would be inherited together. Meiotic recombination allows a more independent selection between the two alleles that occupy the positions of single genes, as recombination shuffles the allele content between homologous chromosomes. Recombination does not influence the statistical probability that another offspring will have the same combination. This theory of "independent assortment" of alleles is fundamental to genetic inheritance. However, there is an exception that requires further discussion. The frequency of recombination is actually not the same for all gene combinations. This leads to the notion of "genetic distance", which is a measure of recombination frequency averaged over a (suitably large) sample of pedigrees. Loosely speaking, one may say that this is because recombination is greatly influenced by the proximity of one gene to another. If two genes are located close together on a chromosome, the likelihood that a recombination event will separate the two genes is less than if they were farther apart. Genetic linkage describes the tendency of genes to be inherited together as a result of their location on the same chromosome.
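To make the notion of genetic distance concrete, here is a short illustrative Python calculation. The offspring counts are hypothetical; the method, converting a recombination frequency into centimorgans, is the standard first approximation for closely linked genes.

```python
# Estimating genetic distance between two linked genes from a test cross.
# The offspring counts below are hypothetical, for illustration only.

parental = 450 + 430        # offspring keeping the parental combinations
recombinant = 65 + 55       # offspring with recombined combinations
total = parental + recombinant

recombination_frequency = recombinant / total
map_distance_cm = recombination_frequency * 100   # 1% RF ~ 1 centimorgan
# (the approximation holds only for small distances)

print(f"Recombination frequency: {recombination_frequency:.3f}")
print(f"Estimated genetic distance: {map_distance_cm:.0f} cM")
```

Here 120 recombinants out of 1,000 offspring give a frequency of 0.12, or roughly 12 centimorgans: the genes are linked, but far enough apart that crossover separates them fairly often.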
Linkage disequilibrium describes a situation in which some combinations of genes or genetic markers occur more or less frequently in a population than would be expected from their distances apart. This concept is applied when searching for a gene that may cause a particular disease. This is done by comparing the occurrence of a specific DNA sequence with the appearance of a disease. When a high correlation between the two is found, it is likely that the gene responsible lies near that sequence. Although crossovers typically occur between homologous regions of matching chromosomes, similarities in sequence can result in mismatched alignments. This process is called unbalanced recombination. Unbalanced recombination is fairly rare compared to normal recombination, but severe problems can arise if a gamete containing unbalanced recombinants becomes part of a zygote. The result can be a local duplication of genes on one chromosome and a deletion of these on the other, a translocation of part of one chromosome onto a different one, or an inversion. This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Chromosomal_crossover". A list of authors is available in Wikipedia.
Ask your child to name things that are pushed or pulled. You could go on a forces walk around your home and garden looking for objects that are pulled or pushed. Click on the link on the left to see how we can use forces to change the way things move. Let your child play with a selection of toys and have them sort the toys into two groups according to whether they need to be pushed or pulled. Select a toy that has wheels, e.g. a toy car. Can your child describe what happens to the car's movement when the force used to push the car is increased or decreased? What about using a heavier car - does that need a bigger or smaller force to make it move? Can forces be used to stop the car moving? Ask them to show you how this is done. CHANGING SHAPE USING FORCES Use playdough or Blu-Tack and ask your child to push and pull the dough. Encourage the use of these terms. Ask them to describe what happens to the shape of the dough when a push or pull force is used.
Activities to practice, revise and reinforce key Maths skills taught in school. Develops knowledge and confidence in number, fractions, decimal numbers and percentages. Clear examples and explanations included. Keep track of progress and encourage self-evaluation using the progress charts. Answers included. Topics covered include: Understanding Fractions, Decimals and Percentages
Food packages often contain words and phrases like “low fat,” “reduced sodium,” “contains whole grain,” and more to make consumers think a food is healthy. These words and phrases provide tidbits of information about food, but the nutrition facts label is the best tool to use to identify and select healthy choices. Figure 1: Changes to the Nutrition Facts Label (Source: U.S. Food and Drug Administration, 2016). Nutrition facts labels are printed on food packages to help consumers make informed food choices. In May 2016, the nutrition facts label was updated for the first time since its debut in 1994. The changes made to the nutrition facts label are listed in Figure 1. Manufacturers must begin to adopt the new label by July 2018. On the new nutrition facts label, calories and serving size are listed in larger, bolded type. The serving size is important to note because it influences the number of calories and all other nutrient amounts listed on the label. In Figure 1, the serving size listed is 2/3 cup and there are 230 calories per serving (2/3 cup) of the food. If a person only ate half a serving of the food (1/3 cup), they would only get half the calories and other nutrients listed on the label. If a person ate two servings of the food (1⅓ cup), they would get twice the calories and other nutrients listed on the label. Some packages of food contain more than one serving but are commonly eaten in a single day or sitting (e.g. a 24-ounce bottle of soda or a pint of ice cream). To make it easier for consumers to see the nutrients contained in these foods, the updated nutrition facts label requires these packages to have “dual column” labels (Figure 2). These labels show the calories and nutrients in both a single serving and the entire package of the food. The number of calories listed on the label indicates the amount of energy provided by one serving of the food. Daily caloric requirements vary from person to person by age, gender and activity level. The percent daily value (% DV) listed on the nutrition facts label is based on a diet of 2,000 calories per day. % Daily Value The % daily value (% DV), listed to the right of each nutrient on the label, indicates how much each nutrient in the serving of food contributes to an individual’s daily nutrition needs. These numbers are based on a diet of 2,000 calories per day. Figure 2: Dual Column Labels (Source: U.S. Food and Drug Administration, 2016). Nutrients to Limit Most Americans eat more than enough fat, cholesterol, sodium and added sugar. People who consume too much of these nutrients may be at increased risk for certain chronic diseases like heart disease and some cancers. The percent daily value (% DV) indicates whether foods are high or low in these nutrients. In general, 5% or less is considered low and 20% or more is considered high. In the label pictured in Figure 1, for example, the % DV for added sugar in one serving of the food is 20%, so this would be considered a food that is high in sugar. Nutrients to Increase Most Americans do not get enough fiber, iron, calcium, potassium or Vitamin D. These nutrients are important because they help to prevent conditions like osteoporosis, anemia and heart disease. The new nutrition label lists the percent daily value (% DV) for each of these nutrients. Again, the percent daily value (% DV) indicates whether foods are high or low in these nutrients. In general, 5% or less is considered low and 20% or more is considered high.
In the label pictured in Figure 1, for example, the % DV for iron in one serving of the food is 45%, so this would be considered a food that is high in iron. The amount of total fat in one serving of food includes the amount of saturated fat, unsaturated fat—sometimes further broken down into polyunsaturated fat and monounsaturated fat—and trans fat in the food. Eating too much saturated fat and trans fat can increase a person’s risk for heart disease, so it’s a good idea to look for foods that are low in these types of fat. The amount of total carbohydrates in one serving of a food includes the amount of fiber, starches and sugar—both added and naturally occurring—in the food. Most Americans eat too much sugar and not enough fiber, so it’s a good idea to look for foods that are high in fiber with little to no added sugar. The new nutrition label lists the amount of added sugar per serving of food. There is no percent daily value (% DV) listed for protein because protein needs vary from person to person, and most Americans get enough protein in their diet. The Ingredient List Foods with more than one ingredient have an ingredient list on their label. Ingredients are listed in descending order by weight, the first item having a greater total weight than any of the other ingredients. In addition to the nutrient amounts listed on the nutrition facts label, the ingredient list reveals sources of sugar and sodium that are added to food products. The ingredient list is helpful for people who are trying to limit certain nutrients or avoid certain foods, especially those who have food allergies or sensitivities. Denny, S. (2015). The Basics of the Nutrition Facts Panel. Academy of Nutrition and Dietetics. www.eatright.org/resource/food/nutrition/nutrition-facts-and-food-labels/the-basics-of-the-nutrition-facts-panel U.S. Food and Drug Administration. (2016). Changes to the Nutrition Facts Label. www.fda.gov/Food/GuidanceRegulation/GuidanceDocumentsRegulatoryInformation/LabelingNutrition/ucm385663.htm U.S. Food and Drug Administration. (2016). How to Understand and Use the Nutrition Facts Label. www.fda.gov/Food/IngredientsPackagingLabeling/LabelingNutrition/ucm274593.htm
Unit 1: Introduction to Economics: What Is It? Before we dive into the principles of microeconomics, we need to define some of the major ideas that lie at the heart of economics. What is the economic way of thinking? What do economists mean when they discuss market structure and the invisible hand? In this unit we identify and define these terms before addressing the driving principles behind microeconomics: the idea that individuals and firms (economic agents) make rational choices based on self-interest. These decisions are necessary, because resources are scarce. In other words, no good or item is infinitely available. We will also introduce a number of economic models, the assumptions and constraints associated with each, and the ways they help us better understand real-life situations. Completing this unit should take you approximately 9 hours. The concept of opportunity cost is critical to understanding individual choice, because you always have to give up something in order to get another thing. In other words, the real cost of purchasing good A is equal to the value of the next best alternative (good B) that you give up in order to purchase good A. For example, what would you rather be doing instead of studying this course? The task that you have forgone in order to study economics is the opportunity cost of studying economics. As you read the materials in this subunit, pay particular attention to what is meant by the "economic way of thinking." You will want to get used to using this lens or mode of thinking as a way to understand the work that economists do and why economic principles are so widely applicable across a number of fields. This subunit also examines the key differences between microeconomics and macroeconomics and between normative economics and positive economics. Economics communicates information in a variety of formats: text, tables, graphs, mathematical expressions, and more. The following resources explain the form and function of a number of these formats, as well as their use within the context of economics. Completing this section is optional. However, this course will expect you to have the following competencies, so be sure to review this subunit if you are not confident about your skill level: - Read data from a table and transform it into a graph. - Understand the coordinate plane (x, y) and its use for constructing graphs. - Calculate the slope of a linear graph and be able to explain what the numeric value of a slope means and the significance of negative versus positive slopes. - Understand the linear equation of a line, y = mx + b. - Explain what an intercept is and how to determine it. This model is an application of the production possibility frontier studied in the previous section, albeit in a global setup. In this subunit, we will bring the argument for international trade, specialization, comparative advantage, and the resulting economic growth to light. The mechanics of comparative advantage will reveal why it would benefit a country to import goods it produces at home (a short worked example appears below). Absolute and comparative advantage is a difficult topic. This quiz challenges you to use analytical reasoning to explore what absolute advantage and comparative advantage actually mean, what opportunity costs are, and the real choices that national economies have to make.
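As promised above, here is a short worked example of comparative advantage in Python. The output figures are invented for illustration; the point is the opportunity-cost comparison, not the numbers themselves.

```python
# Comparative advantage via opportunity costs (illustrative numbers).
# Output per worker per day for two goods in two hypothetical countries.
output = {
    "Home":    {"cloth": 4, "wine": 2},
    "Foreign": {"cloth": 1, "wine": 1},
}

for country, goods in output.items():
    # Opportunity cost of one unit of cloth = wine forgone, and vice versa.
    oc_cloth = goods["wine"] / goods["cloth"]
    oc_wine = goods["cloth"] / goods["wine"]
    print(f"{country}: 1 cloth costs {oc_cloth:.2f} wine; "
          f"1 wine costs {oc_wine:.2f} cloth")

# Home gives up only 0.5 wine per cloth while Foreign gives up 1.0, so
# Home has the comparative advantage in cloth and Foreign in wine, even
# though Home is absolutely more productive in both goods.
```

This is why a country can gain from importing a good it produces perfectly well at home: what matters is the opportunity cost of producing it, not absolute productivity.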
Huafu International strongly endorses the use of Assessment for Learning (AfL) in all its programmes. Key to AfL is the differentiation between formative and summative assessment. AfL also includes the following aspects: developing classroom talk and questioning; giving appropriate feedback; sharing criteria with learners; peer and self assessment; thoughtful and active learners. Formative assessment, which provides ‘assessment for learning’, determines students’ knowledge and skills, and is used to inform instruction and guide learning. It occurs during the course of a unit of study. Sources which can be used to provide formative assessment are quizzes, homework, portfolios, works in progress, teacher observation, class work and conversation. These types of work might receive feedback from the teacher in the form of comments, but should not be graded or given a mark of any kind. In the literature, the terms ‘AfL’ and ‘formative assessment’ are often used interchangeably. Summative assessment, which provides ‘assessment of learning’, typically occurs at the end of a unit of study to determine the level of achievement. It will lead to a mark or grade. Examples of work which can be used to provide summative assessment are unit tests and end of semester exams. Labs and lab reports can also be used as summative assessments, as can presentations, essays, and other coursework which measures student achievement. Summative assessments may also be used formatively inasmuch as they can be used to inform instruction and guide learning for the next phase of study, though it should be remembered that this is not their primary function. Knowledge of the difference between formative and summative assessment permeates the whole school. Students are given instruction on this in their classes, parents are informed during parents’ meetings, and there is regular staff training on this aspect. Developing classroom talk and questioning Asking questions, either orally or in writing, is crucial to the process of eliciting information about the current state of a student’s understanding. However, students can give the right answer for the wrong reasons, and for this reason superficially ‘correct’ answers need to be probed and misconceptions explored. Teachers need to spend time planning good diagnostic questions, possibly with colleagues. Students can also be trained to ask questions. Increased thinking time can be productive, as can a ‘no hands up’ rule so that all students can be called on to answer. Giving appropriate feedback Feedback is always important, but needs to be approached cautiously. Negative effects can occur when the feedback focuses on the student’s self-esteem, as when marks are given, or when praise focuses on the student rather than the learning. Research has shown that giving comment-only feedback is more effective in achieving student progress than giving a grade alone or a grade accompanied by comments. Sharing criteria with learners Sharing learning intentions, expectations, objectives, targets and success criteria is crucial to students’ success. These should be framed in student-friendly language wherever possible. Students need regular discussions and examples to develop their understanding. Peer and self assessment The previous three areas emphasise the role of the teacher. This area emphasises the student’s involvement. Peer assessment is an important complement to self assessment as students learn to take on the role of teachers and to see learning from their perspective.
At the same time they can give and take criticism and advice in a non-threatening way, and in a language which is more comprehensible. Most importantly, both peer and self assessment place the work in the hands of the students. Thoughtful and active learners The ultimate goal of Assessment for Learning is to involve students in their own assessment so that they can reflect on where they are in their own learning, understand where they need to go next and work out what steps to take to get there. This is sometimes referred to as self-monitoring and self-regulation. In other words, students need to understand both the desired outcomes of their learning and the processes of learning by which these outcomes are achieved, and they need to act on this understanding.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer. July 15, 1996 Keck: The Largest Optical Telescope Credit: P. Stomski (W. M. Keck Observatory), Caltech, U. California Explanation: In buildings eight stories tall rest mirrors ten meters across that are slowly allowing humanity to map the universe. Alone, each is the world's largest optical telescope: Keck. Together, the twin Keck telescopes have the resolving power of a single telescope 90 meters in diameter, able to discern sources just milliarcseconds apart. Since opening in 1992, the real power of Keck I (left) has been in its enormous light-gathering ability - allowing astronomers to study faint and distant objects in our Galaxy and the universe. Keck II, completed earlier this year, and its twin are located on the dormant volcano Mauna Kea, Hawaii, USA. In the distance is Maui's volcano Haleakala. One reason Keck was built was the difficulty for astronomers to get funding for a smaller telescope.
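The milliarcsecond figure quoted above follows from the diffraction limit, theta = 1.22 λ / D. The sketch below evaluates it in Python for a 90-meter effective aperture; the 2.2-micrometer (near-infrared) observing wavelength is an assumption chosen for illustration.

```python
import math

# Diffraction-limited angular resolution, theta = 1.22 * lambda / D.
# The 2.2-micrometer (near-infrared) wavelength is an assumed example.
wavelength_m = 2.2e-6        # assumed observing wavelength
diameter_m = 90.0            # effective aperture quoted in the text

theta_rad = 1.22 * wavelength_m / diameter_m
theta_mas = theta_rad * (180 / math.pi) * 3600 * 1000  # rad -> milliarcsec

print(f"Resolution: ~{theta_mas:.1f} milliarcseconds")
```

This gives roughly 6 milliarcseconds, consistent with the "just milliarcseconds apart" claim; at shorter, visible wavelengths the same aperture resolves even finer detail.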
Texas TEKS Standards. A verb is a word that shows action or state of being. The resources above correspond to the standards listed below: Texas TEKS Standards TX.110.13. English Language Arts and Reading, Grade 2 (2.21) Oral and Written Conventions/Conventions. Students understand the function of and use the conventions of academic language when speaking and writing. Students continue to apply earlier standards with greater complexity. Students are expected to: 2.21 (A) Understand and use the following parts of speech in the context of reading, writing, and speaking: 2.21 (A) (i) Verbs (past, present, and future) NewPath Learning resources are fully aligned to US Education Standards.
The surface of graphene can be joined to oxygen to form graphene oxide. Graphene oxide is a derivative of graphene that could have a considerable effect on the chemical, pharmaceutical and electronic industries. It can work as an amazing paint, since it provides an ultra-strong, non-corrosive coating for an extensive range of industrial applications. Graphene oxide can be used to paint numerous surfaces such as glass, metals and bricks, and after a simple chemical treatment the coating acquires the thermal and chemical stability of graphite. Dr. Rahul Nair and Sir Andre Geim led a team which showed earlier that multilayer films of graphene oxide are vacuum-tight in dry conditions. However, if they are exposed to water or water vapour, they behave like molecular sieves. These molecular sieves permit only molecules below a particular size to pass. This discovery could have vast implications for the purification of water. The structure of the oxide films creates this distinct property, since they are composed of millions of tiny packed flakes. These flakes are piled on top of each other, leaving nano-sized capillaries between them. Water molecules, along with certain other atoms and molecules, prefer to reside within these nanocapillaries. The study is published in Nature Communications. The researchers at The University of Manchester demonstrate that a chemical treatment can forcefully close those nanocapillaries. This process makes graphene films mechanically stronger and resistant to gases, liquids and other chemicals. For instance, the researchers reveal that glassware or copper plates coated with graphene paint can be used as containers for some really strong acids. This extraordinary property of graphene paint has already drawn the attention of several companies. These firms are now working with The University of Manchester on the production of novel protective and anticorrosion coatings.
Language immersion, or simply immersion, is a method of teaching a second language in which the learners’ second language (L2) is the medium of classroom instruction. Through this method, learners study school subjects, such as math, science, and social studies, in their L2. The main purpose of this method is to foster bilingualism, in other words, to develop learners' communicative competence or language proficiency in their L2 in addition to their first or native language (L1). Additional goals are the cognitive advantages to bilingualism. Immersion programs vary from one country or region to another because of language conflict, historical antecedents, language policy or public opinion. Moreover, immersion programs take on different formats based on: class time spent in L2, participation by native speaking (L1) students, learner age, school subjects taught in L2, and even the L2 itself as an additional and separate subject. The first modern language immersion programs appeared in Canada in the 1960s. Middle-income Anglophone (English-speaking) parents there convinced educators to establish an experimental French immersion program enabling their children 'to appreciate the traditions and culture of French-speaking Canadians as well as English-speaking Canadians'. - Early immersion: Students begin the second language from age 5 or 6. - Middle immersion: Students begin the second language from age 9 or 10. - Late immersion: Students begin the second language between ages 11 and 14. - Adult immersion: Students 17 or older. - In complete immersion, almost 100% of class time is spent in the foreign language. Subject matter taught in foreign language and language learning per se is incorporated as necessary throughout the curriculum. The goals are to become functionally proficient in the foreign language, to master subject content taught in the foreign languages, and to acquire an understanding of and appreciation for other cultures. This type of program is usually sequential, cumulative, continuous, proficiency-oriented, and part of an integrated grade school sequence. Even after this type of program, the language of the curriculum may revert to the first language of the learners after several years. - In partial immersion, about half of the class time is spent learning subject matter in the foreign language. The goals are to become functionally proficient in the second language, to master subject content taught in the foreign languages, and to acquire an understanding of and appreciation for other cultures, but to a lesser extent than complete immersion. - In content-based foreign languages in elementary schools (FLES), about 15–50% of class time is spent in the foreign language and time is spent learning it as well as learning subject matter in the foreign language. The goals of the program are to acquire proficiency in listening, speaking, reading, and writing the foreign language, to use subject content as a vehicle for acquiring foreign language skills, and to acquire an understanding of and appreciation for other cultures. - In FLES programs, 5–15% of class time is spent in the foreign language and time is spent learning language itself. It takes a minimum of 75 minutes per week, at least every other day. The goals of the program are to acquire proficiency in listening and speaking (degree of proficiency varies with the program), to acquire an understanding of and appreciation for other cultures, and to acquire some proficiency in reading and writing (emphasis varies with the program). 
- In FLEX (Foreign Language Experience) programs, frequent and regular sessions over a short period or short and/or infrequent sessions over an extended period are provided in the second language. Class is almost always in the first language. Only one to five percent of class time is spent sampling each of one or more languages and/or learning about language. The goals of the program are to develop an interest in foreign languages for future language study, to learn basic words and phrases in one or more foreign languages, to develop careful listening skills, to develop cultural awareness, and to develop linguistic awareness. This type of program is usually noncontinuous. - In submersion, one or two students are learning the L2, which is the L1 for the rest of the class. By analogy, the former are "thrown into the ocean to learn how to swim", in the sense that the special student may lack the necessary language skills to follow the class and understand the subject. Professors may offer extra L2 lessons or a special in-class treatment to the student to compensate for that. If they do not, the student may find the lessons too complicated. - In two-way immersion, also called "dual-" or "bilingual immersion", the student population consists of speakers of two or more languages. Ideally speaking, half of the class is made up of native speakers of the major language in the area (e.g., English in the U.S.) and the other half is of the target language (e.g., Spanish). Class time is split in half and taught in the major and target languages. This way students encourage and teach each other, and eventually all become bilingual. The goals are similar to those of partial immersion. Different ratios of the target language to the native language may occur. - In language travel, a person temporarily relocates to a place where the target language is the predominant language. For example, Canadian anglophones go to Quebec (see Explore, and Katimavik) while Irish anglophones go to the Gaeltacht. Often this involves a homestay with a family who speak only the target language. - There are also intensive immersion programs for new immigrants, such as the ulpan in Israel. Benefits of immersion include: - "Improvement in linguistic and meta linguistic abilities" - An increase of cognitive ability "such as divergent thinking, concept formation, verbal abilities," listening skills "and general reasoning" - Improves one's "understanding of his/her native language." - "Opens the door to other cultures and helps a child understand and appreciate people from other countries." - "Increases job opportunities in many careers where knowing another language is a real asset." - Superior SAT scores and standardized testing - Enhances memory Learning a foreign language has its assets, and studies suggest that immersion is an effective way to learn foreign languages. Many immersion programs start in the elementary schools, with classroom time being dedicated to the foreign language anywhere between 50% and 90% of the day. Learning a second or third language not only helps an individual's personal mental skills, but also aids their future job skills. Jean Piaget, a developmental psychologist, had a theory that stated that when a child faces an idea that does not fit their understanding, it "becomes a catalyst for new thinking".
As a new language is completely foreign to a child at first, it fits perfectly as this "catalyst for new thinking". Baker found that more than 1,000 studies have been completed on immersion programs and immersion language learners in Canada. These studies have given us a wealth of information, and across them a number of important observations can be made.
- Early immersion students "lag behind" their monolingual peers in literacy (reading, spelling, and punctuation) "for the first few years only". After the first few years, the immersion students catch up with their peers.
- Immersion programs have no negative effects on spoken skills in the first language.
- Early immersion students acquire almost native-like proficiency in the passive skills (listening and reading comprehension) of the second language by the age of 11, but they do not reach the same level in the productive skills (speaking and writing), partly because the limited language needed to communicate with their teachers is enough to get by. Also, if they communicate only with their teachers, they do not learn the skills needed to hold day-to-day conversations.
- Early immersion students are more successful in listening and reading proficiency than partial and late immersion students.
- Immersion programs have no negative effects on the cognitive development of the students.
- Monolingual peers perform better in science and math at an early age; however, immersion students eventually catch up with, and in some cases outperform, their monolingual peers.
- Studies have also shown that students in dual programs have "more positive attitudes towards bilingualism and multiculturalism".
Cases by country
In the United States, dual immersion programs have grown since the 1980s for a number of reasons: competition in a global economy, a growing population of second language learners, and the successes of previous programs. Language immersion classes can now be found throughout the US, in urban and suburban areas, in dual-immersion and single-language immersion, and in an array of languages. As of May 2005, there were 317 dual immersion programs in US elementary schools, providing instruction in 10 languages; 96% of those programs were in Spanish.
In Israel, the first full immersion program, the Brandeis University-Middlebury Program in Israel, was founded in 2011. Participants are required to take the Middlebury College Language Pledge, a promise to speak only the language they are studying for the duration of their time in the program.
- Bilingual education
- English village
- French immersion
- Gaelscoileanna, Irish language immersion
- Kura Kaupapa Māori, Maori language immersion
- Native Language Immersion Student Achievement Act
- Baker, C. (1993). Foundations of Bilingual Education and Bilingualism. Clevedon: Multilingual Matters.
- Benefits of Being Bilingual, Reshma Jirage, buzzle.com
- Benefits of Being Bilingual, American Council on the Teaching of Foreign Languages (reprinted from the Center for Applied Linguistics)
- Why study a foreign language?, Bernadette Morris, LEARN NC, a program of the University of North Carolina at Chapel Hill School of Education
- Cognitive Benefits of Learning Language, Duke Gifted Letter: Volume 8, Issue 1, Fall 2007. The Duke University Talent Identification Program. Online Newsletter for Parents of Gifted Youth
- Anderson, H., & Rhodes, N. (1983).
Immersion and other innovations in U.S. elementary schools. In: "Studies in Language Learning, 4" (ERIC Document Reproduction Service No. ED 278 237)
- Andrade, C., & Ging, D. (1988). "Urban FLES models: Progress and promise." Cincinnati, OH and Columbus, OH: Cincinnati Public Schools and Columbus Public Schools. (ERIC Document Reproduction Service No. ED 292 337)
- Chen, Ya-Ling (2006). The Influence of Partial English Immersion Programs in Taiwan on Kindergartners' Perceptions of Chinese and English Languages and Cultures. The Asian EFL Journal Vol 8(1)
- Criminale, U. (1985). "Launching foreign language programs in elementary schools: Highpoints, headaches, and how to's." Oklahoma City, OK. (ERIC Document Reproduction Service No. ED 255 039)
- Curtain, H., & Pesola, C.A. (1994). "Languages and children: Making the match. Foreign language instruction in the elementary school." White Plains, NY: Longman Publishing Group.
- Freeman, Yvonne (2005). Dual Language Essentials for Teachers and Administrators. Portsmouth, NH: Heinemann.
- Potowski, Kim (2007). Language and Identity in a Dual Immersion School. Multilingual Matters Limited.
- Tagliere, Julia. "Foreign Language Study: Is Elementary School the Right Time to Start?"
- Thayer, Y. (1988). "Getting started with French or Spanish in the elementary school: The cost in time and money." Radford, VA: Radford City Schools. (ERIC Document Reproduction Service No. ED 294 450)
- Walker, Cheryl. "Foreign Language Study Important in Elementary School". Wake Forest University.
- The Wingspread Journal. (July 1988). "Foreign language instruction in the elementary schools." Racine, WI: The Johnson Foundation.
- Artigal, Josep Maria & Laurén, Christer (a cura di) (1996). Immersione linguistica per una futura Europa. I modelli catalano e finlandese. Bolzano: alpha beta verlag. ISBN 88-7223-024-1
- California Office of Bilingual Bicultural Education (1984). "Studies on immersion education: a collection for United States educators". The Department.
- Genesee, Fred (1987). Learning through two languages: studies of immersion and bilingual education. Newbury House Publishers.
- Lindholm-Leary, Kathryn J. (2001). "Dual language education". Clevedon: Multilingual Matters. ISBN 1-85359-531-4
- Maggipinto, Antonello (2000). Multilanguage acquisition, new technologies, education and global citizenship. Paper given in New York (Congress of AAIS, American Association for Italian Studies). Published in Italian Culture: Issues from 2000.
- Maggipinto, Antonello et al. (2003). Lingue Veicolari e Apprendimento. Il Contesto dell'Unione Europea... Bergamo: Junior. ISBN 88-8434-140-X
- Ricci Garotti, Federica (a cura di) (1999). L'immersione linguistica. Una nuova prospettiva. Milano: Franco Angeli. ISBN 88-464-1738-0
- Shapson, Stan & Mellen Day, Elaine (1996). "Studies in immersion education". Clevedon: Multilingual Matters. ISBN 1-85359-355-9
- Swain, Merrill & Lapkin, Sharon (1982). "Evaluating bilingual education: a Canadian case study". Clevedon: Multilingual Matters. ISBN 0-905028-10-4
- Swain, Merrill & Johnson, Robert Keith (1997). "Immersion education: international perspectives". Cambridge University Press. ISBN 0-521-58655-0
- Wode, Henning (1995). "Lernen in der Fremdsprache: Grundzüge von Immersion und bilingualem Unterricht". Hueber.
ISBN 3-19-006621-3
Geothermal energy is a renewable energy source. It entails capturing the heat contained beneath the Earth’s surface, under our feet. It can be used to produce energy on a big scale (utility level), as well as to heat and cool homes and businesses on a smaller scale. Despite being used for a long time, geothermal energy is less well-known than other alternative energy sources like solar and wind power. We’ve put together a quick summary of this power source’s main advantages and disadvantages to assist you in learning more about it; you can also find more in-depth information further down the page.

The pros and cons of geothermal energy

| Pros | Cons |
| --- | --- |
| Generally environmentally friendly; does not cause significant pollution | Some minor environmental issues |
| Renewable and sustainable | Sustainability relies on reservoirs being properly managed |
| Reliable | High initial costs |
| Great for heating and cooling | Can cause earthquakes in extreme cases |

What is geothermal energy?
Rocks and water make up the Earth’s crust, and beneath that lies a layer of hot, molten rock (magma). Magma is extremely hot; deep in the Earth’s core, temperatures rival those at the surface of the sun. This heat is a significant source of energy that can be used to create electricity. We drill into the earth to tap it, and generally speaking, the deeper you go, the hotter it gets. The underground heat is used to turn water into steam, which then spins a turbine at the surface to generate electricity for the grid. A constant and reliable renewable energy source, geothermal is virtually pollution-free.

Advantages of geothermal energy
1. Environmentally friendly
Most people agree that geothermal energy is environmentally benign. A geothermal power plant has a negligible carbon footprint. According to the EIA, an average geothermal power plant emits 99% less carbon dioxide (CO2) for each megawatt-hour (MWh) of electricity it produces than comparable fossil-fuel plants. Utilizing geothermal energy has certain polluting effects; however, they are minimal in comparison to the pollution caused by traditional fossil fuels like coal and natural gas. Further development of our geothermal resources is thought to aid in the fight against global warming.

2. Renewable and sustainable
Geothermal reservoirs are replenished naturally, so geothermal energy is a renewable energy source. Renewable energy sources are also called “sustainable”: in contrast to conventional energy sources like coal and other fossil fuels, geothermal energy can support its own rate of consumption. According to scientists, the energy in our geothermal reserves will actually last for billions of years.

3. Massive potential
Global power consumption currently averages about 17 terawatts (TW), drawn from both fossil and renewable energy sources. Although that may seem like a lot, the Earth actually contains many times more energy than that! The majority of geothermal energy, however, is either expensive or impossible to access. Realistic estimates put the potential of geothermal power facilities between 0.035 and 2 TW. Geothermal power plants worldwide currently produce only about 12.7 gigawatts (GW), while installed geothermal heating capacity is substantially higher, at about 28 GW. This indicates that there is a large window of opportunity for more geothermal energy production.

4. Reliable
Geothermal energy is a dependable source of energy. With surprising accuracy, we can forecast a geothermal power plant’s electricity production.
With solar and wind, on the other hand, the weather has a large effect on how much power is produced. Geothermal power plants are therefore quite effective at supplying the baseload energy requirement. Geothermal power plants have a high capacity factor: real power output is relatively close to total installed capacity. In 2017, the average power output around the globe exceeded 80% of total installed capacity (the capacity factor), and figures as high as 96% have been achieved. As a concrete example, a 10 MW plant running at an 80% capacity factor delivers about 10 MW × 8,760 h × 0.8 ≈ 70,000 MWh over a year.

5. Great for heating and cooling
For the power-generating turbines used in geothermal electricity production to work efficiently, the water must be heated to a temperature of at least 150°C (approximately 300°F). Geothermal energy can also be used for heating and cooling, which is a simpler application. This method exploits the (quite minor) temperature difference between a ground source and the surface. The ground is far less affected by seasonal temperature swings than the air, so a geothermal heat pump can use the ground just a few feet below the surface as a heat sink or source, similar to how an electrical heat pump uses the heat in the air. In the past few years, geothermal heating and cooling has become increasingly popular among homeowners.

Disadvantages of geothermal energy
1. Environmental issues
There is an abundance of greenhouse gases under the earth’s surface. Some of these gases escape when geothermal energy is used, traveling to the surface and entering the atmosphere. These emissions are typically more prevalent near geothermal power facilities. Sulfur dioxide and silica emissions from geothermal power facilities are negligible, but the reservoirs may contain traces of harmful heavy metals such as mercury, arsenic, and boron. Despite this, the pollution produced by geothermal energy is extremely low and pales in comparison to that of coal and other fossil fuels. The Union of Concerned Scientists also reports that there have been no occurrences of water contamination from geothermal sites in the US.

2. Surface instability (earthquakes)
The construction of geothermal power facilities may affect the stability of the land. In fact, both Germany and New Zealand have experienced subsidence (sinking of the Earth’s surface) as a result of geothermal power facilities. Hydraulic fracturing, a necessary component of creating enhanced geothermal system (EGS) power plants, can cause earthquakes. In 2006, the construction of a geothermal power plant in Switzerland triggered an earthquake with a Richter magnitude of 3.4.

3. High upfront costs
The cost of commercial geothermal energy projects is high. A geothermal power station with a capacity of 1 megawatt (MW) typically has installation costs that range from $2.5 to $5 million. The discovery and drilling of new reservoirs contribute significantly to cost growth and generally account for 50% of total expenses. As was already established, the majority of geothermal resources cannot be used profitably, at least not with the level of technology, subsidies, and energy prices that exist today. Geothermal heating and cooling systems for households and commercial buildings also have high upfront expenses. Nevertheless, these systems should be viewed as long-term investments, because they are likely to save you money in the future. Ground source heat pumps typically range in price from $15,000 to $40,000 when installed, with a 10 to 20 year payback period.

4. Location-specific
Reliable geothermal reservoirs are difficult to find.
Some nations are endowed with abundant natural resources; for instance, the Philippines and Iceland use geothermal energy to supply almost one-third of their electricity needs. Because geothermal energy is transferred over large distances as hot water rather than electricity, significant energy losses must also be considered.

5. Sustainability issues
Over millions of years, rainwater seeps through the earth’s surface into the geothermal reservoirs. Studies show that if the fluid is extracted faster than it is replenished, the reservoirs may be depleted. Fluid can be injected back into the geothermal reservoir after its thermal energy has been used (that is, after the turbine has generated electricity). Geothermal energy is sustainable if reservoirs are properly managed. Since home geothermal heating and cooling uses geothermal energy differently than geothermal power plants do, this is not a problem for households.

- Geothermal energy is derived from the massive amount of heat that exists under the Earth’s surface.
- Geothermal energy can be used to generate electricity by drilling underground and tapping into the heat to operate steam turbines on the surface.
- Geothermal can also be used for heating and cooling by taking advantage of the temperature differences above and below the ground.
- Pros of geothermal energy: it’s environmentally friendly, renewable and sustainable, reliable, great for heating and cooling, and has massive potential.
- Cons of geothermal energy: it generates some emissions, its reservoirs require proper management, it’s location-specific, it has high initial costs, and it can cause earthquakes in extreme cases.
- Geothermal has the potential to become a major global energy source, but it is held back by high upfront costs.
What is Mac OS X? © Amit Singh. All Rights Reserved. Written in December 2003

XNU: The Kernel
The Mac OS X kernel is called XNU. It can be viewed as consisting of the following components: Mach, BSD, the I/O Kit, and supporting libraries (libkern and libsa), along with the Platform Expert.

XNU contains code based on Mach, the legendary architecture that originated as a research project at Carnegie Mellon University in the mid 1980s (Mach itself traces its philosophy to the Accent operating system, also developed at CMU), and has been part of many important systems. Early versions of Mach had monolithic kernels, with much of BSD's code in the kernel. Mach 3.0 was the first microkernel implementation. XNU's Mach component is based on Mach 3.0, although it's not used as a microkernel: the BSD subsystem is part of the kernel, and so are various other subsystems that are typically implemented as user-space servers in microkernel systems. XNU's Mach is responsible for various low-level aspects of the system, such as:
- preemptive multitasking, including kernel threads (POSIX threads on Mac OS X are implemented using kernel threads)
- protected memory
- virtual memory management
- inter-process communication
- interrupt management
- real-time support
- kernel debugging support (the built-in low-level kernel debugger, ddb, is part of XNU's Mach component, and so is kdp, a remote kernel debugging protocol implementation)
- console I/O

The sequence of events before control is passed to the kernel is described in Booting Mac OS X. The secondary bootloader eventually calls the kernel's "startup" code, forwarding various boot arguments to it. This low-level code is where every processor in the system starts (from the kernel's point of view). Various important variables, such as the maximum virtual and physical addresses and the threshold temperature for throttling down a CPU's speed, are initialized here; BAT registers are cleared, AltiVec (if present) is initialized, caches are initialized, etc. Eventually this code jumps to boot initialization code for the architecture (ppc_init() on the PowerPC). Thereafter:
- A template thread is filled in, and an initial thread is created from this template. It is set to be the "current" thread.
- Some CPU housekeeping is done.
- The "Platform Expert" (see below) is initialized (PE_init_platform()), with a flag indicating that the VM is not yet initialized. This saves the boot arguments, the device tree, and display information in a state variable. Another call to PE_init_platform() is made after the VM is initialized.
- Mach VM is initialized.
- The function machine_startup() is called. It takes some actions based on the boot arguments, performs some housekeeping, starts thermal monitoring for the CPU, and calls setup_main(). setup_main() performs a lot of work: initializing the scheduler, IPC, kernel extension loading, the clock, timers, tasks, threads, etc., and finally creating a kernel thread called startup_thread that creates further kernel threads.
- startup_thread creates a number of other threads (the idle threads, service threads for the clock and devices, ...). It also initializes the thread reaper, the stack swapin mechanism, and the periodic scheduler mechanism. It is here that the BSD subsystem is initialized (via bsd_init()). startup_thread becomes the pageout daemon once it finishes its work.

At this point, Mach is up and running. XNU's BSD component uses FreeBSD as the primary reference codebase (although some code might be traced to other BSDs). Darwin 7.x (Mac OS X 10.3.x) uses FreeBSD 5.x. As mentioned before, BSD runs not as an external (or user-level) server, but as part of the kernel itself.
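Before turning to BSD specifics, note that Mach's task and thread abstractions described above are exported all the way to user space. As a minimal sketch (not from the original article; it assumes only the standard Mach headers that ship with Mac OS X, with error handling kept to a bare minimum), the following program asks the kernel, via a Mach call, for basic information about the calling task:

#include <stdio.h>
#include <mach/mach.h>
#include <mach/mach_error.h>

int main(void)
{
    task_basic_info_data_t info;
    mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
    kern_return_t kr;

    /* mach_task_self() returns a send right to the caller's task port;
       task_info() is serviced by the kernel's Mach component. */
    kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                   (task_info_t)&info, &count);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "task_info: %s\n", mach_error_string(kr));
        return 1;
    }

    printf("suspend count : %d\n", (int)info.suspend_count);
    printf("virtual size  : %lu bytes\n", (unsigned long)info.virtual_size);
    printf("resident size : %lu bytes\n", (unsigned long)info.resident_size);
    return 0;
}

A plain gcc invocation should suffice to build it, since the Mach interfaces live in the system library.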
Some aspects that BSD is responsible for include:
- process model
- user IDs, permissions, basic security policies
- POSIX API, BSD-style system calls
- TCP/IP stack, BSD sockets, firewall
- VFS and filesystems (see Mac OS X Filesystems for details)
- System V IPC
- crypto framework
- various synchronization mechanisms

Note that XNU has a unified buffer cache, but it ties in to Mach's VM. XNU uses a synchronization abstraction (built on top of Mach mutexes) called funnels to serialize access to the BSD portion of the kernel. The kernel variables pointing to these funnels have the _flock suffix, such as network_flock. When Mach initializes the BSD subsystem via a call to bsd_init(), the first operation performed is the allocation of funnels. Thereafter:
- The kernel memory allocator is initialized.
- The "Platform Expert" (see below) is called upon to see if there are any boot arguments for BSD.
- VFS buffers/hash tables are allocated and initialized.
- Process-related structures are allocated/initialized. This includes the list of all processes, the list of zombie processes, and hash tables for process IDs and process groups.
- Process 0 is created and initialized (credentials, file descriptor table, audit information, limits, etc.). The variable kernproc points to process 0.
- The machine-dependent real-time clock's time and date are initialized.
- The Unified Buffer Cache is initialized (via ubc_init(), which essentially initializes a Mach VM zone via zinit(), which allocates a region of memory from the page-level allocator).
- Various VFS structures/mechanisms are initialized: the vnode table, the filesystem event mechanism, the vnode name cache, etc. Each filesystem type present is also initialized.
- mbufs (memory buffers, used heavily in network memory management) are initialized.
- Facilities/subsystems such as aio and System V IPC are initialized.
- The kernel's generic MIB (management information base) is initialized.
- The data link interface layer is initialized.
- Sockets and protocol families are initialized.
- Kernel profiling is started, and BSD is "published" as a resource in the I/O Kit.
- Ethernet devices are initialized.
- A Mach zone is initialized for the vnode pager.
- BSD tries to mount the root filesystem (which could be coming over the network; for example, a Mac OS X disk image (.dmg) exported over NFS). devfs is mounted on /dev.
- A new process is created (cloned) from kernproc (process 0). This newly created process has pid 1 and is set to become mach_init, which starts the user-level bootstrap. mach_init is loaded and run via bsdinit_task(), which is called by the BSD asynchronous trap handler.

The rest of the user-space startup is described in Mac OS X System Startup.

The I/O Kit, the object-oriented device driver framework of the XNU kernel, is radically different from the driver models of traditional systems. The I/O Kit uses a restricted subset of C++ (based on Embedded C++) as its programming language. This system is implemented by the libkern library. Features of C++ that are not allowed in this subset include:
- multiple inheritance
- RTTI (run-time type information), although the I/O Kit has its own run-time typing system

The device driver model provided by the I/O Kit has several useful features (in no particular order):
- numerous device families (ATA/ATAPI, FireWire, Graphics, HID, Network, PCI, USB, ...)
- object-oriented abstractions of devices that can be shared
- plug-and-play and hot-plugging
- power management
- preemptive multitasking, threading, symmetric multiprocessing, memory protection, and data management
- dynamic matching and loading of drivers (for multiple bus types)
- a database for tracking and maintaining detailed information on instantiated objects (the I/O Registry)
- a database of all I/O Kit classes available on a system (the I/O Catalog)
- an extensive API
- mechanisms/interfaces for applications and user-space drivers to communicate with the I/O Kit
- driver stacking

The I/O Kit's implementation consists of three C++ libraries that are present in the kernel and available to loadable drivers; the I/O Kit headers live under Kernel/IOKit. The I/O Kit includes a modular, layered run-time architecture that presents an abstraction of the underlying hardware by capturing the dynamic relationships between the various hardware/software components involved in an I/O connection.

Various tools, such as kextstat, kextcache, and ioreg, let you explore and control various aspects of the I/O Kit. For example, the following command shows the status of dynamically loaded kernel extensions:

% kextstat
Index Refs Address Size Wired Name (Version) <Linked Against>
1 1 0x0 0x0 0x0 com.apple.kernel (7.2)
2 1 0x0 0x0 0x0 com.apple.kpi.bsd (7.2)
3 1 0x0 0x0 0x0 com.apple.kpi.iokit (7.2)
4 1 0x0 0x0 0x0 com.apple.kpi.libkern (7.2)

The following command lists the details of the I/O Kit registry in excruciating detail:

% ioreg -l -w 0
+-o Root <class IORegistryEntry, retain count 12>
| "IOKitBuildVersion" = "IOKit Component Version 7.2: Thu Dec 11 16:15:20 PST 2003;
| "IONDRVFramebufferGeneration" = <0000000200000002>
/* thousands of lines of output */

The Platform Expert is an object (one can think of it as a driver) that knows the type of platform that the system is running on. The I/O Kit registers a nub (an object that represents a detected device or bus and serves as an attachment point for drivers) for the Platform Expert. This nub then loads the correct platform-specific driver, which further discovers the buses present on the system, registering a nub for each bus found. The I/O Kit loads a matching driver for each bus nub, which discovers the devices connected to the bus, and so on. Thus, the Platform Expert is responsible for actions such as:
- building the device tree (as described above)
- parsing certain boot arguments
- identifying the machine (including processor and bus clock speeds)
- initializing a "user interface" to be used in case of kernel panics

libkern and libsa
As described earlier, the I/O Kit uses a restricted subset of C++. This system, implemented by libkern, provides features such as:
- dynamic object allocation, construction, and destruction (including data structures such as arrays, Booleans, dictionaries, ...)
- certain atomic operations and miscellaneous functions
- provisions for tracking the number of current instances of each class
- ways to avoid the "Fragile Base Class Problem"

libsa provides functions for miscellaneous purposes: binary searching, symbol remangling (used for gcc 2.95 to 3.3, for example), dgraphs (dependency graphs), catalogs, kernel extension management, sorting, patching vtables, etc.
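As a small user-space illustration of the I/O Registry just described (a sketch, not Apple's canonical sample code; build with gcc -framework IOKit -framework CoreFoundation), the following looks up the Platform Expert's registry entry and prints its name:

#include <stdio.h>
#include <IOKit/IOKitLib.h>

int main(void)
{
    mach_port_t master;
    io_service_t pe;
    io_name_t name;   /* fixed-size buffer type defined by the I/O Kit */

    /* Obtain the I/O Kit master port. */
    if (IOMasterPort(MACH_PORT_NULL, &master) != KERN_SUCCESS) {
        fprintf(stderr, "IOMasterPort failed\n");
        return 1;
    }

    /* The matching dictionary is consumed by IOServiceGetMatchingService,
       so it does not need to be released separately. */
    pe = IOServiceGetMatchingService(master,
             IOServiceMatching("IOPlatformExpertDevice"));
    if (pe == IO_OBJECT_NULL) {
        fprintf(stderr, "Platform Expert not found in the I/O Registry\n");
        return 1;
    }

    if (IORegistryEntryGetName(pe, name) == KERN_SUCCESS)
        printf("Platform Expert registry entry: %s\n", name);

    IOObjectRelease(pe);
    return 0;
}

This is the same information ioreg reaches by walking the registry; the program simply asks for one matching service instead of dumping the whole tree.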
Do you know why, having learnt how to tie shoelaces at the age of 2, we remember the skill for a lifetime, yet we cannot recall our second birthday or other events from that period? This happens due to the phenomenon called childhood amnesia, and that's the topic of this article. There are basically two types of memory: declarative and procedural (or non-declarative). Declarative memory consists of recollections of the past which are consciously comprehended and can easily be reproduced (declared). It is divided into semantic memory ("what I know", the memory of gained knowledge) and episodic memory ("what I remember", the memory of experienced events). Procedural or skill memory is the memory of "how to do this or that". It forms outside of our consciousness and is responsible for skill formation. Lacing shoes is an example of a skill, as are guitar playing and cycling. At the initial stages of skill formation we have to think about our actions so that they can become automatic. Since lacing is not a difficult activity, we maintain this childhood skill for our whole life. A birthday party, on the other hand, is an event, and remembering it relies on episodic memory. According to biological research, it is the immaturity of this kind of memory in children that accounts for the absence of memories of our early childhood; this is what is called infantile amnesia. On average, people's first memories date from when they were 3-4 years old. Why can't episodic memory start working earlier? There are several hypotheses to explain this fact.
1. Immature Brain Hypothesis (Bauer, 2002). The development of memorizing ability is connected with the development of the frontal lobe of the brain, which happens during the first years of life.
2. Language Hypothesis (Nelson and Fivush, 2004), according to which a child can't form episodic memories until he or she gains the ability to narrate events. Language itself is necessary not only for conveying information, but also for encoding it into memory. Without the ability to talk, children can't give a narrative structure, or cause-and-effect connections, to their thoughts and memories. Such unstructured memories are simply forgotten, and as adults we can't recover them.
3. Self-Identity Hypothesis (Howe and Courage, 1997) explains that the ability to form autobiographical memory comes with the ability to identify oneself as a self. The word 'mine' appears in children's language at the age of 2, giving them the opportunity to create stories about themselves. By the age of 3 they begin to identify themselves and their thoughts as their 'own'. Therefore, events turn into experience and memories.
Consequently, episodic memory (formed at 3-4 years) helps us to reconstruct events starting from this very age. Earlier memories fail to be saved, whereas simple skills gained in early childhood remain unchanged for all our life.
Ingenuity is the quality of being clever, original, and inventive, often in the process of applying ideas to solve problems or meet challenges.

Etymology
Ingenuity comes from the Latin ingenium, which is also the root of the word engineering. For example, figuring out how to cross a mountain stream using a fallen log, building an airplane model from a sheet of paper, and starting a new company in a foreign culture all involve the exercise of ingenuity. Human ingenuity has led to various technological developments through applied science, and can also be seen in the development of new social organizations, institutions, and relationships. Ingenuity involves the most complex human thought processes, bringing together our thinking and acting, both individually and collectively, to take advantage of opportunities and overcome problems.

Application
One example of how ingenuity is used conceptually can be found in the analysis of Thomas Homer-Dixon, building on that of Paul Romer, referring to what is usually called instructional capital. Homer-Dixon used the phrase 'ingenuity gap' to denote the space between a challenge and its solution. His particular contribution is to explore the social dimensions of ingenuity. Typically we think of ingenuity as being used to build faster computers or more advanced medical treatments. Homer-Dixon argues that as the complexity of the world increases, our ability to solve the problems we face is becoming critical. Human ingenuity is also a theme in many school systems, with most teachers encouraging students to cultivate it. These challenges require more than improvements arising from physics, chemistry, and biology, as one must also consider the highly complex interactions of individuals, institutions, cultures, and networks involving all of the human family around the globe. Organizing ourselves differently, and communicating and making decisions in new ways, are examples of social ingenuity. If our ability to generate adequate solutions to these problems falls short, the ingenuity gap will lead to a wide range of social problems. The full exploration of these ideas in meeting social challenges is featured in The Ingenuity Gap, one of Thomas Homer-Dixon's earliest books. In another of Homer-Dixon's books, The Up Side of Down, he argues that increasingly expensive oil, driven by scarcity, will lead to great social instability. Walking across an empty room requires very little ingenuity; if the room is full of snakes, hungry bears, and land mines, the ingenuity requirement goes up considerably.

Discussion/Argument
It is not clear, though, whether Homer-Dixon or Romer considered it impossible to do so, or whether they were simply not familiar with the prior analysis of "applied ideas", "intellectual capital", "talent", or "innovation", in which instructional and individual contributions have been carefully separated by economic theorists.
Stack-Based Buffer Overflow
A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside the intended data structure, usually a fixed-length buffer. Here are the characteristics of stack-based programming:
- The "stack" is a memory space in which automatic variables are allocated.
- Function parameters are allocated on the stack and are not automatically initialized by the system, so they usually contain garbage until they are initialized.
- Once a function has completed its cycle, the reference to the variable in the stack is removed.
An attacker may exploit stack-based buffer overflows to manipulate the program in various ways by overwriting:
- A local variable that is near the buffer in memory on the stack, to change the behavior of the program in a way that benefits the attacker.
- The return address in a stack frame. Once the function returns, execution will resume at the return address as specified by the attacker, usually pointing into a user-input-filled buffer.
- A function pointer or exception handler, which is subsequently executed.
Factors that make these exploits harder to carry out include:
- null bytes in addresses;
- variability in the location of shellcode;
- differences between environments.
NOP or NOOP (short for no operation, i.e., no operation performed) is an assembly language instruction that effectively does nothing at all. By design, this instruction does not change the state of status flags or memory locations. Among other uses, NOP enables the developer to force memory alignment, or to act as a placeholder to be replaced by active instructions later in program development. The NOP opcode can be used to form an NOP slide, which allows code to execute when the exact value of the instruction pointer is indeterminate. It is the oldest and most widely used technique for successfully exploiting a stack buffer overflow: it relaxes the need to know the exact address of the buffer by effectively increasing the size of the target area, since a jump that lands anywhere in the run of NOPs slides forward into the shellcode.
Heap Buffer Overflow
A heap buffer overflow occurs in the heap data area and may be introduced accidentally by an application programmer, or it may result from a deliberate exploit. In either case, the overflow occurs when an application copies more data into a buffer than the buffer was designed to contain. A routine is vulnerable to exploitation if it copies data to a buffer without first verifying that the source will fit into the destination. The characteristics of heap-based programming are as follows:
- The "heap" is a "free store", a memory space in which dynamic objects are allocated.
- The heap is the memory space that is dynamically allocated by the new operator and the malloc() and calloc() functions; it is distinct from the memory space allocated for the stack and for code.
- Dynamically created objects are allocated on the heap (calloc() returns memory initialized to zeros) and remain in memory until their life cycle has completed.
Memory on the heap is dynamically allocated by the application at run time and normally contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked-list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage and uses the resulting pointer exchange to overwrite a program function pointer.
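To ground the stack-based case described earlier, here is a minimal, deliberately vulnerable C sketch (the function name and buffer size are illustrative, not taken from any real program):

#include <stdio.h>
#include <string.h>

/* Deliberately vulnerable: copies attacker-controlled input into a
   fixed-length stack buffer with no bounds check. */
void vulnerable(const char *input)
{
    char buf[64];          /* automatic variable, lives on the stack */

    strcpy(buf, input);    /* writes past buf when input exceeds 63
                              characters plus the terminating NUL,
                              clobbering whatever sits above it in the
                              frame, including the saved return address */
}

int main(int argc, char **argv)
{
    if (argc > 1)
        vulnerable(argv[1]);   /* a long argv[1] smashes the stack */
    return 0;
}

If the overflowing bytes are chosen so that the saved return address points into attacker-supplied shellcode (commonly preceded by an NOP slide, as described above), execution is hijacked when vulnerable() returns. The defensive fix is to bound the copy, for example with strncpy(buf, input, sizeof(buf) - 1) followed by explicit NUL termination.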
Wastewater is contaminated by various organic materials. Plants, animals, and humans are the sources of natural and synthetic organic compounds. Human waste, detergents, paper products, cosmetics, agricultural products, food, waste from commercial activities, and waste from industrial sources are organic in origin and present in significant quantities. Organic compounds produced from the sources above are combinations of carbon, hydrogen, oxygen, nitrogen, sulfur, and other elements. Organic compounds such as proteins, carbohydrates, and fats are degraded by organisms; even so, they can cause pollution. A large concentration of degradable organics in wastewater is hazardous for lakes, rivers, and oceans, because organisms consume the oxygen dissolved in the water as they break down the waste. This can reduce or deplete the oxygen supply needed by aquatic life, killing fish, increasing odors, and lowering overall water quality. Some organic compounds are more stable than others and cannot be quickly broken down by organisms. This poses additional challenges for treatment, and it is true of many synthetic organic compounds developed for agriculture and industry. Some synthetic organic compounds, among them pesticides, herbicides, dyes, and pigments, as well as cooking oils and fats, are toxic to humans, fish, and aquatic plants, and are often improperly disposed of in drains or carried along in rainwater. In the receiving water bodies, they kill or contaminate fish, making them unfit to eat. They can also reduce the efficiency of the treatment process. Contamination by such organic substances thus poses a greater challenge for wastewater treatment.
IM for Teaching Unit IV, Socialization, in Introduction to Sociology
Prepared by Caroline H. Persell
August 2008
The social construction of the self can be illustrated well with the film Nova: Secret of the Wild Child; there is a Lesson Plan for using the film. An in-class exercise that helps to explore conceptions of the self is “The Twenty Statements Test.” There is an Instructors' Manual for using a free textbook chapter available on “Socialization” that explores the contexts of socialization, the content and processes of socialization, the possible results of socialization, and socialization through the life course. There is a Gender Socialization Lab/Fieldtrip exercise and a Lesson Plan for it. To help students understand the power of social norms in shaping their conduct, they could do the simulation “Breaking a Social Norm,” which has a Lesson Plan. To see how advertisers try to manage and manipulate the formation of social norms, students could view the film “Merchants of Cool” online; there is a Lesson Plan for using the film. Students could also take Robert E. Wood’s Virtual Tour C, “Social Interaction and Socialization”.
The three species of louse that infest humans are the head louse, the body louse, and the pubic louse. Lice are wingless insects with six legs, to which are attached strong claws that they use to grasp tightly onto hair shafts or clothing fibres. Head lice, the most common infestation in humans, are colloquially known as cooties, and their eggs are called nits. Pubic lice are smaller, with a short body resembling a crab.

Head lice
Despite excellent hygiene, head lice (pediculosis capitis) are very prevalent, especially in school children, in most societies (one study in the UK found 57% of primary school children were infested). The usual organism is Pediculus humanus capitis, but Pthirus pubis is more common in people with curly hair. Scurrying mature live lice are 3 mm in length and are most easily found on the occiput or behind the ears. Black specks of louse dung and tiny haemorrhagic papules (bites) are often visible. Lice cause irritable crusted papules and sometimes secondary dermatitis, impetiginisation, and lymphadenopathy. The egg cases (‘nits’) are flask-shaped and 1 mm in length. They are found firmly attached to hair shafts. Empty egg cases are easier to see because they are white and further away from the scalp than the grey nits containing live eggs. Scurrying lice are not always readily seen. Occasionally, it is difficult to distinguish egg cases from hair casts. Hair casts generally slide up the hair shaft, whereas egg cases are glued to the hair. Microscopy may be helpful. Other conditions to consider:
Children should be taught not to share hats, scarves, headbands, combs, and brushes, as adult lice can survive up to three days away from a host. However, lice usually spread through direct head-to-head contact. Treatment of infestation should include:
Topical insecticides are neurotoxic to lice and are not effective against young nits. They include:
Resistant cases may benefit from oral trimethoprim/sulphamethoxazole. Oral ivermectin is not registered for head lice treatment but is thought to be safe and effective. It is available under Section 29. Preparations listed by MIMS NZ:

Pubic lice
Pubic lice or crabs are easily transmitted sexually. The pubic hair is the most common site, but lice can spread to other hairy parts of the body, including the armpits, beard, chest hair, and thigh hair. The eyelashes can also be affected. Infestation presents as itching, but blood specks on underclothes and live lice moving in the pubic hair are occasionally noted. An insecticide such as Prioderm Cream Shampoo (maldison 1%) should be applied to all hairy parts of the body apart from the eyelids and scalp. It is washed off after 5 to 10 minutes, and any remaining nits should be removed using a fine-toothed comb. A repeat application is advisable 7 days later. Lice and nits can be removed from the eyelashes using a pair of fine forceps. Alternatively, petroleum jelly, such as Vaseline, can be smeared on the eyelashes twice a day for at least 3 weeks. Underwear and bed linen should be washed thoroughly in hot water to prevent recurrences. Sexual partners need to be treated even if they deny itching and do not appear to be infected.

Body lice
Body lice tend to infest people in extreme states of poverty or personal neglect. The eggs of body lice are laid and glued to cloth fibres instead of hair, and the lice feed off the skin. Regular hot washing of clothes and bathing has led to a decrease in the incidence of body lice, but during wartime and in some less developed countries the condition can still occur. Body lice have in the past been responsible for spreading diseases such as typhus.
However, because of the decline in the number of people infested with body lice, this is no longer a significant problem. The insecticides used in the treatment of head lice are also used in the treatment of body lice. Hot washing of clothes and bathing should be emphasised.
Keys in Music
The concept of keys in music is important to understand. The idea is a bit abstract and can be confusing, even mystifying, in the beginning. With experience the concept will become more and more clear. You might consider rereading this lesson from time to time until you solidify your understanding of this essential musical concept.

What is a Key in Music?
In music a key is the major or minor scale around which a piece of music revolves. A song in a major key is based on a major scale. A song in a minor key is based on a minor scale. A song played in the ‘key of C major’ revolves around the seven notes of the C major scale: C, D, E, F, G, A, and B. That means the fundamental notes making up the song’s melody, chords, and bassline are all derived from that group of notes. A song in the ‘key of F major’ uses the notes of the F major scale: F, G, A, Bb, C, D, and E. Similarly, a piece of music can be in a minor key and revolve around a natural minor scale. For example, a song in the ‘key of D minor’ uses the notes of the D minor scale: D, E, F, G, A, Bb, and C. Any major scale or natural minor scale can serve as a key for a piece of music.

The Center of It All - The Tonic
The root note of the key acts as the center of the key. Similar to the root notes of chords, the root note of a scale is the note on which the scale is built. For example, the root of the C major scale is C. The root note of an Eb minor scale would be Eb. When speaking of keys, the root note of the key is called the tonic (pronounced TAWN-ik). I think of keys and the tonic like gravity on Earth. All objects are constantly pulled toward Earth until they come to a state of rest on its surface. Objects can move away from Earth, but eventually come back down. When you play music, the music is constantly being pulled toward the tonic, or root of the key, wanting to come to a state of rest or completion. The tonic is the most resolved note in a key. The tonic is a key’s center. Moving away from and back to the tonic resting point of the key is partly what makes music interesting and why it has a pleasing effect on us. Continuing the gravity analogy, music momentarily defies gravity, but then comes back down. It’s exciting much like a pole-vaulter, basketball player, or juggler might be. When music has this centered sound to it, it is said to be tonal (pronounced TOE-nul), or possessing tonality. Almost all music to which we listen is tonal. When a piece of music lacks a tonal center it is said to be atonal (pronounced AY-toe-nul). Most people don't like the sound of atonal music.

Listen for the Tonic
As you listen to music, try to pay attention to these concepts of tonality and resolution. Although points of resolution occur all throughout a song, you will hear them most noticeably at the end of a song. Most songs finish on the tonic of the key to make the song sound complete or finished. It’s a very natural sound to expect, and it will sound strange when you don’t hear it. When the end of a song goes unresolved, it often has a comical effect. This effect is possible because of everyone’s natural sense of tonality. There are a couple of audio examples of the sense of tonality.

Can a Piece of Music Only Use Notes Within the Key?
Notes not in the scale are considered to be outside of the key. Outside notes can be (and often are) used, but the bulk of the notes will still center around the notes of the key and the key’s tonic.
If outside notes are used improperly, it’s possible to throw off a song’s tonality and create an unpleasant effect. Skilled musicians and composers have learned to use these outside (off-key) notes without upsetting the tonality of the music. Outside notes occur in most styles of music to some degree. You will hear heavy use of outside notes in many jazz solos. You might also find them in heavy metal riffs, or even in a simple pop song.

How Many Music Keys Are There?
Since there are 12 major scales, there are 12 major keys. Likewise, there are 12 minor scales and, therefore, 12 minor keys. So there are 24 keys all together. Three of the major keys can be named 2 different ways: one way with sharp note names, the other with flat note names. This results in 15 different major key spellings. As an example, the keys of Gb major and F# major contain the exact same notes. The former is spelled using flat note names (Gb, Ab, Bb, Cb, Db, Eb, and F), while the latter is spelled with the equivalent sharp note names (F#, G#, A#, B, C#, D#, and E#). There will be times when choosing one spelling over another is preferable. (More on that later.) In the same way, there are 15 different minor key spellings. In total, there are 24 keys and 30 ways to spell them. In the next few lessons, covering the circle of 5ths, I will show you how you can start memorizing all 30 key spellings. It sounds far scarier than it is, but it will take some effort.
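To make the relationship between a key and its scale concrete, here is a minimal sketch in C (the note names, array layout, and sharps-only spelling are illustrative simplifications, not part of the lesson above). It walks the whole/half-step pattern of a major scale (W-W-H-W-W-W) from any tonic:

#include <stdio.h>
#include <string.h>

/* The twelve pitch classes, using sharp names only (a simplification). */
static const char *NOTES[12] = { "C", "C#", "D", "D#", "E", "F",
                                 "F#", "G", "G#", "A", "A#", "B" };

/* A major scale's interval pattern: whole, whole, half, whole, whole,
   whole (the final half step returns to the tonic an octave up). */
static const int STEPS[6] = { 2, 2, 1, 2, 2, 2 };

static void print_major_key(const char *tonic)
{
    int idx = -1, i;

    for (i = 0; i < 12; i++)
        if (strcmp(NOTES[i], tonic) == 0) { idx = i; break; }
    if (idx < 0) { printf("unknown tonic: %s\n", tonic); return; }

    printf("Key of %s major:", tonic);
    for (i = 0; i < 7; i++) {
        printf(" %s", NOTES[idx]);
        if (i < 6)
            idx = (idx + STEPS[i]) % 12;  /* wrap around the pitch circle */
    }
    printf("\n");
}

int main(void)
{
    print_major_key("C");   /* C D E F G A B  */
    print_major_key("G");   /* G A B C D E F# */
    return 0;
}

Because the table uses sharp names only, the sketch cannot produce the flat spellings; for example, it would render Gb major with F#-based names. Handling both spellings correctly is exactly the kind of bookkeeping the circle-of-5ths lessons address.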
Are you interested in knowing more about the 19th President of the US? Check out these facts about Rutherford B. Hayes in the following post. His full name was Rutherford Birchard Hayes. From 1877 until 1881, he served as the 19th president of America. He was born on 4th October 1822 and died on 17th January 1893. He was also known as a governor of Ohio and a congressman. Hayes was on the side of the Union Army when the American Civil War broke out, and he was severely injured during the war. During the antebellum years, Hayes was recognized as an abolitionist and a lawyer who defended slaves at their trials.
Facts about Rutherford B. Hayes 1: presidency
In 1876, he was nominated as the presidential candidate of the Republican Party. He was elected through the Compromise of 1877, under which the South was left to govern itself as the Reconstruction Era ended.
Facts about Rutherford B. Hayes 2: military troops from the South
Hayes withdrew the Army troops whose presence had supported the Republican state governments in the South. He also pledged to protect the rights of African Americans in the South as free citizens.
Facts about Rutherford B. Hayes 3: as city solicitor
From 1858 until 1861, Hayes served as city solicitor. In Ohio, he was known as an attorney.
Facts about Rutherford B. Hayes 4: joining the Union Army
Hayes served as an officer in the Union Army, leaving politics to take part in the American Civil War.
Facts about Rutherford B. Hayes 5: the wound
Hayes suffered severe wounds during the war; the Battle of South Mountain gave him the most severe one. He earned promotion to brevet major general for his bravery during the war.
Facts about Rutherford B. Hayes 6: life after the war
After the American Civil War ended, Hayes became a Republican congressman.
Facts about Rutherford B. Hayes 7: the career as Governor of Ohio
Hayes was elected Governor of Ohio three times; his third term ran from 1876 until 1877.
Facts about Rutherford B. Hayes 8: the major contributions during his presidency
As the 19th president of the US, Hayes's major contribution was to restore the people's faith in the presidency and in executive power.
Facts about Rutherford B. Hayes 9: the first year in the presidential office
The Great Railroad Strike, which took place in 1877, marked the beginning of his presidency. The wages of employees of the major railroads had been cut a number of times because of the Panic of 1873; that was the primary cause of the Great Railroad Strike.
Facts about Rutherford B. Hayes 10: the currency
Currency, specifically the roles of silver and gold coinage, was one of the major issues Hayes addressed.
What do you think of these facts about Rutherford B. Hayes?
What is Waste Recycling?
Recycling is processing used materials (waste) into new, useful products. This is done to reduce the use of the raw materials that would otherwise have been needed. Recycling also uses less energy and is a great way of controlling air, water, and land pollution. Effective recycling starts with the household (or wherever the waste was created). In many countries, the authorities help households by providing labelled bin bags. Households then sort out the waste themselves and place it in the right bags for collection. This makes the work less difficult. Waste items that are usually recycled include:
Paper waste items include books, newspapers, magazines, cardboard boxes, and envelopes. Click here to see how paper is recycled.
Plastic items include plastic bags, water bottles, rubber bags, and plastic wrappers.
All glass products, like broken bottles and beer and wine bottles, can be recycled. Click here to see how glass is recycled.
Cans from soda drinks, tomato cans, fruit cans, and all other cans can be recycled.
Did you know: Recycling just 1 ton of aluminum cans conserves more than 207 million Btu, the equivalent of 36 barrels of oil, or 1,665 gallons of gasoline. —EPA
Click here to see how aluminum cans are recycled.
When these are collected, they are sent to the recycling unit, where the waste of each type is combined, crushed, melted, and processed into new materials.
Importance and benefits of waste recycling
Recycling helps protect the environment: recyclable waste materials would otherwise have been burned or would have ended up in a landfill. Pollution of the air, land, water, and soil is reduced.
Recycling conserves natural resources: recycling more waste means that we do not depend as much on raw (natural) resources, which are already massively depleted.
Recycling saves energy: it takes more energy to produce items from raw materials than from recycled materials. This means we are more energy-efficient, and the prices of products can come down.
Recycling creates jobs: people are employed to collect and sort waste and to work in recycling companies. Others also get jobs with businesses that work with these recycling units. There can be a ripple effect of jobs throughout the municipality.
It’s been about 10,000 years since our ancestors began farming, but crop domestication has taken much longer than expected – a delay caused less by genetics and more by culture and history, according to a new study co-authored by University of Guelph researchers. The new paper digs at the roots not just of crop domestication but of civilization itself, says plant agriculture professor Lewis Lukens. “How did humans get food? Without domestication – without food – it’s hard for populations to settle down,” he said. “Domestication was the key for all subsequent human civilization.” The study appears in the current issue of the Proceedings of the National Academy of Sciences. Lukens and Guelph PhD student Ann Meyer worked on the study with biologists at Oklahoma State University and Washington State University. Examining crop domestication tells us how our ancestors developed the food, feed, and fibre that led to today’s crops and products. Examining crop genetics might also help breeders and farmers looking to further refine and grow more crops for an expanding human population. “This work is largely historical, but there are increasing demands for food production, and understanding the genetic basis of past plant improvement should help future efforts,” he said. The Guelph team analyzed data from earlier studies of domesticated cereal crop species, and the American scientists also performed field tests. To study the historical effects of interactions between genes, and between genes and the environment, they looked at genes controlling several crop plant traits. Domestication has yielded modern crops whose seeds resist shattering, such as corn whose kernels stay on the cob instead of falling off. Early agriculturalists also shortened flowering time for crops, necessary in shorter growing seasons such as Canada’s. Domestication traits are known to have developed more slowly than expected over the past 10,000 years. The researchers wondered whether genetic factors hindered transmission of the genes controlling such traits. Instead, they found that domestication traits are often faithfully passed from parent to progeny, often more so than ancestral traits, said Lukens. That suggests cultural and historical factors – anything from war and famine to lack of communication among separated populations – accounted for the creeping rate of domestication. “We conclude that the slow adaptation of domesticated plants by humans was likely due to historical factors that limited technological progress,” said Lukens. This research project stemmed from a meeting of anthropologists, archeobotanists and geneticists at the National Evolutionary Synthesis Center in North Carolina.
This week University of Pennsylvania paleobotanist Hermann Pfefferkorn and colleagues presented findings from a fossilized forest that lay hidden under a coal mine in China for 300 million years. These findings will lend insight into the ecology and climate of the time and place in which the forest was alive, before it was covered by a volcano's ash over a period of several days. Because of the speed with which the ash fell and the perfect storm of environmental factors that followed, this site has been kept in relatively excellent condition since the age of Pangea. The image you see above has been presented by the University of Pennsylvania as a reconstruction of the peat-forming forest we're speaking about today. Jun Wang of the Chinese Academy of Sciences, Yi Zhang of Shenyang Normal University and Zhuo Feng of Yunnan University worked with Pfefferkorn, who is, again, a professor in Penn's Department of Earth and Environmental Science. The paper they're set to present next week in the Early Edition of the Proceedings of the National Academy of Sciences shows how this site, located near Wuda, China, is utterly unique in both its condition and its scale. As Pfefferkorn notes: "This is now the baseline. Any other finds, which are normally much less complete, have to be evaluated based on what we determined here. … This is the first such forest reconstruction in Asia for any time interval, it's the first of a peat forest for this time interval and it's the first with Noeggerathiales as a dominant group." - Pfefferkorn The trees in this area were surrounded and covered by ash layers that the team was able to date to approximately 298 million years ago. This sets the forest at the point at which the Earth's several continental plates were still in a state of great flux, moving toward each other to form the supercontinent we now call Pangea. The team has studied three sites thus far, counting and mapping all fossilized plants as they moved through the ancient forest. They've found six groups of trees so far, with a layer of tree ferns forming a lower canopy and gigantic trees such as Sigillaria and Cordaites growing to heights of 80 feet above. Also found were nearly complete specimens of a group of trees known as Noeggerathiales – spore-bearing trees related to our modern-day ferns, a family that has also appeared at previously discovered sites in North America and Europe. These plants and the location provide an idea of what the environment was like at that one moment in time, but they also allow us to gain context on the age in which that environment existed. As Pfefferkorn notes: "It's like Pompeii: Pompeii gives us deep insight into Roman culture, but it doesn't say anything about Roman history in and of itself. But on the other hand, it elucidates the time before and the time after. This finding is similar. It's a time capsule and therefore it allows us now to interpret what happened before or after much better." - Pfefferkorn Let the exploration continue! It's always great to hear about rediscoveries from our old Earth like this, and we look forward to the continued growth of our understanding of the planet we live on, and beyond! [via University of Pennsylvania]
Today, I want to discuss with you the power of names and naming rights. To understand the power of names, we first need to understand what a name is. According to the Merriam-Webster dictionary, a name is:
1 a : a word or phrase that constitutes the distinctive designation of a person or thing
b : a word or symbol used in logic to designate an entity
2 : a descriptive often disparaging epithet <called him names>
3 a : reputation <gave the town a bad name>
b : an illustrious record : fame <made a name for himself in golf>
c : a person or thing with a reputation
4 : family, clan
5 : appearance as opposed to reality <a friend in name only>
6 : one referred to by a name <praise his holy name>
Such simple words for such a complex idea. Names are more than simple words; names are how we interpret the world around us. Helen Keller learned to speak when she understood that the hand movements were the name of water. In ancient times, the Egyptian god Ptah created things by naming them. In science, naming rights reflect who either thought up an idea first or who implemented an idea first or best. Neil deGrasse Tyson does a phenomenal job explaining how we interpret history through naming rights. Fairy tales such as Rumpelstiltskin show how knowing a person's name gives you power over them (or at least that's what we used to believe). Today, people name their children to connect future generations with past generations (through the use of familial names), or to reflect a parent's wishes or dreams for their progeny (e.g. naming a child 'Lucifer' because you want him to be beautiful and able to think for himself), or to remember their cultural heritage. Studies show that men with feminine-sounding names have more peer problems in school, and that people with unusual names have a harder time getting hired. What does this have to do with feminism? The name 'feminism' is a word used in logic to designate an idea. The word appears around 1851 with the meaning 'the quality of being female' or 'the state of being feminine'. In the early 1900s, 'feminism' took on new meaning with the women's suffrage movement, now denoting a social theory or political movement that states we need to remove legal and social restrictions on women to allow them equality with men. Over time, the word 'feminism' changed definitions as women's rights moved forward. Women now have rights that at one point were reserved for men, such as the right to own property, the right to vote, the right to an education, the right to work, ... Socially, women shed previous restrictions on clothing, mannerisms, sexual behaviors, reproductive rights, hairstyles, hair colors, ornamentation (e.g. jewelry, tattoos, piercings, ...), career choices, ... I think it's safe to say that the majority of legal and social restrictions have been removed. This raises the question: what does feminism mean today? Some women claim that feminism still refers to gaining equality between the genders. But we already have words for that – egalitarianism or equalitarianism. These names sound like movements for equality, whereas 'feminism' sounds like a movement only for women. Some women claim that feminism refers to a movement dedicated to protecting women from domestic violence or intimate partner violence (IPV), rape, and harassment. But IPV, rape, and harassment are not female problems; these are crimes committed by women and men against men and women.
The name 'feminism' not only sounds like a movement especially to help women in these situations, but also ignores or belittles the half where men are also victims. Some women claim that feminism refers to a movement aimed at equalizing wages and promotions between the genders. There are several problems with this definition. First, this feminism assumes that women choose the workplace over the home, and that more money and titles are required for happiness or fulfillment. It overlooks the happiness of either stay-at-home moms or women who consciously choose to balance their career needs with their family needs. Personally, I do not believe that money is the end-all, be-all of life. Second, we already have a term for people gaining promotions and raises due to their personal contributions – meritocracy. Turning again to the Merriam-Webster dictionary, meritocracy means a system in which the talented are chosen and moved ahead on the basis of their achievement. Third, men and women are not equal in their natural talents. For example, women tend to multi-task better than men; men have better hand-eye coordination than women. Scientists use fMRIs to determine whether someone has a male or female brain. These differences mean that certain jobs will come more naturally to either women or men and will therefore lend themselves better to the preferred gender. It's not a function of society, but of who we are. In these cases, equality doesn't mean the same number of men and women; equality means the same opportunities presented to men and women. Yet feminism seems to ignore these problems; the name suggests that women deserve more money and more promotions simply because they are women. These definitions of feminism lead to another power of names – reputation. Feminism has gained a negative reputation over its history as more and more people have come to see the movement as dedicated to obtaining special privileges for women. The name itself leads to this reputation, since the focus of the word is "feminine" and not "equal". Personally, I consider myself an equalitarian who believes in meritocracy. The question now is – who are you? And who are these feminists? Do they really want equality? Or do they want special privileges?
It has been six months since my 7-year-old niece lost her front teeth, and there is no sign of new growth. Is this normal? There is a range of time during which tooth eruption is considered normal or "within normal limits." The upper and lower front teeth (central incisors) normally erupt into the mouth between the ages of 6 and 8. Factors that can affect normal tooth eruption include:
1. Genetics/family history.
2. Infection or cavities of baby teeth before the permanent teeth erupt.
3. Early loss of a baby tooth before eruption of the permanent tooth.
4. Trauma to a region of the mouth affecting the primary teeth.
5. Fevers and systemic health problems, which can also play a role in delayed eruption.
My suggestion is not to panic, because age 7 is the average age at which the permanent teeth typically erupt in the front upper and lower areas. I certainly suggest consulting an excellent dentist and/or orthodontist, who will take X-rays and check the status of the permanent teeth underneath the gum and bone, as their formation and position are critical. Learn more in the Everyday Health Dental Health Center. Last Updated: 6/11/2007
Growing primarily along the Pacific Coast, from the waters off the shores of California all the way up to the waters off the coasts of Alaska and Canada, Kelp is found in tiered, forest-like developments. Featuring canopies and several under layers, quite similarly to rainforests, Kelp forests contain two main species: Nereocystis leutkeana (bull kelp) and Macrocystis pyrifera (giant kelp). Macrocystis pyrifera, found largely in the southern portions, is brown in color, as is Nereocystis leutkeana, which is found primarily in the northern kelp forests. Growing along rocky coastlines, Kelp forests prefer cool water located in areas where the sun's rays can readily reach them. Warmer water contains smaller supplies of dissolved inorganic nitrogen, leading to significantly slower growth for Kelp forests. This is evidenced during the summer months when marine water turns seasonably warmer. Survival for Kelp is linked to how strongly each plant is anchored to the rock on which it formed. Rather than a root system, Kelp uses holdfasts, which act as anchors, to keep it in place. Furthermore, Kelp growth and survival are determined in part by salinity, the motion of the waves, and the presence of predatory urchins. Beginning as spores from the parent Kelp, both types grow into mature plants known as sporophytes. Giant Kelp features pneumatocysts, or small gas bladders, that act as flotation devices to keep the upper regions of the Kelp floating freely. The smaller of these two brown Kelps, bull Kelp, has only a single pneumatocyst that it shares among several blades. A perennial type of algae, giant Kelp can live for as many as seven years, whereas its smaller counterpart, bull Kelp, can survive only a single year, as it is an annual algae type. Not only does Kelp support its own ecosystem, but it has been discovered to be useful for a variety of purposes. Kelp has been used traditionally in cooking either as an ingredient or as a side dish by the Chinese, Korean, and Japanese. Today, it is eaten in other countries as an ingredient found in salads, soups, stews, and more. Due to its environment, Kelp has a high mineral content that includes calcium, copper, iodine, magnesium, iron, zinc, manganese, chlorine, potassium, and selenium. Since Kelp is rich in iodine, it is an excellent choice for individuals looking to balance their metabolism. Iodine is essential in the proper functioning of the thyroid gland, and the iodine in Kelp supports the production of important thyroid hormones. Also a rich source of calcium, Kelp can help to maintain healthy bones and teeth. Due to such high levels of these minerals, Kelp is commonly utilized in beauty care formulas for the hair, including conditioners, shampoos and treatments. In particular, both calcium and magnesium promote hair growth as they help to maintain healthy hair follicles. Kelp is also rich in Omega-3, vitamin A, vitamin C, vitamin E, and the B-complex vitamins (thiamine, pantothenic acid, riboflavin, and folate). Due to its supplies of Omega-3, Kelp is noted for supporting the human circulatory system. Kelp has been used in the process of making soap since the late 1880s, when it was discovered that Kelp ash is a useful ingredient for soap making. Not only does Kelp ash offer excellent cleansing capabilities, but it also delivers exfoliating and moisturizing functionality. Therefore, Kelp is commonly found in body washes and salt scrubs as one of the primary ingredients.
It also functions as a detoxifying element in cleansing products, since it draws toxins produced by the body to itself. Marine Kelp contains a fibrous material known as alginate. This substance is used to help eliminate fat in the human body due to its fat-absorbing capabilities. Weight-loss supplements have been made using Marine Kelp to help people who are actively trying to manage their weight. Kelp supplements have grown in popularity, along with an increase in the production of tea made from this particular type of algae. Also containing high levels of amino acids, Brown Kelp can stimulate collagen levels within the human skin, enhancing the skin's natural elasticity. Due to this rejuvenating capability, Brown Kelp gleaned from the sea is often used as an ingredient in modern cleansing formulas, toners, facial masks, and moisturizing creams and lotions.
Why Is Learning Music So Important?
TEN FACTS ABOUT SCHOOL MUSIC
1. Music makes a contribution to kids' development that no other subject can match. Music education uniquely contributes to the emotional, physical, social and cognitive growth of all students. (National Review of School Music Education, Australia, 2005)
2. Music students are more likely to be good citizens. A 10-year US study called Champions of Change found that high school students who participate in arts programs, including school bands, are less likely to be involved with drugs or crime or to have behavioural problems.
3. Learning music helps under-performing students improve. US researchers found that young children aged 5-7 who had been lagging behind at school had caught up with their peers in reading and were ahead in maths after seven months of music lessons. The children's classroom attitudes and behaviour improved too.
4. Musical training can enhance brain function. Brain imaging techniques (MRI) reveal that musical tasks such as sight-reading musical scores and playing music activate regions in all four lobes of the brain and parts of the cerebellum. Music is one of the few activities which engage the entire brain.
5. Incorporating music learning into other curriculum areas helps kids learn. A US study of fifth-grade students found that their attitudes to reading (and to music!) improved when music was incorporated into reading instruction. Other studies show that music students are better equipped to grasp maths and science concepts.
6. Playing music improves concentration, memory and the ability to express feelings. A 2001 study in Switzerland involving more than 1200 children found that, when three curriculum classes were replaced with music classes, young children made more rapid developments in speech and learned to read more easily. They also learned to like each other more, were less stressed and enjoyed school more.
7. Australian parents want their kids to learn music at school. Household surveys by the Australian Music Association show that nearly 90% of respondents believe music education should be mandatory in Australian schools. More people made submissions to the National Review of School Music Education than to any other Commonwealth Government enquiry.
8. Most kids miss out on effective music education while at school. Music Council of Australia research shows that as few as 2 out of 10 State schools are able to offer their students an effective music education. What does effective mean? The National Review of School Music Education says it is where the learning is continuous, sequential and developmental. That is, it starts early in a child's life, keeps going as the child progresses through school and is in step with the child's capabilities. Almost 9 out of 10 independent schools offer this kind of program. It should be available to ALL students in EVERY school!
9. Learning music is good for Australia's social and economic growth. The Australian business community wants kids to learn music at school. The Australian Chamber of Commerce and Industry (ACCI) last year delivered its Skills for a Nation: A Blueprint for Improving Training and Education Policy 2007-2017. Among its fifteen recommendations for improving children's education in the primary years was: "There should be an opportunity for all students to learn a musical instrument in primary school."
10.
Australia lags behind other countries in the provision of music in school. The world's top academic countries, such as Hungary, the Netherlands and Japan, have strong commitments to music in their schools from the early primary years. In Britain, where the problems in school music provision mirror those of Australia, the government has recently decided to fix the situation. Recognising the huge benefits to kids, it has announced its commitment, backed by more than £300 million, to make every British primary school a musical school.
Teacher resources and professional development across the curriculum
The National Council of Teachers of Mathematics recognizes the importance of geometry and spatial sense in its publication Curriculum and Evaluation Standards for School Mathematics (1989). Spatial understandings are necessary for interpreting, understanding, and appreciating our inherently geometric world. Insights and intuitions about two- and three-dimensional shapes and their characteristics, the interrelationships of shapes, and the effects of changes to shapes are important aspects of spatial sense. Children who develop a strong sense of spatial relationships and who master the concepts and language of geometry are better prepared to learn number and measurement ideas, as well as other advanced mathematical topics. (p. 48) And in its Principles and Standards for School Mathematics, NCTM has placed the Standard for Geometry at every grade level from preK to 12. Arithmetic is an important corner of mathematics, but too often we neglect the rest of the field. Geometry suffers because we have the mistaken impression that it doesn't become real, serious mathematics until it gets abstract and we deal with proof. But geometry is important, even in its less formal form. Here's why. Consider this: Children who play with Tinkertoy®, the construction system, develop informal experience and understanding of isosceles right triangles. They know that if the legs are blue, the hypotenuse is red. When they study geometry or learn the Pythagorean theorem, they already have the background textbook writers and teachers may unconsciously take for granted. Children who miss out on playing with triangles, for whatever reason, must get this experience and understanding somewhere else. So teachers, be watchful. When you see a student who "just doesn't get it," you might ask yourself: is it a lack of talent or a lack of experience? Think about the out-of-school experiences that might have given the student the needed background, and try to provide something that serves the same purpose in the classroom. The activities in this lab will help you bring this practice to your teaching. Before you try them, read the introduction to each category of activities: shape and space. It outlines the rationale for teaching the topic, briefly describes the activities, explains how the activities relate to different grade levels or to daily life, and connects the topic to national standards. Then follow the links to the activities themselves. There you can access a background page that elaborates on the rationale and the grade-level information. You may also find additional connections to standards for that specific activity as well as related resources for investigating the topic further. Collectively, the activities explore sophisticated mathematics without using formal geometry. All you have to do is think about shape and space, and maybe do a little calculation.
Quantum computing involves using the weird properties of quantum mechanics to build a computer. We haven’t built one that works yet, but if a quantum computer is built it will be a very different computer than the ones we’re using now. When you think of how a computer like your laptop works, information is encoded in a series of zeros and ones – or bits. These bits correspond to different voltages in parts of the circuitry. When you try to process information, the processor inside your computer changes voltages very, very rapidly, changing zeros into ones and ones into zeros. A normal computer works by doing lots of individual operations extremely quickly. A quantum computer is fundamentally different. One of the aspects of a quantum computer is that instead of having bits that may have a value of zero or one, information is encoded by quantum bits, which can have the value of both zero and one at the same time. Making a quantum bit is hard, but people have had some success with tiny particles of light, or with electrons around an atom. The general idea is that you try to connect a bunch of these quantum bits together in a process called entanglement: if you imagine you had two entangled quantum bits, you could have the first in a state of zero or one, and the second in a state of zero or one. So the system together could be in 00, 10, 01, or 11 – four possibilities. In a quantum computer they’re actually in all of those states at the same time when you’re not observing it, and they ‘pick‘ one of these states to be in when you are observing. “A quantum computer would look at every possible solution to a problem, because it can be in many configurations at the same time, and then settle on the right one.” Now imagine you keep adding more and more quantum bits; then the system can be in a huge number of possible states. You can use this complexity to calculate extremely difficult problems that would be impossible with a normal computer. One of the challenges is trying to encode a problem in this system of quantum bits – say, asking it to crack a code or predict the weather. Once you’ve done that you can let these quantum bits evolve, and they will – hopefully – take the configuration that represents a solution to your problem. What has happened is that the quantum computer has looked at every possible solution to this problem, because it is possible for it to be in many configurations at the same time, and settled on the right one – something a normal computer would take forever to do. So that’s how it works in theory. The difficulties are finding materials to make these quantum bits. As I said, you could use electrons and atoms, or you could use photons, little bits of light… but it’s challenging to get these things to become entangled and to stay that way. The most we can do now is stick three, four, maybe five of these quantum bits together. The other challenge is: how do you get this computer to do something useful and then read information out of it? But people are making lots of progress on this, and it’s very interesting because it’s considered a potentially revolutionary technology. It takes problems that would take you billions of years to solve with even a super-computer, and makes them theoretically solvable in an instant. That includes things like finding the factors of very large numbers, which is the basis of encryption technology right now.
Whenever you do a transaction on the internet and you send your credit card details, the reason why people can’t intercept that information is that it is very hard to factor large numbers. People are looking at this quantum computing technology very, very closely, but the technological hurdles are still enormous. But who knows? 10 years, 20 years, 30 years, maybe these will be solved.
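The "two entangled bits give four basis states" idea above is easy to play with numerically. Below is a minimal classical sketch (not a real quantum computer) that stores an n-qubit register as a vector of 2^n amplitudes and simulates one measurement; the function names and the equal-amplitude starting state are my own illustrative assumptions, not something from the article.

import random

def uniform_superposition(n_qubits):
    """Toy state vector: 2**n_qubits equal amplitudes ('all states at once')."""
    dim = 2 ** n_qubits
    amp = (1.0 / dim) ** 0.5            # equal magnitude for every basis state
    return [amp] * dim

def measure(state):
    """Collapse the state: pick one basis state with probability |amplitude|^2."""
    bits = len(state).bit_length() - 1   # number of qubits (len is a power of two)
    r, cumulative = random.random(), 0.0
    for index, amp in enumerate(state):
        cumulative += abs(amp) ** 2
        if r < cumulative:
            return format(index, f"0{bits}b")
    return format(len(state) - 1, f"0{bits}b")

state = uniform_superposition(2)         # 00, 01, 10, 11 -- four possibilities
print(measure(state))                    # prints one of: 00, 01, 10, 11

Each added qubit doubles the length of the amplitude vector, which is exactly why simulating large quantum registers on a normal computer becomes hopeless, and why real quantum hardware is so interesting.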
More on Geostationary Orbits
By Dr. T.S. Kelso
In our last column, we discussed the basics of the geostationary orbit, describing the unique characteristics which make this particular orbit so valuable. In this issue, I would like to cover some operational considerations which can be important when working with satellites in these orbits. In particular, I would like to discuss how to determine the location of a geostationary satellite—relative to the earth's surface and any observer on its surface—and how the sun's position can affect onboard power management and communications.
Locating Geostationary Satellites
Ease of tracking—or, rather, the lack of tracking—is one of the primary characteristics of the geostationary orbit which make it so valuable. An observer on the ground can simply point an antenna toward a fixed point in space and then forget it—no tracking is required. However, before the antenna can be pointed, the observer must first determine where the satellite is located. As we saw in our series on orbital coordinate systems (in the September/October 1995, November/December 1995, and January/February 1996 issues of Satellite Times), the first step to determining the location of a satellite relative to an observer is to determine both the satellite and observer's position in the same coordinate system. For this development, we are going to use the Earth-Centered Fixed (ECF) coordinate system—latitude, longitude, and radius (or altitude)—as our common coordinate system. As it turns out, one of the common ways of expressing a geostationary satellite's position is to specify its longitude—that is, the longitude on the equator over which the satellite appears to hover. This information can be obtained from various sources including the "Geostationary Satellite Locator Guide" found in every issue of Satellite Times. This guide is generated using the latest two-line element sets and determines each satellite's longitude at its ascending node. For the satellite to be geostationary, of course, its latitude must be zero and its altitude must be 35,786 kilometers (for this development, we will assume a true geostationary orbit and a spherical earth). Knowing the longitude of the satellite and the latitude and longitude of the observer, we can now determine where to look. If R is the radius of the earth, r is the geostationary orbit radius (R plus the 35,786-kilometer altitude), λ is the satellite's longitude, θ is the observer's longitude, and φ is the observer's latitude, then the satellite and observer's ECF positions are:

r_sat = (r cos λ, r sin λ, 0)
r_obs = (R cos φ cos θ, R cos φ sin θ, R sin φ)

and the range vector is the satellite's position minus the observer's position:

ρ = r_sat − r_obs

To calculate azimuth and elevation, we use the same coordinate transformation described in "Orbital Coordinate Systems, Part II" in the November/December 1995 issue of Satellite Times. As an example, let's calculate the position of Galaxy 4 from Pasadena, California. Using these values yields an azimuth to the satellite of 148.25°, an elevation of 45.32°, and a range of 37,390 km—values pretty close to the true values. While this approach can be used to produce good estimates, these are probably not calculations you would want to do by hand (although they can be done fairly easily using a spreadsheet). Plus, if you do not know the satellite's longitude, you will need to start from the satellite's orbital elements, further complicating the process.
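As the column notes, the arithmetic is spreadsheet-simple, and it fits just as easily in a short script. The sketch below assumes, as the article does, a spherical earth and a true geostationary orbit; the function name and the south/east/zenith rotation used to turn the range vector into azimuth and elevation are my own illustrative choices (the standard topocentric transformation), not code from TrakStar.

import math

def geo_look_angles(sat_lon_deg, obs_lat_deg, obs_lon_deg):
    """Azimuth/elevation/range to a geostationary satellite (spherical earth)."""
    R, r = 6378.137, 42164.0                      # earth radius, geo orbit radius (km)
    lam = math.radians(sat_lon_deg)
    phi, theta = math.radians(obs_lat_deg), math.radians(obs_lon_deg)

    # ECF positions of satellite and observer
    sat = (r * math.cos(lam), r * math.sin(lam), 0.0)
    obs = (R * math.cos(phi) * math.cos(theta),
           R * math.cos(phi) * math.sin(theta),
           R * math.sin(phi))
    rho = [s - o for s, o in zip(sat, obs)]       # range vector

    # Rotate into the observer's local south/east/zenith frame
    sin_p, cos_p = math.sin(phi), math.cos(phi)
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    south = sin_p * cos_t * rho[0] + sin_p * sin_t * rho[1] - cos_p * rho[2]
    east = -sin_t * rho[0] + cos_t * rho[1]
    zenith = cos_p * cos_t * rho[0] + cos_p * sin_t * rho[1] + sin_p * rho[2]

    rng = math.sqrt(sum(c * c for c in rho))
    el = math.degrees(math.asin(zenith / rng))
    az = math.degrees(math.atan2(east, -south)) % 360.0
    return az, el, rng

# Galaxy 4 (99 W) from Pasadena (34.15 N, 118.15 W) -- illustrative coordinates
print(geo_look_angles(-99.0, 34.15, -118.15))     # ~ (148 deg, 45 deg, 37390 km)

Running it for Galaxy 4 from Pasadena reproduces the article's roughly 148° azimuth, 45° elevation, and 37,390-km range.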
Of course, you can use a program like TrakStar to calculate the latitude, longitude, and altitude or the look angles (azimuth, elevation, and range) of any satellite (geostationary or otherwise) for any time interval desired using two-line element sets found on the CelesTrak WWW site.
Power Management Issues
Geostationary orbits present some interesting challenges for power management. To understand these challenges, we must first understand a little about the attitude (orientation in space) of geostationary satellites and the position of the geostationary orbit relative to the sun. All modern geostationary spacecraft use one of two forms of stabilization to maintain their attitude: dual-spin or three-axis stabilization (see Figure 1). With dual-spin stabilization, the satellite takes the shape of a cylinder which rotates about its long axis. This type of satellite has two sections: a spinning section upon which the solar arrays are mounted and a despun section where the communications antennas are mounted. The spinning section provides basic stabilization and can rotate as fast as 100 RPM (in the case of the early GOES satellites). The despun section rotates, too, albeit at a much slower rate of one rotation per orbit (day)—keeping the antennas pointed at the earth and preventing the satellite from going into a flat spin (which is the natural tendency).
Figure 1. Geostationary Spacecraft Attitude Types
With three-axis stabilization, the spacecraft attitude is maintained through the use of momentum wheels or control moment gyros. The body of the spacecraft does rotate once per orbit (day) to keep the antennas pointed at the earth. The solar arrays are mounted on paddles which also rotate once per day to keep them pointed toward the sun. In both cases, it should be noted that the rotation axis of the satellite is perpendicular to the satellite's orbital plane—which for geostationary orbits is the equatorial plane. We will see why this is important shortly. As with all satellites, the solar arrays on geostationary satellites are subject to a number of factors which can result in significant fluctuations in the amount of power available to onboard systems. To begin with, the position of the satellite relative to the sun varies throughout the year. As the earth goes around its orbit, its distance from the sun changes from a minimum of 0.983 astronomical units (AUs—the mean distance from the earth to the sun is approximately 1 AU or 149,597,870 km) to a maximum of 1.017 AU—a difference of roughly 5,000,000 km. If we consider the energy received from the sun at 1 AU to be 100%, then the energy received varies from 97% to 103%, as shown in Figure 2.
Figure 2. Solar Distance Efficiency
Not only isn't the earth's orbit truly circular, but the plane of the earth's equator does not lie in the plane of the earth's orbit (the ecliptic). Earth's seasons are a direct result of this circumstance. From our vantage on earth, it appears that the sun slowly moves from 23° below the equatorial plane (at the winter solstice) to 23° above the equatorial plane (at the summer solstice) and back again over the course of a year. As seen in Figure 3, our geostationary satellite sees the same thing.
Figure 3. Sun-Earth-Satellite Geometry
The apparent motion of the sun above and below the equatorial plane has two effects. First, it changes the angle of incidence of solar energy received on the solar arrays since they must rotate about an axis perpendicular to the equatorial plane.
As a result, the amount of solar energy absorbed by the solar arrays drops off as a factor of cos(δ), where δ is the sun's declination (angle relative to the equatorial plane). If we consider the amount of energy received when the sun's rays are perpendicular to the solar arrays to be 100%, then the energy received drops to less than 92% at the solstices, as shown in Figure 4.
Figure 4. Solar Angle Efficiency
From Figure 3 we can also see that because of this sun-earth geometry, the geostationary orbit is usually outside the cone of the earth's shadow. That is, until around the times of the vernal and autumnal equinoxes (the beginning of spring and fall). At these times, geostationary satellites enter their eclipse season, when they can spend as much as 70 minutes of every day in shadow. These seasons run from the end of February through the middle of April and the beginning of September through the middle of October. The percentage of sunlight received for geostationary satellites is shown in Figure 5. To prepare for eclipse seasons, the satellite operators must ensure that the spacecraft batteries are properly conditioned to pick up the load during each day's eclipse.
Figure 5. Eclipse Efficiency
If we combine the effects of variations in solar distance, solar angle, and eclipses over the course of a year, we get the result in Figure 6. As can be seen in this figure, total solar energy available varies 12%—from a low of 89% to a high of 101%.
Figure 6. Total Annual Efficiency
If we also factor in the effects of degradation on the solar cells and their optical coverings due to the space environment and look at a nominal seven-year satellite lifetime, we get the graph in Figure 7. Typical results show the optical covering degrades about 7% the first year before stabilizing, while the solar cells degrade about 3% their first year and 2% each subsequent year. As can be seen from the graph, the power levels drop from a high of 99% overall efficiency to a low of 72%. When designing the spacecraft power subsystem, that means if 7.5 kW of power are required for normal operations, the power subsystem must be designed to provide almost 10 kW initially so that available power doesn't drop below the threshold before the end of the planned satellite lifetime.
Figure 7. Total Lifetime Efficiency
In addition to planning for variations in spacecraft power, satellite operators and users also need to plan for communications outages (or degradation) around the eclipse seasons. As the sun sweeps across the sky each day and gradually moves north or south with the seasons, there will come a time twice each year when the sun is directly behind a geostationary satellite as seen from a ground-based antenna. When this happens, the flood of solar radio energy into the antenna's main lobe can severely disrupt communications. Fortunately, such disruptions only last a couple of minutes. You may have actually seen one of these outages while watching your favorite cable channel (most of which are transmitted via geostationary satellites). For observers in the Northern Hemisphere, this happens prior to the vernal equinox and after the autumnal equinox. We covered a lot of ground in this article—orbital mechanics, spacecraft attitude, power management, and even materials—all factors important in the design and operation of any satellite, but particularly important for geostationary satellites. I hope I've shown how these various areas interact and overlap and, in the process, shed some light on the topic of spacecraft design.
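To make the sizing rule above concrete, here is a minimal sketch that multiplies the worst-case solar-distance, sun-angle, and degradation factors into a single end-of-life efficiency and backs out the required array size. The 7%/3%/2% degradation rates and the 7.5-kW requirement come from the article; treating the factors as independent multipliers, and the exact aphelion and declination values, are my simplifying assumptions.

import math

def end_of_life_efficiency(years=7):
    """Worst-case fraction of nameplate power left after `years` on orbit."""
    distance = (1.0 / 1.017) ** 2                # ~97% at aphelion
    sun_angle = math.cos(math.radians(23.44))    # <92% at the solstices
    optics = 0.93                                # optical covering: ~7% loss, first year only
    cells = 0.97 * (0.98 ** (years - 1))         # cells: 3% first year, 2% each later year
    return distance * sun_angle * optics * cells

needed = 7.5  # kW required for normal operations (from the article)
eol = end_of_life_efficiency(7)
print(f"EOL efficiency: {eol:.2f}, array to install: {needed / eol:.1f} kW")

With these numbers the end-of-life efficiency lands near the article's 72% figure, so a 7.5-kW load calls for roughly a 10-kW array at the beginning of life.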
As always, if you have any questions, please feel free to write me at [email protected]. Until next time, keep looking up!
Dr. T.S. Kelso
Were you looking for information about botulism? Botchalism is a common misspelling of botulism. Botulism is an illness caused by dangerous toxins produced by a specific bacterium. These toxins prevent certain nerves from functioning, which can ultimately lead to paralysis or death. Botulism is serious but rare, occurring in about 110 people in the United States each year. Some of the early symptoms of botulism include drooping eyelids, blurred vision, and muscle weakness. People who survive an episode of botulism poisoning may have fatigue and shortness of breath for years. (Click Botulism to find out what causes the disease, for a list of other symptoms that may occur, and to learn about available treatment options. You can also click any of the links in the box to the right for more information.)
Fall is just around the corner, and the apples will soon be ripening on the tree. So it’s the perfect time for an apple unit study! In this post I’m sharing 10 different apple Montessori activities that would be great for 2 to 5 year olds. Note: For more apple activities, see my Apple Unit Study page. Apple fine motor Montessori activity I placed some red glass gems in a small bowl, some scissor scoops, and a silicone mold with apple-shaped holes on a tray. Kids can use the scissor scoops to pick up the red glass gems and place them one-by-one into the apple-shaped holes. This activity not only supports the development of fine motor skills, it also promotes an understanding of one-to-one correspondence. Red and white bead patterning Montessori activity On this tray I placed red and white beads into a small bowl along with a pipe cleaner. I provided a control card at the top showing an ABAB pattern my kids could make with the beads. Comparing apple weights activity This activity invites kids to compare the weights of two apples. I placed two apples of different sizes on a tray along with our Learning Resources balance scale set and a handful of counting bears. Kids put an apple in each of the buckets on the balance scale, then they add bears to the side that weighs less to make the weights equal. Kids then determine how many more or fewer counting bears one apple weighs than the other. Apple coloring patterns I printed this Color in the Apple Patterns page from 3Dinosaurs. I provided red, yellow, and green pencils for my kids to make their own apple patterns. Apple-themed Montessori number activity For this activity, I grabbed one of our wooden 3-part card trays from Montessori Services. I filled the large space on the left with green craft sand. I placed sandpaper numbers in the smallest compartment in the upper right of the tray. And I placed a handful of red glass gems in the medium compartment on the lower right. My kids were responsible for (1) tracing the number on the sandpaper numbers card, (2) writing the number in the sand, and (3) laying the number card on the ground and placing the corresponding number of red glass gems underneath. Put the apples on the tree counting activity with dice For this activity I downloaded the Apple Count worksheet from A Teaching Mommy’s Apple Preschool Pack. I paired it with some red glass gems and our jumbo foam dice. My kids rolled the dice, counted the number of dots on the dice, and then placed that many red glass gems on the tree until it was all filled with “apples.” Put the apples on the tree counting activity with stickers For this activity I used another copy of the Apple Count worksheet from A Teaching Mommy’s Apple Preschool Pack. On the tree I wrote the numbers that XGirl was working on at the time. On a set of apple stickers I drew dots corresponding to the numbers on the tree. XGirl worked to count the dots and match them to the numbers on the tree. Put the apples on the tree number recognition game This is a game I invented to help my daughter learn to recognize numbers 5-10. I used our wet-erase markers to draw a big tree on our back window. (Can you see some of the outline in the photo below?) I also cut apple shapes from green foam, and wrote the numbers 5-10 on them. I put the numbered apples into a basket and invited XGirl to play. I called out a number and she had to find that number, dampen the back of the foam with a bit of water, and then stick the foam apple to the tree.
Apple letter writing Montessori tray On this tray I placed a few sandpaper letters for the letters XGirl was working on. I also filled a smaller tray with red craft sand. I placed a small apple on the tray just for effect. XGirl traced the sandpaper letters and then wrote the letter in the red craft sand. Apple 3-part cards I also found these cute apple 3-part cards from ABC Teach, although you can find others at Montessori Print shop and Puzzle Heads. We used them to learn the parts of an apple. We also used them when we dissected an apple. More apple resources More apple posts from Gift of Curiosity: - Apple Unit Study - Apple Printables Pack - Apple Do-a-Dot Printables - Apple taste testing - Apple rotting experiment - Dissecting an apple - Apple sensory bin - Apple Montessori activities Products mentioned in this post:
A space mission depends on scientists and engineers working together --- teamwork is critical. Everyone has a special job to do to achieve the mission goals. Once the scientists and engineers agree on just what the spacecraft will do, the mission engineers program the operating instructions into a series of commands that the spacecraft can execute. Months before launch, hundreds of things are decided -- how much fuel to carry, how the spacecraft can be protected during its journey, and what the scientific instruments will do. Mission scientists study different things about a planet or moon -- geology, weather, chemistry, and magnetic fields, to name a few. They identify which instruments can best collect the information they need. The scientists and engineers determine how to balance the mission goals against the limited resources on the spacecraft. When an instrument makes a measurement or observation, it uses power. If the spacecraft needs to maneuver to get a better look at something, this consumes fuel. On-board power and fuel are limited. And for Galileo, space on the tape recorder to store scientific data is the most precious resource of all. Everything a spacecraft does is planned in advance for the best use of limited resources. Next: Guiding the Spacecraft
Universal Acceleration (UA) is a theory of gravity in the Flat Earth Model. UA asserts that the Earth is accelerating 'upward' at a constant rate of 9.8 m/s^2. This produces the effect commonly referred to as "gravity". The traditional theory of gravitation (e.g. Newton's Law of Universal Gravitation, the General Theory of Relativity, etc.) is incompatible with the Flat Earth Model because it requires a large, spherical mass pulling objects uniformly toward its center. According to Flat Earth Theory, gravity does not exist. Instead, there is a force that produces effects identical to those observed from the surface of the earth. This force is known as "Universal Acceleration" (abbreviated as UA). Objects on the earth's surface have weight because all sufficiently massive celestial bodies are accelerating upward at the rate of 9.8 m/s^2. The mass of the earth is thought to shield the objects atop it from the direct force of UA. Alternatively, it is possible that the force of UA can actually pass through objects, but its effect on smaller bodies is negligible (similar to gravity in RET cosmology, which only has a noticeable effect on very large objects). However, not all Flat Earth models dismiss the theory of gravity. The Davis Model proposes that the earth is an infinite plane exerting a finite gravitational pull (g), which is consistent with Gauss's Law. The phenomenon we observe every day when falling is currently substantiated in modern physics by what is called "The Equivalence Principle". This principle states that in a relative frame of reference, it is not possible to locally discern whether the frame is accelerating upwards or whether the object inside the frame is affected by gravity. Two frequently asked questions are, "How is it that I can jump and then come back down?" and "Why is it that I feel as though I'm being pulled toward the earth?" Since the Earth is pushing you upwards, you are moving at the same speed as the Earth, much as a car pushes you along when you are sitting in it. When you jump, your upward velocity is, for a moment, greater than the Earth's, so you rise above it. But after a few moments, the Earth's increasing velocity due to its acceleration eventually catches up. Accelerating to the Speed of Light It is a common misconception that if we were to continuously accelerate over time, we would eventually be moving faster than the speed of light. This is, of course, incorrect, as nothing with mass may do so. According to the Special Theory of Relativity, the Earth can accelerate forever without reaching or passing the speed of light. Relative to an observer on Earth, the Earth's acceleration will always be 1g. Relative to an inertial observer in the universe, however, the Earth's acceleration decreases as its velocity approaches c. It all depends on our frame of reference to measure and explain the Earth's motion. Thus, despite what most people think, there is no absolute "speed" or velocity of the Earth. A brief explanation of special relativity Special relativity (SR) (also known as the special theory of relativity or STR) is the physical theory of measurement in inertial frames of reference proposed in 1905 by Albert Einstein (after the considerable and independent contributions of Hendrik Lorentz, Henri Poincaré and others) in the paper "On the Electrodynamics of Moving Bodies".
It generalizes Galileo's principle of relativity–that all uniform motion is relative, and that there is no absolute and well-defined state of rest (no privileged reference frames)–from mechanics to all the laws of physics, including both the laws of mechanics and of electrodynamics, whatever they may be. Special relativity incorporates the principle that the speed of light is the same for all inertial observers regardless of the state of motion of the source. This theory has a wide range of consequences which have been experimentally verified, including counter-intuitive ones such as length contraction, time dilation and relativity of simultaneity, contradicting the classical notion that the duration of the time interval between two events is equal for all observers. (On the other hand, it introduces the space-time interval, which is invariant.) Combined with other laws of physics, the two postulates of special relativity predict the equivalence of matter and energy, as expressed in the mass-energy equivalence formula E = mc^2, where c is the speed of light in a vacuum. The predictions of special relativity agree well with Newtonian mechanics in their common realm of applicability, specifically in experiments in which all velocities are small compared to the speed of light. The theory is termed "special" because it applies the principle of relativity only to frames in uniform relative motion. Special relativity reveals that c is not just the velocity of a certain phenomenon, namely the propagation of electromagnetic radiation (light), but rather a fundamental feature of the way space and time are unified as spacetime. A consequence of this is that it is impossible for any particle that has mass to be accelerated to the speed of light. Why doesn't the Earth's velocity reach the speed of light? Under constant proper acceleration a, the velocity relative to an inertial observer is v(t) = at / sqrt(1 + (at/c)^2), and the limit of v(t) as t -> infinity is c. As you can see, it is impossible for dark energy to accelerate the Earth past the speed of light. Explanations for Universal Acceleration There are several explanations for UA. As it is difficult for proponents of Flat Earth Theory to obtain grant money for scientific research, it is nigh on impossible to determine which of these theories is correct. One model proposes that the disk of our Earth is lifted by dark energy, an unknown form of energy which, according to globularist physicists, makes up about 70% of the universe. The origin of this energy is unknown. Another model states that there is an infinite plane of exotic matter somewhere below the disk, pushing in the opposite manner of traditional gravity. This is a recent theory, and is in progress. Alternatives to Universal Acceleration The Davis model, suggested by John Davis, states that gravity does indeed exist. In this model, the Earth is an infinite disk with finite gravity. This was mathematically proven with the following: by Gauss's Law, an infinite plane of uniform surface density σ produces a gravitational field g = 2πGσ that is finite and independent of the distance from the plane. In the FE universe, gravitation (not gravity) exists in other celestial bodies. The gravitational pull of the stars, for example, causes observable tidal effects on Earth. Q: Why does gravity vary with altitude? A: The moon and stars have a slight gravitational pull. In the Round Earth model, terminal velocity happens when the acceleration due to gravity is equal to the acceleration due to drag. In the Flat Earth model, however, there are no balanced forces: terminal velocity happens when the upward acceleration of the person is equal to the upward acceleration of the Earth. Q: If gravity does not exist, how does terminal velocity work?
A: When the acceleration of the person is equal to the acceleration of the Earth, the person has reached terminal velocity.
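The speed-of-light limit quoted above is easy to check numerically. Here is a minimal sketch of the standard constant-proper-acceleration formula v(t) = at/sqrt(1 + (at/c)^2); the sample durations are arbitrary choices of mine, and the code illustrates only the special-relativity arithmetic, not any claim about the Earth itself.

import math

C = 299_792_458.0        # speed of light, m/s
G_ACCEL = 9.8            # constant proper acceleration, m/s^2

def velocity_after(seconds):
    """Velocity relative to an inertial observer after constant 1g proper acceleration."""
    at = G_ACCEL * seconds
    return at / math.sqrt(1.0 + (at / C) ** 2)

for years in (1, 10, 100, 1000):
    t = years * 365.25 * 24 * 3600
    print(f"{years:>5} years: v = {velocity_after(t) / C:.9f} c")

After one year of 1g acceleration the naive Newtonian speed would already exceed c, but the relativistic result is about 0.72c, and the printed fractions creep toward 1 without ever reaching it.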
Journals are frequently used for students to practice reflective writing, but the entries are usually only seen by the student writer and his/her teacher. In this activity we will try another form of reflective writing, in which students communicate their thoughts or feelings about a topic of relevance to them in the form of a note or letter to a classmate. Let’s call it “C-Mail”.
- Box with lid, covered and decorated
- Writing paper
- Have students design a form that can be used for C-Mail. Print enough copies to have plenty on hand for all students!
- Each student writes and receives C-Mail messages on thoughts or feelings associated with the issues they’ve studied concerning the Lake Pontchartrain Basin.
- Fold each note and write the name of the recipient on the outside before placing it in the box.
- Designate a certain time of the day for reading and writing C-Mail.
- Remind students that they are not to send personal messages about anything other than the designated topic(s).
SUGGESTED “STARTER STATEMENTS” FOR STUDENTS:
- One thing I learned is......
- One question I have is.....
- I would like to .........
- This connects with what we learned about.....
- The best part is......
- I (am, am not) prepared for a major hurricane, because.....
- Auto emissions are largely responsible for increasing amounts of CO2 in the atmosphere. I can help reduce those numbers by ......
- What other starter statements can you add?
Two of the most common methods used to measure earthquakes are the Richter scale and the moment magnitude scale. The Mercalli scale, by contrast, measures the effects of an earthquake at different locations. The Richter scale calculates the strength of an earthquake based on the amplitude of the largest wave recorded on a seismometer and the distance between the earthquake and that seismometer. It was developed to measure earthquakes in California. The moment magnitude scale is the preferred scale because it covers a wider range of magnitudes and can be applied globally. The scale is based on the earthquake's moment release, a measurement that combines the distance a fault moved and the amount of force required to move it.
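To make "moment release" concrete: seismologists combine fault area, slip distance, and rock rigidity into a seismic moment M0, then convert it to a magnitude with the standard Hanks-Kanamori relation Mw = (2/3)(log10(M0) - 9.1), with M0 in newton-meters. The sketch below is a minimal illustration of that conversion; the example fault dimensions are invented for demonstration, not taken from any real event.

import math

def moment_magnitude(rigidity_pa, area_m2, slip_m):
    """Moment magnitude from seismic moment M0 = rigidity * fault area * slip."""
    m0 = rigidity_pa * area_m2 * slip_m          # seismic moment in newton-meters
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)  # Hanks-Kanamori relation

# Hypothetical fault: 30 GPa rock rigidity, 20 km x 10 km rupture, 1.5 m of slip
print(round(moment_magnitude(30e9, 20_000 * 10_000, 1.5), 1))  # ~ Mw 6.6

Because the magnitude depends on the logarithm of M0, each whole step on the scale corresponds to about 32 times more energy release, which is why the scale works across such a wide range of earthquakes.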
Planet Earth is made up of all kinds of rocks. When you know the type of rock you have in your hand, you will know something about the history of the place it came from. What's a Rock? What's a Mineral? Which Mineral? A mineral is a rock made up of a single material throughout. Minerals have names, such as amethyst, quartz, graphite, and so on, and can each be identified. A rock might be a mineral, or it might be a bunch of different minerals put together. Once you know you have a single mineral, you can identify it. First, the color is often helpful. Quartz is yellowish-brown on the outside but often a shiny clear-white inside. Granite is often gray speckled with white and black, while calcite crystals are often a transparent pale yellow, almost as clear as glass. Precious stones often vary only in color – if a crystal of the mineral called corundum is red, it's a ruby, but if it's blue or any other color, it's a sapphire! Hardness is another important clue. Some minerals are much, much harder than others. If a mineral can scratch glass, it's harder than glass. And if it can scratch another mineral, it's harder than that mineral. Diamond is the hardest mineral. It can scratch anything, but almost nothing can scratch a diamond—except another diamond! Softer minerals include talc, pumice, and the gypsum inside the wallboard in your walls. Is the mineral heavy for its size? Then it has what's called high density. A dense mineral weighs a lot even if it's small. A less dense mineral, like pumice, is as light as Styrofoam. You can pick up a big block of pumice easily, but don't try lifting a piece of basalt the size of your head! It probably weighs more than you'd expect. If color, density and hardness don't positively identify your mystery mineral, you may be tempted to smash it with a hammer. That can actually help reveal another important mineral property: the way it breaks. The way a mineral breaks is key to identifying it. Slate, for example, breaks into flat sheets when smashed, which is why it's convenient to use it for blackboards. Calcite's cleavage breaks it into slanted, box-like rhombohedra, and fluorite breaks into eight-sided shapes. Armed with your knowledge, you can identify dozens of different kinds of minerals. And, unlike birds, you don't have to sneak up on them! Though you can if you want to. They won't mind.
Books for Rock Hounds
You can check out these resources from Central Rappahannock Regional Library to help you identify the minerals you find and see how the rock cycle renews the world around us, from the mountains to the beaches.
Rocks on the Web
Interactive Rock Cycle Animation A very cool look at the way rocks form. The Mineral Gallery Examine minerals by class (elements, oxides, carbonates, etc.). View the chemical formula. Lists properties and histories. Pictorial. The Rock Identification Science Project A virtual rock lab where you figure out the types of rocks. This site teaches proper identification techniques and includes a good chart to help. Rock Identification Tables Once you're comfortable with the rocks you find all around you, check out this table for identifying more unusual rocks. You will need to know whether your rock is igneous (formed by fire), sedimentary (formed by sand or clay, may have fossils), or metamorphic (an igneous or sedimentary rock that has been changed by fire or pressure, or both). Want to learn more?
Check out these books from the library: Every rock tells a story, whether it was born in fire (igneous), washed into the sea and cemented by tides (sedimentary), or changed by other natural forces (metamorphic). Wherever you are, you can learn about rocks, and understanding the structure of the Earth can help you understand volcanoes, tsunamis, sand dunes, and climate change. Also available in print.
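The color-hardness-density workflow described earlier maps naturally onto a small lookup table. Below is a toy sketch of that identification logic; the mini-catalog of properties is abbreviated and approximate (real field guides list many more minerals and much tighter ranges), so treat it as an illustration rather than a field tool.

# Toy mineral identifier based on the three clues from the article:
# color, Mohs hardness (the scratch test), and density (heft for its size).
MINERALS = {
    "talc":     {"colors": {"white", "gray", "green"},  "hardness": 1.0, "density": 2.7},
    "gypsum":   {"colors": {"white", "clear"},          "hardness": 2.0, "density": 2.3},
    "calcite":  {"colors": {"clear", "yellow"},         "hardness": 3.0, "density": 2.7},
    "fluorite": {"colors": {"purple", "green", "blue"}, "hardness": 4.0, "density": 3.2},
    "quartz":   {"colors": {"clear", "white", "brown"}, "hardness": 7.0, "density": 2.65},
    "corundum": {"colors": {"red", "blue"},             "hardness": 9.0, "density": 4.0},
}

def identify(color, scratches_glass, heavy_for_size):
    """Return candidate minerals matching the observed clues."""
    candidates = []
    for name, props in MINERALS.items():
        if color not in props["colors"]:
            continue
        if scratches_glass != (props["hardness"] > 5.5):   # glass is ~5.5 on Mohs
            continue
        if heavy_for_size != (props["density"] > 3.0):
            continue
        candidates.append(name)
    return candidates

# A red crystal that scratches glass and feels heavy: the ruby form of corundum
print(identify("red", scratches_glass=True, heavy_for_size=True))  # ['corundum']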
An effort has been made to italicize technical words or phrases and clearly define them in the glossary. Fertilization, a common practice in agriculture, is also useful for enhancing tree growth. The primary plant nutrients in silvicultural fertilizers are nitrogen and phosphorus. Phosphorus-deficient sites are generally the poorly drained clays and sands of the Atlantic Coast flatwoods. These sites often exhibit dramatic responses to phosphorus fertilizer. To determine the effectiveness of nitrogen fertilizers, factors such as soil moisture, soil depth, stand stocking, and existing nutrient levels must be considered. Fertilizers can be applied safely with ground and air equipment. Research shows that little or no measurable increase in nitrogen or phosphorus occurs in streams following forest fertilization, provided care is taken not to apply the fertilizer directly on open water. With proper planning and site selection, forest fertilization poses little risk of environmental harm.
- Use fertilizer, in prescribed amounts, only where site characteristics indicate that tree growth will be improved.
- Protect water bodies with appropriate buffers to ensure fertilizer is not applied to them directly.
- Properly dispose of fertilizer containers.
Avoid:
- Applying fertilizer prescribed for silvicultural purposes to water bodies, such as streams, ditches, or ponds.
Dehydration in sick children is often a combination of refusing to eat or drink anything and losing fluid from vomiting, diarrhea, or fever. Infants and children are more likely to become dehydrated than adults because they weigh less and their bodies turn over water and electrolytes more quickly. The elderly and people with illnesses are also at higher risk. Other tests may be done to determine the cause of the dehydration (for example, a blood sugar level to check for diabetes). Drinking fluids is usually enough for mild dehydration. It is better to drink small amounts of fluid often (using a teaspoon or syringe for an infant or child), instead of trying to force large amounts of fluid at one time. Drinking too much fluid at once can bring on more vomiting. Electrolyte solutions or freezer pops are very effective; these are available at pharmacies. Sports drinks contain a lot of sugar and can cause or worsen diarrhea. In infants and children, avoid using water as the primary replacement fluid. Intravenous fluids and a hospital stay may be needed for moderate to severe dehydration. The health care provider will try to identify and then treat the cause of the dehydration.
Call 911 if you or your child has the following symptoms:
Call your health care provider right away if you or your child has any of the following symptoms:
- Blood in the stool or vomit
- Diarrhea or vomiting (in infants less than 2 months old)
- Dry mouth or dry eyes
- Dry skin that sags back into position slowly when pinched up into a fold
- Listlessness and inactivity
- Little or no urine output for 8 hours
- Sunken soft spot on the top of your infant's head
Call your health care provider if you are not sure whether you are giving your child enough fluids. Also call your health care provider if:
- You or your child cannot keep down fluids during an illness
- Vomiting has been going on for longer than 24 hours in an adult or longer than 12 hours in a child
- Diarrhea has lasted longer than 5 days in an adult or child
- Your infant or child is much less active than usual or is irritable
- You or your child is urinating much more than normal, especially if there is a family history of diabetes or you are taking diuretics
Even when you are healthy, drink plenty of fluids every day. Drink more when the weather is hot or you are exercising. Carefully monitor someone who is ill, especially an infant, child, or older adult. If you believe that the person is getting dehydrated, call your health care provider before severe dehydration develops. Begin fluid replacement as soon as vomiting and diarrhea start -- DO NOT wait for signs of dehydration. Always encourage a person who is sick to drink fluids. Remember that fluid needs are greater with a fever, vomiting, or diarrhea. The easiest signs to monitor are urine output (there should be frequent wet diapers or trips to the bathroom), saliva in the mouth, and tears when crying.
The Triangle and its Properties
PD is _____ Is QM = MR ?
2. Draw rough sketches for the following: In Δ ABC, BE is a median. In Δ PQR, PQ and PR are altitudes of the triangle. In Δ XYZ, YL is an altitude in the exterior of the triangle.
3. Verify by drawing a diagram whether the median and altitude of an isosceles triangle can be the same.
4. Find the value of the unknown exterior angle x in the following diagrams:
5. Find the value of the unknown interior angle x in the following figures:
6. Find the value of the unknown x in the following diagrams:
7. Find the values of the unknowns x and y in the following diagrams:
8. Is it possible to have a triangle with the following sides? (i) 2 cm, 3 cm, 5 cm (ii) 3 cm, 6 cm, 7 cm (iii) 6 cm, 3 cm, 2 cm
9. Take any point O in the interior of a triangle PQR. Is: (i) OP + OQ > PQ? (ii) OQ + OR > QR? (iii) OR + OP > RP? (The sum of any two sides of a triangle is greater than the third side.)
10. AM is a median of a triangle ABC. Is AB + BC + CA > 2AM? (Consider the sides of triangles Δ ABM and Δ AMC.)
11. ABCD is a quadrilateral. Is AB + BC + CD + DA > AC + BD?
12. ABCD is a quadrilateral. Is AB + BC + CD + DA < 2 (AC + BD)?
13. The lengths of two sides of a triangle are 12 cm and 15 cm. Between what two measures should the length of the third side fall?
14. PQR is a triangle, right-angled at P. If PQ = 10 cm and PR = 24 cm, find QR.
15. ABC is a triangle, right-angled at C. If AB = 25 cm and AC = 7 cm, find BC.
16. A 15 m long ladder reaches a window 12 m above the ground when placed against a wall at a distance a. Find the distance a of the foot of the ladder from the wall.
17. Which of the following can be the sides of a right triangle? (i) 5 cm, 6.5 cm, 6 cm (ii) 2 cm, 2 cm, 5 cm (iii) 1.5 cm, 2 cm, 2.5 cm In the case of right-angled triangles, identify the right angles.
18. A tree is broken at a height of 5 m from the ground and its top touches the ground at a distance of 12 m from the base of the tree. Find the original height of the tree.
19. Angles Q and R of a Δ PQR are 25° and 65°. Write which of the following is true: (i) PQ² + QR² = RP² (ii) PQ² + RP² = QR² (iii) RP² + QR² = PQ²
20. Find the perimeter of a rectangle whose length is 40 cm and whose diagonal is 41 cm.
21. The diagonals of a rhombus measure 16 cm and 30 cm. Find its perimeter.
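Problems 14 to 18, 20, and 21 all turn on the Pythagorean theorem. As a worked illustration (added here as a guide, not part of the exercise set), problem 16 goes as follows:

```latex
% Problem 16: the wall, the ground, and the ladder form a right triangle
% with the 15 m ladder as hypotenuse and the 12 m window height as one
% leg, so the distance a of the ladder's foot from the wall satisfies
\[
a^{2} + 12^{2} = 15^{2}
\quad\Longrightarrow\quad
a^{2} = 225 - 144 = 81
\quad\Longrightarrow\quad
a = 9\ \text{m}.
\]
% The same identity settles problem 17(iii):
% 1.5^2 + 2^2 = 2.25 + 4 = 6.25 = 2.5^2,
% so those lengths do form a right triangle, with the right angle
% opposite the 2.5 cm side.
```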
A servo is an electro-mechanical device that moves a precise amount based on an electronic signal. Under command, the servo will twist all the way clockwise or counter-clockwise. The motion is sometimes as much as a full revolution in total, but it may also be limited to something less, like 180 degrees. Servos are widely used in radio-controlled (RC) hobbies. For instance, you can connect a linkage to one and control the angle of the wheels on a toy RC car for steering. Servos are also used in robotics. Servos are typically controlled by commercial devices like an RC receiver or a microcontroller (a small computer). Every servo model has different specs. They have different rotation speeds and strengths, and they offer different amounts of rotation. But the control signals can be quite similar. A square wave, which is simply a voltage that swings from zero to battery voltage and back in a repeating chain, drives the signal pin. The time that the signal remains high is the pulse width. The pulses need to repeat about 400 times a second (400 Hz). A short pulse width of about 0.0006 seconds (0.6 ms) corresponds to full rotation one way. A long pulse width of about 0.0024 seconds (2.4 ms) corresponds to maximum rotation the opposite way. Any pulse width in between will proportionally result in a servo position between the two extremes, as the sketch below illustrates. The control signals are too fast and precise to be generated manually in any practical manner, so some sort of electronic circuit is needed to drive a servo. But this creates a complication for anyone building something from scratch. If you want to see if a car of your own design will steer properly, it would be useful to have a simple circuit that allows you to control the servo and put it through its paces. The above picture is a schematic for a circuit that does just that. Every turn of the knob that controls the variable resistor R2, a potentiometer (pot), creates a change in the circuit's square wave, and that creates a corresponding movement in the servo. This circuit is a variation on the typical 555 timer astable square wave circuit. Without the diodes, the on time of the pulse is controlled by (R1 + R2) * C2, and the off time is controlled by R2 * C2. So the on time can never be smaller than half the period (the rising-edge-to-rising-edge time), yet a servo needs a narrow pulse width for the minimum position. By adding one diode across R2, the on time becomes proportional to R1 * C2 and the off time proportional to R2 * C2, which is just what the servo needs. But the diode makes the two proportions different, so setting R1 equal to R2 does not result in a square wave with equal on and off times. By adding the second diode, we create a more elegant design: R1 and R2 become symmetrical in the circuit, and R1 equal to R2 then does create balanced on and off times for the square wave. The pot is made up of a resistor with a third wire that makes contact by friction somewhere in the middle of the resistor. This middle wire drags from one end to the other as the knob of the pot is turned. As the wiping wire gets closer to one end, the resistance between that end of the resistor and the wiper gets smaller, and at the same time the resistance to the farther end gets larger. But the sum of the two resistances always remains the same full amount of the pot; the wiper simply changes which side each portion of the resistor contributes to.
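Here is a minimal Python sketch of the proportional mapping just described, using the approximate endpoints quoted above (0.6 ms, 2.4 ms, roughly 400 Hz). Real servos vary between models, so treat these numbers as nominal:

```python
# Map a desired servo position (0.0 .. 1.0) to a pulse width, using the
# approximate endpoints quoted in the text; actual servos vary.

MIN_PULSE = 0.0006   # seconds: ~full rotation one way
MAX_PULSE = 0.0024   # seconds: ~maximum rotation the other way
FRAME     = 1 / 400  # seconds between pulse rising edges (~400 Hz)

def pulse_width(position: float) -> float:
    """Linear interpolation between the two pulse-width extremes."""
    position = min(max(position, 0.0), 1.0)   # clamp to the valid range
    return MIN_PULSE + position * (MAX_PULSE - MIN_PULSE)

for pos in (0.0, 0.5, 1.0):
    w = pulse_width(pos)
    print(f"position {pos:.1f} -> {w * 1000:.2f} ms high, "
          f"{(FRAME - w) * 1000:.2f} ms low per {FRAME * 1000:.1f} ms frame")
```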
So in the above circuit, R2 is only the portion of the potentiometer from the wiper to the bottom terminal, and R1 is the sum of the 12k resistor and the top portion of the potentiometer resistance. Now that the second diode creates symmetrical contributions to the high and low pulse widths from R1 and R2, the pulse frequency doesn't change as R1 gets bigger and R2 gets smaller, and vice versa. The diodes improved the circuit, but they also made the frequency dependent on the supply voltage. Diodes block current in one direction and allow current to flow in the forward direction, but in the forward direction they also create a voltage drop of almost 0.7 volts. This drop varies slightly and non-linearly with changes in current. This complexity makes it harder to predict pulse width and frequency for different component values. The solution is to simulate the design on a computer. Analog circuit simulation is best done with a Berkeley SPICE based program. There are expensive commercial programs like HSPICE, but I used LTspice IV from Linear Technology, which is good and free for most uses. I used the following parts in my circuit design:
- 10K Trim Potentiometer
- 0.027 µF ±3% Capacitor
- 6″ Modular IC Breadboard Socket Experimenters Board
- Breadboard-pattern PC Board
- Toggle Switch with On/Off Label Plate
- 4 x "AA" Battery Holder
- Project Enclosure (7″x5″x3″)
- 12″ Universal Male Servo Lead
- TS-53 Standard Servo
Simulation allows you to design with components you can't purchase, so knowing what parts are available will aid you in your design. Stick with standard values, and allow for variation. The 0.027 µF C2 capacitor is the only component I chose that is best bought as high precision, because C2 affects frequency and thus pulse width. By controlling the variation in C2, we can then compensate for any other component variations with pot R2. In cases where the supply voltage might vary, you may find it useful to have R1 be a 10k ohm resistor in series with a 10k ohm trim pot. The following are screen captures from my simulations of the above design. The 100k pot can go all the way to zero, and that leads to no pulse, so I simulated with 1k ohm remaining to get the maximum pulse width of 2.44 ms, which is more than the needed 2.4 ms. The servo's maximum position will be reached just before R2 drops to 1k. When the 100k pot is used entirely for R2, the pulse width is 0.275 ms, which is less than the needed 0.6 ms; the servo will thus reach its minimum position before R2 is fully 100k. If you wish, you can add the 10k trim pot, and you can then raise the pulse width at the expense of decreasing the frequency below the 400 Hz design goal.
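Before reaching for a simulator, you can rough out the numbers from the ideal 555 astable relations, where each half-cycle lasts about 0.693·R·C and one diode steers each half through R1 or R2. This sketch deliberately ignores the ~0.7 V diode drops, which is exactly why its outputs differ from the simulated 0.275 ms and 2.44 ms figures above and why the SPICE run is worth doing:

```python
# Ideal-555 estimate for the diode-steered astable: the high (servo pulse)
# time charges C2 through R1, the low time discharges through R2, each
# roughly 0.693 * R * C2.  The ~0.7 V diode drops are ignored here, so
# real pulse widths differ somewhat and depend on the supply voltage.

C2 = 0.027e-6        # farads
R1_FIXED = 12_000    # ohms: fixed resistor in series with the pot's top half
POT = 100_000        # ohms: total potentiometer resistance

def timings(wiper: float):
    """wiper = 0.0 (all pot resistance in R1) .. 1.0 (all of it in R2)."""
    r1 = R1_FIXED + (1.0 - wiper) * POT
    r2 = wiper * POT
    t_high = 0.693 * r1 * C2   # servo pulse width
    t_low = 0.693 * r2 * C2
    return t_high, t_low, 1.0 / (t_high + t_low)

# Because r1 + r2 stays constant, the ideal frequency is fixed as the
# knob turns, matching the symmetry argument in the text:
for w in (0.01, 0.5, 1.0):
    hi, lo, f = timings(w)
    print(f"wiper {w:.2f}: high {hi * 1000:.3f} ms, low {lo * 1000:.3f} ms, {f:.0f} Hz")
```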
Aspects of Camera Lenses: The Anterior Surface
Camera lenses are complex pieces of machinery composed of autofocus motors, mechanical rings, several lens elements (which together form a compound lens), an aperture, and more. Lenses function according to the physical laws of optics. When we refer to our lenses, such as a 50mm or a zoom lens, we are referring to the whole lens unit.
The Anterior Surface
The anterior surface is the outermost piece of glass on the lens, the part that is furthest away from the photographic medium. It is here that light first enters the lens on its journey to becoming a photograph. This piece of glass is also the most susceptible to damage and should be protected. If it becomes scratched, it may be necessary to replace the entire lens, as the scratch will be visible in all the photographs that are taken. To help prevent the anterior surface from being scratched, one can purchase a UV filter for about $15 to cover the lens; any damage sustained then happens to the filter and not the lens. "Anterior surface" is a name that comes from anatomy, where it refers to the cornea, the outermost layer of the eye.
What Are Cells?
Cells are the smallest structures that can perform all the processes required for life. All cells share certain components. They have a membrane that covers their surface, separating them from their environment and controlling what enters and leaves the cell. Cells also all have organelles, which are structures inside cells that perform specific functions. Additionally, cells contain genetic material for at least some period during their existence. This genetic material manages the activities of a cell, and it's passed on from parent cells to new cells. Cell theory states that the cell is the basic unit of all living things, all organisms are composed of one or more cells, and all cells come from existing cells.
What Is Chemical Bonding?
Chemical bonding is the joining of atoms to form new substances. Once formed, these new substances have properties different from those of the original elements. The atoms of over one hundred different elements connect in a variety of combinations to form the substances in our universe.
What Is Pressure?
Pressure is a force applied by fluids, like liquids and gases, which contain atoms or molecules moving freely. As these particles move, they bounce into each other and push outward, creating pressure. Pressure is calculated by dividing the force by the area over which it acts (the formulas for pressure, acceleration, and weight are collected at the end of this section). The standard unit used to measure pressure is called the pascal.
What Is Acceleration?
Acceleration is the rate of change of an object's speed and direction over time. An object's acceleration depends on its mass and the strength of the force acting on that mass. Objects with more mass require more force to accelerate. For this reason, objects with different masses still fall at the same rate. Heavier objects experience a greater pull from gravity, but they're harder to accelerate because of their greater mass; the extra mass perfectly balances out the additional gravitational force.
How Acceleration Works
Because of gravity, falling objects accelerate toward Earth at 9.8 meters per second per second. This rule can be sidestepped when air resistance is exploited to slow an object's fall, as it is with parachutes.
What Is Matter?
Matter is anything that has mass and occupies space. Because matter occupies space, other matter can't be in that same space. The amount of space which matter takes up is known as its "volume." Mass is the amount of matter in any object. Or, as I like to think about it, the amount of matter in any doodad. The mass of a doodad is the same regardless of where it is and where it goes in the universe.
Mass vs. Weight
Although many people use the words "weight" and "mass" to mean the same thing, for scientists like us, weight is different from mass. Weight is a measure of the force of gravity exerted on a doodad. The force of gravity is what keeps doodads from floating off Earth and into outer space. The strength of the force of gravity on a doodad depends in part on the doodad's mass. If a doodad has more mass, gravity's force is stronger, so the doodad has more weight.
What Is Science?
In studying these fields, science seeks solely to describe "how" phenomena operate. Science avoids the question of "why," which is considered an issue best left to philosophers. For example, science attempts to explain how the universe formed and how gravity functions, but intentionally avoids the questions of why the universe formed and why gravity functions.
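Here are the pressure, acceleration, and weight statements from above written in symbols, with small worked numbers (the example values are illustrative):

```latex
% Pressure is force divided by the area it acts on:
P = \frac{F}{A}, \qquad
\text{e.g. } P = \frac{10\ \text{N}}{2\ \text{m}^{2}} = 5\ \text{Pa}.
% Newton's second law gives acceleration as force per unit mass.  For a
% falling object the force is its weight, w = mg, so the mass cancels
% and every object accelerates at the same rate:
a = \frac{F}{m} = \frac{mg}{m} = g \approx 9.8\ \text{m/s}^{2}.
% Weight versus mass: a 3 kg "doodad" weighs
w = mg \approx 3\ \text{kg} \times 9.8\ \text{m/s}^{2} = 29.4\ \text{N}.
```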
Skills Cluster 2: The Reading Process
Pre-reading: Ability to select appropriate texts and understand necessary reading strategies needed for the task.
Note-taking: Ability to read purposefully and select relevant information; to summarize and/or paraphrase.
Organizing Notes: Ability to prioritize and narrow notes and other information.
Pre-reading, note-taking, and organizing notes are all part of active reading and can be overlapping aspects of an integrated process. Below are strategies for active reading.
Setting a Purpose for Reading: For maximum effectiveness, set a single purpose for reading; especially for struggling readers, this helps avoid the confusion that comes from an overload of multiple purposes.
Setting a Purpose for Reading Using Informational Text: Teach students how to create questions by looking at the headings of informational texts.
Figure Previewing and THIEVES Strategy: Two strategies for getting students to preview text and think about what they are about to read. Students learn to predict, clarify, question, and summarize as they read.
Q-Cards (High School): Question stems that reflect the variety of cognitive processes students need to process text.
Double-Entry Journal: The Double-Entry Journal strategy enables students to record their responses to text as they read. Students write down phrases or sentences from their assigned reading and then write their own reaction to that passage. The purpose of this strategy is to give students the opportunity to express their thoughts and become actively involved with the material they read.
SQ3R (Survey, Question, Read, Recite, Review)
Think-Alouds: Think-alouds help students learn to monitor their thinking as they read an assigned passage. Students are directed by a series of questions which they think about and answer aloud while reading. This process reveals how much they understand a text. As students become more adept at this technique they learn to generate their own questions to guide comprehension.
Annolighting a Text: "Annolighting" a text combines effective highlighting with marginal annotations that help to explain the highlighted words and phrases. Students label and interpret a text actively on the document and in the margins.
Checking out the Framework: Students learn how to look at the organization of a text to determine what information they can expect to glean. This technique is used after students have already completed their own individual annotations; it is a great strategy to stimulate a small or large group discussion that engages and honors different perspectives on the same text.
Key Concept Synthesis: Students identify the key concepts as they read, put those concepts in their own words, and explain why each concept is important and/or make connections to other concepts. It requires students to first identify the organizational structure of an informational text and then take notes on essential ideas and information in the text using a structure that parallels the organization of the text.
The Socratic seminar is a formal discussion, based on a text, in which the leader asks open-ended questions. Within the context of the discussion, students listen closely to the comments of others, think critically for themselves, and articulate their own thoughts and their responses to the thoughts of others.
Various Discussion Techniques:
- Think-Pair-Share
- Simulation, Role-Playing, or Panel Discussion
- "Angel Card" Discussion Technique
- Feedback or Scored Discussion
- Nominal Group Technique
- Pyramid Technique or Snowballing
- Lineup or "Stand Where You Stand"
October 17, 2008
21st Century Detective Work Reveals Rock's Hot Origins
A new technique using X-rays has enabled scientists to play 'detective' and settle the debate about the origins of a three-billion-year-old rock fragment. In the study, published today in the journal Nature, a scientist describes the new technique and shows how it can be used to analyze tiny samples of molten rock called magma, yielding important clues about the Earth's early history. Working in conjunction with Australian and US scientists, an Imperial College London researcher analyzed a magma sample using the Chicago synchrotron, a kilometer-sized circular particle accelerator that is commonly used to probe the structure of materials. In this case, the team used its X-rays to investigate the chemistry of a rare type of magmatic rock called a komatiite, which was preserved for billions of years in crystals. It has previously been difficult to discover how these komatiites formed because earlier analytical techniques lacked the power to provide key pieces of information. Now, thanks to the new technique, the team has found that komatiites were formed in the Earth's mantle, a region between the crust and the core, at temperatures of around 1,700 degrees Celsius, more than 2.7 billion years ago. These findings dispel a long-held alternative theory which suggested that komatiites were formed at much cooler temperatures, and they also yield an important clue about the mantle's early history: the mantle has cooled by 300 degrees Celsius over the 2.7-billion-year period. Lead researcher Dr Andrew Berry, from Imperial College London's Department of Earth Science and Engineering, says more research needs to be done to fully understand the implications of this finding. However, he believes this new technique will enable scientists to uncover more details about the Earth's early history. He says: "It has long been a 'holy grail' in geology to find a technique that analyses the chemical state of tiny rock fragments, because they provide important geological evidence to explain conditions inside the early Earth. This research resolves the controversy about the origin of komatiites and opens the door to the possibility of new discoveries about our planet's past." In particular, Dr Berry believes this technique can now be used to explain Earth's internal processes such as the rate at which its interior has been cooling, how the forces affecting the Earth's crust have changed over time, and the distribution of the radioactive elements which internally heat the planet. He believes this information could then be used to build new, detailed models of the evolution of the planet. He concludes: "It is amazing that we can look at a fragment of magma only a fraction of a millimeter in size and use it to determine the temperature of rocks tens of kilometers below the surface billions of years ago. How's that for a piece of detective work?"
PPS 15: Planning and Flood Risk
Annex C: Sustainable Drainage Systems
C1 Development changes the natural drainage regime: it reduces the amount of water infiltrating into the ground by replacing fields with buildings and hard surfaces, and it contributes to the compaction of other areas by vehicular movements. This increases the volume and speed of surface water run-off and requires built-up areas to be drained to remove excess water. Traditionally this has been done by installing underground pipes to convey water away as quickly as possible. Although this approach may prevent local flooding, it can simply transfer flood risk to other parts of a catchment. The extension of built development alters natural flow patterns both in terms of quantity and the speed with which peak flows occur. The most obvious result may be downstream flooding, but the increased flows from new development can also cause damage to property through erosion, and ecological damage to streams and streamside habitats.
C2 While the disposal of surface water has long been a material consideration in determining planning applications, amenity, ecology, and water resource issues have historically had limited influence on drainage system design and the determination of development decisions. The commitment to a sustainable approach to building and the use of land is underlined in the Regional Development Strategy for Northern Ireland. In addition, the water quality improvements required by the EC Water Framework Directive mean that continuing to drain built-up areas without taking these wider issues into consideration is no longer an option. Flood risk and the environmental damage associated with flood events can be managed by minimising changes in the volume and rate of surface run-off from development sites through the use of sustainable drainage systems.
This section contains the following sub-categories:
- What are Sustainable Drainage Systems?
- Benefits of and constraints on Sustainable Drainage Systems
- The Future for Sustainable Drainage Systems in Northern Ireland
Courtesy of EarthSky
Mercury, the solar system's innermost planet, goes unnoticed by most people because it's so often obscured by the sun's glare. Even when Mercury is visible, as it is now, it takes a deliberate effort to catch this rather elusive world. This evening, Mercury reaches its greatest angular distance east of the sun. This means that Mercury sets a maximum time after sunset today, enabling you to spot the planet at dusk and in the early evening. What's more, Mercury shines close to the dazzling planet Venus, the third-brightest celestial body to light up the heavens after the sun and the moon. If you can't see Mercury with the unaided eye, look at Venus through binoculars to spot Mercury, the sun-hugging planet. Mercury and Venus cozy up close enough together on the sky's dome to fit within a single binocular field of view. If you have clear skies and an unobstructed horizon in the direction of sunset, your window of opportunity for spotting Mercury and Venus together is from about 40 to 90 minutes after sundown. After this evening, Mercury will start to fall back toward the sun. Day by day, Mercury will set sooner after sunset and its luster will fade. Catch Mercury now, while the opportunity still abounds!
Written by Bruce McClure
Father's Day is a time to celebrate fathers and everything they do for their families. However, the fight to make Father's Day a national holiday wasn't as smooth as Mother's Day's. If you would like to know more about Father's Day, check out the answers to these four questions.
What Inspired Father's Day?
Father's Day was actually inspired by Mother's Day. Sonora Smart Dodd was the leading force behind Father's Day. Inspired by her father's love and dedication in raising six children by himself after the death of their mother, she wanted to create a day that would celebrate fathers. She was also influenced by Anna Jarvis, the leading force behind the creation of Mother's Day, and she felt that fathers, especially those like her own, deserved a day to be celebrated.
How Did Father's Day Begin?
The first Father's Day-style celebration was actually a sermon at a West Virginia church honoring 362 men who had died in a mining explosion. However, it was meant as a single commemoration, not an annual holiday. When Sonora Smart Dodd decided she wanted to make an annual holiday for fathers, she promoted her idea to churches, shopkeepers, and the government, and it worked. The first Father's Day was only statewide, but it was held on June 19, 1910. The holiday's popularity slowly gathered momentum.
How Was Father's Day Perceived?
While most people loved the idea of Mother's Day, many scoffed at the idea of Father's Day. Many argued that because fathers didn't have the same sentimental appeal as mothers, it would never work. Even many fathers didn't like the idea of the holiday. They felt it was an attempt to stifle their manliness. They also felt the holiday was a joke and too commercial, which was made worse by the fact that most of the gifts would be paid for by the father himself. Luckily, Father's Day held on, and in 1972 Richard Nixon made it a federal holiday.
What Is Parent's Day?
During the 1920s and 1930s, a movement started to combine Mother's Day and Father's Day into a single Parent's Day. The argument was that both parents deserved to be celebrated together. However, the movement quickly fizzled out: as the Depression worsened, retailers promoted Father's Day harder to generate more revenue, and once World War II began, Father's Day became a way to celebrate American troops.
Father's Day may have had little momentum at first, but now it is widely accepted as a time to cherish dads. Now that you know a little more about the holiday, use it to show a father in your life that you care.
Distance education is neither an isolated concept nor, in its practice, an isolated creation. It is education of a special type, like all types of education dependent on and influenced by values, opinions, experience, and external conditions. While it is different from conventional schooling and has so many characteristics of its own that as an academic area of study it may be regarded as a discipline in its own right (see Chapter 11), its basis is general educational thinking and experience. Every educational endeavour has a purpose. Distance teaching and learning, like any kind of teaching and learning, can serve different ends. It makes little sense on the basis of purposes to distinguish between education proper and the training of certain skills (Wedemeyer 1981). Any learning can be an educational experience. Distance learning primarily serves those who cannot or do not want to make use of classroom teaching, i.e. above all, adults with social, professional, and family commitments. Learning implies more than the acquisition of knowledge, for example, abstracting meaning from complicated presentations and interpreting phenomena and contexts; '[it] is the process of transforming experience into knowledge, skills, attitudes, values, senses and emotions' (Jarvis 1993:180). Regarding learning as acquiring the capacity to provide a number of replies that are correct (stage 1) is a primitive view that, at least according to William Perry (1970), ordinary university students give up fairly early. The reason why they do so is
The Judicial Branch
Where the Executive and Legislative branches are elected by the people, members of the Judicial Branch are appointed by the President and confirmed by the Senate. Article III of the Constitution, which establishes the Judicial Branch, leaves Congress significant discretion to determine the shape and structure of the federal judiciary. Even the number of Supreme Court Justices is left to Congress -- at times there have been as few as six, while the current number (nine, with one Chief Justice and eight Associate Justices) has only been in place since 1869. The Constitution also grants Congress the power to establish courts inferior to the Supreme Court, and to that end Congress has established the United States district courts, which try most federal cases, and 13 United States courts of appeals, which review appealed district court cases. Federal judges can only be removed through impeachment by the House of Representatives and conviction in the Senate. Judges and justices serve no fixed term -- they serve until their death, retirement, or conviction by the Senate. By design, this insulates them from the temporary passions of the public, and allows them to apply the law with only justice in mind, and not electoral or political concerns. Generally, Congress determines the jurisdiction of the federal courts. In some cases, however -- such as in the example of a dispute between two or more U.S. states -- the Constitution grants the Supreme Court original jurisdiction, an authority that cannot be stripped by Congress. The courts only try actual cases and controversies -- a party must show that it has been harmed in order to bring suit in court. This means that the courts do not issue advisory opinions on the constitutionality of laws or the legality of actions if the ruling would have no practical effect. Cases brought before the judiciary typically proceed from district court to appellate court and may even end at the Supreme Court, although the Supreme Court hears comparatively few cases each year. Federal courts enjoy the sole power to interpret the law, determine the constitutionality of the law, and apply it to individual cases. The courts, like Congress, can compel the production of evidence and testimony through the use of a subpoena. The inferior courts are constrained by the decisions of the Supreme Court -- once the Supreme Court interprets a law, inferior courts must apply the Supreme Court's interpretation to the facts of a particular case.
The Supreme Court of the United States
The Supreme Court of the United States is the highest court in the land and the only part of the federal judiciary specifically required by the Constitution. The Constitution does not stipulate the number of Supreme Court Justices; the number is set instead by Congress. There have been as few as six, but since 1869 there have been nine Justices, including one Chief Justice. All Justices are nominated by the President, confirmed by the Senate, and hold their offices under life tenure. Since Justices do not have to run or campaign for re-election, they are thought to be insulated from political pressure when deciding cases. Justices may remain in office until they resign, pass away, or are impeached and convicted by Congress. The Court's caseload is almost entirely appellate in nature, and the Court's decisions cannot be appealed to any authority, as it is the final judicial arbiter in the United States on matters of federal law.
However, the Court may consider appeals from the highest state courts or from federal appellate courts. The Court also has original jurisdiction in cases involving ambassadors and other diplomats, and in cases between states. Although the Supreme Court may hear an appeal on any question of law provided it has jurisdiction, it usually does not hold trials. Instead, the Court's task is to interpret the meaning of a law, to decide whether a law is relevant to a particular set of facts, or to rule on how a law should be applied. Lower courts are obligated to follow the precedent set by the Supreme Court when rendering decisions. In almost all instances, the Supreme Court does not hear appeals as a matter of right; instead, parties must petition the Court for a writ of certiorari. It is the Court's custom and practice to "grant cert" if four of the nine Justices decide that they should hear the case. Of the approximately 7,500 requests for certiorari filed each year, the Court usually grants cert to fewer than 150. These are typically cases that the Court considers sufficiently important to require their review; a common example is the occasion when two or more of the federal courts of appeals have ruled differently on the same question of federal law. If the Court grants certiorari, Justices accept legal briefs from the parties to the case, as well as from amicus curiae, or "friends of the court." These can include industry trade groups, academics, or even the U.S. government itself. Before issuing a ruling, the Supreme Court usually hears oral arguments, where the various parties to the suit present their arguments and the Justices ask them questions. If the case involves the federal government, the Solicitor General of the United States presents arguments on behalf of the United States. The Justices then hold private conferences, make their decision, and (often after a period of several months) issue the Court's opinion, along with any dissenting arguments that may have been written. The Judicial Process Article III of the Constitution of the United States guarantees that every person accused of wrongdoing has the right to a fair trial before a competent judge and a jury of one's peers. The Fourth, Fifth, and Sixth Amendments to the Constitution provide additional protections for those accused of a crime. These include: - A guarantee that no person shall be deprived of life, liberty, or property without the due process of law - Protection against being tried for the same crime twice ("double jeopardy") - The right to a speedy trial by an impartial jury - The right to cross-examine witnesses, and to call witnesses to support their case - The right to legal representation - The right to avoid self-incrimination - Protection from excessive bail, excessive fines, and cruel and unusual punishments Criminal proceedings can be conducted under either state or federal law, depending on the nature and extent of the crime. A criminal legal procedure typically begins with an arrest by a law enforcement officer. If a grand jury chooses to deliver an indictment, the accused will appear before a judge and be formally charged with a crime, at which time he or she may enter a plea. The defendant is given time to review all the evidence in the case and to build a legal argument. Then, the case is brought to trial and decided by a jury. If the defendant is determined to be not guilty of the crime, the charges are dismissed. 
Otherwise, the judge determines the sentence, which can include prison time, a fine, or even execution. Civil cases are similar to criminal ones, but instead of arbitrating between the state and a person or organization, they deal with disputes between individuals or organizations. If a party believes that it has been wronged, it can file suit in civil court to attempt to have that wrong remedied through an order to cease and desist, alter behavior, or award monetary damages. After the suit is filed and evidence is gathered and presented by both sides, a trial proceeds as in a criminal case. If the parties involved waive their right to a jury trial, the case can be decided by a judge; otherwise, the case is decided and damages awarded by a jury. After a criminal or civil case is tried, it may be appealed to a higher court -- a federal court of appeals or state appellate court. A litigant who files an appeal, known as an "appellant," must show that the trial court or administrative agency made a legal error that affected the outcome of the case. An appellate court makes its decision based on the record of the case established by the trial court or agency -- it does not receive additional evidence or hear witnesses. It may also review the factual findings of the trial court or agency, but typically may only overturn a trial outcome on factual grounds if the findings were "clearly erroneous." If a defendant is found not guilty in a criminal proceeding, he or she cannot be retried on the same set of facts. Federal appeals are decided by panels of three judges. The appellant presents legal arguments to the panel, in a written document called a "brief." In the brief, the appellant tries to persuade the judges that the trial court made an error, and that the lower decision should be reversed. On the other hand, the party defending against the appeal, known as the "appellee" or "respondent," tries in its brief to show why the trial court decision was correct, or why any errors made by the trial court are not significant enough to affect the outcome of the case. The court of appeals usually has the final word in the case, unless it sends the case back to the trial court for additional proceedings. In some cases the decision may be reviewed en banc -- that is, by a larger group of judges of the court of appeals for the circuit. A litigant who loses in a federal court of appeals, or in the highest court of a state, may file a petition for a "writ of certiorari," which is a document asking the Supreme Court to review the case. The Supreme Court, however, is not obligated to grant review. The Court typically will agree to hear a case only when it involves a new and important legal principle, or when two or more federal appellate courts have interpreted a law differently. (There are also special circumstances in which the Supreme Court is required by law to hear an appeal.) When the Supreme Court hears a case, the parties are required to file written briefs and the Court may hear oral argument.
What is Eelgrass and Why is it Important?
Eelgrass is one of about 60 species of sea grasses found in shallow coastal areas worldwide! Sea grasses are referred to as foundation species because they create an entirely new habitat; once eelgrass is established, many other species (juvenile fish, lobsters, and shellfish) that rely on the plant for food and habitat will also begin to inhabit the area. Eelgrass is neither a true grass nor a seaweed, but a flowering plant that grows entirely submerged in water. Unlike seaweed, eelgrass has a stem, leaves, roots, and a rhizome, and it produces seeds. However, because they grow in large meadows, eelgrass and many other species of sea grass look similar to land grasses, hence the name. They are mostly found in muddy or sandy sediments in estuaries, bays, and shallow subtidal zones of coastal areas.
Eelgrass beds provide many important ecosystem functions!
- Through photosynthesis, eelgrass oxygenates the surrounding water and also transfers oxygen deep into the surrounding sediment, making it habitable for other organisms, while producing energy for itself in the form of glucose. Eelgrass also removes excess carbon dioxide from the environment and may be an important means of carbon sequestration.
- Eelgrass beds are important nursery grounds and refuges from predation for many species of fish and invertebrates. Many species lay eggs on the eelgrass leaves for protection until they hatch. Eelgrass can also provide shading for some species, reducing the occurrence of overheating.
- Eelgrass is food for herbivores such as turtles, Brant geese, and small invertebrates. Dead eelgrass provides organic matter for bottom-dwelling decomposers such as bacteria, which break it down into forms of energy utilized by other organisms, including eelgrass itself.
- Nutrients such as phosphates and nitrates from fertilizers enter our bays and estuaries through storm channels or natural waterways. These nutrients are essential to life, but in excess they can stimulate the growth of harmful algal blooms, which can reduce the amount of oxygen in the water as they decompose. Eelgrass and other sea grasses are able to filter some of these excess nutrients and use them to grow.
Threats to Eelgrass Habitat
Eelgrass is influenced by both natural and man-made factors.
- Eelgrass grows in shallow waters, which makes it particularly vulnerable to land-based activities such as coastal development, boating, aquaculture, and fishing. Although eelgrass is resilient and recovers from natural disturbance, it may not be able to adapt quickly enough to the rapidly changing environmental conditions caused by humans.
- Human activities tend to accelerate the rate at which nutrients, both organic and inorganic, enter the ocean. When nitrate levels increase, there is an increase in the growth and abundance of algae. This excessive growth, referred to as an "algal bloom" or "red tide," may out-compete eelgrass for nutrients, space, and sunlight.
- Caulerpa taxifolia is a type of algae not native to California but found in certain waters along the coast since 2000. Caulerpa readily adapts to environments where eelgrass thrives and competes with it for space, light, and nutrients. Without any natural predators, Caulerpa can continue to grow unchecked. Because the loss of eelgrass would harm the many organisms that depend on it, the state of California has taken up an active campaign to prevent Caulerpa from establishing itself in coastal waters.
- Eelgrass requires more light than most marine plants, such that small changes in light availability can greatly influence the quality of eelgrass habitat. Eelgrass is particularly sensitive to storms, which erode land sediment and churn up bottom sediment in the water. Low rates of tidal flushing (the regular movement of water, sediments, and other suspended particles by the daily tides) may also reduce light availability, because particles remain suspended in the water for sustained periods of time.
The ultimate goal of our Eelgrass Project is to involve the public in the restoration and preservation of eelgrass in the Upper Newport Bay Ecological Reserve. After years of research, planning, and community outreach, this long-term goal has finally come to fruition. Coastkeeper, in partnership with Coastal Resources Management, Inc. and the Department of Fish and Wildlife, has been restoring eelgrass along the DeAnza Marsh Peninsula in Upper Newport Bay each summer since 2012. Since 2012, we have worked with over 100 land-based volunteers, scientific divers, and Coastkeeper interns, who together have dedicated over 1,160 hours to eelgrass restoration. This restoration project and our science education achievements would not have been possible without the groundwork laid by the initial Newport Bay Eelgrass Project. Coastkeeper is grateful to all funders and partners for making these accomplishments possible!
What is an algorithm? In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is a completely defined process: a finite set of steps, operations, or procedures that will produce a particular outcome. For example, with a few exceptions, all computer programs, mathematical formulas, and (ideally) medical and food recipes are algorithms. A complete quantitative trading system is an algorithm. The program is a series of rules and steps that predefine risk and determine the next step as price trends evolve. Here is a simple algorithm example: Is the price higher today than it was 30 days ago?
- Yes = the trend is up; stay in it.
- No = the trend is down; exit.
(A minimal sketch of this rule in code appears at the end of this section.) It can get far more adaptive and complex. For example, there could be multiple time frames, or the result could be compared against other alternatives. The advantage of developing and operating such a decision-making system is that we can think of every possible scenario and predefine an answer for it, rather than waiting and making a decision under duress. Another advantage is that once we have defined all the rules, steps, and processes, we can test them scientifically to determine how the system would have acted and what results it would have created. Since I have completed this process thousands of times over many years, I have a strong understanding of how markets interact and how systems work in different conditions. For questions and comments, contact me.
From Wolfram: An algorithm is a specific set of instructions for carrying out a procedure or solving a problem, usually with the requirement that the procedure terminate at some point. Specific algorithms sometimes also go by the name method, procedure, or technique. The word "algorithm" is a distortion of al-Khwārizmī, the name of a Persian mathematician who wrote an influential treatise on algebraic methods. The process of applying an algorithm to an input to obtain an output is called a computation.
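Here is the minimal sketch promised above. The function and variable names are illustrative, and this encodes the single 30-day rule only, not a complete trading system:

```python
# Minimal encoding of the example rule: compare today's price with the
# price 30 trading days ago and emit the predefined decision.

def trend_signal(prices: list[float], lookback: int = 30) -> str:
    """prices: chronological closing prices, most recent last."""
    if len(prices) <= lookback:
        return "not enough data"
    if prices[-1] > prices[-1 - lookback]:
        return "trend is up: stay in"
    return "trend is down: exit"

history = [100 + 0.3 * day for day in range(60)]   # toy rising price series
print(trend_signal(history))                       # trend is up: stay in
```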
We previously discussed work by Palese and colleagues in which a guinea pig model of influenza virus transmission was used to conclude that the spread of influenza virus in aerosols depends upon temperature and relative humidity. They found that transmission of infection was most efficient when the humidity was 20-35%; it was blocked at 80% humidity. The authors concluded that the conditions found during winter, low temperature and humidity, favor spread of the infection. Lower humidity favors virion stability and smaller virus-laden droplets, which have a better chance of traveling longer distances. Another group re-analyzed Palese's data and found that relative humidity explains only a small amount of the variability in influenza virus transmission and survival: 12% and 36%, respectively. When they converted the measurements of moisture to absolute humidity, the results were striking: 50% of the variability in transmission and 90% of the variability in survival could be explained by absolute humidity. Changes in relative humidity, the authors argue, do not match the seasonal patterns of influenza transmission. Although relative humidity indoors is low in the winter, outdoor levels peak during this season. Absolute humidity, on the other hand, has a seasonal cycle with low values both indoors and outdoors during winter months, consistent with the increased transmission of influenza. The conclusion of both papers is the same: humidification of indoor air during the winter might be an effective means of decreasing influenza virus spread. On the weather report, the amount of moisture in the air is given as a percentage. This is relative humidity: the ratio of the partial pressure of water vapor in a gaseous mixture of air and water vapor to the saturated vapor pressure of water at a given temperature. Absolute humidity is the actual amount of water vapor in a given volume of gas. An easy way to remember which measurement of humidity matters for influenza transmission is that it's not the one you hear on the weather report.
J. Shaman and M. Kohn (2009). Absolute humidity modulates influenza survival, transmission, and seasonality. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.0806852106
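If you want to do the relative-to-absolute conversion yourself, a short Python sketch follows. It uses the Magnus approximation for saturation vapor pressure, which is one common parameterization among several, so the outputs are estimates:

```python
import math

def absolute_humidity(temp_c: float, rel_humidity_pct: float) -> float:
    """Water vapor mass per volume of air, in g/m^3.

    Uses the Magnus approximation for saturation vapor pressure and the
    ideal gas law for water vapor (R_v = 461.5 J/(kg*K), hence the 216.7).
    """
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    e = e_sat * rel_humidity_pct / 100.0   # actual vapor pressure, hPa
    return 216.7 * e / (273.15 + temp_c)   # g/m^3

# The same 50% relative humidity holds very different amounts of water
# at heated-indoor-winter versus summer temperatures:
print(f"{absolute_humidity(20, 50):.1f} g/m^3 at 20 C")   # ~8.6
print(f"{absolute_humidity(30, 50):.1f} g/m^3 at 30 C")   # ~15.1
```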
We all know what radar is. It's that greenish line going around a bunch of concentric circles and making a pleasant beeping on a small monitor on a boat. And when there's another boat nearby, it blips on the screen. Duh. But really, for many of us it's surprising that technology still hasn't revealed what happened to the Malaysia Airlines plane. Shouldn't someone be able to call up radar from around the time the plane disappeared, follow the greenish lines, and pinpoint the exact location? Yet there have been conflicting reports about what, exactly, radar has indicated about Flight 370. Did it start to turn back? Did it end up somewhere around the Strait of Malacca? So what's going on here? Here's a radar refresher course. Radar (radio detection and ranging) consists of a device that transmits pulses of radio waves or microwaves. When there is an object in the path of the waves (let's say it's a metal lunchbox), they bounce off it, and a small portion of their energy returns to the receiver part of the device. That energy can be used to calculate things like the lunchbox's position and velocity (if the lunchbox is, you know, in motion). This is known as "primary radar." Primary radar is very passive and transparent: your radio waves bounce off of me and tell you I'm there, and my radio waves bounce off of you and tell me you're there. Flight 370 wouldn't have needed anything at all to be potentially detectable by primary radar. It just would have needed to be a big metal thing in the sky. Which it was. "Secondary radar" in aircraft uses a transponder (transmitter-responder) to "interrogate" the objects it detects, meaning it requests a four-digit identification code from the lunchbox. At the same time, the lunchbox requests an identification code in return, so the transponder transmits that code and receives the other. Primary and secondary radar complement each other by giving lunchbox pilots, and aircraft pilots, a map of nearby objects while also providing specific identifications. Secondary radar is more active in the sense that it requires an exchange (even though that exchange can be automated so pilots don't have to do it). Though the range of secondary radar is limited and doesn't cover the whole Earth (it's especially spotty far over oceans), it seems that Flight 370 should have had sustained secondary radar communication. But its radar went off around the time it disappeared from other tracking as well. Almost every plane, including the Boeing 777-200 of Malaysia Airlines Flight 370, has an onboard system called the Traffic Collision Avoidance System (TCAS). This system uses secondary radar to warn about potential collisions, but it only scans locally. Commercial aircraft can have things like weather radar or ground proximity radar, but they don't usually have wide-field radar in the air because they don't need to know what's around them unless it's somehow a threat. Air traffic controllers can use both primary and secondary radar to keep an eye on planes while they're nearby (usually within a few hundred miles or so), but there are limits to what they can detect. Aviation reporter Steven Trimble told NPR: [T]he fact is an aircraft can fly off radar. Once it gets over the water, radar coverage is not nearly as robust as it is on land. And, of course, if you go below a certain altitude, because of the curvature of the Earth, radar can't see you. And that appears to have happened here.
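The "ranging" in radio detection and ranging is, at its core, a timing calculation: the echo's round-trip delay gives the distance. A minimal sketch, assuming an idealized echo with no processing delays:

```python
# Primary radar ranging: a pulse travels to the target and back at the
# speed of light, so range = c * delay / 2.

C = 299_792_458  # speed of light, m/s

def range_from_echo(delay_s: float) -> float:
    """Distance to the reflecting object, in meters, from round-trip time."""
    return C * delay_s / 2

# An echo arriving 1 millisecond after transmission puts the target
# about 150 km away:
print(f"{range_from_echo(1e-3) / 1000:.0f} km")   # ~150 km
```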
But Wired points out that it's possible that a private or military aircraft with more flexible radar capabilities might have picked up clues about Flight 370 without realizing it. A former Marine Corps pilot and current aviation consultant, Col. J. Joseph, told Wired, "I would be very surprised if, on somebody's radar data, this event was not recorded." David Esser, a professor at Embry-Riddle Aeronautical University, agrees that it's possible. "Someone could have picked up a ping from them before they disappeared," he says. But Esser warns that even these clues might not lead anywhere. "Let's say the plane broke up at 40,000 feet. This stuff is going to be spread over a pretty wide area. It's a big ocean." Flight 370's secondary radar seems to have been off around the time all contact was lost with the flight, but it's unclear whether a power outage knocked the transponders out or someone turned them off. Conflicting reports about where the plane went next make the transponder outage seem either like evidence of foul play or simply a symptom of a massive system failure. A Reuters source in the Malaysian military says that the flight may have flown hundreds of miles after its last contact. This would indicate that the transponders were turned off and other systems were fine. But other sources deny this claim, or say that it is only one of many possible scenarios being investigated. The solution to this tracking problem is complicated and limited by expense, because covering the whole Earth with secondary-radar-extending towers, for example, is a much bigger job than covering the whole human-inhabited Earth with cell phone towers. Here on dry land it may feel like we're being watched and monitored all the time, but if you dropped your (waterproof) iPhone in the middle of the South China Sea, Find My iPhone wouldn't be able to help you.
Most maps can be classified into two main groupings: general purpose and thematic. General purpose maps are often used for reference purposes and can exhibit a variety of information, including physical land features and political boundaries. Examples of general purpose maps include those found in a standard geographic atlas, or road maps. Thematic (or special-purpose) maps are typically used to convey a specific theme to a particular audience. A map of the United States in which the color of each state represents its census population is an example of a thematic map. There are types of maps which do not fall into either of the two main categories, or which exhibit properties of both. Since maps can display multiple levels of information, the separation between different types of maps is not always clear. In addition, maps have uses in varied fields, and as such the terminology may sometimes seem conflicting. However, the key purpose of this article is to introduce you to the most commonly used terms in an effort to help you search for new map data to use with Dundas Map.
General Purpose Maps
The most common type of general purpose map is the topographic map. Topographic maps are often used as reference maps, and typically display both natural land features (such as coastlines and bodies of water) and political boundaries. Topographic maps also display elevation (height above sea level), using either coloring (relief shading) or contour lines. The United States Geological Survey (USGS) produces topographic maps in series, at standard map scales (such as 1:24,000). Planimetric maps are two-dimensional maps which are similar to topographic maps but do not show elevation. Planimetric maps tend to display natural features such as lakes and rivers, or man-made features such as roads and city boundaries. These types of maps can serve as the basis for cadastral maps, which document the boundaries and ownership of parcels of land. Base maps act as a foundation for superimposing additional layers of information. For example, a thematic map can be constructed from a base map using a specific colorization scheme. A base map typically contains some natural features such as coastlines, and some man-made features such as political boundaries. The map library distributed with Dundas Map consists of base maps representing various regions of the world. Planimetric maps are often used as base maps.
Figure 1: Base map of the world from the Dundas map library.
A thematic map shows how qualitative and quantitative data are distributed geographically. Thematic maps usually build on top of a base map in order to convey a specific geographic theme, such as population by state or sales per region.
Qualitative Thematic Maps
Figure 2: A geographical region map (shown left), and a resource map (shown right).
Quantitative Thematic Maps
Quantitative thematic maps are also known as statistical maps and use a visual mechanism, such as color, to indicate the quantity of a data attribute at different locations on a map. Examples of this type of map are discussed below. Choropleth maps use a uniform color or pattern to fill each geographical shape on a map according to the quantity of a data attribute associated with that shape. For example, a choropleth population map of the United States might assign the color green to all states with a population of between 10 and 20 million people.
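A choropleth scheme is, at bottom, just a rule mapping each shape's data value to a fill color. This sketch mirrors the population example above; the bucket boundaries, color names, and population figures are illustrative:

```python
# Choropleth coloring: assign each shape a fill based on which data
# bucket its value falls into.  Buckets and colors are illustrative.

BUCKETS = [                      # (upper bound in millions, fill color)
    (10,            "light green"),
    (20,            "green"),
    (float("inf"),  "dark green"),
]

def fill_color(population_millions: float) -> str:
    for upper, color in BUCKETS:
        if population_millions <= upper:
            return color
    return BUCKETS[-1][1]        # unreachable given the inf bucket

states = {"Wyoming": 0.6, "Ohio": 11.8, "California": 39.0}
for name, pop in states.items():
    print(f"{name}: {fill_color(pop)}")
# Wyoming: light green / Ohio: green / California: dark green
```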
Isopleth maps also use color, except that the boundaries between color areas are defined by isolines representing points with equal data attribute values. A typical weather or temperature map is an example of an isopleth map. Isopleth maps are ideal when your data values vary continuously and smoothly over space.

Proportional symbol maps use scaled symbols or icons to indicate the relative quantity of a particular data attribute. A larger symbol, for example, indicates a larger data value for a location on the map. Other techniques use symbols such as bar or pie indicators in which the actual size of each symbol is fixed, but the symbol appearance varies proportionally with the data attribute value. For example, given a larger data value, a larger slice of a pie indicator would be drawn.

Dot (or dot density) maps use a fixed-size dot symbol on a map to represent a fixed quantity of data. For example, a single dot symbol on a population map could represent one million people. If you then view a map of the United States constructed using such symbols, areas of high dot density indicate regions of greater population, while areas of low dot density indicate sparsely populated regions.

|Figure 3: Proportional symbol map using bar indicators to represent population ranges.|
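Dot density lends itself to an equally small sketch: the only real decision is the dot-to-data ratio. Again, this is illustrative Python rather than any particular mapping API, and the per-dot quantity, bounding box, and population figure are assumed for the example.

```python
import random

PEOPLE_PER_DOT = 1_000_000  # each dot stands for a fixed quantity of data

def dot_density_points(population, bounds, seed=0):
    """Scatter one dot per PEOPLE_PER_DOT people, uniformly within a
    region's bounding box (a real renderer would clip the dots to the
    region's actual shape)."""
    rng = random.Random(seed)
    (min_x, min_y), (max_x, max_y) = bounds
    n_dots = round(population / PEOPLE_PER_DOT)
    return [(rng.uniform(min_x, max_x), rng.uniform(min_y, max_y))
            for _ in range(n_dots)]

# Roughly 39 dots for a state of ~39 million people:
print(len(dot_density_points(39_000_000, ((0.0, 0.0), (10.0, 10.0)))))
```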
Also known as a BMT, stem cell transplant, or hematopoietic stem cell transplant

Bone marrow is found in the center of bones and is where blood cells are made. It is found in the spongy part of the bones, especially the hips, ribs, breastbone, and spine. Bone marrow contains the youngest type of blood cells, known as hematopoietic stem cells. As a hematopoietic stem cell ages, it becomes a white cell, red cell, or platelet. Hematopoietic stem cells are found in bone marrow, peripheral blood (bloodstream), and umbilical cord blood.

A bone marrow transplant (BMT) replaces diseased or damaged cells with non-cancerous stem cells that can grow healthy, new cells. BMT is usually used when cancer treatments have destroyed the normal stem cells in the bone marrow; the stem cells can then be replaced through BMT. A BMT is also performed when the chances for cure with chemotherapy alone are low. There are two major types of BMT, and the type that your child will receive depends upon the diagnosis.

- Allogeneic: An allogeneic transplant is performed when bone marrow or blood cells are received from a donor other than the patient. These can come from a related donor, an unrelated donor, or cord blood. This type of transplant is used for patients with leukemias and some lymphomas.
- Autologous: An autologous transplant is performed when the patient’s own bone marrow or blood cells are used. The marrow or cells are collected and frozen, then thawed when needed for reinfusion. This type of transplant is used for patients with solid tumors such as neuroblastoma, Hodgkin disease, and brain tumors.

The first step is to locate a donor whose blood cells closely match the patient’s. This is done by tissue typing prospective donors. Tissue typing is done with a blood sample and is called HLA typing, for human leukocyte antigens. These antigens are found on the surface of white blood cells. Because each sibling inherits one of two possible HLA haplotypes from each parent, a patient’s full siblings each have a 25% (1/2 × 1/2) chance of being a tissue type match. Less commonly, a parent may match the patient. Occasionally, a less-than-perfectly matched related donor is used. If a related donor is not available, a search for a compatible, unrelated donor is performed through the National Marrow Donor Program. Unrelated donor cells can come from a living donor or frozen cord blood. Your physician will decide on the best source of donor cells for your child, based upon the urgency of the transplant, the weight of your child, and the best tissue type match. An unrelated donor search may take several months; cord blood can be obtained within a few weeks.

Peripheral stem cells are usually collected for autologous transplant, but stem cells from the bone marrow can also be used. These are collected either before the patient has chemotherapy or following a course of chemotherapy. To collect peripheral stem cells, the patient receives medications (such as G-CSF and/or GM-CSF) to increase the number of peripheral blood stem cells available. Cells are collected through a process called apheresis. An apheresis machine has a circuit that collects blood, separates and removes the white blood cells containing stem cells, and then returns the red blood cells to the patient. This process takes about 4 hours and may need to be repeated for 2 or 3 days in a row. For certain diseases, the peripheral blood stem cells may be treated with anticancer medications to prevent tumor cells from being placed back into the patient’s body.
Before the transplant admission: When the healthcare team decides that BMT is the best treatment option for your child, they will schedule a lengthy conversation with you to explain the procedure. They will explain the many risks associated with BMT, as well as what you can expect before, during, and after the transplant. Your child will undergo testing to make sure he/she is healthy enough to withstand the rigors of transplant. Testing will include evaluation of heart function with an electrocardiogram (ECG), kidney and liver function tests, and infection status. Depending upon the disease, a bone marrow aspirate and spinal tap may be performed. When your child is deemed healthy enough for BMT, physicians will usually insert a central line catheter that allows easy access to a large vein in the chest. The catheter will be used to deliver the new stem cells, as well as blood, antibiotics, and other medications during treatment.

Preparation before transplant: Your child will be given preparative treatment, called “conditioning,” before the transplant. Conditioning includes high doses of chemotherapy and, sometimes, radiation of the whole body. The type and purpose of conditioning depends upon your child’s underlying diagnosis but may include:

- Elimination of the cancer
- Making space in the bone marrow for new cells to grow
- Suppression of the immune system so that new cells may be accepted

Commonly used drugs include:

Once conditioning is complete, stem cells are given through a catheter. This is very similar to a blood transfusion. After traveling through the bloodstream to the bone marrow, the transplanted stem cells will begin to make red and white blood cells, and platelets. It can take between 14 and 30 days for enough blood cells, particularly white blood cells, to be created so the body can fight infection. The appearance of new blood cells and a rise in white blood cells following BMT is called engraftment. Until then, your child will be at high risk for infection, anemia, and bleeding. Your child will remain in the hospital until he or she is well enough for discharge.

The process of BMT places a tremendous amount of strain on the body during conditioning, the actual transplant, and in the days following transplant. Your child’s immune system will essentially be eliminated during conditioning. As a result, your child will be at high risk for infection and blood-related side effects immediately following transplant. Careful monitoring, use of medicines to treat or prevent infections, and other forms of supportive care can help your child to feel as comfortable as possible. Common problems include:

- Anemia (low red blood cells) and thrombocytopenia (low platelets): Transfusions of red blood cells and platelets will be needed until the new cells increase sufficiently to make these.
- Mucositis (sore mouth, sore throat): IV fluids or nutrition and pain medicines are used to help with these symptoms. This problem usually improves as the new cells grow in the patient.
- Loss of appetite, nausea: IV nutrition and/or nutrition with a tube into the stomach are used so that weight loss doesn’t occur. Medications can be given to prevent or reduce nausea.
- Infection: Very common before, during, and after transplant. The patient’s immune system is destroyed by the transplant process, and it takes many months and sometimes years to return. The types of infections that may occur include bacterial, fungal, and viral. Preventive antibiotics are given for some patients.
Special precautions are taken to protect your child from infection, including limiting visitors and avoiding crowded areas (such as stores) after discharge.

Graft vs. host disease (GVHD): This occurs only in an allogeneic blood or marrow transplant. Certain types of donor cells, called T cells (or T lymphocytes), react to the patient’s body and recognize it as “foreign.” Medicines are given post-transplant to prevent this complication, but it may occur despite this.

- Acute graft vs. host disease most commonly occurs within 3 months of transplant. The skin, liver, and intestines may be affected. Skin involvement appears as a red rash that may be itchy or develop blisters. Liver involvement may cause jaundice or elevation of other liver tests. Intestinal involvement may cause very severe, watery diarrhea. Medicines such as steroids are used to treat GVHD and are often successful in controlling it.
- Chronic graft vs. host disease may occur months or even years after the transplant. Most commonly it is a continuation of acute GVHD. Many different parts of the body may be affected. Skin is the most commonly affected organ; patients may have red, scaly skin or skin that is thickened and tough. There may also be changes in the lining of the mouth, dry eyes, dry mouth, joint stiffness, lung restriction, and difficulty absorbing nutrients from foods. In addition, patients are at risk for infection because of the medications needed to control the GVHD as well as the effect of GVHD upon the immune system.

Organ toxicity: Conditioning and prior cancer treatment may damage the lungs, liver, kidneys, and heart. These effects are unpredictable, and not all children recover from organ toxicity.

Late effects: There is a very good chance that there will be long-term effects following BMT that may not be identified until years after treatment. These include:

- Growth and other endocrine (gland) problems, which may develop depending upon the type of conditioning used.
- Sterility, which is common for most patients.
- Organ damage to the liver, kidneys, lungs, or heart.
- Cataracts, which cloud the lens of the eye and reduce vision.
More than 200 million years ago, a massive extinction wiped out 76 percent of marine and terrestrial species, marking the end of the Triassic period and the onset of the Jurassic. This devastating event cleared the way for dinosaurs to dominate Earth for the next 135 million years, taking over ecological niches formerly occupied by other marine and terrestrial species.

It’s not entirely clear what caused the end-Triassic extinction, although most scientists agree on a likely scenario: over a relatively short period of time, massive volcanic eruptions from a large region known as the Central Atlantic Magmatic Province (CAMP) spewed forth huge amounts of lava and gas, including carbon dioxide, sulfur and methane. This sudden release of gases into the atmosphere may have created intense global warming and acidification of the oceans that ultimately killed off thousands of plant and animal species.

Now researchers at MIT, Columbia University and elsewhere have determined that these eruptions occurred precisely when the extinction began, providing strong evidence that volcanic activity did indeed trigger the end-Triassic extinction. Their results are published in the journal Science.

The team determined the age of basaltic lavas and other features found along the East Coast of the United States, as well as in Morocco—now-disparate regions that, 200 million years ago, were part of the supercontinent Pangaea. The rift that ultimately separated these landmasses was also the site of CAMP’s volcanic activity. Today, the geology of both regions includes igneous rocks from the CAMP eruptions as well as sedimentary rocks that accumulated in an enormous lake; the researchers used a combination of techniques to date the rocks and to pinpoint CAMP’s beginning and duration.

From these measurements, the team reconstructed the region’s volcanic activity 201 million years ago, discovering that the eruption of magma—along with carbon dioxide, sulfur and methane—occurred in repeated bursts over a period of 40,000 years, a relatively short span in geologic time. “This extinction happened at a geological instant in time,” says Sam Bowring, the Robert R. Shrock Professor of Geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “There’s no question the extinction occurred at the same time as the first eruption.”

The paper’s co-authors are Terrence Blackburn (who led the project as part of his PhD research) and Noah McLean of MIT; Paul Olsen and Dennis Kent of Columbia; John Puffer of Rutgers University; Greg McHone, an independent researcher from New Brunswick; E. Troy Rasbury of Stony Brook University; and Mohammed Et-Touhami of the Université Mohammed Premier Oujda in Morocco.

More than a coincidence

The end-Triassic extinction is one of five major mass extinctions in the last 540 million years of Earth’s history. For several of these events, scientists have noted that large igneous provinces, which provide evidence of widespread volcanic activity, arose at about the same time. But, as Bowring points out, “Just because they happen to approximately coincide doesn’t mean there’s cause and effect.” For example, while massive lava flows overlapped with the extinction that wiped out the dinosaurs, scientists have linked that extinction to an asteroid collision. “If you really want to make the case that an eruption caused an extinction, you have to be able to show at the highest possible precision that the eruption of the basalt and the extinction occurred at exactly the same time,” Bowring says.
In the case of the end-Triassic, Bowring says researchers have dated volcanic activity to right around the time fossils disappear from the geologic record, providing evidence that CAMP may have triggered the extinction. But these estimates have a margin of error of 1 million to 2 million years. “A million years is forever when you’re trying to make that link,” Bowring says. For example, it’s thought that CAMP emitted a total of more than 2 million cubic kilometers of lava. If that amount of lava were spewed over a period of 1 million to 2 million years, it wouldn’t have nearly the impact it would if it were emitted over tens of thousands of years. “The timescale over which the eruption occurred has a big effect,” Bowring says.

Tilting toward extinction

To determine how long the volcanic eruptions lasted, the group combined two dating techniques: astrochronology and geochronology. The former links sedimentary layers in rocks to changes in the Earth’s orientation: for decades, scientists have observed that the Earth’s orientation changes in regular cycles as a result of gravitational forces exerted by neighboring planets. For example, the direction of the Earth’s axis traces out a slow wobble (precession), returning to its original orientation roughly every 26,000 years. Such orbital variations change the amount of solar radiation reaching the Earth’s surface, which in turn affects the planet’s climate; these variations are known as Milankovitch cycles. The resulting climatic changes can be preserved in the cyclicity of sediments deposited in the Earth’s crust.

Scientists can determine a rock’s age by first identifying cyclical variations in the deposition of sediments in quiet bodies of water, such as deep oceans or large lakes. A cycle of sediment corresponds to an orbital cycle with a known period in years. By seeing where a rock lies within those sedimentary layers, scientists can get a good idea of how old it is. To get precise estimates, scientists have developed mathematical models of these orbital variations over millions of years. Bowring says the technique is good for directly dating rocks up to 35 million years old, but beyond that, it’s unclear how reliable the technique can be. His team used astrochronology to estimate the age of the sedimentary rocks, and then tested those estimates against high-precision dates from 200-million-year-old rocks in North America and Morocco.

The researchers broke rock samples apart to isolate tiny crystals known as zircons, which they then analyzed to determine the ratio of uranium to lead. The painstaking technique enabled the team to date the rocks to within approximately 30,000 years—an incredibly precise measurement in geologic terms.

Taken together, the geochronology and astrochronology techniques gave the team precise estimates for the onset of volcanism 200 million years ago, and revealed three bursts of magmatic activity over 40,000 years—an exceptionally short period of time during which massive amounts of carbon dioxide and other gas emissions may have drastically altered Earth’s climate, killing off thousands of plant and animal species.

Andrew Knoll, professor of earth and planetary sciences at Harvard University, says pinpointing the duration of volcanism has been the key challenge for scientists in identifying an extinction trigger. “The new paper suggests that a large initial burst of volcanism was temporally associated with and could have caused the recorded extinctions,” says Knoll, who was not involved in the study.
"It provides a welcome and strong test of a leading hypothesis, increasing our confidence that massive volcanism can be an agent of biological change on the Earth." While the team's evidence is the strongest thus far to link volcanic activity with the end-Triassic extinction, Bowring says more work can be done. "The CAMP province extends from Nova Scotia all the way down to Brazil and West Africa," Bowring says. "I'm dying to know whether those are exactly the same age or not. We don't know." Explore further: Timeline of a mass extinction: New evidence points to rapid collapse of Earth’s species 252 million years ago
What is the treeline? If you go high enough up a tall mountain there is a point where trees disappear and you transition into low alpine vegetation. The same holds if you travel toward the North Pole, where the boreal forest gives way to the tundra. This is the treeline. It is one of the most classic concepts in biogeography.

Some simple, seemingly universal rules describe the treeline, often defined as the limit of tree growth (where trees reach 3 or more metres tall): the mean temperature of the warmest month must be 10°C or more. Below this threshold it is too cold for trees to maintain growth. Interestingly, it seems that amongst the most cold-tolerant trees (which come from diverse families) the ultimate limits of tree growth are relatively consistent. While the coincidence between tree limits and the 10°C isotherm for the warmest month has been shown to be an oversimplification at fine scales, there are clear relationships between this and other variables such as the mean temperature of the growing season (the length of which varies between sites), soil temperature, wind damage, and water stress (due to lack of available water in frozen conditions). So we know what factors are correlated with the treeline, but few studies have looked at causal factors.

The limits to tree growth

Why are trees more limited than shrubs and other smaller plants? Firstly, trees are subject to more severe conditions than plants growing closer to the ground. Air temperatures can be several degrees cooler in a tree canopy than near the ground, and soil temperatures are lower under the shade of a tree canopy. These microclimatic vertical differences in temperature within the forest can be equivalent to a 100-300 m elevation difference in free-air temperature. Outside the growing season, a winter snowpack is often present at treeline sites, which insulates and protects lower-growing plants.

Secondly, there is considerable energetic cost in growth and maintenance for large organisms like trees. When there is only a short summer growing season, just maintaining a positive carbon balance becomes a critical limitation, and this is what those air temperature correlates of treelines reflect. This explains why seedlings of tree species can grow well above the treeline, sometimes maturing into a shrubby growth form called krummholz, which can live for hundreds of years as a stunted ‘tree’.

Some angiosperms (higher plants) other than trees can survive with growing temperatures of just 5°C, and at these limits to plant growth the microclimate becomes critical. Foliage temperatures in a sheltered sunny spot on a mountain summit or in high-latitude tundra can be several degrees higher than the air temperature. Cushion plants and other low-growing plants take full advantage of these microclimates in between long periods of dormancy. In deciduous trees, the timing of bud burst, when new leaves appear in spring allowing photosynthesis to begin, is finely tuned to avoid the risk of damage to new leaves by late frosts, but later bud burst means a shorter growing season. When growth is so marginal, any additional pressures, such as loss of foliage and branches due to wind breakage or winter desiccation, can be enough to eliminate trees, which is why treelines on a mountain are sometimes at higher elevations in more sheltered locations.

Not all treelines are the same

Like most topics in ecology, as more research is carried out a more complex story emerges.
Treelines in Mediterranean climates, such as in northern Chile and Spain, occur at lower elevations than the more widely studied temperate treelines. Low rainfall during the summer growing season means that drought is the more critical limiting factor for trees to achieve a positive carbon balance (in other words, to grow new tissue). Most treeline research, however, has been done in Europe and North America.

Alpine treelines occur even in the tropics where mountains reach sufficient elevation, but are more common at higher latitudes, where they occur at lower elevations. Where the treeline reaches sea level it is termed the Arctic treeline, since it is the northern limit of tree growth on Earth. The equivalent zone in the Southern Hemisphere falls almost entirely on the Southern Ocean. Consequently, the ‘Antarctic treeline’ occurs only on southern islands of the Tierra del Fuego archipelago. Here, Hoste Island supports the southernmost trees on Earth. The few oceanic islands at these far southern latitudes are treeless, but is that due to climate, or to isolation and the consequent lack of tree seed dispersal?
While your mother may have warned you about making assumptions, every writer assumes at some point while writing. To figure out what your audience may need, you have to make assumptions about who they are and what they already know. When we assume, we imagine that our audience holds some of the same beliefs or ideas that we do. For example, if you are arguing for the creation of laws against texting while driving, you might make the following assumptions about your reader:

- The reader knows what texting is.
- The reader can see that texting while driving might be dangerous.

Sometimes, however, it is dangerous to make an assumption; after all, mother knows best. Be careful about assuming that your audience will automatically agree with you, and make sure that you don’t make assumptions that could be offensive about race, class, gender, or age. Here are some assumptions that would be inappropriate for the texting-while-driving example above:

- Assuming that only a teenager would text while driving, and thereby ranting about the irresponsibility of teenagers in your paper.
- Assuming that women are the only culprits of texting while driving, and commenting on women as bad drivers.

Ask yourself the following questions to help uncover some of your assumptions. Think about whether these assumptions might be inappropriate ones about gender, race, class, or age.

- How might these relate to your topic?
- Would the outcome be different if you had a different perspective, such as that of the opposite gender?
- Does society have a different assumption about these categories? How might that impact your paper?

It is not just important to think about the assumptions you make based on audience. You also want to think about why you hold assumptions about certain topics.

- Why do you feel as you do about the topic you are writing on?
- Is there a reason why you do not accept the facts or statistics someone else may present on this topic?
- Do any stereotypes or beliefs shape your feelings on the subject?
Think of a specific time in your life when you actively noticed stratification based on social class. Inequality is embedded in our daily lives, but I want you to think of a specific event: for example, a conversation that made an impression on you, or an event where there were certain social expectations based on class. It can be a memory from childhood or something you did recently.

1. Start by telling me about the event. What made you think of it, and why is it relevant for this assignment?
2. Then tell me how the event brought social class to your attention. What were some of the social rules, expectations, or assumptions made based on class? Were there penalties for violating those rules/expectations? Rewards for conforming to them? Were other dimensions of stratification present (e.g., race, gender, sexuality)? If so, how did they intersect with class?
3. Next, reflect on how you participated in the event. Did you discuss class, or related dimensions of stratification, openly? Why or why not?
4. Finally, describe how you generally talk about or encounter class in your everyday life. Did the event you’ve discussed in this assignment change this in any way?

Use parenthetical references (lecture, date; or reading, date) to cite material from class and readings. You should cite at least two concepts (in addition to the term “class” itself) from the readings or lecture materials. Please use 12-point font, double-spaced, with 1” margins. The assignment should be 400-700 words (about 2-3 double-spaced pages). Make sure to put your name on the first page and staple the assignment in the top left-hand corner. In addition to a hard copy, please also submit an electronic copy of your assignment on Canvas.

Grading:
- Followed directions, submitted on time: 1 pt
- Description of event: 7 pts
- Quality of self-reflection: 7 pts
- Connection to class concept(s): 10 pts

Pick 2 of the attached files to use as sources. Please make sure you connect your story to class concepts. You can use all the files if it’s easier to think of a story.
Mars mineral globe

This unique atlas comprises a series of maps showing the distribution and abundance of minerals formed in water, by volcanic activity, and by weathering to create the dust that makes Mars red. Together, the maps provide a global context for the dominant geological processes that have defined the planet’s history.

The maps were built from ten years of data collected by the OMEGA visible and infrared mineralogical mapping spectrometer on Mars Express. The animation cycles through maps showing: individual sites where a range of minerals that can only be formed in the presence of water were detected; maps of olivine and pyroxene, minerals that tell the story of volcanism and the evolution of the planet’s interior; and ferric oxide and dust. Ferric oxide is a mineral phase of iron and is present everywhere on the planet: within the bulk crust, lava outflows, and the dust oxidised by chemical reactions with the martian atmosphere, causing the surface to ‘rust’ slowly over billions of years and giving Mars its distinctive red hue.

The map showing hydrated minerals includes detections made by both ESA’s Mars Express and NASA’s Mars Reconnaissance Orbiter.

Credits: Hydrated mineral map: ESA/CNES/CNRS/IAS/Université Paris-Sud, Orsay; NASA/JPL/JHUAPL; Olivine, pyroxene, ferric oxide and dust maps: ESA/CNES/CNRS/IAS/Université Paris-Sud, Orsay; Video production: ESA.