Originally it was thought that all globular clusters were part of the halo. Now, however, it is realized that two distinct populations of globulars exist. Old, metal-poor clusters ([Fe/H] < -0.8) are part of an extended, spherical halo, while younger clusters with [Fe/H] > -0.8 are in a more concentrated and flattened distribution. The less metal-poor clusters have a scale height similar to that of the thick disk, and the two may be associated; other ideas relate them to the Galaxy's bulge instead. Globular clusters are generally old, with ages ranging from 10 to 14 billion years, although there is still a lot of controversy about absolute ages here.

How do we find field halo stars? Look at the space velocities of stars with respect to the Sun. If they are low, the stars are probably disk stars (like the Sun); if they are high, they are usually associated with the halo. If we add up all the mass in the field stars and the metal-poor globular clusters, we can come up with a rough density distribution for the Galaxy's stellar halo. Like the globular clusters, the halo field stars are also metal-poor. This tells us something about the formation of the Galaxy! The total mass of the halo is about 10^8 to 10^9 solar masses, about 1% of which is in the globular clusters, with the rest in field stars. So there's not a lot of stuff in the stellar halo, but what is there holds a lot of information about the early history of the Galaxy!

The metallicity distribution of field stars in the Milky Way's halo shows lots of very metal-poor stars, with a long tail extending to extremely metal-poor values of [Fe/H] < -3.5. Below: spectra of extremely metal-poor stars (solar, [Fe/H] = -4, [Fe/H] = -5.3, and zero metals), courtesy of ESO. Stars this chemically unenriched must trace the early history of star formation in the Galaxy!

The distribution of stars in the Milky Way's halo is not smooth, but shows evidence for stellar streams. Below: star streams in the galactic halo from the SDSS survey (Belokurov et al. 2006).
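The bracketed [Fe/H] values used throughout are standard astronomical metallicity notation: the logarithm of a star's iron-to-hydrogen number ratio relative to the Sun's. A minimal sketch of that notation and of the two-population split at [Fe/H] = -0.8 described above (the solar iron abundance used here is an approximate illustrative value, not taken from the text):

```python
import math

# Approximate solar iron-to-hydrogen number ratio (illustrative value).
SOLAR_FE_H = 3.2e-5

def fe_h(iron_to_hydrogen_ratio):
    """[Fe/H]: log10 of a star's Fe/H ratio relative to the Sun's."""
    return math.log10(iron_to_hydrogen_ratio / SOLAR_FE_H)

def cluster_population(fe_h_value):
    """Rough two-population split of globular clusters at [Fe/H] = -0.8."""
    if fe_h_value < -0.8:
        return "halo (old, metal-poor)"
    return "disk/bulge (younger, less metal-poor)"

# The Sun has [Fe/H] = 0 by construction; each step of -1 means a
# factor of ten less iron relative to hydrogen.
print(fe_h(SOLAR_FE_H))         # 0.0
print(cluster_population(-1.5))  # halo (old, metal-poor)
```

On this scale an extremely metal-poor halo star with [Fe/H] < -3.5 has less than about 1/3000 of the Sun's iron abundance.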
Upper main sequence/turnoff stars have been selected via a color cut; these stars should have similar luminosities, so their apparent magnitude is a measure of distance. In this plot, color is not the color of the star, but distance (blue nearer, red further). On larger scales, these streams can be overlaid on 2MASS measurements of structure in the galactic halo, showing how the big SDSS stream connects with the larger Sagittarius stream detected by 2MASS:
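The claim that apparent magnitude serves as a distance measure for stars of similar luminosity follows from the distance modulus, m - M = 5 log10(d / 10 pc). A minimal sketch (the turnoff absolute magnitude M ~ 4.5 is an assumed illustrative value, not taken from the text):

```python
def distance_pc(m_apparent, m_absolute):
    """Distance in parsecs from the distance modulus: m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((m_apparent - m_absolute + 5.0) / 5.0)

# For stars of similar intrinsic luminosity (e.g. turnoff stars, assuming
# M ~ 4.5), fainter apparent magnitude directly means greater distance:
print(distance_pc(14.5, 4.5))  # 1000.0 pc
print(distance_pc(19.5, 4.5))  # 10000.0 pc
```

Each 5 magnitudes fainter corresponds to a factor of 10 in distance, which is why a simple color cut plus apparent magnitude suffices to map the streams in three dimensions.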
Glossary of Musical Terms Our glossary of musical terms lets you look up any musical term unfamiliar to you, and comes to us courtesy of our good friends at Naxos. The English horn is more generally known in England as the cor anglais. It is the tenor oboe. The word ensemble is used in three senses. It may refer to the togetherness of a group of performers: if ensemble is poor, the players are not together. It may indicate part of an opera that involves a group of singers. It can also mean a group of performers. As the word suggests, an entr'acte (= German: Zwischenspiel) is music between the acts of a play or opera. An étude is a study, intended originally for the technical practice of the player. Chopin, Liszt, and later composers elevated the étude into a significant piece of music, no mere exercise. The exposition in sonata-allegro form is the first section of the movement, in which the principal thematic material is announced. In the exposition of a fugue (a fugal exposition) the voices (= parts) enter one by one with the same subject: the exposition ends when all the voices have entered. F is a note of the scale (= Italian, French: fa). Fagott (German) or fagotto (Italian) is the bassoon, the bass of the woodwind section in the orchestra (see Bassoon). A fanfare is a flourish of trumpets or other similar instruments, used for military or ceremonial purposes, or music that conveys this impression. Fantasy (= French: fantaisie; Italian: fantasia; German: Fantasie) is a relatively free form in the 16th and 17th centuries, in which a composer may exercise his fancy, usually in contrapuntal form. In later periods the word was used to describe a much freer form, as in the written improvisations for piano of this title by Mozart, or Beethoven's so-called Moonlight Sonata, described by the composer as Sonata quasi una fantasia, Sonata like a Fantasia. A fiddle is a violin, but the word is used either colloquially or to indicate a folk-instrument.
The Australian composer Percy Grainger, who objected to the use of words of Latin origin, used the word fiddle for violin, middle-fiddle for viola and bass fiddle for cello, as part of his eccentric vocabulary of 'blue-eyed English'. The Italian La Follia (= Spanish: Folía; French: Folie d'Espagne) is a well-known dance tune popular from the 16th century or earlier and found in the work of composers such as Corelli (1653 - 1713), who used the theme for a set of variations forming a violin sonata, or later by Rachmaninov (1873 - 1943) in his incorrectly named Variations on a Theme of Corelli. Forte (Italian: loud) is used in directions to performers. It appears in the superlative form fortissimo, very loud. The letter f is an abbreviation of forte, ff an abbreviation of fortissimo, with fff or more rarely ffff even louder. The word fortepiano, with the same meaning as pianoforte, the full name of the piano, with its hammer action and consequent ability to produce sounds both loud and soft, corresponding to the force applied to the keys, is generally used to indicate the earlier form of the piano, as it developed in the 18th century. A Mozart piano, for example, might be called a fortepiano. The instrument is smaller, more delicately incisive in tone than the modern instrument, and is in some respects more versatile. Fugue has been described as a texture rather than a form. It is, in essence, a contrapuntal composition. The normal fugue opens with a subject or theme in one voice or part. A second voice answers, with the same subject transposed and sometimes slightly altered, usually at the interval of a fifth, while the first voice continues with an accompaniment that may have the character of a countersubject that will be used again as the piece progresses. Other voices enter one by one, each of them with the subject, the third in the form of the first entry, the fourth in the form of the answer in the second voice.
A fugue may have as few as two voices (the word voice does not necessarily imply singing in this context) and seldom more than four. The subject announced at the beginning provides the chief melodic element in a fugue. When all the voices have entered, the so-called fugal exposition, there will be an episode, a bridge that leads to a further entry or series of entries answering each other, now in different keys. The fugue, as it had developed by the time of Johann Sebastian Bach, continues in this way, often making use of stretto (overlapping entries of the subject) and pedal-point (a sustained note, usually below the other parts) as it nears the end. The fugue became an important form or texture in the Baroque period, reaching its height in the work of J. S. Bach in the first half of the 18th century. Later composers continued to write fugues, a favourite form of Mozart's wife Constanze, with Beethoven including elaborate fugues in some of his later piano sonatas and a remarkable and challenging Grosse Fuge (Great Fugue) as part of one of his later string quartets. Technically the writing of fugue remains an important element in the training of composers. G is a note of the musical scale (= French, Italian: sol). The galliard is a courtly dance of the late 16th and early 17th century in triple metre usually following a slower duple metre pavan. The two dances are often found in instrumental compositions of the period, sometimes in suites. The galop is a quick dance in duple metre, one of the most popular ballroom dances of the 19th century. The dance appears, parodied as the can-can, in Offenbach's operetta Orpheus in the Underworld.
Gamba (Italian: leg) is in English used colloquially to designate the viola da gamba or leg-viol, the bowed string instrument popular from the 16th until the middle of the 18th century and held downwards, in a way similar to that used for the modern cello, as opposed to the viola da braccio or arm-viol, the instrument of the violin family, held on the arm or shoulder. The German dance (= German: Deutsche, Deutscher Tanz) describes generally the triple metre dances of the late 18th and early 19th centuries, found in the Ländler and the Waltz. There are examples of this dance in the work of Beethoven and of Schubert. The gigue (= Italian: giga; English: jig) is a rapid dance normally in compound duple metre (the main beats divided into three rather than two). The gigue became the accepted final dance in the baroque instrumental suite. Giocoso (Italian: jocular, cheerful) is sometimes found as part of a tempo instruction to a performer, as in allegro giocoso, fast and cheerful. The same Italian adjective is used in the descriptive title of Mozart's opera Don Giovanni, a dramma giocoso. Giusto (Italian: just, exact) is found in tempo indications, as, for example, allegro giusto, as in the last movement of Schubert's Trout Quintet, or tempo giusto, in strict time, sometimes, as in Liszt, indicating a return to the original speed of the music after a freer passage. Derived from the French glisser, to slide, the Italianised word glissando is used to describe sliding in music from one note to another. On the harp or the piano this is achieved by sliding the finger or fingers over the strings or keys, and can be achieved similarly on bowed string instruments, and by other means on the trombone, clarinet, French horn and pedal timpani among others. The glockenspiel is a percussion instrument similar in form to the xylophone, but with metal rather than wooden bars for the notes.
The instrument appeared only gradually in the concert-hall and opera-house and is found in Handel's oratorio Saul and elsewhere. Mozart made famous use of the glockenspiel in The Magic Flute (Die Zauberflöte), where it is a magic instrument for the comic bird-catcher Papageno. It is now a recognised if sparingly used instrument in the percussion section of the modern orchestra. The gong is a percussion instrument originating in the East. In the modern orchestra it is usually found in the form of the large Chinese tam-tam. The gong appears in Western orchestral music in the late 18th century, and notable use of sets of gongs of varying size is found adding exotic colour to Puccini's oriental operas Madama Butterfly and Turandot.
Healthy teeth are important to your overall well-being. Any problem with your teeth, or mouth in general, can affect the rest of your body. For this reason, it’s important you make a conscious effort to take care of your teeth. Your teeth help you to masticate food, which is usually the first step of digestion. They also play a big part in speech and creating a good-looking appearance. Chances are you’ll smile more often if your teeth are in good condition. The consequences of poor dental health are serious and usually include health conditions that are painful and even disabling. Below, you’ll find top tips to keep your teeth and mouth healthy. Remember, healthy teeth are crucial for your overall well-being. 1. Brush your teeth twice daily Everyone knows that the general brushing recommendation is at least twice daily. This has been drilled into us by parents and teachers alike. You should brush your teeth for at least 2 minutes in the morning and evening. This helps to get rid of germs and keep your teeth healthy. Plaque and other dental problems are also kept at bay by brushing at least twice daily for two minutes. Proper brushing technique involves cleaning all surfaces of the teeth both on the front and back as well as along your gum line. Children should be taught the practice of proper brushing early. And parents or guardians can make brushing fun by playing a song or setting the timer for the recommended two minutes of brushing time. 2. Floss daily Flossing is an important oral care routine and should be done at least once a day. The best time to floss is at night before bedtime, as this helps to remove food and other particles stuck between the teeth. If these particles are not removed, they can cause the teeth to decay. Many people fail to floss at least once a day. However, it’s an important dental hygiene routine you should make a part of your lifestyle. It’s not enough to simply floss. You have to do it right to avoid damaging your teeth and gums.
Use about 18 – 24 inches of dental floss and wind most of it along your middle fingers. You only need about 1 – 2 inches to clean your teeth. Slide the floss up and down along the whole tooth and avoid gliding it into your gums. This is why you need a short length of dental floss. Your gums may become sensitive when you start flossing but will return to normal when you’ve been flossing for a few days. 3. Replace old toothbrush Your toothbrush should be replaced as it starts to show signs of wear. The general recommendation for changing toothbrushes is about every 3 to 6 months. And this ensures that your toothbrush is in good condition to properly clean your teeth. Old toothbrushes tend to become damaged, frayed, and unable to properly clean the teeth. Changing your toothbrush regularly also prevents buildup of microbes, although you should rinse your toothbrush well after every use. 4. Visit your dentist every 6 months Dentists are your friend when it comes to having healthy teeth and mouth. And periodic visits to one means you can detect any oral issues early on. This is beneficial to your health and money-wise. Your dentist will schedule a professional cleaning to remove buildup of plaque and tartar twice a year. “Regular dental cleaning is essential for your health. People often ask why we need a specialized doctor for our teeth cleaning. Our mouths are very important: our teeth prepare the food we need to eat so that our body can properly digest it. Regular cleanings at the dentist’s office keep our teeth healthy and help our body by preventing other illnesses,” says Dr. Winter, an experienced dentist in Arvada. 5. Maintain a healthy diet Healthy diets help keep your teeth in good condition. Foods like almonds and leafy greens are especially beneficial to your teeth. Avoid, or at the very least, limit sweetened or sugary foods such as candy, pop, and so on. Incorporate food rich in calcium into your diet as calcium supports strong and healthy teeth.
Have a chat with your dentist about food to eat and avoid. 6. Use dental hygiene products When it comes to preventive dental care, there is no replacement for brushing and flossing. However, you can always supplement these key dental care practices with other dental hygiene products like tongue cleaners, mouthwash, oral irrigators, as well as interdental cleaners. Your dentist can recommend the type and even brand of dental hygiene products to use.
Researchers from the University of Helsinki’s Finnish Museum of Natural History Luomus and the National Museums of Kenya have discovered four lichen species new to science in the rainforests of the Taita Hills in southeast Kenya. Micarea pumila, M. stellaris, M. taitensis and M. versicolor are small lichens that grow on the bark of trees and on decaying wood. The species were described based on morphological features and DNA characters. “Species that belong to the Micarea genus are known all over the world, including Finland. However, the Micarea species recently described from the Taita Hills have not been seen anywhere else. They are not known even in the relatively close islands of Madagascar or Réunion, where species of the genus have been previously studied,” Postdoctoral Researcher Annina Kantelinen from the Finnish Museum of Natural History says. “The Taita Hills cloud forests are quite an isolated ecosystem, and at least some of the species now discovered may be native to the area or to eastern Africa. Our preliminary findings also suggest that there are more unknown Micarea lichen species there.”

The Taita Hills are a unique environment

The Taita Hills are part of the Eastern Arc Mountains that range from south-eastern Kenya to eastern Tanzania. The mountains rise abruptly from the surrounding plain, with the tallest peak reaching over two kilometers. Lush indigenous rainforests are mainly found on the mountaintops, capturing precipitation from clouds and mist developed by the relatively cool air rising from the Indian Ocean. Thanks to ecological isolation and a favourable climate, the area is one of the global hotspots of biodiversity. However, the native cloud forests in the region are shrinking year by year as they are replaced by forest plantations of exotic tree species that are not native to Africa. Compared to 1955, the area of indigenous forest has diminished to less than half.
“Planted forests have been found to bind less moisture and be more susceptible to forest fires. Therefore, they can make the local ecosystem drier and result in species becoming endangered. Some lichen species are capable of utilising cultivated forests at least temporarily, but indigenous forests have the greatest biodiversity and biomass,” Kantelinen says. The University of Helsinki maintains in the area the Taita Research Station, which is celebrating its tenth anniversary this year. Kantelinen, A., Hyvärinen, M., Kirika, P., & Myllys, L. (2021). Four new Micarea species from the montane cloud forests of Taita Hills, Kenya. The Lichenologist, 53(1), 81-94. doi:10.1017/S0024282920000511
Bats are unwitting participants in a disturbing vanishing act A strange fungus called white nose syndrome is wiping out little brown bat colonies across North America. The declining bat population has serious consequences for humans. The mysterious, nocturnal habits of bats have led to demonization of this unique animal. Now, as a strange disease called white nose syndrome quickly wipes out bat colonies across our continent, we’re beginning to appreciate how important bats are to our agricultural system—and possibly our health. But is it too late to help the little brown bat? Bees have snagged much of our appreciation when it comes to the impact of animals on our food security; however, bats are just as worthy of our respect and praise. Bats are hugely important in reducing pest-related damage to many crops, and some species are even important pollinators of fruit. However, a strange fungus causing a disease known as white-nose syndrome (WNS) is rapidly wiping out hibernating bat populations around the continent, resulting in profound implications for all of us. A one-way ticket to extinction White-nose syndrome was first detected in 2006 in a cave in New York, and has since spread across 19 states and into Canada, where it has been identified in bat populations in Nova Scotia, New Brunswick, Quebec, and Ontario. The fungus that causes this syndrome, known as Geomyces destructans, appears to affect at least nine species of cave-hibernating bats, including little brown bat, northern long-eared bat, and eastern small-footed bat. To date, at least 5.5 million and upward of 6.7 million bats have died; little brown bat—the most common species in North America—has been the hardest hit. White-nose syndrome, so-called because of the appearance of a white “fuzz” on the noses of infected bats, has a very high mortality rate—anywhere from 75 to 100 percent—and appears to affect bats by causing them to awaken too early from hibernation. 
Once awake, bats quickly consume their bodily fat reserves and ultimately die of starvation and exposure. This past winter the fungus was responsible for wiping out New Brunswick’s largest population of hibernating bats, and scientists are predicting the little brown bat will become locally extinct in northeastern Canada and the US within 15 years, with potential for complete extinction across North America if the fungus continues to spread westward. Research indicates the fungus is an invasive species from Europe; however, so far bat populations in Europe seem unaffected by the fungus. Why this seems to be the case is uncertain, but scientists in the UK are closely following the plight of bats in North America in an effort to understand and mitigate any problems that may arise in the future. Impact on Canadians To date there has been no evidence to suggest the fungus that causes WNS in bats poses direct risk to human safety. However, as bats are voracious consumers of insects, including mosquitoes and agricultural pests, the loss of little brown bat to WNS is expected to have significant agricultural, economic, and health implications for Canadians. Bats save North American farmers billions of dollars a year in insect-control costs by consuming pests of various crops. For instance, codling moth damage to pears was significantly reduced if the orchard was located near a known bat roosting area. With reduction and potential loss of bat populations across North America, farmers will need to increase chemical input to control various pest populations. As conventional farmers spend more on pesticides to keep pest levels down, the cost of the chemicals will be shifted on to consumers, resulting in higher food prices, possibly increased chemical residue in our food and water, and larger environmental problems associated with greater pesticide use. Organic farming and integrated pest management Bats are considered one of the greatest allies to organic farmers. 
Given their importance in agricultural pest reduction, the impact of an increased pest load caused by little brown bat’s extinction and reduction of other bat populations will profoundly affect organic farmers, who often rely on bats as part of a natural integrated pest management system. Although under lab conditions an individual bat can eat 600 mosquitoes per hour, evidence suggests that in the wild they are opportunistic feeders, preferring larger insects. Scientists have yet to reach consensus on whether bats make a significant contribution in reducing mosquito numbers, but little brown bat appears to be one of the most avid mosquito-eaters, being able to eat twice as many mosquitoes per hour as other bats. Despite its preference for larger insects, a single little brown bat will still eat over a thousand mosquitoes on any given night, including mosquitoes carrying viruses that cause West Nile and eastern equine encephalitis. Loss of little brown bat could mean increased instances of mosquito- and other insect-borne illnesses; in fact, some medical establishments in the US are already advising the public to be vigilant about mosquito bites in areas where WNS is present in the bat population. Furthermore, climate change is expected to increase the range of many disease-causing mosquitoes and insects into Canada and, combined with loss of bats, could have increasingly negative consequences for the health of Canadians. Batting for the bats Bats need all the help they can get. Unfortunately, many unfounded fears about bats prevent people from reaching out to this extraordinary mammal. However, even if you don’t want to get too close, there are a few things you can do to get involved. Bats moving in If bats have moved into your home, hire professionals to humanely relocate the bats. These “urbanized” bats may be among our last hope of saving some bat species from extinction, as urban bats have so far been unaffected by WNS. 
The most likely reason for this is that the places urban bats choose to roost in—dry, warm attics and walls of homes—are not conducive to growth of the fungus that causes WNS, which flourishes in the cold, damp caves and mines in which rural bats hibernate. However, take care to remember bats should always be handled with caution and by experts only—although very rare, bats can carry rabies and potentially infect humans. To help encourage bats to stay out of your home, hang up bat houses on your property. Bat houses can easily be built by hand or ordered online, and once bats have established themselves in their new roost you may appreciate their efforts at keeping your barbecue parties mosquito free. Bats behaving strangely If you see bats acting strangely—such as bats flying around in the daytime or during the winter months—report the sighting to your local environment ministry. Such activity could mean the bat is unwell, and is part of a roosting colony that has WNS. Catching new instances of WNS early on before it has a chance to spread to neighbouring bat populations could help stave off continent-wide extinction of certain bat populations. Prognosis for many of our bat species, and little brown bat especially, is not good. However, if we work to help the ones that are left and prevent the spread of white-nose syndrome, we may be able to create some hope for this important mammal’s future—and ultimately, the future of our food security as well.
No place has played such a prominent role in Berlin’s turbulent history as the Brandenburger Tor (Gate). This is where it all happened: Napoleon’s triumphal procession, Nazi parades and Hitler’s grim speeches, a no-man's land during the Cold War, JFK’s visit, Ronald Reagan's speech and the spontaneous street celebrations after the fall of the Berlin Wall. Dive into Berlin’s turbulent history at its only remaining city gate. The Brandenburg Gate was built between 1788 and 1791 as a city gate and a symbol for peace. During the Cold War, the Gate suddenly found itself in the no-man’s land between East and West and thus became a symbol of stolen freedom. In his famous ‘Tear Down This Wall’ speech of June 1987, American president Reagan said: "General Secretary Gorbachev, if you seek peace, if you seek prosperity for the Soviet Union and eastern Europe, if you seek liberalization, come here to this gate. Mr. Gorbachev, open this gate. Mr. Gorbachev, tear down this wall!" Two years later the German people did just that, and in November 1989 the first ‘Ossies’ (East Germans) walked through the Brandenburg Gate to find freedom. After that, the Gate represented not only peace, but also freedom. “Suddenly the city gate stood in the kill zone, full of watch towers and armed Volkspolizisten”
The Polyneopteran Orders Polyneopterans have a very simple, unspecialized body-plan that retains many of the ancestral (plesiomorphic) characteristics of ametabolous insects: abdominal cerci, chewing mouthparts, long multi-segmented antennae, and a distributed nervous system with numerous segmental ganglia. Adults have four wings, although some species are secondarily wingless. The front wings (often called tegmina) are usually thickened or leathery. At rest, they cover and protect the hind wings. In flight, front and hind wings operate independently of one another (as in the Paleoptera). Hind wings are often enlarged near the base, providing a greater surface area for lift during flight. Most of the polyneopterans are rather weak or clumsy fliers. There is extensive controversy over phylogenetic relationships within the Polyneoptera complex. Although the fossil record contains many primitive neopterans, few systematists agree on how these extinct organisms are related to living orders and families. The ordinal status of modern-day polyneopterans is also the subject of much debate: “lumpers” are inclined to group all of these insects into five or six orders, whereas “splitters” divide them into as many as ten different orders. Under the classification scheme we have chosen to use in this course (Cladogram 4), each major ecological group is given ordinal status. This may please the “splitters” but it probably gives a false impression that the evolutionary history of these organisms is more diverse than it really is. In fact, there is strong justification for combining some of these orders, and we will try to emphasize these groupings in the following paragraphs. The first polyneopteran insects were scavengers and/or herbivores. From a physical standpoint, they were probably very similar to members of the present-day order Plecoptera. These insects, commonly known as stoneflies, are generally regarded as the earliest group of Neoptera.
They probably represent an evolutionary “dead end” that diverged well over 300 million years ago. Immature stoneflies are aquatic nymphs (naiads). They usually live beneath stones in fast-moving, well-aerated water. Oxygen diffuses through the exoskeleton or into tracheal gills located on the thorax, behind the head, or around the anus. Most species feed on algae and other submerged vegetation, but two families (Perlidae and Chloroperlidae) are predators of mayfly nymphs (Ephemeroptera) and other small aquatic insects. Adult stoneflies are generally found on the banks of streams and rivers from which they have emerged. They are not active fliers and usually remain near the ground where they feed on algae or lichens. In many species, the adults are short-lived and do not have functional mouthparts. Stoneflies are most abundant in cool, temperate climates. The order Embioptera (webspinners or embiids) is another group within the Polyneoptera complex that probably appeared early in the Carboniferous period. Many insect taxonomists believe webspinners may represent another evolutionary “dead end” that diverged about the same time as Plecoptera. Determining phylogenetic relationships for this group is unusually difficult because the Embioptera have a number of adaptations not found in any other insects. The tarsi of the front legs, for example, are enlarged and contain glands that produce silk. No other group of insects, fossil or modern, have silk-producing glands in the legs. The silk is used to construct elaborate nests and tunnels under leaves or bark. Webspinners live gregariously within these silken nests, feeding on grass, dead leaves, moss, lichens, or bark. Nymphs and adults are similar in appearance. Embiids rarely leave their silken tunnels; a colony grows by expanding its tunnel system to new food resources. Well-developed muscles in the hind legs allow these insects to run backward through their tunnels as easily as they run forward.
Only adult males have wings. Front and hind wings are similar in shape and unusually flexible; they fold over the head when the insect runs backward through its tunnels. Blood (hemolymph) is pumped into anterior veins to stiffen the wings during flight. In Embioptera, the mouthparts are directed forward (prognathous) rather than downward as in other primitive polyneopterans. This may simply be an adaptation for life in a tunnel, or as some taxonomists have suggested, it may mean that Embioptera are really more closely related to earwigs (order Dermaptera). Most Embioptera are tropical or subtropical. The ancestral prototype for the main line of Polyneoptera evolution was probably an insect very similar in appearance to a cockroach. Paleobiologists refer to this ancestral lineage as the Protoblattodean line. It probably dates from the early Carboniferous period, around 360 million years ago. In fact, fossil cockroaches found in late Carboniferous rock are remarkably similar to species living today. In our scheme of classification, all modern cockroaches are grouped in one order, the Blattodea (or Blattaria). “Lumpers” often put them together with praying mantids (in the order Dictyoptera) or include them as a suborder of Orthoptera. The cockroaches, often known as “waterbugs,” are scavengers or omnivores. They are most abundant in tropical or subtropical climates, but they also inhabit temperate and boreal regions. They are commonly found in close association with human dwellings where they are considered pests. Cockroaches have an oval, somewhat flattened body that is well-adapted for running and squeezing into narrow openings. Rather than flying to escape danger, roaches usually scurry into cracks or crevices. Much of the head and thorax is covered and protected dorsally by a large plate of exoskeleton (the pronotum). When cockroaches lay eggs, the female’s reproductive system secretes a special capsule around her eggs.
This structure, known as an oötheca, may be dropped on the ground, glued to a substrate, or retained within the female’s body. Production of an oötheca is a special adaptation found only in the cockroaches and praying mantids. This similarity suggests a close phylogenetic relationship between these groups and explains why some taxonomists prefer to lump them into a single order (Dictyoptera). From an ecological standpoint, cockroaches and mantids could not be more different: roaches are nocturnal scavengers, mantids are diurnal predators — in fact, they are the largest group of predators in the entire Polyneoptera complex. Mantids, order Mantodea, have elongate bodies that are specialized for a predatory lifestyle: long front legs with spines for catching and holding prey, a head that can turn from side to side, and cryptic coloration for hiding in foliage or flowers. Mantids are most abundant and most diverse in the tropics; there are only 5 species commonly collected in the United States, and 3 of these have been imported from abroad. The termites, order Isoptera, are another group of insects that appear to be closely related to cockroaches. This conclusion is based on behavioral and ecological similarities between termites and wood roaches (members of the family Cryptocercidae). These cockroaches live in fallen timber on the forest floor, feeding on wood fibers which are then digested by symbiotic microorganisms within their digestive systems. They live in small family groups where each female provides care for her young offspring. Termites and wood roaches are thought to be close relatives because they both occupy similar habitats, share the same type of food resources, have the same intestinal symbionts, and provide care for their offspring. Termites are the only hemimetabolous insects that exhibit true social behavior. They build large communal nests that house an entire colony.
Each nest contains adult reproductives (one queen and one king) plus hundreds or thousands of immatures that serve as workers and soldiers. Like cockroaches and mantids, the termites are most abundant in tropical and subtropical climates. In Blattodea, Mantodea, and Isoptera, wing movement (particularly the downstroke) is largely dependent on muscles attached to the base of the wing (direct flight muscles). But in another branch of the Protoblattodean lineage, direct flight muscles are smaller and more of the power for flight is provided by indirect flight muscles (located in the thorax but not attached directly to the wings). At least two extinct orders (Protorthoptera and Protelytroptera) appear to be part of this second branch which also includes all the rest of the modern-day Polyneoptera orders: Orthoptera, Phasmatodea, Dermaptera, Grylloblattodea, and perhaps Zoraptera and Mantophasmatodea. Orthoptera (grasshoppers, crickets, and katydids) probably arose during the middle of the Carboniferous period. Most living members of this order are terrestrial herbivores with modified hind legs that are adapted for jumping. Slender, thickened front wings (tegmina) fold back over the abdomen to protect membranous, fan-shaped hind wings. Many species have the ability to make and detect sounds. Orthoptera is one of the largest and most important groups of plant-feeding insects. Although their phylogeny is not clear, all other members of the Polyneoptera complex are probably sister groups to the Orthoptera. These include the earwigs (order Dermaptera), leaf and stick insects (order Phasmatodea), rock crawlers (order Grylloblattodea), gladiators (order Mantophasmatodea), and zorapterans (order Zoraptera). Nearly all of these insects are herbivores or scavengers. 
In earwigs and stick insects, the chewing mouthparts are directed forward (prognathous) as in Embioptera; in rock crawlers, gladiators, and zorapterans, the mouthparts are directed downward (hypognathous) as in all other polyneopterans. The leaf and stick insects (order Phasmatodea or Phasmida) are sometimes grouped as a family or suborder of Orthoptera. All species are herbivores. As the name “walkingstick” implies, most phasmids are slender, cylindrical, and cryptically colored to resemble the twigs and branches on which they live. Members of the family Phylliidae bear a strong resemblance to leaves: abdomens are broad and flat, legs have large lateral extensions, and coloration is primarily brown, green, or yellow. Most walkingsticks are slow-moving insects, a behavior pattern that is consistent with their cryptic lifestyle. In a few tropical species, the adults have well-developed wings, but most phasmids are brachypterous (reduced wings) or secondarily wingless. Stick insects are most abundant in the tropics, where some species may grow to 30 cm (12 inches) in length. Females do not have a well-developed ovipositor, so they cannot insert their eggs into host plant tissue like most other Orthoptera. Instead, the eggs are dropped singly to the ground, sometimes from great heights. Earwigs (order Dermaptera) are mostly scavengers or herbivores that hide in dark recesses during the day and become active at night. They feed on a wide variety of plant or animal matter. A few species may be predatory. Females lay their eggs in the soil and may guard them until they hatch. In a few species, maternal care even extends through the first two instars. Nymphs are similar in appearance to adults, but lack wings. The front wings are short, thick, and serve as protective covers for the hind wings. Hind wings are large, fan-shaped and pleated. They fold (both length-wise and cross-wise) to fit beneath the front wings when not in use.
Some species are secondarily wingless. In most earwigs, the cerci at the end of the abdomen are enlarged and thickened to form pincers (forceps). These pincers are used in grooming, defense, courtship, and even to help fold the hind wings. The Dermaptera contains three suborders. Most species belong to the Forficulina. The other two groups, Arixeniina and Hemimerina, live in close association with mammals. The former (five species) live on Asian bats and the latter (eleven species) live on African rodents. All of these insects are adapted for a parasitic or semi-parasitic lifestyle: they are secondarily wingless and the cerci are not well-developed into pincers. Members of the Arixeniina give birth to live nymphs (vivipary). The rock crawlers (order Grylloblattodea) are a small and obscure group of insects found only at high elevations in the mountains of China, Siberia, Japan, and the western United States and Canada. Cave-dwelling species have been found in Korea and Japan. These omnivorous insects scavenge for food on the surface of snowfields, under rocks, or near melting ice. They are active only at cold temperatures and move downward toward permafrost during warm seasons. As their name implies, rock crawlers have a blend of physical characteristics from both crickets (gryllo-) and cockroaches (blatta-). Some taxonomists include these insects as a suborder or family within Orthoptera. Others believe these insects are the only survivors of a primitive lineage that gave rise to other polyneopteran orders. The order Mantophasmatodea includes a very small group of insects that were first recognized as a separate order in 2002. So far, living members of this group have been found only in the Brandberg and Erongo Mountains of Namibia and the Western Cape Province of South Africa. These insects appear to be nocturnal predators.
They live within rock crevices, hide in clumps of grass, and prey on spiders and other small insects. As their order name suggests, they seem to exhibit a blend of the physical and ecological characteristics found in praying mantids (Mantodea) and walkingsticks (Phasmatodea). Zoraptera, the final order within the Polyneoptera complex, is probably the most controversial in terms of its phylogenetic position within the class Insecta. In many respects, the Zoraptera are typical polyneopterans: they have chewing mouthparts, unsegmented cerci, and a striking resemblance to termites. But other features are more typical of insects in the Paraneoptera complex: the front wings (when present) are larger than the hind wings and have reduced venation, the nervous system has a reduced number of abdominal ganglia, and there are very few Malpighian tubules (excretory structures) in the digestive system. This blend of Polyneopteran and Paraneopteran characteristics has led some entomologists to propose that Zoraptera represent a link between the two evolutionary lineages. Others reject this idea and claim that Zoraptera should be grouped with the protoblattodean lineage, near cockroaches, termites, and mantids. Still others argue that Zoraptera is a descendant of the protelytropteran lineage and therefore related to Dermaptera and Grylloblattodea. Regardless of phylogenetic placement, it seems likely that some of Zoraptera’s derived (apomorphic) characteristics are the result of convergent evolution. Members of the order Zoraptera are small (less than 4 mm) and usually found in rotting wood, under bark, or in piles of old sawdust. They live in small aggregations and appear to scavenge on spores and mycelium of fungi, or occasionally, on mites and other small arthropods. Little more is known about their biology. Some Zoraptera are blind, pale in color, and wingless, while other members of the same species may be darkly pigmented with compound eyes and wings.
The winged individuals are rather uncommon; they may be dispersal forms. The wings break off easily near the base, leaving only stubs.
In a new study, researchers have shown that 3D printing can be used to make highly precise and complex miniature lenses with sizes of just a few microns. The microlenses can be used to correct color distortion during imaging, enabling small and lightweight cameras that can be designed for a variety of applications. In The Optical Society (OSA) journal Optics Letters, researchers detail how they used a type of 3D printing known as two-photon lithography to create lenses that combine refractive and diffractive surfaces. They also show that combining different materials can improve the optical performance of these lenses. 3D printing of micro-optics has improved drastically over the past few years and offers design freedom not available from other methods. Their optimized approach for 3D printing complex micro-optics opens many possibilities for creating new and innovative optical designs that can benefit many research fields and applications.
THEME – BASIC CONCEPTS OF AGRICULTURE PREVIOUS LESSON – How/Ways to Make Water Clean | Mid Term Test (Test) for Primary 1 (Basic 1) – Agriculture Link TOPIC – FOOD 1. Introductory Activities 2. Meaning of Food 3. Names of Food We Eat 4. Lesson Evaluation and Weekly Assessment (Test) By the end of the lesson, the pupils should have attained the following objectives (cognitive, affective and psychomotor) and should be able to – 1. State what food is. 2. Give examples of local food items. The pupils can identify different kinds of food we eat in our community. The teacher will teach the lesson with the aid of a chart showing different foods in our community. METHOD OF TEACHING Choose a suitable and appropriate method for the lessons. Note – Irrespective of the chosen method of teaching, always introduce an activity that will arouse the pupils’ interest or lead them to the lesson. 1. Scheme of Work 2. 9 – Years Basic Education Curriculum 3. Course Book 4. All Relevant Material 5. Online Information Copy as I write or draw as I draw. This instruction should be given when you need the pupils to write or draw. CONTENT OF THE LESSON LESSON 1 – INTRODUCTORY ACTIVITIES Teacher’s Activities – Display a chart showing two or more foods for the pupils to identify and give examples of food in their community. Pupil’s Activities – Rice, beans, bread, etc. Teacher’s Remarks – Correct. There are different kinds of food in our community. We eat food to grow and stay healthy. Discuss the meaning of food with them and write it on the board. MEANING OF FOOD Food is the items eaten by people and animals in order to live. Ask the pupils to state the food eaten by people and animals. EXAMPLES OF FOOD ITEMS EATEN BY PEOPLE 7. Guinea corn 9. Animals, fish, egg, etc. LESSON 2 – EXAMPLES OF FOOD ITEMS EATEN BY ANIMALS Examples of food eaten by animals – 2. cassava peel 3. Cassava leaves 4. Yam peel 5. Some other food eaten by people, etc. To deliver the lesson, the teacher adopts the following steps: 1.
To introduce the lesson, the teacher revises the previous lesson. Based on this, he/she asks the pupils some questions; 2. Explains the meaning of food. 3. Mention some food items animals feed on in their homes. Pupil’s Activities – - Mention some of their local food items. - Mention some food items animals feed on in their homes. 4. Summarizes the lesson on the board. Pupil’s Activities – Copy as the teacher writes. To conclude the lesson for the week, the teacher revises the entire lesson and links it to the following week’s lesson. Ask the pupils to – 1. State what food is. 2. Name 6 local foodstuffs. 3. List 6 food items eaten by animals in their homes. MID TERM TEST 1. ____________ is the items eaten by people and animals in order to live. 2. Food is the items eaten by ____________ in order to live. A. people and animals B. people and fish C. man and woman 3. Food is the items eaten by people and animals in order to ____________. 4. Cassava peel is the food eaten by ____________. 5. Pounded yam is the food eaten by ____________. 6. Animals are meats eaten by ____________. Name 2 local foodstuffs. List 2 food items eaten by animals in their homes.
Opinions can be cleverly disguised as news. Can you tell the difference? After completing this lesson, hopefully so! Students learn how to distinguish news from opinion and how to determine if an opinion has merit. The lesson introduces students to several different kinds of opinion writers and offers tips for investigating an author's background. Through a web activity and hands-on internet investigation, students get up close and personal with news-related opinions and the journalism standards that make news commentary and analysis worthwhile. Got a 1:1 classroom? Download fillable PDF versions of this lesson's materials below! This resource was created with support from The Leonore Annenberg Institute for Civics of the Annenberg Public Policy Center.
When propane reacts with oxygen, does the surrounding area become warmer or cooler? The reaction is exothermic and releases heat, which makes the surrounding area become warmer. Is the reaction between propane and oxygen endothermic or exothermic? Below is a hydrocarbon combustion animation showing the net reaction that occurs when propane combines with oxygen. The hydrocarbon combustion reaction releases heat energy and is an example of an exothermic reaction. The reaction also has a negative enthalpy change (ΔH) value. Is propane endothermic or exothermic? The combustion of propane is exothermic because energy is released in the reaction. Is energy created during an exothermic reaction? Chemical reactions that release energy are called exothermic. In exothermic reactions, more energy is released when the bonds are formed in the products than is used to break the bonds in the reactants. Exothermic reactions are accompanied by an increase in temperature of the reaction mixture. Is the combustion of propane an endothermic or exothermic reaction? Explain how you know. Combustion is an extremely exothermic reaction. When 1 mole of propane reacts with 5 moles of oxygen, 2220 kJ (kilojoules) of heat is released. In an exothermic reaction, the chemical energy of the reactants is greater than the chemical energy of the products. What happens when propane reacts with oxygen? Propane undergoes combustion reactions in a similar fashion to other alkanes. In the presence of excess oxygen, propane burns to form water and carbon dioxide. Is melting endothermic or exothermic? Phases and Phase Transitions

| Phase Transition | Direction of ΔH |
| --- | --- |
| Fusion (melting) (solid to liquid) | ΔH > 0; enthalpy increases (endothermic process) |
| Vaporization (liquid to gas) | ΔH > 0; enthalpy increases (endothermic process) |
| Sublimation (solid to gas) | ΔH > 0; enthalpy increases (endothermic process) |

Is NH4Cl exothermic or endothermic? The dissolution of ammonium chloride is endothermic.
When dissolving a salt in water, two processes are taking place. Is nail polish remover evaporating endothermic or exothermic? Evaporation (liquid to gas) requires energy/heat, so nail polish remover that evaporates is endothermic and ΔH is positive. How can you tell if a reaction is exothermic? If the sum of the enthalpies of the reactants is greater than that of the products, the reaction will be exothermic. If the products side has a larger enthalpy, the reaction is endothermic. You may wonder why endothermic reactions, which soak up energy or enthalpy from the environment, even happen.
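The 2220 kJ per mole figure quoted above lends itself to a quick back-of-the-envelope calculation. The following is a minimal Python sketch of that arithmetic; the function name and the molar mass constant are illustrative assumptions, not part of the original Q&A:

```python
# Combustion of propane: C3H8 + 5 O2 -> 3 CO2 + 4 H2O
# Standard enthalpy of combustion, from the text: about -2220 kJ per mole
# of propane (negative sign means the reaction is exothermic).
DELTA_H_COMBUSTION = -2220.0  # kJ/mol
MOLAR_MASS_PROPANE = 44.1     # g/mol (approximate)

def heat_released(mass_g: float) -> float:
    """Heat released (kJ, as a positive number) when mass_g grams of
    propane burn completely in excess oxygen."""
    moles = mass_g / MOLAR_MASS_PROPANE
    return -DELTA_H_COMBUSTION * moles

# Burning one mole (about 44.1 g) of propane releases about 2220 kJ,
# which is why the surroundings become warmer.
print(heat_released(44.1))
```

The negative ΔH convention matches the text: energy leaves the reacting system, so the surroundings warm up.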
Down syndrome is a set of cognitive and physical symptoms that result from having an extra chromosome 21 or an extra piece of that chromosome. It is the most common chromosomal cause of mild to moderate intellectual disabilities. People with Down syndrome also have some distinct physical features, such as a flat-looking face, and they are at risk for a number of other health conditions. Understanding Down syndrome and other intellectual and developmental disabilities is part of the reason the NICHD was established. Today, the Institute continues to lead research on the causes, progression, treatment, and management of Down syndrome, as well as on conditions and diseases that are associated with the syndrome. New NIH Initiative: NICHD is pleased to be part of INCLUDE (INvestigation of Co-occurring conditions across the Lifespan to Understand Down syndromE), an NIH-wide initiative that launched in June 2018 in support of a Congressional directive in the fiscal year 2018 Omnibus Appropriations. The project aims to understand critical health and quality-of-life needs for individuals with Down syndrome, with the aim of yielding scientific discoveries to improve the health, well-being, and neurodevelopment of individuals with Down syndrome, as well as their risk of and resilience to common diseases that they share with individuals who do not have Down syndrome. INCLUDE will investigate conditions that affect individuals with Down syndrome and the general population, such as Alzheimer’s disease/dementia, autism, cataracts, celiac disease, congenital heart disease, and diabetes. Medical or Scientific Names
- Down syndrome
- Trisomy 21
What is classroom action research? Classroom action research begins with a question or questions about classroom experiences, issues, or challenges. It is a reflective process which helps teachers to explore and examine aspects of teaching and learning and to take action to change and improve. Why do it? It helps you to: - deepen your understanding about teaching and learning - develop your teaching skills and knowledge - try out different approaches and ideas - develop reflective practice - improve student learning. How to do it Talk to your colleagues. What questions do you have about teaching? What topics are you and your colleagues interested in? Are there problem areas, or aspects of teaching/learning you are all unsure about? Make a list. From your list, decide together the topic for the classroom action research. To help you decide, discuss why you want to do it. What are the benefits to teachers and to learners? When you have decided, write one or two questions about your topic which will guide what you do. Reflect on your topic questions. Where can you find information to help you plan the research? Do you need to consult published materials or the Internet for information and ideas? Find out as much as you can about the topic to help you plan how to do the action research. Think about: how long will the action research take? How will you record the research? There are different ways of doing classroom action research. It can be as simple as just writing down your own reflections relating to the topic after a lesson or sequence of lessons, or it could include questionnaires, observations, audio recordings and so on. Carry out the action research using your chosen method. Some ideas are: - Peer observation - Teacher diary - Learner feedback - Lesson evaluation - Recording lessons - Reflecting on learners’ work Choose the method which best suits your topic questions.
Researching together: It is also helpful to carry out action research with a colleague or group of colleagues. This gives you more data to reflect on, compare and discuss. This stage helps you to make sense of the data you have collected in your research. It is a process of reflecting on, organising and reviewing your data to help you answer your topic questions. What have you found out? What insights have you gained from the research? What does your research show you? If you have carried out the classroom action research on your own, share your results with your colleagues. Reflect on the results. How do the results help you and your colleagues? What changes will you all make? It is important to review the impact of the changes made. How successful were they? Is any follow-up action needed? Are there any differences amongst your colleagues?
Among the many popular tourist sites in Rome is an impressive 2,000-year-old mausoleum along the Via Appia known as the Tomb of Caecilia Metella, a noblewoman who lived in the first century BCE. Lord Byron was among those who marveled at the structure, even referencing it in his epic poem Childe Harold's Pilgrimage (1812-1818). Now scientists have analyzed samples of the ancient concrete used to build the tomb, describing their findings in a paper published in October in the Journal of the American Ceramic Society. “The construction of this very innovative and robust monument and landmark on the Via Appia Antica indicates that [Caecilia Metella] was held in high respect,” said co-author Marie Jackson, a geophysicist at the University of Utah. “And the concrete fabric 2,050 years later reflects a strong and resilient presence.” Like today's Portland cement (a basic ingredient of modern concrete), ancient Roman concrete was basically a mix of a semi-liquid mortar and aggregate. Portland cement is typically made by heating limestone and clay (as well as sandstone, ash, chalk, and iron) in a kiln. The resulting clinker is then ground into a fine powder, with just a touch of added gypsum—the better to achieve a smooth, flat surface. But the aggregate used to make Roman concrete was made up of fist-sized pieces of stone or bricks. In his treatise de Architectura (circa 30 BCE), the Roman architect and engineer Vitruvius wrote about how to build concrete walls for funerary structures that could endure for a long time without falling into ruins. He recommended the walls be at least two feet thick, made of either "squared red stone or of brick or lava laid in courses." The brick or volcanic rock aggregate should be bound with mortar composed of hydrated lime and porous fragments of glass and crystals from volcanic eruptions (known as volcanic tephra). Jackson has been studying the unusual properties of ancient Roman concrete for many years.
For instance, she and several colleagues have analyzed the mortar used in the concrete that makes up the Markets of Trajan, built between 100 and 110 CE (likely the world's oldest shopping mall). They were particularly interested in the "glue" used in the material's binding phase: a calcium-aluminum-silicate-hydrate (C-A-S-H), augmented with crystals of stratlingite. They found that the stratlingite crystals blocked the formation and spread of microcracks in the mortar, which could have led to larger fractures in the structures. In 2017, Jackson co-authored a paper analyzing the concrete from the ruins of sea walls along Italy's Mediterranean coast, which have stood for two millennia despite the harsh marine environment. The constant salt-water waves crashing against the walls would have long ago reduced modern concrete walls to rubble, but the Roman sea walls seem to have actually gotten stronger. Jackson and her colleagues found that the secret to that longevity was a special recipe, involving a combination of rare crystals and a porous mineral. Specifically, exposure to sea water generated chemical reactions inside the concrete, causing aluminum tobermorite crystals to form out of phillipsite, a common mineral found in volcanic ash. The crystals bound to the rocks, once again preventing the formation and propagation of cracks that would have otherwise weakened the structures. So naturally Jackson was intrigued by the Tomb of Caecilia Metella, widely considered to be one of the best-preserved monuments on the Appian Way. Jackson visited the tomb back in June 2006, when she took small samples of the mortar for analysis. Despite the day of her visit being quite warm, she recalled that once inside the sepulchral corridor, the air was very cool and moist. "The atmosphere was very tranquil, except for the fluttering of pigeons in the open center of the circular structure," Jackson said.
Language processors are very important in the field of computers. Compilers and interpreters are language processors that translate programs written in high-level languages into machine language that computers can understand. An assembler translates programs written in low-level or assembly language into machine language. Tools are available to help programmers write error-free code. A compiler is a language processor that reads an entire source program written in a high-level language at once and translates it into an equivalent machine language program. If there are no errors in the source code, the compiler successfully converts it to object code. If there are errors in the source code, the compiler marks them with line numbers at the end of the compilation. Before the compiler can successfully recompile the source code, the errors must be resolved. An assembler is used to translate programs written in assembly language into machine language. A source program is an assembly language input that contains assembly language instructions. The assembler is the first interface that can connect humans and machines; it bridges the gap so humans and machines can communicate with each other. An interpreter is a language processor that translates one line of the source program into machine language and executes it before moving on to the next line. The interpreter advances to the next line for execution only after the error has been cleared.
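The behavioral difference described above — a compiler scanning the whole program and reporting all errors together at the end, versus an interpreter processing one line at a time and stopping at the first bad line — can be sketched with a toy mini-language. Everything here, including the made-up PRINT instruction, is a hypothetical illustration, not a real compiler or interpreter:

```python
def compile_program(lines):
    """Compiler-style: scan the entire program first, collect ALL errors,
    and only produce object code if the whole program is valid."""
    errors, code = [], []
    for n, line in enumerate(lines, start=1):
        parts = line.split()
        if len(parts) == 2 and parts[0] == "PRINT" and parts[1].isdigit():
            code.append(int(parts[1]))
        else:
            errors.append(f"line {n}: invalid instruction {line!r}")
    return code, errors

def interpret_program(lines):
    """Interpreter-style: translate and execute one line at a time,
    stopping as soon as an erroneous line is reached."""
    output = []
    for n, line in enumerate(lines, start=1):
        parts = line.split()
        if len(parts) == 2 and parts[0] == "PRINT" and parts[1].isdigit():
            output.append(int(parts[1]))  # "execute" the instruction
        else:
            return output, f"line {n}: invalid instruction {line!r}"
    return output, None

program = ["PRINT 1", "OOPS", "PRINT 3", "JUNK"]
_, errs = compile_program(program)
print(errs)                 # compiler reports both bad lines (2 and 4)
out, err = interpret_program(program)
print(out, err)             # interpreter ran line 1, then stopped at line 2
```

Running both on the same faulty program shows the contrast: the compiler lists every error after scanning the whole source, while the interpreter executes the first valid line and halts at the first invalid one.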
The history of the Earth’s magnetic field is indelibly written in vortex-like structures inside grains of the iron oxide magnetite, a new study has shown. Researchers in the UK and Germany used electron holography to map the magnetic properties of individual grains of the material. They say it preserves magnetic information unaltered even when exposed to large temperature fluctuations. The imaging technique, for the first time, maps the magnetic vortices while they are heated to 550 °C – just short of the rock’s Curie point, the temperature at which it loses its permanent magnetic properties. In effect, this is a record of the Earth’s magnetic field at the time of the rock’s formation. That could give us a detailed picture of the evolution of the Earth’s magnetic field and improve our understanding of the planet’s core and plate tectonics. By being able to see the orientation of magnetic poles within rocks at the time of their formation, it will show how continents moved relative to each other, the researchers say. “These remarkable images show that vortex structures in natural magnetic systems can hold information about how the Earth’s inner structure evolved,” says Wyn Williams from the University of Edinburgh, who led the study. “This is a game-changer in our understanding of rocks’ ability to act as reliable magnetic recorders, and helps us see a little clearer into Earth’s history.” As the name suggests, magnetite is the most magnetic natural mineral there is. It is a commonly occurring oxide of iron with the formula Fe3O4. The geomagnetic imprint is formed as molten lava cools and magnetite grains align with the Earth’s magnetic field. Another author, Trevor Almeida, of Imperial College London, says only a small portion of naturally occurring magnetite held stable magnetic structures. “However, far more common are tiny magnetic vortices, and their stability could not be demonstrated until now,” he says. 
“Magnetite rocks, which carry signs of temperature fluctuations, are indeed a reliable source of information about the history of the Earth.” Bill Condie is a science journalist based in Adelaide, Australia.
Hertz Photoelectric Effect Hertz, in 1887, discovered the phenomenon of the photoelectric effect for the first time. He was experimenting on the generation of electromagnetic waves by spark discharge, and he noticed that sparks around the detector loop were intensified when ultraviolet radiation fell on the emitter plate. Radiation falling on the metal surface provided free electrons enough energy to neutralize the attractive force of positive ions and escape the metal surface to intensify the sparks across the metal loop. Hallwachs and Lenard, during 1886-1902, researched the process of photoelectric emission. They experimented on a negatively charged zinc plate and found that on absorbing ultraviolet light the zinc plate became neutral, losing its net negative charge. On further absorption of ultraviolet radiation, the neutral zinc plate became positively charged. Hence they proved that electrons are released from the metal surface when ultraviolet radiation is incident on it. They also observed that electrons are not emitted when the frequency of the incident light is less than a specific least value, called the threshold frequency. The threshold frequency depends on the nature of the material used as an emitter. Some metals like zinc, magnesium, etc. emitted electrons only by absorbing ultraviolet light. But alkali metals like sodium, potassium, etc. could also absorb visible radiation to emit electrons, and so they were called photosensitive materials. So, the process in which electromagnetic radiation falling on a metal surface results in the emission of electrons is called the photoelectric effect. And the electrons released due to the photoelectric effect are called photoelectrons.
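The threshold-frequency observation above follows directly from Einstein's photoelectric equation, K_max = hf − φ, where φ is the work function of the metal: no electrons are emitted unless f exceeds f0 = φ/h. A small Python sketch of this relationship follows; the ~4.3 eV work function used for zinc is an assumed illustrative value:

```python
# Einstein's photoelectric equation: K_max = h*f - phi.
# Emission occurs only when the photon energy h*f exceeds the work
# function phi, i.e. when f exceeds the threshold frequency phi/h.
H = 6.626e-34   # Planck's constant, J*s
EV = 1.602e-19  # joules per electronvolt

def threshold_frequency(work_function_ev: float) -> float:
    """Threshold frequency (Hz) for a metal with the given work function (eV)."""
    return work_function_ev * EV / H

def max_kinetic_energy_ev(freq_hz: float, work_function_ev: float) -> float:
    """Maximum kinetic energy (eV) of emitted photoelectrons;
    returns 0 when the frequency is below threshold (no emission)."""
    k = H * freq_hz / EV - work_function_ev
    return max(k, 0.0)

# Zinc (assumed work function ~4.3 eV) has a threshold around 1.0e15 Hz,
# in the ultraviolet -- consistent with the observation that zinc emits
# electrons only under ultraviolet light.
print(f"{threshold_frequency(4.3):.2e} Hz")
```

Photosensitive alkali metals like sodium and potassium have smaller work functions, which pushes their threshold frequency down into the visible range.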
If there were such a thing as an underwater freak show, then this would be it. Scientists from the Natural History Museum (NHM) in London have discovered a mysterious menagerie of marine megafauna deep in the Pacific Ocean, and dozens of the oddball creatures could be species that are unknown to science. With the assistance of a remotely operated vehicle (ROV) during the summer of 2018, scientists recovered 55 specimens lurking on the western edge of an abyss located between Hawaii and Mexico, roughly 16,400 feet (5,000 meters) below the sea surface. Of that assemblage of oceanic oddities, seven were recently confirmed to be newfound species; the researchers’ findings were published July 18 in the journal ZooKeys. While the eastern side of the abyss has been explored quite regularly, its western side, which is known as the Pacific Clarion-Clipperton Zone and includes several nearby seamounts (underwater mountains), is less accessible and has therefore remained largely unexplored, making it a prime locale for discovering new species. “About 150 years ago, the [HMS] Challenger expedition explored this area, but as far as I know, there hasn’t been much study done since that time,” Guadalupe Bribiesca-Contreras, a NHM biologist in the life sciences department and the study’s lead author, told Live Science. “This part of the ocean has barely been touched.” During the 2018 expedition, the scientists more than made up for lost time. One after another, each new creature they discovered was as fascinating as the one that came before: from an elastic, banana-shaped sea cucumber known as a gummy squirrel (Psychropotes longicauda) — the individual that they found stretched nearly 2 feet (60 cm) long — to a sea sponge in the genus Hyalonema, whose body resembles a tulip.
Of the potential new species that the scientists discovered, the one that caught Bribiesca-Contreras' attention was a type of coral in the genus Chrysogorgia. Its pale orange polyp resembled that of C. abludo, a species usually found in the Atlantic Ocean. But the researchers later identified it as a new species that has yet to be named. This marks the first time that a coral of this genus has been found in the Pacific. "At first we thought it was the same species, but upon further molecular work, we learned that it's morphologically different," Bribiesca-Contreras said. "One thing that always strikes me is that a lot of these lifeforms we see haven't changed much over the course of millions of years, which is crazy to think [about]," she said. "Many of these species we've seen the fossils, and they look exactly the same now." Many of the bizarre adaptations in these deep-sea weirdos have persisted for so long because they improve the animals' chances of surviving in a very punishing environment, Bribiesca-Contreras added. "Where they live this deep in the ocean can be challenging," she said. "There's no light, their bodies are withstanding crushing pressure and there's little nutrition available." Prior to the NHM expedition, many of these animals had only been glimpsed in photographs or videos, or were known only from their fossilized remains. This mission enabled scientists to study the specimens as they moved freely through their ocean habitat, and then later in the lab. Such investigations allow scientists to better understand remote and untouched deep-sea ecosystems, an important goal as the deep-sea mining industry continues to expand worldwide. "We really need to understand this ecosystem so that we can come up with plans for conservation," she said.
"At this point, the little information we have about this environment and the species that live there makes it very difficult to know how damaging mining could be." Originally published on Live Science.
The invention of the electric light bulb in the 19th century liberated us from the constraints of night and day, but at what cost to our emotional health? Why do we sleep? The origins and purpose of sleep are not yet fully understood, but it is clear that sleep plays an essential role in physiological and psychological homeostasis. Sleep is important for the regulation of mood, for growth and development, for the formation of long-term memories, and for maintenance of a healthy immune system (Choe 2010). The quantity and quality of sleep are regulated by natural variations in light and darkness during the day (circadian rhythm) and during the year (seasonal variations). Human gene expression and signalling in response to light and darkness is very similar to that found in many other organisms (Ko & Takahashi 2006), which suggests sleep is important for survival. Our non-human ancestors lived in a world without artificial light. The introduction of fire, oil lamps and candles gave humans the possibility to extend daylight artificially, but these methods were relatively expensive, and their luminosity and wavelengths do not affect sleep adversely (Dennett 2001, pp 98-100). Since the invention of the electric light bulb in the 19th century, relatively cheap and bright artificial light has enabled a significant and increasing proportion of the human population to work, study and play at a time of their choosing. During the same period, there has been a 2-hour reduction in the average number of hours slept per night by adults in industrialised countries, and increasing uniformity across seasons: we no longer sleep for longer in the dark winter months (Dennett 2001, pp 98-100). Dennett (2001) argues that this has resulted in an epidemic of chronic sleep deprivation with grave consequences for our health. The effects of sleep deprivation on the individual Acute sleep deprivation impacts mood - fatigue, vigour and confusion - and stress (Minkel 2010).
Chronic sleep deprivation has more serious consequences (see Table 1). - Fear: worrying about waking on time - Frustration: cannot get to sleep - Anger: blame self or others for sleep loss - Anxiety and stress: fatigue impairs attention, concentration and memory, which impacts performance in personal and professional life; impaired immune system leaves us vulnerable to disease - Depression: disease, anxiety and chronic stress can lead to depression and, in some cases, suicide Table 1. The affective consequences of chronic sleep deprivation Mood can be affected directly and indirectly: for example, chronic sleep deprivation has been implicated in obesity (Dennett 2001), which can adversely affect self-image and leave the individual at risk of anxiety and depression. Depression can be both a cause and effect of sleep loss, which can lead to a downward spiral of depression and sleep deprivation. These effects can be exacerbated by individual and cultural attitudes in terms of the quantity, quality, utility and even moral value of sleep. Sleep can also be affected by psychiatric conditions. For example, due to chronic problems with time management and procrastination, individuals with Attention Deficit Hyperactivity Disorder (ADHD) are more likely to stay up late to work or study, and thereby forgo sleep, which would have been difficult or impossible before the invention of electric light. The effects of sleep deprivation on society The aggregate effect of increasing morbidity and mortality is costly to society as well as individuals, from the treatment of physical and psychological illness in our health systems, to income lost through illness and premature death. Sleep deprivation impacts safety-critical systems. 
For example, an air traffic controller was asleep while two aircraft landed without guidance (Williams & Mouawad 2011), putting lives at risk - the focus of news reports was on the workplace, but did the home environment and cultural attitudes towards sleep have an effect? Opportunities for Human-Computer Interaction (HCI) Many people are not aware of their sleep behaviours and the consequences of sleep deprivation (Dennett 2001). Sleep has been given little attention in HCI, yet there are many opportunities for technology to help monitor sleep and maintain conditions for healthy sleep (see Figure 1): Figure 1. Technology design ideas (Choe 2011): (Left) An unobtrusive sleep monitoring tool uses a weight sensor under the bed to estimate sleep and wake times. (Middle) A role-playing game where the player's character only heals when the player actually rests in real life. (Right) To help support a good sleep environment, this tool uses sensors to measure the room temperature, light, and sound, and changes light colours as a simple indication if one or more conditions are not ideal. Sleep monitoring can be combined with mood diaries and physiological observations in clinical settings, such as in sleep laboratories or as part of a programme of cognitive behavioural therapy (CBT). The limits of technology The ethical issues around such technology include privacy and trust - the bedroom is a place of intimacy as well as sleep - and yet the costs of doing nothing are high, particularly for vulnerable members of society (children, older adults and the disabled). Moreover, good design and engineering have their limits, particularly where there are strong economic and social disincentives that undermine sleep. Fundamental changes in economies and societies usually require shifts in cultural values, not just technological solutions.
If awareness is the first step to such shifts, then design solutions combining elements of ubiquitous computing (Rogers 2009) and behavioural economics (Dolan 2010) may offer some promise. Choe, E. K., Kientz, J. A., Halko, S., Fonville, A., Sakaguchi, D., & Watson, N. F. (2010). Opportunities for computing to support healthy sleep behavior. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, Extended Abstracts (CHI EA '10), p. 3661. ACM Press.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish.
An essential requirement for the prevention of oral disease is early diagnosis. This can only be achieved by visiting the dentist, ideally twice a year. However, some patients are more prone to gum disease and infection than others, therefore they will be asked to visit the dentist more often than that for preventive reasons. Gum disease is entirely preventable if patients follow some simple rules. Gum disease is caused by the accumulation of bacteria and plaque along the gum line. Even diligent brushers and flossers can have gum disease, because only a visit to the dentist in Southampton for a clean, scale and polish can effectively remove plaque from hard-to-reach places in the mouth. Gum disease is a common ailment affecting millions of patients around the world and is one of the most common reasons patients go to a dentist, such as Smilemakers, for treatment. What is gum disease? Gum disease refers to two conditions, medically known as gingivitis and periodontitis. Gingivitis refers to the early stage of gum disease, whereas periodontitis describes the advanced stage, when significant damage has already occurred. Early stage gum disease has mild symptoms, which can easily be dismissed by patients. As it progresses, gum disease becomes painful, affecting the gum tissues and the underlying bone structure. Untreated gum disease is a major cause of tooth loss. An ounce of prevention is worth a pound of cure Gum disease prevention is a two-fold process. Patients should brush their teeth at least twice a day as well as floss once a day. They should also never skip their regular appointment with the dentist in Southampton. A dental hygienist can safely and efficiently remove plaque and tartar from the teeth and around the gum line. Plaque contains bacteria that feast on the sugars in food and release harmful acids that damage the teeth and gums. The dentist in Southampton will look out for signs of gum disease such as sensitive, puffy, red, swollen or bloody gums.
When diagnosed at an early stage, gum disease is easily treatable. Even in its advanced stages, a dentist can help restore a patient’s dental health.
My kids loved to read plays! And we often read one just before a holiday, before summer vacation, or on another occasion when an especially engaging activity was needed – but reading a play doesn't have to be just for a break! There are lots of everyday close reading skills that can be incorporated into reading a play, and you can make it as simple or as complicated as you have time for. This post contains affiliate links. You can read my entire disclosure policy here. Of course, you can find plays in reading anthologies and magazines like Scope, but there are also free resources available. Here are two possibilities: Aaron Shepard's Reader's Theater Page – Free reader's theater scripts to download for upper elementary through middle school. - Zoom Playhouse – mini scripts from PBS Kids. Three readings are actually perfect for reading a play if you have the time. You can use the close reading process, and the final presentation will be much more enjoyable thanks to all the preliminary work. Here is one way to incorporate the close reading process into reading a play: Reading One – Students read the play silently. Follow up with oral questions to be sure everyone understands the story. Then, have students choose the characters they hope to portray and read short passages aloud as a "try-out" for the final production. Assign all students a part, either as a cast member or an understudy. Reading Two – It's time for the cast and the understudies to practice. Students work with a partner to practice reading their parts aloud. After practicing, go over any unfamiliar words with the class, or do another vocabulary activity with words from the play. For more follow-up activities, have students demonstrate their understanding of character traits by drawing "costume designs," and their understanding of the setting by drawing "set designs." Reading Three – The big event! Students read the play aloud for the class.
Students not reading could be assigned to write a review or complete a graphic organizer about the characters. Since students should be very familiar with the content after this third reading, a simple story map might be all that's needed as a final evaluation. Here are a few more ideas: - Popcorn – Add to the theater atmosphere by serving popcorn in those red and white movie-style containers. Or have students make their own from white lunch bags. - Shorter Options – No time for a whole play? Try a shorter option, such as a poem designed for choral reading. - Kids' Skits – Have the class present a skit written by students based on a section of a story that they have read. - Puppets – Even older kids get into puppet theater. See this Edutopia article about using puppets written by a high school English teacher. Of course, you can't read plays every day, but you probably do want to have your kids practice close reading on a regular basis. If you could use more close reading resources, you might want to take a look at Close Reading – Wild Winter, one of the choices in my Teachers Pay Teachers store. Several of these resources, including Wild Winter, are sets of four articles with questions and follow-up activities for each of three readings. Others are single articles. This guest post is by Sharon from Classroom in the Middle
The Tulalip tribes are a confederation of Native American peoples who were originally from the northern Puget Sound region of Washington state. Others included in this confederation are the Snohomish, Salish, Snoqualmie, Skykomish, Skagit, Suiattle, Samish, and Stilaguamish groups. The Tulalip tribes generally shared certain characteristics, such as salmon fishing and living in either gable-roofed longhouses or simple shed homes. They also share a common language called Lushootseed. Historically, the different Tulalip tribes were not agrarian. Rather, they followed a way of hunting, gathering, and fishing based on the seasons. They would catch salmon during its runs in spring and summer, and preserve it for use over the rest of the year. They supplemented their diet with game, and also gathered berries and roots. The Tulalip often traveled in hand-made cedar canoes. The tribes moved frequently, following food sources. In the warmer months, they generally lived in temporary structures while they fished. These were often made of cattail mats. As the Tulalip tribes came into contact with Europeans and Americans, their traditional way of life was threatened. The first contact seems to have been with Captain George Vancouver in 1792. By 1855, many new, non-native settlers made their homes in the Puget Sound area, which forced the Tulalip tribes to move from their traditional homelands. In that year, the tribes signed a treaty with the U. S. federal government giving them protection, reservation land, and monetary compensation. By the later 1800s, the Tulalip were in danger of losing their heritage due to schooling mandated by the U. S. government. This education of Tulalip children was usually conducted in boarding schools, located away from the reservations. It was designed to assimilate Native Americans into the primarily English culture that dominated the United States at that time.
Tulalip children did not learn much about their native language and history because they spent a good deal of time away from the elders of their tribe. In the 1930s, the Tulalip tribes reorganized under the Indian Reorganization Act of 1934. This legislation gave more rights to Native American tribes. With the reorganization, the Tulalip established a constitution and bylaws that govern the members of the tribe. Today, the Tulalip reservation is home to approximately 3,600 people. It provides educational opportunities to members, in collaboration with the local school district. Leaders are also active in preserving and teaching the Lushootseed language, and in keeping cultural traditions and celebrations alive.
Probing a distant cloud 11,000 light-years away, astronomers have discovered what may be the largest stellar womb yet found in our galaxy. With a mass of 500 suns, this massive body is feeding an embryonic star that may become a rare behemoth in the Milky Way. This star birth, to be described in the journal Astronomy and Astrophysics, sheds light on how such giants are formed. Such massive stars are extremely rare; roughly one in 10,000 stars in the Milky Way gathers this much bulk. (A star is considered massive if it’s at least about 10 times the mass of the sun.) Astronomers aren’t sure how massive stars form. One idea suggests that many small star cores coalesce out of a dark gas cloud; another theory argues that the entire cloud core begins to collapse inward to form one or two really big star cores. To try and answer this question, a team of European scientists decided to look at the Spitzer Dark Cloud, a mysterious body filled with dense filaments of gas and dust that was discovered using NASA’s Spitzer Space Telescope and the European Space Agency’s Herschel Space Observatory. The scientists used the Atacama Large Millimeter/submillimeter Array in Chile, a radio telescope that can pick up long wavelengths of light that can punch through the dark cloud. They discovered just two embryonic stellar cores – one of them so big that they predict it will form at least one star that’s 100 solar masses when it fully develops. The observation supports theory No. 2 – that such massive stars are formed by a dramatic collapse of a cloud core. It’s a fast process as the material races inward – so the scientists were lucky to catch this process in the act, they said.
Photo courtesy of NASA Forty years ago, on July 21, 1969, American astronaut Neil Armstrong, commander of the Apollo 11 Moon mission, became the first person to set foot on another world. This historic spaceflight marked the culmination of the so-called “Space Race”, one of the major Cold War propaganda battles between the United States and the USSR, which began in 1957, when the Soviet Union shocked the world by launching the first satellite, Sputnik 1. Stung by a string of Soviet firsts in space exploration, in May 1961 President Kennedy committed the United States to achieving a human landing on the Moon by 1970: a bold goal to set at a time when America’s first astronaut had made only a 15 minute sub-orbital flight just 3 weeks before. When Apollo 11’s Lunar Module Eagle, with its crew, mission commander Neil Armstrong and Lunar Module pilot Col. Edwin “Buzz” Aldrin, landed on the Moon, it effectively gave the United States the victory in the Space Race, as the Soviet Union had not been able to mount a successful lunar programme of its own. But the success of Apollo 11 was more than just a Cold War propaganda victory: when Armstrong stepped onto the lunar surface at 12.56pm Eastern Australian time and uttered his famous words “That’s one small step for (a) man; one giant leap for Mankind” he was fulfilling a centuries-old dream. The desire to journey into the heavens is as old as humanity and the dream of travelling to the Moon has inspired poets and storytellers since Roman times. But it was not until the 20th Century that the technology to achieve spaceflight was developed and scientists and engineers looked forward to achieving this long-held goal. Apollo 11 therefore represented not just a Cold War political prize, it was also the accomplishment of an ancient Human aspiration: for the first time, people had left our home planet Earth and travelled to another world in the solar system. 
Australia played an important part in all the Apollo missions, with NASA tracking stations at Carnarvon (WA) and Honeysuckle Creek and Tidbinbilla (ACT) providing vital communication links with the Apollo spacecraft. In particular, the Apollo 11 Moonwalk images broadcast to the world were received at Honeysuckle Creek and the Parkes radio telescope. Visitors to the museum’s Space exhibition can view a genuine lunar sample from the Apollo 16 mission, as well as a massive F-1 rocket motor (five of which were needed to launch the mighty Saturn V rocket that sent the Apollo astronauts on their way to the Moon). To mark the 40th anniversary of Apollo 11, a selection of original contemporary space memorabilia from the museum’s collection is on display in the entry foyer until September. Curator of space technology
Aquaponics is not only a forward-looking food production technology; it also promotes scientific literacy and provides a very good tool for teaching the natural sciences (life and physical sciences) at all levels of education, from primary school (Hofstetter 2007, 2008; Bamert and Albin 2005; Bollmann-Zuberbuehler et al. 2010; Junge et al. 2014) to vocational education (Baumann 2014; Peroci 2016) and university level (Graber et al. 2014). An aquaponic classroom model system provides multiple ways of enriching classes in Science, Technology, Engineering, and Mathematics (STEM). The "hands-on" approach also enables experiential learning, which is the process of learning through physical experience, and more precisely the "meaning-making" process of an individual's direct experience (Kolb 1984). Aquaponics can thus become an enjoyable and effective way for learners to study STEM content. It can also be used for teaching subjects such as business and economics, addressing issues such as sustainable development, environmental science, agriculture, food systems, and health (Hart et al. 2013). A basic aquaponic system can be built easily and inexpensively. The World Wide Web is a repository of many examples of videos and instructions on how to build aquaponic systems from a variety of components, resulting in a range of different sizes and set-ups. Recent investigations of one such prototype micro-aquaponic system showed that despite being small, it can mimic a full-scale unit, and that it is an effective teaching tool with a relatively low environmental impact (Maucieri et al. 2018). However, implementing aquaponics in classrooms is not without its challenges. Hart et al. (2013) report that technical difficulties, lack of experience and knowledge, and maintenance over holiday periods can all pose significant barriers to teachers using aquaponics in education, and that disinterest on the teacher's part may also be a crucial factor (Graham et al. 2005; Hart et al. 2014).
Clayborn et al. (2017), on the other hand, showed that many educators are willing to incorporate aquaponics in the classroom, particularly when an additional incentive, such as hands-on experience, is provided. Wardlow et al. (2002) investigated teachers' perceptions of the aquaponic unit as a classroom system and also illustrated a prototype unit that can easily be constructed. All teachers strongly agreed that bringing an aquaponic unit into the classroom is inspiring for the students and led to greater interaction between students and teachers, thereby contributing to a dialogue about science. On the other hand, it is unclear exactly how the teachers and students made use of the aquaponic units and the instructional materials offered. Hence, the information needed to evaluate the impact of aquaponics classes on meeting the objectives of the students' curricula is still missing. In a survey on the use of aquaponics in education in the USA (Genello et al. 2015), respondents indicated that aquaponics was often used to teach subjects focused more exclusively on STEM topics. Aquaponics education in primary and secondary schools is science-focused, project-oriented, and geared primarily toward older students, while college and university aquaponic systems were generally larger and less integrated into the curriculum. Interdisciplinary subjects such as food systems and environmental science were taught using aquaponics more frequently at colleges and universities than they were at schools, where the focus was more often on single-discipline subjects such as chemistry or biology. Interestingly, only a few vocational and technical schools used aquaponics to teach subjects other than aquaponics. This indicates that for these educators, aquaponics is a stand-alone subject and not a vehicle to address STEM or food system topics (Genello et al. 2015).
While the studies mentioned above reported aquaponics as having the potential to encourage the use of experimentation and hands-on learning, they did not evaluate the impact of aquaponics on learning outcomes. Junge et al. (2014) evaluated aquaponics as a tool to promote systems thinking in the classroom. The authors reported that 13–14 year old students (seventh grade in Switzerland) displayed a statistically significant increase from pre- to post-test for all the indices measured to assess their systems thinking capacities. However, since the pupils did not have any prior knowledge of systems thinking, and since there was no control group, the authors concluded that supplementary tests are needed to evaluate whether aquaponics has additional benefits compared to other teaching tools. This issue was addressed in the study by Schneller et al. (2015), who found significant advances in environmental knowledge scores in 10–11 year old students compared to a control group of 17 year olds. Moreover, when asked for their teaching preferences, the majority of students indicated that they preferred hands-on experiential pedagogy such as aquaponics or hydroponics. The majority of the students also discussed the curriculum with their families, explaining how hydroponics and aquaponics work. This observation extends the belief that hands-on learning using aquaponics (and hydroponics) not only has a stimulating impact on teachers and students, but also leads to intergenerational learning. The objective of this chapter is to provide an overview of possible strategies for implementing aquaponics in curricula at different levels of education, illustrated by case studies from different countries. Based on evaluations conducted with some of these case studies, we attempt to answer the question of whether aquaponics fulfils its promise as an educational tool.
Our Ocean Portal Educators' Corner provides you with activities, lessons and educational resources to bring the ocean to life for your students. We have collected top resources from our collaborators to provide you with teacher-tested, ocean science materials for your classroom. We hope these resources, along with the rich experience of the Ocean Portal, will help you inspire the next generation of ocean stewards. Featured Lesson Plans Search Lesson Plans Find lessons/activities by topic, title or grade levels. Sort by newest or alphabetically. Lessons were developed by ocean science and education organizations like NOAA, COSEE, and NMEA to help you bring the ocean to your classroom. The students will generate a KWL focused on the BP oil spill. What do they already know, what do they want to know, and what did they learn? Students can generate their ideas individually or in groups. After they have completed the K and W, students will watch the National Geographic documentary "Can the Gulf Survive?" During the video the students are to take notes and generate at least five questions that they have regarding the aftermath of this disaster. After the video the students will get back into their groups, discuss the video, and compile what they learned. The students will present their findings to the class. To introduce students to ocean currents and the transport of marine debris, spilled oil, and other pollutants in the ocean. The rise and fall of the ocean tides is a predictable phenomenon influenced by the gravitational pull of the sun and moon. Here, students will learn about how tides are measured and predicted so that they can then create a presentation for fifth and sixth graders about the topic. Students will also become familiar with publicly available data that anyone can use to study the tides.
Moorea Coral Reef LTER Education To help students understand that science is a part of their everyday lives, students will complete an activity where they create a collage of people doing science using magazines and drawing pictures. This lesson gives students a realistic idea of what science is and helps them understand that scientists are real people answering interesting questions. Watch interviews with scientists. Monterey Bay Aquarium Learn how scientists collect field data by being a scientist yourself! By studying a specific ecosystem, students learn how different scientists work together, what kinds of data scientists record, and experience the scientific process through observation and data collection. California Academy of Sciences Students will learn via experimentation that ice formations on land will cause a rise in sea level when they melt, whereas ice formations on water will not cause a rise in sea level when they melt. Students will learn that ice is less dense than water and that ice displaces water equal to the mass of the ice. Students investigate the relationship between the size of the wave and depth to which the effects of its energy can be observed. NOAA Ocean Explorer Students describe forms of energy found in the ocean and explain how they are used by humans. Students explain three ways that energy can be obtained from the ocean. NOAA Ocean Explorer Students utilize a grid system to document the location of artifacts recovered from a model shipwreck site. Students use data about the location and types of artifacts recovered from a model shipwreck site to draw inferences about the sunken ship and the people who were aboard. Students identify and explain types of evidence and expertise that can help verify the nature and historical content of artifacts recovered from shipwrecks. NOAA Ocean Explorer Students describe and contrast three types of underwater robots. 
Students discuss the advantages and disadvantages of using robots in the exploration of the ocean. Students identify a robotic vehicle that best suits a specific exploration task.
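The melting-ice lesson above rests on Archimedes' principle: floating ice already displaces a mass of water equal to its own, so its meltwater fills exactly the volume it displaced, while land-based ice adds entirely new water. A rough sketch of that arithmetic (the tank size and density values are illustrative, not from any particular lesson plan):

```python
# Archimedes' principle applied to the melting-ice experiment:
# floating ice displaces a volume of water equal to its own
# mass divided by the water density -- exactly the volume its
# meltwater will occupy, so the level does not change.
RHO_WATER = 1000.0  # kg/m^3

def level_rise_m(ice_mass_kg, basin_area_m2, floating):
    """Water-level rise (m) when the ice melts into the basin."""
    meltwater_volume = ice_mass_kg / RHO_WATER  # m^3 of melt
    if floating:
        # Volume already displaced by the floating ice:
        displaced = ice_mass_kg / RHO_WATER
        return (meltwater_volume - displaced) / basin_area_m2  # = 0
    # Land ice adds entirely new water to the basin:
    return meltwater_volume / basin_area_m2

# 1 kg of ice melting into a 0.1 m^2 classroom tank:
print(level_rise_m(1.0, 0.1, floating=True))    # 0.0
print(level_rise_m(1.0, 0.1, floating=False))   # 0.01 m
```

This is why the students' experiment shows a rise only for "ice formations on land": the floating case cancels exactly, whatever the ice mass.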
Some resources to seek out for this assignment:
- Classroom Talk (Curriculum)
- Games for everyone: explore the dynamics of movement, communication, problem solving and drama (Curriculum)
- Guiding the reading process: techniques and strategies for successful instruction in K-8 (Chilliwack)
- Improvisation: learning through drama (Curriculum)
- It's critical!: classroom strategies for promoting critical and creative comprehension (Curriculum)
- Stories in the classroom: storytelling, reading aloud and roleplaying with children (Barton & Booth) (Chilliwack)
- Whatever happened to language arts?: it's alive and well and part of successful literacy classrooms everywhere (Curriculum)
- Story drama [videorecording] (Abbotsford AV - VHS)
- Tomorrow's classroom today: strategies for creating active readers, writers, and thinkers (Chilliwack)
- The art of teaching writing (Curriculum)

Unfortunately, Susan has never published the book she was originally going to do. You will want to check out her website: http://www.smartreading.ca/ See also the Smart Learning presentation by Shawna Peterson, the Chilliwack interpretation of Smart Reading, and a poster, Guide for smarter learning [picture], illustrating the philosophy. Not a lot has been written about this program in Education journals.

Nonfiction reading power: teaching students how to think while they read all kinds of information (Curriculum)
**We also have Reading Power Kits organized by this approach's methodology. An example book is Giant Steps to Save the World, which is in the Reading Power IT (Intermediate Transform) Kit.

The Trait Crate Kit Series: we have kits for K to Grade 5. **Scholastic has only published up to grade 5. Example record: Grade 5 (Curriculum - Kits Section)

North Vancouver School District Writing 44: a core writing framework. Primary (Curriculum)

Unfortunately, we don't have any titles from these: Open Court Reading. But you could try searching Education Research Databases (ERIC; Canadian Business & Current Affairs, which includes Canadian education) to see what the educational literature may say about them. Please note some programs or authors are more well known than others.
A gene knockout (abbreviation: KO) is a genetic technique in which one of an organism's genes is made inoperative ("knocked out" of the organism). Organisms with a knocked-out gene, also known as knockout organisms or simply knockouts, are used in learning about a gene that has been sequenced, but which has an unknown or incompletely known function. Researchers draw inferences from the difference between the knockout organism and normal individuals. The term also refers to the process of creating such an organism, as in "knocking out" a gene. The technique is essentially the opposite of a gene knockin. Knocking out two genes simultaneously in an organism is known as a double knockout (DKO). Similarly, the terms triple knockout (TKO) and quadruple knockout (QKO) are used to describe three or four knocked-out genes, respectively. (From: Wikipedia, June 2016)
Scientists have long believed that Mars' distinctive hue comes from iron particles being rusted... but by what? A new study suggests that it wasn't water that turned the Red Planet red, but wind. According to experiments carried out by Jonathan Merrison of Aarhus University in Denmark, the color may be the result of magnetite and quartz particles colliding as they are blown about the planet's surface, with each collision exposing fresh surfaces of the quartz that oxidize the magnetite. After tests artificially recreating similar circumstances, Merrison and his team now suggest that "a few thousand years" worth of such collisions would have been enough to give Mars the color it has today. Wind, not water, may explain Red Planet's hue [New Scientist]
Dr. Rignot and his colleagues from the University of California, Irvine have been measuring changes in the mass of the land-based ice sheets in Greenland and the Antarctic. They have reported their findings in the scientific publication Nature Geoscience. The current understanding of climate warming in Greenland and the Antarctic suggests that snowfall may increase in the continents' interiors while, at the same time, melting accelerates on the coasts on account of warmer air and ocean temperatures. To further elucidate this mechanism, the team utilized highly sophisticated satellite radar observations between the years 1992 and 2006, covering 85% of Greenland's and Antarctica's coastlines, to estimate the total mass movement of melt water into the ocean. Collecting this kind of information was once an extremely daunting task due to limitations in the ability to measure such intricate changes. Technological advances over the past decades, however, have made it possible to measure trends on a monthly basis. The results of such continuous observations demonstrate that the combined loss from the ice sheets has indeed accelerated over the last eighteen years, by a total of 36.3 gigatons (Gt) per year; a Gt is equivalent to 1 billion tons. This rate is some three times faster than the rate of ice melting in mountain glaciers and at the polar ice caps. Should this rate of loss of land-bound ice continue unabated, it would prove to be the largest contributor to the rise in sea level by the end of this century. This does not take into account the distinct possibility that the rate of increase in atmospheric greenhouse gases as a result of human activity will itself increase over the coming decades. According to Dr. Rignot, "Changes in glacier flow therefore have a significant, if not dominant impact on ice sheet mass balance."
This is a significant and troubling finding that adds yet another dimension to the disturbing prospects posed by the ever escalating concentration of greenhouse gases in the earth's environment. Hopefully, the human community will use these kinds of data to implement policies that will ensure a tangible decrease in the actual use of fossil fuels to propel national economies.
The winter of 2013/14 in the northern hemisphere

Large parts of Japan were hit by catastrophic snowfall in February 2014, while the US suffered record low temperatures in December 2013. Meteorologically, these two winter events are probably connected.

Mark Bove and Eberhard Faust

The Arctic is dominated by a large, quasi-stable area of high pressure caused by sinking cold air. Surrounding this area of high pressure is the polar front, a region where cold, dry Arctic air interacts with warmer, moister air being advected poleward. The temperature and moisture gradients along the polar front cause extratropical storms to form along the boundary, and also give rise to the polar night jet stream that circles the North Pole, moving from west to east. The strength of the polar night jet, as well as of storms along the polar front, depends on the magnitude of the temperature and moisture gradients in the region. These gradients are typically at their strongest in the autumn, causing the polar night jet to intensify and impart additional vorticity, or spin, into the upper troposphere and lower stratosphere. This helps to form and contain a uniform mass of cold air over the North Pole, known as the polar vortex. The vortex acts to keep cold air in place at the pole: the stronger the vortex, the more likely Arctic air will remain there. But as winter begins, the gradients along the polar front weaken, and the polar vortex cools and stops growing. As winter progresses towards spring, sunlight returns to the Arctic. Some of the light is absorbed by ozone in the stratosphere, warming the upper atmosphere and weakening the polar vortex. The resulting destabilisation of the polar vortex allows pieces of the Arctic air to move southward, resulting in cold outbreaks in the mid-latitudes. Other outside influences can also weaken the polar vortex during the winter season. One such phenomenon is known as Sudden Stratospheric Warming (SSW).
An SSW event occasionally occurs when a stationary area of high pressure develops, forcing cyclonic storms to move around it. These blocking patterns create persistent atmospheric flows that can produce large amplitude planetary-scale waves in the troposphere, particularly when moving over mountainous terrain. The energy and momentum of these waves propagates into the polar stratosphere, and act to destabilise the polar jet via a warming of the stratosphere. The stratospheric warming disrupts or destroys the polar vortex, which allows for pieces of the polar air to be pushed southwards. In late 2013, a blocking pattern developed over the northeastern Pacific Ocean and persisted throughout the entire 2014 winter season. High pressure over this region generated high amplitude waves that destabilized the polar front jet, allowing for pieces of the polar vortex to stream southward across eastern North America. The same ridge caused Arctic air to flow into eastern Asia, resulting in severe winter storms in Japan, and produced anomalously warm and dry weather in western North America, leading to worsening drought conditions in California. At the same time, Europe experienced an unusually mild winter season. The winter in Japan Snow is not an unknown commodity in many parts of Japan, but the amount of snow seen in February in the heavily insured regions in and around Kanto was exceptional. The heavy snowfalls occurred between 6 and 9 February and 13 and 16 February and came from a similar weather pattern. Initially, a trough-like loop of the high-altitude air flow moved across eastern China and, in the days after 6 February, to the northeast across Japan. This trough in the high-altitude air flow was connected to a low pressure area in the lower atmosphere, which accompanied the displacement of the trough off Japan’s eastern coast to the northeast. 
At the southern to eastern side of this low pressure area, warm, moist air from the Pacific was drawn towards the north at the same time that Japan lay under Arctic air on the rear side of the low. Where the warm air met the cold air, heavy precipitation developed that fell as snow over Japan. By 8 February, Tokyo was buried under 27 cm of snow – a depth not measured since 12 March 1969 – and a second heavy snowfall followed from 13 to 16 February. The second weather pattern developed in much the same way as the previous snow event. The cold front's extensive snowfall reached southern Honshu on 13 February and the Tokyo region the next day. Snowfall was especially heavy on the east coast and in the Honshu mountains. On 16 February, there was 250 cm of snow in Tsunan and 43 cm in Fukushima. In Kofu (Yamanashi) the snow level reached 114 cm – the most snow recorded since recordkeeping began in 1894. The large volumes of snow brought traffic to a standstill over large areas of the country and cut off towns and villages. At least 16 people lost their lives; several hundred were injured, many of them in traffic accidents. Many parts of the area affected by the snowfall were without electricity. On 15 February, a major airline cancelled 350 flights; leading car production companies temporarily closed their factories.

Record losses for Japanese insurers

Private individuals were particularly affected by the collapse of their carports under the snow's weight. Commercially available carports in the particularly affected prefectures are normally designed to withstand a maximum snow load of 25 cm. Not only were parked cars and other items stored in the carports damaged, but snow falling from roofs and traffic accidents contributed to car insurance losses of about 22 billion yen (US$ 215m). Altogether, almost 66,000 losses were reported. The snowfall also resulted in significant damage to roofs and gutters of residential property.
Some 212,000 claims for insured damage were filed, with losses totalling 232 billion yen (US$ 2.26bn). If waterproof membranes under roof tiles had been damaged, the entire roof structure often had to be rebuilt. On top of this, the hefty demand for skilled tradesmen significantly added to repair costs. Additionally, some factory buildings, warehouses, schools and gyms could not support the massive snowfall and collapsed either partly or entirely. In some areas, the snow's weight on the buildings was twice the heaviest load recorded in previous years. Production materials, stored goods and inventory were also destroyed when roofs collapsed. Estimates for insured damage are over 320 billion yen (US$ 3.1bn), making these storms one of the most expensive catastrophe events in the Japanese insurance industry's history. This is typical "mass damage" resulting from an extensive accumulation of small and medium-sized losses averaging US$ 3,000 to 5,000 – similar to what one sees from a winter windstorm in Europe. In comparison, there were fewer large-scale losses. Industrial policies were hardly affected.

The winter in North America

Last winter produced two major Arctic air outbreaks over North America. The first occurred in early December 2013, causing unseasonably cold temperatures across the region, but quickly dissipated. The second outbreak, caused by an episode of SSW, started on 2 January and ushered in the coldest air of the season. Record low temperature observations, some dating back to the 19th century, were set in dozens of cities over the next week, including Chicago, Detroit, New York City, Cleveland, and Atlanta. Although temperatures moderated somewhat by mid-January, the persistent ridge over the northeast Pacific kept forcing Arctic air southward over eastern North America for the next three months, resulting in one of the coldest winter seasons in decades.
Twelve states had three-month temperature averages that ranked among the ten coldest on record. New records were set for the lowest monthly average temperatures across many midwestern cities in February, and across New England and the mid-Atlantic regions in March. The prolonged outbreak also caused one of the largest freeze-over events of the Great Lakes in decades. Large icebergs remained in Lake Superior until late May, and their slow melt prolonged unseasonably cool temperatures near the lakes through the summer of 2014. In all, the combination of extreme cold temperatures and several winter storm events caused an estimated US$ 2.3bn in insured losses, with economic losses of around US$ 4bn. The insured loss total for 2014 is the fifth highest total on record (adjusted for inflation), and almost double that of the 2009–2013 average, but still not extraordinary in comparison with other winter seasons over the past decade. The majority of the insured losses, US$ 1.7bn, stemmed from the January Arctic air outbreak. The majority of claims were due to frozen pipes bursting, resulting in water damage to buildings and personal property. The remainder of insured losses across the winter season were primarily associated with roof damage due to the weight of snow and ice, freezing rain events downing trees and power lines, ice damming on roofs, and automobile accidents due to slippery driving conditions.

Several potential mechanisms have been proposed linking a changing climate to such outbreaks. The first is that the Arctic has warmed dramatically over the past 100 years, and is now 3.5°C warmer than in the late 19th century. Since the development of a stable polar vortex relies in part on strong temperature gradients, the warming of the Arctic means that the gradient between polar air and warmer air to its south is reduced. This gradient reduction might lead to weaker polar vortexes in the future, which would increase the potential for pieces of the vortex to break off and enter the mid-latitudes.
The second potential mechanism is the dramatic drop in sea ice over the Arctic during early autumn. Recent modelling studies have shown that the dramatic loss of sea ice in the Arctic, particularly north of Scandinavia and Russia, may intensify atmospheric wave patterns that can weaken the polar vortex through SSW. Other research gives hints that persistent areas of high pressure in the North Pacific – like 2014's "Ridiculously Resilient Ridge", which lasted through most of the year and is linked to both the cold weather across eastern North America and the drought in California – may form more often in the current climate regime than in one without the anthropogenic greenhouse effect, giving a third mechanism for more Arctic air outbreaks in a changed climate. It should be noted that all research described here is preliminary, and it will take years of further research to determine what role, if any, anthropogenic climate change plays in the frequency and severity of Arctic air outbreaks. But given the rapid changes being observed in the Arctic, it is likely that anthropogenic climate change will play at least some role in what to expect from winter seasons in the future. Despite winter perils being covered by most insurance policies in North America, insured loss potentials due to winter storms are typically not as severe as those from tropical cyclones and thunderstorm perils. For example, winter storm losses over the period 2009–2013 have averaged US$ 1.3bn per year, but losses from tropical cyclones and thunderstorms over the same time period have averaged US$ 7.7bn and US$ 15bn respectively. As a result, winter storms are often considered a secondary peril in the US reinsurance market, with Cat XL structures typically designed to protect against tropical cyclone or earthquake loss potentials. But despite this designation, winter storms still need to be considered by the underwriter assessing business risks.
Astronomers have discovered a frozen 'super-Earth' orbiting the Sun's closest single star, yielding information about our nearest planetary neighbours. According to researchers from Queen Mary University of London in Britain, this probably rocky planet is larger than Earth and is known as 'Barnard's Star b'. According to the scientists, this super-Earth orbits its host star every 233 days. According to the study, published in the journal Nature, the planet lies beyond the distance from its host star known as the 'snow line': at this distance, volatiles such as water, ammonia and carbon dioxide freeze due to the cold. According to the researchers, conditions on this planet place it outside the habitable zone, the region where liquid water, and possibly life, could exist. They said that the surface temperature of this planet is around minus 170 degrees Celsius. This means that it is a frozen world without the favourable conditions for life found on Earth.
Consumer Revolution Imports Map

American colonists were accustomed to being able to buy goods from around the world at their local shops. Increased trade combined with a relatively high standard of living meant consumer goods were more common and more affordable than ever before. In a given household in Williamsburg, one might find spices from India, wine from France, cloth from Germany, tallow from Ireland, almonds from the Barbary coast, dishes from China, and/or sugar from Jamaica. In this lesson, students will create a class bulletin board that demonstrates their knowledge of the interconnectedness of the trade of the American colonies with many other nations of the world. Then, by reading and summarizing Consumer Revolution Information Cards, students will learn about how increased global trade by the mid-eighteenth century affected the lives of the American colonists.

- Colonial Store Product Cards, cut apart
- Consumer Revolution Information Cards, cut apart
- Speech Bubbles, cut apart
- Large world map
- Large map of the British colonies in North America
- Vocabulary Handout
- Textbook atlas

- Before beginning the lesson, place a large world map and a map of the colonies on the bulletin board.
- Discuss with students the idea of global trade: the goods we use every day come from all over the world. Explain that this is not new, and that the American colonists also enjoyed products from other countries. On the map of the colonies, mark Boston, New York, and Philadelphia, which were the port cities where merchants most commonly engaged in foreign trade.
- Arrange students in groups of two. Give each team a copy of the Vocabulary Handout and a different card from the Colonial Store Product Cards.
- Ask partners to find the country listed on their Colonial Store Product Card on the world map in their textbook atlas.
- Call the partners to the bulletin board one group at a time. Have them present their product card to the class, especially noting if their card mentions a third trade partner (such as goods from Holland that originally came from China). Then, each pair should pin a string to the board connecting their product's country of origin to the British colonies in North America map. They can connect their line to any of the marked port cities in the colonies. These lines demonstrate the interconnectedness of global trade.
- Now give partners a Consumer Revolution Information Card to read. This card gives partners additional information about the Consumer Revolution.
- Give one speech bubble to each partnership. Ask partners to write in their speech bubble a short summary of what they learned from reading their information card. It may be helpful to demonstrate an example for the class.
- Call partners to the bulletin board to present their speech bubble to the class. They can then pin their bubble on the edge of the bulletin board (to make a border) to show the information presented.
- To make a current day connection, have students look at the tag in the back of their partner's shirt and add these countries to their maps or to the bulletin board map.
- Display the interactive activity Around the Globe for the class, or have each student access the activity individually in a computer lab, on a class set of laptops, or at home. Click on each yellow spot on the map to pull up information about the materials imported from that location.

This lesson was written by Marianne Esposito, Key West, FL, and Kim O'Neil, Liverpool, NY.
Empowering Our Kids Today and Tomorrow

The ABCs of Money Management

More Fun Ways to Save

Learning to save can be fun – for you and your children! Check out the tips below for creative ways to teach your children how to save.
- Entertain them with storybooks, games and videos with monetary and savings themes.
- Give them a job around the house so they can earn their own money.
- Make a Savings Goal Chart or Wish List to keep goals top-of-mind and help prevent frivolous spending.
- Take them to the bank to open a savings account and make deposits; let them watch their money grow.
- Let them pay for items on their wish list.
- Hunt for bargains at yard sales, thrift stores and clearance and sales sections.
- Reward good savings habits if they save regularly or reach a savings goal (e.g., match their savings, buy a small toy or game, allow extra TV time or have a sleepover).

Children are keen observers. They learn how to manage money from their parents and they learn very young. Your attitude about money, and the actions you take to manage it, will have a lasting impact.

Money doesn't grow on trees. Spending is easy. The trick is to teach children how to earn the money necessary to pay for what they need today and tomorrow, to save for what they want and to help others when they can along the way.

A penny saved is a penny earned. Promote positive earning, saving and spending behaviors to help minimize "bad habits" later in life. The following steps will reinforce your children's positive attitude about learning and work, empowering them to manage their finances wisely through the years.

- Always start early: Promote patience (delayed gratification), as well as counting and basic math skills, with toddlers and young children. Help them understand the difference between things they need and things they want.
- All things cost money: You earn money by working. During grade school, talk with your children about your job and what you do to earn your paycheck. Discuss what they can do to earn their own money (an allowance for doing household chores; yard work, shoveling, or babysitting for neighbors; an after-school or weekend part-time job). Discuss your financial goals with your children and explain how you budget and save what you earn to achieve those goals. Let them learn from your successes and missteps; encourage them to ask questions. If your children want something, rather than buy it for them, help them develop a plan for saving up to buy it themselves.
- Bank accounts can help your savings grow: Take them to the bank to open their first savings account. Have them make deposits regularly, setting aside some of the money they received as birthday/holiday gifts or as payment for work they've done. Explain that the bank will help their savings grow by depositing interest every month; the more money they save, the faster it will grow.
- Create a savings strategy: Constantly reinforce the difference between "wants" and "needs," and discuss the importance of helping others through charitable donations. Encourage your children to divide their money into three groupings: saving, spending and donating.
- Cost of borrowing money: Credit card, auto and home loan ads may lead your teens to believe borrowing money is an easy way to get what they want. Discuss responsible borrowing and the true cost of purchasing with credit. Use an example from your own borrowing history to show them how much an item cost, how much you actually paid for it, and whether or not it was worth the extra expense.

Getting there is half the fun. Teaching children about money, and how to manage it, may seem like a daunting task, but it can include lighthearted good times. Sneak in spending and saving lessons through storybooks, playing traditional board games and games online, or by watching kid-friendly videos that include counting money, choosing how to spend it, and the cost of goods or services.

For great lessons about money and savings for your kids, add these books to their reading list:
- What Can I Buy by Julie Moriarty
- A Quarter from the Tooth Fairy by Caren Holtzman
- Pigs will be Pigs by Amy Axelrod
- Max's Money by Teddy Slater
- How Much is that Guinea Pig in the Window? by Joanne Rocklin
- The Case of the Shrunken Allowance by Joanne Rocklin

For more ideas about teaching your children about money and money management, check out the following resources:
- TD Bank's Wow! Zone – Helpful financial information for kids, parents and educators.
- Jump$tart Coalition for Personal Financial Literacy (http://www.jumpstart.org/) – A national coalition dedicated to improving the financial literacy of pre-kindergarten through college-age youth.
What is the principle of a fair hearing in the law? I'm writing a project on this topic, so I would love to get a general preview of this principle.

Fair hearing is also sometimes referred to as the "opportunity to be heard"; it is a bedrock of the law and is not limited to criminal law, but also applies in civil and administrative cases. Fundamentally, it is the opportunity to present your evidence, cross-examine witnesses and discover the evidence being presented against you. In criminal law, it also means first being notified of the charges that are being brought against you. This may seem obvious, but it wasn't always so, and indeed still today it's not always so. Before the Magna Carta, the King might arbitrarily take and imprison someone without any procedures or notification of the particular reason for the loss of liberty. Today, the issue is discussed in the context of the military prison base at Guantanamo, where many inmates have been held for years without notification of the exact charges against them.

A fair hearing means that you must receive all of the protections guaranteed to a person by the U.S. Constitution, including the "due process" of law afforded under the 14th Amendment. Some of these rights include the right to remain silent, the right to be tried by a jury of your peers, the right to face your accuser and be present at trial, and the right to a fair and public trial.
Keeping teens safe online

If teens are being cyberbullied, they may exhibit the signs of traditional bullying. They may also avoid discussions about their online activities. They may appear unhappy, irritable, or distressed, particularly after using the computer or viewing their cell phone. There may be a distinct change in how often they use the computer. Teens are often afraid to talk to parents about their cyberbullying experiences out of fear their online activities will be restricted. Reassure your teen that you will not take away their phone or Internet, but that if they encounter anything online that makes them feel uncomfortable, or if they receive any messages or view content that is harassing or upsetting, it is important to talk to an adult. If teens are cyberbullying others, they may have a history of aggression or bullying in more traditional ways, or have friends who do. They may also be more secretive about their online activities, switching screens or programs when others walk by. They may spend long hours online or become upset if they cannot use the computer. They may also use multiple online accounts and appear agitated or animated when online. There have been a number of stories in the media over the past few years concerning cyberbullying. Use these as a starting point for a conversation with your teen about what is considered acceptable behaviour both on- and offline. Set family guidelines and rules for online behaviour. Encourage teens to think how they would feel if they were the target of cyberbullying. Check out the Cyber Tool put together by MediaSmarts, Canada's Centre for Digital and Media Literacy.
This valuable resource kit contains a self-directed tutorial that examines the moral dilemmas kids face in their online activities and offers strategies for helping youth deal with them; lesson plans for teachers of grades 5 and 6 that focus on search skills and critical thinking; and tip sheets for parents that provide useful ideas and strategies on how to teach your kids to be safe and ethical online.
XML in the Java™ 2 Platform

XML (Extensible Markup Language) is a flexible way to create common information formats and share both the format and the data on the World Wide Web, intranets, and elsewhere. XML can be used by any individual or group of individuals or companies that wants to share information in a consistent way. XML, a formal recommendation from the World Wide Web Consortium (W3C), is similar to the language of today's Web pages, the Hypertext Markup Language (HTML). Both XML and HTML contain markup symbols to describe the contents of a page or file. HTML, however, describes the content of a Web page (mainly text and graphic images) only in terms of how it is to be displayed and interacted with. For example, a <P> starts a new paragraph. XML describes the content in terms of what data is being described. For example, a <PHONENUM> could indicate that the data that followed it was a phone number. This means that an XML file can be processed purely as data by a program or it can be stored with similar data on another computer or, like an HTML file, that it can be displayed. For example, depending on how the application in the receiving computer wanted to handle the phone number, it could be stored, displayed, or dialed.

Java 2 Platform XML Features

Additional XML reference material
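The point that tag names like <PHONENUM> describe the data itself, rather than its presentation, can be illustrated with the JAXP DOM API that ships with the Java platform. The sketch below is illustrative only: the class name, the sample document, and the lowercase <phonenum> element are my own, not part of any standard format.

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class PhoneNumDemo {

    // Parse an XML string and return the text of its first <phonenum> element.
    static String extractPhone(String xml) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(xml)));
        // Because the tag name identifies the data, the application can decide
        // what to do with the value: store it, display it, or dial it.
        return doc.getElementsByTagName("phonenum").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // A tiny XML document; in practice this would come from a file or network.
        String xml = "<contact><name>Ada</name><phonenum>555-0100</phonenum></contact>";
        System.out.println("Phone number: " + extractPhone(xml));
    }
}
```

An HTML parser, by contrast, could only tell us that the same text was, say, a paragraph; here the markup itself says the value is a phone number, which is what lets a program process the file purely as data.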
Shang (shäng) [key] or Yin, dynasty of China, which ruled, according to traditional dates, from c.1766 B.C. to c.1122 B.C. or, according to some modern scholars, from c.1523 B.C. to c.1027 B.C. It is the first historic dynasty of China; its legendary founder, T'ang, is said to have defeated the last Hsia ruler, Chieh. His successors ruled over a city-state in modern Henan prov. and may have controlled other smaller states on the North China Plain. They warred against the Huns and against the Chou, who finally defeated the last Shang king, Shou. Archaeological remains at one of the capitals, near modern Anyang, suggest (along with later records) that the Shang had a complex agricultural civilization of peasants and city-dwelling artisans, with a priestly class, nobles, and a king, who was also high priest. Shang religion was characterized by ancestor worship, sacrifices to nature deities, and divination. Stylized inscriptions on bone and bronze artifacts probably reveal the earliest examples of Chinese writing. Bronze casting under the Shang reached a height of artistic achievement rarely equaled anywhere in the world. There was a highly organized bureaucracy, and the patriarchal Chinese family system seems to have already been developed. See H. G. Creel, The Birth of China (1954); T. Cheng, Archaeology in China: Vol. II, Shang China (1960); K. C. Chang, Shang Civilization (1980); D. Keightley, Early China (1981) and The Origins of Chinese Civilization (1983). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Factors that limit the type of life forms able to live in an ocean environment include temperature, sunlight, pressure, oxygen concentration and nutrient availability. Because of its many varied attributes, the ocean offers a unique home to aquatic life. The various organisms living in the sea are suitably adapted to temperatures at differing depths. Because sunlight can penetrate water to only about 100 feet, water temperature falls with depth toward the ocean floor. Sunlight is not only important to water temperature, but is necessary for the production of oxygen and other important substances. The weight of the water exerting pressure on life forms in the ocean increases with depth, limiting life at increasing depths. Finally, supplies of nutrients often run short in different regions, with nitrogen being the primary limiting nutrient to saltwater life.
The nose is the body's primary organ of smell and also functions as part of the body's respiratory system. Air comes into the body through the nose. As it passes over the specialized cells of the olfactory system, the brain recognizes and identifies smells. Hairs in the nose clean the air of foreign particles. As air moves through the nasal passages, it is warmed and humidified before it goes into the lungs. The most common medical condition related to the nose is nasal congestion. This can be caused by colds or flu, allergies, or environmental factors, resulting in inflammation of the nasal passages. The body's response to congestion is to convulsively expel air through the nose by a sneeze. Nosebleeds, known medically as epistaxis, are a second common medical issue of the nose. As many as 60 percent of people report nosebleed experiences, with the highest rates found in children under 10 and adults over 50.
Rubber trees (Ficus elastica), also known as Indian rubber plants, are members of the Moraceae family. These trees often are used as specimen trees or as a screen. While rubber trees generally are free from damaging pests and diseases, they can turn brown and droop if not cared for properly. Rubber trees can be used as shade trees and often are placed around decks or patios as a privacy screen. Home gardeners also grow them in containers indoors as houseplants. Rubber trees can reach heights of 30 to 45 feet if not pruned back and develop spreads of 25 to 30 feet. These trees grow rapidly in full sun or partial shade and tolerate a variety of soil types. Rubber trees are susceptible to the effects of overwatering, especially during the winter months when plants do not require as much moisture. According to the National Gardening Association, ficus will continue to take in water until its leaf cells are over-full and burst. This causes plant leaves to turn brown and droop. Overwatering also causes rubber trees to develop root rot diseases that affect the health and appearance of the plant. Root rot diseases are common in overwatered plants, and rubber trees in this condition have an unhealthy appearance including brown, wilted leaves and brown rotted roots. Affected roots will have a stringy appearance; they easily are stripped from the plant. Eventually, rubber trees with root rot will drop their brown, droopy leaves. Check the soil of your rubber tree to determine if water is necessary. Place your finger into the soil; if it is moist 1/2 to 1 inch below the surface, water is not needed. Check every day and provide water only when your plant is dry 1 inch below the soil surface. When growing rubber trees indoors, be sure to place them in a container with good drainage so the tree does not sit in water. Remove any excess water in the drainage pan and discard it.
A mid-ocean ridge, or mid-oceanic ridge, is an underwater mountain range formed by plate tectonics. The uplifting of the ocean floor occurs where two tectonic plates meet at a divergent boundary: convection currents rise in the mantle beneath the oceanic crust and create magma, and seafloor spreading at the ridge forms new oceanic crust through volcanic activity, which then gradually moves away from the ridge. Divergent plate boundaries share several characteristics: they are marked by mid-ocean ridges, plate motion is directed away from the boundary, and new lithosphere is created there. Earthquakes are common along mid-ocean ridges and in their rift valleys. Mid-ocean ridges are typical submarine relief features; although generally submerged beneath the ocean, local crowning above sea level gives rise to islands such as Iceland.
The mid-oceanic ridge system constitutes about 23% of the Earth's surface. The topographic expression of a mid-ocean ridge is typically between 1000 and 4000 km in width, and in its center is a rift valley, 30 to 50 kilometers wide, that dissects 1000 to 3000 meters deep into the ridge. Mantle tomography profiles of global mid-ocean ridges suggest that some, such as the Mid-Atlantic and Southwest Indian ridges, are deep-rooted, extending as far down as 250-300 km into the mantle.
The essential characteristics of a ridge, such as topography, structure, and rock types, vary with how fast it is spreading, how active it is magmatically and volcanically, and how much tectonic stretching and faulting is taking place; ridges with medium spreading rates are transitional in character between slow- and fast-spreading ridges. The North American and Eurasian plates, for example, are moving away from each other along the line of the Mid-Atlantic Ridge, a slow-spreading environment that hosts one of the largest single known sulfide deposits: the TAG mound, measuring some 250 meters across.
A team of paleontologists has announced that fossilized remains recovered from a remote site in northern Alaska belong to a new species of pygmy tyrannosaur. The animal, which they dubbed Nanuqsaurus hoglundi, lived some 70 million years ago, during the Late Cretaceous period, when the land was part of an ancient subcontinent called Laramidia. Though only around half the size of its close cousin, the famous Tyrannosaurus rex, this polar pygmy was still around 25 feet long and weighed some 1,000 pounds. As reported this week in the journal PLOS ONE, a team of paleontologists from the Perot Museum of Nature and Science in Dallas first discovered the fossilized remains of the new tyrannosaur species when preparing for a dig in 2006 at northern Alaska’s Kikak-Tegoseak quarry. On a rocky bluff overlooking the Colville River, almost 400 miles north of Fairbanks, they uncovered the bones of Pachyrhinosaurus perotorum, a new species of horned dinosaur. Back in their laboratory, however, the scientists realized they had also unearthed fragments of the skull, lower and upper jaw bones of a tyrannosaur. Though Tyrannosaurus rex is by far the most famous, many different species of tyrannosaur roamed Asia and North America during the Late Cretaceous Period, some 70 million years ago. Through analysis of its remains, the scientists determined this particular one was much smaller than other tyrannosaurs: It probably measured around two meters tall at the hips and seven meters from nose to tail, around half the size of a T. rex. They were also able to identify the bones as belonging to a full-grown adult, thanks to the distinctive sockets along the edge of the upper jaw, only seen in jawbones of other adult tyrannosaurs. 
According to Ron Tykoski, fossil preparator at the Perot Museum, “It wasn’t until the past few years, with more work being done on growth rates, that we were able to look at these pieces in finer detail and realize that they weren’t a youngster of a known species, but a mature individual of something new”: a pygmy tyrannosaur. Tykoski and his colleagues named the new species Nanuqsaurus hoglundi; “Nanuq” means polar bear in the language of the local Inupiat people, and “hoglundi” is a nod to Forrest Hoglund, a Dallas businessman who helped raise funds to build the Perot Museum. Certain similarities, including a distinctively shaped ridge on its head, indicate that N. hoglundi was a close cousin of T. rex. When they looked back at the bones of the horned dinosaur species uncovered in 2006, the scientists found tooth marks and deep grooves on some of them, indicating that N. hoglundi preyed on local plant-eating dinosaurs. According to Tony Fiorillo, the Perot’s curator of earth sciences, “We feel pretty confident that this pygmy tyrannosaur was eating the herbivorous dinosaurs around at the time.” Such discoveries, Fiorillo says, shed new light on what life might have been like in the prehistoric Arctic. “By seeing tooth marks, it makes them the animals that they really were,” Fiorillo said, “instead of just a cool collection of objects.” Yet the question remained: why was N. hoglundi so small compared to other tyrannosaurs? According to Fiorillo and Tykoski, the answer lies in the cool climate and high latitude of its Arctic home. While the weather above the 70th parallel in Alaska would have been warmer during the Late Cretaceous than it is today, there still would have been far less sunlight and fewer resources than the typical T. rex climate, and long seasons during which food might not have been readily available. 
Still, such a conclusion seems to contradict a common evolutionary axiom, which holds that animals closest to the poles tend to be larger (as larger mass to surface area ratio helps them keep warmer). As this so-called “Bergmann’s Rule” doesn’t hold true in the case of the pygmy tyrannosaur, the new species remains something of a mystery–one that scientists will undoubtedly continue to explore. “We’d certainly like to know more about this animal,” said Fiorillo. “And whatever else might be out there.”
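The mass-to-surface-area argument behind Bergmann's Rule is easy to make concrete: for a sphere, the surface-area-to-volume ratio falls as 3/r, so a larger body sheds proportionally less heat. A quick sketch (an idealized spherical-animal model, not anything from the paper):

```python
import math

def sa_to_volume_ratio(radius):
    """Surface-area-to-volume ratio of a sphere of the given radius."""
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface / volume  # simplifies to 3 / radius

# Doubling body size halves the relative surface through which heat is lost,
# which is why polar animals are usually expected to be bigger.
print(sa_to_volume_ratio(1.0), sa_to_volume_ratio(2.0))  # 3.0 1.5
```

By that logic a polar tyrannosaur should have been large, which is exactly why N. hoglundi's small size is the puzzle the researchers describe.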
Soon after the glaciers melted at the end of the last Ice Age, our planet was vulnerable to abrupt and dramatic shifts in climate, including prolonged cold snaps that lasted for decades. New research suggests early hunter-gatherers living in the British Isles didn’t just manage to survive these harsh conditions—they actually thrived. Ancient hunter-gatherers living at the Star Carr site some 11,000 years ago in what is now North Yorkshire didn’t skip a beat as temperatures plunged around the globe in the immediate post-glacial era, according to new research published in Nature Ecology & Evolution. This latest research suggests abrupt climate change wasn’t catastrophically or culturally disruptive to this long-standing community, and that early humans were remarkably resilient and adaptable in the face of dramatic climate shifts. (Map: the Star Carr lake with excavation sites marked around it.) Amateur archaeologists first discovered the Star Carr site back in the late 1940s, and excavations have been conducted there on-and-off ever since. Digging through several feet of muddy peat, archaeologists have uncovered traces of a Mesolithic community that lived continuously around the edge of a former lake for over 300 years starting around 8770 BC. Items found at Star Carr include huge numbers of animal bones and wooden timbers, barbed points, amber and shale beads, decorative antler headdresses, and much more. The Star Carr population arrived in this part of the world at the very beginning of the Holocene Era, which happens to be the era we still find ourselves in. The Holocene started when the Ice Age came to an end some 11,500 years ago, but in this transitionary period, the Earth’s climate was still subject to dramatic shifts. In this immediate post-Ice Age era, rising sea levels, changing ocean currents, and frigid ocean temperatures produced prolonged cold periods that rekindled memories of the prior frozen epoch.
Average global temperatures dropped by as much as three degrees Celsius, creating cold snaps that lasted more than a hundred years. In parts of the British Isles, Eurasia, and North America, temperatures got so low that entire forests stopped growing. Anthropologists figured early humans living in northern Britain suffered during this time, but the new study suggests this wasn’t the case. “It has been argued that abrupt climatic events may have caused a crash in Mesolithic populations in Northern Britain, but our study reveals that at least in the case of the pioneering colonisers at Star Carr, early communities were able to cope with extreme and persistent climate events,” lead author Simon Blockley, a researcher at Royal Holloway, University of London, said in a statement. The Star Carr site consists of many layers, some of which coincide with the abrupt cooling periods. Digging through the mud, the archaeologists uncovered large numbers of animal bones, flint blades, worked wood, and evidence of wooden houses and wooden platforms built on the edge of the lake. (Photo: the team taking core samples.) The scientists also extracted core samples, digging boreholes to depths of 16 to 26 feet. Within the sediment, the researchers found traces of pollen and some animal fossils, which were used to radiocarbon date the layers. These samples showed that the region experienced two episodes of extreme cooling—one that happened when these Mesolithic humans first moved into the area, and one that happened when they were already firmly established. The researchers expected to see evidence of disrupted or altered activities within the specific layers. And indeed, during the early settlement phase, evidence suggested a period of slowed progress, but the second cooling period had no noticeable effect on the Star Carr community.
“Perhaps the later, more established community at Star Carr were buffered from the effects of the second extreme cooling event—which is likely to have caused exceptionally harsh winter conditions—by their continued access to a range of resources at the site including red deer,” said Blockley. This evidence suggests a remarkable level of resilience, adaptation, and likely cooperation, among these early humans. But this community wasn’t completely invulnerable to change. They may have survived severe and abrupt climate change, but they were more susceptible to smaller, localised changes to their environment. Over time, their precious lake got shallower and boggier, eventually turning into a useless marshland. After living along the edge of the lake for hundreds and hundreds of years, the Star Carr people were forced to abandon the area. [Nature Ecology & Evolution]
What is a tick? Ticks are not insects but Arachnids, a class of Arthropods, which also includes mites, spiders and scorpions. They are divided into two groups – hard bodied and soft bodied – both of which are capable of transmitting diseases in the United States. Ticks are parasites that feed by latching on to an animal host, embedding their mouthparts into the host’s skin and sucking its blood. This method of feeding makes ticks the perfect vectors (organisms that harbor and transmit disease) for a variety of pathogenic agents. Ticks are responsible for at least ten different known diseases in humans in the U.S., including Lyme disease, Rocky Mountain spotted fever, babesiosis, and more recently, anaplasmosis and ehrlichiosis. The Deer Tick Life Cycle The deer (or black-legged) tick in the East and the related western black-legged tick are the only known transmitters of Lyme disease in the United States. Both are hard-bodied ticks with a two-year life cycle. Like all species of ticks, deer ticks and their relatives require a blood meal to progress to each successive stage in their life cycles. The life cycle of the deer tick comprises three growth stages: the larva, nymph and adult. In both the northeastern and mid-western U.S., where Lyme disease has become prevalent, it takes about two years for the tick to hatch from the egg, go through all three stages, reproduce, and then die. Detailed descriptions of this life cycle and the seasonal timing of peak activity, as they occur in these regions, are provided below. The above graph shows the host-seeking behavior of I. scapularis ticks according to life-stage and season. Larval activity peaks in August, nymphs are active during the summer months, and adults are active during the spring and fall. People primarily acquire Borrelia burgdorferi (the causative agent of Lyme disease) from infected nymphs because of their small size. Host-seeking larvae are not infected.
Infected adults are large enough to be noticed and are usually removed by people before B. burgdorferi is transmitted. Consequently, very few Lyme disease cases are reported during spring and fall. The probability of B. burgdorferi transmission depends on how long an infected tick feeds on a person. Stage 1: Larva – As shown in the upper left corner of the life-cycle diagram to the right, eggs laid by an adult female deer tick in the spring hatch into larvae later in the summer. These larvae reach their peak activity in August. No bigger than a period printed in a newspaper, a larva will wait on the ground until a small mammal or bird brushes up against it. The larva then attaches itself to its host, begins feeding, and over a few days, engorges (swells up) with blood. If the host is already infected with the Lyme disease spirochete (a form of bacterium) from previous tick bites, the larva will likely become infected as well. In this way, infected hosts in the wild (primarily white-footed mice, which exist in large numbers in Lyme-endemic areas of the northeast and upper mid-west) serve as spirochete reservoirs, infecting ticks that feed upon them. Other mammals and ground-feeding birds may also serve as natural reservoirs of infection. Because larval ticks are not born infected, they cannot transmit Lyme disease to animal or human hosts. Instead, “reservoir” hosts infect the larvae. Having already fed, an infected larva will not seek another host, human or otherwise, until after it reaches the next stage in its life cycle. Therefore, larvae do not, in themselves, pose a threat to humans or pets. Stage 2: Nymph – Larvae, after feeding, drop off their hosts and molt, or transform, into nymphs in the fall. The nymphs remain inactive throughout the winter and early spring. In May, nymphal activity begins. Host-seeking nymphs wait on vegetation near the ground for a small mammal or bird to approach.
The nymph will then latch on to its host and feed for four or five days, engorging with blood and swelling to many times its original size. If previously infected during its larval stage, the nymph may transmit the Lyme disease spirochete to its host. If not previously infected, the nymph may become infected if its host carries the Lyme disease spirochete from previous infectious tick bites. In highly endemic areas of the northeast and upper midwest, 25% of nymphs have been found to harbor the Lyme disease spirochete. Too often, humans are the hosts that come into contact with infected nymphs during their peak spring activity (late May through July). Although the nymphs’ preferred hosts are small mammals and birds, humans and their pets are suitable substitutes. Because nymphs are about the size of a poppy seed, they often go unnoticed until fully engorged, and are therefore responsible for nearly all human Lyme disease cases. Stage 3: Adult – Once engorged, the nymph drops off its host into the leaf litter and molts into an adult. These adults actively seek new hosts throughout the fall, waiting up to 3 feet above the ground on stalks of grass or leaf tips to latch onto deer (their preferred host) or other larger mammals (including humans, dogs, cats, horses, and other domestic animals). Peak activity for adult deer ticks occurs in late October and early November. Of adults sampled in highly endemic areas of the northeast, 50% have been found to carry the Lyme disease spirochete. However, few cases of Lyme disease are acquired from adult tick bites because adults are relatively large (about the size of an apple seed) and attached ticks are usually found and removed before spirochete transmission occurs (more than 36 hrs). (Photos: an adult deer tick questing for a host, its upper legs raised in preparation for latching on to a passing animal, and an adult deer tick seeking a suitable spot to feed.) As winter closes in, adult ticks unsuccessful in finding hosts take cover under leaf litter or other surface vegetation, becoming inactive in temperatures below 45° F. Generally, winters in the northeast and upper mid-west are cold enough to keep adult ticks at bay until late February or early March (an exception was the warm winter of 1997-1998) when temperatures begin to rise. At this time, they resume the quest for hosts in a last-ditch effort to obtain a blood meal allowing them to mate and reproduce. This second activity peak typically occurs in March and early April. (Photo: an engorged adult female with eggs.) Adult female ticks that attach to deer, whether in the fall or spring, feed for approximately one week. Males feed only intermittently. Mating may take place on or off the host, and is required for the female’s successful completion of the blood meal. The females then drop off the host, become gravid (egg-laden), lay their eggs underneath leaf litter in early spring, and die. Each female lays approximately 3,000 eggs. The eggs hatch later in the summer, beginning the two-year cycle anew. For residents of regions other than the northeast and upper mid-west: Please note that where the range of the deer tick (or its close relative the western black-legged tick) extends beyond the northeast or upper mid-west, the timing of peak activity for each life stage of the tick may differ from that described above. In these areas, information on peak nymphal and adult tick activity can probably be obtained from local universities and health departments.
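As a summary, the stage-by-stage details described above can be collected into a small lookup table (a sketch in Python; the names and field values simply paraphrase the text):

```python
# Deer tick life stages as described in the text: relative size, months of
# peak host-seeking activity, and the risk each stage poses to humans.
DEER_TICK_STAGES = {
    "larva": {"size": "printed period", "peak_activity": ["August"],
              "human_risk": "none (larvae are not born infected)"},
    "nymph": {"size": "poppy seed", "peak_activity": ["late May", "June", "July"],
              "human_risk": "highest (often unnoticed; nearly all human cases)"},
    "adult": {"size": "apple seed",
              "peak_activity": ["late October", "early November", "March", "early April"],
              "human_risk": "low (usually found and removed before transmission)"},
}

def stages_active_in(month):
    """Return the stages whose peak activity, per the text, includes the given month."""
    return [stage for stage, info in DEER_TICK_STAGES.items()
            if any(month in m for m in info["peak_activity"])]

print(stages_active_in("July"))    # ['nymph']
print(stages_active_in("August"))  # ['larva']
```

This is only a mnemonic for the regional (northeast/upper mid-west) timing given above; as the text notes, peak activity differs elsewhere in the deer tick's range.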
They attached small strips of graphene to metal electrodes, suspended the strips above the substrate, and passed a current through the filaments to cause them to heat up. The study, “Bright visible light emission from graphene,” is published in the Advance Online Publication on Nature Nanotechnology‘s website on June 15. “We’ve created what is essentially the world’s thinnest light bulb,” says Hone, Wang Fon-Jen Professor of Mechanical Engineering at Columbia Engineering and co-author of the study. “This new type of ‘broadband’ light emitter can be integrated into chips and will pave the way towards the realization of atomically thin, flexible, and transparent displays, and graphene-based on-chip optical communications.” Creating light in small structures on the surface of a chip is crucial for developing fully integrated ‘photonic’ circuits that do with light what is now done with electric currents in semiconductor integrated circuits. Researchers have developed many approaches to do this, but have not yet been able to put the oldest and simplest artificial light source—the incandescent light bulb—onto a chip. This is primarily because light bulb filaments must be extremely hot—thousands of degrees Celsius—in order to glow in the visible range and micro-scale metal wires cannot withstand such temperatures. In addition, heat transfer from the hot filament to its surroundings is extremely efficient at the microscale, making such structures impractical and leading to damage of the surrounding chip. By measuring the spectrum of the light emitted from the graphene, the team was able to show that the graphene was reaching temperatures of above 2500 degrees Celsius, hot enough to glow brightly. 
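For a rough sense of why 2500 degrees Celsius is hot enough to glow brightly, one can apply Wien's displacement law for an ideal blackbody (a simplification I'm adding for illustration; graphene's actual emission spectrum is modified by its optical properties, as the paper's spectral measurements show):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, in meter-kelvins

def peak_wavelength_nm(temp_c):
    """Peak blackbody emission wavelength (nm) via Wien's law: lambda = b / T."""
    temp_k = temp_c + 273.15
    return WIEN_B / temp_k * 1e9

# At ~2500 C the blackbody peak sits in the near-infrared (~1045 nm);
# the visible glow comes from the short-wavelength tail of the spectrum.
print(round(peak_wavelength_nm(2500)))  # 1045
```

This also illustrates why cooler micro-filaments fail to glow visibly: at lower temperatures the peak moves deeper into the infrared and the visible tail becomes negligible.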
“The visible light from atomically thin graphene is so intense that it is visible even to the naked eye, without any additional magnification,” explains Young Duck Kim, first and co-lead author on the paper and postdoctoral research scientist who works in Hone’s group at Columbia Engineering. The ability of graphene to achieve such high temperatures without melting the substrate or the metal electrodes is due to another interesting property: as it heats up, graphene becomes a much poorer conductor of heat. This means that the high temperatures stay confined to a small ‘hot spot’ in the center. “At the highest temperatures, the electron temperature is much higher than that of acoustic vibrational modes of the graphene lattice, so that less energy is needed to attain temperatures needed for visible light emission,” Myung-Ho Bae, a senior researcher at KRISS and co-lead author, observes. “These unique thermal properties allow us to heat the suspended graphene up to half of the temperature of the sun, and improve efficiency 1000 times, as compared to graphene on a solid substrate.” The team also demonstrated the scalability of their technique by realizing large-scale arrays of chemical-vapor-deposited (CVD) graphene light emitters. Yun Daniel Park, professor in the department of physics and astronomy at Seoul National University and co-lead author, notes that they are working with the same material that Thomas Edison used when he invented the incandescent light bulb: “Edison originally used carbon as a filament for his light bulb and here we are going back to the same element, but using it in its pure form—graphene—and at its ultimate size limit—one atom thick.” The group is currently working to further characterize the performance of these devices—for example, how fast they can be turned on and off to create ‘bits’ for optical communications—and to develop techniques for integrating them into flexible substrates.
Hone adds, “We are just starting to dream about other uses for these structures—for example, as micro-hotplates that can be heated to thousands of degrees in a fraction of a second to study high-temperature chemical reactions or catalysis.” Image and article via Phys Org
The Delaware Division of Public Health (DPH) is urging the public to take precautions to prevent possible exposure and spread of norovirus, an illness that typically occurs during the winter months. After the first report received on January 3, 2013, DPH has investigated a number of suspected and confirmed norovirus outbreaks. The norovirus is not related to influenza nor linked to the high number of influenza cases so far this flu season. Gastrointestinal illness caused by norovirus is unpleasant and can be severe for those who are elderly or have an underlying health condition. Symptoms include nausea, vomiting, diarrhea, and stomach cramping. Some people may experience fever, chills, headache, muscle aches, or a general sense of tiredness. The symptoms can begin suddenly and an infected person may go from feeling well to very sick in a very short period of time. In most people, the illness lasts for one to two days. People with norovirus illness are contagious from the moment they begin feeling sick until at least three days after they recover; some people may be contagious for even longer. Infection can be more severe in young children and elderly people. Dehydration can occur rapidly and may require medical treatment or hospitalization. Although there are no specific medications to treat norovirus, drinking plenty of liquids to prevent dehydration is important. Noroviruses are easily transmitted by touching a contaminated surface as well as by direct contact with an infected person or by eating food or drinking liquids that have been contaminated with the virus. The best course of action is prevention. DPH recommends the following steps to prevent exposure to and spread of norovirus. 1.
Noroviruses can be difficult to kill with normal cleaning and disinfecting procedures. Surfaces that have been contaminated with stool or vomit should be cleaned immediately and disinfected with a freshly prepared diluted bleach solution (1 part bleach:10 parts water) or a bleach-based household cleaner. Never use undiluted bleach. 2. If you are ill with vomiting or diarrhea, you should not go to work, school, or attend daycare. 3. Healthcare facilities, such as nursing homes and hospitals, may restrict visitation of sick family members or friends for the safety of not only the ill persons but also the visitors. 4. Wash your hands frequently and thoroughly with soap and water. This is one of the most effective ways to protect yourself and others against norovirus since hand sanitizers alone are not as effective against the virus as handwashing. For more information about norovirus, see the DPH Web site at http://dhss.delaware.gov/dhss/dph/files/norwalkfaq.pdf.
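The 1:10 bleach-to-water ratio from the DPH guidance above can be turned into a quick mixing helper (a sketch; the function name and volumes are my own, only the ratio comes from the guidance):

```python
def bleach_dilution(total_ml, bleach_parts=1, water_parts=10):
    """Split a total volume (mL) into bleach and water for a diluted disinfecting solution.

    Defaults to the 1 part bleach : 10 parts water ratio recommended above.
    """
    parts = bleach_parts + water_parts
    return total_ml * bleach_parts / parts, total_ml * water_parts / parts

# Example: mixing roughly 1.1 liters of solution needs 100 mL bleach, 1000 mL water
print(bleach_dilution(1100))  # (100.0, 1000.0)
```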
Jawaharlal Nehru Essay Jawaharlal Nehru was the first Prime Minister of India. His Prime-Minister-ship was marked by social and economic reforms of the Indian state. A number of foreign policy landmarks like the founding of the Non-Aligned Movement also marked the tenure of Jawaharlal Nehru as Prime Minister. Jawaharlal Nehru became Prime Minister on the 15th of August 1947. His ascension was plagued by controversy and a bitter power struggle within the Congress Party. The internal struggle of the party was symptomatic of the larger struggle within the Indian Republic itself. The initial period of Jawaharlal Nehru as Prime Minister was marked by communal violence. Jawaharlal Nehru was forced to concede the creation of Pakistan as per the wishes of the Muslim League under the leadership of Muhammad Ali Jinnah. Communal violence enveloped the entire country during this period. Maximum bloodshed was witnessed in the national capital Delhi. The Indian states of Punjab and West Bengal also witnessed fierce bloodshed. The first Prime Minister tried to defuse the explosive situation by visiting the violence-affected areas. He toured the riot-stricken areas with Pakistani leaders to reassure those affected by the violence. Nehru promoted peace in Punjab during that momentous period in Indian history. The secular nature of Jawaharlal Nehru was best exemplified during those times. He took active steps to safeguard the status of Indian Muslims. The first Prime Minister Jawaharlal Nehru was one of the first Indian policymakers to understand the importance of cottage industries in the Indian economy. The development of such small-scale industries infused much needed production efficiency into the rural Indian economy. The cottage industries also helped the agricultural workers to have a better quality of life, due to the additional profits generated by the farming community.
At the core of any electronic project is knowing how to power it and how long it will last. This Instructable focuses on how to power digital electronic projects that take low voltage. Basic components and principles will be gone over, such as AC vs. DC power, circuits in series and in parallel, and Ohm's Law. In addition to being introduced to these fundamental topics, you will also learn how to power the Intel Edison with the Arduino and Mini breakouts. Step 1: AC Vs. DC There are two different kinds of power signals that you can work with when powering a project: alternating current (AC) and direct current (DC). As the names suggest, the difference is in how the current flows. Direct current flows in one direction, while alternating current changes direction at timed intervals. This Instructable is geared towards low voltage DC powered projects using a microcontroller and other digital components. Still, understanding AC is important, and it can be used to power your DC project too. AC is available through wall outlets at 120 volts in the United States and at 220 – 250 volts elsewhere. This is a lot of power! This kind of juice will give you a good jolt if you touch ground and power at the same time, creating a short. This can be very harmful to a person, so be careful when working with AC. Most microcontroller projects will need 5V or 3.3V, not the hefty 120 or 220 volts coming from the wall outlet. To fix this, use a converter that plugs into the wall and outputs a steady voltage and current out of a barrel plug that is appropriate for your project. DC is mainly obtained through batteries; read more about those in the next step. Step 2: Ohm's Law Ohm's Law states the relationship between current, voltage and resistance, and can be written as three equations. V = voltage I = current R = resistance V = I x R I = V/R R = V/I The triangle diagram is an easy visual reminder of the equations. This is how it works.
- Voltage is on top, above the dividing line - Resistance and current are below the dividing line and get multiplied - Cover up what you want to know and read what is left One common way to use Ohm's Law is to find the value of a current-limiting resistor. For example, say you want to install an indicator LED into the new guitar pedal you just got. Your circuit will look something like the one above. LEDs have a voltage rating and a current rating. The voltage rating is how much voltage the LED is going to use up or "drop"; this can be annotated by different abbreviations, such as Vf (meaning forward voltage). The dropped voltage can range from 1.8 - 3.3 volts, depending on the color of the LED. Used in the example circuit here is the typical forward voltage of 2V and rated current of 20 mA. We can't hook up the 9V battery straight to the LED without it burning out. We need a resistor to limit the current. That's where Ohm's Law comes in. First, subtract 2V (the voltage drop from the LED) from 9V to get 7V. We now know we need the resistor to take 7 volts, leaving 2 for the LED. From the LED current draw, we know our current value to use is 20 mA. In order to use this in the equation, we need to convert milliamps to amps: 20 mA becomes 0.02 amps. R = 7 (volts) / 0.02 (amps) = 350 ohms Our resistor needs to be a value of 350 ohms. If the result isn't a standard value resistors come in, round up to the nearest one. Use the other two equations to find the current a component is drawing or how much voltage something is dropping. As long as you have two of the values in either equation, you can determine the third. Step 3: Parallel Vs. Series When connecting electrical components to one another, there are two basic ways: in parallel or in series. Let's take a look at what happens when you wire up LEDs and batteries in both configurations. Say you have 4 LEDs that you connect end to end, the positive of one to the negative of the other, continuing down the line.
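Before going further with the series wiring, the resistor calculation from Step 2 can be sketched as a short Python helper (the function name is mine, not part of the Instructable):

```python
def led_resistor_ohms(supply_v, led_drop_v, led_current_a):
    """Current-limiting resistor value: R = (V_supply - V_led) / I."""
    return (supply_v - led_drop_v) / led_current_a

# 9V battery, 2V forward drop, 20 mA (0.02 A) rated current:
print(led_resistor_ohms(9, 2, 0.02))  # 350.0
```

The same three-way relationship lets you solve for current or voltage instead, whenever you know the other two values.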
You will end up with one positive and one negative lead, which then get connected to a power source. Being arranged in series, end-to-end, routes the power through every LED. Traveling through multiple LEDs causes the voltage to drop with each one. If it takes 2.2V to power one LED, a 3V battery will do the trick with one, but with two in series they will not light up, since 2.2V x 2 = 4.4V, which the battery cannot provide. With current it's different: when LEDs are put in series, the current draw is the same as for one LED and is not multiplied by how many LEDs there are. When connected in parallel, all the LED negative terminals run along the negative bus of the battery, and all the positive terminals run along the positive. Running in parallel with each other and sharing the same power buses means that they all receive the same amount of voltage. You can now use a 3V battery to run the four LEDs. In parallel, it is the current that gets multiplied this time. The 4 LEDs will take 3V, but drain the battery faster. Putting batteries in series is very handy when you need more voltage than what one battery can provide. This is how large-voltage batteries are made; if you were to open one up, you would find smaller batteries connected in series inside. When connected end to end, the voltage adds up, but the current capacity of the battery stays the same. Say you need your 3V project to last longer; what can you do? If you have multiple 3V batteries, running them in parallel can fix the situation. Connect all the positive terminals of each battery together and do the same with the negative terminals. This adds up the capacity of each battery while keeping the voltage the same. Step 4: Powering a Project Below are the most common ways to power a project. A project can be powered by your computer via a USB cable. Apple computers state a USB 1.1 or 2 port can supply up to 500 mA (milliamps) at 5 V (volts). Ones with USB 3 can supply up to 900 mA at 5 V.
On Windows 7 machines, the port can supply up to 500 mA at 5V. You can check how much power your peripheral is taking up by going to Device Manager, expanding the Universal Serial Bus controllers option and clicking on the power tab in the pop-up window. Here, you will see what devices are connected and how much current they draw. If your project exceeds the maximum amount of current allowed through the USB ports, a warning will pop up on your screen and the port will temporarily shut down. Batteries supply DC voltage with a finite amount of capacity. Most commonly you will be using Lithium Ion, Lithium Polymer or Alkaline batteries. If one battery isn't able to supply the operating voltage or current needed, batteries can be put in series or in parallel, multiplying either the voltage or the capacity. Read more on this in the Parallel vs. Series step. With batteries, you need to look at the voltage and the capacity. The voltage is usually printed on the package or the battery itself; the capacity is sometimes harder to find. You can look up the datasheet for the particular battery you are using; to find your circuit's current draw, you can measure it with a multimeter on the ampere setting. The capacity of a battery is labeled in milliamp-hours (mAh) or amp-hours (Ah). There are 1000 milliamps in 1 amp. You can roughly figure out how long your circuit will run once you know how many milliamps it draws. For example, if you have 4 LEDs that take 20 mA each running in parallel, they will take 80 mA all together. The microcontroller consumes current as well; say it consumes 12 mA. Total these together to get 92 mA. Say the battery has a capacity of 2000 mAh; the equation would look like this. 2000 / 92 = 21.74 hours Note that this is a rough estimate: when powering a project, voltage and current can be lost in various places. This is a downside of voltage regulators; when they regulate the input power, some is lost and dissipated as heat.
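The battery-life estimate above can be written as a quick sketch (function and variable names are mine, for illustration):

```python
def runtime_hours(capacity_mah, draw_ma):
    """Rough runtime estimate: battery capacity divided by total current draw."""
    return capacity_mah / draw_ma

# Four 20 mA LEDs in parallel plus a 12 mA microcontroller:
draw_ma = 4 * 20 + 12
print(round(runtime_hours(2000, draw_ma), 2))  # 21.74
```

Remember this is an upper bound; real circuits lose some power along the way.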
Current draw for components is sometimes listed at its highest, so if a Bluetooth module sleeps, it will draw less current than when it is awake and ready to transmit data. Its behavior may depend on sporadic human interaction or the timing in your program. So, you can see that there are many factors in power usage and estimating operating time, which is why one of the best ways may be to let the project run in its environment and time how long it takes for the battery to completely drain. To find out what voltage and current values are being used at a specific part of a circuit, use a multimeter. Wall Converter or Wall-wart A wall-wart plugs into a wall outlet and converts high AC voltage to low DC voltage, regulating the current. Two important pieces of information are printed on the wall-wart: the output voltage and current can be found on the part that plugs into the wall. Most likely you will want 5 volts at 2 amps. INPUT : 100-240Vac 50/60HZ OUTPUT : 5Vdc 2000mA The input is the range of AC voltage it can take, which is then converted to the output voltage and capacity. Most converters end in a barrel plug which can be plugged directly into a microcontroller. Before you do, use a multimeter to read the voltage that comes out of the converter. The voltage can be much higher than what is marked and can damage your circuit. If it is, a voltage regulator can be used. Learn more about regulators in the next step. Variable Bench Power Supply These are found in electronics labs and are an essential tool if you are building and testing circuits often. A bench power supply can vary its output current and voltage, allowing you to adjust to the specific values needed for a project. The range of the output depends on the kind you buy; most likely it will go from 0Vdc - 50Vdc and have a maximum current setting, making sure your circuit does not draw more than is needed.
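Tying Steps 3 and 4 together, series and parallel battery arrangements trade voltage against capacity. A minimal sketch of that bookkeeping (the helper names are mine):

```python
def series_pack(cell_v, cell_mah, n):
    """In series, cell voltages add while capacity stays the same."""
    return (cell_v * n, cell_mah)

def parallel_pack(cell_v, cell_mah, n):
    """In parallel, capacities add while voltage stays the same."""
    return (cell_v, cell_mah * n)

print(series_pack(1.5, 2000, 6))    # (9.0, 2000): six AA cells make a 9 V pack
print(parallel_pack(3.0, 2000, 2))  # (3.0, 4000): same 3 V, twice the runtime
```

This is exactly why large-voltage batteries contain smaller cells in series, and why paralleling batteries makes a project last longer.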
Step 5: Regulating Voltage Voltage regulators are solid helpers; they are easy to use and can be essential when powering parts of a circuit. They take an input voltage, step it down and maintain a constant voltage level. Common ones used for DC projects maintain 3.3, 5, 9 and 12 volts. You can usually find the voltage level printed on the package, included in its part number. For example, the regulator pictured here is a 7805; the 5 indicates that it maintains 5 volts. LM7809 : 9 volts L7812 : 12 volts LD1086V33 : 3.3 volts Regulators have 3 terminals: one that takes the input voltage, one that gets grounded and one that outputs the new regulated voltage. There are negative and positive voltage regulators; make sure you know which one you are using. All the regulators listed above are positive voltage regulators. Power fluctuates and doesn't always flow consistently. This can be especially true when using a wall-wart, which may be labeled with an output of 5V but, if measured with a multimeter, could read up to 8V. This additional voltage can damage your circuit. Before powering up your microcontroller with a wall-wart, run it through a regulator to make sure you don't go over the recommended operating voltage. Another thing they are superb for is stepping down 5V to 3.3V if you are using a 3.3V component with a 5V microcontroller. The downside to voltage regulators is that when they regulate the input power, some of it dissipates as heat and is lost. Step 6: Powering the Edison : Mini Breakout For a definitive explanation of how to power the Mini breakout Edison board, check out the Edison Breakout Board Hardware Guide here. There are many ways to power the Edison with the Mini breakout. You can hook up a rechargeable Lithium-Ion battery, with a thermistor or without. If without, check out section 2.1 of the hardware guide. There is a main power input that can take 7 - 15Vdc and a footprint to solder a 2.5mm power barrel jack.
These two sources feed the charging circuit that will charge the Li-ion battery at 4.2V. The Edison system runs between 3.15V - 4.5V; when running analog sensors, the high reference voltage will be 4.5V, so be aware of this when calibrating. Step 7: Powering the Edison : Arduino Breakout For a definitive explanation of how to power the Arduino breakout Edison board, check out the Edison Arduino Breakout Board Hardware Guide here. If you are familiar with the Arduino UNO, you will already recognize some of the power options, but there are still some differences and some additions to be aware of. Here are some features and facts that I have taken from the hardware guide and consolidated. The board can be powered through: - Vin, where external power at 7 - 17Vdc can be hooked up. Use a source that supplies no more than 1 Amp. - Barrel DC power jack that can take 7 - 17Vdc. Use an adapter that supplies no more than 1 Amp. - For low power applications (those shields running off 3.3 V), a lithium ion battery (3.0 to 4.3 Vmax) can be attached to J2 - A USB cable via micro USB connector J16 Other power pins on the board: - 3.3V output - 5V output - Ground (there are 3 terminals) - IORef, reference voltage. To change the reference voltage, select 3.3 or 5V with the jumper. The default position is 5V. - ARef, ADC reference voltage. Select between IORef or ARef with jumper 8 on board. The Arduino breakout can be powered through the Vin pin, via USB or through the barrel jack.
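As a closing note on the heat loss mentioned in Step 5: a linear regulator dissipates roughly the voltage it drops times the current flowing through it. A hedged sketch of that estimate (ignoring the regulator's own quiescent current):

```python
def regulator_heat_w(v_in, v_out, load_a):
    """Power a linear regulator sheds as heat: P = (V_in - V_out) * I."""
    return (v_in - v_out) * load_a

# A 7805 dropping a 9 V input to 5 V at 100 mA wastes about 0.4 W as heat:
print(regulator_heat_w(9, 5, 0.1))
```

The bigger the voltage drop or the load current, the hotter the regulator runs, which is why feeding a 5 V regulator from 9 V is kinder to it than feeding it from 17 V.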
Nuba Land Rights The Nuba Mountains area is situated in the geographical centre of the Sudan and covers an area of 30,000 square miles. The area is inhabited by a population estimated at over two million, 61% of whom are indigenous Nuba. The other groups who share the land with the Nuba are a mixture of Arab tribes (namely Hawazma and Meissyeria), Fellata (from west Africa) and Jellaba (merchants from northern Sudan). The land is fertile and rich in minerals and other resources, including oil, which was discovered in the western part. For centuries the Nuba have shared this land with the nomadic Arab Baggara tribes, who migrated to the area with their cattle. Although in the past the Nuba were subjected to raids and slavery by the Arabs, the two communities managed to resolve their disputes and were able to live together side by side in relative peace, mutual trust and understanding. However, over the last two decades the relations between the two communities deteriorated badly due to government policy which favoured the Baggara tribes and set them against the Nuba. As a result, a jihad war was declared in 1992; tens of thousands of Nuba were killed and many thousands were forcibly driven out of their land, which was either taken by the Arabs and Jellaba or confiscated by the government. Some of the land was converted into mechanized farming for quick cash. A similar policy of land grabbing in the Nuba Mountains was pursued in the mid-seventies, encouraged by international donor support, such as from the World Bank, which supported the creation of the government 'mechanised farming co-operation' established in Habila and known as the "Habila Agricultural Scheme". This project expanded rapidly, as millions of hectares of land in Habila and surrounding villages were confiscated; the landowners became landless, working as labourers on their own land.
The present government is following a similar policy of land sequestration. It has already taken most of the fertile agricultural land in the Lagawa, Khor Shalogo and Delami areas, using the principle introduced by the British colonial power in 1898 for rural land tenure: that unregistered land is assumed to be owned by the government unless the contrary is proven. But the land confiscated was owned by indigenous people who had acquired it through customary law. Since independence, the state has remained ready to confiscate land, while wealthy and powerful individuals, usually with connections to government, have also made use of the colonial and post-colonial land laws to acquire large areas of land. Successive legislation on land, up to the 1990 amendment to the Civil Transactions Act, has not changed this fundamental aspect of Sudanese land law, but on the contrary has strengthened the privileges of the state and those with access to it, at the expense of rural people. In fact, the land issue has been one of the main factors contributing to the brutal civil war in the Sudan, which claimed about one and a half million lives and displaced four million people. The question now is whether the Nuba people who fled the area during the civil war will, on returning home after the peace agreement, find that their land has not been taken by other people or confiscated by the government. Kadugli, the principal town in the Nuba Mountains, most of whose citizens fled to the north or were forcibly relocated, is now inhabited by people coming mainly from northern Kordofan (mainly from Umruwaba). They are buying the land of people who have been displaced to northern Sudan, including eastern Sudan. Those people will return to find their homes and land occupied by others. There is fear that disputes over land are going to be a major problem for the government of Southern Kordofan to resolve, and justice is needed in this case.
One of the big problems is that documentation of land ownership in the Nuba Mountains has not been practised through the years, and in some cases does not exist. Consequently, people returning to their land will find it difficult and sometimes impossible to prove their ownership. This will create many grievances and great injustice if the Land Commission in Southern Kordofan does not get advice from the Meks, chiefs and Nazirs who know the people and the land. For this reason we are demanding that tribal land should be considered as owned by local people and that tribal leaders, together with elders in the community, should be responsible for identifying ownership by individuals. When Sudan was under British rule there were two types of ownership: 1. Communal or tribal ownership, namely land owned by the community at large. 2. Individual ownership and other titles to land. Individual ownership was achieved either by the normal process of evolution whereby communal ownership changed to individual, or by grants made by the sovereign (the Funj and Fur Sultanates and the Mahdist state). It was clear that the system of land tenure in the Nuba Mountains was based on tribal ownership, with land owned by the community at large. The Meks and chiefs in this case were responsible for distributing the land and resolving any land disputes in the region. One of the most significant elements in the customary land tenure system is the right of native authorities to allocate land and adjudicate disputes. This means
In this post, we will explore an inductive data structure as a type for lists of elements. We define two recursive functions that can be applied to lists of this type; namely, an operation to append two lists and an operation to reverse a list. With these definitions, we set out to mathematically prove a number of properties that should hold for these functions by means of induction. A list is a data structure holding a collection of elements in a certain order. A list can be defined in multiple ways. One way is to define a list as an element (the "head" of the list) followed by another list (the "tail" of the list). Inductively, this could be formalized as follows: Inductive list a := cons a list | nil. This means that a list l with elements of type a is either cons x tail, with x of type a and with tail of type list a, or l is the empty list nil. Let's look at some example lists with whole-number elements. // Empty list; l = nil // List with one element; [1] l = cons 1 nil // Different list with one element; [2] l = cons 2 nil // List with two elements; [1,2] l = cons 1 (cons 2 nil) // List with three elements; [1,2,3] l = cons 1 (cons 2 (cons 3 nil)) Note that because we have lists of integers, following our definition the list l is of type list integer. Multiple list operations can be defined, such as append and reverse. We defined our list inductively, so it would make sense to define these operations inductively (also known as recursively) as well. Because of our neat data structure and operations, we should then be able to prove that certain properties of the operations hold.
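To make the definitions concrete, here is a hedged Python sketch of this inductive list, with nil modelled as None and cons as a (head, tail) pair, together with the recursive append and reverse the post sets out to study (the original's formal definitions may differ in detail):

```python
nil = None

def cons(head, tail):
    """Build a list node: an element followed by another list."""
    return (head, tail)

def append(l1, l2):
    """append nil l2 = l2;  append (cons x t) l2 = cons x (append t l2)."""
    if l1 is nil:
        return l2
    head, tail = l1
    return cons(head, append(tail, l2))

def reverse(l):
    """reverse nil = nil;  reverse (cons x t) = append (reverse t) (cons x nil)."""
    if l is nil:
        return nil
    head, tail = l
    return append(reverse(tail), cons(head, nil))

one_two_three = cons(1, cons(2, cons(3, nil)))  # the list [1,2,3]
print(reverse(one_two_three))  # (3, (2, (1, None)))
```

Properties such as reverse (reverse l) = l can then be sanity-checked on examples like these before being proved by induction on the structure of l.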
The Earth (presentation transcript) Clicker Question: The dinosaurs were most likely wiped out by: A: disease B: hunting to extinction by cavemen C: a giant meteor impact D: the close passage of another star General Features Mass: M_Earth = 6 x 10^27 g Radius: R_Earth = 6378 km Density: 5.5 g/cm^3 Age: 4.6 billion years Earth's Internal Structure Crust: thin. Much Si and Al (lots of granite). Two-thirds covered by oceans. How do we know? Earthquakes (see later). Mantle: mostly solid, mostly basalt (Fe, Mg, Si). Cracks in the mantle allow molten material to rise => volcanoes. Core: temperature 6000 K. Metallic - mostly nickel and iron. Outer core molten, inner core solid. Atmosphere: very thin. Earth's Atmosphere 78% nitrogen, 21% oxygen. In the upper atmosphere, gas is ionized by solar radiation; ozone (O3) absorbs solar UV efficiently, thus heating the stratosphere (diagram labels: commercial jet altitudes; room temperature). The original gases disappeared; today's atmosphere is mostly due to volcanoes and plants! Ionosphere Particles in the upper reaches of the atmosphere are ionized by the sun. Radio signals below ~20 MHz can "bounce" off the ionosphere, allowing communication "over the horizon". Convection Earth's surface is heated by the Sun. What would happen if it couldn't get rid of the energy as fast as it comes in? Convection causes both small-scale turbulence and large-scale circulation patterns. It also occurs within Earth, on other planets, and in stars. Convection also occurs when you boil water, or soup. Think of Earth's surface as a boiling pot! The Greenhouse Effect Main greenhouse gases are H2O and CO2. If there were no greenhouse effect, the surface would be 40°C cooler! Clicker Question: A leading cause of Global Warming is: A: Increased soot (smog) in the atmosphere. B: Increased carbon dioxide in the atmosphere.
C: The Earth is getting closer to the sun. D: The luminosity of the sun is steadily increasing. Clicker Question: The Greenhouse effect would not occur if: A: The Earth had no atmosphere. B: The amount of carbon dioxide doubled. C: We got rid of all the forests. D: The Earth didn't have an ocean. Temperature Measurements "Warming of the climate system is UNEQUIVOCAL" (IPCC 2007). The top 11 warmest years on record have all occurred in the last 12 years (IPCC 2007). 2006 was the warmest year on record in the continental US (NOAA 1/07). Impacts in Alaska 1. Melting The rapid retreat of Alaska's glaciers represents about 50% of the estimated mass loss by glaciers through 2004 worldwide (ACIA 2004). Loss of over 588 billion cubic yards between '61 and '98 (Climate Change 11/05). Alaska's glaciers are responsible for at least 9% of the global sea level rise in the past century (ACIA 2004). Glacial Retreat (photos: Glacier Bay, Riggs Glacier, 1941 and 2004, USGS/Bruce Molnia; McCall Glacier, 1958 and 2003, Austin Post/Matt Nolan.) Impacts in Alaska 3. Animals Animals at risk: rising temperatures, shrinking habitat, food harder to get, expanding diseases, competition. Species affected: polar bears, walruses, ice seals, caribou, black guillemots, kittiwakes, salmon, Arctic grayling. Inundation Sea level has increased 3.1 mm/year between 1993 and 2003 (IPCC 2007). This is 10-20 times faster than during the last 3,000 years (ACIA 2004). 0.4-0.6 meters of sea level rise by 2100 if 3 times pre-industrial CO2 or 1% increase/year (Overpeck et al. 2006).
Inundation from a Four Meter Sea Level Rise (or, 1m rise + 3m storm surge) (Weiss and Overpeck, 2006). Measuring Your Carbon Footprint Major carbon contributors: electric consumption, gas/heating oil consumption, car and miles driven, miles flown, recreational vehicle use. The average footprint is 30,000 pounds. Making a Difference as an Individual Conservation measures: walk, bike, ride public transit, or carpool; make sure your tires are fully inflated and your car tuned up; lower your water heater and home thermostats; don't preheat your oven; only run your dishwasher with full loads; reduce your shower length and temperature; buy locally produced food; unplug appliances not in use; turn off lights when leaving a room; use recycled paper; reuse or recycle as much as you can; cut down on consumerism. Conservation: Three Examples 1. Unplug appliances ("vampires!"): 43 billion kWh lost/year in the US; est. 1,000 lbs/year/person. 2. Pump up tires: 4 million gallons of gas wasted daily in the US; extends the life of tires by 25%; est. 1,000 lbs/year/person. 3. Lower the thermostat 2 degrees: est. 2,000 lbs/year/person. What We Can Do Energy Efficiency: Two Examples 1. Compact fluorescents: four to six times more efficient; est. for each bulb converted, save about 100 lbs/year. 2. Bus/walk/bike: save money on fuel and maintenance; est. 5,000 lbs/year. Earthquakes These are vibrations in the solid Earth, or seismic waves. Two kinds go through Earth, P-waves ("primary") and S-waves ("secondary"). How do we measure where earthquakes are centered? With a network of seismic stations. Like all waves, seismic waves bend when they encounter changes in density. If the density change is gradual, the wave path is curved. S-waves are unable to travel in liquid. Thus, measurement of seismic waves gives info on the density of Earth's interior and which layers are solid/molten.
Curved paths of P and S waves: density must slowly increase with depth. Zone with no S waves: there must be a liquid core that stops them. No P waves there either: they must bend sharply at the core boundary. But faint P waves are seen in the shadow zone, refracting off a dense inner core. Earth's Interior Structure Average density: 5.5 g/cm^3; crust: 3 g/cm^3; mantle: 5 g/cm^3; core: 11 g/cm^3. Density increases with depth => "differentiation". Earth must have been molten once, allowing denser material to sink as it started to cool and solidify. Earthquakes and volcanoes are related, and also don't occur at random places: they outline plates. Plates move at a few cm/year ("continental drift" or "plate tectonics"). When plates meet... 1) Head-on collision (Himalayas) 2) "Subduction zone" (one slides under the other) (Andes) 3) "Rift zone" (two plates moving apart) (Mid-Atlantic Ridge, Rio Grande) 4) They may just slide past each other (San Andreas Fault) => mountain ranges, trenches, earthquakes, volcanoes. Clicker Question: Sunlight absorbed by the Earth's surface is reemitted in the form of? A: radio waves B: infrared radiation C: visible radiation D: ultraviolet radiation E: X-ray radiation Clicker Question: What steps are you willing to take to reduce your carbon dioxide footprint? A: Walk/bike/bus to work B: Unplug appliances when not in use C: Replace light bulbs with compact fluorescents D: Wash clothes in cold or warm water E: Buy a Prius What causes the drift? Convection! The mantle is slightly fluid and can support convection. Plates ride on top of convective cells. Lava flows through cell boundaries. Earth loses internal heat this way. Cycles take ~10^8 years. Plates form the lithosphere (crust and solid upper mantle). The partially melted, circulating part of the mantle is the asthenosphere. Pangaea Theory: 200 million years ago, all the continents were together!
Physical allergies are allergic reactions to cold, sunlight, heat, or minor injury. The immune system is designed to protect the body from harmful invaders such as germs. Occasionally, it goes awry and attacks harmless or mildly noxious agents, doing more harm than good. This event is termed allergy if the target is from the outside, like pollen or bee venom, and autoimmunity if it is caused by one of the body's own components. The immune system usually responds only to certain kinds of chemicals, namely proteins. However, non-proteins can trigger the same sort of response, probably by altering a protein to make it look like a target. Physical allergy refers to reactions in which a protein is not the initial inciting agent. Sometimes it takes a combination of elements to produce an allergic reaction. A classic example is drugs that are capable of sensitizing the skin to sunlight. The result is phototoxicity, which appears as an increased sensitivity to sunlight or as localized skin rashes on sun-exposed areas. - Minor injury, such as scratching, causes itchy welts to develop in about 5% of people. The presence of itchy welts (urticaria) is a condition called dermographism. - Cold can change certain proteins in the blood so that they induce an immune reaction. This may indicate that there are abnormal proteins in the blood from a disease of the bone marrow. The reaction may also involve the lungs and circulation, producing wheezing and fainting. - Heat allergies can be caused by exercise or even strong emotions in sensitive people. - Sunlight, even without drugs, causes immediate urticaria in some people. This may be a symptom of porphyria, a genetic metabolic defect. - Elements like nickel and chromium, although not proteins, commonly cause skin rashes, and iodine allergy causes skin rashes and sores in the mouth in allergic individuals. - Pressure or vibration can also cause urticaria.
- Water contact can cause aquagenic urticaria, presumably due to chlorine or some other trace chemical in the water, although distilled water has been known to cause this reaction. When the inflammatory reaction involves deeper layers of the skin, urticaria becomes angioedema. The skin, especially the lips and eyelids, swells. The tongue, throat, and parts of the digestive tract may also be involved. Angioedema may be due to physical agents. Often the cause remains unknown. Visual examination of the symptoms usually diagnoses the reaction. Further skin tests and review of the patient's photosensitivity may reveal a cause. Removing the offending agent is the first step of treatment. If sun is involved, shade and sunscreens are necessary. The reaction can usually be controlled with epinephrine, antihistamines, or cortisone-like drugs. Itching can be controlled with cold packs or commercial topical agents that contain menthol, camphor, eucalyptus oil, aloe, antihistamines, or cortisone preparations. If the causative agent has been diagnosed, avoidance of or protection against the allergen cures the allergy. Usually, allergies can be managed through treatment.
Cities, women and climate change Although there are still those who dispute climate change as a human-made phenomenon, it is now commonly recognized that it will have significant impact on environments and livelihoods throughout the globe. The greenhouse effect, severe weather events, rise of sea level and loss of biodiversity will affect people across borders, regardless of their economic status, race or gender. However, climate change is likely to accentuate the gaps between the world's rich and poor. It is widely accepted that women in developing countries constitute one of the poorest and most disadvantaged groups in society (Denton 2002:11). In this literature review I will briefly describe the reasons for and evidence of the higher vulnerability of women exposed to climate change, and the necessity of analysing the urban context separately. I will look for examples of women's agency in facing climate change. Is climate change gender neutral? Although climate change has significant impact on all humans, evidence shows that it has more severe implications for women's livelihoods. Generally the poorest populations and marginal groups are impacted the most; nevertheless, there can be a differential effect on men and women as a consequence of their social roles, inequalities in the access to and control of resources, and their low participation in decision-making (Carvajal-Escobar, Quitero-Angel, Garcia-Vargas 2008:1). This is because women rely more on natural resources that may be threatened by climate change. Although traditionally it is the man who is responsible for providing financial resources, women are responsible for securing basic needs such as water, food and shelter. Changes of temperature and extreme weather events impede the ability to buy or grow food or access water. Beyond these difficulties in securing basic needs, climate change also has an impact on women's health.
According to research conducted after the Asian Tsunami, women and children are 14 times more likely than men to die during a natural disaster, which significantly decreases their security if climate change fuels extreme weather events. This difference in vulnerability derives mostly from differences in socialisation, where girls are not equipped with the same skills as their brothers, such as swimming and tree climbing. For example, it has been documented that women in Bangladesh did not leave their houses during floods due to cultural constraints on female mobility, and those who did were unable to swim in the flood waters (Bridge 2008:6). Women remain at risk even in the aftermath of a disaster, when material losses may force girls to drop out of school in order to support their households. Because of inadequate conditions in shelters and temporary housing without privacy, they are more likely to become victims of domestic or sexual violence, perpetrated by men threatened by the loss of control over the livelihood of their families. Despite the distinct climate-related risks that women face, they still have not been included in decision-making processes about coping strategies and environmental policies. Land degradation, desertification and extreme weather events force communities to leave their land and look for better opportunities elsewhere. This mobility usually targets urban settlements, perceived as a chance for a better life. Although cities offer job and educational opportunities as well as basic services, migrants usually swell the ranks of poor populations that are already more vulnerable to climate change because of severe living conditions. Climate change and the city Although climate change has and will have a tremendous impact regardless of geographical location, urban areas are particularly vulnerable due to the density and concentration of people, buildings and wealth.
These factors are connected to the progressive modification of the environment, which obstructs the earth's capacity to deal with severe weather events. Moreover, urban settlements are often located in hazardous areas, such as the edges of tectonic plates, near large water reservoirs or on the coast, locations which originally facilitated their development. Cities also contribute to their own increasing vulnerability through high greenhouse gas emissions caused by the concentration of consumption and production. While they are hit by the boomerang effect of unsustainable development, they may also be agents of change due to their high social and economic capital and innovative potential. That is why urban areas need to be analysed independently: their vulnerability is higher because rapid urbanization, coupled with global environmental change, is turning an increasing number of human settlements into potential hotspots for disaster risk (UN-HABITAT 2007:163). To properly analyse and plan climate change adaptation and mitigation, the gender dimension should be taken into account as a factor determining one's position in the era of climate change. City, women and climate change The linkage between environment and gender has been an issue of interest for years. However, the nexus between this linkage and climate change, as well as the role of cities in this process, has been recognized only recently. As Gotelind Alber, the author of a report which links all three elements, writes: Urban climate policy started in developed countries with a strong emphasis on mitigation actions, while climate change engagement of cities in developing countries is still rare. The reasons for this include a lack of awareness of the problem and in particular of the role of cities as part of the solution, lack of longer-term considerations and institutional and financial constraints. As for the cities that are actually working on climate issues, the gender dimension is virtually absent in their plans, policies and programmes (2011:10).
The researcher sees the main sources of this situation in the underrepresentation of women in decision-making and the lack of sex-disaggregated data, knowledge and awareness of gender issues. Nevertheless, there are several international standards and norms which may be used as tools for including women in urban climate change policies. CEDAW, the United Nations Convention on the Elimination of All Forms of Discrimination Against Women, written in 1979, obligates all parties to act against discrimination against women. In 2009, members of the Committee on the Elimination of Discrimination Against Women, concerned about the missing gender dimension in climate change discourse, stated: its concern about the absence of a gender perspective in the United Nations Framework Convention on Climate Change (UNFCCC) and other global and national policies and initiatives on climate change. (…) Gender equality is essential to the successful initiation, implementation, monitoring and evaluation of climate change policies (CEDAW 2009). Several councils and commissions address the necessity of mainstreaming gender in all environmental policies and mechanisms, for example the Beijing Platform for Action from 1995, or the Commission on the Status of Women, which devoted its forty-sixth session in 2002 to the issue of natural disasters and gender-sensitive systems of reduction, response and recovery. The necessity of including the gender dimension in urban settlements has received attention only recently. In United Nations documents, gender equality was mentioned for the first time in the Istanbul Declaration and the Habitat Agenda as one of seven commitments. The later Declaration on Cities and Other Human Settlements encourages member states and underlines the need for formulating and strengthening policies and practices to promote the full and equal participation of women in human settlements planning and decision-making (Alber 2011:13).
Moreover, the UN-Habitat Governing Council has produced numerous resolutions linking women's situation and climate change. Among many, it is worth mentioning the resolution on women's rights in human settlements development and the one on gender equality in human settlements. Important guidelines are included in UN-Habitat's Climate Change Strategy 2010-13. It not only underlines women's vulnerability to climate change, but also appreciates their role in upgrading living conditions through grassroots initiatives. It highlights women as important actors for adaptation and mitigation strategies, natural resource management, conflict resolution and peace building at all levels. It calls for gender indicators to assess the impacts of climate change, in order to shape the response accordingly, and for supporting the response capability of vulnerable groups by strengthening their social, natural, physical, human, and financial assets (Alber 2011:14). All the documents mentioned above provide a sufficient framework for including the gender dimension in analyses and strategies for combating climate change, as well as for including women in adaptation and mitigation actions. Women as agents of change Much evidence shows that women are usually victims of the consequences of climate change. However, their vulnerable situation does not deprive them of agency. Women can and do make a difference. They are knowledgeable and experienced in adaptation and mitigation strategies, natural resource management, conflict resolution and peace building (UN-Habitat 2009:28). They use these skills both on the personal and the community level to slow down climate change and improve their lives. According to studies analysed by Alber, women are much more aware of climate change and its implications. Moreover, they are more likely to take personal actions to mitigate climate change.
This is because they are responsible for providing necessities for the household and often make most of the consumption decisions, so they are able to notice changing conditions. Not only do they consume less, but they are also willing to reduce or change habits, using more local products or renewable sources of energy. Beyond decisions on the household level, women in many parts of the world have organized themselves in their communities to mitigate and adapt. Although their actions cover many fields, from preventing land degradation and introducing green energy to reforestation, I will mention initiatives concerning natural disasters, which will show the complexity and multitude of activities around a single phenomenon. One such initiative is the mapping of hazardous areas conducted by many local organizations united under the Huairou Commission. By collecting data about areas at risk of floods and other natural disasters or food insecurity, they gain a strong argument for discussions with local and national governments. This process is usually followed by trainings, both for officials and for members of the community, about ways to adapt to the current situation and mitigate the consequences of climate change. Raising the awareness of communities is an important part of women's activity. Examples can be found in the Global South as well as the Global North, including among marginalized communities. Coastal Women for Change is an initiative started by several dozen women from New Orleans after hurricane Katrina, who lost their houses and jobs. It helped them realise that such severe weather events may happen again because they are reinforced by climate change. Right now they are providing emergency preparedness trainings along with educational courses and childcare. A few members have won seats on the mayor's planning commission, which enables them to ensure that the voice of poor and minority communities will be heard (Yes! 2011).
High vulnerability to climate change is usually increased by inadequate and hazardous housing conditions. This is interconnected with problems of tenure rights, which is the case for many poor communities living in slum areas. Firstly, these kinds of settlements are usually located in hazardous locations, more likely to be affected by natural disasters. Secondly, the lack of secure tenure discourages residents from upgrading their houses, which leaves housing structures unable to resist floods or hurricanes. Moreover, unregulated status restricts access to basic services such as water supply, sanitation and garbage collection. The solution to this situation is broadening awareness about rights and strengthening women's networks in order to secure tenure rights. One example is the Conamovidi group from Lima. Through a community risk and vulnerability mapping process, women found that poor families are living in particularly unsafe conditions and that single mothers often have an acute need for secure tenure, but their lack of land titles means that their needs are often ignored by government (Huairou 2011). The information and knowledge they obtained with the support of GROOTS Peru enabled them to start a discussion with local government about including them in urban planning and decision-making processes. Although climate change has an impact on all livelihoods, women are particularly vulnerable. Nonetheless, the gender dimension is not commonly recognized in the debate about climate change, especially regarding urban areas, which are significantly influenced by climate change. Women are usually excluded from decision-making processes about urban climate change policies. But they are not accepting the role of victims, and are becoming agents of adaptation and mitigation at the local level. Carvajal-Escobar, Y., Quitero-Angel, M., Garcia-Vargas, M. 2008. ‘Women’s role in adapting to climate change and variability’. Advances in Geosciences, 14:277-280. Alber, G. 2011.
Gender, Cities and Climate Change. http://www.unhabitat.org/grhs/2011. BRIDGE. 2008. Gender and climate change: mapping the linkages. A scoping study on knowledge and gaps. Institute of Development Studies: Sussex. CEDAW. 2009. Statement of the CEDAW Committee on Gender and Climate Change. http://www2.ohchr.org/english/bodies/cedaw/docs/Gender_and_climate_change.pdf. Denton, F. 2002. ‘Climate change vulnerability, impacts, and adaptation: why does gender matter?’ Gender and Development, 10:20. Huairou Commission. 2011. Changing climate, changing leadership: grassroots women’s groups model climate resilient development. Last modified March 3. http://www.huairou.org/changing-climate-changing-leadership-grassroots-womens-groups-model-climate-resilient-development. UN-Habitat. 2009. ‘Climate change is not gender neutral’. Urban World: Climate Change: Are cities really to blame? UN-Habitat. 2007. Enhancing urban safety and security. Global Report on Human Settlements 2007. London: Earthscan. Yes!. 2009. Climate Hero Sharon Hanshaw. Last modified November 10. http://www.yesmagazine.org/issues/climate-action/climate-hero-sharon-hanshaw.
Callisto, photographed by Voyager 1, showing a large impact basin, 2,600 kilometres across, on the moon's surface. Known as 'Valhalla', this is the largest impact crater yet discovered in the Solar System. The concentric rings indicate that the impacting body might have come close to breaking through Callisto's crust. Callisto is the second largest of Jupiter's four main moons, which were discovered by Galileo, and is approximately the same size as the planet Mercury. The outermost of the four, Callisto takes 16.7 Earth days to orbit Jupiter. Two Voyager spacecraft were launched in 1977 to explore the planets in the outer solar system. Voyager 1 made its closest approach to Jupiter, 278,000 kilometres, in March 1979 before flying on to Saturn. © National Aeronautics & Space Administration / Science & Society
History of Philosophy Philosophy has been around since the dawn of western civilization. The golden age of Greek philosophy took place in Athens in the 5th century BC. The works of Socrates, Plato, and Aristotle informed thousands of years of thought, becoming central to thought in the Roman world and the Middle Ages, and then resurfacing in the Renaissance and later. From the late Roman empire onward, Christian thought was central to philosophy at least until the Enlightenment. In the 18th century, questions of how we come to know what we believe we know (epistemology) came to the fore, and new ethical schools began to form. By the late 1800s, questions of language, logic, and meaning took center stage, and the 20th century played host to one of the largest bursts of philosophical work ever seen. Today philosophical thought is applied to almost every component of life, from science to warfare, politics to artificial intelligence. Want to learn about Eastern Philosophy? You may also enjoy: A History of Eastern Philosophy
Enrolling a child in kindergarten is difficult enough without worrying and stressing about his or her readiness. Five-year-olds vary widely in their developmental skills. When evaluating your child’s readiness for kindergarten, there are four areas of developmental functioning that should be considered: - Receptive and Expressive Language Skills – this is the ability to understand and use language to communicate. - Social and Emotional Development – this is the ability to get along with others and accept authority. - Cognitive Skills – this is the ability to use the mental processes of perception, memory, judgment, and reasoning. - Fine and Gross Motor Development – this is the ability to control the hands and fingers for writing, coloring, and cutting, as well as to stabilize the body when running, jumping, and playing. Because children around the same age develop and master skills at varying rates, the most important thing to know is what your child can do. The following developmental skills are a few indicators of a child’s readiness for kindergarten. Think about the questions listed below as they apply to your child to help you evaluate and determine how well your child is developing the skills necessary to be successful in kindergarten. Receptive and Expressive Language – Can your child… - Say his/her name? - Speak in sentences of five words or longer? - Tell stories at length, both made up and true, from beginning to end? - Use the future tense in conversations? - Understand simple directions in sequence and follow conversation? - Listen attentively or focus attention on a task for at least 10 minutes? Social and Emotional Development – Can your child… - Pretend, sing age-appropriate songs, dance, and play well with others? - Tell the difference between reality and make-believe? - Participate in new experiences without fear? - Agree to rules for games and comply with expectations for behavior?
Cognitive Development – Can your child… - Understand the concept of time (tomorrow, yesterday, next week, last summer, etc.)? - Recognize his/her name on paper? - Identify at least six body parts? - Name at least four colors correctly? - Sort objects by size, shape, or color? - Count ten or more objects? Fine and Gross Motor Development – Can your child… - Look at and copy a simple shape onto paper (circle, square, triangle) with a pencil or marker? - Hop (on one foot then the other) and skip? - Print some letters of the alphabet, such as the ones in his/her name? - Swing and climb without struggling or needing assistance? - Use a fork and spoon appropriately? - Put on clothing alone and toilet independently? If you feel like your child is not showing progress in these areas, we can help. In August we will be offering a one week Kindergarten Readiness Camp. We also offer individually focused kindergarten skills development sessions. If you would like to find out more about our Kindergarten Readiness Camp or other services that we offer, please contact our office at (503) 352-0240.
Rocks are classified according to the major Earth processes that formed them. They are generally classified by mineral and chemical composition, by the texture of the particles that form them, and by their place in the rock cycle. There are three types of rocks: igneous, metamorphic, and sedimentary. Igneous rocks are melted rocks that cooled and solidified when molten magma cooled. They are divided into two main categories. Metamorphic rocks are compacted by pressure and heat from deep inside the earth. They are formed by subjecting any rock type to different temperature and pressure conditions than those in which the original rock was formed. Marble is a metamorphic rock with a parent rock of limestone or dolostone, formed by natural processes of heat and pressure. Some other metamorphic rocks are quartzite, slate, schist, and gneiss. Sedimentary rocks are formed at the surface of the earth by the deposition of clastic sediment, organic matter or chemical precipitates, which are then compacted and cemented over time. They are formed in water or on land, as layered accumulations of sediments: fragments of rocks, minerals, or animal or plant material. Some sedimentary rocks are limestone (including the limestone karst formations we find in caves), sandstone and gravel. These rocks are distributed all across mountains, valleys and plains, in river beds, in mines and on beaches (pebbles and sand).
There’s an old joke that goes like this: There are 10 kinds of people in the world; those who understand binary and those who don’t. Well, you don’t have to be one of the latter anymore, because today, I’ll show you how it works. First up, the smallest unit in computer memory is a bit. Eight bits make up one byte. A thousand bytes make up a kilobyte. And so on and so forth. Every bit contains just one of two possibilities, a 1 or a 0. On or off, if you will. (You’ll notice that on power switches, the “on” side has a little line—this is actually a 1) One byte, aka 8 bits, can be one of 256 possibilities (0 through 255). Every bit in an 8-bit byte contains a yes/no answer for a different number. Binary works in powers of 2. Every bit represents 2 to the power of a different number. These numbers, from right to left, are 0 to 7. Thus, the far right number represents 1 (aka 2 to the power of 0) and the far left represents 128 (aka 2 to the power of 7). Say we have this binary number: 01001101. To work out what number it is, you add all the bits’ values together. In this case, 0 + 64 (2 to the power of 6) + 0 + 0 + 8 (2 to the power of 3) + 4 (2 to the power of 2) + 0 + 1 (2 to the power of 0) = 77. Normally, when writing binary, you can leave out any leading zeros, since they do not alter the value. The computer doesn’t do that, but humans often do, when writing them down. Ergo, 10 would be 00000010, aka 2. Congratulations, now you are part of the people who understand binary (hopefully).
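The add-up-the-place-values procedure described above can be sketched in a few lines of Python. This is a minimal illustration; the function name is mine, not a standard library routine:

```python
# Minimal sketch of the place-value method described in the text.
def binary_to_decimal(bits: str) -> int:
    total = 0
    # Enumerate from the right: the bit at position p is worth 2**p.
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position
    return total

print(binary_to_decimal("01001101"))  # 77, matching the worked example
print(binary_to_decimal("00000010"))  # 2, with or without the leading zeros
```

Python's built-in `int("01001101", 2)` does the same conversion in one call; the loop above just makes the arithmetic explicit.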
1) Apertures are expressed in numbers, with the smaller number meaning a larger opening in the lens. A larger lens opening lets in more light, which lets you increase your shutter speed. 2) Apertures are typically expressed as f/(number), like f/2.8, f/4.0, f/5.6, etc. 3) Moving one whole stop in aperture means doubling or halving the light reaching the camera's sensor, depending on which way the stop has moved. f/2.8 is twice as fast as f/4.0, but only a quarter as fast as f/1.4 (making f/1.4 four times faster than f/2.8 and eight times faster than f/4.0). So what else is there to aperture? A huge visual impact, that's what! Let's look at a series of examples: |Shot at aperture f/5.6| |Shot at aperture f/8| |Shot at aperture f/11| Look at the series of images - notice how every time the aperture number gets larger (i.e., the opening gets smaller), the background becomes more and more in focus? This is the second "function" of aperture. The smaller the aperture number, the more light is allowed to the camera but the less the background is in focus. Conversely, the higher the aperture number, the less light is allowed to the camera but the more the background is in focus. The blur is actually a function of several things - the aperture of the lens during the photo, the size of the sensor (as exampled HERE), the focal length and the distance to the subject. But, for this tutorial, just remember that all else equal, setting your aperture to lower or higher f-stops can drastically alter how your image looks.
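The stop arithmetic above follows from the fact that the light admitted through the lens scales as 1/N² for f-number N. A quick sketch (the function name is my own, chosen for illustration):

```python
import math

# Light admitted scales as 1/N^2, where N is the f-number.
# One "stop" is a doubling or halving of that light.
def stops_between(n_fast: float, n_slow: float) -> float:
    light_ratio = (n_slow / n_fast) ** 2  # how many times more light the faster aperture passes
    return math.log2(light_ratio)

print(round(stops_between(2.8, 4.0), 1))  # ~1.0 stop: f/2.8 passes about twice the light of f/4.0
print(round(stops_between(1.4, 4.0), 1))  # ~3.0 stops: f/1.4 passes about 8x the light of f/4.0
```

Note that the standard f-number sequence (1.4, 2, 2.8, 4, 5.6, 8, 11…) multiplies by √2 at each step precisely so that each step is one stop.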
The northern hemisphere has repeatedly been traversed by ice ages. In fact, there were more cold years than warm years, as summarized in the following graph. We have clearly been in an interglacial (warm) period since about -10,000. The glacial periods (which have very different names depending on the geographic locations where their signs are observed, even though it is the same phenomenon at the scale of the entire world) and inter-glacial periods are governed by complex solar and astronomical cycles, and they cause violent changes in Earth's climate, as demonstrated by the Serbian astronomer Milutin Milankovitch between 1920 and 1941. These giant climate cycles (80,000-120,000 years) are quite similar in form, but have some differences which, at the human level, are considerable. They are governed by three main parameters: Comparison between the curve of solar radiation at latitude 65° (top) and the curve of temperatures calculated from the O-18/O-16 ratio in the ice at Vostok, Antarctica (bottom). When we observe these graphs, several observations can be made: The two last glaciations (Weichselian and Wolstonian) were even colder than the two preceding ones. The Wolstonian Glaciation is almost bonded to the previous glaciation (Kansan), since the interglacial period that separates them is very short: glaciers had time to melt only a little, and the volume of ice did not reach the usual minimum. In the Wolstonian glaciation, temperatures reached their lowest three times (about -190,000 years, -160,000 years and -140,000 years), against only once in the previous two (about -340,000 years and -240,000 years) and twice in the Weichselian glaciation (about -70,000 years and -20,000 years). b. 
Glaciations and migratory reflex It is important to consider all these parameters, because they let us understand how absurd the “Out of Africa” theory is. This theory never gives the cause of the so-called modern human exodus from Africa. It probably claims, as evolutionism wants, that humans from Africa were shaken by some evolutionary change (but which one?) that gave them a sudden urge to migrate north about 50,000 years ago, in the middle of an ice age, while the climate of France was roughly equivalent to that of present-day Siberia, tundra and taiga, and while the Middle East and north Africa were a pleasant oasis covered with green deciduous forest. To believe this is absolutely unreasonable. There are no species adapted to the African climate that would have committed to such a crazy project. However, there are many species from the northern hemisphere that have the migratory instinct in them, whether seasonal or in larger cycles. This is the case, for example, for many migratory birds from Europe such as greylag geese and some swallows, which no longer migrate to Spain or Africa in winter since the climate became warmer, the temperature further north now being sufficient for them. The migratory instinct is well known - and entirely probable - among Neanderthals, since it is assumed that they followed their usual prey, among others the reindeer and the mammoth, according to the movements of the latter. Several Neanderthal sites have been discovered in the Middle East: Shanidar in Iraq (60,000 to 80,000 BC), Amud (85,000 to 55,000 BC), Tabun (500,000 to 40,000 BC), Kébara (60,000 BC), and Skhul and Qafzeh, which will be discussed later (see map). It is therefore obvious to think that this species took refuge here during the coldest periods, an assumption which is not popular, archaeologists probably preferring to think in a purely geographical and cultural way. 
The European barrier necessarily becomes a cultural barrier, they think, as in our modern sedentary societies, even though everyone knows that the peoples of the Stone Age were nomads. Thus, J.-J. Hublin claims that Neanderthals disappeared because they probably lived in small groups in a small geographical area and died of cold and hunger. (Unfortunately, this programme is now archived and it is no longer possible to listen to it.) No earthly creature behaves thus, and many even manage to adapt to the whimsical behaviour of modern man. It seems particularly inappropriate to think that Neanderthals would suddenly have behaved thus when they had survived several ice ages before. Note that they lived in Byzovaya (presently in Siberia) in a full ice age (around -28,500), and probably during the winter (currently no proof), according to Ludovic Slimak, a researcher at CNRS, in the programme Le Salon Noir on the French radio station France Culture (22.06.2011). Modern winters (in our inter-glacial, warm period) in this region display a nice -40°C for 4 months. As an example, I plotted a pedestrian route on Google Maps from Paris to Israel's border, which gives about 4,300 km, or 35 days and 15 hours of continuous walking. At an average of 5 km a day, with some rest periods, the journey would take about 3 years. Obviously, no one walks day and night: they must have time to live, eat, defend themselves if necessary, bear children, etc. Anyway, the goal was not to go to Israel, nor to move as fast as possible, but to move step by step with the changes and climatic disadvantages, over several centuries. I take this example to show, to the motorized sedentary humans we are, that this is clearly and fully possible, and that to claim the opposite (Hublin) is quite absurd. Let us now look at our graphs and at the particularly harsh glacial period called Wolstonian or Wisconsinan (in America), and the one before it (Kansan on the graph), since they are almost only one. 
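The journey arithmetic above is easy to check. The distance and daily pace are the author's figures; treating the remainder as rest days is my own illustrative reading:

```python
# Back-of-the-envelope check of the Paris-to-Israel walking estimate.
# 4,300 km and 5 km/day are the text's figures; the rest is illustrative.
distance_km = 4300
pace_km_per_day = 5

walking_days = distance_km / pace_km_per_day   # 860 days of actual walking
years_walking = walking_days / 365
print(round(years_walking, 1))  # 2.4 years of pure walking; with rest days, roughly 3 years
```

Either way, the point stands: even at a very modest pace the distance is covered within a few years, let alone over several centuries.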
In 200,000 years, the minimum temperatures of the Pleistocene were reached four times, which is considerable. It seems clear that Neanderthals were particularly weakened. Even if Neanderthals could live in Byzovaya around -28,500, in a full ice age, it is not certain that this was an example of a whole group (men, women, children)*, and it is obvious that the milder circumstances in the south attracted them. They moved further south to the Israeli sites, then covered with forests. * NB: I will mention this below, but temperature is not the only factor that plays a role in population displacement. Solar radiation, which is not fully bound to temperature, is crucial. Around -28,500, solar radiation was quite high for such low temperatures, and this certainly played a positive role in the peopling of the far north. c. Hybridization as a survival mechanism of the species You can compare this situation to the current and well-known situation of the polar bear. The conditions of its environment have been degraded (from its point of view) dramatically, and the species has entered a survival mechanism. I think it was the same for Neanderthals whenever the temperature fell very low, especially when solar radiation was also low (too little solar radiation causes rickets, lower growth and thus a smaller size, and an increase in disease). The survival mechanism of the species then kicks in, which drove them not only to move, as is also known in warmer weather, but also to hybridize with the humans from the South, just as the polar bear hybridizes with the brown bear. The fair genes are recessive in the hybrid and are ready to emerge again when the climate becomes favorable, even if the species goes extinct in between. Polar bears and grizzlies are closer than Neanderthals and Sapiens, but the comparison is interesting. 
The grizzly, so named because of its gray color, sometimes brown and sometimes blond, had a common ancestor with the polar bear about 200,000 years ago. 200,000 years ago is also a period when the ice extent in the northern hemisphere was at a long-lasting low. It is now even lower, and has been so for only 10,000 years, but we must also consider the role of humans in the polar bear's habitat to understand that the critical modern situation in which it finds itself is approximately equivalent to that which lasted from -240,000 to -180,000 years. The pizzly, a hybrid between a grizzly and a polar bear, is almost gray. It is less suited to the ice, but more adapted to life on land. Hybridizing with the grizzly is an instinctive way for the polar bear to keep the genes of its species. It is probable that the white genes gradually form a “recessive package”, as is known in many species of animals, including the Kermode bear (or « Spirit Bear »), which was considered among American Indians to be a relic of the last glaciation. 10% of the black bears there are born white (genetic studies have shown that this phenomenon is caused by a package of recessive genes), and this color is probably the remnant of a species that died out but was saved, and which lived there during the last ice age. This bear has, by this survival mechanism, hybridized with its black neighbor, to “freeze” its species until the next ice age, in which a white coat is essential. In ancient times, there was a legend that every tenth bear was turned white as a reminder of the hardships of the last ice age, a pronouncement that the Spirit Bear would live forever in peace. This mechanism is dangerous for the species, but it may also be essential to its survival, and that is probably why it exists. It is likely that if the polar bear were to disappear, this “white genes package”, called leucism, found in the grizzly/pizzly would reappear occasionally. 
The grizzly, which is a subspecies of the brown bear, is almost certainly already a polar bear / brown bear hybrid, as suggested by its habitat, which it shares with the polar bear, its massive size compared to the classic brown bear, the shape of its head, and its coat, which ranges from blond/gray to dark brown. A blond grizzly bear fights with a regularly-coloured one. Photographer Steven Kazlowski snapped the powerful animals on the Alaskan Peninsula, in Katmai National Park, United States. In this sense, it is a genetic reserve for the polar bear, which makes its hybridization possible and limits the damage it would otherwise cause (genetic incompatibility, too different a size, etc.). As long as the polar bear exists and is not extinct, it is probable that the mixtures are not consistent enough or old enough to form this « white genes package », or leucism, in another species, as in the Kermode bear, white tigers, white lions, white deer, white blackbirds, white peacocks, and other animals with this recessive so-called “anomaly” that is classical in animals of the northern hemisphere or mountainous regions, which suffer dramatic climate changes during glacial periods. Note also that leucistic animals are often larger, including the tiger, which gives them an advantage in cold climates. In humans, it is possible to find blue-eyed and blond Mongolians, for example, without them having any blue-eyed or blond parents. If the leucistic populations, like the Kermode bear, able to live in the northern hemisphere or in mountain areas during ice ages, disappeared whenever the climate of their habitat was transformed, as is frequently the case, these areas would quickly have been drained of all life and would have remained empty a long time. By this system, species ensure their own protection until the climate again becomes favorable. Leucism is the stigma of this survival mechanism. 
One can imagine that these genes could become dominant if solar radiation decreased, like a coat that varies over long cycles. Concerning such cycles, note that the size of the cave bear increased during glaciations and decreased during interglacial (warm) periods (http://fr.wikipedia.org/wiki/Ours_des_cavernes); for better-known variable cycles, consider the arctic fox, arctic wolf, ptarmigan, and arctic or alpine hare. While a white coat (or white skin) can be a disadvantage, and recessive, during an interglacial, it is very useful in an ice age (camouflage; or, in humans in particular, less filtering of solar radiation, which is especially valuable when radiation is scarce), and it could then become dominant. This is a hypothesis, but it is probable that such individuals or populations feel a special attraction to northern regions and mountains, in order to repopulate them; the Scandinavians are a very simple example. So I will choose a totally different interpretation from that of anthropology and geography: Scandinavians are not blond because they live in the north; they live in the north because they are blond. This is why you have always loved skiing, and why many Europeans have a very special affinity with snow, missing it as soon as a winter is a bit warm. I do not believe in the psychological explanation that we are drawn to snow because it reminds us nostalgically of the cold winters of our childhood. First, the winters were not all cold, and childhood was not always sweet. I believe in the power of DNA, the Platonic thumos in the soul, which dictates what is good for us depending on our environment. It is clear that such micro-evolution exists, but what is called a mutation is probably often either a direct hybridization or a past hybridization that has left recessive marks capable of reappearing. It must still be made clear that hybridization between bears differs from hybridization between humans, even if the two are comparable.
While the polar bear is perfectly adapted to the coldest climates, and it is the interglacial that kills it, the Neanderthal was adapted to cold, but only moderately cold, climates. Too large an ice sheet, too low a temperature, and too little solar radiation harmed him. He was thus "threatened" during the coldest and least sunny periods of glaciation, and he migrated southwards. What is strange is that we think in an entirely different, and much more logical, way when it comes to animals. Nobody is surprised if a breeder of Samoyed dogs tells you that these dogs become different dogs at the first winter snow, and that they roll in it with happiness. Here there is no question of psychology and childhood; it seems obvious. Similarly, no one will tell you that kittens are jealous when they fight; no one will bother you with the unfortunate Freudian Oedipus complex when they growl at their father or mother. They will tell you that the kittens are learning to hunt, or at worst learning the hierarchy imposed by their species. Sometimes it is good to look at the human being as an animal, at least to a certain degree. The human is an animal, but a particular one. This singularity is the weapon nature has given him. Humans have no long canines; they are not very large, not very fast, not very quiet, and have no shell, but they have their heads. That is where everything changes, and that is why this hybridization has great implications for the Neanderthal species, or the European species that we are, at least on our scale of modern humans of the second millennium.
d. The survival of the Neanderthal species
We know that Neanderthals had a hard life. It is assumed that women were few, and this seems logical, since they appear to have more or less participated in hunting too (Neanderthal women with serious fractures have been found, such as the woman from St.
Cesaire, called "Pierrette" (36,000 years ago), who had a severe skull fracture), and since women are at an additional disadvantage for survival compared with men: they are more fragile and must go through pregnancy and childbirth, which are particularly dangerous, especially in cold climates. It is likely that Neanderthals, who were also stronger than Homo sapiens sapiens, took a few women. The reverse is rather unlikely, given the inequality of strength between the two species and the results of the mtDNA studies we discussed earlier. These hybridizations probably occurred over a long time, probably every time the temperature and/or the solar radiation was low and the ice level was high, as Neanderthals migrated southwards seeking warmer temperatures. Thus, already around 340,000 years ago we can imagine a hybridization. It was probably insignificant, and the resulting hybrids probably had poor fertility, the males perhaps being sterile, as in many animals (see the previous article). Offspring were nevertheless born, probably through the female hybrids, and they built the archaic "Neanderthal" Homo sapiens that I listed in the preceding article. Some remained in Africa, and their blood was diluted into the African blood of the Middle East and Ethiopia. Others went away to Europe, perhaps instinctively, during the warming that followed. This may have slightly modified the appearance of the Neanderthal, but at this stage the share of hybridization was too small to be meaningful, and most hybrids probably left little or no offspring. Around 270,000 years ago the temperature again reached its minimum, and the Neanderthal was again in serious danger. A new migration, a new hybridization, probably slightly easier this time, since the sapiens of Ethiopia and the Middle East already carried a little Neanderthal blood, though still very little, almost insignificant.
After that, the climate did not warm clearly; instead a moderate and unusual interglacial occurred, with glaciers melting less than usual, and then an ice age arrived with three successive temperature minima instead of one, and likewise three periods of low solar radiation. The Neanderthal was severely weakened, and the entire species was endangered. Neanderthals, mostly women, died young, and the survival mechanism of the species was set in motion again: three migrations, three hybridizations, around 185,000, 160,000, and 140,000 years ago. The so-called "archaic modern Homo sapiens" slowly appears in Ethiopia and the Middle East. It is actually an early Neanderthal/sapiens hybrid, a Cro-Magnon of the Wolstonian glaciation. Then comes a classic interglacial period. Some hybrids remain in Asia and Africa; others go back north, and the Neanderthal blood is already changed. A new ice age arrives: the temperature drops dangerously around 120,000 years ago, solar radiation reaches its minimum again around 90,000 years ago, and part of the Neanderthal species migrates south. There it becomes Tabun (120,000 years ago) and Skhul and Qafzeh (100,000 to 90,000 years ago), previously attributed to Neanderthals and then reassigned, without explanation other than the lack of physical evidence for the famous "Out of Africa" theory, to archaic Homo sapiens. It is in Skhul and Qafzeh, located in present-day Israel, that the key to the story lies. In Skhul, Qafzeh, Amud, and Tabun are the first outstanding true hybrids, not the first Homo sapiens suddenly appearing in Israel. We will return to this subject and detail their characteristics in the next article. Some of these hybrids would have moved north again around 60,000 years ago, when the climate was becoming a little warmer, and mixed on site with the Neanderthals, some of whom lived at the edge of the glaciers in the north, as at Byzovaya, in present-day Russia.
Fever in Children
What is a fever? A fever is defined by most healthcare providers as a temperature of 100.4°F (38°C) or higher when taken rectally. The body has several ways to maintain normal body temperature. The organs involved in temperature regulation include the brain, skin, muscle, and blood vessels. The body responds to changes in temperature by:
- Increasing or decreasing sweat production.
- Moving blood away from, or closer to, the surface of the skin.
- Getting rid of, or holding on to, water in the body.
- Seeking a cooler or warmer environment.
When your child has a fever, the body works the same way to control the temperature, but it has temporarily reset its thermostat at a higher temperature. The temperature increases for a number of reasons:
- Chemicals called cytokines and mediators are made in the body in response to an invasion by a microorganism, a malignancy, or another intruder.
- The body makes more macrophages, cells that go into combat when intruders are present in the body. These cells actually "eat up" the invading organism.
- The body busily makes natural antibodies, which fight infection. These antibodies will recognize the infection the next time it tries to invade.
- Many bacteria are enclosed in an overcoat-like membrane. When this membrane is disrupted or broken, the contents that escape can be toxic to the body and stimulate the brain to raise the temperature.
What conditions can cause a fever? The following conditions can cause a fever: disorders in the brain, some kinds of cancer, and some autoimmune diseases. What are the benefits of a fever? Fever is not an illness. It is a symptom, a sign that your body is fighting an illness or infection. Fever stimulates the body's defenses, sending white blood cells and other "fighter" cells to fight and destroy the cause of the infection. What are the symptoms that my child may have a fever? Children with fevers may become more uncomfortable as the temperature rises.
In addition to a body temperature greater than 100.4°F (38°C), symptoms may include: Your child may not be as active or talkative as usual. He or she may seem fussier, less hungry, and thirstier. Your child may feel warm or hot. Remember that even if your child feels like he or she is "burning up," the measured temperature may not be that high. The symptoms of a fever may look like those of other medical conditions. According to the American Academy of Pediatrics, if your child is younger than 3 months of age and has a temperature of 100.4°F (38°C) or higher, you should call your child's healthcare provider immediately. If you are unsure, always check with your child's healthcare provider for a diagnosis. When should a fever be treated? In children, a fever that is causing discomfort should be treated. Treating your child's fever will not help the body get rid of the infection any faster; it simply relieves the discomfort associated with fever. Children between the ages of 6 months and 5 years can develop seizures from fever (called febrile seizures). If your child does have a febrile seizure, there is a chance that the seizure may occur again, but children usually outgrow febrile seizures. A febrile seizure does not mean your child has epilepsy. There is no evidence that treating the fever will reduce the risk of having a febrile seizure. What can I do to decrease my child's fever? Give your child an anti-fever medicine, such as acetaminophen or ibuprofen. DO NOT give your child aspirin, as it has been linked to a serious, potentially fatal disease called Reye syndrome. Other ways to reduce a fever:
- Dress your child lightly. Excess clothing will trap body heat and cause the temperature to rise.
- Encourage your child to drink plenty of fluids, such as juices, soda, punch, or popsicles.
- Give your child a lukewarm bath. Do not allow your child to shiver from cold water, as this can raise the body temperature. NEVER leave your child unattended in the bathtub.
- DO NOT use alcohol baths.
When should I call my child's healthcare provider? Unless advised otherwise by your child's healthcare provider, call the provider right away if:
- Your child is 3 months old or younger and has a fever of 100.4°F (38°C) or higher. Get medical care right away; fever in a young baby can be a sign of a dangerous infection.
- Your child is of any age and has repeated fevers above 104°F (40°C).
- Your child is younger than 2 years of age and has a fever of 100.4°F (38°C) that continues for more than 1 day.
- Your child is 2 years old or older and has a fever of 100.4°F (38°C) that continues for more than 3 days.
- Your baby is fussy or cries and cannot be soothed.
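The 100.4°F (38°C) threshold above is just the standard Fahrenheit-to-Celsius conversion, C = (F − 32) × 5/9. A minimal sketch of that arithmetic (the function names are illustrative, not from any medical source):

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5 / 9

def is_fever(deg_f, threshold_c=38.0):
    """Return True if a rectal temperature in deg F meets the 38 degC fever threshold."""
    return f_to_c(deg_f) >= threshold_c

print(round(f_to_c(100.4), 1))  # 38.0 -- the rectal fever threshold
print(is_fever(99.1))           # False
print(is_fever(104.0))          # True
```

This also shows why the guidance pairs 100.4°F with 38°C: they are the same temperature on the two scales.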
This study of computer architecture covers the central processor unit, memory unit and I/O unit, number systems, character codes and I/O programming. Programming assignments provide practice working with assembly language techniques, including looping, addressing modes, arrays, subroutines, and macros. The Microsoft assembler is discussed and used for programming throughout the course.
- Understand how various types of information, including numbers, text, images and programs, are represented in the computer in the form of binary data.
- Understand how the computer performs arithmetic and logic operations on binary data.
- Study the basic components of a computer (the CPU, memory system, storage and other peripheral devices), their individual operations, and how they work together to execute programs and interact with the computer user.
- Understand how a program written in a high-level programming language is translated into assembly language and machine code, and how the compiled code is executed in hardware.
- Understand the interaction between the CPU and memory in executing program instructions.
- Understand digital circuits, from basic logic gates to the combinational and sequential circuits that form the CPU data path and memory elements.
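The first two outcomes, representing numbers and text as binary data, can be illustrated in a few lines. A small sketch in Python rather than the Microsoft assembler used in the course:

```python
# An integer and its 8-bit binary representation
n = 42
print(format(n, '08b'))            # '00101010'

# Character codes: text is stored as numbers too
print(ord('A'))                    # 65
print(format(ord('A'), '08b'))     # '01000001'

# Negative numbers: 8-bit two's complement of -42
print(format(-42 & 0xFF, '08b'))   # '11010110'

# Arithmetic and logic operations act directly on those bits
print(0b00101010 + 0b00000001)     # 43
print(0b1100 & 0b1010)             # 8 (bitwise AND)
```

The `& 0xFF` mask emulates the fixed 8-bit register width an assembly programmer works with.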
Thermodynamics is a branch of physics which deals with the energy and work of a system. It deals only with the large scale response of a system which we can observe and measure in experiments. Like the Wright brothers, we are most interested in thermodynamics for the role it plays in engine design. On this slide we derive some equations which relate the heat capacity of a gas to the gas constant used in the equation of state. We are going to be using specific values of the state variables. For a scientist, a "specific" state variable means the value of the variable divided by the mass of the substance. This allows us to derive relations between variables without regard for the amount of the substance that we have. We can multiply the specific variable by the quantity of the substance at any time to determine the actual value of the variable. From our studies of heat transfer, we know that the amount of heat transferred between two objects is proportional to the temperature difference between the objects and the heat capacity of the objects. The heat capacity is a constant that tells how much heat is added per unit temperature rise. The value of the constant is different for different materials and depends on the process. Heat capacity is not a state variable. If we are dealing with a gas, it is most convenient to use forms of the thermodynamics equations based on the enthalpy of the gas. From the definition of enthalpy: h = e + p * v where h is the specific enthalpy, p is the pressure, v is the specific volume, and e is the specific internal energy. During a process, the values of these variables will change. Let's denote the change by the Greek letter delta (which looks like a triangle). So "delta h" means the change of "h" from state 1 to state 2 during a process. Then, for a constant pressure process the enthalpy equation becomes: delta h = delta e + p * delta v The enthalpy, internal energy, and volume are all changed, but the pressure remains the same.
From our derivation of the enthalpy equation, the change of specific enthalpy is equal to the heat transfer for a constant pressure process: delta h = cp * delta T where delta T is the change of temperature of the gas during the process, and cp is the specific heat capacity. We have added a subscript "p" to the specific heat capacity to remind us that this value only applies to a constant pressure process. The equation of state of a gas relates the temperature, pressure, and volume through a gas constant R. The gas constant used by aerodynamicists is derived from the universal gas constant, but has a unique value for every gas. p * v = R * T If we have a constant pressure process, then: p * delta v = R * delta T Now let us imagine that we have a constant volume process with our gas that produces exactly the same temperature change as the constant pressure process that we have been discussing. Then the first law of thermodynamics tells us: delta e = delta q - delta w where q is the specific heat transfer and w is the work done by the gas. For a constant volume process, the work is equal to zero. And we can express the heat transfer as a constant times the change in temperature: delta e = cv * delta T where delta T is the change of temperature of the gas during the process, and cv is the specific heat capacity. We have added a subscript "v" to the specific heat capacity to remind us that this value only applies to a constant volume process. Even though the temperature change is the same for this process and the constant pressure process, the value of the specific heat capacity is different. Because we have selected the constant volume process to give the same change in temperature as our constant pressure process, we can substitute the expression given above for "delta e" into the enthalpy equation.
In general, you can't make this substitution because a constant pressure process and a constant volume process will produce different changes in temperature. If we substitute the expressions for "delta e", "p * delta v", and "delta h" into the enthalpy equation we obtain: cp * delta T = cv * delta T + R * delta T Dividing by "delta T" gives the relation: cp = cv + R The specific heat constants for constant pressure and constant volume processes are related to the gas constant for a given gas. This rather remarkable result has been derived from thermodynamic relations, which are based on observations of physical systems and processes. In the kinetic theory of gases, this result is derived from considerations of the conservation of energy at a molecular level. We can define an additional variable called the ratio of specific heats, which is given the Greek symbol "gamma" and is equal to cp divided by cv: gamma = cp / cv "Gamma" is just a number whose value depends on the state of the gas. For air, gamma = 1.4 for standard day conditions. "Gamma" appears in several equations which relate pressure, temperature, and volume during a simple compression or expansion process. Because the value of "gamma" just depends on the state of the gas, there are tables of these values for given gases. You can use the tables to solve gas dynamics problems.
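The two relations cp = cv + R and gamma = cp / cv can be checked numerically. A quick sketch, assuming the standard specific gas constant for air, R = 287 J/(kg·K), a value not stated on the slide itself:

```python
R = 287.0      # specific gas constant for air, J/(kg*K) (assumed)
gamma = 1.4    # ratio of specific heats for air, standard day conditions

# Solving gamma = cp/cv together with cp = cv + R gives:
cv = R / (gamma - 1)     # specific heat at constant volume,  ~717.5 J/(kg*K)
cp = gamma * cv          # specific heat at constant pressure, ~1004.5 J/(kg*K)

print(cv, cp)
print(cp - cv)           # recovers R = 287.0
print(cp / cv)           # recovers gamma = 1.4
```

Substituting cp = gamma * cv into cp = cv + R is what yields cv = R / (gamma - 1), the step the code performs.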
Eukaryotic cells contain membrane-bound organelles, such as the nucleus, while prokaryotic cells do not. Differences in cellular structure of prokaryotes and eukaryotes include the presence of mitochondria and chloroplasts, the cell wall, and the structure of chromosomal DNA. Prokaryotes were the only form of life on Earth for millions of years until more complicated eukaryotic cells came into being through the process of evolution. Beyond size, the main structural differences between plant and animal cells lie in a few additional structures found in plant cells. These structures include: chloroplasts, the cell wall, and vacuoles.
Semester One Review: Geometry
In this geometry review worksheet, students determine answers to problems on topics such as corresponding angles, sequences, angle measure, and angle bisectors. This seven-page worksheet contains 80 multiple-choice problems.
See similar resources:
Dilation of a Line: Factor of One Half
What happens to a dilation when the scale factor is less than one? Learners show and then tell this in a short worksheet. They create a dilation from an off-line point using a scale factor of 0.5. The class then explains how the images... 9th - 12th Math CCSS: Designed
Connecting Algebra and Geometry Through Coordinates
This unit on connecting algebra and geometry covers a number of topics, including worksheets on the distance formula, finding the perimeter and area of polygons, the slope formula, parallel and perpendicular lines, parallelograms,... 9th - 10th Math CCSS: Designed
Creating and Graphing Linear Equations in Two Variables
This detailed presentation starts with a review of using key components to graph a line. It then quickly moves into new territory, taking these important parts and teasing them out of a word problem. Special care is taken to discuss... 9th - 10th Math CCSS: Adaptable
Existence: One-to-One Functions and Inverses
One-to-one means the answer is simple, right? Given four graphs, pupils use a vertical line to test each graph to find out if it is one-to-one. By using the resource, learners realize that not all one-to-one relations are functions.... 11th - Higher Ed Math CCSS: Designed
Inverse Functions: Definition of Inverse Functions
Is the inverse of a function also a function?
Pupils manipulate the graph of a function to view its inverse to answer this question. Using a horizontal and vertical line, class members determine whether the initial function is... 11th - Higher Ed Math CCSS: Adaptable
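The dilation in the first resource above, mapping a point from a center with scale factor 0.5, is just P' = C + k(P − C). A small sketch of that rule (the function name is ours, not from the worksheet):

```python
def dilate(point, center, k):
    """Dilate `point` about `center` by scale factor k: P' = C + k*(P - C)."""
    px, py = point
    cx, cy = center
    return (cx + k * (px - cx), cy + k * (py - cy))

# A scale factor of 0.5 halves every point's distance from the center
print(dilate((4, 6), (0, 0), 0.5))   # (2.0, 3.0)
print(dilate((4, 6), (2, 2), 0.5))   # (3.0, 4.0)
```

With k < 1 the image lands between the center and the original point, which is exactly what the worksheet asks learners to explain.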
Connectivism is a learning theory for the digital age. Learning has changed over the last several decades. The theories of behaviourism, cognitivism, and constructivism provide an effective view of learning in many environments. They fall short, however, when learning moves into informal, networked, technology-enabled arenas. Some principles of connectivism:
- The integration of cognition and emotions in meaning-making is important. Thinking and emotions influence each other. A theory of learning that only considers one dimension excludes a large part of how learning happens.
- Learning has an end goal, namely the increased ability to "do something". This increased competence might be in a practical sense (e.g., developing the ability to use a new software tool or learning how to skate) or in the ability to function more effectively in a knowledge era (self-awareness, personal information management, etc.). The "whole of learning" is not only gaining skill and understanding; actuation is a needed element. Principles of motivation and rapid decision making often determine whether or not a learner will actuate known principles.
- Learning is a process of connecting specialized nodes or information sources. A learner can exponentially improve their own learning by plugging into an existing network.
- Learning may reside in non-human appliances. Learning (in the sense that something is known, but not necessarily actuated) can rest in a community, a network, or a database.
- The capacity to know more is more critical than what is currently known. Knowing where to find information is more important than knowing information.
- Nurturing and maintaining connections is needed to facilitate learning. Connection making provides far greater returns on effort than simply seeking to understand a single concept.
- Learning and knowledge rest in diversity of opinions.
- Learning happens in many different ways.
Courses, email, communities, conversations, web search, email lists, reading blogs, etc. Courses are not the primary conduit for learning.
- Different approaches and personal skills are needed to learn effectively in today's society. For example, the ability to see connections between fields, ideas, and concepts is a core skill.
- Organizational and personal learning are integrated tasks. Personal knowledge is composed of a network, which feeds into organizations and institutions, which in turn feed back into the network and continue to provide learning for the individual. Connectivism attempts to provide an understanding of how both learners and organizations learn.
- Currency (accurate, up-to-date knowledge) is the intent of all connectivist learning.
- Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.
- Learning is a knowledge creation process, not only knowledge consumption. Learning tools and design methodologies should seek to capitalize on this trait of learning.
An article in The Astrophysical Journal describes research conducted on quasars using the Hubble Space Telescope. These incredibly bright objects were observed in their formation phase, when they were, in a sense, teenagers. The observations confirm the hypothesis that quasars are generated by galactic collisions that feed the supermassive black hole at their center. Another article in The Astrophysical Journal describes a study of the galaxy group NGC 5813 made using NASA's Chandra X-ray Observatory. In this galaxy group, multiple eruptions were discovered originating from the supermassive black hole at the center of the galaxy that gives the group its name. This activity took place over about 50 million years and has changed the appearance of the group, creating various cavities: huge bubbles within the cloud of hot gas that surrounds it. A third article published in The Astrophysical Journal describes a study that established a link between galaxy mergers and supermassive black holes that emit radio waves as well as jets of material at nearly the speed of light. An international team of astronomers led by Italian INAF researcher Marco Chiaberge used the Hubble Space Telescope in the most extensive survey of its kind ever conducted.
Volunteer for NIAID-funded clinical studies related to antimicrobial (drug) resistance on ClinicalTrials.gov. Microbes are living organisms that multiply frequently and spread rapidly. They include bacteria (e.g., Staphylococcus aureus, which causes some staph infections), viruses (e.g., influenza, which causes the flu), fungi (e.g., Candida albicans, which causes some yeast infections), and parasites (e.g., Plasmodium falciparum, which causes malaria). Some microbes cause disease, while others exist in the body without causing harm and may actually be beneficial. Microbes are constantly evolving, enabling them to adapt efficiently to new environments. Antimicrobial resistance is the ability of microbes to grow in the presence of a chemical (drug) that would normally kill them or limit their growth. Antimicrobial resistance makes it harder to eliminate infections from the body as existing drugs become less effective. As a result, some infectious diseases are now more difficult to treat than they were just a few decades ago. As more microbes become resistant to antimicrobials, the protective value of these medicines is reduced. Overuse and misuse of antimicrobial medicines are among the factors that have contributed to the development of drug-resistant microbes. View the illustration: What is drug resistance? Last Updated February 18, 2009. Last Reviewed February 18, 2009.
What happens when Newton's third law is broken?May 15, 2015 by Lisa Zyga in Physics / General Physics Even if you don't know it by name, everyone is familiar with Newton's third law, which states that for every action, there is an equal and opposite reaction. This idea can be seen in many everyday situations, such as when walking, where a person's foot pushes against the ground, and the ground pushes back with an equal and opposite force. Newton's third law is also essential for understanding and developing automobiles, airplanes, rockets, boats, and many other technologies. Even though it is one of the fundamental laws of physics, Newton's third law can be violated in certain nonequilibrium (out-of-balance) situations. When two objects or particles violate the third law, they are said to have nonreciprocal interactions. Violations can occur when the environment becomes involved in the interaction between the two particles in some way, such as when an environment moves with respect to the two particles. (Of course, Newton's law still holds for the complete "particles-plus-environment" system.) Although there have been numerous experiments on particles with nonreciprocal interactions, not as much is known about what's happening on the microscopic level—the statistical mechanics—of these systems. In a new paper published in Physical Review X, Alexei Ivlev, et al., have investigated the statistical mechanics of different types of nonreciprocal interactions and discovered some surprising results—such as that extreme temperature gradients can be generated on the particle scale. "I think the greatest significance of our work is that we rigorously showed that certain classes of essentially nonequilibrium systems can be exactly described in terms of the equilibrium's statistical mechanics (i.e., one can derive a pseudo-Hamiltonian which describes such systems)," Ivlev, at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, told Phys.org. 
"One of the most amazing implications is that, for example, one can observe a mixture of two liquids in detailed equilibrium, yet each liquid has its own temperature." One example of a system with nonreciprocal interactions that the researchers experimentally demonstrated in their study involves charged microparticles levitating above an electrode in a plasma chamber. The violation of Newton's third law arises from the fact that the system involves two types of microparticles that levitate at different heights due to their different sizes and densities. The electric field in the chamber drives a vertical plasma flow, like a current in a river, and each charged microparticle focuses the flowing plasma ions downstream, creating a vertical plasma wake behind it. Although the repulsive forces that occur due to the direct interactions between the two layers of particles are reciprocal, the attractive particle-wake forces between the two layers are not. This is because the wake forces decrease with distance from the electrode, and the layers are levitating at different heights. As a result, the lower layer exerts a larger total force on the upper layer of particles than the upper layer exerts on the lower layer of particles. Consequently, the upper layer has a higher average kinetic energy (and thus a higher temperature) than the lower layer. By tuning the electric field, the researchers could also increase the height difference between the two layers, which further increases the temperature difference. "Usually, I'm rather conservative when thinking on what sort of 'immediate' potential application a particular discovery (at least, in physics) might have," Ivlev said. "However, what I am quite confident of is that our results provide an important step towards better understanding of certain kinds of nonequilibrium systems. 
There are numerous examples of very different nonequilibrium systems where the action-reaction symmetry is broken for interparticle interactions, but we show that one can nevertheless find an underlying symmetry which allows us to describe such systems in terms of the textbook (equilibrium) statistical mechanics." While the plasma experiment is an example of action-reaction symmetry breaking in a 2D system, the same symmetry breaking can occur in 3D systems as well. The scientists expect that both types of systems exhibit unusual and remarkable behavior, and they hope to investigate these systems further in the future. "Our current research is focused on several topics in this direction," Ivlev said. "One is the effect of the action-reaction symmetry breaking in overdamped colloidal suspensions, where the nonreciprocal interactions lead to a remarkably rich variety of self-organization phenomena (dynamical clustering, pattern formation, phase separation, etc.). Results of this research may lead to several interesting applications. Another topic is purely fundamental: how can one describe a much broader class of 'nearly Hamiltonian' nonreciprocal systems, whose interactions almost match those described by a pseudo-Hamiltonian? Hopefully, we can report on these results very soon." A. V. Ivlev, et al. "Statistical Mechanics where Newton's Third Law is Broken." Physical Review X. DOI: 10.1103/PhysRevX.5.011035 © 2015 Phys.org "What happens when Newton's third law is broken?" May 15, 2015 https://phys.org/news/2015-05-newton-law-broken.html
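The distinction the article draws can be seen in a toy model: when the force particle 1 exerts on particle 2 is not minus the force 2 exerts on 1, the pair's total momentum is no longer conserved (the "missing" momentum is taken up by the environment). A minimal one-dimensional sketch, emphatically not the plasma system from the paper:

```python
def simulate(k12, k21, steps=1000, dt=0.001):
    """Two unit-mass particles on a line with spring-like coupling.
    Particle 1 feels k12*(x2 - x1); particle 2 feels -k21*(x2 - x1).
    Newton's third law (reciprocity) corresponds to k12 == k21."""
    x1, x2, v1, v2 = 0.0, 1.0, 0.0, 0.0
    for _ in range(steps):
        f1 = k12 * (x2 - x1)     # force on particle 1
        f2 = -k21 * (x2 - x1)    # force on particle 2
        v1 += f1 * dt            # symplectic Euler step
        v2 += f2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return v1 + v2               # total momentum (unit masses)

print(simulate(1.0, 1.0))  # reciprocal: total momentum stays at 0
print(simulate(2.0, 1.0))  # nonreciprocal: total momentum drifts away from 0
```

With k12 = k21 the two forces cancel at every step, so momentum is conserved; with k12 ≠ k21 the net internal force is nonzero and the pair steadily picks up momentum, the hallmark of a nonreciprocal interaction.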
If fluid is in the middle ear and the ear is not infected, it is called otitis media with effusion (OME). OME may occur in two ways. One way is if the fluid in the middle ear is slow to clear out after an ear infection. OME may also occur without any infection if the Eustachian tube is not properly working. When it works properly, this tube brings air to the middle ear. Young children and children with cleft palate or Down syndrome may have more problems with OME. The presence of fluid in the middle ear reduces the middle ear's ability to conduct sound. The eardrum and middle ear bones cannot vibrate as they should, making sound seem "muffled." This temporary hearing loss may contribute to speech and/or language delay or other developmental delays. Therefore, children with temporary hearing problems due to OME would benefit from attention to listening, language and learning conditions. Below are strategies to help your child continue to listen and learn during this period of temporary hearing loss.
Available as a Google Doc here. Time: 90 minutes - Computers (with internet access) for each student/student group - Teacher computer to share charts/slides with class - Cornstarch (1 tbs per student) - Vinegar (1 tbs per student) - Glycerin (1 tbs per student) - Water (4 tbs per student) - Stove top/heat pad - stirring tool (i.e. wooden spoon) - Rethinking Plastic Packaging -- How Can Innovation Help Solve the Plastic Waste Crisis? from the EPA - Frequently Asked Questions about Plastic Recycling and Composting from the EPA Objective: Students will be able to assess the potential effects of plastic reduction and bioplastics in their community and will model the potential impact on riverine environments. Students will do this through researching plastic reduction strategies, testing bioplastics and plastic alternatives and using student-generated data to assess the impact and viability of plastic reduction methods. Have space on a whiteboard or make a Google Doc for shared note-taking space. Have an experimentation space and materials/tools ready for students to make bioplastic samples. Experiment requires: a cooking pot, mixing tool (metal or wooden spoon ideal), a burner or heat plate, cornstarch, vinegar, glycerin, water and aluminum foil. Time: 30 minutes (5 m) Discuss: What do we commonly use plastic for? Ask students to answer the question above. List their answers on a whiteboard or a digital space that students can access and edit. Once listed, follow up with: What are some instances or uses that we could replace with alternative materials? (10 m) Explore: What are bioplastics? Have students watch "What is Bioplastic?" youtube video. Have they encountered bioplastics before that they know of? Show some examples of bioplastic commercial use. Recommended examples: Coca Cola's Plant Bottle, these Keurig cups, Ecovita cutlery. 
(15 m) Bioplastics Overview Create a slide presentation (or use this one) that addresses the following: - Bioplastics are made of biopolymers, which are natural substances composed of very large molecules (also known as macromolecules) made of simpler chemical units called monomers. - The most frequently used bioplastics, what they are made from, and what they are used for; e.g. gelatin, starch, cellulose, chitin, etc. Helpful resource here. - Have students discuss potential pros and cons of different types of bioplastics Optional: Have students watch and discuss Can Bioplastics Ever Compete? | Our Plastic Predicament: Episode 8 from ThinkBioplastic Time: 50 minutes (30 m) Make Bioplastic Have each student (this can also be done as a group or class project) organize their materials. Experiment requires: a cooking pot, mixing tool (metal or wooden spoon ideal), a burner or heat plate, cornstarch, vinegar, glycerin, water and aluminum foil. Measure one tablespoon each of cornstarch, vinegar and glycerin. Combine with four tablespoons of water in the cooking pot and stir together. The mixture should be a milky color. Turn the burner to low or medium and stir the mixture over heat. Continual stirring will keep the mixture from lumping together. The mixture will begin to turn translucent. After approximately a minute, turn off the heat and evenly spread the mixture onto aluminum foil. The plastic should be left alone for several hours to a day in order to dry properly. Recipe was pulled from Biomass Plastic, link here. (20 m) Plastic Reduction Discussion Ask students to describe, in as much detail as they can, the life cycle of a water bottle. After a few minutes, show students this infographic of a plastic water bottle's journey. Ask students where they think waste is mostly leaking out into the environment during this production cycle. 
Do they think it's primarily during manufacturing (from corporations, companies), distribution (lost during transportation), use (from consumers) or disposal (during or through lack of waste management)? What are some ways in which the New Orleans community can monitor and practice plastic waste reduction in the Mississippi River? Time: 10 minutes Synthesize and Share Have students write up a pseudo proposal outline for an initiative based on their brainstorming around plastic reduction practices in the Mississippi River and alternatives to petroleum-based plastics. This can be done in groups of 3-5. If you have time, have students give a 2-3 minute elevator pitch of their project idea. After the lesson: Compile student reflections and turn into a research note!
Keep obesity from robbing you of your health, happiness and wellness. Understanding what obesity is and how to prevent it can help you live a longer and healthier life. wiredhealthconference.com gathered essential information about the definition of obesity, how to determine if you are obese, its causes, and what you can do to prevent it. What is Obesity? Obesity can be defined as a disorder involving excessive body fat that significantly increases health problem risks. According to the Centers for Disease Control (CDC), there are three weight ranges that your body mass index (BMI) can indicate: - Your BMI is 18.5 to <25, which falls within the healthy weight range. - Your BMI is 25.0 to <30, which falls within the overweight range. - Your BMI is 30.0 or higher, which falls within the obesity range. Tip: The formula used to calculate your BMI is weight in kilograms divided by height in meters squared. If height has been measured in centimeters, divide by 100 to convert this to meters. When using English measurements, divide pounds by inches squared, then multiply by 703 to convert from lbs/inches2 to kg/m2. What Causes Obesity? Generally, obesity is caused by overeating and moving too little. If you consume fats and sugars in excess but don’t burn off that energy through exercise and physical activity, much of that surplus energy will be stored by your body as fat. Consider the following potential contributors to obesity: Stress, Emotional Problems, and Poor Sleep - People may eat more than usual when bored, angry, upset, or stressed. - Research shows that genetics can play a significant role in obesity. - Not having access to parks, sidewalks, and affordable gyms makes it challenging to exercise regularly and remain physically active. - Oversized or supersized meal portions increase one’s calorie intake, making even more physical activity necessary to achieve or maintain a healthy BMI. 
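The BMI arithmetic described in the tip above is easy to check in a few lines of code. This is an illustrative sketch, not from the article: the function names and the example weights and heights are made up, and the category cutoffs are simply the three CDC ranges just listed (a BMI below 18.5 falls outside them).

```python
def bmi_metric(weight_kg, height_m):
    """BMI = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    """Divide pounds by inches squared, then multiply by 703 to get kg/m^2."""
    return 703 * weight_lb / height_in ** 2

def bmi_range(bmi):
    """Map a BMI value onto the three CDC ranges listed above."""
    if 18.5 <= bmi < 25.0:
        return "healthy weight"
    elif 25.0 <= bmi < 30.0:
        return "overweight"
    elif bmi >= 30.0:
        return "obesity"
    return "below the listed ranges"  # BMI < 18.5 is not covered by the three ranges above

# Example: 70 kg at 1.75 m, and roughly the same person in English units (154 lb, 69 in)
print(round(bmi_metric(70, 1.75), 1))   # 22.9
print(round(bmi_imperial(154, 69), 1))  # 22.7
print(bmi_range(bmi_metric(70, 1.75)))  # healthy weight
```

Note that the two unit systems agree to within rounding, which is exactly what the 703 conversion factor is for.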
- Some areas lack access to supermarkets (that sell affordable healthy food, like fresh fruits and vegetables). - Aggressive food advertising encourages people to purchase unhealthy food, like fast food, high-fat snacks, and sugary beverages. Health Conditions and Medications - Some medical problems (like hormone imbalances) may cause obesity. - Certain medications can also cause rapid weight gain (corticosteroids, antidepressants, and seizure medicines). Tip: Regardless of the cause, rapid weight gain should be reported to your primary care physician. Simple tests can rule out causes like heart failure, kidney failure, underactive thyroid, and ovarian disorders. How to Prevent Obesity Obesity is a severe chronic disease affecting an ever-increasing number of children, teens, and adults. Early onset of type 2 diabetes, heart and blood vessel disease, and other obesity-related conditions, along with depression and social isolation, is being seen more frequently by healthcare professionals in children and teens. The longer one is obese, the more significant and challenging obesity-related risk factors become. Consider the following strategies to prevent obesity: - Eat a balanced breakfast – While you may believe skipping a meal is a way to cut calorie intake, skipping breakfast typically works against you as excessive hunger comes back later in the day, potentially leading to overeating. - Choose smaller portions and eat slowly – Slowing your pace at meals and choosing reduced portions can help avoid overeating. - Focus on your food – Limiting distractions like the television, computer, or phone can help us focus on our food. - Eat home-cooked meals – Fast food, restaurant meals, and other foods prepared away from home typically have larger portions and are less nutritious than the food you cook for yourself. - Eat mindfully – Stop eating “just to eat,” and think about why you’re actually eating. When you are hungry, make the healthiest food and drink selections possible. 
Increase Physical Activity Reducing or eliminating activities that encourage a sedentary (immobile) state can increase your ability to manage or prevent obesity. The following will help you get moving: - Reduce Screen Time – Keep television and device screen time to two hours or less daily. The less, the better. - For Adults – A minimum of 2.5 to 3 hours of moderate exercise or 1.5 to 2 hours of vigorous exercise weekly is recommended for good health. - For Children – A minimum of 1 hour of moderate to vigorous physical activity daily. Watching television is a sedentary activity that commonly promotes unhealthy eating through ads, product placements, and other promotions that pitch high-calorie, low-nutrient food and drinks. Tip: Keep bedrooms TV- and Internet-free. Get Sufficient Sleep There is evidence that a good night’s sleep is crucial to good health and may help keep weight in check. The National Sleep Foundation recommends that: - 1-3 years old: 12 to 14 hours nightly - 3-5 years old: 11 to 13 hours nightly - 5-12 years old: 10 to 11 hours nightly - Teens: 8.5 to 9.25 hours nightly - Adults: 7 to 8 hours nightly Note: Combating obesity requires you to remain vigilant of sedentary habits, food consumption, health conditions, stress, and other controllable factors to maintain a healthy weight/BMI. Consult your primary care physician for additional helpful advice, tips, referrals, and exams. How to Prevent Obesity In this article, you discovered information on obesity, how to determine whether you are obese or not, and methods to help you prevent it. Recognizing and consciously altering your habits and lifestyle can significantly contribute to a less sedentary and healthier body, less susceptible to becoming obese. Ignoring the need to manage your weight and prevent obesity can lead to poor health, while increasing your risks for life-threatening medical conditions.
The team of researchers from Queen’s University Belfast and Kings College London say they have hit upon a way to quickly produce large numbers of stem cells to treat vascular disease. While the idea of harvesting stem cells from blood is not a new one, the volume of stem cells necessary to treat cardiovascular disease makes traditional harvesting methods impractical. Relying on autologous material would require a skin biopsy or large quantities of donated blood to make treatment possible, according to the researchers. Their procedure eliminates the need to rely on so much biological material to isolate the stem cells for treatment. With just a small amount of blood, it is possible to produce large quantities of stem cells in a very short amount of time. Creating Better Endothelial Cells The original impetus for their study was to look at induced pluripotent stem cells and how they might be used to treat cardiovascular disease. Along the way, they also came up with a method of genetically altering the cells they created so as to make them useful for generating better endothelial cells. During the research, the scientists found that activating a particular gene in the stem cells would lead to the generation of new endothelial cells that could then be used to treat certain diseases. Endothelial cells play a key role in protecting the blood vessels. Unfortunately, there are a range of diseases that damage endothelial cells. Patients with such diseases are more likely to suffer heart attacks, poor circulation, and a range of chronic conditions. How They Did It While the scientific explanation of what the researchers did is quite complicated, it can be boiled down to three steps. First, the researchers started by generating induced pluripotent stem cells in the lab. They produced several different kinds of cells and looked for the most favorable properties of each one. 
The second step was to grow those cells using a specific biological environment that would encourage replication. Finally, the replicated cells were genetically altered in order to cause them to differentiate into healthy endothelial cells. The resulting endothelial cells would have been ready for injection into a sick patient had this study gone that far. It did not. Researchers were only looking for a way to create the new cells. Later studies will have to look into using them for actual treatments. Saving Patient Lives The effects of damaged endothelial cells can be quite profound. For example, the researchers say that 50% of all diabetes patients die from heart attacks. Those heart attacks are directly attributable to damaged endothelial cells. Researchers hope that their treatment changes that for diabetes patients. They hope the treatment will ultimately save patients’ lives. Like so much of regenerative medicine, this revolutionary stem cell treatment looks at reversing the damage done by cardiovascular disease. It is a treatment that focuses on genuine healing rather than treating symptoms. Let’s hope that further research backs up the results observed here. Any such research could open the door to all sorts of new treatments for cardiovascular conditions. The power of stem cells to heal the body is both incredible and virtually untapped by human knowledge. Who knows what further research will lead to? We do know that induced pluripotent stem cells can be encouraged to differentiate into healthy endothelial cells in a short amount of time. That is certainly good news all the way around.
To view the scheme of work for Key Stage 3 Maths – please click here During Key Stage 3, through the mathematics content, all pupils will be taught to: - Develop fluency - Reason mathematically - Solve problems - Ratio, proportion and rates of change - Geometry and measure ICT and Computing is taught to students in Key Stage 3 as part of their Maths and Science lessons with dedicated lessons in our ICT suites within the Maths and Computing block. In the computing lessons within maths, students will be taught to: - Develop computational thinking - Evaluate and apply ICT to solve problems - Gain practical experience of writing computer programs - Use a variety of programming languages - Complete group activities using computing and logical problem solving techniques with challenges that include designing an escape room and managing the movement of a troublesome turtle! MyMaths text books are used from Years 7 to 13 at St. Crispin’s and students will be issued with their online login to their digital textbook when they arrive. Students are tracked using half termly tests that measure their progress against the new GCSE 9-1 grades. The students are placed into ability sets using the Key Stage 2 data we receive from the primary schools and are in these sets from their very first maths lesson at St. Crispin’s. The composition of the sets is reviewed every half term using the half termly tracking tests or as required by a student’s individual development. As students develop at different rates, movement between sets is inevitable. This enables students to make maximum progress. Each student is required to attend lessons with the appropriate equipment. This includes a pen, pencil, ruler, calculator, protractor and a pair of compasses, together with their exercise books and textbooks (MyMaths) which will be issued at the beginning of term. Year 7 (Books 1A – 1C) In Year 7 students will follow a mastery curriculum. 
This encourages students to question mathematical processes and methodologies, leading to a comprehensive understanding of why things work and how topics relate to each other. Year 8 (Books 2A – 2C) Year 9 (Books 3A – 3C) For more information on the MyMaths books please click here. If you require any further information, please do not hesitate to email Mrs Green, the head of Key Stage 3, on [email protected]
National Indigenous Veterans Day Lest We Forget is a phrase commonly associated with Remembrance Day observations. It has particular resonance as the 100th anniversary of Armistice Day approaches. The phrase has biblical origins, but its modern connotation urges us to recognize and remember the sacrifices made by millions of men and women in foreign conflicts. In Canada, however, “Lest We Forget” seems to apply only to some Canadians who risked or lost their lives in defence of our country. Specifically, Canadian history largely omits the contributions of Indigenous Peoples who fought for the freedoms we enjoy. The government recently made efforts to rectify these omissions and honour the thousands of Indigenous volunteers who fought and died for Canada with National Indigenous Veterans Day. Honouring Indigenous Veterans In July 2018, the Governor General awarded an amateur historian in Quebec the Sovereign’s Medal for Volunteers. Yann Castelnot created one of the largest databases of Indigenous soldiers who served in the Canadian and U.S. militaries. Castelnot, originally from France, spent two decades compiling the names of more than 150,000 Indigenous soldiers who fought for their countries. His work brought the achievements of military heroes like Sgt. Frank Narcisse Jérome, a Mi’kmaw soldier from Quebec who fought at Vimy Ridge, Hill 70, and Passchendaele to light. Jérome is one of only 39 Canadians to receive the Military Medal three times during a conflict. “[These heroes] must be put back on the altar of heroes alongside all the others…it’s not just a photo or a name, but a story to tell,” Castelnot said. Charlotte Edith Anderson Monture Last year, Learning Bird worked with St. Theresa Point Middle School to create social studies resources about the contributions of First Nations people in Canada’s war efforts in the twentieth century. One of the video resources centres on Indigenous heroes in WWI. 
It details the experiences of veterans like Charlotte Edith Anderson Monture. Monture was a member of Six Nations of the Grand River and a gifted student. She aspired to become a nurse. However, the Indian Act barred Indigenous people from higher education. Instead, she attended nursing school in New York and became Canada’s first Indigenous registered nurse in 1914. She then joined the U.S. military and served in a military hospital in France from 1917-18. Charlotte subsequently became the first Indigenous woman to vote in Canada. Mike Mountain Horse The resource kit also highlights the service of Mike Mountain Horse from Kainai First Nation. Mike went to St. Paul’s Anglican Residential School on reserve at age six. He later worked as a scout for the North-West Mounted Police. Mountain Horse’s father fought with the Blood Tribe during the Battle of Belly River. His brother served with the 23rd Alberta Rangers in World War One, eventually dying of injuries he suffered during the Second Battle of Ypres. Mike Mountain Horse and his brother Joe enlisted in the Canadian Expeditionary Force, fighting in France. Upon his return to Canada, he became a journalist. He wrote a manuscript detailing Kainai culture and history called “My People the Bloods.” Charles “Checker” Tomkins Another Indigenous soldier featured in the resource kit is Charles “Checker” Tomkins, who played a key role in WWII. Charles was a fluent Cree speaker from a small village in Northern Alberta. He joined the Canadian military in 1939. The Canadian High Command sent him to London, England in 1940, and summoned him to a secret meeting. There, they tested Charles’ fluency in Cree. Canadian High Command told Charles he would serve as a “code talker,” a translator of secret Allied communications. Command assigned Charles to the U.S. 8th Air Force and the 9th Bomber Command in England, translating various messages like troop movement orders or enemy supply lines. 
A 2016 documentary on Charles’ service, Cree Code Talker, notes that many credit this operation with helping win the war. The film also points out that neither the U.S. nor Canadian governments have recognized code talkers’ role in WWII. Learning About Indigenous Veterans These resources ask learners to consider the contributions of Indigenous war heroes. They challenge learners to consider what we can learn from the past and how it helps us understand the present and the future. Questions like these help learners develop strong critical thinking skills and prepare them to consider the differences between how the Canadian government has treated non-Indigenous and Indigenous veterans. Celebrate National Indigenous Veterans Day November 8th marks National Indigenous Veterans Day. It was first observed by Winnipeg’s city council to commemorate Indigenous contributions to Canada’s war efforts. The Canadian government still does not consider it a national day of observance. However, educators can ensure that learners know the extraordinary contributions and significant legacies of thousands of Indigenous men and women. We hope to work with other communities to create content that honours and celebrates Indigenous Peoples’ service to Canada in the coming years.
Growth Measures and Standard Errors You'll want to keep in mind that all growth measures reported in PVAAS are estimates. They are reliable estimates generated from a large amount of data using robust, research-based statistical modeling. But they are estimates, nonetheless. Because these measures are estimates, PVAAS reports display the standard error associated with each growth measure. Error is expected with any measurement. The standard error is a mathematical expression of certainty in an estimated value. The standard error on the PVAAS reports can be used to establish a confidence band around the growth measure. This confidence band helps us determine whether the increase or decrease in student achievement is statistically significant. In other words, it indicates how strong the evidence is that the group's achievement level increased, decreased, or remained about the same. More Information about Standard Error The standard error is specific to each growth measure because it expresses the certainty around that one estimate. The size of the standard error will vary depending on the quantity and quality of the data that was used to generate the growth measure. A smaller standard error indicates more certainty, or confidence, in the growth measure. A number of factors affect the size of the standard error, including: - The number of students included in the analyses - The number of assessment scores each student has, across grades and subjects - Which specific scores are missing from the students' testing histories To understand why these factors affect the size of the standard error, let's consider a few examples. Imagine two groups of students. 
Each group could represent the students in a particular teacher's class, all the students in a grade and subject at a particular school, or even all of the students in an LEA/district. Both groups have the same growth measure in math. The growth measure for the first group is based on 11 students, while the growth measure for the second group is based on 60 students. Because this second group's measure is based on many more students, and therefore, much more data, we would be more confident in that estimate of growth. As a result, the standard error would be smaller for the second group. Now let's consider two other groups of students with the same growth measure in math. This time both groups are equal in size, each having 60 students. However, in the first group all students have complete testing histories for the past five years. In contrast, quite a few students in the second group are missing prior test scores. The missing test scores create more uncertainty, so the growth measure for that group would have a larger standard error. In our final example, there are again two groups of students, and again, both groups have the same growth measure in math. The groups are equal in size, with each group having 60 students. Both groups of students have five years of test scores, and in both groups there are five students who are missing one prior score. However, in the first group, the five students with missing scores are missing the previous year's math score. In the second group, the five students with missing scores are missing the math score from three years ago. Because we are generating a growth measure for math, the missing math scores from the prior year create more uncertainty than the missing math scores from several years ago. As a result, the group with the missing math scores would have a larger standard error. 
The standard error is a critical part of the reporting because it helps to ensure that LEAs/districts, schools, and teachers are not disadvantaged in PVAAS because they serve a small number of students or because they serve students with incomplete testing histories.
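The confidence-band logic described in this section can be sketched in a few lines of code. This is an illustrative sketch only: the function name is made up, and the band of the growth measure plus or minus two standard errors is a common statistical convention, not PVAAS's published reporting rule, which may use different thresholds.

```python
def classify_growth(growth, std_err, z=2.0):
    """Classify a growth estimate using a confidence band of
    growth +/- z standard errors. (z = 2 is an illustrative
    convention; PVAAS's actual thresholds may differ.)"""
    lower = growth - z * std_err
    upper = growth + z * std_err
    if lower > 0:
        return "evidence of an increase"
    if upper < 0:
        return "evidence of a decrease"
    return "about the same"

# The same growth measure with different certainty, echoing the examples above:
# a large group with complete testing histories -> smaller standard error;
# a small group, or one missing recent scores -> larger standard error.
print(classify_growth(1.2, 0.4))  # band (0.4, 2.0): evidence of an increase
print(classify_growth(1.2, 0.9))  # band (-0.6, 3.0): about the same
```

Notice that the identical growth measure of 1.2 leads to different conclusions: only the group with the smaller standard error provides statistically significant evidence of an increase, which is exactly why the standard error matters for fair reporting.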
Historical Weapons From 1789 to 1864 The 19th century brought unprecedented changes to weaponry. Thanks to the Industrial Revolution and interchangeable parts, guns evolved from clumsy and inaccurate muskets to lethally precise rifles between the years 1789 and 1864. In addition, the revolver and machine gun introduced weapons that could fire numerous shots in a single round. Weapons like the Colt revolver were new but popular firearms in this period. 1 Percussion Cap In 1805, Reverend Alexander John Forsyth of Aberdeenshire, Scotland, patented the percussion cap. The percussion cap allowed guns to use a non-exposed flash pan for initial ignition. This had the advantage of keeping the priming charge dry in moist conditions. In addition, the percussion cap was more reliable than its predecessor, the flash pan. It operated by having an automated hammer strike a small cup filled with mercury fulminate. This cup was attached to the tube of the gun. Forsyth's technology relied on the chemical properties of mercury fulminate, which explodes when struck. Once the hammer struck the filled cap, an explosion lit the gun's powder and propelled the bullet out. The percussion cap was perfected over subsequent decades, and was adopted by both the British and American armies by the 1840s. 2 Colt Revolver Before the invention of the revolver, guns could only shoot one bullet at a time before reload. In 1836, Samuel Colt received a U.S. patent for the Colt revolver, which included a revolving cylinder loaded with five or six bullets. Gunpowder and bullets were both loaded into the revolver, and six bullets could be loaded in about 20 seconds. Colt invented different models, including a pistol, rifle and a holster. Weapons produced by Colt were used in both the Mexican-American War and in the American Civil War. Colt, a Connecticut native, refused to sell guns to Confederate states after the outbreak of war, however. 
3 Rifling and the Minié Ball Before 1849, loading guns remained difficult because most bullets conformed to the size of the barrel in which they were loaded. This meant that significant force was required to lodge the bullet into the gun, which slowed down reloading times. In 1849, however, the French army officer Claude-Etienne Minié invented a conical lead bullet with a hollow base that expanded when fired. This meant the bullet could be easily loaded into the gun because it was smaller than the bore, but still fire accurately because its expanding base gripped the rifling of the barrel. The Minié bullet, as it was called, was accurate to 200 to 250 yards. It was first used during the Crimean War, and was later used by both the Union and the Confederacy in the American Civil War. In the United States, it was colloquially called the "minnie ball." 4 Gatling Gun The world's first machine gun was invented in 1862, during the American Civil War. The Gatling Gun, named after its inventor Richard Jordan Gatling, could fire up to 400 rounds in one minute, an exponential increase over other guns available at the time. Gatling's original model could only shoot 200 rounds per minute, though that was still a large improvement over other available firearms. The gun operated with a multi-barreled cylinder that turned so that only one barrel shot at a time. This allowed the other barrels to cool and be prepared to shoot again shortly thereafter. In the Civil War, the gun was used in the siege of Petersburg, Virginia, in 1864.
Description: The Coral Snake is colorfully marked with bright rings of black, red, and yellow. Some of the colored bands, particularly the red bands, may be speckled. Other non-poisonous species have similar colors and markings, including the Scarlet Snake and the Milk Snake. In North America, to identify the species, remember that when red touches yellow it is a coral snake. You can remember this with the poem “when red touches yellow you’re a dead fellow”. In other parts of the world the colors and banding order differ (including species with red bands bordered by black bands). Most Coral Snakes are small in size, averaging around 2-3 feet in length and 1 inch in diameter. They have short fangs around 1/8 inch long that are fixed in an erect position, and a small head relative to the body. Coral Snakes have a tendency to hold onto their victim while biting (unlike vipers, which have retractable fangs and bite and release quickly). They often chew to release venom into a wound. The venom is very powerful (Coral Snakes are members of the Elapidae family, which includes the Cobra, Mamba, and Sea Snake). The venom is neurotoxic, causing respiratory paralysis in the victim, who succumbs to suffocation. Characteristics: Coral Snakes are secretive and elusive in their habits and are therefore seldom seen. When confronted by humans, they almost always attempt to flee or bury their heads in their coils, and only bite as a last resort. Many can make a clicking or popping sound as a warning. They often remain hidden under logs or burrowed underground in cracks, moist soil, or crevices. They are nocturnal, active only during the nighttime hours. Many only come out after it rains. Symptoms: Coral Snakes have a powerful neurotoxin that paralyzes the breathing muscles. There is usually only mild pain and little swelling following a bite, but symptoms can begin within hours after the bite. 
As many as 60% of Coral Snake bites are “dry” (no venom injected), but do not assume that because there is no pain, the bite did not result in poisoning. Symptoms can include slurred speech, numbness, dizziness, double vision, drooping eyelids, muscular paralysis, loss of muscle control, difficulty swallowing, weak pulse, shock, respiratory problems, and cardiac arrest. Treatment: Production of Coral Snake antivenin ceased because it was not profitable; since the snake is elusive, very few bites occur each year. Clean the bite area and follow the directions for treating snake bites. Habitat: Found in a variety of habitats including wooded areas, swamps, palmetto and scrub areas. Coral snakes often venture into residential locations. Length: Average 60 centimeters (24 inches), maximum 115 centimeters (45 inches). Distribution: Southeast United States and west to Texas. Another genus of coral snake is found in Arizona. Coral snakes are also found throughout Central and most of South America.
Phase 3. What is Typography? Exploring Typography Students will create a self-portrait illustration from the photograph of their self-portrait, which was one of their previous projects. Students will set the placed photo layer as a template and create a new layer on which to trace their self-portraits. 1. Art Context: 2. Elements and Principles: - Elements: typefaces - Principles: harmony, balance 3. Personal Perspective: Students will choose two typefaces for their social justice poster. 4. Production ("Making" words): Students will be able to: - Learn how one color is defined differently by its hue, value, and chroma. - Learn how to repaint their self-portrait illustration without using the Eyedropper Tool to select colors. Students will understand that: - With more practice, they will be able to create precise illustrations. Essential Prior Learning - Use of the Pen Tool - Use of the Live Paint Bucket Tool - Knowledge of social justice My cooperating teacher told me that students already learned about social justice in their social studies class. For Adobe Illustrator, I previously taught students how to use the Pen Tool and Live Paint Bucket Tool, so students are in the process of learning, but they still need to develop their use of Bezier handles to create natural curves. Potential Misunderstandings and Strategies Students may not understand the importance of using specific styles of typography. For this phase of the learning segment, I attached a website as a visual resource as well as a PowerPoint slide on Google Classroom. iMac (or Desktop Computer) Drawing Tablet (optional)
Forest uptake of carbon dioxide (CO2) sequesters carbon which mitigates climate change. However, some of this benefit is offset when another greenhouse gas, nitrous oxide (N2O), is released from soils. The net balance of CO2 and N2O gases differs among forests, particularly for nitrogen-fixing trees used widely in reforestation. Researchers used field and modeling studies to compare how nitrogen-fixing and non-fixing trees affect climate mitigation by these two gases. They found that nitrogen-fixing trees best mitigate climate change on infertile soils, because they obtain their own nitrogen to support growth and CO2 uptake. However, on fertile soils or where background nitrogen levels are high, nitrogen-fixing trees stimulate N2O release and become less effective at climate mitigation. These results can guide forest management aimed at climate mitigation by clarifying the benefits of nitrogen-fixing trees and highlighting their risks when grown in regions where nitrogen is abundant in the environment. Kou-Giesbrecht, S., Funk, J.L., Perakis, S.S., Wolf, A.A., Menge, D.N., 2021, N supply mediates the radiative balance of N2O emissions and CO2 sequestration driven by N-fixing vs. non-fixing trees: Ecology, no. e03414, https://doi.org/10.1002/ecy.3414. (Photo credits, Robinia pseudoacacia: Larry Allain, U.S. Geological Survey)
Literature: Beyond the Bounds (8) Literature: Beyond the Bounds focuses on the theme of citizenship as it relates to our cultural beliefs. By examining a variety of literature, students are able to see recurring themes, unique details, and connections among cultures. Students are poised at this point in their development to put into action all that they've learned about how their world works and their place in it. A particular emphasis is placed on building reading comprehension and critical reading skills as students move through their year of literature. Students will also study vocabulary and grammar in conjunction with text evaluation. Throughout the year, students will be guided in the writing of personal and formal essays. School student accounts will be debited at the start of the year for a Membean license.
Diana Claudia Di Mario This lesson plan aims to promote multilingual and multicultural competences from early childhood by developing attitudinal competences, in order to favour the learning of a foreign language in the first years of Primary School and in the following school years. Attitudinal competence towards a foreign language means being open and attentive to what is different from oneself. It therefore means developing emotional and social skills, and the motivation for "thinking outside the box". Motivation is also the basis of learning. The teaching approach that increases motivation is game-based learning, so the activities I am going to propose are based on games. If you implemented this plan in your own classroom, please share your findings and suggestions by taking this short survey. Ella Rakovac Bekeš The aim of the lessons is to get students to practice math concepts and English vocabulary (percentages, ratios, equations) in an informal way by playing and designing games (https://view.genial.ly/5ceffadec1f8800f3d689474/game-breakout-mad-science). This is an English lesson based on a chapter in the Enterprise Plus Coursebook.
HOW TO ORGANIZE AND WRITE TYPICAL BUSINESS REPORTS Interpret and Analyze Data Analyzing and interpreting data is a vital part of any business. It can sometimes be the difference between a successful firm and a firm that fails. Unprocessed data can be meaningless to firms until it is sorted, analyzed, combined, and recombined. Fortunately for firms, there are several tabulating and statistical techniques that can help them create order from unorganized data. These techniques help simplify, organize, summarize, and classify large amounts of data into meaningful terms. Once the data is sorted into a more condensed form, the firm can understand it much more clearly and therefore draw conclusions and move forward to make the necessary recommendations. The most useful summarizing techniques include tables, statistical concepts (mean, median, and mode), correlations, and grids. Tables are an essential tool in simplifying data. Numerical data from questionnaires and surveys is usually summarized in a table. Tables use columns and rows to make quantitative information easier to comprehend. Data is often easier to understand when cross-tabulated, and tables are often used in this way to facilitate data comprehension. "When data is cross tabulated, it may become clearer to find the answer to your problem question. Calculating percentages of certain types of data collected and then ranking them in a table from greatest to least or vice versa may help you draw conclusions faster and more efficiently." (Guffey, Rhodes & Rogin, 2006) Statistical Concepts (Mean, Median, Mode) Tables allow you to organize data, while statistical methods allow you to describe it. The mean, median, and mode are all often used to mean "average". However, knowing the exact meaning of each term will allow you to better interpret organized data. Mean: The mean of a specific set of data indicates the average of that data.
Median: The median represents the midpoint in a group of figures arranged from lowest to highest or vice versa. Mode: The mode is the value in the set of data that occurs most frequently. Knowing the values of these statistical concepts can be an integral part of interpreting data. When the range of a specific set of data is given, knowing these concepts becomes even more vital, as they can be put into perspective. While tabulating and analyzing data, you may begin to see relationships among two or more variables that may help explain your findings. Once a correlation is detected, you can begin to ask yourself how and why these variables are linked. The answers to those questions may allow you to further your search for a potential solution to a problem. Keep in mind that a cause-and-effect relationship should not be assumed when none can be proven; only sophisticated research methods can allow you to prove correlations. Lastly, another technique that can be used to interpret and analyze raw data, more specifically verbal data, is the grid. Complex verbal information can be transformed into brief, convenient data. By using grids, readers can almost immediately recognize which points are supported and which are opposed. "Data arranged in a grid also works very well for projects such as feasibility studies." (Guffey et al., 2006) Consumer reports often use grids to display data. Summarize Data, Draw Conclusions and Explain Findings More often than not, the most important part of a report is the part devoted to the conclusions and recommendations. Most readers of reports jump straight to the conclusions to see what the report writer thinks the data mean. Conclusions summarize and explain the findings outlined in the report and therefore often represent the heart of a report. By analyzing and drawing conclusions from a report, you can begin to think logically in order to find solutions to the problem you have set out to solve.
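The three "averages" described above (mean, median, and mode) can be computed directly. Here is a minimal sketch in Python using the standard-library statistics module; the sales figures are hypothetical, chosen only to show how the three measures differ on the same data:

```python
import statistics

# Hypothetical monthly sales figures (in thousands), sorted for clarity.
monthly_sales = [12, 15, 15, 18, 22, 40]

mean = statistics.mean(monthly_sales)      # arithmetic average: sum / count
median = statistics.median(monthly_sales)  # midpoint of the sorted values
mode = statistics.mode(monthly_sales)      # the most frequently occurring value

print(mean)    # 20.33... (pulled up by the outlier, 40)
print(median)  # 16.5 (average of the two middle values, 15 and 18)
print(mode)    # 15 (appears twice)
```

Note how the single large value (40) raises the mean well above the median and mode, which is exactly why knowing which "average" a report cites matters when interpreting data.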
Analyzing Data to Arrive at Conclusions A data set can produce a number of conclusions; however, when drawing up conclusions, one should bear in mind the audience the report is catering to. The audience will want to use the conclusions from the report to help them solve the original report problem. Tips for Writing Conclusions: · Interpret and summarize the findings; tell what they mean. · Relate the conclusions to the report problem. · Limit the conclusions to the data presented; do not introduce new material. · Number the conclusions and present them in parallel form. · Be objective and avoid exaggerating or manipulating the data. · Use consistent criteria in evaluating options. (Guffey et al., 2006) It is very important to examine your motives before drawing conclusions. Do not allow preconceived notions or wishful thinking to cloud your reasoning. Preparing Report Recommendations Unlike conclusions, recommendations make specific suggestions for the actions that need to be taken to solve the report problem. "Remember that readers expect certain information to be in certain places. They do not expect to hunt for what they want and the harder you make it for them the more likely they are to toss your report to one side and ignore it." (http://ezinearticles.com/?Report-Writing---How-to-Format-a-Business-Report&id=96650) Report readers more often than not prefer recommendations to conclusions, because recommendations give them specific actions to take. The specificity of your recommendations depends on your authorization. You need to ask yourself "What am I commissioned to do?" and "What does the reader expect?". Understanding your audience will let you know how far you can go with your recommendations. The best recommendations offer practical suggestions that are feasible and agreeable to the audience.
It is very important to keep feasibility in mind when writing out recommendations. You do not want to recommend a solution that requires actions that are not possible for the company to undertake. Where possible, begin each recommendation with a verb so that the recommendation sounds like a command. This structure sounds forceful and confident and allows the reader to easily comprehend what actions need to be taken. Avoid words such as perhaps and maybe, as these words can reduce the strength of the recommendation. Tips for Writing Recommendations: Make specific suggestions for actions to solve the report problem. Prepare practical recommendations that will be agreeable to the audience. Avoid conditional words such as maybe and perhaps. Present each suggestion separately as a command beginning with a verb. Number the recommendations for improved readability. If requested, describe how the recommendations may be implemented. When possible, arrange the recommendations in an announced order, such as most important to least important. (Guffey et al., 2006) Keep in mind that the important thing about recommendations is that they include practical suggestions for solving the report problem. Organize Data Into a Logical Framework At this point you've collected your data and interpreted it, as well as drawn your conclusions and decided on what recommendations you are going to make. Now you are ready to logically structure and organize the data. A properly organized report will help you keep your reader's attention and therefore increase your chances of persuading them. Informational Reports are organized simply, in 3 parts. Analytical Reports are organized in 4 parts and come in 2 methods: the "direct pattern" and the "indirect pattern". The direct pattern is geared towards readers who already know about the project and just need the information up front.
The indirect pattern is appropriate when you are trying to educate or persuade your reader. Ordering Information Logically Your report obviously needs to be structured coherently. To help readers better comprehend your data, there are five main organizational methods you may use: Time: Arrange data chronologically (for example 2007, 2008, 2009). Chronologies can become boring, though, so be careful not to overuse the time method. Component: Organize data by components like location, geography, division, product, or part. Importance: Organize in order of importance, from most to least or vice versa. You must decide what you think is most important from the reader's perspective. Criteria: Organize the data in evaluative groups (e.g., when comparing two computers you'll use price, warranty, processing speed, etc.). Convention: Arrange the data according to a prescribed plan, using mostly conventional categories. This makes it easier to follow. Providing Reader Cues Introduction: A good introduction states the report's purpose and the significance of the topic, and introduces the main points as well as the order in which they will be discussed. Transitions: Transitions such as however, furthermore, and on the other hand are important because they will keep the reader's attention. These tools provide a coherent and logical flow between ideas and let the reader understand where the ideas are going. Headings: Headings emphasize the main ideas in a report. Furthermore, they provide "a break" for the reader's eyes and attention, so the report is more convenient to read through. As stated in Report Writing (2000), "Headings should be clearly, logically and accurately labelled since they reveal the organization of the report and permit quick reference to specific information. They also make the report easy to read." (http://unilearning.uow.edu.au/report/1gi.html) You may use functional heads or talking heads.
Functional heads describe the outline of a report, but the reader doesn't get much insight from them. They can be useful when discussing controversial topics due to their general nature, since they are less likely to trigger emotions in readers. Talking heads are more informative and attention-grabbing. With a little effort a heading can be both functional and talking. These tips will assist you in creating successful headings: - Headings should be informative, not just catchy. Keep them short and clear. - Capitalize and underline in appropriate places (all caps for main titles and only first-letter caps for secondary titles). - Same-level headings (e.g., all sub-headings) should be grammatically alike. - Every report page should have a minimum of one heading. (Guffey, Rhodes & Rogin, 2006) Writing Informational Reports - Deliver Information and Facts Skillful communication is becoming increasingly important in today's business world. Clayton (2006) reports, "Business decisions involve the cooperation and interaction of several individuals. Sometimes dozens of colleagues and co-workers strive in unison to realize mutual goals. Lines of communication must therefore be maintained to facilitate these joint efforts." (http://business.clayton.edu/arjomand/business/writing.html) In order to determine a company's objectives, information must be properly communicated throughout the chain of command. Informational reports deliver neutral, informal information to receptive readers who do not need to be persuaded of anything. The data often describes periodic activities. Guffey, Rhodes & Rogin (2006) make it clear that, as in any business report, "paying attention to the proper use of headings, lists, bulleted items, and other graphic highlighting, as well as clear organization, enable the readers to grasp major ideas immediately" (p. 313). There are different types of Informational Reports.
“Summary submitted by each salesperson to provide certain details to the management about his or her activities and performance over a given period. It includes information such as (1) number of customer visits made, (2) demonstrations performed, and (3) new accounts opened.” (businessdictionary.com) These reports allow managers to stay informed of operations and activities, so they can intervene if anything is not on track. Sometimes these reports contain only sales figures. When writing a periodic report to your manager, make sure to do the following: - Summarize regular activities and events performed during the reporting period - Describe irregular events deserving the attention of management - Highlight special needs and problems (Guffey, Rhodes & Rogin, 2006) Having an update section on the competition is often very valuable. A manager is usually very interested in what the competition is doing, and business operations can greatly benefit from this information. Trip, Convention, and Conference Reports If you are an employee invited to participate in a conference (seminar, workshop, symposium, etc.), you will probably be expected to submit a report on the event. It is important for the company to know that it is not wasting funds on travel and conference expenses. As Guffey et al. (2006) explain, "These reports inform management about new procedures, equipment, and laws and supply information affecting products, operations, and service" (p. 314). Your report will have an introduction, body, and conclusion. The body should focus on about 5 topics that you know will interest your manager/reader. Here is a general outline of how to write a conference report: - Begin by identifying the event (exact date, name, and location) and previewing the topics to be discussed.
-Summarize in the body three to five main points that might benefit the reader -Itemize your expenses, if requested, on a separate sheet. -Close by expressing appreciation, suggesting action to be taken, or synthesizing the value of the trip or event. (Guffey et al., 2006) Progress and Interim Reports A progress report explains itself: It reports the status and progress of an ongoing project. The report can be internal, to inform management, or external, to inform customers. In the introduction, make the purpose of the project clear. Provide a background if it is necessary to your audience, and then describe the work completed. Explain the work that is currently in progress (personnel, activities, methods and locations) and anticipate problems and possible remedies (Guffey et al., 2006). Talk about future activities. Provide the end of project date. Investigative reports are unbiased and don’t have recommendations. These reports are organized in 3 parts: The introduction, body, and summary. Guffey et al. (2006) explain how “The body-which includes the facts, findings, or discussion-may be organized by time, component, importance, criteria, or convention” (p.317). Divide the topic into 3-5 logical segments, depending on the subject. Checklist for writing Informational Reports - Begin directly - Provide a Preview - Supply background data selectively - Divide the topic - Arrange the subtopics logically - Use clear headings - Determine Degree of formality - Enhance readability with graphic highlighting - When necessary, summarize the report - Offer a concluding thought (Guffey et al., 2006) Writing Analytical Reports - Analyze Information and Persuade Reader The Analytical Report is very different from the Informational Report. Informational Reports are about presenting the facts to readers that don’t necessarily need to be convinced of anything. 
On the other hand, Analytical Reports, in addition to presenting and analyzing the data, interpret it and attempt to persuade readers to act on their recommendations. Their emphasis is on reasoning and recommendations (Guffey et al., 2006). As explained by Principles and Concepts of Technical Communication (200), teams of experts produce Analytical Reports. The experts use their professional skills to: - Determine issues; identify the factors that are causing the problems by using a study, and then attempt to solve them using a "standard professional methodology" - Learn how similar problems have been solved in the past - Deal with limitations such as "time, cost, company policy, union contracts, local and federal law" Depending on who the readers will be, the writers are sometimes direct in putting the recommendations at the beginning of the report. This is safe to do when the reader already has confidence in the writers. Directness can backfire if the reader objects to some of the ideas at the very beginning. It can take time to warm a reader up, so as not to offend their belief systems and avoid triggering negative reactions. In that case, the indirect method is best, leading the reader through the logical process that concludes with the appropriate recommendations. The 3 following analytical reports answer business questions: 1. Justification/Recommendation Reports 2. Feasibility Reports 3. Yardstick Reports Each one has a different goal and serves a different type of organization (Guffey et al., 2006). "Justification/recommendation reports follow the direct or indirect pattern depending on the audience and the topic." (Guffey et al., 2006) Managers and employees at some time or another have something to recommend to their company, such as new equipment, hiring more labor, or an increase in investment funds. There are two types of patterns for writing this sort of report: Direct and Indirect. Guffey et al.
(2006) define the Direct Pattern as "appropriate for justification/recommendation reports on nonsensitive topics and for receptive audiences" (p. 320). How to organize the report: - Briefly address the problem - Concisely present your recommendation (use action verbs) - Explain in more detail the benefits of your solution - Include a discussion of "pros, cons, and costs" - Wrap up with a summary (Guffey et al., 2006) Guffey et al. (2006) define the Indirect Pattern as "appropriate for justification/recommendation reports on sensitive topics and for potentially unreceptive audiences" (p. 320). How to organize the report: - Make reference to the problem rather than your recommendation in the subject line - Describe the problem; be specific, and use statistics and reliable quotes to support your claim - Present alternative solutions, stating the one least likely to succeed first and the best one last - Describe how the advantages of your solution outweigh its disadvantages - Summarize your recommended solution - Ask for approval to proceed (Guffey et al., 2006) Vijay Luthra (2009) defines Feasibility Reports as "Analysis and evaluation of a proposed project to determine if it (1) is technically feasible, (2) is feasible within the estimated cost, and (3) will be profitable. Feasibility studies are almost always conducted where large sums are at stake." (http://www.businessdictionary.com/definition/feasibility-study.html) They answer the question of whether or not your business proposal will work. Guffey et al. (2006) explain that "Feasibility Reports typically are internal reports written to advise on matters such as consolidating departments, offering a wellness program to employees, or hiring an outside firm to handle a company's accounting or computing operations" (p. 320). The bottom line of these reports is deciding whether or not to move forward with a given proposal. There is no need to persuade the reader: waste no time and present the decision right away.
When writing feasibility reports you should: - Declare your decision straight away - Give background information (on the problem) - Discuss the proposal's benefits - Consider any consequences that may arise - Calculate the costs (if suitable for the reader) - Show the necessary time frame (Guffey et al., 2006) Guffey et al. (2006) demonstrate that "Yardstick Reports consider alternative solutions to a problem by establishing criteria against which to weigh options" (p. 322). Those criteria become the yardstick. This method is effective when companies need to, for example, purchase equipment and compare specifications from different manufacturers. Yardstick reports are advantageous in the sense that "alternatives can be measured consistently using the same criteria". How to organize a Yardstick Report: - Describe the problem or need - Describe suggested solutions and alternatives - Establish criteria and explain how they were developed - Discuss and evaluate every option in terms of the yardstick - Come to conclusions and make suggestions (recommendations) (Guffey et al., 2006) Checklist for Writing Analytical Reports - Explain the purpose of the report - Give a brief outline of the report's organization - Summarize the recommendations - Use the direct pattern if you already have the reader's confidence, or the indirect pattern for an unreceptive audience - Discuss advantages and disadvantages of each alternative solution, saving the best one for last (for unreceptive readers) - Establish criteria - Use evidence such as facts, statistics, and other data to support your findings - Organize findings; make sure the structure is logical and legible, and make proper use of headings, lists, graphs, etc. - Develop and justify recommendations and conclusions that address the research question, using your findings - Use action verbs when making your recommendations and explain the action that needs to be taken.
(Guffey et al., 2006) For more great information on Analytical and other types of Reports, visit http://icarus.lcc.gatech.edu/info/analytic.shtml
Indian pharmaceutical major Zydus Cadila has become the first in the world to seek approval for a DNA-based vaccine against COVID-19, with trials indicating 100% efficacy against moderate disease in adolescents after three doses. The vaccine, which is likely to be approved by Indian authorities, also promises relief for those afraid of needles, as it can be delivered using a needle-free injection technology known as PharmaJet. The technology uses a narrow stream of fluid to penetrate the human skin and deliver the vaccine subcutaneously. More importantly, ZyCoV-D is likely to become the second vaccine to be developed entirely in India. The first was Covaxin, which was developed by researchers from Bharat Biotech and the Indian Council of Medical Research. What makes ZyCoV-D different is that it uses DNA technology, and it may become the first widely used vaccine in the world to work on this principle. DNA vaccines work by introducing DNA into the human cell. This DNA instructs the cell to produce a particular protein, and this protein leads to an immune reaction in the body. In this case, the human cell starts producing the spike protein of the COVID-19 virus. This triggers an immune reaction, generating antibodies that can neutralize any attack by the virus later on, giving the person immunity to the virus. The most widely used COVID-19 vaccines in the US, from Pfizer and Moderna, are based on a similar technology. However, unlike ZyCoV-D, these vaccines use RNA to induce the human cell to make the virus's spike protein and trigger an immune reaction. RNA is also a 'life chemical' in that it too carries the code for making proteins. But messenger RNA operates outside the cell nucleus (the core of the cell), in the cytoplasm, where the cell's machinery translates it directly into protein.
DNA, on the other hand, is kept inside the nucleus of the human cell to protect it from being modified by chemicals that enter the cell, such as chemicals from food and the air. This is because damage to DNA can result in diseases like cancer and genetic mutations in offspring, while damage to RNA tends to have less long-term consequences. Because of the highly protected and sensitive nature of DNA, it has been considered more challenging to use DNA vaccines to produce the required proteins. However, some DNA vaccines have been tried out for animal vaccination. DNA vaccines are easier to store, as DNA does not disintegrate at regular refrigerator temperatures, while RNA requires deep-freeze facilities that have to be custom built for the purpose. This is the primary reason why RNA vaccines like those of Pfizer and Moderna have not become popular in countries like India. TESTED IN ADOLESCENTS Zydus Cadila said its DNA vaccine was tested in 28,000 people as part of its Phase III clinical trial, making it the largest such trial for any COVID-19 vaccine in India. The company said the results of the vaccine were impressive in the 12-18 age group, a segment that is yet to be vaccinated in India. "This was..the first time that any COVID-19 vaccine has been tested in adolescent population in the 12-18 years age group in India. Around 1000 subjects were enrolled in this age group and the vaccine was found to be safe and very well tolerated. "The tolerability profile was similar to that seen in the adult population. Primary efficacy of 66.6% has been attained for symptomatic RT-PCR positive cases in the interim analysis. Whereas, no moderate case of COVID-19 disease was observed in the vaccine arm post administration of the third dose suggesting 100% efficacy for moderate disease. No severe cases or deaths due to COVID-19 occurred in the vaccine arm after administration of the second dose of the vaccine," it said.
Sharvil Patel, MD of Cadila Healthcare, said the vaccine has proven its safety despite being the first ever plasmid DNA vaccine for human use. "This breakthrough marks a key milestone in scientific innovation and advancement in technology. As the first ever plasmid DNA vaccine for human use, ZyCoV-D has proven its safety and efficacy profile in our fight against COVID-19. The vaccine when approved will help not only adults but also adolescents in the 12 to 18 years age group. "This has been possible because of the collective support of the Government, the regulators, the volunteers who had faith in the process, the investigators who conducted the multi-centric trials all through these months, the suppliers who worked closely with us and our dedicated team of researchers and vaccine professionals who worked tirelessly on the vaccine and also manufactured the trial doses," he said. The company has also evaluated a two-dose regimen for the ZyCoV-D vaccine using a higher dose (3 mg) per visit, and the immunogenicity results were found to be equivalent to those of the current three-dose regimen. "This will further help in reducing the full course duration of vaccination while maintaining the high safety profile of the vaccine in the future," it added. ZyCoV-D is stored at 2-8 degrees Celsius, but the company said it has shown good stability at temperatures of 25 degrees for at least three months. "The thermostability of the vaccine will help in easy transportation and storage of the vaccine and reduce any cold chain breakdown challenges leading vaccine wastage. The plasmid DNA platform provides ease of manufacturing with minimal biosafety requirements (BSL-1)," the company said. Zydus Cadila has a wide range of capabilities in developing and manufacturing vaccines. It was the first company in India to develop and indigenously manufacture a vaccine to combat Swine Flu during the pandemic in 2010.
It has also indigenously developed numerous vaccines, including a tetravalent seasonal influenza vaccine (the first company in India to indigenously develop and commercialize one), an inactivated rabies vaccine (WHO prequalified), a varicella vaccine (the first Indian company to indigenously develop one and receive market authorization), measles-containing vaccines (MR, MMR, Measles), a typhoid conjugate vaccine and a pentavalent vaccine. The company is also working on Measles-Mumps-Rubella-Varicella (MMRV), human papillomavirus, Hepatitis A and Hepatitis E vaccines.
The arrival of a new wave of Jewish immigrants from Central Europe in the nineteenth century ushered in a new stage in the history of American Judaism just as the Sephardic Jewish community was becoming Americanized. The influx of immigrants, primarily Ashkenazic Jews from Central Europe, dissolved the hegemony of the Sephardim, transforming a somewhat homogeneous Sephardic synagogue-community into an ethnic plurality of Jewish subcommunities. From the 1820s to the 1870s the Jewish population of America multiplied by nearly fifty times, increasing from an estimated 6,000 in 1824 to an estimated 250,000 in 1878. The newcomers expanded existing Jewish communities and founded new ones. The first congregation west of the Allegheny Mountains was established in Cincinnati, Ohio in 1824 and the first congregation west of the Mississippi in Chesterfield, Missouri in 1837. In this period of social and religious diversification, one of the most visible effects on the Jewish community was the breakup of the unified colonial and federalist synagogue-community into an ethnic complex of competing congregations: the synagogue-community became a community of synagogues. The first congregational schisms within Jewish life occurred in the leading communities of Charleston in 1824 and New York the following year. In both instances, a group of young, Americanized members of the synagogue-community requested permission to split from the main body of the congregation to form their own prayer group along more progressive lines. Their modest intentions were to reform their tradition by abolishing the practice of auctioning ritual honors, introducing vernacular language into the service, democratizing synagogue administration, and adding a weekly sermon. As these reformers stressed, none of these efforts were intended to change the essentials of Judaism, but only to modify some of its external functions in order to best preserve the whole. 
In both Charleston and New York, reformers promoted their modifications without the benefit of religious leadership. New York’s Chevra Chinuch Nearim (Society for the Education of the Young) went so far as to state its intention to remain leaderless: “the society intends in no way to create distinctions, but each member is to fulfill the duties in rotation, having no Parness [wealthy leader of a community] or Chazzan.” Many of the innovative characteristics of the nineteenth-century American synagogue would arise from later, similar reform impulses. Just as religious and cultural diversity created a new era of Jewish growth and migration, breaking up the classic American synagogue-community into a community of synagogues, the Sephardic hazzan was now replaced by a new rabbinical figure. While there were many rabbis among the new immigrants, the prototypical figure of this period was Isaac Leeser (1806-1868), a traditionalist, unordained, Ashkenazic hazzan who served a Sephardic synagogue in Philadelphia. How did a figure like Leeser so profoundly personify antebellum American Judaism? Moderate by nature, Leeser insisted that American Jews were still part of a worldwide Jewish people and that Americanization could also preserve tradition. As a leader, he was not simply a functionary of the Sephardic synagogue, but a man who set about refashioning the Jewish community by creating a myriad of new social, cultural, and educational institutions. From 1843 to 1868 he published a weekly newspaper, The Occident and American Jewish Advocate, and in 1867 he founded Philadelphia’s Maimonides College as the first American school for rabbis, though it closed soon after his death. He also translated Jewish texts into an American idiom, producing in 1845 the first Jewish translation of the Torah into English published in the United States.
Most of the mass-energy in the universe, about 95%, is ‘dark’. By dark we mean that it does not emit any form of electromagnetic radiation. The existence of Dark Matter is inferred indirectly from its gravitational effects. Observations of the motions of stars and gas in galaxies, cluster galaxy radial velocities, the hot gas properties of clusters, and gravitational lensing of distant background galaxies by foreground galaxy clusters all suggest large amounts of Dark Matter exist. For example, the observed radial velocities of cluster galaxies suggest dynamically-based cluster masses that are factors of 10 or more higher than the mass deduced by adding up the observed cluster content (stars, gas, dust). Comparisons of galaxy cluster and supercluster structure with N-body computational simulations also suggest the need for some sort of Dark Matter. Dark Matter is also needed for gravity to amplify the observed small fluctuations in the Cosmic Microwave Background into the large-scale structure that we observe today. Dark Matter candidates are either baryonic or non-baryonic, or a mixture of both. The non-baryonic forms are usually subdivided into two classes – Hot Dark Matter (HDM) and Cold Dark Matter (CDM). HDM requires a particle with near-zero mass; neutrinos are the prime example. The Special Theory of Relativity requires that nearly massless particles move at speeds very close to c, the speed of light. However, HDM does not fully account for the large-scale structure of galaxies observed in the universe. Using neutrinos as an example, their highly relativistic velocities would tend to smooth out fluctuations in the matter density as they propagated through the universe. They would be good at forming very large structures like superclusters, but not smaller ones like galaxies. CDM, by contrast, requires particles or objects that move at sub-relativistic velocities; candidates include supersymmetric particles and axions. 
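The factor-of-10 discrepancy quoted above comes from a virial-theorem argument: a cluster's galaxy velocity dispersion and size together imply a dynamical mass of order M ~ σ²R/G. A minimal sketch, using assumed but typical values for a rich cluster (the dispersion and radius below are illustrative, not measurements of any particular cluster):

```python
# Order-of-magnitude virial mass estimate for a galaxy cluster.
# The velocity dispersion and radius are assumed typical values.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # one megaparsec in metres

sigma = 1000e3         # line-of-sight velocity dispersion, m/s (~1000 km/s)
radius = 1.5 * MPC     # characteristic cluster radius

# Virial theorem, up to geometry factors of order unity: M ~ sigma^2 R / G
m_dyn = sigma**2 * radius / G
print(f"Dynamical mass ~ {m_dyn / M_SUN:.1e} solar masses")  # a few x 10^14
```

The luminous mass (stars plus gas) of such a cluster is typically an order of magnitude smaller, which is the discrepancy attributed to Dark Matter.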
Comparisons between observed large-scale structure and N-body simulations favour CDM as the major, if not total, component of Dark Matter. A major CDM candidate is WIMPs (Weakly Interacting Massive Particles). The search for these particles involves attempts at direct detection by sensitive detectors and at production by particle accelerators. Several teams are trying to detect WIMPs with ultra-sensitive, deep-underground detectors. A collision between a WIMP and an atom would force the WIMP to change direction and slow, whilst the atom recoils. This recoil can be detected in a number of ways. In semiconductors and some liquids and gases, an ionisation charge released by the recoiling atom can be measured. Much work has also been done to determine whether any baryonic matter remains unaccounted for in the universe. MACHOs (Massive Compact Halo Objects) are Dark Matter candidates consisting of condensed objects such as black holes, neutron stars, white dwarfs, very faint stars, or non-luminous objects like planets. The search for these uses gravitational microlensing to see the effect of such objects in our Galaxy on background stars, and many observing programs continue this work. The MACHO project at Mt. Stromlo Observatory observed star fields in our Galaxy and towards the Large Magellanic Cloud. In summary, whilst the project did detect microlensing events, the number of events was too small to suggest that a MACHO population of dark objects between 0.1 and 0.9 M⊙ could account for more than 20% of the known mass discrepancy in the Galaxy halo. If there is a baryonic component to Dark Matter, it appears to be quite small. This result is supported by other microlensing programs such as OGLE and EROS. The decay products of WIMPs may be detected by the Large Hadron Collider (LHC): the LHC is not expected to detect WIMPs themselves, but could detect their decay product(s), called SuperWIMPs. Interestingly, astronomy, not particle physics, may be able to solve its own ‘mass problem’. 
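The nuclear recoil described above can be quantified with simple elastic-scattering kinematics: the maximum recoil energy is E_max = 2μ²v²/m_N, where μ is the WIMP-nucleus reduced mass. A sketch under assumed, illustrative parameters (a 100 GeV WIMP, a xenon target, a typical galactic speed), not the configuration of any specific experiment:

```python
# Maximum nuclear recoil energy for an elastic WIMP-nucleus collision,
# E_max = 2 * mu^2 * v^2 / m_N, where mu is the reduced mass.
# WIMP mass, target, and speed are assumed illustrative values.
m_wimp = 100.0             # WIMP mass, GeV/c^2 (assumed)
m_xe = 122.0               # xenon nucleus mass, GeV/c^2 (~131 * 0.931)
beta = 230e3 / 3.0e8       # typical galactic WIMP speed as a fraction of c

mu = m_wimp * m_xe / (m_wimp + m_xe)     # reduced mass, GeV/c^2
e_max_gev = 2 * mu**2 * beta**2 / m_xe   # maximum recoil energy, GeV
e_max_kev = e_max_gev * 1e6              # convert GeV -> keV
print(f"Max recoil energy ~ {e_max_kev:.0f} keV")  # tens of keV
```

Recoils of only tens of keV are why direct-detection experiments need keV-scale energy thresholds and very low backgrounds.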
Dark Matter particles, in some models, can annihilate and produce a slew of secondary particles. Two WIMPs, each with a mass of between 50 and 200 GeV, can annihilate to produce two gamma-ray photons, the energy of each equalling the rest-mass energy of one WIMP. Gamma-ray telescopes, like the Fermi Gamma-ray Space Telescope, are observing the Galactic center (where the highest density of nearby Dark Matter should exist) and searching for specific gamma-ray energy signatures. Alternatives to Dark Matter do exist. A variation of gravity, MOND (MOdified Newtonian Dynamics), claims to explain many of the observational signatures listed above. MOND suggests that at extremely low gravitational accelerations Newton’s law for the gravitational force may break down: acceleration is no longer linearly proportional to force at such low values.
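The standard formulation of MOND makes this precise with an interpolating function μ and an empirical acceleration scale a₀ of roughly 1.2 × 10⁻¹⁰ m s⁻²:

```latex
% Newtonian regime (a >> a_0): \mu -> 1 and the usual law is recovered.
% Deep-MOND regime (a << a_0): \mu(x) ~ x, so a^2 / a_0 = a_N.
\mu\!\left(\frac{a}{a_0}\right) a = a_N = \frac{G M}{r^2},
\qquad
a \simeq \sqrt{a_N\, a_0} \quad \text{for } a \ll a_0 .
```

In the deep-MOND regime the circular-orbit condition a = v²/r then gives v⁴ = GMa₀, i.e. a rotation speed independent of radius, which is the flat-rotation-curve observation MOND was designed to explain.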
The robot cucumber-picker is able to harvest ripe cucumbers without damaging them. With the ever-increasing need for food in the world, it is unsurprising that new and innovative ways to make farming practice more efficient are numerous. We have already seen farming innovations such as remote-control vehicles used to sow and harvest a field of barley. Another example, which shows that robots are entering the industry too, is a gardening assistant capable of watering plants and scaring away pests. Now the Cucumber Gathering – Green Field Experiments (CATCH) project also seeks to use robots, designing a robot for harvesting. The EU-funded project is formed by various partners including the Leibniz Institute for Agricultural Engineering and Bioeconomy and the Spanish Centre for Automation and Robotics. Combining the expertise of these researchers, the project seeks to create a robotic alternative to human cucumber harvesters. In their own words: “the CATCH experiment aims at developing a flexible, cost-efficient and reconfigurable/scalable hortibotic outdoor solution for automated harvesting in challenging natural conditions.” The Germany-based project aims to develop a robot system consisting of dual arms built from lightweight modules. Using their gripper arms, the robots will be capable of picking cucumbers. Human workers can pick an average of 800 cucumbers an hour, so to be financially viable the project must achieve a result at least as efficient as human workers. One existing challenge is how to create a robot able to identify ripe cucumbers hidden among foliage. Another is how to ensure that the robot arms do not damage the cucumbers as they pick them. Following initial field testing in July 2017, the technology proved to function well: the robot was able to detect ripe cucumbers with 95 percent accuracy. However, the researchers are continuing their work in an attempt to reach an accuracy of 100 percent. 
How else could robots help us to increase farming efficiency?
Temporal range: Tremadocian–Frasnian. Lichida is an order of typically spiny trilobites that lived from the Tremadocian (Early Ordovician) to the Frasnian stage of the Devonian period. These trilobites usually have 8–13 thoracic segments. Their exoskeletons often have a grainy texture or tubercles. Some species are extraordinarily spiny, having spiny thoracic segments that are as long as or longer than the entire body, from cephalon (head) to pygidium (tail). The segments of the pygidium are leaf-like in shape and also typically end in spines.
The production possibilities frontier (PPF) is the boundary between those combinations of goods and services that an economy can produce and those that it cannot. The opportunity cost of producing one more unit of a good is its marginal cost, and a marginal benefit curve shows the relationship between the marginal benefit of a good and the quantity consumed. Opportunity cost can be illustrated using production possibility frontiers: a PPF for, say, computers and textbooks shows the maximum output of textbooks that must be given up for each extra computer produced. A production–possibility frontier (PPF) or production possibility curve (PPC) is a curve which shows the combinations of two goods that can be produced when available resources are fully and efficiently employed. In the context of a PPF, opportunity cost is directly related to the shape of the curve. Opportunity cost is measured in the number of units of the second good forgone for one or more units of the first good. If the PPF is a straight line, the opportunity cost is constant as production of different goods changes; but opportunity cost usually varies depending on the start and end points. In the standard guns-and-butter diagram, producing 10 more packets of butter at a low level of butter production costs the loss of 5 guns, shown as a movement from A to B. At point C, the economy is already close to its maximum potential butter output, and to produce 10 more packets of butter, 50 guns must be sacrificed, as with a movement from C to D. The ratio of gains to losses is determined by the marginal rate of transformation (MRT): the slope of the production–possibility frontier at any given point. The MRT increases as the transition is made from AA to BB. The slope defines the rate at which production of one good can be redirected, by reallocation of productive resources, into production of the other. 
It is also called the marginal "opportunity cost" of a commodity; that is, it is the opportunity cost of X in terms of Y at the margin. It measures how much of good Y is given up for one more unit of good X, or vice versa. The shape of a PPF is commonly drawn as concave to the origin to represent increasing opportunity cost with increased output of a good. The marginal opportunity cost of guns in terms of butter is simply the reciprocal of the marginal opportunity cost of butter in terms of guns. If, for example, the absolute slope at point BB in the diagram is equal to 2, then to produce one more packet of butter, the production of 2 guns must be sacrificed. The production-possibility frontier can be constructed from the contract curve in an Edgeworth production box diagram of factor intensity. As an economy specializes more and more in one product (such as moving from point B to point D), the opportunity cost of producing that product increases, because it is using more and more resources that are less efficient in producing it. With increasing production of butter, workers from the gun industry will move to it. At first, the least qualified or most general gun workers will be transferred into making more butter, and moving these workers has little impact on the opportunity cost of increasing butter production. However, the cost of producing successive units of butter will increase as resources that are more and more specialized in gun production are moved into the butter industry. With varying returns to scale, however, the PPF may not be entirely linear in either case. 
Specialization in producing successive units of a good determines its opportunity cost (say, from mass production methods or specialization of labor). Due to scarcity, society chooses what goods and services to produce. The opportunity cost of a course of action is the benefit forgone by not choosing its next best alternative. In other words, when society chooses what goods and services to produce, it is also choosing what goods and services not to produce. Suppose there are only two goods produced in the economy. The PPC shows all the different combinations of the two goods that can be produced when resources are fully and efficiently employed, given the state of technology. The PPC is a series of points rather than a single point, and it reflects scarcity, choice and opportunity cost. Scarcity: an increase in the production capacity of the economy leads to an outward shift in the PPC, resulting in a decrease in scarcity, and vice versa. When the PPC shifts outwards, some previously unattainable points become attainable. The production capacity of the economy can increase due to an increase in the quantity or the quality of the factors of production. Choice: a change in the tastes and preferences of society leads to a movement along the PPC, which reflects a change in choice. The tastes and preferences of society may change due to technological advancements. 
For instance, the invention of the smartphone and tablet computing has led to a change in the tastes and preferences of society towards electronic publications, so the market may be more inclined to produce more of them. Opportunity cost: the PPC is concave to the origin because the opportunity cost of producing each good increases as its quantity increases.
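The guns-and-butter arithmetic above can be made concrete. A minimal sketch in Python, where the point coordinates are assumed values chosen only to reproduce the gun losses stated in the text (10 extra packets of butter cost 5 guns near A, but 50 guns near C):

```python
# Opportunity cost along a concave PPF, using the guns-and-butter numbers
# from the text. The coordinates below are assumed, illustrative values.
# Each point is (packets of butter, guns).
ppf_points = {
    "A": (10, 95), "B": (20, 90),   # +10 butter costs 5 guns
    "C": (60, 50), "D": (70, 0),    # +10 butter costs 50 guns
}

def opportunity_cost(p_from, p_to):
    """Guns given up per extra packet of butter between two PPF points."""
    b0, g0 = ppf_points[p_from]
    b1, g1 = ppf_points[p_to]
    # The (absolute) slope between the points is the marginal rate
    # of transformation over that interval.
    return (g0 - g1) / (b1 - b0)

print(opportunity_cost("A", "B"))  # 0.5 guns per packet at low butter output
print(opportunity_cost("C", "D"))  # 5.0 guns per packet near maximum output
```

The tenfold rise in the slope between the two intervals is exactly the increasing opportunity cost that makes the PPF concave to the origin.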
In a marriage of quantum science and solid-state physics, researchers at the National Institute of Standards and Technology (NIST) have used magnetic fields to confine groups of electrons to a series of concentric rings within graphene, a single layer of tightly packed carbon atoms. This tiered “wedding cake,” which appears in images that show the energy level structure of the electrons, experimentally confirms how electrons interact in a tightly confined space according to long-untested rules of quantum mechanics. The findings could also have practical applications in quantum computing. Graphene is a highly promising material for new electronic devices because of its mechanical strength, its excellent ability to conduct electricity and its ultrathin, essentially two-dimensional structure. For these reasons, scientists welcome any new insights on this wonder material. The researchers, who report their findings in the Aug. 24 issue of Science, began their experiment by creating quantum dots — tiny islands that act as artificial atoms — in graphene devices cooled to just a few degrees above absolute zero. Electrons orbit quantum dots in a way that’s very similar to how they orbit atoms. Like rungs on a ladder, they can only occupy specific energy levels according to the rules of quantum theory. But something special happened when the researchers applied a magnetic field, which further confined the electrons orbiting the quantum dot. When the applied field reached a strength of about 1 Tesla (some 100 times the typical strength of a small bar magnet), the electrons packed closer together and interacted more strongly. As a result, the electrons rearranged themselves into a novel pattern: an alternating series of conducting and insulating concentric rings on the surface. When the researchers stacked images of the concentric rings recorded at different electron energy levels, the resulting picture resembled a wedding cake, with electron energy as the vertical dimension. 
A scanning tunneling microscope, which images surfaces with atomic-scale resolution by recording the flow of electrons between different regions of the sample and the ultrasharp tip of the microscope’s stylus, revealed the structure. “This is a textbook example of a problem — determining what the combined effect of spatial and magnetic confinement of electrons looks like — that you solve on paper when you’re first exposed to quantum mechanics, but that no one’s actually seen before,” says NIST scientist and co-author Joseph Stroscio. “The key is that graphene is a truly two-dimensional material with an exposed sea of electrons at the surface,” he adds. “In previous experiments using other materials, quantum dots were buried at material interfaces so no one had been able to look inside them and see how the energy levels change when a magnetic field was applied.” Graphene quantum dots have been proposed as fundamental components of some quantum computers. “Since we see this behavior begin at moderate fields of just about 1 Tesla, it means that these electron-electron interactions will have to be carefully accounted for when considering certain types of graphene quantum dots for quantum computation,” says study co-author Christopher Gutiérrez, now at the University of British Columbia in Vancouver, who performed the experimental work at NIST with co-authors Fereshte Ghahari and Daniel Walkup of NIST and the University of Maryland. This achievement also opens possibilities for graphene to act as what the researchers call a “relativistic quantum simulator.” The theory of relativity describes how objects behave when moving at or close to light speed. And electrons in graphene possess an unusual property — they move as if they are massless, like particles of light. Although electrons in graphene actually travel far slower than the speed of light, their light-like massless behavior has earned them the moniker of “relativistic” matter. 
The new study opens the door to creating a tabletop experiment to study strongly confined relativistic matter. Collaborators on this work included researchers from the Massachusetts Institute of Technology, Harvard University, the University of Maryland NanoCenter, and the National Institute for Material Science in Ibaraki, Japan. The measurements suggest that scientists may soon find even more exotic structures produced by the interactions of electrons confined to solid-state materials at low temperatures.
The SR (Set-Reset) flip-flop is one of the simplest sequential circuits and consists of two gates connected as shown in the figure. Notice that the output of each gate is connected to one of the inputs of the other gate, giving a form of positive feedback or ‘cross-coupling’. The circuit has two active-low inputs marked S̄ and R̄, ‘NOT’ being indicated by the bar above the letter, as well as two outputs, Q and Q̄. The table shows what happens to the Q and Q̄ outputs when a logic 0 is applied to either the S̄ or R̄ input. The SR Flip-flop Truth Table
- The Q output is set to logic 1 by applying logic 0 to the S̄ input.
- Returning the S̄ input to logic 1 has no effect. The 0 pulse (high-low-high) has been ‘remembered’ by Q.
- Q is reset to 0 by logic 0 applied to the R̄ input.
- As R̄ returns to logic 1, the 0 on Q is ‘remembered’ by Q.
Problems with the SR Flip-flop: In the table, Q̄ is the inverse of Q. However, in row 5 both inputs are 0, which makes both Q and Q̄ = 1, and as they are no longer opposite logic states, although this state is possible, in practical circuits it is ‘not allowed’. In row 6 both inputs are at logic 1 and the outputs are shown as ‘indeterminate’; this means that although Q and Q̄ will be at opposite logic states, it is not certain whether Q will be 1 or 0. Notice, however, that in the absence of any input pulses, both inputs are normally at logic 1. This is normally OK, as the outputs will be at the state remembered from the last input pulse. The indeterminate or uncertain logic state only occurs if the inputs change from 0,0 to 1,1 together. This should be avoided in normal operation, but is likely to happen when power is first applied. This could lead to uncertain results, but the flip-flop will work normally once an input pulse is applied to either input. The SR flip-flop is therefore a simple 1-bit memory. If the S̄ input is taken to logic 0 then back to logic 1, any further logic 0 pulses at S̄ will have no effect on the output. 
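The truth-table behaviour above can be sketched in a few lines of Python, modelling the latch as two cross-coupled NAND gates (the NAND implementation is assumed from the active-low inputs described; the iteration count and starting state are illustrative):

```python
# Cross-coupled NAND model of the basic active-low SR flip-flop.
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(nS, nR, q, nq):
    """Iterate the cross-coupled NANDs until the outputs settle."""
    for _ in range(4):  # a few passes are enough for this feedback loop
        q, nq = nand(nS, nq), nand(nR, q)
    return q, nq

q, nq = 0, 1                      # assume a known starting state
q, nq = sr_latch(0, 1, q, nq)     # set pulse on S̄: Q -> 1
print(q, nq)                      # 1 0
q, nq = sr_latch(1, 1, q, nq)     # both inputs idle at 1: state remembered
print(q, nq)                      # 1 0
q, nq = sr_latch(1, 0, q, nq)     # reset pulse on R̄: Q -> 0
print(q, nq)                      # 0 1
q, nq = sr_latch(0, 0, q, nq)     # 'not allowed': both outputs driven to 1
print(q, nq)                      # 1 1
```

Note how the 1,1 input case simply returns whatever state was last stored, which is the 1-bit memory action, while the 0,0 case produces the disallowed Q = Q̄ = 1 state from row 5 of the table.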
The Clocked SR Flip-flop: The figure below shows a useful variation on the basic SR flip-flop, the clocked SR flip-flop. By adding two extra NAND gates, the timing of the output changeover after a change of logic states at S and R can be controlled by applying a logic 1 pulse to the clock (CK) input. Note that the inputs are now labeled S and R, indicating that they are now ‘active high’. This is because the two extra NAND gates are disabled while the CK input is low, so the outputs are completely isolated from the inputs and retain any previous logic state; but when the CK input is high (during a clock pulse) the input NAND gates act as inverters. The main advantage of the CK input is that the output of this flip-flop can now be synchronized with many other circuits or devices that share the same clock. This arrangement could be used as a basic memory location by, for example, applying different logic states to a bank of 8 flip-flops and then applying a clock pulse to CK to cause the circuit to store a byte of data.
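The clock gating described above can be added to the same NAND model: two input NANDs pass the (now active-high) S and R through to the latch only while CK is high. A self-contained sketch (gate structure as described in the text; starting state illustrative):

```python
# Clocked SR flip-flop: two extra NAND gates gate the active-high S and R
# inputs, so the latch can only change state while CK = 1.
def nand(a, b):
    return 0 if (a and b) else 1

def clocked_sr(S, R, CK, q, nq):
    nS = nand(S, CK)   # input NANDs act as inverters while CK is high,
    nR = nand(R, CK)   # and output a constant 1 (inactive) while CK is low
    for _ in range(4):  # settle the cross-coupled output pair
        q, nq = nand(nS, nq), nand(nR, q)
    return q, nq

q, nq = 0, 1
q, nq = clocked_sr(1, 0, 0, q, nq)   # S high but no clock: nothing happens
print(q, nq)                         # 0 1
q, nq = clocked_sr(1, 0, 1, q, nq)   # clock pulse arrives: Q is set
print(q, nq)                         # 1 0
q, nq = clocked_sr(0, 1, 1, q, nq)   # R high during a clock pulse: Q resets
print(q, nq)                         # 0 1
```

The first call shows the isolation property: with CK low, both internal lines sit at 1 (the latch's "remember" input), so changes at S and R are ignored until the clock pulse.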
Today, my students will complete the second quiz for Unit 1. This quiz will encompass all of the standards presented in this unit. The quiz will serve as a guide to inform me as to how to set up flexible group activities tomorrow for the purpose of clearing misconceptions and properly preparing students for the upcoming Unit 1 test. This assessment contains 11 multiple-choice questions in total. For this reason, students will be given approximately 25 minutes to complete the quiz: 2 minutes per question, plus 3 minutes to account for those who have come unprepared and need a pencil or paper, etc. Once the 25-minute time frame for the quiz is over, I will have my students exchange papers and I will provide answers for grading. This will give students immediate feedback on their performance and allow them an opportunity to ask questions about the items they struggled with. The students will then be given time and opportunity to discuss why certain concepts are such a struggle. I will guide my students through each of the questions. I will also record the number of questions missed for each student. To set up for this, all pencils will have to be put away, and I will pass out red marking pens or pencils for grading purposes. The student doing the grading will write their name at the bottom of the paper that they are grading, for accountability purposes. Then grading will commence. After we finish grading, I will ask for the number correct from each paper. Then the papers will be returned to their owners. The returning of the papers provides students with immediate feedback. Using this feedback, the students will take their quiz home, correct any mistakes that they made, and return the quiz the following day to receive a higher grade based upon their corrections. Allowing them to do this will cut down on the urge to cheat during or after grading, and it will also encourage students to study.
This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action. Hepatitis B In Children WHAT YOU NEED TO KNOW: What is hepatitis B? Hepatitis B is inflammation of the liver caused by hepatitis B virus (HBV) infection. The infection is called acute when a person first becomes infected. The infection becomes chronic after 6 months. Chronic hepatitis B is less common in children than in adults. How is HBV spread? HBV can spread from a mother to her unborn child. An infected mother can also infect her baby during delivery. HBV also spreads through contact with infected blood or body fluids. HBV can enter your child's body through a cut or scratch in his skin or through his mucous membranes. HBV can live on objects and surfaces for 7 days or longer. What increases my child's risk for hepatitis B?
- The following increase the risk for your adolescent:
  - A stick from an infected needle, including needles used for illegal drugs and for procedures such as tattooing
  - Unprotected sex with an infected person, sex with more than one partner, or being a male who has sex with males
- The following increase the risk for your child or adolescent:
  - An object with infected blood or body fluids on it touching a wound
  - Close contact with an infected person
  - Travel to areas of the world where HBV is common
  - Living or working in a long-term care facility or correctional facility
  - Rarely, a blood, organ, or tissue transplant from an infected donor
What are the signs and symptoms of hepatitis B? Your child may have no signs or symptoms and may not know he has been infected. Once he is infected with HBV, it can take from 1 to 6 months before symptoms develop. 
He may have any of the following:
- Dark urine or pale bowel movements
- Fatigue and weakness
- Loss of appetite, nausea, and vomiting
- Jaundice (yellow skin or eyes), itchy skin, or skin rash
- Joint pain and body aches
- Pain in the right upper side of your child's abdomen
How is hepatitis B diagnosed? Your child's healthcare provider will ask about his signs and symptoms and any health problems he has. Tell him if your child has other infections, such as HIV or hepatitis C. Tell him if your adolescent drinks alcohol or uses any illegal drugs. These can harm his liver. The healthcare provider may also ask about your adolescent's sex partners. Your child may need any of the following tests:
- Blood tests are used to see if your child is infected with HBV and to check his liver function.
- An ultrasound may be done to check for signs of HBV and to look for other liver problems.
- A liver biopsy is used to test a sample of your child's liver for swelling, scarring, and other damage. A liver biopsy may help healthcare providers learn if your child needs treatment for HBV.
How is hepatitis B treated? Hepatitis B may last a short time and go away on its own without treatment. Your child's healthcare provider will monitor him closely for signs of liver disease. If needed, treatment may help improve your child's liver function and decrease his symptoms. He may need any of the following:
- Medicines may be given to help fight HBV and keep it from spreading in your child's body.
- A plasma or platelet transfusion may be needed if your child's blood is not clotting as it should. Plasma and platelets are parts of your child's blood that help his blood clot. He will get the transfusion through an IV.
- A liver transplant is surgery to replace your child's diseased liver with a donor liver. Your child may need a liver transplant if he has severe liver disease or liver failure.
What can I do to help prevent the spread of HBV?
- Have your child cover any open cuts or scratches. 
If blood from a wound gets on a surface, clean the surface with bleach right away. Put on gloves before you clean. Throw away any items with blood or body fluids on them, as directed by your child's healthcare provider.
- Do not let your child share personal items. These items include toothbrushes, nail clippers, and razors. Tell him not to share needles.
- Tell household members that your child has HBV. Anyone who has not been vaccinated against hepatitis B may need to start treatment to help prevent infection. Everyone should wash their hands often, especially after using the bathroom and before eating. Regular handwashing is important for your child and everyone who lives with him.
- Talk to your adolescent about safe sex. If your adolescent is sexually active, tell him to use a condom during sex. Sexually active girls should have their male partners wear a condom.
- Protect your baby. If you are pregnant, ask your healthcare provider for more information on keeping your baby from getting HBV. He will need a vaccination or treatment if you plan to breastfeed.
- Do not let your child donate blood. Donations are screened for HBV, but it is best not to donate at all.
Manage hepatitis B:
- Have your child eat a variety of healthy foods. Healthy foods include fruits, vegetables, low-fat dairy products, beans, lean meats and fish, and whole-grain breads. Ask if your child needs to be on a special diet.
- Have your child drink more liquids. Liquids help your child's liver function properly. Ask your healthcare provider how much liquid your child should drink each day and which liquids are best for him.
- Talk to your adolescent about not drinking alcohol. Alcohol can increase liver damage. Talk to your healthcare provider if your adolescent drinks alcohol and needs help to stop.
- Talk to your adolescent about not smoking. Nicotine can damage blood vessels and make it more difficult to manage hepatitis B. Smoking can also lead to more liver damage. 
Ask your healthcare provider for information if your adolescent currently smokes and needs help to quit. E-cigarettes and smokeless tobacco still contain nicotine. Talk to your healthcare provider before your adolescent uses these products. When should I seek immediate care? - Your child has a sudden, severe headache and head pressure. - Your child has new or increased bruising or red or purple dots on his skin. He may also have bleeding that does not stop easily. - Your child's abdomen is swollen. - Your child has severe nausea or cannot stop vomiting. - You see blood in your child's urine or bowel movements, or he vomits blood. - Your child has new or increased yellowing of his skin or the whites of his eyes. - Your child has severe pain in his upper abdomen. When should I contact my child's healthcare provider? - The palms of your child's hands are red. - Your child has a fever. - Your child has new or increased swelling in his legs, ankles, or feet. - Your child's muscles get smaller and weaker. - You have questions or concerns about your child's condition or care. Care Agreement: You have the right to help plan your child's care. Learn about your child's health condition and how it may be treated. Discuss treatment options with your child's healthcare providers to decide what care you want for your child. The above information is an educational aid only. It is not intended as medical advice for individual conditions or treatments. Talk to your doctor, nurse or pharmacist before following any medical regimen to see if it is safe and effective for you. © Copyright IBM Corporation 2018. Information is for End User's use only and may not be sold, redistributed or otherwise used for commercial purposes. All illustrations and images included in CareNotes® are the copyrighted property of A.D.A.M., Inc. or IBM Watson Health. Always consult your healthcare provider to ensure the information displayed on this page applies to your personal circumstances.
If any branch in the network contains a voltage source, it is slightly more difficult to apply nodal analysis. One way to overcome this difficulty is supernode analysis. In this method, the two adjacent nodes that are connected by a voltage source are reduced to a single node, and the equations are then formed by applying Kirchhoff's current law as usual. This is explained with the help of Fig. 2.44. It is clear from Fig. 2.44 that node 4 is the reference node. Applying Kirchhoff's current law at node 1, we get Due to the presence of the voltage source Vx between nodes 2 and 3, it is slightly difficult to find the current there. The supernode technique can be conveniently applied in this case. Accordingly, we can write the combined equation for nodes 2 and 3 as follows. From the above three equations, we can find the three unknown voltages. Determine the current in the 5 Ω resistor for the circuit shown in Fig. 2.45. Solution At node 1 At node 2 and 3, the supernode equation is The voltage between nodes 2 and 3 is given by The current in the 5 Ω resistor Solving Eqs 2.50, 2.51 and 2.52, we obtain i.e. the current flows towards node 3.
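Since the component values of Figs. 2.44-2.45 are not reproduced here, the supernode procedure can be illustrated on a hypothetical circuit (all values below are assumptions, not the book's example): a 5 A current source into node 1, R1 = 2 Ω from node 1 to ground, R2 = 4 Ω between nodes 1 and 2, a 10 V source (Vx) between nodes 2 and 3, R3 = 5 Ω from node 2 to ground, and R4 = 10 Ω from node 3 to ground. The sketch below sets up the node-1 KCL equation, the combined supernode equation, and the voltage constraint, and solves them as a linear system:

```python
import numpy as np

# Hypothetical circuit (not the book's Fig. 2.45):
# 5 A source into node 1, R1 = 2 ohm (node1-gnd), R2 = 4 ohm (node1-node2),
# Vx = 10 V source between nodes 2 and 3, R3 = 5 ohm (node2-gnd),
# R4 = 10 ohm (node3-gnd).  Unknowns: V1, V2, V3 (node 4 is the reference).

A = np.array([
    [1/2 + 1/4, -1/4,       0.0],   # KCL at node 1
    [-1/4,       1/4 + 1/5, 1/10],  # KCL for the supernode (nodes 2 and 3 combined)
    [0.0,        1.0,      -1.0],   # source constraint: V2 - V3 = 10
])
b = np.array([5.0, 0.0, 10.0])

V1, V2, V3 = np.linalg.solve(A, b)
I5 = V2 / 5.0    # current through the 5-ohm resistor, flowing to ground
```

For these assumed values the solver gives V1 ≈ 8.57 V, V2 ≈ 5.71 V, V3 ≈ −4.29 V, so about 1.14 A flows through the 5 Ω resistor.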
Peer Tutoring & Buddy System Lowell (1994) highlighted the benefits of peer tutoring strategies for children with conductive hearing loss. Lowell suggested that for these children the benefit of peer tutoring may go beyond the educational development of the child: peer tutoring also develops an appreciation in other children of the problems a child with conductive hearing loss may be experiencing. This increased awareness and understanding amongst a child's peers may have a significant impact on the child's self-esteem. Peer tutoring is particularly successful within Indigenous communities, where Indigenous students are used to learning by observation and being helped by their peers. Along with peer tutoring, the adoption of a buddy system will greatly assist children with conductive hearing loss. Children with conductive hearing loss may become withdrawn and introverted, even during group activities. The purpose of a buddy system is to encourage children who lack confidence to join in these activities. Apart from the obvious social and emotional benefits, the buddy system encourages children with conductive hearing loss to develop their language by becoming familiar with the vocabulary and content of lessons. In light of these benefits, caregivers are strongly encouraged to consider the use of peer tutoring and buddy systems whenever appropriate.
In everyday usage, motivation is defined as "a sense of need, desire, fear etc., that prompts an individual to act", and emotion is defined as "a strong feeling often accompanied by a physical reaction" (Webster's Dictionary, 1988). But in the field of psychology, both motivation and emotion are hypothetical constructs, processes which cannot be directly observed or studied. Motivation is thought of as a process that both energizes and directs goal-oriented behavior, whereas emotions are subjective experiences, feelings that accompany motivational states (Weber, 1991). I believe that stress is primarily a process of motivation since it requires some sort of adaptation (coping) to a demand or set of demands. On the other hand, the emotions that we experience due to stress can also be studied. This relates stress to emotions, but stress in itself is not considered a particular emotion. For example, one could experience different kinds of emotions due to stress, such as anger, anticipation, and fear. The effects of stress are directly linked to coping. The study of coping has evolved to encompass a large variety of disciplines, beginning with all areas of psychology, such as health psychology, environmental psychology, neuropsychology, and developmental psychology, extending to areas of medicine, and spreading into anthropology and sociology. Dissecting coping strategies into three broad components (biological/physiological, cognitive, and learned) will provide a better understanding of what this seemingly immense area is about. Biological/physiological component - The body has its own way of coping with stress. Any threat or challenge that an individual perceives in the environment triggers a chain of neuroendocrine events.
These events can be conceptualized as two separate responses: the sympathetic/adrenal response, with the secretion of catecholamines (epinephrine, norepinephrine), and the pituitary/adrenal response, with the secretion of corticosteroids (Frankenhaeuser, 1986). The sympathetic/adrenal response takes the message from the brain to the adrenal medulla via the sympathetic nervous system, which secretes epinephrine and norepinephrine. This is the basic "fight or flight" response (Cannon, 1929), in which the heart rate quickens and the blood pressure rises. In the pituitary/adrenal response, the hypothalamus is stimulated and releases corticotrophin releasing factor (CRF), which travels to the pituitary gland through the bloodstream; adrenocorticotropic hormone (ACTH) is then released from the pituitary gland to the adrenal cortex. The adrenal cortex in turn secretes cortisol, a hormone that reports back to the original brain centers, together with other body organs, to tell it to stop the whole cycle. But since cortisol is a potent hormone, prolonged secretion will lead to health problems such as the breakdown of the cardiovascular system, digestive system, musculoskeletal system, and the recently established immune system. Also, when the organism does not have a chance to recover, both catecholamine and cortisol depletion will result, leading to the third stage of the General Adaptation Syndrome, exhaustion (Selye, 1956). Social support has also been established by studies to be linked to stress (Bolger & Eckenrode, 1991; House, et al., 1988). This can be seen as a dimension of the biological component since it is closely linked to the biological environment of the individual. There are many aspects to social support; the major categories are emotional, tangible, and informational. Personality types such as the so-called Type A personality have been defined by characteristics such as competitiveness, impatience, and hostility.
Hostility has been linked to coronary heart disease, which is thought to be caused by stress (Rosenman, 1978). Eysenck (1988) coined the term Type C personality for those who are known to be repressors and are prone to cancer. Hardiness is also a personality characteristic that seems to have much to do with how an individual handles stress. Hardiness is defined as having a sense of control, commitment, and challenge towards life in general. Kobasa (1979) studied subjects who were laid off in large numbers by AT&T when federal deregulation took place, and found that the people who were categorized as having hardy personalities were mentally and emotionally better off than the others. Although it may be possible to modify one's personality, research has shown it to be heritable (Rahe, Herrig, & Rosenman, 1978; Parker, & Barret, 1992). Cognitive component - The cognitive approach to coping is based on a mental process of how the individual appraises the situation; the level of appraisal determines the level of stress and the unique coping strategies that the individual undertakes (Lazarus & Folkman, 1984). There are two types of appraisal, primary and secondary. A primary appraisal is made when the individual makes a conscious evaluation of the matter at hand as either a harm or a loss, a threat, or a challenge. Secondary appraisal then takes place when the individual asks "What can I do?" by evaluating the coping resources around him/her. These resources include physical resources, such as how healthy one is or how much energy one has; social resources, such as the family or friends one can depend on for support in one's immediate surroundings; psychological resources, such as self-esteem and self-efficacy; and also material resources, such as how much money one has or what kind of equipment one might be able to use.
How much personal control one perceives oneself to have is another factor to consider when looking at coping from the cognitive perspective. Usually an individual will feel more stressed in uncontrollable situations. Also, since personal control is a cognitive process, the greater one's sense of personal control, the better one's sense of coping ability will be. The categories of attribution theory give a good picture of the extreme ends of the "in control/lack of control" continuum. An individual will perceive the most control when situations fit the categories of internal, transient, and specific. At the opposite end of the scale are the categories of external, stable, and global, where the person will perceive a lack of control. There are other ways to approach coping from a cognitive perspective, such as constructive versus destructive thinking as conceptualized by Epstein and Meier (1989), a concept similar to that of optimistic versus pessimistic thinking (Taylor, 1991), the perceived level of self-efficacy and self-esteem, and so on. Learned component - The learned component of coping includes everything from the various social learning theories, which assume that much of human motivation and behavior is the result of what is learned through experiential reinforcement, to the learned helplessness phenomenon, which is believed to have a relationship to depression; even the implications of the particular culture or society in which the stress at hand occurs can be included in this component. Some examples from the social learning theories would be the wide range of stress management techniques that have been found to help ease stress.
Changing how you cognitively process a particular situation (so-called cognitive restructuring), changing how you behave in a particular situation (so-called behavior modification), biofeedback, which uses operant conditioning to alter involuntary responses mediated by the autonomic nervous system, and the numerous relaxation techniques such as meditation, breathing, and exercise are all part of what is learned through experiential reinforcement. The learned helplessness phenomenon was linked to depression by researchers such as Coyne, Aldwin, and Lazarus (1981) when they studied subjects who tried to exert control when it was not possible to do so. Cultures and societies have their own sets of rules about what they perceive to be stressful or not (Colby, 1987). For example, educational systems differ greatly from culture to culture. In Asian cultures such as Japan and Korea, a great deal of importance is attributed to how students do in school. Access to higher education, leading to better jobs, is determined solely through academic performance. The amount of stress that students experience due to this is very high; high enough that a number of suicides are reported each year over failed exams. People will have different responses in a monogamous culture than in a polygamous culture. In Africa, where polygamy is the norm, finding out that a significant other has another partner can mean more workforce to take care of the children and the household chores. If the husband does not take on many wives, it can become a strain on the rest of the wives. An interesting study using Holmes and Rahe's (1967) stressful life event measure in South Africa found that it correlated very little with standard distress measures (Swartz, Elk, & Teggin, 1983). This suggests the existence of such cultural/societal differences. Understanding how these three components integrate is fundamental to further understanding the process of stress and coping.
The transactions between the "mind" (cognition and learning) and the "brain" (biological/physiological), and the role each plays with regard to stress, have been debated for decades. The reductionist model of stress comprises a purely physiological perspective in which the brain is the sole determinant of the presence of stress. In the interactionist model, both the brain and the mind affect stress, but it is still a uni-directional path from both the brain and the mind to stress. The transactionist model comprises a bi-directional path, where stress in turn influences both the brain and the mind. Thus, by way of stress, the brain and the mind mutually affect one another. This model can also be applied to the process that is bound to come after stress, that of coping. In this model, stress is replaced by coping and the two factors can be thought of as the "environment" and the "person". As with the reductionist model for stress, simple responses to stressful environmental stimuli lead the person to choose a particular coping strategy. In the interactionist model of coping, both the environment and the person affect coping, also in a uni-directional way as in the model of stress. It has been found that the use of coping strategies is influenced by personality (Bolger, 1990), and also by the type of environment (Mattlin, Wethington, & Kessler, 1990). The transactionist model of coping is a model in which all factors affect one another, in all directions (Lazarus & Folkman, 1984). Many researchers who have studied subjects at midterms or finals have found that coping is clearly a complex process, influenced by personality characteristics (Bolger, 1990; Friedman et al., 1992; Long & Sangster, 1993), situational demands (Folkman & Lazarus, 1986; Heim et al., 1993), and even the social and physical characteristics of the setting (Mechanic, 1978).
As we have seen in the various theoretical paradigms of coping, every factor, from physiological, psychological, and social to cultural, both affects and is affected by the coping strategies. Just as there is said to be an optimal level of stress at which an individual functions most effectively, I propose that there is an optimal level of coping which minimizes costs and maximizes benefits across all of the various factors combined. A coping strategy that may work to improve a romantic relationship may have negative social, cultural, or even psychological consequences: you choose not to see your friends so that you have more time to spend with your romantic partner, or you move in with that person when it is considered a cultural taboo, or you are so psychologically dominated by that person that you don't have a mind of your own. In such cases, the individual has the illusion of effectively coping with a particular stress, while what they are really doing is creating many others. Also, since each factor has the power to influence the others, the true form of the transactional theory can only be captured when time is included as one of the variables. Longitudinal studies are crucial in order to truly reflect the long-term effects and processes that take place within the whole coping process. A national behavioral science research agenda has been developed by the psychological science community to address critical areas of concern to this country. It specifically addresses the need to further investigate the effects that exposure to stress has on the human immune system and brain biochemistry. It also presses to clarify how coping can directly relate to reducing one's stress. Health Psychology - Stress and Coping. A thorough overview of the field of stress and coping in an easy-to-comprehend linking text format. Useful since the important studies are mentioned and cited. The Health Resource Network.
A non-profit health education organization committed to developing new and effective programs for improving people's health and well-being. Practical ideas and solutions are recommended by the organization for everyday stressors. The American Institute of Stress. A non-profit organization founded in 1978 to serve as a clearinghouse for information on all stress-related subjects. They have a monthly newsletter which reports on the latest advances in stress research and related health topics not currently available via the internet. Parent Library: Coping with Stress. ERIC (Clearinghouse on Elementary and Early Childhood Education) at the University of Illinois Children's Research Center hosts this site, informing parents about how their children cope with stress and giving strategies for helping them. Coping with Stress: Adolescent Differences. This site provides a summary of a five-year study presented at the American Educational Research Association. The article discusses the different coping strategies that boys and girls use. Mind Tools - Sports Psychology: Stress Reduction Techniques. A good source with practical examples to see how the various components (biological, cognitive, and learned) can each be targeted for stress reduction. Research looks at how children fare in times of war. A study that shows how children react differently to war, depending on their culture, national identity, and family and community support systems.
Many different processes play important roles in the climate system. In chapter 4 we have already learned about radiation and the global energy budget. Here we want to discuss atmospheric and oceanic circulations and how they transport heat and water, and we'll explore in a little more depth Earth's water cycle, how it penetrates all climate system components, and how it is linked to the energy cycle. a) Atmospheric Circulation Annual mean surface temperatures on Earth range from less than −40°C in Antarctica to +30°C in the tropics (Fig. 1). This raises the question: why is it warmer in the tropics than at the poles? The answer is, of course, because there is more sunlight in the tropics. Due to the curvature of Earth's surface, the equator receives more incident solar radiation per unit area than the poles (Fig. 2). The most solar radiation is received by a surface perpendicular to the sun's rays; the more tilted Earth's surface is with respect to the sun's rays, the less energy it receives. This is similar to holding a flashlight perpendicular to a surface versus at an angle. The area lit is smaller if the flashlight shines perpendicular onto the surface; this configuration maximizes the energy input per unit area. The area lit grows as the angle between the beam and the surface normal grows, and in the extreme case of a 90° angle, or more, no light is received at all. This situation corresponds to the poles or the dark side of the Earth. Satellites have measured radiative fluxes at the top of the atmosphere. Those data can be used to calculate the heat gain from absorbed solar radiation (ASR) and the heat loss from emitted terrestrial radiation (ETR) as a function of latitude (Fig. 3). The data show ASR values of around 300 Wm-2 in the tropics and 60 Wm-2 at the poles. Thus, the equator-to-pole difference is about 240 Wm-2. However, values for the ETR are only around 250 Wm-2 in the tropics and between 150 and 200 Wm-2 at the poles.
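The geometric effect described above can be sketched numerically: at equinox, the solar energy incident per unit surface area scales roughly with the cosine of latitude. This is a deliberate simplification that ignores the atmosphere, axial tilt, and averaging over the day:

```python
import math

def relative_insolation(lat_deg):
    """Relative solar flux per unit surface area at equinox (cosine law).
    Simplification: ignores atmosphere, axial tilt, and daily averaging."""
    return max(0.0, math.cos(math.radians(lat_deg)))

# The same beam spread over a more tilted surface delivers less energy per area
for lat in (0, 30, 60, 89):
    print(f"{lat:3d} deg: {relative_insolation(lat):.2f}")
```

At 60° latitude the beam is spread over twice the area as at the equator, so the flux per unit area is halved; at the pole (90°) it drops to zero, matching the flashlight analogy.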
Thus, the equator-to-pole difference is only about 50 to 100 Wm-2. The difference between the absorbed and emitted radiation gives the net heat gain from these fluxes. In the tropics the gain is positive: there is more energy gain than energy loss. At the poles, on the other hand, there is more energy loss than energy gain. Therefore, if no other processes were involved, the equator would warm up and the poles would cool down. But this is not the case, which implies a heat transport from the tropics towards the poles. Taking the difference ASR − ETR and integrating it from one pole to the other gives the total meridional heat flux (Fig. 4). In the southern hemisphere values are negative, indicating southward heat transport. Positive values in the northern hemisphere represent northward transport. Poleward heat fluxes peak at mid-latitudes with values of between 5 and 6 PW. Most of the meridional heat transport is carried by the atmosphere (4-5 PW) whereas the ocean is responsible for a smaller portion (1-2 PW). Complex and turbulent motions in the atmosphere, such as those seen in visualizations from satellites, play an important part in this heat transfer. Air is compressible. The weight of overlying air compresses the air beneath and increases the pressure closer to the surface (Fig. 5). Therefore, pressure decreases with height, and the surface pressure depends on the density and mass of the air above. Let's imagine a non-rotating Earth (Fig. 6). In this case, the air at the poles would be cold and the air at the equator would be hot. Since colder air is denser than warm air, the pressure at the north pole would be larger than at the equator. The air at the equator would rise and the air at the poles would sink. At the surface, the air would flow from high to low pressure, thus from the poles to the equator. At high altitudes the air would move from the equator to the poles. However, Earth does rotate, which creates the Coriolis effect.
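The pole-to-pole integration of ASR − ETR described above can be sketched with an idealized net-radiation profile. As an assumption (in place of the satellite data themselves), the net flux is modeled by the second Legendre polynomial in sin(latitude), N(φ) = −N2·(3sin²φ − 1)/2 with N2 = 100 Wm-2, which gives +50 Wm-2 at the equator and −100 Wm-2 at the poles, roughly matching the numbers quoted in the text:

```python
import numpy as np

R = 6.371e6     # Earth radius in m
N2 = 100.0      # amplitude of the idealized net flux, W/m^2 (assumption)

lat = np.linspace(-90.0, 90.0, 1801)
phi = np.radians(lat)
net = -N2 * (3.0 * np.sin(phi)**2 - 1.0) / 2.0   # idealized ASR - ETR profile

# Northward transport F(phi): integrate net flux times the area of each
# latitude band, 2*pi*R^2*cos(phi) dphi, from the south pole northward
integrand = net * 2.0 * np.pi * R**2 * np.cos(phi)
dphi = np.diff(phi)
F = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dphi)))

peak_pw = F.max() / 1e15          # peak transport in petawatts
peak_lat = lat[np.argmax(F)]      # latitude of the peak
print(f"peak northward transport: {peak_pw:.1f} PW at {peak_lat:.0f}N")
```

This idealized profile gives a peak of roughly 5 PW near 35°N (with a mirror-image southward peak near 35°S), consistent with the 5-6 PW range quoted above; the transport returns to zero at the north pole because the globally integrated net radiation vanishes.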
Earth's rotation causes a deflection of air and water masses towards the right (left) in the northern (southern) hemisphere. The Coriolis force is a consequence of angular momentum conservation. Assume an air parcel at rest at the equator. (An analogy would be a spinning ice dancer with extended arms.) Its angular momentum is M = R×U, where U = 40,000 km/day ≈ 500 m/s is approximately the velocity of the Earth's surface at the equator and R is the distance from the axis of rotation. Now move the air northward. (The ice dancer pulls his/her arms in.) This will decrease R. In order to conserve angular momentum, U has to increase. If the air moved to about 60°N, its distance from the axis of rotation would have decreased by about one half. Therefore, U must have doubled to about 1,000 m/s. Since the Earth's surface at 60°N itself moves eastward at only about 250 m/s, the air parcel would have a velocity of roughly 750 m/s relative to the Earth's surface. Such high speeds never occur on Earth because of friction and turbulence, but this simple example still explains qualitatively the high eastward wind velocities of around 40 m/s observed in the mid-latitude jet streams (Fig. 7). Rising of warm moist air at the equator causes water vapor condensation due to cooling of the air during the ascent. Clouds form and precipitation occurs. Some of the deepest cumulonimbus clouds on Earth form in the tropics. They can reach the top of the troposphere or higher. The cool, relatively dry air then moves poleward. Now the Coriolis effect kicks in and deflects the air towards the right (left) in the northern (southern) hemisphere, which creates the jet stream. The air cools by emitting longwave radiation to space. This increases its density and the air descends back to the surface in the subtropics (~30°N/S). During the descent the air warms and its relative humidity decreases. This leads to dry conditions in the subtropics, indicated by the major deserts at those latitudes. Subsequently the dry air moves back towards the equator.
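The back-of-the-envelope angular-momentum argument can be put into a few lines of code. Keeping the same simplification used in the text (conserving M = R×U for the total eastward velocity, with no friction or pressure forces), the zonal wind relative to the surface at latitude φ works out to U_eq·(1/cos φ − cos φ):

```python
import math

U_EQ = 500.0  # m/s, approximate eastward speed of Earth's surface at the equator

def relative_zonal_wind(lat_deg):
    """Eastward wind relative to the local surface for a parcel moved from
    rest at the equator, conserving M = R*U (the text's simplification)."""
    c = math.cos(math.radians(lat_deg))
    u_parcel = U_EQ / c     # total eastward speed after moving poleward (R halved -> U doubled at 60 deg)
    u_surface = U_EQ * c    # eastward speed of the surface at that latitude
    return u_parcel - u_surface

print(relative_zonal_wind(60.0))   # parcel moved from the equator to 60N
```

At 60°N this gives 750 m/s, far larger than the ~40 m/s observed in the jet streams, because in the real atmosphere friction and turbulence prevent true angular-momentum conservation.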
The Coriolis force deflects it towards the right (left) in the northern (southern) hemisphere, creating the easterly trade winds in the tropics. During this movement along the sea surface the air picks up water vapor from evaporation. Once the air returns to the equator it is saturated with water vapor (close to 100% relative humidity). The resulting meridional overturning cells in the tropical atmosphere are called Hadley cells, or the Hadley circulation. Two cells, one in each hemisphere, exist only during the fall and spring, whereas during summer/winter there is only one major cell with rising air just slightly off the equator in the summer hemisphere, where the heating is largest. The belt of rising air close to the equator is called the Intertropical Convergence Zone (ITCZ), due to the convergence of air along the surface. The ITCZ is further north in the northern hemisphere summer and further south in the southern hemisphere summer, although on average it is slightly north of the equator because the northern hemisphere is slightly warmer than the southern hemisphere due to ocean heat transport from the southern to the northern hemisphere (Frierson et al., 2013). Water in the Hadley Cell Let's follow an air parcel of about 1 kg mass (~1 m3 at the surface) during its travels along the Hadley cell and estimate its water vapor content using Fig. 16 of chapter 4. - Starting at the ITCZ we assume the temperature is 30°C and the air is fully saturated with water vapor. How many grams of water vapor does it contain? - The air ascends to the top of the troposphere. It cools to about -30°C. It is still at saturation. How many grams of water vapor does it contain? - How many grams of water vapor has the air parcel lost? - Now the air parcel moves poleward and descends in the subtropics. The descent causes warming. Will the water vapor content change during the descent? - Let's assume the surface temperature is close to 30°C. What will be the relative humidity of the air?
- During its travels near the surface evaporation from the ocean will quickly increase the air parcel’s water vapor content close to saturation. How many grams of water vapor will the air parcel have gained? Surface air moving from the high pressure at subtropical latitudes towards lower pressure at mid-latitudes also experiences the Coriolis effect. This leads to the prevailing westerly winds at mid-latitudes. Another important feature of the atmospheric circulation at mid-latitudes is the growth, movement, and decay of synoptic weather systems (transient eddies) that dominate weather variability and heat transport there. Transient eddies are the low and high pressure systems that move eastward, some of which can be associated with storms. Explore these features of the global atmospheric circulation in this animation of weather variations throughout a whole year from satellite observations. You may notice the pulsing of convection over tropical Africa and South America. This is caused by the diurnal (daily) cycle of surface heating. Fig. (8) shows the observed global distribution of precipitation. Note the ITCZ as the band of high precipitation close to the equator, the regions of low precipitation in the subtropics and the bands of high precipitation at mid-latitudes in the paths of the storm tracks over the North Pacific and North Atlantic. In the southern hemisphere we see an additional band between 50-60°S. Precipitation has a strong effect on vegetation as can be seen from the similarities between Figures 8 and 9. Regions of large precipitation such as tropical Africa, South America, and Asia have dense vegetation, whereas regions of little precipitation such as the Sahara, the Arabian Peninsula, regions in central Asia, southwestern parts of North America, central and western Australia, parts of South America west of the Andes, and southwestern Africa also have sparse desert or steppe vegetation. 
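Fig. 16 of chapter 4 is not reproduced here, but the water-vapor amounts asked for in the Hadley cell box above can be estimated from a standard empirical formula for saturation vapor pressure (the Magnus/Tetens approximation, used here as an assumption in place of the figure):

```python
import math

def q_sat(T_celsius, p_hpa):
    """Saturation specific humidity in g per kg of air.
    Uses the Magnus/Tetens approximation for saturation vapor pressure."""
    e_s = 6.112 * math.exp(17.67 * T_celsius / (T_celsius + 243.5))  # hPa
    return 1000.0 * 0.622 * e_s / p_hpa

q_surface = q_sat(30.0, 1000.0)   # warm saturated air at the ITCZ
q_top = q_sat(-30.0, 250.0)       # cold saturated air near the tropopause
lost = q_surface - q_top          # vapor condensed out during the ascent
print(q_surface, q_top, lost)
```

This gives roughly 26 g/kg at the surface and only about 1 g/kg aloft, i.e. some 25 g of water condensed out per kilogram of air, close to the ~30 g figure the text quotes from the chapter 4 figure. On the descent the vapor content stays fixed near 1 g/kg, so back at 30°C the relative humidity is only a few percent, explaining the subtropical deserts.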
This relation is not a surprise considering that photosynthesis requires water (see box Photosynthesis and Respiration in chapter 5). b) The Hydrologic Cycle Water plays a fundamental role in the climate system. It is involved in Earth's energy cycle and links physical and biological processes. Water has some remarkable properties due to its molecular structure. Hydrogen bonds in liquid water result from attractive electric forces between the positively charged hydrogen atoms of one molecule and the negatively charged oxygen atom of a neighboring molecule. A large amount of energy input is required to overcome this force for a transition from the liquid to the vapor phase (Fig. 10). This energy is called the latent heat of vaporization. It is about 2,300 joules per gram of water. The same amount of energy is released during condensation. The lapse rate Γ = ΔT/Δz is the rate of temperature decrease ΔT with height Δz in the atmosphere. Let's try to estimate the effect of condensation of water vapor in the ascending branch of the Hadley cell on the temperature of the upper atmosphere (z = 10 km). In the previous box we calculated that approximately 30 g of water vapor were lost from 1 kg of air during its ascent. - How much latent heat of condensation was released? - How much would that added heat have increased the temperature of the air parcel, assuming a specific heat capacity of air of cp,air = 1 J/(g°C)? The observed lapse rate of the atmosphere is on average about Γm = -6.5 °C/km, which is close to the moist adiabatic lapse rate. (Adiabatic means that no heat is added to or removed from the air parcel.) This is in contrast to the dry adiabatic lapse rate, which is approximately Γd = -10 °C/km. Thus, given a surface air temperature of 30°C, the air at 10 km altitude at the equator would be -70°C in a dry atmosphere compared with -35°C in the real, moist atmosphere.
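The two questions in the latent-heat box can each be answered with a one-line estimate, using the text's round numbers (30 g condensed, latent heat ≈ 2,300 J/g, cp,air = 1 J/(g°C), a 1 kg parcel):

```python
# Latent heat released by condensing 30 g of water vapor out of 1 kg of air
L = 2300.0          # J per gram of water (latent heat of vaporization)
m_vapor = 30.0      # grams condensed during the ascent
Q = L * m_vapor     # joules released into the air parcel

# Warming of the 1 kg (1000 g) air parcel, with cp_air = 1 J/(g C)
cp_air = 1.0
m_air = 1000.0
dT = Q / (cp_air * m_air)

# Temperature at 10 km under dry vs. moist adiabatic lapse rates
T_surface = 30.0
T_dry = T_surface - 10.0 * 10.0    # -10 C/km over 10 km
T_moist = T_surface - 6.5 * 10.0   # -6.5 C/km over 10 km
print(Q, dT, T_dry, T_moist)
```

The ~69,000 J released would warm the parcel by about 69°C if deposited all at once; spread over the ascent, it is what lifts the 10-km temperature from −70°C (dry) to −35°C (moist), a difference of the same order of magnitude.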
Our calculation above was not quite correct due to the various assumptions we made, but the order of magnitude was right. This example illustrates the large effect of latent heat release on upper air temperatures. Condensation occurs when the air is at 100% relative humidity and when cloud condensation nuclei are available. Cloud condensation nuclei are small particles in the air. In the atmosphere condensation typically occurs when the air is cooled, e.g. during ascending motions in convection or when air is lifted over mountains. The latent heat released during condensation in clouds leads to warming and thus more intense upward motions. This is an important driver of convection and storms (Fig. 11). Evaporation occurs when the relative humidity rh = q/qsat of the air above a water surface is less than 100%. q is the specific humidity, that is, the amount of water vapor (in grams) per mass of moist air (in kg), and qsat is the specific humidity at saturation. The lower the relative humidity, the higher the rate of evaporation E ~ qsat − q. Stronger winds also cause more evaporation, similar to blowing over your hot coffee or tea to cool it down. Evaporation leads to cooling of the remaining liquid water since it removes the fastest molecules, while the slower ones stay behind. This principle is also at work in air conditioners and refrigerators, in which air is cooled by evaporation of a coolant. Evaporative cooling is important in keeping Earth's surface, and especially the ocean, cool. Over vegetated land areas transpiration of water by plants also cools the surface. Plants can limit their water loss through transpiration by closing their stomata. Another property of water that is important for climate is its large heat capacity. The table below compares the heat capacity of water with that of air. On a per-gram basis water already has more than four times the heat capacity of air. Moreover, the density of water is 1000 times that of air.
Therefore, on a per volume basis the heat capacity of water is 4200 times that of air. As a result, the top 2 m of the ocean has the same heat capacity as the whole atmosphere.

| Quantity | Symbol | Units | Air | Water |
|---|---|---|---|---|
| Specific Heat Capacity | cp | J/(g°C) | 1 | 4.2 |
| Volumetric Heat Capacity | cvol = cpρ | J/(m³°C) | 1000 | 4,200,000 |

Experiment: Heat Capacity

The differences in heat capacity of air versus water can be demonstrated in a simple experiment with two balloons. You can do this at home. Fill one balloon with air, the other with water. We will hold the flame of a lighter to the balloon.

- But first make a guess what will happen. Will there be a difference in the results?
- Now perform the experiment with the air balloon first. What happened?
- Now perform the experiment with the water balloon. Is there a difference?
- Did you expect these results?

If you don’t want to do the experiment yourself you can watch a video here.

Land also has a much lower heat capacity than the ocean. Let’s consider the heat budget equation for two cases: an air column over the ocean and one over land. Let’s also assume the system was initially in balance, i.e. Δ(CT)/Δt = I – O = 0 (zero), which means the rate of change of the heat content CT is zero and thus the temperature T is constant. C is the heat capacity, which is larger for the column with the ocean underneath. Now we add a forcing ΔF on the right-hand side such that Δ(CT)/Δt = ΔF. Because the heat capacity is a constant (it does not change with time) we can divide by C to get the temperature change ΔT/Δt = ΔF/C. This equation implies that the temperature change over the ocean will be slower than over land because the ocean column has the larger heat capacity C.

The effects of the differences in heat capacity between the ocean and land can be seen in Fig. 12, which shows the amplitude of the seasonal cycle in surface temperatures (summer minus winter).
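The budget argument ΔT/Δt = ΔF/C can be made concrete with a toy calculation. In this sketch only the volumetric heat capacity of water comes from the table above; the forcing and the effective mixing depths (50 m for an ocean column, 1 m for land) are illustrative assumptions:

```python
# Temperature response dT/dt = dF / C for columns with different heat capacities.
c_water = 4.2e6  # J/(m^3 degC), volumetric heat capacity of water (table above)
forcing = 10.0   # W/m^2, constant forcing dF (illustrative value)

# Effective column heat capacities per unit area, J/(m^2 degC) (assumed depths):
C_ocean = c_water * 50.0  # ocean mixes heat over a ~50 m deep surface layer
C_land = c_water * 1.0    # land conducts heat over roughly ~1 m of soil

seconds_per_year = 3.15e7
warming_ocean = forcing * seconds_per_year / C_ocean  # degC per year
warming_land = forcing * seconds_per_year / C_land    # degC per year

print(f"Ocean column: {warming_ocean:.1f} degC/yr")
print(f"Land column: {warming_land:.1f} degC/yr")
print(f"Land responds {warming_land / warming_ocean:.0f}x faster")  # 50x
```

The absolute numbers are not realistic (real columns also exchange heat with the atmosphere and with deeper layers), but the ratio shows why, for the same forcing, temperatures over land respond much faster than over the ocean.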
The seasonal cycle in the forcing is much larger at higher latitudes than in the tropics, which explains why the seasonal temperature variations in the tropics are smaller than at higher latitudes. But you also see a larger difference between the seasonal cycle over land and over the ocean at similar latitudes. E.g. over the North Pacific at 40°N temperatures in summer are only about 10°C warmer than in winter, whereas in the interior of North America they are 30°C warmer. The largest amplitudes of the seasonal cycle are over east Siberia, because it is the furthest downstream from the ocean (remember the prevailing westerly winds at those latitudes), and over Antarctica, because it is isolated for dynamic and topographic reasons. Dynamically, the strong westerly winds over the Southern Ocean inhibit meridional transport. Topographically, the sheer height of the ice sheet removes it from what is happening at the sea surface.

In the data explorations in chapter 1 you may have found similar features, such as a smaller seasonal cycle closer to the ocean than in the continental interior. Most people live close to the ocean, presumably at least partly because climate variations there are more dampened and less extreme than in the interior of the continents.

The large heat capacity of the ocean not only dampens seasonal temperature variations but also those on other timescales. We have seen in chapter 2 (Fig. 2) that observed warming over the past 100 years is also smaller over the ocean than over land. Now we understand that the observed land-sea contrast is due, at least in part, to the differences in heat capacities. In fact, about 90% of the observed increase in Earth’s heat content goes into the ocean (Fig. 13). Most of the remaining 10% goes into ice and land, whereas the heat gain of the atmosphere is very small in comparison.

Fig. (14) shows a schematic of the global water cycle.
Most water is contained in the ocean (more than one billion cubic km), whereas the atmosphere contains only a relatively small amount (13 thousand cubic km). Evaporation removes about 400 thousand cubic km from the ocean each year, most of which precipitates back over the ocean. About 40 thousand cubic km are transported in the atmosphere from the ocean to land each year. Over land this water precipitates together with about 70 thousand cubic km of recycled water from evapotranspiration from land surfaces.

Observations show increases in water vapor content in the atmosphere (Fig. 15). This is consistent with our understanding of the physics of the hydrologic cycle and its dependency on temperature (Clausius-Clapeyron relation). Warmer air can hold more water vapor, and due to the presence of the oceans there is no lack of water supply to the global atmosphere.

c) Ocean Circulation

The general, planetary-scale circulation of the ocean can be separated into a wind-driven component that dominates the upper ocean and a density-driven component that occupies the deep ocean. Five large gyres are the main features of the surface circulation in the subtropics (Fig. 16). The easterly trade winds push water towards the west in the tropics. The water piles up where it encounters land and flows poleward. The poleward flow brings warm waters from the tropics to the mid-latitudes. There, the westerly winds push the surface waters toward the east. Again, where the current hits a continent it piles up and flows north and south. The equatorward flowing part completes the subtropical gyre, bringing cold waters towards the tropics. The poleward flows along the western boundaries of the subtropical gyres are warm currents, such as the Gulf Stream, the Kuroshio, and the Brazil Current. Equatorward currents along the eastern boundaries are cold, such as the California and the Peru (or Humboldt) Currents.
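The reservoir and flux numbers from Fig. (14) quoted above allow a quick estimate of residence times: reservoir size divided by the flux through it. A sketch, assuming a round ocean volume of 1.3 billion cubic km (the text only says "more than one billion"):

```python
# Residence time = reservoir size / flux through the reservoir.
atmosphere_water = 13e3         # km^3 of water vapor in the atmosphere
ocean_water = 1.3e9             # km^3 in the ocean (assumed round number)
ocean_evaporation = 400e3       # km^3/yr evaporated from the ocean
land_evapotranspiration = 70e3  # km^3/yr recycled over land

# Total flux of water into (and, in balance, out of) the atmosphere per year:
atmosphere_flux = ocean_evaporation + land_evapotranspiration  # km^3/yr

tau_atmosphere_days = atmosphere_water / atmosphere_flux * 365.0
tau_ocean_years = ocean_water / ocean_evaporation

print(f"Atmosphere: ~{tau_atmosphere_days:.0f} days")  # ~10 days
print(f"Ocean: ~{tau_ocean_years:.0f} years")          # ~3250 years
```

Water cycles through the atmosphere in about ten days, which is why atmospheric water vapor adjusts so quickly to temperature changes, whereas a parcel of ocean water waits thousands of years on average before evaporating.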
Within the subtropical gyres the water flows in a spiral-like pattern towards the center of the gyre. This convergence causes sinking in the centers of the gyres. However, the water is relatively warm, so it sinks only to depths of a few hundred meters. In the tropics, on the other hand, the trade winds cause divergence and upwelling. Surface currents near the equator are particularly vigorous. The piling up of waters in the western equatorial Pacific, for example, drives the narrow Equatorial Counter Current eastward. The strongest current in the world ocean is the Antarctic Circumpolar Current, which transports more than 100 million cubic meters of water per second eastward around Antarctica. That amounts to about 500 times the Amazon river discharge. The Southern Ocean is also an important region for upwelling of deep waters, caused by a wind-driven divergence of surface waters.

Watch a high resolution model simulation of surface ocean currents here. It shows more details such as mesoscale eddies, which are the ocean’s equivalent of weather systems in the atmosphere. Notice that they are much smaller in size than high and low pressure systems in the atmosphere. Try to identify some of the features discussed above, such as the Gulf Stream and the Antarctic Circumpolar Current. These and other fascinating features are also seen in satellite observations of sea surface temperatures here and here.

In contrast to the atmosphere, the ocean is mostly stably stratified. That is, denser water is layered below lighter water. The density of sea water is determined by temperature and salinity: the colder and saltier, the denser it is. Typically warmer, more buoyant water is on top of colder water, especially at low latitudes (Fig. 17). This is because water absorbs sunlight efficiently, which heats the surface.
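The dependence of density on temperature and salinity just described can be sketched with a simplified linear equation of state, ρ = ρ0 [1 − α(T − T0) + β(S − S0)]. The reference values and expansion coefficients below are typical near-surface numbers chosen for illustration, not values from the text:

```python
def seawater_density(T, S, rho0=1027.0, T0=10.0, S0=35.0,
                     alpha=2e-4, beta=8e-4):
    """Simplified linear equation of state for seawater.

    T in degC, S in grams of salt per kg of seawater; alpha (thermal
    expansion) and beta (haline contraction) are rough typical values.
    Returns density in kg/m^3.
    """
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

# Colder and saltier water is denser, so it tends to sink:
warm = seawater_density(T=20.0, S=35.0)       # warm subtropical surface water
cold = seawater_density(T=2.0, S=35.0)        # cold high-latitude water
cold_salty = seawater_density(T=2.0, S=36.0)  # cold and extra salty
print(warm, cold, cold_salty)
```

With these numbers the cold, salty sample is the densest, matching the rule of thumb in the text. The real equation of state is nonlinear (for example, thermal expansion weakens near the freezing point), which matters for deep water formation at high latitudes.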
Winds cause lots of turbulence close to the surface, which creates a layer of uniform temperature called the surface mixed layer. Below that, between about 200-1000 m depth, is a region in which temperature decreases rapidly with depth. This is called the thermocline. Turbulence is weak here. Further down is the weakly stratified deep and abyssal ocean, where vertical temperature gradients are small. Closer to the sea floor turbulence increases again due to interactions of the flow with the bottom topography.

The deep ocean is cold because its waters originate from the high-latitude surface. Only in a few regions of the world’s oceans, where the density of surface waters is large enough, do they sink into the deep ocean (Fig. 18). In the current ocean there is deep water formation in the North Atlantic and near Antarctica.

Surface waters of the Atlantic are saltier than those of the Pacific because of water vapor transport within the atmosphere. Whereas mountain ranges at mid-latitudes block water vapor transport from the Pacific to the Atlantic with the westerly winds, gaps in the mountains in the tropics allow water vapor transport with the trade winds from the Atlantic to the Pacific. This makes surface waters in the Pacific fresher and more buoyant. In the North Pacific this freshwater lens prevents sinking, whereas in the North Atlantic saltier waters are dense enough to sink to about 2-3 km depth. From there they flow south along the margin of the Americas, pushed there by the Coriolis force. The deep water from the North Atlantic crosses the equator and the South Atlantic and enters the Southern Ocean. Part of it rises back to the surface there, whereas the rest flows into the Indian and Pacific Oceans, where it slowly ascends. The return flow at the surface passes through the Indonesian Archipelago into the Indian Ocean, merges with upwelled waters there, and continues westward around the tip of South Africa and back northward across the Atlantic.
This planetary-scale circulation pattern is called the thermohaline (thermo = temperature, haline = salinity) circulation or meridional (north-south) overturning circulation. Its circulation impacts tracer distributions in the ocean (Fig. 19). E.g. in the Atlantic the southward-flowing North Atlantic Deep Water (NADW) can be identified as a water mass with relatively high salinity between about 2-4 km depth. Fresher Antarctic Bottom Water (AABW) flows north below NADW. It is colder and therefore denser than NADW. Relatively fresh Antarctic Intermediate Water (AAIW) flows north above NADW, creating a sandwich-like structure in the deep Atlantic. In the North Atlantic there is a blob of high-salinity water around 1 km depth and 40°N. This is outflow from the Mediterranean Sea, which is very salty.

Experiment: Thermohaline Circulation

Perform a simple experiment that illustrates the effects of salinity and temperature on the density of water. You’ll need the following ingredients:

- a container, preferably made out of a transparent material such as glass or plexiglass,
- ice cubes,
- a small sponge, and
- food coloring.

Fill the container with water. Now put the moist sponge at the surface of the water so that it floats on top. Pour some salt on the sponge. Not too much. You don’t want it to spill over or capsize the sponge. Just enough so that the water in contact with the sponge will soak up the salt. Now add a few drops of food coloring on top of the salty sponge and observe what happens. Where does the water flow?

Now add an ice cube to the water. Drip a few drops of food coloring (choose a different color than before) onto the ice cube and observe. What happens? Describe your observations. Explain your observations with what you’ve learned about the effects of salinity and temperature on sea water density.

Observations show that the ocean is warming (Fig. 20).
Most of the increase in temperatures is concentrated near the surface, consistent with a warming atmosphere as its cause. A prominent maximum of heat uptake is in the North Atlantic, similar to the pattern of anthropogenic carbon uptake (Fig. 12 in chapter 5). The reason is the sinking and southward penetration of NADW, which transports both anthropogenic carbon and heat from the surface to the deep ocean there. Other regions of enhanced heat uptake are the subtropics, where surface waters sink to a few hundred meters depth in the centers of the subtropical gyres.

Ocean temperature observations rule out changes in ocean circulation as the cause of the observed surface warming. If this were the case, deeper layers would have cooled, which is not observed. Therefore, the hypothesis that changes in ocean circulation caused the observed warming of the atmosphere during the past 50 years has been falsified by subsurface temperature observations.

Acceleration of the atmospheric hydrological cycle also affects ocean surface salinities (Fig. 21). Regions that are already salty, such as the subtropics and the Atlantic, get even saltier, and regions that are already fresh, like the North Pacific and the Southern Ocean, get even fresher. Warming and freshening of surface waters at high latitudes decreases their density and increases their buoyancy. This reduces deep water formation and the meridional overturning circulation. A reduction of the Atlantic meridional overturning circulation, which was initially predicted by climate models in the 1990s (Manabe and Stouffer, 1993), is now observed in measurements from the subtropical Atlantic (Smeed et al., 2014). However, due to the relatively short period of the available data (2004 to the present) it is currently not clear how much of the observed reduction is caused by human greenhouse gas emissions and how much is due to natural variability.
A long-term decline of the Atlantic meridional overturning circulation has been suggested by Rahmstorf et al. (2015) to cause the reduced warming over the subpolar North Atlantic in surface temperature observations (Chapter 2, Fig. 2).

Review Questions

- Why is it warmer at the equator than at the poles?
- Without heat transport in the atmosphere and oceans, how would temperatures differ between the equator and the poles?
- What determines the surface air pressure?
- What is the Hadley cell?
- How does the Hadley circulation influence precipitation patterns?
- What is the Intertropical Convergence Zone?
- From which direction does the wind blow at the surface in the tropics, and from which direction at mid-latitudes?
- In which direction does the Coriolis force act in the northern and in the southern hemisphere?
- How does the Coriolis force impact the jet stream, the trade winds, and the westerly winds at mid-latitudes?
- What is latent heat of vaporization/condensation?
- How much more heat is required to vaporize water than to heat it from the melting to the boiling point?
- What is the lapse rate?
- How does vertical water vapor transport impact the lapse rate?
- When does evaporation occur?
- When does condensation occur?
- How much larger is the heat capacity of one cubic meter of water than one cubic meter of air?
- How does the difference in heat capacities of air and water impact climate variations?
- Which component of the climate system absorbs most of the energy that currently accumulates on Earth?
- Use the numbers in Fig. (14) to calculate the residence time of water in the atmosphere.
- Use the numbers in Fig. (14) to calculate the residence time of water in the ocean.
- In which direction do surface waters flow in the tropics, and in which direction do they flow at mid-latitudes?
- What are the subtropical gyres, and what forces them?
- What is the strongest ocean current in the world?
- What is convergence, what is divergence?
What does it imply for vertical flow (upwelling/downwelling)? Name one region each where surface ocean waters converge/diverge.
- What is stratification?
- What is the thermocline?
- What determines sea water density?
- What is the thermohaline circulation?
- Where do surface waters sink into the ocean’s interior?
- Where do they upwell back to the surface?
- Why is the Atlantic saltier than the Pacific?
- Where is the ocean warming the most?
- Is it possible that the warming observed in the atmosphere during the past 50 years was caused by changes in ocean circulation? Why?
- Where is the surface ocean becoming saltier, and where fresher? How is the pattern of salinification/freshening related to the salinity of the surface ocean?
- How are changes in surface salinity related to changes in the atmospheric hydrological cycle?

References

Frierson, D. M. W., Y.-T. Hwang, N. S. Fuckar, R. Seager, S. M. Kang, A. Donohoe, E. A. Maroon, X. Liu, and D. S. Battisti (2013), Contribution of ocean overturning circulation to tropical rainfall peak in the Northern Hemisphere, Nature Geosci, 6(11), 940-944, doi: 10.1038/ngeo1987.

Hartmann, D.L., A.M.G. Klein Tank, M. Rusticucci, L.V. Alexander, S. Brönnimann, Y. Charabi, F.J. Dentener, E.J. Dlugokencky, D.R. Easterling, A. Kaplan, B.J. Soden, P.W. Thorne, M. Wild and P.M. Zhai, 2013: Observations: Atmosphere and Surface. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Manabe, S., and R. J. Stouffer (1993), Century-Scale Effects of Increased Atmospheric CO2 on the Ocean-Atmosphere System, Nature, 364(6434), 215-218, doi: 10.1038/364215a0.

Peixoto, J., and A. Oort (1992), Physics of Climate, AIP Press.

Rahmstorf, S., J. E. Box, G.
Feulner, M. E. Mann, A. Robinson, S. Rutherford, and E. J. Schaffernicht (2015), Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation, Nature Climate Change, 5(5), 475-480, doi: 10.1038/nclimate2554.

Rhein, M., S.R. Rintoul, S. Aoki, E. Campos, D. Chambers, R.A. Feely, S. Gulev, G.C. Johnson, S.A. Josey, A. Kostianoy, C. Mauritzen, D. Roemmich, L.D. Talley and F. Wang, 2013: Observations: Ocean. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Smeed, D. A., G. D. McCarthy, S. A. Cunningham, E. Frajka-Williams, D. Rayner, W. E. Johns, C. S. Meinen, M. O. Baringer, B. I. Moat, A. Duchez, and H. L. Bryden (2014), Observed decline of the Atlantic meridional overturning circulation 2004-2012, Ocean Sci., 10(1), 29-38, doi: 10.5194/os-10-29-2014.

Trenberth, K. E., L. Smith, T. Qian, A. Dai, and J. Fasullo (2007), Estimates of the Global Water Budget and Its Annual Cycle Using Observational and Model Data, Journal of Hydrometeorology, 8(4), 758-769, doi: 10.1175/jhm600.1.

Glossary

Latent heat: Energy required for a phase change. E.g. to evaporate 1 g of water, 2,300 J is required. The same amount of energy is released during condensation.

Condensation: Transition of a substance (e.g. water) from vapor to liquid. Condensation occurs when air is saturated with water vapor and condensation nuclei (e.g. small particles) are present. Latent heat is released during condensation.

Evaporation: Transition of a substance (e.g. water) from liquid to vapor phase.
The rate of evaporation from the ocean depends on sea surface temperature (the warmer, the more evaporation), the relative humidity of the air (the drier the air, the more evaporation), and the wind velocity (the more wind, the more evaporation). The energy required for that transition is called the latent heat of vaporization.

Specific heat capacity: The amount of heat required to increase the temperature of one gram of a substance by one degree Celsius. The specific heat capacity of air at constant pressure is cp = 1 J/(g°C); that of water is 4.2 J/(g°C).