People with leukaemia could be helped by new research that sheds light on how the body produces its blood supply. Scientists say the research takes them a step closer to creating blood stem cells that could reduce the need for bone marrow transplants in patients with cancer or blood disorders. The ability to grow these cells artificially from pluripotent stem cells could also lead to the development of personalised blood therapies. Blood stem cells are found in bone marrow and produce all blood cells in the body. These cells - known as haematopoietic stem cells (HSCs) - help to restore blood supply in patients who have been treated for leukaemia. Researchers used a mouse model to pinpoint exactly how HSCs develop in the womb. They showed for the first time how three key molecules interact together to generate the cells, which are later found in adult bone marrow. The discovery could help scientists to recreate this process in the lab, in the hope that HSCs could one day be developed for clinical use. Scientists say this fundamental understanding of early development may also have an impact on other diseases that affect blood formation and supply. The research has been published in Nature Communications. There is a pressing need to improve treatments for diseases like leukaemia, and this type of research brings us a step closer to that milestone. The more we understand about how embryos develop these blood stem cells, the closer we come to being able to make them in the lab.
The Life of Animals | Wasp | Male yellowjacket wasps, for example, have 13 divisions per antenna, while females have 12. The difference between sterile female workers and queens also varies between species, but generally the queen is noticeably larger than both males and other females. Unlike bees, wasps generally lack plumose hairs. The type of nest produced by wasps depends on the species and location. Many social wasps produce nests that are constructed predominantly from paper pulp. The various species of wasps fall into one of two main categories: social wasps and solitary wasps. Adult solitary wasps live and operate alone, and most do not construct nests; all adult solitary wasps are fertile. By contrast, social wasps exist in colonies numbering up to several thousand individuals and build nests, but in some cases not all of the colony can reproduce. In some species, just the wasp queen and male wasps can mate, whilst the majority of the colony is made up of sterile female workers. Like all insects, wasps have a hard exoskeleton covering their three main body parts. Social wasps also mix other types of material into the nest: it is common to find nests located near plastic pool or trampoline covers incorporating distinct bands of color that reflect the inclusion of these materials, which have simply been chewed up and mixed with wood fibers to give the nest a unique look. Each species of social wasp appears to favour its own specific range of nesting sites. By contrast, solitary wasps are parasitic or predatory, and generally only the latter build nests at all. Unlike honey bees, wasps have no wax-producing glands. Wood fibers are gathered locally from weathered wood, softened by chewing and mixing with saliva. Not all social wasps have castes that are physically different in size and structure.
All female wasps are potentially capable of becoming a colony's queen, and this is often determined by which female successfully lays eggs first and begins construction of the nest. Evidence suggests that females compete amongst each other by eating the eggs of rival females. Polistine nests are considerably smaller than many other social wasp nests, typically housing only around 250 wasps, compared to the several thousand common with yellowjackets, and stenogastrines have the smallest colonies of all, rarely with more than a dozen wasps in a mature colony.
Phonetics vs. Phonology

The sound structure of language encompasses quite a lot of topics, including the following:
- the anatomy, physiology, and acoustics of the human vocal tract;
- the nomenclature for the vocal articulations and sounds used in speech, as represented by the International Phonetic Alphabet;
- hypotheses about the nature of phonological features and their organization into segments, syllables and words;
- the often-extreme changes in the sound of morphemes in different contexts;
- the way that knowledge of language sound structure unfolds as children learn to speak;
- the variation in sound structure across dialects and across time.

Instead of giving a whirlwind tour of the whole of phonetics and phonology, this lecture has two more limited goals. The first goal is to put language sound structure in context. Why do human languages have a sound structure about which we need to say anything more than that vocal communication is based on noises made with the eating and breathing apparatus? What are the apparent "design requirements" for this system, and how are they fulfilled? The second goal is to give you a concrete sense of what the sound systems of languages are like. In order to do this, we will go over examples of sound alternations in various languages. Along the way, a certain amount of the terminology and theory of phonetics and phonology will emerge.

Phonetics: the sounds of language

While our discussion will range back and forth somewhat between the two subdisciplines, we will essentially be progressing from the nuts-and-bolts mechanics of speech sounds through their classification and representation and on to their systematic organization within a given language. Thus we can divide up the lecture into a more or less phonetic half and a more or less phonological half.

Vocal tract anatomy

The vocal tract is what we use to articulate sounds.
It includes the oral cavity (essentially the mouth), the nasal cavity (inside the nose), and the pharyngeal cavity (in the throat, behind the tongue). For most speech sounds, the airstream that passes through this tract is generated by the lungs. A number of anatomical features of humans that originated for quite different functions have been recruited to serve the purposes of language. Many of these same recruitments have been made by other animals for vocalization.

| Organ | Survival function | Speech function |
| --- | --- | --- |
| Lungs | exchange oxygen and carbon dioxide | supply airstream |
| Vocal cords | prevent food and liquids from entering the lungs | produce vibration in resonating cavity |
| Tongue | move food within the mouth | articulate sounds |
| Teeth | break up food | provide passive articulator and acoustic baffle |
| Lips | seal oral cavity | articulate sounds |

In some cases the anatomy seems to have evolved specifically to serve language, independently of (and even contrary to) the original function. For example, the vocal cords in humans are more muscular and less fatty than in other primates such as chimps and gorillas. This permits greater control over their precise configuration. Strikingly, the lowering of the larynx, which permits a greater variety of articulations with the tongue, has the consequence of making it much easier for humans to choke. These X-rays and diagrams show the vocal tracts of the gorilla, chimp, and human, highlighting the tongue, larynx, and air sacs (the last for the apes only). The longer vocal tract (seen behind the tongue in the human) separates the soft palate and epiglottis, so that airflow between the larynx and the nose cannot avoid passing through the oral cavity. This is why humans choke more easily than other primates. Obviously the selective advantage of increased articulatory ability must have been quite strong to justify the increase in the likelihood of choking.
The following illustration is called a midsagittal section: it's what the head would look like if you cut it in half along the front-back dimension. (From the Ultimate Visual Dictionary, p. 245.) This diagram includes many detailed anatomical features that you certainly don't need to learn, but it should give you an idea of the complex context in which speech sounds are articulated. Here is a less detailed diagram showing the most important parts of the vocal tract. (From Language Files, 7th ed., p. 40.) We'll be referring to these places in the vocal tract when describing the way various sounds are produced.

Basic sounds: buzz, hiss, and pop

There are three basic modes of sound production in the human vocal tract that play a role in speech: the buzz of vibrating vocal cords, the hiss of air pushed past a constriction, and the pop of a closure released. The larynx is a rather complex little structure of cartilage, muscle and connective tissue, sitting on top of the trachea (windpipe). It is what lies behind the "Adam's apple," the protrusion in the front of the throat (usually more prominent in males). The original role of the larynx is to seal off the airway, in order to keep food, liquid and other unwanted things out of the lungs, and also to permit the torso to be pressurized (by holding in air) to provide a more rigid framework for heavy lifting and pushing. Part of the airway-sealing system in the larynx is a pair of muscular flaps, the vocal folds (also called "vocal cords"), which can be brought together to form a seal, or moved apart to permit free motion of air in and out of the lungs. Here are the vocal cords seen when they are open to allow free passage of air. The front of the body is toward the top of the photo; we're looking down into the dark trachea. Now for a little aerodynamics.
When any elastic seal is not quite strong enough to resist the pressurized air it restricts, the result is an erratic release of the pressure through the seal, creating a sound. Some homely examples of a similar sound source are the raspberry, where the leaky seal is provided by the lips; the burp, where the opening of the esophagus provides the leaky seal; or the fart sounds you can make with your hands under your armpits. The mechanism of this sound production is very simple and general: the air pressure forces an opening, through which air begins to flow; the flow of air generates a so-called Bernoulli force at right angles to the flow (which in other circumstances helps airplanes to fly); this force combines with the elasticity of the tissue to close the opening again; and then the cycle repeats, as air pressure again forces an opening. In many such sounds, the pattern of opening and closing is irregular, producing a belch-like sound without a clear pitch -- think of the air being released from a balloon. However, if the circumstances are right, a regular oscillation can be set up, giving a periodic sound that we perceive as having a pitch. Many animals have developed their larynxes so as to be able to produce particularly loud sounds, often with a clear pitch that they are able to vary for expressive purposes. When the vocal cords are vibrating regularly in this manner, we say that the sound is voiced. Without the vibration, the sound is voiceless (or equivalently, unvoiced). This is exactly the property that distinguishes many sounds in English and other languages. A few examples: if you hold your hand to your throat, you will feel vibration for sounds like [z] but not for [s]. You will also feel it for nasals like [m, n] and for vowels like [a]; they are all voiced. (There is another difference between sounds like [p] and sounds like [b]: the former are accompanied by the puff of breath called aspiration, while the latter are not.)
The hiss of turbulent flow

Another source of sound in the vocal tract -- for humans and for other animals -- is the hiss generated when a volume of air is forced through a passage that is too small to permit it to flow smoothly. The result is turbulence, a complex pattern of swirls and eddies at a wide range of spatial and temporal scales. We hear this turbulent flow as some sort of hiss. In the vocal tract, this turbulent flow can be created at many points of constriction. For instance, the upper teeth can be pressed against the lower lip -- if air is forced past this constriction, it makes the sound associated with the letter [f]. When this kind of turbulent flow is used in speech, phoneticians call it frication, and sounds that involve frication are called fricatives. Some English examples are the sounds written "f, v, s, z, sh, th."

The pop of closure and release

When a constriction somewhere in the vocal tract is complete, so that air can't get past it as the speaker continues to breathe out, pressure is built up behind the constriction. If the constriction is abruptly released, the sudden release of pressure creates a sort of a pop. When this kind of closure and release is used as a speech sound, phoneticians call it a stop (focusing on the closure) or a plosive (focusing on the release). As with frication, a plosive constriction can be made anywhere along the vocal tract, from the lips to the larynx. Three common examples are [p], made at the lips; [t], made at the alveolar ridge; and [k], made at the velum. It is difficult to make a firm enough seal in the pharyngeal region to produce a stop, although a narrow fricative constriction in the pharynx is possible.

The phonetic alphabet

The human vocal apparatus can produce a great variety of sounds. As we look at words in other languages -- and study the sounds of English in more detail -- we need a way to write these sounds down. That's what phonetic alphabets are for. In the mid-19th century, Melville Bell invented a writing system that he called "Visible Speech."
Bell was a teacher of the deaf, and he intended his writing system to be a teaching and learning tool for helping deaf students learn spoken language. However, Visible Speech was more than a pedagogical tool for deaf education -- it was the first system for notating the sounds of speech independent of the choice of particular language or dialect. This was an extremely important step -- without this step, it is nearly impossible to study the sound systems of human languages in any sort of general way. In the 1860's, Melville Bell's three sons -- Melville, Edward and Alexander -- went on a lecture tour of Scotland, demonstrating the Visible Speech system to appreciative audiences. In their show, one of the brothers would leave the auditorium, while the others brought volunteers from the audience to perform interesting bits of speech -- words or phrases in a foreign language, or in some non-standard dialect of English. These performances would be notated in Visible Speech on a blackboard on stage. When the absent brother returned, he would imitate the sounds produced by the volunteers from the audience, solely by reading the Visible Speech notations on the blackboard. In those days before the phonograph, radio or television, this was interesting enough that the Scots were apparently happy to pay money to see it! There are some interesting connections between the "visible speech" alphabet and the later career of one of the three performers, Alexander Graham Bell, who began following in his father's footsteps as a teacher of the deaf, but then went on to invent the telephone. Look especially at the discussion of Bell's "Ear Phonautograph" and artificial vocal tract. After Melville Bell's invention, notations like Visible Speech were widely used in teaching students (from the provinces or from foreign countries) how to speak with a standard accent. 
This was one of the key goals of early phoneticians like Henry Sweet (said to have been the model for Henry Higgins, who teaches Eliza Doolittle to speak "properly" in Shaw's Pygmalion and its musical adaptation My Fair Lady). The International Phonetic Association (IPA) was founded in 1886 in Paris, and has been ever since the official keeper of the International Phonetic Alphabet (also IPA), the modern equivalent of Bell's Visible Speech. Although the IPA's emphasis has shifted in a more descriptive direction, there remains a lively tradition in Great Britain of teaching "received pronunciation" using explicit training in the IPA. While other phonetic alphabetic notations are in use, the IPA alphabet is the most widely used by linguists. Many of these symbols have their familiar value, but don't confuse spelling with pronunciation. When we write a phonetic transcription, i.e. how a sound or word is pronounced, we'll enclose it in [square brackets] so we know to interpret the symbols in the phonetic alphabet. Notice that the chart (like the main IPA chart) is organized along two main dimensions. Only terms needed for English are listed here.
- Place of articulation: where the sound is made
  - Bilabial = with the two lips
  - Labiodental = with the lower lip and upper teeth
  - Interdental = with the tongue between the teeth, or just behind the upper teeth (also called "dental")
  - Alveolar = with the tongue tip at the alveolar ridge, behind the teeth
  - Palatal = with the front or body of the tongue raised to the palatal region
  - Velar = with the back of the tongue raised to the soft palate ("velum")
  - Glottal = at the larynx (the glottis is the space between the vocal cords)
- Manner of articulation: how the tongue, lips, etc.
are configured to produce the sound
  - Stop = complete closure, resulting in stoppage of the airflow
  - Affricate = closure followed by frication (= stop + fricative)
  - Fricative = narrow opening, air forced through
  - Nasal = air allowed to pass through the nose (generally while blocked in mouth)
  - Liquid = minimal constriction allowing air to pass freely, either through the center of the mouth, as in [r] (called a rhotic), or around the side of the tongue, as in [l] (called a lateral)
  - Glide = minimal constriction corresponding to a vowel (thus also called "semi-vowel"); [j] corresponds to [i], and [w] corresponds to [u]
  - Flap = the tongue briefly taps the ridge behind the teeth, as in the standard American pronunciation of "tt" in butter

In addition, the obstruent sounds (stops, affricates, fricatives) come in voiced and voiceless varieties. The sonorant sounds (nasals, liquids, glides) are normally voiced. The glottal stop, which is written as [ʔ], has a limited role in English. It is the catch in the throat between the two vowels in uh-oh. The patterning of sounds in languages generally depends on the "natural classes" of sounds defined by these articulatory labels. For example, in English, the plural suffix spelled "(e)s" is realized in three different ways, depending on the preceding sound:

- voiceless fricative [s] following another voiceless sound (p, t, k, f, θ): caps, hats, rocks, reefs, births
- voiced fricative [z] following another voiced sound, including vowels (b, d, g, v, ð, m, n, ŋ, l, r, w, y): tabs, rods, dogs, caves, lathes, drums, pins, songs, pills, cars, cows, eyes
- voiced [z], but with a vowel inserted before it, when it follows a "sibilant", i.e. an alveolar or palatal fricative or affricate (s, z, č, ǰ, š, ž): kisses, gazes, churches, judges, wishes, rouges

So the rule determining how you pronounce the plural suffix makes reference to the classes voiced, voiceless and sibilant, not to specific sounds like [b], [p] and [s].
Similarly, the past-tense suffix spelled "ed" is realized in three different ways, again depending on the preceding sound:

- voiceless stop [t] following another voiceless sound (p, k, f, θ, s, č, š): hopped, kicked, riffed, frothed, kissed, reached, wished
- voiced stop [d] following another voiced sound, including vowels (b, g, v, ð, z, ǰ, ž, m, n, ŋ, l, r, w, y): robbed, rigged, raved, bathed, razed, raged, rouged, hummed, sinned, longed, filled, marred, plowed, eyed
- voiced [d], but with a vowel inserted before it, when it follows an alveolar stop (t or d): hated, rented, belted, loaded, grounded, welded

For both suffixes, the inserted vowel serves to separate similar sounds (i.e. it occurs when the stem ends in a consonant similar to the suffixal consonant). As Pinker discusses, these generalizations can extend to new sounds borrowed from other languages. These German words, which end in voiceless fricatives not found in English (velar and palatal), follow the patterns just discussed when the final consonant is pronounced in the German way:

- He out-Bachs Bach, with voiceless [s]
- She out-Bached Bach, with voiceless [t]

The extension of patterns in this way confirms that what speakers understand about these processes is not the arbitrary list of sounds that cause a pattern to arise, but rather the class of sounds -- which could contain members not yet heard in the language.
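These suffix rules amount to a small decision procedure over natural classes, which can be made concrete in a few lines of code. The following Python sketch is my own illustration (the function name and set names are invented, not from the lecture); the sound classes come from the lists above:

```python
# Sketch of the English plural rule: the suffix "(e)s" is chosen by the
# natural class of the stem-final sound, not by a memorized list of words.

VOICELESS = {"p", "t", "k", "f", "θ"}
SIBILANTS = {"s", "z", "č", "ǰ", "š", "ž"}   # alveolar/palatal fricatives and affricates

def plural_suffix(final_sound):
    """Return the pronunciation of "(e)s" after the given stem-final sound."""
    if final_sound in SIBILANTS:    # check sibilants first: [s] and [z] are
        return "ɪz"                 # themselves sibilants (kisses, gazes)
    if final_sound in VOICELESS:
        return "s"                  # caps, hats, rocks
    return "z"                      # all other (voiced) sounds: dogs, pins, cows

print(plural_suffix("t"))   # hats -> "s"
print(plural_suffix("g"))   # dogs -> "z"
print(plural_suffix("š"))   # wishes -> "ɪz"
```

The past-tense rule would be the analogous function, choosing among [t], [d], and an inserted vowel after alveolar stops; because the rule tests class membership rather than listing words, it extends automatically to borrowed sounds like the final fricative of Bach.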
Q. What is molluscum contagiosum (mollusk-um conta-gio-sum)? A. Molluscum contagiosum is a skin infection caused by a virus. The molluscum contagiosum virus (MCV) is a member of the poxvirus family. (2) It most commonly affects children between 1 and 12 years of age, but can also happen in adults who live in a warm, humid climate and/or in crowded living conditions. Molluscum infections are quite common, but are rarely serious. An MCV infection only affects the skin and does not affect other bodily systems. The infection will appear as a small, white, pink, or skin-colored bump, similar to a wart or pimple. It varies from the size of a pinhead to the size of a pencil eraser, about 2 to 5 millimeters. “The bumps may appear anywhere on the body, alone or in groups. They are usually painless, although they may be itchy, red, swollen and/or sore.” (1) These bumps may have a dimple in the center. Q. How is molluscum contagiosum transmitted? A. The infection spreads through direct contact with the infected area, or by touching infected surfaces. To avoid giving the infection to someone else, keep the area clean and covered with clothing or bandages, so others cannot touch it. Do not share towels, clothing or other personal items that may have come in contact with the infected area. The infection can spread not only person-to-person, but also to other areas of the body. To prevent this, avoid picking and scratching at the bumps, and wash your hands thoroughly after touching an infected area. Q. Can my child still go to daycare/school or go swimming? A. So long as the area is covered by clothing or a bandage, there is no reason to keep the child home from daycare or school. As for swimming, “[a]lthough the virus might be spread by sharing swimming pools, baths, saunas, or other wet and warm environments, this has not been proven. Researchers who have investigated this idea think it is more likely the virus is spread by sharing towels and other items around a pool or sauna than through water.” (1) Q.
How is molluscum contagiosum treated?
What is a population? A population is all the individual organisms of one species found in a given habitat. So you could talk about a population of wolves in the woods. If you want to talk about the wolves and rabbits in the woods, then you’d be referring to a community. A community is made up of the various populations in a habitat. So the summation of all the living things in a given area is called a community. What then is an ecosystem? An ecosystem comprises the community of living organisms in a habitat, together with all the non-living components such as water, soil, temperature, etc., called abiotic factors. Why are different organisms of different species able to coexist in the same habitat? How come they don’t directly compete with one another and drive others out? The answer lies in the last and loveliest new term: niche. It rhymes with quiche. A niche is the interaction, or way of life, of a species, population or individual in relation to all others within an ecosystem. It’s how it behaves, what it eats, how it reproduces, where it sleeps, etc.; a species’ niche is determined by both biotic factors (such as competition and predation) and abiotic factors. Different things may determine the population sizes within an ecosystem. Climatic and edaphic factors (abiotic factors): non-living factors such as light intensity, temperature and humidity determine the number of organisms that a habitat can sustain. All species have a varying degree of ability to withstand harsh or fluctuating conditions, called resilience. If an abiotic factor changes dramatically in favour of a population – for example, plenty more light in a field – then the population will increase provided no other factors are limiting. The opposite is true if an abiotic factor changes beyond the resilience limit of a population – it will decrease. Climatic factors refer to those abiotic factors that pertain to the air, temperature, light and water.
These are key to photosynthesis, metabolic rate and other fundamental processes. Edaphic factors specifically refer to soil properties such as its pH, availability of macronutrients and micronutrients, and its aeration (the air inside it). Edaphic factors are key to organisms which rely directly on the earth to live, e.g. land plants and decomposer microorganisms. “Living factors” refer to all interactions between organisms, be it a bunny rabbit being predated, or two shrubs competing for sunlight. All individual interactions between organisms form a web which impacts all populations in an ecosystem, therefore determining their sizes. Interspecific competition refers to competition between members of different species for the same resources (food, light, water, etc.). Often when a new species is introduced into a habitat, say the American ladybird to the UK, if the invader species is better adapted, then the host population decreases in size. In some cases this may lead to extinction of the host species. [Can’t remember the difference between interspecific and intraspecific? Interspecific is like the internet – different things come together.] Intraspecific competition refers to competition between members of the same species. If a population of apple trees all compete for a source of light, then each apple tree is taking up some light that has now become unavailable to a different apple tree. There are only so many apple trees which that habitat can sustain. The maximum population size sustainable indefinitely in a habitat is called the carrying capacity. Suppose you start off with equal populations of wolves and rabbits, and all the wolves rely on the rabbits for food. As the wolves start predating the rabbits, the rabbit population will decrease, while the wolf population will be sustained. Now there are fewer rabbits, so some wolves won’t have any food left. These wolves will die, so the wolf population will decrease. What will happen to the rabbit population now?
Well, there are fewer wolves, so the rabbits are predated less. The rabbit population will increase, followed by an increase in the wolf population, and so on. The predator-prey relationship is very intricate: the two species affect each other, and hence their population sizes rise and fall accordingly. The diversity of life is built on the same biochemical basis. All life operates with carbohydrates, lipids, nucleic acids and proteins. These building blocks have different properties and serve as structural and functional components such as cell walls, enzymes, genetic material, metabolites, energy stores, etc. Just as 4 DNA bases (adenine, guanine, cytosine and thymine) can serve to encode the entire genetic diversity of life, all these basic classes of chemicals together can create so many different configurations of life that diversity is generated at the individual and species level. Species diversity is the diversity of species in a community. Put simply, how many different species are there in a community? 5 or 5,000? Which has the higher diversity? Not rocket science, I hope. Species richness is defined by the number of different species in a habitat. However, in order to have biodiversity, the relative abundance of each species is also key. The more species, the higher the diversity. What if there are two separate communities like this:

Community #1 has 150 individuals of each of 20 different species (3,000 individuals in total).
Community #2 has 10 individuals of each of 19 species, and 2,990 individuals of the last species (3,000 individuals in total).

It doesn’t take a complex formula to figure out that community #1 is far more diverse than community #2, despite the two having the same number of species and individuals.
The distribution of individuals among species is important in determining a community’s diversity. Now for a little talk about deforestation and agriculture. Deforestation is the removal of trees in forests, and agriculture is the cultivation of plants useful to people, which are often carefully selected for and occupy a large area by themselves (like corn). It’s not hard to figure out the impact both have on species diversity. Deforestation removes many whole trees, and with them goes the shelter and food source of many other organisms. A great reduction in species diversity can be expected as a result. Agriculture by humans results in a single dominant species which occupies vast land at the expense of others. Humans actively remove other species by the use of pesticides, insecticides and (indirectly) fertilisers. This, too, will lead to a great decrease in species diversity. Other factors affecting species diversity include the degree of isolation, for example as seen on islands. The ability of individuals of certain species to move between different habitats can affect the biodiversity of different areas. When a smaller area harbours niches that can only cater to animals that can fly, or those that can eat fruit, the resulting community of populations would not be as diverse as a larger, better-connected area with more niches. Alongside species diversity, other levels of diversity can be measured, including genetic diversity at the level of different alleles, and ecosystem diversity at the level of different ecosystems in an area. In the wild, each species may exist as one population or multiple populations. Different populations correspond to defined areas – habitats. The sum of all alleles present for a given gene in a given population is known as the gene pool. This is essentially a way of thinking about all the individuals in a population contributing their alleles towards the overall allele frequency.
The extent of different alleles present gives the genetic diversity of a population. The allele frequency in a population’s gene pool can change as a result of selection. The effectors of selection can be varied, yet the outcome is similar: advantageous or preferred alleles and the traits associated with them increase in frequency, while detrimental or disfavoured alleles and the traits associated with them decrease in frequency. On the Earth as a whole, ecological diversity is overarching and includes within it both species diversity and genetic diversity. It can be assessed in different ways, including geographically by ecosystem features e.g. deserts, oceans, forests, as well as biologically e.g. through the number of trophic levels in an ecosystem.
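The gene-pool idea lends itself to a small worked example (the numbers here are hypothetical, my own illustration): for a gene with two alleles, A and a, each diploid individual contributes two alleles to the pool, so allele frequencies follow directly from genotype counts.

```python
# Hypothetical allele-frequency calculation for a gene pool. For a gene
# with alleles A and a, each diploid individual contributes two alleles.

def allele_frequencies(n_AA, n_Aa, n_aa):
    """Return (freq_A, freq_a) from genotype counts in a population."""
    total_alleles = 2 * (n_AA + n_Aa + n_aa)     # two alleles per individual
    freq_A = (2 * n_AA + n_Aa) / total_alleles   # homozygotes give 2 A's, heterozygotes 1
    return freq_A, 1 - freq_A

# Say a population has 30 AA, 50 Aa and 20 aa individuals:
p, q = allele_frequencies(30, 50, 20)
print(round(p, 2), round(q, 2))  # 0.55 0.45
```

Selection then shows up as a change in these frequencies over generations: if A is advantageous, p rises at the expense of q, exactly as described above.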
Auditory Processing (also called Central Auditory Processing) refers to the means by which we make sense of what we hear. Auditory Processing Disorder (APD) refers to the abnormal interaction of hearing, neural transmission and the brain’s ability to make sense of sound. People with APD have normal hearing sensitivity; however, they have difficulty processing the information they hear. Individuals may experience difficulty understanding speech in the presence of noise, problems following multi-step directions, and difficulty with phonics or reading comprehension. Parents, educators, physicians, speech-language pathologists and psychologists realize the role that auditory processing plays in a child’s ability to learn, leading to an increase in referrals to audiologists with expertise in this area. Proper diagnosis can be made only after the completion of a battery of special tests, administered and interpreted by an audiologist. There may be other neurological factors contributing to the symptoms of APD as well, which should be examined by a physician. Individualized treatment programs, possibly including assistive listening devices, are available to help strengthen auditory processing skills in children and adults diagnosed with APD.
What is cardiac tamponade? Cardiac tamponade happens when extra fluid builds up in the space around the heart. This fluid puts pressure on the heart and prevents it from pumping well. A fibrous sac called the pericardium surrounds the heart. This sac is made up of two thin layers. Normally, a small amount of fluid is found between the two layers. The fluid prevents friction between the layers when they move as the heart beats. In some cases, extra fluid can build up abnormally between these two layers. If too much fluid builds up, the extra fluid can make it hard for the heart to expand normally. Because of the extra pressure, less blood enters the heart from the body. That can reduce the amount of oxygen-rich blood going out to the body. If the fluid builds up around the heart too quickly, it can lead to short-term (acute) cardiac tamponade. It is life threatening if not treated right away. Another type of cardiac tamponade (subacute) can happen when the fluid builds up more slowly. Cardiac tamponade is not common. But anyone can develop this health problem. What causes cardiac tamponade? Cardiac tamponade results from fluid buildup in the sac around the heart. This fluid buildup is called a pericardial effusion. Often the pericardial sac also becomes inflamed. Some health issues that can cause this fluid buildup are: - Infection of the pericardial sac - Inflammation of the pericardial sac from a heart attack - Trauma from procedures done to the heart - Autoimmune disease - Reactions to certain medicines - Radiation treatment to the chest area - Metabolic causes, such as chronic kidney failure, with a buildup of fluid and toxins in the body Sometimes doctors do not know the cause of cardiac tamponade. What are the symptoms of cardiac tamponade? Symptoms are often severe and sudden in acute cardiac tamponade. In subacute cardiac tamponade, you might not have any symptoms early on. But usually the symptoms get worse with time.
Possible symptoms include: - Chest pain or discomfort - Shortness of breath - Fast breathing - Increased heart rate - Enlargement of the veins of the neck - Swelling in the arms and legs - Pain in the right upper abdomen - Upset stomach Sometimes acute cardiac tamponade can also lead to very low blood pressure. That can cause symptoms of shock. These include cool arms, legs, fingers, and toes, pale skin, and less urine than normal. The symptoms of cardiac tamponade may look like other health problems. Always see your healthcare provider for a diagnosis. How is cardiac tamponade diagnosed? Your healthcare provider will ask about your past health. You will also need an exam. Your healthcare provider may note a larger-than-normal drop in blood pressure when you take a breath. A number of tests can also help with the diagnosis. Some of these tests might include: - Echocardiogram, to look at the fluid around the heart and heart motion - Electrocardiogram, to check the heart’s electrical rhythm - Chest X-ray, to see the heart anatomy - CT or MRI scan Your healthcare provider must try to find the cause of the cardiac tamponade, if it is unknown. That is especially important if you have symptoms of shock. To find the cause, you may need some of these tests: - Blood tests to spot infection - Blood tests to diagnose autoimmune disease - Analysis of the fluid removed from around the heart to check for cancer or infection - Blood tests to find metabolic problems How is cardiac tamponade treated? Cardiac tamponade is often a medical emergency and quick removal of the pericardial fluid is needed. The most common procedure to do so is a pericardiocentesis. A needle and a long thin tube (a catheter) are used to remove the fluid. In certain cases, healthcare providers might drain the pericardial sac during surgery instead. In some cases, the surgeon removes some of the pericardium. That can help diagnose the cause of the tamponade. 
It can also prevent the fluid from building up again. This is called a pericardial window. Symptoms often improve quite a bit after the extra fluid is removed. The final outcome may depend on the reason for the fluid buildup, the severity of the tamponade, the speed of treatment, and other health problems you have. Other therapies often given in addition to fluid removal include: - Therapy aimed at the cause of the fluid buildup (such as antibiotics for a bacterial infection) - Careful monitoring with repeated echocardiograms - Medicine or fluids to increase blood pressure - Pain medicine, such as aspirin - Medicines to help the heart beat stronger What are the complications of cardiac tamponade? If treated quickly, cardiac tamponade often causes no complications. Untreated, it can lead to shock. Serious problems can result from shock. For example, reduced blood flow to the kidneys during shock can cause the kidneys to fail. Untreated shock may also lead to organ failure and death. What can I do to prevent cardiac tamponade? You can cut your risk for some of the health problems that can lead to cardiac tamponade. For example, take care of your heart by: - Eating a heart-healthy diet - Getting enough exercise - Maintaining a healthy weight - Avoiding too much alcohol - Seeing a healthcare provider regularly to treat any health problems Many cases of cardiac tamponade cannot be prevented, though. When should I call my healthcare provider? Call your healthcare provider right away if you have any symptoms of cardiac tamponade. Call 911 if you are having breathing problems, chest pain, or symptoms of shock. Key points about cardiac tamponade - In cardiac tamponade, extra fluid builds up in the sac around the heart. The fluid pushes on the heart so it is not able to pump normally. - Most cases of cardiac tamponade are emergencies. Untreated, cardiac tamponade can cause shock and, ultimately, death.
- Most people with cardiac tamponade need fluid removed from around their heart. - If it is less severe, your healthcare provider may try to make the fluid go away by using other treatments. Tips to help you get the most from a visit to your healthcare provider: - Know the reason for your visit and what you want to happen. - Before your visit, write down questions you want answered. - Bring someone with you to help you ask questions and remember what your provider tells you. - At the visit, write down the name of a new diagnosis, and any new medicines, treatments, or tests. Also write down any new instructions your provider gives you. - Know why a new medicine or treatment is prescribed, and how it will help you. Also know what the side effects are. - Ask if your condition can be treated in other ways. - Know why a test or procedure is recommended and what the results could mean. - Know what to expect if you do not take the medicine or have the test or procedure. - If you have a follow-up appointment, write down the date, time, and purpose for that visit. - Know how you can contact your provider if you have questions. Online Medical Reviewer: Fetterman, Anne, RN, BSN Online Medical Reviewer: Kang, Steven, MD Date Last Reviewed: © 2000-2018 The StayWell Company, LLC. 800 Township Line Road, Yardley, PA 19067. All rights reserved. This information is not intended as a substitute for professional medical care. Always follow your healthcare professional's instructions.
In 1905, an obscure patent clerk in Switzerland wrote four scientific papers, any one of which would have guaranteed his future fame. The clerk’s name was Albert Einstein. His four papers: - proposed that energy exists in discrete levels called quanta (the photoelectric effect), - demonstrated that the microscopic quiverings of small particles (Brownian motion) could be explained by the atomic theory, - proposed changes in the laws of mechanics for bodies traveling close to the speed of light (special relativity), and - demonstrated the equivalence of mass and energy (E = mc^2). When the Nobel Prize committee chose to honor Einstein in 1921, they selected his work on the photoelectric effect – work that effectively demonstrates why cell phone signals cannot cause cancer. Einstein argued that electromagnetic waves come in discrete packets of energy called quanta or photons. The energy associated with each quantum or photon is E = hf, where f is the frequency and h is Planck’s constant. The higher the frequency, the higher the energy associated with the photon.
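The relation E = hf can be checked numerically. Using an illustrative cell-phone frequency of about 1.9 GHz (a typical mobile band; the exact value is an assumption here), the photon energy comes out several orders of magnitude below that of a visible-light photon, let alone the electron-volt-scale energies needed to ionize molecules:

```python
h = 6.626e-34   # Planck's constant in joule-seconds
eV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(f_hz):
    """Photon energy E = h*f, converted to electron-volts."""
    return h * f_hz / eV

cell = photon_energy_eV(1.9e9)      # ~1.9 GHz cell-phone band (illustrative)
visible = photon_energy_eV(5.5e14)  # green light, roughly 550 nm

print(f"cell phone photon: {cell:.2e} eV")    # on the order of 1e-5 eV
print(f"visible photon:    {visible:.2f} eV") # on the order of a few eV
```

The roughly hundred-thousand-fold energy gap between a radio-frequency photon and a visible photon is the quantitative content behind the argument in the paragraph above.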
It is frequently difficult to factor a polynomial of degree greater than 2.

Example. Solve the following polynomial equations:
- x^3 = 13x^2 − 42x
- x^4 − 81 = 0
- x^6 − 2x^4 − x^2 + 2 = 0 (Hint: factor by grouping.)

Rational Equations. A rational equation is an equation that can be written in the form

(a_n x^n + a_(n−1) x^(n−1) + · · · + a_2 x^2 + a_1 x + a_0) / (b_m x^m + b_(m−1) x^(m−1) + · · · + b_2 x^2 + b_1 x + b_0) = 0

where n and m are nonnegative integers and a_0, a_1, . . . , a_n and b_0, b_1, . . . , b_m are real number constants. To solve a rational equation, we factor, simplify, then use the zero product property to find the solutions of the equation in the numerator. Check your answers! It is possible to get extraneous solutions (values that are not true solutions).
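The factorizations behind these examples can be spot-checked numerically: x^3 = 13x^2 − 42x rearranges to x(x − 6)(x − 7) = 0, x^4 − 81 factors as (x^2 − 9)(x^2 + 9), and grouping gives x^6 − 2x^4 − x^2 + 2 = (x^2 − 2)(x^2 − 1)(x^2 + 1). A quick check of the real roots:

```python
# Spot-check the real roots obtained by factoring each example equation.
equations = {
    "x^3 - 13x^2 + 42x":      ([0, 6, 7],                 lambda x: x**3 - 13*x**2 + 42*x),
    "x^4 - 81":               ([-3, 3],                    lambda x: x**4 - 81),
    "x^6 - 2x^4 - x^2 + 2":   ([-2**0.5, -1, 1, 2**0.5],  lambda x: x**6 - 2*x**4 - x**2 + 2),
}
for name, (roots, f) in equations.items():
    # Each claimed root should make the polynomial (numerically) zero.
    assert all(abs(f(r)) < 1e-9 for r in roots), name
print("all real roots verified")
```

The second and third equations also have complex roots (±3i and ±i respectively) from the irreducible quadratic factors, which the real-valued check above deliberately omits.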
The terms interwar period or Interbellum (Latin: inter-, "between" + bellum, "war") nearly always refer to the period between the end of the Great War and the beginning of the Global War — the period beginning with the Armistice with Franco-Spain that concluded the Great War in 1956 and the following Berlin Peace Conference in 1957, and ending in 1989 with the Invasion of Franco-Spain and the start of the Global War. There is disagreement among historians regarding the starting point of the Interbellum. While most historians trace its origins to the period immediately following the Great War, others argue that it began with the outbreak of the Great War itself; British relations with both Germany and Japan were among the most beneficial in the world until the 1960s for Germany and until 1957 for Japan. Relations between Germany and the United Kingdom arguably hit a low point at the end of the Great War, and then an even lower point with British support for Franco-Spain's occupation of Alsace-Lorraine. When the British Empire made it clear that it would help rebuild Russia and Franco-Spain, Germany was outraged. A constant hot topic between the United Kingdom and Germany was the German military presence on the island of Bermuda. The United Kingdom and Japan had shared very beneficial relations for decades. The United Kingdom was instrumental in modernizing and Westernizing Japan's educational system. But as the war in the Pacific progressed, relations became more and more strained until full diplomatic ties were cut in 1959. After the war, Alaska became a Japanese puppet state that bordered the North American Union. As with Germany controlling Bermuda, many British Americans feared a Japanese invasion via Alaska. As fate would have it, Japan had reached its maximum expansion; any further expansion would cause Japan to starve itself.
Most other Alliance members kept some form of diplomatic ties with the United Kingdom but were forced by either Germany or Japan to limit them to simple peace renewals. The alliance between the British Empire and the German Union began to deteriorate even before the war was over, when Himmler and Eden exchanged a heated correspondence over whether the Dutch Government in Exile, backed by Eden, or the Provisional Government, backed by Himmler, should be recognised. Himmler won. Several postwar disagreements between British and German leaders were related to their differing interpretations of wartime and immediate post-war conferences. The Montreal Conference in late 1955 was the first Alliance conference in which Hitler was present. At the conference the Germans expressed frustration that the British had not yet opened a second front against Franco-Spain in Western Europe. Following the Alliance victory in February, the Prussians effectively occupied Eastern Europe, while the British held much of Western Europe. The immediate end of war material shipments from America to Prussia after the surrender of Russia also upset some politicians in Berlin, who believed this showed the U.K. had no intention of supporting the German Union any more than it had to. After the Paris Peace Conference of 1957, the signing of the Treaty of Geneva on 28 June 1957, between Franco-Spain and Russia on the one side and Prussia, Italy, Britain and other minor allied powers on the other, officially ended war between those countries. Other treaties ended the belligerent relationships of Japan and the other Central Pact powers. Included in the 440 articles of the Treaty of Geneva were the demands that the Central Pact officially accept responsibility for starting the war and pay economic reparations. The Alliance could not reach firm agreements on the crucial questions: the occupation of Europe, postwar reparations from Franco-Spain, and loans.
No final consensus was reached on Russia, other than to agree to a German request for reparations totaling $10 billion "as a basis for negotiations."

Creation of the Imperial German Union

During the final stages of the Great War, Prussia laid the foundation for its domination of Europe by directly annexing several territories as German States. These included parts of western Poland (incorporated into East Prussia and Silesia), the Sudetenland (which became part of Silesia, Bavaria, and Saxony), and German-speaking Austria (which became its own state, Austria). The European territories liberated from the Russians and occupied by the German armed forces were added to Mitteleuropa, renamed the United European Community in 1958, by converting them into satellite states, such as the Serbian Government of National Salvation, the United Kingdoms of Sweden and Norway, the Kingdom of Croatia, the Kingdom of Hungary, the Republic of Czechoslovakia, the Republic of Slovenia, the Kingdom of Montenegro, the Hellenic State, and the Republic of Bosnia and Herzegovina.

Preparing for a "new war"

On March 5, 1958, Anthony Eden, in his "Sinews of Peace" (Iron Grip) speech in Madrid, Franco-Spain, said "a shadow" had fallen over Europe. He described an "Iron Grip" as having dropped between East and West. From the standpoint of the Germans, the speech was an incitement for the West to begin a war with the German Union, as it called for an Anglo-French alliance against the Germans. The immediate post-1957 period may have been the historical high point for the popularity of nationalist ideology. The burdens the Wehrmacht and Prussia endured had earned them massive respect which, had it been fully exploited by Walter Ulbricht, had a good chance of resulting in an ultra-nationalist Europe. Nationalist and socialist parties achieved significant popularity in such areas as China, Greece, Persia, and Ethiopia.
These parties had already come to power in Romania, Bulgaria, Albania, and Yugoslavia under the German Union. The United Kingdom was concerned that electoral victories by nationalist parties in any of these countries could lead to sweeping economic and political change in Western Europe.

Beginnings of the Interbellum

Triple Entente beginnings and Funkfreiheit

Britain, Franco-Spain and later the Russian Empire signed the Entente Cordiale Treaty of April 1959, establishing the Triple Entente. The German and Japanese leaders retaliated against these steps by integrating the economies of their nations in the European Community and GEACOP; that August, the first German atomic device was detonated in the Baltic Sea; an alliance with the Empire of Japan was signed in February 1960; and the Coalition of Independent Countries was formed in 1965. Media in Europe was an organ of the state, completely reliant on and subservient to the German government, with radio and television organizations being state-owned, while print media was usually owned by political organizations, mostly by the local conservative parties. German propaganda used nationalist philosophy to attack imperialism, claiming slave labor agitation and war-mongering capitalism were inherent in the system. Along with the broadcasts of the Ministry of Public Enlightenment and Propaganda (RMVP) and the Voice of Japan to Australia and America, a major propaganda effort begun in 1949 was Funkfreiheit (Radio Freedom), dedicated to bringing about the peaceful demise of the imperialist system in the Allied world. Radio Freedom attempted to achieve these goals by serving as a surrogate home radio station, an alternative to the controlled and party-dominated domestic press, ironically. Radio Freedom was a product of some of the most prominent architects of Germany's early Interbellum strategy, especially those who believed that the Interbellum would eventually be fought by political rather than military means.
In June 1960, the Japanese Imperial Army invaded Alaska. Fearing that a Russian-Alaskan government under Oleg Pantyukhov's rule could threaten Japan and foster other insurgent movements in Asia, Hideki Tōjō committed Japanese forces and obtained help from the GEACOP members to support the Alaskan invasion. After a Chinese invasion to assist the Alaskans, fighting stabilized along the NAU border. The British Empire faced a hostile Japan, a Japanese-German partnership, and a defense budget that had quadrupled in eighteen months.

Coalition of Independent Countries

The Germans, who had already created a network of mutual assistance treaties in Mitteleuropa by 1959, established a formal alliance with the Empire of Japan, the Coalition of Independent Countries, in 1965.
Short vowel memory: Playing Memory is a great way to reinforce turn taking and develop memory skills and concentration. Your child will also be working on short vowel sounds by saying the sound represented by each character.

Prepare for launch: Ask your child to use the launch codes to complete the patterns of colors and shapes. The ability to extend or duplicate patterns is a logical reasoning skill that forms a basis for future work in math (specifically, algebra!).

Things that go: Kids chart modes of transportation on this printable. Learning about transportation is part of early social studies education, helping kids think about the physical and human environment.

Puppy care: Caring for pets—in real life and in play scenarios—helps children develop qualities of responsibility and empathy. Developing these important social skills will help your child thrive in school and in life.
In computing, an executable file or executable program, or sometimes simply an executable, causes a computer "to perform indicated tasks according to encoded instructions," as opposed to a data file that must be parsed by a program to be meaningful. These instructions are traditionally machine code instructions for a physical CPU. However, in a more general sense, a file containing instructions (such as bytecode) for a software interpreter may also be considered executable; even a scripting language source file may therefore be considered executable in this sense. The exact interpretation depends upon the use; while the term often refers only to machine code files, in the context of protection against computer viruses all files which cause potentially hazardous instruction execution, including scripts, are lumped together for convenience. Executable code is used to describe sequences of executable instructions that do not necessarily constitute an executable file; for example, sections within a program.

Generation of executable files

While an executable file can be hand-coded in machine language, it is far more usual to develop software as source code in a high-level language easily understood by humans, or in some cases an assembly language more complex for humans but more closely associated with machine code instructions. The high-level language is compiled into either an executable machine code file or a non-executable machine-code object file of some sort; the equivalent process on assembly language source code is called assembly. Several object files are linked to create the executable. Object files, executable or not, are typically in a container format, such as Executable and Linkable Format (ELF). This structures the generated machine code, for example dividing it into sections such as the .text (executable code), .data (static variables), and .rodata (static constants).
In order to be executed by the system (such as an operating system, firmware, or boot loader), an executable file must conform to the system's Application Binary Interface (ABI). In the simplest case a file is executed by loading it into memory and jumping to the start of the address space and executing from there; in more complicated interfaces, executable files have additional metadata specifying a separate entry point. For example, in ELF, the entry point is specified in the header in the e_entry field, which specifies the (virtual) memory address at which to start execution. In the GNU Compiler Collection this field is set by the linker based on the _start symbol. Executable files typically also include a runtime system, which implements runtime language features (such as task scheduling, exception handling, calling static constructors and destructors, etc.) and interactions with the operating system, notably passing arguments, environment, and returning an exit status, together with other startup and shutdown features such as releasing resources like file handles. For C, this is done by linking in the crt0 object, which contains the actual entry point and does setup and shutdown by calling the runtime library. Executable files thus normally contain significant additional machine code beyond that directly generated from the specific source code. In some cases it is desirable to omit this, for example for embedded systems development, or simply to understand how compilation, linking, and loading work. In C this can be done by omitting the usual runtime and instead explicitly specifying a linker script, which generates the entry point and handles startup and shutdown, such as calling main to start and returning exit status to the kernel at the end. The same source code can in general be compiled to run under different computer architectures and operating systems.
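The e_entry field can be read straight out of an ELF header with ordinary byte parsing. As a self-contained sketch, the bytes below are a hand-built 64-bit ELF header (not a complete runnable program) whose entry point is set to the illustrative address 0x401000:

```python
import struct

# Hand-built minimal 64-bit little-endian ELF header (illustrative only).
e_ident = b"\x7fELF" + b"\x02\x01\x01" + b"\x00" * 9  # magic, 64-bit class, little-endian, version
header = e_ident + struct.pack(
    "<HHIQQQIHHHHHH",
    2,          # e_type: ET_EXEC (executable file)
    0x3E,       # e_machine: x86-64
    1,          # e_version
    0x401000,   # e_entry: virtual address where execution starts
    64, 0,      # e_phoff, e_shoff
    0,          # e_flags
    64, 56, 1,  # e_ehsize, e_phentsize, e_phnum
    0, 0, 0,    # e_shentsize, e_shnum, e_shstrndx
)

# In ELF64, e_entry is an unsigned 64-bit field at byte offset 24.
(entry,) = struct.unpack_from("<Q", header, 24)
print(hex(entry))  # 0x401000
```

The same offset-24 read works on any real ELF64 binary on disk, which is essentially what tools like readelf do when they report the entry point.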
Sometimes this requires no changes to the source code, simply outputting different machine code (targeting a different instruction set) and linking to a different runtime (due to operating system differences). In other cases it requires changing the source code, either with compile-time changes (conditional compilation) or run-time changes (checking the environment at run time). Conversion of existing source code for a different platform is called porting.

Interaction with computing platforms

An executable comprises machine code for a particular processor or family of processors. Machine-code instructions for different families are completely different, and executables are totally incompatible. Within families processors may be backwards compatible; for example, a 2014 x86-64 family processor can execute most code for x86 family processors from 1978, but the converse is not true. Some dependence on particular hardware, such as a given graphics card, may be coded into the executable. It is usual as far as possible to remove such dependencies from executable programs designed to run on a variety of different hardware, instead installing hardware-dependent device drivers on the computer, which the program interacts with in a standardised way. Some operating systems designate executable files by filename extension (such as .exe) or note this alongside the file in its metadata (such as by marking an "execute" permission in Unix-like operating systems). Most also check that the file has a valid executable file format to safeguard against random bit sequences inadvertently being run as instructions. Modern operating systems retain control over the computer's resources, requiring that individual programs make system calls to access privileged resources. Since each operating system family features its own system call architecture, executable files are generally tied to specific operating systems, or families of operating systems.
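The "execute" permission mentioned above is ordinary file metadata and can be inspected and set programmatically on Unix-like systems. A small sketch (the temporary file is just a stand-in for a real program):

```python
import os
import stat
import tempfile

# Create a throwaway file; temporary files are not executable by default.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

mode = os.stat(path).st_mode
print(bool(mode & stat.S_IXUSR))  # owner execute bit is initially clear

# Mark the file as executable for its owner, as chmod u+x would.
os.chmod(path, mode | stat.S_IXUSR)
print(bool(os.stat(path).st_mode & stat.S_IXUSR))  # now set

os.remove(path)
```

Note that the execute bit only tells the operating system the file may be run; the format check described in the paragraph above (a valid ELF header, a shebang line, and so on) is a separate gate.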
There are many tools available that make executable files built for one operating system work on another by implementing a similar or compatible application binary interface; for example, Wine implements a Win32-compatible library for x86 processors. In other cases, multiple executables for different targets are packaged together in a fat binary. When the binary interface of the hardware the executable was compiled for differs from the binary interface on which the executable is run, the program that does this translation is called an emulator. Files that can execute but do not necessarily conform to a specific hardware binary interface, or instruction set, can be represented either in bytecode for just-in-time compilation, or in source code for use in a scripting language (see Shebang (Unix)).
As I pointed out last year, spacefaring bacteria might affect rainfall. Now, NOAA scientists say that phytoplankton affects hurricanes, as New Scientist writes. In 2002, several scientists claimed that bacteria high in Earth's atmosphere came from space. Last year, scientists said that bacteria in the upper atmosphere may actually make rain. Specifically, they said that bacteria can freeze at fairly warm temperatures, so that these "biological ice nuclei" form condensation nuclei, which trigger rain. Indeed, some scientists have speculated that bacteria cause rain as a means of transportation, so that they will "rain out" from the upper atmosphere to the surface of a planet. Now, scientists have discovered a "hibernating" bacterium in a salt mine in Utah which they believe has been in suspended animation for 250 million years. There is evidence that this ability to hibernate for long periods of time is also useful for travel through space by the bacteria. Indeed, there is a more down-to-earth analogy to the idea of spacefaring bacteria: the humble coconut. Coconuts can float across long distances of water in the ocean, and when they land on a hospitable island, start growing. Bacteria have the ability to go into a kind of semi-permanent hibernation, but survival for this long was unheard of. After the bacteria had lain dormant in the salt crystal for 250 million years, the scientists added fresh nutrients and a new salt solution, and the ancient bacteria "re-animated." Dr. Russell Vreeland, one of the biologists who found the bacteria, pointed out that bacteria can survive the forces of acceleration when rubble is thrown into space by a meteor impact. If it is possible for a bacterium to survive being thrown off the planet and to stay alive within a salt chunk for 250 million years, then in a sort of "reverse-exogenesis" it may be possible that Earth's own microbes are already out there.
Using a computer weather simulator, [NOAA's Anand Gnanadesikan] compared the formation of tropical storms in the Pacific under today's phytoplankton concentrations to conditions without any phytoplankton at all. What he found was an overall decrease in tropical storms in a phytoplankton-free digital Pacific. The mechanism for this shift lies in phytoplankton's ability to absorb sunlight, which heats up the water around it. Without phytoplankton, the sun's rays penetrate deep into the ocean, leaving the surface water cold. Cool water has less energy than warm water, produces less of the moist air needed to build up tropical storms, and allows for stronger winds that can dissipate thunderstorms before they turn into typhoons (what hurricanes are called in the Pacific Ocean). All of this adds up to a Pacific Ocean that is less exciting and deadly than the one we currently have. Because phytoplankton levels have dropped 40 percent since the 1950s, hurricane frequency and/or intensity may decline. Rain can provide transportation to bacteria, which provides an obvious evolutionary advantage: it gives them a free ride out of space down to a planet's surface. But could there also be an advantage to phytoplankton from hurricanes? Believe it or not, there is. As NASA wrote in 2004: Whenever a hurricane races across the Atlantic Ocean, chances are phytoplankton will bloom behind it. According to a new study using NASA satellite data, these phytoplankton blooms may also affect the Earth's climate and carbon cycle. The satellite images showed tiny microscopic ocean plants, called phytoplankton, bloomed following the storms. "Some parts of the ocean are like deserts, because there isn't enough food for many plants to grow. A hurricane's high winds stir up the ocean waters and help bring nutrients and phytoplankton to the surface, where they get more sunlight, allowing the plants to bloom," Babin said. Bigger storms appear to cause larger phytoplankton blooms.
Larger phytoplankton blooms should have more chlorophyll, which satellite sensors can see. Hurricane-induced upwelling, the rising of cooler nutrient-rich water to the ocean surface, is also critical in phytoplankton growth. For two to three weeks following almost every storm, the satellite data showed phytoplankton growth. Babin and his colleagues believe it was stimulated by the addition of nutrients brought up to the surface. By stimulating these phytoplankton blooms, hurricanes can affect the ecology of the upper ocean. Phytoplankton is at the bottom of the food chain. The factors that influence their growth also directly affect the animals and organisms that feed on them.
An ectoparasite is an organism that lives on the outer surface of another organism, its host, and does not contribute to the survival of the host. Depending upon the parasite(s), the following may be observed: - Intense itching with intermittent or persistent scratching. - Loss of hair, ulcerated skin, abrasions or scabs (seen most commonly on the neck and back of the shoulders when mites are involved). - May see light tan, brown, or reddish colored "dots" on the skin, or the presence of silvery colored nits attached to hair shafts. - May see a fine bran-like substance on the skin and fur. In sarcoptid or sarcoptid-like species, crusted red or yellowish lesions may be seen on the auricle or pinna of the ear and on the nose, along with small reddish bumps on the tail, genitals, and feet. Hair loss and skin sensitivity (pink to reddish, irritated-looking skin) may be present in conditions of mange. - May see actual fleas on the rat, or an indication of their presence from droppings of digested blood on the rat's skin, which may appear like particles of dirt. - Ticks may be seen on the legs, ventral surface of the body, ears, and neck. They may appear red, brown, or black when engorged with blood. Also note: it is known that in dogs, a tick attached to the right place on a leg can cause that leg to be paralyzed, and removing the tick resolves the problem. Possibly, though not documented (since ticks are less often seen on pet rats in general), the same may occur in rats. For additional information on recognizing various signs of pain or discomfort refer to: Signs of Pain In Rats. Ectoparasites are those which live on the skin or attach to hair follicles. The external parasites listed below are those that most often affect rats. - Lice (phylum: Arthropoda, class: Insecta) are of two orders: the Mallophaga, which bite or chew, and the Anoplura (family Pediculidae), which suck blood.
The Anoplura that infest domestic animals are what is most often seen in rats. Polyplax spinulosa (spined rat louse) is a louse that causes hair loss and pruritus (itching). It can sometimes be detected by the silvery colored nits attached to the hair. Lice are species specific, meaning they do not cross from one species to another. They spend their entire life cycle, approximately 14 to 21 days from egg to nymph to adult, on the host. They obtain nutrition by sucking blood, which in turn can cause anemia in the rat. They are also able to transmit the parasite Hemobartonella muris, leading to a disease similar to tick fever. - Mites (phylum: Arthropoda, class: Arachnida) are of the subclass Acari. Unlike lice, they are not strictly host specific, meaning that with certain species of mites, if the preferred host is not available, they may cross to another species. The tropical rat mite Liponyssus bacoti (synonym: Ornithonyssus bacoti) is round in shape and appears dark when engorged with blood. They can survive on fomites (e.g., bedding, litter), and only stay on an animal when they are feeding. They are one of the species of mites that will also bite other animals, including humans. Demodex spp. and Notoedres muris (a sarcoptid-like mite) are types of mites that cause mange, a type of skin condition. Demodex spp. can be found anywhere on the skin but are primarily found deep within the hair follicles and sebaceous glands. Mange caused by Demodex spp. can produce signs of skin sensitivity and hair loss. Notoedres muris (also termed the ear mange mite) burrows into skin, and can present as yellowish crusty-appearing warts on the edges of the ears and nose, or as reddened bumps on other extremities. Neither is often seen in the domestic pet rat. Sarcoptes scabiei varieties, while not host specific per se, do possess some host preference, and physiologic differences do exist between varieties.
Rats can be infested with a variety of sarcoptic mite; however, they do not give their owners their type of mange. Human infestation is with a different variety of scabies mite than what is found on animals. Should your pet rat be infested with a sarcoptic mite and have close contact with you, the mite can get under your skin and cause itching and skin irritation. However, the mite dies in a couple of days and does not reproduce. It may cause you to itch for several days, but you do not need to be treated with special medication to kill it. Until your rat is treated effectively and its environment cleaned, continued infestation will be a source of discomfort for your rat and an annoyance to you. For more information on scabies in humans see the CDC Fact Sheet. Radfordia ensifera is a fur mite that can cause dermatitis; it is the mite most commonly seen in rats. It may occasionally be seen as white specks of dust on hair follicles. It produces intense itching and leads to scabs, most frequently seen on the shoulders, neck, and face of the rat. The rat fur mite and mange mite do not infest humans or other animals. Mites in small numbers are normally commensal and do not tend to be bothersome to their host. It is when the rat is stressed, has decreased immunity due to other illnesses, and/or is unable to keep their numbers down by normal grooming that the mites flourish. Inattention to proper husbandry, a rat that is ill, or ineffective treatment can lead to reinfestation and dermatitis. On average, the entire life cycle of the mite, beginning with the eggs which hatch in about seven days and continuing through the larval, nymphal, and adult stages, requires approximately 23 days to complete. It is therefore important to maintain care and follow through with the treatment(s) prescribed. - Fleas (phylum: Arthropoda, class: Insecta), of which thousands of species are recognized worldwide, affect humans and animals.
They belong to the order Siphonaptera. The species of flea that most commonly affects animals, and humans, is Ctenocephalides felis. It causes severe irritation and can be responsible for flea allergy dermatitis. Fleas go through developmental stages before becoming adults. It is the adult fleas, which appear as 1-5 mm, laterally flattened, wingless insects, that infest the animal's fur. Reinfestation can occur if care is not taken to treat the animal's surrounding environment as well. Eggs deposited on the host by the adult female flea can fall from the host into the surrounding environment, go through development, and emerge as young adults that move back to the host or to a newly acquired host. Flea infestation can be determined by the actual presence of fleas or by flea excreta: digested droppings of blood appearing as black dots. These black dots, when dissolved on paper or placed in water, will appear red. This species of flea, Ctenocephalides felis, is also responsible for the transmission of murine typhus by Rickettsia typhi, a febrile disease in both man and small mammals, principally seen in southern coastal climates. Treatment for flea infestation should include the home, the rat's environment, and any other animals living in the home. - Ticks (phylum: Arthropoda, class: Arachnida) are, like mites, of the subclass Acari. They are divided into two families: the Ixodidae (e.g. Amblyomma spp., Ixodes spp., Dermacentor spp., and Rhipicephalus spp.), which are hard body ticks, and the Argasidae (e.g. Ornithodoros and Otobius), which are soft body ticks. They feed on the blood of mammals, birds, and reptiles. Although some species of tick have a preference for a certain species of host, most are less host specific. Hard body ticks seek out a host by questing (a type of behavior): crawling up grass stems or leaves, perching with front legs extended, and attaching as a host brushes against their front legs.
Hard ticks will feed from several days to weeks depending upon the species of tick, the type of host, and the life cycle stage they are in. Many types of hard body ticks are called "three host ticks" because each stage of development, from larva to nymph to adult, requires a different host to feed from. The complete life cycle can take up to a year. Both the nymphs and adults have bodies divided into two sections: the head, containing the mouthparts, and the posterior of the body, containing the digestive tract, reproductive organs, and the legs. Often less than 5 mm in size, they may vary in color from red to brown, or black when engorged with blood. The body of the adult tick can be seen to grow as it feeds and engorges on the blood of its host. The adult female lays only one batch of eggs, as many as 3,000, and then dies. The male tick feeds very little and tends to stay with larger hosts so it can mate with the adult female tick. The male dies once it has reproduced. Soft body ticks have life stages that are difficult to discern. They go through multiple and repeated stages before becoming adults, with each stage feeding multiple times, unlike hard body ticks. The life cycle of the soft body tick is considerably longer than that of hard body ticks. The adult female soft body tick can lay multiple batches of eggs during its adult life. Soft body ticks behave similarly to fleas in their feeding behavior: they can live in the nest of the host, feeding each time the host returns. While ticks are not commonly seen on pet rats housed indoors, there is the potential for infestation if rats are housed outdoors, or if they are in contact with other pets that go outside. Where other household pets have been infested, it is recommended to also check, and treat, pet rats if ticks are detected. Severe infestations can cause blood loss resulting in anemia. Zoonotic diseases associated with tick infestation in rabbits (e.g.
Tularemia, Lyme disease, and Rocky Mountain spotted fever) could potentially be a factor in rats with tick infestation (1). Transmission of all the above ectoparasites can be host to host or fomites to host. Fortunately, with proper husbandry and persistent treatment they do not have to pose a problem. For information on hypersensitivity and allergic contact dermatitis, see Dermatitis/Eczema. Photos and Case Histories Involving Parasite Infestation - Fig. 1: Signs of mite infestation. - Fig. 2: Sarcoptes mange photos and case history. - Fig. 3: Lice and case history. - Fig. 4: Ectoparasite slides and descriptions courtesy of University of Missouri Research Animal Diagnostic Laboratory - Fig. 5: Demodex mites in 26-month-old female rat (Inca) Skin scrapings for possible parasites can be done; however, parasites may still be present even though the scrapings are negative. For information regarding dosages and usage of the following medications refer to the section Anti-infectives in the Rat Medication Guide. For tick removal, grasp the tick with forceps, tweezers, or a tick extractor between the head and body, pulling straight out and being careful not to squeeze the body of the tick and release blood. In the event extraction is not easily obtained by pulling straight, a slight twisting motion can be tried (some brands of tick extractors are designed for this). Immerse the tick in an acaricide solution or alcohol in a small container with a lid. Be sure to search for and remove all ticks! Following extraction: wipe the area where the tick was removed with saline or an alcohol wipe. It is recommended to give a single dose of ivermectin 0.4 mg/kg to be sure any remaining ticks will be killed (1). Mites and lice: selamectin (Revolution) applied once topically. In some instances a second treatment (following a 30-day interval) may be needed. Rarely, and only by veterinary assessment, may it become necessary to dose at a two-week interval.
Ivermectin (sold as horse worming paste) given orally: brands include Equimec in Australia and Equimectrin, Equalvan, Rotectin 1, and Zimecterin in the U.S., where the active ingredient ivermectin is 1.87%. Oral or topical dosing is noted to be less stressful to rats and mice. Rare incidences of adverse reactions have been reported when ivermectin has been given by injection in rats. For treatment specific to stubborn demodectic, notoedric, and sarcoptid mite infestations, ivermectin, selamectin (Revolution), or topical treatment with Mitaban (amitraz) may be considered. It is recommended to discuss the proper use of Mitaban with your vet before attempting to use it. Ivermectin is considered to have a wider margin of safety. In cases of mange, treatment may need to be carried out for as long as 6-12 weeks. Skin infection by normal skin flora often accompanies persistent, severe cases of mange. It may become necessary to treat with an antibiotic such as cephalexin (Keflex). Fleas and lice: topical dosing with Advantage (orange labeled package for cats/kittens 9 pounds and under). Fleas, mites (other than demodectic mites), and lice: topical dosing with selamectin (Revolution), a derivative of ivermectin, labeled for use in kittens. The topical application of selamectin, as directed, is less stressful for rats than injectable treatments. Alternative treatment for mites, lice, and fleas: *Note, although a spray or shampoo containing 0.05% or 0.06% pyrethrin, sold for small animals such as rats, mice, or hamsters, or safe for kittens or puppies 2 weeks of age, can be used on rats every 7 days for 4 weeks, its use should be avoided, or discussed with a veterinarian prior to use, due to the risk of increased toxicity from ingestion by licking, in addition to absorption. Do not use concomitantly with other anthelmintics (e.g., ivermectin or selamectin).
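For illustration only, the weight-based arithmetic behind the doses discussed above can be sketched in a few lines. The 0.4 mg/kg figure is the single tick-treatment dose quoted earlier, and 1.87% (w/w) is the paste concentration from the product labels above; actual dosing must always be set by a veterinarian.

```python
# Illustrative arithmetic only -- actual dosing must come from a veterinarian.
# Assumes the single 0.4 mg/kg tick-treatment dose quoted earlier and a
# horse-paste concentration of 1.87% ivermectin by weight (18.7 mg/g).

def ivermectin_dose_mg(weight_g, dose_mg_per_kg=0.4):
    """Total ivermectin dose in mg for an animal of the given weight."""
    return dose_mg_per_kg * weight_g / 1000.0

def paste_amount_mg(dose_mg, fraction=0.0187):
    """Mass of 1.87% paste (in mg) that contains the given ivermectin dose."""
    return dose_mg / fraction

dose = ivermectin_dose_mg(400)   # a 400 g adult rat -> 0.16 mg ivermectin
paste = paste_amount_mg(dose)    # roughly 8.6 mg of paste -- a tiny speck
print(dose, round(paste, 1))
```

The tiny paste mass that falls out of this arithmetic is why the paste is described below as a less accurate dosing method than the parenteral product.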
In Addition To Treatments Above: Treat all rats at the same time, and clean all cages, including bedding and toys, thoroughly. Disinfecting with bleach can be very effective, but be sure to rinse the cage and articles well and allow them to dry before returning rats to their cage. Clip the toenails of the rear feet to prevent increased trauma to lesions from scratching. If irritation to the skin from scratching is observed, a lightly applied application of a Vitamin E cream, Polysporin ointment, or Aloe gel may help relieve it and prevent further secondary infection from occurring. Rats groom frequently; it is therefore recommended to avoid applying these to areas the rat or cage mates can easily access. If skin irritation, inflammation, or weeping lesions continue, systemic antimicrobials may need to be started. See your veterinarian. - When treating adult rats weighing between 300 to 500 grams with ivermectin (sold as horse worming paste where the active ingredient ivermectin is 1.87%) orally: give a small amount equivalent to a grain of uncooked rice. Dose once a week, for at least 3 weeks. For rats that are younger or under 300 g, split the dose by half and dose once a week for at least 3 weeks. Contribution by C. Himsel-Daly DVM: There is no need to decant ivermectin paste if it is in the original tube; decanting any fluid would concentrate the drug and raise the concentration in a volumetric dose, potentiating a toxicity. If, on opening the original tube of paste, a drip of fluid is found at the end of the tube before the actual paste emerges, all that is required is to express the tube, discard that bit of paste until it is uniform in consistency, and dose according to the veterinarian's recommendation. If the paste has been dispensed into another container, mix thoroughly before dosing. Paste tends to be a less accurate dosing method than using the parenteral product. Fortunately the paste has a wide margin of safety.* - Continue treatment as prescribed.
- Keep toenails clipped on a regular basis, making sure not to cut the quick. Keep styptic on hand in case bleeding occurs. - Repeat cage and article disinfecting at least once a week. - Remove and discard articles made of wood. - Free of parasite infestation. - Free of inflammation and irritation of skin. - Maintain the rat's overall general health. - Use of prepackaged processed litter, and the freezing of litter where bags have been breached prior to purchase, may be of help. *Please note: any bag of litter/bedding that has a row of holes in the top, or that has been breached during storage in pet stores and feed/tack warehouses, poses a potential risk of contamination through contact with resident infested animals. Freezing the litter before using it in cages may be a helpful preventative measure. - The freezing of prepackaged or mixed foods and rat blocks, prior to feeding, is recommended if bags have been breached at the time of purchase. - Provide a clean cage environment. - Quarantine all new rats for a minimum of three weeks and treat for infestation or infection if present prior to introducing them to the existing colony. - When holding or playing with rats other than your own, it is recommended that you wash and change clothes prior to handling your own rats. - Quesenberry, K., & Carpenter, J. (2012). Ferrets, Rabbits, and Rodents: Clinical Medicine and Surgery (3rd ed.). St. Louis: Saunders. - Arlian, L., Runyan, R., & Estes, S. (1984). Cross infestivity of Sarcoptes scabiei. J Am Acad Dermatol, 10(6), 979-86. - Vredevoe, L. (2003, May 16). Background information on the biology of ticks. UCD Entomology R. B. Kimsey Laboratory. Retrieved February 16, 2012, from http://entomology.ucdavis.edu/faculty/rbkimsey/tickbio.html - Beck, W., & Fölster-Holst, R. (2009). Tropical rat mites (Ornithonyssus bacoti) - serious ectoparasites. J Dtsch Dermatol Ges, 7(8), 667-70.
Retrieved March 20, 2012, from http://www.dgvd.org/media/news/publikationen/2009/ddg_09094_eng.pdf Posted on June 29, 2003, 10:20, Last updated on June 27, 2014, 16:27 | Integumentary / Skin
A comprehensive examination of a person's visual functioning would include the following: • Detailed history of any eye complaints, surgery, therapy and medication • Assessment of visual acuities (clarity of vision) • Binocularity tests to assess how the eyes work together as a team, and how the eyes aim and focus together • Refraction to determine if any optical correction (near-sightedness, far-sightedness, astigmatism, etc.) is required • Internal and external eye examination to assess the health of the eye • Measurement of intra-ocular eye pressures Should contact lenses be required, further measurements of corneal curvatures will be done. The goal of the assessment should be to determine whether the visual system is processing information effectively. Because normal vision guides us in what we do in everyday life (especially at work or school where visual demands are high), the majority of the examination should be completed under natural conditions. Eye examinations for children would involve further testing of binocularity and visual perceptual skills. See our Children's Vision section for further details. What is Vision? Vision is a cognitive act which enables us to look at an object and not only identify it, but determine where it is, its size, its distance from the observer, its rate of movement, its texture, and everything else that can be determined by visual inspection. Eyesight, which involves the sensory ability of the eye to distinguish small details, is only one component of vision. It has been estimated that 75 to 90% of all classroom learning comes to the student via the visual pathways. If there is any interference with these pathways, the student will probably experience difficulty with learning tasks.
BME103:T130 Group 12 | BME 103 Fall 2012 LAB 1 WRITE-UP Initial Machine Testing The Open PCR machine is a DIY device composed of many circuit boards, wires, and a wooden frame. It is used to cycle DNA by oscillating the temperature of the DNA samples. This machine works predominantly when the samples are placed in the main heating block, at which point a heated lid is placed down on top of the samples. Once the software for the Open PCR device is set up, the temperature change and the actual process begin. Within the Open PCR machine is a multitude of parts that keep the machine intact. These parts include a heat sink and fan to dissipate heat, a circuit board that runs all the parts, a power supply to maintain the electricity, and an LCD display to show the user information. Together, all of these parts work cohesively to form the working machine known as the Open PCR. Experimenting With the Connections The test run was done November 1st, 2012. Machine number 12 was used and there were minimal problems. It felt as if the machine was running slower than it should, but other than that all went well. Polymerase Chain Reaction A polymerase chain reaction (PCR) is based on the enzyme DNA polymerase's ability to synthesize complementary DNA strands. Through a series of steps in which heat separates the DNA strands and the polymerase then synthesizes a specified complementary piece, a PCR machine is able to isolate and amplify a desired strand of DNA. Steps to Amplify a Patient's DNA Sample 1. PCR uses controlled temperature changes to make copies of DNA. Heat (about 95°C) separates double-stranded DNA into two single strands; this process is called denaturation. 2. "Primers", or short DNA strands, bind to the very end of the complementary sequence being replicated. This step is called annealing, which takes place between 40°C and 65°C.
The temperature that we used was 57°C. 3. Once the annealing process is done, the temperature is raised to about 72°C and DNA polymerase then extends from the primers, copying the DNA. 4. PCR then amplifies a segment of a DNA sequence. In the end, there will be two new DNA strands identical to the original strand. Components of PCR Master Mix • A modified form of the enzyme Taq DNA polymerase that lacks 5´→3´ exonuclease activity • Colorless reaction buffer (pH 8.5) Samples: Sample 2: Patient ID: 11014, Age: 67, Gender: Male, Replicate: 2. Sample 3: Patient ID: 11014, Age: 67, Gender: Male, Replicate: 3. Sample 4: Patient ID: 46446, Age: 62, Gender: Female, Replicate: 1. Sample 5: Patient ID: 46446, Age: 62, Gender: Female, Replicate: 2. Sample 6: Patient ID: 46446, Age: 62, Gender: Female, Replicate: 3. Sample 7: Positive control. Sample 8: Negative control. 1. To assemble the fluorimeter, first obtain a smartphone to capture the pictures needed during data collection. 2. Turn on the fluorimeter and drop a single drop of solution onto the hydrophobic slide. 3. Turn the black box provided upside down to cover the fluorimeter. 4. Set up the smartphone on the stand provided, and align the camera/phone about 3 inches in front of the fluorimeter. Make sure that the stand and the fluorimeter are covered directly under the black box. You will have 8 samples from the OpenPCR instrument, 1 DNA sample (calf thymus standard at 2 micrograms/mL), and water from the scintillation vial to analyze. 1. With a permanent marker, number your transfer pipettes at the bulbs so that you use each for only one sample. With the permanent marker, number your Eppendorf tubes at the top. At the end, you should have 10 Eppendorf tubes and 10 pipettes clearly labeled. 2. Transfer each sample separately (using one pipette per sample) into an Eppendorf tube containing 400 µL of buffer. Label this tube with the number of your sample. Get your entire sample into this Eppendorf tube.
You can use this sample-numbered transfer pipette to place only this sample's drop onto the fluorescence measuring device. 3. Take the specially labeled Eppendorf tube containing SYBR Green I and, using the specifically labeled pipette, place two drops on the first two centered drops as seen in the video. 4. Now take your diluted sample and place two drops on top of the SYBR Green I solution drops. 5. Align the light going through the drop, as seen in the video. 6. Let the smartphone operator take as many pictures using the light box as he/she wants. 7. Now either rerun the sample or discard that sample's pipette. Keep the SYBR Green I labeled pipette. 8. You can run 5 samples per glass slide. 9. As the last sample, run the water from the scintillation vial as a blank, using the same procedure as with the other samples. Our group used a Galaxy Nexus. 1. After setting up the fluorimeter, set the smartphone's photo settings to the ones listed. 2. Once the samples have been prepared, place the fluorimeter in the light box. 3. Take as many pictures as needed. Your goal is to take pictures clear enough that ImageJ can take data from the images. 4. Once you have taken enough photos of that sample, give the fluorimeter back to the sample preparer to prepare the next sample. 5. Repeat this procedure for all the samples. 1. ImageJ was used to analyze the images taken by the smartphone. To upload the image onto ImageJ, the ANALYZE tab was clicked and SET MEASUREMENTS was chosen. AREA, INTEGRATED DENSITY, and MEAN GREY VALUE were selected from the menu. 2. The IMAGE tab was selected and COLOR was chosen; the function SPLIT CHANNELS was used, and three separate files were created. SYBR Green fluoresces green, so the image with "green" in its name was used. 3. The oval selection was used to draw an oval around the green drop. Then MEASURE was selected from the ANALYZE tab, and the sample number and the numbers measured from the image were recorded. 4.
To get the readings from the background of the image, another oval of the same size was drawn elsewhere in the green image and MEASURE was selected from the ANALYZE tab. The sample number and the numbers measured from the image were recorded; this data was labeled as "background". 5. The measurements were saved in an Excel file by clicking SAVE AS from the FILE tab. Research and Development Specific Cancer Marker Detection - The Underlying Technology PCR detection works by heating the DNA sample to about 95°C in order to separate the DNA strands. The PCR then cools to 57°C in order for the primer to attach to the DNA strands. The PCR then heats to 72°C so the DNA strand can be copied. The r17879961 cancer-associated sequence will produce a DNA signal because the reverse primer used, AACTCTTACACTCGATACAT (the letters in the sequence are the bases and stand for Guanine (G), Adenine (A), Cytosine (C), and Thymine (T)), will only attach if the DNA sample matches the cancer-associated sequence "ACT". If the DNA sample does not have the cancer-associated sequence, the primer will not attach, because the sequence is AACTCTTACACTTCGATACAT, and there will be no DNA signal. The primer sequences that will be used are ACTC, or in reverse, CTCA. A positive result will be recognized because there will be a profound amount of the same sequence, the r17879961. If there is none of the sequence, then we know that the results are negative. Bayes' Law (worksheet) Source: http://openpcr.org/use-it/
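The drop and background measurements collected in the ImageJ steps above are usually combined by subtracting the background signal expected over the drop's area. A minimal sketch of that arithmetic, with hypothetical numbers standing in for real MEASURE output:

```python
# Background subtraction for ImageJ-style measurements as described above.
# The numbers below are hypothetical stand-ins for real MEASURE output.

def corrected_intden(area_px, intden, background_mean):
    """Integrated density of the drop minus the background signal expected
    over the same area (drop IntDen - area x background mean grey value)."""
    return intden - area_px * background_mean

# Hypothetical example: a drop oval of 5,000 px^2 with integrated density
# 1.2e6, and a same-size background oval with mean grey value 40.
signal = corrected_intden(area_px=5000, intden=1.2e6, background_mean=40)
print(signal)  # 1000000.0
```

Because the background oval is drawn to the same size as the drop oval, its mean grey value times the area estimates how much of the drop's integrated density is camera background rather than SYBR Green fluorescence.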
Before pruning the roots of any street tree, get a free root pruning permit from Urban Forestry. Tree root systems are very large; they can reach as far away from the trunk as the tree is tall. Tree root systems are also very shallow, with 80% of the roots in the top three feet of soil. The roots are integral to the survival of a tree. They support the tree, absorb water and nutrients, and store food. A healthy root system ensures a healthy tree, so special consideration needs to be taken when working in the root zone or pruning tree roots. Urban environments are very tough places for trees to live. When a tree is stressed or in decline, the root system should be the first place to look for the cause of the decline. There are many factors that can stress a tree's root system, including soil compaction, changes in the amount of soil water, additions or removals of soil, or too much fertilizer. Any of these changes can impact root health. It is important to be aware of changes in soil conditions in order to keep track of the impact on your tree's root health. Pruning Large Roots Pruning roots or damaging roots in the root zone can have major consequences for the health of a tree. Pruning roots should only be done by professional arborists or after a consultation with a City arborist. Root pruning permits are free and required for trees in the right-of-way. Pruning Small Roots Preventive root pruning should be part of a maintenance plan for young street trees. The roots of young trees are actively growing and seeking out new sources of water and nutrients. As roots grow they travel under sidewalks and can raise the sidewalk as they grow larger. Sidewalks are especially susceptible to root damage because unlike roads or driveways, which are poured to depths of 6+ inches over a base of gravel, sidewalks are only four inches deep and lie directly on top of dirt. When roots and sidewalks compete, roots usually win.
Therefore, preventive root pruning is recommended to reduce the potential for infrastructure damage. The best tool for pruning the small roots (less than a quarter inch in diameter) of trees in the planting strip between the sidewalk and the curb is a sharpened nursery spade with a 13" blade. Thrust the spade into the soil along each sidewalk edge to a depth of 12 inches, separating the root ends to prevent them from grafting back together. This should be done annually, starting with the first year the tree is planted. To prune any other roots on street trees in the planting strip, a free root pruning permit from Urban Forestry is needed.
Thanks to Michael Ymer for this great game. Introduction / objectives This is a card game that provides students with the opportunity to investigate a variety of mental computation strategies when adding and multiplying numbers. It is quick and easy to organise and is lots of fun, even for adults who play it. The less able student can win, as there is an element of chance involved. A terrific number sense game to use as a warm-up activity, or a focus lesson for young children. Equipment [for each pair of students]: 100 number board, 10 x 10 tables chart and two counters. Deck of cards. All number cards have face value. Ace = 1, 2 = 2, etc. Picture cards = 10. Joker = wild [can have the value of any other card in the deck]. Two students compete against each other to see who can get closest to 100 without busting. One student deals cards out to his/her opponent, who adds or multiplies the cards. This continues until the student decides to stop. Example: Player A is going first and having cards dealt by their partner. Card 5 is dealt first, so Player A moves the counter to 5 on the number board. Card 6 is the next card dealt. This could be 5+6, moving the counter to 11, or 5x6, moving the counter to 30. Let's assume that Player A decides to move to 30. The next card is a KING, so the student adds 10 and moves the counter to 40. Next card is 2. The student decides to multiply and moves to 80. Next card is an Ace. The student decides to multiply and stay on 80, hoping that the next two cards are 10's and he/she can hit exactly 100. Next card is a 5. The student adds and moves to 85. Next card is 9. The student moves to 94 and decides to stop, fearing that the next card flipped will be bigger than a 6 and he/she would bust. Player B now has the cards dealt to him/her and tries to better 94 without busting. Once this game is completed, play again but Player B goes first. - Card familiarisation activities are a good idea if students haven't been exposed to decks of cards before.
Perhaps older students could tell you the value of a deck of cards based on the values listed in this game. Younger students should do sorting activities to help them discover that there are four of each card. How many cards are in the deck? - Transparent counters help students see the numbers on the board. - Children find shuffling cards difficult, so keep working through the deck of cards until you run out. Then shuffle, or ask the teacher to help. - Children only deal a card out when the partner says, 'Card please'. This eliminates the problem of students dealing the card while the other student is still deciding their move. If the card is flipped without being asked for, the receiver has the option of using it or having a fresh one dealt out. - Try modelling the game to students using an overhead projector, a transparency of the 100 number board, transparent counters, and overhead miniature playing cards. A very effective way to demonstrate the game and the strategies that you need to discuss. - Vary the game if needed. Perhaps only add for young children, or play 'hit exactly 100' for older students. For this game students can use any operation, with the winner being the student who hits 100 with the fewest cards. - Vary the game by making it more challenging. Use any operation to hit exactly 100 in fewer cards than your partner. - When introducing the game, tell the children that while the game is lots of fun, the point of the game is to make decisions and become a smarter mathematician by taking shortcuts when adding or multiplying. The overhead gives you the opportunity to discuss some of the strategies listed later in the article.
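The rules above are simple enough to simulate, which can help a teacher see how often a given strategy busts. A minimal sketch in Python, under two assumptions not in the original rules: the joker is left out, and the player follows a greedy strategy (always pick the operation that lands closest to 100, stop at 94 or more):

```python
import random

# A minimal simulation of the "closest to 100 without busting" game above.
# Assumptions: the joker is omitted, picture cards count as 10, Ace as 1,
# and the player uses a hypothetical greedy strategy.

def build_deck():
    """One suit is A,2..10,J,Q,K; picture cards are worth 10."""
    one_suit = list(range(1, 11)) + [10, 10, 10]
    return one_suit * 4

def play(cards, stop_at=94):
    """Greedy player: for each card choose + or x, whichever lands
    closest to 100 without exceeding it; stop once total >= stop_at.
    Returns the final total, or None if the player busts."""
    total = 0
    for card in cards:
        options = [v for v in (total + card, total * card) if v <= 100]
        if not options:
            return None  # both operations exceed 100: busted
        total = max(options)
        if total >= stop_at:
            return total
    return total

deck = build_deck()
random.shuffle(deck)
print(play(deck))
```

Replaying the worked example's card order (5, 6, K, 2, A, 5, 9) with this strategy ends on 95 rather than the 94 reached in the text, because the greedy player adds the Ace instead of multiplying by it.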
“There is about two and a half times more carbon in the soil than there is in the atmosphere, and the concern right now is that a lot of that carbon is going to end up in the atmosphere,” said lead author Mark Bradford, assistant professor in the UGA Odum School of Ecology. “What our finding suggests is that a positive feedback between warming and a loss of soil carbon to the atmosphere is likely to occur but will be less than currently predicted.” Bradford and his team, which included researchers from the University of New Hampshire, the Marine Biological Laboratory at Woods Hole, Duke University and Colorado State University, found evidence to support both hypotheses and revealed a third, previously unaccounted-for explanation: the abundance of soil microbes decreased under warm conditions. “It is often said that in a handful of dirt, there are somewhere around 10,000 species and millions of individual bacteria and fungi,” said study co-author Matthew Wallenstein, a research scientist at … “Although our results suggest that the impact of soil microbes on global warming will be less than is currently predicted,” Bradford said, “even a small change in atmospheric carbon is going to alter the way our world works and how our ecosystems function.”

Metaphorical photo alert! Rust and dirt on a baking plate, shot by Roger McLassus, Wikimedia Commons (where it was a candidate for picture of the year in 2006), under the terms of the GNU Free Documentation License, Version 1.2.
Entry, Descent, and Landing Technologies
A Major Improvement in Landing Accuracy
It's hard to land on Mars, and even harder to land a rover close to its prime scientific target. Previous rovers have landed in the general vicinity of areas targeted for study, but precious weeks and months can be used up just traveling to a prime target. The Mars 2020 mission team is working on a strategy to put the rover on the ground closer to its prime target than was ever before possible. The Range Trigger technology reduces the size of the landing ellipse (an oval-shaped landing area target) by more than 50%. The smaller ellipse allows the mission team to land at sites where a larger ellipse would be too risky, because it would include more hazards on the surface. That gives scientists access to more high-priority sites with environments that could have supported past microbial life.
Range Trigger - It's All About Timing
The key to the new precision landing technique is choosing the right moment to pull the "trigger" that releases the spacecraft's parachute. "Range Trigger" is the name of the technique that Mars 2020 uses to time the parachute's deployment. Earlier missions deployed their parachutes as early as possible after the spacecraft reached a desired velocity. Instead of deploying as early as possible, Mars 2020's Range Trigger deploys the parachute based on the spacecraft's position relative to the desired landing target. That means the parachute could be deployed earlier or later depending on how close the spacecraft is to its desired target. If the spacecraft were going to overshoot the landing target, the parachute would be deployed earlier. If it were going to fall short of the target, the parachute would be deployed later, after the spacecraft flew a little closer to its target.
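The deploy-earlier/deploy-later logic described above can be sketched as a simple decision rule. This is an illustrative sketch only, not flight software; the function name, sign convention, and return strings are assumptions:

```python
def range_trigger_decision(predicted_miss_km: float) -> str:
    """Illustrative Range Trigger decision rule (not flight code).

    predicted_miss_km > 0 means the spacecraft is predicted to
    overshoot the target; < 0 means it would fall short.
    """
    if predicted_miss_km > 0:
        return "deploy earlier"  # slow down sooner to avoid overshooting
    if predicted_miss_km < 0:
        return "deploy later"    # coast closer to the target first
    return "deploy at nominal point"

print(range_trigger_decision(5.0))
```

The contrast with earlier missions is that they effectively triggered on a velocity threshold alone, regardless of the predicted miss distance.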
Shaving Time Off the Commute
The Range Trigger strategy could deliver the Mars 2020 rover a few miles closer to the exact spot in the landing area that scientists most want to study. It could shave as much as a year off the rover's commute to its prime work site. Another potential advantage of testing the Range Trigger is that it would reduce the risk of any future Mars Sample Return mission, because it would help that mission land closer to samples cached on the surface.
Improving Models of the Martian Atmosphere for Robotic and Future Human Missions to Mars
MEDLI2 is a next-generation sensor suite for entry, descent, and landing (EDL). MEDLI2 collects temperature and pressure measurements on the heat shield and afterbody during EDL. MEDLI2 is based on an instrument flown on NASA's Mars Science Laboratory (MSL) mission. MEDLI stands for "MSL Entry, Descent, and Landing Instrumentation." The original instrument only collected data from the heat shield; MEDLI2 can collect data from the heat shield and from the afterbody as well. This data helps engineers validate their models for designing future entry, descent, and landing systems. Entry, descent, and landing is one of the most challenging times in any landed Mars mission. Atmospheric data from MEDLI2 and MEDA, the rover's surface weather station, can help scientists and engineers understand atmospheric density and winds. These studies are critical for reducing risks to both robotic and future human missions to Mars.
Entry, Descent, and Landing (EDL) Cameras and Microphone
Unprecedented Visibility into Mars Landings
Mars 2020 has a suite of cameras that can help engineers understand what is happening during one of the riskiest parts of the mission: entry, descent, and landing. The Mars 2020 rover is based heavily on Curiosity's successful mission design, but Mars 2020 adds multiple descent cameras to the spacecraft design.
The camera suite includes: parachute "up look" cameras, a descent-stage "down look" camera, a rover "up look" camera, and a rover "down look" camera. The Mars 2020 EDL system also includes a microphone to capture sounds during EDL, such as the firing of descent engines.
Climate change is strongly affecting the Arctic, and the resulting changes to the polar vortex and jet stream are in turn contributing to extreme weather in many places, followed by crop loss at a huge scale. The U.N. Food and Agriculture Organization (FAO) said in a September 6, 2012, forecast that continued deterioration of cereal crop prospects over the past two months, due to unfavourable weather conditions in a number of major producing regions, has led to a sharp cut in FAO’s world production forecast since the previous report in July. The bad news continues: based on the latest indications, global cereal production would not be sufficient to cover fully the expected utilization in the 2012/13 marketing season, pointing to a larger drawdown of global cereal stocks than earlier anticipated. Among the major cereals, maize and wheat were the most affected by the worsening of weather conditions. Below is an interactive image with the FAO Food Price Index (Cereals), up to and including August 2012. Apart from crop yield, extreme weather is also affecting soils in various ways. Sustained drought can cause soils to lose much of their vegetation, making them more exposed to erosion by wind, while occasional storms, flooding and torrential rain further contribute to erosion. Higher areas, such as hills, will be particularly vulnerable, but even in valleys a lack of trees and excessive irrigation can cause the water table to rise, bringing salt to the surface. Fish are also under threat, in part due to ocean acidification. Of the carbon dioxide we're releasing into the atmosphere, about a third is (still) being absorbed by the oceans. Dr. Richard Feely, from NOAA’s Pacific Marine Environmental Laboratory, explains that this has caused, over the last 200 years or so, about a 30% increase in the overall acidity of the oceans. This affects species that depend on a shell to survive.
Studies by Baumann (2011) and Frommel (2011) indicate further that fish, in their egg and larval life stages, are seriously threatened by ocean acidification. This, in addition to warming seawater, overfishing, pollution and eutrophication (dead zones), causes fish to lose habitat and is threatening major fish stock collapse. Without action, this situation can only be expected to deteriorate further, while ocean acidification is irreversible on timescales of at least tens of thousands of years. This means that, to save many marine species from extinction, geoengineering must be accepted as an essential part of the much-needed comprehensive plan of action. Similarly, Arctic waters will continue to be exposed to warm water, causing further sea ice decline, unless comprehensive action is taken that includes geoengineering methods to cool the Arctic. The image below shows the dramatic drop in sea ice extent (total area of at least 15% ice concentration) for the last 7 years, compared to the 1972-2011 average, as calculated by the Polar View team at the University of Bremen, Germany. This illustrates that a firm commitment to a comprehensive plan of action can no longer be postponed.
In this brief guide, we will explore the meaning and symptoms of Bipolar disorder, the causes of bipolar disorder, and the treatment options available for bipolar disorder.
Bipolar Disorder: Meaning
Bipolar disorder is a type of mental illness in which the person experiences periods of extreme euphoria or mania and then, cyclically, bouts of intense depression. For a diagnosis of Bipolar disorder, the person must experience at least one episode each of mania and depression, one after the other. Typically, the manic episode in Bipolar disorder lasts for a few days to a week or two, while the depression tends to last longer, ranging anywhere between 2 weeks and a month or two. Because Bipolar disorder involves shifts in mood, both manic and depressive episodes are extremely intense, and it has been observed over time that the depression experienced in bipolar disorder is often worse than that experienced in Major Depressive Disorder or other subtypes of depression. Bipolar Disorder is also not uncommon in the least: about 2.5% of the population in the United States alone, approximately 5 million people, may be suffering from bipolar disorder, and that does not take into account those people who have not sought treatment or been officially diagnosed. Bipolar disorder can also put the sufferer at greater risk of self-harming behavior as well as suicide, in manic episodes as well as depressive ones, and there may sometimes be psychotic features that manifest during either of the two episodes, though they may be more common in the manic phase.
Bipolar Disorder: Symptoms in the ICD 10
The International Classification of Mental and Behavioral Disorders is a manual for the diagnosis of mental illnesses and syndromes, created by the World Health Organization.
The ICD 10 criteria for Bipolar disorder symptoms are:
- At least two episodes each of mania and depression, as described under Hypomania and Major Depressive Episode
- The current episode must be specified, along with whether there are psychotic symptoms or not.
To understand the symptoms of Bipolar disorder, one must first look at the symptoms of mania and depression. According to the ICD 10, these are the symptoms of a manic episode, or Hypomania in some cases:
“Hypomania is a lesser degree of mania, in which abnormalities of mood and behavior are too persistent and marked to be included under cyclothymia (F34.0) but are not accompanied by hallucinations or delusions.”
- Persistent mild elevation of mood (for at least several days on end)
- Increased energy and activity
- Marked feelings of well-being and of both physical and mental efficiency
- Increased sociability, talkativeness
- Increased sexual energy
- Decreased need for sleep is often present, but not to the extent that it leads to severe disruption of work or results in social rejection
- Irritability, conceit, and boorish behavior in place of the usual euphoric sociability
- Concentration and attention may be impaired, thus diminishing the ability to settle down to work or to relaxation and leisure
- The appearance of interests in quite new ventures and activities, or mild over-spending.”
According to ICD 10, a Manic episode may be marked by all of the symptoms mentioned above, as well as the following:
- “The mood is elevated out of keeping with the individual’s circumstances and may vary from carefree joviality to almost uncontrollable excitement.
- Elation is accompanied by increased energy, resulting in overactivity, pressure of speech, and a decreased need for sleep.
- Normal social inhibitions are lost, attention cannot be sustained, and there is often marked distractibility.
- Self-esteem is inflated, and grandiose or over-optimistic ideas are freely expressed.
- Perceptual disorders may occur, such as the appreciation of colors as especially vivid (and usually beautiful), a preoccupation with fine details of surfaces or textures, and subjective hyperacusis.
- The individual may embark on extravagant and impractical schemes, spend money recklessly, or become aggressive, amorous, or facetious in inappropriate circumstances.
- In some manic episodes, the mood is irritable and suspicious rather than elated.”
These manic episodes occur with the same symptoms in bipolar disorder, with the addition of a depressive episode that follows or precedes, in which the following symptoms may be seen:
- “Depressed mood, loss of interest and enjoyment, and reduced energy leading to increased fatiguability and diminished activity.
- Marked tiredness after only slight effort.
Other common symptoms:
- reduced concentration and attention;
- reduced self-esteem and self-confidence;
- ideas of guilt and unworthiness (even in a mild type of episode);
- bleak and pessimistic views of the future;
- ideas or acts of self-harm or suicide;
- disturbed sleep;
- diminished appetite.”
In addition to these symptoms, the patient may also suffer from an intense loss of motivation and a steep drop in the desire to be with other people or to do anything at all. According to ICD 10, depression may also involve the following somatic symptoms:
- “loss of interest or pleasure in activities that are normally enjoyable;
- lack of emotional reactivity to normally pleasurable surroundings and events;
- waking in the morning 2 hours or more before the usual time;
- depression worse in the morning;
- objective evidence of definite psychomotor retardation or agitation (remarked on or reported by other people);
- marked loss of appetite;
- weight loss (often defined as 5% or more of body weight in the past month);
- marked loss of libido.”
If you’re facing this, it may be a good idea to seek the help of a therapist or other mental health professional.
You can find a therapist at BetterHelp who can help you learn how to cope and address it.
Bipolar Disorder: Causes
There are no specific known causes of Bipolar disorder, but there are theories that speculate on why this disorder may come about, which are discussed below.
Neurochemical changes in the brain, involving neurotransmitters and the structures of the brain responsible for emotions, can often underlie Bipolar disorder. Much lower serotonin activity has been seen in patients suffering from depression, and there may be hyperactivity of the Basal Ganglia and Cingulate Cortex in mania; both are centers that regulate emotion in the brain, and the anterior cingulate cortex in particular is associated with feelings of excitement and euphoria, which is relevant to mania. Both depression and mania have a strong correlation with genetics, which means that people with bipolar disorder in their family are at greater risk of developing the disorder. Environmental reasons for bipolar disorder may include problems with a person’s job or their living conditions. Depression may often result from bad living conditions, and if the individual’s mind tries to cope with this by swinging to the other extreme, they may start experiencing a manic episode instead. Psychosocial factors or causes of bipolar disorder may include things like:
- Relationship troubles
- Family problems
- Poor self-esteem
- Narcissistic personality
- Unhealthy childhood or parental relationships
- Physical or sexual abuse
Bipolar Disorder: Treatment
The treatment of Bipolar disorder involves the management of symptoms with medication and psychotherapy. Usually, the manic episodes in Bipolar disorder may be treated with antipsychotics like Olanzapine or Risperidone, while for the depressive episodes the usual antidepressants, such as SSRIs or atypical antidepressants, may be used, including commonly known drugs like Prozac.
For more extreme cases of mania or depression in bipolar disorder, where there is a high risk of suicide or there have been suicide attempts, Electro-Convulsive Therapy may be suggested; though it is becoming less common, it has been shown to produce significant improvement in people with mania or suicide risk. Psychotherapy for Bipolar disorder may involve any of the main types of psychotherapy - Behavioral, Cognitive, Interpersonal, or Psychodynamic - and often the choice depends on the current episode the individual is suffering from. In behavioral therapy for Bipolar disorder, the focus may be on reducing harmful behaviors that lead to adverse circumstances for the individual. Behavioral therapy for manic episodes will usually focus on reducing excitation levels by not reinforcing excitatory tendencies, instead trying to channel the mania into more constructive outlets so that the individual does not go looking for alternative ways to get rid of the excess energy, which may lead to problems. Behavioral therapy for depression will usually involve getting the individual moving again and may employ techniques like behavioral activation and calling upon the person’s social support system to get them involved in their surroundings. Cognitive therapy for depression is very common and involves looking for problems in the person’s thinking process; it teaches them to look at their negative thoughts and learn to separate them from the negative emotions those thoughts may cause. For mania, it may be a little trickier to approach the individual’s thought process, as they are consumed by their energy; but since they are experiencing a high degree of emotionality, the therapist may try to get them to harness some of their grandiose ideas and put them down in a tangible form, so that they may discuss them and also use them later in the depressive phase.
The biggest problem that psychotherapy needs to tackle during the manic phase is simply ensuring the patient’s adherence to the medical regimen, because they may often try to skip their doses during mania. Psychodynamic therapy lasts for a very long time and may focus on approaching the basis of the individual’s disorder, what underlies the mania and depression, and on removing that, instead of just focusing on the symptoms. Psychodynamic therapy may involve techniques like Dream Analysis, Free Association, or transference, although transference may not be used too frequently when the patient is suffering from mania.
In this brief guide, we explored the meaning and symptoms of Bipolar disorder, the causes of bipolar disorder, and the treatment options available for bipolar disorder. Bipolar disorder can be a very misunderstood condition and may often go undiagnosed because people may write the patient off as weird or moody, but it can be a very threatening condition and needs treatment and the necessary attention. Please feel free to reach out to us with any questions or comments you have about bipolar disorder, and leave us any suggestions for what else you would like to see covered in the future.
Frequently Asked Questions (FAQs): Bipolar Disorder
What is a person with bipolar like?
A person with bipolar disorder may have recurrent episodes of extreme, intense, and disturbing emotional states known as depressive episodes, which may alternate cyclically with bouts of extreme happiness or excitement (mania). Mania and melancholy (depression) make up most of Bipolar disorder, and there may be brief periods of normalcy between the two phases.
What are the 4 types of bipolar?
The 4 types of Bipolar disorder according to the American Psychiatric Association are: bipolar I disorder, bipolar II disorder, cyclothymic disorder, and bipolar disorder due to another medical condition or substance abuse.
What are 5 signs of bipolar?
5 signs of Bipolar disorder may include:
- Brief periods of anger and aggression
- Periods of grandiosity and overconfidence
- Recurring tearfulness and frequent sadness that eventually goes away
- Needing little sleep for rest
- Uncharacteristically impulsive or reckless behavior
- Moodiness or sadness
- Confusion and inattention
Is Bipolar 1 or 2 worse?
Bipolar 1 may be worse than Bipolar 2, as Bipolar 1 involves a full manic episode, while Bipolar 2 involves a lesser degree of mania, in the form of a hypomanic episode. While elevated mood is found in both Bipolar 1 and 2, Bipolar 1 is considered more serious because mania can involve more intense feelings of euphoria that may often be hard to deal with.
International Classification of Mental and Behavioral Disorders
Are you looking to pass on solar energy knowledge to your kids? Maybe you’re a teacher looking to get an early start on teaching your students about the benefits of solar power. Experts are predicting that 2018 could be a record-breaking year for the solar energy industry, with solar panel installations on the rise around the world. If you believe that our children will be responsible for the future of our planet, here’s a quick guide to teaching solar energy for kids!
1. What is Solar Energy?
Teaching kids about solar energy can be as easy as taking them outside on a sunny day! Explain to your kids that the energy from the sun is always around us on earth (even on cloudy days) and that we can witness it in the form of light. To put it even more simply, remind kids that if the sun stopped shining altogether, the temperatures on our planet would be freezing.
2. How Solar Panels Work
Once you’ve presented a simple explanation of what solar energy is, you can address how we capture solar energy for use. Make it a point to show your kids a solar panel the next time you see one. Then, explain how solar panels convert the sun’s light directly into electrical energy through photovoltaics, or solar cells.
3. What Solar Panels Can Be Used For
As mentioned above, it may be helpful to show kids examples of solar panels before explaining how they make solar energy work. While younger kids might not be interested in the scientific aspects of solar energy, they might be fascinated to know all the uses for solar panels, such as:
- Electricity in the House
- Heating For Your Office
- Highway Signs
- Outdoor Landscaping Lights
Kids might be even more excited about the future of solar energy.
4. Future Technology
While solar energy has its uses in the home, advances in the technology are pushing the boundaries of what we can do with solar. For example, the World Solar Challenge is a solar-powered race car competition held in Australia!
We may even see solar-powered boats, planes, and cars available commercially in the future.
5. Teach Kids About Energy Efficiency
Even the most innocent kid might ask the big question: “Why would we bother with solar energy?” A simple answer could include a quick explanation of how other types of energy, like coal and natural gas, are used in excess and are hurting the planet. Finally, you can tell kids that energy from the sun is renewable and, unlike some other sources of energy, isn’t hard to find.
Solar Energy For Kids Can Be A Fun Learning Experience!
Covering these 5 categories is a great way to start passing on your knowledge of solar energy for kids! Last but not least, try your best to show, not tell! Do you know of other ideas to explain solar energy to kids? Let us know in the comments! Looking to take your solar energy lessons a step further? Get a free solar installation quote to show your kids how affordable solar energy can be in your home!
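For older kids, the photovoltaic conversion described in section 2 can be put into rough numbers with a back-of-the-envelope estimate. The irradiance, panel area, and efficiency figures below are illustrative assumptions, not values from this article:

```python
# Rough illustrative estimate of a solar panel's output:
# power out ~= sunlight intensity x panel area x cell efficiency.
# All three numbers are assumptions for the sake of the example.
IRRADIANCE_W_PER_M2 = 1000   # common full-sun benchmark value
panel_area_m2 = 1.6          # a typical residential panel
efficiency = 0.18            # ~18% efficient cells

power_w = IRRADIANCE_W_PER_M2 * panel_area_m2 * efficiency
print(round(power_w))  # about 288 W in full sun
```

Letting kids plug in their own numbers (a bigger panel, a cloudier day) can make the "how panels work" discussion concrete.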
Passage 3 | Zoology
The Silk of Spiders’ Webs
[A] Spiders possess the extraordinary ability to produce silk, which they use in a variety of ways—to create egg sacs, to catch and hold insects, and to construct homes. [B] An assortment of specialized glands, each responsible for forming a distinct kind of silk, is located within the spider’s abdomen and enables the spider to produce the different types of silk that it uses for those diverse purposes. [C] Among the known species of spiders, scientists have identified at least ten distinct kinds of glands that manufacture silks of varying strength, elasticity, and viscosity. [D] In the process of silk production, silk begins as a liquid in special silk glands in the spider’s abdomen. The liquid silk is excreted from the silk glands in liquid form, but, as it passes through the round spigots on a special organ—the spinneret—at the rear of the abdomen, it becomes solid. The spinneret determines the diameter of the final silk fiber. Depending on the species, spiders may have between one and four pairs of silk-releasing spinnerets. Different types of silk are produced to perform different functions. When a spider begins constructing its web, the first threads it uses must be particularly durable, capable of supporting the weight of the spider while serving as a foundation for the web. These foundation threads, known as draglines, are composed of major ampullate silk, a sturdy, non-sticky, elastic material. In fact, major ampullate silk is the strongest silk a spider produces; its tensile strength—the maximum force a material can resist without tearing—is similar to that of Kevlar. Draglines serve not only as the skeletal structure to which all other silks are anchored, but also as safetylines with which a spider can make a speedy exit from an unexpected predator. Similar to major ampullate silk, minor ampullate silk is also used in web construction, but as supporting threads rather than main draglines.
Like major ampullate silk, this silk is strong and non-sticky, but it does not have the same elastic characteristics. When minor ampullate silk is stretched, it remains permanently misshapen. The threads that form the spiral core of a spider’s web are made of flagelliform silk, the sticky netting that ensnares a spider’s insect prey. When a spider senses the vibrations of an insect trapped in its web, it releases another kind of silk, swathing silk, that completely binds a victim by encapsulating it in a cocoon. Female spiders produce an additional kind of silk that is used for spinning protective egg sacs that shield their eggs from harsh weather and from predators. Historically, spiders’ silk has been useful in a variety of applications, from medicine to warfare. Ancient Greeks applied spider webs to wounds in order to decrease bleeding. Pre-WWII telescopes, microscopes, and guidance systems used strands of spiders’ silk as crosshair sights. Because it is extremely lightweight and very resilient, and because it offers significant potential for diverse applications in fields like medicine and defense, spiders’ silk has, not surprisingly, been the subject of intense curiosity among members of the scientific community. However, in spite of researchers’ best efforts, humans have not been able to exactly duplicate the beneficial properties of spiders’ versatile silk. Efforts continue, though, as it is hoped that in the future spiders’ silk will contribute to advancements in medical technology, perhaps improving sutures in microsurgery, refining plaster for broken bones, and developing artificial ligaments and tendons to be used as surgical implants. Scientists anticipate that synthetic spiders’ silk would revolutionize military technology by providing lightweight, long-lasting protective body coverings. In this respect, spiders’ silk would have broad applications for law enforcement and the armed forces.
Commercially, spiders’ silk could be used to manufacture more durable ropes, fishing nets, seatbelts, and car bumpers. Having the ability to synthesize spiders’ silk would provide scientists with numerous possibilities for technological developments.
*Kevlar: Trademark name for a brand of aramid fiber—a material used in bulletproof vests
28. The word diameter in the passage is closest in meaning to
29. According to paragraph 1, what is the function of the spinneret?
(A) It stores liquid silk produced by the spider.
(B) It prevents the spider from sticking to its web.
(C) It turns liquid silk into strands of solid silk.
(D) It protects eggs from being eaten by predators.
30. The word anchored in the passage is closest in meaning to
31. Why does the author mention the elasticity of minor ampullate silk in paragraph 2?
(A) To indicate that all kinds of spiders’ silk are similar
(B) To explain how spiders are able to trap their prey
(C) To contrast the properties of two types of spiders’ silk
(D) To give an example of a drawback of natural spiders’ silk
32. The word it in the passage refers to
(B) swathing silk
33. What can be inferred from paragraph 3 about female spiders?
(A) They are not responsible for caring for offspring.
(B) They do not share parenting responsibilities with male spiders.
(C) They produce a kind of silk that male spiders do not make.
(D) They are more vulnerable in harsh climates than male spiders.
34. Which of the sentences below best expresses the essential information in the highlighted sentence in the passage? Incorrect choices change the meaning in important ways or leave out essential information.
(A) The lightness and flexibility of spiders’ silk are properties that scientists want to use in future technology.
(B) The scientific community is interested in research that will improve defense and medical technology.
(C) The scientific community is curious about silk that is lightweight and very flexible.
(D) Scientists are curious about how spiders’ silk has been used by doctors and by the military.
35. The word versatile in the passage is closest in meaning to
36. What can be inferred from paragraph 4 about people’s interest in the properties of spiders’ silk?
(A) It began with doctors in the military.
(B) It is based on a cultural love of spiders.
(C) It has been around for centuries.
(D) It is motivated purely by money.
37. All of the following are mentioned in the passage as characteristics of spiders’ silk EXCEPT
(A) the capability to resist tears
(B) the ability to repair itself
(C) different degrees of elasticity
(D) strength and lightness
38. Why does the author mention crosshair sights in paragraph 4?
(A) To suggest that some technology based on spiders’ silk may be negative
(B) To contrast the medical uses of spiders’ silk with the military uses of the material
(C) To suggest that synthetic spiders’ silk will be better than natural silk
(D) To give an example of how spiders’ silk has been used in the past
39. Look at the four squares ([A], [B], [C], and [D]) that indicate where the following sentence could be added to the passage.
This creature, which may be smaller than a millimeter, is capable of producing a strong, flexible material that humans have not been able to replicate.
Where would the sentence best fit?
40. Directions: Complete the table by matching the phrases below. Select the appropriate phrases from the answer choices and match them to the type of silk to which they relate. TWO of the answer choices will NOT be used. This question is worth 4 points.
(A) Retains its shape when stretched out
|Major Ampullate Silk|
|Minor Ampullate Silk|
The first images of Earth from Landsat 9 have been released this month, ushering in a new chapter in the longest-running continuous satellite program dedicated to Earth observation. The satellite, launched from Vandenberg Air Force Base in California on Sept. 27, is in the midst of a 100-day test period and will offer an ultra-detailed glimpse at changes in land use and natural resources. Landsat 9 is primarily an update of its predecessor, Landsat 8, which was first launched in 2013. The Landsat program was introduced in July 1972 with the deployment of the Earth Resources Technology Satellite 1, later named Landsat 1. The satellites have operated continuously since then, though Landsat 6 never reached orbit. The Landsat program is a joint venture between NASA and the U.S. Geological Survey (USGS). Both Landsat 7 and 8 are still active, and have provided data since 1999 and 2013, respectively. “The incredible first pictures from the Landsat 9 satellite are a glimpse into the data that will help us make science-based decisions on key issues including water use, wildfire impacts, coral reef degradation, glacier and ice-shelf retreat and tropical deforestation,” said USGS acting director David Applegate in a news release. In 2005, the family of satellites helped a researcher find a trio of unknown species in Mozambique. Landsat satellites have also been used in investigations as to why the Yellowstone fires of 1988 spread so quickly, to map glacial retreat, and even to research social and political change in Beijing in the 1970s and 80s based on patterns of urban development. The Landsat satellite orbits Earth in slices that pass over the north and south poles, scanning narrower swaths of the planet from just 400 miles above the ground. That makes for exquisite resolution as fine as 30 meters in the visible and infrared wavelengths. In other words, Landsat 9 could sense something happening on the scale of a third of a football field. 
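The "third of a football field" comparison can be checked with quick arithmetic. The 100-yard field length used here is an assumption (field conventions vary); the 30-meter figure is from the text:

```python
# Sanity check of the football-field comparison above.
FIELD_LENGTH_M = 100 * 0.9144   # 100 yards converted to meters (~91.4 m)
RESOLUTION_M = 30.0             # Landsat's finest stated resolution

fraction = RESOLUTION_M / FIELD_LENGTH_M
print(round(fraction, 2))  # roughly one third of the field
```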
Landsat 9 has two instruments onboard that capture data. One, called the Operational Land Imager 2, senses nine wavelengths of visible and infrared light. The other is the Thermal Infrared Sensor 2, which can detect subtle variations in temperature. When combined, the two will be able to provide clues about agricultural health, water use and quality, wildfire size and severity, glacial health, forest health and the evolution of urban areas. Scientists are particularly excited about an “underfly” event that just ended, during which Landsat 8 and 9 orbited in approximate tandem. In the past, NASA scientists worked to calibrate new satellites by comparing the images they took with those taken by previous satellites. This time around, Landsat 9 flew about six miles below Landsat 8 during the five-day “underfly.” Since both units have very similar sensors, the calibration should be seamless. The closest pass occurred on Nov. 14. NASA noted that, through roughly Thanksgiving, Landsat 9 was working toward an orbital position opposite that of Landsat 8. Next, it will undergo two “ascent burns” to hoist it to the proper altitude. Calibration and testing of Landsat 9 will last into January. Then, pending positive results, NASA will transfer control of the mission to the USGS. From there, there’s no telling exactly what Landsat 9 will help discover.
Scientists at the University of California, San Diego claim to have identified a mechanism of adaptation to oxidative stress that prevents cellular damage. “We may drink pomegranate juice to protect our bodies from so-called ‘free radicals’ or look at restricting calorie intake to extend our lifespan,” said Dr Trey Ideker, chief of the Division of Genetics in the Department of Medicine at UC San Diego’s School of Medicine and professor of bioengineering at the Jacobs School of Engineering. “But our study suggests why humans may actually be able to prolong the aging process by regularly exposing our bodies to minimal amounts of oxidants,” Ideker added. Reactive oxygen species (ROS), ions that form as a natural byproduct of the metabolism of oxygen, play important roles in cell signalling. However, under environmental stress such as ultraviolet radiation, heat, or chemical exposure, ROS levels can increase dramatically, resulting in significant damage to DNA, RNA and proteins, culminating in an effect called oxidative stress. The scientists claim to have discovered the gene responsible for this effect. One major contributor to oxidative stress is hydrogen peroxide. While the cell has ways to help minimize the damaging effects of hydrogen peroxide by converting it to oxygen and water, this conversion isn’t 100 percent successful. During the study, the researchers designed a way to identify genes involved in adaptation to hydrogen peroxide. To figure out which genes might control this adaptation mechanism, the team ran a series of experiments in which cells were forced to adapt while each gene in the genome was removed, one by one, covering a total of nearly 5,000 genes. They identified a novel factor called Mga2, which is essential for adaptation.
“This was a surprise, because Mga2 is found at the control point of a completely different pathway than those which respond to acute exposure of oxidative agents,” said Ideker. “This second pathway is only active at lower doses of oxidation,” Ideker added. “It may be that adaptation to oxidative stress is the main factor responsible for the lifespan-expanding effects of caloric restriction,” said Ideker. “Our next step is to figure out how Mga2 works to create a separate pathway, to discover the upstream mechanism that senses low doses of oxidation and triggers a protective mechanism downstream.” Source: The study is published in PLoS Genetics.
You may have spoken with your child’s teacher and heard the term “graphic organizer.” A graphic organizer is essentially just a tool, usually on a worksheet or in digital form. It provides ways to arrange and keep track of information in a way that communicates through pictures, diagrams, charts, or other visuals instead of just text or spoken language. There are many types of common graphic organizers. Among the most popular strategies are mind maps, Venn diagrams, and KWL charts. How can students benefit from using a graphic organizer? For many children, reading comprehension and getting their thoughts down on paper are a challenge. They haven’t learned to visualize the text yet, they are facing vocabulary words that may distract them from the meaning of what they are reading, and they may feel overwhelmed by the amount of work ahead of them. It may feel like they are doing a lot of things all at once, which is why they need a way to organize their thoughts. Graphic organizers are the teacher’s best friend. They are quick and simple to make, they provide good visuals for students who need multisensory input, and they prescribe a structure for students to take notes. Graphic organizers require students to stop and think about what is important while they are reading, and they also give students something tangible to complete. Many types of graphic organizers can be easily converted into writing assignments after they have been completed. They have uses for students of all ages. Graphic organizers can easily be implemented at home, as well. This blog will break down four examples. A KWL chart is best used for reading non-fiction, which is the type of reading that students tend to find most challenging in terms of comprehension. It has three columns with blanks underneath, titled “K,” “W,” and “L” at the top.
K stands for “Know,” as in “What I already Know”; W stands for “Want to Know,” or “Wonder”; and L stands for “Learn,” as in “What I Learned.” The student fills out the first two sections as a pre-reading activity but can add to the W column as new questions arise throughout the reading. This encourages interaction with the text, making predictions, and making connections. This can be done in full sentences or in bullet points, depending on what would benefit the individual student the most. The L section is for after reading. This allows the reader to stop and reflect, process the information they just read, and decide what was most important. Concept mapping is great for critical thinking and making connections. For younger children, you can start with a template that has several empty circles with lines connecting them. Older children can just use a large piece of blank paper and draw their own bubbles. Concept mapping basically involves recording an important term, event, or detail from the reading in one circle, and then connecting it to another related term, event, or detail in another circle. The student should always be thinking about how the terms are connected, and can write a brief description of how they relate on the connecting line. Again, this method forces the student to think about, process, and interact with information as it is being read. Interaction with the text is key for reading comprehension. Inspiration and Kidspiration are two great software programs that can be used for concept mapping. Reading comprehension sequence chains are great for keeping track of the order in which things happened in the story. The organizer is basically a series of boxes or circles connected by arrows going from left to right, implying a sequence. In each box, the child would either write or draw important events in the order that they occurred in the reading.
Again, this causes the reader to stop, process, and think about what is important. Older students can create comic strips to represent what they read. This is especially good if you have an artistic child who is always doodling in their notebooks. Another way to use Sequence Chains to promote critical thinking in older students is to alter the G.O. a bit to represent cause and effect instead of a sequence of events. There are many variations of anchor charts. An anchor chart is basically a blank worksheet with specific questions that the student should be answering as they go along. This works well for students who may have difficulty with abstract thinking or identifying important details. A common anchor chart for storytelling is the “Who, What, When, Where, Why, How,” chart that we probably have all encountered at some point in our schooling. Again, this is great for students who have difficulty identifying the important pieces of information in their reading. Another one I like is “Say, Mean, Matter.” With this chart, students first have to write down a quote or piece of information that they read about under “Say,” interpret what they read under “Mean,” and then think critically, make connections to other things that they have read, and synthesize the information under “Matter.” Pinterest is a great place to find other examples of anchor charts that can help your child understand what they’re reading and appeal to their individual way of learning, or you can get creative and come up with your own! We hope you found this article helpful, and if you did, click here to receive more tips and strategies to prepare your child for every step of his/her academic journey.
Analogies are relationships between items. Analogies can be difficult to learn, but games make learning analogies fun. “Students often confuse analogies with metaphors. Both are comparisons, often involving unrelated objects, so what IS the difference? An analogy is a parallel comparison between two different things, whereas a metaphor is more of a direct comparison between two things, often with one word being used to symbolically represent another.” Watch this fun video to get a better understanding of what analogies are!
The Azuero Earth Project believes that environmental education starts with the kids! Each year as part of our Educational Initiative, AEP develops didactic games, interactive activities, experiments, videos, and presentations to reinforce environmental themes in six schools on the Azuero Peninsula. Through a full day of activities, students review previously learned topics, as well as learn new themes. We kicked off the initiative this year in Los Asientos, where lesson material focused on Organic Agriculture. - Nitrogen Cycle Game – Acting as a nitrogen atom, students traveled to different stations of the atmosphere, filling out their “nitrogen passport” with different stamps in order to understand the Nitrogen Cycle - Soil Experiment – Students tested sand, clay, and compost to understand water filtration and soil types. - Crop Rotation – Acting as landowners, students decided what to plant for the following 5 years in order to maximize production AND soil health. - Making Compost – Students learned the principles of decomposition and then headed out to the garden to mix up a batch of compost - Building Raised Beds – After talking about water retention and erosion, students built raised beds in their garden and seeded native cucumber and bean varieties For more information, check out our Education Program.
In MS, the body’s immune system attacks and damages the myelin. When the myelin is damaged, the axons can no longer effectively transmit communication signals. Multiple sclerosis affects people from all parts of the world; it tends to arise between the ages of 20 and 40, and it is twice as common in women as it is in men. Symptoms of multiple sclerosis Symptoms of MS usually appear either in episodes of sudden worsening (called relapses, exacerbations, bouts, attacks, or “flare-ups”) or as a gradually progressive deterioration of neurologic function. Multiple sclerosis relapses are often unpredictable, occurring without warning. Relapses occur more commonly during summer and spring. Viral infections such as the common cold, gastroenteritis or influenza increase the risk of relapse. An attack may also be triggered by stress, and pregnancy influences the vulnerability to relapse. Symptoms of multiple sclerosis vary widely, depending on the location of affected nerve fibres. Multiple sclerosis signs and symptoms may include: - Numbness, loss of sensitivity, tingling or weakness in one or more limbs, which typically occurs on one side of your body at a time or the bottom half of your body - Difficulties with coordination and balance (ataxia) - Muscle weakness, clonus, muscle spasms or difficulty in moving - Partial or complete loss of vision, usually in one eye at a time, often with pain during eye movement (optic neuritis) - Double vision or blurring of vision - Tingling or pain in parts of your body - Electric-shock sensations that occur with certain head movements - Problems with speech (dysarthria) or swallowing (dysphagia) - Bladder and bowel difficulties - Cognitive impairment of various degrees - Emotional symptoms of depression or unstable mood Most people with multiple sclerosis, particularly in the beginning stages of the disease, experience relapses of symptoms, which are followed by periods of complete or partial remission.
Signs and symptoms of multiple sclerosis often are triggered or worsened by an increase in body temperature. Causes of multiple sclerosis The exact causes of MS are not known. However, it is believed that the onset of this disease is likely a result of some combination of genetic and environmental factors as well as exposure to infectious diseases. It has been established that the risk of multiple sclerosis is higher for people who have a family history of the disease. A number of specific genes have also been linked with MS. A variety of infectious agents have also been linked to the onset of multiple sclerosis, including human herpesvirus-6, measles, canine distemper, Chlamydia pneumoniae, and in particular, Epstein-Barr virus, the virus that causes infectious mononucleosis. Generally, the disease is mild, but in some cases, people may lose the ability to write, speak or walk. There is no known cure for MS; however, the right medications may slow it down and alleviate or control the symptoms.
How do we know we can trust a source or a claim made by someone? What constitutes “good science”? Knowing the answers to these questions is an important critical thinking skill for all students, and it is even more important in this digital age, where students are exposed to information from many different sources with varying degrees of accuracy and qualifications. Everyone, including your students, is constantly facing confusing news stories and conflicting data, and evaluating these claims requires the ability to think critically about all the information being thrown at them. This lesson contains activities that you can do with your middle and high school students to teach them critical thinking skills, such as the importance of attempting to disprove a hypothesis, using hypotheses to make testable predictions, and examining a recent case of “bad science” that has resulted in harmful consequences. In addition, we include modifications for doing similar activities with elementary school classes. We’ll also give you tools to deal with news that your students bring in with them, and to help them go from just repeating data to thinking about it. http://scienceornot.net/ is a teacher resource with worked examples of how to reason through scientific claims made in the media, and general critical thinking. This is an excellent primer if you’re not already comfortable with the material in this lesson plan.
At the conclusion of the lesson, students will be able to: - Look for data that would disconfirm their ideas - Understand different types of evidence, and how useful they are - Make predictions based on incomplete evidence Lesson length: About an hour Grade Levels: Designed for middle and high school, but activities have elementary school level alternatives - Lesson Plan (DOCX) - Introductory presentation (PPT) - Rough Guide to Spotting Bad Science (PDF) - Rough Guide to Type of Scientific Evidence (PDF) - Hypothesis Testing Game _student edition (PDF) - Hypothesis Testing Game_teacher edition (PDF) - Case study and supporting papers: Lesson Plan created by GK-12 Fellows Emily Dittmar & Amanda Charbonneau, 2015
A clause is a group of words that contains both a subject and a predicate (or a verb). There are two types of clauses. Independent clauses are complete sentences. They can stand alone and express a complete thought. Examples: I want some cereal. Marie likes cats. Joseph is a good soccer player. Dependent clauses contain a subject and a predicate, but they do not express a complete thought. Examples: When it is raining; Because you were late; Before you go to bed. All of these groups of words contain both a subject and a verb, but they cannot stand alone. They do not express a complete thought. There are three main types of dependent clauses: adjective, adverb, and noun. They are named by the way they function in a sentence. An adjective clause describes or gives more information about a noun: it tells us which one, what kind, or how many. Example: The bag that someone left on the bus belongs to Mrs. Smith. An adverb clause describes or gives more information about the verb: it tells us when, where, how, to what extent, or under what condition something is happening. Example: She cried because her seashell was broken. A noun clause takes the place of a noun in the sentence. Example: Whoever ate the last piece of pie owes me!
What is critical thinking? One element of critical thinking that most everyone agrees on is “higher order thinking,” which includes evaluating the appropriateness of evidence, the truth of propositions, and the soundness of arguments. My former principal, Dave Lehman, wrote a series of articles which get to the heart of the matter of critical thinking and how to teach it. He quoted Daniel Willingham, Professor of Psychology at the University of Virginia, as saying: “From the cognitive scientist’s point of view, the mental activities that are typically called critical thinking are actually a subset of three types of thinking: reasoning, making judgments and decisions, and problem solving.” Dave argues that this statement is a good beginning but incomplete. I agree. Other elements need to be included, like imagination, emotional regulation, and self-reflection. In the 1960s, Roger Sperry and others carried out experiments on the human brain. They cut the corpus callosum, which is a large bundle of neurons connecting the right and left halves or hemispheres of the upper portion of the brain. His experiments disclosed differences in how the two hemispheres functioned. These differences seemed at first to be consistent with earlier theories about rational thinking and creativity. The left was thought of as the critical thinker, the languaged brain, analytical and sequential. The right was thought of as artistic, holistic, creative. More recent brain research has shown this early conclusion to be inaccurate. Both hemispheres have been found to be involved, in some way, in all human activities. The differences between the functioning of the two hemispheres have been found to be more subtle. The different areas of the brain work in a more interrelated fashion. You can’t understand how the brain works by only studying it as distinct parts. Likewise, you can’t understand how a person thinks critically without studying emotion, creativity, self-reflection and imagination. 
‘Critical’ comes from the Greek ‘kritikos’, able to discern, and ‘krinein’, to sift, judge, or separate. To separate, as in analyze or break down into component parts. But ‘discern’ also means to perceive or understand what is not immediately obvious or what might be beyond your previous viewpoint. It means to perceive, as much as possible, the whole or the truth. How does critical thinking utilize imagination? For example, how would you proceed to answer this question, which frequently comes up in my class on the history of human ideas: “Why did early humans create so much art?” Or maybe, “Why did they do any art?” Students often reply, “They did it because it was fun.” But that answer needs to be questioned further. Students need to empathetically place themselves in the world of ancient humans. They could start by visualizing, for example, a world without any buildings. They need to immerse themselves in more information. One form of art created was extensive wall paintings in caves in southern Europe, Africa, Australia and other places. In France, for example, some of the caves were extremely difficult and possibly dangerous to access. Access involved crawling through long, narrow tunnels. Students decided to research in different groups various aspects of how the cave painters lived: their food, religion, other species populating the world back then, tools, possible origins of language. A group of five or six studied the paintings in detail and then reproduced the art on the walls of a rarely used stairwell of the school. One day, when the work was complete, this group had the students line up. And one by one they entered the stairwell. It felt like a cave. The only sound was the music of a flute. The only light source was a series of small lanterns placed near the painted walls. When we had all entered and sat down on the cave floor, I led the students in a visualized journey into what being in the caves might have been like.
Then the student-artists discussed the paintings. We created the activity together. I bet most still remember the experience. It enabled the class to feel engaged and develop a more in-depth perspective. They could then analyze evidence, evaluate theories and derive their own conclusions. This type of activity is not limited to history classes. In an English class, you could imaginatively journey into situations depicted in a novel. Or in a science class you could journey through a cell or the orbits of electrons. Or outside of class you could journey into the mind of a friend that you had an argument with. Critical thinking is not just logic or problem solving. It requires imagination. My next blog will be about an enjoyable way to strengthen and teach with the student’s natural ability to imagine. Other elements of critical thinking and mindfulness will come up in future blogs. Lehman, David. “Thinking About Teaching Thinking Part 1, What’s The Urgency?” Connections (May 2013): 10–14. NSRFHarmony.org/connections/2013.May.Connections.pdf Lehman, David. “Thinking About Teaching Thinking Part 2, How Can We Do It?” Connections (July 2013): 7–15. NSRFHarmony.org/connections/2013.July.Connections.pdf McGilchrist, Iain. The Master and His Emissary: The Divided Brain and the Making of the Western World. New Haven, CT: Yale University Press, 2009. The photo is of my student’s cave art.
America’s Favorite Reading Comprehension Program in Two-Page Lesson Formats of Cross-Curricular Nonfiction Articles - Carefully graded articles and books challenge students at their own individual reading levels. - Each lesson presents and defines new vocabulary in context. - Exercises after each article provide systematic growth in thinking/reading skills. - Students are tested immediately after reading each article on a range of comprehension skills. - Article topics cover social studies, natural science, physical science, mathematics, health, language, careers, and biography. Read-Along Audio Cassettes and CDs accompany Books A-D, providing tutorial instruction and auditory reinforcement for remedial students. Each set contains 24 guided readings from its corresponding book. Learning Language Activities – Blackline masters accompany Books A-D, with one activity sheet for each article. Following a simple method, these activities are designed not only to improve students’ reading skills but also to improve writing and language skills. Now you can teach reading, writing, and language skills at the same time and not in isolation. Book A: 2.0-2.5 Book B: 2.4-3.5 Book C: 3.5-4.8 Book D: 4.4-5.5 Book E: 5.0-5.8 Book F: 5.4-6.5 Book G: 5.6-6.8 Teacher’s Manual includes an informal placement inventory to determine reading levels for each student and an answer key.
Researchers in Japan have discovered that the Plasmodium parasites responsible for malaria rely on a human liver cell protein for their development into a form capable of infecting red blood cells and causing disease. The study, which will be published June 12 in the Journal of Experimental Medicine, suggests that targeting this human protein, known as CXCR4, could be a way to block the parasite's life cycle and prevent the development of malaria. According to the World Health Organization, there were an estimated 219 million cases of malaria in 2017, resulting in the deaths of approximately 435,000 people. Infected mosquitoes transmit Plasmodium parasites to humans in the form of rod-shaped sporozoites that travel to the liver and invade liver cells (hepatocytes). Once inside these cells, the Plasmodium sporozoites develop into spherical exoerythrocytic forms (EEFs) that eventually give rise to thousands of merozoites capable of spreading into red blood cells and causing malaria. "It seems likely that the transformation of Plasmodium sporozoites into EEFs is tightly controlled so that it only occurs in hepatocytes and not at earlier stages of the parasite's life cycle," says Masahiro Yamamoto, a professor at the Research Institute for Microbial Diseases of Osaka University. "However, we know very little about the host factors that regulate the differentiation of sporozoites in infected hepatocytes." In the new study, Yamamoto and colleagues discovered that a hepatocyte protein called CXCR4 helps Plasmodium sporozoites transform into EEFs. Depleting this protein from human liver cells reduced the ability of sporozoites to develop into EEFs. Moreover, mice pretreated with a drug that inhibits CXCR4 were resistant to malaria, showing reduced levels of parasites in the blood and significantly higher survival rates following Plasmodium infection. 
Yamamoto and colleagues also identified a cell signaling pathway that causes hepatocytes to produce more CXCR4 in response to Plasmodium infection and determined that the protein aids the parasite's development by raising the levels of calcium inside the cells. "Our study reveals that CXCR4 blockade inhibits Plasmodium sporozoite transformation in hepatocytes," Yamamoto says. "Most anti-malaria drugs targeting Plasmodium-derived molecules eventually lead to drug resistance in these parasites. However, inhibitors targeting human proteins such as CXCR4 might avoid this problem and could be used prophylactically to prevent the development of malaria. Moreover, the CXCR4 inhibitor used in this study is already widely used in humans undergoing treatment for blood cancers, which could accelerate its repurposing as a new way of combating malaria." Bando et al. 2019. J. Exp. Med. http://jem. About the Journal of Experimental Medicine The Journal of Experimental Medicine (JEM) features peer-reviewed research on immunology, cancer biology, stem cell biology, microbial pathogenesis, vascular biology, and neurobiology. All editorial decisions are made by research-active scientists in conjunction with in-house scientific editors. JEM makes all of its content free online no later than six months after publication. Established in 1896, JEM is published by Rockefeller University Press. For more information, visit jem.org.
This text presents the basic concepts developed by B.F. Skinner in Verbal Behavior (1957). It is intended only as an introduction and as such omits detailed explanations of exceptions, ambiguities, and controversies. In addition, many of the implications of the analysis are not presented. What is presented are all the basic concepts that were introduced in Verbal Behavior, with the hope that this book's conditioning of these concepts in student repertoires will leave students having a much easier time grasping more complex and sophisticated analyses such as those presented by Skinner. This text was designed to be read by someone who has already had an introduction to behavioral and contingency-engineering terminology. Basic behavioral concepts are reviewed only briefly.
It is important to use these abbreviations literally and to punctuate them correctly. Many writers confuse "e.g." and "i.e.," and many type "et al." improperly or do not properly recognize what words it represents. The abbreviation "e.g." is from the Latin exempli gratia and means, literally, "for example." Periods come after each letter and a comma normally follows unless the example is a single word and no pause is natural: Any facial response (e.g., a surprised blink of both eyes) was recorded. The abbreviation "i.e." is from the Latin id est, meaning "that is." Loosely, "i.e." is used to mean "therefore" or "in other words." Periods come after each letter and a comma normally follows, depending on whether the wording following the abbreviation dictates a natural pause: In every case Angle 1 was greater than Angle 2—i.e., every viewer perceived a circle. The phrase "et al."—from the Latin et alii, which literally means "and others"—must always be typed with a space between the two words and with a period after the "l" (since "al." is an abbreviation). A comma does not follow the abbreviation unless the sentence’s grammar requires it. Some journals italicize the phrase because it comes from the Latin, but most do not. Schweiger et al. applied the neural network method. Never begin a sentence with any of these three abbreviations; if you want to begin a sentence with "for example" or "therefore," always write the words out.
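As a rough illustration (not part of the original style guidance), the punctuation rules above can be encoded as a few regular-expression checks that flag the most common misformattings. The pattern set is only a sketch: it will miss edge cases such as sentence-initial use, and the message wording is invented for the example.

```python
import re

# Illustrative checks for common misformattings of "e.g.", "i.e.", and
# "et al." Each entry pairs a pattern with the warning it should produce.
CHECKS = [
    (re.compile(r"\bet\.\s*al\b"), '"et" should not take a period ("et al.")'),
    (re.compile(r"\bet al(?!\.)"), '"al" needs a period ("et al.")'),
    (re.compile(r"\beg\.|\be\.g(?!\.)"), '"e.g." needs a period after each letter'),
    (re.compile(r"\bie\.|\bi\.e(?!\.)"), '"i.e." needs a period after each letter'),
]

def flag_abbreviations(text):
    """Return warnings for likely abbreviation errors in text."""
    return [msg for pattern, msg in CHECKS if pattern.search(text)]

print(flag_abbreviations("Schweiger et. al applied the method"))
# → ['"et" should not take a period ("et al.")']
```

Correctly typed sentences such as "Schweiger et al. applied the neural network method." produce no warnings, since the negative lookaheads only fire when the trailing period is missing.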
- The pia or pia mater is a very delicate membrane. It is closest to the brain or the spinal cord. As such it follows all the minor contours of the brain (gyri and sulci). Within the pia mater are capillaries responsible for nourishing the brain. - The middle element of the meninges is the arachnoid mater, so named because of its spider web-like appearance. - The dura mater is a thick, durable membrane, closest to the skull. It contains larger blood vessels that split into the capillaries in the pia mater. Normally, the dura mater is attached to the skull in the head, or to the bones of the vertebral canal in the spinal cord. The arachnoid is attached to the dura mater, and the pia mater is attached to the central nervous system tissue. When the dura mater and the arachnoid separate through injury or illness, the space between them is the subdural space. There are three types of hemorrhage involving the meninges: - A subarachnoid hemorrhage is acute bleeding under the arachnoid; it may occur spontaneously or as a result of trauma. - A subdural hematoma is a hematoma (collection of blood) located in a separation of the arachnoid from the dura mater. The small veins that connect the dura mater and the arachnoid are torn, usually during an accident, and blood can leak into this area. - An epidural hematoma similarly may arise after an accident or spontaneously. Other medical conditions that affect the meninges include meningitis (a viral or bacterial infection). Meningiomas are tumors (generally benign) arising from the meninges. Malignant tumors formed elsewhere may also metastasize to the meninges. - Orlando Regional Healthcare, Education and Development. 2004. "Overview of Adult Traumatic Brain Injuries."
Meninges of the brain and medulla spinalis: the dura mater (with its folds, the falx cerebri, tentorium cerebelli, and falx cerebelli), the arachnoid mater (with the subarachnoid space, cisterns such as the cisterna magna, the median aperture, cerebrospinal fluid, and arachnoid granulations), and the pia mater. This page uses Creative Commons Licensed content from Wikipedia (view authors).
It is obvious that a classroom must be kept clean. Not only is it necessary to create a comfortable learning environment for the children, it is essential from the health point of view. Young children have yet to develop the level of immunity that their elders have in regard to contracting disease and infection. The cleaner the classroom, the lower the chances of kids falling sick. It is a good idea to involve the children in the cleanliness issue. Not only will this allow the teacher to explain, with practical demonstrations, the need for cleanliness and its importance; it will also help the children in keeping things like their rooms and toys clean. Once they have the ability to perform actions like cleaning up, children often like to show adults how much they can do. Praise and positive reinforcement of these efforts must be given so that the activity does not wane as the novelty of doing it wears off. Involving the Children The best way to involve young kids in keeping the classroom clean is to divide them into teams. Each team will have one aspect of cleaning to do. For example, one team could be put on the job of cleaning the windows and another on cleaning the furniture. Since the amount of cleaning to be done by the furniture team will be more, it will need to be larger than the window team; or the furniture should be divided into small sections so multiple small teams can do the cleaning. The teams should be switched around / re-configured regularly so that the kids are involved with all aspects of the cleaning and also do not get bored with the daily repetition. The Equipment Needed No complex or possibly dangerous equipment is required to keep the class clean. What will be required are: - Trash cans. These should be small enough for the children to be able to open them and empty the trash inside, but not so small as to overflow. The cans should be segregated according to what they should contain.
Typically, green cans are used for biodegradable waste and black ones for dry and non-degradable materials. Teaching children about waste segregation at a young age will allow them to develop the self-discipline that will become an important component of their developing personalities.
- Broomsticks and dustpans. Kids have small hands, so the equipment should be small and easy to hold and use. Also, long broomsticks can be difficult for children to control and could lead to other kids getting poked or even more seriously injured. Avoid equipment made of wood – there is always a chance of a child having a splinter pierce the skin. Plastic brooms and dustpans are the best option.
- Feather dusters. These too must be small enough for the children to use comfortably and safely. If required, masks should be given to the children to prevent dust from being inhaled. Start the dusting exercise on larger unbreakable objects and teach the kids the right and wrong ways to go about the process. As they become more confident and adept in the work, they can be asked to dust smaller and more delicate things.
Avoid getting the kids involved in cleaning any classroom rugs. These are often very dirty and sticky with spilled food and drink. Vacuum cleaners are not safe for children to use. And in any case, washing is usually the only way to get rugs really clean, and this is not a classroom activity. Cleaning materials can be expensive, but if they are bought in bulk (say a semester's supplies at a time) or combined with the purchases of other classes, significant discounts may be available.
Gallery 19: Case 11 Religion in ancient Egypt Egyptian religion was closely linked with the natural world and was a fundamental part of all other aspects of Egyptian life. Many gods were of both national and local significance and were frequently depicted in different forms. Thoth, for example, could be shown as an ibis or a baboon, or with the head of an ibis but the body of a man. Gods might be represented on earth by various living creatures. The Apis Bull, for example, was the embodiment of Ptah of Memphis. The Bull was selected for his special markings, and when he died received an elaborate burial, attended by the king. Temples were cult centres and sacred spaces. A formal hierarchy of priests was associated with most temples: only chief priests were permitted to tend the statues of the gods after observing a strict cleansing ritual. All priests were representatives of the King, himself the embodiment of Horus in life and Osiris in death, and like him they acted as a link between the worlds of gods and men.
4.4.1 Holocene sea level change
This topic can appear remote to archaeologists but in Argyll and Bute it is critical, not just to early prehistorians but to later prehistorians and early historians. Shennan et al. (2006) published the first detailed reconstruction for the region, in Knapdale (Figure 18). Their approach was to identify enclosed coastal basins at different altitudes, establish from diatom and pollen analyses when they received estuarine and marine sediment, and establish when this sediment was replaced by terrestrial sediment (peat) as sea level fell. Figure 18: Map of sediment records of Holocene sea level change. © Richard Tipping: Arran (Gemmell 1973); Coll (Dawson et al. 2001); Gruinart, Islay (Dawson et al. 1998); Knapdale (Shennan et al. 2006); Moine Mhor (Haggart and Sutherland 1992); Oban (Bonsall and Sutherland 1992); Oronsay (Jardine 1977, 1987). Devensian Lateglacial sea level has been described above. Early Holocene shorelines will also have sloped because of differential uplift but we have few data points from which to reconstruct this gradient. Sea level is modelled to have fallen to –5m below present on Skye and to around 0m on Arisaig, but to have remained above present tidal limits, around 2m above present, in Knapdale (Shennan et al. 2006); empirical evidence for this at the sites analysed by Shennan et al. (2006) is currently lacking. On Islay, Dawson, Dawson and Edwards (1998) date the lowest early Holocene RSL at around 0m OD to c. 9350 cal BC, as Dawson et al. (2001) do on northern Coll. Dawson et al. (1998) then describe a rapid Relative Sea Level (RSL) rise on Islay to 3m OD by c. 7960 cal BC, slowing to c. 5m OD by the Neolithic period. On Oronsay, Jardine (1977) thought sea level reached c. 7m OD before this time. Sequences on the Scottish west coast currently lack the chronological precision to test the suggestion (Clarke et al.
2003) that the collapse of the Laurentide ice sheet caused a nearly instantaneous global RSL rise of around 1.4m. In Knapdale this shoreline, the Main Postglacial Shoreline, reached c. 11m OD (Shennan et al. 2006) and c. 10m OD on Arran (Gemmell 1973). Bonsall and Sutherland (1992) suggested that coast–facing caves in the Oban area occupied prior to the Main Postglacial Shoreline would have been eroded during RSL rise, thus seeming to shorten the duration of the ‘Obanian’ flint industry. Dawson et al. (1998) suggested that RSL remained high on Islay, above 4m OD, until the late Iron Age, and a similar suggestion can be made for Coll (Dawson et al. 2001) and, outside the region, for Skye (Selby 2004). This differs from the interpretation of Jardine (1977) on Oronsay, who argued for a 3m fall in the late prehistoric period, and of Shennan et al. (2006) in Knapdale, who see RSL falling steadily from c. 11m OD at c. 3500 cal BC to 3–4m OD by c. cal AD200. Smith et al. (2007) construct a 14C–dated RSL curve for south west Scotland in which RSL continues to remain high (above 8m OD) until after c. 2000 cal BC. They try to resolve the different interpretations by identifying a narrow zone in south west Scotland where the Main Postglacial Shoreline converges with a later shoreline, the Blairdrummond Shoreline (Smith, Cullingford and Firth 2000), to maintain relatively high RSL, perhaps into the early historic period. This narrow zone crosses southern Kintyre and separates Islay from northern Kintyre, explaining the contrasts apparent in these two places (above), and straddles Mull (Smith et al. 2007, Figure 8). Reconstructing later prehistoric and early historic RSL change is thus a complex issue in Argyll and Bute, in which site–specific patterns override regional trends. Researchers should be aware of these subtleties in the use of RSL models, and of the influence of local topographic settings on the expression of RSL.
Radiocarbon dates anywhere in the British Isles on RSL change post–2000 cal BC are rare (Shennan and Horton 2002) but a site like Barr na Criche in Knapdale demonstrates that detailed reconstructions in this period are possible. Sutherland (1997) drew attention to the rare large coastal sediment stores of mid– and later Holocene age where ‘unpicking’ post–Neolithic RSL change would be feasible. These include the Machrihanish and Moine Mhor embayments. No work at Machrihanish is known to this author. Analyses of RSL change in the Moine Mhor (Haggart and Sutherland 1992) remain skeletal but the potential is enormous because the base of the peat falls from c. 10m OD to c. 2m OD, implying that basal peat formed at progressively later times as RSL fell. We do not know if it was possible to sail to the foot of Dunadd in the early Historic period. There have been very few attempts in the region to depict in map form how coastal areas will have altered with RSL change, although this could greatly aid archaeological understanding of site distributions in the way exemplified in the Forth valley by Smith et al. (2010). Dawson et al. (1998) described the physical isolation of the Rhinns of Islay from the east of the island as RSL rose in the early Holocene. In the Moine Mhor, Winterbottom and O’Shea (2002) looked at the effect of a 10.0m OD RSL on Neolithic and early Bronze Age archaeological site distributions to great effect. 4.4.2 Coastline and marine changes By this is meant change to our coastlines other than directly by sea level change. It embraces evidence for past coastal erosion and deposition and evidence for the abundances in the past of marine and littoral resources. Such evidence is fundamental to understanding the archaeology of Argyll and Bute in several ways. Perhaps the least important aspect to this is in measuring the rate at which current coastal erosion and storm damage impact on coastal sites (Dawson 2003). 
The ‘soft’ dune and machair coasts of the islands are most vulnerable. There are coastal zone archaeological surveys for Coll, Tiree and Islay and Shorewatch programmes on Coll and Islay (ScAPE Trust). There are no surveys, to the author’s knowledge, of the exposed western Kintyre coastline and the dune system around Machrihanish, or of sensitive salt marsh environments at the heads of the Kintyre fjords and the Moine Mhor. We know very little from the region about the environmental changes at the coast or offshore. The evidence and potential are there, however. Gerard Bond pioneered such research in identifying the transport from the Arctic Ocean of “armadas” of icebergs as far south as the Irish west coast periodically in the Holocene, at c. 9200, 8300, 7400, 6200, 3900 and 2200 cal BC and c. 600 cal AD (Bond et al. 1997). Thornalley, Elderfield and McCave (2009) analysed fluctuating surface and deep–water temperature and salinity south of Iceland. Changes in either will have affected the abundance and locations of spawning and feeding grounds for fish populations. They recognise falling salinity from the start of the Holocene to c. 6000 cal BC: the 6200 cal BC event freshened the North Atlantic by 0.4 psu. Sea water was well mixed and fresh after this event for c. 1000 cal years, until c. 5000 cal BC. Salinity then increased to present values at c. 3000 cal BC. Rapid freshening set in after c. 2000 cal BC until this trend was reversed at c. 1400 cal BC. Particularly warm and saline conditions occurred at c. 3000 and 700 cal BC and at c. cal AD1000, probably representing persistent westerly winds at these times. Marret, Scourse and Austin (2004) have described major changes to thermal stratification in the Celtic Sea between Ireland and Wales, with strong contrasts in seasonality inferred for the period c. 4700 to c. 1650 cal BC.
In the sediments of Loch Sunart, Cage and Austin (2010) define with clarity the switch from the medieval ‘warm’ period to the ‘little ice age’ at c. AD1400. Annually and seasonally resolved climate signals from marine molluscs are being explored (Stott et al. 2010; Wang, Surge and Mithen 2012). Sand dune accumulation is a major information source. This is no longer seen as a product of sea level change but as a product of abrupt climate change, and increased storminess in particular (Bjorck and Clemmensen 2004; Clarke and Rendell 2006; De Jong et al. 2006; Orme, Davies and Duller 2015), so here we have a measure of sea conditions that were perhaps unpredictable enough to deter crossings. Dunes on the Outer Isles (Gilbertson et al. 1999; Dawson et al. 2004), in Wester Ross (Wilson 2002) and along the north Irish coast (Wilson, McGourty and Bateman 2004) have been analysed. There is periodicity rather than continuity in dune and machair accumulation, with phases of dune building c. 4300–3800, 3200, 280–2400, 1400–1200 and 800–300 cal BC, and cal AD850 and 1400–1800. But 14C dating is restricted to periods of climatic stability, and optically stimulated luminescence remains untried on the west coast, in contrast to its deployment on the north Scottish coast (Sommerville et al. 2003, 2007; Tisdall et al. 2013). Winterbottom and Dawson (2005) attempted to locate archaeological structures buried by drifting sand on Coll by remote sensing and ground penetrating radar, as Astin has with Mithen (Mithen pers. comm.), but the recovery of archaeological remains from dune systems will always lead to partial chronologies of dune formation. Systematic dating programmes are badly needed. Sediment–stratigraphic and biological RSL indicators tend to describe mean sea levels, not tidal or wave extremes. Jardine (1987) mapped a series of storm–beach gravel ridges on Oronsay, thought to be of mid–Holocene age, one at least perhaps 10m above contemporary mean sea level. Smith et al.
(2007) surveyed storm beach ridges, probably dating to the mid–Holocene, on the west and east coasts of Bute which reached up to 3m higher than contemporary mean sea level. The probable Mesolithic site of Croig (Canmore ID 233381) on Mull is protected by a large storm beach (Mithen pers. comm.), as is the Mesolithic site of Kilmore near Oban (Macklin et al. 2000). Early Holocene storm beaches were deposited from a significantly lower sea level, of course. These kinds of observation are rare: there need to be more. Systematic mapping and dating of storm beaches would provide data, though biased to Devensian Lateglacial and post–4000 cal BC events, to infer past periods of increased storminess, the degree of exposure of particular parts of the coast and the modelling of wave extremes that may have been critical to travel by sea. Innovative approaches, such as Andrews et al.’s (1987a) estimation of later Mesolithic storminess from dogwhelks, are to be encouraged. These authors suggest the later Mesolithic to have been less stormy than today. There is no substantiated method other than archaeological analysis (Barrett, Nicholson and Ceron–Cerrasco 1999) for defining the presence or size of fish populations in near–shore or deep waters (Pickard and Bonsall 2004), but such analyses are bound to reveal more about food selection than availability and abundance. The same is true of the analysis of marine shells (Connock, Finlayson and Mills 1993). Sediment–stratigraphic methods need to be developed, perhaps biochemically or from quantification of fish scales (Davidson et al. 2003), to define the natural abundance of marine resources and their availability.
Just Ask Antoine!
Why are OH, NH, and FH bonds so polar? - Why is the partial charge on a hydrogen atom bonded to a highly electronegative element more concentrated than for any other element? Hydrogen atoms are very small (with an atomic radius of about 37 pm, they're smaller than any other atom but helium). So when a bonded electronegative atom pulls electrons away from the hydrogen atom, the positive charge that results is tightly concentrated. The small size of the hydrogen atom is one factor responsible for the unusual strength of the dipole-dipole interaction we call "hydrogen bonding". When gauging the strength of intermolecular attractions, considering the magnitude of partial charges isn't enough; you also must look at how spread out the charges are. The atomic radius trends you've learned are often very helpful here. Author: Fred Senese [email protected]
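The effect of charge concentration can be sketched numerically. Below is a crude back-of-the-envelope model, not a quantum-chemical calculation: the covalent radii are rounded textbook values and the partial charge of 0.4 is an arbitrary illustrative number. Spreading the same partial charge over spheres of different atomic radii shows how much more tightly a small hydrogen atom concentrates it.

```python
# Crude illustration: model the partial positive charge left on a bonded
# atom as spread over a sphere of the atom's covalent radius. For a fixed
# charge q, surface charge density scales as q / (4*pi*r^2), so the
# smallest atom concentrates the charge most tightly.
import math

covalent_radius_pm = {  # approximate covalent radii in picometres
    "H": 37,
    "N": 75,
    "C": 77,
    "P": 110,
}

def surface_charge_density(q, radius_pm):
    """Charge per unit area for charge q spread over a sphere (arbitrary units)."""
    return q / (4 * math.pi * radius_pm ** 2)

q = 0.4  # the same hypothetical partial charge on each atom
h_density = surface_charge_density(q, covalent_radius_pm["H"])
for atom, r in covalent_radius_pm.items():
    rel = surface_charge_density(q, r) / h_density
    print(f"{atom}: charge density relative to H = {rel:.2f}")
```

Because density falls off as 1/r², a carbon atom about twice hydrogen's radius carries the same partial charge at roughly a quarter of the concentration, which is the point made above about how "spread out" the charges are.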
Chapter 16: STOPPING DISTANCE
When admiring a new car, no one ever asks the owner how well it stops... just how well it goes. Many drivers drive in the false belief that if the car in front suddenly started braking, they would react and brake and end up stopped the same distance apart. The total stopping distance of a vehicle is made up of 4 components.
- Human Perception Time
- Human Reaction Time
- Vehicle Reaction Time
- Vehicle Braking Capability
The human perception time is how long the driver takes to see the hazard and for the brain to realise it is a hazard requiring an immediate reaction. This perception time can be as long as ¼ to ½ a second. Once the brain realises the danger, the human reaction time is how long the body takes to move the foot from accelerator to brake pedal. Again, this reaction time can vary from ¼ to ¾ of a second. These first 2 components of stopping distance are human factors and as such can be affected by tiredness, alcohol, fatigue and concentration levels. A perception and reaction time of 3 or 4 seconds is possible. 4 seconds at 100 km/h means the car travels 110 metres before the brakes are applied. Once the brake pedal is applied there is the vehicle's reaction time, which depends on the brake pedal free-play, the hydraulic properties of the brake fluid and the working order of the braking system. This is why the tailgating car usually cannot stop: when the brake light came on in the car in front, that driver had already completed the perception, human reaction and vehicle reaction periods. The following driver was perhaps 1 second too late in applying the brakes. At 100 km/h the car requires a further 28 metres to stop.
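The arithmetic behind these figures is simply distance = speed × time, with speed converted from km/h to m/s. A minimal sketch (the function name is mine):

```python
# Distance covered during the perception/reaction period: the car keeps
# moving at its initial speed until the brakes actually bite.
def reaction_distance_m(speed_kmh, reaction_time_s):
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * reaction_time_s

# Figures from the text: ~4 s of perception + reaction at 100 km/h,
# and a tailgater applying the brakes 1 s late.
print(round(reaction_distance_m(100, 4.0)))   # ~111 m (the text rounds to 110)
print(round(reaction_distance_m(100, 1.0)))   # ~28 m
```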
The last factor that determines the total stopping distance is the car's braking capability, which depends on factors such as:
- the type of braking system,
- brake pad material,
- brake alignment,
- tyre pressures,
- tyre tread and grip,
- vehicle weight,
- suspension system,
- the co-efficient of friction of the road surface,
- wind speed,
- slope of road,
- surface smoothness
- the braking technique applied by the driver.
Worth noting is that from 50 to 100 km/h the braking distance of a car will increase from 10 metres to 40 metres. When you double the speed of a car, its braking distance quadruples. This is based on the laws of physics. When a car is moving it has kinetic energy, ½mv². When the velocity doubles, the kinetic energy quadruples. The braking capability does not increase when driving faster; there are no reserves of friction. As such, in any vehicle, when your speed doubles the braking distance is four times larger.
BRAKING DISTANCE FROM 100 km/h (real world testing)
The table below lists the BRAKING DISTANCE of various cars from 100 km/h. These cars were tested at different locations on different days. Be careful comparing results, as test results can vary depending on many factors including the road surface, how the speed was measured (as various cars have differing speedometer accuracies), the tyre pressures, fuel load and whether the car had only the driver or additional passengers.
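The quadrupling can be checked with the work–energy argument: friction must absorb the kinetic energy ½mv², so for a constant deceleration a the braking distance is d = v²/(2a). A short sketch, assuming a deceleration of about 9.6 m/s² (an illustrative value chosen so the 50 km/h case matches the 10 metres quoted above):

```python
# Braking distance from the work-energy theorem: friction work F*d must
# absorb 1/2*m*v^2, giving d = v^2 / (2*a). Doubling v quadruples d.
def braking_distance_m(speed_kmh, decel_ms2):
    v = speed_kmh / 3.6                 # convert km/h to m/s
    return v ** 2 / (2 * decel_ms2)

a = 9.6  # assumed constant deceleration in m/s^2 (a hard stop on dry bitumen)
d50 = braking_distance_m(50, a)
d100 = braking_distance_m(100, a)
print(round(d50, 1), round(d100, 1))    # roughly 10 m and 40 m, as in the text
print(round(d100 / d50, 2))             # 4.0 -- double the speed, four times the distance
```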
MAKE/MODEL                      DISTANCE (m)   SOURCE
Alfa MITO                       37.61          Motor Magazine (Aust)
Alfa Giulietta QV               37.80          Motor Magazine (Aust)
Audi A5 Sportsback              37.62          Motor Magazine (Aust)
BMW 123D Hatch                  37.95          Motor Magazine (Aust)
BMW 330D Coupe                  36.63          Motor Magazine (Aust)
Chrysler 300C                   38.72          Motor Magazine (Aust)
Holden VE Commodore SV6         39.86          Motor Magazine (Aust)
HSV GXP                         37.76          Motor Magazine (Aust)
HSV GTS (WP tuned - 2011)       38.31          Motor Magazine (Aust)
Nissan GTR (R35 - 2011)         32.75          Motor Magazine (Aust)
Porsche 911 Turbo S (2011)      39.62          Motor Magazine (Aust)
Renault Megane RS250            36.34          Motor Magazine (Aust)
Renault RS Clio 200             36.43          Motor Magazine (Aust)
Subaru Impreza WRX              37.38          Motor Magazine (Aust)
Suzuki Alto                     43.56          Motor Magazine (Aust)
VW Golf GTD                     37.58          Motor Magazine (Aust)
VW Golf R                       39.57          Motor Magazine (Aust)
VW Golf GTI                     39.36          Motor Magazine (Aust)
Volvo C30 TS                    39.05          Motor Magazine (Aust)
BRAKING & STOPPING DISTANCE CHART (government data)
This is a chart issued by a transport department. It shows reaction distance, braking distance and total stopping distance in a convenient diagram. However, the devil is most definitely in the detail. This diagram suggests 60 metres of braking distance will be required by a car at 100 km/h, resulting in a total stopping distance of 88 metres. There are many of these diagrams issued around the world and most tend to have a total stopping distance at 100 km/h within the range of 80 metres to 94 metres. Real world testing data (see the table above) suggests a modern car requires a braking distance of less than 40 metres. It is conceded that "worst-case scenarios" need to be considered, as we don't all drive sportscars. However, I suggest many government diagrams are at best "out-of-date" and at worst "exaggerated". Keep in mind that any exaggeration is magnified in the government charts as the speeds increase. (Written by Joel Neilsen, Managing Director, Safe Drive Training)
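Combining the components gives the total stopping distance these charts describe. The sketch below is illustrative only: the 1 s reaction time is an assumption, and the 60 m and 38 m braking figures are plugged in from the government chart and the magazine table discussed above.

```python
# Total stopping distance = reaction distance (speed held constant) +
# braking distance. Used here to compare a government-chart assumption
# with a typical magazine test figure.
def stopping_distance_m(speed_kmh, reaction_time_s, braking_m):
    return speed_kmh / 3.6 * reaction_time_s + braking_m

# Government-style chart: ~60 m of braking at 100 km/h.
print(round(stopping_distance_m(100, 1.0, 60)))   # ~88 m, matching the chart
# Modern car from the table above: ~38 m of braking, same reaction time.
print(round(stopping_distance_m(100, 1.0, 38)))   # ~66 m
```

The ~22 m gap between the two results is exactly the discrepancy the text is arguing about: it comes entirely from the assumed braking distance, not the reaction component.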
Did you know that working memory is one of the elements underlying intelligence? Improving memory is becoming an integral part of our lives as we come to understand how the brain works and how important it is to exercise our brain. As demands in our lives increase, so does the demand on our working memory. The working memory is like your brain's 'Post-it note'... it makes all the difference to successful learning. (Dumbach-Fusco, 2014) Working memory describes the ability to hold in the mind and mentally manipulate information over short periods of time. (Gathercole and Packiam Alloway, page 4) Working memory is often thought of as a mental workspace that we can use to store, retrieve and manipulate important information. Several studies demonstrate a correlation between working memory, learning and attention. Working memory is particularly relevant to gifted and talented students, who need high level functioning skills, and it is important to know the capacity of the child's working memory. Measuring the memory capacity of a child will help us understand their learning ability. Recognising how a child learns, organises and transfers information is an essential part of the teaching process. A child's success in education, and ultimately in their life, is in part dependent on developing memory skills that can be applied to various situations. Memory capacity can be calculated with a digit span test. With the ability to remember four items in sequence we can cope with formal learning activities such as remembering the sounds in a word to read or write it (c-a-t = cat), or the numbers in a sum (3 + 4 = 7) to work out the answer. In 1956, George Miller, a Harvard psychologist, found that individuals are generally limited to remembering about seven (plus or minus two) things at a time when processing information. Our working memory provides a temporary hold for information that is not always stored away into long term memory.
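The digit span test mentioned above is easy to sketch in code: present progressively longer random digit sequences and record the longest one recalled exactly. The function names and the simulated "recallers" below are mine, purely for illustration of the procedure:

```python
# A minimal digit-span test sketch: span = longest digit sequence the
# subject can repeat back correctly.
import random

def make_sequence(length, rng):
    """A random sequence of single digits of the given length."""
    return [rng.randint(0, 9) for _ in range(length)]

def span_reached(recall_fn, max_len=10, rng=None):
    """Lengthen the sequence until recall fails; return the last length passed."""
    rng = rng or random.Random(0)
    span = 0
    for length in range(1, max_len + 1):
        seq = make_sequence(length, rng)
        if recall_fn(seq) != seq:
            break
        span = length
    return span

# A 'perfect' recaller reaches the maximum; a recaller capped at 4 items
# models the four-item threshold the text links to formal learning.
print(span_reached(lambda s: s))                         # 10
print(span_reached(lambda s: s if len(s) <= 4 else []))  # 4
```

In a real classroom version, `recall_fn` would be replaced by reading the sequence aloud and typing or speaking the child's answer back in.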
It allows us to keep information in our minds while using that information to complete a task. Many things we process do not need to be stored in long term memory and are only relevant for the moment when we are reacting to them. Our working memory controls our ability to operate and is limited by the amount of information we can remember at one time. We only hold information in our short term memory for a few seconds and this can be even less if we are distracted. However there are strategies to improve working memory. We can show children how to extend their working memory by teaching them to rehearse. By repeating things over and over again it helps us to remember. In addition, we can extend this ability by teaching students to chunk information. When we do this, we decrease the number of items we have to remember by increasing the size of each item. So, rather than remembering all the letters in the word "chocolate" we can chunk the syllables together and remember each part of the word. These methods are used by many ‘experts’ who exhibit amazing memories. Another technique is to teach children to make associations with information stored in their long term memory. This is dependent on them having sufficient life experiences that they can discuss, store and link to new information. It is easier to remember things in context as we don't remember isolated facts as easily. To use this technique, get a box of objects and ask the child to select a number (dependent on their digit span test) and create a story from the objects. Allow them the opportunity to link the items together through a story (association) and rehearse the story. Hide the objects and, without distracting or interrupting the child, ask them to repeat the story from memory. If they identify all items, repeat the following day with one more item and a different set of objects. 
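Chunking can be illustrated in a few lines: the same word held as letters occupies nine memory slots, but held as syllables it occupies only three. The syllable split below is done by hand, purely for illustration:

```python
# Chunking sketch: fewer, larger items to hold in working memory.
word = "chocolate"
letters = list(word)               # 9 separate items to rehearse
chunks = ["cho", "co", "late"]     # 3 items, each carrying more content

assert "".join(chunks) == word     # the chunks still encode the whole word
print(len(letters), len(chunks))   # 9 3
```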
Eloise started receiving tutoring when she was placed in the learning support group at school. After some practice sessions showing her how to improve her memory, ten year old Eloise was soon able to sequence 45 items. Imagine the power of that memory capacity and skill when she is cramming for a history exam. Eloise now focuses so well that she copes with distractions. When a child has a high IQ but is limited by their working memory or other disabilities, their measured IQ is compromised, and performance and concentration can therefore be below potential. Deficits in working memory, concentration or other disabilities may mask a child's giftedness, and identification can be difficult. In addition, some compensatory behaviours may be exhibited which make identification of gifted students complex. Sometimes these children become the class clown: they feel inadequate because they can't remember, so they act accordingly. Others may withdraw and refuse to actively participate because they know they can't remember what to do. Some children survive by copying others and don't learn for themselves or develop confidence in their own ability. Some children may create their own version of the learning activity and we quite often label them as creative. In reality they are completing the lesson in the only way they know how: their own way. Unfortunately, as teachers and parents, we sometimes see children who are so frustrated by their inability to remember that they behave poorly, to the point where they are withdrawn from the activity and sometimes the group. The stress of not remembering can also see children taking out their anger in the classroom, in the playground, or on the ones that love them the most, their parents. As adults, we need to recognise these behaviours as messages that there is a problem, and teach children how to make remembering easier.
From an early age we need to instil in our children the need to work the brain and develop the memory. A good working memory will allow better comprehension and retention of oral instructions, and better performance in a variety of learning activities. It is essential we know that our children are able to perceive information correctly and then remember and transfer it to the required activity. In this advanced world of technology, we need to ensure that our children spend more time working their brains and memory, and not just their fingers and eyes. As Joshua Foer says in his book (Foer, 2011), "a great memory is the essence of expertise." That is why working memory is a common component of many IQ tests.
About the author
Julie Bradley has over 30 years experience in education. The resources in this article can be found on her website: www.smartachievers.net.au. Readers may also opt in to Julie's newsletters for regular updates and ideas.
Foer, J. 2011. Moonwalking with Einstein. London: Allen Lane.
Gathercole, S. and Packiam Alloway, T. Understanding Working Memory: A Classroom Guide. London: Harcourt Assessment.
Dumbach-Fusco, 2014. www.ncld.org/types-learning-disabilities/executive-function-disorders/what-is-working-memory-why-does-matter (accessed 8 April 2014).
Herbert Spencer (April 27, 1820 – December 8, 1903) was a highly influential English philosopher and political theorist. He had an encyclopedic range, writing on topics of government, ethics, education, economics, sociology, biology, psychology and anthropology. He originated some of the ideas of evolution picked up by Charles Darwin. Spencer's main theme was that powerful forces of social evolution were systematically making mankind better and better. In politics he was a 19th century liberal, similar to modern libertarians, who opposed government intervention of any sort and provided the arguments that conservatives still use to oppose socialist and liberal proposals. He is most famous as a champion of individualism and for his rejection of any form of collectivism. Spencer insisted on an ethical and humane approach to future social development, which prohibited dominance and aggression towards dependent persons or groups, even if it could be demonstrated that the long-term result would be beneficial. Sciabarra (1999) calls Spencer "The First Libertarian." Spencer was born into a poor but well-educated family. He was home-schooled by his father and his uncle Thomas Spencer, an evangelical clergyman who had been educated at Cambridge University. Starting at age 17 he spent four years as a civil engineer building railways, and the highly systematic engineering mode of analysis characterized his intellectual approach. He never married, and after 1855 was a perpetual hypochondriac who complained endlessly of pains and maladies that no physician could diagnose. Working as a journalist in London, Spencer came to know the leading intellectuals of the day. They were fascinated by his ideas, and amused by his extreme eccentricities. For example, he preached hard work but was notoriously lazy. He preached the survival of the fittest at the same time as he thought his own body was being destroyed by mysterious diseases.
Spencer's ideas appeared first in magazine articles and were published in a series of books that he constantly rewrote and revised. His most important ideas (on evolution, especially) appeared in the 1850s and 1860s, and after 1870 or so he had little new to say. His first book Social Statics (1850) was a utopian portrayal of an ideal society. He believed that human society was governed by fixed natural laws, and that government should be limited to enforcing those natural laws. He stressed the need for society to adjust to the material environment and proclaimed a libertarian principle of "equal freedom" for each individual, limited only by "the similar freedoms of all." He denied the legitimacy of private property in land, argued for the equality of men and women as a moral ideal, and said children should be educated through persuasion and rational argument rather than discipline and coercion.
- "It cannot but happen that those individuals whose functions are most out of equilibrium with the modified aggregate of external forces, will be those to die; and that those will survive whose functions happen to be most nearly in equilibrium with the modified aggregate of external forces. But this survival of the fittest, implies multiplication of the fittest. Out of the fittest thus multiplied, there will, as before, be an overthrowing of the moving equilibrium wherever it presents the least opposing force to the new incident force." (The Principles of Biology, Vol. I (1864), Part III: The Evolution of Life, Ch. 7: Indirect Equilibration)
In 1852 Spencer published a major article that articulated the idea of biological evolution and which had a major influence on Charles Darwin's theory of evolution, which appeared in 1859.
In ‘The development hypothesis’, Spencer attacked "special-creationism", and proclaimed an evolutionary model of the natural world, based on a process of continuous small changes: "Surely if a single cell … may become a man in the space of twenty years … there is nothing absurd in the hypothesis that … a cell may, in the course of millions of years, give origin to the human race." Spencer's second book, The Principles of Psychology, (1855) said that the human mind was the product of environmentally generated organic evolution; it came under attack for its atheism and materialism. His essays on education, Education: Intellectual, Moral and Physical (1861) ridiculed the usual practice of cramming, the dominance of Latin and Greek in the upper class schools, and excessive attention to the stories of kings and queens. He recommended instead a program of "self-development", based on problem-solving, empirical geometry, healthy exercise, drawing from observation, and natural science. He said children should be taught and disciplined, not by artificial grades and punishments but by having to accept the consequences of their actions, with the "impersonal agency of Nature" replacing "the personal agency of parents." Spencer spent forty years writing a series of books and articles explaining his "Synthetic Philosophy." The books sold very well and, especially in the 1870s, had a major impact on European and American thinkers. He focused on the evolution of biological organisms, and human society itself, by which he meant the inevitable movement from the simple to the complex. Thus simple one-celled organisms became complex multi-celled organisms over time. Likewise the simple way of life of "primitive" peoples evolved into the complex institutional arrangements of the modern world. Evolution was progress, he believed, and always moved forward. 
Sociology and Ethics
In The Principles of Sociology (1876) and The Principles of Ethics (1879) Spencer proclaimed a universal law of socio-political development: societies moved from a military organization to a base in industrial production. As society evolved, he argued, there would be greater individualism, greater altruism, greater co-operation, and a more equal freedom for everyone. The laws of human society would produce the changes, and he said the only roles for government were the military, the police, and the enforcement of civil contracts in the courts. Many libertarians adopted his perspective.
Runs out of ideas
Scientists by the 1870s were focused on laboratory work; Spencer never followed the new research, and scientists ignored his increasingly outdated ideas. In biology, for example, Spencer clung to Lamarckianism long after other scientists had rejected it as a model of how evolution worked. In the social realm, Spencer's ignorance of history made it easy for scholars to show that his theories of evolution simply did not match the historical record. For example, he believed that industrialization would automatically reduce warfare; instead it led to high-powered armies and navies and to warfare on a much vaster scale, as in the two world wars.
Creates conservative political arguments
As a consequence of his faith in the cosmic force of evolution, Spencer became the champion of a social philosophy of laissez-faire, with the smallest possible role for government. The individualism underlying his philosophy is succinctly expressed in his book on ethics: "Every man is free to do what he wills, provided he infringes not the equal freedom of any other man." Politics in Britain moved in directions that Spencer disliked, and his arguments provided ammunition for conservatives and individualists in Europe and America that they still use in the 21st century. By the 1880s he was denouncing "the new Toryism" (that is, the social reformist wing of Prime Minister William E. Gladstone).
In The Man versus the State (1884) he attacked Gladstone and the Liberal party for losing its proper mission (defending personal liberty, he said) and instead promoting paternalist social legislation. Spencer denounced Irish land reform, compulsory education, laws regulating workplace safety, prohibition and temperance laws, free libraries, and welfare reforms. His objections were threefold: the use of the coercive powers of government, the discouragement given to voluntary self-improvement, and the disregard of the "laws of life." The reforms, he said, were tantamount to "socialism", which he said was about the same as "slavery" in terms of limiting human freedom. Spencer vehemently attacked the widespread enthusiasm for annexation of colonies and imperial expansion, which subverted all he had predicted about evolutionary progress from ‘militant’ to ‘industrial’ societies and states. Spencer anticipated many of the analytical standpoints of later conservative theorists like Hayek, especially in his "law of equal liberty", his insistence on the limits to predictive knowledge, his model of a spontaneous social order, and his warnings about the "unintended consequences" of collectivist social reforms. Although raised in a devout Methodist family, Spencer never joined a church. He helped inspire Charles Darwin's theory of evolution and coined the term "survival of the fittest", a phrase Darwin later adopted. His father commented in 1860: "It appears to me that the laws of nature are to him what revealed religion is to us, and that any willful infraction of those laws is to him as much a sin, as to us is disbelief in what is revealed." Though "he did not accept the dogmas of any creed, he was, in the truest sense, religious," said David Duncan, his closest aide.
"In private life," said another friend, "he refrained from obtruding his heterodox views upon others, nor have I ever known him give utterance to any language which could possibly be construed as 'scoffing.' The name of the Founder of Christianity always elicited his profound respect." Spencer, like many scientists of the time, was a deist: he did not believe in a personal God, only in some impersonal first cause. In The Principles of Sociology (1876) and The Principles of Ethics (1879) he claimed to have traced the evolution of religious belief and institutions from their origins in the ancestor cults of primitive peoples to his own conception of an "unknowable" God-like First Cause. Eventually, Spencer became an agnostic in the sense that he could not describe God but was sure something like God existed. Spencer said the longstanding disputes between science and religion were based on a misunderstanding and a failure to see the logical boundary between the "Knowable" and the "Unknowable." Everything in the "Knowable" sphere belonged to science, he argued, while everything in the "Unknowable" sphere belonged to religion, including the existence of God. Spencer had a major influence on thinkers in Europe, Asia, and Latin America, especially British and American thinkers of the 1870s, including such prominent conservatives as Andrew Carnegie and William Graham Sumner. Spencer's influence faded in Britain and America after 1890, but he was rediscovered by American conservatives in the 1930s, and his ideas about government remain influential among conservatives. Spencer was an early contributor to theories of social Darwinism, the idea that evolutionary concepts applied to societies as well as to individuals. Spencer developed his ideas before Darwin, and his "social Darwinism" dealt only with the progressive betterment of human society.
Spencer's individualistic version has nothing in common with the collectivist forms of "social Darwinism" that promoted a war to the death among races, as the Nazis preached. Root (2008) and Leonard (2009) demonstrate that Spencer did not support the later German versions of "social Darwinism" and was not responsible for the distortions.
- Carneiro, Robert L. and Perrin, Robert G. "Herbert Spencer's 'Principles of Sociology': a Centennial Retrospective and Appraisal." Annals of Science 59(3) (2002): 221-261, online at Ebsco
- Duncan, David. The Life and Letters of Herbert Spencer (1908), online edition
- Francis, Mark. Herbert Spencer and the Invention of Modern Life (2007), 464 pp.
- Harris, Jose. "Spencer, Herbert (1820–1903)", Oxford Dictionary of National Biography (2004), online; a standard short biography
- Leonard, Thomas C. "Origins of the Myth of Social Darwinism: The Ambiguous Legacy of Richard Hofstadter's Social Darwinism in American Thought." Forthcoming in Journal of Economic Behavior and Organization (2009), online edition; argues that Richard Hofstadter (1944) both distorted Spencer's free-market views and smeared them with the taint of racist collectivism
- Root, Damon W. "The Unfortunate Case of Herbert Spencer: How a Libertarian Individualist Was Recast as a Social Darwinist," Reason, July 29, 2008
- Sciabarra, Chris Matthew. "The First Libertarian," Liberty (Aug. 1999), online; argues that the author of The Man Versus the State transcended simple-minded anti-statism to achieve the first major statement of dialectical libertarianism
- Taylor, Michael W. The Philosophy of Herbert Spencer (Continuum Studies in British Philosophy) (2007), excerpt and text search
- Taylor, Michael W. Men versus the State: Herbert Spencer and Late Victorian Individualism. Oxford: Oxford University Press, 1992.
- Weinstein, David. Equal Freedom and Utility: Herbert Spencer's Liberal Utilitarianism (1998), 235 pp.
- Herbert Spencer in the Stanford Encyclopedia of Philosophy
- Most of Spencer's books are available online
- Spencer, Herbert. Spencer: Political Writings (Cambridge Texts in the History of Political Thought), edited by John Offer (1993), excerpt and text search
- Spencer, Herbert. Social Statics (1850); abridged version (1851)
- Spencer, Herbert. System of Synthetic Philosophy (1860)
- Spencer, Herbert. The Study of Sociology, excerpt and text search; also full text online free
- Spencer, Herbert. The Principles of Psychology, excerpt and text search; full text online
- Spencer, Herbert. Social Statics, Abridged and Revised: Together with The Man Versus the State (1896), highly influential among libertarians; full text online free
- Spencer, Herbert. The Man Versus the State (1884)
- Spencer, Herbert. Education: Intellectual, Moral, and Physical (1891), 283 pp., full text online
- Spencer, Herbert. An Autobiography (1905, 2 vol.), full text online
- Online writings of Spencer
- The Right to Ignore the State, by Herbert Spencer
- Extensive biography and overview of works
- Review materials for studying Herbert Spencer
Notes:
- Francis (2007), p. 337
- See text online, p. 46, section 272
- "Fit" here means suitable, or the right size and shape, not fit as in athletically strong (although some have inferred this interpretation).
- Duncan, Life and Letters of Herbert Spencer, p. 491
- Duncan, Life and Letters of Herbert Spencer, p. 492
In the Maya tradition, a priest harvested stingless bee honey as part of a religious ceremony twice a year. To increase the number of hives and honey production, beekeepers would regularly divide existing nests. But Africanized bees, with their far greater honey production, presented a more economically attractive option for the beekeepers in Yucatán. While a colony of stingless bees may produce a few pounds of honey per year, Africanized honeybees can produce 220 pounds (100 kilograms). "Moreover, the Africanized honeybee colonies are free, or nearly so, and don't have to be looked for but merely gathered by trapping mobile colonies in suitable hive boxes," Roubik said. "The stingless bees only reside in forests within living trees," he added. "They're not easy to find and not attracted to bait hives." Few young people seem interested in the ancient art of stingless bee husbandry. "For many the colonies are an heirloom, like their father's stamp collection, and they don't feel a burning desire to carry on the tradition," Roubik said. Roubik started working in Yucatán in 1987 to find ways to study the impact of invasive Africanized honeybees. In the 1980s researchers estimated there were more than a thousand active hives of native bees on the Yucatán Peninsula. By 1990 that number had shrunk to around 400, and by 2004 there were only 90 hives left. "At this rate, we would expect the art of stingless beekeeping to disappear from the Yucatán by 2008," Roubik said. The dramatic decline has ecological consequences. Take pollination, for example.
While both stingless Maya bees and the Africanized honeybees visit many of the same flowering plants, there are some plants, such as members of the tomato family and certain forest shrubs and trees, that are not visited by Africanized bees. "From my long-term work in French Guiana, where I documented the gradual takeover from Melipona [stingless bees] of certain flowering plants by African honeybees and their spread, I measured a 40 percent decline in seed production by one native shrub as the result," Roubik said. Furthermore, the native bees may starve as deforestation, forest fragmentation, and hurricanes reduce the availability of the floral resources they need. Another threat may be human. "It comes from well-motivated but clumsy attempts to domesticate or propagate colonies by transferring [the colonies] to hive boxes or moving them to places where the conditions are not particularly good, like windy, open areas or places near the sea coast," Roubik said. The ancient beekeeping technology is "all but lost," he added. "We would like to see it turned around, not only to ensure the survival of meliponiculture as a way of life, but also to build up breeding stock to be reintroduced into the wild, where bees play an important role as pollinators."
Climate Change means more than just hotter average temperatures year round. There are also numerous consequences for sea levels, glaciers, weather patterns, weather stability, crop growth, fisheries, wildlife, forest fires, disease, parasites, rivers, and fresh water tables. Explaining it can be a challenge, which is why visual tools like tables, maps, and charts are so useful. Unfortunately, these too can seem bland and technocratic, and fail to capture the true extent and critical nature of Climate Change. Luckily, this past summer, a season marked by both uncharacteristically cool and uncharacteristically hot temperatures, two particularly useful visual aids were produced that seek to remedy this. By combining data-driven predictions with presentations that are both personal and global in outlook, they bring the consequences of Climate Change home. The first is known as 1001 Blistering Future Summers, a tool produced by the Princeton-based research and journalism organization Climate Central. This interactive map illustrates how much hotter summers will become by the end of the century if nothing is done to stem global warming. Users simply type in the name of their hometown, and the map compares the town's current summer temperatures with its projected future ones, then finds the present-day geographic equivalent. On average, according to Climate Central, daytime summer temperatures will be 4 to 6°C (7 to 10°F) warmer across U.S. cities. That translates to most cities in the U.S. feeling the way Florida or Texas feel in the summer today. For example, in the future, Boston will feel like North Miami Beach. And Las Vegas, where temperatures are projected to average 111°F (44°C), will feel more like Saudi Arabia. As you can imagine, changes like these will have drastic effects that go far beyond scorching summers and inflated AC bills. Furthermore, when one considers the changes in a global context, and how disproportionately they will be felt, they become even more disconcerting.
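A quick note on the figures above: a temperature *difference* converts from Celsius to Fahrenheit using only the 9/5 scale factor (the +32 offset applies to absolute temperatures, not to deltas). A minimal sketch of the conversion:

```python
def delta_c_to_f(delta_c):
    """Convert a temperature difference from Celsius to Fahrenheit.

    Differences scale by 9/5 only; the +32 offset applies to absolute
    temperatures, not to deltas.
    """
    return delta_c * 9 / 5

for dc in (4, 6):
    print(f"{dc} C warmer = {delta_c_to_f(dc):.1f} F warmer")
```

Running this gives 7.2°F and 10.8°F, which is where the article's rounded "7 to 10°F" range comes from.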
And that is where the series of maps, collectively known as the "human dynamics of climate change", comes into play. Developed by the U.K. Met Office (the official British weather forecast service) with the U.K. Foreign Office and several universities, they start with a "present-day" picture map, which shows trade in various commodities (wheat, maize, etc.), important areas for fishing, routes for shipping and air freight, and regions with high degrees of water stress and political fragility. Then the maps get into specific issues, based on climate forecasts for 2100 that assume nothing will be done to stop global warming. You can see, for example, how higher temperatures could increase demand for irrigation water; how parts of the world could see increases and decreases in water run-off into rivers; how different areas are set for more flooding; and how the warmest days in Europe, parts of Asia, and North America are projected to be 6°C warmer. The poster also has summaries for each region of the world. North Africa, for instance, "is projected to see some of the largest increases in the number of drought days and decreases in average annual water run-off." North America, meanwhile, is forecast to see an increase in the number of drought days, increasing temperatures on its warmest days, and, depending on the region, both increases and decreases in river flooding. The overall impression is one of flux, with changing temperatures also resulting in vast changes to systems that human beings heavily rely on. This is the most frightening aspect of Climate Change, since it will mean that governments around the world will be forced to cooperate extensively to adapt to changes and make do with less. And in most cases, the odds of this aren't good. For instance, the Indus River, a major waterway that provides both India and Pakistan with extensive irrigation, originates in the Tibetan Plateau and flows through India before reaching Pakistan.
Should the upstream country choose to dam the river to get more use out of its waters, the downstream country would certainly attempt to intervene to prevent the loss of precious water flowing to its farmers. This scenario could all too easily escalate into full-scale war, with nuclear arsenals coming into play. The Yangtze, China's greatest river, similarly originates in territory that the country considers unstable, namely the Tibetan Plateau. Should water from this river prove scarcer in the future, control and repression surrounding its source is likely to increase. And when one considers that the Arab Spring was in large part motivated by food price spikes in 2010, themselves partly attributed to Climate Change, the potential for incendiary action becomes increasingly clear. Europe is also likely to experience significant changes due to the melting of Greenland's glaciers. With runoff from these glaciers bleeding into the North Atlantic, the Gulf Stream could be disrupted, resulting in Europe experiencing a string of very cold winters and dry summers. This in turn is likely to have a drastic effect on Europe's food production, with predictable social and economic consequences. Getting people to understand this is difficult, since most crises don't seem real until they are upon us. However, the more we can drive home the consequences by putting them into a personal, relatable format, not to mention a big-picture format, the more we can expect people to make informed choices and changes.
Mobile genetic element (transposon) (Science: molecular biology) Small, mobile DNA sequences that can replicate and insert copies at random sites within chromosomes. They have nearly identical sequences at each end, oppositely oriented (inverted) repeats, and they code for the enzyme transposase, which catalyses their insertion. Eukaryotes contain two classes of mobile genetic elements. The first are like bacterial transposons, in that the DNA sequences move directly. The second class (retrotransposons) move by producing RNA that is transcribed, by reverse transcriptase, into DNA, which is then inserted at a new site.
What is Neurological Dominance? Neurological dominance is a term used to describe the way in which our preferred thinking style and personality are influenced by the physical attributes of our brains. To explain how this works, let's first consider what we mean by the term 'dominance'. Most humans develop physical dominance wherever we have the option of developing a preference; for example, 90% of us prefer to use our right hand over our left for tasks such as writing, throwing or catching; we prefer to use one foot over the other when kicking a ball; we even have a dominant eye. You can find out which is your dominant eye by downloading and printing our free Dominant Eye Test Card. Dominance is important as it provides us with an automatic response to any given situation. For example, imagine that you were standing by the side of the road when a speeding car careered out of control and headed straight for you. Within a fraction of a second you would have started to run to get out of the way. In that moment you would not have paused to think, "Which leg shall I push off with when I start running?"; you would automatically have used your dominant leg. Dominance is therefore a very natural and important part of our make-up. Dominance is easy to explain where you have two hands or two legs, but how does Neurological Dominance occur when you only have one brain? The reason is that although a human brain looks like a mass of scrambled egg, it is in fact made up of lots of different parts, each of which processes information in different ways. As we grow and develop, the neural network that connects these brain regions becomes stronger between the regions we prefer to use, in just the same way as the muscles you use the most become stronger than those you use less. As a result, we develop physiological preferences for using some brain regions over others. Why is Neurological Dominance important?
Just like physical dominance, our Neurological Dominance provides us with an automatic response to any given situation. This is not to say that we cannot all think and behave in different ways, simply that our neural network makes it easier for us to think in ways that are consistent with our preferences. As a result, a person's energy and motivation are directed according to their neurological preferences, which is why we all have different personalities and exhibit different preferences for the way we think and behave.
The History of Neurological Dominance
What are the processing styles of the different parts of the brain?
How does the brain grow and develop dominance?
How can Neurological Dominance be measured?
Winning means a simple majority, that is, over half of the votes. If passing a motion takes more than half voting Yes, and the Yes votes can never reach that threshold, the motion does not pass. There are different ways to solve the problem; one is to look at different numbers of boys and girls. Think about each of the following:
A class of 50 boys and 50 girls
A class of 0 boys and 100 girls
A class of 100 boys and 0 girls
In neither of the extreme cases, nor in any other case, will the motion pass. Above the line, you see that 100% of the boys are depicted to the left of the bold line [light blue rectangle] with 100% of the girls to the right [pink rectangle]. Below the line are the voting percentages of Yes and No for both boys and girls. For the Yes vote to win, the segments representing the Yes votes together have to span more than half the length of the entire segment. The segments representing Yes will never add up to half of the total length, so the motion cannot pass.
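The segment argument can also be checked numerically. A minimal sketch, assuming hypothetical Yes rates for boys and girls that are each below 50% (the percentages below are illustrative, not taken from the original problem): since each group's rate is under a half, every weighted mix of the two groups is also under a half.

```python
# Illustrative Yes rates, each below 50% (hypothetical values only).
BOYS_YES = 0.40
GIRLS_YES = 0.45

def yes_fraction(num_boys, num_girls):
    """Overall fraction of the class voting Yes, a weighted average
    of the two groups' Yes rates."""
    total = num_boys + num_girls
    return (num_boys * BOYS_YES + num_girls * GIRLS_YES) / total

# Try every composition of a 100-student class, including both extremes.
worst_case = max(yes_fraction(b, 100 - b) for b in range(101))
print(worst_case < 0.5)  # → True: no composition yields a majority
```

This is exactly the picture in the diagram: the Yes segments are a weighted average of two sub-half segments, so they can never cover more than half the bar.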
Subsidies for wind power and other renewable energy systems have been a politically popular program over the past decade, leading to explosive growth in wind power installations across the United States, especially in the Midwest and Texas. But new research shows they might not work as well as one might think. The "social costs" of carbon dioxide would have to be greater than $42 per ton in order for the environmental benefits of wind power to outweigh the costs of subsidies, says Joseph Cullen, assistant professor of economics at Washington University in St. Louis. The social cost of carbon is the marginal cost to society of emitting one extra ton of carbon (as carbon dioxide) at any point in time. The current social cost of carbon estimates, released by the Environmental Protection Agency in November and projected for 2015, range from $12 to $116 per ton of additional carbon dioxide emissions. The prior version, from 2010, had a range between $7 and $81 per ton of carbon dioxide. The estimates are expected to rise in the coming decades. "Given the lack of national climate legislation, renewable energy subsidies are likely to continue to be used as one of the major policy instruments for mitigating carbon dioxide emissions in the near future," Cullen says. "As such, it's imperative that we gain a better understanding of the impact of subsidization on emissions." Since electricity produced by wind is emission free, the development of wind power may reduce aggregate pollution by offsetting production from fossil-fuel-generated electricity. When low-marginal-cost wind-generated electricity enters the grid, higher-marginal-cost fossil fuel generators will reduce their output. However, emission rates vary greatly by generator type: coal-fired and natural gas plants emit at different rates, while nuclear and hydropower plants emit nothing. Thus, the quantity of emissions offset by wind power will depend crucially on which generators reduce their output, Cullen says.
The quantity of pollutants offset by wind power depends crucially on which generators reduce production when wind power comes online. Cullen's paper, published in the American Economic Journal: Economic Policy, introduces an approach to empirically measure the environmental contribution of wind power resulting from these production offsets.
Wind Power Offsets
"By exploiting the quasi-experimental variation in wind power production driven by weather fluctuations, it is possible to identify generator-specific production offsets due to wind power," Cullen says. Importantly, dynamics play a critical role in the estimation procedure, he finds. "Failing to account for dynamics in generator operations leads to overly optimistic estimates of emission offsets. Although a static model would indicate that wind has a significant impact on the operation of coal generators, the results from a dynamic model show that wind power only crowds out electricity production fueled by natural gas." The model was used to estimate wind power offsets for generators on the Texas electricity grid. The results showed that one megawatt-hour of wind power production offsets less than half a ton of carbon dioxide, almost one pound of nitrogen oxide, and no discernible amount of sulfur dioxide. "As a benchmark for the economic benefits of renewable subsidies, I compared the value of offset emissions to the cost of subsidizing wind farms for a range of possible emission values," Cullen says. "I found that the value of subsidizing wind power is driven primarily by carbon dioxide offsets, but that the social costs of carbon dioxide would have to be greater than $42 per ton in order for the environmental benefits of wind power to have outweighed the costs of subsidies."
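The $42-per-ton threshold is easy to see with back-of-the-envelope arithmetic. A sketch under stated assumptions: take a hypothetical subsidy of $21 per MWh (an illustrative figure, not from the paper, though the federal production tax credit has been in this neighborhood) and the paper's rough estimate of half a ton of CO2 offset per MWh:

```python
# Illustrative breakeven calculation for a wind subsidy.
# Assumed inputs (hypothetical, for illustration only):
subsidy_per_mwh = 21.0     # $ paid per MWh of wind output
co2_offset_per_mwh = 0.5   # tons of CO2 offset per MWh (paper's rough figure)

# The subsidy is justified on carbon grounds when
#   social_cost_of_carbon * co2_offset_per_mwh >= subsidy_per_mwh,
# so the breakeven social cost of carbon is:
breakeven_scc = subsidy_per_mwh / co2_offset_per_mwh
print(f"${breakeven_scc:.0f} per ton")  # → $42 per ton
```

The logic runs both ways: a larger offset per MWh or a cheaper subsidy lowers the social cost of carbon needed to justify the policy.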
Back when George W. Bush announced a $1.2 billion plan to advance hydrogen fuel cells, the idea seemed rather implausible. Though it’s the most abundant element in the universe by far, in our environment hydrogen hides so effectively that our methods of drawing it out as a so-called “zero-emissions” fuel have tended to be either ineffective or extremely dirty. Producing hydrogen through the electrolysis of water was something of a holy grail to the hydrogen crowd, but suffered from the same drawbacks as traditional hydrogen sources: we needed to burn coal to make power to make hydrogen to make power. Often, hydrogen power simply traded the emission of airborne carbon at the consumer end for precisely the same emissions at the production end — hardly a great leap forward. Still, research has gone on. There’s simply too much potential in a fuel source that can already make a rally-level supercar with zero emissions, even if the “zero emissions” part is a bit of a mirage. Nuclear power has always presented itself as a nice supplement, offering emissions-free electricity to power the creation of emissions-free fuel — though at that point, why not just make electric cars and cut the hydrogen out entirely? This week, Lawrence Livermore National Laboratory presents one possible answer to this seeming imponderable: we should keep hydrogen in the loop because its production could help us with more than just power. The team claims to have found a method of saline water electrolysis that can both capture CO2 from the atmosphere and use that carbon to neutralize ocean acidification. The water left over after we’ve squeezed out the hydrogen fuel is said to be an electrolyte solution with a very high affinity for atmospheric CO2 — a claim that, if true, would be remarkable. As the researchers themselves note, most of the current methods of atmospheric carbon capture are cumbersome, requiring all sorts of energy input. 
Though there are currently no numbers available to quantify the efficiency of this system, even a moderate ability to take up atmospheric CO2 through the simple interface of air and water would be an incredible step forward. But the claims don't stop there! Upon absorbing the atmospheric CO2, the solution becomes saturated with high-pH-producing (alkali) molecules like carbonates and bicarbonates, substances the researchers hope could be used in neutralizing ocean acidification. The continual drop in oceanic pH (increase in acidity) is arguably one of the most worrying effects of atmospheric carbon, as up to 40% of the CO2 released will eventually be dissolved into the world's oceans, lakes, and rivers. Where atmospheric CO2 works on a rather abstract, global scale, ocean acidification works on the local one as well, devastating or sometimes completely destroying local ecosystems. While it might seem like a bit of a pipe dream to just dump a bunch of bicarbonate into the sea and hope that everything evens out, the precise reaction the researchers are referring to is actually one of the most common acid-base reactions in all of nature. When CO2 is absorbed by water, it forms carbonic acid, which can be neutralized by carbonates; if these names sound familiar, it's because the carbonic acid-bicarbonate buffer is the primary acid-base buffer found in life itself. At all times, the interplay between these two types of molecule keeps your blood from becoming either too acidic or too basic. Much is still unknown, since the paper won't be released in full for another few days. Still, Lawrence Livermore is standing behind the study and wants to scale the team's lab-scale experiments up to see their real-world implications. We need to know just how much energy is required to produce this hydrogen fuel and its carbon-sink water byproduct, and we also need to know precisely how much carbon can really be captured.
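The buffering behaviour described above can be made concrete with the Henderson-Hasselbalch equation, pH = pKa + log10([HCO3-]/[CO2]). A minimal sketch using textbook physiological values (pKa of about 6.1 for the bicarbonate system; these numbers are standard approximations, not figures from the study):

```python
import math

def bicarbonate_ph(hco3, dissolved_co2, pka=6.1):
    """Henderson-Hasselbalch pH for the carbonic acid-bicarbonate buffer.

    hco3 and dissolved_co2 are concentrations in the same units (only
    their ratio matters); pKa of about 6.1 is the standard physiological
    value for this system.
    """
    return pka + math.log10(hco3 / dissolved_co2)

# Blood carries roughly a 20:1 bicarbonate-to-dissolved-CO2 ratio
# (about 24 vs 1.2 mmol/L), which holds its pH near 7.4.
print(round(bicarbonate_ph(24.0, 1.2), 2))  # → 7.4
```

Because only the ratio enters the logarithm, adding or removing modest amounts of acid shifts the pH very little, which is exactly why this pair of molecules makes such an effective buffer, in blood and in seawater alike.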
If, for instance, we were to power 10% of American automobiles with this technology, how much atmospheric carbon would that actually represent? How much usable bicarbonate could then be extracted? And what sort of water are we left with as a waste product after all this has been done? Regardless, this study is an indication of a growing interest in linking our technologies to help one another, or to at least offset each other’s drawbacks. Forgetting cars for a moment, might we all switch to a hydrogen furnace in our home? Keep in mind that doing so would likely not be a move for personal efficiency, since conservation of energy makes it inherently inefficient to add another conversion between the original energy production and the eventual use of that energy. Rather, it would be a move to help the environment. If these researchers have their way, we could see our environmentalist initiatives turned on their head: “Save the planet,” the signs would say. “Leave your lights on!”
Hi. I am Doctor Shah. I was the National Lecture Competition winner in 1989 and I am the math master at math school. Now, ready for a new way of doing math? So there are three measures of average. The first one is the mode. The second one is the median. And the third one is the mean. The mode is the one which occurs most frequently. It is very easy to figure out the mode, but it is not very mathematical, so we tend not to use it a lot. The median is the middle one in an ordered list, so it is a lot more useful because it identifies the middle of the data. So, that is what you expect of an average. And the last one is called the arithmetic mean. It is also sometimes referred to as the weighted average. Now, the reason it is referred to as the weighted average is this. Imagine you have got a workplace, and in that workplace there are five staff and they all earn twenty thousand. There are also three managers and they earn thirty three thousand. And of course, there is the boss and he earns a whopping one hundred and sixty thousand. If we were working out the median of this data, we would be looking for the middle one. We would cross out from each end till we got to the middle one, which is this one here. So that would be the median. If we wanted to work out the mean, what we would be doing is adding them all up and dividing by how many there are. So add them all up and divide by nine, and when you do that you get an answer of roughly forty thousand pounds. The mean is very different from the median here because this one large number carries a large weight in the average. And that is why it is called the weighted average. Each value is not just taken into account as being a value; a very big value can affect the average a lot. And this would be a good example of where it would be better to use the median rather than the mean, because you have an extreme value in that data.
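The salary example from the lesson can be checked directly with Python's statistics module. The figures are the ones given above: five staff at £20,000, three managers at £33,000, and one boss at £160,000. The middle (5th of 9) value is a staff salary, while the mean is pulled up toward the boss's pay:

```python
from statistics import mean, median

# Salaries from the example: five staff, three managers, one boss.
salaries = [20_000] * 5 + [33_000] * 3 + [160_000]

print(median(salaries))       # middle of the ordered list → 20000
print(round(mean(salaries)))  # (100000 + 99000 + 160000) / 9 → 39889
```

The exact mean is £39,888.89, which is the "roughly forty thousand pounds" of the lesson, nearly double the median of £20,000, showing why the median is the better summary when the data contain an extreme value.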
Most mason bees are smaller than honey bees, but some are about the same size as honey bees or slightly larger. They have stout bodies, and many species are metallic green or bluish in color. Mason bees are common in the western United States, especially in forested regions, but they are also found in many other parts of the northern hemisphere. About 140 species of mason bees are found in North America out of about 200 species worldwide. These bees have a sting but do not attack defensively unless handled. The orchard mason bee, or blue orchard bee, is a metallic blue-black species about 13 mm (0.5 in) long. This bee, native to North America, specializes in collecting pollen from the flowers of fruit trees. In some parts of the United States, the bees are cultivated to pollinate orchard crops, especially apples. This bee nests in holes in wood and the females prefer to make nests close to each other in aggregations. These traits are used to concentrate enough bees in an area for commercial pollination. Blocks of wood with holes drilled in them attract nesting bees. These nest blocks are hung from trees or are placed in shelters for protection from the weather. Orchard mason bees mate in the spring. The females then begin to collect pollen and lay eggs. Larval bees feed for several weeks inside their closed cells. They pupate in late summer and spend the autumn and winter as adults inside their pupal cocoons in the nest. They emerge from the cocoons in the spring, coinciding with flowering of many orchard crops. The new generation of bees then begins the cycle over again. Orchard mason bees are very effective pollinators. Two or three females can pollinate the equivalent of a mature apple tree in one season. They fly in cool or rainy weather and can supplement or replace honey bees as commercial pollinators in some situations. Other mason bees are also used for pollination.
Another North American species, the blue blueberry bee, is used as a pollinator for blueberry plants. The Japanese hornfaced bee is native to Japan and has been used for apple pollination there for more than 50 years. One female can pollinate over 2000 apple flowers per day. The Spanish hornfaced bee is used similarly in Spain for pollinating the flowers of almond trees. Scientific classification: Mason bees comprise the genus Osmia in the leafcutter bee family Megachilidae, order Hymenoptera. The orchard mason bee is Osmia lignaria, the Japanese hornfaced bee is Osmia cornifrons, and the Spanish hornfaced bee is Osmia cornuta; the blue blueberry bee is Osmia ribifloris.
Glass is usually the most stable of archaeological materials, but glass artifacts, and 17th-century glass in particular, can undergo complex disintegration. Ideally, glass should consist of 70-74 percent silica, 16-22 percent alkali or soda ash (sodium carbonate) or potash (potassium carbonate, usually derived from wood ash), and 5-10 percent flux (lime [calcium oxide]). Soda-lime glass has been the most common glass throughout the history of glass-making, and the modern equivalent is 74 percent SiO2, 16 percent Na2CO3, and 10 percent lime added as stabilizer. Soda glass is characteristic of southern Europe, where it is made from crushed white pebbles and soda ash derived from burnt marine vegetation. Soda glass, which is often used for the manufacture of cheap glass, is twice as soluble in water as potash glass. Potash glass is more characteristic of interior Europe, where it is made from local sands and potash derived from wood ash and burnt inland vegetation. A little salt and minute amounts of manganese are added to make the glass clear, but potash glass is less clear than soda glass. Most early glass is green because of iron impurities in the materials. Alkali lowers the melting point of the sand, and the flux facilitates the mixture of the components. As long as the original glass mixture was kept in balance, the resulting glass will be stable. Problems arise when an excess of alkali and a deficiency in lime are present in the mixture, for the glass will be especially susceptible to attack by moisture. If old glass contains 20-30 percent sodium or potassium, it may have 'glass disease,' where the glass weeps and begins to break down. In all glass, the sodium and potassium oxides are hygroscopic; therefore, the surface of the glass absorbs moisture from the air. The absorbed moisture and exposure to carbon dioxide cause the Na2O or NaOH and K2O or KOH to convert to sodium or potassium carbonate. Both Na2CO3 and K2CO3 are extremely hygroscopic.
At a relative humidity (RH) of 40 percent and above (and in some cases as low as 20 percent RH), drops of moisture appear on the glass surface. In water, especially salt water, the Na and K carbonates in unstable glass may leach out, leaving only a fragile, porous hydrated silica (SiO2) network. This causes the glass to craze, crack, flake, and pit, and gives the surface of the glass a frosty appearance. In some cases, there is an actual separation of layers of glass from the body. Fortunately, these problems are not commonly encountered in glass manufactured in the 18th century and later. Pearson (1987b, 1987d) discusses glass deterioration and reviews the various glass conservation procedures. At our present state of knowledge, the decomposition of glass is imperfectly understood, but most glass technologists agree that glass decomposition is due to preferential leaching and diffusion of alkali ions (Na and K) across a hydrated porous silica network. Sodium ions are removed and replaced by hydrogen ions, which diffuse into the glass to preserve the electrical balance. The silicates are converted into a hydrated silica network through which sodium ions diffuse out. Decomposed glass often appears laminated, with iridescent layers on the surface. Glass retrieved from an acid environment often has an iridescent film, which is formed by the leached silica layers. The alkali which leached out is neutralized by the acid, and fewer hydroxyl ions are available to react with the silica. This causes the silica layer to thicken and become gelatinized. Glass excavated from an alkaline environment is less likely to have laminated layers because there is an abundance of hydroxyl ions to react with the silica network. Usually a protective layer does not form on glass exposed to alkaline solutions. The dissolution of the glass proceeds at a constant rate. 
The alkali ions are always extracted in excess of the silica, leaving an alkali-deficient layer, which continually thickens as the deterioration moves deeper into the glass. There are considerable differences of opinion as to what to do with unstable glass. Some professionals advise that the only treatment should be to keep the glass in low RH environments so the glass does not have any moisture to react with. While an RH range of 40-55 percent is usually recommended, it varies in relationship to the stability of the glass. The weeping or sweaty condition is sometimes made worse by the application of a surface lacquer or sealant. No resin sealants are impervious to water vapor, and the disintegration continues under the sealant until the glass falls apart. Other glass conservators try to remove the alkalinity from the glass to halt the deterioration. Most, if not all, of the glass manufactured from the 18th century on has been produced from a stable glass formulation, and there are not likely to be any considerable problems presented to the conservator other than normal devitrification. Since the glass is impervious to salt contamination, no conservation treatment other than simple rinsing, removal of incidental stains, especially lead sulfide staining on any lead crystal, and removal of calcareous deposits is expected. The main problems will be related to gluing pieces together. All the problems likely to be encountered are discussed thoroughly in Newton and Davison (1989).
TREATMENT OF UNSTABLE GLASS
Glass that is susceptible to weeping because of unstable glass formulations can be treated in different ways; the technique described by Plenderleith and Werner (1971:345) is representative of the treatments often recommended. It is presented below. The above treatment does not attempt to remove any of the glass corrosion products, which often result in layers of opaque glass that may be removed with various acid treatments.
The decision to remove surface corrosion products, which often mask the color of the glass, must be made on a case-by-case basis. Removal of corrosion products may also reduce the thickness of the walls and significantly weaken the piece. Indiscriminate removal of surface corrosion products can weaken, blur, or alter surface details. The corrosion layers of a glass object may be deemed a part of the history of the object, and thus a diagnostic attribute, and should not be removed without good reason. Devitrification is a natural process that occurs on siliceous material. It occurs naturally on flint and obsidian and is the basis for obsidian hydration dating. The surface of any glass from any time period, especially soda glass, usually becomes hydrated through time and so will eventually devitrify. Devitrification occurs when the surface of the glass becomes partly crystalline as it adsorbs moisture from the atmosphere or from a submerged environment. As it becomes crystalline, the surface becomes crazed and flakes from the body of the glass. Devitrified glass has a frosty or cloudy, iridescent appearance. Pane glass is especially susceptible. To prevent further devitrification and to consolidate the crazed surfaces, a coating of PVA or Acryloid B-72 should be applied to the piece. Either of these surface adhesives will smooth out the irregularities in the pitted, crazed surface of the glass by filling in the small cracks and forming optical bridges, making the glass appear more transparent. Merely wetting glass will cause it to appear clearer for the same reason.
REMOVAL OF SULFIDE STAINS FROM LEAD CRYSTAL
Leaded glass, which includes a wide variety of stem wares and forms of lead crystal, can become badly stained by lead sulfide. Glass that is normally clear may be recovered from marine and/or anaerobic sites with a very dense black film on its surface.
A 10-15 percent hydrogen peroxide solution is used, as with ceramics, to remove these sulfide stains. Other than stain removal, strengthening of glass artifacts with a consolidating resin is often required. Fragments can be glued together with a good glue, or if deemed necessary, an epoxy, such as Araldite. Glass can be repaired and reconstructed with the same glues as described for pottery. Optically clear epoxy resins are generally preferred as they adhere to the smooth, non-porous glass more readily. They also dry clearer and shrink less than the solvent resins. The resulting bonds, therefore, are less noticeable and stronger than with other glues. The epoxy resins are, however, usually irreversible. Hysol Epoxy 2038 with Hardener 3416 and Araldite are the two brands most commonly used in glass repair. The new 'super glues,' which are made of cyanoacrylate, are used quite often to piece the glass together quickly. After using the cyanoacrylate, epoxy is flowed into the cracks with an artist's brush to permanently bond the pieces. It is exceptionally difficult and time consuming to gap-fill glass. It requires considerable work and experience. The problem of matching transparent glass colors is equally difficult. All of these problems are adequately discussed in greater detail in Newton and Davison (1989). As is the case with all conservation, it is necessary for the conservator to be able to recognize what the problems are and to know what may be used to counter them. When lead oxides are found during glass conservation they can be removed with 10 percent nitric acid. A 1-5 percent sulfuric acid solution can be used to remove iron oxide, neutralize the alkalinity of glass that is breaking down, and, occasionally, to remove calcareous deposits. Calcareous deposits are commonly removed with 10 percent hydrochloric acid and, on some occasions, by immersing the glass in 5 percent EDTA tetra sodium. 
Iron stains are commonly removed with 5 percent oxalic acid or 5 percent EDTA di-sodium. Realistically, few problems other than reconstruction and restoration are likely to be encountered on any glass objects found in archaeological sites dating from the mid 18th century to the present. In most cases, the same chemicals and equipment required for treating ceramics are also used for conserving glass.
Colorblindness Test for Children
The image below can be used as a simple, non-medical test for red-green colorblindness in children. Originally published in Field and Stream magazine, the test was intended for potential hunters. However, the animal shapes can usually be identified by young children who may not yet be able to read numbers, which are used in standard colorblindness tests. A larger version of the image, which can be printed on plain white paper (or photo paper), can be found here. The image should be presented to a child in private. The child can be asked if they see any animals. There should be no prompting. A key showing what can be seen with differing color vision appears below; a larger image of the key can be found here. This and any such test should be done individually, without comment by the "tester." Any color vision issues detected should first be discussed with the child's parent(s) and not with the child. Further testing by a qualified ophthalmologist might be indicated. Past use of this test indicates the following: Children with normal vision can see the bear, deer, rabbit, and squirrel. They cannot see the fox. Children with a red-green color vision deficiency see a cow (instead of the deer), a fox (in the lower left), and usually the rabbit and squirrel. They cannot see the bear. Red-green colorblindness apparently occurs in varying degrees--mild to severe. Children with severe red-green color vision deficiency may have difficulty seeing the rabbit and/or squirrel. Generally, anyone with a red-green color deficiency cannot see the bear, but can see the fox. Children (and adults) with a red-green color deficiency have difficulty differentiating shades of the following colors from each other:
red from green
green from brown (especially beige)
blue from purple
pink from gray
Note that most color deficient children can identify pure primary colors.
In each of these cases, the color red (found in red, brown, purple, and pink) cannot be discerned, making the distinction difficult. Thus children see purple azalea or crepe myrtle blossoms as blue. They have difficulty seeing the browned pine needles among the green ones. A flashing traffic light could be red or amber. Green traffic lights look white. Because of the shift in the color vision of those with red-green colorblindness, those with the deficiency can more readily differentiate yellow and blue from green. Yellow and/or blue are frequently the “favorite colors” of those with a red-green color deficiency. Obviously once identified, tact must be used when informing a child of this vision issue. Care must also be shown when dealing with such children in a group setting, so as not to call undue attention or create a reason for discrimination or ridicule. The most common form of color vision deficiency (usually referred to as red-green colorblindness) occurs in about 7% of males in the United States. It is an inherited trait, carried by females but occurring in males. Colorblindness can be a complicated topic. Basic information can be found online, including this Wikipedia article. Suggestions for teachers can be found here. Given the frequency of this condition, it is surprising that testing is not done on all children prior to entering pre-school or school. This condition should be identified early, so that parents, caregivers, and teachers can address it with understanding, patience, and respect. For questions or further information, contact: norm_hellmers (at) yahoo.com
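The 7% male figure also implies something about females. As a rough sketch (assuming the standard X-linked recessive inheritance model and Hardy-Weinberg proportions, which the article does not go into):

```python
# Back-of-envelope estimate assuming red-green colorblindness is X-linked
# recessive and the allele frequency q equals the male prevalence.
q = 0.07                           # ~7% of U.S. males affected (from the article)
female_affected = q ** 2           # females need the allele on both X chromosomes
female_carriers = 2 * q * (1 - q)  # females carrying a single copy

print(f"affected females: {female_affected:.2%}")  # about 0.49%
print(f"carrier females:  {female_carriers:.1%}")  # about 13.0%
```

This is why the condition is "carried by females but occurring in males": an affected female needs two copies of the allele, which is far rarer than the single copy a male needs.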
Does Alexandria Plan Require Me to Adopt a Particular Method of Instruction? No. The way a teacher chooses to prepare students to approach the study of a text depends on their teaching style and the needs of their students. The teacher may choose to have students read short sections of text with a few assigned questions, preparing students to participate fully in discussion. The teacher may also choose to have students work in pairs or small groups to read and to discuss questions orally--preparing to pull their ideas together for a rich whole class seminar or discussion. Another teacher may want to allow students to first grapple with the text independently. During their first read, students might circle passages where they are confused and/or underline points that they thought were important. Students might annotate the passage with questions and notes. As the teacher circulates during independent reading block, he or she could note how the students’ questions gather throughout the passage. The teacher could then use the TDQs to clarify misunderstandings as needed and to dive deeper into the text than students were able to independently. We offer these questions for teachers to use as they see fit.
About guinea-worm disease
For further development, these larvae need to be ingested by a suitable species of voracious predatory crustacean, Cyclops or water fleas, which measure 1–2 mm and are widely abundant worldwide. In the cyclops, larvae develop to the infective third stage in 14 days at 26°C. When a person drinks contaminated water from ponds or shallow open wells, the cyclops is dissolved by the gastric acid of the stomach and the larvae are released and migrate through the intestinal wall. After 100 days, the male and female meet and mate. The male becomes encapsulated and dies in the tissues while the female moves down the muscle planes. After about one year of the infection, the female worm emerges, usually from the feet, releasing thousands of larvae and thus repeating the life cycle. No drug is available to prevent or heal this parasitic disease, which is exclusively associated with drinking contaminated water. Dracunculiasis is, however, relatively easy to eliminate and eventually eradicate. Guinea-worm disease is rarely fatal. Frequently, however, the patient remains sick for several months, mainly because:
- The emergence of the worm, sometimes several, is accompanied by painful oedema, intense generalised pruritus, blistering and an ulceration of the area from which the worm emerges.
- The migration and emergence of the worms occur in sensitive parts of the body, sometimes in the articular spaces, and can lead to permanent disability.
- Ulcers caused by the emergence of the worm invariably develop secondary bacterial infections which exacerbate inflammation and pain, resulting in temporary disability ranging from a few weeks to a few months.
- Accidental rupture of the worm in the tissue spaces can result in serious allergic reactions.
TenMarks teaches you how to use intermediate concepts of proportions to solve problems in indirect measurement.
Learn Intermediate Concepts of Indirect Measurement
In this lesson, let’s learn about indirect measurements and how to measure things using properties of similarity. The problem states that a tree, as we can see in the diagram, casts a shadow that is 48 feet long. So what we’ve done is drawn the shadow here at 48 feet long, and that’s the segment PQ. A man, whose height is five feet, and that’s the man here--I’ve tried to draw a diagram that says the man whose height is five feet, which is illustrated by TS--casts a shadow that is eight feet long. How tall is the tree? This is what we need to determine. That’s H. So what are we given? We are given two triangles, right. The first triangle is TPS. The second triangle is RPQ. All I did was just label them, okay. Since we got two triangles out of it, we just label them triangle TPS and triangle RPQ. Notice that I was careful to make sure that the angles were the same for the small and the big triangle. So P, which is the angle, is common between the two in the middle. Since these two triangles are similar, the ratio of their corresponding sides is the same: ratios of corresponding sides are equal. What does that mean? Well, let’s look at the corresponding sides for both triangles. There is one side that is the big one, which is RP. The corresponding side to that in the smaller triangle is TP. So the ratios of this side on the big triangle and side on the little triangle should be the same as these two sides, which is PQ on the big triangle and PS on the small triangle, which should be equal to the big height, which is RQ, and the small height, which is TS. Now let’s substitute what we already know. What are we given? RP, are we given that? RP is not given. TP is not given. So this is useless. Let’s look at PQ, P and Q, that’s given as 48 feet.
PS is given as eight feet. RQ is what we need to determine, so this is the height H, and TS, the height of the man in the smaller triangle, is given as five feet. So we’ve got two ratios that are equivalent, so the cross products must be equal. So let’s look at the cross products: 48 × 5 = 8 × H. All units are in feet. So 48 × 5 = 8 × H; they’re cross products, which means 48 × 5 is 240 = 8 × H. Let’s divide both sides by eight so that I can isolate H on one side. 240 by 8 is 30, and 8 ÷ 8 = 1, so H = 30. Since everything is in feet, the height of the tree is 30 feet. That’s what we were looking for. The way we got to it is we just looked at this triangle and the big triangle; they were similar. When the triangles are similar, the ratios of their sides are equal. We took these four sides. The ratios give us 48/8 = H/5, which when solved gives us H = 30, or the height being 30 feet.
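The cross-product step from the lesson can be sketched in a few lines of Python (the helper function name is mine, for illustration only):

```python
# Similar triangles: tree_height / man_height = tree_shadow / man_shadow,
# so cross-multiplying and dividing solves for the unknown height.
def height_from_shadow(reference_height, reference_shadow, shadow):
    """Solve the proportion for the unknown height."""
    return reference_height * shadow / reference_shadow

tree = height_from_shadow(reference_height=5,  # the man, in feet
                          reference_shadow=8,  # his shadow, in feet
                          shadow=48)           # the tree's shadow, in feet
print(tree)  # 30.0 -- the same result as 48 x 5 = 8 x H in the lesson
```

The same function works for any indirect measurement of this kind: any object of known height with a measurable shadow can serve as the reference.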
Thanks to a quintet of satellites and a backup posse of ground-based telescopes, researchers have gotten their best look ever at how auroras–also known as the southern and northern lights–begin to form in space. The dazzling light displays are provoked by “space tornadoes,” researchers say. Whirling at more than a million miles per hour, these invisible, funnel-shaped solar windstorms carry electrical currents of more than a hundred thousand amps—roughly ten times that of an average lightning strike—scientists announced…. And they’re huge: up to 44,000 miles (70,000 kilometers) long and wide enough to envelop Earth [National Geographic News]. The observations were made as part of NASA’s THEMIS mission, which uses the satellites and telescopes to study how solar winds, the charged particles that stream from the sun, interact with the Earth’s magnetic field. On the Earth’s dark side, the solar wind stretches out the field, forming a region known as the magnetotail. The magnetotail is like a rubber band; when it is stretched too far, “eventually it snaps and releases the energy”, says team member Andreas Keiling [New Scientist]. That snap creates turbulence and forms the tornadoes, researchers announced at the European Geosciences Union meeting. The charged particles in the vortices are propelled towards the earth and collide with particles in the ionosphere, which releases energy and makes the molecules glow in unearthly shades of green and purple. Says researcher Karl-Heinz Glassmeier: “The Themis satellites have given us our first opportunity to see the process that generates the aurorae in three dimensions and show just what spectacularly powerful events they are” [Telegraph]. While the auroras are beautiful natural phenomena, not all the effects of the solar wind are so benign. Solar storms can damage satellites and pose a real threat to the electric power grid. 
A better understanding of all this is needed to improve space storm forecasting and to predict what might happen to power grids. Experts say the next period of maximum solar activity — due around 2012 — could bring a level of storminess not seen in many decades. A recent report by the National Academy of Sciences concluded that a major storm during the next peak could cripple power grids and other communications systems [SPACE.com]. 80beats: Distant Turbulence in the Magnetic Field Triggers the Northern Lights DISCOVER: Seeing the Light takes readers to an aurora research station in the Alaskan interior DISCOVER: Space Weather explains the damage that solar storms can wreak Image: flickr / nick_russill
Some might think it is not possible to teach children how to be thinkers. A common belief is that one is either born with intellect or not. Wrong! Creative and critical thinking are skills, something that can be learned. There are, however, developmental issues. Young children are less likely to be analytical than older ones. How well youngsters think depends on whether teachers and parents have expected them to think for themselves. Schools too often focus on teaching students what to think (read "No Child Left Behind"), not how to think. Parents tend to tell youngsters what to think. But even in the interests of telling youngsters how to behave in proper ways, the instruction is more likely to be accepted if children are encouraged to think through why certain behaviors are preferred over others. Teachers know that many students have poor thinking skills. Several reasons help explain why. Changes in culture are a factor, such as mind-numbing music, television, video games, social networking websites, cell-phone texting, and so on. We have no problem telling children what to think, but when their thinking becomes flawed, we are reluctant to intervene. Many parents (and even teachers) think it is bad to challenge children's thinking when it is flawed. They worry that such challenges can be embarrassing and damage self-esteem. The reality is that such students eventually discover they are not as capable as their peers who have effective thinking skills, and that gives them real reason to have low self-esteem. Schools and state mandates also contribute to the problem. Too often, students are trained to look for the one "right answer." Then there are state knowledge and skills standards, where students are actively discouraged from thinking "outside the box." Many students lack the confidence to think for themselves and are actually afraid to try.
The reality is that students are natural-born creative thinkers, but the conformity of schools has drilled students into a submission that precludes analytical and creative thinking. In our culture, the only place where it seems that insightful ideas are excluded is in the school. How does one teach critical thinking? Three ways:
1. Expect it. Require students to defend their ideas and answers to questions. Show them it is not enough to have an opinion or the "right" answer. Students need to defend their opinions and understand how they arrived at the answer and why it is "right."
2. Model it. The teacher can show students how to think critically and creatively about instructional material. Even in "teaching to the test," show students how to think about alternative answers, not just memorize the right answer. Show why some answers are right and some wrong.
3. Reward it. When good thinking occurs, teachers should call attention to it and to the students who generated it. Learning activities and assignments should have clear expectations for students to generate critical and creative thought. A grading premium and other incentives should be provided. Rigorous analysis will only occur if it is expected and rewarded.
Web sites and blogs that show how to teach children thinking skills include:
Pick your favorite river that flows to the sea, say the Mississippi or the Amazon. Now go to the mouth of that river and dive below the waves and you’ll notice a curious thing: the river keeps on going, skirting the bottom of the ocean. You can tell it from the surrounding water by all the sediment it’s carrying. All the sand, dirt, rocks, and organic matter it’s dragging along makes it denser than the surrounding sea water, so it sinks to the bottom and snakes along scouring channels, tributaries and canyons like a river on land. It looks a bit like an avalanche of snow tumbling down a mountainside. This is called a turbidity current. You can also find them on smaller scales in lakes and in reservoirs behind dams. Scientists want to understand the fundamental forces at work in turbidity currents—what dictates their flow paths, what keeps them going over miles and miles of sea floor, how they reshape the seascape, and how the ancient channels they leave behind reflect past climate and flow rates. Oil and gas companies are also interested in better understanding them because the currents deposit organic matter that over geologic time gets buried, compressed and transformed into hydrocarbons. Researchers have had a difficult time trying to recreate some aspects of these currents in the lab. David Mohrig and Jim Buttles, researchers at the Jackson School of Geosciences, know as well as anyone. They spent several years attempting it with large water tanks at their former academic home, the Massachusetts Institute of Technology (MIT). They would pour a layer of sediment in the bottom of a tank to mimic a sloping sea floor and add a layer of water to mimic the sea. Then at the shallow end, they would inject water containing sediment, which would form a turbidity current. They would then sit back and let it intermittently flow over the sediment bed for several days or even weeks. The trouble was, the currents wouldn’t cut channels in the sediment. 
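The density mechanism described above can be sketched numerically. The grain density and sediment concentration below are typical-looking assumed values, not measurements from this research:

```python
# Why a turbidity current hugs the sea floor: even a small load of
# suspended sediment makes the mixture denser than the seawater around it.
# All values here are illustrative assumptions.
RHO_SEAWATER = 1025.0  # kg/m^3
RHO_SEDIMENT = 2650.0  # kg/m^3, quartz-like grains

def bulk_density(sediment_volume_fraction, rho_fluid=RHO_SEAWATER):
    """Density of a sediment-water mixture, by volume-weighted average."""
    c = sediment_volume_fraction
    return c * RHO_SEDIMENT + (1 - c) * rho_fluid

current = bulk_density(0.02)  # just 2% sediment by volume
print(current)                # 1057.5 kg/m^3 -- denser than seawater, so it sinks
```

Even a two-percent sediment load adds roughly 3% to the fluid's density, and that excess is the gravitational driving force that keeps the current moving downslope.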
The best they could do was first make artificial channels in the sediment and then watch how they evolved. Mohrig, now an associate professor in the Jackson School, came to Austin in 2006. Buttles, now a research engineer and scientist associate in the school, followed in 2007. Together, they’ve built a new, larger tank. Housed in a warehouse on the J.J. Pickle Research Campus, it’s about the size of a regular back yard swimming pool with concrete walls one foot thick and an observation deck. They call it the UT Morphodynamics Lab (UTML). The bottom of the tank features a ramp tilted up 10 degrees to simulate a continental slope, the place where a continent meets the deep seafloor. A typical continental slope would have a slope of about 4 degrees, but to get the physics right on a much smaller scale, a computer model of the system indicated that a steeper slope would be required. The sediment that forms the “sea floor” in the tank is a mix of plastic grains and clay particles that together have the right particle size, cohesiveness and density to mimic real sediments.
Front Row Seat
The researchers have an array of instruments to measure what’s happening as an experiment unfolds. Rows of digital cameras mounted above the tank and along the sides capture panoramic snapshots from multiple angles simultaneously. A laser altimeter scans back and forth over the tank measuring the topography of the sediment surface to within 100 microns vertically. An acoustic Doppler velocimeter inserted into the water measures the flow velocity and direction at specific points. An acoustic Doppler profiler measures discrete flow at many points within a column of water, providing a vertical profile. And an array of aluminum siphons sample water and sediment at specific depths and X,Y coordinates. The samples can be tested for fluid density, particle size and other characteristics. But the observations don’t end when the currents stop flowing.
At the end of an experiment, the researchers can drain the water and do a couple of other interesting investigations with the sediment. They extract “cold cores” by chilling a metal tube with liquid nitrogen, sticking it into the sediment and pulling it out. The core is a frozen ring of water and sediment that sticks to the outside of the tube. That hollow ring can be pulled off, sliced and analyzed to evaluate the stratigraphy of sediments at that spot. Researchers can also slice through the layers of sediment on the tank floor along one axis and evaluate the stratigraphy of the cross section. This world-class facility provides opportunities for graduate students to try their hand at advanced research. Anjali Fernandes is a doctoral student in geological sciences working under the supervision of Mohrig and Ron Steel, professor and Davis Centennial Chair. She is trying to understand how turbidity currents create channels and how they later modify them to become more sinuous, or winding. She said most submarine channels on continental slopes are inactive. “Where these systems are active, the turbidity currents going through them are so energetic and destructive that studying them is very difficult,” she said. “So, we need to get innovative to address questions related to turbidity current activity and their interaction with seafloor topography.” She said seismic data from ancient submarine channels and stratigraphic analysis of outcrops can offer insights, but to study the ongoing evolution of channels, the UTML water tank is essential. Aymeric Peyret is a doctoral student working under the supervision of David Mohrig. He uses computer models to try to replicate the results of water tank experiments and so gain a better understanding of the physics that underlies turbidity currents. He said scientists don’t know why some turbidity currents erode channels in the seafloor while others fill them in.
Factors might include the initial conditions (such as seafloor topography and how cohesive the sediments are), the density ratio between the turbidity current and the surrounding water and how fast the current is when it enters the water. “Finding where the transition between erosion and deposition lies would be a nice holy grail to discover,” he said. Bigger is Better In January 2009, the team ran an experiment for several days and began to see something that neither they nor anyone else in their field had ever observed before: their sediment-laden turbidity current formed channels in the existing sediment. It was a big step closer to creating a true submarine landscape in the lab. “When we saw that, our jaws dropped,” said Buttles. This early success makes them confident that their tank is doing a reasonable job of modeling some aspects of real turbidity currents and ultimately, seascape evolution. So why were they more successful with the new tank? It comes down to size. The new tank is basically a larger version of their MIT system. The increased depth allows the researchers to create a relatively steep sloping platform and the larger holding capacity means they can run experiments longer. “Both of these conditions are key to our recent success,” said Buttles. The steep sloping platform, he explained, introduces a greater driving force due to gravity for the turbidity currents (relative to the MIT setup), “which translates into a greater potential to erode and transport our sediment. Sediment bed changes may evolve at a slow rate and can be quite subtle; the longer runtime allows the evolution to proceed under constant experimental conditions.” For Mohrig and Buttles, this means they finally have a chance to answer some of the fundamental questions about turbidity currents that have stymied scientists for decades. by Marc Airhart For more information about the Jackson School contact J.B. Bird at [email protected], 512-232-9623.
Bananas (Musa spp.) and plantains arise from the same types of plants, differing only in the amount of sweet sugar or blander starch in the fruits produced. Although often called "banana trees," the plants are actually tropical herbs that grow from fleshy underground roots called rhizomes or corms. Today there are hundreds of different bananas/plantains in existence because of mankind's selection and breeding to create plants with varying fruit sizes, colors and flavors as well as better resistance to plant disease. General Plant Type Banana plants are angiosperms (flowering plants) and are further characterized as monocotyledons, more commonly called monocots. Monocots have a single seed leaf, parallel leaf veins, no cambium layer and floral parts in multiples of three. The lack of a cambium layer in banana plants is the reason they technically cannot be called trees, even though some can grow 20 to 30 feet tall. The banana is placed into the botanical family Musaceae, known simply as the banana family. Both banana and plantain plants are further grouped together into the genus Musa, separating them from the only other genus in the family, Ensete. Since Musaceae is a family of monocots, other closely related families include the heliconia family (Heliconiaceae) and bird-of-paradise family (Strelitziaceae). The Genus Musa There are about 35 different species of banana/plantain in the genus Musa, with origins of all the plants being tropical southern and southeastern Asia and extreme northern Australia. Modern plants that produce edible fruits typically have lineage centering on the species Musa acuminata and Musa balbisiana. According to the Species Profiles for Pacific Island Agroforestry, bananas were broken into five, now four, sections within the genus Musa based on plant and fruit characteristics and/or region of nativity. The names of the current four sections are Australimusa, Callimusa, Musa and Rhodochlamys.
Most edible bananas/plantains are from the sections Australimusa and Musa, while those plants regarded more as ornamental plants remain in Callimusa and Rhodochlamys. Ploidy and Modern Breeding Most edible bananas originate from two species in the section Musa: Musa acuminata (A) and Musa balbisiana (B). Complex hybridization of bananas leads to new plants, called cultivars, that typically have varying amounts of chromosomes from either of these two species. Ploidy refers to the number of chromosome sets (genome) in the cells of banana plants. The natural number of chromosomes is considered diploid, such as AA for Musa acuminata and BB for Musa balbisiana. Genetic manipulation results in new plants that combine chromosomes from these parent plants. Triploid genomes (like AAB, AAA, ABB) as well as tetraploid types (AAAA, AABB, etc.) can be created. The vast majority of edible bananas worldwide today are triploids, according to the Species Profiles for Pacific Island Agroforestry. Interestingly, bananas with BB and BBB genomes do not effectively produce edible banana fruits. Among the most name-recognizable bananas are those ambiguously called "Cavendish," which refers to a sizable number of plants with triploid genome (AAA). Cultivar names of plants in this group of edibles include 'Giant Cavendish,' 'Dwarf Cavendish' and 'Extra Dwarf Cavendish.' Another group of triploid bananas are those called "Pacific plantains" (AAB) and include cultivars like 'Pome,' 'Silk' and 'French.' In fact, according to Species Profiles for Pacific Island Agroforestry, the cultivar 'French' is commonly labeled as Musa paradisiaca, which is actually a hybrid derived from Musa acuminata and Musa balbisiana, not a species.
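The genome notation above lends itself to a quick illustration. The following sketch (my own, not from the article) simply enumerates the possible unordered combinations of A (Musa acuminata) and B (Musa balbisiana) chromosome sets at a given ploidy, then filters out the BB/BBB types that, per the text, do not yield edible fruit:

```python
# Illustrative sketch: enumerate banana genome combinations built from
# the A (Musa acuminata) and B (Musa balbisiana) chromosome sets.
from itertools import combinations_with_replacement

def genomes(ploidy):
    """All unordered combinations of A/B chromosome sets at a given ploidy."""
    return ["".join(c) for c in combinations_with_replacement("AB", ploidy)]

diploids = genomes(2)   # the natural, parental-type genomes
triploids = genomes(3)  # the ploidy of most edible bananas today

# Per the article, BB and BBB genomes do not produce edible fruit.
edible_triploids = [g for g in triploids if "A" in g]

print(diploids)          # ['AA', 'AB', 'BB']
print(edible_triploids)  # ['AAA', 'AAB', 'ABB']
```

This reproduces the triploid genomes named in the text (AAA, AAB, ABB) and makes clear why only four unordered triploid combinations of two parental species exist.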
Congenital neutropenia, often also called Kostmann syndrome, is a rare type of neutropenia that is present at birth. It is an inherited disease and therefore more than one family member can be affected, but sporadic occurrence with only one patient in a family is also possible. However, there is no antenatal testing currently available. Congenital neutropenia is usually very severe, and neutrophils are often completely absent in the blood of these patients at the time of diagnosis. Patients who are diagnosed with congenital neutropenia or Kostmann syndrome usually show what is known as a maturation arrest in the early stages of neutrophil development in the bone marrow. This means that their neutrophils rarely fully mature into the cells that are capable of fighting infections. These patients suffer from severe bacterial infections, such as omphalitis (infection of the navel), pneumonia, skin abscesses or otitis media (ear infections) during their first few months of life. Therefore, in most patients congenital neutropenia is diagnosed early during infancy. A blood test and a bone marrow sample are required in order to obtain a correct diagnosis. When a bone marrow sample is taken for diagnostic reasons, the cells are first examined under the microscope and then used for other investigations, such as cytogenetic evaluation, analysis of the G-CSF receptor and, if possible, a sample is sent to the SCNIR bone marrow cell bank to be used for research purposes. With the cytogenetic evaluation the chromosomes of the bone marrow are studied. In the majority of patients with neutropenia, this test is completely normal. Changes in the chromosomes of cells can be harmless, but in some cases changes could indicate a possible progression towards leukaemia. This is the most important reason for routine annual bone marrow investigations. The analysis of the G-CSF receptor gives us information on the structure of this receptor.
The receptor is localised on all granulocytes. The purpose of this particular receptor is the binding of the cytokine G-CSF in order to give a signal to the cell to mature, to divide or to enhance its function. In some congenital neutropenia patients the G-CSF receptor develops changes that also could indicate progression towards leukaemia and therefore this analysis is another sensitive indicator that needs to be tested on a regular basis. As soon as congenital neutropenia is diagnosed, patients should commence treatment with a haematopoietic growth factor called G-CSF (also known as filgrastim or lenograstim). Clinical trials of G-CSF treatment began at Amgen in 1987. This treatment was found to result in a dramatic increase in life expectancy and quality of life in these patients. As soon as the patient's neutrophil counts have improved and stabilised, a near-normal life can be led, e.g. going to kindergarten or school and participating in sports. Before G-CSF was available, most patients died from severe bacterial infections within their first few years of life because no other treatment was able to correct their neutropenia adequately. Even antibiotic therapy could only prolong the life of these patients for a short while, because both neutrophils and antibiotics are necessary to overcome bacterial infections. The only option for a complete cure of Kostmann syndrome is a bone marrow transplant (BMT). G-CSF is a natural cytokine produced by the human body. A cytokine is a protein produced by cells that is essential for the regulation of other cells. Patients with congenital neutropenia also produce G-CSF, but for reasons still largely unknown, the response of their neutrophils to the normal amounts of G-CSF in the blood is reduced. The lower the neutrophil count, the greater the risk of infection. Occurrence of severe bacterial infections is strongly correlated with low neutrophil counts.
In most patients bacterial infections resolve and reoccur less frequently as soon as the neutrophil count stabilises after initiation of G-CSF treatment at around 1000/mm3 (1.0 x 10^9/l). Individual people vary: some will fight off infection with a lower neutrophil count, and others will need a higher count. In congenital neutropenia patients, response to G-CSF treatment is also different. This is why there is a big variation in the dose (amount) of G-CSF that different people receive. For more information regarding G-CSF see chapter TREATMENT FOR SEVERE CHRONIC NEUTROPENIA. Only a very small subgroup of patients with congenital neutropenia does not respond to even very high doses of G-CSF. In patients who do not respond to G-CSF doses of 100 mcg/kg or more within fourteen days, a search for a bone marrow donor should be started immediately and BMT should be performed as soon as a matching donor is identified. The BMT procedure is very complex: for more information contact your physician. During the last 10 years, data has been collected on more than 700 patients with chronic neutropenia. These data indicate that patients who have severe congenital neutropenia have an increased risk (around 9%) of developing leukaemia compared to healthy individuals. Therefore, it is strongly recommended that all patients with congenital neutropenia have a bone marrow examination and cytogenetic analysis on a yearly basis. BMT may be considered if the cytogenetics in the bone marrow shows any specific abnormalities. Besides neutropenia, patients with congenital neutropenia may have a reduced bone density, which can lead to osteopenia or osteoporosis, thinning of the bones (usually seen in elderly women). Osteoporosis may even be seen in children with severe chronic neutropenia, but the reasons for this are not clear. The changes in the mineral content of the bone (amount of calcium) possibly represent an additional symptom of the underlying genetic defect.
However, according to all information currently available, only very few patients will actually experience clinical problems, such as pain and/or fractures due to their osteoporosis. The exact cause of osteoporosis is not fully known, nor are the long-term implications fully understood. Therefore it is important to monitor the patient's bone density on a regular basis to ensure the safety and well-being of the patient.
For the sake of simplicity, and brevity, we shall restrict our investigations to the motions of idealized point particles and idealized rigid bodies. To be more exact, we shall exclude from consideration any discussion of statics, the strength of materials, and the non-rigid motions of continuous media. We shall also concentrate, for the most part, on motions which take place under the influence of conservative forces, such as gravity, which can be accurately represented in terms of simple mathematical formulae. Finally, with one major exception, we shall only consider that subset of dynamical problems that can be solved by means of conventional mathematical analysis. Newtonian dynamics was originally developed in order to predict the motions of the objects which make up the Solar System. It turns out that this is an ideal application of the theory, since the objects in question can be modeled as being rigid to a fair degree of accuracy, and the motions take place under the action of a single conservative force--namely, gravity--that has a simple mathematical form. In particular, the frictional forces which greatly complicate the application of Newtonian dynamics to the motions of everyday objects close to the Earth's surface are completely absent. Consequently, in this book we shall make a particular effort to describe how Newtonian dynamics can successfully account for a wide variety of different solar system phenomena. For example, during the course of this book, we shall explain the origins of Kepler's laws of planetary motion (see Chapter 5), the rotational flattening of the Earth, the tides, the Roche radius (i.e., the minimum radius at which a moon can orbit a planet without being destroyed by tidal forces), the forced precession and nutation of the Earth's axis of rotation, and the forced perihelion precession of the planets (see Chapter 12). 
We shall also derive the Tisserand criterion used to re-identify comets whose orbits have been modified by close encounters with massive planets, account for the existence of the so-called Trojan asteroids which share the orbit of Jupiter (see Chapter 13), and analyze the motion of the Moon (see Chapter 14). Virtually all of the results described in this book were first obtained--either by Newton himself, or by scientists living in the 150, or so, years immediately following the initial publication of his theory--by means of conventional mathematical analysis. Indeed, scientists at the beginning of the 20th century generally assumed that they knew everything that there was to know about Newtonian dynamics. However, they were mistaken. The advent of fast electronic computers, in the latter half of the 20th century, allowed scientists to solve nonlinear equations of motion, for the first time, via numerical techniques. In general, such equations are insoluble using standard analytic methods. The numerical investigation of dynamical systems with nonlinear equations of motion revealed the existence of a previously unknown type of motion known as deterministic chaos. Such motion is quasi-random (despite being derived from deterministic equations of motion), aperiodic, and exhibits extreme sensitivity to initial conditions. The discovery of chaotic motion led to a renaissance in the study of Newtonian dynamics which started in the late 20th century and is still ongoing. It is therefore appropriate that the last chapter in this book is devoted to an in-depth numerical investigation of a particular dynamical system that exhibits chaotic motion (see Chapter 15).
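The sensitivity to initial conditions described above is easy to demonstrate numerically. The sketch below is illustrative only: it uses the logistic map (a standard textbook example of deterministic chaos, not the particular dynamical system studied in this book) to show two trajectories that start almost identically and rapidly diverge:

```python
# Illustrative demonstration of deterministic chaos using the logistic
# map x -> r*x*(1-x) with r = 4, a regime known to be chaotic.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-9  # two trajectories, initially differing by one part in 10^9
max_sep = 0.0
for n in range(50):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The tiny initial difference is amplified to order unity within a few
# dozen iterations: the equations are deterministic, yet long-term
# prediction is effectively impossible.
print(max_sep)
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is precisely the "extreme sensitivity to initial conditions" that characterizes chaotic motion.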
Students begin this class by acquiring and reviewing fundamental descriptive drawing concepts and techniques in various media, including ink, graphite, pastels and collage. Students also learn methods for developing individual content by looking at children's picture books and by using the paintings of famous artists as references. As pupils develop their ideas, they focus and hone their abilities in the media of their choosing: acrylic, pastels, or tempera. With the instructor's guidance, they are encouraged to discover and invent strategies for combining diverse media, modes of representation, and design elements. Students will create their own compositions and motifs according to their interest and imagination. The curriculum culminates with the students creating paintings inspired by famous works of fine art. LEARNING COLOR AND DESIGN Students explore color and surface texture through their painting work. While students learn how to mix colors and describe the nature and color of light in the physical world, they also discover non-representational, design-based applications of paint media. Techniques in water-soluble acrylic and watercolor paints are emphasized.
Polar bear population estimates based on satellite images are similar to aerial estimates, according to a study published July 9, 2014 in the open-access journal PLOS ONE by Seth Stapleton from the United States Geological Survey and colleagues. The potentially severe impacts of climate change in the Arctic may threaten regional wildlife. Scientists trying to develop efficient and effective wildlife monitoring techniques to track Arctic populations face great challenges, including the remoteness and associated logistical constraints of accessing wildlife. In this study, scientists evaluated high-resolution satellite imagery to track the distribution and abundance of polar bears on a small island in northern Canada in an attempt to develop a tool to monitor these difficult-to-reach populations. Specifically, the authors examined satellite images of the island, which has a high density of bears, during the ice-free summer, and compared the images to aerial and ground surveys collected on different dates. The estimate of ~90 bears based on satellite imagery was similar to an abundance estimate of ~100 bears made from an aerial survey conducted a few days earlier. These findings support satellite imagery as a tool for monitoring polar bears on land, which could potentially be applied to other Arctic wildlife. The authors suggest that further development of automated detection, along with testing in different landscapes, may clarify the benefits of large-scale application of the technology. Stapleton S, LaRue M, Lecomte N, Atkinson S, Garshelis D, et al. (2014) Polar Bears from Space: Assessing Satellite Imagery as a Tool to Track Arctic Wildlife. PLoS ONE 9(7): e101513. DOI: 10.1371/journal.pone.0101513
Researchers from McMaster University (Canada) had a clear question in mind when they conducted their recent experiment: if a mouse had its gut microbiota altered by antibiotics in early life, what would happen to its brain? The question might have seemed a non-sequitur—why would something that changes the gut have any effect on the brain? Yet the group of researchers, led by John Bienenstock and Sophie Leclercq, found that exposure to low-dose antibiotics did affect the mouse brains—and not just a little. The antibiotic-exposed mice showed an altered blood-brain barrier and a spike in specific immune-signalling molecules (cytokines) in the frontal cortex. Most importantly, however, the antibiotics changed mouse behaviour: the young mice acted differently in social situations and when faced with difficult tasks. They were also more aggressive than the mice with no alterations in their gut microbiota. “We used really low dose penicillin—a pediatric dose in a mouse, which is miniscule—and showed significant effects,” John Bienenstock explains in an interview with GMFH editors. “There were all sorts of effects in terms of social behaviour, social interaction, social avoidance.” But the researchers took the idea even further—could these effects on the brains of the mice be counteracted by adding different microbes? The answer was yes. Experimental mice that were given the probiotic bacteria Lactobacillus rhamnosus JB-1 in addition to the antibiotics showed fewer of the changes in brain biology and behaviour. The experiment showed not only that changes in gut microbiota could affect the brain, but also that the specific type of bacteria mattered for the end result. Bienenstock, along with his colleagues Paul Forsythe and Wolfgang Kunze, have been working on the gut-brain axis for many years—using germ-free mice as well as normal mice in their models, trying to zero in on how bacteria manage to influence the two-way communication between gut and brain. 
Bienenstock says that although researchers can change the bacterial inputs and measure what happens to mice, the real mystery is the chain of events leading from the gut to the brain changes. He says, “All we know is that something happens, [like a change in behaviour]. So there’s a huge black box still.” For starters, they’ve shown some of these effects depend on the nervous system; but the role of the immune system is not yet clear. Describing a landmark 2011 paper from his group that showed how L. rhamnosus JB-1 regulated emotional behaviour and aspects of central nervous system function in mice, Bienenstock says, “If you gave L. rhamnosus to the mice you could show all these effects—structural and functional—and then if you cut the vagus nerve those complicated events did not occur. So at least in that bug, most of the effects were related to nervous function. But we don’t know whether that works in the absence of the immune system or how much the immune system is indirectly or directly associated.” That is, the mechanisms need to be clarified. Bienenstock emphasizes that although the mouse models are important for investigating mechanism, they’re not the end goal. As a medical doctor (an internist by training), Bienenstock never forgets that, long-term, the research could have serious implications for those living with mental or behavioural problems. “We have to do all this stuff as much as we can in the human because that’s clearly a big area of ignorance of the moment in terms of microbial change and microbiome changes,” he says. Putting all the pieces together in humans, though, is a massive challenge. “Experimentally, there are clues as to how some of these bacteria can cause these changes. We’re beginning to understand that it may be bits of the bacteria… it may be stuff that the bacteria makes in situ in the gut, or that it’s fermentation products themselves like short-chain fatty acids,” he says. 
“But we don’t know what they do when they’re all tucked in together.” Leclercq S, Mian FM, Stanisz AM, et al. Low-dose penicillin in early life induces long-term changes in murine gut microbiota, brain cytokines and behavior. Nature Communications. 2017; 8. doi:10.1038/ncomms15062
June 16, 2012 Bugs Have Key Role In Farming Approach To Storing CO2 Emissions Tiny microbes are at the heart of a novel agricultural technique to manage harmful greenhouse gas emissions. Scientists have discovered how microbes can be used to turn carbon dioxide emissions into soil-enriching limestone, with the help of a type of tree that thrives in tropical areas, such as West Africa. Researchers have found that when the Iroko tree is grown in dry, acidic soil and treated with a combination of natural fungus and bacteria, not only does the tree flourish, it also produces the mineral limestone in the soil around its roots. The Iroko tree makes a mineral by combining calcium from the earth with CO2 from the atmosphere. The bacteria then create the conditions under which this mineral turns into limestone. The discovery offers a novel way to lock carbon into the soil, keeping it out of the atmosphere. In addition to storing carbon in the trees' leaves and in the form of limestone, the mineral in the soil makes it more suitable for agriculture. The discovery could lead to reforestation projects in tropical countries, and help reduce carbon dioxide emissions in the developing world. It has already been used in West Africa and is being tested in Bolivia, Haiti and India. The findings were made in a three-year project involving researchers from the Universities of Edinburgh, Granada, Lausanne and Neuchatel, Delft University of Technology, and commercial partner Biomim-Greenloop. The project examined several microbiological methods for locking up CO2 as limestone, and the Iroko-bacteria pathway showed best results. Work was funded by the European Commission under the Future & Emerging Technologies (FET) scheme.
Dr Bryne Ngwenya of the University of Edinburgh's School of GeoSciences, who led the consortium, said: "By taking advantage of this natural limestone-producing process, we have a low-tech, safe, readily employed and easily maintained way to lock carbon out of the atmosphere, while enriching farming conditions in tropical countries."
How to bisect an angle:
1. With the compass point at the angle vertex, make an arc through both arms.
2. With the compass point where the first arc crosses one arm, make another arc.
3. Repeat step 2 from the other arm.
4. Draw a line from the intersection of the two arcs to the angle vertex.
How to make a perpendicular bisector:
1. Open the compass to just over half the length of the line segment.
2. With the sharp point at A, make an arc.
3. With the sharp point at B, make another arc.
4. With a ruler, join the two intersections of the arcs.
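The angle-bisector construction can be checked numerically. The sketch below uses hypothetical coordinates of my own choosing: marking points at equal distance along both arms and intersecting equal-radius arcs is equivalent to taking the point in the direction of the sum of the two unit arm vectors, and that direction makes equal angles with each arm:

```python
# Numeric check of the angle-bisector construction (illustrative
# coordinates; vertex placed at the origin).
import math

def unit(vx, vy):
    """Return the unit vector in the direction (vx, vy)."""
    d = math.hypot(vx, vy)
    return vx / d, vy / d

a = unit(1.0, 0.0)  # direction of the first arm
b = unit(1.0, 2.0)  # direction of the second arm

# Steps 1-3: arcs of equal radius from equally spaced arm points meet
# at a point along a + b (up to scale).
p = (a[0] + b[0], a[1] + b[1])
bis = unit(*p)

# Step 4: the line from the vertex through that point makes equal
# angles with each arm.
ang_a = math.acos(bis[0] * a[0] + bis[1] * a[1])
ang_b = math.acos(bis[0] * b[0] + bis[1] * b[1])
print(abs(ang_a - ang_b) < 1e-9)  # True: the two half-angles match
```

The same idea explains why the construction works with any arc radius: only the directions of the arms matter, not the distance at which the arcs are drawn.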
Animals in the arctic tundra are adapted to long and cold winters, and to breed and raise their young quickly in the summer. Some animals also have extra fat to keep themselves warm. Many animals in the tundra hibernate in the winter because there is not enough food, and others migrate south instead. - Polar Bears - Grizzly Bears - Tundra Bumblebees - Snowy Owl - White Wolf Some mammals are: pikas, marmots, mountain goats, sheep, elk, lemmings, voles, caribou, arctic hares and squirrels, arctic foxes, wolves, and polar bears. The ones I chose were... - Polar bears are one of the most common animals in the tundra. Polar bears survive in a snowy area that includes igloos and caves. They survive in the freezing cold because of their fur, which has thick layers that keep them warm. Polar bears live in the tundra because if they traveled to an area that was too hot, they would die from the heat. - It turns out that grizzly bears do live in the tundra. Brown grizzly bears are very adaptable. One race or another can be found in almost any type of habitat, except the polar regions. In the past there was a race, the Mexican grizzlies, that lived in the arid regions of North America's deserts. - Tundra bumblebees are, of course, animals that live in the tundra. The bumblebee shivers its flight muscles to generate heat, and it also has dense hair on its body which slows heat loss to the air. These animals are also found in Canada, Alaska, and some other arctic islands. - Snowy owls survive in the tundra by blending in with their surroundings. In the winter, their feathers turn white so they blend into the snow. To keep warm they tuck their wings under their stomachs. To stay safe from predators, they nest under the snow. - White wolves can live 10-14 years. They normally live in arctic areas where the color of their coat gives them camouflage. This includes Alaska, Northern Canada, Russia, and Scandinavia.
Many tundra plants have adapted to stay out of wind and cold. Most plants are small with short roots. They grow low to the ground, and some grow in groups. Tundra plants also carry out photosynthesis at lower temperatures than most other plants. Many of the plants have dark leaves, which absorb sunlight and help keep them warm. The growing season is very short, about 60 days. After the season the plants become dormant, which means they are alive but not growing. The red bearberry is an example of a plant with dark leaves. Tundra plants include red bearberry, Labrador tea, arctic moss, tufted saxifrage, tussock grasses, dwarf trees, small-leafed shrubs, heaths, low shrubs, reindeer mosses, grasses, and 400 different types of flowers.
In Greek the term 'ethos' means 'character'. It can be used to describe the character of an audience, nation or community. For example, one can speak of the 'American ethos' as the characteristics that define American culture. A US presidential candidate would have to speak to the ethos of this nation and culture in order to win votes. Understanding ethos is important to understanding speech writing. As we study the rhetorical devices of speakers, we want to ask ourselves how the speaker appeals to the ethos of his or her audience. Texts often contain a sense of ethos in order to give the speaker more credit or authority on a matter. In a sense, ethos answers the question: "What gives you a mandate to speak to me?" Examples of ethos can be found in the following text, 'Letter from Birmingham Jail' by Martin Luther King.
What Is a “Slow” Heart Rate? Your heart rate is the number of beats (rhythmic contractions) per minute of your heart. Your heart is the muscular organ, located in the chest behind and to the left of the breastbone, that maintains circulation of the blood. Heart rate is a measure of cardiac activity and is one of the vital signs. Vital signs like body temperature, pulse, breathing rate, and blood pressure provide information about a person’s state of health. Any abnormality of these signs can offer diagnostic clues. A slow heart rate is considered anything slower than 50 beats per minute for an adult or child at rest. Alternative names for this condition include: - heart rate decreased - heartbeats decreased - low heart rate - decreased heart rate - pulse slow - pulse rate decreased - slow heartbeat - slow pulse Understanding Your Heart Rate by the Numbers You can measure your heart rate. First, find your heart rate by holding a finger to the radial artery at the wrist. Other places it can be measured are at the neck (carotid artery), the groin (femoral artery), and the feet (dorsalis pedis and posterior tibial arteries). Then, count the number of beats per minute while you are resting. Here are some numbers to keep in mind: - The resting adult heart rate is normally 60 to 100 beats per minute. - Athletes or people on certain medications may have a lower resting normal rate. - The normal heart rate for children aged 1 to 8 years is 80 to 100 beats per minute. - The normal heart rate for infants age 1 to 12 months is 100 to 120 beats per minute. - The normal heart rate for newborns (under 1 month old) is 120 to 160 beats per minute. Problems That Can Accompany a Slow Heart Rate Your heart rate should be strong and regular without any missed beats. If it’s beating slower than the normal rate, it might indicate a medical problem. Fainting, dizziness, loss of consciousness, weakness, and fatigue can accompany a slow heart rate.
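The resting ranges listed above can be encoded as a small helper function. This is a sketch for illustration only (the function and group names are my own, and it is not medical advice), using the article's thresholds: below 50 bpm counts as slow for an adult or child at rest, and each age group has its own normal range:

```python
# Illustrative classifier for resting heart rate, using the ranges and
# the <50 bpm "slow" threshold quoted in the article. Not medical advice.
def resting_rate_status(bpm, age_group="adult"):
    normal = {
        "newborn": (120, 160),  # under 1 month old
        "infant": (100, 120),   # 1 to 12 months
        "child": (80, 100),     # 1 to 8 years
        "adult": (60, 100),
    }[age_group]
    if age_group in ("adult", "child") and bpm < 50:
        return "slow"           # the article's bradycardia threshold at rest
    if bpm < normal[0]:
        return "below normal"
    if bpm > normal[1]:
        return "above normal"
    return "normal"

print(resting_rate_status(72))  # normal
print(resting_rate_status(45))  # slow
print(resting_rate_status(55))  # below normal (common in athletes)
```

Note the distinction the article draws: a rate just under the normal range (as in trained athletes) is not the same as a rate below the 50 bpm threshold, which warrants medical evaluation.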
In some cases, a slow heart rate is an indication of an extremely healthy heart. Athletes, for instance, often have lower than normal resting heart rates because their hearts are strong and don't have to work as hard to pump blood throughout the body. However, when a slower heart rate is uncommon and/or accompanied by other symptoms, it could be a sign of something more serious.
Potential Underlying Causes of a Slow Heart Rate
A thorough medical evaluation is necessary to determine the cause of a slow heart rate. An electrocardiogram (EKG or ECG), laboratory tests, and other diagnostic studies may be done. Potential medical causes of a slow heart rate include:
- abnormal heart rhythms
- anorexia nervosa
- autonomic dysreflexia
- autonomic neuropathy
- congestive cardiomyopathy
- heart attack
- elevated potassium
- intracerebral hemorrhage
- marine animal stings or bites
- side effects of medications
- subarachnoid hemorrhage
- sick sinus syndrome
- AV node damage
Treating the Cause of a Slow Heart Rate
Treatment depends on the underlying condition. If the slow heart rate is due to the effect of medication or toxic exposure, this must be treated medically. A device (pacemaker) implanted into the chest to stimulate heartbeats is the preferred treatment for certain types of bradycardia. Because a low heart rate could indicate medical problems, make an appointment with your doctor if you notice any changes in your heart rate, especially if the changes are accompanied by other symptoms.
Recognizing a Potential Emergency Situation
In certain situations, a slow heart rate could indicate a medical emergency. The following symptoms can be signs of an emergency:
- loss of consciousness
- chest pain
- passing out or fainting
- shortness of breath
- arm pain
- jaw pain
- severe headache
- blindness or visual change
- abdominal pain
- pallor (pale skin)
- cyanosis (bluish skin color)
If you have any of these symptoms and a change in your heart rate, call 911.
It can be incredibly time consuming to scan biological fluids looking for tumor cells or other disease markers. One reason: the offending cells typically exist at concentrations of just parts per million. But the task could be simplified and accelerated by a factor of 100 if researchers at the Massachusetts Institute of Technology can bring their new sorting technique to market. Scientists usually count or sort cells by sending fluids through small channels and using machine vision and microscopes. If the fluid flow is sped up in conventional processing equipment, it becomes turbulent, making identification of cells and markers impossible. The researchers discovered that adding hyaluronic acid lets the fluid's Reynolds number go as high as 10,000 without causing turbulence. (Fluid flows normally cannot exceed a Reynolds number of about 2,400 without becoming turbulent.) Hyaluronic acid acts as a lubricant in the knee and is harmless to biological samples. The new additive increased fluid flow, but the relatively soft materials used to make microfluidic channels and devices would not withstand the higher pressures. So the team had to develop a more rigid microfluidic handling device, made of epoxy, that was still transparent. In the prototype device, fluids travel through channels 50 µm wide at more than 400 mph. Cells and other particles are forced to the center of the fluid flow and travel single file through the channels. A laser emits flashes lasting 10 billionths of a second to illuminate particles, letting the team image the size, shape, and orientation of cells. The sorting technique could be refined for use in medical diagnostics, water purification, and even industrial separation for biofuel production.
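The figures in the article are enough for a rough sanity check of the Reynolds numbers it quotes. In the back-of-the-envelope sketch below, only the 50 µm channel width and the ~400 mph flow speed come from the text; the water-like kinematic viscosity is an assumption of mine, since the article does not state the fluid's properties.

```python
# Rough Reynolds number for the flow described in the article: a 50 µm
# channel at roughly 400 mph, assuming a water-like kinematic
# viscosity of ~1e-6 m^2/s (an assumption, not stated in the article).

MPH_TO_MS = 0.44704

def reynolds(speed_ms, width_m, kinematic_viscosity=1e-6):
    """Re = v * D / nu, using the channel width as the length scale."""
    return speed_ms * width_m / kinematic_viscosity

v = 400 * MPH_TO_MS          # ~179 m/s
re = reynolds(v, 50e-6)
print(round(re))             # 8941, consistent with the ~10,000 figure
print(re > 2400)             # True: well past the usual turbulence onset
```

Under these assumptions the estimate lands near 9,000, which is consistent with the article's claim that the hyaluronic acid additive keeps the flow laminar at Reynolds numbers far beyond the usual ~2,400 threshold.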
The geopotential height of a given pressure level approximates the actual height of that pressure surface above mean sea level. Therefore, a geopotential height observation represents the height of the pressure surface on which the observation was taken. A line drawn on a weather map connecting points of equal height (in meters) is called a height contour. That means that at every point along a given contour, the values of geopotential height are the same. An image depicting the geopotential height field is given below. Height contours are represented by the solid lines. The small numbers along the contours are labels identifying the value of a particular height contour (for example 5640 meters, 5580 meters, etc.). This example depicts the 500 mb geopotential height field and temperatures (color filled regions). The height field is given in meters with a contour interval of 60 meters. Geopotential height is valuable for locating troughs and ridges, which are the upper-level counterparts of surface cyclones and anticyclones.
The Common European Framework of Reference for Languages: Learning, teaching, assessment (CEF) of the Council of Europe is being used in many different educational contexts around the world, but it was never intended to tell teachers what to do or how to do it. It is intended to be a document for reference. As the authors of the CEF point out, its objective is to raise questions, not to answer them. The best-known part of the CEF is the scale which describes a learner's language proficiency. There are six points on this scale (A1, A2, B1, B2, C1 and C2), ranging from low-level beginner to a very sophisticated language learner with a level approximately equivalent to the Cambridge Proficiency examination, for example. It is only possible, and only desirable, for a coursebook to establish broad equivalences between its levels and the Council of Europe's levels. The levels in the CEF are described in terms of competences - what learners can do with the language. These 'can do' statements are extremely useful in determining course objectives, but are really only intended to describe proficiency and help evaluation. Besides the scales and descriptions of competences, the CEF emphasises the aims of language learning. Among these are the need to become independent and autonomous as a learner, and the recognition that language learning can encourage co-operation and other social values.
Students can be encouraged to work together in pairs and groups, and the selection of topics, texts and tasks in a coursebook can promote a knowledge of other cultures, encourage open-mindedness and foster respect for others. The European Language Portfolio project is very closely linked to the Common European Framework. It is a document in which students who are learning or have learned a language - whether at school or outside school - can record and reflect on their language learning and cultural experiences. Students can select materials to illustrate their achievements. Where appropriate, we have tried to indicate the approximate generic and Common European Framework levels for many of our courses in our catalogue, under the book title. For example: B2 Upper Intermediate. The complete text of the CEF is available in print in at least eighteen languages. It is also available online in English and a number of other languages. For further information, visit the Council of Europe's website at www.coe.int
Why do statisticians shudder at the word "prove" and instead use phrases like support, accept H1, and reject H0? What is a "p" value and why do we need it? (Hint: What does it give us in addition to our alpha?) Explain the difference between a 1-tailed and a 2-tailed test. When is each used? 1. Compute the critical values for a 2-tailed z-test with alpha = 0.01. Explain your answer. Sample mean = 78.47, population mean = 75.8, population standard deviation = 7.5, standard error of the sample mean = 0.898, n = 100. 2. We had to determine the critical value of z to make our accept/reject decision in this analysis. A door manufacturer wants to monitor the performance of a new painting process, which has been designed to apply paint with a mean thickness of 0.7 mm, by detecting any departure from this target. One hundred doors will be randomly chosen from a day's production, and the thickness of paint on each of the 100 doors will be measured. At the .05 significance level, is there a difference in the distribution of heart attacks based on the amount smoked? Note: Assume the condition of Normality is met for the problem below. A study is made of men over the age of 50. Each man is classified as to whether or not he has had a heart attack and the amount that he smokes at the present time. The following results were obtained for a sample of 300 men. Heart Attack / Amount smoked. Attached is a 5-step hypothesis test. I have tried to work through this and am stuck on steps 4 and 5. I am not 100% sure I am using the correct test statistic and am totally lost on step 4. This test is being done on only one sample. The sample is below: 2.05 2.06 2.07 2.07 2.08 2.11 2.14 2.15 2.16 2.17 2.18 2. 1) Perform the five-step hypothesis test on your data. 2) Describe the results of your tests and compare the results to the results of the test performed for your previous Hypothesis Testing Paper. See attached files for full additional information.
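The first numbered exercise above is fully specified, so its answer can be checked directly. A minimal sketch using only the Python standard library (`statistics.NormalDist`, available since Python 3.8); the 0.898 standard error is taken from the problem as given:

```python
# Critical values for a 2-tailed z-test at alpha = 0.01, and the test
# statistic for exercise 1 (sample mean 78.47, population mean 75.8,
# standard error 0.898).
from statistics import NormalDist

alpha = 0.01
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # upper critical value
print(round(z_crit, 3))                        # 2.576, so reject outside +/-2.576

z_stat = (78.47 - 75.8) / 0.898
print(round(z_stat, 3))                        # 2.973

# Reject H0 when |z| exceeds the critical value.
print(abs(z_stat) > z_crit)                    # True: reject H0
```

Since 2.973 falls outside the interval (-2.576, 2.576), the sample mean differs significantly from the population mean at the 0.01 level.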
Use the probabilities to determine what percentage purchased Total at least twice. Is age category independent of willingness to try new products? What is the probability that a randomly selected person purchased Total given that the person is in the 45-64 age category? In the mid-1990s, Colgate-Palmolive had developed a new toothpaste for the U.S. market, Colgate Total, with an antibacterial ingredient that was already being successfully sold overseas. However, the word antibacterial was not allowed for such products by the Food and Drug Administration rules. So Colgate-Palmolive had to come up with an alternative. A pesticide control company wants to compare the effectiveness of two pest control chemicals (call them A and B) with respect to their ability to kill ants. An experiment is conducted as follows: 2000 ants are selected at random; 1000 are selected and placed in a room labeled A, and the remaining 1000 are placed in a room labeled B. 1. When we say we have a confidence level of 95%, what do you think our "margin of error" is? 2. If we move the level of confidence from 95% to 99%, what will happen to our margin of error? What will happen to the width of our confidence interval? 3. Would we have gotten a better feel for the amount of flour needed if we'd ... 1. Using the internet (or other outside source), research a health-related variable such as newborn weight or heart rate (although you may not use these since they have already been discussed in class). Based on the information you discover (such as mean, median, typical range, etc.), would you say that your variable has a normal distribution? There used to be an ad on TV that claimed one toothpaste was proven to be more effective in fighting tooth decay than another. One of our statistics professors was interested in the study, and after 6 ads finally got the address to write away for it from the TV. The study was classically perfect - randomly assigned individuals ... I need to write an evaluation of a research article (article attached).
It needs to include a brief description of the article and follow a systematic approach. Article attached -- it tests the hypothesis that attitudes towards HIV/AIDS will improve after participation in an educational program. What are the theoretical considerations of hypothesis testing, and why are these considerations important when conducting research studies? Define statistical comparisons, correlations and predictions as they relate to social science research. What role do they play in the data collection process? The following data resulted from a taste test of two different chocolate bars. The first number is a rating of the taste, which could range from 0 to 5, with a 5 indicating the person liked the taste. The second number indicates whether a "secret ingredient" was present. If the ingredient was present, a code of "1" was used, and a "0" otherwise. I recently completed a survey concerning whether or not cell phone usage on airplanes will lead to air rage. I have the results from the survey in Excel format but I am not sure how to complete a one-tailed t-test for the questions. I am trying to show that the surveyed people say they will become irritated by prolonged cell phone use. Answer questions about hypothesis tests. For questions 3 and 4, do a complete hypothesis test and state your conclusions. See the attachments for the complete descriptions of the questions. 1.) A toy manufacturer claims that the mean time that its top-selling toy captures children's attention is at least 35 minutes. Suppose that we doubt this claim and we carry out a hypothesis test to see if we can refute it. State the null hypothesis. Let's say you want to know if the amount of candy adults eat influences the number of times they brush their teeth in a day. Your independent variable is candy - you will manipulate how many pieces of candy they eat. Your dependent variable is the number of times they brush their teeth in a day.
This is a ratio scale: the count can be zero or any whole number, and zero is meaningful. Comparing the means of two populations: what is the value of the test statistic? What is the conclusion? Please see the sample problem in the attachment. --- Consider the following hypothesis test. H0: mu1 - mu2 = 0; Ha: mu1 - mu2 =/ 0. The following results are for two independent samples taken from the two populations. Sample 1: n1 = 80, mu1 = 104, s1 = 8.4. Sample 2: n2 = 70, mu2 = 106, s2 = ... I have to do a paper that would identify a research issue, problem, or opportunity that uses data that has absolute zero measurements, such as interval and ratio level data. I chose the temperature in my city over the course of a month; what would be the issue, problem or opportunity with weather over the course of a month? I need ... The Holland wood company manufactures and assembles desks and other office equipment at several plants in Michigan. The weekly production of the Model A3 desk at the Grand Rapids plant is normally distributed with a mean of 200 and a standard deviation of 16. Recently, due to market expansion, new production methods have been introduced. I have 2 samples that I must compare using the 5% level of risk. Population One: sample size 45, mean income $31,290, SD $1,060. Population Two: sample size 60, mean income $31,330, SD $1,900. I would assume that I would run a Chi-Square for independent means, but I need the population variance, do I not, before I can do any analysis? I have to identify a research issue, problem, or opportunity that uses data that has absolute zero measurements, such as interval and ratio level data. What if I choose the temperature in my city over the course of a month? What would be the issue, problem or opportunity with weather over the course of a month? We want to test the claim that the Smith method of gender selection is effective in increasing the likelihood that a baby will be born a girl. In a random sample of 80 couples who use the Smith method, it is found that among the 80 newborn babies, there are 35 girls.
What should we conclude about the claim? Why is it necessary ... I love filet mignon. The full cut is supposed to weigh 16 ounces when fully cooked. My restaurant serves 200 servings of filet mignon per day. The restaurant's claim is that they serve filets that are 16 ounces with a standard deviation of .4 ounces due to shrinking while cooking. I ate filet mignon for 16 days straight. The average weight ... Taking a trip to Sweden again. Last year's statistics report showed that 65% of the entire adult population are women. I had a friend in Sweden take a poll to see if this was true. He picked 50 adults at random and reported that 36 of them were female. Is it possible that the proportion of women is different than what I read in the report? 1) The manufacturer of the X-15 steel-belted radial truck tire claims that the mean mileage the tire can be driven before the tread wears out is 60,000 miles. The standard deviation of the mileage is 5,000 miles. The Crosset Truck Company bought 48 tires and found that the mean mileage for their trucks is 59,500 miles. Is Crosset's experience different from the manufacturer's claim? Prepare an analysis of the attached article, identify the hypothesis described in the article, and explain how the hypothesis statement was used in the study. (See attached file for full problem description) --- The lives of many people are affected by a fear that prevents them from flying. The Marist Institute of Public Opinion conducted a poll of 1014 adults, 48% of whom were men. According to a USA Today newspaper poll, 47% of adults believe flying is the safest way to travel.
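The Sweden poll stub above contains everything needed for a worked answer: a claimed proportion of 0.65 and an observed 36 women out of 50. The sketch below uses a one-sample z-test for a proportion; treating the question as two-tailed and relying on the normal approximation are my assumptions, not stated in the stub.

```python
# One-sample z-test for a proportion, applied to the Sweden poll stub:
# claimed p0 = 0.65, observed 36 women among 50 sampled adults.
from math import sqrt
from statistics import NormalDist

p0, n, successes = 0.65, 50, 36
p_hat = successes / n                          # 0.72
se = sqrt(p0 * (1 - p0) / n)                   # standard error under H0
z = (p_hat - p0) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed

print(round(z, 2))        # 1.04
print(round(p_value, 2))  # 0.30
```

With a p-value of roughly 0.30, the sample gives no evidence at the 5% level that the true proportion differs from the reported 65%.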
Increasing global temperatures
It's well understood that climate change will lead to an increase in global average temperatures. But what does a 2 degree average increase really mean? You may be surprised to hear that the difference between an ice age period and today's temperature is just 2 degrees on average. The key phrase here is 'on average'. It's not simply the difference between a 20 degree day and a 22 degree day, but average temperatures across the world over an entire year. Most places on the planet will get far warmer, some will get drier and others will be much wetter. And our polar ice caps will melt. More extremes are something we will have to become accustomed to. Although there are many apparent contradictions, some consensus on major impacts is emerging. There will always be uncertainty in understanding a system as complex as the world's climate. However, there is now strong evidence that significant global warming is occurring. A few thousand years ago, animals and plants would have been able to adapt to climate change by moving, either immediately or gradually over generations. However, as wildlife is increasingly isolated in protected areas, it is no longer able to move, as the regions outside the protected areas are filled with agriculture or human habitation. As a result, scientists predict that over a million species are threatened with extinction.
Make your data count
ClimateWatch was developed to understand the effects climate change is having on the earth's natural processes. The first project of its kind in the Southern Hemisphere, ClimateWatch will allow every Australian to be involved in collecting and recording data that will help shape the country's scientific response to climate change. Climate change is affecting rainfall and temperature across Australia, and is consequently triggering changes in established flowering times, breeding cycles, migration movements and other phenological events.
Essentially, ClimateWatch is based on phenology, the study of periodic plant and animal life cycle events and how these are influenced by seasonal and interannual variations in climate.
- Bureau of Meteorology Climate Change
- BOM Climate Education
- CSIRO Climate Change
- Australian Government
- Intergovernmental Panel on Climate Change
- National Climate Change Adaptation Research Facility
Gravity dance tilted giant planets
An early gravitational dance made the giant planets tilt, an astronomer suggests. The shift probably happened billions of years ago, when the bigger planets in our solar system were closer together than they are now and the gravity of each one pulled on the others, writes Argentinian researcher Dr Adrian Brunini today in the journal Nature. This mutual gravitational interaction caused Jupiter, Saturn, Uranus and Neptune to acquire tilted axes, which were set as the planets moved through the solar system to take up their current positions far from the Sun, says Brunini, of the Facultad de Ciencias Astronomicas y Geofisicas in La Plata. This is a departure from an earlier theory which holds that the massive planets' tilts, or obliquities, were caused by collisions with Earth-sized space rocks during the early period of the solar system. "This model has some problems that were not clear how to solve," Brunini says. "For example, we believe that such a big object never existed in the outer solar system."
Gravity on gravity
Brunini used numerical models to show that the outer planets' obliquities could have been created by gravitational interactions. All the planets in our solar system have tilted axes, but the bigger ones have axes that lean at a constant angle, while the smaller ones like Earth have obliquities that can change. Despite the potential for change, Earth's axis has been leaning at about 23° for millions of years and is almost completely stabilised by the Moon's gravitational pull, Brunini says. But Mars' axis might change over tens of millions of years.
Earth's tilt stable but crucial
For humans, the reliability of Earth's tilted axis is important, as it is responsible for the change of seasons. At the point in its annual orbit where Earth's northern hemisphere leans away from the Sun, it is winter in the north; when the southern hemisphere tilts away, it is winter south of the equator.
While the more massive planets have stable obliquities, these still vary widely, from just 3° for Jupiter, whose axis is nearly perpendicular to its orbital plane, to about 97° for Uranus, Brunini says.
When the supply of water vapor and of atmospheric carbon dioxide is small, an extreme type of climate usually prevails. Crystals of frost puffed out as the water vapor left the air. As long as the actual amount of water vapor in the air is less than that which the air can hold, no rain falls. This water vapor changes into droplets of water when it gets cool enough. On top of that, this original earth mass, composed of molten rock and gases and water vapor, was condensing. When we say that water evaporates, we mean that it changes into water vapor. When the water vapor gets cool enough it condenses, changing to myriads of extremely small drops of water. Then the water vapor in it condenses into droplets of water, and these form a cloud. The exact values of these pressures vary with the degree of water vapor present and with temperature. When the air is fully charged with water vapor it is said to be saturated.
water vapor: Water in its gaseous state, especially in the atmosphere and at a temperature below the boiling point. Water vapor in the atmosphere serves as the raw material for cloud and rain formation. It also helps regulate the Earth's temperature by reflecting and scattering radiation from the Sun and by absorbing the Earth's infrared radiation. See also vapor.
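The note that these pressures "vary with temperature" can be made concrete with the Tetens approximation, a standard empirical formula for saturation vapor pressure over liquid water. This sketch is an illustration added to the entry, not part of it; the coefficients are the common above-freezing form of the formula.

```python
# Tetens approximation for saturation vapor pressure over liquid water.
# Air is "saturated" when its actual vapor pressure reaches this value.
from math import exp

def saturation_vapor_pressure_kpa(temp_c):
    """Approximate saturation vapor pressure (kPa) at temp_c (deg C)."""
    return 0.6108 * exp(17.27 * temp_c / (temp_c + 237.3))

for t in (0, 10, 20, 30):
    print(t, round(saturation_vapor_pressure_kpa(t), 2))
# The capacity rises steeply with temperature, which is why cooling
# moist air drives it to saturation and condensation.
```

At 20 °C this gives about 2.34 kPa; the sharp rise with temperature is the quantitative version of "warm air can hold more water vapor."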
2. Circle/Whole Group: Show several large pictures of spiders (a library book may help) and discuss observations. Some spiders hunt for their food, some jump, some spin webs, and some dig trap holes! Are spiders insects? Why not? Let kiddos tell stories about spiders they’ve seen. 3. Song: The Spiders Go Crawling–follow the melody and format of The Ants Go Marching. For example, “The spiders go crawling one by one. Hurrah, Hurrah!…” Stand and act out the words. 4. Story: Anansi does the impossible! : an Ashanti tale retold by Verna Aardema. Before reading the story tell the kiddos that the story is about a very smart spider named Anansi. The story was first told by the Ashanti people. The Ashanti people love to tell stories about Anansi the spider. The Ashanti people live far away in Africa. (Show how far with a globe.) You would need to fly on an airplane all day to get to Africa. Let’s pretend to visit the Ashanti people for this story. Move chairs, or use pillows or anything else to represent an airplane, and fly to Africa. Give everyone brightly colored fabric, scarves, or clothes because the Ashanti wear bright clothes that the men weave. Once everyone is “dressed” and situated, read the story. Ask the children what difficult things Anansi did. What difficult things have the children done? Act out the story. 5. Craft: Anansi Spiders! What color do you want your Anansi spider to be? Help the kiddos glue two small styrofoam balls together to make a spider. Spiders have eight legs. Thread four pieces of yarn through the body with a needle to form eight legs. Children can color the spider with paint or markers. No foam balls? An alternative is to make the spider webs! Give each child a black plastic plate (or piece of construction paper) with slits cut around the edge. Let each child string a white piece of yarn through the slits in whatever pattern/arrangement they would like their spider web. You can even glue little spiders on if you like! 6. Learning Activity. 
Anansi Stories! Explain that the Ashanti tell many stories where Anansi does something smart or hard. Today we will write our own Anansi stories! This will be the children’s first prompted writing project, so they may need a lot of help. Each child will need an empty “book.” (Preparing the book: use a word processor to make a table with two columns and three rows that fills an 8 1/2″ x 11″ page. In the top right hand square, print: “My Anansi Story.” Below that, print: “By ______.”) Help each child cut out their book, layer the three rows on top of each other, fold them in half, and place two staples right next to the fold. Begin writing by demonstrating the process. Make up a short Anansi Story. For example, One morning, Anansi wanted to play with his friend, Turtle. Turtle needed to water his flowers, but it took a long time because the flowers were down the path from the pond. Anansi wanted to help Turtle water faster. Anansi had an idea! Anansi and Turtle dug a path for the water to flow from the pond to the flowers. Turtle was done watering his flowers and could play with Anansi now! As you tell the story draw a simple picture on each page. Then show how you can go back and write in the words. Let the children take turns telling stories and drawing pictures. You can write in the words for them one at a time. 7. Snack: Spider Toast. Give each child a piece of toast with butter, 8 pretzels, and two raisins. Let them make and eat their own Spider Toast! 8. Learning Activity. Spider Bowls. Prepare small spider cards by printing a sheet of paper with 20-30 spiders and cutting them out in small squares or rectangles. Also print out number cards with numbers 1-10. Place three bowls on the table with a number card by each bowl (e.g., 1, 3, and 5). Let the children work together to put the correct number of spiders in each bowl (e.g., three spiders in the bowl with 3 by it). When the correct number is reached, cover the bowl (with a lid or paper or book).
When all three bowls are filled correctly, empty them, change the numbers, and repeat the activity. 9. Freeplay outside. Send your “little spiders” out for some sunshine! 10. Circle to review and summarize the day.
Supplies for the day:
- chairs for the “airplane”
- bright colored cloth
- Anansi does the impossible! : an Ashanti tale retold by Verna Aardema
- paint or markers (if doing the alternative craft, replace the italicized supplies with black plastic plates or construction paper, white yarn, and scissors; little spiders and glue are optional)
- book pages for each child
- crayons or colored pencils
- little spider cards
- number cards 1-10
The Arctic climate has been warmer over the past decade than during any 10-year period in 2,000 years, according to a study by an international research team that adds powerful new evidence that human-generated greenhouse gases have speeded the pace of the planet's recent warming. The report from an international team of climate scientists concludes that climate change in the Arctic has accelerated since the Industrial Revolution, abruptly reversing a long-term worldwide cooling trend. "The study provides a clear example of how increased greenhouse gases are now changing our climate," said Caspar Ammann of the National Center for Atmospheric Research in Boulder, Colo., a co-author of the report published Thursday in the journal Science. To deduce the Arctic's decade-by-decade climate trend over the centuries, the scientists leading the international study analyzed sediment cores from 14 Arctic lakes that revealed the varied growth rates of long-buried plants. They also studied Arctic tree rings to determine their growth rates and ages, as well as ice cores from glaciers across the Arctic that showed patterns of relative warmth and cold. Researchers at other institutions, seeking patterns of climate change even further back in time, used astronomical records to study the well-known wobble of the globe as it spins on its axis. They found that the Northern Hemisphere has long been moving away from the sun's warmth. During the summer solstice, the Northern Hemisphere is now a million kilometers - about 621,000 miles - farther from the sun than it was 2,000 years ago, according to the scientists' computer models. The result was a global period of relative cold that, the scientists found, would otherwise have continued. But about 1850, at the beginning of the Industrial Age, the planet's climate began overcoming the cooling trend, and the Arctic climate has warmed decade by decade ever since as greenhouse gas emissions have increased, the scientists say.
Stephen Schneider, a Stanford climate expert and biologist who did not participate in the study, called the seven-year project, involving seven major research institutions in three nations, "a heroic effort." The study, he said, "shows that nature has been, unfortunately, cooperating with theory and showing us on a long-time scale of millennia that the mainstream view is once again bolstered." It is clear again, Schneider said, that anthropogenic influences - the increasing emission of greenhouse gases into the Earth's atmosphere - are the prime cause of global warming.
Polio / PPRP
Polio is an infectious disease caused by the polio virus, which predominantly attacks the central nervous system (brain and/or spinal cord). Paralysis occurs in 0.1% of all infections. The infection is caused by ingesting contaminated food; this is how the virus enters the mouth and pharyngeal cavity. From here it spreads to the intestines, where it multiplies and is finally excreted with the faeces. The incubation period (the time from infection to the onset of illness) is around 6 to 10 days. When the infection stops at this stage, one speaks of asymptomatic or abortive polio; this is the case in around 4 to 8% of all infected individuals. Non-specific symptoms, which may also occur with other virus infections, are seen in the early stages of the illness: nausea, headaches, fever and possibly diarrhoea. In about 1% of all polio infections, the virus crosses the barrier of the intestinal tract and penetrates the spinal cord and brain via the bloodstream. This leads to a non-paralytic form of poliomyelitis that manifests itself through pain in the head, neck and back. In only about 0.1% of all infections are the nerve cells in the spinal cord and/or brain attacked by the virus directly. This is the paralytic form, since paralysis occurs in these cases. The symptoms of polio and its long-term consequences (post-polio syndrome) are:
- A general lack of strength and endurance
- Extreme fatigue
- Difficulty breathing and swallowing
- Intolerance to cold
- Pain in the muscles and/or joints
- Increased muscle weakness/muscle pain
- Muscular atrophy
- Increasing joint instability/joint deformities
- Subsultus (fasciculation)
- Changes in gait pattern and/or an increased tendency to fall
Since no causative antiviral therapy exists, treatment is limited to symptomatic measures. These include bed rest with careful nursing, correct positioning and physical therapy.
In addition to adequate physiotherapy, follow-up treatment also includes fittings with orthopaedic devices such as orthoses for post-polio and polio leg treatment. An improvement in mobility can thereby be achieved after the acute illness. The products shown are fitting examples. Whether a product is actually suitable for you and whether you are capable of exploiting the functionality of the product to its fullest depends on many different factors. Amongst others, your physical condition, fitness and a detailed medical examination are key. Your doctor or Orthotist will also decide which fitting is most suited to you. We are happy to support you.
Spare energy is all around us, from the pressure exerted by every footfall to the heat given off by heavy machinery. In some cases, like regenerative braking in cars, it’s easy to harvest, and the equipment needed to do so is simple and economical. In many others, however, we’re not there yet. It’s not that we don’t have the materials: piezoelectric generators can harvest stresses and strains, while triboelectric generators can harvest friction, to give two examples. The problem is that their efficiency is low and the cost of the materials is currently high, making them a poor fit for most applications. But a study in today’s issue of Science describes a “yarn” made of carbon nanotubes that can produce electricity when stretched. Its developers go on to demonstrate its use in everything from wearable fabrics to ocean-based wave power generators. Given that the raw material for carbon nanotubes is cheap and lots of people are trying to bring their price down, this approach seems to have the potential to find some economical applications.
NCERT/CBSE Textbook Exercise Important Questions Only Q.1: Indicate True(T) or False(F) a. Unicellular organisms have one-celled body. b. Muscle cells are branched structures. c. The basic living structure of an organism is an organ. d. Amoeba has irregular shape. Ans: a) T b) F (muscle cells are spindle-shaped, not branched) c) F (it is the cell) d) T. Q.2: Make a sketch of the human nerve cell. What function do nerve cells perform? Ans: Functions of the human nerve cell: (i) Nerve cells receive messages from different parts of the body. (ii) They transfer these messages to the brain, and the brain accordingly sends commands for the functioning of different organs of the body. Q.3: Write short notes on the following: (a) Cytoplasm (b) Nucleus of a cell (a) Cytoplasm: Cytoplasm is a jelly-like substance present between the cell membrane and the nucleus. The other organelles of the cell are present in the cytoplasm. Cytoplasm is made up of chemical substances such as carbohydrates, proteins and water, which are present in cells of all types and sizes. Cytoplasm contains many important tiny structures called organelles. (b) Nucleus of a cell: The nucleus is the master of the cell. It commands all the functioning of the cell. It is generally located in the centre of the cell and is spherical in shape. A membrane called the nuclear membrane separates it from the cytoplasm; this porous membrane allows the transfer of material between the nucleus and the cytoplasm. The nucleus contains the genetic material (DNA and RNA), carried on thread-like chromosomes, and a dense body called the nucleolus. Q.4: Which part of the cell contains organelles? Ans: The cytoplasm. Q.5: Make sketches of animal and plant cells. State three differences between them. Ans: See answer of Q.No.6 (additional long type questions) below. Q.6: State a difference between eukaryotes and prokaryotes. Ans: Prokaryotes do not have a well-defined nuclear membrane, while eukaryotes have a well-defined nuclear membrane.
Q.7: Where are the chromosomes found in a cell? State their functions. Ans: Chromosomes are found in the nucleus of a cell. Their function is to carry the characteristic features of the parent cell to the daughter cell, that is, from parent to offspring. Q.8: ‘Cells are the basic structural units of living organisms’. Explain. Ans: In biology, the basic unit of which all living things are composed is known as the ‘cell’. The cell is the smallest structural unit of living matter that is capable of functioning independently. A single cell can be a complete organism in itself, as in bacteria and protozoans. A unicellular organism also captures and digests food, respires, excretes, grows, and reproduces. In multi-cellular organisms, similar functions are carried out by groups of specialized cells which are organized into tissues and organs, as in the higher plants and animals. Hence, the cell is known as the basic structural and functional unit of life. Q.9: Explain why chloroplasts are found only in plant cells. Ans: Chloroplasts are found only in plant cells because they are required for photosynthesis. (Additional Important Questions) Short Questions with their Answers Q.1: Define Prokaryotes and Eukaryotes. Prokaryote: Cells having nuclear material without a nuclear membrane are termed prokaryotic cells. An organism with these kinds of cells is called a prokaryote, e.g. bacteria and blue-green algae. Eukaryote: Cells having a well-organized nucleus with a nuclear membrane are termed eukaryotic cells. All organisms other than bacteria and blue-green algae are eukaryotes. Q.2: What is Protoplasm? Ans: The entire content of a living cell is known as protoplasm. It includes the cytoplasm and the nucleus. Q.3: Give three examples of unicellular organisms. Ans: Amoeba, Paramecium and Chlamydomonas. Q.4: Name the cell organelle which is found only in plant cells. Ans: The plastid (for example, the chloroplast). Q.5: What is the smallest cell size? Ans: The smallest cells are bacteria, about 0.1 to 0.5 micrometre across.
Q.6: What is the largest cell? Ans: The largest cell, at about 170 mm x 130 mm, is the egg of an ostrich. Long Questions with their Answers Q.1: Why is the cell known as the structural and functional unit of life? Ans: Refer to the answer of Q.No.8 (NCERT Textbook Exercise) above. Q.2: Why are the mitochondria known as the power house of the cell? Ans: Mitochondria are rod-shaped, very minute bodies present in the cytoplasm. They are concerned with the release of energy from food during respiration. Because of this they are often referred to as the power house of the cell. Q.3: Why is the plasma membrane called a selectively permeable membrane? Ans: A cell is bounded by a semi-permeable membrane, called the plasma membrane, that enables it to exchange only certain materials with its surroundings. The plasma membrane permits the entry and exit of some materials and prevents the movement of others. Therefore the plasma membrane is called a ‘selectively permeable membrane’. Fig: (a) Spherical red blood cells (b) Spindle shaped muscle cells (c) Long branched nerve cells Q.4: Write a short note on the ‘shape of cells’ or ‘cell shape’. Ans: Cells exhibit a variety of shapes. Some cells have a definite shape while some keep on changing their shape. For example, white blood cells (WBCs) present in our bodies and Amoeba continuously change their shape. However, most cells maintain a constant shape, and the different shapes are related to their specific functions: blood cells are spherical, muscle cells are spindle-shaped, and nerve cells are long and branched. It is mainly the cell membrane which provides the shape to the cells of plants and animals. [See figure] Q.5: Write short notes on the following: (a) Gene: A gene is a unit of inheritance in living organisms. The nucleus contains thread-like structures called chromosomes which carry genes. Genes are composed of DNA (deoxyribonucleic acid), except in some viruses.
They achieve their effects by directing the synthesis of proteins. (b) Chromosome: These are the microscopic thread-like parts present in the nucleus of a cell that carry hereditary information in the form of genes. (c) Organelles: The various tiny components of a cell present in the cytoplasm are known as organelles. These include mitochondria, Golgi bodies, ribosomes etc. (d) Vacuole: A vacuole is a clear space in the cytoplasm, generally used for storage. A big vacuole is found in the plant cell whereas in animal cells vacuoles are very small. In protozoa, vacuoles are cytoplasmic organs performing many functions such as digestion, excretion etc. (e) Tissues: Each organ is further made up of smaller parts called tissues. A tissue is a group of similar cells performing a specific function. (f) Plastids: Plastids are found in plant cells but are absent in animal cells. They are found scattered in the cytoplasm of the leaf cells. Plastids are of three types: (i) Chloroplast, (ii) Leucoplast and (iii) Chromoplast. Among these three types the chloroplast is the most important, as it contains chlorophyll, which is necessary for photosynthesis. For this reason plastids are also called the ‘kitchen of the plant cell’. Q.6: Why are plastids called the ‘kitchen of the plant cell’? Ans: See the answer of Q.No.5 (f). Q.6: Differentiate between plant cell and animal cell. Ans: The differences between plant and animal cells are given in the following table: Fig: Structures of Animal and Plant Cells Animal Cell: (a) Cell wall is absent. (b) These cells do not contain chloroplasts. (c) Chromosomes are present in the nucleus. (d) Vacuoles are few and of smaller size. Plant Cell: (a) Plant cells have rigid cell walls. (b) Chloroplasts are present. (c) Chromosomes are present in the nucleus. (d) Vacuoles are larger and more numerous. Q.6: Write a short note on the ‘Nucleus of a cell’. Ans: Refer to the answer of Q.No.3 (b) (NCERT Textbook Exercise) above.
Chalk cliffs in England. - The definition of chalk is made or drawn with soft white, gray or yellow limestone. An example of chalk used as an adjective is in the phrase "chalk drawing," which means a drawing made with this substance. - Chalk is defined as a soft limestone that is white, gray or yellow, or something made of this substance. An example of chalk is what the teacher uses to write on the blackboard. - a white, gray, or yellowish limestone that is soft, porous, and easily pulverized, composed almost entirely of calcite from minute sea shells - any substance like chalk in color, texture, etc. - a piece of chalk or gypsum, often imbued with a pigment, used for drawing, writing on a blackboard, etc. - a mark or line made with chalk - Brit. a score or tally, as in a game or as of credit given Origin of chalk: Middle English; from Old English cealc; from Classical Latin calx, lime, limestone: see calcium - made or drawn with chalk - Slang, Horse Racing: favored to win, place, or show; betting on favorites only - Brit. to treat with chalk; lime or fertilize (soil) - to rub or smear with chalk; specif., to rub chalk on the tip of (a billiard cue) - to make pale - to write, draw, or mark with chalk - to mark out as with chalk - to outline; plan - to score, get, or achieve - to attribute or ascribe Idioms: not by a long chalk; walk a chalk line - A soft compact calcite, CaCO3, with varying amounts of silica, quartz, feldspar, or other mineral impurities, generally gray-white or yellow-white and derived chiefly from fossil seashells. - a. A piece of chalk or chalklike substance in crayon form, used for marking on a blackboard or other surface. b. Games: A small cube of chalk used in rubbing the tip of a billiard or pool cue to increase its friction with the cue ball. - A mark made with chalk. - Chiefly British: A score or tally. transitive verb: chalked, chalk·ing, chalks - To mark, draw, or write with chalk: chalked my name on the blackboard.
- To rub or cover with chalk, as the tip of a billiard cue. - To make pale; whiten. - To treat (soil, for example) with chalk. Origin of chalk: Middle English, from Old English cealk, from Latin calx, calc-, lime; see calx. (countable and uncountable, plural chalks) - (uncountable) A soft, white, powdery limestone. - (countable) A piece of chalk, or, more often, processed compressed chalk, that is used for drawing and for writing on a blackboard. - Tailor's chalk. - (uncountable, climbing) A white powdery substance used to prevent hands slipping from holds when climbing, sometimes but not always limestone-chalk. - (US, military, countable) A platoon-sized group of airborne soldiers. - (US, sports, chiefly basketball) The prediction that there will be no upsets, and the favored competitor will win. (third-person singular simple present chalks, present participle chalking, simple past and past participle chalked) - To apply chalk to anything, such as the tip of a billiard cue. - To record something, as on a blackboard, using chalk. - To use powdered chalk to mark the lines on a playing field. - (figuratively) To record a score or event, as if on a chalkboard. - To manure (land) with chalk. - To make white, as if with chalk; to make pale; to bleach.
LA: Read and comprehend literary nonfiction at the high end of the grade level text complexity band independently and proficiently. LA: Engage effectively in a range of collaborative discussions (one-on-one, in groups, and teacher-led) with diverse partners on grade level topics and texts, building on others’ ideas and expressing their own clearly. LA: Present claims and findings, emphasizing salient points in a focused, coherent manner with relevant evidence, sound valid reasoning, and well-chosen details; use appropriate eye contact, adequate volume, and clear pronunciation. MATH: Solve real-world and mathematical problems involving area, surface area, and volume. SCI: Define the criteria and constraints of a design problem with sufficient precision to ensure a successful solution. SS: Describe ways in which language, stories, folktales, music, and artistic creations serve as expressions of culture and influence behavior of people living in a particular culture. SS: Give examples of and explain group and institutional influences such as religious beliefs, laws, and peer pressure, on people, events, and elements of culture. SS: Explore ways that language, art, music, belief systems, and other cultural elements may facilitate global understanding or lead to misunderstanding. VA: Students will initiate making works of art and design by experimenting, imagining and identifying content. VA: Students will investigate, plan and work through materials and ideas to make works of art and design. VA: Students will reflect on, share insights about, and refine works of art and design. VA: Students experience, analyze and interpret art and other aspects of the visual world.
Overview of Fossil Fuels Fossil fuels were formed millions of years ago when plants, animals and other creatures died and were buried under the earth. Their remains gradually changed over the years due to heat and pressure in the earth’s crust and formed coal, oil and gas. These are the three major forms of fossil fuels, and all were formed from the organic remains of plants and animals. Because they took millions of years to form, they are also called non-renewable energy sources: they cannot be replaced once they are used up. Over many years, these decomposed plants and animals were converted into a black rock-like substance called coal, a thick liquid called oil or petroleum, and natural gas. Substances that release energy when burnt are called fuels, and the process of burning is called combustion. Combustion is the process that releases the energy stored in a fuel. A fuel may be a solid, a liquid or a gas, and the energy from it may be used either directly, as in warming a house, or indirectly, as under a boiler to make steam for driving an engine. Sometimes the fuel is burnt in the engine itself. Much of the fuel used in the world is coal. Coal can be used in lumps or ground to powder and carried through pipes to the furnace, as is sometimes done in factory and power station boilers. Wood is important in countries without coal. Charcoal is wood that has been charred by heating in an oven without air; it is sometimes used for cooking because it is smokeless. Peat is partly formed coal dug out of bogs and marshes. It burns rather like wood but gives more energy. When burnt, these fuels cause air pollution, which is one of the major disadvantages of fossil fuels. The formation of fossil fuels took millions of years, but they disappear within seconds. They are being extracted all over the world. Coal is one of the fossil fuels used extensively in the production of electricity. Oil is used to power our vehicles.
We all use fossil fuels in our daily lives in some way or another. Like everything else, fossil fuels have negative effects on our environment. Carbon dioxide is one of the by-products of fossil fuels: when burnt, they release carbon dioxide and some other gases. Carbon dioxide is one of the primary causes of global warming. We are polluting our environment by extracting these fossil fuels and burning them at a fast pace, and they cannot be replaced because they are non-renewable. The solution to these problems is to use renewable energy sources such as solar, wind and geothermal power. Examples of Fossil Fuels The most important fossil fuel comes from petroleum, which is natural oil found underground. It is not much used in its natural state but is made into fuels such as petrol, paraffin, kerosene, vaporizing oil and diesel oil. These are obtained through the process called distillation. Benzole is a liquid fuel like petrol, obtained when coal is made into gas. The fuel called natural gas, often found where there is petroleum, is a compound of hydrogen and carbon known as methane. Natural gas was found beneath the North Sea in 1965, and provides about half of all Britain’s gas requirements. The uranium used in nuclear reactors is called “fuel” even though the process is not one of combustion. A fuel cell combines hydrogen with oxygen in a way that converts chemical energy directly into electricity; the overall chemical reaction is the same as if there were combustion. Petroleum is one of the most commonly used fossil fuels. The word means “rock oil”. Petroleum and its products are of great use in today’s life. These products include motor fuels, kerosene, diesel oil, wax etc. Petrol, of course, is used in motor cars. Kerosene is used in oil lamps, farm tractors, jet aircraft engines etc. In villages, kerosene has a very important usage. Diesel oil is used in diesel-engine buses, tractors, lorries, ships and many other vehicles. Lubricants are also made from the by-products of petroleum.
This is needed to make machinery of any kind run smoothly and easily. Bitumen is used in asphalt and for waterproofing. Another important and widely used fossil fuel is natural gas. It is chiefly methane, sometimes known as marsh gas. Natural gas is normally found underground along with petroleum and coal, but sometimes occurs by itself, and is pumped out through pipelines. Once pumped out, it is transported to storage areas or for domestic use. It has been a source of domestic gas for many years. Many people use this gas to heat their homes. A strong odorant is added to it so that it is easy to smell if there is a leak. Natural gas produces comparatively little pollution compared with other fuel sources, and because it flows easily, it is simple to transport through pipelines. The main drawback of this fuel is that it is highly inflammable. The greatest producers of natural gas are the United States of America and Russia. Another very important fossil fuel is coal. It is the poor man’s petrol. It was formed by the deposition of vegetative remains over thousands of years: the fossilized form of decayed plants and other vegetation became coal. Coal is a hard, black, rock-like solid that needs to be dug up, unlike oil and natural gas, which are pumped out. There are huge coal reserves all around the world, and it is easier to find coal than oil or gas. It is made up of carbon, nitrogen, oxygen, hydrogen and sulphur. There are various methods of digging coal from underground. One of the most popular methods is to build a system of horizontal shafts in which coal miners travel via lifts or trains to dig the coal deep underground. In the United States, coal still produces more than 50% of electricity. Once dug, coal is shipped to coal power stations by trucks or trains. Coal power stations require huge reserves of coal to generate electricity on a constant basis. Fossil fuels have been serving as the source of energy for almost all practical purposes of today’s life.
They are indispensable for domestic purposes like cooking and for transportation, and have many other advantages. However, they are all non-renewable sources of energy, which means there is a limited stock of them. With the increase in population, and consequently in consumption, the stock of fossil fuels seems to be approaching its end. Every year millions of tonnes of coal and gallons of oil are burnt to extract energy from them. Through the combustion of these fuels, the environment gets highly polluted. Fossil fuels contain sulphur and nitrogen, which are released in their oxide forms and cause pollution. This leads to global warming and the depletion of these precious resources. Another effect is that when coal is dug from coal mines, it disturbs the surrounding ecosystem and poses a serious hazard to the health of mine workers. However, fossil fuels are still popular, as they are cheaper than any alternative source of energy.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Solve the integer operations in this advanced color-by-number worksheet. Browse over 40 educational resources created by Middle School Math Resource in the official Teachers Pay Teachers store. Math Mystery Picture Worksheets. Math equations coloring worksheets. Print the PDF to use the worksheet. Math may be complicated for some children. Nine geometric shapes are shown in each worksheet. The worksheets are broken down into the following units. Students must solve the equations correctly in order to color in the given pages properly. Mystery picture worksheets require students to answer basic facts and color according to the code. Coloring Squared would like for you to enjoy these free math coloring pages, available for download. However, today we are going to give you the editable versions of the documents as well. Math coloring pages and worksheets are a fun way to help your children learn. Available in the following bundles: Multiplication Coloring Sheets; 5th Grade Coloring Pages. These worksheets allow children to sharpen their coloring while learning to solve multiplication problems at the same time. Equations – Pirate: Using an understanding of computations, solve the equations to create a fun first grade math coloring page of a pirate. Unit 1 – Number Sense Coloring Worksheet; Unit 2 – Ratios & Proportional Relationships Coloring Worksheet; Unit 3 – Expressions & Equations Coloring Worksheet; Unit 4 – Geometry Coloring Worksheet. It also contains math riddles, finding the cost of objects, translating phrases into one-step equations, and more.
There are two different difficulty levels for each function. In these PDF worksheets, solve the multi-step equations and verify your solution by substituting the value of the unknown variable into the equation. This sheet includes multiplication, division, addition and subtraction to solve for x. Color in the picture of a coral reef with this free color-by-number worksheet that has you practice addition and subtraction. Then you can click on any one of the images to print the PDF. Suggested Grade Level: 1st. Common Core Standards. The hidden images include pictures of animals, kids, vehicles and fruits. Middle School Math Coloring Pages. Equations and Expressions Unit Resources. Math Fact Color by Number. Solve for x in these equations and color the picture as you go. Here are the free Thanksgiving math worksheets. Here are the links to the Solving Equations Christmas Coloring Worksheets, as promised. Hover over an image to see what the PDF looks like. This set of worksheets requires students to solve one-step equations involving integers, fractions and decimals by performing addition, subtraction, multiplication or division operations. Adding a fun activity can help your kids use more of their brain and learn more effectively. Using these coloring worksheets is a great way to start, besides using songs. 2-1 Thanksgiving Coloring Worksheet – Solving Equations with Variables on Both Sides (FREE PDF); 2-1 Thanksgiving Coloring Worksheet Answers – Solving Equations with Variables on Both Sides (Editable – Member Only); 2-2 Thanksgiving Coloring Worksheet – Solving Equations with Distributive Property. Translate the given phrases to algebraic equations. If you want access to the answer documents, you will need to join the Math Teacher Coach community. 2nd Grade Math Mystery Pictures Coloring Worksheets. Algebra Coloring Page 1. Each has ten solving-equations problems. For coordinate grid graph art pictures, please jump over to Graph Art Mystery Pictures.
Color by Number is a super fun way to get your children doing math. Click on the image to view the PDF. Solve simple addition and subtraction equations to find which color you should use. It builds children’s color recognition as well as their number sense as they complete the blank templates on the coloring worksheets. Included are 5 vocabulary coloring worksheets for 7th grade math. Basic addition, subtraction, multiplication and division fact worksheets. Your 2nd graders will surely be interested to work through the math problems in this collection so they can solve the mystery pictures. Calling all math maniacs: here’s an algebra practice page that includes a bit of coloring. Three exclusive practice sheets are available here for students. Usually we only give away the PDF files for the worksheets. Two solving-equations coloring worksheets are included in this file.
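The solve-and-verify routine these worksheets drill (solve a one-step equation, then substitute the answer back in to check it) can be sketched in a few lines. This is an illustrative example only; the function name and the sample equation 4x = 28 are hypothetical, not taken from any specific worksheet.

```python
# Illustrative sketch: solve the one-step equation a * x = b for x,
# then verify the solution by substituting it back into the equation.
def solve_one_step(a: float, b: float) -> float:
    """Solve a * x = b for x (a must be non-zero)."""
    if a == 0:
        raise ValueError("coefficient a must be non-zero")
    return b / a

a, b = 4, 28
x = solve_one_step(a, b)
assert a * x == b  # verification by substitution, as the worksheets ask
print(f"{a} * x = {b}  ->  x = {x}")
```

The explicit `assert` mirrors the checking step students do by hand: the answer is only accepted once it makes the original equation true.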
If you haven’t heard of Omega 3, or you are confused because the terms fatty acids and Omegas are often tossed around, then you aren’t alone, but Omega 3 fatty acids are essential to your health in about a million and one ways. “Omega 3s” is simply a shortened version of “Omega 3 fatty acids”. They are the counterpart to Omega 6 fatty acids; both belong to a family of essential fatty acids that play an important role in the many physiological processes our bodies must carry out. Our bodies cannot produce these fatty acids on their own, so we must get them from our diets. That is why they are called essential. Omega 3 fatty acids contain several double bonds within their chemical structure, which is why they are called polyunsaturated fats. You may see this term, polyunsaturated, in articles discussing nutrition or even on packages of food at health and grocery stores. The three most important Omega 3s are: - Alpha Linolenic Acid (ALA) – This Omega 3 is found in plants. Good sources are flaxseed, perilla and walnut oils, along with the whole foods. Chia seeds are also a good source. Alpha-linolenic acid is similar to the Omega 3 fatty acids that are in fish oil, called eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). Our bodies can change alpha-linolenic acid into EPA and DHA; however, it is suggested that less than 1% of ALA is converted to physiologically effective levels of EPA and DHA. - Docosahexaenoic Acid (DHA) – This Omega 3 is found in cold-water, fatty fish like salmon. It can also be found in some types of seaweed. DHA is known to support the healthy growth of an infant’s brain and nervous system. It also supports the adult heart and brain. - Eicosapentaenoic Acid (EPA) – This Omega 3 is also found in cold-water, fatty fish and is vital to reducing inflammation, as well as supporting a healthy brain and decreasing the incidence of depression.
These three Omega 3s are absolutely vital to human health, but research has shown that most Americans (or people who eat a diet similar to that of Americans) consume as much as 16 to 25 times more Omega 6 fatty acids than Omega 3 fatty acids. With a proper Omega 3 to Omega 6 fatty acid ratio, which should be about 1:1, we gain a whole host of health benefits; for example, many report a reduction in rheumatoid arthritis symptoms. So, how do you get more Omega 3s? Simply check out this chart for dietary sources. It breaks down the Omega 6 to Omega 3 ratio so you can add more Omega 3s and cut back on Omega 6s if you are eating too many. If you don’t get enough of these foods in your diet, you can also take an Omega 3 supplement to be sure you are supporting your body with the essential fatty acids it needs.
Electromagnetic induction is the creation of an electromotive force (EMF) by way of a moving magnetic field around an electric conductor and, conversely, the creation of current by moving an electric conductor through a static magnetic field. Electromagnetic induction may also be called magnetic induction, since the principle remains the same whether the process is carried out through an electromagnet or a static magnet. Electromagnetic induction was discovered by Michael Faraday in 1831 and, independently and almost simultaneously, by Joseph Henry in 1832. Faraday demonstrated it with a copper coil around a toroidal piece of iron, a galvanometer (a gauge-based device that shows current) and a magnet. When the magnet was moved towards the coil, an EMF was created, moving the needle of the galvanometer. If the north end of the magnet was drawn closer, current flowed one way; if the south end was drawn closer, current flowed in the opposite direction. The discovery of electromagnetic induction was fundamental to understanding and harnessing electricity. James Clerk Maxwell formulated its mathematical description as Faraday’s law of induction, later known as the Maxwell-Faraday equation. The principle of electromagnetic induction is used in electronic components such as inductors and transformers, and it is the basis of all types of electric generators and motors used to generate electricity from motion and motion from electricity.
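Faraday's law mentioned above can be stated compactly. As a sketch, with \(\mathcal{E}\) the induced EMF, \(\Phi_B\) the magnetic flux through the circuit, and \(N\) the number of turns in the coil:

```latex
% Faraday's law of induction: the induced EMF equals the negative
% rate of change of the magnetic flux linking the coil.
\mathcal{E} = -N\,\frac{d\Phi_B}{dt}

% Maxwell-Faraday equation (differential form): a time-varying
% magnetic field is accompanied by a circulating electric field.
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
```

The minus sign (Lenz's law) captures the direction reversal seen in the galvanometer experiment: bringing the north pole closer drives current one way, the south pole the other.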
Determine the significance and clinical use of iron levels in clinical practice Lab Test Name: Iron – Fe Measures the amount of Fe in the bloodstream. A sufficient Fe level supports: - oxygen transport - proper hemoglobin & RBC production Iron (Fe) is an element that is an important component of hemoglobin in red blood cells. Iron aids hemoglobin’s transport of oxygen from the lungs to all the cells of the body. The storage form of iron is ferritin. Iron is transported in the blood by a protein called transferrin. Indications: - Blood loss - Hemochromatosis - Malabsorption of iron - Iron overload Determining the type of anemia: - Thalassemia - Sideroblastic anemia - Iron-deficiency anemia Normal Therapeutic Values: - 50 to 175 mcg/dL - collected in a plasma separator tube What would cause Increased Levels of Iron? - Hemochromatosis - Lead toxicity - Iron poisoning - Acute liver disease - Multiple blood transfusions - Hemolytic anemia - Sideroblastic anemia What would cause Decreased Levels of Iron? - Blood Loss: - Gastrointestinal (GI) bleeding - Heavy menstruation - Chronic hematuria - Hypothyroidism - Iron-deficiency anemia - Inadequate absorption of iron Hey everyone. My name is Abby and I’m with nursing.com. In this lesson, we’ll talk about iron levels, their normal value, as well as times when we might see them increase or decrease, and why we should draw this lab. Let’s dive in! An iron lab measures the amount of iron in the blood. Iron is an element that is an important component of hemoglobin, which resides in red blood cells. It’s so important because iron aids hemoglobin in the transport of oxygen from the lungs to all of the cells in the body. The storage form of iron is called ferritin. Ferritin is measured in a separate lab, but you can see here is a cartoon version, if you will, of ferritin, and you can see how iron is stored within it. Iron is bound and transported in the blood to ferritin via transferrin. That’s how it’s bound to the ferritin.
Some clinical indications for taking an iron lab would be, if there has been a major amount of blood loss, hemochromatosis, like this individual with the darkened legs in the picture, if there’s a known malabsorption of iron, so this could be some type of autoimmune disease or a poor diet, um, even different types of anemia, or it could be to identify an iron overload. It also helps to determine the etiology of certain anemias, uh, whether that be thalassemia, or sideroblastic anemia, or iron deficiency anemia. Normal lab values for iron are between 50 and 175 micrograms per deciliter. It’s collected in a plasma separator tube like this green one here. An increase will be seen in the case of hemochromatosis, like we saw with that gentleman with the darkened legs. It can also be increased in lead toxicity or in iron poisoning, so maybe iron supplements were taken in excess. It can also be increased in acute liver disease, because iron can bind itself within organs and cause a lot of organ damage. It can also be increased in patients that have multiple blood transfusions. It’s also going to be increased in hemolytic anemia because those red blood cells are lysing open and spitting out contents like the hemoglobin and iron, and also, sideroblastic anemia. It will be decreased in the case of blood loss, that could be from a heavy GI bleed, it could even be from heavy menstruation, or chronic hematuria. It’s also related to hypothyroidism and of course, iron deficiency anemia. And then, as we mentioned, it can also be due to inadequate absorption. Linchpins for this lesson are that the iron lab measures iron concentration in the blood to evaluate for blood loss, anemias, and liver disease. Normal values are between 50 and 175 micrograms per deciliter, and we would see an increased value if there was an excess intake of iron, like in poisoning, those that get regular blood transfusions, and in certain anemias, like hemolytic anemia.
It’s going to be decreased if there’s blood loss, because of course, if there’s blood loss, then we have fewer red blood cells. If we have fewer red blood cells, we have less hemoglobin, and if we have less hemoglobin with iron bound to it, right, it’s all making sense. It’s also linked to an iron deficiency and a lack of absorption. Now you all did great on this lesson. I hope that contributed to a great understanding. Now go out, be your best self today and as always, happy nursing.
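The reference range quoted in the lesson can be applied mechanically, as a purely illustrative sketch (not a clinical tool). The helper function and its cutoffs are assumptions based only on the 50–175 mcg/dL range above; real laboratories publish their own reference ranges:

```python
# Hypothetical helper applying the lesson's stated reference range
# (50-175 mcg/dL). Illustration only, not clinical guidance.

IRON_LOW, IRON_HIGH = 50, 175  # mcg/dL, per the lesson

def interpret_iron(value_mcg_dl):
    """Flag a serum iron result against the lesson's normal range."""
    if value_mcg_dl < IRON_LOW:
        return "low"    # e.g. blood loss, iron-deficiency anemia, malabsorption
    if value_mcg_dl > IRON_HIGH:
        return "high"   # e.g. hemochromatosis, iron poisoning, hemolytic anemia
    return "normal"

print(interpret_iron(40))   # low
print(interpret_iron(120))  # normal
print(interpret_iron(300))  # high
```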
The far-out planet, named 55 Cancri e, is twice as big as Earth and nearly nine times more massive. It is most likely composed of rocky material, similar to Earth, supplemented with light elements such as water and hydrogen gas. Scientists estimate the planet’s surface is much hotter than ours: close to 2,700 degrees Celsius. Exoplanets — planets outside our own solar system — have captivated astronomers in recent years as interest in finding life on other Earth-like planets has intensified. But Josh Winn, the Class of 1942 Associate Professor of Physics, says exobiologists should probably not flock to 55 Cancri e looking for signs of life: The temperatures are just too high to sustain living organisms. But he suspects the exoplanet will attract the telescopes of many astronomers, mainly for reasons of visibility: 55 Cancri e is relatively close to Earth compared to other known exoplanets, and, as a result, the star around which the planet orbits appears roughly 100 times brighter than any other star with an eclipsing planet. “Everything we do in astronomy is starving for more light,” Winn says. “The more light a star gives you, the more chances you have of learning something interesting … and everyone’s been waiting for a system like this that you can study in great detail.”

An 18-hour year

Winn and his colleagues collected starlight data continuously for two weeks from Canada’s Microvariability and Oscillations of Stars space telescope, called “MOST” for short. They directed the satellite scope toward 55 Cancri e based on a tip from doctoral student Rebekah Dawson of the Harvard-Smithsonian Center for Astrophysics. Last year, Dawson published a mathematical analysis of existing data on 55 Cancri e, and found it took the planet 18 hours to orbit its star. Her results suggested 55 Cancri e was much closer to its star than previously thought, and Winn immediately saw an opportunity to catch sight of an eclipse.
“If [a planet] is just hugging the star, there’s a greater chance of an eclipse, versus if the planet is really far out, in which case you have to be luckier to see it right in front of the star,” he says. An eclipse has the potential to unlock many mysteries about an exoplanet. For example, astronomers can identify a planet’s diameter, mass, composition and atmospheric conditions by measuring the differences in light as a planet passes in front of, or “transits,” its star. However, only a handful of rocky exoplanets have been known to transit, and every one of them eclipses a faint star.

‘A firefly across a searchlight’

For two weeks, Winn and his colleagues tracked the brightness of 55 Cancri e’s super-bright star, discovering tiny dips in the data that occurred every 18 hours, a finding that confirmed Dawson’s original theory by suggesting the occurrence of an exoplanetary eclipse. Andrew Howard, a research astronomer at the University of California at Berkeley who was not involved in this study, said spotting such a miniature eclipse in deep space is no small feat. “This is like looking for a firefly crawling across a searchlight [by] looking for the decreasing brightness of that searchlight from 1,000 kilometers away,” Howard says, adding that planet hunters now have plenty of high-quality data to play with in learning more about 55 Cancri e’s atmosphere and composition. “This is just a new world,” Howard says. The results of the study have been accepted for publication in The Astrophysical Journal Letters. Winn hopes the study will prompt astronomers to explore 55 Cancri e with their own tools and telescopes. Dawson’s findings prompted another group at MIT to investigate the rocky exoplanet.
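The depth of those brightness dips follows from simple geometry: a transiting planet blocks a fraction of starlight roughly equal to the square of the planet-to-star radius ratio. A rough sketch, using the article's figure of twice Earth's radius and assuming, purely for illustration, a Sun-sized host star:

```python
# Rough transit-depth estimate: the fractional dip in starlight is
# about (R_planet / R_star)^2. The Sun-sized star is an assumption
# made here for illustration, not a measured property of 55 Cancri A.

R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

def transit_depth(r_planet_km, r_star_km):
    """Fraction of starlight blocked when the planet transits."""
    return (r_planet_km / r_star_km) ** 2

# A planet about twice Earth's radius crossing a Sun-like star:
depth = transit_depth(2 * R_EARTH_KM, R_SUN_KM)
print(f"transit depth: {depth:.1e} ({depth * 100:.3f}% dip)")
```

A dip of a few hundredths of a percent is why Howard's "firefly across a searchlight" comparison is apt: detecting it demands very stable, high-precision photometry.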
Sara Seager, the Ellen Swallow Richards Professor of Extrasolar Planets at MIT, and Brice-Olivier Demory, a postdoc in the Department of Earth, Atmospheric and Planetary Sciences, detected a transit of 55 Cancri e using NASA’s Warm Spitzer, a powerful infrared space telescope. From the spectral data they collected, the group calculated the planet’s dimensions, confirming Winn’s calculations. Demory and Seager plan to commandeer the telescope again next year to catch the planet, this time behind its star. By measuring the difference between the light given off from the planet in front of and behind its star, the group could determine exactly how much light the planet itself gives off, which could in turn give researchers clues about the planet’s atmospheric composition. “It’s still going to be hard to learn everything about this planet,” Winn says. “But at least we have what might be the best system in the sky to study it.”
Ocean deoxygenation is now being recognized as a major threat to future global coral reef survival. Oxygen is life, in or out of the water, raising concerns that declining ocean oxygen stores are adding an additional environmental stress to already highly vulnerable coral reef ecosystems. While the twin effects of ocean warming and acidification are well studied, until now there has been limited understanding of how the growing threat of ocean deoxygenation may impact the ability of corals to function and ultimately form reefs. A unique deoxygenation-reoxygenation stress experiment has given researchers from the University of Technology Sydney (UTS), the University of Konstanz and the University of Copenhagen insight into how corals manage deoxygenation stress and into the key genes that likely drive the varied stress susceptibility that commonly results in coral bleaching. The study, published in Global Change Biology, discovered that, like other animals and humans, corals have a similar, sophisticated response to low oxygen levels, or hypoxia. In humans, this response is commonly activated during oxygen-deprived exercise and cancer growth. “Ocean deoxygenation is potentially a greater and more immediate threat to coral reef survival than ocean warming and acidification,” said lead author and UTS PhD candidate Rachel Alderdice. “Coral reefs are increasingly being exposed to low oxygen events due to climate change and localised pollution often caused by nutrient run-off. “The extent to which corals are at risk from future declines in background ocean oxygen levels relies on their hypoxia detection and response systems, so to be able to identify this gene response system is significant and exciting,” Ms. Alderdice, from the UTS Climate Change Cluster (C3) Future Reefs Research Programme, said. The unique stress experiment aligned deoxygenation stress to the natural night-day cycle of common reef-building corals from the Great Barrier Reef.
Transcriptomic RNA sequencing revealed the key genes expressed that help keystone species such as Acropora tenuis respond to, and tolerate, low oxygen levels. However, the research also revealed that not all of the corals appeared to be equally sensitive to hypoxia. “We found those corals that bleached had a delayed, less-effective programming of their hypoxia response gene system compared to the non-bleached coral. The differences in programming abilities for this key gene system may be fundamental to understanding what dictates corals’ capacity to tolerate environmental stress – and ultimately how to more accurately predict the future for coral reefs,” said senior author Dr Christian Voolstra of the University of Konstanz. The researchers say that identifying such ‘common switch’ gene repertoires might provide a novel means to find genes of interest, guiding new diagnostics for reef coral management or serving as targets for selective-breeding ‘reef restoration’ efforts aimed at increasing coral stress resistance. Co-author Associate Professor David Suggett, who leads the UTS C3 Future Reefs Research Program, said: “A fundamental concern we have right now is whether corals and reefs are already feeling the effects of sub-lethal O2 stress. We have been so preoccupied with unraveling the effects of ocean warming and acidification, we have forgotten deoxygenation, despite its life-sustaining role and the fact that oxygen is an ocean property we can measure well.” “This work confirms our recent analysis that continued ocean deoxygenation will play a critical role in shaping the future of our reefs, and is yet another reason to urgently tackle climate change,” he said.
Trilobites are a well-known fossil group of extinct marine arthropods that form the class Trilobita. Trilobites form one of the earliest known groups of arthropods. The first appearance of trilobites in the fossil record defines the base of the Atdabanian stage of the Early Cambrian period (526 million years ago), and they flourished throughout the lower Paleozoic era before beginning a drawn-out decline to extinction when, during the Devonian, almost all trilobite orders, with the sole exception of Proetida, died out. Trilobites finally disappeared in the mass extinction at the end of the Permian about 250 million years ago. The trilobites were among the most successful of all early animals, roaming the oceans for over 270 million years. When trilobites first appeared in the fossil record they were already highly diverse and geographically dispersed. Because trilobites had wide diversity and an easily fossilized exoskeleton, an extensive fossil record was left behind, with some 17,000 known species spanning Paleozoic time. The study of these fossils has facilitated important contributions to paleontology, evolutionary biology and plate tectonics. The exact placement of trilobites among the arthropod subphyla remains a matter of debate. Trilobites had many life styles; some moved over the sea-bed as predators, scavengers or filter feeders, and some swam, feeding on plankton. Most life styles expected of modern marine arthropods are seen in trilobites, with the possible exception of parasitism. Some trilobites are even thought to have evolved a symbiotic relationship with sulfur-eating bacteria from which they derived food. Drotops is a genus of trilobites from the order Phacopida, family Phacopidae, that lived during the Eifelian stage of the Middle Devonian. It was described by Struve in 1990, with the type species Drotops megalomanicus. Their fossils are found in present-day Morocco, specifically the Maïder region southwest of Erfoud.
The most detailed study to date of the soil carbon stored in mangrove forests has revealed that these soils hold more than 6.4 billion tons of carbon globally, according to a new paper in Environmental Research Letters. That is about 4.5 times the amount of carbon emitted by the U.S. economy in one year. More than 90 percent of the total carbon stock in mangrove forests can be stored in the soil. The study used 30-meter resolution remote sensing data to show that mangrove forest destruction caused as much as 122 million tons of carbon to be released to the atmosphere between 2000 and 2015. The paper also found that more than 75 percent of those soil carbon emissions were attributable to mangrove deforestation in Indonesia, Malaysia and Myanmar. “Effective action on climate change will require a combination of emissions reductions and atmospheric carbon removals,” said Dr. Jonathan Sanderman of the Woods Hole Research Center, who was the lead author on the paper. More than 20 co-authors contributed to the work. “Protecting, enhancing and restoring natural carbon sinks must become political priorities. Mangrove forests can play an important role in carbon removals because they are among the most carbon-dense ecosystems in the world, and if kept undisturbed, mangrove forest soils act as long-term carbon sinks.” Direct measurements of mangrove soil carbon storage have only been made in about one third of the more than 100 nations that contain mangrove forests. To fill this measurement void, the researchers developed a machine learning-based model to predict soil carbon storage based upon climatic, vegetation, topographic and hydrologic properties that can be inferred from satellite data. Using this model, they could then estimate the carbon storage within any mangrove forest in the world. The researchers then overlaid remotely-sensed deforestation maps onto the baseline map of soil carbon to estimate loss of soil carbon due to loss of mangrove habitat. 
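The modeling strategy described above can be sketched in miniature: fit a model linking soil carbon to environmental covariates at measured sites, then predict at unmeasured ones. Everything below is invented for illustration, including the single covariate, the linear form, and every number; the study itself used a machine-learning model driven by many climatic, vegetation, topographic and hydrologic inputs:

```python
# Toy version of "predict soil carbon where it hasn't been measured":
# fit a relationship at sampled sites, apply it to an unsampled site.
# All data and the linear model form are hypothetical.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical training sites: annual rainfall (mm) vs soil carbon (Mg C/ha)
rainfall = [1200, 1800, 2400, 3000]
carbon   = [310,  370,  430,  490]

a, b = fit_linear(rainfall, carbon)
# Predict an unsampled mangrove site with 2100 mm of annual rainfall:
print(f"predicted soil carbon: {a + b * 2100:.0f} Mg C/ha")
```

The real model swaps this single linear predictor for a flexible learner over satellite-derived covariates, but the workflow — train on measured sites, predict everywhere else — is the same.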
This analysis found that between 30 and 122 million tons of mangrove forest soil carbon was lost between 2000 and 2015. As part of the paper, Dr. Sanderman and his colleagues produced maps of mangrove forest soil carbon storage and made them freely available to help government officials prioritize mangrove protection as part of their climate mitigation and adaptation plans. The map showing carbon storage in 2000 for the upper meter of soil can be viewed here: bit.ly/2F1Hsbc “Halting the loss of further mangrove habitat and restoration of lost habitat will not solve climate change alone,” Dr. Sanderman said. “But for many nations, including most small island nations, mangrove protection and restoration represent one of the most viable climate mitigation options.” The research was completed at the Woods Hole Research Center, in collaboration with The Nature Conservancy. The Mapping Ocean Wealth team is currently working on a Mangrove Blue Carbon app that will combine learning from this study with existing knowledge on above-ground mangrove carbon storage, carbon reduction policy targets, and existing protections. The app will be available in Summer 2018. In the meantime, visit the Mapping Ocean Wealth explorer to view preliminary data from this study.
NANTES, France — During the 1300s, the Black Death was savaging Europe, England and France were locked in the Hundred Years’ War and Chaucer was penning his Canterbury Tales. Meanwhile, more than a billion kilometers away, a comet careened toward Saturn and disintegrated, dropping dusty clouds of debris on the giant planet’s iconic rings, creating rippled cometary footprints. The ripples from that cataclysmic event can still be detected today, electrical engineer Essam Marouf reported October 4 during the joint meeting of the European Planetary Science Congress and the American Astronomical Society’s Division for Planetary Sciences. Marouf, a professor at San Jose State University in California and a member of the Cassini science team, described how the probe beamed radio waves back to Earth through the innermost part of Saturn’s C ring, a tenuous inner band in the planet’s ring system. The radio waves revealed what Marouf calls a “very unusual kind of addition” to the normal ring structure. “There were highly regular little wiggles that rippled over hundreds of kilometers in a very specific pattern,” Marouf says. The rippling region contains two different waves, one that repeats every 1.2 kilometers and another that repeats every 1.3 kilometers. Though curious, similar wiggles do appear elsewhere in the outer solar system. Scientists traced a similar structure in Jupiter’s rings — spied by the Galileo probe — to debris littered by comet Shoemaker-Levy 9 as it crashed into the solar system’s largest planet in 1994. Saturn’s C ring also features a longer rippling structure imaged by Cassini and reported earlier this year. Scientists think these longer undulations — between 30 and 50 kilometers — were caused by an impact event in 1983. Using that information, Marouf and his team were able to determine how long ago the ripples were created, since wavelengths shrink predictably and elderly ripples are more closely packed together. 
Rewinding that shrinking process revealed that the newly observed ripples are 600 years older than those born in the early 1980s. “They date back to about the late 1300s,” Marouf says. “And there is very clear evidence for two events, not one, separated by about 50 years.” “This is such an amazing result,” says Mark Showalter of the SETI Institute in Mountain View, Calif., who recently linked the Jovian ripples with the comet. “Two events is really a hint that this is a cometary kind of thing. Some object got captured into orbit, made two close passages. Survived the first, not totally damaged — then 50 years later it came back in and that was the end of it.”
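The dating argument can be sketched numerically. If ripple wavelength shrinks in inverse proportion to age, a deliberately simplified stand-in for the team's actual winding model, then the roughly 28-year-old, 30–50 km ripples from the 1983 event calibrate the clock for the 1.2–1.3 km ripples. All numbers come from the article; the strict 1/age scaling is an assumption for illustration only:

```python
# Back-of-envelope ripple dating: assume wavelength ~ 1/age, so
# age = ref_age * (ref_wavelength / wavelength). This scaling is a
# simplifying assumption, not the Cassini team's published model.

def ripple_age_years(wavelength_km, ref_wavelength_km, ref_age_years):
    """Estimated age of a ripple pattern, scaled from a reference pattern."""
    return ref_age_years * ref_wavelength_km / wavelength_km

# Calibrate on the 1983-impact ripples (~30 km, observed ~28 years on),
# then date the two newly observed wavelengths:
for lam in (1.2, 1.3):
    age = ripple_age_years(lam, 30.0, 28.0)
    print(f"{lam} km ripples -> roughly {age:.0f} years old")
```

Even this crude scaling lands in the 600–700 year range, i.e. an origin around the late 1300s, consistent with the dates Marouf reports.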
SINTEC green roofs make use of existing roof space and prevent runoff before it leaves the lot, storing water during rainfall events, delaying runoff until after peak rainfall and returning precipitation to the atmosphere through evapotranspiration. The depth of substrate, the slope of the roof, the type of plant community, and rainfall patterns affect the rate of runoff. SINTEC green roofs can reduce annual total building runoff by 60% to 79%.

Extended roof life

By physically protecting against UV light and reducing temperature fluctuations, SINTEC green roofs extend the life span of the roof’s waterproofing membrane and improve building energy conservation. Although modern membranes have an expected life of more than 25 years, the fact that they are protected from weather conditions means this could be much longer (over 50 years). Temperature stabilization of waterproofing membranes by green-roof coverage may also extend their useful life. An unvegetated reference roof can reach temperatures higher than 70 °C in summer, while the surface temperature of the SINTEC green roof only reaches 30 °C. During warm weather, SINTEC green roofs reduce the amount of heat transferred through the roof, thereby lowering the energy demands of the building’s cooling system. In summer, SINTEC green roofs reduce heat flux through the roof by promoting evapotranspiration, physically shading the roof, and increasing the insulation and thermal mass.

Urban heat island

In urban environments, vegetation has largely been replaced by dark and impervious surfaces (e.g., asphalt roads and roofs). These conditions contribute to an urban heat island whereby urban regions are significantly warmer than surrounding suburban and rural areas, especially at night. This effect can be reduced by increasing albedo (the reflection of incoming radiation away from a surface) or by increasing vegetation cover with sufficient soil moisture for evapotranspiration.
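The runoff-reduction range quoted above can be made concrete with a quick calculation. The roof area and annual rainfall below are hypothetical; only the 60–79% retention range comes from the text:

```python
# Quick illustration of green-roof runoff retention. The 500 m2 roof
# and 800 mm/yr rainfall are hypothetical example values.

def annual_runoff_m3(roof_area_m2, annual_rain_mm, retention_fraction):
    """Runoff volume leaving the roof per year, in cubic metres."""
    rain_volume = roof_area_m2 * annual_rain_mm / 1000.0  # mm depth -> m
    return rain_volume * (1.0 - retention_fraction)

area, rain = 500.0, 800.0  # 500 m2 roof, 800 mm of rain per year
bare = annual_runoff_m3(area, rain, 0.0)
for retention in (0.60, 0.79):
    green = annual_runoff_m3(area, rain, retention)
    print(f"{retention:.0%} retention: {bare:.0f} m3 -> {green:.0f} m3 of runoff")
```

For this example roof, the 60–79% range means 240 to 316 of the 400 cubic metres of annual rainfall never reach the storm drains.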
Green-roof habitats show promise for contributing to local habitat conservation. Living roofs also provide aesthetic and psychological benefits for people in urban areas. Even when SINTEC green roofs are only accessible for visual relief, the benefits may include relaxation and restoration, which can improve human health. Other uses for SINTEC green roofs include urban agriculture: food production can provide economic and educational benefits for urban dwellers. Living roofs also reduce sound pollution by absorbing sound waves outside buildings and preventing inward transmission. The role of SINTEC green roofs in storm-water retention is well understood, but some research demonstrates that green-roof runoff includes increased levels of nitrogen and phosphorus due to leaching from the substrate. Organic matter, nutrients, and contaminants in the growing medium or roof membranes can cause discharged water to be a new source of surface-water pollution. Research on more inert substrates, and on integrated grey-water reuse systems, may lead to mitigation of these effects. Although extensive SINTEC green roofs, as they are low in biomass, have little potential to offset carbon emissions from cities, intensive roof gardens that support woody vegetation could make significant contributions as an urban carbon sink. Urban vegetation is known to trap airborne particulates and to take up other contaminants such as nitrogen oxides.

Noise and sound reduction

The SINTEC green roof system can reduce external sound by 8–10 dB compared with a conventional roof system. The vegetation barrier and the air entrapped within the vegetation act as a sound insulation barrier, and sound waves are absorbed, reflected or deflected. Many of the products used within the SINTEC green roof system are recyclable or made from recyclable building materials, rubber or plastic. Where possible, depending on project location, the SINTEC green roof system can re-use secondary aggregates.
Twilight is the interval before sunrise or after sunset during which the sky is still somewhat illuminated. Twilight occurs because sunlight illuminates the upper layers of the atmosphere. The light is scattered in all directions by the molecules of the air, reaches the observer and still illuminates the surroundings. The map shows which parts of the world are in daylight and which are in night. If you want to know the exact time of dawn or dusk at a specific place, that information is available in the meteorological data.

Why do we use UTC?

Coordinated Universal Time (UTC) is the main time standard by which the world regulates clocks and time. It is one of several closely related successors to Greenwich Mean Time (GMT). For most common purposes, UTC is synonymous with GMT, but GMT is no longer the most precisely defined standard for the scientific community.
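In practice, regulating clocks against UTC means converting local times to and from a fixed offset. A small Python sketch; the observer's UTC+2 zone and the date are hypothetical examples:

```python
# Converting a local timestamp to UTC with Python's standard library.
# The UTC+2 offset below is a hypothetical observer's time zone.
from datetime import datetime, timezone, timedelta

# A local timestamp in a zone two hours ahead of UTC:
local = datetime(2024, 6, 21, 22, 30, tzinfo=timezone(timedelta(hours=2)))

# Convert to UTC so times can be compared across the world:
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2024-06-21T20:30:00+00:00
```

This is why sunrise and sunset tables are often published in UTC: each reader applies their own local offset rather than the publisher guessing it.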