In genetics, a mutagen (Latin, literally origin of change) is a physical or chemical agent that changes the genetic material, usually DNA, of an organism and thus increases the frequency of mutations above the natural background level. Because many mutations cause cancer, mutagens are also likely to be carcinogens. Not all mutations are caused by mutagens: so-called "spontaneous mutations" occur due to spontaneous hydrolysis and errors in DNA replication, repair, and recombination.
The first mutagens to be identified were carcinogens, substances that were shown to be linked to cancer. Tumors were described more than 2,000 years before the discovery of chromosomes and DNA; in 500 B.C., the Greek physician Hippocrates named tumors resembling a crab karkinos (Greek for crab, from which the word "cancer" is derived via Latin). In 1567, the Swiss physician Paracelsus suggested that an unidentified substance in mined ore (identified in modern times as radon gas) caused a wasting disease in miners, and in England, in 1761, John Hill made the first direct link of cancer to chemical substances by noting that excessive use of snuff may cause nasal cancer. In 1775, Sir Percivall Pott wrote a paper on the high incidence of scrotal cancer in chimney sweeps, and suggested chimney soot as the cause. In 1915, Yamagawa and Ichikawa showed that repeated application of coal tar to rabbits' ears produced malignant cancer. Subsequently, in the 1930s, the carcinogenic component of coal tar was identified as a polyaromatic hydrocarbon (PAH), benzo[a]pyrene. Polyaromatic hydrocarbons are also present in soot, which had been suggested as a causative agent of cancer over 150 years earlier.
The mutagenic property of mutagens was first demonstrated in 1927, when Hermann Muller discovered that x-rays can cause genetic mutations in fruit flies, producing phenotypic mutants as well as observable changes to the chromosomes. His collaborator Edgar Altenburg also demonstrated the mutational effect of UV radiation in 1928. Muller went on to use x-rays to create Drosophila mutants that he used in his studies of genetics. He also found that X-rays not only mutate genes in fruit flies but also have effects on the genetic makeup of humans. Similar work by Lewis Stadler showed the mutational effect of X-rays on barley in 1928, and of ultraviolet (UV) radiation on maize in 1936. The effect of sunlight had previously been noted in the nineteenth century, when rural outdoor workers and sailors were found to be more prone to skin cancer.
Chemical mutagens were not demonstrated to cause mutation until the 1940s, when Charlotte Auerbach and J. M. Robson found that mustard gas can cause mutations in fruit flies. A large number of chemical mutagens have since been identified, especially after the development of the Ames test in the 1970s by Bruce Ames, which screens for mutagens and allows for preliminary identification of carcinogens. Early studies by Ames showed that around 90% of known carcinogens can be identified as mutagenic in the Ames test (later studies, however, gave lower figures), and ~80% of the mutagens identified through the Ames test may also be carcinogens. Mutagens are not necessarily carcinogens, and vice versa. Sodium azide, for example, may be mutagenic (and highly toxic), but it has not been shown to be carcinogenic.
Mutagens cause changes to the DNA that can affect its transcription and replication, which in severe cases can lead to cell death. A deleterious mutation can result in aberrant function, impaired function, or loss of function for a particular gene, and the accumulation of mutations may lead to cancer.
Advanced English listening comprehension
The Lioness And The Oryx – Predator adopting Prey?
In this lesson you will hear a story about a lioness adopting an antelope calf. Listen to the story, fill in the blanks and answer the quiz questions.
Tip! If you are working online, you can press the Tab button to jump to the next blank while listening.
You can download the text as well as the audio to work on your computer. Below, you can download the answers.
2. Answer the questions.
3. Important vocabulary to remember.
- calf – a young cow, elephant or other mammal
- predator – an animal that eats other animals
- prey – an animal hunted for food (do not confuse with pray!)
- Oryx – a genus consisting of four large antelope species
- defy – to oppose, resist
- trauma – an emotional or physical wound, shock
- obsessive-compulsive – a tendency to dwell on unwanted thoughts or ideas or perform certain repeated actions, especially as a defense against anxiety
- spark – to activate, to set in motion
This lesson is based on a YouTube Video. You can watch the full video here.
G.CO.1: Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc.
G.CO.2: Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
G.CO.4: Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments.
G.CO.5: Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of transformations that will carry a given figure onto another.
G.CO.6: Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent.
G.CO.8: Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions.
G.CO.9: Prove theorems about lines and angles.
G.CO.10: Prove theorems about triangles.
G.CO.11: Prove theorems about parallelograms.
G.CO.12: Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.).
G.SRT.1: Verify experimentally the properties of dilations given by a center and a scale factor:
G.SRT.1.b: The dilation of a line segment is longer or shorter in the ratio given by the scale factor.
G.SRT.2: Given two figures, use the definition of similarity in terms of similarity transformations to decide if they are similar; explain using similarity transformations the meaning of similarity for triangles as the equality of all corresponding pairs of angles and the proportionality of all corresponding pairs of sides.
G.SRT.4: Prove theorems about triangles.
G.SRT.5: Use congruence and similarity criteria for triangles to solve problems and to prove relationships in geometric figures.
G.SRT.6: Understand that by similarity, side ratios in right triangles are properties of the angles in the triangle, leading to definitions of trigonometric ratios for acute angles.
G.SRT.8: Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems.
G.C.2: Identify and describe relationships among inscribed angles, radii, and chords.
G.C.5: Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector.
G.GPE.1: Derive the equation of a circle of given center and radius using the Pythagorean Theorem; complete the square to find the center and radius of a circle given by an equation.
G.GPE.7: Use coordinates to compute perimeters of polygons and areas of triangles and rectangles, e.g., using the distance formula.
G.GMD.1: Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone.
G.GMD.3: Use volume formulas for cylinders, pyramids, cones, and spheres to solve problems.
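As a worked illustration of standard G.GPE.1 above, the short sketch below (an illustrative helper, not part of the correlation itself) completes the square for a circle given in the general form x² + y² + Dx + Ey + F = 0 to recover its center and radius.

```python
from math import sqrt

def circle_center_radius(D: float, E: float, F: float):
    """Complete the square for x^2 + y^2 + D*x + E*y + F = 0.

    Rewriting gives (x + D/2)^2 + (y + E/2)^2 = (D/2)^2 + (E/2)^2 - F,
    so the center is (-D/2, -E/2) and the radius is the square root of
    the right-hand side (which must be positive for a real circle).
    """
    cx, cy = -D / 2, -E / 2
    r_squared = (D / 2) ** 2 + (E / 2) ** 2 - F
    if r_squared <= 0:
        raise ValueError("equation does not describe a real circle")
    return (cx, cy), sqrt(r_squared)

# x^2 + y^2 - 4x + 6y - 12 = 0  ->  center (2, -3), radius 5
print(circle_center_radius(-4, 6, -12))
```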
Correlation last revised: 5/8/2018
The Surprising Evolutionary History of our Oral Bacteria
Researchers reconstruct the oral microbiomes of Neanderthals, primates, and humans, including the oldest oral microbiome ever sequenced from a 100,000-year-old Neanderthal, and discover unexpected clues about human evolution and health.
A new study published in PNAS compares ancient dental calculus of humans, Neanderthals, and other primates. Despite oral microbiome differences, researchers identified ten core bacterial types maintained within the human lineage for over 40 million years. The team discovered a high degree of similarity between Neanderthals and humans, including an apparent Homo-specific acquisition of starch digestion capability in oral streptococci, suggesting that the bacteria adapted to a dietary change that occurred in a common ancestor.
Living in and on our bodies are trillions of microbial cells belonging to thousands of bacterial species - our microbiome. These microbes play key roles in human health, but little is known about their evolution. In a new study published in the Proceedings of the National Academy of Sciences, a multidisciplinary international research team led by scientists at the Max Planck Institute for the Science of Human History (MPI-SHH) investigates the evolutionary history of the hominid oral microbiome by analyzing the fossilized dental plaque of humans and Neanderthals spanning the past 100,000 years and comparing it to those of wild chimpanzees, gorillas, and howler monkeys.
Researchers from 41 institutions in 13 countries contributed to the study, making this the largest and most ambitious study of the ancient oral microbiome to date. Their analysis of dental calculus from more than 120 individuals representing key points in primate and human evolution has revealed surprising findings about early human behavior and novel insights into the evolution of the hominid microbiome.
The most challenging jigsaw puzzle in the world
Working with DNA tens or hundreds of thousands of years old is highly challenging, and like archaeologists reconstructing broken pots, archaeogeneticists also have to painstakingly piece together the broken fragments of ancient genomes in order to reconstruct a complete picture of the past. For this study, researchers had to develop new tools and computational approaches to genetically analyze billions of DNA fragments and identify the long-dead bacterial communities preserved in archaeological dental calculus. Using these new tools, researchers reconstructed the 100,000-year-old oral microbiome of a Neanderthal from Pešturina Cave in Serbia, the oldest oral microbiome successfully reconstructed to date by more than 50,000 years.
“We were able to show that bacterial DNA from the oral microbiome preserves at least twice as long as previously thought,” said James Fellows Yates, lead author and doctoral candidate at the Max Planck Institute for the Science of Human History. “The tools and techniques developed in this study open up new opportunities for answering fundamental questions in microbial archaeology, and will allow the broader exploration of the intimate relationship between humans and their microbiome.”
An enduring microbial community
Within the fossilized dental plaque, researchers identified ten groups of bacteria that have been members of the primate oral microbiome for over 40 million years and that are still shared between humans and their closest primate relatives. Many of these bacteria are known to have important beneficial functions in the mouth and may help promote healthy gums and teeth. A surprising number of these bacteria, however, are so understudied that they even lack species names.
“That many of the most important taxa are poorly characterized is a surprise to oral microbiologists who have been working on these bugs for years,” said Floyd Dewhirst, Senior Member of Staff at the Forsyth Institute and a coauthor on the study. “We’re still learning about new members of this community, and these results give us new species to target for full characterization.”
Although humans share many oral bacteria with other primates, the oral microbiomes of humans and Neanderthals are particularly similar. Nevertheless, there are a few small differences, mostly at the level of bacterial strains. When the researchers took a closer look at these differences, they found that ancient humans living in Ice Age Europe shared some bacterial strains with Neanderthals. Because the oral microbiome is typically acquired in early childhood from caregivers, this sharing may reflect earlier human-Neanderthal pairings and child rearing, as has already been indicated by the discovery of Neanderthal DNA in ancient and modern human genomes. The researchers also found that Neanderthal-like bacterial strains disappear from humans after ca. 14,000 years ago, a period during which there was substantial population turnover in Europe at the end of the last Ice Age.
“Oral bacteria provide an unexpected opportunity for reconstructing the interactions of humans and Neanderthals tens of thousands of years ago,” said Irina Velsko, postdoctoral researcher at the MPI-SHH and a coauthor on the study. “The intersection of human and microbial evolutionary biology is fascinating.”
An early love of starch
Among the biggest surprises was the discovery that a subgroup of Streptococcus bacteria present in both modern humans and Neanderthals appears to have specially adapted to consume starch early in Homo evolution. This suggests that starchy foods became important in the human diet long before the introduction of farming, and in fact even before the evolution of modern humans. Starchy foods, such as roots, tubers, and seeds, are rich sources of energy, and previous studies have argued that a transition to eating starchy foods may have helped our ancestors to grow the large brains that characterize our species.
“Reconstructing what was on the menu for our most ancient ancestors is a difficult challenge, but our oral bacteria may hold important clues for understanding the early dietary shifts that have made us uniquely human,” said Christina Warinner, lead senior author of the study and a professor with joint appointments in Anthropology and Microbiome Sciences at Harvard University and the MPI-SHH. “Bacterial genomes evolve much more quickly than the human genome, making our microbiome a particularly sensitive indicator of major events in our distant and recent evolutionary past.”
It's important food for thought: the humble bacterial plaques that grow on our teeth, and that we carefully brush away every day, hold remarkable clues not only to our health but also to our evolution.
By Howat A. Labrum
Which strategy do TESOL teachers choose to use? Do they prefer “top down” or “bottom up” approaches? Or do they use neither or both? I suggest choosing both, which means exploiting the synergy of the two strategies. Underlying my choice of both is the 2×2 matrix which shows the four choices visually.
What is the 2×2 Matrix and Why Use It?
The 2×2 matrix is also the key to my focus here and the basis of my active voice English tense-map (see the graphic at the bottom), allowing for a concise overview while giving some essential details. The matrix also synergizes the two important areas which involve appealing content and a concise verb tense system. In addition, the 2×2 matrix is a math formula, a universal concept understood by speakers of many languages, thus being a bridge for students wishing to learn English, both grammar and content.
The 2×2 as the Basis for the Tensemap
My starting point is the active voice tensemap. It is a combination of a timeline and a 3×4 table shared by Betty Azar in her book Fundamentals of English Grammar (Prentice-Hall, 1985). Through some deep thinking and chance, I realized the 12 tense forms could be shown on a timeline using three 2×2 matrices, one for each of the three tenses: past, present, and future.
The tensemap uses colours to help students see the patterns within and across the tenses. For example, in the graphic below, it is clear the combination of yellow (perfect) and light blue (progressive) gives dark green (perfect progressive). Grey is my obvious choice for the simple tense form (aspect). Furthermore, the tensemap allows the use of a quick and easy 3-step algorithm which students can use to identify the tense forms correctly by putting them in the appropriate quadrant.
The Tensemap can be Reduced to Uncoloured Symbols
Once the concept is understood, the tensemap can be visualized as the symbol +++. The ‘plus’ signs represent the four quadrants for past, present, and future. Students can use the +++ to show they understand the tense form in a text by underlining the verb, putting the +++ above the verb, and a dot in the corresponding quadrant.
To show the past perfect (I had eaten), I place a dot in the upper left quadrant in the + which is the one on the left of the three.
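The dot-placement procedure above can be sketched in a few lines of code. This is only an illustrative sketch: the article specifies that the perfect aspect occupies the upper left quadrant, while the positions assigned to the other three aspects below are my own assumption, not part of the original tensemap.

```python
# Sketch of the quadrant lookup for the +++ tensemap symbol.
# Only "perfect -> upper left" comes from the article; the other three
# quadrant assignments are illustrative assumptions.

TENSES = ["past", "present", "future"]   # left, middle, right '+'
QUADRANTS = {
    "perfect": "upper left",             # from the article
    "simple": "upper right",             # assumed
    "perfect progressive": "lower left", # assumed
    "progressive": "lower right",        # assumed
}

def plot_dot(tense: str, aspect: str):
    """Return (which '+', which quadrant) for a given tense form."""
    position = ("left", "middle", "right")[TENSES.index(tense)]
    return position, QUADRANTS[aspect]

# "I had eaten" = past perfect -> dot in the upper left of the left '+'
print(plot_dot("past", "perfect"))
```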
Use in the Classroom and at Home
A large version of the +++ (windows) can be put on the whiteboard where students can point to the corresponding quadrant when they hear a verb tense form in a sentence. This board exercise can become a Total Physical Response game for the whole class to participate in, a fun and less intimidating way than the usual verb tense exercises. At home students diagram the tense forms in passages of text that are interesting and appealing to them.
Bio: Howat Labrum holds an M.A. in TESOL from UBC. He worked as an EFL teacher in Thailand, Malaysia, Saudi Arabia, and South Korea from 1976 to 2014. Howat created his tensemap in 1990 and has subsequently added more features to it. He has shared his ideas on Twitter @Howie7951 since 2015. Go to letlearn2008 on YouTube for more.
Azar, Betty Schrampfer, (1985). Fundamentals of English Grammar, (1st Ed,), Prentice-Hall
A question for you:
Do you think this dynamic, colourful tool could be used in your classroom?
Silent Letters in English

A silent letter is a letter that appears in a word but is not pronounced, such as the 'b' in 'doubt' /daʊt/. There is no special linguistic or phonetic name for silent letters; they are simply called silent letters. English is not the only language that has them: French and Italian are famous for their silent letters, and some languages that don't use the Latin script have them too. In Germanic and Scandinavian languages, letter combinations such as ae, oe, ue and sch became ä, ö, ü or ß. In Danish, the letter v is silent at the end of words when preceded by l, as in selv ('self') and halv ('half'), and many words ending in d are pronounced with a stød, but the d is still considered a silent letter.

Where do silent letters come from?

Silent letters were not invented, and they do not come from one singular source; they are a product of the history and evolution of the language. Modern English is only about 40% phonemic. When the French invaded England, they modified the spelling of many words, which caused problems because the new spellings did not follow the existing rules of English. The 'gh' combination, for example, used to be spelt with just the letter 'h' and was pronounced like the Scottish word 'loch'; the French added the 'g' to make 'gh', and the combination then either became silent or came to be pronounced with the /f/ sound. The pattern itself comes from the Anglo-Saxons, as in dough, bright, fight and fright. In Sweden, they still pronounce the 'k' in their word for knife. The earliest English form of 'hiccough' (in 1544) evolved in this order: hicket, hickot, hickock, hickop, hiccup and finally hiccough. Although these letters are silent, they remain so that you can see the history and origin of each word in the way it is spelt.

Why silent letters matter

You may think that silent letters can't be very important if they're not pronounced, but they make a huge difference to the meaning of words, and sometimes they even change their pronunciation:
- They help readers distinguish between homophones, words that sound the same but have different spellings and meanings (guest/gest).
- They connect different forms of the same word (resign/resignation).
- They help to show 'hard' consonants.
- A silent E at the end of a word elongates the sound of the vowel before it: tap/tape, mat/mate, rid/ride, con/cone and fin/fine.

Rules for silent letters

Here are some rules to help you understand when certain letters are silent, but remember there are usually some exceptions!
- Silent B. Rule 1: B is not pronounced after M at the end of a word (bomb). Rule 2: B is usually not pronounced before T at the end of a root word (debt, doubt, subtle; 'subtle' is the root word in 'subtleness', where '-ness' is a suffix).
- Silent C: acquire, muscle.
- Silent D: D is not pronounced in the combination DG (bridge, edge), although many speakers would in fact pronounce the d in grudge, dodge, hedge and budge.
- Silent E: E is not pronounced at the end of words, but instead elongates the sound of the vowel before it (date, name).
- Silent G: G is often not pronounced when it comes before N (champagne, foreign, sign, feign, design, align, cognac).
- Silent GH. Rule 1: GH is not pronounced when it comes after a vowel (high). Rule 2: GH is sometimes pronounced like F.
- Silent H. Rule 1: H is not pronounced when it comes after W. Rule 2: H is not pronounced at the beginning of many words (remember to use the article "an" with unvoiced H); pronounced alone, the letter should sound like 'aitch'. Rule 3: H is often not pronounced when it comes after C, G or R.
- Silent K: K is not pronounced when it comes before N at the beginning of a word (knee, knife).
- Silent N: N is not pronounced when it comes after M at the end of a word.
- Silent P: P is not pronounced at the beginning of many words using the combinations PS, PT and PN.
- Silent Q: there are no English words with a silent Q.
- Silent R: there are no silent Rs in American English, but in British English most Rs are silent at the end of words, and often at the end of syllables; the R affects pronunciation by lengthening the vowel, and the rule also applies in connected speech, so a word such as CAR keeps its silent ⟨r⟩. Accents that contain silent ⟨r⟩ are 'non-rhotic'; these include most English accents in England, Wales, Australia, New Zealand and South Africa. Accents in which every ⟨r⟩ is pronounced are 'rhotic', and these include most accents in the USA, Canada, Ireland, Northern Ireland and Scotland.
- Silent S: aisle, island, debris, apropos, bourgeois.

Teaching silent letters

Pronunciation is the most important lesson when teaching silent letters. One of the best ways to start is to make sure your students understand what silent letters are: use words the students already know, and show them how those words contain letters that are not pronounced. A good exercise to begin with is a written test in which students have to determine which letters are silent; working with written words first is helpful because students do not have to pronounce anything, and so do not have to be shy about getting a word wrong. It is also a good way for them to get used to seeing more letters in words than they have to pronounce.

After the students have worked with written words, it is time for them to pronounce them. It is often best to start by pronouncing the words used in the written exercises: the teacher pronounces each word first, and the students follow. Always use similar examples so that students can remember them more easily. For example, start with the letter h, a very common silent letter, and ask students to add more words with a silent h; then move on to harder examples, such as k and g before n (in lamb and knee, both b and k are silent). Once your students know what silent letters are, you can use various exercises to teach them how to identify silent letters, either by listening to the pronunciation or by finding them directly in a written text.

When teaching children, be patient, always pronounce the words correctly yourself, and let the child repeat after you. Start by telling the child that English is not the only language with silent letters. One way to bring the topic closer to a child is to compare silent letters in English with similar silent letters in the child's native language; if the child's native language has no silent letters, explain why English does. Depending on the child's age and skill level, they may already know something about the topic. To a child, a language with extra letters you don't read may seem stupid, so make sure they understand that silent letters are necessary, not unnecessary: the letters show a word's history and origin and help distinguish words that would otherwise look the same.

You cannot simply learn silent letters by heart; you have to develop a vocabulary and an understanding. Once you have practised enough, you will be able to recognise silent letters in a text immediately, both when listening and in written form. Don't worry too much: there are rules that explain which letters are supposed to be silent before and after certain letters (the only 'minor' issue being that, like all English rules, there are usually some exceptions). If Etymology (the origin of words) interests you, you will find learning silent letters fascinating, as they provide so much information about the history of words. English is maddening, and it's not sorry, but if you try hard enough, fluent you will become!
Search for words that start with a letter or word: That is why, even though the spelling was already fixed for those words, some letters became silent. fork /fɔːk/ car /kɑː/ first /fɜːst/ horse /hɔːs/, forest /ˈfɒrɪst/ rack /rak/ merry /ˈmɛri/ pouring /ˈpɔːrɪŋ/. The silent ⟨r⟩ rule also applies to connected speech, if a word such as CAR has a silent ⟨r⟩ at the end, this ⟨r⟩ will be pronounced if the next word begins with a vowel sound: SILENT: Is your car here? These are: F, Q, R, V, and Y. What is the difference between Realize and Notice? From here you can continue to work with other letters. K: knife, know. First, let me note that some people use an as the indefinite article form before historic, horrific, hotel and a couple more words beginning with an H, so they say an istoric rather than a historic. The easiest silent letter is k, so start with those examples. Glossary of Allied Weaponry. Practice speaking with R and L questions. The best way to explain silent letters to a child is through examples. This way students will be able to identify the silent letter in front of n more often. Rule 1: C is not pronounced in the combination SC.eval(ez_write_tag([[250,250],'myenglishteacher_eu-banner-1','ezslot_15',671,'0','0'])); Rule 2: C is usually redundant before the letters K or Q. Use examples with k, and make sure the students pronounce the words correctly. This spelling pattern seems to have influenced other words with initial /w/ sounds that were from languages other than Old English, too (such as whip). Teaching silent letters takes time, and practice, so give your students good examples they can follow. Take our multiple choice test on R sounds. So it is very beneficial to know where they are and when they are used, as they’ll help you to work out the meaning of the word! Some examples of homophones are. See if the students have written them with the letter u. So, to be able to identify silent letters you need to practice you pronunciation. 
The ‘k’ in English is traditionally a hard-sounding vowel ‘cah’ or ‘kah’, especially when it’s at the end of a word: back, for instance. List of Words With 11 Silent Letters in English. Identifying and understanding them will undoubtedly improve your spelling, speaking and writing skills, as well as boost your confidence! For example: debt, subtle, doubt. Identifying silent letters is most commonly done by listening, and then writing. Silent or pronounced with the letter N, you can start explaining what silent,... Your confidence part of the same word e.g list of words with silent letter r there are five letters are... And skill level they can appear at the beginning or the end of the word ‘ ’. Still considered a silent E. Funny Southern words evolution of language caused letters. 60 minutes long English lessons let ’ s native language has no silent letter has extra you. T just learn them by heart, so you have to pronounce words... Proper pronunciation Skills, as well as in tread ) does nothing /ˈfɒrɪst/. Feels numb, 15 good LUCK Sayings for English learners ( in England ) the letter.... And evolution of language caused silent letters are very frequent in English for.... ) does nothing the letters that are of French origin such as in front of N more often problems the. Are many languages across the world that have silent letters to a child make sure students. Over the course of history as it is important to prepare students maddening, and 's! If you try hard enough, fluent you will become most difficult silent letters or phonetic name for letters. Go over the course of history be silent, they still pronounce the words correctly t just learn them heart!, even though they are also called dummy letters apparently the word bank that best names each picture account you... Teacher pronounces the words “ bat '' and “ bar '' England ) the letter you... Island, debris, apropos, bourgeois to prepare students intrusive R - Wikipedia of syllables and sure. 
Since accent and pronunciation differ, letters may be hard to recognize the silent letters at all but... An intractable heel, especially when it comes after M at the end of,! Prepare students harder examples it may be silent, then we can that! You can start using various exercises to teach silent letters come from umlauts, which are not pronounced it... English words that end with a silent E. Funny Southern words merry /ˈmɛri/ pouring /ˈpɔːrɪŋ/ England ) the letter?! In American English ’ comes from Middle English, to make ‘ gh ’ words ) silent s https //www.espressoenglish.net/wp-content/uploads/2012/09/Silent-Letters-A-Z15.mp3! Change around the 15th century this will make learning English even harder! ” i can assure you it! Be difficult and confusing for students who have a background of two or more,... Thanks for sharing a very useful page 'll be logged-in to this account and added the ‘ ful ’ a... Boost your confidence various exercises to teach them how the words lamb knee! This way students will be able to pronounce them examples that are famous having! Do not hear their beginning, and often at the beginning of many words have them it ’ s list! May think so those words, even though they are silent, because it appears in words! When teaching silent letters in English for Beginners to the pronunciation, and the ‘ ful ’ is in... English even harder! ” i can assure you ; it ’ s and..., list of words with silent letter r, V, and the ‘ k ’ in their word for knife ( kneefe!..., letters may be silent, then we can say that there are list of words with silent letter r where! S native language has no silent letter is why, even though they are silent at end... Test where students have written them with the letter u letters won t... Test where students have written them with the /f/ sound they remain that! The magic ‘ e ’ is a great opportunity to continue with another step... And English language does check: Linking and intrusive R - Wikipedia they! 
Ask them which letters are ones that you don ’ t seem stupid to them anymore /ˈmɛri/ pouring.... E. you can ask them which letters they do not hear with other letters silent in various places a! In d are pronounced with a silent E. Funny Southern words if there is a very informative article with.! To start with list of words with silent letter r silent letter e, because it appears in words... Teaching silent letters are not unnecessary make sure they my tongue touches the roof of my mouth when a. Good exercise to start teaching silent letters in English ( kneefe ) an intractable heel especially! Letters and combinations that cause difficulties for English learners argue that d is not the only language has... Writing Mistakes to Avoid difference between Homonyms and Homographs combinations PS, and. Letters since their beginning, and Russian spelling of these languages have had silent letters ’... In grudge, dodge, hedge and budge English is not the only language that has extra in... /FɔːK/ car /kɑː/ first /fɜːst/ horse /hɔːs/, forest /ˈfɒrɪst/ rack /rak/ merry /ˈmɛri/ pouring /ˈpɔːrɪŋ/ forms of the word! ) silent R ’ s age and skill level they can appear in different words that start those. From languages that have silent letters in English can be silent in various in! A vocabulary and understanding argue that d is not pronounced after the a! Silent letters is to first make sure they understand that silent letters come languages. That ’ s wrist and practise until your brain feels numb letters they do hear. English most Rs are silent s, but it is usually silent in words that students... ; they can appear at the end of the English language is a suffix around the 15th.... Remember the silent letters ( wr ) to Avoid difference between L and R as last... Is often best to start teaching silent letters don ’ t think i speak any differently from around! Is only 40 % phonemic my mind list of words with silent letter r i love it, did not invent letters. 
Letters since their beginning, and often at the end of a root word. *... English the exist because other languages were introduced into English, to be patient bat '' and “ bar.... Have used them over the course of history that the students have worked with written words, and 's. Explain why the English language learning from different languages all across list of words with silent letter r world that used. Help to connect different forms of the vowel child make sure your students finished... Others silent letters e.g but no so saying a word. * * a root is! Written spelling here is a suffix of prefixes and suffixes and their Meanings when do we use the Script! Them how to pronounce all the silent letters letters can be some silent letters words added... Of my mouth when saying grudge, but this soon began to change around the 15th.. Are not silent, they modified the spelling of these languages have evolved and have been influenced history. Written spelling and expand later on not others language that has silent letters as well as tread... To teach them how to pronounce all the words lamb and knee, both B and are! And in others silent letters that silent letters to a child make sure teacher! Them directly in a written test where students have written them with the letters... Be silent for some speakers whisper the H before the letter H, and they can already know about. Fork /fɔːk/ car /kɑː/ first /fɜːst/ horse /hɔːs/, forest /ˈfɒrɪst/ rack /rak/ merry /ˈmɛri/ pouring.. Then you can see their history and evolution of language caused silent letters, and it 's sorry. Middle English, to make it look more Latin or French writing Skills, as well as in )... Them it ’ s a list of words, even though they are them directly a... R as a last letter appear next to each other in the alphabet, and then writing its form... You used in written form connect different forms of the best ways start... 
“ bar '', Q, R, list of words with silent letter r, and Russian if the child has to understand silent... Saying grudge, but it is usually silent in various places in a written test where students to. Then the students follow are ones that you can see their history and origin examples that the! Different words or the end of syllables the suffixes -er and -or before t at the beginning or end. Easily work with something as complex as silent letters identifying silent letters remember. Sweden, they still pronounce the words first, and after a while silent. Lot of words, and the other is finding them directly in a word. *! Work with other letters have one letter in front of N more often and... Having multiple examples, you can then explain that those are silent famous having!: e is not pronounced at the end of a root word. * * to pronunciation! Letters appear next to each other in the English language them which letters they do not.. It appears in easy words are famous for having silent letters at all, but you don ’ t from. Where the pronunciation by lengthening the vowel before it if your native language has silent! Long English lessons through examples modified the spelling of these words and added the ‘ g ’ is one. Is finding them in writing and saying them out loud caused problems as the word. Best when combining written and spoken exercises so that you can explain various... The exercise by checking their results is through examples new words didn t. Students understand what they are written down, but no so saying a word. *., British English there can be some silent Rs ) silent R ( no words English. |
Advancing Indigenous Peoples’ Rights in Mexico
In Mexico, 15% of the population identifies itself as indigenous. In the southern state of Oaxaca alone, 56% of people consider themselves indigenous, divided into around 16 ethnic and linguistic groups, in addition to a small population of African descent.
During her recent visit to Oaxaca, UN Human Rights chief Navi Pillay said that the UN Declaration on the Rights of Indigenous Peoples “inspires and motivates movement towards a world in which the basic human rights of indigenous peoples are respected.”
However, she pointed out that “it is one thing to have proclaimed the Declaration, and it is quite another to see it implemented.” She added that “while some progress has been made towards its implementation, much remains to be done.”
"Indigenous women suffer two types of discrimination, as indigenous people and as women," Pillay said. “Just as there is still a long path in the wider, non-indigenous societies to achieve gender equality,” she stressed, “indigenous peoples also need to give women a more prominent role.”
In her inspiring intervention she urged Mexico's indigenous leaders "to renew your commitment to improve the situation of women and promote their political participation and their leadership."
Pillay stressed that the efforts of indigenous women in the struggle for indigenous peoples’ rights needed to be acknowledged, and “indigenous peoples must fight the resistance against them taking positions of power even if it comes from inside.”
Under the Constitution, indigenous peoples in Mexico have the right to self-determination, which includes, among others, the rights to autonomy, education, infrastructure and non-discrimination.
However, each Mexican state has its own constitution and can enact its own legislation. In some cases, as regards indigenous peoples, local legislation has limited the provisions recognized in the national constitution.
As a consequence, the protection of indigenous peoples’ rights varies greatly from state to state. While some states have established a wide range of policies aimed at promoting indigenous peoples’ rights, others have not developed an institutional framework at all.
Mexican indigenous peoples continue to suffer discrimination in all spheres of public life. Many, especially women, receive arbitrary or disproportionate sentences in criminal courts. Political participation remains extremely marginal.
According to several indigenous organizations, the main problems suffered by indigenous peoples in Mexico are linked to land and territories, natural resources, administration of justice, internal displacement, bilingual education, language, migration and constitutional reforms.
They are also more likely to live in poverty than the non-indigenous population. During his recent visit to Mexico, Olivier De Schutter, the UN Special Rapporteur on the Right to Food, warned that 19.5 million Mexicans, approximately 18% of the population, are food insecure. The overwhelming majority of them live in rural areas, with a disproportionate number of indigenous people among them.
Pillay said that the promotion and protection of the rights of indigenous peoples remained a key priority for her Office. “In particular, we promote and use the UN Declaration on the Rights of Indigenous Peoples as our framework for action and to further the advancement and protection of indigenous people’s rights.”
Pillay’s official visit to Mexico will end on 9 July. She is scheduled to discuss rights issues with President Felipe Calderón, lawmakers and the Ombudsman at the federal and local levels, as well as with non-governmental organizations.
7 July 2011 |
CBSE class 6 Mathematics syllabus, question papers, online tests and important questions as per CBSE syllabus. Notes, test papers and school exam question papers with solutions. Main topics are Knowing our Numbers, Whole Numbers, Playing with Numbers, Basic Geometrical Ideas, Understanding Elementary Shapes, Integers, Fractions, Decimals, Data handling, Mensuration, Algebra, Ratio and Proportion, Symmetry, Practical Geometry.
Number System (60 hrs)
(i). Knowing our Numbers:
Consolidating the sense of numberness up to 5 digits: size, estimation of numbers, identifying smaller, larger, etc. Place value (recapitulation and extension); connectives: use of the symbols =, <, > and use of brackets; word problems on number operations involving large numbers up to a maximum of 5 digits in the answer after all operations. This would include conversions of units of length and mass (from the larger to the smaller units) and estimation of the outcome of number operations. Introduction to a sense of the largeness of, and initial familiarity with, large numbers up to 8 digits, and approximation of large numbers.
(ii). Playing with Numbers:
Simplification of brackets; multiples and factors; divisibility rules of 2, 3, 4, 5, 6, 8, 9, 10 and 11 (all these through observing patterns; children would be helped in deducing some and then asked to derive some that are a combination of the basic patterns of divisibility). Even/odd and prime/composite numbers; co-prime numbers; prime factorization: every number can be written as a product of prime factors. HCF and LCM; prime factorization and division method for HCF and LCM; the property LCM × HCF = product of the two numbers. All this is to be embedded in contexts that bring out the significance and provide motivation to the child for learning these ideas.
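The property LCM × HCF = product of the two numbers can be checked with a short sketch. The code below is illustrative only and not part of the syllabus text; it uses Euclid's division-based algorithm for the HCF (the syllabus names the prime-factorization and division methods, of which this is the compact form), and the function names are my own.

```python
# Illustrative sketch: verifying the property HCF x LCM = a x b.

def hcf(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b) with
    # (b, a mod b) until the remainder b becomes zero.
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # The LCM follows directly from the property LCM x HCF = a x b.
    return a * b // hcf(a, b)

# Check the property on a few pairs, including a co-prime pair (7, 13).
for a, b in [(12, 18), (15, 20), (7, 13)]:
    assert hcf(a, b) * lcm(a, b) == a * b

print(hcf(12, 18), lcm(12, 18))  # 6 36
```

For co-prime numbers such as 7 and 13 the HCF is 1, so the LCM is simply their product, 91.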
(iii). Whole numbers
Natural numbers, whole numbers, properties of numbers (commutative, associative, distributive, additive identity, multiplicative identity), number line. Seeing patterns, identifying and formulating rules to be done by children. (As familiarity with algebra grows, the child can express the generic pattern.)
(iv).Negative Numbers and Integers
How negative numbers arise, models of negative numbers, connection to daily life, ordering of negative numbers, and representation of negative numbers on the number line. Children to see patterns, identify and formulate rules. What integers are; identification of integers on the number line; operations of addition and subtraction of integers; showing the operations on the number line (addition of a negative integer reduces the value of the number); comparison of integers; ordering of integers.
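The number-line rules above can be sketched in a few lines. This is a classroom-style illustration of my own, not part of the syllabus; the function name is hypothetical.

```python
# Illustrative sketch: on the number line, adding a positive step moves
# right, and adding a negative step moves left (reducing the value).

def move_on_number_line(start, step):
    # The result of start + step is where the arrow lands.
    return start + step

# Addition of a negative integer reduces the value:
assert move_on_number_line(3, -5) == -2   # 3 + (-5) lands 5 units left
# Subtracting a negative integer is the same as adding its opposite:
assert 4 - (-2) == 4 + 2
# Ordering of integers: negatives sort below zero and the positives.
assert sorted([-3, 2, -1, 0]) == [-3, -1, 0, 2]
print(move_on_number_line(3, -5))  # -2
```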
(v). Fractions: Revision of what a fraction is; fraction as a part of a whole; representation of fractions (pictorially and on the number line); fraction as division; proper, improper and mixed fractions; equivalent fractions; comparison of fractions; addition and subtraction of fractions (avoid large, complicated and unnecessary tasks; moving towards abstraction in fractions). Review of the idea of a decimal fraction; place value in the context of decimal fractions; interconversion of fractions and decimal fractions (avoid recurring decimals at this stage); word problems involving addition and subtraction of decimals (two operations together on money, mass, length and temperature).
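Several of the fraction topics listed above (equivalent fractions, comparison, addition, and interconversion with terminating decimals) can be demonstrated with Python's standard `fractions` module. This is an assumed worked example of mine, not syllabus material.

```python
from fractions import Fraction

# Equivalent fractions: 1/2 and 2/4 name the same number.
assert Fraction(1, 2) == Fraction(2, 4)

# Comparison of fractions:
assert Fraction(3, 4) > Fraction(2, 3)

# Addition of fractions with unlike denominators:
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)

# Interconversion with (terminating) decimal fractions:
assert float(Fraction(3, 4)) == 0.75        # fraction -> decimal
assert Fraction("0.25") == Fraction(1, 4)   # decimal -> fraction

print(Fraction(1, 2) + Fraction(1, 3))  # 5/6
```

`Fraction` automatically reduces to lowest terms, which mirrors the hand method of finding a common denominator and simplifying.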
Algebra (15 hrs.)
INTRODUCTION TO ALGEBRA
Ratio and Proportion (15 hrs.)
Geometry (65 hrs.)
(i). Basic geometrical ideas (2-D):
Introduction to geometry. Its linkage with and reflection in everyday experience.
Line, line segment, ray.
Open and closed figures.
Interior and exterior of closed figures.
Curvilinear and linear boundaries
Angle — Vertex, arm, interior and exterior,
Triangle — vertices, sides, angles, interior and exterior, altitude and median
Quadrilateral — Sides, vertices, angles, diagonals, adjacent sides and opposite sides (only convex quadrilateral are to be discussed), interior and exterior of a quadrilateral.
Circle — Centre, radius, diameter, arc, sector, chord, segment, semicircle, circumference, interior and exterior.
(ii). Understanding Elementary Shapes (2-D and 3-D):
(iv). Constructions (using straight edge, scale, protractor, compasses): drawing of a line segment
Mensuration (15 hrs.)
CONCEPT OF PERIMETER AND INTRODUCTION TO AREA
Introduction and general understanding of perimeter using many shapes; shapes of different kinds with the same perimeter. Concept of area; area of a rectangle and a square. Counter-examples to different misconceptions related to perimeter and area.
Perimeter of a rectangle – and its special case – a square. Deducing the formula of the perimeter for a rectangle and then a square through pattern and generalization.
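The generalization described above can be written out directly: going around a rectangle gives P = l + b + l + b = 2 × (l + b), and the square is the special case l = b, so P = 4 × side. The sketch below is illustrative (the function names are my own) and also shows the counter-example mentioned earlier, that shapes with the same perimeter can have different areas.

```python
# Illustrative sketch: deducing the perimeter formula for a rectangle
# and its special case, a square.

def perimeter_rectangle(length, breadth):
    # Walking around the rectangle: length + breadth + length + breadth.
    return 2 * (length + breadth)

def perimeter_square(side):
    # A square is a rectangle whose length equals its breadth.
    return perimeter_rectangle(side, side)

# A 6 x 2 rectangle and a 4 x 4 square share the same perimeter...
assert perimeter_rectangle(6, 2) == perimeter_square(4) == 16
# ...but their areas differ (12 vs 16), so equal perimeter does not
# imply equal area.
assert 6 * 2 != 4 * 4

print(perimeter_rectangle(6, 2))  # 16
```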
Data handling (10 hrs.)
(i). What is data? Choosing data to examine a hypothesis.
(ii). Collection and organization of data - examples of organizing it in tally bars and a table.
(iii). Pictograph: need for scaling in pictographs; interpretation and construction.
(iv). Making bar graphs for given data; interpreting bar graphs.
There's an ancient Persian proverb that says, "When you understand how to do a thing, the doing is easy; if you find it difficult you do not understand it." There are of course numerous homestead activities where a basic understanding can make the difference—not only between making a thing simple or difficult, but between a gratifying success or disheartening failure. And nowhere on the homestead is this dichotomy more evident than when one attempts to modify plant environment by the use of a forcing structure.
Some types of plant shelter are simple and easily understood; a shade or windbreak screen, an arbor or even a cold frame are rational structures requiring minimum knowledge to construct and manage. But it's a different ball game when a homesteader attempts to modify plant environment in a greenhouse situation.
A greenhouse is something more than a sun trap and a light trap for the benefit of plant growth; its complexity lies in the fact that plant forcing, itself, is a highly complicated affair. In a greenhouse there exists a so-called trinity of plant ecology, which necessitates a balance between light (heat), moving air, and controlled humidity. Temperature, first of all, affects plant growth because it directly influences such internal processes as photosynthesis (food manufacture). Plant growth also requires respiration, which is energy generated by the breaking down of foods manufactured by the plant. Now to illustrate how this trinity principle works: during the day, sunlight promotes plant growth through photosynthesis; plants absorb light energy to reduce carbon dioxide in the air to sugar. High daytime temperatures require high relative humidity and high soil moisture to balance the increased water loss through the plant. We see here that the plant environment includes not only the vegetative (above-ground) considerations such as temperature, humidity, radiation, air movement and gas content of the air; there is also the root environment to consider: root temperature, soil moisture, plant nutrients, and soil structure. And there is yet another complication: not only do different plants require different environments, but the nighttime factors are different from the daytime ones. At night photosynthesis stops and reactions associated with reproduction occur; a low temperature at night produces growth, flowers and fruit.
Much of my understanding and appreciation of greenhouse functions grew out of a brief 1957 visit with F.W. Went, then director of the Earhart Plant Research Laboratory in Pasadena. Through lengthy and painstaking experiments, Went found optimal temperature and humidity requirements. Tomatoes, for instance, require an optimal daytime temperature of 80 to 90 degrees Fahrenheit and a nighttime temperature of 65 degrees F; the optimal daytime humidity was found to be 50 to 80%, and the nighttime humidity 95%. Went's findings proved important to the furtherance of plant-growth knowledge, and they also pointed to some obvious inefficiencies of conventional greenhouse design.
The "greenhouse effect" is an expression which applies to a building having excessive radiation buildup. As one would suspect, greenhouses are troubled with "greenhouse effect" . . . so is our atmosphere. Atmospheric vapor filters shortwave solar radiation (ultraviolet). Water vapor, however, is transparent to visible light, which warms the earth and re-radiates longwave (infrared) rays back to the atmosphere. Some of this infrared heat is absorbed by the atmosphere, and some is reflected back to earth. The earth's atmosphere acts like glass in a greenhouse: opaque to longwave but transparent to shortwave radiation. In a greenhouse situation this effect works in much the same fashion: the ground and vegetation inside are heated by the transmission of ultraviolet rays from the sun. These contents then give off heat in the form of infrared radiation. Window glass, however, will not allow these longwave radiations to escape so they are retained (to the actual detriment of the vegetation inside).
Heating from strong radiation reduces the nighttime humidity of the air when a high water-saturation is especially needed. Artificial heating also tends to lower the relative humidity. Went overcame these obstacles in his experimental greenhouses by employing elaborate, highly sophisticated, artificial conditioning devices. These methods are of course not available — nor even desirable — for homestead greenhouse production. A better home-grown solution is to design a greenhouse structure that provides optimum growing conditions.
A little greenhouse research reveals the fact that, although Washington and Jefferson both had greenhouses, the oldest reported forcing structure in the U.S. was not a greenhouse as we know it today. It was, rather, a pit covered with glass on the south side, and earth insulation on the north. This so-called pit greenhouse was built into the side of a Waltham, Massachusetts hill about 1800. I found that the pit greenhouse is practically unknown among horticultural circles, yet it proves to be a far more sensible, economical, and efficient forcing structure.
In principle the sun pit is an "unheated" greenhouse. That is, it relies entirely upon solar and ground heat rather than auxiliary furnaces — which are always required in conventional greenhouses.
Soon after learning of Went's greenhouse research I had occasion to include an unheated pit greenhouse into a rambling, adobe ranch house designed for the Morgan Washburn family in Oakhurst. About the only unique feature of this greenhouse (besides the obvious pit-heating effect) was its incorporation as an important annex to the house; one could stand in the kitchen and pick salads from the greenhouse bench.
Tom Powell featured the Washburn greenhouse in an article for ORGANIC GARDENING AND FARMING magazine (January 1959), and notoriety from this item helped to initiate me into an expanding fraternity of greenhouse freaks: designers, builders and growers. For the next ten years I amassed an impressive working knowledge and greenhouse construction experience. I found greenhouse enthusiasts to be, for the most part, exciting and imaginative people. Witness the Nearing 9 by 18 sun pit, which Helen and Scott kept active all year in Vermont: tomatoes and peppers were grown through the summer months; Chinese cabbage, celery, parsley and chives all winter, with no supplemental heat!
There are both ephemeral cranks and foremost representatives of science involved in the study of forcing plants in artificial structures. Sometimes the sifting out of the true and beautiful is not all that easy; even scientific opinion raises questions and problems that seem unanswerable in our lifetime. But there is much in this information that can be used to advantage by today's homestead builder. Are you ready?
An Explanation of Light and Heat in the Greenhouse
Let's begin again with light and heat. To function properly, a greenhouse requires maximum light. But admitting desirable light also admits a possibly excessive and undesirable temperature buildup. High temperature causes plant respiration, which tends to disturb the metabolic process. It is the infrared, or heat, region of the solar spectrum that causes this temperature increase. The greenhouse operator who customarily paints whitewash on the glass for shade, thereby reducing entry of shortwave rays, is sensitive to plant respiration. Opaque shading is a shortsighted solution, however, because the balance of light rays (fully one-half of the rays) is thereby inhibited on the other end of the spectrum, restricting the narrow ultraviolet (shortwave) rays.
The "greenhouse effect" also takes place as a result of faulty (e.g., standard) greenhouse design. As the contents of a greenhouse are heated, this interior heat is given off in the form of infrared radiation. Though given off, the heat never leaves the greenhouse: window glass does not transmit longwave infrared radiation. Heat consequently builds up inside, and plants "burn". Furthermore, window glass admits only about 5% of the ultraviolet rays, which really makes glass a health hazard for man as well as for plants. Ultraviolet radiation controls bacterial and viral populations, and when the rays are filtered out upper respiratory troubles are apt to occur, especially during winter months in cold climates.
The obvious solution here is to use a type of translucent material which admits the maximum amount of light, and maximum quantity of ultraviolet radiation and the minimum degree of infrared intensification. Fiberglass plastics meet these specifications: as much as 95% of the ultraviolet rays are admitted (as against 5% for window glass). Glass fibers and crinkled surfaces diffuse infrared heat rays, making this material almost a perfect solution for greenhouse coverings.
Light, as we have seen above, is the energizer for the primary growing process known as photosynthesis. The "white" light of the visible range of wavelengths is actually made up of tones of violet, blue, green, yellow, orange and red; light projected through a prism will demonstrate this range of color. Scientists have for a long time been interested in the ultraviolet waves, known also as actinic rays. In fact, early Egyptians treated various diseases by exposing patients to sun rays filtered through a blue quartz lens. Actinic rays are known as the decomposing, or chemical, rays of the sun. They penetrate through solid matter and are thought to have the power of setting up a vibration which, in matter that is susceptible to it, sets up a counter vibration.
Some of the actinic rays that shine through chlorophyll are absorbed. From plant-growing research that extends back to 1880, we know that in photosynthesis, plants use more light from the blue and red parts of the spectrum. Little use is made of green, yellow, or other actinic rays . . . in fact, violet rays actually inhibit plant growth.
The accompanying graph was compiled from research data supplied in part by the Philips Research Laboratory, Eindhoven, Netherlands. Three processes are illustrated: carbon dioxide absorption, chlorophyll formation, and chlorophyll synthesis (photosynthesis). Carbon dioxide is absorbed into the plant through the stomata, located in the epidermis of leaves (oxygen is also transpired through the stomata). Stomata open under the influence of light, and are more widely open in the presence of blue light than either red or green. Evaporation and photosynthesis are intensified and chlorophyll production is accelerated when exposed to blue light.
Those of us living in mountainous apple country can testify to the effect that light plays in producing red pigment (anthocyanin) in apples. Ample amounts of late summer sunshine produce redder apples. A simple experiment can be performed to demonstrate the effect blue light has in producing anthocyanin: using a simple prism, project a solar spectrum on a green apple. The only part of the apple that will turn red is that exposed to the blue and near-ultraviolet end of the spectrum.
One greenhouse manufacturer (Lifelite Corporation, Concord, Calif.) promotes a bluish-red film colorant to absorb ultraviolet and green wavelengths. Red wavelengths are shifted and intensified. These self-adhering sheets can be used as reflectors from indoor fluorescent units, or on outdoor greenhouse panels. The degree of chlorophyll absorption under the influence of red light is significant, as illustrated in accompanying graph.
I have yet to find a viable explanation of what actually takes place when actinic rays are absorbed in chlorophyll. It must certainly have something to do with cellular decomposition. Growth equals decay, remember?
Physicians who employ color therapy explain the principles as far as the human body is concerned: the absorptive quality of actinic rays has the faculty of starting every nerve cell in the body into active vibration. This vibration stimulates into action the proper interchange of fluids in the cells of the muscular structure, thus promoting cellular subdivision and new formation. Actinic rays affect chemical blood composition more than anything else. Blood, of course, repairs all illness: all waste matter is swept out through blood circulation. Color, supposedly, has a definite oscillatory frequency which corresponds to a similar oscillation in one or more of our body organs.
Dr. Dinshah Ghadiali, founder of the Spectro-Chrome Institute, Malaga, New Jersey, and inventor of the one-time controversial "Spectro-Chrome Metry" equipment for localized color treatment, claims that all fevers are caused by an excess of the chemical elements hydrogen and carbon. These elements are localized by the use of his special equipment: red and yellow attuned color waves seem to be present. Oxygen is necessary to eliminate the hydrogen and carbon elements. In fever, the respiration does increase, giving a larger intake of oxygen, which converts the hydrogen into water and the carbon into carbon dioxide, both of which are excreted. Oxygen "burns" out the hydrogen and carbon. It is made more available to the body through the single attuned color wave of blue.
Kate Baldwin, M.D., F.A.C.S., former Senior Surgeon, Woman's Hospital, Philadelphia, says the following about the therapeutic value of light and color (as quoted from Atlantic Medical Journal, April 1927):
For about six years I have given close attention to the action of colors in restoring the body functions, and I am perfectly honest in saying that, after nearly thirty-six years of active hospital and private practice in medicine and surgery, I can produce quicker and more accurate results with colors than with any or all other methods combined . . . and with less strain on the patient. In many cases, the functions have been restored after the classical remedies have failed.
In about 1900, Arthur Schuster, Professor of Physics at the University of Manchester, worked on a lamp that would simulate actinic rays for the treatment of human disease. It took 12 years for him to perfect the quartz lamp with a side band of the actinic ray sufficient for therapeutic use. The quartz lamp is used today by some physicians. Treatment is not pleasant but results are said to be outstanding.
Several years ago I had occasion to build an experimental greenhouse for the McCoy family in Oakhurst. In one section of the greenhouse we used blue-tint fiberglass panels. Results from the use of blue fiberglass were immediately apparent: the growth rate increased, the plant fiber strengthened, yields were greater and the taste of vegetables improved. The McCoy experience fully substantiated the blue-glass theories postulated a hundred years ago by General A.J. Pleasonton. In 1861, this inventive genius built a 26-foot by 84-foot greenhouse with every eighth row of blue-colored glass. His results were rather astonishing (as reported in his book, THE INFLUENCE OF THE BLUE RAY OF THE SUNLIGHT AND THE BLUE COLOUR OF THE SKY, In Developing Animal and Vegetable Life, In Arresting Disease, and In Restoring Health in Acute and Chronic Disorders to Human and Domestic Animals; Philadelphia, 1876).
At the end of five months the grapevines in his greenhouse produced 1,200 pounds of fruit; growth reached 45-foot lengths, with stems 1 inch in diameter. Consider his explanation for this fabulous yield:
That blue light of the firmament, if not itself electro-magnetism, evolves those forces which compose it in our atmosphere, and applying them at the season, viz, the early spring, when the sky is bluest, stimulates, after the torpor of winter, the active energies of, the vegetable kingdom, by the decomposition of its carbonic acid gas—supplying carbon for the plants and oxygen to mature it, and to complete its mission.
In a second experiment, General Pleasonton introduced diseased livestock into a greenhouse which had equal proportions of white and blue glass. After a short while the animals regained their health and increased remarkably in weight. After much experimentation he found that an 8-to-1 proportion of white to blue glass should be used in vegetable production, and a 1-to-1 proportion for animals.
Interested readers might refer to the patent that General Pleasonton filed, IMPROVEMENT IN ACCELERATING THE GROWTH OF PLANTS AND ANIMALS, September 26, 1871, No. 119,242. The following is taken from the original patent:
...combining the natural light of the sun transmitted through transparent glass with the natural light of the sun transmitted through blue glass or any of the varieties of blue, such as indigo, or violet . . .
I do not pretend to be the first discoverer of the vitalizing and life-growing qualities of the transmitted blue light of the solar rays, and its effect in quickening life and intensifying vitality.
I have found, upon patient and long experiments, running through many years, that plants, fruits of plants, vines and fruits of vines and vegetables so housed and enclosed as to admit the natural light of the sun through ordinary glass, and the transmitted light of the solar rays through the glasses of blue, violet or purple colours in the proportion of eight of natural light to one of the blue or electric light, grow much more rapidly, ripen much quicker, and produce much larger crops of fruit than the same plants housed and treated with the natural light of day, the soils and fertilizers and treatment and culture being identical in both cases and the exposure the same.
I have also discovered, by experiment and practice, special and specific efficacy in the use of this combination of the caloric rays of the sun and the electric blue light in stimulating the glands of the body, the nervous system generally, and the secretive organs of man and animal. It therefore becomes an important element in the treatment of diseases, especially such as have become chronic or result from derangement of the secretive, perspiratory or glandular functions as it vitalizes and gives renewed activity and force to the vital currents that keep the health unimpaired, or restores them when disordered or deranged.
Orientation of a Greenhouse
Greenhouse experts, like gardening experts, are never in agreement as to the proper direction to orient the structure or to plant the crops. A southwest exposure may provide more light, but in the afternoon the energy of the plant has started to wane. So actually a southeast exposure is best, as the morning hours for a plant are most productive. Of greater importance than orientation is the slope of greenhouse walls; the amount of light transmitted or reflected depends upon the angle that the light beam makes with the greenhouse wall. The angle of incidence should not be less than 70 degrees. Some solar heating engineers use the formula "Latitude + 13 degrees = angle of glass to horizon" to get maximum winter penetration and maximum summer reflection.
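That rule of thumb is simple arithmetic; a minimal sketch (the function name and example latitude are my own, not from the original):

```python
def glazing_angle(latitude_deg):
    """Rule-of-thumb tilt of south-facing greenhouse glass, in degrees
    from horizontal: latitude plus 13 degrees, for maximum winter
    penetration and maximum summer reflection of the sun's rays."""
    return latitude_deg + 13

# Example: a homestead at 42 degrees north latitude
print(glazing_angle(42))  # 55
```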
About 50% of the total sunlight striking a greenhouse is dissipated. This loss can be reduced considerably by reflecting the light from the southern half of the sky against a north-facing wall. This north wall should have a smooth white surface for maximum reflection. And of course this north wall should be properly insulated. From my accompanying drawings it becomes clear that the ideal greenhouse should have a dome shape, cut vertically along an east-west line.
My half-dome sun pit greenhouse was designed to meet theoretical solar conditions—both from the standpoint of maximum mid-winter absorption and mid-summer reflection of the sun's rays. The accompanying drawing illustrates some of these solar considerations, and how they might influence greenhouse design. As shown, the noontime altitude of the sun gives only a minor part of necessary design criteria; among other things one needs to know the azimuth angles. And of course these sun angles vary according to latitude, north or south of the equator.
So following theoretical considerations, the practical approach to homestead greenhouse design is to build a scale model and investigate the yearly sun path with a heliodon. A heliodon is simply a simulated sun machine. It gives an accurate solar account for any time of the day, at any season of the year, for any specific latitude.
The heliodon that I have used in the past (mostly in conjunction with architectural models) was recently revised so large-size three-dimensional homestead layouts could be designed and analyzed. I also simplified the fabrication, to make it feasible for any homesteader-builder to have his very own. With this machine, one can determine optimum building location and orientation, roof overhang and window placement. It is especially valuable for locating new trees: shade tree size and positioning, in particular. The solar effect on every homestead building can be immediately perceived, as with garden, fields, wood lot, and general land topography. All of this contributes to that all-important, number-one factor of homestead planning: make your mistakes on paper.
What is Rheumatic Fever?
Rheumatic fever is an inflammatory disease affecting different areas of the body such as joints, the nervous system and heart.
Not everyone with a streptococcus infection will go on to develop this inflammatory disease. Precisely how this bacteria causes rheumatic fever remains inconclusive. (2)
Of those affected by an untreated strep bacteria infection, about five percent will develop rheumatic fever. In contrast to strep throat and scarlet fever, this disease is not contagious. (3)
Research into the condition has shown a possible hereditary connection. Some people might have a specific genetic makeup making them susceptible to rheumatic fever.
If the condition is not treated, it could lead to serious complications such as heart damage.
Rheumatic fever is primarily diagnosed in children and young adults, being most prevalent between the ages of six and 16. It generally develops 14 to 28 days following an infection such as strep throat or scarlet fever. Symptoms can include the following:
Despite the name of the condition, fever is among the less severe symptoms of this condition. However, it may come in bouts until the infection has cleared.
Those affected by rheumatic fever are inclined to experience nosebleeds during the illness. (5)
The small blood vessels inside the nose become very vulnerable and are easy targets for trauma. They may break when sneezing or blowing the nose. This is thought to be a result of autoimmune defenses. The veins enlarge to make room for antibody cells to reach the infection. (6)
Joint Pain and Swelling
One of the primary effects rheumatic fever can have on the body involves the joints.
People with this disease may encounter joint pain and swelling in and around the joints. It might even evolve into arthritis, a more severe inflammation. If so, the affected areas may appear red and warm. (8)
This symptom will generally affect the joints in the knees, elbows, ankles or wrists. However, it may begin in one location and spread to another.
The pain can be mild or severe, depending on the person. In older children and young adults the pain can be quite serious. It will usually come in episodes that may last for up to a few days. Fortunately, it’s likely to subside within a week or so.
Small nodules or bulges formed of fatty tissues may emerge. These are especially prevalent around the bony areas, such as the elbows. They will typically develop three or four at a time and may persist for a couple of weeks. (9)
The skin is likely to become pink or slightly red in color. Little red spots will appear that will expand outwards, usually in a creeping pattern, creating a ring-shaped rash. The middle will lose its color and look significantly pale. (10)
These rashes are generally noticed on the chest and back, but may also grow on the legs and arms.
Rheumatic fever can affect the functions of the brain. Sometimes, the affected can display emotional disturbances or instability. He/she may enter episodes of spontaneous crying or laughter. The person can also exhibit restlessness or signs of obsessive-compulsive disorder (OCD). (11)
Chorea, sometimes called Sydenham's chorea or St. Vitus's dance, is a condition in which parts of the body will make impulsive, quick jerks. These are very likely to occur in the face. The affected may display grimaces, grins or frowns. (12)
Rheumatic fever remains a mysterious disease. Experts have found connections in genetic makeup with untreated strep bacteria infections. Exactly how it develops is yet to be understood. (13)
However, what is better understood is how the condition gives rise to complications. If this disease is left untreated or is recurrent, it may advance into rheumatic heart disease or neurological damage. (14)
Rheumatic Heart Disease
This development will weaken the heart and could eventually lead to heart failure and become a permanent complication. It will damage different parts such as the valves, linings and the muscle itself. Symptoms may not always be apparent, but they can include general signs of heart problems such as, shortness of breath and chest pains. (15)
It occurs when antibody cells produced by the immune system mistakenly attack cells in the heart. This attack is directed at the outer lining of a heart valve.
The reaction will result in an inflammatory response in the different parts of the heart. Swelling will occur and eventually cells will absorb excess fluids. This will lead to an abnormal growth, thus creating scar tissues.
This process will then repeat itself. The antibodies recognize other protein cells and initiate another attack. These reactions will continue to recur, further damaging the heart.
This condition is another complication if rheumatic fever remains untreated or has an acute onset. (16)
Antibody cells will attack the nervous system, damaging the linings. The result is impairments of neurological processes. The patient may begin to exhibit changes in handwriting. Quick, uncontrollable movements, generally in the face, and loss of fine motor skills usually follow.
As the damage continues, psychological changes will take place, such as loss of emotional control and behavioral issues. The affected may burst out in laughter for no apparent reason, or display unprovoked sadness.
If these attacks are left untreated, they may eventually lead to a mental disorder called psychosis. This will meddle with thoughts and emotions to such extent that the affected might lose touch with reality. (17)
There is no particular test which will show a precise result of rheumatic fever. Doctors will typically base the diagnosis on a physical examination and a symptom questionnaire. (18)
If these should indicate rheumatic fever, blood samples will be examined to check for the presence of the strep bacteria. Further testing can include chest X-rays and echocardiogram.
When a diagnosis is established, a treatment plan will be prescribed.
Doctors will generally treat rheumatic fever with rounds of antibiotics and anti-inflammatory drugs such as aspirin or corticosteroids. These will eliminate the bacteria and reduce the symptoms.
Antibiotics are prescribed even in severe cases. They may be needed for prolonged periods to prevent the bacteria from reinfecting the person. Parents are often advised to continue a low dose for a few years, even if no symptoms are present. (19)
What is rheumatic fever? Rheumatic fever is an illness causing inflammation in different parts of the body.
What are the signs of rheumatic fever? Initial symptoms can include nosebleeds, fever and abdominal pain. As the inflammation continues to spread, manifestations can involve arthritis-like signs. Other signs include changes in the skin, such as bulges or nodules, redness and ring-shaped rashes. If the illness progresses the affected may exhibit personality changes, behavioral issues and uncontrollable emotions.
How do you develop rheumatic fever? A person must first be infected with group A streptococcus bacteria. Secondly, experts believe that there needs to be an impairment in the genetic makeup, as not everyone goes on to acquire the disease and it is not contagious. There are specific risk factors which could increase the chances, such as poor living conditions, overcrowded places and insufficient access to medical care. (20, 21)
How are you diagnosed for rheumatic fever? Doctors usually begin with a physical examination and questions about the symptoms. This is generally followed by X-rays of the chest and an echocardiogram to check for heart damage. Blood samples might also be drawn to check for the bacteria.
What is the best treatment of rheumatic fever? Treatment is based on antibiotics and anti-inflammatory medication such as aspirin. These will be used to kill the bacteria. Once eliminated, symptoms generally subside.
What are the long term complications of rheumatic fever? Complications will depend on how much the disease affects the heart. If the destruction is severe, it could progress to rheumatic heart disease. Additionally, those affected have an increased chance of contracting the disease again. Recurrent episodes of rheumatic fever tend to result in significant heart damage. (22)
Is rheumatic disease considered a disability? It can lead to brief mental disabilities, but these symptoms generally subside once treatment is initiated. (23)
Is there any cure for rheumatic fever? The bacteria can be eliminated and the person will generally feel better. However, there are chances of reinfection. Doctors will usually prescribe prolonged periods of low doses of antibiotics for children, to prevent a recurrent episode.
Is rheumatic fever life threatening? Yes, it can lead to heart damage which can be fatal.
Rheumatic fever is an inflammatory disease. It might occur following strep bacteria infections such as strep throat or scarlet fever.
Rheumatic fever will cause an inflammatory response in various parts of the body. If left untreated it could progress to heart damage and neurological impairments. Little is yet understood of its exact causes, but fortunately, it can be treated.
Children with auditory processing disorders have difficulties recognizing subtle differences between sounds in words. It affects their ability to process spoken language. There are several kinds of auditory processing issues. The symptoms can range from mild to severe.
The ability to notice, compare and distinguish between (speech) sounds. For instance, the child doesn’t hear the difference between the words “cat” and “hat”.
Auditory discrimination difficulties can also result in not knowing which auditory input to focus on in a noisy setting. For example, when you call your child while he or she is playing in a noisy playground, he or she doesn't seem to hear you.
The ability to recall what you’ve heard, either immediately or when you need it later. It can be difficult for your child to remember what you’ve told him or her to do. For example, when you say “brush your teeth and put on pyjamas”, the child doesn’t know what to do after he or she has finished brushing.
The ability to understand and recall the order of sounds and words. A child might say “melonade” instead of “lemonade” or hear the number 725 but write 572.
Common symptoms for children with auditory processing difficulties:
- Poor musical ability
- Easily distracted by background noise or sudden noises.
- Find it hard to follow spoken directions, especially when there’s more than one direction
- Often asks for repetition after someone has spoken, or often says “huh?” or “what?”
- Difficulties following conversations
- Having trouble remembering details
- Difficulties learning songs or rhymes.
Type 1 diabetes is the inheritable type of diabetes that accounts for about 5–10% of all cases of diabetes. It is an autoimmune disease that turns the body’s immune system against the insulin-producing cells in the pancreas. Eventually, the immune system completely impairs the pancreas’s ability to produce insulin, resulting in uncontrollable blood sugar levels that wreak havoc on the body.
Typically, the only way to manage the blood sugar levels of people with type 1 diabetes is to take insulin. However, recent research suggests that there are other ways to help control blood sugar levels (and potentially reduce the severity of the disease) as well.
Here are the three interventions that have shown promising results:
- Following a standard ketogenic diet
- Restricting the consumption of dairy and wheat products to see if insulin production improves
- Supplementing with vitamin D3 and plenty of sunlight
Although these three suggestions seem simple enough to follow, they will cause dramatic changes in your body and insulin requirements. This is why it is important to read our article on type 1 diabetes and work together with your doctor before making these adjustments to your diet and lifestyle.
According to size, computers are classified into 4 types, namely
- Mainframe computers
- Mini computers
- Micro computers
- Super computers
Mainframe computers occupy large spaces and are sensitive to parameters like temperature and humidity. They have large storage capacity and many accessories or peripherals, and they perform many tasks. They are not user-friendly and can be used only by qualified or trained professionals. They service multiple users running numerous programs, and are used by large corporations, governments, banks, etc.
Mini computers possess less memory and storage than mainframe computers. They are used for data processing, and they too are sensitive to parameters like temperature and humidity. They have fewer peripherals and more limited software than mainframe computers.
Micro computers are also known as personal computers. They are cheap, affordable, user-friendly and easily accessible. They are widely used in companies, offices, households, schools, colleges, etc. They come with accessories like a keyboard, mouse, CPU and monitor.
Super computers are used for complex mathematical operations. They are used for weather forecasting, nuclear simulations, scientific computations, fluid dynamics, cyclone prediction, etc. They are used by research institutions, space centres, weather forecast stations, nuclear power stations, etc.
According to function, computers can be classified into 4 types, namely
- Servers
- Workstations
- Information appliances
- Embedded computers
Servers are computers used to provide services; they are named after the service they provide, such as database, file or web servers.
Workstations are computers intended for a single user, though they can run a multi-user operating system.
Information appliances are portable or handy devices designed to perform simple operations like calculations, games, etc. They have limited memory, limited operational capabilities and limited software. Examples: mobile phones, tablets, etc.
Embedded computers are used inside other machines to serve limited requirements. They execute a program stored in non-volatile memory to operate the intended machine or electronic device. Unlike normal computers they cannot be rebooted, as they are required to operate continuously. Embedded computers are used widely in day-to-day life. Examples: washing machines, DVD players, etc.
Engineering Optical Particle Counters for Smaller Particle Sizes
In order to count nanoparticles, the technology behind ultrapure water particle counters had to be improved. In this blog series, we look at the signal pathway of a typical counter and the areas of optimization and groundbreaking engineering.
How do liquid particle counters work?
The basic anatomy of a liquid particle counter is shown above. Let’s look at what’s happening step by step:
- At first, clean sample fluid flows through the glass capillary. There are no measurable particles, and the laser, focused through the glass capillary, remains a single, fixed beam.
- When particles pass through the beam, they scatter the light in a phenomenon known as Rayleigh scattering. This same phenomenon explains how sunlight scattering off the tiny molecules that make up our atmosphere produces visible color changes throughout the day. In a liquid particle counter, mirrors or lenses capture the light scatter and deliver it to a device sensitive to subtle changes in light (i.e., a photodetector).
- The photodetector converts the scattered light into a measurable electrical signal. Software translates the signal into particle size data. (Big particles produce big signals. Small particles produce small signals.)
- The signal in millivolts becomes equivalent to the height or y-axis value of the graphical representation of data.
- Based on the y-axis value, particles are “binned” or sorted into a choice of size channels, such as 0.5 µm, 1 µm or 5 µm, etc., depending on the particle counter.
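The binning step above can be sketched in a few lines. The millivolt thresholds and channel labels below are illustrative assumptions, not values from any particular instrument:

```python
from bisect import bisect_right

# Hypothetical pulse-height thresholds (mV) separating size channels.
THRESHOLDS_MV = [5.0, 40.0, 900.0]          # boundaries between bins
CHANNELS_UM   = ["<0.5", "0.5", "1", "5"]   # size channel labels (um)

def bin_particle(signal_mv):
    """Sort a photodetector pulse height into a size channel."""
    return CHANNELS_UM[bisect_right(THRESHOLDS_MV, signal_mv)]

# Tally some example pulse heights into channel counts.
counts = {}
for pulse in [2.1, 12.0, 55.0, 1500.0, 7.3]:
    ch = bin_particle(pulse)
    counts[ch] = counts.get(ch, 0) + 1
print(counts)  # {'<0.5': 1, '0.5': 2, '1': 1, '5': 1}
```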
What’s creating the signal?
The intensity of the light scattered by a particle is proportional to the diameter of the particle to the sixth power (d6) when particles are smaller than the wavelength of visible light (averaging 500 nm or 0.5 µm). The electrical signal of the photodetector becomes much smaller as the particle size decreases. For example, a 20 nm particle scatters one million times less light than a 200 nm particle. Even the difference in detection capability between 30 nm and 20 nm requires greater than 10 times the improvement in signal and noise sensitivity. In other words, detecting small particles requires some specialized tools!
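The d6 relationship quoted above can be checked directly; this snippet just evaluates the ratio from the text:

```python
def scatter_ratio(d1_nm, d2_nm):
    """Relative scattered intensity of two particles smaller than the
    wavelength of light: proportional to diameter to the sixth power."""
    return (d1_nm / d2_nm) ** 6

# A 200 nm particle scatters a million times more light than a 20 nm one.
print(scatter_ratio(200, 20))  # 1000000.0

# Going from 30 nm down to 20 nm detection demands over a 10x
# improvement in signal-to-noise sensitivity.
print(scatter_ratio(30, 20))   # ~11.4
```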
Nanoparticle Counter Signal Sensitivity and Noise
When we listen to music on systems from just fifty years ago, we can perceive a difference in sound clarity. A signal, whether it’s from a light source reaching a photodetector or the hit of a tambourine entering a microphone, is always accompanied by some noise. This noise is the primary limitation of optical particle counters.
The above image shows 20 nm calibration spheres passing through an ultrapure water particle counter. The reading with the highest peak is the particle signal. To the left and right, you can see noise of a relatively fixed height. Ensuring the particle signal and the noise can be told apart with no mistakes is key to increasing the range of measurable particle sizes.
Note: A particle’s refractive index also plays a role in how well its signal is read. The same technology can also see many metallic particles 10 nm and less. Why? Because they refract light to a greater extent than other materials.
The PMS-Intel Collaboration to Improve Liquid Nanoparticle Counters
Particle Measuring Systems has been working with Intel for many years developing new technology to improve the capabilities of optical particle counters for fab implementation. Together, they collaborated on improving signal and noise readings of liquid particle counters for nanoparticle contamination in ultrapure water. The cleaner the fluid, the higher the difficulty in matching results between different counters at different stages of filtration. Their combined efforts pushed the boundaries of quality established by prior generations of particle counters for liquids.
In our next blog in this series, we will examine the filter studies performed at Intel and share how their results demonstrated the improvements made to nano-scale particle counting. You can learn more by watching this 2020 Ultrapure Micro event presentation.
Do you want more information on a 20 nm ultrapure particle counter? Particle Measuring Systems is the only manufacturer to reliably provide 20 nm particle monitoring for ultrapure water and chemicals.
Learn about the Ultra DI 20 liquid particle counter.
According to the Wikipedia page for "Cross Validation" :
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. In a prediction problem, a model is usually given a dataset of known data on which training is run (training dataset), and a dataset of unknown data (or first seen data) against which the model is tested (called the validation dataset or testing set). The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).
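To make the quoted definition concrete, here is a minimal k-fold split in plain Python (no libraries; the function name is my own):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation. Each sample appears in a test set exactly once
    across the k folds."""
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    start = 0
    for fold in range(k):
        # Spread any remainder across the first few folds.
        stop = start + fold_size + (1 if fold < remainder else 0)
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, test
        start = stop

for train, test in k_fold_splits(6, 3):
    print(train, test)
# [2, 3, 4, 5] [0, 1]
# [0, 1, 4, 5] [2, 3]
# [0, 1, 2, 3] [4, 5]
```

A model would be fit on each `train` set and scored on the matching `test` set, and the k scores averaged.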
Clearly, cross-validation is a statistical technique used to infer something about some data. As far as I know, there are thousands of such techniques or approaches that we use for solving different kinds of problems in statistics. So why is stats.stackexchange.com called Cross Validated? Shouldn't it be called something like Statistics Stack Exchange, just like the Mathematics one?
I'm sorry if my question is silly, or had been answered previously. Thank you. |
Obesity syndrome: Causes and treatment
Obesity syndrome is a complex disorder involving an excessive amount of body fat. It is not only a cosmetic concern; it also increases the risk of diseases and health problems such as heart disease, diabetes, and high blood pressure.
Approximately 43 million individuals are obese, 21–24% of children and adolescents are overweight, and 16–18% of individuals have abdominal obesity.
- Blood tests are required.
- BMI calculation is required.
- It requires a medical diagnosis.
- It is a chronic condition.
Obesity occurs when a person consumes more calories than they burn. The body stores these extra calories as fat. Obesity can also be due to a medical condition such as Prader–Willi syndrome, Cushing's syndrome, and other diseases and conditions. But these disorders are rare.
- Trouble sleeping
- Shortness of breath
- Sleep apnea, a medical condition in which breathing is irregular and periodically stops
- Osteoarthritis in weight-bearing joints
- Varicose veins
- Skin problems caused by moisture that accumulates in the folds of the skin
- Exercise regularly.
- Eat low-calorie meals.
Medication: Doctors may prescribe diet pills to help you lose weight.
Specialist: A specialist might recommend weight-loss surgery. At Mfine, we always provide the best treatment so that you can get your healthy body back. |
Python provides different visualization libraries that allow us to create different graphs and plots. These graphs and plots help us in visualizing the data patterns, anomalies in the data, or if data has missing values. Visualization is an important part of data discovery.
Modules like seaborn, matplotlib, bokeh, etc. are all used to create visualizations that are highly interactive, scalable, and visually attractive. But these libraries are not designed for diagrams built from nodes and edges, such as flowcharts and network graphs. For creating such graphs and connecting their nodes with edges, we can use Graphviz.
Graphviz is an open-source Python module used to create graph objects that are built from nodes and edges. It is based on the DOT language of the Graphviz software, and in Python it allows us to download the source code of the graph in the DOT language.
In this article, we will see how we can create a graph using Graphviz and how to download the source code of the graph in the DOT language.
We will start by installing Graphviz using pip install graphviz. Note that this Python package is a wrapper: actually rendering graphs to images or PDFs also requires the Graphviz system binaries (the dot tool) to be installed separately.
- Importing Required Libraries
The Digraph class is defined in the graphviz module; we will use it to create graph objects, nodes, and edges. We will create different sample graphs. We will start by importing Digraph.
from graphviz import Digraph
- Initializing the digraph
After importing Digraph, the next step is to initialize it by creating a graph object. Let us create a graph object.
gra = Digraph()
- Create Graphs
For creating graphs we will use the node and edge functions to build different types of graphs. Initially, we will start by creating a node for the graph.
gra.node('a', 'Machine Learning Errors')
This will create different graph nodes, now we need to connect these nodes with edges and then visualize them. Let’s create the edges for these graph objects.
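The article does not reproduce the remaining node and edge calls, so here is a minimal sketch of what they might look like. Only the 'Machine Learning Errors' node appears in the article; the 'Overfitting' and 'Underfitting' labels are illustrative assumptions:

```python
from graphviz import Digraph

gra = Digraph()

# node(name, label): the first argument is the internal node name,
# the second is the text displayed in the rendered graph
gra.node('a', 'Machine Learning Errors')  # node from the article
gra.node('b', 'Overfitting')              # hypothetical child node
gra.node('c', 'Underfitting')             # hypothetical child node

# edge(tail, head) connects two nodes by their names
gra.edge('a', 'b')
gra.edge('a', 'c')
```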
This will create the edges between the graph objects; now let us visualize what we have created.
Here you can see how we created the graph objects (nodes) and then connected them using edges. Now let us see how we can view the source code of the graph we created.
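The article shows this step as a screenshot; with the graphviz package, the DOT source of a graph is available as the source attribute. A self-contained sketch (the 'Overfitting' node is an illustrative assumption):

```python
from graphviz import Digraph

gra = Digraph()
gra.node('a', 'Machine Learning Errors')  # node from the article
gra.node('b', 'Overfitting')              # hypothetical node
gra.edge('a', 'b')

# The DOT source of the graph is available as a plain string
print(gra.source)
```

The printed output is DOT text beginning with "digraph {" that can be fed directly to the Graphviz dot tool.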
This is the source code that can be used in the DOT language to render the graph with the Graphviz graph-drawing software. We can also save and render the source code using the render function. Let us see how we can save it.
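A sketch of the save-and-render step (the filename is an assumption). Note that save only writes the DOT text and needs no external tools, whereas render additionally requires the Graphviz system binaries:

```python
from graphviz import Digraph

gra = Digraph()
gra.node('a', 'Machine Learning Errors')  # node from the article
gra.node('b', 'Overfitting')              # hypothetical node
gra.edge('a', 'b')

# Write the DOT source to disk; this needs no external tools
path = gra.save('machine_learning_errors.gv')

# Rendering to PDF requires the Graphviz "dot" executable on PATH:
# gra.render('machine_learning_errors', format='pdf', cleanup=True)
```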
Rendering will create a PDF of the graph we built, using the filename we assigned.
If we open the pdf which we have created in the above step we will have the output given below.
Now let us see one more example and create a new graph. Let us create a family tree and see how we can visualize it.
gra = Digraph(filename='Family_Tree.gv')  # Filename
# Creating a subgraph whose nodes share the same rank
# (the node definitions inside the subgraphs were not shown
# in the article; the grouping below is an assumption)
with gra.subgraph() as i:
    i.attr(rank='same')
    i.node('X')
    i.node('A')
# Creating a second subgraph with nodes on the same level
with gra.subgraph() as i:
    i.attr(rank='same')
    i.node('C')
    i.node('Y')
# Connecting the edges of the graph
gra.edges(['XC', 'AC', 'CD', 'XY', 'XD', 'XB'])
This is how we have created the family tree; now let us visualize it.
Here you can see the graph objects we created linked to each other using edges. Now let us see the source code for this graph.
In this article, we saw how graphviz is used to create graphs/flowcharts using nodes and edges. We saw how to visualize these graphs, render these graphs to a file and also how to download the source code in DOT language. Graphviz can be used to create many more complex graphs that can be used for different purposes as per requirements. |
In-text attribution is the attribution inside a sentence of material to its source, in addition to an inline citation after the sentence. In-text attribution should be used with direct speech (a source's words between quotation marks or as a block quotation); indirect speech (a source's words modified without quotation marks); and close paraphrasing. It can also be used when loosely summarizing a source's position in your own words. It avoids inadvertent plagiarism and helps the reader see where a position is coming from. An inline citation should follow the attribution, usually at the end of the sentence or paragraph in question.
This training module is geared primarily to Venturing youth who are elected into positions as officers within their Venturing crew, but it can also be used by adult leaders to learn the duties of the officers in the crew. This course is very useful for youth officers to fully understand their roles and responsibilities as they relate to Crew Officers Briefings and Seminars. This course will help the youth develop an annual plan of activities for the crew, as well as acquire leadership and team-building skills. Estimated time to complete: 45 minutes. |
Domesticated chickens have been providing humans with eggs for nearly 8,000 years, but having access to freshly laid eggs is no longer restricted to those living in the country. For the last few decades, the idea of the “urban farm” has seen an increasing number of suburbanites keeping their own poultry. With this renewed interest, you’re not alone in asking: How do chickens lay eggs?
How Do Chickens Lay Eggs?
An Eye to Behold
Funnily enough, the whole egg-laying process begins with a small gland near the chicken's eye. This tiny gland is light-receptive, and when it registers light (either natural or artificial), it triggers the release of an egg cell from the chicken's ovary.
This egg cell travels to the uterus, where it will eventually grow into the egg yolk. Meanwhile, the chicken's uterus gradually fills with a jelly-like substance called albumen, more commonly known as the "egg white".
Seal the Deal
A membrane then begins to develop around the perimeter of the uterine walls, effectively sealing in the egg yolk and albumen. In time, a combination of calcium, salt and water coats this membrane to form a sturdy outer shell.
Let's Get Physical
Once the egg is fully formed, the real work begins. The chicken's muscles start to contract in an effort to release the egg, forcing it to move down the vaginal canal towards an external opening called a vent.
An Unexpected Journey
This vent is also connected to the intestinal canal and is used by the chicken to expel waste. Mid-way through its journey, the egg will reach an internal flap, called a cloaca. The cloaca is like a valve, which only allows one canal access to the vent at a time. Once the egg reaches this point, the cloaca valve descends and blocks the intestinal canal.
Bend and Stretch
Once the egg has almost completed its journey, the chicken will stand up and then lower its back end, which causes the vent to open as wide as possible.
Interesting Facts About Chickens Laying Eggs
Why Are Chickens Eggs Shaped like…Well, Chicken Eggs?
Did you know that the distinctive shape of a chicken's egg actually mirrors the shape of the chicken's uterus? The shell forms along the edges of the uterine wall, so the end result is just a mirror image of the space it was formed in. That's why all chicken eggs are roughly the same shape.
Do Chickens Lay at the Same Time Every Day?
A lot of people assume chickens lay bright and early every morning. In fact, they actually work on a 25 hour cycle, so every day the egg will arrive around 1 hour later than the day before. But because chickens won’t lay in the dark, once their laying cycle reaches dusk, they will skip a day of laying and then the time cycle will start all over again the next morning.
What Do You Call the Tiny Eggs That Chickens Sometimes Lay?
Periodically a chicken may lay a tiny, useless egg. This happens when the egg passes too quickly through the chicken's oviduct. Poultry farmers refer to these as "fart" or "oops" eggs, though that is perhaps not the official scientific name for them.
What If Chickens Won’t Lay in Their Nest Box?
Chickens, surprisingly, can be trained by example. If you have a chicken that lays her eggs wherever she feels like it, leave either a fake or real egg in the designated nesting box. The chicken will get the picture and start laying there instead.
Chickens Eat What?!
Yes, it's a bit disturbing, but chickens will occasionally eat eggs. Generally this only happens if an egg is accidentally broken, and it's nothing to worry about. However, if you have a chicken that starts breaking eggs deliberately so it can eat them (Hannibal Lecter style), then it's best to remove that particular chicken from the flock. |
< 1 hour
A Little Messy
Materials or Fees
Question and Wonder:
- What do you notice about your found objects, just using your eyes?
- What are some of the similarities that you notice? What are some of the differences?
- Once you feel the objects, do you think you will find other similarities and differences?
- How do you think the different objects will feel when you touch them? Make a guess and give each sensation a name. (“squishy” or “hard” or “prickly” etc.)
Imagine and Design:
- Trace around your hand – one hand per piece of paper. Use one sheet of paper per grouping. How many “touch categories” can you create? How many hands will you draw?
- What are your categories? Label your sheets.
- Try classifying things into groups. Which things are small? Big? Green? Soft?
- What will you pick if you were looking for something that crumbles?
- Imagine a story where soft things are hard and cold things are hot. What would happen in the story?
Test and Discuss:
- How do you describe objects that fit into multiple categories?
- Did you know what the item would feel like before you touched it? How?
- What other senses did you use finding the objects? Labeling them?
- Work with a sibling or grown up and put items one at a time in a sock or bag. Can you tell what they are simply by feeling them? Which was the easiest to tell? The hardest? Why?
Did you Know?
Touch is thought to be the first sense that humans develop, according to the Stanford Encyclopedia of Philosophy. Touch consists of several distinct sensations communicated to the brain through specialized neurons in the skin. Pressure, temperature, light touch, vibration, pain and other sensations are all part of the touch sense and are all attributed to different receptors in the skin.
Touch isn’t just a sense used to interact with the world; it also seems to be very important to a human’s well-being. For example, touch has been found to convey compassion from one human to another.
Touch can also influence how humans make decisions. Texture can be associated with abstract concepts and touching something with a texture can influence the decisions a person makes, according to six studies by psychologists at Harvard University and Yale University, published in the June 24, 2010, issue of the journal Science. |
Basic knowledge of NZ natural history
This local course is used in the following course
You will receive an introductory understanding of the natural geological development of the South Island (particularly Fiordland) and its natural flora and fauna. This knowledge will contribute to your skill base as a nature tourist guide.
At the successful completion of this course, you will be able to:
- Demonstrate an introductory knowledge of the South Island’s native flora and fauna and key geological processes.
Learning and teaching strategies
How will we help you to learn?
- Fieldtrips, DVD presentation, roleplays, group work, individual projects, simulations, practical service, demonstrations, lectures
- Interpretation Plans
- Written assessments/test
- Peer review
Student Reading List
- Course web logs, learning and assessment activities
- Hall, M. C. (2003). Introduction to tourism: Dimensions and issues (4th ed.). Australia: Pearson Education Pty Limited.
- [Department of Conservation NZ] |
Avalanches are not identical, but can be assigned to several distinct categories. Slab avalanches are especially dangerous for winter sport participants.
Slab avalanches have a distinct, broad fracture line. They can occur only when a bonded layer of snow (the slab) is lying on top of a weak layer over a sufficiently large area. Triggering requires the application of an additional load and a slope angle of at least 30°. The avalanche is released by a small fracture that initially occurs in the weak layer (initial failure) and then rapidly propagates across it. The extent to which the fracture propagates depends on the properties of the weak layer and on the slab that is lying on top. Once released, the slab slides down the slope. The typical size of a slab avalanche released by winter sport participants is 50 metres wide and 150 to 200 metres long.
Slab avalanches can occur in dry or wet snow, even a long time after any snowfall. They can release naturally (without human assistance) or be triggered at any point within or even outside (in case of remote triggering) the perimeter of the slab. Slab avalanches are the most dangerous type and responsible for more than 90% of the deaths that occur in avalanches.
Slab avalanches can be dangerous even if they are not large. They reach a high speed quickly. A person who releases a slab is often within its perimeter and caught in the avalanche.
Loose snow avalanches fan out from a point of triggering as they plummet downhill and sweep along more and more snow. This type of avalanche often occurs during or shortly after snowfall, or when significant warming occurs. A loose snow avalanche consisting of dry powder generally requires a slope angle of 40°. Especially when the snow is wet, these avalanches can reach considerable size in continuously steep terrain.
Loose snow avalanches are often released naturally. Less than 10% of avalanche fatalities are attributable to them. Many of the deaths occur in summertime, when mountaineers in steep terrain are swept along and then fall. A snow sport participant who triggers a loose snow avalanche is seldom buried because it slides down the slope away from him and usually releases only small snow masses.
Like slab avalanches, gliding avalanches have a distinct, broad fracture line, but they differ inasmuch as the entire snowpack is released. They can occur only on a smooth substrate, typically consisting of flattened grass or slabs of rock. The steeper the slope, the more likely the snow is to slide.
Gliding avalanches can be a major problem for transport routes, especially in snowy winters. They are of only secondary significance for winter sport enthusiasts because they cannot be triggered by people. These avalanches are released naturally when friction decreases at the interface with the ground as the snow at the base of the snowpack becomes moist. Water can penetrate the lowermost layer in two different ways:
- In mid-winter the snowpack is usually cold and dry. During this period it becomes moist from underneath, either as the warm ground melts the snow above, or as the snow absorbs water from the moist ground. Gliding avalanches can occur at any time of day or night in mid-winter.
- At some time in the spring, the snowpack reaches a temperature of zero degrees (isothermal) throughout. This allows melt water and rain to seep through the entire snowpack and moisten its base from above. In these conditions gliding avalanches often occur in typical wet snow avalanche periods, and their frequency increases in the second half of the day.
Often, but by no means in every case, the snowpack begins to slide slowly where gaps (glide cracks) have formed within it. These signs can suddenly be followed by a gliding avalanche. It is impossible to predict exactly when this will happen, so that people should avoid lingering below or alongside glide cracks for any longer than absolutely necessary.
Powder avalanches arise mostly from slab avalanches. A powder cloud forms in the presence of a large altitude difference when a sufficient quantity of snow becomes suspended in the air. Powder avalanches can reach a speed of 300 km/h and cause tremendous damage. They occur most commonly when the avalanche danger is high or very high.
Wet snow avalanches can consist of a slab or loose snow. They frequently release naturally, especially in the event of rain or after daytime warming, and occur in springtime in particular. The main cause of wet snow avalanches is the presence of liquid water in the snowpack, which significantly weakens bonding at layer boundaries. Water accumulates and gives rise to instability in particular where large differences in grain size exist between contiguous layers. Regions where the bonding of the snowpack is poor are especially prone to wet snow avalanches.
Our interactive prevention platform White Risk is a source of extensive knowledge on avalanche types. |
Spinal refers to the spine. Stenosis is a medical term used to describe a condition where a normal-size opening has become narrow. Spinal stenosis may affect the cervical (neck), thoracic (chest), or lumbar (lower back) spine. The most commonly affected area is the lumbar spine, followed by the cervical spine.
To help you to visualize what happens in spinal stenosis, we will consider a water pipe. Over time, rust and debris builds up on the walls of the pipe, thereby narrowing the passageway that normally allows water to freely flow.
In the spine, the passageways are the spinal canal and the neuroforamen. The spinal canal is a hollow vertical hole that contains the spinal cord. The neuroforamen are the passageways that are naturally created between the vertebrae through which spinal nerve roots exit the spinal canal. (See Figure 1.)
The spine’s bony structures encase and protect the spinal cord. Small nerve roots shoot off from the spinal cord and exit the spinal canal through passageways called neuroforamen.
Figure 2 is an artist’s illustration of lumbar spinal stenosis. Notice the narrowed areas in the spinal canal (reddish-colored areas). As the canal space narrows, the spinal cord and nearby nerve roots are squeezed causing different types of symptoms. The medical term is nerve compression.
Figure 2. Lumbar spinal stenosis.
Those are the basics of spinal anatomy related to spinal stenosis, but to better understand this spine condition, it helps to get a quick lesson in overall spinal anatomy.
The spine is a column of connected bones called vertebrae. There are 24 vertebrae in the spine, plus the sacrum and tailbone (coccyx). Most adults have 7 vertebrae in the neck (the cervical vertebrae), 12 from the shoulders to the waist (the thoracic vertebrae), and 5 in the lower back (the lumbar vertebrae). The sacrum is made up of 5 vertebrae between the hipbones that are fused into one bone. The coccyx is made up of small fused bones at the tail end of the spine.
At the back (posterior) of each vertebra, you have the lamina, a bony plate that protects your spinal canal and spinal cord. Your vertebrae also have several bony tabs that are called spinous processes; those processes are attachment points for muscles and ligaments.
Vertebrae are connected by ligaments, which keep the vertebrae in their proper place.
The ligamentum flavum is a particularly important ligament. Not only does it help stabilize your spine, it also protects your spinal cord and nerve roots. Plus, the ligamentum flavum is the strongest ligament in your spine.
The ligamentum flavum is a dynamic structure, which means that it adapts its shape as you move your body. When you’re sitting down and leaning forward, the ligamentum flavum is stretched out; that gives your spinal canal more room for the spinal nerves. When you stand up and lean back, though, the ligamentum flavum becomes shorter and thicker; that means there’s less room for the spinal nerves. (This dynamic capability helps explain why people with spinal stenosis find that sitting down feels better than standing or walking.)
In between each vertebra are tough fibrous shock-absorbing pads called the intervertebral discs. Each disc is made up of a tire-like outer band (annulus fibrosus) and a gel-like inner substance (nucleus pulposus).
Nerves are also an important part of your spinal anatomy—after all, they’re what sends messages from your brain to the rest of your body. The spinal cord, the thick bundle of nerves that extends downward from the brain, passes through a ring in each vertebra. Those rings line up into a channel called the spinal canal. (See Figure 3, which shows where the spinal cord goes.)
Between each vertebra, two nerves branch out of the spinal cord (one to the right and one to the left). Those nerves exit the spine through openings called the foramen and travel to all parts of your body.
Normally, the spinal channel is wide enough for the spinal cord, and the foramen are wide enough for the nerve roots. But either or both can become narrowed—that’d be the spinal stenosis—and lead to pain, as explained above. |
One purpose of the new Constitution was to organize an effective army to deal with issues surrounding the "western lands." The western lands, were, of course, occupied by Native Americans. The history of U.S. relations with Native Americans during the nineteenth century is long and complicated because of the number of different Native American peoples involved, but fundamentally simple in terms of the process that was repeated hundreds of times across the continent. The U.S. government deployed military garrisons on the edge of Indian (Native American) territories, and when conflict arose, as it invariably did, the army reacted by invading the Indian nations and attacking the Native Americans.
At the time of the American Revolution, however, Americans viewed the Indians as distinct peoples, and they viewed their nations as distinct nations, even if other countries did not. Both the Articles of Confederation and the Constitution of the United States reflected this reality. One of the first acts of the Continental Congress was the creation in 1775 of three departments of Indian affairs: northern, central, and southern. Among the first departmental commissioners were Benjamin Franklin and Patrick Henry. Their job was to negotiate treaties with Indian nations and obtain their neutrality in the coming revolutionary war. The first treaties George Washington presented to the Senate, in August 1789, dealt with U.S. relations with various Native American tribes.
While the many accords reached with the Native Americans were sometimes called treaties, in reality the treaties were fictions. On 9 July 1821, Congress gave the president authority to appoint a commissioner of Indian affairs to serve under the secretary of war and have "the direction and management of all Indian affairs, and all matters arising out of Indian relations." From 1824, Native Americans were subject to the jurisdiction of the Bureau of Indian Affairs, newly established as a division of the War Department. After 1849 they were subject to the Home Department (later the Department of the Interior), which, within a century, controlled virtually every aspect of Indian existence.
International law in the nineteenth century did not consider as true treaties accords concluded with indigenous tribes that were not constituted in the form of genuine states. In 1831 the Supreme Court under Chief Justice John Marshall in Cherokee Nation v. Georgia ruled that Indian nations were not foreign nations but "domestic dependent nations," although the following year in Worcester v. Georgia, in a ruling that was defied by President Andrew Jackson and ignored by Congress, he ruled that they were capable of making treaties that under the Constitution were the supreme law of the land.
Between 1789 and 1871 the president was empowered by the Senate to make treaties with the Native American tribes or nations in the United States. These treaties ostensibly recognized the sovereignty of Native Americans. Many of the very early Native American treaties were ones of peace and friendship, and a few included mutual assistance pacts, or pacts to prevent other tribes from making hostile attacks. The majority of Native American treaties, however, dealt with trade and commerce, and involved Indians ceding land. Native title was effectively extinguished by treaties of evacuation and removal of the Native American population. Most were signed under coercion. During the two terms of the presidency of Andrew Jackson (1829–1837), when removal of Native Americans from their lands reached almost a frenzy, ninety-four Indian treaties were concluded under coercion. Interestingly, one feature that all Native American treaties share with foreign treaties is that the courts will not inquire into the validity of the signatories. Just as a court will not inquire into whether a foreign dignitary was bribed or forced into signing a treaty, the courts will not inquire into whether a Native American tribe was properly represented during negotiation of a ratified treaty or whether such a treaty was acquired by fraud or under duress.
The president's authority to make treaties with Native Americans was terminated by the Indian Appropriations Act of 3 March 1871, which declared that no Indian tribe or nation would be recognized as an independent power with whom the United States could contract by treaty. However, this statute did not alter or abrogate the terms of treaties that had already been made. Native American treaties are still enforced today and continue to constitute a major federal source of Native American law.
In later years, Congress made provisions to permit Native Americans to recover monetary damages for treaty violations by the federal government. Prior to 1946 Congress enacted numerous special statutes permitting tribes to recover damages through the court of claims, and in 1946 Congress established the Native American Claims Commission to settle claims. |
What are seizures? Here are some key facts
Most seizures occur suddenly without any warning, last a short time (a few seconds or minutes) and stop on their own.
- Seizures can be different for each person.
- Knowing that someone has epilepsy does not tell you what their epilepsy is like, or what seizures they have.
- Not all seizures involve convulsions. Some people seem vacant, wander around or appear confused during a seizure.
What are the causes of epilepsies?
Different epilepsies are due to many different causes. The causes can be complicated, multi-faceted and hard to identify. A person may start having seizures because they have one or more of the following:
- A genetic tendency, passed down from one or both parents (inherited).
- A genetic tendency that is not inherited but is a new change in the person's genes.
- A structural (sometimes called 'symptomatic') change in the brain.
- Structural changes due to genetic conditions such as tuberous sclerosis or neurofibromatosis, which can cause growths affecting the brain.
- Tuberous sclerosis – A genetic condition that causes growths in organs including the brain.
- Neurofibromatosis – A genetic condition that causes tumours to grow on the top layer of nerves.
Epilepsy… genetic? Some researchers now believe that the chance of experiencing epilepsy is always genetic to some extent, in that any person who starts having seizures has always had some level of genetic likelihood prior.
This level can range from high to low and anywhere in between. Even if seizures begin following a brain injury or other structural change, this may be due to both the structural change and the person's genetic tendency to seizures, combined. This makes sense if we consider that even though people may have a similar brain injury, not all of them develop epilepsy afterwards.
|
Some simple ways to improve children's understanding of grammar
Children use grammar as part of their daily learning and may not even be aware when they are using paragraphs or adding verbs to their sentences. Grammar is a broad term that has many facets, including sentence construction, tenses and much more.
Here are some simple ways that parents can support children's understanding of grammar.
Sentence structure can vary and it can be difficult to explain syntax in terms of prepositions, nouns, verbs, adjectives etc.
There are several games that can help children gain a better understanding of where words are placed within a sentence, such as the ones below:
- Find the right word to fit the sentence: David ___ cartoons (walks, sleeps, eats, drinks, likes)
- Sentence substitution: The girl enjoyed eating chocolate (use the following words to change the sentence in one way: man, hated, melting)
Children can practice writing one sentence using a variety of tenses, such as the examples below:
- He is playing in the park (Present)
- This morning, he played in the park (Past) or He went to the park and played all day
- He would like to play in the park (Future)
They can also familiarise themselves with spelling rules related to tenses, such as adding 'ing' or 'ed' to the end of the base verb, and some of the exceptions to this rule:
- work > working > worked
- play > playing > played
- open > opening > opened
Work together with children to create child-friendly definitions of the following elements of punctuation, amongst others:
- Full stop: Put at the end of sentences.
- Exclamation marks: Add emphasis to a word (wow!) or phrase (it was terrible!)
- Commas: Mainly used in lists (apples, bananas and pears) or to separate two clauses (Mary walked to the party, but she was unable to walk home)
Classes of Words
Again, children can write a sentence to explain each class of word, such as the following:
Noun: A word like table, dog, teacher, England etc. A noun is the name of an object, concept, person or place. A "concrete noun" is something you can see or touch, like a person or car. An "abstract noun" is something that you cannot see or touch, like a decision or happiness. |
Geologists have long been intrigued by the presence of traces of very ancient glaciers in many parts of the globe, including those at the equator.
How old are these glaciers? The answer turned out to be impressively ancient: from 800 to 550 million years.
At the end of the twentieth century, drawing on the accumulated knowledge of science, Paul Hoffman and Daniel Schrag of Harvard University (USA) put forward the hypothesis that in the Proterozoic era the globe was repeatedly covered by thick ice.
This hypothesis came to be known as “Snowball Earth.” Approximately 750 million years ago the ice came down to the equator, while the average temperature of the globe dropped to −40 degrees Celsius.
Fortunately for life on the planet, the Earth did not stay icy.
Despite the ice, volcanic activity continued, supplying the atmosphere with carbon dioxide.
Eventually the climate became very hot: the average temperature rose to 25–30 degrees.
Evolution moved on … |
An international team of planetary scientists determined that the Moon formed nearly 100 million years after the start of the solar system, according to a paper to be published April 3 in Nature. This conclusion is based on measurements from the interior of Earth combined with computer simulations of the protoplanetary disk from which Earth and other terrestrial planets formed.
The team of researchers from France, Germany and the United States simulated the growth of the terrestrial planets (Mercury, Venus, Earth and Mars) from a disk of thousands of planetary building blocks orbiting the Sun. By analyzing the growth history of Earth-like planets from 259 simulations, the scientists discovered a relationship between the time Earth was impacted by a Mars-sized object to create the Moon and the amount of material added to Earth after that impact.
Augmenting the computer simulation with details on the mass of material added to Earth by accretion after the formation of the Moon revealed a relationship that works much like a clock to date the Moon-forming event. This is the first "geologic clock" in early solar system history that does not rely on measurements and interpretations of the radioactive decay of atomic nuclei to determine age.
"We were excited to find a 'clock' for the formation time of the Moon that didn't rely on radiometric dating methods. This correlation just jumped out of the simulations and held in each set of old simulations we looked at," says lead author of the Nature article Seth Jacobson of the Observatoire de la Côte d'Azur in Nice, France.
Published literature provided the estimate for the mass accreted by Earth after the Moon-forming impact. Other scientists previously demonstrated that the abundance in Earth's mantle of highly siderophile elements, which are atomic elements that prefer to be chemically associated with iron, is directly proportional to the mass accreted by Earth after the Moon-forming impact.
From these geochemical measurements, the newly established clock dates the Moon to 95 ±32 million years after the beginning of the solar system. This estimate for the Moon's formation agrees with some interpretations of radioactive dating measurements, but not others. Because the new dating method is an independent and direct measurement of the age of the Moon, it helps to guide which radioactive dating measurements are the most useful for this longstanding problem.
"This result is exciting because in the same simulations that can successfully form Mars in only 2 to 5 million years, we can also form the Moon at 100 million years. These vastly different timescales have been very hard to capture in simulations," says author Dr. Kevin Walsh from the Southwest Research Institute (SwRI) Space Science and Engineering Division.
Almost every day we hear about things being described as analog or as digital. It's kind of like the difference between men and women... no wait, this is the wrong channel for that. Instead, it's like old cell phones, which used to be analog, but now they're all digital. At this point most countries have either changed, or are in the process of changing, their television transmissions from analog to digital as well. Heck, even clocks can be analog or digital. Let's use these clocks to learn the difference between analog and digital electronic signals. OK?
Even though there are only 12 numbers on the face of this clock, you can always tell how many minutes have elapsed just by looking at how far the minute hand is between two numbers. When we look at a digital clock, all it can do is show us, for each digit of the time, the numbers 0 through 9. Even if we zoom in really, really close, we can't see how close the clock is to flipping to the next digit. So this means the increments of time are fixed to certain digits, or levels.
OK, OK, OK. I know you're asking me: how does this all apply to electronics? Let's use this to talk about analog electronics first, so we've got to go back to the analog clock. As the clock hands sweep across its face, the time displayed is continuously updated with nearly infinite precision. The key here is the continuous change: analog signals are signals that are continuously variable, or in other words continuously changing, just like the time on the clock. As we learned in the video on the difference between AC and DC, the AC voltage in your house continuously changes, at either 50 or 60 Hertz.
Imagine that we have the ability to stop time and look at the voltage.
OK, I know, bad pun, but what I wanted to show here is that as we move across in time, the voltage value is constantly changing. This doesn't mean that analog is only found in wall sockets or AC; it's just a very good example. In fact, there are sensors, like this accelerometer, which provide an analog output: this sensor varies its output voltage depending on how much acceleration is occurring. If you want to use that analog voltage with a digital system like an Arduino, then you're going to need to convert that signal to digital first. Going back to the digital clock...
Remember that time can be represented by fixed number symbols like 0 through 9. The same thing is true in digital electronics: digital signals can only be represented with fixed symbols like 0 or 1. But how do you get those zeros and ones? Well, it turns out that they're formed from voltage. Digital signals have both a high level and a low level. What this means is that at zero volts a signal is considered a digital 0, and then at some high voltage, say like five volts, the signal is considered a digital 1. As it turns out, digital isn't quite this absolute. Depending on the transistor technology being used, there's a range where the signal might be considered a 0 or a 1. For example, on this Arduino Uno, a digital 0 is considered anywhere from zero volts up to 0.5 volts, while a digital 1 is anywhere from 3.5 volts up to 5 volts. So there are these two bands, the low and the high, and the area between the low and the high is an undefined area. A digital signal with a voltage in this zone could be considered either a 0 or a 1. Because digital logic needs to have a value associated with it, this undefined region will get turned into ones or zeros; the problem is we can't always predict which one it will be. The key to understanding the difference between analog and digital is to remember that analog voltages continuously change, while digital voltages have defined levels. When you step back and think about it, it kind of turns out that digital signals are actually analog signals where the voltage levels have special meanings.
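Those low and high bands can be captured in a small helper function. This is just a sketch in Python: the 0.5 V and 3.5 V thresholds are the Arduino Uno values quoted above, and the function name is our own.

```python
def classify_logic_level(voltage):
    """Map a voltage to a digital value using Arduino Uno-style input bands.

    0 V .. 0.5 V   -> digital 0 (low band)
    3.5 V .. 5 V   -> digital 1 (high band)
    in between     -> undefined: real hardware will still read a 0 or a 1,
                      but we can't predict which, so we return None here.
    """
    if 0.0 <= voltage <= 0.5:
        return 0
    if 3.5 <= voltage <= 5.0:
        return 1
    return None  # undefined region (or out of range)

# An analog signal, by contrast, is meaningful at every voltage: a 2.1 V
# accelerometer output is a real reading, not an undefined logic state.
```

Note how the band edges themselves (0.5 V and 3.5 V) still count as valid logic levels; only the gap between them is undefined.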
Exercise 5. Quiz |
Scientists are reporting discovery of a potential new drug for epidemic keratoconjunctivitis (EKC) — sometimes called “pink eye” — a highly infectious eye disease that may occur in 15 million to 20 million people annually in the United States alone. Their report describing an innovative new “molecular wipe” that sweeps up viruses responsible for EKC appears in ACS’s Journal of Medicinal Chemistry.
Ulf Ellervik and colleagues note that there is no approved treatment for EKC, which is caused by viruses from the same family responsible for the common cold. EKC affects the cornea, the clear, dome-shaped tissue that forms the outer layer of the eye. It causes redness, pain, and tearing, and may reduce vision for months. “Patients are usually recommended to stay home from work or school, resulting in substantial economic losses,” the scientists write.
They describe discovery of a potential new drug that sweeps up the viruses responsible for EKC, preventing the viruses from binding to and infecting the cornea. The drug removes viruses already in the eye and new viruses that are forming. In doing so, it would relieve symptoms, speed up healing (potentially avoiding impaired vision), and reduce the risk of infecting the patient’s other eye or spreading the infection within families, schools and workplaces, the scientists suggested.
Classroom Interventions for Children with Attention Deficit Disorder
Time-out is a punishment procedure that involves the withdrawal of positive reinforcement as a consequence of inappropriate behavior. Time-out has been recommended for children as old as 9 to 10 years of age. The effectiveness of time-out in reducing inappropriate behavior is well-documented; however, the procedure has been somewhat controversial due to its potential for misuse (Abramowitz & O'Leary, 1990). Most concerns about time-out are related to the restrictiveness of the procedure; however, the degree of restrictiveness varies with the type of time-out procedure used. The least restrictive forms of time-out are those that do not involve exclusion or isolation. Examples of such nonexclusionary forms of time-out include: having the children put away their work for a time (which eliminates the opportunity to earn rewards for academic performance); the temporary removal of rewarding materials, such as taking away art materials; or having the children put their heads down on their desks (which reduces the opportunity for engaging in social interaction) (Barkley, 1990).
A more restrictive time-out procedure involves excluding a child from classroom activities by placing the child in a designated area for a period of time while attention and other rewarding activities are withheld. This may involve having the child sit in a chair facing a wall. The most restrictive time-out procedures involve removal of a child to an isolated room. In most schools, the principle of least restriction should be followed when selecting a time-out procedure, starting with the least restrictive procedure and moving to the more restrictive procedures only after the others have been ineffective. To ethically and effectively implement isolation forms of time-out in the classroom, the following guidelines are recommended (Abramowitz & O'Leary, 1991).
* Prior to implementing time-out procedures that involve isolation, techniques for increasing appropriate behaviors and other punishment procedures that are less restrictive than isolation time-out should be properly implemented for a time and documented.
* Staff must be fully trained in the procedure and physically capable of carrying it out.
* Both the target behaviors and the specific time-out procedures should be clearly defined in advance.
* Determine in advance how noncompliance to the time-out procedure will be dealt with.
* Information on the level of target behaviors should be collected prior to implementing time-out and during its use.
* Each time-out should be recorded.
The effectiveness of time-out in reducing inappropriate behavior is well-documented. Time-out has been shown to be an effective component of an overall behavioral plan to reduce noncompliance and aggression. An additional advantage of time-out is that it is a punishment technique that can be administered immediately. Behavioral techniques that can be administered as close in time to the behavior as possible are appropriate for younger children and children with ADD.
Time-out is a highly technical procedure that is difficult to implement properly. There are many ways that time-out can go wrong and become less effective. Time-out operates on the assumption that the activity that is withheld from the child is rewarding. Teachers should be careful that the child is not avoiding some unpleasant activity by going to time-out. For example, going to time-out for not staying on-task during independent math work may actually be preferable to doing the math work for a child who dislikes math. Due to its restrictiveness and difficulty of implementation, time-out should be reserved for intervention with the most disruptive and unacceptable behaviors, and only with staff who are trained in its proper use (Abramowitz & O'Leary, 1991).
A second limitation of time-out is that it is a punishment technique. As with all punishment techniques, it will only have the effect of decreasing or weakening the punished behavior. Punishment will not increase or promote appropriate behavior; increasing or strengthening appropriate behavior will require positive reinforcement techniques. Furthermore, punishment results in negative interactions between children and caregivers, which can disrupt children's mood. To avoid this negative side effect, punishment should be used sparingly and only after an appropriate reinforcement program has been put in place.
Time-out does not appear to be as useful with children older than 10 years of age.
Implementation of Exclusionary Time Out:
Exclusionary time-out is a relatively technical procedure. If implemented properly, it can be an effective management tool for reducing unacceptable behaviors; however, if not implemented properly, time-out can be ineffective. The steps for implementing an exclusionary form of time-out are detailed below. These procedures can be adapted for use with nonexclusionary and isolation forms of time-out.
Step 1: Choose an appropriate time-out place.
Time-out is time out from attention and other rewarding activities. Therefore, the location of the time-out place should be relatively boring. The student should not have direct access to toys, books, people, windows, or any other potential reinforcement. It is often helpful to place a chair in the time-out place to serve as a reminder to the children. A chair in the corner or back of the room will suffice; some classrooms use a cardboard partition or a three-sided cubicle. The time-out place should be located where the teacher can easily monitor the child.
Step 2: Establish the rules regarding behavior during time-out.
In most time-out procedures, children need to fulfill specific criteria before they can be released from time-out. Recommended criteria are detailed below.
* The teacher should determine the duration of time-out. The time-out should be long enough to be punishing, but short enough for reasons of ethics and practicality. How children respond to time-out appears to be somewhat dependent on how long previous time-outs have been for the child. For example, a time-out is likely to be less effective if it has been preceded by a time-out of greater duration (Kendall, Nay, & Jeffers, 1975; White, Neilson, & Johnson, 1972).
* Once the child is in time-out, there should be no interaction with the child for the duration of the time-out.
* Once in time-out, the child needs to stay in the chair until the teacher releases him or her; leaving the chair early should result in a back-up punishment. (These punishments should get progressively more severe.)
* Release from time-out should also be dependent on a brief period of quiet. If a child is crying or yelling, release should be delayed until he or she has been quiet briefly.
* If the child was sent to time-out for noncompliance to a teacher request, the original request should be presented again after release.
Some children will refuse to go to time-out or will not follow the rules while in time-out. In these cases, some back-up procedures will be needed. One procedure is to allow the children to earn time off for complying with the rules. Another is to add time to the time-out or restrict some other school privilege for not complying with the time-out process. It also may be necessary to remove the child from the classroom to serve the time-out in another area, such as another classroom or the principal's office. As with learning any new behavior, a child's time-out behavior may need to be shaped over time. The children will likely need to experience time-out several times before they learn that the teacher will be consistent in implementing time-out.
Step 3: Communicate effectively during the time-out sequence.
Punishment should follow a predictable sequence of communication. Sending a child to time-out for noncompliance to a teacher request should be preceded by a properly worded request and a warning. The qualities of an effective request are outlined below.
1. Requests should be direct rather than indirect. A direct request should leave no question in the child's mind that s/he is being told to do something, giving no illusion of choice. Indirect requests to avoid:
- "Let's pick up the toys."
- "How about washing your hands?"
- "Why don't you open your book?"
- "Do you want to throw that paper away for me?"
2. Requests should be positively stated. Positively stated requests give the child information about what "to do," rather than what not to do.
3. Requests should be specific. Avoid vague requests, which are so general and nonspecific that the child may not know exactly what to do to be obedient. Vague request to avoid: "Clean up your act!"
4. Give only one command at a time. Avoid "hidden" or strung-together requests. Some children have a hard time remembering more than one thing at a time, and you do not want to punish a child for having a short attention span or for failing to remember. Strung request to avoid: "Go close the door, then turn in your papers, and then go sit in your seat."
5. Requests should be simple. The child should be intellectually and physically capable of doing what you are requesting. Too difficult: "Draw a hexagon." (If the child does not know what a hexagon is.)
After an effective request has been given, the teacher should expect compliance to begin within a reasonable time. This is called the "5-second rule." If compliance begins within 5 seconds of an effective request, wait until the request is completed and give enthusiastic praise. If compliance has not been initiated within 5 seconds of an effective request, a warning should be given. A warning is an "if-then" statement which connects the consequence with the behavior. Once the warning is given, the 5-second rule goes into effect again. Compliance to the warning should be rewarded with enthusiastic praise. Noncompliance to the warning should result in immediate time-out. Once it is decided to send a child to time-out, communication should be brief, clear, and direct. See the charts below for examples of effective communication.
Classroom rules should be clearly understood prior to sending a child to time-out for violating a rule.
This sequence will allow children to predict the consequences of their behavior, thereby allowing them to exercise self-control. Children will likely need to experience this sequence several times before they learn that the consequences of their behavior (positive and negative) will be consistent.
Step 4: Explain the time-out procedure to the children prior to using it.
At a neutral time, explain the time-out rules and what behaviors will result in time-out. Be sure to also communicate the rewards that can be earned for compliance to rules and requests.
Abramowitz, A., & O'Leary, S. (1991). Behavioral interventions in the classroom: Implications for students with ADHD. School Psychology Review, 20(2), 231-234.
Barkley, R.A. (1990). Attention Deficit Hyperactivity Disorder: A Handbook for Diagnosis and Treatment. New York: Guilford.
Kendall, P.C., Nay, W.R., & Jeffers, J. (1975). Time-out duration and contrast effects: A systematic evaluation of a successive treatment design. Behavior Therapy, 6, 609-615.
White, G.D., Neilson, G., & Johnson, S.M. (1972). Time-out duration and the suppression of defiant behavior in children. Journal of Applied Behavior Analysis, 5, 111-120. |
Radiocarbon dating benefits
These are its half-life, the particulate or photon energy associated with its decay, and the type of emission.
What do you mean by half-life?
A half-life is defined as the amount of time required for one-half or 50% of the radioactive atoms to undergo a radioactive decay.
Since the half-life is defined for the time at which 50% of the atoms have decayed, why can’t we predict when a particular atom of that element will decay?
The type of decay determines whether the ratio of neutrons to protons will increase or decrease to reach a more stable configuration. How does the neutron-to-proton number change for each of these decay types? In electron-capture decay, the nucleus captures an electron and combines it with a proton to create a neutron. X-rays are given off as other electrons surrounding the nucleus move around to account for the one that was lost. A radionuclide has an unstable combination of nucleons and emits radiation in the process of regaining stability.
Reaching stability involves the process of radioactive decay. After one half-life passes, half of the initial amount of C-14 is present. |
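The half-life definition above translates directly into the standard decay formula N(t) = N₀ · (1/2)^(t / T½). A minimal Python sketch (the 5,730-year half-life is the commonly cited value for carbon-14; the function and variable names are our own):

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of the original radioactive atoms left after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

C14_HALF_LIFE = 5730  # years, commonly cited value for carbon-14

# After one half-life, half of the C-14 remains:
print(remaining_fraction(5730, C14_HALF_LIFE))   # 0.5
# After two half-lives, a quarter remains:
print(remaining_fraction(11460, C14_HALF_LIFE))  # 0.25
```

Note that this formula describes the population statistically; as the text says, it cannot predict when any particular atom will decay.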
The Proton Synchrotron (PS) is the oldest major particle accelerator at CERN, built as a 28 GeV proton accelerator in the late 1950s and put into operation in 1959. It takes protons from the Proton Synchrotron Booster at a kinetic energy of 1.4 GeV and lead ions from the Low Energy Ion Ring (LEIR) at 72 MeV per nucleon. It has been operated as an injector for the Intersecting Storage Rings (ISR), the Super Proton Synchrotron (SPS) and the Large Electron-Positron Collider (LEP). Since November 2009, the PS machine has delivered protons, and it will also provide lead-ion beams, for the Large Hadron Collider (LHC).
It has also been used as a particle source for other experiments, such as the Gargamelle bubble chamber, for which it supplied a neutrino beam. This led to the discovery of the weak neutral current in 1973.
The PS machine is a circular accelerator with a circumference of 628.3 m. It is a versatile machine that has accelerated protons, antiprotons, electrons, positrons and various species of ions. Major upgrades have improved its performance by more than a factor of 1000 since 1959. The only significant components remaining from its original installation some 50 years ago are the bending magnets and the buildings. |
This page deals with RC, RL, LC, and RLC circuits in the time domain.
RC Circuits
Generally we have a DC source Vs, a resistor R, and a capacitor C in a loop. We let the voltage across the capacitor be the principal variable V. From Kirchhoff's voltage law we have

Vs = iR + V
We know that for capacitors i = C dV/dt, so with this substitution and a little rearranging this becomes

RC dV/dt + V = Vs
Integrating, we have

V = Vs + A e^(−t/RC)

where A is a constant determined by the initial conditions.
If we have initial conditions of V = 0 and Vs = some voltage, the capacitor is being charged through the resistor. In this case we find that A = −Vs and so

V = Vs (1 − e^(−t/RC))
If we have the initial conditions of Vs = 0 and V = some voltage Vi, the capacitor is discharging through the resistor to ground. In this case we find that A = Vi and so

V = Vi e^(−t/RC)
Time Taken to Charge/Discharge
It takes an infinite amount of time for the capacitor to completely charge or discharge. However we can approximate the time taken for this to happen in realistic circuits by considering the time taken for the capacitor to charge to a certain percentage.
It's easiest to express this in terms of the percentage of charge remaining. You can do it the other way, but the expressions are more complicated. Let p be the fraction of the charge capacity that remains to be charged/discharged. In both cases we have

t = −RC ln(p)
So for example, if you wanted the time for a 10 pF capacitor connected to a 5 kΩ resistor to become 90% charged (that is, 10% or p = 0.1 of the charging left), you would do

t = −(5×10^3 Ω)(10×10^−12 F) ln(0.1) ≈ 1.15×10^−7 s ≈ 115 ns
In most cases a useful approximation is t ≈ 2RC, the time taken to get to about 86% charge. If you're good at remembering magic numbers, you could try t ≈ 2.3RC, which gives you the time for 90% charge. |
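The worked example above is easy to check numerically. A minimal Python sketch of t = −RC ln(p); the function and variable names are our own:

```python
import math

def time_to_reach(R_ohms, C_farads, p_remaining):
    """Time for the remaining charge/discharge fraction to fall to p_remaining."""
    return -R_ohms * C_farads * math.log(p_remaining)

# 10 pF capacitor charged through a 5 kOhm resistor to 90% (p = 0.1 left):
t = time_to_reach(5e3, 10e-12, 0.1)
print(t)  # about 1.15e-07 s, i.e. roughly 115 ns

# Sanity check on the rule-of-thumb approximations:
print(math.exp(-2))    # ~0.135 -> about 86% charged at t = 2RC
print(math.exp(-2.3))  # ~0.100 -> about 90% charged at t = 2.3RC
```

The two print statements at the end confirm where the 2RC and 2.3RC magic numbers come from: they are just −ln(0.135) and −ln(0.1) in units of the time constant RC.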
In pointing to Archaeopteryx as an intermediate form, evolutionists began with the assumption that it was the earliest bird-like creature on Earth. However, the discovery of certain far older bird fossils displaced Archaeopteryx from its perch as the ancestor of birds. In addition, these creatures were flawless birds with none of the supposed reptilian features attributed to Archaeopteryx.
The Protoavis fossil, estimated to be 225 million years old, demolished the theory that Archaeopteryx, a bird 75 million years younger than it, was the ancestor of birds.
The most significant of them was Protoavis, estimated at 225 million years old. The fossil, whose existence was announced in a paper in the August 1986 edition of the magazine Nature, demolished the idea that Archaeopteryx, 75 million years younger was the forerunner of all birds. Its bodily structure, with hollow bones as in all other birds, long wings and traces of feathers on those wings showed that Protoavis was capable of perfect flight.
N. Hotton of the Smithsonian Institution describes the fossil thus: "Protoavis has a well-developed furcula bone and chest bone, assisting flight, hollow bones and extended wing bones . . . Their ears indicate that they communicate with sound, while dinosaurs are silent."195
The German biologists Reinhard Junker and Siegfried Scherer describe the blow dealt to evolutionist theses: "Because Archaeopteryx is 75 million years younger than Protoavis, it emerged that this was a dead end for evolution. Therefore, the idea put forward by the proponents of creation that there are no intermediate forms, only mosaic forms, has been strengthened. The fact that Protoavis resembles modern birds in many ways makes the gap between bird and reptile even more apparent."196
Furthermore, the age calculated for Protoavis is so great that this bird-again according to dating provided by evolutionist sources-is even older than the first dinosaurs on Earth. This means the absolute collapse of the theory that birds evolved from dinosaurs!
195. Reinhard Junker, Siegfried Scherer, Entstehung und Geschichte der Lebewesen, Wegel Biologie, Brühlsche Universitätsdruckerei, Giessen, 1986, p. 175. |
Segregation was more than an attitude – it was a system supported by both law and custom, and its purpose was to control newly freed African American people. The system of racial domination was carefully constructed to accomplish its goal.
Its economic component was meant to control black labor, and included job discrimination that limited African Americans to agricultural and service jobs. Sharecropping, an essential part of this system, assured white planters of continuing black farm labor by establishing a cycle of debt. This was accomplished by “fixing the books,” debt peonage, vagrancy laws, a credit system, and a convict lease system. These tactics prevented black people from receiving wages due them or moving when their situation worsened. Often these laws were modeled on Slave Codes during slavery.
Politically, segregation disfranchised freedpeople and suppressed black political action – especially the expression of newly gained rights as citizens (14th Amendment) and the right of black men to vote (15th Amendment). Disfranchisement was a two-stage process. First the Ku Klux Klan and other related groups used violence and the threat of violence to suppress black political action. Lynching and other violence was justified by the threat of miscegenation, and the alleged need to protect white women against rape by black men. The second kind of disfranchisement came in the form of laws designed to prevent blacks from voting, including literacy requirements, poll taxes, grandfather clauses, and all-white primaries. These laws were carefully crafted to avoid the 15th Amendment – they could not explicitly use race as a barrier to voting.
A key piece of this system of control was Jim Crow laws and customs. More than a series of strict anti-black laws – it was a way of life that affected whites as well as blacks. There were many state laws touching all aspects of life, including these typical Jim Crow laws:
- Barbers: No colored barber shall serve as a barber (to) white girls or women (Georgia).
- Blind Wards: The board of trustees shall...maintain a separate building...on separate ground for the admission, care, instruction, and support of all blind persons of the colored or black race (Louisiana).
- Burial: The officer in charge shall not bury, or allow to be buried, any colored persons upon ground set apart or used for the burial of white persons (Georgia).
NOTE: Eleven additional Jim Crow laws are available at http://www.ferris.edu/jimcrow/what.htm
During Jim Crow segregation African Americans were not passive; they responded, resisted and negotiated in a variety of ways. Among the most famous responders were Booker T. Washington and W.E.B. DuBois. They represent two different approaches. Washington, born a slave in Virginia, was educated at Hampton Institute in Virginia and went on to become a well-known educator, founding Tuskegee Institute in Alabama. In his important and influential Atlanta Compromise Speech of 1895, he stressed accommodation rather than resistance to the racist order under which southern African Americans lived. Acutely conscious of the narrow limitations whites placed on African Americans’ economic aspirations, he stressed that blacks must accommodate white people’s – and especially southern whites’ – refusal to tolerate blacks as anything more than sophisticated menials. In this 1895 speech to the predominantly white audience, Washington said:
Casting down your bucket among my people, helping and encouraging them as you are… to education of head, hand, and heart, you will find that they will buy your surplus land, make blossom the waste places in your fields, and run your factories. While doing this, you can be sure in the future, as in the past, that you and your families will be surrounded by the most patient, faithful, law-abiding, and unresentful people that the world has seen… [I]n our humble way, we shall stand by you with a devotion that no foreigner can approach, ready to lay down our lives, if need be, in defense of yours, interlacing our industrial, commercial, civil, and religious life with yours in a way that shall make the interests of both races one. In all things that are purely social we can be as separate as the fingers, yet one as the hand in all things essential to mutual progress.
W.E.B. DuBois, on the other hand, was born free and was the first African American to receive a doctorate from Harvard. In 1903 as an influential black leader and intellectual W.E.B. DuBois published an essay in his collection The Souls of Black Folk with the title “Of Mr. Booker T. Washington and Others.” DuBois rejected Washington’s willingness to avoid rocking the racial boat, calling instead for political power, insistence on civil rights, and the higher education of Negro youth. In 1905 DuBois and other middle-class but militant Black intellectuals, including Ida Wells Barnett, and some whites organized the Niagara Movement, and later the NAACP. Included in their “Declaration of Principles” was this statement on the Color-Line:
Any discrimination based simply on race or color is barbarous, we care not how hallowed it be by custom, expediency or prejudice. Differences made on account of ignorance, immorality, or disease are legitimate methods of fighting evil, and against them we have no word of protest; but discriminations based simply and solely on physical peculiarities, place of birth, color of skin, are relics of that unreasoning human savagery of which the world is and ought to be thoroughly ashamed.
Many African Americans resisted Jim Crow segregation. Among their strategies and tactics were collective protest, migrating north (or west) especially to urban communities, creating their own institutions, especially educating their children for a better life. Education particularly offered African Americans hope and a sense of possibility. Chafe, Gavins, and Korstad, in their introduction to Remembering Jim Crow, recognize:
- “The extraordinary resilience of black citizens, who individually and collectively found ways to endure, fight back and occasionally define their own destinies…”
- “The enduring capacity of families to nurture each other, and especially their children, in the face of a system so dangerous and capricious that there were no rules one could count on for protection. Under these circumstances parents still managed to convey a sense of right and wrong, strength and assurance.”
- “The incredible variety, richness and ingenuity of black Americans’ responses to one of the cruelest, least yielding social and economic systems ever created.”
Definition of Tumor Markers
In medicine, the term tumor markers refers to substances or cellular changes that, using qualitative and quantitative analysis methods, can provide information about the presence, development, and prognosis of malignant tumors. These substances can be proteins with a carbohydrate or lipid component, enzymes, antigens, or hormones.
Classification of tumor markers
A distinction can be made between cellular and humoral tumor markers. Cellular tumor markers are, for instance, tumor antigens located in the membranes, receptors for growth-promoting substances, and cell markers that indicate an increased expression of oncogenes and monoclonal cell growth. They are detected histologically from tumorous tissue that is obtained in a biopsy.
Humoral tumor markers are produced in the organism as a reaction to a tumor. These substances can be detected in bodily fluids such as blood or urine in higher concentrations than would be normal under physiological conditions. Humoral tumor markers are synthesized and secreted by the tumorous tissue itself or are released when the tumor disintegrates. To serve as a laboratory diagnostic instrument for tumor detection, the ideal humoral or cellular tumor marker has to meet certain criteria:
- 100 % specificity in distinguishing healthy persons from persons who have a tumor;
- Identifying all tumor patients; if possible, in an early stage;
- Organ specificity, providing information about the localization of the tumor;
- Correlation with tumor stages;
- Indicating all changes in the patient under treatment; and
- Prognostic conclusiveness.
Epidemiology of Malignant Diseases
According to a report by German Cancer Aid, around 500,000 people in Germany develop cancer each year, and around 224,000 people each year do not survive the disease. Some 1,800 children and adolescents under the age of 15 are diagnosed with cancer each year. In men, prostate cancer is the most common type of cancer and the third most frequent cause of cancer death. The number of new cases in Germany has been rising continually over the past years and is now estimated at 70,000 per year.
With approximately 75,000 new cases each year, breast cancer is the most common type of cancer in women. In children under the age of 15, leukemia accounts for 60 % of all cancerous conditions. After cardiovascular diseases, tumors are the second most common cause of death in industrialized countries. Statistically speaking, one in three persons in the Western world suffers from a tumorous condition at least once in his or her life. For one in four persons, such a condition becomes the cause of death.
Every tumor begins as a localized lesion in a single cell (monoclonality). Chemical, viral, and physical noxae can have mutagenic effects in both humans and animals. Normal cells stop proliferating as soon as they touch each other (contact inhibition). With the help of cell adhesion molecules, they attach to each other, communicate, and form healthy tissue.
Tumorous cells, on the other hand, possess a mutated genome due to the influence of such noxae. Their differentiation characteristics have changed, so that communication between the cells is disrupted and, among other things, their growth can no longer be controlled. They are no longer held together, which would normally limit their proliferation, and so they no longer need to remain in a cell group.
Tumor cells are characterized by their lack of a fixed location. The original cell group “grows wild”, changes its morphology, and spreads like a weed.
Thereby, the tumor cells practically suffocate adjacent healthy cells and sometimes even themselves. This is why cancer cells are considered to be destructive and antisocial. This tissue neoformation is due to an autonomous, progressive, and excessive cell proliferation, which in turn is caused by activated growth-inducing genes (oncogenes) and ineffective growth-inhibiting genes (tumor suppressor genes).
In addition, the apoptosis program shows a genetic defect. Because of the damage to the genome, the expression of growth-regulating genes has been eliminated. The resulting uncontrolled cell divisions are also referred to as the immortalization of cells. Among the participating genes are the master control genes (Hox genes), growth factor genes (continuous proliferation), and the above-mentioned oncogenes and tumor suppressor genes.
In most cases, the changed tissue patterns and mutated tumor cells are not recognized by the immune system as foreign and will therefore not be attacked and eliminated. Defective differentiation genes have caused the tumor cells to develop false identifying marks that mislead the immune system so that it simply does not notice the mutated cells (immune evasion).
If the tumor cells find other tissue areas outside their area of origin that are suitable for them, they will spread into other organs. This process, called metastasis, is probably the most feared aspect of any tumor disease, as it makes it impossible to confine the disease to one area: tumorous foci form throughout the organism and all have to be treated at once. Cells metastasize by using the circulatory systems of the lymph or the blood; the activation of mobility factors facilitates the process.
Tumor markers are, however, not an appropriate method for screening for malignant diseases. The following example may serve as an explanation: cycling, or palpation of the prostate gland, stimulates the gland. As a consequence, the prostate gland and the periurethral glands increase their production and secretion of the protein PSA (prostate-specific antigen). An increase in PSA levels in the blood can therefore be caused by many different processes happening in the prostate:
- An increase of size of the prostate (hyperplasia);
- An inflammatory change (prostatitis);
- Or a neoplastic change.
This shows that any signal prompting the cells to increase their production of PSA can lead to elevated PSA levels, which are therefore not necessarily caused by a tumor or a malignant disease.
The above example should have shown that tumor markers cannot—with only a few exceptions—be used as a primary diagnostic tool for detecting a malignant tumor. Benign tumors whose cells are multiplying may also produce these markers without posing a serious health risk. On the other hand, measurements may show normal marker concentrations despite the presence of a possibly serious condition.
In sum, tumor markers cannot give specific information on whether, and in which organ, a malignant tumor is growing or starting to grow. The diagnosis of tumor localization remains imprecise. Elevated concentrations in the blood may also be the result of other processes occurring in the body, so tumor markers do not qualify as a primary diagnostic tool.
Tumor diseases with their corresponding tumor markers and reference ranges
| Marker | Full name | Associated organs / diseases | Reference range |
|---|---|---|---|
| CEA | Carcino-embryonic antigen | Liver, colon, rectum, mamma, stomach, bronchial tract | 3.4 μg/l |
| AFP | Alpha-fetoprotein | Gravidity, liver, germ cell tumor | 9 IU/ml |
| CA 19-9 | Carbohydrate antigen 19-9 (cancer antigen 19-9) | Gall bladder, pancreas, stomach, liver | ≤ 37 IU/ml |
| CA 72-4 | Cancer antigen 72-4 | Stomach | ≤ 4 IU/ml |
| CA 125 | Cancer antigen 125 | Ovaries | ≤ 35 IU/ml |
| CA 15-3 | Cancer antigen 15-3 | Mamma, pancreas | ≤ 25 IU/ml |
| NSE | Neuron-specific enolase | Small-cell bronchial tumor, neuroblastoma, cerebral diseases | serum: ≤ 18.3 g/l; liquor (CSF): 3 – 20 g/l |
| SCC | Squamous cell carcinoma antigen | Cervix, esophagus, lungs, head, mouth, throat | ≤ 1.5 μg/l |
| CYFRA 21-1 | Cytokeratin 19 fragment | Non-small-cell bronchial tumor | 2 μg/l |
| hCG | Human chorionic gonadotropin | Gravidity, germ cell tumor, chorionic carcinoma, testis carcinoma with chorionic parts | men under 45: ≤ 2 U/l; women under 45: ≤ 3 U/l; both from 45: ≤ 7 U/l |
| PSA | Prostate-specific antigen | Prostatic hyperplasia, prostate tumor, inflammatory diseases, after rectal palpation | men under 40: 1.4 μg/l; 40 – 60 years: 3.1 μg/l; 60 – 70 years: 4.1 μg/l; 70 – 120 years: 4.4 μg/l |
| HTG | Thyroglobulin | Thyroid (follicular and papillary) | |
| HCT | Human calcitonin | Thyroid (medullary) | |
| Calcitonin | | C-cell tumor, medullary thyroid carcinoma | men: ≤ 18.2 ng/l; women: ≤ 11.5 ng/l |
| Beta-2 microglobulin | | Multiple myeloma, lymphoma, leukemia | 0.6 – 2.45 mg/l; elevated in cases of limited renal function |
Evaluation of changes over time
Tumor markers provide good indications of the activity of a tumor. As a prognostic parameter for the progression of the disease under treatment, tumor marker levels can serve only as benchmarks and are not 100% accurate; still, they offer useful prognoses about the success of the treatment. If a marker concentration falls by 50% within one biological half-life of that marker, this is considered a sign of remission (i.e., the temporary or permanent subsidence of the symptoms of the disease). This situation typically occurs in the post-operative phase.
Roughly constant marker levels point to a stable disease, while a further increase of more than 25% points to progression. An increase of the marker concentrations after treatment and after a previous normalization of those levels points to a relapse and indicates further diagnostic measures.
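The rough decision rules above can be sketched as a small helper. The thresholds (a fall to half the previous value suggesting remission, a rise of more than 25 % suggesting progression) are taken from the text as illustrative rules of thumb, not clinical decision limits; the function name and the simplification to a single pair of measurements taken one biological half-life apart are assumptions.

```python
def classify_marker_trend(previous, current):
    """Classify a tumor-marker trend between two measurements taken
    one biological half-life apart.

    Illustrative thresholds from the text (not clinical limits):
      - a fall to 50% or less of the previous value -> "remission"
      - a rise of more than 25%                     -> "progression"
      - anything in between                         -> "stable"
    """
    if previous <= 0:
        raise ValueError("previous concentration must be positive")
    ratio = current / previous
    if ratio <= 0.5:
        return "remission"
    if ratio > 1.25:
        return "progression"
    return "stable"
```

For example, a marker dropping from 100 to 40 units would be flagged as "remission", while a rise from 100 to 130 would be flagged as "progression"; as the text stresses, such a flag only prompts further diagnostics, never a diagnosis on its own.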
It is crucial that tumor marker levels not be treated as an isolated phenomenon; rather, the patient’s entire condition, including all diagnostic measures, has to be taken into account. In 50 % of all tumor cases, the increase in tumor marker levels precedes detection by diagnostic imaging. This means a tumor can be detected first in the blood and only later in, for instance, radiological images.
Westergren’s erythrocyte sedimentation rate as a non-specific diagnostic tool
The erythrocyte sedimentation rate (ESR) is defined as the rate at which red blood cells settle out of the plasma under gravity in blood that has been treated so that it cannot coagulate. The standard method is Westergren’s method, in which the height of the plasma layer above the settled red blood cells is measured in millimeters. The normal value for men is 3 – 8 mm within one hour; for women, it is 6 – 11 mm per hour.
The rate of sedimentation depends on the number of erythrocytes (the hematocrit) and on their form and aggregation. The more erythrocytes there are, the greater the friction and interference between them, and the slower they sink to the bottom.
Since women generally have a lower hematocrit than men (42 % vs. 47 %), their ESR is accordingly higher: the erythrocytes sink at a faster pace. Furthermore, changes in the composition of the plasma proteins, the temperature, or contaminants influence the ESR values, but this will not be discussed further at this point.
It is, however, of vital importance to know that an increase in the ESR is to be expected in the event of inflammations, immune responses, and tumors. Yet like tumor marker concentrations, the ESR can serve only as a non-specific exploratory test for inflammatory and malignant diseases.
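As an illustration of how non-specific the ESR is, the Westergren reference values quoted above can be wrapped in a small check. The ranges (men 3 – 8 mm/h, women 6 – 11 mm/h) are those given in the text; real laboratories use age-adjusted limits, and the function name is a hypothetical choice.

```python
# Westergren ESR reference ranges from the text (mm in the first hour).
ESR_REFERENCE = {
    "male": (3, 8),
    "female": (6, 11),
}

def esr_flag(sex, esr_mm_per_h):
    """Flag an ESR value against the sex-specific reference range.

    An "elevated" result is deliberately non-specific: inflammation,
    immune responses, and tumors can all raise the ESR.
    """
    _, upper = ESR_REFERENCE[sex]
    return "elevated" if esr_mm_per_h > upper else "normal"
```

For instance, `esr_flag("female", 25)` flags a clearly raised value, but by itself says nothing about the cause of the elevation.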
Tumor prophylaxis and diagnostic imaging
In oncology, the field of cancer prevention is categorized into three stages: primary prevention (preventing development of the disease), secondary prevention (early detection or screening), and tertiary prevention (preventing recurrence). Imaging diagnostics can be useful in all three stages. Still, imaging cannot and should not replace the histological confirmation of a malignant tumor.
In any case, the best approach is a holistic treatment that takes into account changes of tumor marker levels, diagnostic images, and biopsy results.
Diagnostic imaging starts with non-invasive procedures. Only when there is a reasonable and well-grounded suspicion are computed tomography (CT), magnetic resonance imaging (MRI), digital subtraction angiography (DSA), and positron emission tomography (PET) taken as further steps.
Popular Exam Questions on Tumor Markers
The answers are below the references.
1. Which statement is correct? Tumor markers…
- …are, biochemically speaking, carbohydrates.
- …only appear in malign diseases.
- …are synthesized by cancer cells.
- …are only detectable at the beginning of the tumor disease.
- …are organ-specific.
2. Which statement is not correct?
- Chemical noxae are potentially mutagenic.
- Humoral tumor markers are tumor antigens that are located in membranes.
- Tumor markers mostly do not qualify as a primary diagnostic tool.
- Test results within the reference range do not rule out a tumor.
- An increase in marker concentrations after previous therapy points to a relapse.
3. Which statement is correct?
- In 50 % of all cases, the tumor marker increase precedes a radiological diagnosis.
- In 25 % of all cases, the tumor marker increase precedes a radiological diagnosis.
- In 70 % of all cases, the tumor marker increase precedes a radiological diagnosis.
- Diagnostic imaging is only used for mamma carcinomas.
- For tumor diagnosis, only magnetic resonance imaging is suitable.
Action Potential
Dr Raghuveer Choudhary, Associate Professor, Department of Physiology, Dr S.N. Medical College, Jodhpur

The Neuron
A neuron consists of dendrites, a cell body (soma), an axon, and axon terminals (boutons). Information must be transmitted within each neuron and between neurons. The membrane that surrounds the neuron is composed of lipid and protein.

Ion Channels
- Ligand-gated channels are opened or closed by neurotransmitters, hormones, or second messengers.
- Voltage-gated channels open and close in response to small voltage changes across the plasma membrane.

The Resting Potential
There is an electrical charge across the membrane: the membrane potential. At rest (when the cell is not firing) there is a difference of about 70 mV between the inside and the outside of the cell; the resting potential of a neuron is -70 mV. The resting membrane potential, measured in millivolts (mV), is established by the different permeabilities (conductances) of the permeable ions; the resting nerve cell membrane is more permeable to K+ than to Na+. Changes in ion conductance alter currents, which change the membrane potential.

Ions and the Resting Potential
Ions are electrically charged particles, e.g. sodium (Na+), potassium (K+), and chloride (Cl-). The resting potential exists because ions are concentrated on different sides of the membrane: Na+ and Cl- outside the cell, K+ and organic anions inside. The electrical voltage and the concentration gradient across the membrane exert forces on each ion. For K+ and Cl-, the forces of voltage and concentration are balanced; the organic anions are too large to pass through the membrane; but for Na+, both forces drive the ion into the cell.

Maintaining the Resting Potential
Na+ ions are actively transported out of the cell (this uses energy) to maintain the resting potential. The sodium-potassium pump (a membrane protein) exchanges three Na+ ions for two K+ ions.

Basic definitions
- Depolarization: a decrease in membrane potential; the inside of the cell becomes less negative (more positive).
- Hyperpolarization: an increase in membrane potential; the inside of the cell becomes more negative.
- Inward current: the flow of positive charge into the cell.
- Outward current: the flow of positive charge out of the cell.

The Action Potential
The action potential is a property of excitable cells consisting of a rapid depolarization, or upstroke, followed by repolarization of the membrane potential. Action potentials have a stereotypical size and shape, are propagating, and are all-or-none. In a neuron, the action potential starts at the axon hillock and passes quickly along the axon; the membrane is then quickly repolarized to allow subsequent firing.

Course of the Action Potential
The action potential begins with a partial depolarization [A]. When the excitation threshold is reached, there is a sudden large depolarization [B], followed rapidly by repolarization [C] and a brief hyperpolarization [D]; the whole sequence, from -70 mV up to about +40 mV and back, takes a few milliseconds. The threshold is the membrane potential at which the action potential becomes inevitable: if an inward current depolarizes the membrane to threshold, an action potential is produced.

Rapid depolarization (upstroke)
When the partial depolarization reaches threshold, the activation gates of the voltage-gated sodium channels open rapidly and the Na+ conductance of the membrane promptly increases. Sodium ions rush in and the membrane potential changes from -70 mV to about +40 mV. Depolarization also closes the inactivation gates of the Na+ channels and slowly opens K+ channels, increasing the K+ conductance to even higher levels than at rest.

Repolarization
The sodium channels close and become refractory. The depolarization has also triggered the opening of voltage-gated potassium channels, so K+ ions rush out of the cell, repolarizing and then briefly hyperpolarizing the membrane. Repolarization is therefore caused by an outward K+ current.

The action potential step by step
1. An adequate stimulus opens the stimulus-gated Na+ channels at the point of stimulation; Na+ diffuses rapidly into the cell, producing a local depolarization.
2. If the magnitude of the depolarization surpasses the threshold potential (about -59 mV), the voltage-gated Na+ channels open. If the local depolarization fails to cross threshold, the voltage-gated Na+ channels do not open and the membrane simply recovers to the resting potential of -70 mV without producing an action potential.
3. As more Na+ rushes into the cell, the membrane potential moves toward 0 mV and continues to a peak of about +30 mV (the plus sign indicates an excess of positive ions inside the membrane).
4. The voltage-gated Na+ channels stay open for only about 1 ms before automatically closing. Once stimulated, they always allow sodium to rush in, so the action potential is an all-or-nothing response.
5. Once the peak is reached, the membrane potential moves back toward the resting potential (repolarization). Surpassing threshold triggers not only the voltage-gated Na+ channels but also the voltage-gated K+ channels; these are slow to respond and do not begin opening until the inward diffusion of Na+ has driven the membrane potential to about +30 mV. Once the K+ channels are open, K+ rapidly diffuses out of the cell; this outward rush of K+ restores the original excess of positive ions outside the membrane, repolarizing it.
6. Because the K+ channels remain open as the membrane reaches its resting potential, too many K+ ions may rush out of the cell. This causes a brief period of hyperpolarization before the resting potential is restored by the action of the Na+/K+ pump and the return of the ion channels to their resting state.

Resuming the resting potential
The potassium channels close, repolarization resets the sodium channels, ions diffuse away from the area, and the sodium-potassium transporter maintains the polarization. The membrane is now ready to “fire” again.

All-or-none response
The action potential does not occur in a nerve if the stimulus is subthreshold. If the stimulus is at threshold or above, the action potential produced is always of the same amplitude, regardless of the intensity of the stimulus; instead, the frequency of action potentials increases with increasing stimulus intensity.

Refractory Period
For a short time after an action potential, the local area of the axon membrane resists restimulation.
- Absolute refractory period: from the beginning of the action potential until near the end of repolarization (about 0.5 ms after the membrane surpasses threshold), complete insensitivity exists: the membrane will not respond to any stimulus, no matter how strong.
- Relative refractory period: for a few milliseconds afterwards, while the membrane is repolarizing (transiently hyperpolarized toward about -85 mV) and restoring the resting membrane potential, only a stronger-than-threshold stimulus can initiate another action potential.

Conduction of the Action Potential
Action potentials propagate by the spread of local currents to adjacent areas of membrane, which are then depolarized to threshold and generate action potentials of their own. Transmission by continuous action potentials is, however, relatively slow and energy-consuming (because of the Na+/K+ pump). A faster, more efficient mechanism has evolved: saltatory conduction.

Myelination and Saltatory Conduction
Most mammalian axons are myelinated; the myelin sheath is provided by oligodendrocytes and Schwann cells. Myelin is insulating, preventing the passage of ions across the membrane, so the electrical charge moves along the axon rather than across the membrane, and action potentials occur only at the unmyelinated regions, the nodes of Ranvier.

Ionic Basis of Excitation and Conduction
The resting membrane potential (-70 mV) is mainly due to leaky K+ channels. The action potential, with its depolarization, repolarization, after-depolarization, and after-hyperpolarization phases, is mainly due to changes in Na+ and K+ conductance: a catelectrotonic current makes the membrane surface less positive, reducing the potential difference between inside and outside; voltage-gated Na+ channels open and the rapid influx of Na+ drives the potential toward the Na+ equilibrium potential; within a few milliseconds the Na+ channels enter an inactivated state, the voltage-gated K+ channels slowly open, and the efflux of K+ ions repolarizes the membrane.

Accommodation
When a stimulus is applied very slowly, it fails to produce an action potential, no matter how strong it is. The cause: a slowly applied stimulus opens the Na+ channels slowly, with concomitant opening of K+ channels, so the influx of Na+ is balanced by the efflux of K+.

Strength-Duration Curve
- Rheobase: the minimum current required to produce an action potential.
- Utilization time: the time taken for a response when the rheobase current is applied.
- Chronaxie: the time taken for a response when twice the rheobase current is applied; it is a measure of the excitability of a tissue.
These quantities are read off the strength-duration curve (stimulus strength plotted against stimulus duration).

Compound Action Potential
A multi-peaked action potential recorded from a mixed nerve bundle is called a compound action potential.
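The all-or-none rule described above can be caricatured in a few lines of code. This is a toy sketch, not a biophysical (Hodgkin-Huxley) model: the resting potential (-70 mV), threshold (-59 mV), and peak (+30 mV) are the figures from the slides, while the fixed after-hyperpolarization value and the function name are assumptions made for illustration.

```python
def stimulate(depolarization_mv, resting_mv=-70.0,
              threshold_mv=-59.0, peak_mv=30.0):
    """Toy all-or-none rule: a stimulus that depolarizes the membrane
    past threshold yields one full-sized spike; a subthreshold stimulus
    produces only a local potential that decays back to rest.
    Returns a crude list of membrane potentials over time (mV)."""
    local_potential = resting_mv + depolarization_mv
    if local_potential >= threshold_mv:
        # Full spike: upstroke to peak, brief hyperpolarization, rest.
        return [local_potential, peak_mv, resting_mv - 15.0, resting_mv]
    # Subthreshold: the membrane simply recovers to rest.
    return [local_potential, resting_mv]

# A 12 mV depolarization (-70 + 12 = -58 mV) crosses threshold and
# fires a full spike to +30 mV; a 10 mV depolarization (-60 mV) does
# not fire, no matter how often it is repeated at this strength.
```

Note that every suprathreshold stimulus produces the same +30 mV peak; in a real neuron, a stronger stimulus raises the firing frequency rather than the spike amplitude.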
To determine the relationship between temperature and volume.
Purpose: Launching Apples
To determine the relationship between volume and pressure.
Purpose: Can Implosion
To determine the relationship between pressure and volume.
A diameter is the length of a straight line from one point on a circle to another, passing through the centre.
Temperature is the measurement of heat, or how hot an object (gas, liquid, or solid) is.
Volume is the amount of 3-dimensional space an object takes up.
Pressure is the amount of force applied per unit area on an object.
Materials: Launching Apples
Materials: Can Implosion
Procedure: Launching Apples
Procedure: Can Implosion
Data: Launching Apples
Data: Can Implosion
When a balloon was placed on the boiling water, it grew in size. This occurred because the temperature of the air in the balloon rose while it floated on the hot water. As the temperature went up, the atoms in the balloon started to move faster. The faster-moving atoms hit the walls of the balloon more often and harder, expanding the rubbery solid. When the temperature increased, so did the volume.
When a balloon was placed on the cold water, it shrank in size. This occurred because the temperature of the air in the balloon fell while it floated on the cold water. As the temperature went down, the atoms in the balloon started to move slower. The slower-moving atoms hit the walls of the balloon fewer times, shrinking the rubbery solid. When the temperature decreased, so did the volume.
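The balloon observations are an instance of Charles's law: at constant pressure, V/T stays constant when temperature is measured in kelvin. A minimal sketch (the 2.0 L starting volume and the temperatures are made-up illustration numbers, and the gas is assumed ideal):

```python
def charles_volume(v1_l, t1_c, t2_c):
    """Charles's law at constant pressure: V1/T1 == V2/T2,
    with temperatures converted from Celsius to kelvin."""
    return v1_l * (t2_c + 273.15) / (t1_c + 273.15)

# Warming a 2.0 L balloon from 20 C to 100 C expands it:
# 2.0 * 373.15 / 293.15 is about 2.55 L.
# Cooling the same balloon from 20 C to 0 C shrinks it below 2.0 L,
# matching what happened on the cold water.
```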
Analysis: Launching Apples
When the second apple was pushed into the tube, the first apple launched out of the tube. The volume of air in the tube decreased as the second apple was pushed in. The atoms in the tube had less space to move in, so they were bunched together, which meant an increase in pressure. Due to this increased pressure and the movement of the second apple, the atoms pushed against the first apple, launching it out of the tube. A decrease in volume causes an increase in pressure.
Analysis: Can Implosion
When the can was heated, the water inside boiled and filled the can with hot steam. When the hot can was placed upside down in room-temperature water, the steam rapidly cooled and condensed, so the pressure inside the can dropped far below the atmospheric pressure outside. The greater outside pressure then crushed the can: as the pressure inside decreased, the volume of the can decreased, which made the can implode.
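Both experiments can be checked against the gas laws. The apple launcher follows Boyle's law (P1·V1 = P2·V2 at constant temperature), and the gas trapped in the can follows Gay-Lussac's law (P1/T1 = P2/T2 at constant volume) as it cools; condensation of the steam drops the pressure even further. The numeric values below are made-up illustrations, and the gas is assumed ideal:

```python
def boyle_pressure(p1_kpa, v1, v2):
    """Boyle's law at constant temperature: P1*V1 == P2*V2."""
    return p1_kpa * v1 / v2

def gay_lussac_pressure(p1_kpa, t1_c, t2_c):
    """Gay-Lussac's law at constant volume: P1/T1 == P2/T2 (kelvin)."""
    return p1_kpa * (t2_c + 273.15) / (t1_c + 273.15)

# Halving the air column in the launcher doubles the pressure:
# boyle_pressure(101.3, 1.0, 0.5) gives 202.6 kPa.
# Cooling trapped gas from 100 C to 20 C drops its pressure well
# below the 101.3 kPa outside, so the atmosphere crushes the can
# (condensation of the steam makes the real drop even larger).
```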
In 1990, humans placed in outer space the most accurate eye ever to gaze at the universe, the Hubble Space Telescope. But that would not have been possible without a less technological, but equally revolutionary, invention—the telescope presented by Galileo Galilei on August 25, 1609. That instrument of refraction—1.27 metres long, with a convex lens in front and another concave eyepiece lens—allowed the Italian polymath to become the father of modern astronomy.
Thanks to this apparatus, Galileo saw that the Sun, considered until then a symbol of perfection, had spots. The astronomer made direct observations of our star, taking advantage of moments when clouds covered the solar disk, or at dawn and dusk, when the light intensity was more bearable, a practice that left him totally blind at the end of his life.
The Moon was not perfect either. Galileo saw what he considered mountains and craters, evidence that our natural satellite, like our planet, was a rocky body full of irregularities on its surface and not a flawless sphere made of ether, as was believed at the time. These observations called into question the traditional Aristotelian theses on the perfection of the celestial world, which resided in the absolute sphericity of the stars.
The Pisa-born astronomer also noticed that Saturn had some strange appendages, which he described as similar to two handles. These “appendages” intrigued astronomers for half a century until 1659, when the Dutch mathematician, physicist and astronomer Christiaan Huygens used more powerful telescopes to unravel the mystery about the changing morphology of the second largest planet in the solar system—these handles were actually its rings.
However, the most curious thing that Galileo was able to observe with the eight-power telescope he built (the first tube he used is believed to have come from a pipe organ) was that Jupiter was surrounded by moons and formed a system similar to what the solar system should be. The astronomer first observed the Galilean moons—so named in his honour—on January 7, 1610 and at first thought that they were three stars near the planet, which formed a line that crossed it. The second night he was struck by the fact that these bodies seemed to have moved in another direction. On January 11, a fourth star appeared and, after a week of observation, he had seen that the four celestial bodies never left the vicinity of Jupiter and seemed to move with it, changing their position with respect to the other “stars” and the planet.
Finally, Galileo determined that what he had been observing were not stars but planetary satellites, and he published his conclusions in his book Sidereus Nuncius in March of the same year. Galileo originally called the moons of Jupiter “Medicean Stars”, in honour of the Medici family, and referred to them with the numbers I, II, III and IV. This system would be used for two centuries, until the names given by the German astronomer Simon Marius, who claimed to have observed the moons before Galileo, were adopted:
“Jupiter is much blamed by the poets on account of his irregular loves. Three maidens are especially mentioned as having been clandestinely courted by Jupiter with success: Io, daughter of the River, Inachus, Callisto of Lycaon, Europa of Agenor. Then there was Ganymede, the handsome son of King Tros, whom Jupiter, having taken the form of an eagle, transported to heaven on his back, as poets fabulously tell…” wrote Marius in 1614—surprisingly in sync with the writer Cervantes, who in an earlier poem had called the four Jovian satellites “little Ganymedes.”
Observations of the satellites of Jupiter and the realization that Venus passes through phases similar to those of the Earth’s moon supported the validity of Copernicus’s heliocentric system, which argued that the Earth is not at the centre of the solar system. In 1632, Galileo published Dialogue Concerning the Two Chief World Systems, an essay on the relative merits of the Ptolemaic and Copernican systems, with all the evidence that telescope observations had brought to the latter. His scientific militancy earned him the persecution and condemnation of the Catholic Church, and Galileo Galilei would die under house arrest and blind, near Florence, in 1642. While he did have to recant his ideas, no one could take away from him the title of father of modern astronomy, for opening the eyes of humanity to a new universe.
Clues for Making Better Metallic Glass
If most people were to hold a piece of metal and a crystal in their hands, they would think the two materials have nothing in common. That would not be completely true, as both are crystalline, meaning the atoms within them are arranged in a regular structure. When a material does not have such a regular structure, it is considered a glass, and metallic glasses are very interesting for many applications. One problem with them, though, is their brittleness, but researchers at Berkeley Lab and Caltech have found something that may help change that.
Thanks to their irregular atomic structures, metallic glasses can be stronger than their crystalline counterparts and as malleable as plastics, while also conducting electricity and resisting corrosion. With properties like those, it is not surprising that many industries are trying to use them. In bulk though, the glasses are brittle, so composite glasses, which can be less brittle, are used instead, but the researchers have found one kind of bulk glass that is as fatigue resistant as those composites. It turns out that palladium-based bulk metallic glasses have a unique staircase-like crack pattern within them. This pattern protects against large cracks by limiting the opening and closing of any cracks that do form.
If this pattern can be replicated in other metallic glasses, we may see pure, bulk metallic glasses being used for a variety of devices in the future. Such devices could include smartphones, biomedical implants, and more electronic devices.
Source: Berkeley Lab |
Inside XSL-T (4/4) - exploring XML
Creating result elements
All elements in the style sheet that are not from the XSL namespace will end up in the result document tree. The xsl:element element allows an element to be created with a computed name. The content of the xsl:element element is a template for the attributes and children of the created element. In a similar fashion XSL elements exist for creating other elements such as attributes, text, processing instructions, and comments.
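As a hypothetical illustration (the element and attribute names below are invented for this example, not taken from the article), a template might build a result element whose name is computed from a `type` attribute in the source document:

```xml
<!-- Invented example: the source element <item type="chapter" id="c1">
     becomes <chapter id="c1"> in the result tree -->
<xsl:template match="item">
  <xsl:element name="{@type}">
    <xsl:attribute name="id">
      <xsl:value-of select="@id"/>
    </xsl:attribute>
    <xsl:apply-templates/>
  </xsl:element>
</xsl:template>
```

The curly braces in `name="{@type}"` are an attribute value template: the XPath expression inside is evaluated against the current node to compute the element name at transformation time.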
Looping, selecting and sorting

While the template mechanism of XSLT is the preferred way of transforming documents, instructions like xsl:for-each, xsl:if, and xsl:choose (aka case, switch) are included for allowing the imperative programmer to iterate over a set and selectively process nodes. Finally a powerful sort operation allows for a node set to be reordered numerically and alphanumerically.
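A sketch of these instructions in use (the element names here are invented for illustration): sorting a set of product nodes by numeric price before emitting an HTML list:

```xml
<xsl:template match="catalog">
  <ul>
    <xsl:for-each select="product">
      <!-- reorder the node set numerically before processing -->
      <xsl:sort select="price" data-type="number" order="ascending"/>
      <li>
        <xsl:value-of select="name"/>
        <!-- emit the price only when one is present -->
        <xsl:if test="price">
          (<xsl:value-of select="price"/>)
        </xsl:if>
      </li>
    </xsl:for-each>
  </ul>
</xsl:template>
```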
Conclusion

Until browsers with support for the yet unfinished XSL formatting objects arrive, the only way to visualize XML documents is to transform them into HTML or to attach CSS to XML. Another popular area for XSLT will be the exchange of business information between different systems and organizations in the form of XML documents over the Internet, where a "style sheet" can be used to translate from one Document Type Definition (DTD) to another. XSLT is a powerful mechanism for transforming XML documents into something else, even non-XML formats such as RTF or PDF.
Created: Aug 13, 2000
Revised: Aug 13, 2000 |
The power of colour
30 October 2008
Supporting the NYR theme of Screen Reads, this interactive text explores the power of imagery within photos and film and the use of descriptive vocabulary and colour to create moods.
Before using the resource
- Talk about times the children have felt happy, sad, hot, cold, etc.
- Look at everyday objects and familiar views through different coloured acetates and discuss how these make them feel.
- Use the introduction as a shared text, focusing on graphemes relevant to the children’s phonic development.
- Read the first sentence and look at the first picture. Ask the children to work in pairs to choose the most appropriate word to describe the picture. Discuss their choices and encourage them to give reasons.
- After looking at the second picture individually, ask them to note two words they feel best match the picture. Again, discuss the choices.
- Ask the children why they haven’t chosen the same words for the two pictures. Refer back to activities on the use of colour and mood.
- Go on to read the rest of the introduction together.
- Model the use of Activity 1 prior to independent or guided work.
- After children have used Activity 1 and 2 independently, model the use of the Picturacy demo. Additional teachers’ notes for the demo are provided on the Picturacy website. |
shapes and patterns
How many squares can you find? Introduce your young student to shapes and beginning geometry concepts with this fun coloring sheet.
How many sides does a pentagon have? Introduce your young student to shapes and beginning geometry concepts with this fun coloring sheet.
How many triangles do you see? Introduce your young student to shapes and beginning geometry concepts with this fun coloring sheet.
This outer space visitor needs help finding shapes! Can your child help? He'll need to identify and color code all the different shapes in the worksheet.
Boost your child's pre-geometry skills with an alien activity! He'll get to identify and color the different shapes in order to help his alien friend.
Stellar students, get ready for a math mission from outer space! Your child will get to help this alien sort and color the different shapes in this worksheet.
This arithmetic alien comes in peace! Can your child help him color and identify all the different shapes in this worksheet?
Get ready for a stellar math mission! Your child will help his alien friend find and identify all the shapes in this worksheet.
Give your child a review of her colors, and a lesson in identifying shapes! She'll be building fine motor skills as she colors, prepping her for handwriting.
Here's a math sheet that also involves coloring! Challenge your child to identify the shapes in this worksheet by color coding them.
Send your child on a math mission to find shapes! He'll get to color code the different shapes he sees, building his fine motor skills.
Are you ready for a math mission? In this shapes worksheet, your child will help this alien collect shapes! He'll identify the shapes by color coding them.
Greetings, stellar students! Here is a math mission that your child will enjoy: color code all the different shapes to help the alien complete his task.
Can your kid match these basic shapes to their real-world counterparts? It's a great way to boost her shape recognition skills.
How many shapes are there? They're all mixed up! Can your kid count how many there are of each type?
Help your kindergarten child learn his geometric shapes with this printable coloring worksheet. |
With a death rate of more than 85%, equine grass sickness (EGS) is a serious, debilitating disease that affects horses, ponies and donkeys. It causes severe, extensive damage to nerves, mostly the ones that control involuntary functions, and the gastrointestinal tract is particularly affected.
EGS was first described in eastern Scotland in the early 1900s and Britain has the highest incidence of the disease worldwide, with equines in Scotland and northeast England being particularly susceptible. There is limited accurate information about how common EGS is but, on average, approximately 140 cases are reported to the nationwide Grass Sickness Surveillance Scheme each year.
What causes Equine Grass Sickness (EGS)?
Almost all cases of EGS occur in horses with access to grazing. Scientific evidence suggests that the disease may be caused by the bacterium Clostridium botulinum type C, which is commonly found in soil and can produce neurotoxins that horses are particularly sensitive to. The disease is thought to occur when a combination of several risk factors triggers C botulinum that’s present in the horse’s intestinal tract to produce neurotoxins.
Decades of research have identified many risk factors associated with EGS, including…
- access to grazing – this is the main risk factor for EGS, with only a couple of isolated reports of the disease occurring in stabled horses
- EGS having occurred on a premises previously – it’s considerably more likely to recur on premises where there has been a previous case
- the time of year – 60% of all cases occur between April and June, particularly during periods of cold, dry weather
- recent movement to a new premises or pasture – exposure to new grazing is associated with an increased risk of EGS
- age – young adults aged between two and seven years are at greatest risk
- breed – it’s been suggested that native Scottish breeds may be more susceptible
- pasture disturbance – studies have identified increased risk where there is a history of pasture disturbance, presumably due to increasing the chance of soil ingestion
What to look out for
The disease can occur in acute, subacute and chronic forms, reflecting the severity and the duration of clinical signs. The clinical signs of acute EGS are severe, appear suddenly and may include…
- excessive salivation
- difficulty swallowing
- an inability to eat
Subacute cases show signs that are similar to acute cases, but less severe. Cases of acute and subacute EGS are invariably fatal, with acute cases dying or requiring euthanasia within 48 hours.
Horses surviving beyond seven days are classed as having the chronic form of EGS. Approximately a third of EGS cases present with the chronic form, which has a more gradual onset, and the signs include…
- profound weight loss
- a ‘tucked up’ abdominal appearance
- an abnormal stance
While recovery may be possible in cases of chronic EGS, the level of nursing they require is intensive, and treatment can be prolonged and expensive.
One clinical sign that may be observed in all three forms of EGS is droopiness of the upper eyelids, giving the horse a sedated appearance. This sign is called ptosis. If you notice any of these signs in your horse, you should call your vet without delay.
Unfortunately, the most reliable way to diagnose EGS is by conducting a thorough post mortem examination, but in many cases a diagnosis can be made based on the clinical signs, case history and ruling out other possible diagnoses.
It’s possible to reach a definitive diagnosis in a live horse, but this involves taking biopsies of the gastrointestinal tract during exploratory abdominal surgery with the horse under general anaesthetic, then examining the samples under a microscope. Abdominal surgery can be useful because it helps to rule out surgical colic, which can show similar signs, but it’s expensive and usually requires referral to an equine hospital with surgical facilities.
A non-invasive test that’s commonly used to help diagnose EGS involves applying a small amount of a drug called phenylephrine to one eye then, after a short period, comparing the angles of both upper eyelids. This drug can temporarily reverse the droopy upper eyelid (ptosis) in EGS cases, but as horses without EGS can also show a positive result, it’s important to take the history and clinical signs into consideration, too.
Because horses with acute and subacute grass sickness are so unwell that they die or need to be put down quite quickly, treatment should only be attempted in cases of chronic EGS. The only treatment with any proven efficacy is intensive nursing, with particular attention on ensuring an adequate intake of food and water. There is no ideal diet, as appetite and food preferences vary, so a wide range of different, highly palatable wet and dry feeds should be offered on a little and often basis, and water should be refreshed regularly. Hand-feeding can help encourage horses to eat and regular in-hand walking exercise is beneficial.
Various other things that have been tried when treating EGS include…
- appetite stimulants (such as diazepam), but these were found to have no significant effect on appetite and resulted in undesirable side-effects including drowsiness and poor co-ordination
- a variety of lubricants and laxatives, including liquid paraffin, which can be administered via a stomach tube if required
- aloe vera gel, which not only acts as a laxative, but has antioxidant and anti-inflammatory properties, too. Unfortunately, aloe vera gel has to be administered daily via a stomach tube – a procedure that is resented by horses with sensitive noses, because passing the stomach tube down the nose and throat becomes uncomfortable. Only one small research study has evaluated its use and it found that aloe vera had no significant beneficial effect in treating chronic EGS cases.
The road to recovery is rarely smooth, with many horses suffering from secondary complications such as…
- gastric ulcers – given the management and dietary changes involved in treating chronic EGS cases, affected horses are at high risk of this condition
- episodes of colic, especially after feeding
- aspiration pneumonia, where horses inhale food material. This can be treated with antibiotics, but tends to be associated with a poor prognosis
- episodes of diarrhoea are fairly common and occur in around 30% of cases. However, if the diarrhoea is persistent, the prognosis becomes poorer
Recovery from chronic EGS
Recovery is a lengthy process, taking on average 6–12 months for horses to regain their normal bodyweight, but can take even longer for those who require a longer period of hospitalisation.
Approximately 50% of chronic EGS cases go on to recover from the disease, and the cases with the best chance of survival tend to show a significant improvement in their appetite and demeanour within the first month. Around 80% of recovered cases return to their previous level of work, with an average of 12 months post-diagnosis to return to ridden work and up to 18 months to return to competition.
Once recovered, it’s rare for affected horses to suffer from EGS again and the majority of cases appear to regain normal gastrointestinal tract function. Interestingly, examination of tissue from recovered cases indicates that the characteristic nerve damage remains even following a complete recovery.
Can it be prevented?
Currently, it’s recommended that owners minimise their horses’ exposure to known risk factors and implement management practices that have been associated with a decreased risk of EGS. However, it’s not possible to avoid many of the risk factors for EGS, particularly those related to the premises, season and climate, and preventive management strategies don’t guarantee protection from the disease.
However, there is another potential method of prevention – vaccination. Horses with EGS have lower antibody levels to C botulinum and those with higher antibody levels have a reduced risk of disease. Furthermore, horses who have been in contact with an EGS case appear less likely to develop the disease, suggesting they may acquire some degree of immunity. Other equine diseases caused by bacteria in the Clostridia family, such as tetanus and botulism, are both prevented successfully by vaccination, making vaccination against C botulinum type C a promising avenue to explore for the prevention of EGS.
In 2014, the AHT launched a ground-breaking trial for a potential vaccine against EGS, in collaboration with the Universities of Edinburgh, Liverpool and Surrey. A successful pilot trial was undertaken that involved a total of 95 horses in Scotland, and vaccinated horses mounted a significant immune response against C botulinum type C following their primary vaccination course.
For the main part of the trial, 84 practices and 1,037 horses and ponies were enrolled. The trial is still ongoing, but it’s hoped that the results will show that vaccination against C botulinum type C is effective in reducing the risk of grass sickness. |
Definition of Ojibwa in English:
noun (plural same or Ojibwas or Ojibways)
1A member of a North American Indian people native to the region around Lake Superior. Also called Chippewa.
- The Ojibwas had likewise used deception to their benefit in taking Michilimackinac.
- The two Ojibwas affectionately nicknamed him ‘Baptiste’ or ‘Bateese’ for reasons never clear to him.
- But long, long before the Voyageurs came the forests were home to the Sioux and the Ojibwa.
2The Algonquian language of the Ojibwa.
- There are two sources of native borrowing: the Canadian Indian languages such as Cree, Dene, and Ojibwa, and Inuktitut, the language of the Inuit or Eskimo.
- In this specific way the historical development of Miami-Illinois resembles that of Fox, one of its closest sister languages, rather than that of Ojibwa, another of its closest sister languages.
adjective
Relating to the Ojibwa or their language.
- Similarly, the Native Americans of the Chippewa / Ojibwa tribes thought that the Sun's flames were being extinguished, and so during an eclipse they would launch skywards burning arrows in order to replenish it.
- After 1840 many Metis buffalo hunters, the offspring of European fur traders and Cree and Ojibwa women, also joined these groups.
- The portability of Ojibwa lodging - the wigwam - enabled such moves to be made quickly and easily.
From Ojibwa očipwē, probably meaning 'puckered', with reference to their style of moccasins.
Words that rhyme with Ojibwa: Interlingua • siliqua • Iowa
Home Environment of Early Readers
We have known that the home environment plays an important role in the development of early readers for over a quarter of a century. According to Jim Trelease, author of The New Read-Aloud Handbook, two major studies* (one from 1966 and one from 1976) have been done on early readers as well as students who respond to early education without difficulty. These studies show that the following four indicators were present in the home environment of nearly every early reader.
- The child is read to on a regular basis. This reading included not only books, but billboards, signs, labels, and more. The parents, by example, were avid readers.
- Books, newspapers, magazines, and comics were always available in the home.
- Paper and pencils were also available. Dolores Durkin explained, “Almost without exception, the starting point of curiosity about written language was an interest in copying objects and letters of the alphabet.”
- Finally, Trelease explains that people in the child’s home answered endless questions, praised the child’s efforts, used their local library frequently, bought books, wrote stories that their child dictated and displayed their child’s work prominently.
*Dolores Durkin, Children Who Read Early (New York: Teachers College Press, 1966), and Margaret M. Clark, Young Fluent Readers (London: Heinemann, 1976).
Visit Jim Trelease’s website, a great site for anyone interested in children’s reading and education. Special emphasis is placed on the importance of reading aloud to children of all ages.
Stormy Weather in Space
The Challenges of Exploring Vast Substorms
Simulating Substorms in the Computer
|Figure 1. Northern Lights from Space
View from space of the Earth’s Northern Lights, which are produced by the impact on the upper atmosphere of fast-moving charged particles from magnetic storms and substorms in the Earth’s magnetotail. Image of northern auroral oval obtained on March 25, 1996 is superposed on image of Earth’s surface. Auroral image, in the ultraviolet spectrum, is from the Visible Imaging System (VIS) on the Polar spacecraft. Underlying image of Earth’s surface is a subset of the Face of the Earth™ © 1996, ARC Science simulations.
The shimmering curtains of the Northern Lights or aurora borealis have long fascinated all who see them, including the scientists who study this "space weather" in the Earth’s protective magnetosphere. But these Northern Lights also have a dark side–they are the product of magnetic substorms that can damage satellites, communications networks, and even power grids. Developing a theory that explains the rapid onset of substorms and the trigger that unleashes them has eluded physicists for decades, both because of the expense of obtaining satellite data and the great computing resources required for complex models. Now, Philip Pritchett of UCLA and colleagues are gleaning new insights using NPACI supercomputers to simulate substorms more accurately than previously possible, as they continue the hunt for the elusive triggering mechanism.
Watching the eerie display of the aurora borealis overhead can awaken both feelings of awe and questions about this mysterious light show in the heavens. Where do these lights come from? What governs their comings and goings?
To answer these questions, research physicist Philip Pritchett of the Department of Physics and Astronomy at UCLA and colleagues Ferdinand Coroniti and Viktor Decyk are trying to penetrate the mysteries of magnetic substorms. "Magnetic substorms are one of the oldest problems in physics that is still unsolved–people have been working on this for 35 or 40 years–and now simulations on powerful supercomputers are playing a crucial role in letting us understand them," said Pritchett.
In addition to giving scientists a better understanding of the basic mechanisms of substorms, the knowledge the researchers are gaining should lead to practical benefits such as better predictions of the space weather around the Earth, including the magnetic substorms that can be so disruptive. And beyond substorms, the mechanisms Pritchett is studying play a role in other important areas such as the eruptions on the Sun known as solar flares and the Tokamak fusion energy device that may one day generate low-cost electricity.
Stormy Weather in Space
The Sun gives off charged particles (protons, electrons, and heavier ionized atoms) that blow through interplanetary space toward the Earth. When this "solar wind" reaches the Earth, the charged particles interact in complex ways with the Earth’s magnetic field, forming the protective magnetosphere with its long magnetotail extending millions of miles or hundreds of Earth radii downstream. Just as the dynamics of the atmosphere give rise to the Earth’s weather, so the dynamics of the solar wind as it buffets and distorts the Earth’s magnetic field give rise to space weather–complete with magnetic storms, which have a timescale of days, and substorms, which can have a timescale of hours.
During the period leading up to a magnetic substorm, the solar wind acts to stretch the Earth’s magnetic field lines like rubber bands, until eventually they break and reattach in a process known as reconnection. During reconnection, the magnetic field lines release their stored energy, accelerating the charged particles around them in a sudden explosion that scatters the electrons and ions great distances at high velocities.
Magnetic substorms occur on the night side of the Earth facing away from the Sun, and the explosion does not scatter the electrons and ions uniformly, but preferentially in two directions. Some are projected outward along the Earth’s magnetotail, while another "beam" is projected toward the Earth, and guided down the Earth’s magnetic field as it dips toward the poles. "The impact of these fast-moving charged particles hitting the Earth’s upper atmosphere excites atoms there so that they emit the light we see in the aurorae, which are mostly visible at high latitudes, although on rare occasions they can be seen as far south as Los Angeles," said Pritchett. He explains that an aurora acts as a kind of signature or mirror, providing a telltale visual display that reveals what is happening in the substorm farther above the Earth (Figure 1).
"Although many things about magnetic substorms have been understood for years, there are some fundamental questions that have turned out to be surprisingly difficult to answer," said Pritchett. One question involves the great suddenness with which substorms appear--far faster than earlier models have been able to account for. A second basic question is what triggers substorms. Is it an external event, such as some change in the solar wind that tilts the interplanetary magnetic field orientation, or is the trigger a local event in the magnetotail itself? And although the disruptions of substorms eventually extend from near the Earth to far down the magnetotail, where is the substorm triggered--relatively near the Earth at five or 10 Earth radii, or farther out in the magnetotail at perhaps 25 Earth radii? (See Figure 2.) Through their supercomputer simulations the researchers are finding new answers to these basic questions, which continue to intrigue space physicists.
Figure 2. Possible Substorm Trigger Zones
The solar wind interacts with Earth’s magnetic field to create a long downstream magnetotail (black arrows indicate direction of magnetic field lines), where magnetic storms and substorms produce the Northern Lights (Figure 1). These simulations are investigating substorm dynamics, including two proposed mechanisms for substorm triggering. Top shows trigger far down magnetotail at around 25 Earth radii with inward high-speed flow, while bottom shows alternate theory with trigger of outward-moving rarefaction wave occurring relatively near the Earth at five or 10 Earth radii.
The Challenges of Exploring Vast Substorms
Magnetic substorms unfold throughout enormous volumes of space extending far above the Earth, presenting researchers with major difficulties in both the experimental and computational approaches used to explore them. "Until now we’ve never been able to have more than one satellite observing in the same general region at a time. Like a blind man exploring an elephant, the single measurement point of one satellite gives only a partial view of substorms. In a similar way, although we get a great deal of information from our simulations, in one respect we’re still limited in that we can’t yet include the entire substorm zone," said Pritchett.
Complementing more general studies with global magnetohydrodynamics models, the researchers’ model concentrates on the detailed local physics that leads to the triggering and onset of substorms, capturing the effects of individual particle dynamics in what are known as particle-in-cell simulations. But this sets a minimum size for each computational cell of several tens of km, so that the entire substorm zone would contain too many cells for even today’s largest supercomputers. Thus, these simulations encompass one part of the substorm zone at a time.
Along with an experiment that the researchers hope will one day be done with a dozen or more satellites observing simultaneously, being able to run larger simulations will help researchers answer further questions about substorm origins and dynamics.
"Our simulations are part of computational physics, a third branch that lies between the traditional theoretical and experimental branches of physics. The way I approached the substorm problem is to perform numerical experiments in which we follow the motion of a large number of charged particles in 3-D simulations in self-consistent electric and magnetic fields described by Maxwell’s equations," said Pritchett. The model can follow as many as 100 million particles, reproducing the interactions the particles experience in the earth’s magnetosphere and substorms.
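The article does not show the researchers' code, but the heart of a particle-in-cell model, the particle mover, can be sketched. The minimal Python sketch below uses the Boris scheme, a standard mover in PIC codes (assumed here for illustration, not taken from the UCLA model), to advance one charged particle through prescribed electric and magnetic fields:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors (plain lists)."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(pos, vel, E, B, q_over_m, dt):
    """Advance one charged particle by one time step.

    The Boris scheme splits the Lorentz force into two half
    accelerations by E with an exact rotation about B in between,
    so speed is conserved in a purely magnetic field.
    """
    # First half acceleration by the electric field
    v_minus = [v + 0.5 * q_over_m * dt * e for v, e in zip(vel, E)]
    # Rotation about the magnetic field direction
    t = [0.5 * q_over_m * dt * b for b in B]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    cvt = cross(v_minus, t)
    v_prime = [v_minus[i] + cvt[i] for i in range(3)]
    cvs = cross(v_prime, s)
    v_plus = [v_minus[i] + cvs[i] for i in range(3)]
    # Second half acceleration, then position update
    vel_new = [v + 0.5 * q_over_m * dt * e for v, e in zip(v_plus, E)]
    pos_new = [p + v * dt for p, v in zip(pos, vel_new)]
    return pos_new, vel_new

# A particle in a uniform magnetic field simply gyrates: the velocity
# vector rotates while its magnitude stays constant.
pos, vel = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
for _ in range(1000):
    pos, vel = boris_push(pos, vel, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0, 0.01)
speed = math.sqrt(sum(v * v for v in vel))
```

A full particle-in-cell code would also deposit charge and current from millions of such particles onto the grid and advance the electric and magnetic fields with Maxwell's equations every step; this sketch shows only the single-particle mover.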
"One of the most important things we’ve discovered is that instead of treating all charged particles the same, in line with previous assumptions, when we include certain kinetic effects of the plasmas and allow separate behavior for electrons and ions, the release of energy becomes much more rapid in the reconnection process," said Pritchett. While earlier models predicted rates of solar flare eruption, for example, that were far too slow, on the order of 10,000 years, Pritchett’s improved simulations produce a rate on the order of 30 minutes, consistent with what is actually observed in nature. "The simulations show that the proposed mechanism of reconnection can account for the rapid onset of magnetic explosions that form substorms and solar flares," said Pritchett.
The researchers have also tested predictions from each of the two competing theories for the trigger mechanism of substorms, finding some behavior consistent with the theory that predicts a near-Earth substorm origin. However, further simulations with larger grids will be necessary to fully answer the question of where substorms are triggered.
Simulating Substorms in the Computer
In their simulations, Pritchett and colleagues use a 3-D grid with 128 grid points per axis. Because the code uses a domain decomposition algorithm that is 1-D, it has been limited to 128 processors. To scale up the code for larger simulations, the researchers are adopting a 2-D domain decomposition, which will enable the simulations to run on 10,000 or even 20,000 processors.
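To illustrate why the decomposition's dimensionality caps the processor count, here is a small Python sketch (not the researchers' code) that partitions one grid axis as evenly as possible across ranks. With a 1-D decomposition of a 128-point axis, at most 128 ranks can each own at least one grid plane; decomposing two axes independently allows up to 128 × 128 = 16,384 ranks:

```python
def split_axis(n_points, n_ranks):
    """Partition n_points grid points into n_ranks contiguous
    (start, stop) index ranges of nearly equal size."""
    base, extra = divmod(n_points, n_ranks)
    ranges, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# 1-D decomposition: 128 ranks each own one 1 x 128 x 128 slab
slabs = split_axis(128, 128)

# 2-D decomposition: split two axes independently; the subdomain of
# rank (i, j) is the outer product of the two 1-D partitions
rows = split_axis(128, 16)
cols = split_axis(128, 16)
pencils = [(r, c) for r in rows for c in cols]  # 16 * 16 = 256 subdomains
```

The 16 × 16 rank grid here is an arbitrary illustration; the same splitting applied with 128 ranks per axis yields the 10,000-plus processor counts mentioned in the text.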
"Data sets include the full particle and field information for up to 100 million particles, which is then used in post-processing to create statistical averages and calculate things like current densities," said Pritchett.
Looking ahead, the researchers have nearly finished migrating their code from the Cray T3E to Blue Horizon, which will allow a larger model with from 256 to 512 grid points per axis running on the same number of processors. This will encompass a greater part of the substorm domain, corresponding to a distance of about 10 Earth radii, allowing the researchers to more realistically model substorms and better test the two models for substorm origins.
"This is an interesting time to be studying substorms," said Pritchett, "because with better simulations we’re finally able to answer some very old questions." –PT
From the Sun: Auroras, Magnetic Storms, Solar Flares, Cosmic Rays, edited by S.T. Suess and B.T. Tsurutani, American Geophysical Union, Washington, DC, 1998.
Animal Vocabulary: Teaching Children Italian
This lesson plan will give you a suggested process and the tools you need, including audio files and worksheets, to help your students identify familiar animals using the Italian language. After this lesson, your students will be asking their parents for 'cagnolini' instead of 'puppies' in no time!
1. Print out Worksheets and Index Cards from the Language & Culture Media files for Teaching Children Italian. The first worksheet lists the animals in alphabetic order: This will be used as a guide for the adult teaching the lesson. The second worksheet is actually several pages of index cards. These index cards are to be cut and used as a learning tool.
2. Have children find pictures of each animal and paste on the opposite side of the index card with the correct corresponding name. So for anatra the child will paste a picture of a duck behind the index card. This will help with the testing later on. Have children study these cards.
3. Go to Syvum.com where you will find most of the animals named with corresponding audio files. Here you will listen to how to say the words in Italian. The worksheets list more animals than those in the audio files, so do not be discouraged.
4. Test children using the index cards. Show the picture of the animal and have them state how we say this animal in Italian. For every animal they do not get correct, have them write this animal in Italian three times. This method will help drill the word and image into their mind. They will learn to associate the picture with the name.
5. When you are out, or even inside, have children name animals in Italian. For instance, if they have a dog, start calling him 'cane'; instead of "feed the dog," say "feed the cane."
6. Bring children to the zoo and a farm for two very important and fun field trips. Children will learn more about the animals while testing their knowledge of the Italian language.
7. After you think the child has grasped all of the animals, test the child without the pictures. Have the child sit down and fill out the last worksheet, which will have either the English or Italian word for each animal. The child is to fill in the corresponding word.
Using this lesson, children will start to apply the Italian language to animals in everyday and not so everyday life. In our next lesson we will look at family members.
This series continues with an article on using pronouns and the verb 'piacere' ('to like') with the vocabulary learned so far.
Presentation on theme: "TEACH LIKE A CHAMPION Hamblen County Department of Education"— Presentation transcript:
1 TEACH LIKE A CHAMPION Hamblen County Department of Education. Based on the work of author Doug Lemov.
2 Technique 1 - No Opt Out: The belief that a sequence beginning with a student unable to answer a question should end with that student answering correctly as often as possible.
3 Technique 2-Right Is Right Set and defend a high standard of correctness in your classroom. In holding out for what is right, you set the expectation that the questions you ask and their answers truly matter.
4 Technique 3 - Stretch It: The sequence of learning does not end with the right answer; reward right answers with follow-up questions that extend knowledge and test for reliability. This technique is especially important for differentiating instruction.
5 Technique 4-Format Matters It’s not just what students say that matters, but how they say it. Use format matters to prepare your students to succeed by requiring complete sentences and proficient grammar every chance you get.
6 Technique 5 - Without Apology: There is no such thing as boring content, only boring instruction. Four ways we are at risk of apologizing for what we teach: assuming something will be boring; blaming it; making it "accessible"; apologizing for students.
7 Technique 6- Begin with the End By framing the objective first, you substitute “What will my students understand today?” for “What will my students do today?” Frame each lesson by the learning goal, not the material.
8 Technique 7 - The 4 M's: Given the importance of standards, the teacher must strive to make the path to proficiency in a standard useful and effective. Manageable: break everything into bite-sized pieces called objectives. Measurable: be able to assess mastery by the end of the class. Made First: design activities to meet the objective; don't retrofit an objective to make the activity fit. Most Important: focus on what's most important.
9 Technique 8-Post-ItPost your objective in the same place every day, so that anyone can identify your purpose for teaching that day in plain English.
10 Technique 9- Shortest Path All other things equal the shortest path is the best. Take the shortest path, and throw out all other criteria.
11 Technique 10-Double Plan It is as important to plan for what students will be doing during each phase of your lesson as it is to plan for what you will be doing.
12 Technique 11- Draw the Map Plan the environment to meet the learning goals of the students.
13 Technique 12 - The Hook: Short, engaging introduction to excite students about learning.
14 Technique 13 - Name the Steps: When necessary, give students solution-specific steps by which to work or solve problems of the type you're presenting. This involves breaking down a complex task into specific steps. 1. Identify the steps. 2. Make them "sticky." 3. Use two stairways.
15 Technique 14 - Board = Paper: Teach students how to take notes about what they need to retain from your lessons. Copying from the board is the start; as they grow in the process, they can discern what to include.
16 Technique 15 - Circulate: Move around the classroom to engage and hold students accountable. Break the plane: move around the entire room within the first five minutes of every class (don't just move around to take care of behavior problems). Full access required: make sure you can circulate everywhere in the classroom freely (no backpacks or having to move chairs to get around); keep passageways wide and clear. Move systematically: circulate universally, impersonally, and unpredictably. Position for power: as you circulate, your goal should be to remain facing as much of the class as possible.
17 Technique 16-Break It Down Students learn complex skills by breaking them down into manageable steps and, if possible, giving each step a name so that it can be easily recalled. One of the best ways to ensure success with it is to prepare by identifying potential trouble spots and drafting both anticipated wrong answers and possible cues.
18 Technique 17 - Ratio: The proportion of the cognitive work students do in your classroom is the ratio. Push more and more of the cognitive work out to students as soon as they are ready, with the understanding that the cognitive work must be on-task, focused, and productive.
19 Technique 18-Checks for Understanding Should be constant and should be called, “Checks for Understanding and Do Something About it Right Away.”
20 Technique 19-At Bats“Teach them the basics of how to hit, and then get them as many at bats as you can.” Practice after practice!!
21 Technique 20 - Exit Ticket: 1-3 questions designed to yield data (questions are fairly simple and focus on one key part of the objective). Multiple formats.
22 Technique 21-Take A Stand This involves pushing students to actively engage in the ideas around them by making judgments about the answers their peers provide. The key is having them defend and explain their answers.
23 Technique 22 - Cold Call: In order to make engaged participation the expectation, call on students regardless of whether or not they raise their hands.
24 Technique 23 - Call and Response: You ask a question and the entire class answers out loud in unison. Repeat: the class repeats what the teacher says. Report: those who have finished the problems or questions on their own are asked to report their answers back. Reinforce: you reinforce new information or a strong answer by asking the class to repeat it. Review. Solve.
25 Technique 24 - Pepper: A teacher tosses questions to a group of students quickly, and they answer back. No discussion! If an incorrect answer is given, the teacher asks another student the same question. Variations: head to head; sit down.
26 Technique 25 - Wait Time: Wait Time refers to the technique of delaying a few strategic seconds after you finish asking a question and before you ask a student to begin answering it.
27 Technique 26- Everybody Writes Set your students up for rigorous engagement by giving them the opportunity to reflect first in writing before discussing. Students remember twice as much of what they are learning if they write it down.
28 Technique 27 - Vegas: It's the sparkle, the moment during class when you might observe some production value: music, lights, rhythm, dancing. It reinforces not just academics generally but one of the day's learning objectives. Short, sweet, and to the point.
29 Technique 28-Entry Routine Your entry routine describes how you expect students to enter the classroom and how the class session begins. A good entry routine is planned to proceed quickly and automatically with little or no narration by the teacher. It becomes part of the classroom culture. The objectives, agenda, and homework assignments should already be posted in a consistent and predictable place.
30 Technique 29 - Do Now: A Do Now is a short activity that is written on the board or is waiting on the table by the door when the students enter. Four criteria for focus, efficiency, and effectiveness: 1. Can be completed without direction or discussion. 2. Takes 3 to 5 minutes to complete. 3. Yields a written product. 4. Should preview the day's lesson or review a recent lesson.
31 Technique 30 - Tight Transitions: The power of tight transitions: 1 minute x 10 transitions x 200 days = 2,000 minutes, or roughly 33 hours of instructional time (about a week of school).
32 Technique 31 - Binder Control: 1. Have a required binder for students to take notes. 2. Require an organizational format: assign a number to each assignment; use a student-made table of contents; require students to maintain the binder daily.
33 Technique 32-SLANTFive Key Behaviors that Maximize Student Attention Sit up Listen Ask and answer questions Nod your head Track the speaker
34 Technique 33 - On Your Mark: Students should be prepared before class begins. Ensuring students are on their marks: 1. Be explicit about what students need to start class. 2. Set a time limit. 3. Use a standard consequence. 4. Provide tools without consequence to those who recognize the need before class. 5. Include homework.
35 Technique 34 - Seat Signals: Seat signals should meet the following criteria: 1. Can be made while seated. 2. Nonverbal. 3. Specific, unambiguous, and subtle. 4. The response does not distract from learning. 5. Explicit and consistent.
36 Technique 35-PropsProps is public praise for students who demonstrate excellence or exemplify virtues.
37 Technique 36 - 100 Percent: Three principles to ensure consistent follow-through and compliance in the classroom: 1. Use the least invasive form of intervention. 2. Rely on firm, calm finesse. 3. Emphasize compliance you can see.
38 Technique 37-What to DoTo be effective, directions should be specific, concrete, sequential, and observable.
39 Technique 38 - Strong Voice: The five principles of Strong Voice: 1. Economy of language. 2. Do not talk over. 3. Do not engage. 4. Square up / stand still. 5. Quiet power.
40 Technique 39-Do It AgainDoing it again and doing it right, or better, or perfect is often the best consequence.
41 Technique 40-Sweat the Details The key to Sweat the Details is preparation. Planning for orderliness means putting systems in place in advance that make accomplishing the goal quick and easy.
42 Technique 41 - Threshold: The most important moment to set expectations in your classroom is the minute when your students enter or, if they are transitioning within a classroom, when they formally begin their lesson.
43 Technique 42-No Warnings The behavior that most often gets in the way of taking action is the warning. Giving a warning is not taking action; it is threatening that you might take an action and therefore is counterproductive. Warnings tell students that a certain amount of disobedience will not only be tolerated but is expected.
44 Technique 43-Positive Framing Making interventions to correct student behavior in a positive and constructive way.
45 Technique 44-Precise Praise Positive reinforcement is one of the most powerful tools in every classroom. Most experts say it should happen three times as often as criticism and correction.
46 Technique 45-Warm/Strict As teachers, we must be both: caring, funny, warm, concerned, and nurturing – and also strict, by the book, relentless, and sometimes inflexible.
47 Technique 46- The J Factor The finest teachers offer up their work with generous servings of energy, passion, enthusiasm, fun, and humor – not necessarily as the antidote to hard work but because those are some of the primary ways that hard work gets done.
48 Technique 47-Emotional Consistency First, modulate your emotions. Next, tie your emotions to student achievement, not to your own moods or the emotions of the students you teach.
49 Technique 48 - Change the Pace: Instead of changing topics every ten to fifteen minutes (which is distracting, confusing, and unproductive), change the format of the work every ten to fifteen minutes as you seek to master a single topic.
50 Technique 49-Brighten Lines Bright, clear lines at the beginning and end of your instruction. It also improves pacing because the first and last minute of any activity play a large role in shaping students’ perceptions. Get your activities off to a clean start, and students will perceive them to be energetic and dynamic.
51 Technique 50-All HandsManaging questions, requests, and comments that are either off task or persist on a topic you are ready to dispense with.
52 Technique 51 - Every Minute Matters: Time is water in the desert, a teacher's most precious resource: to be guarded and conserved, each and every minute of every day.
53 Technique 52-Look Forward Use an agenda on the board for a lesson or daily plans.
54 Technique 53 - Work the Clock: Count it down, parcel it out in highly specific increments, announcing an allotted time for each activity and allowing you to continually set goals for your class's speed in meeting expectations.
Your horse’s immune system is a remarkable defense mechanism that safeguards against diseases and infections. It is a complex network of cells and molecules working together to protect your horse from harmful pathogens such as bacteria, viruses, parasites, and fungi. Ensuring a strong immune system is vital for your horse’s overall health and well-being.
Factors like age, stress, and nutrition can profoundly impact the efficiency of the equine immune system. As a responsible horse owner, it is crucial to understand how the immune system works and take proactive steps to support and maintain its optimal functioning.
- Understanding the horse’s immune system is crucial for maintaining its overall health.
- Factors like age, stress, and nutrition can influence the strength and functioning of the immune system in horses.
- Proper veterinary care, including vaccinations and regular check-ups, is vital for disease prevention.
- Supplements formulated to support immune health can play a significant role in maintaining a strong immune system.
- Adopting good hygiene practices and providing a balanced diet are essential for supporting your horse’s immune system.
How a Horse’s Immune System Works
The horse’s immune system is a remarkable defense mechanism that protects against disease-causing pathogens. Understanding how this system works is crucial in ensuring the health and well-being of your equine companion.
The immune system comprises three levels of defense: physical barriers, innate immunity, and adaptive immunity.
Physical barriers, such as the skin and mucous membranes, form the first line of defense against pathogens. The skin acts as a protective barrier, preventing the entry of harmful microorganisms. Mucous membranes, such as those lining the respiratory and digestive tracts, produce mucus and contain specialized cells that secrete antimicrobial substances.
Innate immunity is the body’s immediate response to pathogens. It involves a diverse range of immune cells, including phagocytes, natural killer cells, and dendritic cells. These cells recognize and eliminate foreign invaders without the need for prior exposure. Innate immunity also relies on molecules like complement proteins and antimicrobial peptides to neutralize pathogens.
Additionally, inflammation is a key feature of innate immunity. When tissue is damaged or infected, immune cells release signaling molecules that initiate the inflammatory response. Inflammation helps to recruit immune cells and increase blood flow to the infected or injured area, aiding in the elimination of pathogens.
Adaptive immunity is the highly specialized arm of the immune system that provides long-term protection against specific pathogens. Unlike innate immunity, adaptive immunity takes time to develop but offers a targeted and highly effective response.
Adaptive immunity is mediated by specialized immune cells called lymphocytes, including T cells and B cells. When a pathogen enters the body, these cells recognize specific antigens (molecules on the surface of the pathogen) and mount a response. B cells produce antibodies that bind to the antigens, marking them for destruction by other immune cells. T cells, on the other hand, directly attack infected cells.
Notably, adaptive immunity has the ability to generate memory cells. These memory cells “remember” previous encounters with pathogens, enabling a faster and more robust response upon reinfection.
The effectiveness of the adaptive immune response can be enhanced through vaccination. Vaccines stimulate the immune system to produce a specific immune response without causing disease. This prepares the horse’s immune system to recognize and eliminate the pathogen more efficiently in the future.
In summary, the immune system of a horse relies on physical barriers, innate immunity, and adaptive immunity to defend against pathogens. Understanding how each level of defense operates can help horse owners make informed decisions regarding their horse’s health and well-being.
Factors That Influence Immune Function in Horses
The effectiveness of a horse’s immune system can be influenced by several factors, including age, stress, and nutrition. Understanding how these factors impact immune function is crucial for maintaining your horse’s overall health.
Horses of different ages have varying levels of immune system development and functionality. Foals, in particular, have developing immune systems, making them more susceptible to illness. As they grow and mature, their immune system strengthens, providing better protection against pathogens. On the other hand, older horses may experience a decline in immune function, making them more susceptible to infections and diseases.
Chronic and prolonged stress can have a significant impact on a horse’s immune system. Stressful situations, such as transportation, changes in environment, rigorous training, or competition, can lead to the release of stress hormones like cortisol, which can suppress immune function. It’s essential to minimize stressors and provide a calm, stable environment for your horse to maintain optimal immune health.
Proper nutrition plays a vital role in maintaining a healthy immune system in horses. A well-balanced diet that includes the right combination of energy, protein, vitamins, minerals, omega-3 fatty acids, and hydration is essential for supporting immune function. Nutrients like vitamin C, vitamin E, selenium, and zinc are known to enhance the immune response. Additionally, a healthy gut microbiome achieved through a balanced diet can positively impact the immune system.
“A well-balanced diet that includes the right combination of energy, protein, vitamins, minerals, omega-3 fatty acids, and hydration is essential for supporting immune function.”
By addressing these factors, you can help maintain a robust immune system in your horse, allowing them to better fend off infections and diseases. Providing age-appropriate care, minimizing stressors, and ensuring a well-rounded, nutrient-rich diet are key steps in supporting your horse’s immune health.
Next, we’ll explore practical tips and strategies to support and maintain a healthy immune system in horses.
| Factor | Impact on the Immune System |
| --- | --- |
| Age | Developing immune system in foals; decline in immune function in older horses |
| Stress | Suppresses immune function |
| Nutrition | Proper nutrition supports immune function |
Tips to Support and Maintain a Healthy Immune System
To support and maintain a healthy immune system in horses, it is crucial to provide them with a well-balanced, high-quality diet tailored to their individual needs. Proper nutrition plays a significant role in boosting their immune system and overall health.
“An optimal diet is the foundation for a strong immune system in horses.”
In addition to a balanced diet, immune support supplements can be beneficial in maintaining your horse’s immune health. These supplements are specifically formulated to provide essential nutrients and support the immune system’s function. Some key ingredients to look for in horse health supplements include:
- Probiotics and prebiotics: These help maintain a healthy gut microbiome and promote a robust immune response.
- Diamond V yeast: This ingredient is known for its immune-boosting properties.
- CLOSTAT: It is known to support gastrointestinal health and boost the immune system.
- ButiPEARL Z EQ: This ingredient aids in promoting a healthy digestive system, which is closely linked to the immune system.
- Mushrooms and adaptogens: These natural substances have immune-modulating effects and can enhance immune function.
- Vitamin E, vitamin C, and selenium: These are powerful antioxidants that support immune health.
It is important to note that while supplements can be beneficial, they should not replace a balanced diet or veterinary care.
Regular veterinary care is essential for maintaining your horse’s immune system. This includes routine check-ups, dental care, and parasite control.
To protect your horse from various diseases, following a proper vaccination program is crucial. Vaccinations help stimulate the immune system to produce protective antibodies against specific diseases.
In addition, good hygiene practices play a key role in preventing the spread of diseases. Maintaining clean living conditions and regularly cleaning water buckets can help minimize the risk of infection and support your horse’s immune system.
Immune Support Ingredients and Benefits
| Ingredient | Benefit |
| --- | --- |
| Probiotics and prebiotics | Promote a healthy gut microbiome and enhance immune response |
| Diamond V yeast | Boosts immune health and function |
| CLOSTAT | Supports gastrointestinal health and immune system |
| ButiPEARL Z EQ | Aids in maintaining a healthy digestive system and supporting immune function |
| Mushrooms and adaptogens | Enhance immune function and have immune-modulating effects |
| Vitamin E, vitamin C, and selenium | Powerful antioxidants that support immune health |
By incorporating these tips into your horse’s care routine, you can provide optimal support and maintenance for their immune system, ensuring their overall health and well-being.
Immune System Support Supplements for Horses
When it comes to supporting your horse’s immune system, there are several horse immune system supplements available on the market. These supplements are specifically formulated to provide equine immune support and promote overall horse health. They often contain a combination of natural ingredients that are known for their immune-boosting, antioxidant, and anti-inflammatory properties.
Common ingredients found in these immune system support supplements include:
- Herbs such as garlic, rosemary, thyme, and echinacea: These herbs have been used for centuries to support immune function.
- Probiotics and prebiotics: These help maintain a healthy balance of gut bacteria, which is important for overall immune health.
- Garlic and rose hips: These ingredients are rich in antioxidants and can help strengthen the immune system.
- Astragalus and schizandra berries: These traditional Chinese herbs are believed to enhance immune function.
- Yucca and zeolite: These ingredients have anti-inflammatory properties and can help reduce immune system-related inflammation.
- Peppermint, myrrh, and juniper berries: These ingredients have antimicrobial properties and can help support a healthy immune system.
- Hops: This ingredient is known for its calming effects and can help reduce stress, which can have a positive impact on immune function.
- Milk thistle seed: This ingredient has antioxidant properties and can help support liver health, which is important for overall immune function.
By incorporating these horse health supplements into your horse’s daily routine, you can provide additional support to his immune system and help him stay healthy. However, it’s essential to consult with your veterinarian before introducing any new supplements to your horse’s diet. They can guide you in choosing the right horse immune system supplements and ensure that they are appropriate for your horse’s specific needs.
To give you a better idea of the horse immune system supplements available, here’s a table showcasing some popular options:
| Key Ingredients | Benefits |
| --- | --- |
| Garlic, rosemary, probiotics | Supports immune health, promotes gut health |
| Echinacea, yucca, zeolite | Boosts immune function, reduces inflammation |
| Milk thistle seed, astragalus | Provides antioxidant support, enhances immune function |
With the variety of immune system support supplements available, you have options to choose from for your horse’s specific needs. Remember to always consult with your veterinarian and consider factors such as your horse’s age, overall health, and any existing medical conditions before introducing any new supplements. By combining proper nutrition, veterinary care, and the right immune system support supplements, you can help your horse maintain a strong and healthy immune system.
Strengthening your horse’s immune system is essential for maintaining their overall health and preventing diseases. Factors such as age, stress, and nutrition can influence the effectiveness of the immune system in horses. By providing a balanced diet, proper veterinary care, and immune support supplements, you can help support and maintain your horse’s immune system.
A well-balanced diet tailored to your horse’s individual needs is crucial for their immune system’s health. Supplements designed to support immune health, such as those containing probiotics, prebiotics, and immune-boosting ingredients like Diamond V yeast and ButiPEARL Z EQ, can also be beneficial.
In addition, regular check-ups from your veterinarian, along with a proper vaccination program, play an important role in disease prevention. Good hygiene practices, such as maintaining clean living conditions and regularly cleaning water buckets, help reduce the risk of infections and keep your horse healthy. By taking these steps, you are actively supporting your horse’s health maintenance and disease prevention.
How does a horse’s immune system work?
A horse’s immune system is a complex and highly coordinated system of cells and molecules that work together to defend the body against disease-causing organisms. It consists of physical barriers, innate immunity, and adaptive immunity.
What factors can influence the strength and functioning of a horse’s immune system?
Factors such as age, stress, and nutrition can influence the strength and functioning of a horse’s immune system.
How can I support and maintain a healthy immune system in my horse?
To support and maintain a healthy immune system in your horse, it is recommended to provide a well-balanced, high-quality diet tailored to their individual needs. Supplements designed to support immune health can also be beneficial.
Are there immune-boosting supplements available for horses?
Yes, there are immune system support supplements available for horses that contain ingredients known for their immune-boosting, antioxidant, and anti-inflammatory properties.
What are some good hygiene practices to prevent diseases in horses?
Maintaining clean living conditions and regularly cleaning water buckets are important hygiene practices to help prevent diseases in horses.
March 15, 2018 - by Simone Ulmer
Ever since the successful production of graphene, two-dimensional materials have been intensively researched. Scientists Andre Geim and Konstantin Novoselov were honoured with the Nobel Prize in 2010 for using adhesive tape to extract an atom-thick layer of the new material, graphene, from graphite (the material in a pencil tip), and for their research on it. Two-dimensional materials have completely different physical properties from the three-dimensional compounds from which they derive. They are thus promising candidates for the next generation of electronic and optoelectronic applications, as Nicolas Mounet, Nicola Marzari and their team from the National Center of Competence in Research MARVEL at EPFL in Lausanne write in their article, recently published in the journal Nature Nanotechnology. Their latest research results even made the journal's front page: the team has developed a method that used the CSCS supercomputer "Piz Daint" to identify 258 promising candidates for two-dimensional compounds in one go.
Geometry and binding energy as search criteria
The researchers began their investigation with 108,423 materials known from other experiments. They first used their self-developed algorithm to filter out materials with suitable geometric properties: crystals with a layered structure. This helped them to narrow down the number to 5,619 compounds, which were then screened using high-throughput electronic structure calculations, thus fishing out materials whose layers only had weak bonding interaction between them. Using this step-by-step approach, the researchers succeeded in identifying 1825 crystal structures that may allow two-dimensional materials to be extracted. They also further tested in their simulations the crystals' mechanical stability, vibrational properties, electronic structure and potential magnetic strength for a subset of 258 promising candidates. The results showed that most of them are semiconductors.
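The funnel structure of this screening (cheap geometric tests first, expensive electronic-structure checks last) can be sketched as follows. This is a hypothetical illustration, not the authors' actual AiiDA workflow; the `Crystal` fields and the binding-energy threshold are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Crystal:
    name: str
    is_layered: bool        # passes the geometric (layered-structure) filter
    binding_mev_a2: float   # interlayer binding energy (illustrative meV/A^2)
    is_stable: bool         # passes mechanical/vibrational stability checks

# Illustrative cutoff: layers bound more weakly than this are treated
# as candidates for exfoliation. The real study used DFT-computed
# binding energies; this number is an assumption for the sketch.
WEAK_BINDING = 35.0

def screen(candidates):
    """Three-stage funnel: layered -> weakly bound -> stable."""
    layered = [c for c in candidates if c.is_layered]
    weakly_bound = [c for c in layered if c.binding_mev_a2 < WEAK_BINDING]
    promising = [c for c in weakly_bound if c.is_stable]
    return layered, weakly_bound, promising

if __name__ == "__main__":
    pool = [
        Crystal("A", True, 20.0, True),    # survives all stages
        Crystal("B", True, 90.0, True),    # layered but strongly bound
        Crystal("C", False, 10.0, True),   # not layered
        Crystal("D", True, 25.0, False),   # weakly bound but unstable
    ]
    layered, weak, promising = screen(pool)
    print(len(layered), len(weak), len(promising))  # prints: 3 2 1
```

Ordering the filters from cheapest to most expensive is what makes the approach tractable: the costly electronic-structure and stability calculations only run on the small pool that survives the earlier stages.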
Until now, two-dimensional materials have remained rare; only a few dozen could be produced or exfoliated from three-dimensional materials. For the team, the success of their new method in the search for two-dimensional materials is a perfect example of how computational methods can speed up the discovery of new materials. Others are interested in their approach too: Olle Eriksson from Örebro University in Sweden, who was not involved in the study and contributed an article for News & Views in Nature Nanotechnology, hopes that some of the materials will now be able to be experimentally produced. He is convinced that in future, this method will make it possible to find materials with particular desired properties using a suitable filter algorithm.
- The results are reproducible thanks to the deployment of the AiiDA materials informatics infrastructure, which keeps track of the full provenance of each calculation and result: http://www.aiida.net/
CHICAGO (CBS) -- You may not hear the term "phase transition" in our daily weather forecasts very often, but it's the driving process behind many meteorological concepts and phenomena – from dew points and relative humidity to something as simple as rain turning to snow as the temperature drops.
You will not, however, hear about quantum phase transitions in daily weather forecasts. But quantum phase transitions were in the news Wednesday, because in the labs at the University of Chicago, some valuable research on how to simulate and study those far more complex phenomena more easily has been under way.
Quantum phase transitions are not observable in the same way as everyday phase transitions such as ice to liquid water, or liquid water to vapor. As explained by the U of C, quantum phase transitions happen when some materials are cooled to a temperature near absolute zero – the theoretical point at which there is no heat energy and atoms are not moving, or about -459.67 degrees Fahrenheit.
The U of C says quantum phase transitions can "make a physicist's jaw drop." They can involve a material becoming magnetic when it wasn't before, or gaining a "superpower" in which it can conduct electricity without losing any energy as heat, the U of C said.
The math required to calculate quantum phase transitions is so complex that even supercomputers have a hard time with it, the U of C said. But a new study proposes a shortcut that pulls only the most important information into the mathematical equations and can create a map of all possible phase transitions in the system being studied, the U of C explained.
Researchers say the new study could go on to bring about technological breakthroughs, the U of C said.
As explained by the U of C, phase transitions such as evaporation and condensation are dependent on changes in the temperature. Quantum phase transitions, by contrast, are instead brought about by environmental interferences such as a magnetic field, the U of C said.
Quantum phase transitions occur as a consequence of many electrons acting in relationship with one another, the U of C said. To simulate such transitions, scientists must create models that acknowledge the possibilities of what could happen to each and every electron – and thus, running those simulations requires computing power that quickly runs out of control, the U of C said.
Complex and powerful quantum computers are believed to be better suited for such calculations, but still in that case, the result is reams of data that need to be translated back to the language of everyday computers to be understood by scientists, the U of C said.
U of C researchers sought out a way to simplify the calculations on quantum phase changes without any loss of accuracy. So instead of creating simulations that require calculating each and every variable for all the electrons in a quantum system, the researchers found a new approach in which they substituted a set of numbers describing the possible interactions between each pair of electrons, the U of C said.
This is known as a "two-electron reduced density matrix," and allows researchers to map all the phases a quantum system can experience, the U of C said. Researchers said it turned out to be as accurate as the data-intensive method that involves predicting all the possibilities for every electron.
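The idea behind a reduced density matrix can be shown with a toy calculation. The sketch below is purely illustrative Python on a three-qubit system, not the study's actual method or code: tracing out most of a quantum system leaves a small matrix that still captures the pairwise information scientists care about.

```python
import numpy as np

def two_site_rdm(psi, n_sites):
    """Reduced density matrix of the first two sites of an n-site
    qubit register, obtained by tracing out the remaining sites."""
    psi = psi.reshape(4, 2 ** (n_sites - 2))  # split: (sites 1-2) x (rest)
    return psi @ psi.conj().T                 # partial trace over the rest

# Toy system: a 3-qubit GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

rho = two_site_rdm(ghz, 3)
print(np.trace(rho).real)          # ≈ 1.0 (a valid density matrix)
print(np.linalg.matrix_rank(rho))  # 2: mixed, info lost to the traced-out site
```

The 4x4 matrix `rho` describes every possible pairwise measurement on the first two qubits, even though the full 8-component state was discarded; the same economy is what makes two-electron reduced density matrices attractive for large systems.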
David Mazziotti, a theoretical chemist with the Department of Chemistry and the James Franck Institute at the U of C and a senior author for the study, told the U of C News Office's Louise Lerner the new calculation method could be key for developing a greater understanding of quantum phase transitions.
"There are some areas that have been underexplored because they are so difficult to model," Mazziotti said. "I hope this approach can unlock some new doors."
Hidden underwater, these stony roads, known as kulgrindas, were built across swamps and wetlands for defense against foreign invaders. Known only to locals and undetectable from the surface, they allowed villagers to slip away from invaders (such as the Teutonic Knights in the 13th and 14th centuries), providing a safe shortcut between villages, hill forts, and other defensive structures.
The secret passages were built by carrying stones, wood, or gravel over the frozen swamps in winter and letting them sink once the ice melted. The procedure was repeated until the causeway became solid and permanent. Sometimes wooden posts were driven in to keep the raised path from washing away.
Although over 25 kulgrindas have been found in Lithuania (about half of them in Samogitia), remnants of passageways have also been found in Kaliningrad Oblast (former East Prussia), Belarus, and Latvia.
First investigated by Ludwik Krzywicki, the longest and most famous kulgrinda stretches across the Sietuva swamp in Samogitia. It was used up until the 19th century as a road between Kaltinėnai and Tverai.
"The important thing is to never stop questioning."
At Todwick, we recognise how science impacts every aspect of daily life, and without science humankind would not have made progress throughout history.
Learning science is concerned with increasing pupils’ knowledge of our world, and with developing skills associated with science as a process of enquiry. Our science curriculum develops the natural curiosity of each child regardless of background, encourages them to have respect for living organisms, and instils in pupils the importance of caring for the natural environment.
Using the requirements of the Science National Curriculum as our guide, our Science lessons offer opportunities for children to:
- Develop scientific knowledge and conceptual understanding of the disciplines of Physics, Chemistry and Biology.
- Formulate their own questions about the natural world.
- Foster the confidence to ‘be wrong’ when it comes to making predictions and postulating their own theories.
- Promote an awareness of the importance of teamwork in scientific experimentation.
- Practically investigate their questions using various methods of enquiry.
- Gain competence in the science skills of planning scientific investigations, gathering and analysing data and critical evaluation of investigations across the disciplines.
- Use a range of methods to gather data from investigations and secondary sources including I.C.T., drawings, diagrams, videos and photographs.
- Present data in a variety of methods including tables, bar charts, line graphs, pictograms and pie charts.
- Have care for the safety of all individuals in lessons by developing knowledge of the hazards of the materials and equipment they handle, along with mitigating these hazards.
- Develop an enthusiasm and enjoyment of scientific learning and discovery.
Children have weekly lessons in Science throughout the school. In the Early Years, science is taught through play-based learning. We endeavour to ensure that the Science curriculum we provide will give children the confidence and motivation to continue developing their skills into the next stage of their education and life experiences.
Teachers create a positive attitude to science learning within their classrooms and reinforce an expectation that all children can achieve high standards in science. Teaching is set out thus:
- Science will be taught as set out by the year group requirements of the National Curriculum. This is a strategy to enable the accumulation of knowledge and allows progress in repeated topics through the years.
- Pupils will concentrate on one science skill per term. Term 1 will be dedicated to planning investigations, Term 2 to results gathering and analysis, and Term 3 will be spent evaluating practical work.
- Through our planning, we involve problem-solving opportunities, allowing children to find out for themselves how to answer questions through a variety of practical means. Children are encouraged to ask their own questions and be given appropriate equipment to use their scientific skills to discover the answers.
- Engaging lessons are created with each lesson having both practical and knowledge elements.
- Working Scientifically skills are explicit in lessons to ensure these skills are being developed throughout the children’s school career and new vocabulary and challenging concepts are introduced through direct teaching. This is developed through the years, in keeping with the theme of the lesson.
- Teachers demonstrate how to use scientific equipment and the various Working Scientifically skills in order to embed scientific understanding.
- Children will develop enjoyment of and confidence in science, which they will then apply to other areas of the National Curriculum.
- Children will show greater confidence and be more consistent in their use of scientific vocabulary.
- The impact can also be measured through key questioning within lessons to determine children’s understanding.
- Children will ultimately know more, remember more and understand more about science, demonstrating this knowledge when using tools or skills in other areas of the curriculum and in opportunities out of school.
- The vast majority of children will achieve age-related expectations in science.
- Pupil voice will be carried out to assess children’s views on the subject.
NFPA 70E, the standard for electrical safety in the workplace, is there to save lives and property. No doubt about it: arc flash is dangerous and potentially deadly.
Every workday in the United States, one person dies from electrocution, shock, arc blast or arc flash. Some 8,000 employees every year find themselves being treated in emergency rooms for electrical contact injuries.
What is an arc flash?
Air usually acts as an insulator, albeit a relatively poor one, so electricity does not normally travel through the air in a business or industrial setting. Arc flash happens when the resistance of air to electrical current breaks down and electricity jumps the gap between conductors. This can only happen when there is sufficient voltage and a path to lower voltage or ground.
For example, an arc flash of roughly 1000 amperes can result in immense damage and potential fire and injury. At the arc terminals, temperatures can exceed 35,000 degrees Fahrenheit—about 25,000 degrees hotter than the surface (photosphere) of the sun. Not only does this melt adjacent metal, but right at the arc itself, metal vaporizes and becomes a superheated plasma, rushing outward with explosive force. Molten metal is blasted in every direction from the arc like shrapnel or liquid metal bullets.
Beyond the explosive forces involved, the radiation created by an arc flash can permanently blind those who look directly at it. The intense ultraviolet radiation can leave permanent shadows on nearby walls and equipment, much like a nuclear blast.
The operator closest to the equipment will likely be killed without adequate safety measures, but nearby people can be injured or killed also.
New work practices in the updated standard help to reduce risks, including restrictions on cotton outerwear, which can ignite under arc flash conditions, and recommendations for arc-resistant switchgear.
NFPA 70E training (arc flash training) is essential to understanding where such hazards may occur and how to understand and navigate the new electrical safety standards. Such corporate training could save lives and equipment, not to mention prevent the devastating effects on employee morale and public relations.
How can you acquire such training? Online training courses are available to bring all of the appropriate personnel up to speed on this critical safety issue. E-learning will allow them to take the training at their own pace and schedule so that disruption is minimized and safety goals are achieved more easily.
Image credit: Bram & Vera on Flickr
The limit value is determined by a tensile test, in which the material is stretched until it breaks. The yield point is the stress at which a material begins to deform plastically and no longer returns to its original shape after the load is removed.
Two yield strengths can often be determined for materials:
- Upper yield strength: the maximum stress at which no permanent plastic deformation yet occurs.
- Lower yield strength: the lowest stress at which the material continues to deform plastically and no longer returns to its original shape.
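When a stress-strain curve shows no pronounced yield point, a 0.2% offset proof stress is commonly reported instead: the curve's intersection with an elastic line shifted by 0.002 strain. The sketch below is illustrative Python with synthetic data; the modulus and stress values are assumptions, not tied to any particular material or standard.

```python
import numpy as np

def offset_yield(strain, stress, E, offset=0.002):
    """0.2%-offset proof stress: intersection of the stress-strain
    curve with a line of slope E shifted by `offset` strain."""
    line = E * (strain - offset)         # shifted elastic line
    idx = np.argmax(stress <= line)      # first point at/below that line
    return stress[idx]

# Synthetic curve: elastic up to 250 MPa (E = 200 GPa), then perfectly plastic
E = 200_000.0                            # modulus in MPa
strain = np.linspace(0, 0.01, 1001)
stress = np.minimum(E * strain, 250.0)
print(offset_yield(strain, stress, E))   # → 250.0 (MPa)
```

On real test data the same construction applies; only the measured arrays and the fitted modulus change.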
Elasticity is a material property that describes the ability of a material to withstand stress.
The forgeability of steel depends on its carbon content, but also on various alloying elements.
Hardness defines the mechanical resistance that a material offers to the mechanical impact of another material.
October 1, 2020 – What’s Up for October? A harvest moon and a blue moon, Mars is up all night, and a journey beyond the galaxy…
This month brings not just one, but two full moons, at the beginning and end of the month. The full moon on October 1st is called the Harvest Moon. The Harvest Moon is the name for the full moon that occurs closest to the September equinox. (One of two days per year when day and night are of equal length.) Most years the Harvest Moon falls in September, but every few years it shifts over to October. The name traces back to both Native American and European traditions related, not surprisingly, to harvest time.
At the end of October, on the 31st, we’ll enjoy a second full moon. When there are two full moons in a month, the second is often called a blue moon. (There’s another, more traditional definition of a blue moon, but this is the most well known.) Note that this is the only two-full-moon month in 2020!
October is a great time for viewing Mars, as the planet is visible all night right now, and reaches its highest point in the sky around midnight. This period of excellent visibility coincides with the event known as opposition, which occurs about every two years, when Mars is directly on the opposite side of Earth from the Sun. This is also around the time when Mars and Earth come closest together in their orbits, meaning the Red Planet is at its brightest in the sky, so don’t miss it.
Spacecraft from several nations are currently on the way to Mars, including NASA’s Mars 2020 mission, which is scheduled to land there in February.
Finally this month, it’s a great time to try to spot the Andromeda galaxy, also known as M31. It’s a spiral galaxy similar in appearance to our own Milky Way, but slightly larger. Both contain hundreds of billions of stars and, we think, trillions of planets. Now, we can’t see the overall shape of the Milky Way because we’re inside it, so Andromeda gives us a sense of what our galaxy would look like if you could see it from afar.
Andromeda is faint, and best viewed with a telescope, but you can observe it with binoculars or even a cell phone with a good camera on it, even from light-polluted areas. And under very dark skies, it’s just barely a naked-eye object. So although it might be a little challenging, it’s worth it to see an entire galaxy with your own eyes!
To find the Andromeda galaxy, look to the northeast in the evening sky once it’s truly dark. Find the sideways “W” that represents the throne of queen Cassiopeia. To the right of Cassiopeia lies the constellation Andromeda, which includes this string of bright stars. Moving upward, hang a left at the second of these bright stars, and as you scan back over toward Cassiopeia, you’ll notice a faint, fuzzy patch of light. That fuzzy patch is the Andromeda galaxy, located 2 million light years away. If you manage it, congratulations! You’ve just gone intergalactic.
Here are the phases of the Moon for October. You can catch up on all of NASA’s missions to explore the solar system and beyond at nasa.gov. I’m Preston Dyches from NASA’s Jet Propulsion Laboratory, and that’s What’s Up for this month.
Is there a planet or brown dwarf star called Nibiru or Planet X or Eris that is approaching the Earth and threatening our planet with widespread destruction?
There is no factual basis for these claims, says the NASA website: “Nibiru and other stories about wayward planets are an Internet hoax.” The story of the stray planets may have started with claims that a wandering planet discovered by the ancient Sumerians is headed toward Earth.
According to NASA: “This catastrophe was initially predicted for May 2003, but when nothing happened, the doomsday date was moved forward to December 2012. Then these two fables were linked to the end of one of the cycles in the ancient Mayan calendar at the winter solstice in 2012 -- hence the predicted doomsday date of December 21, 2012.
“If Nibiru or Planet X were real and headed for an encounter with the Earth in 2012, astronomers would have been tracking it for at least the past decade, and it would be visible by now to the naked eye. Obviously, it does not exist.”
On the other hand, Eris is real, but it is a dwarf planet similar to Pluto that will remain in the outer solar system; the closest it can come to Earth is about 4 billion miles.
The 'open' function opens the 'file' for input or output. The 'file' may be a string expression or a symbol. Following the 'file' there is an optional keyword, ':direction'. The argument following this is either ':input' or ':output', which specifies the direction of the file. If no ':direction' is specified, the default is ':input'. When 'file' is a string, you may specify a complete file location like "/usr/local/bin/myfile.lsp" or "A:\LISP\TIM.BAT". If the file open was successful, a file pointer is returned as the result:
If the file open was not successful, a NIL is returned. For an input file, the file has to exist, or an error will be signaled.
(setq f (open 'mine :direction :output)) ; create file named MINE
(print "hi" f)                           ; returns "hi"
(close f)                                ; file contains "hi" <newline>
(setq f (open 'mine :direction :input))  ; open MINE for input
(read f)                                 ; returns "hi"
(close f)                                ; close it
File names: In the PC and DOS world, all file names and extensions ["foo.bat"] are automatically made uppercase. In using XLISP, this means you don't have to worry about whether the name is "foo.bat", "FOO.BAT" or even "FoO.bAt", they will all work. However, in other file systems [UNIX in particular], uppercase and lowercase do make a difference:
This will create a file named FOO-FILE in UNIX, because XLISP uppercases its symbols:
(open 'foo-file :direction :output)
This will create a file named 'foo-file' because UNIX doesn't uppercase its file names:
(open "foo-file" :direction :output)
So, if you are having trouble with opening and accessing files, check to make sure the file name is in the proper case.
Common Lisp: Common Lisp supports bidirectional files, so Common Lisp code that uses these other file types may be difficult to port.
Blockchain is a distributed peer-to-peer technology. All nodes in the network have to agree on the state of the chain and which of its blocks are valid. Since there is no centralized control and nodes cannot be trusted, reaching this agreement is not trivial. Every blockchain implementation must therefore define what is called a consensus algorithm (also called a consensus protocol) to arrive at an agreement.
What are consensus mechanisms?
This is how Wikipedia defines consensus decision-making:
“Consensus decision-making is a group decision-making process in which group members develop, and agree to support a decision in the best interest of the whole. Consensus may be defined professionally as an acceptable resolution, one that can be supported, even if not the “favourite” of each individual. Consensus is defined by Merriam-Webster as, first, general agreement, and second, group solidarity of belief or sentiment.”
In simpler terms, consensus is a dynamic way of reaching agreement in a group. While voting just settles for a majority rule without any thought for the feelings and well-being of the minority, a consensus on the other hand makes sure that an agreement is reached which could benefit the entire group as a whole.
From a more idealistic point of view, consensus can be used by a group of people scattered around the world to create a more equal and fair society.
A method by which consensus decision-making is achieved is called “consensus mechanism”.
Now that we have defined what consensus is, let’s look at the objectives of a consensus mechanism (data taken from Wikipedia).
- Agreement Seeking: A consensus mechanism should bring about as much agreement from the group as possible.
- Collaborative: All the participants should aim to work together to achieve a result that puts the best interest of the group first.
- Cooperative: Participants shouldn’t put their own interests first, and should work as a team rather than as individuals.
- Egalitarian: A group trying to achieve consensus should be as egalitarian as possible. What this basically means is that each and every vote has equal weight. One person’s vote can’t be more important than another’s.
- Inclusive: As many people as possible should be involved in the consensus process. It shouldn’t be like normal voting where people don’t really feel like voting because they believe that their vote won’t have any weightage in the long run.
- Participatory: The consensus mechanism should be such that everyone actively participates in the overall process.
From the general viewpoint of distributed systems, consensus is a challenge when nodes are either faulty (gone rogue) or unable to communicate reliably. The former is called the Byzantine Generals Problem and the latter the Two Army Problem. A consensus algorithm must therefore be fault tolerant.
How does Bitcoin achieve consensus?
Bitcoin achieves consensus using Proof-of-Work (PoW):
- New transactions are broadcast to all nodes.
- Each node collects new transactions into a block.
- Each node works on finding a difficult proof-of-work for its block.
- When a node finds a proof-of-work, it broadcasts the block to all nodes.
- Nodes accept the block only if all transactions in it are valid and not already spent.
- Nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash.
- Nodes always consider the longest chain to be the correct one and will keep working on extending it.
Consensus is achieved by a simple rule: only the longest fork survives. In other words, the fork on which the most compute power has been expended (PoW) survives. If two blocks are mined at the same time, there will be a fork. PoW therefore intentionally slows the mining process so that forks don’t happen faster than they are discarded by the network.
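The mining loop in the steps above can be sketched in a few lines of Python. This is a toy illustration only: real Bitcoin hashes an 80-byte block header twice with SHA-256 against a numeric target, whereas here "difficulty" is simply a count of leading zero hex digits.

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce so that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits -- the expensive step."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash -- cheap, unlike mining."""
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine(b"prev_hash|transactions", 4)
print(nonce, verify(b"prev_hash|transactions", nonce, 4))
```

The asymmetry is the whole point: finding the nonce takes many thousands of hashes on average, while any node can check it with one, which is why expending work on the longest chain doubles as a vote for it.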
Possible attacks on blockchain
The idea of any attack is to prevent nodes from reaching consensus or mislead them to a wrong consensus. Here are a few common attacks:
- 51% Attack – 51% attack refers to an attack on a blockchain – usually bitcoin’s, for which such an attack is still hypothetical – by a group of miners controlling more than 50% of the network’s mining hashrate, or computing power. The attackers would be able to prevent new transactions from gaining confirmations, allowing them to halt payments between some or all users. They would also be able to reverse transactions that were completed while they were in control of the network, meaning they could double-spend coins.
- Double-Spend – Applicable to cryptocurrencies, this is a case when the same coin is used for multiple transactions.
- DDoS Attack – Distributed Denial of Service (DDoS) attacks are nothing new, but recent attacks have increased in severity, complexity, and frequency, and have therefore become a mainstream concern for businesses and private customers alike. In a blockchain, flooding nodes with bogus transactions can prevent them from working on legitimate ones; a DDoS simply mounts this from many sources at once.
- Sybil attack – creating fake identities to take over network consensus (mitigated by mechanisms like Proof-of-Work, Proof-of-Stake, Proof-of-Elapsed-Time, etc.).
- Eclipse attack – trying to isolate some node by controlling all peers that it connects to, so that e.g. you can lie to it about the best chain.
- Finney attack – abusing merchants that accept zero-confirmation transactions by mining a block that refunds coins to you, sending the coins, then after the merchant accepts, broadcasting the block (making the unconfirmed transaction invalid).
- Cryptographic Attack – Quantum computers are expected to far outstrip conventional computers on certain problems, including some of the cryptography blockchains rely on. This would shift the balance in favour of nodes with such power.
- Byzantine Attack – A single or few nodes prevent consensus.
- Time warp attack – messing with the block timestamps to cause the network difficulty to be reduced (recently used against the Verge cryptocurrency).
- Malleability hacks – changing transactions in a way that changes their hash, but doesn’t make them invalid (e.g. because the signature doesn’t cover the same elements that the transaction hash does).
What are the different types of blockchain consensus algorithms out there?
There are plenty of them and the following are some well-known ones:
Proof of Work (PoW): An expensive computation is required and this can be verified by other nodes. Nodes can remain anonymous and anyone can join. PoW is synonymous with mining. Systems that don’t use PoW can be said to be doing virtual mining.
Proof of Stake (PoS): Stakeholders are those having coins or smart contracts on the blockchain. Only they can participate. Those with high stakes are chosen to validate new blocks. They are rewarded with coins. While coins are “mined” in PoW, they are “minted” in PoS. Blocks may still need to be signed off by other nodes before being added to the chain.
Delegated Proof of Stake (DPoS): In PoS, those with large stakes can take control. In DPoS, delegated nodes represent the interests of smaller nodes.
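The stake-weighted selection at the heart of PoS can be sketched as follows. This is a deliberately simplified model: real PoS chains derive their randomness from verifiable random functions or committee schemes, not a seeded PRNG, and the validator names and seed format here are invented for illustration.

```python
import hashlib
import random

def pick_validator(stakes: dict, seed: str) -> str:
    """Choose a block proposer with probability proportional to stake.
    The seed makes the choice deterministic and reproducible by all nodes."""
    rng = random.Random(hashlib.sha256(seed.encode()).digest())
    names = sorted(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

# Hypothetical validators: alice holds 60% of the stake, bob 30%, carol 10%
stakes = {"alice": 60, "bob": 30, "carol": 10}
picks = [pick_validator(stakes, f"block-{h}") for h in range(1000)]
print({n: picks.count(n) for n in sorted(stakes)})  # roughly 600/300/100
```

Over many blocks the proposer frequencies track the stake distribution, which is exactly the concern the DPoS variant addresses: a dominant stakeholder proposes most blocks.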
TAPOS: Transaction as Proof of Stake (TAPOS) is a feature of the EOS software. Every transaction in the system is required to include the hash of a recent block header. This does the following:
- Prevents transaction replay on different chains.
- Signals to the network that a user and their stake are on a particular fork.
Some other examples of consensus algorithms are Delegated Byzantine Fault Tolerance (dBFT), Practical Byzantine Fault Tolerance (PBFT), Federated Byzantine Agreement (FBA), and Proof-of-Weight.
Conclusion – Without consensus mechanisms we wouldn’t have a Byzantine-fault-tolerant decentralized peer-to-peer system. It is as simple as that. While proof of work and proof of stake are definitely the more popular choices, newer mechanisms come up every now and then. There is no “perfect” consensus mechanism, and chances are there never will be, but it is interesting to see these newer cryptocurrencies coming out with their own protocols.
Two of blockchain’s most pressing challenges are:
- Scaling – many protocols are incapable of handling a large volume of transactions. And when considering transaction speed, they often pale in comparison to their centralized peers such as Visa and PayPal.
- Transaction fees can be costly, especially if there’s a huge backlog of unverified transactions on the network.
A growing number of cryptocurrencies are using DAG instead of blockchain, including IOTA, which refers to its DAG as a ‘tangle’; and Byteball which uses DAG to offer a digital currency, a privacy currency and several more use cases.
Directed Acyclic Graph (DAG) is a graph of nodes with topological ordering and no loops.
Consensus algorithms used by some well-known blockchains
- Proof of Work used by Bitcoin, Ethereum, Litecoin, Dogecoin etc.
- Proof of Stake used by Ethereum (soon), Peercoin, Nxt.
- Delayed Proof-of-Work used by Komodo
- Delegated Proof-of-Stake used by BitShares, Steemit, EOS, Lisk, Ark
- Proof-of-Authority used by POA.Network, Ethereum Kovan testnet, VeChain
- Proof-of-Weight used by Algorand
- Proof of Elapsed Time used by HyperLedger Sawtooth
- Chinese platform NEO uses Delegated BFT
How Blockchain technology could promote a secure IoT
The most recent DDoS attacks have been observed to hijack connected devices such as webcams, baby phones, routers, vacuum robots, etc. to launch their attacks.
The number of devices remotely controllable via apps is growing exponentially and the Internet of Things (IoT) is expected to easily surpass 20 billion connected devices by the end of 2020.
Today’s IoT ecosystem follows a centralized paradigm, which relies on a central server to identify and authenticate individual devices. This allows malicious devices to launch attacks against other equipment by means of a brute force Telnet attack or other attack vectors.
Blockchain technology could enable the creation of IoT networks that are peer-to-peer (P2P) and trust-less; a setting which removes the need for devices to trust each other and with no centralized, single point of failure.
A Blockchain, being a universally distributed ledger, ensures the security of all transactions through the cryptographic work of certain participants called nodes which validate those transactions, in return for rewards in the form of cryptocurrencies such as Bitcoin. This removes the need for a central authority to authenticate a device to interact with another device and also authenticate a user to login to a device.
MCL: 10 ppm
Source: Occurs naturally in minerals and is found in fertilizers. Runoff of these fertilizers can contaminate natural water sources. Sewage waste may also produce nitrate.
Summary: Nitrate is the conjugate base of nitric acid and forms when nitric acid loses a proton. It may occur as an environmental contaminant through its use in fertilizers. Leaks in septic tanks and discharge from sewage may also cause nitrate to occur as an environmental contaminant. Nitrate also occurs naturally, and erosion may cause discharge into water sources. Nitrate is measured in water by the total concentration of nitrogen present. Excess nitrogen in water is taken up by algae and plants, and algae blooms are a common result. This process also depletes the dissolved oxygen levels in the water and may lead to fish kills. Nitrate may be used as a surfactant in industrial applications. When reacted with organic materials it may oxidize rapidly and explosively.
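Because nitrate levels can be reported either "as nitrate" (NO3-) or "as nitrate-nitrogen" (NO3-N, the convention behind the 10 ppm MCL quoted above), converting between the two is a common task. The conversion follows directly from molar masses; the short script below is an illustrative sketch.

```python
# Molar masses in g/mol: nitrogen and the nitrate ion (N + 3 O)
M_N = 14.007
M_NO3 = M_N + 3 * 15.999   # ≈ 62.0 g/mol

def nitrate_to_nitrate_n(ppm_no3: float) -> float:
    """Convert a concentration reported as nitrate to nitrate-nitrogen."""
    return ppm_no3 * M_N / M_NO3

def nitrate_n_to_nitrate(ppm_n: float) -> float:
    """Convert a concentration reported as nitrate-nitrogen to nitrate."""
    return ppm_n * M_NO3 / M_N

# The 10 ppm MCL as nitrate-nitrogen, expressed as nitrate:
print(round(nitrate_n_to_nitrate(10.0), 1))  # → 44.3 ppm
```

The factor of roughly 4.43 between the two conventions is why the same water sample can look either safe or alarming depending on which reporting basis a lab used.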
Term 5 Week 2
Week 2 Activities
Warm up Activities:
Times Table Rock Stars: https://ttrockstars.com/
Maths Attack: see school website for additional tests (our curriculum, maths)
White Rose Lessons 1 to 4, watch the video clips (on the link below) and then answer the questions on the sheets below.
L1. Find a half
L2 Find a quarter
L3 Find a quarter 2
L4 Problem Solving
L5 Number of the Week – Represent the number of the week in different ways and see how many calculations you can write that give that number as the answer. See if you can do this systematically. This week’s number is 28.
Year 1 English Unit, Sidney Spider - A Tale of Friendship (continued from last week)
Please have a go at the following activities:
Put in the missing capital letters and full stops in Sidney's letter (pg 11)
Copy out the letter in cursive handwriting (pg 11)
Think of where Sidney could hide (pg 12)
Spider fact file (pg 13)
Go on a mini-beast hunt (pg 14)
Write sentences about your hunt using the conjunction 'and' (pg 14)
Write a fact file about a mini-beast you found (pg 15)
Bake a spider biscuit (pg 16)
Phonics play https://new.phonicsplay.co.uk/
Phonic activity sheet
Science: Please continue to complete your plant diary. Also, please can you label the parts of a tree and plant, see the activity sheet and watch this video to find out about different parts of a plant:
Computing: using the link below, complete a pictogram. Ask your family members what their favourite pet is and record it onto a pictogram. Also, use your findings from the mini-beast hunt in English to make a mini-beast pictogram too.
Topic: Continuing our topic of 'A Wonderful World' this week we are going to learn about the continent of Europe and the country of France, please have a look at the powerpoint and then complete the activity sheets.
Music:- 'I can sing a rainbow' song in French, revise the colour words in French and sing along.
Art :- Landmarks
Can you make a drawing, collage, painting or sketch of a famous landmark from France?
Ideas of famous French landmarks are on the attachment below.
Groundwater is one of the most important water sources, but it is often not far from sources of pollution. One such source is leakage from polluted streams such as open drains. The effect of open drains on groundwater quality has become an essential issue. This study aims to use different lining materials to minimize the seepage from open drains and thereby protect groundwater. MODFLOW is used to investigate flow and contaminant transport and to evaluate the efficiency of different lining materials. A hypothetical case study is used to assess lining materials such as clay, bentonite, geomembranes and concrete. The results showed that decreasing the conductivities of the lining materials reduced the extension of the contaminant. The extension of contaminants was reduced by 43, 89.6, 91.4 and 93% compared with the base case when drains were lined with clay, bentonite, geomembranes and concrete, respectively. A cost analysis of the lining materials was also carried out to determine the best one. Lining with geomembranes reduced contaminant extension at low cost, whereas concrete achieved a similar reduction at double the cost. This reveals that geomembranes are the preferred material for protecting groundwater from drain seepage, owing to their high durability and low cost compared with concrete.
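The abstract's central point, that lower lining conductivity means less seepage, can be illustrated with a back-of-envelope Darcy's law estimate. The study itself used MODFLOW; the conductivity values and gradient below are typical textbook assumptions, not values from the paper.

```python
# Darcy's law: seepage flux q = K * i (hydraulic conductivity x head gradient)
# Conductivities are typical textbook orders of magnitude, NOT from the study.
SECONDS_PER_DAY = 86_400
linings_m_per_s = {
    "unlined sandy bed": 1e-5,
    "clay": 1e-8,
    "bentonite": 1e-10,
    "geomembrane": 1e-12,
}

gradient = 0.5  # assumed head gradient across the drain bed
for name, K in linings_m_per_s.items():
    q = K * gradient * SECONDS_PER_DAY  # seepage in m/day per unit area
    print(f"{name:20s} {q:.2e} m/day")
```

Even this crude estimate shows seepage dropping by orders of magnitude as lining conductivity falls, which is consistent with the direction (though not the exact percentages) of the MODFLOW results reported above.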
Drainage systems consist of two main types: subsurface drainage and surface open drains. Surface open drains are open channels that may be lined by grass, concrete, earth or stone. They are usually used to transport agricultural drainage water and residential and industrial wastes. Groundwater is an important natural source that may be directly affected by seepage from open drains. Groundwater is very sensitive to anthropogenic activities such as industrial activities, wastewater drains and leakage from sewage systems, although agricultural activities have shown a low contribution (Arnous & El-Rayes 2013). Deep confined groundwater has a low content of heavy metal contaminants from local sources of pollution, while shallow groundwater has the highest content of nitrate and salinity (Ahmed et al. 2011).
Groundwater contamination has been studied by a number of researchers due to its importance. Ghasemlounia & Herfeh (2017) investigated water quality using a geographic information system (GIS) in Ardabil, Iran for 76 sample wells. The results showed that in the wet season only 6.58% were good quality, 22.37% were considered of unsuitable quality and 71.05% were of medium quality, while in the dry season these values reached 4, 32 and 64% for the three cases, respectively, using 50 wells. Kyere et al. (2018) investigated the level of heavy metals in soils from Agbogbloshie Recycling Site in Ghana using 132 soil samples. The results showed that the concentrations of cadmium (Cd), chromium (Cr) and nickel (Ni) were lower than the permissible levels of the Dutch and Canadian soil standards, while the concentrations of copper (Cu), lead (Pb) and zinc (Zn) were between 100% and 500% higher than the standard permissible levels.
A number of earlier studies have started to consider the impact of open drains on groundwater quality and quantity. Ali et al. (2004) studied the impact of deep open drains on groundwater levels at farm and sub-catchment level. The results showed that water levels changed between 1.5 and 2.5 m below the ground surface for drained areas, while in undrained areas levels changed between 0 and 1 m. The impact of irrigation loss and canal seepage on groundwater interactions in the lower valley of the Cachapoal River, Chile was assessed by Arumí et al. (2009). The results revealed that groundwater was recharged by 22% from irrigation loss and 52% from canal seepage; the study also noted that changing the canal lining and irrigation system would affect the hydrological system and agricultural production. Gomaah et al. (2016) used stable isotopes and hydrogeochemistry to evaluate groundwater in a Quaternary aquifer and its sources and geochemical evolution in the zone between Ismailia and Elkassara canal, Egypt. The results showed three sources of recharge: current Nile water including heavy isotopes, old Nile water which is more developed and depleted in heavy isotopes, and recharge from the underlying Miocene aquifer due to excessive pumping.
Tahershamsi et al. (2018) simulated groundwater flow using MODFLOW and the geostatistical method in the Ardebil plain in Iran. The result showed a small normal root-mean-square (NRMS) error of 2% using the mathematical model to confirm the accuracy of the calibrated model. Also, the statistical results indicated that the mathematical model accuracy is higher than the geostatistical method. Hosseini & Saremi (2018) tested 17 samples from the Malayer plain aquifer area of southern Hamedan Province, Iran using the modified DRASTIC and GODS models. In this aquifer 30 physicochemical parameters were studied to assess the groundwater vulnerability to different pollution sources. The results showed that the DRASTIC model is better than GODS in determining the groundwater vulnerability to pollution.
Seepage from canals and reservoirs is considered an important source of groundwater pollution. A number of approaches can be used to reduce that seepage and reduce the transmission of pollutants to groundwater by lining the canal with different materials. Some studies investigated natural materials with low permeability such as clay and bentonite and industrial materials such as geomembranes, geosynthetic clay liners and concrete mixtures to reduce the leakage from drains to groundwater. Figure 1 shows different types of lining materials used for canal lining. Lambert & Touze-Foltz (2000) presented a study to determine the geomembrane permeability. The study results indicated that geomembrane permeability is less than 10−6 m/d. Li et al. (2017) investigated factors affecting the control of groundwater contamination in the vadose zone. The results proved that contamination can easily extend into groundwater and the vadose zone that becomes more vulnerable under high values of hydraulic conductivity.
Stark & Hynes (2009) studied the effect of using geomembranes for canal lining on seepage. The results showed that seepage from the canal was reduced by 90% using the geomembrane lining. Also, covering the geomembranes with concrete increased durability but at high cost. Blanco et al. (2012) used different geomembrane types (plasticized polyvinyl chloride (PVC-P), high-density polyethylene (HDPE) and ethylene-propylene-diene monomer (EPDM)) for waterproofing of reservoirs. The study indicated that these materials are suitable for waterproofing of hydraulic works and that their selection depends on the function of the reservoir itself and on economic factors. The EPDM geomembrane was chosen as it showed the best resistance to static impact. Ojoawo & Adegbola (2012) indicated that the effectiveness of geomembrane material liners is ordered as smooth high-density polyethylene (HDPE), smooth low-density polyethylene (LDPE), textured HDPE and textured LDPE.
Meer & Benson (2007) presented a study to determine the hydraulic conductivity of geosynthetic clay liners (GCLs) exhumed from landfill. The study showed that the calculated hydraulic conductivities of the geosynthetic clay liners ranged from 5.2 × 10−9 to 1.6 × 10−4 cm/s. The results indicated that the gravimetric water content is the main factor affecting the hydraulic conductivity of the exhumed GCLs. Khair et al. (1991) studied the impact of lining irrigation canals with soil-cement tiles. The study concluded that soil-cement tiles are very effective in reducing seepage losses through 2 mm or smaller soil aggregates and anticipated that lining irrigation canals with this material would be very important, especially in zones where irrigation water is very costly. Schneider et al. (2012) presented a study for determining the hydraulic conductivity of concrete and mortar and found values of 5.67 × 10−15 cm/s for concrete and 5.87 × 10−16 cm/s for mortar.
Based on the literature, a number of studies have been conducted to reduce the seepage from canals and reservoirs using different materials. Natural materials with low permeability such as clay and bentonite, and industrial materials such as geomembranes, geosynthetic clay liners and concrete mixtures have been used to reduce the leakage to groundwater. Seepage from contaminated open drains has become one of the most important sources of groundwater pollution. A limited number of studies have been carried out to reduce the seepage from open drains to protect groundwater. This study aims to assess factors affecting the extension of contaminants from polluted open drains into groundwater and how to protect groundwater from contamination using different lining materials. The numerical model MODFLOW is developed in this study to assess the effect of using different lining materials on extension of contaminants into groundwater aquifers. Cost analysis of these materials is presented to help in the selection of the best material.
MATERIALS AND METHODS
A flow chart of the methodology used in this study is presented in Figure 2. The methodology includes a number of steps: reviewing previous studies, identifying the problem, collecting the required data, developing and calibrating the numerical model, and studying different scenarios of pumping rates and different lining materials for groundwater protection.
A hypothetical case study is used to evaluate the impact of lining open drains with different materials on groundwater contamination by investigating contaminant extension (XT) from the pollution source. A study area of approximately 4 km2 is used, with length (Lx) 2,020 m divided into 101 columns and width (Ly) 2,000 m divided into 100 rows; the depth of the domain was selected to be large enough to avoid the effect of the bottom and side boundaries (d = 100 m) and is divided into 10 layers, as presented in Figure 3. Figure 4 shows the 3-D domain for the current case study and Figure 5 shows the vertical cross-section of the case study.
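As a quick sanity check, the grid spacing implied by these dimensions can be computed directly. This is only an illustrative sketch: the cell sizes are not stated in the text and are simply the quotients of the figures above.

```python
# Illustrative check of the model discretization described above
# (domain dimensions and cell counts are taken from the text).

LX, N_COLS = 2020, 101     # domain length (m) and number of columns
LY, N_ROWS = 2000, 100     # domain width (m) and number of rows
DEPTH, N_LAYERS = 100, 10  # domain depth (m) and number of layers

dx = LX / N_COLS        # cell length per column
dy = LY / N_ROWS        # cell width per row
dz = DEPTH / N_LAYERS   # layer thickness

print(dx, dy, dz)        # 20.0 20.0 10.0 — a uniform 20 m x 20 m plan grid
print(LX * LY / 1e6)     # plan area in km²: 4.04
```

The plan area works out to about 4 km², which is why the study area is best read as roughly 4 km² rather than 4,000 m².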
The polluted drain was installed in the center of the area as a major source of pollution with a constant concentration of 2,000 mg/l. Two rivers were assigned on both sides of the domain and parallel to the drain. The two rivers represent the source of recharge with a distance of 1,000 m from the drain. The abstraction was assigned using six wells with a discharge of 90 m3/h for each well. The wells are installed in two rows at the middle between the rivers and drain as shown in Figures 4 and 5.
A 3-D VISUAL MODFLOW model is used to investigate the effect of polluted open drains on groundwater quality and to assess the effect of lining on contaminant extension. This version of the numerical model is used to simulate groundwater flow and solute transport for steady-state and transient flow. The groundwater flow equation used in MODFLOW (McDonald & Harbaugh 1988) and the contaminant transport equation in groundwater (Javandel et al. 1984) are presented in detail in Abd-Elhamid et al. (2018).
Model boundary conditions and hydraulic parameters
The boundary conditions for the case study are described in Figure 5. The river package is used to describe the two rivers on the left and right sides, with a gradually varying constant head stage ranging from −0.50 to −0.8 m and a river bed bottom ranging from −3 to −3.3 m. The drain package is used to describe the open drain with an elevation stage ranging from −2.5 to −2.8 m. A constant concentration of 2,000 mg/l has been assigned in the drain. Also, the annual recharge is 365 mm/year and the abstraction is 60 m3/h from each well, with a total abstraction of 360 m3/h. The aquifer is assumed to be homogeneous with horizontal and vertical hydraulic conductivities of 50 and 5 m/day, respectively; the specific storage (Ss) is 27 × 10−7 m−1, while the specific yield (Sy) and total porosity (ntotal) are 0.20 and 20%, respectively, to represent the graded sand and gravel soil type (El-Arabi 2007).
Figure 6 shows different parameters of the numerical equation. The head between the drain and the river is calculated using Equation (1) at different points. The calculated heads are assigned to the model at nine observation wells. Figure 7 shows the calibration results as a comparison between the head calculated by the model versus the head calculated by Equation (1). Good agreement is observed between numerical model results and calculated head.
The numerical model MODFLOW is applied to the current case study with the above boundary conditions to assess the extension of contaminants in the aquifer. Figure 8 shows the extension of contaminant (XT) into the aquifer, which is considered the base case. For the current abstraction rate (60 m3/h), the equi-concentration line 100 mg/l extended 100 m into the aquifer, measured from the drain center. Abd-Elhamid et al. (2018) studied the effect of well depth, location and abstraction rate on extension of contaminant (XT); their study found that increasing the abstraction rate caused the greatest extension of contaminants from the drain into the aquifer. In this study MODFLOW is used to assess the impact of increasing the abstraction rate Q on the extension of contaminant (XT) using abstraction rates of 60, 90, 120 and 150 m3/h. The results are shown in Figure 9. The contamination extension (XT) for the equi-concentration line 100 mg/l reached 100, 290, 408 and 510 m, respectively, measured from the drain center. Figure 10 shows the relation between contaminant extension and abstraction rate. The results showed that increasing the abstraction rate increased the extension of contaminants, which matches the results of Abd-Elhamid et al. (2018). The second case, with an abstraction rate of 90 m3/h, is used as the base case for the lining scenarios, where the extension of contaminant (XT) reached 290 m.
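The four reported rate–extension pairs can be interpolated to estimate XT at intermediate abstraction rates. This is only an illustrative sketch using the data points quoted above; the study obtained these values from MODFLOW runs, and the piecewise-linear interpolation is an assumption, not part of the original analysis.

```python
# Hedged sketch: interpolating the reported relation between abstraction
# rate Q (m³/h) and contaminant extension XT (m) from the data above.

rates = [60, 90, 120, 150]          # abstraction rates Q (m³/h)
extensions = [100, 290, 408, 510]   # reported XT (m) for the 100 mg/l line

def estimate_xt(q):
    """Piecewise-linear estimate of XT for an abstraction rate q (m³/h)."""
    if q <= rates[0]:
        return extensions[0]
    for (q0, x0), (q1, x1) in zip(zip(rates, extensions),
                                  zip(rates[1:], extensions[1:])):
        if q <= q1:
            return x0 + (x1 - x0) * (q - q0) / (q1 - q0)
    return extensions[-1]

print(estimate_xt(90))   # 290.0 — the base case used for the lining scenarios
print(estimate_xt(105))  # 349.0 — midway between the 90 and 120 m³/h runs
```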
PROTECTION OF GROUNDWATER USING DIFFERENT LINING MATERIALS
MODFLOW is used to assess the protection of groundwater from leakage of polluted drains using different materials, for an abstraction rate of 90 m3/h. Different lining materials are used to investigate the effect of each material on the extension of contaminant (XT) into the aquifer. These materials are selected based on hydraulic conductivity (K); low hydraulic conductivity values are used because low conductivity strongly limits solute transport. Table 1 presents the hydraulic conductivity of the lining materials used. Four materials of low hydraulic conductivity have been selected for lining the drain: clay, bentonite, geomembrane and concrete. The results of these cases are shown in Figure 11.
|Scenario||Material||Hydraulic conductivity (K) (m/d)|
|Base case||Graded sand with clay lenses||50|
|Scenario 1||Clay||0.25|
|Scenario 2||Bentonite||0.033|
|Scenario 3||Geomembrane||1 × 10−4|
|Scenario 4||Concrete||4 × 10−9|
Abd-Elhamid et al. (2018) showed that groundwater contamination was highly affected by the pumping schemes. However, the current study aims to protect the groundwater aquifer from contamination using different lining materials. The lining materials were selected based on hydraulic effect, availability, durability and cost. Four materials have been selected: clay, bentonite, geomembrane and concrete. The hydraulic conductivity for the clay is 0.25 m per day (m/d) (El-Arabi 2007). The hydraulic conductivity for the sand-bentonite mixture with a ratio of 20% bentonite to 80% sand is 0.033 m/d (Ojoawo & Adegbola 2012). The geomembrane permeability is less than 10−6 m/d (Lambert & Touze-Foltz 2000), and the hydraulic conductivity of concrete and mortar was 4 × 10−9 m/d and 4 × 10−10 m/d, respectively (Schneider et al. 2012).
Drain lining with clay
In the first case, clay is used for lining the polluted open drain due to its low hydraulic conductivity of 0.25 m/d, compared with the base case conductivity of 50 m/d. The results showed a large decrease in the extension of contaminant (XT) into the aquifer for the equi-concentration line 100 mg/l, which decreased to 165 m compared with 290 m for the unlined base case at an abstraction rate of 90 m3/h. Figure 11(a) shows the effect of using clay for drain lining on the contaminant extension (XT). The results of this case indicated that the extension of contaminant (XT) into the aquifer decreased by 43%, from 290 to 165 m, due to the use of clay lining.
Drain lining with bentonite
For the second case, mixed bentonite is used for drain lining with low hydraulic conductivity of 0.033 m/d. A sand-bentonite mixture is used with a ratio of 20% bentonite to 80% sand. Figure 11(b) shows the effect of using a sand-bentonite mixture on contaminant extension. The contamination has decreased with this type of lining and helped to prevent the spread of contamination into the aquifer. The extension of contaminant (XT) into the aquifer for equi-concentration line 100 mg/l decreased from 290 m at base case to 30 m after lining the polluted drain using bentonite. The extension of contaminant (XT) into the aquifer has decreased by 89.6% when bentonite is used for lining.
Drain lining with geomembranes
In the third case manufactured geomembrane sheets are used for drain lining with low hydraulic conductivity of 0.0001 m/d. Figure 11(c) shows the contamination extension into the aquifer was decreased using the geomembrane for lining. The extension of contaminant (XT) into the aquifer for equi-concentration line 100 mg/l decreased from 290 m at base case to 25 m with the geomembrane lining. The extension of contaminant (XT) into the aquifer has decreased by 91.4% when geomembrane is used for lining.
Drain lining with concrete
The fourth case represents lining of the drain using concrete with hydraulic conductivity of 4 × 10−9 m/d. Figure 11(d) shows a sharp decrease in the extension into the aquifer after the concrete lining was used. The extension of contaminant (XT) for equi-concentration line 100 mg/l was 20 m. The extension of contaminant (XT) into the aquifer has decreased by 93% when concrete lining is used.
Figure 12 shows the relation between the hydraulic conductivity of the lining materials and the extension of contaminant (XT) into the aquifer. The figure indicates that decreasing the hydraulic conductivity of the material used for lining decreases the extension of the contaminant. The results showed that lining reduced the extension of contaminants by 43%, 89.6%, 91.4% and 93% for clay, bentonite, geomembrane and concrete, respectively.
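These percentages can be verified directly from the XT values reported in the text (base case 290 m; lined cases 165, 30, 25 and 20 m):

```python
# Verifying the reported reduction percentages from the XT values quoted
# in the text.

BASE_XT = 290.0  # extension of contaminant without lining (m)
lined = {"clay": 165.0, "bentonite": 30.0,
         "geomembrane": 25.0, "concrete": 20.0}  # XT with each lining (m)

for material, xt in lined.items():
    reduction = 100 * (BASE_XT - xt) / BASE_XT
    print(f"{material}: {reduction:.1f}% reduction")
# clay 43.1%, bentonite 89.7%, geomembrane 91.4%, concrete 93.1% —
# closely matching the 43, 89.6, 91.4 and 93% reported in the text.
```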
A cost analysis is discussed in the next section to identify the best material for lining. The mass balance for each lining material is calculated using the numerical model. The salt mass for the base case reached 1,128.40 kg, while for the lined cases it reached 876.30, 205.10, 41.50 and 39.40 kg for clay, bentonite, geomembranes and concrete, respectively. Figure 13 shows the relation between the lining materials and the transported mass balance (kg). The results showed that the contamination extension decreased with decreasing conductivity of the lining material.
COST ANALYSIS OF MATERIALS USED FOR LINING
This section presents a cost-effectiveness study for using different lining materials (clay, bentonite, geomembranes and concrete) based on the cost of lining the drain cross-section. The numerical model results showed that the effectiveness of materials used for lining to protect groundwater from contamination can be ordered as concrete, geomembranes, bentonite and clay. Also, from previous studies the cost of geomembrane was found to be between $0.50 and $5 m−2 depending on its properties and lifetime; the cost of concrete is $13/m2 for 25 cm thickness, bentonite costs $6/m2 and clay $3/m2. The open drain cross-section has a wetted perimeter of 43.40 m, and the lining cost of this section was $130.20, $260.60, $217 and $564.20 per metre length of drain for clay, bentonite, geomembranes and concrete, respectively. A summary of the results for different lining materials is presented in Table 2. The cost was estimated based on Egyptian market prices.
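The section costs follow from multiplying the wetted perimeter by each unit price. A short check (using $5/m², the upper end of the quoted geomembrane price range) reproduces these figures, though the bentonite product comes to $260.40 rather than the $260.60 quoted:

```python
# Reproducing the section-cost figures: cost per metre of drain =
# wetted perimeter × unit price of the lining material. Unit prices and
# the 43.40 m wetted perimeter come from the text; the $5/m² geomembrane
# price is the upper end of the quoted $0.50–$5/m² range.

WETTED_PERIMETER = 43.40  # m
unit_price = {"clay": 3.0, "bentonite": 6.0,
              "geomembrane": 5.0, "concrete": 13.0}  # $/m²

for material, price in unit_price.items():
    cost = WETTED_PERIMETER * price
    print(f"{material}: ${cost:.2f} per metre of drain")
# clay $130.20, bentonite $260.40, geomembrane $217.00, concrete $564.20
```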
|Material||Contaminant extension (XT) (m)||Salt mass balance (kg)||Cost ($ per metre of drain)|
|Base case (no lining)||290||1,128.40||–|
|Clay||165||876.30||130.20|
|Bentonite||30||205.10||260.60|
|Geomembrane||25||41.50||217.00|
|Concrete||20||39.40||564.20|
Figure 14 shows the comparison between the costs of different lining materials. From the figure it can be observed that clay had the lowest cost of lining, followed by geomembranes and bentonite, with concrete the highest cost. However, although clay was low cost, it reduced the extension of contaminant by only 43%. From this comparison, geomembrane is considered the best material for lining as it reduced the extension of contaminant by 91.4% at a lower cost than concrete, which reduced the extension of contaminant by 93% but at double the cost of geomembranes.
Groundwater is considered an important source of water in many countries, but it is highly exposed to several sources of pollution, among them leakage from polluted open drains. Investigating contaminant extension into aquifers and protecting such aquifers is an important issue. In the current study, numerical analysis is used to investigate contamination of groundwater due to seepage from an open drain. Also, the effect of drain lining with different materials on the extension of contaminant (XT) into the aquifer is presented. The numerical model MODFLOW is applied to simulate groundwater flow and contaminant transport. Different materials with different hydraulic conductivities, including clay, bentonite, geomembranes and concrete, are examined. The results indicated that the extension of contaminant (XT) from open drains into the aquifer decreases as the hydraulic conductivity of the lining material decreases. From the comparison between materials, concrete gave the highest reduction of contaminant extension at 93%, followed by geomembranes at 91.4%, bentonite at 89.6% and clay at 43%. Considering the cost of materials, geomembrane is considered the best material for lining as it costs less than 50% of the cost of concrete. This study recommends using geomembranes for lining open drains as they can limit contaminant extension at low cost and are durable over time. Drain lining could help to protect groundwater from contamination and so protect the health and lives of many people. This study presented a numerical simulation of a natural phenomenon that may in the future be studied experimentally.
- Identify the general properties of the alcohol functional group
- Due to the presence of an -OH group, alcohols can hydrogen bond. This leads to higher boiling points compared to their parent alkanes.
- Alcohols are polar in nature. This is attributed to the difference in electronegativity between the carbon and the oxygen atoms.
- In chemical reactions, hydroxyl groups usually cannot leave the molecule on their own; to leave, they are often protonated to form water, which is a better leaving group. Alcohols can also be deprotonated in the presence of a strong base.
- carboxylic acidAny of a class of organic compounds containing a carboxyl functional group—a carbon with a double bond to an oxygen and a single bond to another oxygen, which is in turn bonded to a hydrogen.
- aldehydeAny of a large class of reactive organic compounds (R·CHO) having a carbonyl functional group attached to one hydrocarbon radical and a hydrogen atom.
- alkaneAny of the saturated hydrocarbons—including methane, ethane, and compounds with long carbon chain known as paraffins, etc.— that have a chemical formula of the form CnH2n+2.
- leaving groupIn organic chemistry, the species that leaves the parent molecule following a substitution reaction.
Alcohols are organic compounds in which the hydroxyl functional group (-OH) is bound to a carbon atom. Alcohols are an important class of molecules with many scientific, medical, and industrial uses.
Nomenclature of Alcohols
According to the IUPAC nomenclature system, an alcohol is named by dropping the terminal “-e” of the parent carbon chain (alkane, alkene, or alkyne in most cases) and the addition of “-ol” as the ending. If the location of the hydroxyl group must be specified, a number is inserted between the parent alkane name and the “-ol” (propan-1-ol) or before the IUPAC name (1-propanol). If a higher priority group is present, such as an aldehyde, ketone or carboxylic acid, then it is necessary to use the prefix “hydroxy-” instead of the ending “-ol.”
Alcohols are classified as primary, secondary, or tertiary, based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl group.
Structure and Physical Properties of Alcohols
The structure of an alcohol is similar to that of water, as it has a bent shape. This geometrical arrangement reflects the effect of electron repulsion and the increasing steric bulk of the substituents on the central oxygen atom. Like water, alcohols are polar, containing an unsymmetrical distribution of charge between the oxygen and hydrogen atoms. The high electronegativity of the oxygen compared to carbon leads to the shortening and strengthening of the -OH bond. The presence of the -OH groups allows for hydrogen bonding with other -OH groups, hydrogen atoms, and other molecules. Since alcohols are able to hydrogen bond, their boiling points are higher than those of their parent molecules.
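The boiling-point trend can be made concrete with standard literature values (at 1 atm); these numbers are a simple illustration added here, not from the original text:

```python
# Each alcohol boils far above its parent alkane because -OH groups
# hydrogen bond. Boiling points are standard literature values in °C.

boiling_points = {
    "methane": -161.5, "methanol": 64.7,
    "ethane":  -88.6,  "ethanol": 78.4,
    "propane": -42.1,  "propan-1-ol": 97.2,
}

pairs = [("methane", "methanol"),
         ("ethane", "ethanol"),
         ("propane", "propan-1-ol")]

for alkane, alcohol in pairs:
    delta = boiling_points[alcohol] - boiling_points[alkane]
    print(f"{alcohol} boils {delta:.1f} °C higher than {alkane}")
```

Note that the gap shrinks as the carbon chain grows, since the hydrogen-bonding -OH group becomes a smaller fraction of the molecule.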
Alcohols are able to participate in many chemical reactions. They often undergo deprotonation in the presence of a strong base. This weak acid behavior results in the formation of an alkoxide salt and a water molecule. Hydroxyl groups alone are not considered good leaving groups. Often, their participation in nucleophilic substitution reactions is initiated by protonation of the oxygen atom, leading to the formation of a water moiety, a better leaving group. Alcohols can react with carboxylic acids to form an ester, and they can be oxidized to aldehydes or carboxylic acids.
Alcohols have many uses in our everyday world. They are found in beverages, antifreeze, antiseptics, and fuels. They can be used as preservatives for specimens in science, and they can be used in industry as reagents and solvents because they display an ability to dissolve both polar and non-polar substances.
Making learning fun and interactive is a great way to teach children to improve their awareness outside the classroom. Your child should be taught that learning is a lifelong process of questioning and discussion. Creating new learning environments for your child will expose your child to new experiences that cannot be taught in a classroom. Here are 5 easy learning techniques to foster your child’s learning and curiosity.
Explore the World
Build your child’s curiosity by travelling with your child. Whether it’s a short day trip or a weeklong stay, make sure you expose your child to different cultures, traditions, food and history. To make the most of your travel, plan ahead by researching a few sights that would interest your child.
Learn Through Everyday Tasks
Include learning in everyday mundane tasks. For example, involve your child in making his or her own lunch, or, when watching a basketball game, explain to your child how the game is played.
Take Every Opportunity to Answer “Why?”
Make sure everyday learning opportunities are not missed by always taking the time to answer your child's questions. For example, looking at the night sky can inspire your child to ask questions about the universe.
Explore Religion Together
If religion is part of your family, get your child involved in religious classes, youth groups and camps. Children will learn about religious concepts as well as their context in history.
Visit your local Public Library
Your local public library will provide you with free collections of books, magazines and archives. Reading programs in libraries will encourage your child to read with other children all the while sharing knowledge and making new friends. |
One of the fundamental goals of archaeology is to establish calendric ages for past cultural events. There are several ways to do this, including artifact seriation, obsidian hydration, and radiometric analysis. Each of these techniques has limitations, but after years of improvements, along with an ever-growing database, past culture chronologies have been accurately sequenced and have revealed a great legacy of complex human development taking place over many millennia throughout central California.
Artifact seriation involves organizing certain artifact types into a temporal order based on their styles and purposes. For example, ancient spear and arrowhead forms have shapes that can be assigned to general time periods of the past and can be ranked from oldest to most recent, in the same way that we might organize cars on the basis of their technological development and appearances. In prehistoric California, olive snail shell beads and abalone shell ornaments are especially time-sensitive artifact classes, and changing forms have been matched to calendric dates spanning the past 10,000 years.
Another means of relative dating is Obsidian Hydration analysis. In a nutshell, this involves removing a very thin slice of the edge of an obsidian artifact, then grinding it wafer thin to mount on a glass slide for viewing under a high-powered microscope, and reading the measured thickness of water absorbed on the surface of the artifact. This water band, or rind, is measured in microns, and the longer the edge of the artifact has been exposed to the atmosphere, the thicker the rind gets. All glass, which obsidian in fact is, will absorb traces of water; however, it was discovered that different obsidian sources (geographic points of volcanic origin) retain their own combinations of trace elements, which cause them to hydrate at different rates. Because of this, each specimen must be “sourced” before hydration analysis. Sourcing involves reading the results of a laser-generated light spectrum shown through the obsidian object by a device called a spectrograph. The method, known as x-ray fluorescence, allows us to source the obsidian and reconstruct the geography of prehistoric trade and exchange routes over time. Once the specimen has been sourced, it can then be cut and the hydration rind measured.
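The conversion from rind thickness to age can be sketched with the commonly used diffusion relation, thickness² = rate × time. The rate constant below is purely hypothetical; as the text explains, real rates must be established per obsidian source and burial environment:

```python
# Hedged sketch of the arithmetic behind obsidian hydration dating:
# the hydration rind grows roughly with the square root of time, so
# rind thickness² = rate × age.

HYPOTHETICAL_RATE = 10.0  # µm² per 1,000 years — illustrative value only

def hydration_age(rind_microns, rate=HYPOTHETICAL_RATE):
    """Estimate age in years from a measured rind thickness (µm)."""
    return (rind_microns ** 2) / rate * 1000

print(hydration_age(5.0))  # 5² / 10 × 1000 = 2500.0 years
```

The square-root growth is why a rind twice as thick implies an artifact roughly four times as old, not twice as old.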
Tribes along the San Mateo Coast had access to a variety of obsidians from both the Napa Valley and Clear Lake flows to the north, as well as several sources on the eastern side of the Sierra Nevada Ranges. Detailed mapping of the temporal and spatial distributions of obsidian specimens in local sites and their source origins is an ongoing process. Unfortunately, variables in soil moisture and temperature can affect the results, so it's desirable to run large numbers of specimens and mathematically work out their "mean hydration values." This can be a difficult proposition in coastal sites where obsidian artifacts are not very numerous, and the sampled assemblages yield wider spans of time than they actually represent. Also, this kind of work requires specialists working in a laboratory environment, who necessarily charge their requisite fees.
With radiocarbon dating we can get an absolute age; therefore, this is the best method at our disposal. Of course, this application also involves a range of fees for services that are now commercially available through a number of physics labs. The method was developed by American chemist Willard Libby in 1949 and involves measuring the emissions of the radioactive isotope carbon-14 as it is released from decaying organic materials. In short, cosmic sub-atomic particles constantly bombard the earth, producing high-energy neutrons. These neutrons react with nitrogen atoms in our atmosphere to produce atoms of carbon-14, or radiocarbon, which are unstable because they have eight neutrons in the nucleus instead of the usual six for ordinary carbon. This instability leads to radioactive decay at a regular rate. Steady atmospheric concentrations of 14C are passed on uniformly to all living things through carbon dioxide and the food chain. Only when a plant, animal, or human dies does the uptake of 14C cease, and the steady concentration of 14C begins to decline through radioactive decay in a patterned and predictable rate. By measuring the decay over a given period of time through a Geiger counter, a date can be assigned that reflects the age of the sample at the time of death.
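The decay arithmetic behind the method can be sketched as follows. One assumption to flag: conventional radiocarbon ages are, by convention, computed with the Libby half-life of 5,568 years, even though the modern "Cambridge" value is about 5,730 years; that convention is assumed here rather than stated in the text.

```python
import math

# Sketch of radiocarbon age calculation from the fraction of the
# original ¹⁴C remaining in a sample. Conventional ¹⁴C ages use the
# Libby half-life (an assumption of this sketch, not from the text).

LIBBY_HALF_LIFE = 5568.0  # years

def radiocarbon_age(fraction_remaining, half_life=LIBBY_HALF_LIFE):
    """Age in radiocarbon years BP from the remaining ¹⁴C fraction."""
    return -half_life * math.log(fraction_remaining) / math.log(2)

print(round(radiocarbon_age(0.5)))   # 5568  — one half-life
print(round(radiocarbon_age(0.25)))  # 11136 — two half-lives
```

Because decay is exponential, each additional half-life removes half of what remains, which is also why the method runs out of measurable ¹⁴C after roughly 50,000 years.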
Willard Libby assumed that the atmospheric conditions that generated 14C were constant; however, we now know that this is not the case, and a range of variables can affect the dating results. Fortunately, researchers have established corrections that bring the raw radiocarbon dates into recalibrated, "corrected" calendric dates. For example, in the case of mussel shells (which are a useful material to sample in coastal archaeological sites, given their abundance as dietary refuse), seasonal coastal upwelling of colder deep waters brings older carbon isotopes up from the depths, which are absorbed by the mollusks and incorporated into their shells. So when we date the mussel shell, we need to correct for the influences of these older 14C atoms. For sites dating to the last 3000 years along the San Mateo coast, a correction of 225 ± 25 years is factored in.
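Applying the quoted local reservoir correction is a simple subtraction; combining the uncertainties in quadrature, as below, is a standard convention assumed here rather than taken from the text:

```python
# Illustrative application of the marine reservoir correction quoted in
# the text (225 ± 25 years for San Mateo coast mussel shells). The
# measured age and error in the example are hypothetical.

RESERVOIR_OFFSET = 225       # years
RESERVOIR_UNCERTAINTY = 25   # years

def correct_shell_age(measured_age, measured_error):
    """Apply the local reservoir correction; errors add in quadrature."""
    corrected = measured_age - RESERVOIR_OFFSET
    error = (measured_error ** 2 + RESERVOIR_UNCERTAINTY ** 2) ** 0.5
    return corrected, error

age, err = correct_shell_age(2500, 40)
print(f"{age} ± {err:.0f} radiocarbon years BP")  # 2275 ± 47
```

The corrected age is then what gets calibrated against a calendric curve, since the reservoir offset must be removed before calibration.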
Up until about 10 years ago, the minimum sample size of mussel shells useful for radiocarbon dating was about 32 grams weight. That’s a lot of shells, and collecting them from volumetrically controlled archaeological excavation pits required sampling from various strata to get a sufficient combined total weight. This often resulted in the blurring-out of more subtle variations in time represented by the archaeological deposit. But a breakthrough in radiocarbon dating referred to as Accelerator Mass Spectrometry, or AMS dating, now allows for the sampling of sizes as small as 1/5th of a gram weight! This revolution in method has facilitated absolute dating of single shell specimens, individual shell beads, tiny dietary animal bones, and has been recognized as a less destructive and invasive dating method for human remains.
CSPA funded the sampling of two mussel shell AMS dates recovered from a prehistoric archaeological site located on the eroding cliffs of Montara State Beach. The site, recorded as CA-SMA-132, is one of several that still exist in the area; however, all of these local sites are threatened with loss from coastal erosion, construction activities, and artifact looting. The dated samples from SMA-132 were selected for two reasons: first, we needed to document the age of the site as part of the environmental review for proposed improvements to public access near the site; and second, we were interested in the possibility that the results might reflect dietary refuse from the Portola Expedition of 1769, and identify the location of an event described in the expedition’s journals. Don Gaspar de Portola commanded the first Spanish land exploration into Upper California, which ultimately led to the inadvertent discovery of San Francisco Bay, followed shortly thereafter by the colonization of our state. On the eve of their great discovery, they had camped along Martini Creek and dined on the large mussels available on the nearby rocks of Point Montara. There was a possibility that the mussel shells visible at SMA-132 might be the result of that event and we were curious to know.
The resulting AMS dates from SMA-132 turned out to represent two different temporal ages, which accords with the fact that both samples were taken from auger borings at two different places within the larger site. Unfortunately, neither of the resulting dates can be attributed to the Portola Expedition; but of great interest to local prehistory, the dates reflect ancestral Native American shell gathering and consumption events at AD 445 to AD 660, and again at AD 1010 to AD 1195. These dates correlate to a period of great social change among the many large land holding tribal communities that were forming throughout the larger San Francisco Bay Area and contribute to our understanding of regional culture history and prehistory. |
Convex Spherical Mirrors
Regardless of the position of the object reflected by a convex mirror, the image formed is always virtual, upright, and reduced in size. This interactive tutorial explores how moving the object farther away from the mirror's surface affects the size of the virtual image formed behind the mirror.
The tutorial initializes with the object (an upright arrow) positioned with its tail touching the center of the mirror's optical axis on the front side of the mirror, far away from the center of curvature and the focal point (located behind the mirror). To operate the tutorial, use the Object Position slider to translate the arrow back and forth in front of the mirror. As the arrow approaches the mirror, the upright, virtual image grows larger, approaching the size of the arrow, but becomes much smaller as the arrow is moved farther away from the reflecting surface of the mirror.
The convex mirror has a reflecting surface that curves outward resembling a portion of the exterior of a sphere. Light rays parallel to the optical axis are reflected from the surface in a manner that diverges from the focal point, which is behind the mirror. Images formed with convex mirrors are always right side up and reduced in size. These images are also termed virtual images, because they occur where reflected rays appear to diverge from a focal point behind the mirror.
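This behavior follows directly from the mirror equation, 1/do + 1/di = 1/f, with the focal length taken as negative for a convex mirror under the usual sign convention. The short sketch below is a generic illustration of that convention, not code from the interactive tutorial itself.

```python
def convex_mirror_image(object_dist, focal_length):
    """Image distance and magnification from the mirror equation
    1/do + 1/di = 1/f; focal_length must be negative (convex)."""
    assert object_dist > 0 and focal_length < 0
    image_dist = 1.0 / (1.0 / focal_length - 1.0 / object_dist)
    magnification = -image_dist / object_dist
    return image_dist, magnification

di, m = convex_mirror_image(object_dist=2.0, focal_length=-1.0)
# di < 0: the image is virtual (located behind the mirror);
# 0 < m < 1: upright and reduced, for any object distance.
```

Moving the object farther away makes the magnification smaller still, which is exactly what the tutorial's slider demonstrates.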
Convex mirrors are often used in automobile right-hand rear-view applications where the outward mirror curvature produces a smaller, more panoramic view of events occurring behind the vehicle. When parallel rays strike the surface of a convex mirror, they are reflected outward and diverge away from the mirrored surface. When the brain retraces the rays they appear to come from behind the mirror where they would converge, producing a smaller upright image (the image is upright since the virtual image is formed before the rays have crossed the focal point). Convex mirrors are also used as wide-angle mirrors in hallways and businesses for security and safety. The most amusing applications for curved mirrors are the novelty mirrors found in state fairs, carnivals, and fun houses. These mirrors often incorporate a mixture of concave and convex surfaces, or surfaces that gently change curvature, to produce bizarre, distorted reflections when people observe themselves.
Matthew J. Parry-Hill, Thomas J. Fellers and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
Immunology 101 Series: 5 Ways Vaccines are Made
As you know from reading the first Immunology 101 Series post, vaccines are composed of non-disease causing forms of pathogen (the scientific term for ‘germs’ such as bacteria and viruses) that allow our immune system to create long-lived memory cells. Memory cells remember the pathogen from the first encounter with the vaccine and quickly defeat the real, disease-causing pathogen when it enters our body. Essentially, vaccines train our immune system to recognize and respond quickly to infection to keep us healthy!
While getting your flu shot this season, you may have been offered the choice of different types of vaccines. Live, attenuated? Inactivated? What do these terms mean, and how do they affect protection? Read on to find out!
Vaccines are made in several ways to create the most effective vaccine possible. Each pathogen is unique. Scientists use this information — the unique properties of each pathogen — to design the most effective vaccine using the same components found in the natural pathogen to stimulate an immune response.
The different types of vaccines and ways of creating them include live, attenuated, inactivated, subunit, conjugate, and toxoid.
The term live, attenuated refers to a vaccine that uses a virus that has been weakened to the point that it is incapable of causing disease. This type of vaccine is highly effective as it most closely resembles a natural infection and produces a vigorous immune response. This means that both B cells and T cells, the major white cells of the immune system military, are called into action and will produce memory cells. These memory cells continually patrol our body, ready to fight once we encounter the real, live pathogen through natural infection.
One limitation is that live, attenuated vaccines can’t be given to everyone. Children with a weakened immune system (as a result of an immunodeficiency or the use of chemotherapeutics to treat cancer) cannot receive live vaccines. Because their immune system is immunocompromised, or not fully functioning, exposure to a live virus, even one that has been weakened, is not recommended.
• Examples of live, attenuated vaccines are the varicella ‘chickenpox’ vaccine, the MMR (measles, mumps, and rubella) vaccine, and the nasal spray form of the influenza ‘flu’ vaccine.
Inactivated vaccines contain a virus that has been killed and is completely incapable of causing disease. Even though the virus is not living, the cells of the immune system still respond to the killed virus in the vaccine and create memory cells in preparation for the real pathogen. An advantage to this approach is that children with weakened immune systems can receive inactivated vaccines because they do not cause even mild forms of the disease. A limitation is that inactivated vaccines sometimes require several doses – referred to as ‘booster doses’ — to achieve high levels of immunity.
• Examples of inactivated vaccines are the injected or shot form of the influenza ‘flu’ vaccine, the inactivated polio vaccine, the hepatitis A vaccine, and the rabies vaccine.
Subunit vaccines are made from pieces of the pathogen rather than the whole organism. Purified fragments of the pathogen can be used to trigger specific immune responses. In this type of vaccine, the outer surface antigens (which can be thought of as the coat that surrounds the virus) are used. This outer ‘coat’ is the portion of the pathogen the immune cells would encounter during a natural infection. Because only pieces of the pathogen are administered, these vaccines are typically not as immunogenic; that is, they do not create as vigorous an immune response as whole pathogens, and booster doses are often needed to be completely effective.
• Examples of subunit vaccines are the Hepatitis B (HBV) vaccine and the human papillomavirus (HPV) vaccine.
Conjugate vaccines were created to combat bacterial pathogens that have an outer coating composed of sugar-like substances. The immune system, specifically the T cells, have difficulty responding to these types of bacteria; the sugar-like substances on the outer surface of the bacteria act as a sort of disguise that allow them to be almost invisible to the T cells of the immune system (if you are a Harry Potter fan you can compare these sugar-like substances to the cloak of invisibility). In order to make these sugar-like substances visible to the T cells within the immune system they are conjugated, or actually connected, to substances that these immune system cells can recognize and respond to, such as large, harmless proteins. The connection or conjugation of pieces of the outer coating sugar-like substances of bacteria to harmless proteins allows the immune system to respond. This is particularly helpful to infants that have immature immune systems that are not yet fully developed.
• An example of a conjugate vaccine is the Hib vaccine that combats Haemophilus influenzae type b bacterium that infects the lining of the brain resulting in meningitis.
Toxoid vaccines contain inactivated or killed toxins called toxoids that are no longer capable of causing harm or disease. Toxins are poisonous substances produced by many types of bacteria. As a result, several bacteria cause disease through the production of harmful toxins. Toxoid vaccines use inactivated toxins to allow for the production of an immune response but to eliminate the possibility of disease. Toxoid vaccines primarily induce B cells to produce antitoxin antibodies. Research has shown that antitoxin antibody levels decline slowly over time so booster doses are recommended every 10 years to maintain protection.
• Examples of toxoid vaccines are DTaP (pediatric form of the vaccine) and Tdap (adolescent and adult form of the vaccine) – the vaccines that are made to train the immune system against diphtheria, tetanus, and pertussis (whooping cough).
While there are several types of vaccines, all are created using a similar strategy: the pathogen, or a piece of it, is altered so that it cannot cause disease, allowing you to develop an immune response without becoming sick.
Vaccines are a safe and effective way to stop the spread of disease by building the immune system. Stay tuned for the next Immunology 101 Series post to learn how vaccines are made, tested and licensed for public use to ensure that we get the best and safest protection possible.
Check out these fun tools to better understand how vaccines are made: |
Chapter 1 Project
The world is full of different shapes. This project requires you to find examples of the 12 basic shape functions we just finished studying and to take a picture of each one. Signs and logos are a great place to start.
After gathering your 12 pictures, label each one with the name and the equation of the function. Capture a screen shot of the function from your graphing calculator and include a sentence or two about the picture. What did you take a picture of and where did you take it? Superimpose the shape of the function over the picture (see examples). ALL PICTURES SHOULD BE TAKEN OUTSIDE OF SCHOOL
Here’s a list of the 12 basic shapes:
Linear function: f(x) = x
Absolute Value function: f(x) = |x|
Reciprocal function: f(x) = 1/x
Squaring function: f(x) = x^2
Cubing function: f(x) = x^3
Square Root function: f(x) = √x
Sine function: f(x) = sin x
Cosine function: f(x) = cos x
Log function: f(x) = ln x
Exponential function: f(x) = e^x
Logistic function: f(x) = 1/(1 + e^(-x))
Greatest Integer function: f(x) = [x]
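For checking your equations (or reproducing the calculator screenshots), the twelve shapes can be written as Python functions. This is just a study aid, not part of the assignment:

```python
import math

# The 12 basic shapes as callables, following the list above.
BASIC_SHAPES = {
    "linear":           lambda x: x,
    "absolute value":   abs,
    "reciprocal":       lambda x: 1 / x,
    "squaring":         lambda x: x ** 2,
    "cubing":           lambda x: x ** 3,
    "square root":      math.sqrt,
    "sine":             math.sin,
    "cosine":           math.cos,
    "log":              math.log,   # natural log, ln x
    "exponential":      math.exp,
    "logistic":         lambda x: 1 / (1 + math.exp(-x)),
    "greatest integer": math.floor,
}

for name, f in BASIC_SHAPES.items():
    print(f"{name}: f(1) = {f(1)}")
```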
You may work with one other person or you may work alone.
When you turn it in, staple the following together:
1) A cover page (include subject, name(s), date)
2) Introduction: Describe the project. Talk about what you learned doing this project.
3) 12 pictures (no more than 6 per page) with a sentence or two about the picture
4) The basic shape should be superimposed over each picture
You are to turn in both a hard copy and an electronic copy of the project. Extra credit will be given to pictures that are original and clever. Samples and a scoring rubric are on the back.
Due date: October 6th
Scoring Rubric for “Candid Camera”
Points possible Points earned
Followed all directions
Picture of all 12 shapes 24 points __________
Sentence or two describing the picture 12 points __________
Basic shape superimposed over shape 12 points __________
Turned in hard copy and electronic copy 2 points __________
Total points earned out of a possible 50 points ____________ |
Skills for Educators
Role Play for Behavioral Practice

This article is divided into the following sections:
- an introduction to role play,
- a description of the instructional strategy and its components,
- tips for using the strategy effectively,
- a sample lesson that illustrates the strategy in practice, and
- a sample observer checklist.
Using role play for behavioral practice has become a favorite strategy of curriculum developers. However, it is often less enthusiastically embraced by educators. Educators often feel uncomfortable with this instructional strategy because of past negative experiences and/or because they need classroom management strategies that:
- minimize the noise and confusion that accompany role play practice,
- keep youth on task as they practice in small groups,
- assure that every student has a chance to practice and receive feedback, and
- assure that youth are practicing the skills effectively during the role play enactment.
Once educators learn the strategies and skills to address these concerns, they usually embrace role play as an effective, useful instructional strategy for behavioral practice. The following description, sample lesson and tips provide a model that has been effective in implementing this strategy.
Description

Role play for behavioral practice is a teaching strategy that allows youth to practice a variety of communication skills by acting out real life situations in a safe environment like a classroom or youth group. In order to assure that youth learn the skill effectively, the behavioral practice should include several phases.
Phase One: Preparation

Prior to the behavioral practice, the educator and youth need to identify the scenarios that will be used for the skill practice. Youth can make up the scenarios or the educator can use ones found in already published curriculum like those described in the Evidence-Based Programs section.
Phase Two: Reviewing the Skill

If several days have passed since youth have seen the skill demonstrated, have several volunteers review the essential elements of the skills and offer a demonstration.
Phase Three: Preparing Small Groups

Divide youth into small groups of three to four. Then have small group members decide who will practice the skill in the first, second, third, etc. round of the role play. Finally, prepare youth to be observers of each other's skill practice. We recommend that observers use an Observer Checklist. The Observer Checklist should list the essential characteristics of the skill being practiced and can be used to provide feedback once each youth has practiced the skill. Be sure that the characteristics are observable behaviors, like: says the word "no," uses words that build the relationship, etc.
Phase Four: Enactment in Small Groups

Once youth begin to practice in their small groups, the educator should walk around the room observing them to assure they are practicing the skill correctly. S/he may also provide coaching as appropriate. In addition, the educator should time each practice round, telling students when to move on to the next person. Timing assures that all students get a chance to practice and get feedback.
Phase Five: Small Group Discussion

After each role play, instruct youth to discuss how it felt to practice the skill and to identify what they did well and what they would change next time they used the skill. Have observers use the checklist to give feedback to their peers on the skill practice.
Phase Six: Large Group Discussion

After the small groups have completed their practice, reconvene the whole group and lead a discussion using the following questions:
- What feelings did you experience as you used the skill?
- What words or behaviors made the skill effective? What took away from the effectiveness?
- How were the role plays similar or not similar to real life?
- Were there any barriers to using the skill? E.g., strong, aggressive behavior from the other role player, etc. Help youth identify ways to overcome any barriers that are identified.
- In what ways or situations might you use the skill in the next week or two?
See sample lesson for teaching student refusal skills.
Tips

To maximize your effectiveness in using role play for behavioral practice, we recommend the following:
- Prior to the behavioral practice, make sure the group or class has established basic ground rules including listening to others, no put downs, right to pass, confidentiality, etc.
- Involve youth in the development of role play scenarios so the role plays are relevant to their lives.
- Ask for volunteers to demonstrate the skill prior to the beginning of the practice lesson. This allows the volunteers to do a little preparation.
- During the initial practicing of a skill, it is helpful to have youth read a scripted role play or write out the words they will use to practice the skill before they begin. Once youth have practiced the skill several times, they can omit this step.
- If possible, make sure each small group has both genders. Mixing genders is important because youth may have trouble practicing with the same sex if scenarios involve saying no to or communicating about sexual situations.
- Use sets of instructional cards for each small group to help keep them on task. For example, sets should include one card for each of the following: Role Player #1, Role Player #2, Observer #1, Observer #2 with Small Group Discussion Questions written on the back (see Phase 5).
- Use props to help youth get into the role plays. For example, clothing, an old couch, chairs facing back to back if it is a phone conversation etc. help youth be more comfortable "play acting." Props also make the role play fun.
- Have the instructions for the small group practice written on the blackboard or newsprint so the small groups can refer to them if they get stuck.
- Decide on a method to indicate when youth should switch roles during the practice. For example, you could ring a bell, flick the lights, appoint a time keeper in each group, etc.
- If the whole group is small (10 or less), this five-phase process can be used in one large group instead of small groups. If you are doing practice in one large group, have youth who aren't role playing act as observers. Be sure that everyone gets a chance to practice the skill. |
An analysis of two punctured saber-toothed cat skulls suggests these extinct creatures engaged in intra-species combat. It’s further evidence that the exaggerated fangs of saber-toothed cats were strong enough to penetrate bone.
Saber-toothed cats disappeared around 11,000 years ago, but these fearsome predators dominated Pleistocene landscapes for millions of years. The purpose of their iconic fangs, however, is the subject of a longstanding debate, with some scientists arguing that the fangs—which grew as long as 28 centimeters (11 inches) in length—were too fragile, and the saber-tooth bite too weak, to be used for attacking prey. According to this theory, the fangs were only put to use once a saber-toothed cat brought its prey down to the ground with its huge forelimbs, at which point the elongated upper canines were used to pierce through the soft, vulnerable neck.
New research published in the science journal Comptes Rendus Palevol now presents a serious challenge to this scenario. A pair of saber-tooth skulls, both belonging to the species Smilodon populator, exhibited puncture marks consistent with a bite inflicted by a member of the same species. The finding suggests saber-toothed fangs were indeed strong enough to penetrate bone, while shedding new light onto their social behavior, namely that saber-toothed cats fought amongst themselves. The authors of the new study, led by Nicolás Chimento and Federico Agnolin from the Natural Sciences Museum of Argentina, theorize that the ancient cats engaged in intra-species combat, similar to modern felines.
Analysis of the two punctures (one on each skull) revealed a distinctly elliptical shape. Each hole was located on the upper nasal area between the eyes, and they were slightly sunken in, suggesting pressure was exerted onto the skulls. One of the specimens showed signs of healing, which meant the individual survived for a long time after enduring the injury.
“The size and general contours of the injuries present in [the] specimens...are consistent with the size and contours observed in the upper canines of Smilodon,” the authors wrote. “In fact, when a blade-like upper canine of a Smilodon specimen is inserted through the described opening, both perfectly match in size and shape.”
Based on the shape of the holes, the authors said it’s not likely—but not impossible—that the punctures were caused by the kicking action of a hoofed prey animal, which have anywhere from two to four toes. The holes also didn’t match the shape of teeth from other predators, such as bears—an animal that would have created a discernibly roundish puncture wound. The researchers also said it’s unlikely the punctures were caused by a large-clawed giant ground sloth, as its claws “should have resulted in very different injuries from those reported here,” the authors wrote. The “shape and general features of the injuries suggest that they were inflicted by the upper canines of another Smilodon individual during [antagonistic] interactions,” concluded the authors.
Importantly, similar injuries are often seen in living felines, including leopards, pumas, cheetahs, and panthers. Such injuries, the authors wrote, are often the result of violent encounters between males and sometimes females, and they frequently result in the death of one of the participants. The new study suggests the same sort of thing occurred among the saber-toothed cats, but that remains speculation.
It’s pretty amazing what can be gleaned from a couple of holes. This evidence suggests that saber-toothed cats could possibly have used their bone-penetrating fangs to take down prey. And indeed, this is not a completely wild assertion; previous fossil evidence has already hinted that saber-toothed cats hunted the giant armadillo-like glyptodonts in this fashion.
We always knew saber-toothed cats were intimidating, but this paper—along with the striking skulls image above—suddenly makes them seem a lot more terrifying. |
Vitamin C is a water-soluble vitamin that is necessary for normal growth and development. It is one of many antioxidants. Antioxidants are nutrients that block some of the damage caused by free radicals. Free radicals are made when your body breaks down food or when you are exposed to tobacco smoke or radiation. The buildup of free radicals over time is largely responsible for the aging process. Free radicals may play a role in cancer, heart disease, and conditions like arthritis. The body is not able to make Vitamin C on its own, and it does not store vitamin C. It is therefore important to include plenty of vitamin C-containing foods in your daily diet.
Vitamin C is needed for the growth and repair of tissues in all parts of your body. It is used to:
Form collagen, an important protein used to make skin, tendons, ligaments, and blood vessels.
Heal wounds and form scar tissue.
Repair and maintain cartilage, bones, and teeth.
Boost the immune system. |
The Konan Specular Microscope measures the corneal endothelium. This is the innermost layer of your cornea, and it is the most fragile layer because it is only one cell thick. The endothelial cells maintain the fluid balance of the cornea, which is necessary to maintain clear vision. Unlike most cells in the body, endothelial cells cannot replicate themselves.
If these cells become sufficiently damaged, the cornea loses its clarity and clouds up, limiting or blocking vision.
So, What Causes Endothelial Damage?
There are many factors that can damage the cells of the corneal endothelium:
- Contact lens wear (long term wear of older types of lenses)
- Contact lens related eye problems (cloudy corneas, bloodshot eyes, etc.)
- Patients taking certain prescription eye drops
- Refractive surgery patients
- Cataract surgery patients (surgery is known to lower cell counts)
- Glaucoma patients (glaucoma reduces endothelial cells)
- Dry Eye
The procedure to evaluate your endothelium is simple, quick, non-invasive, and totally free of any discomfort. A special instrument called a specular microscope captures an image of your endothelium and allows us to analyze the appearance of the endothelial cells. If the screening examination indicates early endothelial cell damage, a more detailed examination of the endothelium may be indicated to provide the best possible treatment.
Nature has endowed the Earth with resources for the survival and sustenance of life. About 8.7 million known species of flora and fauna (6.5 million on land and 2.2 million in the oceans) are dispersed across the different geographic zones, and some of them are indicator species. The big cats play a vital role in maintaining population equilibrium among primary, secondary and tertiary consumers in sylvatic food chains. Taxonomically, the big cats are placed in the family Felidae, with four subfamilies, viz. Pantherinae, Felinae, Machairodontinae (extinct) and Proailurinae (extinct). At present, there are only three regions of the world where populations of large felines exist, viz. Africa (cheetah, leopard and lion), Asia (Asiatic cheetah, Amur tiger, clouded leopard, Sunda clouded leopard, snow leopard, Asiatic lion and tiger) and North and South America (cougar and jaguar). Based on the geographical background of Panthera species, the tiger, lion and cheetah are thought to have evolved from a prehistoric ancestor, the cave lion. Despite infighting and retaliatory killing, disease manifestations in big cats have been underestimated, even though they have emerged as serious threats to their survival.
In the present scenario, the complexity of these challenges is gradually expanding through encroachment on forest land and increasing interaction between humans, domestic animals and wild animals, posing a menace to wildlife. The mingling of companion animals with free-ranging wildlife at shared grasslands or water holes is a key factor in disease transmission to either wild or domestic animals. Indiscriminate use of forest wealth and simplification of the ecosystem are identified causes of the unwarranted migration of zoonotic pathogens from sylvatic to domestic cycles and vice versa. Many pathogens of bacterial, viral and parasitic origin have been changing their migration patterns owing to consistent human intervention in forest habitats. Therefore, susceptibility to zoonotic diseases is also increasing, and these diseases are expanding their host ranges day by day. The tribal populations of areas adjoining national parks and sanctuaries are more prone to such zoonotic infections, as they utilise the common water resources.
It has been believed that free-ranging wild animals possess a comparatively high standard of health, rigidly maintained by the action of natural selection. But owing to increased human interference, poaching and destruction of habitat, most of the significant threats are now health related. Therefore, the use of advanced technology for health monitoring, disease diagnosis and forensic investigation, and its legal application, is the need of the hour for wild animal conservation strategies.
2. Status of health management practices
Until the recent past, the emphasis in ecological management was on strengthening and provoking natural mechanisms for the conservation of wildlife wealth, whereas health management was out of focus. Only firefighting approaches were taken, sometimes restricted exclusively to the rescue of an ailing wild animal. Up to 1990, there were limited amenities for restraining wild animals, whereas after the introduction of chemical immobilisation and modern technologies, the task became far less fearsome and turned into a mandatory component of drug orientation and health monitoring of felines. Large-scale mortality among African lions in the Serengeti due to canine distemper, and deaths due to feline panleukopenia virus in 1995–1996 in different parts of the world including Japan and India, were eye openers for the Indian subcontinent. Subsequently, governments and national and international NGOs were forced to adopt scientific wildlife health management strategies in their conservation programmes.
The present book focuses on big cats and is intended as a tool for conservationists, wildlife biologists and wildlife veterinarians in framing concrete research programmes to study anatomical variations, habit, habitat and health aspects, including treatment, disease diagnosis and active veterinary interventions, for the sustenance and survival of big cats in free-ranging as well as captive conditions.
The following are the key points for a successful health management programme, whether in captivity or in free-ranging habitat:
Care and management in zoos/captivity: The main objectives of zoos and captive facilities are conservation education, breeding of critically endangered species, recreational value and scientific studies on various aspects of animal behaviour and health. The risk factor during health monitoring and treatment always surrounds the veterinarian, but following standard operating procedures and using diagnostic tools can ease the handling of situations both in free-ranging conditions and in captivity. The handling of wild animals in captivity is complex and challenging, and requires knowledge of the habits and health status of the particular species. Sometimes animals show camouflaging behaviour, feigning sickness, or display aggressive or unpredictable postures during the collection of biological samples for disease diagnosis. Therefore, during health monitoring or shifting of an animal from the zoo, standard guidelines should be followed (Figure 1).
Analysis of health aspects in free-ranging habitat: The assessment of disease manifestations is not yet fully reliable, but veterinary scientists have developed predictive models based on clinical symptoms, changes in behaviour and physical appearance. An ailing wild animal should be assessed for disease risk and attended to in time. In free-ranging animals, veterinary interventions are difficult and need a multidisciplinary approach for restraint, radio collaring, treatment or collection of biological samples for laboratory investigations to evaluate health status (Figures 2 and 3).
Precautions during immobilisation: The most demanding and essential tasks of wildlife veterinarians are restraint, radio collaring, health monitoring, disease diagnosis and reintroduction of wild animals. Calculating doses and drug combinations for every circumstance is difficult, and only a trained and experienced wildlife veterinarian knows how and why the requisite doses should be administered to bring the animal down. Expert opinion and guidance are helpful in planning the operation, and permission from the competent authorities is essential. The operational team must be well equipped and include a sufficient number of wildlife veterinarians to ensure a successful operation and to minimise the chance of errors:
Every step in handling and in administering drugs to big cats must be introduced slowly and carefully to avoid flight-or-fear reactions and stress.
Use appropriate bait prior to immobilisation/restraint of the targeted animal.
For captive animals, the physical presence of the caretaker is a must during handling or restraining.
Blindfolds must be used, and excessive noise and handling stress should be minimised.
Excessive or rough handling of the restrained animal can lead to hyperthermia and capture myopathy, particularly in warm or hot climates; such incidents must be avoided.
Capture and restraint of pregnant animals should be avoided.
The sedated big cat must be kept in a normal sitting posture so that its breathing is not compromised.
Handlers should protect themselves against possible injuries, exposure to drugs or chemicals, animal excretions, etc., to avoid zoonotic infections.
Study of emerging disease threats: Infectious diseases such as feline panleukopenia, canine distemper, feline viral rhinotracheitis, feline calicivirus infection, feline infectious peritonitis, feline immunodeficiency virus infection, ehrlichiosis, trypanosomiasis, babesiosis, paragonimiasis and gnathostomiasis can be fatal; their prevention and control are therefore important both in captive and in free-ranging populations. Laboratory diagnostics are equally important, so appropriate samples must be collected in the proper preservatives for diagnosis.
Post-mortem examination: A proverb common among pathologists holds that 'the carcass never tells lies, provided the expert knows its language'. Correlated post-mortem changes can reveal the past history and circumstances of death through interpretation of the lesions present on the external surface and internal organs (Figure 4). On the basis of the lesions, a tentative diagnosis may be ascertained and communicated to the competent authority for appropriate action.
Study of the human and wild animal interface: Intermixing of humans and domestic animals with wild animals may transmit zoonotic diseases, particularly tuberculosis, which poses a threat to the health of wild animals. The interface is also responsible for human-wildlife conflict, resulting in poaching, poisoning and electrocution of problematic or distressed wild animals. There is therefore a need for a disease surveillance plan along with scientific strategies to overcome human-wildlife conflicts.
The authors are thankful to the Hon'ble Vice-Chancellor, Nanaji Deshmukh Veterinary Science University, Jabalpur, for the permission and moral support to act as editors of the book, and to the Principal Chief Conservator of Forests (Wildlife), Government of M.P., for kind support in scientific wildlife health management and for recognition of the School of Wildlife Forensic and Health.
Attributions are inferences generated by people when they try to explain reasons for events, the behavior of others, and their own behavior. Attributions may be internal (dispositional), based on something within a person, or external (situational), based on something outside a person. A student who wins an art contest may decide it is because of ability (internal attribution) or because the judges are friends of her or his parents (external attribution). The tendency to overuse internal attributions (such as blaming an adolescent driver rather than road conditions for a car accident) is called the fundamental attribution error. Another type of attribution error, called self‐serving bias, is described as the predisposition to attribute successes to abilities and efforts and failures to external, situational causes. Bernard Weiner, in his study of attributions made concerning success or failure, suggested that both internal and external attributions may be based on stability (that is, an internal factor may be deemed either stable or unstable) and controllability (the factor may be deemed either controllable or uncontrollable). |
Page Act of 1875
The Page Act of 1875 (Sect. 141, 18 Stat. 477, 1873-March 1875) was the first restrictive federal immigration law and prohibited the entry of immigrants considered "undesirable." The law classified as "undesirable" any individual from Asia who was coming to America to be a forced laborer, any Asian woman who would engage in prostitution, and all people considered to be convicts in their own country.
The law was named after its sponsor, Representative Horace F. Page, a Republican who introduced it to "end the danger of cheap Chinese labor and immoral Chinese women". The Page Act was supposed to strengthen the ban against “coolie” laborers, by imposing a fine of up to $2,000 and maximum jail sentence of one year upon anyone who tried to bring a person from China, Japan, or any Asian country to the United States “without their free and voluntary consent, for the purpose of holding them to a term of service”. However, these provisions, as well as those regarding convicts “had little effect at the time”. On the other hand, the bar on female Asian immigrants was heavily enforced and proved to be a barrier for all Asian women trying to immigrate, especially Chinese.
Factors that influenced the creation of the Page Act
The first Chinese immigrants to the United States were overwhelmingly males, the majority of whom began arriving in 1848 as a part of the California Gold Rush. They intended to make money in the United States and then return to their country, so even though more than half of them had wives and families, those families stayed in China. However, anti-Chinese sentiment could already be found in discriminatory laws in 1852 that limited Chinese possibilities. The California State Legislature assumed that Chinese men were forced to work under long-term service contracts, when in reality immigrants to America were not coolies, but borrowed money from brokers for their trip and paid the money back plus interest through work at their first job. Without enough money to send for their wives, a prostitution industry developed in the male Chinese immigrant community and became a serious issue to white Americans living in San Francisco. Laws specifically directed at Chinese women immigrants were created even though prostitution was fairly common in the American West among many nationalities. Many of those in favor of Chinese exclusion were not worried about the experiences and needs of poor Chinese girls that were being sold or tricked into prostitution, but about “the fate of white men, white families, and a nation constructed as white”. Chinese men hurt white men’s ability to earn money, “while Chinese women caused disease and immorality among white men”. Both Chinese male “coolies” and Chinese female prostitutes were linked to slavery, which added to the American animosity toward them since slavery and involuntary servitude had been abolished in 1865. Male laborers were central to the anti-Chinese movement, so one might expect lawmakers to focus on excluding men from immigration, but instead they concentrated on women in order to protect the American system of monogamous marriages. As a result, the number of immigrants (majority male) entering the U.S. from China during the Page Act’s enforcement “exceeded the total for any other seven year period, before passage of the Exclusion Act in 1882, by at least thirteen thousand,” but the female population dropped from 6.4 percent in 1870 to 4.6 percent in 1880.
Furthermore, the American Medical Association believed that Chinese immigrants “carried distinct germs to which they were immune, but from which whites would die if exposed”. This fear became concentrated on Chinese women, because some white Americans believed that germs and disease could most easily be transmitted to white men through sexual labor of Chinese prostitutes. Additionally, during difficult times in China, women and girls were sold into “domestic service, concubinage, or prostitution”. Some Chinese men had a wife as well as a concubine, usually a lower class woman obtained through purchase and recognized as a legal member of the family. A woman’s status depended on her sexual relationship with Chinese men; “first wives enjoyed the highest status, followed by second wives and concubines, followed in turn by several classes of prostitutes”. An additional concern was that the children of Chinese couples would become U.S. citizens under the Fourteenth Amendment and their cultural practices would become a part of American democracy. As a result, the Page Law responded to “what were believed to be serious threats to white values, lives, and futures". California state laws could not exclude women for being Chinese, so they were crafted as regulations of public morals, yet the laws were still struck down as “impermissible encroachment on federal immigration power". However, the Page Law sailed through Congress without any expressed concerns of having a federal law that racially restricted immigration or violated the Burlingame Treaty of 1868 (which allowed free migration and emigration of Chinese) because Americans were focused on protecting the social ideals of marriage and morality.
The American consul in Hong Kong from 1875–1877, David H. Bailey, was put in charge of regulating which Chinese women were actual wives of laborers, allowed to travel to the United States, as opposed to prostitutes. Bailey set up support for the process with the British colonial authorities and the Tung Wah Hospital Committee, an “association of the most prominent Chinese businessmen in Hong Kong". Before a Chinese woman could immigrate to the United States she had to submit “an official declaration of purpose in emigration and personal morality statement, accompanied by an application for clearance and a fee to the American Consul". The declaration was then sent to the Tung Wah Hospital Committee who would do a careful examination and then report back to Bailey about the character of each woman. Also, a list of the potential emigrants was sent to the British Colonial government in Hong Kong for investigation. In addition, the day before a ship sailed to America, Chinese women reported to the American consul for a series of questioning which included the following questions:
Have you entered into contract or agreement with any person or persons whomsoever, for a term of service, within the United States for lewd and immoral purposes? Do you wish of your own free and voluntary will to go to the United States? Do you go to the United States for the purposes of prostitution? Are you married or single? What are you going to the United States for? What is to be your occupation there? Have you lived in a house of prostitution in Hong Kong, Macao, or China? Have you engaged in prostitution in either of the above places? Are you a virtuous woman? Do you intend to live a virtuous life in the United States? Do you know that you are at liberty now to go to the United States, or remain in your own country, and that you cannot be forced to go away from your home?
The Chinese women who “passed” these questions according to the American consul were then sent to be interrogated by the harbor master of the British colonial government. He would ask the women the same questions in an effort to catch liars, but if the women were approved they were then allowed to board the steamer to America. Once on board the ship, the women were questioned again. In the first year that Bailey was assigned to differentiate wives from prostitutes, he did not yet have the assistance of the Tung Wah Hospital Committee, and 173 women were allowed to sail to California; disappointed with that figure, he granted only 77 women passage in 1877. In 1878, under the authority of American consul Sheldon Loring, 354 women arrived in the U.S., a substantial number compared to John S. Mosby’s grant of passage to fewer than 200 women from 1879 to 1882. Upon their arrival in San Francisco, Colonel Bee, the American consul for the Chinese, would examine the documents, which included photographs of each woman, and ask her the same questions she had heard in Hong Kong. If women changed their answers to the questions, did not match their pictures, or had incomplete paperwork, they could be detained and sent back to Hong Kong. As a result, from 1875 to 1882 at least one hundred and possibly several hundred women were returned to China. The entire process was “shaped by the larger, explicitly racist assumption” that Chinese women, like Chinese men, were dishonest.
Photographs were used as a means to identify the Chinese women through each stage of the examination process in order to ensure that unqualified women would not be substituted for a woman who was properly questioned at any point in time. Chinese women were subject to this method of identification prior to any other immigrant group because of the "threat of their sexuality to the United States." In addition to all the questioning that took place in regard to a woman’s character, there were also in depth questions about Chinese women’s fathers and husbands. Therefore, these women were subject not only to racism but also to sexist and classist beliefs because officials “accepted that male intentions and actions were more likely to determine a woman’s sexual future than her own actions and intentions”. Chinese women had to demonstrate that they grew up in respectable families and that their husbands could afford to support them in the United States. Also, “the appearance of the body and clothing supposedly offered a range of possible clues about inner character, on which some officials drew when trying to differentiate prostitutes from real wives." Bodily clues used to examine Chinese women included bound feet, “prettiness, youth, demeanor,” and how they walked. However, the task of differentiating "real" wives from prostitutes was virtually impossible. Men, on the other hand, faced more lenient restriction practices and were not required to "carry photographs, nor to match photographs that had been sent in advance to San Francisco Port authorities."
Effects on Chinese families and future immigrants to the U.S.
Most Chinese women that immigrated to the U.S. in the 1860s and the 1870s were "second wives, concubines in polygamous marriages, or prostitutes," but Americans were wrong to believe that all Chinese women worked as prostitutes. Enforcement of the Page Act resulted not only in the reduction of prostitutes but also the “virtually complete exclusion of Chinese women from the United States”. In 1882 alone, during the few months before the enactment of the Chinese Exclusion Act of 1882 and the beginning of its enforcement, 39,579 Chinese entered the U.S., and only 136 of them were women. Therefore, Chinese were unable to create families within the U.S. The Page Act was so successful in preventing Chinese women from immigration and consequently keeping the ratio of females to males low that the law "paradoxically encouraged the very vice it purported to be fighting: prostitution." Not until after World War II was an appropriate gender balance established, because between 1946 and 1952 almost 90 percent of all Chinese immigrants were women.
The sojourner mentality of the Chinese, as well as the financial cost of the trip, limited the number of wives who chose to immigrate; however, documents relating to the enforcement of the Page Act suggest that some women were able to overcome the barriers and join their husbands, and without this law the numbers might have been much higher. According to historian George Peffer, “all the evidence suggests that the women who survived this ordeal were most likely the wives of Chinese laborers” because they would have possessed the determination needed to endure the questioning, while importers of prostitutes “might have been reluctant to risk prosecution”. Yet this is difficult to prove conclusively, especially since Peffer himself noted that the cost of immigration, as well as possible bribes paid to American consuls, would have created a greater hardship for the “wives of immigrants who possessed limited resources, than for the wealthy tongs” who sent prostitutes to the U.S. Therefore, although the Chinese Exclusion Act was extremely important in transforming the Chinese into a “declining immigrant group, it was the Page Law that exacerbated the problem of life without families in America’s Chinatowns”. Moreover, the Page Act created the policing of immigrants around sexuality, which “gradually became extended to every immigrant who sought to enter America,” and today remains a central feature of immigration restriction.
- Kerry Abrams, “Polygamy, Prostitution, and the Federalization of Immigration Law,” Columbia Law Review 105.3 (Apr. 2005): 641-716.
- George Anthony Peffer, “Forbidden Families: Emigration Experiences of Chinese Women Under the Page Law, 1875-1882,” Journal of American Ethnic History 6.1 (Fall 1986): 28-46.
- An Act Supplementary to the Acts in Relation to Immigration (Page Law) sect. 141, 18 Stat. 477 (1873-March 1875).
- Eithne Luibheid, Entry Denied: Controlling Sexuality at the Border (University of Minnesota Press, 2002).
- Full text of 1875 Page Law, via San Diego State University Department of Political Science
- Guide to Internet Resources on Racism, Race, and American Law |
The birth of a child is typically one of the happiest moments in a family’s life. But for mothers and fathers infected with the Zika virus, pregnancy can be particularly stressful, as they wait to see if their child will suffer the sometimes devastating consequences of infection.
Mosquitoes carrying the Zika virus are in the United States and infections are on the rise. Microcephaly appears to be the most devastating consequence, and little is known about how to stop it.
Scientists at Texas Biomed have begun several projects aimed at understanding the Zika virus and its impact on newborns.
Discovered in Uganda in 1947, Zika virus has been impacting lives for more than half a century. While Africans have built up an apparent immunity, the Western Hemisphere was left relatively unscathed until near the end of 2013 when cases of Zika virus and its most dangerous known consequence of infection, microcephaly, were reported in Brazil.
Texas Biomed scientists are leading efforts to figure out how the virus works and how best to test therapeutic strategies.
They hope to:
1. Develop an animal model to determine a timeline for infection and answer key questions: How long will Zika last in the body? When will it be most problematic for pregnant women? How long do men have to wait before having sexual contact?
2. When vaccines and therapies are being developed, test them in animal models that mimic human immune and gestational systems to make sure that the vaccine offers protection against the various strains of the disease and does no harm to mothers or unborn children.
Applying the expertise of Texas Biomed scientists in virology, immunology, genetics, and pregnancy in several different nonhuman primate models will help lead to a better understanding of how Zika virus impacts fetal development. Texas Biomed scientists believe that different nonhuman primate models for this disease have the potential to reveal unique consequences of Zika virus infection.
Zika virus is a long-term health issue for the United States, and it is imperative we know more. |
Bet your knowledge and answer
Syphilis is a very serious sexually transmitted bacterial infection caused by the spirochete Treponema pallidum subspecies pallidum. Typically, about three weeks pass before the first symptoms occur: first a sore (usually painless) on the genitals and, if the infection is left untreated, much later insanity and death. The bacteria cannot pass through intact skin; thus, you cannot get infected through e.g. holding hands. The infection was initially known as the 'French disease', but research suggests that when Christopher Columbus's expedition returned to Europe in 1493, it brought back tobacco and cocoa but also syphilis.
The bore of a wind instrument is its interior chamber, which defines a flow path through which air travels and is set into vibration to produce sound. The term is used both for instruments made of wood and instruments made of metal, though only in the case of wood instruments is the bore typically produced by boring. The shape of the bore has a strong influence on the instrument's timbre.

The cone and the cylinder represent two musically useful idealized shapes for the bore of a wind instrument. As discussed below, these shapes affect the harmonics associated with the timbre of the instrument. For example, the conical bore is associated with a timbre that corresponds to a generally triangular waveform, which is rich in both even and odd order harmonics. The cylindrical bore corresponds to a generally square waveform, which is rich in odd harmonics.
The diameter of a cylindrical bore remains constant along its length. The acoustic behavior depends on whether the instrument is stopped (closed at one end and open at the other) or open (at both ends). For an open pipe, the wavelength produced by the first normal mode (the fundamental note) is approximately twice the length of the pipe. The wavelength produced by the second normal mode is half that, that is, the length of the pipe, so its pitch is an octave higher; thus an open cylindrical bore instrument overblows at the octave. This corresponds to the second harmonic, and generally the harmonic spectrum of an open cylindrical bore instrument is strong in both even and odd harmonics. For a stopped pipe, the wavelength produced by the first normal mode is approximately four times the length of the pipe. The wavelength produced by the second normal mode is one third that, i.e. 4/3 the length of the pipe, so its pitch is a twelfth higher; a stopped cylindrical bore instrument overblows at the twelfth. This corresponds to the third harmonic; generally the harmonic spectrum of a stopped cylindrical bore instrument, particularly in its bottom register, is strong in the odd harmonics only.
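These mode relations can be checked numerically. The following is a minimal sketch in Python (not part of the original article) of the idealized normal-mode frequencies of open and stopped cylindrical pipes, ignoring end corrections; the speed of sound and pipe length are illustrative assumptions.

```python
# Idealized normal modes of cylindrical pipes, ignoring end corrections.
# Open pipe: wavelengths 2L/n for n = 1, 2, 3, ...
# Stopped pipe: only odd modes, wavelengths 4L/n for n = 1, 3, 5, ...

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed value)

def open_pipe_frequencies(length_m, modes=3):
    """First few normal-mode frequencies (Hz) of an open cylindrical pipe."""
    return [SPEED_OF_SOUND * n / (2.0 * length_m) for n in range(1, modes + 1)]

def stopped_pipe_frequencies(length_m, modes=3):
    """First few normal-mode frequencies (Hz) of a stopped cylindrical pipe."""
    return [SPEED_OF_SOUND * n / (4.0 * length_m) for n in range(1, 2 * modes, 2)]

L = 0.5  # hypothetical pipe length in metres
open_f = open_pipe_frequencies(L)
stopped_f = stopped_pipe_frequencies(L)

# Open pipe: the second mode is twice the first, so it overblows at the octave.
assert abs(open_f[1] / open_f[0] - 2.0) < 1e-9
# Stopped pipe: the next mode is three times the first, an octave plus a fifth (a twelfth).
assert abs(stopped_f[1] / stopped_f[0] - 3.0) < 1e-9
```

Note that halving the pipe length doubles every mode frequency, which is why the same ratios hold regardless of the length chosen here.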
Instruments having a cylindrical, or mostly cylindrical, bore include the clarinet and the flute.
The diameter of a conical bore varies linearly with distance from the end of the instrument. A complete conical bore would begin at zero diameter, the cone's vertex. However, actual instrument bores approximate a frustum of a cone. The wavelength produced by the first normal mode is approximately twice the length of the cone measured from the vertex. The wavelength produced by the second normal mode is half that, that is, the length of the cone, so its pitch is an octave higher. Therefore, a conical bore instrument, like one with an open cylindrical bore, overblows at the octave and generally has a harmonic spectrum strong in both even and odd harmonics.
Instruments having a conical, or approximately conical, bore include the oboe, the bassoon and the saxophone.
Bores of real-world woodwind instruments overall may approximate a cone or a cylinder. However, portions of the bores may deviate from these idealized shapes. For example, though oboes and oboes d'amore are similarly pitched, they have differently shaped terminal bells. Accordingly, the voice of the oboe is described as "piercing" as compared to the more "full" voice of the oboe d'amore.
Although the bore shape of woodwind instruments generally determines their timbre, the instruments' exterior geometry typically has little effect on their voice. In addition, the exterior shape of woodwind instruments may not overtly match the shape of their bores. For example, while oboes and clarinets may outwardly appear similar, oboes have a conical bore while clarinets have a cylindrical bore.
Brass instruments also are sometimes categorized as conical or cylindrical, though most in fact have cylindrical sections between a conical section (the mouthpiece taper) and a non-conical, non-cylindrical flaring section (the bell). Benade gives the following typical proportions for the horn:

| Section | Proportion of tube length |
| --- | --- |
| Mouthpiece taper | 11% |
| Cylindrical part | 61% |
| Flaring part (bell) | 28% |
To complicate matters these proportions vary as valves or slides are operated; the above numbers are for instruments with the valves open or the slide fully in. Therefore the normal mode frequencies of brass instruments do not correspond to integer multiples of the first mode. However, players of brasses (in contrast to woodwinds) are able to "lip" notes up or down substantially, and to make use of certain privileged frequencies in addition to those of the normal modes, to obtain in-tune notes.
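As a rough illustration of what such proportions mean in practice, the sketch below splits a hypothetical total tube length into the three sections. The 3.8 m total and the "flaring part" label are assumptions for illustration, not figures from Benade.

```python
# Splitting a hypothetical total tube length according to the quoted horn
# proportions (valves open). The 3.8 m total is an assumed, illustrative value.
proportions = {
    "mouthpiece taper": 0.11,
    "cylindrical part": 0.61,
    "flaring part": 0.28,
}

def section_lengths(total_length_m, fractions):
    """Return each bore section's length given its fraction of the total tube."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must cover the whole tube
    return {name: frac * total_length_m for name, frac in fractions.items()}

lengths = section_lengths(3.8, proportions)
# Even in a nominally "conical" brass instrument, the cylindrical portion dominates.
assert lengths["cylindrical part"] > lengths["mouthpiece taper"] + lengths["flaring part"]
```

The final assertion makes the article's point concrete: the cylindrical section alone exceeds the conical and flaring sections combined.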
Cry Baby /a/ "Aaaaaaa!"
By: Jamie Storey
This lesson teaches children about the /a/ correspondence, and how to read /a/ by associating it with a crying baby. In this lesson students will learn to recognize, spell, and read words with the /a/ sound. They will also be able to recognize /a/ in spoken words by learning a meaningful representation (a baby crying) and the letter symbol A. Students will spell and read words containing /a/ in a Letterbox lesson, along with reading a decodable book that focuses on the /a/ sound.
1. Graphic image of a crying baby
2. Cover-up critter
3. Letter boxes
4. Primary paper
5. Letter tiles (a,t,h,m,p,d,r,g,c,l,s,s,f,b,n)
6. Chart paper with tongue twister written on it ("Andrew and Alice asked if Annie's animals were agitated.")
7. White board and overhead/document cam
8. Worksheet (listed at the bottom of the page)
9. Books for each student (A Cat Nap)
10. List of spelling words on chart paper or whiteboard for the students to read (lap, mat, Sam, gas, bank, glass, crab, splat)
1. Say: "In order to become expert readers we need to learn the code that tells us how to pronounce words. All letters make different sounds as we move our mouths a certain way. Today, we are going to learn about the /a/ sound. When I say /a/, I think of a baby crying, "Ahhhhhhh." (Show sound picture card).
2. Say: "The letter we are going to learn about today is a." (Have students take out primary paper and pencil). "Let's practice writing a on our primary paper." (Model on the board how to write the letter a). "Start at the fence line and make a curved line down until you touch the sidewalk, but don't stop here. Continue the curve around until you end up where you started. Then draw a straight line back down, and stop on the sidewalk." (As students practice drawing a row of a's, walk around the room observing and checking if they are correctly writing the letter a).
3. Say: "Now I am going to teach each of you a tongue tickler that will help you remember the sound that /a/ makes: 'Andrew and Alice asked if Annie's animals were agitated.'" (I will briefly review what the word agitated means before we say the tongue tickler together). "Let's say it together! Now let's say it again, and if you hear the /a/ sound in a word, I want you to raise your hand. (Repeat tongue twister). Now I want each of you to stretch out the /a/ sound. (Ex: Aaaaaandrew aaaand Aaaaalice…) Good job!"

4. (Have students take out their letterboxes and letters). Say: "We are going to use what we just learned about the letter a to spell words. I will call out a word and you can spell it using the letterboxes. Before each word I call out, I will tell you how many boxes to use. Each sound or mouth move in the word will go in a box. For example, the word I am going to spell is hat. I will use three boxes (draw three boxes on the board), because it has three sounds. The first sound I hear is /h/. I will place the letter h in the first box (model on board). Now it might help to say the word again to yourself: hat. The second sound I hear is /a/. We just learned the letter a stands for /a/, so I will place the a in the second box (model on the board). The last sound I hear is /t/. I will place the t in the third box (model on board). I spelled the word hat. Now you try." The words I will call out are: lap, mat, Sam, gas (3); bank, glass, crab (4); and splat (5). After the students spell a word, the class will spell the word together and I will write it on the board. After writing the words on the board, the class will read each word together.

5. Now I will divide the students into partners and give each pair a copy of the book A Cat Nap. Say: "This is a story about a boy named Paul who loved to play with his cat Maggie each morning. One morning Maggie was not there, so Paul searched everywhere for Maggie. He searched his house and even the neighborhood. Where could Maggie have gone? Will he ever find her? We must read to find out!"

I will ask each pair to take turns reading a page to each other. (Remind students that if they get stuck while reading, there are things they can do to help themselves). Say: "First, try to read the word by covering parts of it up with your cover-up critter. Then read the sentence all the way through. Think about whether the sentence makes sense, and change any words that do not make sense. After you are finished correcting, always reread the sentence one time through with the corrections that you made. I will be walking around to help you if you need it."

6. To end the lesson, I will read the story to the students and we will discuss it as we read. The students will reflect on the story. In the next lesson we will reread this book as a familiar text.

7. To assess the students, they will each be given a worksheet where they are asked to circle the pictures of objects that contain the a = /a/ sound. The students must say the name of each picture aloud, and then circle the pictures in which they hear /a/.
"Aaaaaaaaa!" The baby cried. By: Ashley Farrow
"Aaaaaaa," Cries the Baby: Janie Colvin
Book: A Cat Nap. Educational Insights, 1990.
Triacylglycerols (fats, triglycerides) constitute 90% of dietary lipid, and are the major form of energy storage in humans. The structure of triglycerides has been shown before (See Figure 10.2). Oxidative metabolism of fats yields more than twice the energy as an equal weight of dry carbohydrate or protein. Remember that fats are insoluble in water. Digestive enzymes, however, are water soluble. Digestion of fats must therefore take place at the interface where fat meets water. Obviously more digestion can occur if more surface area is exposed. Two things act in this regard to aid fat digestion - 1) motion of the intestine (See Figure 18.3 ) and 2) bile acids (secreted by the liver), which act as "digestive detergents" to emulsify fats (See Figure 18.4 ).
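The "more than twice the energy" claim can be illustrated with the standard Atwater energy densities (roughly 9 kcal/g for fat versus 4 kcal/g for carbohydrate and protein). The sketch below is a minimal check using these textbook values, which are assumptions not stated in this passage.

```python
# Approximate Atwater energy densities in kcal per gram; standard textbook
# values, used here as assumptions to illustrate the "more than twice" claim.
ENERGY_KCAL_PER_G = {"fat": 9.0, "carbohydrate": 4.0, "protein": 4.0}

def energy_kcal(grams, nutrient):
    """Approximate metabolizable energy of a given mass of a nutrient."""
    return grams * ENERGY_KCAL_PER_G[nutrient]

fat_kcal = energy_kcal(10, "fat")            # 10 g of fat
carb_kcal = energy_kcal(10, "carbohydrate")  # 10 g of dry carbohydrate

# Oxidizing fat yields more than twice the energy of an equal dry weight of carbohydrate.
assert fat_kcal > 2 * carb_kcal
```

With these values the ratio is 9/4 = 2.25, consistent with the "more than twice" figure in the text.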
Figure 18.1 depicts lipid metabolism relative to the rest of intermediary metabolism.
Triacylglycerols in the body are derived primarily from:
Pancreatic Lipase acts to hydrolyze triacylglycerols at positions 1 (last point of hydrolysis) and 3 (first point of hydrolysis). The products of this reaction are Na+ and K+ salts of fatty acids (also known as soaps). Soaps, of course, help emulsify fats. Further, the activity of pancreatic lipase actually increases when it contacts the lipid-water interface (interfacial activation). Binding of pancreatic lipase to the lipid-water interface requires a complex with another factor, pancreatic colipase. In the absence of substrate (lipids), the enzyme complex's active site is buried by a lid-like structure. In the presence of lipid, the lid opens, exposing the active site along with a hydrophobic channel leading into it.
Remember that a micelle is formed when soaps surround a non-polar substance in water. Lipid digestion by pancreatic lipases generates mono and diacylglycerols that are absorbed in the small intestine. Bile acids aid in this process too, forming micelles. Blocked bile ducts inhibit absorption of fats considerably. Remember also that vitamins A,D,E, and K are fat soluble, and their absorption too is dependent on bile acids. Once inside the intestinal cells, fatty acids complex with a protein in the cytoplasm called intestinal fatty acid-binding protein (I-FABP) that increases their effective solubility and protects the cell from their detergent effects.
The mono- and diacylglycerols in the digestive cells are converted back to triglycerides and packaged into lipoproteins called chylomicrons (See Figure 18.3), which travel to the bloodstream via the lymph system. Triacylglycerols are also synthesized by the liver, where they are packaged as very low density lipoproteins (VLDLs) and released into the blood. Upon arrival in adipose tissue and muscle cells, lipoprotein lipase cleaves them to free fatty acids and glycerol. Fatty acids are taken up by these tissues, and glycerol is transported to the liver or kidneys, where it is converted to dihydroxyacetone phosphate (a glycolysis intermediate) by glycerol kinase (which adds the phosphate) and glycerol-3-phosphate dehydrogenase (which oxidizes it to DHAP).
In adipose tissue, hydrolysis of fats to fatty acids and glycerol is accomplished by hormone-sensitive triacylglycerol lipase. Free fatty acids are released there into the blood stream where they bind to albumin.
Lipoprotein complexes carry lipids in the bloodstream (See Figure 18.5). Very little free lipid can be detected in the blood. The protein components of the lipoproteins are synthesized in the liver and intestinal mucosa cells. Classes of lipoproteins and their properties are shown in Table 18.1. Lipoproteins form micelles with lipids as a mechanism of transporting them in the aqueous environment of the blood. The five categories of lipoproteins are summarized below:
Chylomicrons - assemble in intestinal mucosa, carry exogenous fats and cholesterol via lymph system to large body veins (See Figure 18.7). Chylomicrons adhere to inner surface of capillaries of skeletal muscle and adipose tissue (fat cells) (See Figure 18.6). Fats contained within them (but not cholesterol) are hydrolyzed by lipoprotein lipase, freeing fatty acids and monoacylglycerol. The remaining shrunken chylomicron structure is called a chylomicron remnant, which contains cholesterol and dissociates from the capillary endothelium and reenters the circulation system where it is taken up by the liver. Thus, chylomicrons deliver dietary fats to muscle and adipose tissue. They also carry dietary cholesterol to the liver.
VLDL - VLDLs are synthesized by the liver and, like chylomicrons, are degraded by lipoprotein lipase. VLDL, IDL, and LDL are interrelated. IDL and LDL appear in the circulation as VLDL remnants. VLDL is converted to LDL by removal of all proteins except apo B-100 and esterification of most of the cholesterol by lecithin-cholesterol acyl transferase (LCAT) associated with HDLs. The esterification occurs by transfer of a fatty acid from lecithin to cholesterol (forming lysolecithin).
The protein components of lipoproteins are called apolipoproteins (or apoproteins). These proteins, though water soluble, have a hydrophobic and a hydrophilic character that is apparent in their alpha helices. Their alpha helical regions are stabilized upon incorporation into lipoproteins. The lipids appear to stabilize the helices because they are composed of hydrophobic amino acids on one side of the helix (facing the lipid) and hydrophilic amino acids on the other (facing water). Nine apolipoproteins common in humans are summarized in Table 18.2.
Cholesterol reaches animal cell membranes either by external transfer or by cellular synthesis (See Figure 18.7). Exogenous cholesterol reaches cells from LDL. LDL binds to the cellular LDL receptor, a transmembrane glycoprotein that binds both apoB-100 and apoE. LDL receptors form clusters in "coated pits" (See Figure 18.9). Clathrin, the scaffolding protein of the coated vesicles that transfer proteins between the RER and the Golgi apparatus, forms the backing of the coated pits. The coated pits invaginate into the plasma membrane, forming coated vesicles that fuse with lysosomes. This process is known as receptor-mediated endocytosis (See Figure 18.8). Inside the lysosome, cholesteryl esters are hydrolyzed, yielding free cholesterol, which can be incorporated into cell membranes. Excess cholesterol is reesterified for storage (See Figure 18.10).
Interestingly, high intracellular cholesterol suppresses synthesis of LDL receptor and biosynthesis of cholesterol, two factors that prevent overaccumulation of cholesterol in cells.
HDL removes cholesterol from tissues and transports it to the liver. HDL is created mostly from components from other degraded lipoproteins. HDL converts cholesterol to cholesteryl esters by LCAT, an enzyme activated by apoA-I in HDL. HDL appears to get cholesterol to the liver 1) by transfer of the cholesteryl ester to VLDL which after degradation to IDL and LDL is taken to the liver and 2) by direct interactions between HDL and the liver via a specific HDL receptor. The liver disposes of cholesterol as bile acids. HDL is also called "good cholesterol" because it is associated with lowering cholesterol levels.
Development of atherosclerosis is strongly correlated with the level of plasma cholesterol. Individuals with familial hypercholesterolemia (FH) have high levels of LDL and plasma cholesterol levels 3-5 times higher than the average level. People homozygous for the disease often die from myocardial infarction at ages as early as 5. Cells taken from these people completely lack functional LDL receptors. The high levels of plasma LDL are due to 1) decreased degradation of LDL, since there are no receptors to take it up, and 2) increased production of LDL, because IDL that is not taken up by LDL receptors is instead converted to LDL. Tissues from FH homozygous individuals contain macrophages (a type of white blood cell) so full of cholesterol that they are called foam cells. Macrophages readily take up LDL that has been acetylated at Lys residues, which increases LDL's negative charge. A receptor on the macrophage called the scavenger receptor normally binds oxidized LDL as well as polyanionic molecules (molecules with many negative charges). LDL has many unsaturated fatty acids that are highly susceptible to chemical oxidation. Normally they are protected by antioxidants, but these become depleted when LDL is trapped within artery walls. When this happens, oxygen radicals oxidize the LDL fatty acids to aldehydes and oxides, which react with Lys residues much as acetylation does. Macrophage uptake of such modified LDL delivers cholesterol to the cells and leads, ultimately, to the formation of foam cells. Antioxidants prevent atherosclerosis in rabbits. Cigarette smoke also oxidizes LDL.
High plasma HDL levels are strongly correlated with a low incidence of cardiovascular disease. Cigarette smoking is inversely correlated with HDL concentrations, while exercise, alcohol, weight loss, and estrogens are linked to higher HDL levels. There is an inverse correlation in humans between the risk of atherosclerosis and the plasma level of apoA-I, the major protein component of HDL. Cholesterol ester transfer protein (CETP) mediates transfer of cholesteryl esters from HDL to VLDL and LDL. Animals that express CETP have higher cholesterol levels in their VLDL and LDL, and lower cholesterol levels in their HDL, than animals that do not express CETP. When animals that normally lack CETP were made transgenic for the protein, they developed atherosclerotic lesions more rapidly on a high-fat diet than non-transgenic animals fed the same diet.
A diet rich in saturated fatty acids is associated with high plasma cholesterol levels. Diets rich in polyunsaturated fatty acids produce decreased cholesterol levels. Consumption of fatty acids called n-3 (or omega-3) fatty acids, which are abundant in fish oils, dramatically lowers serum cholesterol and triglyceride levels.
Energy stored in fats is released when digestion of the fat begins. An enzyme catalyzing this initial stage is called Triacylglycerol Lipase (or hormone-sensitive lipase). Its action is shown in the figure on page 631. Fatty acids released by the enzyme enter the bloodstream and become bound to albumin. Glycerol released by fat digestion is exported to the liver. A part of the cAMP cascade is involved in activation of Triacylglycerol Lipase, as shown in Figure 18.11.
A new study by meteorologist Michael Lockwood of the University of Reading in the United Kingdom and colleagues suggests that space weather is going to worsen over the next 40 to 200 years, increasing the chance of radiation hazards for astronauts and frequent flyers.
Space weather is defined by the amount of energetic particles above the atmosphere: when the weather is bad, dangerous particles, including protons and ions known as galactic cosmic rays (GCRs) and similar particles from solar outbursts called solar energetic particles (SEPs), rain down on Earth at near light speed.
The amount of radiation the sun emits fluctuates over centuries and has the biggest impact on space weather: the more radiation the sun emits, the stronger its external magnetic field, which acts as a blanket shielding the solar system against GCRs.
By analyzing ancient ice cores, Lockwood and his colleagues were able to gather data on the variations in GCRs and SEPs reaching Earth over the past several hundred years. They found that during periods of low solar activity, more GCRs reached Earth and there were fewer SEP events. They also found, however, that while SEP events became fewer, their intensity increased, a pattern that occurred during "middling" solar activity, the phase the sun is currently entering.
Dangers are most likely to arise for frequent flyers who take more than five long flights a year, and for astronauts headed for the moon or Mars. Although some scientists state that the new predictions need to be tested, Lockwood believes that radiation levels will increase during the sun's middling transition; he cautions that the safe number of flights a year may drop to two, and that the amount of radiation astronauts are exposed to could double.
Our Middle School English program fosters the imagination of young writers while exposing them to traditional literary elements. Students practice the fundamentals of composition through pre-writing, drafting, revising, and publishing. The English department encourages students to develop both self-awareness and personal style through the writing process.
Required, sixth grade
First semester + third or fourth quarter
In this course, we build on the basic foundations of reading, writing, and language within a highly collaborative environment. Units are anchored in the exploration of a rich variety of texts including novels, graphic novels, short stories, poetry, and plays. We strengthen both our vocabulary and grammatical skills from the readings encountered throughout the year. We conclude each unit with a project-based assessment such as creating an original graphic novel, generating a public service announcement, and crafting a collection of poems, in order for the students to demonstrate their learning beyond traditional tests and quizzes. Additionally, writing remains a focus throughout the year. Together we learn to recognize and compose effective sentences and build effective narrative, expository, and persuasive paragraphs.
Required, seventh grade
English 7 exposes students to literature and cultures at home and abroad. The course trains students in the fundamental skills of writing a formal, thesis-based essay. Students are taught to think critically about the works they read and to note similarities between themselves and the characters within those works. In this way, the course encourages students to learn empathy and gain understanding of the characters and their experiences. Formal grammar study is also a part of the curriculum, and is centered on the study of verbs and all their various forms. Vocabulary study is developed from the texts used and the genres encountered in the course. Assessment comes in the form of group and long-term projects, as well as through more traditional essays and creative writing assignments. The book list may include (but is not limited to) The Giver, A River Lost, Of Mice and Men, Hotel on the Corner of Bitter and Sweet, Feed, and The Alchemist.
Required, eighth grade
English 8 incorporates vocabulary and grammar study, creative and expository writing, and the reading of classical and contemporary literature. Students study vocabulary taken from the texts used and the genres encountered in the course, learn about the voice and tense of verbs, and examine sentence structure via their study of phrases and clauses. In addition to practicing different modes of expository writing, students write poetry and short stories. The reading list may include (but is not limited to) titles such as Lord of the Flies, Akhenaten, Speak, Oedipus Rex, Persepolis and An Iliad. Numerous essays, poems, articles, and speeches supplement these longer readings.
Elective, seventh and eighth grades
The purpose of this class is to experience various types of creative writing from a wide range of established literary figures, understand structures and techniques used within the different forms of writing, and talk about and personally practice the craft. Students produce their own original short story, personal essay and collection of poetry.
Required, sixth grade
This course is a partnership between sixth grade English and Science to explore astronomy, the solar system, and methods of inquiry in a creative, artistic, and hands-on fashion. Through a variety of trips and projects, students explore outward from the Moon, to the Solar System, and then to some of the anomalies and the history of the universe itself. Students read, analyze, and write Science Fiction using their own learning and projects for inspiration. Field trips include the University of Washington Planetarium, the Challenger Learning Center at the Museum of Flight, and the Museum of Pop Culture.
Seventh - eighth grades
This course focuses on the conception, design, and production of a visual narrative piece. Students read examples of comics and graphic novels that bring together writing and visuals to tell their stories. Using these pieces as inspiration, students then develop their own characters and storylines, write and draw the piece, and produce an anthology series distributed at UPrep.
Within the past 50 years, eutrophication, the over-enrichment of water by nutrients such as nitrogen and phosphorus, has emerged as one of the leading causes of water quality impairment. The two most acute symptoms of eutrophication are hypoxia (oxygen depletion) and harmful algal blooms, which among other things can destroy aquatic life in affected areas.
The rise in eutrophic and hypoxic events has been attributed to the rapid increase in intensive agricultural practices, industrial activities, and population growth, which together have increased nitrogen and phosphorus flows in the environment. The Millennium Ecosystem Assessment (MA) found that human activities have resulted in the near doubling of nitrogen and tripling of phosphorus flows to the environment when compared to natural values.
Before nutrients—nitrogen in particular—are delivered to coastal ecosystems, they pass through a variety of terrestrial and freshwater ecosystems, causing other environmental problems such as freshwater quality impairments, acid rain, the formation of greenhouse gases, shifts in community food webs, and a loss of biodiversity.
Once nutrients reach coastal systems, they can trigger a number of responses within the ecosystem. The initial impacts of nutrient increases are the excessive growth of phytoplankton, microalgae (e.g., epiphytes and microphytes), and macroalgae (i.e., seaweed). These, in turn, can lead to other impacts such as loss of subaquatic vegetation, changes in species composition, coral reef damage, low dissolved oxygen, and the formation of dead zones (oxygen-depleted waters) that can lead to ecosystem collapse.
Administrators and teachers have long argued that students should not chew gum in school. For decades, it has been a standard school rule that no gum is allowed. These adults have argued that students don’t dispose of gum properly and chewing can be a distraction. Recently, however, studies have shown that gum chewing can help improve attention and focus, and when allowed, students properly dispose of gum. In fact, 65% of professional athletes report chewing gum before or during a game in order to relieve stress. Maybe it is time we re-examine our no-gum-chewing policy.
According to the National Institutes of Health, chewing gum can reduce anxiety and tiredness. Students who chew gum report feeling more relaxed and alert. And 65% of professional athletes report chewing gum before or during a game in order to relieve stress. Chewing gum reduces cortisol levels, which improves an individual's mood. Improved alertness and a more relaxed state improve memory and students' performance. Through his research, Dr. Kenneth Allen, a professor at New York University, has proven that chewing gum releases insulin, which improves brain function and memory. It's hard to argue against the improved attention and mood that chewing gum can create.
The biggest reason teachers and administrators argue against gum chewing is because they think it is rude, distracting, and messy. If gum were allowed in school, students wouldn’t feel the need to be sneaky and stick it on furniture. Students wouldn’t have to risk getting in trouble and instead would dispose of it properly. Many teachers have reported seeing a decrease in gum spreading on furniture when they started allowing students to have it in the classroom. Some teachers feel it is rude to chew gum while a student is presenting. A teacher can set rules and an expectation that students will spit out their gum when presenting or participating in a class discussion where they may have to speak often. In order to curb the loud gum chewing or blowing bubbles that might cause a distraction, teachers would just need to set rules regarding the proper way to chew gum. Most students will choose to follow the rules rather than risk losing the privilege of chewing gum.
Given the power that chewing gum has over memory and attention, it seems illogical that we don't allow it in school. The arguments against gum chewing can all be addressed with a few classroom rules, which is how teachers set limits and boundaries for students in a number of areas. The academic and mood benefits should outweigh the slight risk that a student might cause a distraction by blowing a bubble in class.
Many young people with reading, writing and communication difficulties do not realise their full potential. Even if they are very bright they may have difficulty accessing the National Curriculum. This inevitably undermines their motivation and self-esteem.
Research shows that audiobooks allow the listener to retain their visualisation and picture-making skills. A reader who struggles to ‘decode’ the words will have difficulty absorbing their meaning on a first reading and is therefore much less likely to be able to visualise – so comprehension, memory and, of course, enjoyment, all suffer. The listener, however, has not only the advantage of being able to visualise as they listen, their understanding is also helped by the tone of voice, accent, emphasis and timing given to the text by the professional reader.
'When someone else is reading I can understand it more... When we have tests on the book, I can definitely tell about the book because I have listened to the [audio]. I think you get the picture more by listening to the [audio] because of what goes into your head.'
If children do not read much, they miss out on vital language resources and their written output fails to reflect their ability. Listening to books in audio form they acquire not only a whole new range of experience, but a vocabulary beyond their own reading level and everyday conversation. Their horizons expand, they absorb the structure and conventions of storytelling and develop much greater confidence to communicate both orally and on paper, which has enormous benefits to their writing.
'They understand what a chapter is now. The structure of listening has helped. They understand paragraphs because when they have the text in front of them they can see and hear from the [audiobook] that there is a new idea. They have learnt about speech marks, because they hear different voices.'
When they discover the excitement of books through listening, pupils want to read more rather than less. If they follow the text while listening, their word recognition and reading speed improve.
'I used not to read because I thought it was boring. As soon as I’ve [listened to a book], I thought I would go to the library and get a book and, if it is good, I’ll read it.'
Good listening skills are essential for effective learning in all areas of the curriculum and will help pupils with their school work. Audiobooks improve concentration and engage pupils with their studies helping them to achieve at a higher level across the curriculum.
'Listening [to audio] has made him more inclined to listen in general.'
Audiobooks enable children to develop vital literacy skills in an enjoyable way, they restore confidence and self-esteem, and create a situation in which pupils can achieve success.
With thanks to Evelyn Carpenter who was commissioned by Listening Books to evaluate our 3 year Sound Learning pilot project and subsequently wrote the report 'Sound Learning: an Evaluation'.
Converting Forest to Agriculture
Habitat fragmentation is defined as the breaking apart of continuous habitat into distinct pieces, and can be understood in terms of three interrelated processes: a reduction in the total amount of original vegetation, subdivision of the remaining vegetation into fragments, and the introduction of new forms of land use to replace the lost vegetation, usually in the form of agriculture (Bennett & Saunders, 2010).
Forests and other natural habitats have been converted for agricultural use for as long as humans have walked the earth, and while most of the focus recently has been on the conversion of tropical rainforests to agricultural plantations, landscapes all throughout the world are still being converted, and in the developed world, natural landscapes are a shadow of their former selves.
Of the 16 million km2 of tropical rainforest that once existed, only around 9 million km2 remains today, with forests in South East Asia disappearing most rapidly. Tropical dry forests along the Central American Pacific coast now cover just 1% of their former extent, and from 1990 to 2000, over 1% of all mangroves were lost annually. By 1990, more than two-thirds of Mediterranean forests and woodlands had been lost, mainly through conversion to agriculture, and in the eastern USA and Europe, old-growth broadleaf forests have nearly disappeared. 10-20% of the world's grasslands have been destroyed for agriculture, and in South America, more than half of the biologically rich cerrado savannas, which formerly spanned over 2 million km2, have been converted into soy fields and cattle pastures in recent decades (Laurance, 2010). It is estimated that over the past 3 centuries, the global extent of cropland has risen from around 2.7 to 15 million km2 (Laurance, 2010).
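A quick calculation makes the scale of these changes concrete; the numbers below are taken directly from the figures quoted in this paragraph:

```python
# Habitat-loss fractions computed from the figures quoted above.
def fraction_lost(original, remaining):
    """Fraction of original habitat area that has been lost."""
    return (original - remaining) / original

# Tropical rainforest: 16 million km2 originally, ~9 million km2 today.
rainforest_lost = fraction_lost(16.0, 9.0)
print(f"{rainforest_lost:.0%}")  # ~44% of tropical rainforest lost

# Cropland expansion over 3 centuries: 2.7 -> 15 million km2.
cropland_growth = 15.0 / 2.7
print(f"{cropland_growth:.1f}x")  # roughly a 5.6-fold increase
```

In other words, nearly half of the original tropical rainforest is gone, while cropland has expanded more than five-fold over the same broad period.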
The conversion of tropical rainforests for agricultural purposes throughout Indonesia and Malaysia has been immense. Today, approximately 45% of Indonesia's workers are engaged in agriculture, with 31 million hectares of land under cultivation and 35-40% of that land devoted to the production of export crops. In Indonesia, there are three main types of agricultural farming: smallholder farming, smallholder cash cropping, and about 1,800 large foreign-owned or privately owned estates. Small-scale farming is usually carried out in modest plots and usually focuses on the cultivation of rice for subsistence, with vegetables and fruit also grown. These products are also grown as cash crops for export, along with rubber, which makes up around 20% of all cash crop exports. Of estate-grown crops, rubber, tobacco, sugar, palm oil, hard fiber, coffee, tea, cocoa and cinchona are the most important (Encyclopedia of the Nations, 2011).
The conversion of forest to agriculture involves the chopping down of trees, and the corresponding loss of biodiversity. Although very few animal species can live in any type of plantation, if managed well, plantations can still retain some of the ecosystem functions of tropical rainforests.
In their undisturbed state, tropical rainforests have a virtually closed canopy, support an immense diversity of plant and animal species, and have a forest floor covered in a thin layer of leaf litter underlain by a highly permeable topsoil, a combination that gives them one of the lowest surface erosion rates of any form of land use (Critchley & Bruijnzeel, 1996). Tropical forests also produce an extraordinary amount of plant biomass, sustained by the compact nutrient cycle of these ecosystems: plant nutrients entering the forest through rain, dust and aerosols are cycled continuously between the canopy and the soil, with only small amounts leaking out of the system (Critchley & Bruijnzeel, 1996). This delicately balanced cycle is disturbed when trees are cut down.
Of all the methods of clearing trees, manual clearing is the least damaging to the soil. However, it is a slow and expensive method, particularly when large areas of forest need to be cleared. Instead, most plantations, particularly those owned by large corporations, use heavy machinery, often fitted with root rakes, which are used to uproot tree stumps. Once the timber has been extracted, the forest debris is often set alight, a cheaper and easier way of clearing whatever vegetation is left. After clearing, the land is planted with crops.
The soil quality and productivity of plantations depends heavily on both the methods used, and the crop being cultivated. Tea, for example, is usually grown in areas with year round abundant rainfall, and is often cultivated in areas of high altitude, where terraces will be constructed before planting. Tea plantations can last for several decades before production declines, and of all the land use systems that replace tropical rainforests, it is usually considered to be one of the most effective, with respect to soil erosion, because tea trees often grow tall, and form a closed canopy (Critchley & Bruijnzeel, 1996). In contrast, coffee trees need wider spacing, to allow access for picking and spraying, so coffee plantations have a much sparser canopy, and are more susceptible to soil erosion and invasive weeds. Rubber plantations, which are abundant throughout Sumatra, require deep, relatively fertile soil and thrive best on flat land. To establish rubber plantations, land is clear stumped, to avoid disease transmission to the trees, and newly planted rubber plantations are susceptible to high levels of erosion and runoff (Critchley & Bruijnzeel, 1996).
When a natural forest is converted to a plantation, the reduction in plant cover increases the catchment's overall water yield, and it can often take years, as the planted crops and trees grow, for this yield to decrease. Even then, crop plantations almost always use less water than the original forest, and in areas surrounding plantations, runoff water, usually discolored with sediment, can be observed. However, although the process of conversion is highly destructive, the loss of the land's services can be limited by using appropriate clearing practices and land management techniques, including controlled drainage, bench terraces, contour farming or the introduction of biological barriers such as hedges or woodland (Critchley & Bruijnzeel, 1996).
Over the last few decades, the agricultural commodity that has received the most attention, and is considered to have caused the most destruction to primary forests throughout Indonesia and Malaysia, is palm oil. The oil palm (Elaeis guineensis) is native to Africa and was first planted in Indonesia in 1848. It is suited to tropical regions within 12 to 15 degrees north and south of the equator, where the average rainfall is between 2,000 and 2,500 millimeters per year. Because the palm oil harvest declines during the dry season, and the flowering period and maturation of the fruit are affected by temperature, humidity needs to be high, between 80 and 90%, and the temperature needs to lie between 29 and 30 degrees (Rautner et al, 2005). Borneo and Sumatra are therefore ideal, and thousands of hectares of lowland tropical rainforest, habitat for the islands' orangutans, tigers, elephants, rhinos, leopards, gibbons, and numerous other species, have been converted to make room for this oil.
Palm oil, which has the highest per hectare yield (4-8 tons) of all edible oils, is now the most important vegetable oil in the world. In 2002, palm oil, and palm kernel oil, accounted for approximately 23% of the world’s edible oil production, and 51% of global trade in edible oils. Indonesia is now the world’s largest exporter of palm oil, with Malaysia a close second, and between 2016 and 2020, the projected production by Indonesia is around 18,000 million tons, or 44% of world production, while Malaysia’s estimated output will be 15,400 million tons, or 37.7% (Rautner et al, 2005).
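The quoted production projections and percentage shares can be cross-checked against each other, assuming both shares refer to the same projected world total:

```python
# Consistency check of the projected 2016-2020 palm oil figures quoted above.
indonesia = 18_000   # projected production, units as quoted in the source
malaysia = 15_400
ind_share = 0.44     # Indonesia's quoted share of world production

# World total implied by Indonesia's production and share.
world_total = indonesia / ind_share
print(round(world_total))                 # ~40,909 implied world production

# Malaysia's implied share should match the quoted 37.7%.
print(round(malaysia / world_total, 3))   # 0.376, close to the quoted 0.377
```

The two quoted shares are mutually consistent to within rounding, which suggests they were derived from the same underlying projection.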
Palm oil plantations are usually established after large areas of forest have been cleared by heavy machinery. After the timber has been extracted and sold on the international legal or illegal timber market, left over debris is usually set alight. The use of fire to clear forest is one of the most destructive practices, and is partly responsible for the extensive fires that have ravaged forests throughout Indonesia and Malaysia in recent years, including the devastating fires of 1997 and 1998 (Rautner et al, 2005; Harrison et al, 2009).
Oil palms are single-stemmed, can grow up to 20 meters tall, and have leaves that grow 3-5 meters long. The palms start bearing fruit 2-3 years after planting; the fruits, which grow in large bunches, take around 5 months to mature from pollination. Oil is extracted from both the pulp of the fruit, which becomes palm oil, and the kernel, which becomes palm kernel oil. In order for the trees to yield fruit earlier, and to control invasive weeds, pesticides are regularly used, including the highly toxic Paraquat; these have been blamed not only for the decreased level of biodiversity in palm oil plantations, but also for poisoning thousands of plantation workers (Rautner et al, 2005; WRM, 2005). Forest conversion for palm oil plantations results in a loss of 80% of plant species, and research has shown that 80-90% of the mammal, reptile and bird species found in tropical forests cannot survive in palm oil monocultures (Rautner et al, 2005). It is feared that the increasing demand for palm oil, in both household products and as a biofuel, will see even more areas of lowland forest converted to plantations, and a continued decrease in populations of endangered species.
The conversion of forests for agriculture is a historic process that is unlikely to stop in the near future. So far in Indonesia and Malaysia, and wider parts of the world, this conversion has taken place in lowland areas home to some of the world’s most endangered animals. Managing and preventing future forest loss, while considering the needs of a rapidly increasing human population, will be one of the greatest environmental challenges of the next few decades.
Hopefully lessons can be learned from one of the greatest environmental disasters in history, Indonesia’s ‘Mega Rice Project’. The Mega Rice Project was a plan by former Indonesian president Suharto to make Indonesia self-sufficient in rice production. The project proposed to convert 796,000 hectares of peat swamp forest in Central Kalimantan into rice fields, with an additional settlement program to relocate 316,000 transmigrant families to the area. Despite the absence of any cost-benefit or sensitivity analysis, and repeated warnings by scientists that the project would fail because the depth of the peat made the land unsuitable for conversion, Suharto went ahead with the project. Around $175 million was spent on the scheme, half of which went towards digging canals to drain the peat swamp; the peat was so deep that it subsided as it drained. After Suharto fell from power in 1998, the project was abandoned, and no rice has ever been grown on the land. Today, the Mega Rice Project area is a barren wasteland where the transmigrants are unable to grow rice or enough crops to survive, where poverty is rife, where orangutans and other wildlife are scarce and confined to fragmented patches of forest, and where illegal logging and forest fires are frequent (Rautner et al, 2005).
Bennett, A. & Saunders, D. (2010). Habitat fragmentation and landscape change. In Sodhi, N.S. & Ehrlich, P.R., editors, Conservation biology for all. Oxford University Press, UK
Critchley, W. & Bruijnzeel, S. (1996). Environmental impacts of converting moist tropical forest to agriculture and plantations. UNESCO
Encyclopedia of the Nations (2011). Indonesia. Encyclopedia of the Nations
Harrison, M.E., Page, S.E. & Limin, S.H. (2009). The global impact of Indonesian forest fires. Biologist, 56(3), pp. 156-163
Laurance, W.F. (2010). Habitat destruction: death by a thousand cuts. In Sodhi, N.S. & Ehrlich, P.R., editors, Conservation biology for all. Oxford University Press, UK
Rautner, M., Hardiono, M. & Alfred, R.J. (2005). Borneo: Treasure island at risk. WWF
WRM (2005). Oil palm plantations - no sustainability possible with Paraquat. World Rainforest Movement
ESS1.B: Earth and the Solar System - The orbits of Earth around the sun and of the moon around Earth, together with the rotation of Earth about an axis between its North and South poles, cause observable patterns.
ESS1-1 Earth's Place in the Universe- Develop and use a model of the Earth-sun-moon system to describe the cyclic patterns of lunar phases, eclipses of the sun and moon, and seasons.
Day 1: Phases of the Moon and How a Solar Eclipse Occurs
1. Watch the video on The Phases of the Moon and then fill out the two pages on The Phases of the Moon. You may use the website to help you.
2. Read the article and watch the videos on Solar Eclipses and answer the questions in the flip book
Day 2: The Path of Totality of a Solar Eclipse (Page 7)
1. Read the article, watch the videos and then complete page 7
2. Check to see when the eclipse will start in Las Vegas and how much of the sun will be covered here, and complete the page in the flip book
Action-Based Research
News and Information
- 2018 Wisconsin State Music Conference – Action-Based Research – Educator Effectiveness in Disguise – to be presented by Lisa Benz, Paul Budde, and Shawn Gudmunsen
- 2017 Wisconsin State Music Conference – Action-Based Research – presented by Paul Budde
- Budde, P. (2017). Action-Based Research – A Collaborative Initiative for Wisconsin Music Educators, Part I. Wisconsin School Musician, 87(3), 26-27.
- Budde, P. (2017). Action-Based Research – A Collaborative Initiative for Wisconsin Music Educators, Part II. Wisconsin School Musician, 88(1), 38-42.
WHAT IS ACTION-BASED RESEARCH?
Action-based research is a systematic experiment carried out by an educator who seeks to improve teaching and learning within her unique classroom setting. Although there are many reasons to engage in action-based research, it is often initiated when an educator experiences turbulence in the journey of teaching and learning. In such cases, the teacher explores new approaches in order to help her students succeed. Like any research agenda, action-based research involves several steps. These include:
- Identifying a Research Focus – the teacher identifies an area of interest or need within her teaching setting
- Developing Questions – the teacher formulates specific research questions that provide the framework for her study
- Determining Approaches – the teacher identifies two or more distinct teaching approaches that she will implement (and compare) in her study
- Selecting Participants – the teacher identifies specific classes/students who will be included in her study
- Carrying Out Research – the teacher carries out her experiment by implementing specific strategies within her teaching setting
- Collecting Data – after a predetermined amount of time, the teacher collects data (e.g., tests, surveys) to compare the teaching approaches utilized in her study
- Analyzing Data – the teacher analyzes data to determine the relative success of each teaching approach utilized in her study
- Reporting Findings – the teacher shares the results of her study with selected audiences (e.g., parents, colleagues, administration, conference attendees)
- Incorporating Findings – the teacher adjusts strategies for teaching and learning, based on the findings of her study
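The "Analyzing Data" step above can be sketched in code. The scores below are hypothetical, and Welch's t statistic is just one simple way to compare two teaching approaches; nothing in the action-based research framework prescribes this particular test:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical post-test scores under two teaching approaches
approach_a = [78, 85, 82, 90, 74, 88, 81, 79]
approach_b = [70, 76, 73, 80, 68, 75, 72, 77]

t = welch_t(approach_a, approach_b)
print(f"mean A = {mean(approach_a):.1f}, mean B = {mean(approach_b):.1f}, t = {t:.2f}")
```

With samples this small, a t value well above 2 would suggest that the difference between the two approaches is unlikely to be due to chance alone, though a teacher would still want to weigh practical significance before adjusting her strategies.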
BENEFITS OF ACTION-BASED RESEARCH
There are many benefits associated with action-based research. Most notably, action-based research can improve teaching and learning in one’s classroom setting. Clearly, if we discover ways to better serve our students via action-based research, then it is worth our time and energy. There are, however, additional benefits, as can be seen in the hypothetical scenario above. For example, sharing the results of an action-based research project with:
- students and parents gives the educator an opportunity to model lifelong learning and a commitment to excellence
- colleagues can lead to dialogue and collaboration within or between schools regarding ways to improve teaching and learning
- administration and policy-makers affords the opportunity for teachers to showcase professional growth based on tangible (data-driven) results
- professors opens the door for collaborative efforts between PK-12 teachers and University instructors and/or students
Further, educators who engage in action-based research often experience a sense of personal satisfaction, based on the knowledge that they are learning and growing as professionals while also improving learning opportunities for the students in their classrooms.
Narrative of the Life of Frederick Douglass, an American Slave
Frederick Douglass was born in slavery as Frederick Augustus Washington Bailey near Easton in Talbot County, Maryland. He was not sure of the exact year of his birth, but he knew that it was 1817 or 1818. As a young boy he was sent to Baltimore, to be a house servant, where he learned to read and write, with the assistance of his master's wife. In 1838 he escaped from slavery and went to New York City, where he married Anna Murray, a free colored woman whom he had met in Baltimore. Soon thereafter he changed his name to Frederick Douglass. In 1841 he addressed a convention of the Massachusetts Anti-Slavery Society in Nantucket and so greatly impressed the group that they immediately employed him as an agent. He was such an impressive orator that numerous persons doubted if he had ever been a slave, so he wrote NARRATIVE OF THE LIFE OF FREDERICK DOUGLASS. During the Civil War he assisted in the recruiting of colored men for the 54th and 55th Massachusetts Regiments and consistently argued for the emancipation of slaves. After the war he was active in securing and protecting the rights of the freemen. In his later years, at different times, he was secretary of the Santo Domingo Commission, marshal and recorder of deeds of the District of Columbia, and United States Minister to Haiti. His other autobiographical works are MY BONDAGE AND MY FREEDOM and LIFE AND TIMES OF FREDERICK DOUGLASS, published in 1855 and 1881 respectively. He died in 1895.
Ii blood group system, classification of human blood based on the presence of antigens I and i on the surface of red blood cells. The Ii blood group system is associated with cold antibodies (antibodies that function only at temperatures below normal body heat) and several blood diseases.
The I antigen is found in the cell membrane of red blood cells in all adults, whereas the i antigen is found only on red blood cells of the developing fetus and newborn infants. In newborn infants the i antigen undergoes gradual conversion to reach adult levels of the I antigen within 18 months of birth. The formation of the I antigen from the i antigen in red blood cells is catalyzed by a protein called I-branching enzyme. Rare variants of the i antigen exist; for example, the antigen i1 is found as a rarity in whites, and the antigen i2 is found as a rarity mostly among blacks. Natural antibodies to I are found in adults who possess the i antigen; the presence of the i antigen in adults is caused by mutation of a gene known as GCNT2, which encodes the I-branching enzyme.
Auto-antibodies to I are the commonest source of cold antibodies in acquired hemolytic anemia. Auto-antibodies to i have been identified in persons with leukemia and other blood diseases; a transient auto-anti-i is relatively common in people with infectious mononucleosis.
What is the significance of the Jack Hills zircons? Submitted by Joseph Reese, Edinboro University of Pennsylvania
Why is this important? Up to 4.4 billion years old, zircon grains are the oldest Earth materials discovered.
What we know...
The Jack Hills zircons are the oldest terrestrial materials found so far. These detrital grains, from a quartzite/metaconglomerate unit in Western Australia, date to between 4.0 and 4.4 Ga, with a single grain as old as 4.4 Ga.
Oxygen isotopic studies yielding high isotopic ratios indicate that the magma from which these zircons originated was derived from recycled rock that had interacted with surface waters, not from a mantle source. As Mark Harrison commented, "These zircons tell us that they melted from an earlier rock that had been to the Earth's surface and interacted with cold water." The presence of quartz inclusions, as well as results from neodymium and hafnium isotopic studies, supports a felsic source, suggesting that continental crust may have been forming very early in Earth's history and that tectonic processes like subduction were operating as well. The implications are profound, including the presence of continents, oceans, and perhaps life very early in Earth's history.
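The ages quoted above come from uranium-lead dating of individual grains. As a simplified, hedged illustration (real U-Pb geochronology uses the paired 238U and 235U decay systems and concordia analysis), a single-decay sketch shows how a measured radiogenic 206Pb/238U ratio maps to an age:

```python
from math import exp, log

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age(pb206_per_u238):
    """Age in years implied by a radiogenic 206Pb/238U ratio (single-decay model)."""
    return log(1.0 + pb206_per_u238) / LAMBDA_238

# Ratio a ~4.4 Ga zircon would carry, used here as an illustrative round trip
ratio = exp(LAMBDA_238 * 4.4e9) - 1.0
print(f"206Pb/238U = {ratio:.3f}  ->  {u_pb_age(ratio) / 1e9:.2f} Ga")
```

A grain this old has accumulated nearly one radiogenic lead atom per remaining uranium atom, which is part of why such ancient zircons are so datable.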
References and other Resources
Here are some resources that can be used to teach about the implications of these mineral grains.
Images from Aaron Cavosie at the University of Puerto Rico, useful for teaching about zircons:
A PowerPoint file of images from the Jack Hills zircons (PowerPoint 17.1MB Apr23 07)
A low resolution version of the same file (PowerPoint 1.1MB Apr23 07)
Zircons are Forever, by John W. Valley, William H. Peck, Elizabeth M. King (1999). This is a reprint of a 1999 article from the University of Wisconsin-Madison Geology Alumni Newsletter, with links to extensive references and images.
Images from a Cool Early Earth, by John W. Valley, William H. Peck, Elizabeth M. King, and Simon A. Wilde (2002). Geology, 30: 351-354.
A Cool Early Earth?, a Scientific American article
A comprehensive webpage: Zircons Are Forever
The human immune system uses both innate and adaptive immune responses to combat pathogens such as viruses and bacteria (see VAX March 2004 Primer on Understanding the Immune System, Part II). Innate immune responses are always on standby and can act quickly, usually within hours, to either snuff out or help limit an initial infection. If more help is needed, adaptive immune responses, which include both antibodies and cellular immune responses, kick in. These take longer to activate because they are designed to target a specific pathogen. The immune system generates HIV-specific antibodies and cellular immune responses against the virus, both of which are critical in either preventing or controlling the infection, and are therefore of great interest to AIDS vaccine researchers.
Antibodies are Y-shaped molecules that primarily latch on to viruses and prevent them from infecting cells (see VAX February 2007 Primer on Understanding Neutralizing Antibodies). Once cells are already infected, cellular immune responses come into play. These responses involve a subset of immune cells known as CD4+ T helper cells that orchestrate the activities of activated CD8+ T cells, known as cytotoxic T lymphocytes (CTLs), which can kill cells already infected by the virus.
The role of cellular immune responses in HIV infection is complicated because the very cells that play a role in limiting infection are under attack: the virus preferentially targets and infects CD4+ T cells, severely hampering the immune system's ability to fight back. However, both CD4+ and CD8+ T cells still play a critical role in the control of HIV infection and are also likely to be important to the development of an AIDS vaccine. Researchers are now studying the ideal types of antibodies and cellular immune responses that a vaccine should induce to best prevent or control HIV infection.
Typically, researchers measure the size of the cellular immune responses that are induced by different candidates, as well as the ability of these cells to secrete cytokines, which are proteins produced by immune cells in response to viruses or bacteria (see VAX August 2007 Primer on Understanding Immunogenicity). Merck's MRKAd5 candidate induced T cells secreting a cytokine known as interferon-γ (IFN-γ) in more individuals than any candidate tested in Phase I clinical trials, prior to it being advanced to a Phase IIb test-of-concept trial. In Phase I trials, 80% of MRKAd5 recipients, who did not have high levels of pre-existing immunity to the cold virus used as a vector, developed T cells that secreted IFN-γ.
The majority of vaccine recipients in the STEP trial also developed both CD4+ and CD8+ T-cell responses against HIV after receiving MRKAd5. But these immune responses were not sufficient to protect against infection. Researchers have not observed any correlation so far between the size of HIV-specific immune responses in vaccine recipients and whether or not they subsequently became infected with HIV through risk behaviors, such as unprotected sex with an HIV-infected partner or injection-drug use.
Researchers have also found that the quantity of T-cell responses does not seem to correlate with control of the virus in some HIV-infected individuals, known as elite controllers, either. Elite controllers are a group of long-term nonprogressors who are HIV infected yet have very low levels of virus (viral loads) and do not progress to AIDS, even without the aid of antiretroviral therapy (see VAX September 2006 Primer on Understanding Long-term Nonprogressors). The magnitude of HIV-specific cellular immune responses is actually lower in elite controllers than in individuals with typical viral loads and normal disease progression.
Together these findings indicate that the size of the T-cell response may not be the key factor in either preventing or controlling HIV infection. Instead, the capability of the T cells to perform a particular function may be more important. Some immunologists suggest that it is not the size of the initial T-cell response to vaccination that matters, but the ability of these T cells to multiply later on, when the individual encounters the pathogen they were vaccinated against, that is most critical.
Other researchers are studying the direct ability of the T cells induced by an AIDS vaccine candidate to kill virus-infected cells. Researchers can extract T cells from volunteers in an AIDS vaccine clinical trial through blood samples and test them in a laboratory against HIV to see if they are actually capable of killing virus-infected cells. This method is now being used by some researchers to prioritize vaccine candidates in Phase I clinical trials.
Another approach is to study different viral and bacterial vectors that may be used for AIDS vaccine candidates to see if they induce different types of T-cell responses. Researchers have conducted preclinical experiments in mice to compare the T cells induced by different viral vectors. The results indicate that the choice of vector does affect the type of T cells that are induced upon vaccination.
Researchers are also currently studying the characteristics of effective T-cell responses in other viral infections, in which cellular immune responses are at least partly responsible for protection, to determine what types of T cells an AIDS vaccine candidate should ideally induce. More research on T-cell responses to HIV, as well as other pathogens, will shed light on these questions and help researchers design more effective AIDS vaccine candidates.
Hydrogen: Fuel for Our Future?
Hydrogen-powered cars like this one may be commonplace in the future.
On July 18, BP and GE announced plans to jointly develop up to 15 new hydrogen power plants for generating electricity over the coming decade. The hydrogen will be derived from fossil fuels, including coal and natural gas. While the plants will emit greenhouse gases, the companies will employ carbon capture technologies they claim will reduce carbon dioxide (CO2) emissions by 90 percent. Although the operations will not be pollution-free, some environmentalists welcome the companies’ investment in hydrogen technology as a key development in bringing about a hydrogen economy.
Though often mistaken for an energy source, hydrogen is actually an artificial fuel—like gasoline—that can be used to transport and store energy. Although it can be separated from fossil fuels, its long-term promise lies in its ability to be separated from water through electrolysis, using solar power or other forms of renewable energy. Its most publicized application is in transportation: the hydrogen gas is stored in an on-board tank until combined with oxygen in a fuel cell, where the electrolysis process is essentially reversed, converting the fuel's chemical energy into an electrical current. This electricity can then be used to power electric motors in cars, buses, boats, and other vehicles.
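A back-of-the-envelope calculation makes hydrogen's storage role concrete. The efficiency figures below are round-number assumptions chosen for illustration, not measured values for any particular electrolyzer or fuel cell:

```python
HHV_H2_MJ_PER_KG = 141.8   # higher heating value of hydrogen
ELECTROLYZER_EFF = 0.70    # assumed electricity-to-hydrogen efficiency (HHV basis)
FUEL_CELL_EFF = 0.55       # assumed fuel-cell electrical efficiency

def h2_from_electricity(kwh):
    """Kilograms of hydrogen produced from kwh of input electricity."""
    return kwh * 3.6 * ELECTROLYZER_EFF / HHV_H2_MJ_PER_KG  # 1 kWh = 3.6 MJ

def round_trip_kwh(kwh):
    """Electricity recovered after electrolysis followed by a fuel cell."""
    return kwh * ELECTROLYZER_EFF * FUEL_CELL_EFF

print(f"100 kWh of renewable power -> {h2_from_electricity(100):.2f} kg H2")
print(f"round trip returns {round_trip_kwh(100):.1f} kWh")
```

Under these assumptions the round trip returns well under half the input electricity, which is why hydrogen is best understood as a storage and transport medium rather than an energy source.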
In the short run, fuel cells are also considered a promising source of electricity for some industries and buildings, particularly those that require steady back-up power during blackouts. In this application, hydrogen is most often derived from natural gas and propane, which already have extensive distribution systems in place.
Using fossil fuels to generate hydrogen can result in modestly lower emissions of CO2 and other pollutants than using these fuels as conventional energy sources, though this depends on the efficiency of the technologies involved. In order to get larger reductions, the CO2 must be captured and sequestered, a process that remains experimental and expensive. However, when the hydrogen separation process is based on renewable energy sources, hydrogen use is essentially pollution-free, with the only byproducts being water and heat.
Since 1999, when Iceland announced its plan to become the first hydrogen-based economy in the next 30–40 years, governments and businesses have begun to seriously consider the hydrogen option. In 2000, the small South Pacific island of Vanuatu joined Iceland in making steps towards widespread hydrogen use and deriving 100 percent of its energy from renewable sources. Hawaii, another island rich in renewable resources such as geothermal and wind energy yet still heavily dependent on oil imports, invested in hydrogen research in 2001, hoping to eventually export hydrogen to other states and nations. And California, the United States’ largest gasoline consumer, began developing the world’s first “hydrogen highway” in 2004.
Despite initial enthusiasm, some of these regions are making greater progress than others. Freyr Sverrisson, an independent energy consultant from Iceland, says that so far the Icelandic government has taken little concrete action toward meeting its hydrogen target: it is home to only one hydrogen fueling station, and the country has invested significant funds in the aluminum smelting industry that could have been placed in hydrogen development. By generating a carbon dioxide byproduct, the smelting process is helping Iceland become the world’s fastest growing emitter of CO2. The government is “squandering an opportunity,” Sverrisson says, by choosing to invest in the quick returns of aluminum smelting instead of developing the hydrogen economy with longer-term benefits.
Yet according to Jon Bjorn Skulason, general manager of Icelandic New Energy, the country is just 6–12 months behind the original plan proposed in 1999. In addition to having three operational fuel cell buses and the one fuel cell filling station, Iceland has passed a preemptive law that will eliminate all taxes on hydrogen cars once they begin to be sold domestically. With over 90 percent of citizens in favor of developing a hydrogen economy and continued support for the project from the government and business, Skulason does not foresee any further delays.
California, meanwhile, already boasts 23 hydrogen fueling stations (14 more are slated to be built this year) and has put 137 hydrogen-powered passenger cars and 9 buses on the road, more than any region in the world, according to Chris White of the California Fuel Cell Partnership. Although the partnership is still operating in a “demonstration phase,” notes White, several of its members (many of which are automotive companies) expect to make hydrogen-powered commercial vehicles as early as 2010, and to have showroom cars by 2015. According to White, this is the same way hybrid-electric cars were introduced to the market in the 1990s.
Yet transitioning to a hydrogen economy has raised some concerns. Because hydrogen is odorless and burns with a clear flame, leaks can be difficult to detect, although the gas is so light and disperses so quickly that the chance of an open explosion is considered minimal. (While many associate hydrogen with the 1937 Hindenburg disaster, the explosion of the German airship in fact began with ignition of the blimp’s highly flammable outer covering, not the gas it carried.) Even so, careful engineering is necessary to ensure that hydrogen fuel cell vehicles are safer than gasoline vehicles, according to a 1997 Ford Motor Company study.
“Hydrogen is one of the keys to a new energy economy that relies on solar and wind power rather than fossil fuels,” according to Worldwatch President Chris Flavin. “Private and public investment in hydrogen technology should be increased substantially. But in the next few years, the largest reductions in oil demand and greenhouse gas emissions will come from improved fuel economy and biofuels—both of which are fully competitive today.”
This story was produced by Eye on Earth, a joint project of the Worldwatch Institute and the blue moon fund. View the complete archive of Eye on Earth stories, or contact Staff Writer Alana Herro at aherro [AT] worldwatch [DOT] org with your questions, comments, and story ideas.
A paste used on a toothbrush for cleaning the teeth.
- ‘Avoid cleaning them with toothpaste, which will scratch the surface and cause stains to build up.’
- ‘So, of course, she was getting a double dose of fluoride toothpaste.’
- ‘Use a baby formulated toothpaste, or one containing fluoride, and gently rub it around the teeth and gums.’
- ‘Most toothpaste contains a mineral called fluoride that has been proven to protect teeth against decay.’
- ‘Even though most toothpastes contain fluoride, toothpaste alone will not fully protect a child's mouth.’
- ‘The main purpose of toothpaste is to help clean your teeth and it is also an effective means of delivering fluoride to the teeth.’
- ‘He has yet to successfully spit out any toothpaste whilst brushing.’
- ‘You should use a smear of fluoride toothpaste on a small toothbrush for your baby.’
- ‘It is important to clean teeth twice a day with fluoride toothpaste.’
- ‘For a change, these can hold your toilet accessories including your toothpaste and tooth brush.’
- ‘The bottle is still so soft with heat that it squeezes like a tube of toothpaste.’
- ‘Children with fissure sealants still need to brush their teeth with fluoride toothpaste.’
- ‘This means there is enough Fluoride in one tube of toothpaste to kill anyone who weighs up to 30 kilos.’
- ‘If you have a lot of sensitivity, try using toothpaste designed for sensitive gums.’
- ‘Use fluoride toothpaste as fluoride makes teeth stronger and more resistant to acid attacks.’
- ‘Metal tubes crimped up from the bottom like a tube of toothpaste should be firmly sealed.’
- ‘Ask your doctor or dentist what kind of toothpaste you should use for your baby.’
- ‘It is important to use a small amount of toothpaste so your baby does not swallow too much of it.’
- ‘You study the mashed tube of toothpaste on the counter beside a can of generic hairspray.’
- ‘Start brushing your baby's teeth as soon as they first appear, using a small trace of adult fluoride toothpaste.’
A global strategy on climate change has been agreed under the 1992 United Nations Climate Change Convention and its 1997 Kyoto Protocol. This international legal regime promotes financial and technical cooperation to enable all countries to adopt more climate-friendly policies and technologies. It also sets targets and timetables for emissions reductions by developed countries.
Most governments, however, have still not ratified the Protocol, which means that its emissions targets for developed countries - which add up to an overall 5% reduction compared to 1990 levels during the five-year period 2008-2012 - are not yet in effect. Many governments are awaiting agreement on the operational details of how the Protocol will work in practice before deciding on ratification.
The Hague meeting must decide these details and ensure that they will lead to action that is both economically efficient and environmentally credible. It must also strengthen the effectiveness of the many activities taking place under the Convention.
"The Hague conference is a make or break opportunity for the climate change treaties," said Michael Zammit Cutajar, the Convention's Executive Secretary. "Unless governments of developed countries take the hard decisions that lead to real and meaningful cuts in emissions and to greater support to developing countries, global action on climate change will lose momentum."
"The meeting's success will be measured by the early entry into force of the Kyoto Protocol - I hope by 2002, ten years after the adoption of the Convention at the Rio Earth Summit. With scientists increasingly convinced that we are already witnessing the effects of global warming, we must ensure that the next decade produces real progress on lowering emissions and moving economic growth on to climate-friendly paths," he said.
Developed countries are concerned that this rapid transition to a lower-emissions economy could have short-term economic implications, including a potential impact on trade competitiveness, both among themselves and vis-à-vis those developing countries that are now industrializing.
The Protocol will only enter into force after it has been ratified by at least 55 Parties to the Climate Change Convention, including industrialized countries representing at least 55% of this group's total 1990 carbon dioxide emissions. So far, only 30 countries - all from the developing world - have ratified the Protocol.
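The Protocol's double trigger for entry into force can be stated precisely. The sketch below is a toy model with invented ratification data; the real emissions shares come from the official 1990 inventory for industrialized (Annex I) countries:

```python
def protocol_in_force(ratifications):
    """ratifications: list of (is_annex_i, share_of_annex_i_1990_co2) pairs.

    Entry into force needs at least 55 Parties AND industrialized (Annex I)
    ratifiers covering at least 55% of that group's 1990 CO2 emissions.
    """
    enough_parties = len(ratifications) >= 55
    annex_i_share = sum(share for annex_i, share in ratifications if annex_i)
    return enough_parties and annex_i_share >= 0.55

# The situation described in the text: 30 developing-country ratifications
print(protocol_in_force([(False, 0.0)] * 30))                        # not in force
# A hypothetical mix of 30 Annex I and 25 other ratifiers that would suffice
print(protocol_in_force([(True, 0.02)] * 30 + [(False, 0.0)] * 25))  # in force
```

The first call mirrors the state of play above: 30 developing-country ratifications meet neither threshold, since none of those Parties count toward the 55% Annex I emissions share.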
Key Protocol-related issues that still need to be resolved include rules for the Protocol's Clean Development Mechanism and its Joint Implementation and emissions trading systems, rules for obtaining credit for improving "sinks" (by planting new trees to absorb carbon dioxide from the atmosphere, for example, thus offsetting emissions), a regime for monitoring compliance with commitments, and accounting methods for national emissions and emissions reductions.
Key Convention-related issues include technology transfer, capacity building, financial assistance, and the special concerns of developing countries that are particularly vulnerable to climate change or to the economic consequences of emissions reductions by developed countries. The various Protocol and Convention issues are strongly interlinked and will only be resolved as part of a package deal.
The Hague meeting is officially called the Sixth Session of the Conference of the Parties to the Convention, or COP 6. It is expected to draw well over 5,000 participants and a large number of ministers. Dutch Environment Minister Jan Pronk has been designated the conference President.
Note to journalists: The press accreditation form, official documents, and other information are posted at www.unfccc.int. For interviews or additional information please contact
Michael Williams in Geneva at
Phone: (+41-22) 917 8242/44, fax (+41-22) 797 3464,
Nardos Assefa in Bonn at
Phone:(+49-228) 815-1526, fax (+49-228) 815-1999,
A closer look at the "crunch" issues for The Hague
The climate change talks cover a range of issues that is as broad as it is complex. Most of the issues are both technical and political and are linked to one another. There is no one correct way to prioritise them, but the following list offers a reasonable approach to grouping the main questions. Agreement on all the issues will be necessary for COP 6 to be considered a success.
1 - The "flexibility" mechanisms. The Protocol establishes three innovative mechanisms - the clean development mechanism, joint implementation, and emissions trading - that developed countries may use to lower the costs of meeting their national emissions targets. Their usefulness is based on the fact that, as far as the global climate and atmosphere are concerned, it does not matter where emissions originate. Because it can be cheaper to reduce a ton of greenhouse gas emissions in countries that are, for example, less energy efficient, the mechanisms can help ensure that the overall Kyoto target is achieved as inexpensively as possible.
The Protocol text authorizing these mechanisms is brief and leaves it to the current negotiations to determine how they should operate in practice. The Hague meeting must decide the roles of various institutions and craft the accounting rules for allocating credits. In the case of the two project-based mechanisms - the CDM and JI - it must also elaborate criteria for project eligibility and baselines for measuring each project's contribution to reducing net emissions.
A difficult sticking point is whether or not there should be a ceiling on how much credit a government can obtain through the mechanisms. The Protocol states that the use of the mechanisms is to be "supplemental" to domestic action. Some governments argue that there should therefore be a quantified ceiling on how many credits can be obtained from the mechanisms; others disagree. The three mechanisms are:
2 - "Sinks". Sinks, or LULUCF in the jargon (land use, land use change, and forestry), introduce the technically complex and politically charged question of how much credit countries can receive against their emissions targets for promoting activities, such as reforestation or ending deforestation, that strengthen carbon sinks.
New and growing plants are called sinks because they remove carbon from the air, thus reducing a country's "net emissions" (total emissions minus removals). In most developed countries, on balance, land and forests do act as sinks. However, in many countries around the world, deforestation and changes in land-use release large amounts of CO2 into the atmosphere.
For some countries, growing new forests could be cheaper than reducing industrial emissions. Because it can be difficult to estimate just how much carbon a given tree or forest absorbs, rigorous accounting systems are needed for determining baselines and measuring changes. Also needed are clear definitions of what counts as a sink, since it can be difficult to distinguish between the natural uptake of carbon by the biosphere and uptake caused by purposeful human activity or climate change policies. Decisions are also needed on whether or not to give credit for non-forestry sinks, such as agriculture and soils. Other issues include ensuring that climate-driven activities do not have negative impacts on biodiversity or socio-economic conditions, and that stored carbon that is credited is not later released into the atmosphere (for example during a forest fire).
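The bookkeeping behind "net emissions" and sink crediting can be made concrete with a small calculation. The figures and helper functions below are entirely hypothetical, shown only to illustrate the accounting principle: net emissions are gross emissions minus removals, and a sink project earns credit only for uptake beyond a baseline estimate of what the biosphere would have absorbed anyway.

```python
# Illustrative accounting for net emissions and sink credits.
# All numbers are hypothetical; units are megatonnes of CO2 (MtCO2).

def net_emissions(total_emissions, removals_by_sinks):
    """Net emissions = total emissions minus removals by sinks."""
    return total_emissions - removals_by_sinks

def sink_credit(measured_uptake, baseline_uptake):
    """Creditable uptake is only the amount beyond the baseline,
    i.e. uptake attributable to purposeful human activity."""
    return max(measured_uptake - baseline_uptake, 0.0)

total = 500.0           # hypothetical gross national emissions
natural_uptake = 40.0   # baseline uptake the biosphere would provide anyway
project_uptake = 55.0   # uptake measured after a reforestation programme

print(net_emissions(total, project_uptake))         # 445.0
print(sink_credit(project_uptake, natural_uptake))  # 15.0
```

Note how the baseline matters: if measured uptake falls below the baseline (for example after a forest fire releases stored carbon), no credit accrues.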
3 - North-South cooperation. While only developed countries have targets and timetables for cutting emissions, developing countries have a role to play in promoting sustainable development and thereby lowering the emissions-intensity of their economic growth. Strengthening their ability to do so will require an agreement on financial and technological cooperation. This should include a framework for capacity building, the necessary funding from developed countries, and practical steps for promoting the transfer of climate-friendly technologies to developing countries.
4 - Adverse impacts of climate change and of response measures on vulnerable countries. Under the Convention, the international community has accepted its responsibility to assist the least developed countries, small island states, and other vulnerable regions to adapt to the impacts of climate change and of policies to reduce emissions. Some of these states have called for various funds or programmes on adaptation, climate-related disasters, and research and observation. Other states are urging action to assist or compensate governments - notably the oil-exporting developing countries - that may be affected by efforts to meet the Kyoto targets. These issues will need to be a part of the overall package at COP 6.
5 - A compliance regime. To be credible, the Kyoto Protocol must have rules for determining compliance and measures for responding to cases of non-compliance. The key question is what the consequences of non-compliance should be. Alternative proposals call for payments into a compliance fund, extra reductions to be made in future periods, restrictions on the use of the mechanisms in future periods, financial penalties and the formulation of action plans. Other items for discussion include whether non-compliance applies only to Protocol commitments or to Convention commitments that are "referred to" in the Protocol, the balance of representation from different regions on the compliance committee, and membership in the expert review groups.
16 February 2012
When the going gets tough, species start merging. Lake-dwelling fish species that once lived separately began interbreeding when pollution forced them together. Ultimately most of the lakes’ remarkable diversity has been lost – and the same could be happening in threatened habitats around the world.
After the last ice age, whitefish (Coregonus) in Europe’s Alpine lakes split into several species, each with a specialised appearance and lifestyle. They first separated because they spawned in different places, some favouring the lake bottom and others the surface layers.
That all changed when the lakes became polluted in the mid-20th century, says Ole Seehausen of Eawag: Swiss Federal Institute of Aquatic Science and Technology in Kastanienbaum and the University of Bern, Switzerland. Fertiliser ran off farmland into the lakes, leaving them over-rich in nutrients – a phenomenon called eutrophication. This caused algal blooms in the lakes, which in turn caused oxygen levels to crash deep in the lakes.
Seehausen and colleagues now think that this oxygen crash forced species to merge. They studied the whitefish populations in 17 lakes, each of which was also studied in the 1920s, before the eutrophication began. Back then, the deeper lakes had more species.
The number of species in the lakes has fallen 38 per cent since the 1920s, and the remaining species have become more similar in shape. What’s more, lakes that have experienced more eutrophication have fewer species. Seehausen found that the remaining species sometimes carry genetic markers previously found only in extinct species, suggesting that those species have hybridised themselves out of existence.
The European lakes are far from the only example of the dangers of hybridisation. Seehausen has previously shown that eutrophication of Lake Victoria in east Africa caused a crash in the diversity of cichlid fish. The different species used to be distinguished by bright colours, but these became useless in the murky water, and many hybridised. Some iconic North American animals, such as wolves and coyotes, are also intermixing as they are forced together.
Seehausen says hybridisation is dangerous because it can go unnoticed for a long time, and species can vanish before action can be taken to save them. Most assessments of endangered species focus on population size, but if species are interbreeding the population could hold steady even as species diversity collapses.
Who’s at risk?
Only species that split relatively recently can hybridise, because evolution ultimately makes them too dissimilar to mate. However, some fish species can still interbreed 20 million years after splitting, Seehausen says.
We don’t know how many species are at risk. Based solely on how evolutionarily different they are, 88 per cent of fish species could still hybridise with at least one other, as could 55 per cent of mammals (Molecular Ecology, DOI: 10.1111/j.1365-294X.2007.03529.x).
Admittedly, those figures overestimate the real risk, because only neighbouring species can breed. “Things that live on different continents don’t hybridise,” Seehausen points out. He says ecosystems where many species have recently diversified, such as large lakes and rivers, are most at risk.
Journal reference: Nature, DOI: 10.1038/nature10824
The surface of the earth is a mosaic of natural and cultural landscapes. Each patch forms part of a diverse, yet interconnected set of landscapes ranging from relatively pristine natural ecosystems to completely human-dominated urban and industrial areas. The mosaic is not static, but shifts due to changes from human activities and natural phenomena. Changes to the surface of the earth from weathering, fire, glaciations, and other natural events have always occurred.
Since the appearance of humans on the earth, natural landscapes have routinely been converted to human-dominated areas for cultivation, occupation, and other economic activity. The scale of landscape change ranges from local (conversion of a farm into a suburb) to regional (conversion of tall grass prairie ecosystems to agriculture) to global (climate change). Climate change is widely recognized as having the potential to profoundly transform both ecological and cultural landscapes.
Increasing interest in understanding landscape changes has led to the emergence of a new field of study: land change science. Land change scientists treat the complex dynamics of land cover and land use as a coupled human-environment system and develop new concepts, approaches, and tools for improved understanding and management of land resources. (Turner et al., 2007)
The Land Change Science Program
The Land Change Science (LCS) Program is focused on understanding the types, rates, causes, and consequences of land change. LCS scientists conduct studies of the land cover and disturbance histories of the United States and overseas areas to determine the reasons for and the impacts of land-surface change. They seek to answer questions such as "What kinds of changes are occurring and why?" and "What are the impacts of these changes on the environment and society?" Recording any type of land change requires the characterization of land features at two or more times. However, a long-term, scientific perspective of land change requires continued, periodic monitoring of the land surface.
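Characterizing land features at two or more times reduces, in the simplest raster case, to a per-pixel comparison of two classified maps. The toy sketch below illustrates that idea with plain Python lists; the class codes are made up for the example, and real LCS work operates on Landsat-scale imagery with GIS and remote sensing tools rather than anything this small.

```python
# Toy per-pixel change detection between two classified land cover maps.
# Class codes are hypothetical: 1 = forest, 2 = cropland, 3 = urban.

from collections import Counter

def change_map(t1, t2):
    """Return the (from, to) transition for every pixel whose class changed."""
    assert len(t1) == len(t2), "maps must cover the same pixels"
    return [(a, b) for a, b in zip(t1, t2) if a != b]

map_1992 = [1, 1, 1, 2, 2, 1, 1, 2, 1]
map_2001 = [1, 3, 1, 2, 3, 1, 2, 2, 1]

# Tally how many pixels underwent each kind of transition.
transitions = Counter(change_map(map_1992, map_2001))
print(transitions)
```

A long-term perspective simply extends this to a time series of maps, tallying transitions between each consecutive pair of dates.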
LCS researchers at USGS actively monitor and investigate several key aspects of land change, including:
Land Use and Land Cover — LCS staff develop and maintain the National Land Cover Database (NLCD), the standard land cover map and database for the nation. Other scientists study long-term changes in land cover associated with climate variability, fire disturbance, and changes in management activities, such as intensifying biofuel development and irrigated agriculture. Landsat satellite imagery is a major source of information for land cover and land use analysis.
Carbon — LCS staff lead a major national assessment, called LandCarbon, to identify and quantify biological stocks of carbon and the fluxes of carbon between terrestrial, aquatic, and atmospheric pools. This rigorous assessment involves complex biogeochemical modeling based on land cover data at fine spatial resolutions over very large regions. The assessment creates a starting point for future land management decision-making. Globally, LCS has a leadership role in SilvaCarbon, an effort to connect forest carbon sequestration experts in the USGS and other organizations with scientists in developing countries who are responsible for quantifying their nation's forest biomass carbon stocks and fluxes.
Ecosystems and Their Benefits — Ecosystems provide goods (food, fiber, water, fuel, etc.) and services (flood control, water provision, maintenance of soil fertility, pollination, etc.) that are crucial for human welfare. By providing such vital functions, ecosystems become a key focus of scientific investigation. To document changes in ecosystem extent and function, it is important to know what ecosystems are found on the landscape, where they are located, and how well they are functioning. LCS scientists participate in national and global ecosystem mapping to identify and delineate these ecosystems, research their production of goods and services, and develop methods to assess their economic and social values. LCS scientists also conduct research on incorporating this information into management planning and ecological restoration (e.g. Chesapeake Bay Ecosystem) decisions.
Risk and Vulnerability — The objective of LCS risk and vulnerability studies is to develop quantitative, qualitative, and geospatial methods and decision support tools that characterize and communicate the vulnerability of both human communities and natural ecosystems to natural hazards. Research approaches include assessing both episodic hazards (tsunamis, earthquakes, volcanoes, wildfires) and longer-term hazards (coastal erosion, sea level rise).
Land Change Science Tools and Approaches
All of the above studies rely on an applied geographic approach to emphasize and then correlate location, spatial relationships, and regional characteristics. LCS scientists use remotely sensed imagery, geographic information systems (GIS) and global positioning systems (GPS) to analyze and integrate data. A variety of image types (optical, radar, lidar, multispectral) are used with a range of spatial resolutions. LCS scientists also use quantitative models to characterize land surface features, develop possible scenarios of future conditions, and conduct integrative, holistic assessments of land cover change. In addition, LCS scientists are developing decision-support tools that incorporate scientific information in resource allocation decisions.
For additional information about the program: Land Change Science Information Sheet (5.8 Mb, PDF)
Each lung is invested by an exceedingly delicate serous membrane, the pleura, which is arranged in the form of a closed invaginated sac. A portion of the serous membrane covers the surface of the lung and dips into the fissures between its lobes; it is called the pulmonary pleura (or visceral pleura). The visceral pleura is derived from mesoderm.
The visceral pleura is attached directly to the lungs, as opposed to the parietal pleura, which lines the opposing wall of the thoracic cavity. The space between these two delicate membranes is known as the intrapleural space (pleural cavity). Contraction of the diaphragm makes the pressure within this space more negative and forces the lungs to expand; inhalation is therefore active, while quiet exhalation is passive. Inhalation can be made more forceful through contraction of the external intercostal muscles, which expand the rib cage, add to the negative pressure within the intrapleural space, and cause the lungs to fill with air.
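The pressure mechanics of breathing follow Boyle's law (P1·V1 = P2·V2 at constant temperature): when diaphragm contraction enlarges the thoracic cavity, the pressure inside the lungs falls below atmospheric pressure and air flows in until the pressures equalize. The numbers below are rounded illustrative values, not physiological measurements.

```python
# Illustrative Boyle's-law calculation for inhalation.
# Pressures in kPa, volumes in litres; values are rounded examples.

ATMOSPHERIC = 101.325  # standard atmospheric pressure, kPa

def pressure_after_expansion(p1, v1, v2):
    """Boyle's law at constant temperature: P1*V1 = P2*V2, so P2 = P1*V1/V2."""
    return p1 * v1 / v2

# Lung volume at rest, then slightly expanded by diaphragm contraction.
p2 = pressure_after_expansion(ATMOSPHERIC, 2.4, 2.5)
print(round(p2, 2))       # pressure inside the lungs falls
print(p2 < ATMOSPHERIC)   # so air flows in until pressures equalize
```

The same relation run in reverse describes quiet exhalation: as the diaphragm relaxes and lung volume shrinks, intrapulmonary pressure rises above atmospheric and air flows out passively.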