Darwinism is a theory of biological evolution developed by Charles Darwin and others, stating that all species of organisms arise and develop through the natural selection of small, inherited variations that increase the individual's ability to compete, survive, and reproduce. It is also called Darwinian theory. The term originally included the broad concepts of transmutation of species or of evolution which gained general scientific acceptance when Charles Robert Darwin published On the Origin of Species, including concepts which predated Darwin's theories; it subsequently came to refer to specific concepts such as natural selection, the Weismann barrier, or, in genetics, the central dogma of molecular biology. Though it usually refers strictly to biological evolution, the term has been used by creationists to refer to the origin of life, and has even been applied to concepts of cosmic evolution, both of which have no connection to Darwin's work. It is therefore considered the belief in and acceptance of Darwin's, and his predecessors', work in place of other theories, including divine design and extraterrestrial origins.

The term was coined by Thomas Henry Huxley in April 1860 and was used to describe evolutionary concepts in general, including earlier concepts such as Spencerism. Many of the proponents of Darwinism at that time, including Huxley, had reservations about the significance of natural selection, and Darwin himself gave credence to what was later called Lamarckism. The strict neo-Darwinism of August Weismann gained few supporters in the late 19th century. During this period, which has been called "the eclipse of Darwinism", scientists proposed various alternative evolutionary mechanisms which eventually proved untenable. The development of the modern evolutionary synthesis from the 1930s to the 1950s, incorporating natural selection with population genetics and Mendelian genetics, revived Darwinism in an updated form.

While the term has remained in use amongst scientific authors when referring to modern evolutionary theory, it has increasingly been argued that it is an inappropriate term for that theory. For example, Darwin was unfamiliar with the work of Gregor Mendel, and as a result had only a vague and inaccurate understanding of heredity. He naturally had no inkling of yet more recent developments and, like Mendel himself, knew nothing of genetic drift, for example. In the United States, the term "Darwinism" is often used by creationists as a pejorative term in reference to beliefs such as atheistic naturalism, but in the United Kingdom the term has no negative connotations, being freely used as a shorthand for the body of theory dealing with evolution, and in particular, with evolution by natural selection.

Conceptions of Darwinism

While the term Darwinism had been used previously to refer to the work of Erasmus Darwin in the late 18th century, the term as understood today was introduced when Charles Darwin's 1859 book On the Origin of Species was reviewed by Thomas Henry Huxley in the April 1860 issue of the Westminster Review.
Having hailed the book as "a veritable Whitworth gun in the armoury of liberalism" promoting scientific naturalism over theology, and praising the usefulness of Darwin's ideas while expressing professional reservations about Darwin's gradualism and doubting whether it could be proved that natural selection could form new species, Huxley compared Darwin's achievement to that of Copernicus in explaining planetary motion: What if the orbit of Darwinism should be a little too circular? What if species should offer residual phenomena, here and there, not explicable by natural selection? Twenty years hence naturalists may be in a position to say whether this is, or is not, the case; but in either event they will owe the author of "The Origin of Species" an immense debt of gratitude... And viewed as a whole, we do not believe that, since the publication of Von Baer's "Researches on Development," thirty years ago, any work has appeared calculated to exert so large an influence, not only on the future of Biology, but in extending the domination of Science over regions of thought into which she has, as yet, hardly penetrated.

Another important evolutionary theorist of the same period was Peter Kropotkin who, in his book Mutual Aid: A Factor of Evolution, advocated a conception of Darwinism counter to that of Huxley. His conception was centred around what he saw as the widespread use of co-operation as a survival mechanism in human societies and animals. He used biological and sociological arguments in an attempt to show that the main factor in facilitating evolution is cooperation between individuals in free-associated societies and groups. This was in order to counteract the conception of fierce competition as the core of evolution, which provided a rationalisation for the dominant political, economic and social theories of the time, and the prevalent interpretations of Darwinism, such as those by Huxley, whom Kropotkin targeted as an opponent. Kropotkin's conception of Darwinism could be summed up by the following quote: In the animal world we have seen that the vast majority of species live in societies, and that they find in association the best arms for the struggle for life: understood, of course, in its wide Darwinian sense – not as a struggle for the sheer means of existence, but as a struggle against all natural conditions unfavourable to the species. The animal species, in which individual struggle has been reduced to its narrowest limits, and the practice of mutual aid has attained the greatest development, are invariably the most numerous, the most prosperous, and the most open to further progress. The mutual protection which is obtained in this case, the possibility of attaining old age and of accumulating experience, the higher intellectual development, and the further growth of sociable habits, secure the maintenance of the species, its extension, and its further progressive evolution. The unsociable species, on the contrary, are doomed to decay. — Peter Kropotkin, Mutual Aid: A Factor of Evolution (1902), Conclusion.

"Darwinism" soon came to stand for an entire range of evolutionary (and often revolutionary) philosophies about both biology and society.
One of the more prominent approaches, summed up in the 1864 phrase "survival of the fittest" by the philosopher Herbert Spencer, later became emblematic of Darwinism even though Spencer's own understanding of evolution (as expressed in 1857) was more similar to that of Jean-Baptiste Lamarck than to that of Darwin, and predated the publication of Darwin's theory in 1859. What is now called "Social Darwinism" was, in its day, synonymous with "Darwinism" — the application of Darwinian principles of "struggle" to society, usually in support of anti-philanthropic political agendas. Another interpretation, one notably favoured by Darwin's half-cousin Francis Galton, was that "Darwinism" implied that because natural selection was apparently no longer working on "civilized" people, it was possible for "inferior" strains of people (who would normally be filtered out of the gene pool) to overwhelm the "superior" strains, and voluntary corrective measures would be desirable — the foundation of eugenics. In Darwin's day there was no rigid definition of the term "Darwinism", and it was used by opponents and proponents of Darwin's biological theory alike to mean whatever they wanted it to in a larger context. The ideas had international influence, and Ernst Haeckel developed what was known as Darwinismus in Germany, although, like Spencer's "evolution", Haeckel's "Darwinism" had only a rough resemblance to the theory of Charles Darwin, and was not centred on natural selection at all. In 1886 Alfred Russel Wallace went on a lecture tour across the United States, starting in New York and going via Boston, Washington, Kansas, Iowa and Nebraska to California, lecturing on what he called "Darwinism" without any problems.

The term Darwinism is often used in the United States by promoters of creationism, notably by leading members of the intelligent design movement, as an epithet to attack evolution as though it were an ideology (an "ism") of philosophical naturalism, or atheism. For example, Phillip E. Johnson makes this accusation of atheism with reference to Charles Hodge's book What Is Darwinism?. However, unlike Johnson, Hodge confined the term to exclude those like Asa Gray who combined Christian faith with support for Darwin's natural selection theory, before answering the question posed in the book's title by concluding: "It is Atheism." Creationists use the term Darwinism, often pejoratively, to imply that the theory has been held as true only by Darwin and a core group of his followers, whom they cast as dogmatic and inflexible in their belief. In the 2008 movie Expelled: No Intelligence Allowed, which promotes intelligent design, Ben Stein refers to scientists as Darwinists. Reviewing the film for Scientific American, John Rennie says "The term is a curious throwback, because in modern biology almost no one relies solely on Darwin's original ideas... Yet the choice of terminology isn't random: Ben Stein wants you to stop thinking of evolution as an actual science supported by verifiable facts and logical arguments and to start thinking of it as a dogmatic, atheistic ideology akin to Marxism."

However, Darwinism is also used neutrally within the scientific community to distinguish modern evolutionary theories, sometimes called "Neo-Darwinism", from those first proposed by Darwin. Darwinism is also used neutrally by historians to differentiate his theory from other evolutionary theories current around the same period.
For example, Darwinism may be used to refer to Darwin's proposed mechanism of natural selection, in comparison to more recent mechanisms such as genetic drift and gene flow. It may also refer specifically to the role of Charles Darwin as opposed to others in the history of evolutionary thought — particularly contrasting Darwin's results with those of earlier theories such as Lamarckism or later ones such as the modern synthesis.

In political discussions in the United States, the term is mostly used by its enemies. "It's a rhetorical device to make evolution seem like a kind of faith, like 'Maoism,'" says Harvard biologist E. O. Wilson. He adds, "Scientists don't call it 'Darwinism'." In the United Kingdom the term often retains its positive sense as a reference to natural selection, and for example Richard Dawkins wrote in his collection of essays A Devil's Chaplain, published in 2003, that as a scientist he is a Darwinist.

In his 1995 book Darwinian Fairytales, Australian philosopher David Stove used the term "Darwinism" in a different sense from the above examples. Describing himself as non-religious and as accepting the concept of natural selection as a well-established fact, Stove nonetheless attacked what he described as flawed concepts proposed by some "Ultra-Darwinists". Stove alleged that by using weak or false ad hoc reasoning, these Ultra-Darwinists used evolutionary concepts to offer explanations that were not valid (e.g., Stove suggested that the sociobiological explanation of altruism as an evolutionary feature was presented in such a way that the argument was effectively immune to any criticism). Philosopher Simon Blackburn wrote a rejoinder to Stove, though a subsequent essay by Stove's protégé James Franklin suggested that Blackburn's response actually "confirms Stove's central thesis that Darwinism can 'explain' anything."

See also:
- Darwinism (book)
- Modern evolutionary synthesis
- Neural Darwinism
- Social Darwinism
- Darwin Awards
- Pangenesis (Charles Darwin's hypothetical mechanism for heredity)
- Universal Darwinism
- History of evolutionary thought

References:
- Wilkins, John (1998). "How to be Anti-Darwinian". TalkOrigins Archive. Retrieved 19 June 2008.
- "Expelled Exposed: Why Expelled Flunks... on what evolution explains". National Center for Science Education. Retrieved 22 December 2008.
- "Galactic Darwinism". Astrobiology Magazine, based on a European Southern Observatory release (9 December 2006). Retrieved 22 December 2008.
- Huxley, T. H. (April 1860). "Art. VIII. Darwin on the Origin of Species". Westminster Review, pp. 541–570. Retrieved 19 June 2008. "What if the orbit of Darwinism should be a little too circular?"
- Bowler 2003, pp. 179, 222–225, 338–339, 347.
- Scott, Eugenie C.; Branch, Glenn (16 January 2009). "Don't Call it "Darwinism"". Evolution: Education and Outreach (New York: Springer) 2 (1): 90. doi:10.1007/s12052-008-0111-2. ISSN 1936-6434. Retrieved 17 November 2009.
- Judson, Olivia (15 July 2008). "Let's Get Rid of Darwinism". New York Times.
- Sclater, Andrew (June 2006). "The extent of Charles Darwin's knowledge of Mendel". Journal of Biosciences (Bangalore: Springer India / Indian Academy of Sciences) 31 (2): 191–193. doi:10.1007/BF02703910. PMID 16809850. Retrieved 3 January 2009.
- Moran, Laurence (1993). "Random Genetic Drift". TalkOrigins Archive. Retrieved 27 June 2008.
- Hanes, Joel. "What is Darwinism?". TalkOrigins Archive. Retrieved 19 June 2008.
- Browne 2002, pp. 376–379.
- "The Huxley File § 4: Darwin's Bulldog". Retrieved 29 June 2008.
- Browne 2002, pp. 105–106.
- "Evolution and Wonder: Understanding Charles Darwin". Speaking of Faith, American Public Media. Retrieved 27 July 2007.
- Scott, Eugenie C. (2008). "Creation Science Lite: "Intelligent Design" as the New Anti-Evolutionism". In Godfrey, Laurie R.; Petto, Andrew J. (eds.), Scientists Confront Creationism: Intelligent Design and Beyond. New York: W. W. Norton. p. 72. ISBN 0-393-33073-7.
- Johnson, Phillip E. "What is Darwinism?". Retrieved 4 January 2007.
- Ropp, Matthew. "Charles Hodge and His Objection to Darwinism". Retrieved 4 January 2007.
- Hodge, Charles. "What is Darwinism?". Retrieved 4 January 2007.
- Hodge, Charles (1874). What is Darwinism?. Scribner, Armstrong, and Company. OCLC 11489956.
- Sullivan, M. (2005). "From the Beagle to the School Board: God Goes Back to School". Impact Press. Retrieved 18 September 2008.
- "Ben Stein's Expelled: No Integrity Displayed". Scientific American.
- Newsweek, 28 November 2005.
- Sheahen, Laura. "Religion: For Dummies". BeliefNet.com interview about the 2003 book.
- Stove, David (1995). Darwinian Fairytales: Selfish Genes, Errors of Heredity and Other Fables of Evolution. Avebury. ISBN 1-85972-306-3.
- Blackburn, Simon (1996). "I Rather Think I Am a Darwinian". Philosophy 71: 605–616.
- Franklin, James (January 1997). "Stove's Anti-Darwinism". Philosophy 72 (279): 133–136.
- Bowler, Peter J. (2003). Evolution: The History of an Idea (3rd ed.). University of California Press. ISBN 0-520-23693-9.
- Browne, E. Janet (2002). Charles Darwin: Vol. 2, The Power of Place. London: Jonathan Cape. ISBN 0-7126-6837-3.
- Gopnik, Adam (2009). Angels and Ages: A Short Book About Darwin, Lincoln, and Modern Life. London: Quercus. ISBN 978-1-84724-929-6.

External links:
- Universal Darwinism
- Nikolai Danilevsky (1885–1889). Darwinism: A Critical Study (Дарвинизм. Критическое исследование), at Runivers.ru (in Russian, DjVu format).
- Stanford Encyclopedia of Philosophy entry
- What is Darwinism?
Can you teach a baby to read? It’s a mind-bending thought, like the idea of getting your cat to play the Moonlight Sonata on the piano. It’s also a notion that’s sold many instructional DVDs, books, and flashcards. But a rigorous new experimental study – led by an expert in early childhood literacy – has found that seven months of training with a commercial baby reading program did not teach babies to read. In fact, it didn’t even seem to teach babies important pre-literacy skills, like the ability to recognize if a book is upside-down. And all this happened despite positive impressions that parents had of the results. In exit interviews, some parent participants told researchers they believed their babies were learning to read. As lead author Susan Neuman says in a press release, “It’s clear that parents have great confidence in the impact of these products on their children. However, our study indicates this sentiment is misplaced.” How did the study work? Neuman and her colleagues wanted to see what happens when you try to teach babies for many months using lots of instructional media: DVDs, picture flashcards, word flashcards, and picture books. So they recruited the parents of 117 babies, aged 10-18 months, and randomly assigned half of them to use an instructional, multimedia “baby reading” product. Parents were coached on the procedures, which included watching a DVD with the baby twice each day, pointing out words on the screen whenever possible, and spending an additional 45 minutes a day engaging the baby with word cards, picture cards, flip books, and word-related games. The researchers checked up on parents twice a week to track compliance, and they measured the babies’ progress with monthly parent questionnaires and four laboratory visits. The parents’ reports were unavoidably subjective; the lab visits, much less so. Since you can’t expect babies to read out loud–many babies were still learning to talk–researchers used an eye-tracking technique to figure out what babies knew. For example, in one test they would show a baby two different words, like “cat” and “dog,” and then say to the baby, “Look at ‘cat’!” If the baby looked longer at the correct word, that was interpreted as evidence that the baby recognized the word. But babies in the training group showed no visual preference for either word, even when these words had been heavily featured in the reading program. Nor did babies show evidence for having developed important pre-literacy skills, like an understanding of the sounds that letters make. Though babies were slightly more likely to look askance at pseudo-words containing “illegal” characters (e.g., “p#be”), they didn’t distinguish between regular writing and backwards (mirror) writing. As noted above, they didn’t even seem to recognize when books and words were presented upside-down. Why the failures? We might wonder if parents were inconsistent teachers, but the researchers found no link between a parent’s fidelity to the program and a baby’s outcome. We might ask if babies were nervous or distracted during the lab tests. But babies were seated with their mothers and given breaks if they got fussy, and researchers controlled for things like the babies’ baseline tendencies to look right or left. We might question the meaning of looking times, but it wasn’t merely that babies didn’t show a preference for one word or another. It was also that there were no differences between the “reading” babies and the control babies. 
Seven months of training seemed to have no impact on the way babies responded. So the study authors are persuaded. “Although we cannot say with full assurance that infants at this age cannot learn printed words, we can confidently say that they did not learn printed words from a product of this nature.” Parents, suggest the researchers, are better off investing time in adult-child conversation, reading books, and play. These are the activities “that have strong empirical support on children’s affect, cognitive development, early reading skills, and, in the long run, reading performance.” Have you ever tried, or been tempted to try, an early literacy program?
Macchina di Santa Rosa

The Macchina of Santa Rosa is a 30 metre high tower, which is rebuilt every year in the months of July and August in honor of Saint Rose of Viterbo, the patron saint of the city of Viterbo, Italy. Every year on the evening of 3 September 100 men called "Facchini di Santa Rosa" (porters of Saint Rose) hoist the Macchina and carry it through the very narrow streets and squares of the medieval town centre. The whole route is a little more than 1 km. The procession is an important event in Viterbo and attracts thousands of people. Today, the procession is included in the UNESCO Representative List of the Intangible Cultural Heritage of Humanity.

The celebration consists of two distinct parts. On the afternoon of 2 September, a reliquary containing the heart of Santa Rosa is carried in procession, accompanied by people in period costumes of the 14th through the 19th centuries. The transport of the Machine of Santa Rosa takes place the following evening. The term "machine" is borrowed from classical Greek theater.

The transport of the Macchina dates back to the transfer of the body of Saint Rose of Viterbo. In 1258, six years after her death, her body was moved at the wish of Pope Alexander IV from the former church Santa Maria del Poggio to the church Santa Maria delle Rose (today the pilgrimage chapel of Saint Rose). It is possible that originally an illuminated statue of St. Rose on a canopy was carried in procession. Guilds were very active in the processions in the 14th century. From 1654 through 1663 the procession was suspended due to the plague. The first "machine" was probably designed by Count Sebastian Gregory Fani in 1686. The Civic Museum of Viterbo has a collection of sketches of the machine dating to 1690. In the 18th century the noble families of Viterbo sponsored lavish machines of Saint Rose.

In 1790 the machine fell during the move. In 1801 the cries of a spectator robbed of her jewels by some pickpockets in the Piazza Fontana Grande panicked some cavalry horses; twenty-two people in the crowd died in the ensuing confusion, and later that night the machine caught fire in Piazza delle Erbe. Because of these events, the transport was temporarily banned by Pope Pius VII, only to resume around 1810. In 1814 the machine tilted backwards and a few porters died. In 1893 pouring rain prevented the transport, which proved fortunate when it was later discovered that some anarchists were planning to throw bombs at the machine. The transport was suspended with the outbreak of World War I, but resumed in 1918. From 1924 to 1951 (except for the interruption caused by the Second World War) the Machine of Santa Rosa was designed and constructed by Virgilio Papini, whose family had a long history of building the machines. In 1967 a new design did not get farther than the end of Via Cavour, due to either excessive weight, or height, or tired Facchini. On the occasion of the visit of Pope John Paul II, a special transport was organized on 27 May 1984. On 6 September 2009, Pope Benedict XVI viewed the new Macchina of Santa Rosa, Fiore del Cielo, in front of the pilgrimage chapel. In April 2015 the "Flower of Heaven" was erected at the Milan Expo; the new tower for 2015 is called "Gloria".

Until a few decades ago, the Machine of Santa Rosa was built with papier mâché mounted on a wooden frame. Today, that system has been abandoned and replaced with various materials, such as resin, plastic and glass fiber, supported by a framework of steel pipes.
Every five years a design competition is launched for a new Macchina. The guidelines of the competition ask for a 28 m high tower, which is measured from the shoulders of the porters. The construction's maximum weight is to be less than 5 tonnes and the maximum width 4.3 metres. This is to respect the narrow parts of the historical centre, where eaves and balconies could strike the Macchina during transportation.

The appearance of the Macchina has changed throughout history. The altar-like constructions of the 18th century developed into constructions similar to church towers, and in the second half of the 20th century these developed into 30 m high sculptural towers. While originally the towers were mainly made from papier mâché, today materials like steel, aluminium and fibre glass are used to achieve a light and fireproof construction.

The current model (since 2009) is called "Fiore del Cielo" (Italian for "Flower of Heaven"). It was designed by the architecture office Architecture and Vision (architects: Arturo Vittori and Andreas Vogler). The design is characterized by the three golden helix surfaces which grow upwards. In contrast to the former Macchina, many innovations were introduced. Some of these are the golden color scheme and more than 1200 computer-controlled LED lights, which illuminate the handmade textile roses. A special scenography was developed, which also includes a rain of rose petals on the spectators at a certain stop.

The Sodality Facchini of Santa Rosa was founded in 1978 to keep alive the age-old tradition and ensure it is carried out in a safe and responsible manner. The Sodality also promotes cultural activities, tourism, and mutual aid for its members. The Sodality was one of the signatories to the UNESCO project application. It is based at the Museum of the Society Facchini of Santa Rosa, which opened in 1994. The Facchini wear a white uniform with a red sash tied at the waist and a headdress covered in leather. To be selected as one of the Facchini is considered a particular honor, and one must pass a test of strength, carrying a 160-pound box on his shoulders for at least seventy meters without stopping. Before setting out they receive a special blessing. At around 9 pm the Facchini lift the 5 ton Macchina and start the first leg of the passage to the cheers of onlookers. For most of the route, the Facchini walk without any visual aid, directed by the capofacchini and guides posted at the four corners of the machine.

The transport of the Macchina of Santa Rosa is the annual main event of the city of Viterbo. Already in the afternoon the streets of the historical centre fill with citizens and visitors. The transport begins at the Porta Romana, where the assembled Macchina stands in a scaffold covered with curtains. At around 8 pm the 800 candles of the Macchina are lit by the local fire brigade. The street lighting is switched off completely. During the transport there are five breaks, during which the Macchina is set down on special frames. The stopping places are:
- Piazza Fontana Grande
- Piazza del Plebiscito (in front of the guildhall)
- Piazza delle Erbe
- Corso Italia (in front of the church Santa Maria del Suffragio)
- Piazza del Teatro

The last stretch up to the church of Santa Rosa has a remarkable incline. In order to overcome this slope, the Macchina is pulled by ropes and additional people, and is eventually placed in front of the pilgrimage chapel. The Macchina is exhibited there for some days after the event.
- "The Machine of Santa Rosa", il Portale del Turismo a Viterbo e Provincia - Sodalizio Dei Facchini - "Trasporto della Macchina di Santa Rosa", City of Viterbo - Galeotti, Mauro. "La Machina di Santa Rosa" - "The machinery of Santa Rosa comes to Milan Expo", Chronicle (Milan), 17 April 2015 - "The Machine of Santa Rosa Viterbo", City of Viterbo |Wikimedia Commons has media related to Macchina di Santa Rosa.|
Palamedesz (Stevaerts, Stevens) was a Dutch painter and the younger brother and pupil of Anthonie Palamedesz. He was a specialist in military encampments and battle scenes. He was born in London, where his father, a gem cutter, was in the service of King James I. The family had come from Delft. After the family returned to Delft, Palamedes joined the Guild of Saint Luke in 1627. Although he was short, hunchbacked, and ugly, he married the daughter of a wealthy Delft family in 1630. The couple had four children. In 1631 Palamedes is recorded in Antwerp, where he was portrayed by Van Dyck. He died in Delft and was buried there. His cavalry battles are related to the work of the Haarlem painter Esaias van de Velde.

Source: Sphinx Fine Art
Auxiliary Verbs (presentation transcript)

be
As a main verb: My father is an English teacher. There is a vase on the table. There are many books on the desk. I want to be a doctor.
As an auxiliary: He is writing a letter. Many jobs are done by computers.

have
As a main verb: Does he have a car? Yes, he does. I had a cup of coffee then. He had to go there then. I'll have him wash my car.
As an auxiliary: I have lived here for three years. He has been ill since last week. Have you seen it? Yes, I have.

do / does / did
As a main verb: What are you doing? What do you do?
As an auxiliary: I don't like it. Do you love me? Yes, I do. He runs faster than you do.

can, can't; could, couldn't
He can do it. = He is able to do it. His son could read when he was four. Can I use your car? Could I use your car? Could you tell me?
Conjecture about the present (auxiliary + verb): Can the story be true? The story can't be true.
Conjecture about the past (auxiliary + have + past participle): He can't have said so.
He cannot but laugh. (cannot help doing) He cannot help laughing. (cannot help doing) You cannot be too careful when crossing the road. (cannot be too ~, i.e. no amount is excessive) Please come as soon as you can. (= as ~ as possible)

may, might
You may go now. May I sit down? He works hard in order that he may succeed. Might I use your car? He worked hard so that he might succeed. He may have said so. May you succeed! May you be happy forever. You may well be proud of her. You may as well do it now.

must
You must obey the rules. You had to obey the rules. You mustn't smoke inside. I had to go there yesterday. You must be tired. It must have rained last night, for the ground is wet. Must I go now? Yes, you must. (obligation) No, you needn't. (no obligation)

should
You should get up now. If it should rain tomorrow, he won't come. You should have called me last night. (should have, but didn't) You shouldn't have said so. (shouldn't have, but did)

ought to / ought not to
We ought to do it. You ought not to do it. Ought I to go there? Yes, you ought (to). You ought to have told me. (ought to have, but didn't) You ought not to have done that. (ought not to have, but did)

would / wouldn't
I said I would do it. Would you please come in? He would often get up early. = He used to get up early. Would that I were young! = I wish I were young! I would like to go home. = I want to go home. I would rather stay home than go out. = I would stay home rather than go out. = I prefer staying home to going out. = I prefer to stay home rather than go out. = I prefer to stay home instead of going out.

need / needn't; dare / daren't
He doesn't need to go now. = He needn't go now. Does he need to go now? Yes, he does. No, he doesn't. Need he go now? Yes, he must. No, he needn't. He doesn't dare to go alone. He dare not go alone. How dare you say so to me?

had better / had better not
You had better leave now. You had better not do it again. Had I better see a doctor? Yes, you'd better. No, you'd better not.
These days it’s easy to think of Tibetan Buddhism as an international religion. We usually see this as something that came about in the second half of the twentieth century, when so many Tibetan lamas fled the country. Before that time, Tibetan culture is often presented as if it was enclosed within the mountain fastness of Tibet, taking its own path in splendid isolation. But if you know a bit of history, this picture doesn’t look quite right. Tibetan Buddhism was very popular at the courts of the Mongols and the Manchus, becoming for centuries the religion of choice for the ruling classes in China. And the religion of the Mongolian people is also essentially Tibetan, as a result of great missionary efforts on the part of Tibetan lamas. And then there are our Dunhuang manuscripts. Dunhuang was, of course, located at the northeastern end of Great Tibet, the old Tibetan empire, and even after the fall of the empire many aspects of Tibetan culture remained. But while neighbouring areas like Tsongka and Liangzhou had a large Tibetan population, the residents of Dunhuang were always mostly Chinese. So questions arise — Who actually wrote the Tibetan manuscripts found in Dunhuang? Who was practising Tibetan Buddhism there? There are no simple answers, but I think we can say that most of the time it wasn’t the Tibetans.

* * *

Let’s take an example. The Questions and Answers on Vajrasattva is one of the great tantric treatises of the early period of Tibetan Buddhism, written by Nyan Palyang, an important Tibetan tantric scholar of the ninth century. The questions are all about the Mahāyoga class of tantric practice (and shed some light on the early role of Dzogchen, as I discussed some time ago). This treatise was preserved in the Tibetan canons, as well as in several Dunhuang manuscripts, one of which (IOL Tib J 470) is signed by the scribe. Though the signature is written in Tibetan, it is certainly a Chinese name. The first part of it is a rank, rather than a proper name: phu shi, which is almost certainly Fushi 副使, an official title (found elsewhere in 10th-century Dunhuang) for the third-highest ranking district official in the Chinese government of tenth-century Dunhuang.

So, this Tibetan treatise on the practice of Mahāyoga meditation was copied down on an (incidentally, rather nice quality) scroll by a Chinese official at Dunhuang. Other Tibetan tantric manuscripts are written by Khotanese, by Uighur Turks, sometimes, even by Tibetans. Tibetan Buddhism was clearly by this time a genuine international religion, a cultural point of contact between a great many ethnically diverse people. How did this happen? Well, when the Tibetans occupied Dunhuang (and other non-Tibetan speaking areas) they forced the locals to learn Tibetan. Official correspondence and legal documents had to be written in Tibetan, and the mass-produced sutras that the emperor Ralpachen funded (see here) were mainly written by Chinese locals. After the Tibetans were kicked out, locals carried on using Tibetan to draw up contracts and write letters. The Tibetan language became a lingua franca for Central Asia — one of our Tibetan manuscripts, for example, is a letter from the (Chinese) ruler of Dunhuang to the (Khotanese) king of Khotan. And these locals, like our Chinese official, found that their second language, Tibetan, was also the ideal language for learning about the newest developments in tantric practice (which had only a very limited circulation in Chinese translation).

* * *

Why does this matter?
Well, consider that when the Mongol leader Godan Khan met Sakya Pandita in order to agree on Tibet's status vis-a-vis the Mongol Empire, they met at Liangzhou — a few days' journey from Dunhuang. The Mongols were inheritors of the Tangut practice of appointing Tibetan monks as imperial preceptors, and the Tanguts just formalized previous power relationships between Tibetan Buddhists and minor Chinese rulers in Dunhuang and the surrounding areas. Let me quote Christopher Beckwith, who says it better:

The Tibetan successor states in Liangzhou and neighboring areas were pro-Buddhist. When the Tanguts finally occupied this region they simply continued to support an already long-established Buddhist church. Furthermore, Tibetan monks were quite active at the court of the Sung dynasty in China, where they assisted in the translation of several important Buddhist texts into Chinese. When the Mongols finally supplanted the Tanguts, they did not disturb the existing Buddhist establishment; on the contrary, they supported it as strongly as their predecessors had.

And the tantric patron-priest model that the Mongols and Tibetans used to conceptualize their political relationship was hugely important for later Tibetan history. But rather than trying to draw a dubious causal line between the interest of a local Chinese official in Tibetan tantric Buddhism and Sino-Tibetan political relations, I will just express the hope that the Fushi's scroll (and others like it) can give us an insight into the otherwise forgotten lives of the ordinary(ish) people within these grand historical movements. As Leo Tolstoy wrote in War and Peace:

The movement of nations is caused not by power, nor by intellectual activity, nor even by a combination of the two as historians have supposed, but by the activity of all the people who participate in the events…

* * *

1. Christopher Beckwith. 1987. "The Tibetans in the Ordos and North China." In Christopher Beckwith (ed.), Silver on Lapis. Bloomington: The Tibetan Society. pp. 3–11.
2. Gray Tuttle. 2007. Tibetan Buddhists in the Making of Modern China. New York: Columbia University Press.
(NEW YORK) -- A new study shows that eating an extra apple a day could help keep more than just the doctor away. A review of the last 20 studies on diet and strokes, published in the journal Stroke, found that stroke risk drops by 32 percent with every additional 200 grams of fruit a person eats per day. For every additional 200 grams of vegetables, a person can also lower their risk of stroke by 11 percent.

Stroke is the fourth leading cause of death -- and a leading cause of disability -- in the United States. Chinese researchers delved into previously completed studies and determined that the positive impact of eating more fruits and vegetables was consistent in both men and women. Additionally, eating more fruits and vegetables showed a "dose-response relationship," meaning that the more of them a person ate, the lower their risk of stroke.
Growth and Development

Part of a baby's normal development is learning that separations from parents are not permanent. Young babies do not understand time, so they think a parent who walks out of the room is gone forever. Also, they have not yet developed the concept of object permanence - that a hidden object is still there, it just cannot be seen. Without these concepts, babies become anxious and fearful when a parent leaves their sight.

Separation anxiety usually begins around the age of 6 months. Babies may suddenly be afraid of familiar people such as babysitters or grandparents. Stranger anxiety is also common at this age, when they are fearful of unknown people. Separation anxiety is usually at its peak between 10 and 18 months. It typically ends by the time a child is 3 years old.

Babies experiencing separation anxiety fear that a parent will leave and not return. The fear may be worsened in the presence of a stranger. Typical responses of babies experiencing this normal phase of development may include the following:
- crying when you leave the room
- clinging or crying, especially in new situations
- awakening and crying at night after previously sleeping through the night
- refusal to go to sleep without a parent nearby

Children who feel secure are better able to handle separations. Cuddling and comforting your child when you are together can help him/her feel more secure. Other ways to help your child with separations include the following:
- Comfort and reassure your child when he/she is afraid.
- At home, help your baby learn independence by allowing him/her to crawl to other (safe) rooms for a short period of time by himself/herself.
- Tell your baby if you are going to another room and that you will be back, then come back.
- Plan your separations when your baby is rested and fed, rather than before a nap or meal.
- Introduce new people and places gradually, allowing your baby time to get to know a new care provider.
- Do not prolong good-byes, and have the sitter distract your baby or child with a toy as you leave.
- Introduce a transitional object such as a blanket or soft toy to help ease separations.
- For night awakenings, comfort and reassure your child by patting and soothing, but avoid letting your child get out of bed.
Yoshiaki (who held the ranks of Dewa no kami and Ukyô-daibu) was the eldest son of Mogami Yoshimori (1521-1590) and ruled a large domain in Dewa Province. He clashed repeatedly with the Date and Uesugi clans in the Shonai and Semboku areas to expand Mogami influence and became known as a capable leader. In 1590 he submitted to Toyotomi Hideyoshi and later, in the hopes of securing his clan's future, sent his daughter (Komahime) to be a concubine of Toyotomi Hidetsugu, Hideyoshi's nephew. Komahime had just arrived in Kyoto when Hidetsugu was suddenly ordered to commit suicide and his family was put to the sword. Yoshiaki, who felt much affection for his daughter, attempted without success to save Komahime and himself fell out of favor as a result of the affair (which pleased his many rivals in northern Japan). Yoshiaki was said to have been enraged and saddened by the event, and nursed a grudge against the Toyotomi that saw him drift towards Tokugawa Ieyasu. He sent his second son, Iechika, as a hostage to the Tokugawa and supported Ieyasu during the Sekigahara Campaign, where he assisted Date Masamune (his brother-in-law, despite their earlier feuds) in containing the activities of Uesugi Kagekatsu. After the Tokugawa victory at Sekigahara, the Mogami clan's income was increased from 330,000 to 570,000 koku. Yoshiaki was later compelled by Ieyasu, who perhaps desired the second son as heir, to order his eldest son to commit suicide. Yoshiaki's grandson, Yoshitoshi, would cost the family dearly by allowing internal problems to get out of control - in 1622 the Tokugawa ordered him to give up his fief in Dewa and move to Ômi Province, leaving him with a mere 10,000-koku income.
Here are just a few of the many references on large caged birds:

1) "Pet birds: historical and modern perspectives on the keeper and the kept," by Dr. David L. Graham, from the Journal of the American Veterinary Medical Association. In this article Dr. Graham points out that necropsies of "pet" birds often reveal evidence of "a life beset with stress." He partly attributes this stress to restriction or deprivation of natural behaviors and activity, including flight. His recommendations for keeping a captive bird happy far exceed the means most people are capable of providing. He writes, "It would seem that the ideal enclosure for a captive bird is one of such size and equipped with such internal furnishings that the bird would have no awareness of its captivity. Anything less is a compromise and acceptance, on the part of the keeper, that the kept may or will be subject to the stresses imposed by a lesser or greater degree of restriction of its normal behaviors."

2) "Who's a clever parrot, then?" from New Scientist. This article examines parrot intelligence and the ethics of keeping parrots captive. The author describes how parrots develop strong and complex social-emotional bonds and how they can develop behavioral problems when deprived of companionship. Not surprisingly, biologists who study parrots refer to them as "flying primates" and "honorary primates." According to James Serpell of the department of veterinary medicine at the University of Cambridge and its Companion Animal Research Group, parrots should not be kept as pets unless caretakers are prepared to devote as much time interacting with them as they would a human child. Serpell contends that parrots will suffer unless they are kept in large aviaries with other members of their own species. Charles Munn, a well-respected research zoologist interviewed in this article, believes that no one should be allowed to keep parrots over a certain size. He has compared keeping large parrots such as macaws to keeping wolves instead of domestic dogs.

3) "Considerations in selecting an appropriate pet bird" by Liz H. Wilson, CVT, from the Journal of the American Veterinary Medical Association, highlights the behaviors of different parrot species. She also notes that, "under the best of circumstances, parrots are difficult creatures to live with," and that few people will actually enjoy long-term cohabitation with them.

4) "Captive management of birds for a lifetime" by Susan L. Clubb, DVM, from the Journal of the American Veterinary Medical Association, explains that "many birds are given up within a few years of being brought into their owner's homes," and describes the common reasons cited for giving up "pet" birds. She notes that, "in many cases, owners simply do not have accurate expectations when they purchase parrots or have not been properly educated and made aware of normal psittacine behavior."

One of our primary concerns about the sale of birds is that very few people are capable of caring for the special needs of exotic birds or comprehend the seriousness of the commitment for the birds' life span. Schuppli and Fraser point out that "animal welfare may also be jeopardized if the owner loses interest in, or commitment to, the animal," and that "consistent care may also be jeopardized if animals are very long lived. For example, parrots in captivity can live 30-80 years, as do many primates."

Each year thousands of birds are sold into the pet trade to individuals who are under the mistaken impression that a bird will make an intriguing pet. Eventually, whether due to frustration, disinterest, or concern, many people attempt to rid themselves of the responsibility of caring for their birds, or reduce the quality of care provided. Once again, our assertion that birds do not make good "pets" is based on our belief that it is inherently cruel to keep an intelligent, social, and active animal adapted to flight confined and often isolated in a cage that is too small to facilitate normal behavior. We also question the ethical inconsistency of protecting our own native birds such as robins, blue jays, bald eagles, and cardinals on one hand while exploiting the native species of other countries on the other.

We hope that pet shops will continue to evaluate the appropriateness of selling any birds in their stores and in the interim immediately cease sales of the larger and more difficult to care for species. Considering the enclosed information, coupled with reports from bird rescue organizations, it seems that the species with the most "objectionable" behaviors, and those most frequently "surrendered" with physical or physiological problems, are (in descending order): cockatoos, macaws, amazons, African greys, and conures.

These references were compiled by Monica Engebretson of the Animal Protection Institute. [email protected]
The Nobel Prize is perhaps the most prestigious award in various fields such as medicine, chemistry, physics and peace. Here are the numbers behind the medals.

130 years ago, Swedish scientist and philanthropist Alfred Nobel signed his last will and testament, dying one year later. In the will, Nobel left the largest share of his fortune to a series of prizes in physics, chemistry, physiology or medicine, literature and peace.

Nobel Prize: Dynamite

The fortune created by Nobel came from inventing the likes of dynamite and from his dealings in other types of armaments and weaponry. The prizes are thought to have been created due to his reading of a premature (and highly critical) obituary. In 1968, Sveriges Riksbank added an economic field, completing the catalogue of Nobel Prizes as it stands today. With all but the peace awards limited to just three winners each, 573 prizes have been awarded since the first ceremony in 1901, with 900 laureates receiving a medal. These include several Irish people:

- William Campbell, Physiology or Medicine, 2015
- John Hume, Peace, 1998
- David Trimble, Peace, 1998
- Seamus Heaney, Literature, 1995
- Betty Williams, Peace, 1976
- Mairead Maguire, Peace, 1976
- Seán MacBride, Peace, 1974
- Samuel Beckett, Literature, 1969
- Ernest Walton, Physics, 1951
- George Bernard Shaw, Literature, 1925
- William Butler Yeats, Literature, 1923

Nobel Prize: Physics

Physics is the field that has received the most prizes, with its figure of 109 – one, two and three more than literature, chemistry and medicine respectively. Peace and economic sciences have received just 143 between them. Since the initial ceremony in 1901, there have been 49 occasions on which no prize was awarded. Most of these instances occurred during the periods of the two World Wars, 1914-1918 and 1939-1945. Interestingly, towards the end of the Second World War, the International Committee of the Red Cross and the 'father' of the UN, Cordell Hull, received Nobel Peace Prizes, in 1944 and 1945 respectively. In the statutes of the Nobel Foundation it says: "If none of the works under consideration is found to be of the importance indicated in the first paragraph, the prize money shall be reserved until the following year. If, even then, the prize cannot be awarded, the amount shall be added to the Foundation's restricted funds."

Nobel Prize: Ageless

The average age of prize winners is 59, with Irishman Campbell's award last October occurring in his 86th year. This came decades after his discovery led to the development of a drug called Avermectin, which has seen the creation of derivatives that have "radically lowered the incidence of river blindness and lymphatic filariasis", according to the Nobel Foundation. The youngest Nobel laureate is Malala Yousafzai, winning the 2014 peace prize at the age of 17. All medals are 18 carat recycled gold and some, according to Campbell, are sold on by recipients, rather than retained. Winners also receive a diploma and around €900,000, divided amongst those sharing the prize.
Mental health is partly controlled by our genetics and partly by our environment. While counseling can help you manage stress and anxiety, you can also take measures at home to keep things cool and calm so you have a place of respite from whatever ails you. Here are a few tips to help you keep a clean house and create a calming space.

Cut the clutter. If your decorating style can best be described as "junk drawer chic," you might want to reconsider the amount of stuff you keep in your home. As the Huffington Post explains, clutter can actually ruin your life. Not only can it increase your stress levels, but having a cluttered house can wreck your diet and make your home anything but a safe haven. Spend an afternoon purging your countertops and cabinets of things you really don't need, and only keep those items you can't live without.

Simplify your cleaning routine. After the clutter is under control, keep things clean by streamlining chores. Dust, mop, and vacuum all rooms at the same time to avoid having to get your cleaning tools in and out of the utility closet. And don't forget neglected areas, such as the walls and baseboards, behind the appliances, and inside the kitchen cabinets. Angie's List also suggests dusting the ceiling fan and above the kitchen cabinets, which will reduce allergens in the air.

Insist on assistance. If you live in a home with more than one person, more than one person should handle the cooking, cleaning, and home maintenance. Assign specific chores to each member of your household. Even something as simple as having your kids put the dishes away or fold the laundry will help keep your home clean and free up valuable time for you to focus your attention on self-care. Your family—or roommates—will no doubt offer up excuses as to why they can't help out. Be ready with a counter-argument, and be available to offer instructions on how to get things done. And remember, teaching your children how to care for a household puts them in a better position to manage their own homes as young adults.

Learn to love lists. The human brain is hardwired to like lists. Not only do lists help you keep track of what you have—and haven't—done, but keeping a list of daily, weekly, and monthly household chores can help you remain focused. Creating lists helps bring order to the chaos, and that can lower your stress levels. Keep separate lists for each room in the home and make sure they are visible. Another benefit of list-making is that it gives you a mental boost when you see tasks being marked off.

Don't forget the outdoors. You don't want to relax inside all year, so it's a good idea to focus on the exterior of your home as well. There are a few simple tricks to make sure your outdoor spaces remain maintained season to season. Start by clearing debris out of the yard at least once a week. Once leaves begin to fall, rake them onto tarps for mulching or composting instead of allowing them to pile up and mold throughout the winter. Bob Vila offers more tips on simple outdoor maintenance tasks. Don't forget to set up a cozy spot with a hammock or lounge chair where you can relax and enjoy the outdoors.

Once you have your cleaning routine down, consider changing your spaces to best accommodate your interests. For instance, if you like to read, carve out a corner of a room where you can cozy up with a good book, or rearrange your kitchen to free up counter space for baking. Whatever you decide, make it something just for you, and keep it neat and tidy.
Guest blog written by Alice Robertson. Laura is the owner of Clean & Clutter Free, a professional organizing service.
At a Glance
The origins of the Hinemaiaia Scheme go back to 1939, with a push to develop an electricity supply for Taupo. Power was first generated from Hinemaiaia A Station in 1952 and fed into the national grid from 1958. A second generator was commissioned in 1982, boosting output to 2 MW. Hinemaiaia B (1.3 MW) was commissioned in 1966 and Hinemaiaia C (2.8 MW) in 1982.
Environment and recreation
Hinemaiaia A Lake has grown into a rich wetland, protected through restricted public access. Below Hinemaiaia B Station there is a productive trout spawning area. For this stretch of river, we maintain a flow of three cubic metres per second, so long as there is sufficient flow from Hinemaiaia A Lake. This consistent water flow aids trout migration up the river and avoids any concerns about erosion due to irregular flows. We run a comprehensive trout trap-and-transfer programme, each year releasing 200 trout and around 35,000 fry above Hinemaiaia B Dam. We need to periodically dredge Hinemaiaia A Lake to ensure sufficient water storage. Scientific investigations that we commissioned showed no significant long-term environmental effects from the dredging. New resource consents for the Hinemaiaia Scheme were granted in 2003, with expiry scheduled for 2036.
1911 Encyclopædia Britannica, Volume 22: Recorder (music)
RECORDER, Fipple Flute or English Flute (Fr. flûte-à-bec, flûte douce, flûte anglaise or flûte à neuf trous; Ger. Block- or Plockflöte, Schnabelflöte, Langflöte; Ital. flauto dolce, flauto diritto), a medieval flute, blown by means of a whistle mouthpiece and held vertically in front of the performer like a clarinet. The recorder only survives in the now almost obsolete flageolet and in the so-called penny-whistle. The recorder consisted of a wooden tube, which was at first cylindrical or nearly so, but became, as the instrument developed and improved, an inverted cone. The whistle mouthpiece has been traced in almost prehistoric times in Egypt and other Oriental countries. The principle of the whistle mouthpiece is based on that of the simplest flutes without embouchure, like the Egyptian nay, with this modification, that, in order to facilitate the production of sound, the air current, instead of being directed through ambient air to the sharp edge of the tube (or the lateral embouchure in the modern flute), is blown through a chink directly into a narrow channel. This channel is so constructed within the mouthpiece that the stream of air impinges with force against the sharp edge of a lip or fipple cut into the pipe below the channel. This throws the air current into the state of vibration required in order to generate sound-waves in the main column of air within the tube. The inverted cone of the bore has the effect of softening the tone of the recorder still further, earning for it the name of flûte douce. Being so easy to play, the recorder always enjoyed great popularity in all countries until the greater possibilities of the transverse flute turned the tide against it. The want of character which distinguishes the timbre of the whistle-flute is due to the paucity of harmonic overtones in the clang. The recorder had seven holes in front and one at the back for the thumb. As long as the tube was made in one piece the lowest hole stopped by the little finger was generally made in duplicate to serve equally well for right- and left-handed players, the unused hole being stopped with wax. Being an open pipe, the recorder could overblow the octave and even the two following harmonics (i.e. the twelfth and second octave). The holes produced the diatonic scale, and by means of harmonics and cross-fingering the second and part of a third octave were obtained. The recorder is described and figured by Sebastian Virdung, Martin Agricola and Ottmar Luscinius in the 16th century, and by Michael Praetorius and Marin Mersenne in the 17th century. Praetorius mentions eight different sizes ranging from the small flute two octaves above the cornetto to the great bass. The lowest notes of the large flutes were provided with keys enclosed in perforated wooden or brass cases, which served to protect the mechanism, as yet somewhat primitive; the keys usually had double touch pieces to suit right- or left-handed players. There are at least two fine sets of recorders extant: one is preserved in the Germanisches Museum at Nuremberg, consisting of eight flutes in a case and dating from the 17th century; the other is the Chester set of four 18th-century instruments, which are fully described and illustrated in a paper by Joseph C. Bridge.
The recorder has been immortalized by Shakespeare in the famous scene in Hamlet (III. 2), which has been treated from the musical point of view in an excellent and carefully written article by Christopher Welch, the author of an equally valuable paper, “The Literature of the Recorder.” The small whistle-pipe used to accompany the tabor (Fr. galoubet; Ger. Stamentienpfeiff or Schwegel), which had but three holes, belongs to the same family as the recorder, but from its association with the tabor it acquired distinctive characteristics (see Pipe and Tabor). (K. S.)
- “The Chester Recorders” in Proc. Mus. Assoc., London, 1901.
- “Hamlet and the Recorder,” ibid., 1902 and 1898.
Into the Weeds
Frustrating, back-breaking, pokes and prickles: these are the vocabulary words of weeds. But what makes a weed a weed? Why do we cultivate some plants and pull others? What we think of as our backyard weeds can be more accurately labeled as invasive species. Invasive species are those, in this case plants, that are non-native to the area (aka your backyard) and grow quickly and reproduce rapidly (Natural Resources Conservation Service, 2017). But don't let that give all non-native plants a bad rap: being a non-native plant doesn't mean that it is an invasive species (Natural Resources Conservation Service, 2017). Like a square is a rectangle, but not all rectangles are squares: an invasive plant is a non-native, but not all non-native plants are invasive. But why are invasives so bad? Again, why do we cultivate some plants and pull others? Can't I just let my lawn turn from green grass to a field of dandelions? Well, dandelions are pretty cute and edible too, so yes you can. A weed is a weed only because it has been so named; as Shakespeare put it, "A rose by any other name would smell as sweet." But there are common characteristics among the plants that have been given the title of weed: they establish quickly, produce a lot of seed, their seed can lie dormant in the soil for many years, and they can establish in the most inhospitable lands. And the plants that have been given the additional title of "invasive" have even more of these deleterious qualities, with the ability to take over entire landscapes and smother out all other plants in their path. Invasive species create homogeneous landscapes, lowering biodiversity and ecosystem functions (Natural Resources Conservation Service, 2017). As you look across your garden you may see many beautiful non-native plants. And this doesn't mean you need to go pulling out all of those! Only the invasive ones, and I'm here to help. Canada Thistle, Field Bindweed, and Cheatgrass: these are three of the most common and toughest invasive species we have in my local region, Colorado, and they span the entire United States. The slideshow below will take you through each of their life history strategies, showing you how they got here, how their adaptations make them invasive, and, best of all, how we can use their own adaptations against them to conquer the weeds and take back your backyard! Now let's get into those weeds!
Morishita, D. (n.d.). WSSA: What Makes a Weed. Retrieved April 29, 2017, from http://wssa.net/wssa/weed/articles/wssa-what-makes-a-weed/
Natural Resources Conservation Service. (n.d.). Retrieved April 29, 2017, from https://www.nrcs.usda.gov/wps/portal/nrcs/detail/ct/technical/ecoscience/invasive/?cid=nrcs142p2_011124
through the weeds
But the issue is greater than that of your backyard. How can we join others in our community? If you live in Colorado, you can help by plotting GPS coordinates of where you find invasive species using the "Weed Watch" map and "Spotter Form" from the Colorado Department of Agriculture. Now that you've ripped, weeded, and combatted those pesky invasives, you may be wondering, "What's next?!" Well, you are a plant ninja, certified to fight the good fight against invasive plant species. In the comments below, tell me you're winning the fight, and let me know what invasive weeds you want me to focus on next! But...before you go off
Did you know? South Africans refer to edible indigenous plants as veldkos.
At its most elemental level there is no food without rain. At the beginning of the growing season almost all South Africans appeal to the ancestors for rain. The rites of the Modjadji (also known as the Rain Queen) of the Balobedu people of the Limpopo region include the pouring of African beer out of calabashes onto the earth and the Queen's intercession with ancestral spirits. Food-related celebrations and ceremonies are not confined to rural areas. In Cape Town, the Cape Malay people celebrate the birth of a baby with kolwadjib, rose water-infused rice cakes served at naming ceremonies. The Tshoa ritual of the San people requires food taboos and the reintroduction of ingredients to ensure the health of young girls on the brink of womanhood. Similarly, young Xhosa abakhwetha initiates undergo a harsh diet and a detailed education programme in isolation from their community before being reincorporated as men. Young Afrikaans men are often taken game hunting and daubed in blood from their first kill. However and wherever you observe our South African food rituals, you will discover that South African food culture is deliciously diverse. We can say yum in 11 official languages and we love to do so!
Amino Acids, Urea Cycle Disorders Panel, Plasma

Clinical Information
Urea cycle disorders (UCD) are a group of inherited disorders of amino acid catabolism that result when any of the enzymes in the urea cycle (carbamoylphosphate synthetase I [CPS I]; ornithine transcarbamylase [OTC]; argininosuccinic acid synthetase; argininosuccinic acid lyase; arginase; or the cofactor producer, N-acetylglutamate synthetase [NAGS]) demonstrates deficient or reduced activity. The urea cycle serves to break down nitrogen, and defects in any of the steps of the pathway can result in an accumulation of ammonia, which can be toxic to the nervous system. Infants with a complete enzyme deficiency typically appear normal at birth but present in the neonatal period, as ammonia levels rise, with lethargy, seizures, hyper- or hypoventilation, and ultimately coma or death. Individuals with partial enzyme deficiency may present later in life, typically following an acute illness or other stressor. Symptoms may be less severe and may present as episodes of psychosis, lethargy, cyclical vomiting, and behavioral abnormalities. All of the UCDs are inherited as autosomal recessive disorders, with the exception of OTC deficiency, which is X-linked. A UCD may be suspected with elevated ammonia, a normal anion gap, and normal glucose. Plasma amino acids can be used to aid in the diagnosis of a UCD. Measurement of urinary orotic acid, enzyme activity (CPS I, OTC, or NAGS), and molecular genetic testing can help to distinguish the conditions and allows for diagnostic confirmation. Acute treatment for UCDs consists of dialysis and administration of nitrogen-scavenger drugs to reduce the ammonia concentration. Chronic management typically involves restriction of dietary protein with essential amino acid supplementation.

Differential diagnosis and follow-up of patients with urea cycle disorders

The quantitative results of glutamine, ornithine, citrulline, arginine, and argininosuccinic acid with age-dependent reference values are reported without added interpretation. When applicable, reports of abnormal results may contain an interpretation based on the available clinical information.

Cautions
Reference values are for fasting patients.

Reference Values
Intervals are Mayo-derived, unless otherwise designated. If an interpretive report is provided, the reference value field will state this. The intervals below are listed in the analyte order given above.

Glutamine:
≤23 months: 316-1020 nmol/mL
2-17 years: 329-976 nmol/mL
≥18 years: 371-957 nmol/mL

Ornithine:
≤23 months: 20-130 nmol/mL
2-17 years: 22-97 nmol/mL
≥18 years: 38-130 nmol/mL

Citrulline:
≤23 months: 9-38 nmol/mL
2-17 years: 11-45 nmol/mL
≥18 years: 17-46 nmol/mL

Arginine:
≤23 months: 29-134 nmol/mL
2-17 years: 31-132 nmol/mL
≥18 years: 32-120 nmol/mL

Argininosuccinic acid: reference value applies to all ages.

Clinical References
1. Amino acids. In: The Metabolic and Molecular Bases of Inherited Disease. 8th edition. Edited by CR Scriver, AL Beaudet, WS Sly, et al. New York: McGraw-Hill Inc., 2001, pp 1667-2105
2.
Haberle J, Boddaert N, Burlina A, Chakrapani A, et al: Suggested guidelines for diagnosis and management of urea cycle disorders. Orphanet J Rare Dis 2012;7:32
3. Singh RH: Nutritional management of patients with urea cycle disorders. J Inher Metab Dis 2007;30(6):880-887
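As an aside on how age-banded reference intervals like those above are applied, here is an illustrative sketch in Python. The table, the helper function and the example values are hypothetical conveniences for this illustration, not part of the assay documentation; argininosuccinic acid is omitted because its all-ages cutoff is not reproduced above.

```python
# Hypothetical helper: flag a plasma amino acid result (nmol/mL) against
# the age-banded reference intervals listed above. Bands follow the
# document's groupings: <=23 months, 2-17 years, >=18 years.

REFERENCE_INTERVALS = {
    # analyte: list of (age upper bound in years, low, high)
    "glutamine":  [(2, 316, 1020), (18, 329, 976), (200, 371, 957)],
    "ornithine":  [(2, 20, 130),   (18, 22, 97),   (200, 38, 130)],
    "citrulline": [(2, 9, 38),     (18, 11, 45),   (200, 17, 46)],
    "arginine":   [(2, 29, 134),   (18, 31, 132),  (200, 32, 120)],
}

def flag(analyte: str, value: float, age_years: float) -> str:
    """Return 'LOW', 'HIGH' or 'NORMAL' for a fasting plasma result."""
    for max_age, low, high in REFERENCE_INTERVALS[analyte]:
        if age_years < max_age:
            if value < low:
                return "LOW"
            return "HIGH" if value > high else "NORMAL"
    raise ValueError("age outside supported range")

# A markedly raised glutamine with low citrulline is the classic plasma
# pattern of a proximal urea cycle defect in a neonate (values invented):
print(flag("glutamine", 1500, age_years=0.1))  # HIGH
print(flag("citrulline", 4, age_years=0.1))    # LOW
```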
1930s movie posters proclaimed, week after week, what Hollywood had to offer to an eager world during the days of the great movie studios and the Great Depression. There is no better example of this than the exquisite 1932 vintage original Belgian poster of Marlene Dietrich in Shanghai Express. In the beginning, as the fledgling studios began to grow, and knowing that a portion of their potential audience was illiterate, they took their cue from vaudeville, fairs, and the circus, creating colorful artwork that depicted scenes from their movies in order to promote their films. From the mid-1920s through the 1940s, movie studios developed their own artwork styles for their posters, lobby cards and other marketing materials. They hired well-known artists and illustrators, such as Al Hirschfeld, John Held Jr., Hap Hadley, Ted Ireland, Louis Fancher, Clayton Knight and Armando Seguso, to create the illustrations and graphic designs. The introduction of the color offset lithography printing technique in the 1920s changed the artistic quality of posters, sharpening the image and, over time, shifting the emphasis from illustration to photography. At the same time, Hollywood portrait photography evolved as a result of the work of six individuals who became the photographers of choice for "shooting the stars": Albert Witzel, George Hurrell, Clarence Bull, Ruth Harriet Louise, Milton Greene and Cecil Beaton.
Choosing the appropriate routing protocol is critical to an IP addressing plan. This tip, reposted courtesy of SearchNetworking.com, explores the parameters used to evaluate the suitability of a routing protocol. The different characteristics of IP routing protocols are described, along with the operation of industry-standard protocols such as Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). There are several characteristics against which a routing protocol is judged:
The routing protocol must exhibit stability against routing loops, which can crash a network.
When a topology change occurs, such as the loss or addition of a subnet, there is a time lapse before every router on the network is aware of this change. During this time interval, which is called the convergence time, some routers are operating off inconsistent information. Hence the convergence time can also be thought of as the time lag from a topology change occurring to the point where all routers in the network have consistent routing information in relation to the affected subnet. The speed of convergence can vary dramatically on a network depending on a number of factors, not the least of which are the operational characteristics of the routing protocol itself. Sophisticated link-state routing protocols such as Open Shortest Path First (OSPF) maintain a link-state database of all subnets on the network, detailing which routers are attached to them. If a link goes down, the directly attached router will send an immediate Link State Advertisement (LSA) to its neighbor routers and this information floods through the network. Each router, upon receiving the LSA, can consult its database and independently recalculate the routing table following the topology change. Convergence is fast and reliable as a consequence of OSPF maintaining extensive network topology information above and beyond a routing table. This is distinct from simpler protocols such as RIP, which, as already discussed, require the use of a hold-down timer following a topology change in order to ensure a loop-free convergence.
A router that learns multiple paths to a particular destination network (via a routing protocol) will choose the path with the best metric and place that in its routing table. If more than one path shares the best metric, then each of these least-cost paths will be placed in the routing table, and equal-cost load balancing will be performed. Different routing protocols use different metrics; in other words, various routing protocols each have their own way of deciding the best path to a destination. The metric should be sufficiently sophisticated to ensure that the routing protocol's interpretation of the best path is a realistic one. RIP uses hop count as its metric and this is yet another limitation of that particular routing protocol. For example, if a router had two paths to a destination where one path was a 56k link and the other a T-1, RIP would see each path as equal cost if the number of router hops were equal. Thus, RIP would load balance even though one path is roughly 27 times faster than the other. OSPF uses an administrative cost metric that can be configured arbitrarily. On Cisco routers it is automatically calculated to be inversely proportional to the bandwidth of the link. Nortel takes an alternative approach by keeping the OSPF cost equal by default on all links; the network administrator then configures the value on the router interface to relate inversely to the speed of the link.
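To make the inverse-bandwidth metric concrete, here is a minimal sketch in Python of the Cisco-style cost calculation. The 100 Mbit/s reference bandwidth is the traditional IOS default (real routers let you raise it so that faster links do not all collapse to a cost of 1); the link list is purely illustrative.

```python
REFERENCE_BW_BPS = 100_000_000  # classic 100 Mbit/s reference bandwidth

def ospf_cost(link_bw_bps: int) -> int:
    """OSPF interface cost: reference bandwidth / link bandwidth, minimum 1."""
    return max(1, REFERENCE_BW_BPS // link_bw_bps)

for name, bw_bps in [("56k serial", 56_000), ("T-1", 1_544_000),
                     ("10M Ethernet", 10_000_000), ("Fast Ethernet", 100_000_000)]:
    print(f"{name:13s} cost={ospf_cost(bw_bps)}")
# 56k serial    cost=1785
# T-1           cost=64
# 10M Ethernet  cost=10
# Fast Ethernet cost=1
```

Under this metric the 56k path costs 1785 against the T-1's 64, so OSPF would never split traffic across the two links the way RIP's hop count would.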
The significance of VLSM has already been demonstrated. Classless routing protocols support VLSM since they carry the mask in the routing updates. Standardized classless IP routing protocols include OSPF and RIP version 2. RIP version 1 is considered a classful routing protocol since it does not include the subnet mask within the routing update.
A routing protocol should support configurable route summarization. The significance of being able to configure route summarization at strategic points in the network has already been described. Apart from configurable route summarization, some protocols exhibit automatic route summarization. This feature is not necessarily as good as it sounds, and in some cases it can be decidedly problematic. Classful routing protocols such as RIP v1 automatically summarize based on class when advertising across a major network boundary. For example, subnets of 172.16.0.0 would be advertised as a single route to the 172.16.0.0/16 Class B network if the router were advertising across a link that was part of anything other than this particular Class B network. This is necessary with classful routing protocols: because they do not advertise the mask, the downstream router has no way of deducing the subnet mask if it does not have interfaces in that major network. Hence it must be assumed (usually incorrectly) that no subnetting is taking place.
Automatic route summarization can potentially cause problems if summarization occurs at more than one point in the network, since the summarized routes may be in conflict. This scenario occurs when a router receives identical summary routes from opposite directions and is commonly referred to as a discontiguous network. You can think of discontiguous as meaning 'broken up' by another network. If a major network such as 172.16.0.0 were discontiguous, then routers in the intermediate network (say it is addressed as part of the Class B network 188.8.0.0) would receive 172.16.0.0/16 summary routes from opposite directions. These routers would attempt to load-share across these routes. In actual fact there would be serious connectivity problems: TCP-based applications would require retransmissions for every wrong routing choice, and UDP applications simply wouldn't work!
The difference between a classful and classless routing protocol is very simple: classless protocols include the mask in the update while classful protocols do not. The preceding discussion, however, should have highlighted the fact that the consequences of this simple difference are far-reaching. Classful protocols such as RIP version 1 do not support VLSM, discontiguous networks or configurable route summarization, and are therefore unsuitable for modern networks.
The question of scalability relates to the ability of the routing protocol to adequately support network operation as the network grows with the addition of more IP subnets. Issues such as convergence speed and support for VLSM and configurable route summarization ultimately determine the scalability of the routing protocol. The efficiency with which routing information is exchanged is also relevant. Distance vector protocols such as RIP periodically broadcast the entire routing table to neighbor routers. The more sophisticated protocols only advertise event-driven topology changes once the initial routing information has been exchanged, clearly a more efficient mechanism.
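A short sketch may help show why a classful protocol is forced into this behaviour. The helper below (illustrative only, with invented subnets) derives the classful network an address belongs to, which is all a RIP v1 router can announce across a major network boundary:

```python
import ipaddress

def classful_network(addr: str) -> ipaddress.IPv4Network:
    """Return the Class A/B/C network an IPv4 address falls in."""
    first_octet = int(addr.split(".")[0])
    if first_octet < 128:
        prefix = 8    # Class A
    elif first_octet < 192:
        prefix = 16   # Class B
    else:
        prefix = 24   # Class C
    return ipaddress.ip_network(f"{addr}/{prefix}", strict=False)

# Routers on opposite sides of an intermediate network each hold
# different subnets of 172.16.0.0 ...
west, east = ["172.16.1.0", "172.16.2.0"], ["172.16.8.0", "172.16.9.0"]

# ... yet both sides collapse to the identical classful summary,
# which is the discontiguous-network trap described above:
print({str(classful_network(s)) for s in west + east})
# {'172.16.0.0/16'}
```

The intermediate routers receive the same 172.16.0.0/16 route from both directions and, with no mask in the update, have no way to tell the two halves of the network apart.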
Open Shortest Path First (OSPF)
OSPF is a very complex IP routing protocol and a full explanation of its operation is beyond the scope of this article. However, it is worth summarizing the advantages that it provides over distance vector routing protocols such as RIP. If one word were needed to justify the employment of OSPF, it would be scalability. There are a number of reasons why OSPF is suitable for large and growing networks, and they are all in some way interrelated.
- Hierarchical structure: OSPF supports the ability to divide the network into multiple areas that have a certain degree of autonomy from each other. In such a structure there is a backbone area (which is always designated as Area 0) and a number of other areas that, barring exceptional cases, must directly attach to Area 0. A consequence of a well-planned hierarchical design is that each area's routes can be summarized into contiguous blocks. OSPF also supports the ability to summarize routes that are redistributed from another routing protocol.
- Speed of convergence: Each router running OSPF maintains a database of the logical topology of the network. The database details every link, LAN segment and router on the network. This increased intelligence of OSPF means that it can converge faster without having to resort to the crude convergence methods of distance vector protocols.
- Efficient update processing: Incremental updates are sent when there is a network topology change rather than using periodic updates. OSPF also uses well-known multicast addresses rather than broadcasts to transfer routing information.
- VLSM: Since it is a classless protocol, OSPF supports VLSM, allowing for an efficient use of IP address space.
Okay, so I have now alluded to all of OSPF's advantages. However, almost every networking protocol is a double-edged sword to at least some extent, and OSPF is no different. There are two potential disadvantages of OSPF that deserve consideration:
- Resource utilization: OSPF increases router memory requirements due to the fact that each OSPF router maintains a topological database of the network. The routing table is calculated from this database, which consumes more memory than the routing table itself. Running OSPF also increases the average router CPU utilization. In order to recalculate the routing table following a topology change, the Shortest Path First (SPF) algorithm is run. This is a processor-intensive activity that could potentially strain the performance of low-end routers.
- Design restrictions: For a large network that also needs to incorporate scope for growth, multiple OSPF areas should normally be used. There are certain rules about how traffic should move between these areas and this can impose some design restrictions.
OSPF provides a facility whereby a network can be segregated into multiple areas. The whole idea behind this concept is to reduce the memory and CPU overhead associated with running the protocol. A router running OSPF in a multi-area implementation retains the database for its local area rather than for the entire network. This reduces memory consumption, and it exploits the fact that on a well-designed network it is usually unnecessary for a router to have full details of sections of the network that are very remote. For this same reason, updates are just flooded within the local area after a topology change, thus reducing routing traffic and the CPU consumption associated with frequent and often unnecessary route recalculations.
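Since the SPF calculation appears twice above, as the source of fast convergence and as the main CPU cost, here is a minimal sketch of the shortest-path-first (Dijkstra) computation an OSPF router runs over its link-state database. The topology and costs are invented for illustration.

```python
import heapq

def spf(lsdb: dict, root: str) -> dict:
    """Return the least total cost from `root` to every reachable router."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, link_cost in lsdb[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(pq, (new_cost, neighbour))
    return dist

# Toy link-state database: router -> {neighbour: interface cost}
lsdb = {
    "R1": {"R2": 10, "R3": 64},
    "R2": {"R1": 10, "R3": 10, "R4": 64},
    "R3": {"R1": 64, "R2": 10, "R4": 10},
    "R4": {"R2": 64, "R3": 10},
}
print(spf(lsdb, "R1"))  # {'R1': 0, 'R2': 10, 'R3': 20, 'R4': 30}
```

Every router in an area runs this same calculation over an identical database, which is why OSPF converges consistently without RIP-style hold-down timers; it is also why a large, flat area burns CPU on every link flap, which motivates the multi-area design described above.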
About the author: Cormac Long is the author of IP Network Design and Cisco Internetworking and Troubleshooting. This tip originally appeared on SearchNetworking.com.
What Is Depression? Understanding Your Body
Major depressive disorder, often referred to as depression, is a common illness that can affect anyone. About 1 in 20 Americans (over 11 million people) get depressed every year. Depression affects twice as many women as men.
Depression is not just "feeling blue" or "down in the dumps." It is more than being sad or feeling grief after a loss. Depression is a medical disorder (just as diabetes, high blood pressure, and heart disease are medical disorders) that day after day affects your thoughts, feelings, physical health, and behaviors. Symptoms include sadness, inactivity, difficulty thinking and concentrating, and feelings of despair. Depressed persons often have trouble sleeping, changes in appetite, fatigue, and agitation.
Depression may be caused by many things, including:
- Family history and genetics.
- Other general medical illnesses.
- Certain medicines.
- Drugs or alcohol.
- Other psychiatric conditions.
Certain life conditions (such as extreme stress or grief) may bring on a depression or prevent a full recovery. In some people, depression occurs even when life is going well. Depression is not your fault. It is not a weakness. It is a medical illness. Depression is treatable.
Kids these days -- amirite?
No, but actually. For real. Kids these days are more sensitive and fragile than kids of the past. Even according to the president of an elite university that I spoke with recently, “Today’s college students are not like you.” All you really have to do is look at videos of college protests or read the accounts of high school and college professors to know that today's children (and young adults) are overly emotional -- even mentally ill. And as nasty and rude as they appear in videos like this one (though, granted, this is an extreme example), I can't help but feel a little bit bad for them. Their parents failed them. Their teachers failed them. And they never learned the lessons children of previous generations learned in preschool ("use your words"; emotion regulation; leisure skills; kindness; etc.).
It starts all the way back in preschool and kindergarten. For example, as I wrote in Thanks to Helicopter Parents and The Self-Esteem Movement, Schools Have Banned Musical Chairs,
Literally a hundred years of research shows that competitive and physical play are an essential part of kids' development. "But but but... What about their self-esteem?" well-meaning but misguided educators stammer. Here's the thing: Aggression, competition and losing don't actually hurt a child's self-esteem. But they do teach children valuable coping and resilience skills. You may think you're helping your child out by insulating them from social rejection, embarrassment or hurt feelings as long as possible. But the truth is, you're emotionally crippling them. Resilience isn't something that magically happens once you turn a certain age. It's something you develop by dealing with the small disappointment of being the first kid eliminated from musical chairs or the last kid picked in the kickball game. Determination isn't something that happens automatically. It's something you develop through loss, setbacks and disappointment. It's what happens when you get a bad result, then vow to practice, improve and do better next time. Read more >
What do you expect them to do when they get to college if they've never even had the experience of controlling their emotions and navigating social situations during harmless games like Red Rover or Musical Chairs? This is what happens when parents and teachers make decisions in the name of “self-esteem” or “mental health” or “equity”... without actually doing any research about whether their feelings align with the facts. For example, lots of schools -- from elementary to secondary -- commit time, energy, and resources to "self-esteem" development in students. But the research shows that There is No Benefit to "Teaching" Self-Esteem, though it could cause long-term damage. Here's an excerpt from my post on the topic:
Instead of letting children develop self-esteem on their own, through hard work, SMART goal setting, improvement and achievement... teachers force it upon their students as an exercise in and of itself. Take, for example, Self Science: The Emotional Intelligence Curriculum, a two-year, 54-lesson program for teaching self-esteem (and, to be fair, other emotional skills) to elementary school students. This contains exercises such as the self-esteem roll call game: when the teacher calls out your name during attendance, you don't answer by saying, "Here," but by saying either: ... If that weren't bad enough, schools have taken several well-intentioned (but ultimately harmful) measures to "protect" students' self-esteem.
For example, many schools have become anti-competition zones -- games with winners and losers are no longer acceptable, in spite of the fact that decades of psychology research show that competition is an important and healthy part of every child's development. When competition is inevitable, such as during athletic contests, all students "win" a participation award. Schools -- even high schools -- are getting rid of honor rolls, because it's not "fair" to those who don't make it. (And, yes, I understand that getting an A isn't the same as learning. But it is still important for students to have goals and rewards for their hard work. See also: Straight As Make You Look Complacent, Not Curious.) But is all this effort really worth it? According to Roy F. Baumeister et al.'s 2003 meta-analysis, Does High Self-Esteem Cause Better Performance, Interpersonal Success, Happiness, or Healthier Lifestyles?, the answer is no. There is no relationship between high self-worth and achievement. In fact, high self-regard is commonly found in narcissists, bullies and sociopaths. People with high, unwarranted self-esteem often have an inflated sense of popularity and likability. They get hostile when criticized or rejected. They alienate others. Read more >
"Narcissists, bullies, and sociopaths." "Hostile when criticized or rejected." Remind you of anyone? The research is clear on this... yet "self-esteem" warriors march on! Many schools (including "good" and "innovative" schools like AltSchool) are getting rid of grades and other valuable forms of feedback. Because "accessibility"? And "feelings"? But, once again, this is not backed by research. In fact, it flies directly in the face of research. As I wrote in 5 CRUCIAL Lessons Parents and Teachers Can Learn From Video Games (That Helicopter Parents Will HATE):
Knowing how you're doing is important for both progress and motivation. And video games are full of metrics. Which is why grades and prompt feedback are important… First, people are horrible at dealing with uncertainty. Studies show that people in the hospital would rather receive a bad diagnosis... than no diagnosis. A friend of mine was recently diagnosed with chronic fatigue syndrome (CFS), a condition that can sometimes be treated, but cannot be cured. She was still relieved to know what was going on. Second, knowing you're so close to your goal -- seeing yourself make progress or lose ground in real time -- is extremely motivating. It can keep you going when you're about to give up. You can immediately see what you've done wrong, and do it better the very next time. Read more >
Stanford Professor Mark Lepper once told me, "The people who understand motivation better than anyone else in the world... are people who make video games. We should listen to them." Instead, parents and teachers listen to "feelings." But perhaps part of the reason so many people "feel" like grades are toxic is because, at the other end of the spectrum, you have students who are ruthlessly pursuing perfect grades, because that is what the adults and peers in their lives tell them to do. This has led to the weird phenomenon of kids today being way over-tutored. Some students I've worked with through Paved With Verbs have tutors in every single subject. But, as I wrote in 4 Reasons A Tutor is the WORST Thing You Can Do For Your Kid,
Over-tutoring stunts their coping skills and decreases resilience. As a parent, one of the most important things you can do for your child's long-term mental health is to let them fail.
Having a tutor gives them a shortcut. Instead of facing a disappointing academic outcome — and asking themselves, "What did I do wrong? Did I really give it my best effort? What can I do differently next time?" — they rely on the tutor to figure things out for them. (And, as mentioned above, put at least some of the responsibility on the tutor, instead of owning their mistakes.) Here's the thing about children: you can't insulate them from bad things forever. Eventually, they're going to run into problems you can't fix for them. They're going to have to face a problem on their own. It's not if; it's when. And when the time comes, you want your child to have the emotional and cognitive maturity to turn a disappointment into a learning opportunity. Read more >
Part of the reason parents hire so many tutors is because they're afraid if their child doesn't get an A on every assignment, they won't get into their first-choice private middle school, high school, or college. But part of it is that they sincerely doubt (or don't know) their child's abilities and limitations. And this is because they never let their kids figure anything out on their own. As I wrote in By 1979 Standards, Your 1st Grader is Physically and Emotionally Stunted:
With constant adult supervision and insufficient outdoor play, kids miss out on important muscle development, as well as important lessons in self-efficacy, self-regulation, and self-empowerment. Many parents today shudder at the idea of sending their kids on a 4-8 block mission alone. They're not sure if their kids can do it. All first graders used to be able to do it -- so either you're underestimating your child, or you've stifled them. Read more >
There's a reason we find movies and shows like Stranger Things and Stand By Me and Now and Then so appealing. I mean, who doesn't envy the freedom these kids have to explore the world around them? Who doesn't love the initiative they take to solve problems on their own? Who doesn't want to feel the excitement these kids feel about their middle school AV club? (And who doesn't want this Hawkins Middle School AV Club t-shirt?) It's not just about feeling nostalgia for our own childhoods. It's remembering the feelings of empowerment and freedom we had as kids on bikes or kids with tools or kids in the woods. Feelings that many children today never experience. And this is a very bad thing. As I wrote in Kids' Games are Getting More Dangerous, and it's ENTIRELY Their Parents' Fault,
Children are hardwired to explore. Risk-taking (or, at least, the perception thereof) is in their nature. Risks tend to manifest themselves in one of six ways:
1. Exploring heights
2. Handling "dangerous" tools, such as scissors, knives or hammers
3. Being near dangerous elements, such as water or fire (or, as was the case in Stand By Me, a dead body)
4. Rough-and-tumble play (which, as I mentioned above, is a way for kids to learn to negotiate aggression and cooperation)
5. Speed -- e.g. cycling, skateboarding, ice skating at a pace that feels too fast
6. Exploring on their own
When kids do these things, they will eventually fall down. Bruise their arm. Skin their knee. Maybe even break a finger. It will hurt. BUT... Research by Ellen Sandseter, a professor of early-childhood education at Queen Maud University College in Trondheim, Norway, has found that kids who spend more time exploring on their own before the age of nine are less likely to have anxiety and separation issues as adults.
Likewise, kids who got hurt falling from heights when they were 5-9 years old are less likely to be afraid of heights at age 18. Our minor injuries actually give us confidence. They teach us what our limits are, how to handle ourselves in scary situations... and that, even if something goes wrong and you get hurt, you can get better. (Resilience for the win!) And, of course, these injuries will happen less frequently and less severely if your child has better joint and muscle control. If you really want to increase your child's safety, don't ban them from the playground. Help them develop their proprioceptive sense. Read more >
So what can a parent or teacher do? One thing is to educate yourself. And I have several books that I highly recommend that will help. The first, of course, is How to Raise an Adult: Break Free of the Overparenting Trap and Prepare Your Child for Success, by Julie Lythcott-Haims. Other incredible suggestions include Free Range Kids: How to Raise Safe, Self-Reliant Children (Without Going Nuts With Worry), by Lenore Skenazy; Playborhood: Turn Your Neighborhood Into a Place of Play, by Mike Lanza; Teach Your Children Well: Why Values and Coping Skills Matter More Than Grades, Trophies or "Fat Envelopes," by psychiatrist Madeline Levine; and, of course, Play: How it Shapes the Brain, Opens the Imagination and Invigorates the Soul, by Stuart Brown, MD, founder of the National Institute for Play.
And, on a slightly unrelated but still very important note, might I also recommend Unwanted Advances: Sexual Paranoia Comes to Campus, by feminist scholar Laura Kipnis. It's about how, when we treat women like helpless children, they become catatonic with fear when, say, a boy stands between them and the exit to his apartment. Unable to establish their own sexual boundaries, they end up having sex they don't consider to be consensual... and the guy has no idea until he gets slammed with a rape charge. (Which isn't to downplay the importance of stopping rape and sexual assault on campus. That's a slightly different issue.) I'm not done with it yet, but I'm pretty hooked. Laura Kipnis writes a lot like Bill Bryson, who makes me laugh out loud when I'm sitting alone in a room. And her story about both Professor Peter Ludlow, who was charged with "forcing" a student to drink (2-3 drinks) in a public setting until she became incapacitated, and her own experience fighting Title IX claims (because she wrote an essay), is captivating. For another take on today's adolescent and young adult sexual climate, check out Peggy Orenstein's Girls and Sex: Navigating the Complicated New Landscape.
About the Author: Eva is a content specialist with a passion for play, travel... and a little bit of girl power.
Superior vena cava syndrome is a condition where blood flow through one of the major veins to the right side of the heart is blocked. This vein is known as the superior vena cava (SVC). It carries low-oxygen blood from the upper part of the body to the right atrium of the heart. The other major vein is the inferior vena cava (IVC), which carries deoxygenated blood from the lower part of the body. There are several reasons why the superior vena cava may become obstructed, either partially or completely. The blockage may arise within the vein itself, or external compression on the vein can also cause a blockage. If not treated immediately, superior vena cava syndrome can lead to serious complications such as cerebral edema (brain swelling). Therefore SVC syndrome is considered to be a medical emergency.
Overall, superior vena cava syndrome is an uncommon condition. In fact, it affects less than 10% of patients who have a right-sided mass within the thoracic (chest) cavity. Most of these cases arise with cancer of the right lung, and a smaller number is seen with lymphoma. Since lung cancer is more common in males, the incidence of superior vena cava syndrome is therefore higher among males. Superior vena cava syndrome that arises with cancers is mainly seen in the 40 to 60 year age group. In younger people it mainly occurs for reasons other than cancer. It is rare in children and infants.
Blood that is low in oxygen (deoxygenated blood) returns to the right side of the heart through the venae cavae:
- The superior vena cava (SVC) returns blood from the head, neck, upper limbs and upper thorax (chest).
- The inferior vena cava (IVC) returns blood from the lower limbs, pelvis, abdomen and lower thorax.
This blood then enters the right atrium during diastole (relaxation of the heart). It is pushed into the right ventricle, which then sends the blood to the lungs for re-oxygenation via the pulmonary artery. An adequate amount of blood has to return to the heart (venous return) for it to fill and stretch sufficiently for forceful contraction. This ensures that enough blood can also return to the left side of the heart in order to be sent out to the rest of the body. Circulation also ensures that tissue fluid and blood do not become congested in any part of the body or veins.
In superior vena cava syndrome, there is congestion within the SVC due to some obstruction. There is some degree of compensation by collateral veins that drain blood from the upper half of the body. This occurs through the azygos and internal mammary venous systems and their tributaries. However, these systems cannot completely compensate for the reduced return through the superior vena cava. As a consequence, the pressure in the SVC and most of the veins in the upper body increases substantially. Swelling therefore occurs in the upper body, most notably in the head. Distribution of sufficient oxygenated blood throughout the body is subsequently hampered to some degree.
It is not uncommon for patients to be asymptomatic, particularly in the early stages or when the SVC is only partially obstructed. As the condition progresses, only minor symptoms may be evident, which are often missed. As the SVC becomes completely blocked, the symptoms become more evident and continue to worsen. Lying flat or leaning forward tends to aggravate the symptoms. These symptoms include:
- Difficulty breathing (dyspnea), the most common symptom.
- Neck vein distension.
- Swelling of the face.
- Facial redness (flushing).
- Lightheadedness and dizziness.
- Distorted vision.
- Bluish tinge of the skin (cyanosis).
- Difficulty swallowing.
- Hoarse voice.
- Arm swelling.
- Chest pain.
- Fluid accumulation around the lungs (pleural effusion).
Not all of these symptoms may be present to the same degree at the same time in SVC syndrome.
Blockage of the superior vena cava may arise in one of two ways:
- Intrinsic obstruction
- Extrinsic compression
An obstruction within the SVC (intrinsic) mainly occurs when cancer invades the blood vessel wall from a surrounding site, as in right lung cancer. However, any mass, deformity of the vessel or inflammation of the vein can be a precipitating factor. Compression of the vein by a mass outside of it (extrinsic) is the other mechanism by which a blockage of the SVC may occur. Since the walls of veins are thin compared to arteries and the blood pressure is lower, a vein can be more easily compressed by an external mass.
These lesions do not always cause the actual obstruction. Instead they disturb blood flow through the superior vena cava, leading to the formation of a blood clot within the vein (thrombosis). Complete obstruction of the SVC is more likely due to a thrombus forming within the vein and occluding the remaining open part of the superior vena cava. Partial obstruction is more likely when there is no clot formation.
The majority of cases of superior vena cava syndrome arise with malignancies in the mediastinum. The most common malignant cause of SVC syndrome is bronchogenic carcinoma (lung cancer), most commonly the small cell type. The other major malignant cause, although much less common, is non-Hodgkin's lymphoma. Other cancers that arise in the mediastinum, or arise elsewhere and spread to the mediastinum, may also be responsible for SVC syndrome, but overall this is rare. It is important to note that SVC syndrome only occurs in a minority of cases when these malignancies are present.
Some of the non-cancerous (benign) causes of superior vena cava syndrome include:
- Aortic aneurysm.
- Benign (non-cancerous) tumors in the mediastinum.
- Infections like tuberculosis (TB).
- Scar tissue within the mediastinum.
Patients with prominent symptoms and clinical signs of SVC syndrome may be diagnosed even without further investigations. However, various investigations should be conducted to assess the degree of SVC obstruction and the possible cause. Patients with minor symptoms require investigations for a conclusive diagnosis. These tests include:
- Invasive contrast venography, where a special dye is injected into the vein so that the blood flow through it can be visualized.
- Computed tomography (CT) scan.
- Magnetic resonance imaging (MRI).
Treatment is primarily directed at symptomatic relief until the underlying cause is conclusively identified and treated where possible. Symptomatic relief is achieved in various ways:
- Elevating the head of the bed.
- Administering oxygen.
- SVC bypass surgery.
- Superior vena cava stenting.
Thrombolytics can be used to break down a clot, thereby increasing the blood flow in SVC syndrome cases where there is thrombosis. However, this does not remove the primary cause and the clot may form again later. Therefore anticoagulants are also prescribed to prevent new clot formation until the underlying cause is treated. Other treatment measures depend on the cause.
Radiation therapy and chemotherapy may be used to target the malignancy when surgical removal of the tumor is insufficient to reduce the obstruction of the SVC.
CONCEPTS OF SCIENCE
The Concepts of Science is a course designed to present to the student an outline of the important scientific ideas, methods, discoveries, and laws that have evolved in the course of human development. We will examine basic scientific principles that are important to any scientifically literate member of our technological society. Our society has entered an era in which cultural and social decisions are being made that will affect the well-being of our planet itself. Responsible members of our culture must have the knowledge necessary to make responsible decisions.
We in naturopathy do not deny the existence of bacteria; they are inhaled from the air and ingested through contaminated food and water. But the presence of germs in the human body is merely a symptom of disease, not the cause. Many types of bacteria can be found in a human organism, but they do not necessarily give rise to disease. They are present in the atmosphere, yet not everyone falls sick; the reason, according to traditional medicine, is the capacity of the human body to ward off disease. Bacteria need a stressed body, one filled with anxiety and poorly looked after (irregular meal timings and a disturbed sleep pattern). Naturopathy believes that bacteria help in fighting disease, not in causing it. Bacteria are nature's scavengers; they will infest only an organism that is already rotten. Improve your lifestyle by eating at fixed times and sleeping for at least 8 hours a night for good health and immunity.
Neit (in Greek: Neith): this goddess' other names, via different spellings, are Neith, Nit, and Net. Neb-Ma-at-Re is right about Neit. But she did have some other attributes, which are below.
http://www.nemo.nu/ibisportal/0egyptint ... r/neit.htm
In her home town she was said to be the sole creator of all other gods. She was patroness of virginity and virgins and also cared for weaving. She wore the red crown of Lower Egypt and her weapons, the bow and the arrows. A shield with two crossed arrows was her sign and was seen stylised upon her head. In the New Kingdom she was the mother of Re. In the new capital Memphis (c. 3200 BC) she protected the royal crown. In the Early Dynastic period and Old Kingdom, Neit(h) was a popular addition to the name of the Queen. The presumed mother of Aha (1st Dynasty) was Neith-hotep; 1st Dynasty Queen Merneith, mother of Pharaoh Den, is another Queen using Neit(h) in her name. The last Queen of the Old Kingdom was Nitokerty, the 'Nit' equalling Neith.
Since Myron says that other airplanes are named after Queens, to name one Neit is appropriate, I believe.
I introduced weaving to my Gainesville class today, and for all but one child it was their first experience with weaving. I thought it would be fun to have them weave on a branch - kind of organic and imprecise - and we have a lot of bamboo that was recently cut. I played around weaving some yarn and ribbon between the small branches to see how well it would work.

We practiced with paper first to get the hang of it. Weaving is great for refining fine motor skills and eye-hand coordination. It offers an experience of working with different textures and materials and a feeling of satisfaction in creating a decorative or useful object. These looms are made by folding the paper in half and cutting the 'warp' lines from the fold up to within an inch of the top of the paper. The lines can be drawn first, so they are easier to follow. I had them vary the 'warp' with some straight and some wavy lines. The weft pieces are woven under, over, under, over.

Next, they created a loom with a bamboo branch. The kids used yarn to create the warp by wrapping back and forth between two branches. The yarn was wound a few times around each branch before going back across to the other. I provided various choices of yarn to weave through the warp. I also had feathers, beads and shells they could weave in. They really did well and had fun! Depending on how the bamboo is cut, there can be several tiers of weaving, and it can actually stand up, balancing on the branches.
"We looked at the glutamate receptors at the cell synapse, and depending on other activity, ephrin appeared to decrease the number of glutamate receptors," said Yamaguchi. The regulation of glutamate receptors is crucial to maintaining memory and learning. The strength of a signal through a nerve cell synapse can be enhanced (by increasing the number of receptors) or diminished (by a receptor decrease). "The balance has to be optimal, since too much memory activation can also be a problem," said Yamaguchi. Yamaguchi's team, which worked on this project for more than two years, had suspected that ephrins played some important part in nerve cell synapse function. Previous studies had shown that animals injected with addictive drugs had activated EphB receptors, and that there is a connection between synaptojanin-1 and bipolar disorders and schizophrenia. Until now, nobody had made the connection between EphB and the endocytosis involved in neurotransmitter regulation. "There's also an increased interest in endocytosis in cancer, in which the process may help diminish anti-proliferation signals and, as a result, trigger tumor progression," said Yamaguchi. "But this is a novel finding in biology, and we can only just begin to speculate on the broader implications of Ephrin and EphB's activity." Yamaguchi is a professor of developmental neurobiology at the Burnham Institute, where his research zeros in on the structure and activity of nerve cell synapses. Irie, the lead author of the paper, is a staff scientist in Yamaguchi's laboratory. Their colleagues included Misako Okuno in Yamaguchi's laboratory and Elena Pasquale, who also is a professor of
When solving CIDNI problems, constraints are those factors that can reduce your solution space, such as the money, time, expertise, etc. at your disposal. Too many constraints will reduce your solution space so much that you might not find a satisfactory solution. But not enough constraints can be crippling too: if you start with a blank page, everything can be a solution, and you might fall victim to paralysis by analysis, where you're afraid to move in one direction because it would close doors in other directions.

See constraints as enablers

So there is value in having some constraints. In fact, I'd argue that the less you know what you're doing, the more beneficial constraints are. To illustrate, think of how we learned to draw. As a kid, my parents bought me some books with shapes in them, and I started by coloring within the shapes. Those shapes were pretty significant constraints, but they were fitting considering my skill level. Sure, I couldn't draw a dinosaur if the shape was that of a monkey, but I got to decide which parts of the monkey should be bright orange.

Then I got those books where there were dots all over the page, each with a number. Before drawing, I had to connect the dots to create the shape. Arguably that was more for me to learn how to count than to draw, but I also got to decide which dots I left out, thereby getting some control over the shape I was creating. And, once I created a shape, I got to color it too. Those were less constraining setups, but still significantly constraining.

A few years later, at Rice, I took a couple of drawing classes with Basilios Poulos. For a specific drawing, Bas would tell us what medium to draw with, or he would bring a model to the class and that would be our subject. Those were still constraints, but, again, we were freer.

You could continue this up until you remove all constraints. Then you'd start with a blank page, like Picasso. Picasso starts with a completely empty sheet and creates fantastic images but, well, he's Picasso. For the rest of us, starting with a blank page might just be a little too daunting.

So constraints will limit your independence, fine. But that limitation isn't necessarily bad, especially if you're new to the type of problem you're addressing. So don't focus on what constraints prevent you from doing. Rather, think about what they enable you to do; the short sketch after the reference below makes this concrete.

Chevallier, A. (2016). Strategic Thinking in Complex Problem Solving. Oxford, UK: Oxford University Press.
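As a concrete, purely illustrative aside: if you think of each constraint as a filter over a set of candidate solutions, the trade-off above is easy to see in a few lines of Python. The options, budget and deadline figures here are invented for the example; they are not from Chevallier's book.

```python
# Each constraint is a filter over the solution space: a few filters
# focus the search, while too many can leave nothing standing.
candidates = [
    {"name": "option A", "cost": 500, "weeks": 2},
    {"name": "option B", "cost": 2000, "weeks": 1},
    {"name": "option C", "cost": 800, "weeks": 6},
    {"name": "option D", "cost": 300, "weeks": 10},
]

def feasible(option, max_cost, max_weeks):
    """An option survives only if it satisfies every constraint."""
    return option["cost"] <= max_cost and option["weeks"] <= max_weeks

# A couple of constraints usefully narrow four options down to two.
print([o["name"] for o in candidates if feasible(o, max_cost=1000, max_weeks=8)])
# -> ['option A', 'option C']

# Over-constraining empties the space: no satisfactory solution remains.
print([o["name"] for o in candidates if feasible(o, max_cost=400, max_weeks=1)])
# -> []
```

With no constraints at all, every candidate survives, which is the blank-page problem in computational form: the space is too large to search, and paralysis by analysis follows.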
These tweets are two centuries old. Twitter's 140-character microblogging platform was revolutionary when Jack Dorsey and friends introduced it in 2006 -- or it seemed revolutionary, anyway. But microblogging was already very popular in the 19th century as a means of expressing one's self, said Lee Humphreys, assistant professor of communication at Cornell University.

"To submit a message on Twitter, it has to be under 140 characters," Humphreys said during a lecture at Cornell, according to Phys.org. "These are not paragraphs; these are barely even sentences … I was really interested in who does this and why do they do it, as well as what it means to be communicating in this way."

While the Internet and Twitter are new technologies, people in the 19th century used their diaries as a way of sharing their thoughts with friends and family. The diaries they used were as small as 2 by 3 inches in size, and this caused people to write in short sentences, just as they do on Twitter, she explained.

"There are clear analogue examples that helped me to understand why someone would opt into 140 characters … Twitter users took that invitation to limit their text, and found it, in fact, very liberating," Humphreys explained.

Humphreys and undergraduate student Seth Shapiro analyzed military blogs of today and Civil War diaries and letters of the past to highlight the differences and similarities between "Twitter talk" and 19th-century writing. "We chose two soldiers: 'Dadmanly,' a blogging soldier from the war in Iraq, and the diary and letters of 'CharlieMac,' a union soldier in the Civil War."

Both soldiers relied on the technological systems of their time to communicate with home: one chose the Internet, the other the postal system. "In these mundane details we share our lives with those we love," Humphreys said. "We see this being done historically … with diaries and letters -- and today with social media."

"It isn't surprising that this hasn't changed," she added.
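Since the whole argument turns on a hard character budget, here is a trivial sketch of that constraint in Python. The diary-style sentence is invented for illustration, not drawn from the study's corpus.

```python
# Composing within a fixed character budget, whether a 140-character
# tweet or a 2-by-3-inch diary page.
LIMIT = 140  # Twitter's original per-message limit

def fits(message, limit=LIMIT):
    """Return True if the message fits within the character budget."""
    return len(message) <= limit

entry = "Marched twelve miles in the rain. Wrote to mother. Coffee scarce."
print(len(entry), fits(entry))  # terse, diary-style prose fits easily
```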
Stock. It is essential to serious cooking. Walk into any restaurant that aspires to prepare fine cuisine and you will undoubtedly see a large pot of stock gently bubbling somewhere in the kitchen. In French cuisine, stock is so important that it is called "fond," which translates as "foundation." Chefs frequently compare cooking (and culinary training) to building a house. As any architect knows, a strong foundation, while never really seen, is of the greatest importance. If the foundation is weak, what is built on it will be unstable--especially if it is destined to be a tall and magnificent structure. Stock is similarly crucial to fine cuisine. It is rarely seen on its own but is one of the principal foundational building blocks of the culinary arts. If you start with a bad stock, it is impossible to make great food. The more perfectionistic the cuisine, the greater importance quality stocks will have.

SO, WHAT IS STOCK?

Basically, if you have ever made chicken soup from scratch, you have made a type of stock. Stock is a clear liquid, well flavored with bones, meat, vegetables, herbs, and spices, and containing no salt. In restaurant kitchens, stock is used everywhere. Stock is the backbone of soups and sauces. It is frequently used in poached and braised dishes. It can be reduced to a syrupy consistency (called glace or glaze) and used to flavor a multitude of preparations like pates and sausages. All stocks can be divided into one of two categories: white stock and brown stock. White stocks are light in color while brown stocks are dark in color. In today's class, we will focus on white stock. White stock is the simplest stock to make. It is made from poultry bones, veal bones, fish bones (in which case it is classically called fumet), or strictly vegetables. Despite differences in ingredients, the stock-making procedure is essentially the same for each white stock.

MAKING WHITE STOCK

Let's summarize the procedure for making white stock before delving into the why questions. The first step in stock making is to rinse the bones in cold water. Once rinsed, place the bones in a large stockpot (preferably lined with stainless steel, as aluminum can sometimes discolor certain stocks) and fill with cold water, just enough to cover the bones by 1-2 inches. (For our photo shoot we used chicken with meat still on the bones. Typically you would use straight chicken bones, without meat, to keep the cost down.) Bring the water to a boil over moderate heat. Once the water has boiled, reduce the heat immediately so that the stock is barely simmering. Using a ladle, skim the top of the water to remove any fat and/or impurities (scum). At this point, add the flavoring components (vegetables, herbs, and spices) to the stock. Chefs refer to the vegetables in stock as mirepoix (French, pronounced mir-pwa). For white stocks, mirepoix generally consists of 2 parts (by volume) onion and 1 part celery. Many chefs vary the composition of mirepoix with such additions as well-washed chopped leeks or carrots (which in too great a quantity will both sweeten and add a dark orangy hue to the finished stock). The mirepoix is generally cut in large 1/2-inch to 1-inch pieces. The longer a stock cooks, the larger the mirepoix is cut. Herbs and spices (bay leaf, thyme and black peppercorns) are added to stock in one of two ways. If using fresh thyme, prepare a bundle, or "bouquet garni" in French. To do this, cut a 2-inch piece of celery and place a couple of branches of fresh thyme and parsley stems in the cavity of the celery.
Top with a bay leaf, and tie it securely with kitchen string. Dried ingredients like thyme, peppercorns and sometimes bay leaf are put in a stock in a "sachet." To make a sachet, place the dried ingredients on a square of cheesecloth, wrap up the cheesecloth to make a bag (or sachet), and tie it with a string.

Continue cooking the stock at a very slow simmer for the prescribed length of time. A good rule of thumb is this: fish and vegetable stocks, 30 minutes; poultry stock, 2-3 hours; and veal stock**, 8-12 hours. Whenever impurities or fat rise to the surface, skim them off carefully using a ladle.

When the stock has cooked long enough to extract all the gelatin from the bones and all the flavor from the vegetables and herbs, ladle (or pour very carefully) the stock through a chinois (a fine-mesh conical strainer). Remove the stock from the stockpot as gently as possible. If it is poured carelessly, one could stir up the small solid particles that typically settle to the bottom of the stockpot during cooking. If these are stirred up, the stock could become cloudy. When straining through a chinois, do not press on any solid ingredients that fall into the chinois. Not only can this cloud the stock, but a sharp bone could pierce the mesh of the chinois, rendering this expensive utensil useless. If the stock strains too slowly, you can tap on the edge of the chinois to speed the process.

Once strained, professionals always chill stock in an ice water bath. This is the fastest way to cool a stock. Once cold, store for up to several days, covered, in the refrigerator. Stock also freezes very well.

Knowing how to make stock and really understanding why we do what we do are two different things. So let's retrace our steps and discuss some of them in more detail.

We always start making a stock using cold, never hot, water. Cold water is necessary to make a stock that is crystal clear and not cloudy or murky. When a stock starts with cold water and is heated gently to the boiling point, the proteins in the bones and meat have time to slowly coagulate, clump together, and rise to the surface in the form of scum. This scum can then be easily skimmed off. Once the stock reaches the boiling point, it is crucial that the heat is reduced immediately so that the stock simmers slowly for the remainder of the time. If the stock boils hard for any length of time, the coagulated protein and fat that rose to the surface will be churned back into the stock. This creates a greasy and cloudy stock, which is considered a grave fault in stock making. Sometimes a lengthy period of simmering will correct the cloudiness, but if the stock has churned and boiled enough, it is hopeless.

Why add the mirepoix after the first boil and not along with the bones and cold water? True, adding it earlier would extract more flavor from the vegetables, but they will have plenty of time to cook even if added after the initial boil. The answer is strictly a practical one. When a stock first comes to a boil, it produces the greatest amount of scum. With no mirepoix added to the pot, it is simple to skim off the scum and fat without removing any of the aromatic ingredients.

Why bother making a sachet or a bouquet garni? The answer is basically the same as for the mirepoix. While the stock is simmering, it is necessary to skim it from time to time. If we don't contain the herbs, we will likely skim them off the top of the stock along with the fat and scum. This, in effect, removes flavor from the stock.
Lastly, why chill a stock in an ice bath? Most home chefs do not do this, but it is an excellent practice to get in the habit of doing. Basically, it is a question of bacteria. Stock is a great medium for bacterial growth. Bacteria love to grow between 40 and 145 degrees Fahrenheit. Chilling stock in ice water lowers the temperature of the stock rapidly. The stock then spends little time between 40 and 145 degrees, and this greatly reduces the possibility of bacterial growth.

**When making a white veal stock, it may be necessary to first blanch the bones. To do this, cover the rinsed bones with cold water. Bring to a boil. Drain the water from the bones and rinse the bones again. Cover the bones with fresh cold water in the stockpot and bring to a boil over moderate heat. Continue as with any other white stock by adding the mirepoix, etc. Blanching veal bones is a trick to keep the stock from turning a dark, murky color.

BASIC WHITE CHICKEN STOCK

Yield: 1 1/4 gallons.

- 10 lbs chicken bones, coarsely chopped (chicken legs--drumsticks--are inexpensive and very flavorful)
- 1 lb mirepoix (2/3 onion and 1/3 celery--and possibly a bit of leek and 5 garlic cloves)
- 1 bouquet garni using several branches of fresh thyme, parsley, and 1 bay leaf
- 1 T. whole black peppercorns wrapped in a sachet
- cold water to cover

- Rinse bones.
- Cover with cold water.
- Bring to a boil over moderate heat. Skim.
- Add remaining ingredients.
- Simmer for 2 1/2-3 hours, skimming frequently.
- Strain through a chinois and chill in an ice bath.
A group of injured British soldiers hope to become the first people to fly to the South Pole in microlight aircraft. A team of 12, including seven military personnel who have sustained serious injuries, will set out from the edge of Antarctica's ice and aim to fly the 2,000 miles to the South Pole. The expedition, which has never been attempted even by able-bodied pilots, will take place in 2014, after the team have completed their pilot's training and undertaken cold-weather training.

The polar aero trek will involve a round-trip flight of over 3,000 miles, flying at cruising altitudes of up to 10,000 feet in temperatures as low as -30°C. The expedition will also attempt to achieve three world firsts: the first flexible-wing flight in Antarctica, the first over the South Pole, and the first over Mount Vinson, which, at 16,050 feet, is the highest peak on the Antarctic continent.

The expedition is part of a wider programme being undertaken by Flying for Freedom to establish a number of 'flying' recovery centres around the UK for injured and disabled servicemen and women.
Air Car Ready for Mass Production

The world's first commercial compressed-air-powered vehicle is rolling towards the production line. The Air Car, developed by ex-Formula One engineer Guy Nègre, will be built by India's largest automaker, Tata Motors. The Air Car uses compressed air to push its engine's pistons. It is anticipated that approximately 6,000 Air Cars will be cruising the streets of India by 2008. If the manufacturers have no surprises up their exhaust pipes, the car will be practical and reasonably priced. The CityCat model will top out at 68 mph with a driving range of 125 miles.

Refueling is simple and will only take a few minutes - that is, if you live near a gas station with custom air compressor units. The cost of a fill-up is approximately $2.00. If a driver doesn't have access to a compressor station, they will be able to plug into the electrical grid and use the car's built-in compressor to refill the tank in about 4 hours.

The compressed-air technology is basically just a way of storing electrical energy without the need for costly, heavy, and occasionally toxic batteries. So, in a sense, this is an electric car. It just doesn't have an electric motor. But don't let anyone tell you this is an "emissions-free" vehicle. Sure, the only thing coming out of the tailpipe is air. But, chances are, fossil fuels were burned to create the electricity. In India, that mostly means coal. Even so, the carbon emissions per mile of these things still beat those of any gasoline car on the market.

Unfortunately, the streets of North America may never see the Air Car: its light-weight, glued-together fiberglass construction might not do so well in our crash tests. However, that does not mean the Air Car is confined to the subcontinent. Nègre has signed deals to bring its design to 12 more countries, including Germany, Israel and South Africa. And this isn't the last we'll hear of the technology. The folks making the Air Car are already working on a hybrid version that would use an on-board, gasoline-powered compressor to refill the air tanks when they run low. Nègre says that technology could easily squeeze a cross-country trip out of one tank of gasoline.
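Using only the figures quoted above (a $2.00 fill-up and a 125-mile range), a quick back-of-envelope calculation shows why the running costs draw attention. The gasoline comparison figures are assumptions added for rough contrast, not numbers from the article.

```python
# Cost per mile from the article's own figures.
fill_cost_usd = 2.00   # quoted cost of a compressed-air fill-up
range_miles = 125      # quoted driving range per fill

air_cents_per_mile = fill_cost_usd / range_miles * 100
print(f"Air Car: {air_cents_per_mile:.1f} cents per mile")   # 1.6 cents

# Assumed for contrast only: a 30 mpg gasoline car at $3.00 per gallon.
gas_cents_per_mile = 3.00 / 30 * 100
print(f"Gasoline: {gas_cents_per_mile:.1f} cents per mile")  # 10.0 cents
```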
Our ancestors may have had to garden to survive, but these days, many of us choose gardening to thrive. Not only is it a great way of providing yourself and your family with delicious, nutritious produce or beautiful fresh-cut flowers, but there are tremendous physical and mental benefits to be (pardon the pun) reaped.

Just the act of being outside is good for both the human body and mind. Being exposed to the sunlight and feeling like you're part of nature lowers your levels of the stress hormone cortisol, lowers your blood pressure, and strengthens your immune system. All that vitamin D you'll be soaking up with the sun's rays will boost your mood, potentially reducing a dependency on anti-depressants, while also making you half as likely as someone with low levels of vitamin D to develop heart disease. People these days spend so much of their time indoors that it's actually harming their health, and someone who gets 20 minutes of outdoor activity and sunshine a day is in many ways healthier than someone who spends an hour in a cave-like gym.

And gardening is also great exercise in its own right. Though it isn't as intense a workout as running or lifting weights, it's still a valuable form of aerobics and a good alternative for someone who can't put too much stress on their joints. Movements like digging holes, planting seeds, weeding, or lifting bags of fertilizer increase your strength, improve flexibility, and improve your endurance. Adults should aim to spend half an hour in the garden daily to keep in shape. And if you're concerned that you don't have time for that kind of commitment, consider that exercising for 30 minutes will boost your energy so much that you'll increase your productivity in all other areas of your life throughout the day, essentially returning those minutes to you.

Gardening is also great for keeping your brain active. A long-term study that followed approximately 3,000 adults for 16 years found that daily gardening was the single biggest risk-reduction action participants could take against dementia, reducing incidence by 36%, and another Alzheimer's study puts that number at 47%! It's unknown why gardening has such an incredible effect on brain health and longevity, but it's likely a mixture of things including learning, dexterity, and sensory awareness.

Gardening can also help alleviate feelings of depression, and it's been proven that even the simple act of looking at plants, let alone touching them and raising them, can lift someone's mood. Interestingly, the effects of growing your own food and eating organically can also help with your emotional health. A study in 2008 found that glyphosate, the active ingredient in pesticides like Roundup, depletes serotonin and dopamine levels in mammals. Those are exactly the chemicals we rely on to feel happy, and when you think about how much of the food at a grocery store has been treated with pesticides containing glyphosate, it's no wonder that depression levels are spiking among Americans. And if you garden using a greenhouse and heating equipment, you can grow your own food even when it's snowing outside, meaning that you can avoid the dangerous pesticides of commercial fruits even out of season.
Burke / Bourke

The history of the energetic Burke family is complex and widespread. William de Burgh (called William the Conqueror by Irish annalists and wrongly described as William Fitzadelm de Burgo) was the progenitor of the Burkes in Ireland and brother of Hubert de Burgh, "the most powerful man in England next to King John". These brothers claimed ancestry directly from Charlemagne. William came to Ireland in about 1185, was made Governor of Limerick, and succeeded Strongbow as Chief Governor. He consolidated his social position by marrying a daughter of Donal Mor O Brien, King of Thomond (now the area around Shannon airport). He set out to conquer Connacht, and after much massacre and pillaging he overcame the reigning O Conors. According to the annals, "he died of a singular disease too horrible to write down". He was buried c. 1205 in Athassel Abbey, which he had founded.

William's son, Richard (c. 1193 - 1243), Viceroy of Ireland and Lord of Connacht and Trim in County Meath, despite his continual assaults on the O Conor kings of Connacht, married an O Conor daughter. It is said that he founded the city of Galway. Certainly he built himself a fine house there between Lough Corrib and the Atlantic Ocean. His eldest son, Walter (d. 1272), acquired the Earldom of Ulster through marriage with a daughter of Hugh de Lacy. He fortified his Ulster territory with many castles which still enliven the coast in counties Donegal, Down and Antrim. It was he who built the amazing Dunluce Castle near Portrush in County Antrim, which was used in succeeding centuries by the MacQuillans and the MacDonnells. From Walter, 1st Earl of Ulster, descend the Burkes of Limerick and Tipperary.

Burke (Bourke, de Burgh), gaelicised as de Burca, is much the most numerous of the Hiberno-Norman surnames. It is estimated that there are some 19,000 people of the name in Ireland today: with its variant Bourke it comes fourteenth in the list of commonest names. Sir John Davis said in 1606: "There are more able men of the surname of Bourke than of any name whatsoever in Europe". Having regard to the large number of Burkes, or Bourkes, now living - the figure 19,000, given above, must be multiplied several times to include emigrants of Irish stock to America and elsewhere - it is hardly possible that they all stem from the one ancestor (the name, it may be remarked, is not found in England except in families of Irish background); nevertheless, even if several different Burkes came to Ireland in the wake of Strongbow, it is the one great family, mentioned above, which has been so prominent in Irish history.

The Burkes became more completely hibernicised than any other Norman family. They adopted Brehon Law and proclaimed themselves chiefs after the Irish fashion, forming, indeed, several septs, of which the two most important were known as MacWilliam Uachtar (Galway) and MacWilliam Iochtar (Mayo). Minor branches became MacDavie, MacGibbon, MacHugo, MacRedmond and MacSeoinin. Of these, the name Mac Seoinin is extant in Counties Mayo and Galway as Jennings, and MacGibbon as Gibbons. As late as 1518, when the City of the Tribes was still hostile to its Gaelic neighbours and the order was made that "neither O nor Mac should strut or swagger through the streets of Galway", a more specific instruction was issued forbidding the citizens to admit into their houses "Burkes, MacWilliams, Kelly or any other sept".
Lacking a male heir, the title of Ulster passed from the de Burgos to the royal family of England when Elizabeth de Burgo, Countess of Ulster (d. 1363), an only child, married Lionel, Duke of Clarence, the third son of King Edward III of England. Lionel became Earl of Ulster, a title still used by the royal family.

The Burkes saw to it that no Duke of Clarence, Earl of Ulster or not, would get hold of their Connacht territory. In fact they had grabbed it from the native O Flahertys, having driven them from Galway city. They leased some land back to the O Flahertys but, as no rent seemed to be forthcoming, a Burke was sent to collect it at the O Flaherty headquarters at the magnificent Aughnanure Castle in Oughterard. The O Flahertys were enjoying a banquet, and he was invited to join them. During the feasting he mentioned the rent. Immediately, an O Flaherty pressed a concealed flagstone which hurled Burke into the river. They cut off his head and sent it back to the Burke stronghold, describing it as "O Flaherty's rent".

Bernard Burke (1814 - 92) and his father John Burke (1787 - 1848) were genealogists and publishers of a succession of weighty volumes containing the pedigrees of the British and Irish aristocracy, including Burke's Peerage, which became known as "the stud book of humanity". Bernard Burke was Ulster King of Arms at the Genealogical Office in Dublin Castle, precursor of the present-day Chief Herald. Of his own name, Sir Bernard wrote: "The family of de Burgh, de Burgo, or Bourke (as at different times written), Earls and Marquesses of Clanricarde, ranks amongst the most distinguished in the Kingdom, and deduces an uninterrupted line of powerful nobles from the Conquest. John, Earl of Comyn, and Baron of Tonsburgh, in Normandy (whose descent has been deduced from Charlemagne), being general of the King's forces, and governor of his chief towns, assumed thence the surname of de Burgh. The family of de Burgh, or Burke, has, since the reigns of Henry III and Edward I, been esteemed one of the most opulent and powerful of the Anglo-Norman settlers in Ireland, under Strongbow. It held, by conquest and regal grant, whole territories in the counties Galway, Mayo, Roscommon, Tipperary, and Limerick; and so extended were its possessions, that its very cadets became persons of wealth, and were founders of distinguished houses themselves." This extract from Burke's Peerage 1876 sets the scene for a Norman family which was to become highly influential in Ireland.

Richard Burke, known as Richard an Iarainn (of the iron), possibly because of the iron mines on his Burrishoole lands, was the second husband of Grania O Malley the pirate queen, one of the outstanding Irish women of the Elizabethan age. Their son, "Theobald of the ships", was born at sea just before his mother fended off marauding Turkish pirates. Theobald was taken hostage by the English and brought up to the English point of view. Like his mother, he knew how to play both sides, and when he failed to be elected to the leadership of the Burkes of Mayo, he returned to England. He fought on the English side in 1601 at the decisive battle of Kinsale. He was created 1st Viscount Mayo in 1627 by Charles I - a title which lasted only until 1767.

The de Burgos had long since sprouted new family branches. Like the Irish, they appointed chieftains over their separate territories. The most prominent County Galway Burke family was that of the chiefs of Clanricarde.
In 1543, Ulick de Burgo submitted to Henry VIII, who created him Earl of Clanricarde. In the seventeenth century, to prevent their lands from being confiscated by the followers of William of Orange, the Clanricardes changed from Catholicism to Protestantism, as did many of the neighbouring families. The Clanricardes built a fine castle at Portumna which was inherited by Viscount Lascelles, the husband of Princess Mary, only daughter of George V. It came to him from a great-uncle, the last Marquess of Clanricarde (d. 1916), an eccentric who lived in miserly squalor in rooms in London.

Of the many Burkes who took service with continental powers after the defeat of James II, none was more distinguished than Toby Bourke (c. 1674 - c. 1734), whose connection was with Spain. Raymond Bourke (1773 - 1847), a peer of France descended from the Mayo Burkes, accompanied Wolfe Tone to Ireland in the 1798 expedition and later became a famous Napoleonic commander. Several other Bourkes or Burkes distinguished themselves in the army of France.

One of the greatest statesmen of his day, Edmund Burke (1729 - 97), was born in Dublin. A political writer and a powerful orator, he exhorted diplomacy rather than bloodshed while a Member of Parliament in Britain at the time of the French Revolution. Nor was he afraid to say that British stupidity had lost America and would lose Ireland. Although far from wealthy, when he was Privy Counsellor he reduced his own salary by three-quarters! His book, Reflections on the Revolution in France, was considered enormously important all over Europe; in it he famously wrote that "the age of chivalry is gone. That of sophisters, economists and calculators has succeeded". His only son, Richard Burke (1758 - 1794), was agent of the Catholic Committee.

Dr. Thomas Burke (1705 - 1776) was Dominican Bishop of Ossory and author of Hibernica Dominicana. Walter Hussey Burgh, statesman and orator, was born in Kildare in 1742. He studied law at Trinity College, Dublin. It was said of him, "No modern speaker approaches him in power of stirring the passions". Contemporary with Walter was William Burgh of Kilkenny. He went into politics in England, where he bravely advocated the abolition of the slave trade and vigorously opposed the Union, which he saw would tie the Irish government even more tightly to England. He lived in York, England, for many years and left his library to York Minster.

William Burke (1792 - 1829) of Cork was hanged as a notorious criminal. With his fellow countryman, Hare, he lured strangers into his Edinburgh lodging house, made them drunk, suffocated them and sold their bodies for dissection. His awful work gave a new word to the English language - "to burke" - meaning to suffocate.

Robert O Hara Burke (1820 - 61) of St Cleran's, Craughwell, County Galway, was of the Clanricarde Burkes. He served in the Austrian army as a captain, and later joined the Australian police as an inspector. He and his companion, W.J. Wills, were the first white men to cross Australia from south to north. Their expedition was far from well planned and, on the return journey in 1861, they both died of starvation after covering 3,700 miles on foot and on camel back. A film of their tragic adventure, Burke and Wills, was made in Australia in 1986.
Richard Southwell Bourke (1822 - 1872), 6th Earl of Mayo and also Lord Naas, was Chief Secretary for Ireland during the Fenian risings. In 1869, when Bourke was only 46, Disraeli appointed him Viceroy of India. He was regarded as "one of the ablest administrators that ever ruled India". While on a visit to a penal settlement in the Andaman Islands he was assassinated.

Canon Ulick Bourke (1829 - 87) was from County Mayo. He was one of the first and most influential of the Irish language revivalists.

Thomas Henry Burke (1829 - 82) of Galway, while under-secretary at Dublin Castle, was walking in Phoenix Park with the newly arrived Chief Secretary for Ireland, Lord Frederick Cavendish, on Sunday, 6 May 1882, when they were knifed to death by terrorists styling themselves "Invincibles".

Great numbers of Burkes, many of them lawyers, went to America. Aedanus Burke (1742 - 1802) of Galway went to Virginia, where his law studies led to his appointment as a judge. He was the first Senator to represent South Carolina at Congress. A man at cross-purposes with himself, he believed in slavery and in democracy. During the French Revolution he wrote widely disseminated pamphlets advocating the abolition of all titles of nobility. He has been nicely described in the Dictionary of American Biography as "an irascible man leavened with Irish wit".

Thomas Burke (c. 1747 - 83), an aristocratic Galway man, prospered in law and politics in North Carolina, where he called his estate Tyaquin after the family seat in Galway. He organised the US army in its fight for independence so thoroughly that the British kidnapped him, but he escaped. Burke County, North Carolina, is named after him.

John Daly Burke (c. 1775 - 1808) added Daly to his name in gratitude to a Miss Daly who aided him, as a political refugee, to escape to America in 1796. In Boston he struggled unsuccessfully with newspaper publishing. Success came when he found a dramatic formula that suited the nationalism of his time by writing a play with a battle scene depicting Bunker Hill. The play had long runs in Boston and New York. He was killed in a duel by a Frenchman with whom he had quarrelled.

John Gregory Bourke (1846 - 96) of Philadelphia was over-intensively educated by his parents, who had emigrated from Galway. He ran away to join the 15th Pennsylvania Cavalry and made a career in the army. He also studied the customs of the Indian tribes and was recognised as a reliable and scientific ethnologist.

Stevenson Burke (1826 - 1904), son of Ulster Scottish-Irish immigrants, was a lawyer who prospered in the nineteenth-century boom. He owned mines and railroads and conducted many important legal cases in Cleveland. He was the founder of the Cleveland School of Art.

Thomas Nicholas Burke (1830 - 83), a Dominican, preached throughout the United States of America in the mid-nineteenth century and, although his goals were chiefly Irish political ones, he was able to donate £100,000 to charities in America.

Thomas Burke (1849 - 1925), born in New York of Irish parents, was a self-made lawyer. He practised in Washington State for fifty years where, it was said, "his career was synonymous with Washington's history". He expanded trade to China and Japan and organised the railroads to the Pacific, and so became a leading citizen of Seattle.

Many Bourkes went to Australia, including Sir Richard Bourke (1777 - 1855), a relative of the great Edmund Burke, with whom he stayed in London as a student.
Following a military career, he retired to Thornfield, his family estate near Limerick. The Colonial Office tempted him away with a political-military post in Cape Colony, where he demonstrated an enlightened attitude towards the Kafirs. In 1831 he was appointed Governor of New South Wales. It was a period of great economic growth and exhausting controversies. Although offered a number of other high colonial appointments, he resigned in 1838.

John Burke (1842 - 1919) and John Edward Burke (1871 - 1947) were from a Kinsale family who sailed on the emigrant ship Erin go Bragh to Queensland. With their many Burke children they were very much to the fore as shipmasters and shipowners in Australia.

Perhaps the strength of the powerful, well-recorded Burke presence in Ireland can best be demonstrated by the physical mark they have left on the island, where they built 16 abbeys and 62 castles in County Mayo and 121 castles in County Galway, and left at least 38 variations of the de Burgo - Burke - Bourke name! The versatile Burkes display a diversity of aptitudes: from William de Burgh, "the conqueror of Ireland", progenitor of the Burkes in Ireland, to Martha Jane Burke (1852 - 1903) of the Wild West, known as "Calamity Jane"; from the internationally acclaimed photographer Margaret Bourke-White, born in New York in 1906, back home to "the gentle rock star", Chris de Burgh, grandson of General Sir Eric de Burgh of Bargy Castle, County Wexford.

In 1990, Ireland elected its first woman President, Mrs Mary Robinson. A graduate of Trinity College, Dublin, she is a distinguished lawyer. She was born in County Mayo, where her father, a Bourke, is a medical doctor.

The heraldry of the Burkes is well documented and there is a long list of coats of arms associated with the name. The following is regarded as the most ancient and is the coat of arms for the entire sept.

Arms: Or a cross gules, in the dexter canton a lion rampant sable.
Crest: A cat-a-mountain sejant guardant proper, collared and chained or.
Motto: Ung roy, ung foy, ung loy (archaic French: "one king, one faith, one law").
Make sure toys are safe for kids

Published 8:00 pm, Thursday, November 1, 2007

In recent months recalls have been issued for millions of toys because they contain lead paint. Lead, of course, is highly toxic and poses serious health risks to children. And any parent knows that young kids especially tend to put things, including toys, in their mouths. So lead paint in toys is extremely dangerous.

Parents and others buy children toys to bring them joy, not to cause them harm. If it's on store shelves, parents should be able to purchase a toy with confidence that it is safe. But you know, and we know, things never seem to be as simple as they should be. Agencies such as the Consumer Product Safety Commission don't have the staffing or funding to provide as many or as thorough inspections as consumers would like. And simply banning toys from a specific country isn't nearly as easy as just deciding to do so.

So, that means parents must be extra vigilant when buying toys for their children. Here are some things you can do:

- Skip buying jewelry, especially cheap jewelry, for young kids. Many such items have been recalled because they contain lead. And many pieces pose a choking hazard for youngsters.
- Test your toys for lead with a home testing kit. Plenty are out on the market. The Huron County Health Department recommends LeadCheck Swabs by Hybrivet Systems Inc., a Massachusetts-based company formed in 1984.
- If you're concerned about your child's health, a simple blood test can detect lead levels.
Diverticulitis occurs when the small pouches that line the lower portion of the large intestine, called diverticula, become inflamed. The most common symptom of diverticulitis is pain in the abdomen, varying in severity; however, an individual may also experience nausea, chills, fever and changes in bowel habits. Below you will find alternative and natural treatment options, including those from a Chinese Medicine perspective, for diverticulitis. Need treatment options for diverticulitis and not finding the information you need? Through our forums, our staff and our community may offer guidance regarding the treatment of diverticulitis.
By the end of the last decade, psychiatric medications were being used less often in very young children, a new study suggests. Researchers found the percentage of children prescribed antipsychotics, stimulants and antidepressants at doctors' visits spiked in the mid-2000s but leveled off again between 2006 and 2009.

"I'm very excited that the use of these drugs in this age group seems to be stabilizing," Dr. Tanya Froehlich, the study's senior author from Cincinnati Children's Hospital Medical Center in Ohio, said. "It's good to get a gauge on what we're doing with psychotropic medications in this age group, because we really don't know what these medications do to the developing brain," she said.

Previous studies have tried to estimate the use of psychiatric, or psychotropic, drugs among preschoolers, Froehlich and her colleagues write in Pediatrics. But those studies tended to focus on one class of medication or only a segment of the population.

For the new study, the researchers pulled national data from 1994 to 2009 on about 43,500 doctors' visits for kids aged two to five. During that time, the proportion of psychotropic drug prescriptions varied between one prescription for every 217 doctors' visits in 1998 and one for every 54 visits in 2004.

Overall, the researchers found about 1.0 percent of preschoolers left doctors' visits with a psychotropic prescription between 1994 and 1997. That rate fell to about 0.8 percent between 1998 and 2001. It then jumped to a high of about 1.5 percent between 2002 and 2005 and then returned to 1.0 percent between 2006 and 2009. The decrease and stabilization in the most recent years occurred even though more children were being diagnosed with behavioral disorders throughout the study period.

Although the study can't explain why the rate of prescriptions dropped in 2006 to 2009, the researchers suggest it may be due to an increased awareness of possible side effects from these types of medications. For example, the U.S. Food and Drug Administration issued a strong warning in 2004 about a link between antidepressant use among children and suicide risk. A number of conditions, including diabetes and obesity, have also been linked to the use of antipsychotics among children (see the Reuters Health story of August 22, 2013).

"I think this is an area that has gotten a fair amount of public attention and it could be this is parents and physicians stepping back from a willingness to prescribe these medications," Dr. Mark Olfson, who was not involved with the study but has researched medication use among children, said.

"Mostly they're being prescribed to bring various kinds of disruptive behavior in preschoolers under control. I hope (these findings suggest) parents are searching for other means to address this behavior," Olfson, a professor of clinical psychiatry at Columbia University Medical Center in New York, said.

"The thing pediatricians should be asking themselves is, 'Are we really following the guidelines in treating these children?' which is trying behavioral therapy and then going to the medications," Froehlich said.

"What really is important is that a thorough assessment be conducted before any decision is made about prescribing medications," Olfson added.
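For readers comparing the two ways the article quotes rates, the "one prescription for every N visits" figures convert directly into percentages. This sketch uses only the numbers reported above.

```python
# Convert "one prescription per N visits" into a percentage rate.
for year, visits_per_rx in [(1998, 217), (2004, 54)]:
    rate = 1 / visits_per_rx
    print(year, f"{rate:.2%}")
# 1998 0.46%
# 2004 1.85%
```

As expected, these single-year extremes sit outside the flatter period averages reported above, since averaging over several years smooths out the trough and the peak.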
A Royal Navy survey vessel has discovered a 250m-deep canyon in the Red Sea during a nine-month mission to improve understanding of the waters east of Suez. The 3D images were created using a multibeam echo sounder after HMS Enterprise left the Egyptian port of Safaga, the Independent reported.

Derek Rae, commanding officer of HMS Enterprise, said that the features could be the result of ancient rivers scouring through the rock strata before the Red Sea flooded millennia ago. He said some of the features could be younger and still in the process of being created by underwater currents driven by the winds and tidal streams that flow through this area of the Red Sea, carving their way through the soft sediment and being diverted by harder bedrock. The features, he added, could also be a combination of the two. (ANI)
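For context on the instrument involved: a multibeam echo sounder times acoustic pings and converts the round trip into depth. A minimal sketch of that conversion follows; the sound speed is an assumed textbook value for seawater (real surveys correct it for temperature, salinity and pressure), and the example ping time is invented.

```python
# Depth from an echo sounder ping: half the round-trip distance.
SOUND_SPEED_MS = 1500.0  # approximate speed of sound in seawater, m/s

def depth_from_echo(two_way_time_s, sound_speed=SOUND_SPEED_MS):
    """Convert a ping's two-way travel time into water depth in metres."""
    return sound_speed * two_way_time_s / 2.0

print(depth_from_echo(0.5))  # a 0.5 s echo implies about 375.0 m of water
```

A multibeam system simply does this for a fan of many simultaneous beams, which is what allows seabed features like the canyon to be rendered in 3D.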
'Tomayto', 'tomahto'? Not necessarily true when it comes to Spanish. And definitely not according to the hit YouTube video "Que Difícil Es Hablar En Español" ("It's Really Hard To Speak Spanish"), in which Colombian musicians and brothers Juan Andrés and Nicolás Ospina sing about the endless (and confusing) variations of many common Spanish-language words.

In the video, which has already garnered close to 2 million views since it hit the web last week, the Ospina brothers sing "que difícil es hablar en español, porque todo tiene otra definición", which translates to "it's hard to speak Spanish, because everything has a different meaning". A word that means something in one place means something completely different elsewhere. For example, they sing about the word "fresa", which in Colombia means "strawberry" -- the fruit -- but in Mexico "fresa" means a waspy snob. In Argentina a waspy snob is called a "cheto", but "cheto" is not strawberry. Strawberry is "frutilla" in Argentina. And so on.

The humorous lyrics also speak to an important truth: the fact that a single "correct" form of Spanish doesn't really exist anymore. The same word or phrase can have multiple definitions depending on the country, and as such "speaking correct Spanish" is close to impossible.

This idea is not surprising when you look at the numbers. With 329 million native speakers, Spanish ranks as the world's No. 2 language in terms of how many people speak it as their first language. It is second only to Mandarin. Spanish is the official language of 24 different countries.

The Ospina brothers sing part of their song with an accent -- one which a native English speaker who also speaks Spanish might have. They do this to allude to the fact that Spanish is becoming even more complicated now that its speakers have adopted English phrases. Spanglish, if you will. Some of the Spanglish phrases they sing about are "guachiman" (from "watchman") and "hanguear" (from "hanging out"). And even though Spanglish might provide some comedic material, it also confuses the language further. "Porqué tiene que ser tan difícil saber como diablos hablar español!?!?" ("Why the hell is it so hard to speak Spanish!?!?") the Ospina brothers sing.

According to their YouTube page, they dedicate their song "to all our brothers in Latin America and Spain, and all the Spanish-speaking community, the cultural diversity, the wealth of the language and to all the people who once tried to speak in Spanish and weren't able to" ("Todos los hermanos en Latinoamerica y España, y a toda comunidad hispanoparlante, la diversidad cultural, la riqueza del lenguaje y las personas que intentaron hablar español alguna vez y no lo lograron").
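The song's core observation, that meaning depends on both the word and the country, maps neatly onto a lookup keyed by the pair. The entries below come straight from the examples quoted in this article.

```python
# A Spanish word's meaning can depend on the country, so a naive
# word -> meaning dictionary fails; key on (word, country) instead.
meanings = {
    ("fresa", "Colombia"): "strawberry (the fruit)",
    ("fresa", "Mexico"): "a waspy snob",
    ("cheto", "Argentina"): "a waspy snob",
    ("frutilla", "Argentina"): "strawberry (the fruit)",
}

def translate(word, country):
    return meanings.get((word, country), "unknown in this variety")

print(translate("fresa", "Colombia"))  # strawberry (the fruit)
print(translate("fresa", "Mexico"))    # a waspy snob
```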
WEDNESDAY, Dec. 7, 2016 (HealthDay News) -- Even small increases in blood pressure can be dangerous for black people, a new study suggests.

A rise of as little as 10 mm Hg in systolic blood pressure in blacks raised the risk of dying during the study by 12 percent. The risk was even greater for black people under 60 -- each additional 10 mm Hg increased the risk of dying early by 26 percent, compared with a 9 percent increase for those over 60, the study showed.

"These findings should urge doctors and patients to consider all the available data and weigh the risks and benefits prior to selecting a blood pressure goal in African-American patients," said lead researcher Dr. Tiffany Randolph. She's a cardiologist with the Cone Health Medical Group HeartCare in Greensboro, N.C.

Blood pressure is made up of two numbers. The top number is called systolic pressure. This measures the pressure in the arteries when blood is being pumped from the heart. The bottom number -- diastolic pressure -- measures the pressure between heartbeats. Blood pressure is expressed in millimeters of mercury (mm Hg).

The 2014 blood pressure guidelines from the U.S. National Institutes of Health Eighth Joint National Committee changed blood pressure goals for patients over 60 without diabetes or kidney disease. The goal was changed to a target of less than 150/90 mm Hg. Previously the goal had been 140/90 mm Hg, Randolph said.

Although the recommendations were based on clinical trials, the trials didn't include many black people, she said. "Our data suggest that increases in blood pressure are associated with greater risk of death among all ages of African-Americans, even people over age 60," Randolph said.

Only about 50 percent of all people with high blood pressure reach these goals. And because black people are more likely to have high blood pressure and suffer from its consequences, such as stroke, heart attack and kidney failure, "there is concern that raising the recommended blood pressure goals in this population may have unintended consequences," Randolph said.

Moreover, even though the increased risk of death from high blood pressure was smaller among people 60 or older, they may actually benefit most from having well-controlled blood pressure, as their overall risk of death is higher than that of those under 60, she said.

Dr. Gregg Fonarow is a professor of cardiology at the University of California, Los Angeles and a spokesman for the American Heart Association. He said, "These findings provide further evidence of the potential harms in terms of increased risk of heart attacks, strokes, heart failure and premature deaths that resulted from any physician or patient that followed the Joint National Committee blood pressure guidelines."

These guidelines have been controversial, Fonarow added. Rather than tightening blood pressure goals to be consistent with all clinical trial evidence in adults 60 and over, they actually loosened the goal. Major professional societies, such as the American Heart Association and others, have refused to endorse these guidelines, he said.

The new study included more than 5,200 people enrolled in the Jackson Heart Study between 2000 and 2011 in Jackson, Miss. All of the study participants were black and their average age was 56. Nearly two-thirds were women. Participants were followed for an average of seven to nine years.

At the beginning of the study, 60 percent of the participants had high blood pressure, Randolph said. The median blood pressure at the start was 125/79 mm Hg.
"We found that every 10 mm Hg increase in systolic blood pressure was associated with a 12 percent increase in the risk of death and a 7 percent increase in the risk of being hospitalized for heart failure," she said. Fonarow recommended these target numbers for optimal health: "The ideal for heart and brain health is a systolic blood pressure of less than 120 mm Hg and diastolic blood pressure less than 80 mm Hg," he said. Recently, the Systolic Blood Pressure Intervention Trial (SPRINT), of which 30 percent of patients were black, showed that aiming for a systolic pressure of less than 120 mm Hg saved lives, reducing deaths from any cause by 27 percent, Fonarow said. Dr. Stacey Rosen is vice president of women's health at Northwell Health's Katz Institute for Women's Health in New Hyde Park, N.Y. "This study highlights the need to do more work on where treatment goals should be," she said. "We cannot underestimate the importance of pushing blood pressure lower in order to minimize cardiovascular risk," Rosen said. High blood pressure is manageable with a heart-healthy lifestyle, including maintaining a healthy weight, eating a healthy diet, being physically active, not smoking and, for some, taking blood pressure-lowering medication, the researchers said. The report was published online Dec. 7 in the Journal of the American Heart Association. For more on blood pressure, visit the American Heart Association.
{ "date": "2017-03-28T13:54:22", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189771.94/warc/CC-MAIN-20170322212949-00171-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9653461575508118, "score": 2.953125, "token_count": 1023, "url": "http://healingwell.e-healthsource.com/index.php?p=news1&id=717557" }
- Author: Ben Granholm
Epizootic bovine abortion (EBA), also referred to as foothill abortion, is one of the most serious cattle diseases in the Western United States. UC researchers Professor Jeff Stott and Specialist Myra Blanchard, from the UC Davis School of Veterinary Medicine, have made major headway in developing a vaccine for this disease and are currently mid-way through a multi-year trial examining the vaccine's field efficacy. The disease is carried by ticks and is present in many foothill regions, making SFREC a natural outdoor lab for evaluating field efficacy. Last Wednesday researchers checked the pregnancy status and condition of heifers assigned to the study. By August, heifers will be moved to irrigated pasture where SFREC staff can monitor the animals closely to see how the vaccine improves calving success and calf health. More on the efforts of SFREC researchers to combat foothill abortion is available on the center's website.
- Author: Maddison Easley
Small black dots can be seen from afar amidst the Lower Ranch fields at the Sierra Foothill Research & Extension Center. Upon closer inspection, those spots morph into fuzzy, knob-kneed, curious little calves that are sure to incite many cries of "Awwwwe!" from visitors. However, to a seasoned rancher those cute calves are a testament to the worthwhile blood, sweat, and tears that were shed leading up to a successful delivery. A healthy calf is the ultimate goal of any cow-calf manager, but once those critters finally do take their first breaths, the work has just begun…again. In the Sierra Foothills, healthy calves signify a greater achievement - the triumph over a bacterial disease called epizootic bovine abortion (EBA). Extensive research has been conducted on this economically devastating problem, with annual losses in the range of 45,000 to 90,000 calves in the state of California alone. EBA is commonly termed "foothill abortion" due to the regional outbreaks affecting only foothill, semi-arid and mountainous ranges of California, parts of Nevada, and southern Oregon. Through studies and research efforts by scientists associated with UC Davis, known information and management strategies have made slow, yet very significant, progress since the recognition of EBA in the 1960s. For example, the culprit of EBA has been identified as the soft-shelled tick Ornithodoros coriaceus – explaining the climatic limitations of the disease observed so far. Faculty and site conditions at SFREC have provided the ideal atmosphere for useful data collection. Staff Research Associate Nikolai Schweitzer is charged with the task of checking the irrigated fields daily for signs of aborted fetuses. "It's important to be highly aware and check the fields at least twice a day. The scavengers in this area move in quickly!" said Schweitzer. All aborted fetuses are transported to UC Davis for additional lab tests to accurately determine whether EBA was the cause of death. Infected cows do not show signs of the disease during pregnancy because the bacterium is transmitted to the immature fetus, where it proliferates and results in a late-term abortion. Fortunately, the outlook for the candidate vaccine is very promising. The release of an effective EBA vaccine will save ranchers countless hours of disappointment and headaches, while beefing up their worn wallets! This will be another significant feat for the cattle industry, SFREC, UCANR, and animal scientists in the West.
- Author: Maddison Easley
The assessments are confidential and will be used to generate training materials that producers can then use to improve the health and welfare of their herds. The leaders and key individuals involved with this project include Cassandra Tucker of the UCD Department of Animal Science, Bruce Hoar of the Western Institute for Food Safety and Security, and UCD graduate student Gabrielle Simon. Useful links to additional information and resources about beef health and welfare accompany the original post.
- Author: Jeremy James
Summer is a prime time for pinkeye on California rangeland. SFREC is not immune to this problem, so we screen for pinkeye frequently, particularly during animal handling efforts. Pinkeye is often observed as an oozing, discolored, bulging eyeball. Pinkeye, known in the scientific community as infectious bovine keratoconjunctivitis (IBK), is a bacterial disease with varying degrees of severity. This troublesome inflammation can ultimately lead to blindness in severe cases. Last week, 103 heifers were examined for pinkeye at SFREC. Most of the cattle had no visible symptoms of eye trouble, but a portion had some degree of pinkeye present – healing, active, or scarring. From a manager's viewpoint, this is a very costly disease. Pinkeye is known to keep calves from thriving due to ocular pain and poor vision. The cost of treating pinkeye with antibiotics adds up quickly, not to mention the extra time and effort spent administering treatment. Additionally, the marketability of affected animals can be hindered. Pinkeye is a complicated disease, and SFREC has provided key research support in this arena for the last several decades. Pinkeye is caused by Moraxella bovis, a bacterium that is typically transmitted from infected animals by flies. Multiple factors may contribute to the development of the disease, but some degree of eye irritation is necessary for infection. Cattle plagued with IBK develop painful corneal ulcers that oftentimes leave scarring in the eye. When the cornea ruptures, blindness will occur. Research conducted at SFREC and published in 1990 offers additional background on the disease. The challenge of controlling pinkeye continues to be a prominent focus of scientists and industry professionals. Recent studies at SFREC, led by Associate Professor John Angelos of the University of California Davis School of Veterinary Medicine, have increased knowledge of the molecular composition of the M. bovis cytotoxin, and even indicate promise for a recombinant subunit vaccine. Agrilabs, a company that works to connect research, manufacturers, and consumers, has published an article featuring Angelos and worthwhile information about IBK. A successful management strategy for pinkeye in cattle involves an integrated approach that should include mineral supplements and quality nutrition to help maintain a strong immune system, reduction of environmental irritants (i.e., those annoying flies, plus tall grasses), and a well-planned medication strategy. Isolation of infected animals is always a wise measure to take. Be sure to contact your practicing veterinarian for specific questions and recommendations.
{ "date": "2019-12-11T02:50:49", "dump": "CC-MAIN-2019-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529745.80/warc/CC-MAIN-20191211021635-20191211045635-00496.warc.gz", "int_score": 3, "language": "en", "language_score": 0.942863941192627, "score": 3.078125, "token_count": 1364, "url": "https://ucanr.edu/blogs/mySFRECblog/index.cfm?tagname=Animal%20Health" }
Writing is a form of communication that can travel far and reach many. Not only is writing a mobile form of communication, but it is also a long-lasting form. We move our mouths and produce odd sounds that we somehow comprehend, and this forms our main method of communication. We communicate through touch, gestures, facial expressions, and body language. There is a constant exchange of information between you and the world around you, between you and the people around you. For example, you are walking down a crowded hallway. People are scattered about, some sitting on the floor, most standing around, others focused on their phones, and some engaged in conversation. Though you don't make much of it, though you don't really notice it, though you are thinking your thoughts, there is an exchange going on between you and these strangers. You notice most are ignoring you, and you may notice someone checking you out; whatever it is, you are receiving information from what you perceive and from what that perception makes you feel. That is the basic, physical form of communication we constantly participate in. Many think that communication between human beings is limited to the physical senses. How else can you communicate if not with what is visible, tangible, and heard? Yet many have had experiences that challenge the assumption that communication is limited to the physical. Many describe an odd feeling in their lower stomach before horrible events. Sometimes just the phrase "I knew that was going to happen" sums up that indescribable experience. For example, you're working at your computer and you get thirsty. You get up, go and get a drink, and come back to your computer. You try to find the perfect spot where you won't spill your drink. Maybe here. Maybe there. You put the cup down. Then pick it up. Maybe not there. Eh, I'll just leave it there. And you start working again. Moments later, you manage to spill the drink all over. "I knew that was going to happen," you mutter as you rush to salvage your computer and whatever else. Intuition? Yes, a minor instance of intuition. Intuition is often described or thought of as a superpower. Mothers often have a very strong sense of intuition when it comes to their children. Even scientists cannot deny or fully explain intuition between mothers and their children. But what is intuition? Intuition is a sense that belongs to the extrasensory abilities we all naturally have. Extrasensory perception (ESP) is the stuff of psychics and ghosts, of the X-Files and Spooky Mulder, of candles and incense. At least, that is the mainstream perception of intuition. Intuition is seen as something obscure, secret, dangerous, unholy, unnatural, and it is relegated to the darker corners of the imagination. And so, in order to begin to understand and even awaken your intuition, you need to let go of what you think intuition is. As always, when delving into the spiritual, or when learning anything really, having an open mind and being as aware as possible are the most beneficial and important things. So intuition is a sense. You have the five physical senses: sight, hearing, taste, touch, and smell. And then you have the nonphysical senses, of which intuition is one. There are many nonphysical senses: telepathy, intuition, precognition, scrying, etc. There are many psychic or spiritual abilities, and many overlap each other. Telepathy and intuition are very closely related. Scrying and clairvoyance are closely related.
But the reason telepathy and intuition are grouped and discussed here is that these two abilities are the ones that can bring people closer to each other. Telepathy and intuition are abilities that can unite a couple. These are the abilities that can connect you to your spirit guides, to the Gods, to angels, and to ascended masters. Telepathy and intuition open the doors of the spiritual reality. Intuition is that moment when you decide to follow your gut feeling, that instinct, instead of your mind. And the manifestation of intuition is the moment when following through gets you positive results. Telepathy is different. Whereas intuition is somewhat of a subconscious action, telepathy is a conscious action. Telepathy won't happen without some amount of effort. Through meditation, telepathy can be developed. But how does it work? It is not so easy to explain. For each person, telepathy will feel, work, and develop differently. Telepathy is the conscious interpretation of energy, because life is energy. Every living thing, and perhaps even the inanimate, has energy. When we interpret feelings into worded messages, that is telepathy. When two people communicate their thoughts while being in separate rooms, that is telepathy. And it happens because their energies have connected for however long the communication lasts. They are able to understand each other without using words. Telepathy is communicating with a language beyond words, a language higher than the dense physical existence. Telepathy is the language of the soul.
{ "date": "2017-10-24T00:33:57", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9646670818328857, "score": 2.9375, "token_count": 1050, "url": "https://theimportant1111blog.wordpress.com/category/telepathy/" }
Doctors first started using the ketogenic diet to treat patients with epilepsy in the 1920s. While the diet has evolved over the decades to include less strict versions, and is gaining mainstream popularity for weight loss, children with epilepsy and other neurological conditions continue to benefit from its seizure-controlling effects. The ketogenic diet team at Seattle Children's Neurosciences Center takes a modern approach to help families use food as medicine. Here, ketogenic diet team members, neurologist Dr. Christopher Beatty; advanced practice provider Haley Sittner; clinical dietitian Marta Mazzanti; and nurse Deborah Rogers discuss how the diet works and how the team sets families up for success on the ketogenic diet.
Q: What is the ketogenic diet?
MAZZANTI: The ketogenic or "keto" diet is a medical diet that mimics fasting. When someone is fasting, the body uses fat as the main source of fuel instead of sugar. On the ketogenic diet, we trick the body into thinking that it is fasting by increasing the fat and limiting the proteins and carbohydrates a person eats.
ROGERS: The ketogenic diet we prescribe at Seattle Children's to treat epilepsy and other neurological conditions is not the mainstream version touted for weight loss. For one, the diet is very structured. Not only do we seek to achieve ketosis – the therapeutic state where the body is using fat as fuel – but we also monitor how well the diet is controlling seizures by regularly checking a patient's blood levels. Because the diet limits several nutrients needed for growth and development, close medical management is required to monitor for any side effects and insufficiencies in vitamin and mineral intake.
Q: How does the ketogenic diet help treat epilepsy?
BEATTY: While experts are not completely sure why the ketogenic diet works, we do know that limiting the carbohydrates helps control seizures, as does making the body run on fat. Children with epilepsy can see long-term reduction in their seizures from having been on the diet for a period of time. This means once the diet initially controls their seizures, the child can often go off the diet and remain seizure-free. In many cases we are able to reduce the medications a patient is taking because the diet is so effective.
SITTNER: Even patients who may not experience seizure improvement on the ketogenic diet can see cognitive and developmental improvement after being on the diet for a short time. Parents will say their children seem clearer, more energetic, with improved language and motor function.
Q: Why are families interested in pursuing the ketogenic diet for epilepsy?
BEATTY: In treating childhood epilepsy, usually after two medications fail to stop a child's seizures, the chances of the next medication making a significant impact are very low, about 5%. On the keto diet, the majority of children experience greater than 50% seizure reduction. About 10-20% of patients have 90% seizure reduction. When anti-seizure medications no longer work and surgery is not an option, I will advise parents that the ketogenic diet is the most effective treatment we can offer.
Q: Are certain patients a better fit for the ketogenic diet?
BEATTY: Though we can typically explore using the ketogenic diet with any interested family, there are certain diagnoses where we know it is a slam dunk. The classic one is GLUT1 deficiency syndrome, which prevents glucose from being transferred into the brain.
The diet works so well in these patients because we train the brain to run off ketones instead of glucose. Children with Dravet syndrome, tuberous sclerosis complex and infantile spasms tend to have high success rates on the diet too. For a rare form of epilepsy known as febrile infection-related epilepsy syndrome (FIRES), where a healthy school age kid goes into sudden status epilepticus, the ketogenic diet is the best treatment option. At Seattle Children’s, we can get the diet up and running in 48 hours or less for patients in this life-threatening situation. Our ketogenic diet team was involved in one of the earliest cases of establishing the ketogenic diet to treat FIRES, helping to stabilize the patient and minimize the amount of damage caused by the constant seizures. Q: What does a typical meal on the ketogenic diet look like? MAZZANTI: We use a version of the ketogenic diet known as the Modified Atkins diet at Seattle Children’s. Unlike the classic ketogenic diet, it does not restrict calories. While it is more liberal than the diets other medical centers offer, we have been able to achieve really good results and families enjoy having more flexibility when preparing meals. Each meal includes a protein, two servings of fat and then a specified amount of carbohydrates. For example, a breakfast may include two eggs, butter and heavy cream for cooking, sliced avocado and a small serving of fruit. There are two rules for every meal – the child must eat the fat first and they must clean their plates. Q: How long do children stay on the keto diet? BEATTY: Because it can take from a week up to one to three months for the diet to be effective, we ask families to try the diet for at least three months. If the child does well, the diet is usually followed for at least two years. After two years, we can decide if it makes sense to slowly introduce carbohydrates back into the diet. Q: How does Seattle Children’s help families be successful on the diet? SITTNER: Families come to us ready for the change, so we do everything we can to set them up for success. Our ketogenic diet team, which includes the patient’s neurologist, advanced practice providers, nurses, a dietitian and a social worker, works with the family to integrate the diet into their lives. Before starting a patient on the diet, we identify and overcome any barriers the family may have. Then we teach them how to identify and count carbs, how to make food and how the metabolic system works. We give families tools to address the different scenarios they will encounter – whether that’s following the diet at school or avoiding hidden carbs in items like lotion, shampoo and toothpaste. ROGERS: Another important aspect of our training is educating the family. We call it a diet, but it is a medication. In order for medications to work, they must be taken as directed 100% of the time. The same holds true for the ketogenic diet.
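As a purely illustrative aside, the carb counting Sittner describes can be pictured as a running tally against a daily limit. Everything below (the limit, the food items, and their per-serving net-carb values) is made up for the sketch; real ketogenic management follows the care team's prescription, not this code.

```python
# Illustrative only: tallying net carbs for one day against an assumed
# Modified Atkins-style limit. Values are invented, not clinical data.

DAILY_NET_CARB_LIMIT_G = 15.0  # hypothetical prescribed limit, in grams

meals = {
    "breakfast (eggs, cream, avocado, fruit)": 7.5,
    "lunch": 4.0,
    "dinner": 3.0,
}

total = sum(meals.values())
print(f"Total: {total:.1f} g net carbs of {DAILY_NET_CARB_LIMIT_G:.0f} g allowed")
if total > DAILY_NET_CARB_LIMIT_G:
    print("Over the limit; portions would need adjusting")
```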
{ "date": "2019-04-25T15:59:22", "dump": "CC-MAIN-2019-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578727587.83/warc/CC-MAIN-20190425154024-20190425180024-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9314882159233093, "score": 2.828125, "token_count": 1350, "url": "https://pulse.seattlechildrens.org/food-as-medicine-high-fat-keto-diet-prescribed-to-treat-epilepsy/" }
He says that one of the greatest legacies of the two rovers is how they've made Mars familiar. "It's no longer this strange distant alien world. My team goes to work on Mars every day. That's one of the greatest intangibles these rovers have given us. Our universe is larger now." When the twin rovers were launched, they were designed to last 90 days. That was just over six years ago. Callas calls the extra time on the planet and the accompanying data a bonus. "We do recognize that they are older. They are a bit more frail and a little bit more arthritic, but they are still tremendously capable resources," Callas says, "and they're on the surface of another world. Right now it is the only presence on the surface of another world anywhere in the solar system. So as long as they have capability, we can explore. The great thing about a rover, you're not stuck in the same location looking at the same real estate or the same rock." Each day, the rovers take images and send them back to NASA and Callas' team to be studied. "We get about 100 images a day from each rover and that translates out to only about 100 megabits of data, so stuff that would easily fit within your cell phone or your thumb drive. It's not a huge amount of data, but the fact that it's from this exotic location is what makes each one of those little bits count so much." During each Mars winter, the rovers enter a hibernation period in order to conserve energy. Before the past Mars winter (the rovers' fourth on the planet), Spirit became what Callas called "embedded". "We weren't able to reposition (Spirit) favorably for the deep dark days of winter, so it didn't generate enough energy each day during the depths of the winter to power all its systems, so it powered down into a hibernation state. It's not talking to us, so we haven't heard from the rover in many, many weeks." Callas says they hope to hear from Spirit sometime in the fall, but he knows the extreme cold temperatures could affect the rover. "(Spirit) is going to get colder than it's ever been before on Mars because it's shut down. We're talking about temperatures colder than Antarctica, without any heaters being on. So think of leaving your laptop out at night in the winter time in Antarctica and expecting it to work the next day." The rovers are located on opposite sides of the planet from one another. So while Callas' team waits for Spirit to respond again, Opportunity is busy heading towards Endeavour Crater. "We want to get (to Endeavour Crater) because it's scientifically exciting. There are these clay minerals that are found around this crater, that we see from orbit, that formed a long time ago in neutral pH water. This is very exciting because we found evidence of acidic water on Mars, but not neutral water. This is exciting to the astrobiologist because if you're looking for life, you're expecting life to have formed in a neutral pH environment." When launched, each rover was designed to travel approximately one kilometer. Opportunity has already traveled close to 22 kilometers in the past six years. While there are still about 10 to 12 kilometers to go before Opportunity reaches Endeavour Crater, Callas is optimistic about the information they can obtain from the area. Callas is also very excited about what lies ahead for both the rovers and for his team at NASA. "Even as old as these two rovers are, there is still an exciting future ahead for each one. Opportunity is headed toward Endeavour Crater."
"For Spirit, once we recover from the winter, we're going to use that rover as a way to investigate the interior of the planet by tracking its radio signal." "Because the rovers will be stationary, or near stationary, by tracking its radio signal we actually measure the motion of the planet Mars. And measuring that motion, we can look for the subtle wobble in the spin of the planet, which tells us about not only the distribution of mass inside the planet, like the size of the core, but whether the core is liquid or solid. So both rovers will be very busy for as long as we can make them busy on the surface." For information on NASA's Mars rover project, visit NASA's website.
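Callas's downlink figures invite a quick back-of-the-envelope check. The sketch below uses only the numbers quoted in the article (about 100 images and roughly 100 megabits per rover per day) and assumes decimal megabits; the six-year total is a rough, illustrative extrapolation.

```python
# Rough arithmetic on the quoted downlink figures; the inputs come from
# the article, and 1 megabit is taken as 10**6 bits.

IMAGES_PER_DAY = 100
MEGABITS_PER_DAY = 100

bits_per_image = MEGABITS_PER_DAY * 1_000_000 / IMAGES_PER_DAY
print(f"~{bits_per_image / 8 / 1024:.0f} KiB per image")   # about 122 KiB

mission_days = 6 * 365                       # roughly six years of operations
total_megabytes = MEGABITS_PER_DAY * mission_days / 8
print(f"~{total_megabytes / 1000:.1f} GB per rover over six years")
```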
{ "date": "2016-09-28T01:45:40", "dump": "CC-MAIN-2016-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661289.57/warc/CC-MAIN-20160924173741-00182-ip-10-143-35-109.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9507753252983093, "score": 3.21875, "token_count": 996, "url": "http://federalnewsradio.com/technology/2010/08/mars-rover-project-continues-successful-mission/" }
transform fault
A strike-slip fault occurring at the boundary between two plates of the earth's crust.
- ‘At transform fault boundaries (such as the San Andreas Fault in California which divides the Pacific and North American plates), the plates move or slide horizontally past each other.’
- ‘The transform fault is represented by the Arakapas fault zone, and would have had a dextral offset and sinistral slip.’
- ‘Most of these plutonic rocks come from the inside corners of transforms, the strips of crust generated in the angle between the spreading centre and the active transform fault.’
- ‘This is the first time, however, that such tremors have been recorded under a transform fault.’
- ‘To decouple the spreading basin from northern New Zealand, they must have been separated by a transform fault.’
- ‘In plate tectonics, a transform fault is a strike-slip fault extending throughout the lithosphere and joining any two other plate margins.’
{ "date": "2017-01-18T21:20:43", "dump": "CC-MAIN-2017-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280319.10/warc/CC-MAIN-20170116095120-00450-ip-10-171-10-70.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9524258375167847, "score": 3.8125, "token_count": 315, "url": "https://en.oxforddictionaries.com/definition/transform_fault" }
Problem 1: Describe the legal principles involved in the expropriation of private property by a state.
Problem 2: Describe the juridical basis of international law.
Problem 3: Describe the concept of state immunity.
Problem 4: To what extent does customary international law play an important role in the effective protection of human rights standards?
Problem 5: Comment on the concept of the territorial sea.
Problem 6: Describe the role of treaties as a source of general international law.
Problem 7: Describe the defenses available to a state for a breach of international law.
Problem 8: Describe the extent to which a state can be considered responsible for the environment.
{ "date": "2017-01-22T03:43:49", "dump": "CC-MAIN-2017-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281332.92/warc/CC-MAIN-20170116095121-00486-ip-10-171-10-70.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8450323939323425, "score": 3.453125, "token_count": 142, "url": "http://www.mywordsolution.com/question/to-what-extent-does-customary-international-law/94144" }
The Internet has completely changed our lives. Society has completely succumbed and adapted to all its great benefits and perks. It has affected the way we use a map, read the news, and even watch TV. Without giving it much thought, it has turned single-taskers into multi-taskers. The Internet is a giant web of computer networks communicating with one another all over the world. Believe it or not, the Internet was first developed back in the 1960s. It was used for government and military purposes only. You might be asking yourself how the Internet works. The set of rules that computers use to communicate is called the Internet Protocol Suite (TCP/IP). As we all know, the Internet is a vast resource for data and information. It is all about how fast we can receive information, which is probably why the military used it. We are beginning to see Wi-Fi signs in store-front windows, on public transportation, and in airports. As time goes on, we will most likely see them in grocery stores, banks, and movie theaters. Internet cafes are already becoming a thing of the past. The Internet is all around us. More and more Wi-Fi "hot spots" are popping up in cities and towns. Local coffee shops and restaurants, airports, and libraries all offer free Wi-Fi access to their customers and visitors. As time goes on, we will begin to see them more and more in other locations such as gyms, schools, and hospitals. As the appearance and popularity of Wi-Fi hot spots and hand-held technologies continue to grow, it has become as easy as the press of a button to access the Internet. Some gadgets out there now don't even require a button; you can set notifications to alert you whenever a Wi-Fi signal in your area is available. Remember the days of accessing the Internet by using a phone line and sometimes waiting up to ten minutes for a dial-up connection? With high-speed cable Internet and fiber optics now readily available, those days are long gone. The Internet is like anything else. It can be really beneficial, but too much of it can lead to problems. It is probably safe to say that in another decade or less, we will be able to connect to the Internet from anywhere using any kind of device. With 2011 just beyond the horizon, we can walk into a McDonald's and check our email or Facebook pages by using a cell phone or iPod. Half the time we don't think twice about "going online". It takes seconds and it is so readily available we don't give it a second thought. It's easy to check your email or do work while you are riding public transportation, look up the status of your flight with your iPhone app, or order takeout from your favorite pizza joint. Our world is becoming one big Wi-Fi hot spot when, ironically, owning your own computer was a luxury only some twenty years ago.
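To ground the TCP/IP mention, here is a minimal sketch of one computer talking to another over the Internet using Python's standard library. The host, port, and request are placeholder choices (example.com is a domain reserved for demonstrations), and the snippet assumes a network connection is available.

```python
# A tiny TCP/IP exchange: open a TCP connection, send bytes, read a reply.
# IP routes the packets between networks; TCP delivers them reliably.

import socket

HOST, PORT = "example.com", 80  # placeholder web server and HTTP port

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(1024)

print(reply.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"
```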
{ "date": "2017-03-28T19:44:44", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189884.21/warc/CC-MAIN-20170322212949-00176-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9450271129608154, "score": 2.984375, "token_count": 651, "url": "http://www.cqwen.com/new-information-technology-operation/" }
Neanderthals or Neandertals (UK /niˈændərˌtɑːl/, US also /neɪ-/, /-ˈɑːndər-/, /-ˌtɔːl/, /-ˌθɔːl/; named after the Neandertal area in Germany) were a species or subspecies of human in the genus Homo which became extinct around 40,000 years ago. They were closely related to modern humans, having DNA over 99.5% the same. Remains left by Neanderthals include bone and stone tools, which are found in Eurasia, from Western Europe to Central and Northern Asia and the Middle East. Neanderthals are generally classified by paleontologists as the species Homo neanderthalensis, or alternatively as a subspecies of Homo sapiens (Homo sapiens neanderthalensis). Neanderthals were large compared to Homo sapiens because they inhabited higher latitudes, in conformance with Bergmann's rule, and their larger stature explains their larger brain size, because brain size generally increases with body size. With an average cranial capacity of 1,600 cm³, the cranial capacity of Neanderthals is notably larger than the 1,400 cm³ average for modern humans, indicating that their brain size was larger. Males stood 164–168 cm (65–66 in) and females 152–156 cm (60–61 in) tall.
The plot of Neanderthal revolves around two rival scientists, Matt Mattison and Susan Arnot, who are sent by the United States government to search for missing Harvard anthropologist James Kellicut. Their only clue is the skull of a Neanderthal. Carbon dating shows that the skull, which should be 40,000 years old, is suspiciously only 25 years old. The Russian and American governments are competing to study the surviving Neanderthals in Tajikistan in order to learn more about their "remote viewing" capabilities. The Neanderthals are split into two tribes, a peaceful valley tribe and a cannibalistic and violent mountain tribe. Soon, the protagonists are captured by Neanderthals and must try to escape from the cannibals. They hope to do so without jeopardizing the safety of the peaceful tribe. It eventually, however, becomes necessary to train the peaceful tribe for war. The novel explains that a completely peaceful society like that was doomed in any case, and would have been destroyed soon by the mountain tribe.
On September 29, 2007, Pulpí tossed the world's largest salad, with 6,700 kilograms (14,740 pounds) of lettuce, tomato, onion, pepper and olives, supervised by 20 cooks over 3 hours. A Guinness World Records judge was present to confirm the new record. The salad was prepared in a container 18 m (59 ft) long and 4.8 m (15.7 ft) wide. In December 1999, the Pulpí Geode was discovered in the Pilar de Jaravía lead mine by the Grupo Mineralogista de Madrid.
The dental pulp is the part in the center of a tooth made up of living connective tissue and cells called odontoblasts. The dental pulp is a part of the dentin–pulp complex (endodontium). The vitality of the dentin–pulp complex, both during health and after injury, depends on pulp cell activity and the signaling processes that regulate the cell's behavior. Each person can have a total of up to 52 pulp organs, 32 in the permanent and 20 in the primary teeth. The total volume of all the permanent pulp organs is 0.38 cc, and the mean volume of a single adult human pulp is 0.02 cc. The maxillary central incisor has a shovel-shaped coronal pulp with three short horns on the coronal roof and is triangular in cross-section. The canine has the longest pulp, with an elliptical cross-section. The large mass of pulp is contained within the pulp chamber of the tooth.
The shape of each pulp chamber corresponds directly to the overall shape of the tooth, and thus is individualized for every tooth; the pulp tissue in the pulp chamber has two main divisions: coronal pulp and radicular pulp. Crowns of the teeth contain coronal pulp. The coronal pulp has six surfaces: the occlusal, the mesial, the distal, the buccal, the lingual and the floor. Because of continuous deposition of dentin, the pulp becomes smaller with age. This is not uniform throughout the coronal pulp but progresses faster on the floor than on the roof or side walls. The Star Brigade is a fictional sub-team from the G.I. Joe: A Real American Hero toyline, comic books and cartoon series. With specialized space suits and accessories, these high-tech astronauts were designed to protect the universe from Cobra and the Lunartix Empire. All of the Star Brigade figures came with spring-loaded weapons, which actually fired the ammo that came with the figure. In some of the Armor-Tech figures, the spring-loaded weapon was part of the figure. In 1993, Hasbro released new versions of the following figures, as part of the Star Brigade line: Countdown - Countdown is the Star Brigade's combat astronaut. Ozone - Ozone is the Star Brigade's astro-infantry trooper. There were two different versions of Ozone released in 1993 with the same packaging. Payload - Payload is the Star Brigade's astro-pilot. There were two different versions of Payload released in 1993 with different packaging. Publicity photos and the filecard art originally depicted Payload as being made of the same mold as the original Payload figure. However, Hasbro had apparently lost the mold, and used the mold for the Eco-Warriors Barbecue figure instead. This was reflected in the second versions' packaging and filecard art.
{ "date": "2019-12-13T09:37:25", "dump": "CC-MAIN-2019-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540551267.14/warc/CC-MAIN-20191213071155-20191213095155-00296.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9409099817276001, "score": 3.578125, "token_count": 1250, "url": "https://wn.com/Carcass_Pulp_Neanderthal" }
Before Google doodles, we honored important forgotten figures with postage stamps. Carlos Juan Finlay, the Cuban physician who first linked yellow fever to mosquitoes in 1881, has received both tributes. Given the thousands of lives he saved and the decades of scorn he endured, we'd say he deserved them. Born in Puerto Príncipe, Cuba, Finlay studied abroad before returning to Havana as a general practitioner and ophthalmologist with a penchant for scientific research. At the time, yellow fever still ravaged the tropics, terrorizing populations and disrupting shipping, especially in Havana [sources: Frierson; Haas; PBS; WHO; UVHSL]. Finlay noticed that yellow fever epidemics roughly coincided with Havana's mosquito season, but his mosquito-transmission hypothesis was met with disdain for decades until he convinced American military surgeon Walter Reed (like the hospital) to look into it. Reed and his colleagues, who had been dispatched to Cuba to fight the disease that had killed so many soldiers during the Spanish-American War, helped Finlay improve his experiments and verified that the species now known as Aedes aegypti was indeed the culprit. Yellow fever was wiped out of Cuba as well as Panama, enabling engineers to finally complete the Panama Canal [sources: Haas; PBS; UVHSL]. Today, yellow fever afflicts roughly 200,000 and kills 30,000 people annually, mostly in African areas lacking vaccines. Symptom reduction remains the only treatment; untreated, the disease has a 50 percent mortality rate. Occurrences of yellow fever have ramped up in recent years [sources: WHO].
{ "date": "2017-03-29T01:28:45", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190134.67/warc/CC-MAIN-20170322212950-00181-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9683329463005066, "score": 3.515625, "token_count": 334, "url": "http://science.howstuffworks.com/dictionary/famous-scientists/physicists/10-hispanic-scientists1.htm" }
All living organisms, including fish, can have parasites. Parasites are a natural occurrence, not contamination. They are as common in fish as insects are in fruits and vegetables. There are two types of parasites that can infect people through food or water: parasitic worms and protozoa. Parasitic worms include roundworms (nematodes), tapeworms (cestodes) and flukes (trematodes). These worms vary in size from barely visible to several feet in length. Protozoa are single-celled animals, and cannot be seen without a microscope. Just as there are risks to eating raw or undercooked meat, there are also risks with eating raw, undercooked, pickled, and lightly or cold-smoked seafood dishes. Parasites do not present a health concern in thoroughly cooked fish. Parasites become a concern when consumers eat raw or lightly preserved fish such as sashimi, sushi, ceviche, and gravlax. When preparing these products, use commercially frozen fish. Alternatively, freeze the fish to an internal temperature of -4°F for at least 7 days to kill any parasites that may be present. Home freezers range from 0°F to 10°F and may not be cold enough to kill parasites. Parasites (in the larval stage) consumed in uncooked, or undercooked, unfrozen seafood can present a human health hazard. Among parasites, the nematodes or roundworms (Anisakis spp., Pseudoterranova spp., Eustrongylides spp. and Gnathostoma spp.), cestodes or tapeworms (Diphyllobothrium spp.) and trematodes or flukes (Clonorchis sinensis, Opisthorchis spp., Heterophyes spp., Metagonimus spp., Nanophyetes salminicola and Paragonimus spp.) are of most concern in seafood. Some products that have been implicated in human infection are: ceviche (fish and spices marinated in lime juice); lomi lomi (salmon marinated in lemon juice, onion and tomato); poisson cru (fish marinated in citrus juice, onion, tomato and coconut milk); herring roe; sashimi (slices of raw fish); sushi (pieces of raw fish with rice and other ingredients); green herring (lightly brined herring); drunken crabs (crabs marinated in wine and pepper); cold-smoked fish (lox); and undercooked grilled fish. The process of cooking (145°F for 15 seconds) raw fish sufficiently to kill bacterial pathogens is also sufficient to kill parasites. The effectiveness of freezing to kill parasites depends on several factors, including the temperature of the freezing process, the length of time needed to freeze the fish tissue, the length of time the fish is held frozen, the fat content of the fish, and the type of parasite present. The temperature of the freezing process, the length of time the fish is held frozen, and the type of parasite appear to be the most important factors. For example, tapeworms are more susceptible to freezing than are roundworms. Flukes appear to be more resistant than roundworms. Freezing and storing at -4°F (-20°C) or below for 7 days (total time), or freezing at -31°F (-35°C) or below until solid and storing at -31°F (-35°C) or below for 15 hours, or freezing at -31°F (-35°C) or below until solid and storing at -4°F (-20°C) or below for 24 hours is sufficient to kill parasites. FDA's Food Code recommends these freezing conditions to retailers who provide fish intended for raw consumption. Trimming away the belly flaps of fish or candling and physically removing parasites are methods for reducing the numbers of parasites. However, they do not completely eliminate the hazard, nor do they minimize it to an acceptable level.
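The three FDA Food Code freezing options above reduce to simple threshold rules, sketched below. The function and example inputs are illustrative only, not a substitute for the Food Code or professional guidance; temperatures are in °F and times in hours.

```python
# Encodes the three cited parasite-destruction freezing options as rules.
# Illustrative sketch, not an official compliance check.

def meets_parasite_destruction(freeze_temp_f, hold_temp_f, hold_hours):
    """True if a freezing regime matches one of the three FDA options."""
    if hold_temp_f <= -4 and hold_hours >= 7 * 24:
        return True   # option 1: hold at -4F or below for 7 days total
    if freeze_temp_f <= -31 and hold_temp_f <= -31 and hold_hours >= 15:
        return True   # option 2: freeze solid at -31F, hold there 15 hours
    if freeze_temp_f <= -31 and hold_temp_f <= -4 and hold_hours >= 24:
        return True   # option 3: freeze solid at -31F, hold at -4F for 24 h
    return False

print(meets_parasite_destruction(0, 0, 7 * 24))    # typical home freezer: False
print(meets_parasite_destruction(-35, -4, 24))     # commercial blast freezer: True
```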
The health risk from parasites is far less than the risk from bacterial pathogens and mishandling of seafood.
Additional Links for More Information:
A California Sea Grant publication, "Fish Parasites and Human Health, Epidemiology of Human Helminthic Infections" by J. Sakanari, et al., covers the life cycles of common parasites in freshwater and marine fishes, transmission, and prevention.
The US Food and Drug Administration's "BAM" (Bacteriological Analytical Manual) has a chapter on Parasitic Animals in Foods which discusses techniques for examining foods for the presence of parasites. An in-depth discussion of the candling method with finfish and molluscs is included.
The US Food and Drug Administration's "Bad Bug Book" (Foodborne Pathogenic Microorganisms and Natural Toxins Handbook) includes basic facts on foodborne pathogenic microorganisms and natural toxins. The material is collected from the Food and Drug Administration, the Centers for Disease Control & Prevention, the USDA Food Safety Inspection Service, and the National Institutes of Health.
The US Food and Drug Administration's "Fish and Fisheries Products Hazards and Controls Guidance" describes the potential hazard of parasites and methods of its control in commercially processed seafood.
The U.S. Food Code 2009 provides the time and temperature for parasite destruction (see Food-Freezing, Chapter 3, Section 402.11).
{ "date": "2019-06-26T22:25:25", "dump": "CC-MAIN-2019-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00536.warc.gz", "int_score": 4, "language": "en", "language_score": 0.8924370408058167, "score": 3.859375, "token_count": 1140, "url": "https://www.seafoodhealthfacts.org/seafood-safety/general-information-healthcare-professionals/seafood-safety-topics/parasites" }
Most people are clueless about the causes of weight gain. Weight does not come on by itself. All causes of weight gain are directly or indirectly influenced by two main elements: your lifestyle and your hormonal balance. From this article, you can assess whether these causes of weight gain apply to you. You can then take the necessary remedial steps to overcome them. Since this is an exhaustive guide, you may want to bookmark it, as you will need to keep coming back to it.
Why we gain weight
Wikipedia defines the causes of weight gain to be a caloric imbalance. This is a rather simplistic view. According to this theory, you gain weight when you consume more calories than you expend. This theory is coming under increasing attack in recent times. Most people gain weight because of the way their bodies partition calories. When you eat something, your body decides how to use those calories. Some portion may be used by the body for energy while the rest may be stored as fat for use at a later stage. Your hormonal system is the single biggest influence on how your body partitions the calories that you consume. Gary Taubes goes to great lengths to explain the effect of calorie partitioning in his book entitled "Why We Get Fat". This book is worth reading, as Gary has backed his writing with extensive scientific research.
Lifestyle and your genetics
Most people blame their genes as one of the main causes of weight gain. Genes do have a role to play but genes cannot act alone. Genes react to your lifestyle. If you have genes that increase your propensity to gain weight, your lifestyle will ultimately affect how bad the effect will be. Genes are like a loaded gun. Lifestyle can either help pull the trigger or keep the loaded gun in the holster.
Hormones and weight gain
Your lifestyle affects key hormones that ultimately decide how your body partitions calories. One of the main hormones that comes into play is insulin, and it is one of the main causes of weight gain. Insulin is a very potent fat storage hormone. You will find it extremely difficult to lose weight when your insulin levels are elevated. When insulin is elevated, the body is inclined to store calories as fat rather than burning them. Chronically elevated insulin goes hand in hand with a condition known as insulin resistance.
Your food and exercise choices will affect whether your insulin levels remain elevated. They will also affect the release of fat burning hormones such as testosterone or human growth hormone. Hormonal balance is also affected by aging. When you age, your hormonal balance is compromised and there is a tendency to gain weight. Aging affects the production of testosterone, growth hormone and other fat burning hormones. Your lifestyle directly affects your hormonal balance. Your hormonal balance influences your cravings and energy levels. This gravitates you more towards an unhealthy lifestyle. An imbalance in your hormonal system could cause you to become sluggish. An unhealthy lifestyle could accelerate aging, which increases the rate of weight gain. Accelerated aging disrupts your hormonal system even further. As you can see, these factors are inter-related.
22 causes of weight gain
As mentioned earlier, causes of weight gain are largely influenced by lifestyle, hormonal balance, aging or a combination of the three. You will find below the 22 most common causes of weight gain.
1. Calorie dense food – This is the amount of calories contained in, say, a pound of food. This is one of the major causes of weight gain. A study published in the American Journal of Clinical Nutrition in 2004 identified calorie density as one of the obvious causes of weight gain. One such culprit is fast food. Most fast food also contains lots of trans-fatty acids, which wreak havoc on your body's metabolism. Most calorie dense foods are filled with sugar, fat or a combination of the two. The body has its inbuilt mechanism to assess satiety levels. Calorie dense food such as a large burger does not give your body's satiety system enough time to react. By the time your body realizes that you have overeaten, it is usually too late. This was also evident in the 2004 study published in the American Journal of Clinical Nutrition. Calorie dense foods also have the lowest nutrient density. (A short worked example of calorie density appears after this article's final word.) Refer to the following resources for more information on calorie dense food:
- List of calorie dense food
- 5 food combinations that are potent fat storage recipes
- Energy density is the key to healthy and high volume eating
2. Simple carbohydrates – Simple carbohydrates cause insulin spikes. Repeated insulin spikes bring about a condition known as insulin resistance. This is a condition where the body becomes resistant to the effects of insulin. Insulin is required to allow cells to absorb glucose molecules. With insulin resistance, the cells become desensitized to insulin. More insulin is required to accomplish the same task. The high levels of insulin linger, making it extremely difficult to lose weight and very easy to gain it. You should completely remove refined carbohydrates from your diet. Learn more about the effects of carbohydrates in weight loss from the following resources:
- Complex and refined carbohydrates for sustainable weight loss
- Carbohydrates protein and fat for correct weight loss
- Learn more about insulin resistance from the PubMed website
3. Liquid calories – Liquid calories are drinks that contain calories. These include soda, gourmet coffee, milk or any other beverage that contains carbohydrates or fat. Liquid calories are potent causes of weight gain because they are absorbed very quickly by the body. This is one of the reasons why you should limit yourself to plain water.
4. Alcohol consumption – Alcohol consumption is another one of the insidious causes of weight gain. Even a single drink of alcohol can halt fat burning for up to 24 hours. This is further aggravated when you consume calorie dense food with alcohol. Your body will store the calories from food as fat while it burns the calories in alcohol. Learn more about alcohol as one of the causes of weight gain from the following resources:
- Alcohol and fat loss
- How drinking alcohol before meals affects weight gain
- Alcohol's effect on your exercise sessions
5. Skipping breakfast – Studies have shown that people who skip breakfast have a tendency to accumulate abdominal fat. Most people do not realize this as being one of the causes of weight gain. People who skip breakfast put the body into a prolonged state of starvation. When they finally eat during lunch, the body greedily stores calories as fat. The body thinks that there is a shortage of food and thus becomes primed to store calories as fat.
For more information on the effects of skipping breakfast as one of the causes of weight gain, please refer to the following resources; 6. Skipping meals – The body needs to know that it can get its meals at a predefined time. When you skip a meal or eat it at an irregular time, the body becomes paranoid. It thinks that there is a shortage of food and enters into starvation mode. In starvation mode, the body wants to store calories instead of burning fat. Eating 5 to 6 small meals can help your weight loss efforts tremendously by assuring your body that there is a constant supply of food. This avoids your body from moving into starvation mode. Refer to the following resources for more information about skipping meals as one of the causes of weight gain; 7. Inverse tapering of calories – Most people have no breakfast, a very light lunch and a large dinner. This is a proven recipe to gain weight. In order to lose weight, you need to taper calories as the day progresses. Breakfast should be your largest meal and dinner the smallest. Learn more about this subject from the resources below; 8. Sedentary lifestyle – Most people recognize this as one of the leading causes of weight gain but may have become numb to it. Anyone with a sedentary lifestyle will inevitably gain weight. Exercise must be an integral part of your life and must remain so. The effects of exercise are well documented within the records of the National Weight Control Registry which tracks people who have lost weight and kept it off. People who have managed to keep the weight off have incorporated an hour of exercise into their daily routine. Read more about what it takes to lose weight and keep it off. 9. Lack of strength training – This is one of the overlooked causes of weight gain. Strength training is important to build your muscle mass irrespective of whether you are a man or a woman. Increased muscle mass helps hormonal balance as well as keep your metabolism at a higher level. One of the main reasons that most people gain weight as they age is because they lose lots of muscles. In order to prevent weight gain, strength training must be a core part of your exercise regimen. Read the following resources to understand why a lack of strength training is one of the causes of weight gain; - How strength training helps weight loss - 8 benefits of strength training for women - Strength training improves body image significantly 10. Plate size – Your plate size directly affects your waist size. The larger your plate size, the more calories you will consume at each meal. We have been programmed to finish the food on our plates. Research has shown that this is exactly what happens. You should ideally limit your plate size to not more than 9 inches. Refer to the following resources to understand better why plate size is one of the causes of weight gain; - Plate size and its effect on weight loss - Smart management of your plate for easier management of your waistline 11. Pregnancy weight gain – Most women gain more weight than necessary during pregnancy. A lot of women have been found to overeat during the course of their pregnancy. Pregnancy weight gain has been steadily increasing in the US and most other developed countries. A recent study has found that 40 to 50% of women gain more weight than is recommended by the guidelines from the Institute of Medicine. 
Find out more about one of the leading causes of weight gain in women from the following resources; - What is healthy weight gain during pregnancy - Pregnant overweight women will find it more difficult to lose weight 12. Lack of education – Most people would not gain weight if they understood weight loss 101. Lack of fundamental weight loss knowledge is very glaring in a large number of people who are overweight. They do not understand how their bodies interact with the environment and consequently influencing their hormonal balance. This can be very quickly addressed as there is lots of useful information that is easily available on the internet. Get more insight from the following resources on why lack of weight loss education is one of the causes of weight gain; The reader can get most of the useful information required about healthy weight loss from this Correct Weight Loss Blog. 13. Dieting – Most people do not realize how influential diets are in being the leading causes of weight gain. Most popular diets force people to go on a temporary dietary regimen. They remain on this dietary regimen until they achieve their weight loss goal. Examples of popular diets include the low-carb diet, zone diet and many other popular fad diets. The moment one gets off the diet, the weight starts creeping back on. It is not long before one has gained all the weight back and more. Read the following resources to understand how diets cause you to gain weight; - Why fad diets don’t work - Why do you lose weight and put it back on just as fast - Dieting and pregnancy weight gain 14. Aging – Aging is one of the inevitable causes of weight gain. As you age, you lose muscle mass. There is also a steady decline in fat burning hormones such as testosterone and human growth hormone. You can counter these negative effects of aging by slowing down the loss of your muscle mass. This can be achieved by incorporating strength training into your lifestyle. Bob Delmonteque and Jack Lalanne are examples of superseniors who have maintained very lean physiques even when they were well into their 80s. 15. Influence of your friends – Your weight should be very close to the average weight of your 5 closest friends. Your social circle has a strong influence on your lifestyle. If your friends are overweight, it is very likely that you are overweight too. For more information on how your friends can influence your waistline, refer to the following resource; 16. Sleeping patterns – Many people do not realize sleep as being one of the main causes of weight gain. Studies have shown that too much sleep and too little sleep can affect your waistline. Sleep seems to have a “U” shaped influence on weight loss. Lack of sleep greatly influences your hormonal balance and thus affecting your body’s fat metabolism. Read more about the effects of sleep on weight loss from the following resources; - Too much sleep as one of the causes of weight gain - Too little sleep as one of the causes of weight gain 17. Weight loss drugs – Weight loss drugs are usually the easy way out for most people. They do not want to take responsibility for their actions. As such, most people go about their obesity promoting lifestyle hoping that these drugs can help manage their weight. These drugs may succeed temporarily but the weight loss usually plateaus. Worst still, the weight comes back on with a vengeance. 
Refer to the following resources to understand better how weight loss drugs affect your waistline:
- Can drugs provide long lasting weight loss
- Can drugs help you lose weight (Mayo Clinic)
- How much do diet pills help weight loss

18. Medication – Some medications disrupt your metabolism and cause weight gain. Corticosteroids, steroidal medications usually taken to suppress inflammation, are one such family of medicines. There is not much anyone can do about this, as doctors have very good reasons to prescribe such drugs.

19. Marriage – Marriage is another one of the causes of weight gain. Studies have found that most couples seem to gain weight after marriage, while divorced individuals have a lower rate of weight gain than married individuals. Researchers have suggested that most people let themselves go after marriage: one of the incentives to stay in shape is to look attractive to the opposite sex, and we have an in-built mechanism that drives us to look for a suitable mate. Once one is married, the motivation to stay in shape seems to be lost.

20. Weekend binges – Studies have found that people are fairly disciplined with their exercise and diet on weekdays. Most people let themselves go on weekends, where there is an absence of routine.

21. Procrastination – Most people realize that their weight is creeping up but just do not take any action. They procrastinate on efforts to lose weight, and before they know it, they could be 50 to 100 pounds overweight. At this juncture, they wonder where all the weight came from. Weight gain does not happen overnight; it takes months and years. You should take immediate action the moment your clothes start getting tighter.

22. Time management – Poor time management is another one of the causes of weight gain. Most people are so overwhelmed by their daily schedules that they cannot find time to eat regular meals or exercise. This is simply a misalignment of priorities.

Causes of weight gain: A final word

With a thorough understanding of the possible causes of weight gain, you can take the necessary measures to counter them. As already mentioned, the causes of weight gain originate in your lifestyle. Aging also affects your lifestyle and hormonal balance, and your lifestyle in turn affects your hormonal balance and the rate at which you age. Your lifestyle, hormonal balance and aging – individually and in combination – affect how your body partitions calories and gains weight. The only factor you have full control over is your lifestyle. With a healthy lifestyle, you can ensure that your hormones stay in balance and that the negative effects of aging are not accelerated.

Alex Chris assisted in providing content for this article. He maintains a weight loss tips blog, where you can read many articles on why you should lose weight and how to do it correctly and healthily. Identify the major causes of weight gain and take action today!
Scientists have developed a natural alternative to morphine that appears to be as effective at killing pain, but has fewer side effects. Although morphine is a very effective painkiller, it is addictive, and can cause side effects such as severe constipation, reduced blood pressure and difficulties with breathing. The new drug is based on proteins called glycosylated enkephalins, which are produced by the human body to reduce pain. It may be particularly useful for the military, which is keen to find drugs that can safely be self-administered by soldiers who are severely wounded in battle. Lab tests carried out on mice by researchers at the Universities of Arizona and New England have produced highly promising results. Lead researcher Professor Robin Polt said: "Our hope is that glycosylated enkephalins can be used to block pain in severe trauma injuries, in victims who could not normally receive narcotics." Other scientists have tried to produce synthetic glycosylated enkephalins. However, they have never found a way to breach the protective biological membrane that shields the brain from invading toxins, and so have never got the drugs to work. Professor Polt's team has found that it is possible for enkephalins to cross the blood-brain barrier if they are attached to glucose molecules. Once inside the brain, they are able to attach to pain receptors and reduce pain in a way similar to morphine. Tests on mice showed that the drug produced significantly fewer side effects than morphine, and fewer signs of addictive behaviour. They also revealed that the drug works by attaching itself to two different types of pain receptor in the brain, known as "mu" and "delta" receptors. This makes it more effective than morphine, which only binds to "mu" receptors. It is also easily broken down by the body into amino acids and sugars, which reduces the risk of toxicity. The researchers plan further research to test the effectiveness of the drug. It is unlikely to be made available for at least five years. But Professor Polt believes the work could eventually lead to a whole new class of drugs that may be able to tackle poor memory, attention problems and even depression. Professor Anthony Dickenson, an expert in neuropharmacology at University College London, told BBC News Online: "There is considerable potential for an opioid-like analgesic that differs from morphine. "However, only human studies will reveal whether this type of compound has benefits over existing agents. "For example, subtle side-effects of drugs that may preclude their use in humans [eg hallucinations] may not be revealed in animals." Details of the research were presented at a meeting of the American Chemical Society.
Rugged Technology: The 12th man in the fight against wildfires

Science and technology have come together in the last few years to give heroes on the frontlines ways to fight fires smarter and harder

By Michael Cayes, Mooring Tech, Inc.

This article is provided by Mooring Tech, Inc. and does not necessarily reflect the opinions of FireRescue1.

As of 7:15 AM on August 6th, 2015, the Rocky Fire in northern California has burned over 69,000 acres of land. It is only 30% contained. Frustration and fear are mounting, and fire crews from multiple states are working around the clock to ensure that communities are protected, or at least safely evacuated. All four of the largest and most damaging fires in California's history have happened since the year 2000. This is in line with what is happening all over the United States. Wildfires are becoming more frequent, and they're getting bigger and harder to contain. The Rocky Fire is one in what is shaping up to be a long line of fires this year; the need for new fire-fighting solutions, cooperation from the weather, and a little bit of luck grows every day. Only one of those three things is within the realm of human control. Thankfully, science and technology have come together in the last few years to give heroes on the frontlines ways to fight fires smarter AND harder. In seasons of widespread drought such as the one the western United States is facing, emergency responders are pushed to their limits physically and mentally. They need equipment and software that will go to those limits with them. The innovative technological solutions range from very simple to highly interactive and complicated. Cloud storage, which is already widely implemented in the consumer market for personal data, is being embraced by fire departments to give firefighters access to data about previous fire patterns and current fire location and movement. As U.S. Forest Service employee Tim Sexton says, the cloud can save lives: "We could remotely look at the locations of firefighters in relationship to where the fire is … and perhaps anticipate movement of the fires before it reaches the crews. Sometimes it comes over a ridge, and the crews can't see it coming." The Forest Service is in the first stages of testing an unmanned drone plane fitted with infrared sensors; the plane remains aloft most of the day, periodically beaming scans of the fire's movement to tablets or computers of crews on the ground. The devices receiving these scans must be ruggedized: taking a regular tablet into wildfire conditions would be largely ineffective. This is where innovative companies such as Panasonic have stepped in and started producing rugged hardware that is heat-resistant, shock-resistant (up to 6-foot drops, depending on the model), and ultimately perfect for rugged people doing rugged jobs. These rugged tablets can be mounted easily in vehicles, and just as easily detached for use on the go. They can be fitted with detachable keyboards as needed for communication and data entry. The touch screen makes using imaging and mapping software fast and easy; zoom in, zoom out, examine the fire from a new angle, and do all of it with just a few simple movements of your fingers. Fighters on the front lines are finding new ways to use this technology all the time; each advance evens the playing field a little more in what's sure to be a daunting wildfire season.
When 76 percent of healthcare providers use at least one form of complementary and alternative medicine (CAM) and 42 percent of hospitals offer complementary and alternative medicine services, the trend suggests that implementing holistic approaches in health care fields such as occupational therapy can indeed benefit patients. Holistic occupational therapy provides health, wellness and disease prevention services. Occupational therapists are also being given special training in holistic care so that they can incorporate the new trend into their practices. Holistic occupational therapy enhances conventional occupational therapy with natural and healthy practices.

Trending Holistic Services

Occupational therapists have embraced the CAM trend and are offering their patients services that complement traditional occupational therapy. These new and trending holistic services include sensory integration, music and listening therapy, art and movement therapies, aromatherapy, relaxation, myofascial release and guided imagery.

Yoga As A Holistic Approach With Helpful Components

Yoga is used as a holistic approach and emphasizes balance. Yoga's asana component addresses your physical posture. It also tones, stretches and strengthens your body's musculoskeletal system while helping you to maintain a calm demeanor. Yoga's pranayama component is a breathing and meditation exercise, which supports your body's relaxation response. By calming your nervous system, pranayama lowers your blood pressure, breathing rate and heart rate. It also reduces stress and symptoms of anxiety and depression while boosting your all-important immune system.

Listening therapy, another trending service, helps to activate parts of your brain that are otherwise inactive. This therapy can convert left-dominant listeners into right-dominant listeners. Used with headphones, it engages a multi-dimensional process that improves your auditory processing, because listening therapy affects both your auditory and vestibular systems.

Helping All Age Groups

Holistic occupational therapy is not limited to any particular age group. It is inclusive and can be implemented at most major stages of your life. Its fundamental focus on staying healthy makes it a smart way for your children to learn lifelong skills. As you approach middle age, holistic occupational therapy equips you with knowledge and practices that promote healthy living. Your trained occupational therapist, after evaluating your case, will recommend appropriate health practices that help you remain healthy over a longer period of time. If you are an elderly person, holistic occupational therapy, such as that offered at http://www.advancedphysicaltherapyofsj.com, gives you a purpose to stay healthy, and it promises hope that your golden-age years can indeed be filled with vigor.
Tips from Other Journals

Cholesterol Awareness Among Patients Following Screening

Am Fam Physician. 1998 Mar 15;57(6):1412-1414.

An elevated cholesterol level (greater than 240 mg per dL) is an established risk factor for coronary artery disease; however, 60 percent of all deaths from coronary heart disease occur among persons with lower cholesterol levels. Since 1991, the National Cholesterol Education Program (NCEP) has encouraged physicians to screen patients for hyperlipidemia and provide dietary counseling in order to reduce the risk for heart disease. The NCEP recommends that physicians inform patients of their cholesterol test results in a clear, understandable manner and encourage all patients, regardless of concurrent risk factors, to reduce their fat intake. Previous studies have shown that patients who were screened for high blood cholesterol and were informed of their cholesterol status were more motivated to modify other cardiac risk factors and to reduce their serum cholesterol levels.

Murdoch and Wilt surveyed patients within a year of their cholesterol measurement to assess compliance with the NCEP guidelines. Any patient at a midwestern Veterans Affairs hospital who had a cholesterol level checked by a physician's order between January 1993 and 1994 was eligible for the study. Multiphasic blood screening was not done at that institution during the study period, so cholesterol testing could only be performed if specifically requested by a physician. A total of 250 patients (125 men and 125 women) who had cholesterol screening were randomly selected by a computer-generated list. A 17-item questionnaire was mailed to study participants within one year of their last cholesterol measurement. The participants were asked to identify their cholesterol status in two different ways: as a category (i.e., desirable versus undesirable) and as the actual number. Respondents were also asked to estimate their perceived risk of coronary artery disease due to high cholesterol levels, their overall health perceptions, their other cardiac risk factors, whether they had been prescribed a specific cholesterol-lowering diet and whether a physician had told them their actual cholesterol number.

Eighty-three percent of study participants responded to the survey. The average age of the participants was 61 years for the men and 55 years for the women. The mean length of time between the survey and respondents' last cholesterol measurement was 4.4 months. Almost all respondents (99 percent) either agreed or strongly agreed that a high cholesterol level increases the risk for coronary heart disease, and the majority (76 percent of men and 83 percent of women) believed that lowering their cholesterol level would decrease their personal risk of coronary disease. When asked if a physician had checked their cholesterol level in the past year, 60 percent of the men and 65 percent of the women answered affirmatively. Yet only 50 percent of men and 55 percent of women stated that they were told their cholesterol results by a physician. Less than one half of the study participants said they were given dietary instructions by a physician. Twenty-eight percent of the men and 37 percent of the women said they knew their cholesterol number, but only 40 percent of the numbers they reported were accurate. Overall, only 19 percent of the survey respondents accurately reported their cholesterol level.
Respondents were more likely to accurately recall their cholesterol numbers if they remembered being told their test results or remembered receiving dietary advice. Female gender and more years of education were correlated with cholesterol awareness. The authors conclude that physician compliance with the NCEP guidelines is poor. Even among better-educated patients, knowledge of the importance of cholesterol level and the need for dietary intervention was found to be significantly lacking. Physicians should endeavor to improve patient awareness by following up cholesterol screening with meaningful feedback and by being diligent in prescribing dietary therapy.

Murdoch M, Wilt TJ. Cholesterol awareness after case-finding: do patients really know their cholesterol numbers? Am J Prev Med. 1997;13:284–9.
What does it mean to have an impairment?

‘When I was ten years old and in school, I realised I couldn’t read from the blackboard like the other children in my class. My family took action immediately, and I was seen by an ophthalmologist at the most advanced eye clinic in Ghana at the time. I was referred to an optometrist and given spectacles, but I needed a new prescription every three months. Eventually we were told that there were no other reading glasses that could help.

‘Even though I grew up in the vicinity of the first school for the blind in Ghana, I remained in my mainstream school and continued to a mainstream secondary school at the age of 13. By the time I was 14, it was really difficult for me to read textbooks: I could only read large print and my own handwriting. I learned mainly by listening and also working with my classmates, who gave me support as we studied and did our homework. Some teachers would offer extra help after the class, and others were willing to read what they were writing on the board so I could hear and follow. But it was not a formal low vision service. I didn’t know that any existed, as low vision students at the School for the Blind then were all learning like blind students.

‘Later, when I had finished school, I met one of my teachers, and he explained that the headmaster of the school had received some exposure to special needs education and gave the teachers hints on how they could support me. Because I was not involved in the discussion and did not know about my rights then, I didn’t know I had the right to demand such services. I didn’t know that what those staff members did for me was not charity, but their responsibility. This meant I didn’t feel I was able to ask for the additional support that I really needed in school.

‘At the hospital, when they could no longer improve my vision or even prevent it from getting worse, nobody explained to me what the condition was and what I should expect in the future. I am not sure whether my relatives had a better understanding than I had, but they didn’t tell me much. It was also not normal for a child in my culture to ask too many questions.

‘When it came to my final examinations, although the school applied for questions in large print, two weeks before the examination, information reached me that the examining board could not provide this. Fortunately, my biology teacher had an idea – I could use a hand magnifying lens, like the ones we used to examine specimens! Although I could see only a few letters at a time, as it was such a small lens, I was able to read the exam questions. I still have the lens today although it is no longer of use!

‘Soon after I left school, there was an advert in the paper about teachers who could be trained to support people with visual impairment. My uncle saw this and investigated – he found out that I could go to the school for the blind where I could learn to read and write Braille, so I could continue my education.

‘The admission form for the school for the blind had to be signed by an ophthalmologist, to certify that I needed such a service. My ophthalmologist, who I’ve been with for many years, said he thought it was a good idea for me to go, but he couldn’t say why he had never suggested this before!

‘When I went to the school for the blind, which was near where I grew up, the people I knew in the area reacted very differently to me – even though my vision hadn’t changed. They were sad, would say how sorry they were for me, and spoke with such pity!
‘I was visited at the school by a lady who is blind and who was already enrolled at the teacher training college. I listened to her talk about her experiences and then I knew that I had a future.

‘Over the next two decades, I qualified as a teacher, then specialised in special needs education, followed by a degree course in education. I taught psychology, counselling, and special needs education at the teacher training college.

‘Maturing as a person with a visual impairment was very difficult – society didn’t make it easy. Thinking back, I knew that the way people reacted to me when I went to the school for the blind was wrong but I didn’t know what to do about it. I struggled with this and similar concerns for many years.

‘Then, in the early 1990s, things changed when I attended a workshop initiated by the World Blind Union’s Institutional Development Programme (IDP). IDP is an international capacity building programme for organisations of and for the blind mainly in Africa, and is sponsored by Sightsavers and Perkins International. At the workshop, I realised that continued advocacy and awareness raising would be required to address the challenges faced by individuals and organisations for the blind. What stood out for me was that everybody had a role; you could initiate change from wherever you were and engage others to join you. It had a great impact on me and encouraged me. So I strengthened my participation in the organisations of persons with disabilities at national, regional (Africa) and global levels, and served in different leadership positions.

‘Whenever one is able to push disability behind and move on with life, those with positive thoughts see the person and not the disability. Sometimes, my friends forget that I cannot see – it is because they see me, and not my visual impairment. Everybody in society can be like this – particularly if we start by educating our children that people with disabilities are just the same as everyone else.’

If I could choose …

Eye care practitioners would be careful how they promoted the restoration of sight

‘Don’t create the impression that, if vision cannot be restored, it is the end of that individual. Right from the beginning, eye care practitioners should say: “I know you are managing, but some of the challenges you are facing can be limited.” Then, if the operation is not successful, they can say: “Well, you remember the conversation we had, about how well you were managing, this is what we need to continue and strengthen. There are also these services I can refer you to for more support …”’

Eye care practitioners would talk to people with disabilities, instead of their guides or carers

‘People think the person who walks in with you knows more about you than you yourself, even if you are an adult. It is usually because there is no eye coordination, and sighted people feel more comfortable talking to people with whom they can link eyes (make eye contact). This should be explained to people not just in the eye profession, but across the board. So, more awareness within the health training, and refresher courses around disability, are needed.’

Pharmacists would ensure people with disabilities understood how to use their medicines

‘The ideal would be the provision of a Braille label to those who can read it. For those who can’t, pharmacists can help them to examine the package, and perhaps the content, so they are able to identify the correct dosage.
This could also be provided by the Support Services Department, which (ideally) every hospital should have.’

People promoting eye health would ensure that people with disabilities also get the information they need

‘If you are giving a health talk, or doing health education in a community, check that people with different impairments are also there to listen. There will certainly be several people with impairments in the community – you are responsible for ensuring that they also hear your message. Don’t rely on others to tell them. Remember, also, that people with a visual impairment cannot read posters.’

People working in eye clinics would value all patients

‘People coming for eye care often have some visual disability already. They will experience fear, anxiety, and confusion, as well as worries about the costs. So when they come to the clinic, and the receptionist – whoever is doing the papers – is harsh, then it gets much more difficult. If the people you meet are warm and friendly, it is much easier. The way people working in eye clinics treat people with impairments is very important: be polite, understanding, and encouraging. Despite the fact that there is so much work that needs to be done, the eye team have to be very professional – this should be part of their training.’

People with disabilities would be included in the health sector

‘Eye health programmes need to include people with disabilities in all aspects of health promotion, blindness prevention, and eye care delivery. For example, people with disabilities would make excellent counsellors for people who have become disabled, as they are good role models and mentors.’

Counselling would empower people with impairments

‘This is the type of counselling that deals with the inner awareness and self-actualisation of the person – it is about that person reaching their full potential, disabled or not. Yes, it’s good to tell people that there are services available for them, but this is about their head or heart, which is telling them that the world has come to an end. That is the perspective good counselling tries to change. Counsellors must say to people: “This is not your end”, and talk to them about others who are doing well and even excelling in life, despite their impairments.

‘Counsellors must also empower people as individuals: tell them they have a right to ask for the assistance they need, that they have a right to participate. Help them to develop assertiveness and confidence in themselves as an individual – a platform which every person needs to develop and grow. This is when people with impairments can grow from strength to strength.’
Sage Grouse Population Dynamics

Prior to European settlement, this species was considered abundant in parts of the state. Although considered less common, sage grouse were thought to be found in several counties east of the current range. As land use changed with settlement, the sage grouse range shrank as more sagebrush was lost to cropland expansion and altered by livestock grazing, which impacted the natural vegetative communities and reduced available cover. Early hunting records are sparse; however, it is thought that high harvest in the early 1900s contributed to the decline of sage grouse. Department records indicate that the sage grouse season reopened in 1955, when 59 birds were harvested. Thereafter, the season was open and closed with little information available with respect to harvest. The season was again closed from 1980-1999, and re-opened in 2000. Since then, an average of 36 hunters per year take the field, with an average of 18 sage grouse harvested annually. Hunters are interviewed in the field and biological data are collected to determine age and sex information, providing some insight into reproduction, although from a limited sample size. The current ranges of both sagebrush habitat and sage grouse are quite similar to those of 30 years ago. The majority of sage grouse are found in Harding and Butte counties, although smaller numbers exist in Perkins and northwest Meade counties. One known lek in Fall River County continues to be monitored; however, it seems that sage grouse numbers continue to decline in that area. Sage grouse are monitored by spring lek counts. Observers count the total number of males on each lek, and this information is used as a reference point to compare current numbers to the previous year and to historical numbers. In the spring of 2007, a total of 31 leks were surveyed; 24 were considered active and had displaying males, with a total of 560 males counted. Through data collected by spring lek counts and current research projects conducted by South Dakota State University, the estimated breeding population of sage grouse in the spring of 2007 was 1,500.
Whether a dog is barking at a ball or wants to play can now be discerned by a new computer program with greater accuracy than by the owners of the pets. The software has learned the nuances of woofs, howls, yaps, snarls and growls in various situations and is now able to classify dog barks with reasonable accuracy, along with the identity of the animals themselves. Computer programs now appear to be the most precise tool on offer to study how animals communicate, conclude Csaba Molnár from Eötvös Loránd University in Hungary and his research team, who describe tests of the new software in the journal Animal Cognition. The software analysed more than 6,000 barks from 14 Hungarian sheepdogs - Mudi breed - in six different situations: 'stranger', 'fight', 'walk', 'alone', 'ball' and 'play' to learn the nuances of dog language. When presented with novel barks, the software correctly classified them in 43 percent of cases. The best recognition rates were achieved for 'fight' and 'stranger' barks, and the poorest rate was achieved when categorizing 'play' barks. "Since we have no reasons to say that Mudis are special among other dog breeds I am pretty sure that this method for categorizing barks could work in other dog breeds' barks as well," Dr Molnár tells The Daily Telegraph. When it came to distinguishing the yelps and woofs of different dogs, the software correctly classified the barks in 52 percent of cases, which suggests that there are individual differences in the barks of dogs even though humans are not able to recognise them. Despite the claims of owners to be able to distinguish the bark of their beloved pets, earlier work by the Hungarian team showed this is a task that even Dr Doolittle would find challenging. He believes that the software could help owners and dog trainers. "If we could find the acoustic characteristics of barks which reflect certain emotional states of dogs we could gain information about the dogs' "well-being" which would have several applications in the animal welfare field. Another application could be a computer which "understands" the dogs' barks, so the dog could operate it with voice. With such a computer the dog could communicate with owners as well, for example alert them when a stranger has turned up." The team now plans to compare the way different dogs communicate, comparing barks of different breeds such as sheepdogs, hunting dogs, toy dogs and so on, to find out what characteristics of barks were favoured as dogs were domesticated from wolves over thousands of years. The speech of owners could be categorised in the same way, revealing the emotional colour of human speech, Dr Molnár adds.
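The article does not say how the Hungarian team's software works under the hood. As a rough illustration of the general recipe such systems follow (turn each recorded bark into a vector of acoustic features, then train a supervised classifier on the labelled situations), here is a minimal sketch. The feature choice (MFCCs), the classifier (a support-vector machine) and the libraries (librosa and scikit-learn) are assumptions made for illustration, not details reported from the research.

```python
import numpy as np
import librosa  # audio loading and feature extraction
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# The six situations recorded in the study.
SITUATIONS = ["stranger", "fight", "walk", "alone", "ball", "play"]

def bark_features(wav_path):
    """Summarize one bark recording as a fixed-length feature vector.

    MFCCs are a generic bioacoustics feature; the feature set actually
    used by the researchers is not described in the article.
    """
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time: 26 numbers per bark.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_bark_classifier(wav_paths, labels):
    """Fit a classifier on labelled barks and report held-out accuracy.

    `labels` holds one situation name (from SITUATIONS) per recording.
    """
    X = np.array([bark_features(p) for p in wav_paths])
    y = np.array(labels)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    # Feature scaling matters for SVMs, so wrap both steps in a pipeline.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf
```

With six roughly balanced classes, random guessing would score about 17 percent, so the reported 43 percent is well above chance; likewise, 52 percent for identifying individual dogs among 14 compares with a chance baseline of about 7 percent. That chance baseline is the yardstick any sketch like this would be measured against.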
By Paul Sussman for CNN

(CNN) -- This week the Chinese government unveiled its long-anticipated blueprint for tackling climate change and atmospheric pollution. The 62-page action plan, issued ahead of the forthcoming G8 summit in Heiligendamm, Germany, and in the face of growing international pressure to set concrete targets for controlling greenhouse gas emissions, openly acknowledged the scale of China's pollution problem. It reaffirmed Beijing's aim of cutting energy use by a fifth before 2010, and of doubling its reliance on renewable energy sources such as wind, hydro and nuclear power by 2020. At the same time, however, it refused to set binding targets for greenhouse gas emissions, emphasizing that it was down to the world's major industrialized nations to take the lead in tackling a problem for which historically they bore the burden of blame ("unshirkable responsibility," as the report termed it). More significantly for a country that still relies on coal for 70 percent of its energy requirements, Beijing insisted that economic development must remain higher on its priority list than environmental protection. "The first and overriding priorities of developing countries are sustainable development and poverty eradication," declared Ma Kai, chairman of China's National Development and Reform Commission, which produced the plan. "Climate change is an environmental issue, but also a development issue. The international community should respect the developing countries' right to develop." So just how serious is the pollution problem in China? And, despite its avowed good intentions, just how serious is the Chinese government about actually tackling that problem?

Respiratory disease and contaminated water

The issue is perhaps not quite as clear cut as some commentators have made out. "It is not enough to simply say that China is a big polluter and leave it at that," Dr. Tim Forsyth of the London School of Economics told CNN. "You have to analyze exactly what you mean by 'pollution.' If you are talking about climate change, for instance, rice paddies emit very significant levels of methane, but you can't very well criticize people for growing food. "On a per capita basis, given the size of the country, China is actually not a big polluter at all. Per person it's responsible for about half the carbon emissions of somewhere such as the U.S." If the situation does not allow for glib, black-and-white analyses, however, there is little disagreement -- even from Beijing itself -- that China is not only responsible for significant levels of pollution, but also suffers very significantly from the effects of that pollution. Currently second behind the U.S. in the table of the world's leading greenhouse gas emitters, China -- whose economy continues to grow at a rate of about eight percent annually -- is expected to top the list in the near future, with many analysts predicting the "near future" to mean this year, 2007. As well as contributing to global warming, those emissions -- as well as a host of other toxic by-products of Chinese industrialization -- are having a catastrophic effect on the health and environment of the nation that is producing them. According to environmental monitoring group the Worldwatch Institute, China now boasts 16 of the world's 20 most polluted cities.
As much as 70 percent of the country's water is suffering from pollution, with an estimated 300 million people drinking contaminated water on a daily basis, and 190 million drinking water that is so contaminated it affects their health. Crop returns are decreasing both in terms of quality and quantity as a result of polluted land, while approximately 400,000 people in China die annually from respiratory infections directly attributable to air pollution. "The sheer scale of the economic activity in China means that pollution is probably as bad as it has ever been anywhere in the world, ever," Lester Brown, head of the Washington-based Earth Policy Institute, told CNN. "Such is the pollution haze in many of the cities that you can't even see the sun. "A lot of the rivers are so dirty their water can't now be used for irrigation, while some of the soil is so badly contaminated with cadmium and mercury that there is a question as to whether food grown in those soils is safe to eat." Nor is the cost just human and environmental. Ironically, given that it is China's bullish economic growth that is fueling such high levels of pollution, that same pollution is proving increasingly detrimental to the country's economic well-being, with China's economy losing an estimated $200 billion annually due to the effects of pollution and global warming, almost 10 percent of its GDP. Overall, then, a bleak picture, and one that most analysts are predicting will become bleaker before it improves. "The Chinese government are certainly aware of the problems," says Lester Brown, "and they do regularly issue proclamations from Beijing to try to improve things. "The difficulty is that at present they have neither the institutional structure nor the local enforcement to make a real difference. "The U.S. Environmental Protection Agency, for example, has about 17,000 staff. The equivalent organization in Beijing has less than 1,000. "In the end things are left to local officials, and at the moment the imperative for those officials is to create jobs and raise the standard of living, not protect the environment." Despite this there are signs that things are changing, albeit slowly. Environmental awareness has certainly increased within China, with over 2,000 environmental NGOs operating around the country, while the Chinese government is actively monitoring pollution levels at some 300,000 factories. And while it fell short of what many western governments and green lobby groups would have liked, the recently released action plan nonetheless represents a significant step by the Chinese authorities. "China will not take the old path of rapid development with high resources and energy consumption," insisted Ma Kai. "We will blaze a new road of low energy consumption, low levels of emissions, high efficiency and high productivity." "This is a first," Yang Ailun, of Greenpeace China, acknowledged in an interview with the Guardian newspaper. "It shows China has done its homework about what needs to be done. Even though the plan is mostly a compilation of existing policies, that shouldn't detract from its significance or the current level of effort." Certainly the suggestion that, fixated on the rush to industrialization, the Chinese authorities are wholly blind to the environmental consequences of that industrialization, is an overly simplistic analysis of the situation. "I think the Chinese government is well aware of these issues, is worried about them and is serious about confronting them," says Dr. Tim Forsyth.
"They are taking rational decisions about technological development, energy supply and energy efficiency. "What they are not doing is simply giving in to what western governments and environmentalists would like them to do. They are more canny than that." Analysts predict that China could become the world's single greatest emitter of greenhouse gases by the end of 2007
Nutrition in the HIV Positive Woman

Proper nutrition plays an important role in overall health care. For the HIV-infected woman, adequate nutrition is critical, and efforts must be made to optimize nutritional status. Because women in today's society are pulled in so many different directions, taking on many roles and playing homemaker, mother, caregiver, wife, and career woman all in the same day, we often neglect ourselves. Part of that neglect may be in our diet habits. "Too busy," "too tired," and "I forgot to eat" are some of the more common phrases used to explain why proper diet is often lost during the day. Eventually something serious occurs, most obviously presented by unexplained weight loss. This is a visible indication of what has already been a progression of body changes from HIV disease itself.

HIV-infected women are all at risk for poor nutrition status. Women who play "superwoman" and do not take care of their health may be at increased risk for compromised nutritional status. Unfortunately, unless there has been some significant weight loss, we may not know what's going on inside the body, and it is often not until this point that a woman thinks about her diet and food intake.

What can we do to help prevent wasting? Aside from visiting your doctor regularly, nutritionally you can do a number of things. First, you can eat a variety of foods. Use the food guide pyramid to make sure you are getting enough vitamins and minerals, calories, and protein daily. If you need to gain weight, or to keep from losing weight, eat the higher number of servings for extra calories.

When cooking, preparing, and/or handling foods, your primary goal should be to avoid food infection. It is critical that hands are washed with hot soapy water before and after handling any type of food, whether you are cooking or eating. Keep foods at a safe temperature -- cold foods should be cold, and hot foods hot. Food left at a temperature between 40-140 degrees F is in the "danger zone," where bacteria may grow. Heat leftovers to at least 140 degrees F. Check food labels -- do not use packaged food past the recommended date on the label. Finally, avoid eating raw foods, including eggs, fish, and meats. Check to be sure milk products and juices are pasteurized, because not all milk and juice is; if the item has not gone through the pasteurization process it may contain harmful bacteria. Food safety is especially important for the immune-compromised patient, as it can be hard to fight infection. Symptoms of foodborne illness can include nausea, vomiting, fever, diarrhea and dehydration, and can lead to hospitalization.

Women must learn to make their own mental and physical health a priority. Without good health, we are putting family, job/financial security, and ourselves on the line. Kids want and need healthy moms, and co-workers need healthy colleagues. Proper nutrition is one way to help obtain and keep good health. It is a crucial part of the overall healthcare of the HIV-infected person, and should be taken seriously.

Tami Jones Mackle, RD is a Registered Dietitian, and works in the Infectious Disease Clinic at the University of Medicine and Dentistry in Newark, NJ. This article was provided by Positively Aware.
These pieces originally appeared as a weekly column entitled “Lessons” in The New York Times between 1999 and 2003. [THIS ARTICLE FIRST APPEARED IN THE NEW YORK TIMES ON MARCH 1, 2000] What Toddlers Could Use: Some Graying Boomers There are two unrelated social problems that can solve each other. Too few poor children in their prekindergarten years have the kinds of literacy experiences that lead to later academic achievement. Simultaneously, healthier retirements leave many aging Americans craving socially useful roles. Why not match them, giving retirees opportunities to read to young children? There is a stubborn achievement gap between white and minority students, and by the time they enter school much of the damage has already been done. At least half the test score difference between black and white 12th graders is attributable to what occurs before they enter first grade. All 3- and 4-year-olds need intellectually stimulating experiences. They should be read to, talked to, told stories and given play opportunities that include using paper and crayons, puzzles, clay, blocks and other objects to manipulate. Students who succeed usually had the benefit of regular lap reading as toddlers, and learned to pretend-read a variety of word and picture books. With full employment and welfare reform moving mothers into jobs, it might seem that poor children will now get these experiences in formal preschools. Yet a recent survey by Dr. Bruce Fuller of the University of California and Dr. Sharon Kagan of Yale finds that the quality of day care for low-income children is remarkably poor. At day care settings that serve welfare-to-work mothers in California, Connecticut and Florida, Professors Fuller and Kagan recorded children’s activity at 40 points during a three-hour observation. In California, a given child was being read to on an average of less than one of those 40 occasions. In Connecticut, it was about two, and in Florida about one and a half. In contrast, children were watching television or wandering aimlessly in 6 of these “snapshots” in California and Connecticut, and in nearly 13 in Florida. Against that kind of backdrop, the first baby boomers turn 55 next year. Already, life spans have so lengthened, and retirement ages so declined, that only 20 percent of Americans 55 and over still work. Those who do not are mostly healthy and receiving benefits from Social Security or private pensions, or both. With time on their hands, many want more useful roles. The affluent ones moving to Arizona or Florida are exceptions; most reside not far from the very communities where needy preschoolers could use attention. Yet as Marc Freedman notes in his book “Prime Time” (Public Affairs, 1999), while we “face a profound shortage of human beings to tend the social fabric, we overlook the presence of untapped human resources in the older population.” In 1995, Mr. Freedman helped found the Experience Corps, a project sponsored by the government’s Corporation for National Service, which is directed by former Senator Harris Wofford of Pennsylvania. The fledgling project (www.experiencecorps.org) has placed 800 volunteer retirees in 70 schools around the country. They read to and with children, talk to and tutor them. The volunteers get stipends of about $150 a month, and commit to 15-hour-a-week schedules. Schools often set aside rooms where they can gather, exchange experiences, consult with teachers and even meet with parents. 
The project relies on foundation grants and some discretionary financing that Mr. Wofford assembles from other federal programs. The Experience Corps is now beginning to move into preschools. Literacy training can do the most good there, and most retirees already have the required grandparenting and lap reading skills. At a demonstration project in the heart of Kansas City’s most impoverished African-American community, Juanita Carter, 75, volunteers in a Y.M.C.A.-operated preschool. Retired from her job cleaning university classrooms, she lives on Social Security and a small pension. “I love the Experience Corps,” Ms. Carter said, “and as long as I can stay healthy, I’m going to stay here,” reading to 3- and 4-year-olds, helping them practice writing letters and numbers, comforting them when they cry, meeting them when they get off a bus in the morning. Another Kansas City volunteer, Laura White, is 90, retired from restaurant and housecleaning work. With eyesight failing, she now mostly “reads” books with big pictures, asking children to identify the objects. This not only benefits the children, “it keeps me alert,” Ms. White said. “America,” Mr. Freedman noted, “now possesses not only the largest and fastest-growing population of older adults in our history, but the healthiest, most vigorous and best educated.” Working generations too frequently cannot provide the intellectual stimulation for infants and toddlers that promotes success in school. Matching retirees (in need of purpose) with preschoolers (in need of storytelling) is like elementary algebra: multiplying two negative trends brings a positive result.
A small, near-Earth asteroid named Itokawa is just a pile of floating rubble, probably created from the breakup of an ancient planet, according to a University of Michigan researcher who was part of the Japanese space mission Hayabusa. The finding suggests that asteroids created from rubble would be pristine records of early planet formation. Daniel Scheeres, U-M associate professor of aerospace engineering, was part of the team that determined the asteroid's mass, surface environment, and gravitational pull and helped interpret the images that were taken of the asteroid from the spacecraft. Some of the findings will be discussed in a special issue of the journal Science on June 2. The mission is led by the Japan Aerospace Exploration Agency. The Hayabusa space probe arrived at asteroid Itokawa last fall and orbited for three months. During that time it descended twice to the surface of the asteroid, which is named for the father of Japanese rocketry, to collect samples. In 2010 the probe will return to Earth and eject a sample canister that will reenter the atmosphere and land in central Australia. Researchers hope this will be the first asteroid sample brought back to Earth. Scheeres said that the confirmation of Itokawa's makeup as rubble rather than a single rock has large implications for theories of how asteroids evolved, and will lead to a better understanding of the early solar system. Asteroids are thought to be the remnants of material that formed the inner planets, which include Earth, and could bear the record of events in the early stages of planet formation. It is a significant finding that Itokawa is a pile of rocks ranging in size from tiny sand grains all the way up to boulders 50 meters wide, because it verifies a number of theories about the makeup and history of asteroids. The existence of very large boulders and pillars suggests that an earlier "parent" asteroid was shattered by a collision and then re-formed into a rubble pile, the researchers conclude in the paper. It's likely that most asteroids have a similar past, Scheeres said. "Analysis of the asteroid samples will give us a snapshot of the early solar system, and provide valuable clues on how the planets were formed." Also, knowing if an asteroid is a single, big rock or a pile of rubble will have a major influence on how to nudge it off course, Scheeres said, should its orbit be aimed at Earth. An asteroid collision with Earth, while unlikely, could have disastrous consequences. It's widely thought that an asteroid collision caused the mass extinction of dinosaurs 65 million years ago, so some have discussed ways to demolish or steer an approaching asteroid, should we see one coming. Another striking finding, Scheeres said, is that regions of Itokawa's surface are smooth, "almost like a sea of desert sand," and others are very rugged. This indicates that the surfaces of asteroids are, in some sense, active, with material being moved from one region to another. Gravity holds the mass of rubble together. "These are the first such detailed observations of an asteroid from this close," Scheeres said.
The unprecedented rate of ocean acidification is one of the most alarming phenomena generated by climate change, and the only way to mitigate the dangers it represents is to reduce CO2 emissions significantly. This is the conclusion of the summary of the Third Symposium on the Ocean in a High CO2 World (Monterey, USA, September 2012), which was presented at the Conference on Climate Change taking place in Warsaw (Poland) from November 11 to 22. The document represents the conclusions of 540 experts from 37 countries reflecting the latest research on the subject. It was prepared by UNESCO's Intergovernmental Oceanographic Commission (IOC), the Scientific Committee on Ocean Research (SCOR) and the International Geosphere-Biosphere Program (IGBP). It emerges that the oceans, which together absorb close to one quarter of CO2 emissions generated by human activity, have experienced an overall 26% rise in acidity since the dawn of the industrial age. Twenty-four million tonnes of CO2 are absorbed by the seas daily and, if current emission rates are maintained, the level of ocean acidity worldwide will rise by 170% before 2100, compared to the pre-industrial age. As acidity increases, the ocean's ability to process atmospheric CO2 emissions declines, reducing its ability to mitigate climate change. This phenomenon is all the more worrying in view of other threats to marine ecosystems such as rising water temperatures, overfishing and pollution. While sea grass and some phytoplankton species seem able to cope with higher acidity, other organisms, such as corals and crustaceans, are likely to be severely affected. Substantial changes in marine ecosystems are expected, and they are likely to have a major socioeconomic impact. Experts expect seashell fisheries to lose some $130 billion annually, if current CO2 emissions remain unchanged. While expertise regarding the effects of CO2 on the marine environment has grown, it remains difficult to provide reliable projections regarding its impact on whole ecosystems. Questions still to be answered include: Will some of the species that will have disappeared be replaced? Will some be able to adapt? For this reason, scientists are pleading in favour of initiatives that will enable them to learn more about acidification, such as the Ocean Acidification Network, co-founded by the IOC, and the International Ocean Carbon Coordination Project (IOCCP), set up by the IOC and SCOR. They also call for the establishment of international mechanisms capable of handling specific questions regarding ocean acidification so as to ensure that they receive the attention they deserve in climate change negotiations.
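As a back-of-the-envelope check on what these percentages mean on the familiar pH scale (an inference from the logarithmic definition of pH, not a figure taken from the summary itself), a rise in hydrogen-ion concentration by a factor $r$ lowers pH by $\log_{10} r$:

$$\Delta\mathrm{pH} = -\log_{10} r, \qquad \log_{10}(1.26) \approx 0.10, \qquad \log_{10}(2.70) \approx 0.43$$

So the 26% rise in acidity observed to date corresponds to a drop of roughly 0.1 pH units at the ocean surface, and the projected 170% rise (a factor of 2.7 in hydrogen-ion concentration) corresponds to a drop of roughly 0.4 units by 2100.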
What shall we do with a neutron microscope?
By Brian Dodson
October 20, 2013

Neutrons have a set of unique properties that make them better suited than light, electrons, or x-rays for looking at the physics and chemistry going on inside an object. Scientists working out of MIT's Nuclear Reactor Laboratory have now invented and built a high-resolution neutron microscope, a feat that required developing new approaches to neutron optics.

Why would anyone want to use neutron imaging to study materials? Optical microscopes tell you what the reflectivity of the surface of a material is, but little else. X-ray microscopes tell you what the mass density of the insides of an object is, but again, little of any structure that isn't mirrored in the density of the material. In contrast, neutrons are heavy compared to the other particles (photons and electrons) used in forming images, and have no electric charge, properties that make it possible to look deeply inside an object while gaining information about the structure that is not accessible through the other forms of microscopy. Unfortunately, these same properties make it difficult to focus a beam of neutrons – a prerequisite for forming an image.

Neutrons do interact with atomic nuclei via the strong force. This interaction can cause the neutrons to scatter from their original path, and can also remove neutrons through absorption. Either way, a neutron beam that is penetrating a material becomes progressively less intense. In this way, neutrons are analogous to x-rays for studying the invisible interiors of objects. However, while the darker regions of an x-ray image indicate how much matter the x-rays have passed through, the density of a neutron image provides information on the neutron absorption of the material. This absorption can vary by many orders of magnitude among the chemical elements. As a result, a neutron image provides different information about the composition and structure of the interior of an object than do x-ray images. In particular, neutron imaging has great potential for studying so-called soft materials, as small changes in the location of hydrogen within a material can produce highly visible changes in a neutron image.

Neutrons also offer unique capabilities for research in magnetic materials. Neutrons may be uncharged, but they do have spin, and hence also a magnetic moment. It can help to think of a tiny bar magnet within the neutron that can interact with other magnetic fields. The neutron's lack of electric charge means there is no need to correct magnetic measurements for errors caused by stray electric fields and charges, another argument for using neutrons to study magnetism. The most informative approach to using neutrons to study magnetic materials is likely the use of polarized neutron beams, beams in which the neutron spins are oriented in the same direction. This allows measurement of the strength and characteristics of magnetism within a material. Such information is extraordinarily difficult to determine in any other way, and cuts to the essence of the magnetic properties of a material.

Neutron images, such as those used in nondestructive testing, have been based mainly on shadowgraphs – images produced by casting a shadow on a surface, usually taken with a pinhole camera. Such methods, however, always involve an awkward balance between low illumination levels (and hence long exposure times) and poor spatial resolution – both being the natural result of using only pinhole optics.
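Before moving on to the optics, the claim above that absorption "can vary by many orders of magnitude among the chemical elements" is easy to quantify with the Beer-Lambert attenuation law. A minimal sketch: the cross-sections and atom densities are rough textbook thermal-neutron values recalled from memory, so treat them as assumptions rather than reference data.

```python
import math

def transmission(sigma_barns, atoms_per_cm3, thickness_cm):
    """Beer-Lambert attenuation of a neutron beam: I/I0 = exp(-n * sigma * t)."""
    sigma_cm2 = sigma_barns * 1e-24          # 1 barn = 1e-24 cm^2
    return math.exp(-atoms_per_cm3 * sigma_cm2 * thickness_cm)

# Rough thermal-neutron numbers (assumed, order-of-magnitude only):
print(transmission(1.7, 6.0e22, 1.0))    # ~0.90 through 1 cm of aluminium
print(transmission(2520, 4.6e22, 0.1))   # ~1e-5 through 1 mm of cadmium
```

A centimeter of aluminium is nearly transparent while a millimeter of cadmium is effectively opaque, which is exactly the element-to-element contrast that makes neutron images informative.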
Similar problems of balancing illumination against resolution are associated with the pinhole optics of the camera obscura, a camera that forms an image of a scene by projecting light from the scene through a pinhole. A rule of thumb states that a good balance between illumination and resolution is obtained when the diameter of the pinhole is about 100 times smaller than the distance between the pinhole and the image screen, effectively making the pinhole an f/100 lens. Optimum, however, is not necessarily good. The level of illumination on the image screen projected from an f/100 pinhole would be more than 1,000 times dimmer than that from a standard f/2.8 camera lens. Perhaps worse, the resolution of the pinhole lens cannot be smaller than the diameter of the hole. The resolution of an f/100 pinhole is about half a degree, making the camera obscura barely able to notice that the Moon looks like a disk rather than a point of light. However, an f/100 glass lens with a diameter of an inch can see lunar craters smaller than 10 miles (16 km) across.

The potential for dramatically improving the performance of pinhole-based neutron optics led the MIT Nuclear Reactor Laboratory group to develop an imaging neutron microscope. Their goals were to increase both the resolution of the image and the level of illumination, so that the neutron microscope can quickly produce higher-quality images. Unlike the case of an optical microscope, however, there is no equivalent of optical glass from which lenses for neutrons can be made. Conventional mirrors also tend not to work, as the neutrons simply go through them.

When a neutron grazes the surface of a metal at a sufficiently small angle, it is reflected away from the metal surface at the same angle. When this occurs with light, the effect is called total internal reflection. However, owing to the way neutrons interact with the electrons in a metal, it would be better to call this total external reflection – the neutrons refuse to enter the material. Fortunately, the critical angle for grazing reflection is large enough (a few tenths of a degree for thermal neutrons) that a curved mirror can be constructed. Given curved mirrors, an optical system that creates an image can be made.

[Figure: a cartoon of a four-power neutron microscope after the MIT design.]

Having formed a neutron image, it is necessary to find a way to visualize it. In the MIT microscope, the neutron flux at the imaging focal plane was measured by a CCD imaging array with a neutron scintillation screen placed in front of it. The scintillation screen is made of zinc sulfide (a traditional fluorescent compound) laced with lithium. When a thermal neutron is absorbed by a lithium-6 nucleus, it causes a fission reaction that produces helium, tritium, and a lot of excess energy. These fission products cause the ZnS phosphor to light up like a Christmas tree, producing an image in light that can be captured with the CCD array.

MIT's new neutron microscope is a proof of principle, attaining only a four-fold magnification and 10-20 times better illumination than earlier pinhole neutron cameras. However, it points the way toward new approaches to study the properties of whole classes of fascinating and potentially useful materials.
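As a quick sanity check on the camera obscura arithmetic quoted above, both figures follow from the stated f-numbers alone (a minimal sketch, nothing assumed beyond the rule of thumb in the text):

```python
import math

# Relative illumination scales as 1 / f-number^2:
f_pinhole, f_lens = 100.0, 2.8
dimming = (f_pinhole / f_lens) ** 2
print(f"f/100 pinhole is ~{dimming:.0f}x dimmer than an f/2.8 lens")  # ~1276x

# Angular resolution of a pinhole ~ hole diameter / screen distance = 1/100 rad:
res_deg = math.degrees(1.0 / 100.0)
print(f"pinhole angular resolution ~ {res_deg:.2f} degrees")  # ~0.57, i.e. half a degree
```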
Source: MIT Nuclear Reactor Laboratory
Canadian Geese Migration Path

It seems that Canadian Geese are one of the most followed bird groups in the US. Their overhead migrations are a sure sign of the changing of the seasons.

Canadian Geese have incredibly long migration routes. In the winter, they take up residence throughout all lower US states, from California completely across to Georgia. Many even hang out in regions of Mexico. When spring comes along and the weather gets warmer, they head north. Some stop in the top US states, from Washington through Maine. But many others continue to fly northwards, hitting the upper reaches of Alaska and the northernmost Canadian provinces.

Each goose group has its own path to take as it heads north. The paths are generally straight north-south, so pretty much every single state has geese either living in it or flying directly over it at some point during the year.

It would be a mistake to think "all geese start at point x and move together to point y". There are geese of all shapes and sizes that winter along a rather thick band of the southern US and Mexico. Some leave early and stop only a few states north, enjoying that location. Others leave late and decide to fly to the very top of Canada. Some may plan to stop in one state, find it crowded and meander along to another state. Once they finally settle somewhere they enjoy with a comfortable temperature, they start laying eggs. That could be anywhere from early March to late June.

Geese are not necessarily aiming for "A Lake" that is exactly "1205.7 miles north". They simply decide it's too warm where they are, head north, and see what they see. Their decisions about when exactly to leave, how far to fly in a given day and when to stop are all based on a wide variety of factors like weather, comfortable temperature range, presence of dogs or other harassments, and so on. As they age, if they keep finding a certain lake that is uncrowded, safe and full of food, they may aim for it in future years. But if the weather's bad or the lake's hospitality changes, they are quite happy to move on to a new spot.

The geese are definitely built for this long distance travel. They can reach up to 60 mph during their flights, and can reach an altitude of 8,000 feet. They can fly at night, and can fly for up to 16 hours in a stretch.

Fly Away Home is a great movie about Canadian goose migration!

Note: I originally wrote this content while I edited the site at birding.bellaonline.com. That site has permission to show my content.
Large-scale federal intervention into America's energy markets began in the 1930s and continued through the 1970s. A series of major laws and executive actions sought to control energy prices, regulate electric and gas utilities, and limit imports. Competition was stifled and domestic investment was suppressed. By the 1970s, the Middle East oil embargoes and other upheavals began making the failure of federal energy interventions clear to policymakers. They reversed course, and took major deregulatory steps in the 1970s and 1980s to free up energy markets, to the ultimate benefit of consumers and the overall economy.

The following sections on oil, coal, and natural gas discuss how federal policymakers intervened to try and solve perceived problems in markets, often with the active encouragement of the energy industry.1 Unfortunately, most of the major federal intrusions in energy markets during the 20th century proved to be serious mistakes, as they often destabilized markets, reduced domestic output, or decreased consumer welfare.

Energy markets have a number of features that have prompted government intervention. One problem in oil markets is "capture," which relates to the failure of surface property rights to coincide with oil reservoir boundaries underground. If good property rights are not established for oil reservoirs, each surface owner with access will produce as rapidly as possible, leading to reduced overall output in the long run.

Another feature of oil markets is the low short-run price elasticity of demand and supply. The inability of oil markets to respond quickly to supply and demand changes has resulted in repeated boom and bust cycles over the last 130 years or so. Consumers are unhappy during price booms, and producers are unhappy during busts, and both have sought help from Washington in those situations.

A further issue is that oil, coal, and natural gas are commodities, which makes it more difficult for producers to enjoy a steady income than producers of brand-name products. Since different brands of oil, for example, are equivalent, consumers will desert an existing supplier if cheaper sources become available. One consequence was that during the early 1930s, early 1970s, and mid-1980s, U.S. oil firms called for restrictions on imports.

Another energy market feature is the high variance in production costs between different sources of supply, which creates intra-industry tensions because producers with the lowest-cost supplies earn higher profits. As a consequence, higher-cost producers have often called on policymakers to pass legislation that directly or indirectly imposes extra costs on lower-cost producers in order to even the playing field.

Oil markets have long-term price cycles. The peaks and troughs of those cycles have often coincided with demands for federal intervention, but those interventions have usually made cycles more, not less, severe. In the 1920s, oil prices were peaking and many commentators believed that oil supplies were running out. Congress was confronted by requests to augment supplies, so it enacted a generous depletion allowance for producers in 1926, which increased investment returns substantially. This change induced additional exploration activity, and subsequently the discovery of large new oil reservoirs. During the next decade, the situation was reversed, with prices low and dropping. That led to demands for more "orderly" competition and oil price supports.
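The boom-and-bust point above follows directly from low short-run elasticity: if neither demand nor supply responds much to price, even a small shortfall requires a large price move to clear the market. A stylized sketch (the elasticity values and the 5% shock are illustrative assumptions, not estimates from the text):

```python
def price_change_pct(quantity_shock_pct, demand_elasticity):
    """Approximate % price rise needed for demand to absorb a supply
    shortfall: %dQ = elasticity * %dP, so %dP = %dQ / elasticity."""
    return quantity_shock_pct / demand_elasticity

for eps in (0.1, 0.5, 1.0):
    print(f"elasticity {eps}: a 5% shortfall implies roughly "
          f"{price_change_pct(5, eps):.0f}% higher prices")
# With elasticity 0.1, a mere 5% supply cut implies ~50% higher prices.
```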
Rather than repealing the supply-enhancement policies enacted during the 1920s, Congress left them intact and enacted a price-support system. Similar cycles occurred in the 1950s and 1970s. In each case, Congress enacted policies that overreacted to the current peak or trough and failed to quickly repeal the policies when petroleum prices retreated from their extreme highs or lows.

Beginning in the late 1920s, different groups in the oil industry proposed policy measures to help prop up prices. Initially, the major oil companies supported industry planning similar to that used during World War I. The war experience left many corporate leaders favorably disposed toward managed capitalism under the protection of the state. The major oil companies sought and received a prorationing order from the Texas Railroad Commission in 1930. Prorationing involved mandated production cutbacks designed to prop up prices. The wildcatters in East Texas ignored the order, and in 1931 a federal district court ruled the order illegal. In 1932, the majors again demanded that the Texas legislature do something about what they believed was overproduction. Ultimately, the legislature added new powers for the Texas Railroad Commission to limit supply. For their part, independent domestic oil companies lobbied for import quotas to restrict competition from oil imported by the major multinationals.

Federal intervention increased enormously in the 1930s. The National Industrial Recovery Act of 1933 (NIRA), which passed with support from large oil companies, substituted producer agreements for normal market competition. The oil industry was the first to adopt a "fair trade" code under the Act. When the Supreme Court ruled NIRA unconstitutional in 1935, the majors once again turned their attention to Washington. They favored federal regulation to limit supply, but when some in the Roosevelt administration argued for public-utility style regulation of major oil companies (which would involve limits on rates of return), oil company support shifted to the Connally Hot Oil Act of 1935, which gave federal sanction to the state prorationing (supply restriction) programs that restricted competition and raised prices.

After World War II, a major issue became the surge in oil imports, which was partly prompted by government policies that kept domestic prices artificially high. Independent oil firms, such as Occidental and Amerada Hess, were not participants in the oligopolistic bargaining arrangements that had governed the world oil market since 1928, and they profited by importing lower-cost oil into the United States. Domestic producers became alarmed at the imports because the imports forced them to cut back their production under the controlled domestic markets. After an intense lobbying effort, Congress adopted a clause in the Reciprocal Trade Act Amendments of 1955 that authorized the president to limit imports of a commodity if he thought such imports were detrimental to national security. In 1959, President Dwight Eisenhower invoked the clause and imposed oil import quotas. This effort to restrict imports drove down international oil prices and encouraged the creation of the Organization of Petroleum Exporting Countries. International oil companies, which were now effectively shut out of the U.S. market, flooded Europe with cheap oil and increased its oil dependence. The world price of oil declined until the mid-1970s.
With lower prices, oil companies reduced their royalty payments to Middle East countries, which prompted those nations to create OPEC. The energy crises of the 1970s (the oil shocks of 1973 and 1979) coincided with a predictable upswing in the long-run oil price cycle, but they were exacerbated by perverse incentives created by government policies. Controls on oil imported into the United States, enacted in 1959, lowered the price of oil elsewhere in the world and increased consumption, particularly in Europe.

By the early 1970s, the elaborate production and marketing control mechanisms on U.S. domestic oil markets were collapsing. The price of oil had dropped almost to depression levels and oil company profits were flat or falling. At the same time, oil-import quotas and strong economic growth had exhausted the U.S. oil surplus. Domestic production reached a peak in 1970. Even though U.S. prices had been kept well above world competitive levels, petroleum demand was growing rapidly just at a time when the huge easy-to-produce pools discovered in the 1930s were producing at maximum rates.

In the United States, the effects of a tighter world oil market were aggravated by President Richard Nixon's price controls, which gave special attention to oil because oil prices were rising rapidly. The Nixon price controls, which began in August 1971, were complex and went through a series of phases over time. The controls interacted with changing market conditions to create shortages of different products at different periods during the 1970s. For example, heating oil shortages arose during late 1972, but most other oil products were less affected at that particular time. Then in 1973, severe shortages of gasoline developed at independent retailers. Oil price controls collided with the rising cost of imports, forcing oil companies to cut back on imports. Those cuts in turn particularly hurt independent refiners and retailers, who obtained a large share of their supplies from the major importers. Thus, gasoline shortages were particularly acute at independent gas stations.

Congress responded to this situation not with a repeal of the price controls that were the source of the problems, but rather with a series of new complex regulations. Congress passed the Emergency Petroleum Allocation Act in 1973, which enmeshed federal regulators even closer into oil company operations, and it created a two-tier system of price controls on domestic oil. The price of "old" domestic oil was frozen, but "new" domestic oil was decontrolled.

The EPAA created many distortions, as one example will illustrate. Expensive imported oil was not subject to price controls and it determined the marginal cost, and thus price, of gasoline sold in the United States. But since many refiners had access to domestic old oil that was subject to price controls, they made larger profits than refiners dependent on new domestic oil. In response to this situation, the Federal Energy Administration created complex new rules in 1974 to spread around the refiner benefits of price-controlled old oil. Those rules, in turn, created incentives for refiners to further increase oil imports. The EPAA helped to create the very shortages that it was supposed to ameliorate. By attempting to insulate the U.S. market from world oil prices, EPAA actually created incentives to hoard just at those times when inventories should have been released on the market — during the disruptions of 1973 and 1979.
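The two-tier distortion just described can be made concrete. In this stylized sketch the prices and the "old oil" share are invented numbers, not historical data; the point is only that a refiner with access to frozen old oil pockets the gap between its blended cost and the import-set market price:

```python
old_price, world_price = 5.25, 12.00   # $/bbl: frozen "old" oil vs. imports (invented)
share_old = 0.6                        # fraction of the refiner's crude at the old price

# Product prices are set by the marginal (imported) barrel, so any refiner
# whose average crude cost is below the world price earns the difference.
blended_cost = share_old * old_price + (1 - share_old) * world_price
windfall = world_price - blended_cost
print(f"blended cost ${blended_cost:.2f}/bbl vs market ${world_price:.2f}/bbl: "
      f"${windfall:.2f}/bbl advantage over import-dependent refiners")
```

The 1974 "entitlements" rules mentioned above were an attempt to share out exactly this kind of advantage, which in turn rewarded additional imports.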
In sum, a range of new government interventions in the 1970s exacerbated the conditions that they were supposed to resolve. The EPAA regulations were scheduled to expire after two years, and Congress replaced them with new rules under the Energy Policy and Conservation Act of 1975. This law placed previously uncontrolled new oil produced since EPAA had passed under price controls, turning the two-tier price control system into a three-tier system. EPCA created and exacerbated a range of economic distortions, including increasing the incentives to import and decreasing consumer incentives to shift from oil to other energy sources or to conserve. Price controls on oil and refined products were extended through 1979 with various further iterations. Finally, in 1979 President Jimmy Carter began to repeal price controls through a series of administrative actions. President Ronald Reagan finished the job in 1981.

America's experience with oil regulations from the 1930s through the 1970s has been much studied, and an academic consensus is that those regulations had large negative effects on both oil producers and consumers.2 Congress has typically responded to petroleum-market problems with inappropriate legislation that has damaged markets and prompted further rounds of legislation and regulatory action.

However, in a world where a cartel, such as OPEC, is able to raise world crude oil prices by constraining production, are price controls warranted? From an economic perspective, the answer is no. Domestic price controls will not reduce OPEC's market power. The manner in which domestic price controls were implemented in the United States in the 1970s actually increased the demand for OPEC imports, thereby increasing the cartel's profits, and punished domestic producers, who were not responsible for OPEC production decisions. Price controls also reduce incentives to increase production — and, thus, reduce supply — whether OPEC is strangling the market or not. Domestic price controls thus assist the cartel's attempts to restrict supply.

Congress finally allowed oil price controls to expire, but decided to place a windfall profits tax on companies in 1980. The tax was not really a tax on profits, but an excise tax on domestic oil production, and thus made domestic production less attractive while encouraging imports. One congressional study found that the tax reduced domestic oil production by 3-6 percent and increased U.S. imports by 8-16 percent.3 The windfall profits tax was repealed in 1988. The period since 1990 has been generally free of petroleum market regulation.4

Like oil producers, coal producers have had various reasons to dislike open markets and have often called for federal regulations. Coal industry profits have been volatile, and the industry has easy entry, which has heightened competition. As in the oil industry, there have been struggles between lower-cost producers and higher-cost producers. Further, coal producers have faced competition from fuel oil, natural gas, and nuclear power. The tough competitive climate has sometimes manifested itself in federal regulations on issues regarding worker wages, health and safety, and the environment. High-cost producers have used these issues to favor policies that disadvantage lower-cost competitors. From the 1930s to the present, coal disputes have involved struggles between traditional Appalachian underground mines, which are unionized, and cheaper market substitutes such as southern-drift and surface-mined coal.
Numerous policy disputes have involved regulations that would make these substitutes relatively more expensive. As with oil markets, major federal intervention began in the 1930s. Most coal companies were in favor of the Roosevelt administration's National Industrial Recovery Act, which substituted a producer cartel structure for market competition. After the Supreme Court ruled NIRA unconstitutional, industry leaders and politicians from coal states looked for a substitute. That substitute was the Guffey Coal Act of 1935, which imposed price controls and various labor regulations on the industry. The effect was to limit competition and to favor high-cost Appalachian coal at the expense of other, lower-cost coal sources. The Supreme Court struck down the Act in 1936, but a second Guffey Act that included price controls was passed in 1937. The law was renewed in 1941, but allowed to expire during World War II.

After the war, a coal price boom was ending, and Congress considered a variety of direct and indirect policies to stem the industry's decline. The depletion allowance was raised modestly, but legislative efforts to boost demand for coal and restrict competition were not successful. Instead, higher-cost union mines pushed for indirect methods of equalizing coal industry costs at higher levels, such as by having Congress mandate higher mine safety standards. The Department of the Interior imposed new mine safety standards in 1946, and those were codified in a 1952 law. At first, federal rules exempted small, low-cost operators, but over the next decade, more comprehensive safety laws were passed with the effect of eliminating many of the smaller competitors. The 1969 Coal Mine Health and Safety Act caused an exodus of small mines and thus reduced competition for the underground, unionized mines.

Another competitive threat to the large, high-cost mines in Appalachia was western surface mines. Surface mining began to grow rapidly in the mid-1960s. The struggle to enact federal regulation of surface mines began with the introduction of a bill by President Lyndon Johnson in 1968, and ended with the passage of the Surface Mining Control and Reclamation Act of 1977. In between, President Gerald Ford vetoed bills in 1974 and 1975. The Appalachian mine operators and unions favored federal restrictions, while surface mine owners resisted them. By 1977, however, the unions had organized numerous surface operations, and resistance to surface mining regulation crumbled. The passage of the 1977 Act and the new source performance standards in the Clean Air Act Amendments of 1977 decreased both the productivity and pollution advantages held by western coal.

Federal policies moved in coal's favor in the 1970s. With the Middle East oil crisis, policymakers began to adopt policies to try to shift the nation toward greater consumption of coal, a domestic energy resource. The Energy Supply and Environmental Coordination Act of 1974 directed the Federal Energy Administration to prohibit the use of oil or natural gas by electric utilities that could use coal, and it authorized the FEA to require that new electric power plants be able to use coal. The Energy Policy and Conservation Act of 1975 extended those powers for two years and authorized $750 million in loan guarantees for new underground low-sulfur mines. Further pro-coal mandates were passed in the late 1970s.

In sum, coal's policy history has reflected a series of struggles between high-cost producers and lower-cost substitutes.
From the 1930s until 1970, the coal industry was plagued with chronic excess capacity, but disinvestment was slow because of the reluctance of marginal workers and operators to migrate from Appalachia. The struggle over safety legislation was partly a manifestation of the battle between segments of the industry over excess capacity. Since 1985, coal, like oil, has not been subjected to explicit economic regulation. Instead, coal regulatory struggles have been environmental in nature, concerning the pollution from its use rather than the economics of its production.5

Natural gas markets possess characteristics that are similar to petroleum markets, with two key exceptions: natural gas producers have been more immune to import competition, and the retail segment of the industry has been a regulated monopoly since the beginning. Consequently, two sources of income variation that have plagued the petroleum industry have been absent in natural gas. Also, the political struggles in natural gas markets have been producer-versus-consumer battles, rather than battles between low-cost and high-cost producers.

Federal policymakers have struggled with a key economics question: does the production or transportation of natural gas suffer from market failures that warrant public action? Congress decided initially that pipeline transportation was a natural monopoly and deserved what is described as public-utility regulation, in which profits and prices are limited. In 1938, Congress passed the Natural Gas Act, which empowered the Federal Power Commission (FPC) to regulate the rates for interstate natural gas sales and to restrict interstate pipeline construction. To build an interstate pipeline, a company now needed approval from the FPC.

Regarding natural gas production, Congress decided in the 1938 Act that it did not suffer from market failure and needed no policy intervention, and so exempted "production and gathering" from federal price controls. However, the Supreme Court ruled in 1947 that this congressional exemption applied only to regulation of the physical processes of production, not the sale of the product. In response, some members of Congress stepped in on the side of the industry and open markets, arguing that the 1938 law had not been intended to regulate producer, or wellhead, prices. But President Truman vetoed a bill in 1950 that would have exempted natural gas production from price regulations. In 1954, the Supreme Court ruled in Phillips Petroleum v. Wisconsin that the FPC must regulate natural gas prices at the wellhead. This action had profound effects on the industry, and it generated a huge growth in bureaucracy at the FPC to administer a complex array of new price controls.6 The government would have to decide what the costs of production and "fair" profit levels were for the many natural gas producers across the country.

Over the years, natural gas price controls led to many serious distortions, including, ultimately, the natural gas shortages of the 1970s. Federal price controls kept natural gas prices artificially low, leading to higher consumer demand and reduced incentives for conservation. For producers, the artificially low prices reduced the incentive to explore for new reserves. Note that federal price controls applied only to natural gas sold in interstate commerce, with intrastate gas being exempt. One effect was that as the gap between interstate and intrastate prices grew, producers sold their product within states and withheld supplies from interstate pipelines.
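A stylized supply-and-demand sketch shows why a binding ceiling on the interstate price produces exactly this outcome. All the numbers below are illustrative assumptions, chosen only so the arithmetic is easy to follow:

```python
def q_demanded(price):
    return 100 - 4 * price   # demand falls with price

def q_supplied(price):
    return 20 + 4 * price    # supply rises with price

# An unregulated market clears where the curves cross: price 10, quantity 60.
p_ceiling = 6.0              # controlled price held below the clearing level
shortage = q_demanded(p_ceiling) - q_supplied(p_ceiling)
print(f"at the ceiling: demand {q_demanded(p_ceiling)}, supply "
      f"{q_supplied(p_ceiling)}, shortage {shortage} units")   # shortage of 32
```

At the controlled price, buyers want more gas than producers are willing to supply, and the gap has to be rationed by something other than price.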
The result of those distortions was that consumers in states that did not produce natural gas began seeing severe shortages during the 1970s. In 1976 and 1977, many factories and institutions such as schools were forced to close occasionally for lack of natural gas.

To economists, the obvious remedy for the natural gas shortages of the 1970s was to decontrol prices. However, brutal political battles were fought over the issue in Congress. Members from producer states, and Presidents Nixon and Ford, favored deregulation of producer prices, but northeastern Democrats in consuming states favored continued controls because they feared constituent reaction to price increases.

Congress passed the Department of Energy Organization Act in 1977 and the Natural Gas Policy Act in 1978. Under the two pieces of legislation, the FPC was replaced with FERC, and price controls on wellhead natural gas were phased out by 1985 under a complex compromise of temporary price regulations. The compromise kept price controls on old gas but freed up new gas, which created numerous market distortions during the 1980s. Additional legislation was needed in 1989 to finally complete the job of full deregulation of wellhead prices.

Today, natural gas pipeline rates continue to be regulated as common carriers. Pipelines transport gas owned by others, often under long-term contract. An active secondary market exists, so that those with long-term transportation rights can sell them to others. While distortions from this rate regulation probably exist, they are not consequential enough to have generated much academic or interest group criticism.

1 A more detailed history can be found in Peter M. VanDoren, Politics, Markets, and Congressional Policy Choices (Ann Arbor: University of Michigan Press, 1991).
2 Joseph P. Kalt, The Economics and Politics of Oil Price Regulation: Federal Policy in the Post-Embargo Era (Cambridge, MA: MIT Press, 1981).
3 Salvatore Lazzari, "The Windfall Profit Tax on Crude Oil: Overview of the Issues," Report 90-442E, Congressional Research Service, September 12, 1990, p. 7.
4 To be sure, petroleum markets have been affected by policy, but almost exclusively through environmental mandates rather than legislation that directly regulates, subsidizes, or taxes oil.
5 There has been a continuing struggle over the use of so-called mountaintop-removal production techniques in Appalachia, which eliminate underground mining by blasting away the tops of hills and exposing the coal, which is then surface mined. See Jeff Goodell, "How Coal Got its Glow Back," New York Times Magazine, July 22, 2001.
Healthy Hearts in Roanoke, VA – Make Yourself Healthier!

The Roanoke Health District leads the Commonwealth in prevalence of stroke and is second in the state in occurrence of heart disease. African-American women aged 35–74 have the second-highest mortality rate due to cardiovascular disease, exceeded only by African-American men. According to the American Heart Association, heart disease and stroke account for 28.5% of all female deaths in Virginia, meaning that about 23 women in the Commonwealth die from heart disease and stroke every single day. Heart disease alone is the second leading cause of death in Virginia, claiming the lives of nearly 7,000 Virginia women in 2009. So, what can YOU do to decrease your risk? Here are some preventive measures that will help!

1. Exercise regularly
No matter what kind of activity you're engaging in, getting your heart rate up and keeping it there for at least 30 minutes every day is an easy way to lower your risk for cardiovascular disease and stroke. This type of activity works cardiac muscles, improves blood flow, and strengthens blood vessels.

2. Maintain a healthy body weight
We all struggle with weight issues from time to time. Starting with a healthy diet that's high in protein, low in saturated fats and cholesterol, low in sodium, and containing plenty of fresh fruits and vegetables is a great start toward attaining and maintaining a healthy weight. Healthy weight status in adults is assessed by calculating body mass index (BMI), which indicates the amount of a person's body fat. Any adult with a BMI of 30 or higher is considered obese, while those with a BMI of 25–29.9 are considered overweight. Normal weight (the lowest weight-related risk of heart disease and stroke) is a BMI of 18.5–24.9. (A quick way to compute your own BMI appears at the end of this article.)

3. Go easy on the alcohol
Excessive alcohol use increases the risks for high blood pressure, heart disease, and stroke. Enjoy your drinks, but do so in moderation to promote heart health.

4. Prevent and control high blood pressure
Simple changes in your lifestyle, like exercising regularly and eating a healthy diet, can help to reduce high blood pressure and keep blood pressure at appropriate levels. Check your blood pressure regularly and consult with your doctor if your blood pressure is elevated (anything over the normal range of 120/80).

5. Prevent and control diabetes
People with diabetes are at greater risk of cardiovascular disease and stroke. Ask your doctor what steps you can take to reduce your risk if you have diabetes.

Be healthy! Be happy!
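As promised in tip 2, here is the BMI arithmetic in code form. A minimal sketch: the category cutoffs are the ones given above, and the example height and weight are invented:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

def category(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

example = bmi(82, 1.75)                              # invented example person
print(f"BMI {example:.1f} -> {category(example)}")   # BMI 26.8 -> overweight
```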
Henry David Thoreau was born on July 12, 1817, of rather ordinary parents in Concord, outside of Boston, Massachusetts. His childhood and adolescence, from what little is known about these periods of his life, appear to have been typical for the time. Thoreau attended the Concord Academy as an undistinguished student, and when he was sixteen, his father, a pencil manufacturer, had saved enough money to send him to Harvard. There he read a great deal and thus philosophically and literarily prepared himself to become a spokesman for the transcendentalist movement; again, however, his career as a student was unspectacular.

When Thoreau graduated from Harvard in 1837, he had been educated for four possible professions: law, the clergy, business, or teaching. He was really interested in none of these professions for which he had been prepared, but he tried teaching for a short while. He was given a position in Concord, but soon resigned when he discovered that he was expected to teach by conscientiously beating the ABC's into his students with a rod. He decided that he would rather make pencils with his father and do some occasional surveying. (The latter activity would later come to be one of the main bread-and-butter occupations of his life.) Needless to say, the townspeople were surprised that a Harvard man should turn out so disappointingly. This was to be the first of many ways in which Thoreau would rebel against society's expectations for him.

Yet, while the townspeople were looking upon him as a loafer, Thoreau was then, in the late 1830s and early 1840s, mapping out his strategy to become as famous and influential a transcendentalist writer and lecturer as Emerson. He tried teaching again in 1838 with his brother John, and they conducted what today would still be considered a progressive school. But this was only a tangential interest for him; he had already decided what his primary vocation would be. In 1837, he had begun his journal, the workbook to which he would practically devote his life and in which he would perfect his art. Until his death in 1862, Thoreau religiously worked day in and day out at this occupation of which the scoffing townspeople were ignorant. To realize the intense seriousness with which he pursued it, one can profitably read through his journal of 1838. There one finds the anxiety of the struggling, would-be master craftsman whose work does not yet meet his own standards of excellence:

But what does all this scribbling amount to? What is now scribbled in the heat of the moment one can contemplate with somewhat of satisfaction, but alas! tomorrow — aye, tonight — it is stale, flat and unprofitable — in fine, is not; only its shell remains like some red parboiled lobster shell which, kicked aside ever so often, still stares at you in the path.

In short, Thoreau was deadly serious when he took up his pen — so serious that, as was usual with Thoreau, he probably revised and polished the above complaint several times before he entered it in his journal.

During the time that Thoreau and his brother were conducting their academy, they went on a boat trip (1839) that was to provide the raw material which Thoreau would work into his first book, A Week on the Concord and Merrimack Rivers (1849). It was ten years between the actual river voyage and when his highly idealistic celebration of it was published. During that time, Thoreau read, wrote, and worked at whatever jobs he could find.
He surveyed, made pencils with his father, and did odd jobs when he needed the money — thus leaving him plenty of time for his journal. In 1841, Thoreau moved into the Emerson household as the family's handyman. He made much use of Emerson's library, and a warm relationship grew between them as they daily conversed and as Thoreau began to submit poems and essays to the Dial, the transcendentalist journal that Emerson edited. (Most of these poems and essays were later included in A Week on the Concord and Merrimack Rivers.) Emerson came to admire Thoreau so much that he allowed him to edit the entire April 1843 issue. Emerson had high ambitions for his young friend and, in 1843, he arranged for Thoreau to stay with his brother, William Emerson, on Staten Island so that he might make contacts with New York publishers. Unfortunately, this attempt to find publication was a failure, and Thoreau soon returned to Concord and resumed work on his journal.

Then in March 1845, he initiated what was to be the most significant event of his life: he borrowed an ax and began to construct a cabin on Emerson's land by the north shore of Walden Pond. He moved into his cabin on July 4, 1845, and, as Walden indicates, he attempted to reduce his needs to the barest essentials of life and to establish an intimate, spiritual relationship with nature.

For Thoreau, living at Walden Pond was a noble experiment in three ways. First, Thoreau was intent upon resisting the debilitating effects of the industrial revolution (division of labor, the mind-dulling repetition of factory work, and a materialist vision of life). The Walden experiment allowed him to "turn back the clock" to the simpler, agrarian way of life that was quickly disappearing in New England. Second, by reducing his expenditures, he reduced the time necessary to support himself, and thus he could devote more time to the perfection of his art. While at the pond, he was able to write most of A Week on the Concord and Merrimack Rivers. And third, he and Emerson had asserted that one can most easily experience the Ideal, or the Divine, through nature; at Walden Pond, Thoreau was able to test continually the validity of this theory by living closely, day-to-day, with nature.

Thoreau left the pond in 1847, and when Emerson went to England in the fall of that year, Thoreau once again joined the household to look after the family's needs. Upon Emerson's return in 1848, Thoreau moved back to his parents' home, where he remained until his death. Between 1847 and 1854, Thoreau spent his time walking through the countryside, making pencils, surveying, and devoting himself to a new passion: the composition of Walden. The work went through many painstaking revisions during those seven years; yet when it appeared, the product of those years of labor was not well received. While it was not so great a failure as A Week on the Concord and Merrimack Rivers (275 sold; 75 given away), and while it did receive some good reviews, it hardly fulfilled Thoreau's dream of becoming a major spokesman for the transcendentalist movement. He did not complain about the poor reception given to Walden, but it must have been a major psychological setback. Viewed today, its publication marked the high point of his career, and his contemporaries virtually ignored it.

Thoreau's later years were characterized by an increased interest in the cause of abolition and the scientific study of nature.
In 1844, he wrote an essay entitled "Herald of Freedom," which praised abolitionist Wendell Phillips, and in 1849 he published "Civil Disobedience," which also dealt with the subject of slavery in America. In neither piece did Thoreau protest loudly, but in 1854, his indignation began to grow when he delivered a speech entitled "Slavery in Massachusetts." He became more involved with the abolitionist movement, and in 1859 delivered his fiery "Plea for Captain John Brown," wherein he praised the morality of Brown's violent resistance to slavery and sternly denounced the federal government for sanctioning the institution of slavery. This speech was soon followed by another entitled "The Last Days of John Brown." In 1844, Thoreau had advocated non-violent, passive resistance to slavery, but as it became more and more a central concern of his life, he gradually came to advocate armed revolt, even civil war, as a valid means of destroying an immoral system. In his abolitionist speeches and essays, Thoreau revealed a turbulent sense of outrage.

That was one side of his personality. The other side, as seen when he was in the presence of nature, also remained strong during his later years. And as he grew weaker after his bouts with tuberculosis in 1851 and 1855, he turned to nature in order to regain his health — but not with the transcendentalist fervor that characterized his youth. During this period of decline, his journal reveals a growing interest in natural history accompanied by a more "scientific," less transcendental, approach to nature. Although the latter part of his journal does contain many imaginative descriptions of nature similar to those found in Walden, there is an increasing number of entries like the following of 1860:

It rained hard on the twentieth and part of the following night — two and one eighth inches of rain in all, there being no drought — raising the river from some two or three inches above summer level to seven and a half inches above the summer level at 7 A.M. of the twenty-first.

Such entries have led some scholars to think that Thoreau gradually "decayed" as a transcendentalist during the late 1850s and early 1860s.

On May 6, 1862, Thoreau died in his parents' home in Concord. A man of admirable spirit, he passed out of the world with typical Thoreauvian humor: when a friend asked him if he had made amends with God, Thoreau quipped, "I did not know that we had ever quarreled." When Thoreau died, scarcely anyone in America noticed, and the few that did mourn his passing would have been surprised to learn that, a century later, he would be unanimously acknowledged as one of America's greatest literary artists.

Thoreau had fervently devoted himself to the pursuit of a literary career in the late 1830s, but after thirty years of intense effort in his art, he died a failure by contemporary standards of success. In his eulogy at Thoreau's funeral, Emerson declared that "the country knows not yet, or in the least part, how great a son it has lost," and it was not until the twentieth century was well under way that Thoreau came to be recognized as the genius that he was.
What little recognition Thoreau did receive during the latter half of the nineteenth century was strongly colored by some unfortunate remarks made by Emerson and James Russell Lowell, two very influential men in matters of literary taste. Both men published essays on Thoreau shortly after his death and virtually determined for quite some time what the public's attitude toward Thoreau would be. While supposedly eulogizing Thoreau, Emerson managed to emphasize every negative trait that he had found (or imagined) in Thoreau's personality. One sees in his portrait of Thoreau an almost inhuman ascetic and stoic ("He had no temptations to fight against — no appetites, no passions, no taste for elegant trifles") and a somewhat cranky, anti-social hermit ("Few lives contained so many renunciations. . . . It cost him nothing to say No; indeed, he found it much easier than to say Yes"). In this eulogy, Emerson also strongly emphasized Thoreau's abilities as a naturalist, and thus established the image of Thoreau-the-nature-lover (in the worst sense of the term) that was to obscure his primary significance as an artist for quite some time.

Three years later, in 1865, James Russell Lowell published his essay on Thoreau, and reinforced Emerson's caricature of Thoreau as a cold, brittle, anti-social recluse. He wrote that Thoreau "seems to me to have been a man with so high a conceit of himself that he accepted without questioning, and insisted on accepting, his defects and weaknesses of character as virtues and powers peculiar to himself. . . . His mind strikes us as cold and wintry." This was a damning indictment, but even more detrimental to Thoreau's reputation was Lowell's assertion that Thoreau was merely a minor Emerson, an imitator of his mentor. In A Fable for Critics, Lowell depicted a Thoreau who trod "in Emerson's tracks with legs painfully short." In addition, he opened the essay on Thoreau with a similar gibe:

Among the pistillate plants kindled to fruitage by the Emersonian pollen, Thoreau is thus far the most remarkable; and it is something eminently fitting that his posthumous works should be offered us by Emerson, for they are strawberries from his own garden.

To realize the influence that Lowell's opinion carried in literary circles, one should note that as late as 1916, Mark Van Doren reiterated a similar misconception in his Henry David Thoreau. Van Doren wrote that "Thoreau is a specific Emerson" and that, philosophically, Thoreau's position was "almost identical with Emerson's."

To those familiar with Emerson's and Thoreau's writings, such a view of an "Emersonian Thoreau" is a gross misconception. Philosophically and aesthetically, they were often at odds, and one need only read Emerson's Nature and Thoreau's Walden to note the differences in personality and, most important, the differences in their art. Yet, the "Emersonian" tag hindered the recognition of Thoreau's unique greatness for over half a century, as did the popular conceptions of the effete "nature lover" and the cranky hermit. One finds, for example, Oliver Wendell Holmes treating Thoreau as a joke: "Thoreau, the nullifier of civilization . . . insisted on nibbling his asparagus at the wrong end." And Robert Louis Stevenson echoed Lowell by terming Thoreau "dry, priggish, and selfish," adding that "it was not inappropriate, surely, that he had much close relations with the fish."
The ill-founded jokes began to come to an end during the 1890s when serious scholars began to take a closer look at the basis of Thoreau's small reputation. The portraits of Thoreau by Emerson and Lowell were re-examined and most critics came to the conclusion that, as Charles C. Abbot wrote in 1895, "neither Emerson nor Lowell was fitted to the task they undertook." Emerson's journals revealed a basic misunderstanding of Thoreau's aims and accomplishments; Lowell, the "in-door, kid-glove critic," was obviously out of touch with the thorny world that Thoreau inhabited.

Between the 1890s and the mid-twentieth century, the old misconceptions about Thoreau withered away, and as critics began examining Thoreau on his own ground — that is, his writings — his reputation grew rapidly. Today, his reputation as an artist is greater than Emerson's, and, ironically, virtually no one except specialists in American literature reads either Lowell's poetry or his literary criticism. As Wendell Glick has noted: "One of the most conspicuous nails in the coffin of Lowell's reputation is his maligning of Thoreau's genius." By the unanimous consent of literary critics, "genius" is the only word to describe the once unappreciated artist of a small town in Massachusetts.
Can word associations and affect be used as indicators of differentiation and consolidation in decision making?

Two studies investigated how free associations to decision alternatives could be used to describe decision processes. Choices between San Francisco and San Diego as a vacation city were investigated in the first study with US participants. The participants were asked to list any association that occurred to them while thinking about each of the cities in turn. After this, the attractiveness values of these associations were elicited from each individual. Half of the subjects gave the associations before the decision and half after having made their decisions. In congruence with Differentiation and Consolidation theory (Svenson, 1996), the attractiveness values of the associations were more supportive of the chosen alternative after the decision than before, primarily on more important attributes. The results also showed that a significant number of associations were neutral and had no positive or negative affective value.

The participants in the second, very similar study were also asked to rate their immediate holistic/overall emotional reactions to each of the vacation cities (in this case Paris and Rome, with Swedish subjects) before the start of the experiment and the associations. After having given their associations, rated them, and made their decisions, the participants were asked to go back to their earlier attractiveness ratings and judge the strengths of the emotional/affective and cognitive/rational value components of each of the earlier associations. The results replicated those of the first study in that the average rated attractiveness of the associations to a chosen alternative was stronger after a decision than before. However, the change was smaller than in Study 1, which was interpreted as a possible result of the initial holistic associations given in Study 2.

It was concluded that the technique of free associations is a valuable tool in process studies of decision making, here based on the Diff Con theoretical framework.
Widespread publicity has drawn attention to lead-tainted children's toys from China, but many people don't know that the biggest source of lead poisoning may be lurking in their homes. A UConn program, the Healthy Environments for Children Initiative, is working to educate the public about the dangers of lead poisoning and ways to prevent it.

Dangerous levels of lead can be found in paint chips, dust, and debris in houses built before 1978, when the U.S. banned lead-based paint from residential use. Today more than 300,000 children have lead poisoning, which damages the brain, nervous system, and other systems, and causes lifelong learning, behavior, and health problems.

"Sadly, this preventable problem still exists," says Joan Bothell, a writer and curriculum developer for the Healthy Environments for Children Initiative, a collaboration between the University's Cooperative Extension System and the Department of Human Development and Family Studies. Since the mid-1990s, Bothell has been working with Mary-Margaret Gaudio, extension educator at UConn's Hartford County Extension Center and a co-founder of the program, on educational materials about the dangers of lead poisoning and how to avoid them. "We try to make people aware that they can prevent lead poisoning, and that it's not difficult," Bothell says.

The work began in 1992, when state Department of Public Health officials asked Gaudio to write some easy-to-understand fact sheets about lead poisoning. "They liked what we did," Gaudio says. The next project was a training manual about lead poisoning. Since then, the Healthy Environments for Children Initiative has developed educational and outreach programs and materials in English and Spanish for children, childcare providers, teachers, contractors, and do-it-yourselfers, and has partnered with state, regional, and national agencies, as well as non-profits and community-based organizations.

[Image: Materials produced by UConn's lead poisoning prevention program. Photo by Peter Morenus]

The materials include a Native American-themed curriculum for young children, "How Mother Bear Taught the Children about Lead," that won an award from the U.S. Environmental Protection Agency (EPA), and a video aimed at do-it-yourselfers, "Don't Spread Lead." The federal Centers for Disease Control and Prevention has used some of the program's materials in its National Lead Poisoning Prevention training programs. In Connecticut, the program's 24 trainers have trained nearly 2,000 people in lead-safe work practices for painting, remodeling, and maintenance.

"Lead dust is usually the major culprit for any child who lives in a house with lead-based paint that is disturbed or deteriorating," says Bothell, adding that people who live in older houses need to learn ways to deal with lead safety issues. Other sources of lead include old furniture, toys, and jewelry. Bothell recommends checking lead recalls for consumer products at the state Department of Public Health web site.

"Simple good practices" are part of the prevention process, according to Gaudio. "We tell parents to make sure their children wash their hands before meals and snacks, leave their shoes at the door, eat healthy foods, and stay away from paint dust and paint flakes," she says. The educational materials also teach children to do some of these things themselves.

HEC also administers the New England Lead Coordinating Committee, a regional consortium of state agencies working to eliminate lead poisoning, especially in children.
The group held a conference on new approaches to prevent lead poisoning at the Storrs campus in June.
Physical features: Wild pigs are distinguished by a barrel-like body and a large head. The snout is prominent. Each foot has four digits. The body is covered with bristly hair. The upper canines are well developed and turn upward.

Reproduction: Puberty is attained at about one year of age. The females are highly prolific; the normal litter size varies between six and eight. The gestation period is 115-120 days.

Behaviour: Wild pigs are mainly active at night and live in groups. The adult male leads a solitary life.

Life span: 10-20 years in the wild.
This article has been posted for discussion at Judith Curry's "Climate Etc."

Satellite data for the period surrounding the Mt Pinatubo eruption in 1991 provide a means of estimating the scale of the volcanic forcing in the tropics. A simple relaxation model is used to examine how the temporal evolution of the climate response will differ from that of the radiative forcing. Taking this difference into account is essential for proper detection and attribution of the relevant physical processes. Estimations are derived for both the forcing and the time-constant of the climate response. These are found to support values derived in earlier studies which vary considerably from those of current climate models. The implications of these differences for inferring climate sensitivity are discussed. The study reveals the importance of secondary effects of major eruptions, pointing to a persistent warming effect in the decade following the 1991 eruption. The inadequacy of the traditional application of linear regression to climate data is highlighted, showing how this will typically lead to erroneous results and thus the likelihood of false attribution. The implications of this false attribution for the post-2000 divergence between climate models and observations are discussed.

Keywords: climate; sensitivity; tropical feedback; global warming; climate change; Mount Pinatubo; volcanism; GCM; model divergence; stratosphere; ozone; multivariate; regression; ERBE; AOD

For the period surrounding the eruption of Mount Pinatubo in the Philippines in June 1991, there are detailed satellite data for both the top-of-atmosphere ( TOA ) radiation budget and atmospheric optical depth ( AOD ) that can be used to derive the changes in radiation entering the climate system during that period. Analysis of these radiation measurements allows an assessment of the system response to changes in the radiation budget. The Mt. Pinatubo eruption provides a particularly useful natural experiment, since the spread of the effects was centred in the tropics and dispersed fairly evenly between the hemispheres ( see fig. 6 of Self et al 1995, a meta-study which provides a lot of detail about the composition and dispersal of the aerosol cloud ).

The present study investigates the tropical climate response in the years following the eruption. It is found that a simple relaxation response, driven by volcanic radiative 'forcing', provides a close match with the variations in TOA energy budget. The derived scaling factor to convert AOD into radiative flux supports earlier estimations based on observations from the 1982 El Chichon eruption. These observationally derived values of the strength of the radiative disturbance caused by major stratospheric eruptions are considerably greater than those currently used as input parameters for general circulation climate models ( GCMs ). This has important implications for estimations of climate sensitivity to radiative 'forcings' and attribution of the various internal and external drivers of the climate system.

Changes in net TOA radiative flux, measured by satellite, were compared to volcanic forcing estimated from measurements of atmospheric optical depth. Regional optical depth data, with monthly resolution, are available in latitude bands for four height ranges between 15 and 35 km [DS1], and these values were averaged from 20S to 20N to provide a monthly mean time series for the tropics. Since optical depth is a logarithmic scale, the values for the four height bands were added at each geographic location.
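As an illustration of this pre-processing, a minimal sketch is given below. It assumes the AOD data have been loaded as a monthly array over the four height bands and a set of latitude bands; the array shapes, the cosine area weighting and all names are illustrative assumptions, and the linear scaling to a flux anomaly ( discussed in the following paragraphs ) is applied at the end.

```python
import numpy as np

# Hypothetical layout: aod[month, band, lat], four height bands, with
# lats giving the centre latitude of each band of the grid.
n_months, n_bands, n_lats = 240, 4, 36
aod = np.zeros((n_months, n_bands, n_lats))        # placeholder data
lats = np.linspace(-87.5, 87.5, n_lats)

# Optical depths are additive along the line of sight, so sum the four
# height bands at each location, then average 20S-20N, weighting each
# latitude band by the cosine of latitude (proportional to its area).
column_aod = aod.sum(axis=1)                       # -> [month, lat]
tropics = np.abs(lats) <= 20.0
w = np.cos(np.radians(lats[tropics]))
tropical_aod = (column_aod[:, tropics] * w).sum(axis=1) / w.sum()

# Linear scaling of AOD to radiative forcing, per Lacis et al (see below).
VF = 30.0                                          # W/m2 per unit AOD
volcanic_forcing = -VF * tropical_aod              # negative: flux blocked
```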
Lacis et al suggest that aerosol radiative forcing can be approximated by a linear scaling factor of AOD over the range of values concerned. This is the approach usually adopted in IPCC reviewed assessments and is used here. As a result, the vertical summations are averaged across the tropical latitude range for comparison with radiation data. Tropical TOA net radiation flux is provided by the Earth Radiation Budget Experiment ( ERBE ) [DS2].

One notable effect of the eruption of Mt Pinatubo on tropical energy balance is a variation in the nature of the annual cycle, as seen by subtracting the pre-eruption mean annual variation. As well as the annual cycle due to the eccentricity of the Earth's orbit, which peaks at perihelion around 4th January, the sun passes over the tropics twice per year, and the mean annual cycle in the tropics shows two peaks: one in March, the other in Aug/Sept, with a minimum in June and a lesser dip in January. Following the eruption, the residual variability also shows a six-monthly cycle. The semi-annual variability changed and only returned to similar levels after the 1998 El Nino.

Figure 1 showing the variability of the annual cycle in net TOA radiation flux in the tropics.

It follows that using a single period as the basis for the anomaly leaves significant annual residuals. To minimise the residual, three different periods, each a multiple of 12 months, were used to remove the annual variations: pre-1991, 1992-1995 and post-1995. The three resultant anomaly series were combined, ensuring the differences in the means of each period were respected. The mean of the earlier, pre-eruption annual cycles was taken as the zero reference for the whole series.

There is a clearly repetitive variation during the pre-eruption period that produces a significant downward trend starting 18 months before the Mt. Pinatubo event. Since it may be important not to confound this with the variation due to the volcanic aerosols, it was characterised by fitting a simple cosine function, which was subtracted from the fitting period. Though the degree to which this can reasonably be assumed to continue is speculative, it seems some account needs to be taken of this pre-existing variability; the effect this has on the result of the analysis is assessed below. The break of four months in the ERBE data at the end of 1993 was filled with the anomaly mean for the period to provide a continuous series.

Figure 2 showing ERBE tropical TOA flux adaptive anomaly.

Figure 2b showing ERBE tropical TOA flux adaptive anomaly with pre-eruption cyclic variability subtracted.
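The "adaptive anomaly" construction described above can be sketched as follows; the period boundaries are nominal ( the text specifies three periods, each a multiple of 12 months, around the eruption ) and the function names are illustrative.

```python
import numpy as np

def annual_cycle(x, months):
    """Mean value for each calendar month (12-element array)."""
    return np.array([x[months == m].mean() for m in range(12)])

def adaptive_anomaly(flux, years, months):
    """Remove a separate mean annual cycle for each sub-period, while
    respecting the differences between the period means; the zero
    reference is the pre-eruption mean."""
    periods = [years < 1991.5,
               (years >= 1991.5) & (years < 1996.0),
               years >= 1996.0]                      # nominal boundaries
    anom = np.empty_like(flux, dtype=float)
    ref = flux[periods[0]].mean()                    # pre-eruption zero
    for p in periods:
        cyc = annual_cycle(flux[p], months[p])       # this period's cycle
        anom[p] = flux[p] - cyc[months[p]] + (flux[p].mean() - ref)
    return anom
```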
Since the TOA flux represents the net sum of all "forcings" and the climate response, the difference between the volcanic forcing and the anomaly in the energy budget can be interpreted as the climate response to the radiative perturbation caused by the volcanic aerosols. This involves some approximations. Firstly, since the data are restricted to the tropical regions, the vertical energy budget does not fully account for energy entering and leaving the region: there is a persistent flow of energy out of the tropics via both ocean currents and atmospheric circulation, and variations in wind-driven ocean currents and atmospheric circulation may be part of any climate feedback reaction.

Secondly, taking the difference between TOA flux and the calculated aerosol forcing at the top of the troposphere to represent the top-of-troposphere energy budget assumes negligible energy is accumulated or lost internally to the upper atmosphere. Although there is a noticeable change in stratospheric temperature as a result of the eruption, the heat capacity of the rarefied upper atmosphere means this is negligible in this context.

Figure 3 showing changes in lower stratosphere temperature due to volcanism.

A detailed study of the atmospheric physics and radiative effects of stratospheric aerosols by Lacis, Hansen & Sato suggested that radiative forcing at the tropopause can be estimated by multiplying optical depth by a factor of 30 W/m2. This value provides a reasonably close match to the initial change in ERBE TOA flux. However, later studies, attempting to reconcile climate model output with the surface temperature record, have reduced the estimated magnitude of the effect of stratospheric aerosols. With the latter adjustments, the initial effect on net TOA flux is notably greater than the calculated forcing, which is problematic, especially since Lacis et al reported that the initial cooling may be masked by the warming effect of larger particles ( > 1µm ). Indeed, in order for the calculated aerosol forcing to be as large as the initial changes in TOA flux, without invoking negative feedbacks, it is necessary to use a scaling of around 40 W/m2. A comparison of these values is shown in figure 4.

What is significant is that, from just a few months after the eruption, the disturbance in TOA flux is consistently less than the volcanic forcing. This is evidence of a strong negative feedback in the tropical climate system acting to counter the volcanic perturbation. Just over a year after the eruption, it has fully corrected the radiation imbalance despite the disturbance in AOD still being at about 50% of its peak value. The net TOA reaction then remains positive until the "super" El Nino of 1998. This is still the case with the reduced forcing values of Hansen et al, as can also be seen in figure 4.

Figure 4 comparing net TOA flux to various estimations of aerosol forcing.

The fact that the climate is dominated by negative feedbacks is not controversial, since this is a pre-requisite for overall system stability. The main stabilising feedback is the Planck response ( about 3.3 W/m2/K at typical ambient temperatures ). Other feedbacks will increase or decrease the net feedback around this base-line value. Where IPCC reports refer to net feedbacks being positive or negative, it is relative to this value. The true net feedback will always be negative.

It is clear that the climate system takes some time to respond to initial atmospheric changes. It has been pointed out that, to correctly compare changes in radiative input to surface temperatures, some kind of lag-correlation analysis is required: Spencer & Braswell 2011, Lindzen & Choi 2011, Trenberth et al 2010. All three show that correlation peaks with the temperature anomaly leading the change in radiation by about three months.

Figure 5 showing climate feedback response to Mt Pinatubo eruption. Volcanic forcing per Lacis et al.

After a few months, negative feedbacks begin to have a notable impact and the TOA flux anomaly declines more rapidly than the reduction in AOD.
It is quickly apparent that a simple, fixed temporal lag is not an appropriate way to compare the aerosol forcing to its effects on the climate system.

The simplest physical response of a system to a disturbance would be a linear relaxation model, or "regression to the equilibrium", where for a deviation of a variable X from its equilibrium value there is a restoring action that is proportional to the magnitude of that deviation: the more it is out of equilibrium, the quicker its rate of return. This kind of model is common in climatology and is central to the concept of climate sensitivity to changes in various climate "forcings".

dX/dt = -k*X, where k is a constant of proportionality.

The solution of this equation for an infinitesimally short impulse disturbance is a decaying exponential. This is called the impulse response of the system. The response to any change in the input can be found by its convolution with this impulse response. This can be calculated quite simply, since it is effectively a weighted running average calculation. It can also be found by algebraic solution of the ordinary differential equation if the input can be described by an analytic function. This is the method that was adopted in Douglass & Knox 2005, comparing AOD to lower tropospheric temperature ( TLT ).

The effect of this kind of system response is a time lag as well as a degree of low-pass filtering, which reduces the peak and produces a change in the profile of the time series compared to that of the input forcing. In this context, linear regression of the output and input is not physically meaningful and will give a seriously erroneous value of the presumed linear relationship. The speed of the response is characterised by a constant parameter in the exponential function, often referred to as the 'time-constant' of the reaction. Once the time-constant parameter has been determined, the time-series of the system response can be calculated from the time-series of the forcing.

Here, the variation of the tropical climate is compared with a linear relaxation response to the volcanic forcing. The magnitude and time-constant constitute two free parameters and are found to provide a good match between the model and data. This is not surprising, since any deviation from equilibrium in surface temperature will produce a change in the long-wave Planck radiation to oppose it. The radiative Planck feedback is the major negative feedback that ensures the general stability of the Earth's climate. While the Planck feedback is proportional to the fourth power of the absolute temperature, it can be approximated as linear for small changes around typical ambient temperatures of about 300 kelvin.

This is effectively a "single slab" ocean model, but this is sufficient since diffusion into deeper water below the thermocline is minimal on this time scale. This was discussed in Douglass & Knox's reply to Robock. A more complex model which includes a heat diffusion term to a large deep ocean sink can be reduced to the same analytical form with a slightly modified forcing and an increased "effective" feedback. Both these adjustments would be small and do not change the mathematical form of the equation and hence the validity of the current method. See supplementary information.

It is this delayed response curve that needs to be compared to changes in surface temperature in a regression analysis.
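Because the impulse response is a decaying exponential, the convolution reduces to a one-pass recursive filter, i.e. an exponentially weighted running average. A minimal sketch for a monthly series follows; names are illustrative.

```python
import numpy as np

def exp_convolve(forcing, tau):
    """Relaxation response: discrete convolution with exp(-t/tau)/tau,
    i.e. the solution of dX/dt = (F - X)/tau on a monthly grid."""
    alpha = 1.0 - np.exp(-1.0 / tau)    # per-month decay, tau in months
    out = np.empty(len(forcing))
    state = 0.0                         # equilibrium assumed at the start
    for i, f in enumerate(forcing):
        state += alpha * (f - state)    # relax toward the current forcing
        out[i] = state
    return out

# A step input illustrates the lag: after one time-constant the response
# has covered ~63% of the step, after three ~95%.
resp = exp_convolve(np.ones(48), tau=8.0)
print(round(resp[7], 2))                # ~0.63 at 8 months for tau = 8
```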
Regressing the temperature change against the change in radiation is not physically meaningful unless the system can be assumed to equilibrate much faster than the period of the transient being studied, i.e. on a time scale of a month or less. This is clearly not the case, yet many studies have been published which do precisely this, or worse, multivariate regression, which compounds the problem: Santer et al 2014, Trenberth et al 2010, Dessler 2010b, Dessler 2011.

Curiously, Douglass & Knox initially calculate the relaxation response to AOD forcing and appropriately regress this against TLT, but later in the same paper regress AOD directly against TLT and thus find an AOD scaling factor in agreement with the more recent Hansen estimations. This apparent inconsistency in their method confirms the origin of the lower estimations of the volcanic forcing.

The need to account for the fully developed response can be seen in figure 6. The thermal inertia of the ocean mixed layer integrates the instantaneous volcanic forcing as well as the effects of any climate feedbacks. This results in a lower, broader and delayed time series. As shown above, in a situation dominated by the Planck and other radiative feedbacks, this can be simply modelled with an exponential convolution. There is a delay due to the thermal inertia of the ocean mixed layer, but this is not a simple fixed time delay: the relaxation to equilibrium introduces a frequency-dependent phase delay that changes the profile of the time series. Simply shifting the volcanic forcing forward by about a year would line up the "bumps" but not match the profile of the two variables. Therefore neither simple regression nor a lagged regression will correctly associate the two variables: the differences in the temporal evolution of the two would lead to a lower correlation and hence a reduced regression result, leading to incorrect scaling of the two quantities.

Santer et al 2014 attempts to remove ENSO and volcanic signals by a novel iterative regression technique. A detailed account, provided in the supplementary information [8b], reports a residual artefact of the volcanic signal:

The modelled and observed tropospheric temperature residuals after removal of ENSO and volcano signals are characterized by two small maxima. These maxima occur roughly 1-2 years after the peak cooling caused by El Chichon and Pinatubo cooling signals.

Figure XXX. Santer et al 2014 supplementary figure 3 ( panel D ) "ENSO and volcano signals removed"

This description matches the period starting in mid-1992, shown in figure 6 below, where the climate response is greater than the forcing. It peaks about 1.5 years after the peak in AOD, as described. Their supplementary fig. 3 shows a very clear dip and later peak following Pinatubo. This corresponds to the difference between the forcing and the correctly calculated climate response shown in fig. 6. Similarly, the 1997-98 El Nino is clearly visible in the graph of observational data ( not reproduced here ) labelled "ENSO and volcano signals removed". This failure to recognise the correct nature and timing of the volcanic signal leads to an incorrect regression analysis, incomplete removal and presumably incorrect scaling of the other regression variables in the iterative process. This is likely to lead to spurious attributions and incorrect conclusions.
Figure 6 showing tropical feedback as relaxation response to volcanic aerosol forcing ( pre-eruption cycle removed ).

The delayed climatic response to radiative change corresponds to the negative quadrant in figure 3b of Spencer and Braswell (2011), excerpted below, where temperature lags radiative change. It shows the peak temperature response lagging around 12 months behind the radiative change. The timing of this peak is in close agreement with the TOA response in figure 6 above, despite SB11 being derived from later CERES ( Clouds and the Earth's Radiant Energy System ) satellite data from the post-2000 period with negligible volcanism. This emphasises that the value of the correlation in the SB11 graph will be under-estimated, as pointed out by the authors:

Diagnosis of feedback cannot easily be made in such situations, because the radiative forcing decorrelates the co-variations between temperature and radiative flux.

The present analysis attempts to address that problem by analysing the fully developed response.

Figure 7. Lagged-correlation plot of post-2000 CERES data from Spencer & Braswell 2011. ( Negative lag: radiation change leads temperature change. )

The relationship of the climate response, ( TOA net flux anomaly - volcanic forcing ), being proportional to an exponential convolution of AOD, is re-arranged to enable an empirical estimation of the scaling factor by linear regression. Writing the volcanic forcing as -VF * AOD ( negative, since the aerosols reduce the incoming flux ):

TOA - ( -VF * AOD ) = VF * k * exp_AOD      eqn. 1

-TOA = VF * ( AOD - k * exp_AOD )      eqn. 2

VF is the volcanic scaling factor to convert ( positive ) AOD into a radiation flux anomaly in W/m2. The exp_AOD term is the exponential convolution of the AOD data, a function of the time-constant tau, whose value is also to be estimated from the data. This exp_AOD quantity is multiplied by a constant of proportionality, k. Since TOA net flux is conventionally given as positive downwards, it is negated in equation 2 to give a positive VF comparable to the values given by Lacis, Hansen, etc. Average pre-eruption TOA flux was taken as the zero for the TOA anomaly and, since the pre-eruption AOD was also very small, no constant term was included.
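The estimation implied by eqn. 2 can be sketched as below. For a fixed time-constant the right-hand side is linear in VF and VF*k, so both can be recovered in a single ordinary least squares step with no intercept; this particular arrangement is an illustrative assumption ( the method notes at the end describe the actual fitting and checking procedure ).

```python
import numpy as np

def fit_vf_k(toa_anom, aod, exp_aod):
    """Fit eqn. 2:  -TOA = VF*AOD - (VF*k)*exp_AOD."""
    y = -np.asarray(toa_anom)
    X = np.column_stack([aod, exp_aod])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)   # no constant term
    vf = a          # W/m2 per unit AOD
    k = -b / a      # fraction of lagged forcing matched by the response
    return vf, k
```

The time-constant tau does not enter linearly, so it is held fixed here and scanned externally, as described in the method notes below.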
Since the relaxation response effectively filters out ( integrates ) much of the high-frequency variability, giving a less noisy series, this was taken as the independent variable for regression. This choice acts to minimise regression dilution due to the presence of measurement error and non-linear variability in the independent variable. Regression dilution is an important and pervasive problem that is often overlooked in published work in climatology, notably in attempts to derive an estimation of climate sensitivity from temperature and radiation measurements and from climate model output: Santer et al 2014, Trenberth et al 2010, Dessler 2011, Dessler 2010b, Spencer & Braswell 2011. The convention of choosing temperature as the independent variable will lead to spuriously high sensitivity estimations. This was briefly discussed in the appendix of Forster & Gregory 2006, though ignored in the conclusions of the paper:

It has been suggested that a technique based on total least squares regression or bisector least squares regression gives a better fit, when errors in the data are uncharacterized (Isobe et al. 1990). For example, for 1985–96 both of these methods suggest YNET of around 3.5 +/- 2.0 W m-2 K-1 (a 0.7–2.4 K equilibrium surface temperature increase for 2×CO2), and this should be compared to our 1.0–3.6 K range quoted in the conclusions of the paper.

Regression results were thus examined for residual correlation.

Taking the TOA flux, less the volcanic forcing, to represent the climatic reaction to the eruption gives a response that peaks about twelve months after the eruption, when the stratospheric aerosol load is still at about 50% of its peak value. This implies a strong negative feedback actively countering the volcanic disturbance. This delay in the response, due to thermal inertia in the system, also produces an extended period during which the direct volcanic effects are falling and the climate reaction is thus greater than the forcing. This results in a recovery period, during which there is an excess of incoming radiation compared to the pre-eruption period which, to an as yet undetermined degree, recovers the energy deficit accumulated during the first year, when the volcanic forcing was stronger than the developing feedback. This presumably also accounts for at least part of the post-eruption maxima noted in the residuals of Santer et al 2014.

Thus, if the lagged nature of the climate response is ignored and direct linear regression between climate variables and optical depth is conducted, the later extended period of warming may be spuriously attributed to some other factor. This represents a fundamental flaw in multivariate regression studies such as Foster & Rahmstorf 2011 and Santer et al 2014, among others, that could lead to seriously erroneous conclusions about the relative contributions of the various regression variables.

For the case where the pre-eruption variation is assumed to continue to underlie the ensuing reaction to the volcanic forcing, the ratio of the relaxation response to the aerosol forcing is found to be 0.86 +/- 0.07, with a time-constant of 8 months. This corresponds to the value reported in Douglass & Knox 2005, derived from AOD and lower troposphere temperature data. The scaling factor to convert AOD into a flux anomaly was found to be 33 W/m2 +/- 11%. With these parameters, the centre line of the remaining 6-month variability ( shown by the gaussian filter ) fits very tightly to the relaxation model shown in figure 6.

If the downward trend in the pre-eruption data is ignored ( i.e. its cause is assumed to stop at the instant of the eruption ), the result is very similar ( 0.85 +/- 0.09 and 32.4 W/m2 +/- 9% ) but leads to a significantly longer time-constant of 16 months. In this case the fitted response does not fit nearly as well, as can be seen by comparing figures 6 and 8: the response is over-damped, poorly matching the post-eruption change, indicating that the corresponding time-constant is too long.

Figure 8 showing tropical climate relaxation response to volcanic aerosol forcing, fitted while ignoring pre-eruption variability.

The analysis with the pre-eruption cycle subtracted provides a generally flat residual ( figure 9 ), showing that it accounts well for the longer-term response to the radiative disruption caused by the eruption.
It is also noted that the truncated peak, resulting from substitution of the mean of the annual cycle to fill the break in the ERBE satellite data, lies very close to the zero residual line. While there is no systematic deviation from zero, it is clear that there is a residual seasonal effect and that the amplitude of this seasonal residual also seems to follow the fitted response.

Figure 9 showing the residual of the fitted relaxation response from the satellite-derived, top-of-troposphere disturbance.

Since the magnitude of the pre-eruption variability in TOA flux, while smaller, is of the same order as the volcanic forcing, and its period is similar to the duration of the atmospheric disturbance, the time-constant of the derived response is quite sensitive to whether this cycle is removed or not. However, it does not have much impact on the fitted estimation of the scaling factor ( VF ) required to convert AOD into a flux anomaly, or on the proportion of the exponentially lagged forcing that matches the TOA flux anomaly. Assuming that whatever was causing this variability stopped at the moment of the eruption seems unreasonable, but whether it was as cyclic as it appears to be, or how long that pattern would continue, is speculative. However, approximating it as a simple oscillation seems to be more satisfactory than ignoring it.

In either case, there is strong support here for values close to the original Lacis et al 1992 calculations of volcanic forcing, which were derived from physics-based analysis of observational data, as opposed to later attempts to reconcile the output of general circulation models by re-adjusting physical parameters.

Beyond the initial climate reaction analysed so far, it is noted that the excess incoming flux does not fall to zero. To see this effect more clearly, the deviation of the flux from the chosen pre-eruption reference value is integrated over the full period of the data. The result is shown in figure 10.

Figure 10 showing the cumulative integral of climate response to Mt Pinatubo eruption.

Pre-eruption variability produces a cumulative sum initially varying about zero. Two months after the eruption, when it is almost exactly zero, there is a sudden change as the climate reacts to the drop in energy entering the troposphere. From this point onwards there is an ever increasing amount of additional energy accumulating in the tropical lower climate system. With the exception of a small drop, apparently in reaction to the 1998 'super' El Nino, this tendency continues to the end of the data. While the simple relaxation model seems to adequately explain the initial four years following the Mt Pinatubo event, it does not explain the budget settling to a higher level. Thus there is evidence of a persistent warming effect resulting from this major stratospheric eruption.
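The cumulative budget shown in figure 10 is a straightforward running integral of the monthly flux anomaly; a sketch, with a placeholder for the anomaly series from the earlier examples:

```python
import numpy as np

toa_anom = np.zeros(240)   # monthly net TOA flux anomaly (W/m2), placeholder
seconds_per_month = 365.25 * 24 * 3600 / 12

# Running integral of the flux anomaly: accumulated energy per unit area.
cumulative_energy = np.cumsum(toa_anom) * seconds_per_month   # J/m2
```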
Concerning the more recent estimations of aerosol forcing, it should be noted that there is a strong commonality of authors in the papers cited here, so rather than being the work of conflicting groups, the more recent weightings reflect the result of a change of approach: from direct physical modelling of the aerosol forcing in the 1992 paper, to the later attempts to reconcile general circulation model ( GCM ) output by altering the input parameters. From Hansen et al 2002 ( emphasis added ):

We illustrate the global response to these forcings for the SI2000 model with specified sea surface temperature and with a simple Q-flux ocean, thus helping to characterize the efficacy of each forcing. The model yields good agreement with observed global temperature change and heat storage in the ocean. This agreement does not yield an improved assessment of climate sensitivity or a confirmation of the net climate forcing because of possible compensations with opposite changes of these quantities. Nevertheless, the results imply that observed global temperature change during the past 50 years is primarily a response to radiative forcings.

From section 2.2.2, Radiative forcing:

Even with the aerosol properties known, there is uncertainty in their climate forcing. Using our SI2000 climate model to calculate the adjusted forcing for a globally uniform stratospheric aerosol layer with optical depth τ = 0.1 at wavelength λ = 0.55 µm yields a forcing of 2.1 W/m2, and thus we infer that for small optical depths Fa (W/m2) ~ 21 τ. In our earlier 9-layer model stratospheric warming after El Chichon and Pinatubo was about half of observed values (Figure 5 of F-C), while the stratospheric warming in our current model exceeds observations, as shown below.

As the authors point out, it all depends heavily upon the assumptions made about the size distribution of the aerosols used when interpreting the raw data. In fact, the newer estimation is shown, in figure 5a of the paper, to be about twice the observed values following Pinatubo and El Chichon. It is unclear why this is any better than half the observed values in their earlier work. Clearly the attributions are still highly uncertain and the declared uncertainty of +/- 15% appears optimistic.

From section 3.3, Model Sensitivity:

The bottom line is that, although there has been some narrowing of the range of climate sensitivities that emerge from realistic models [Del Genio and Wolf, 2000], models still can be made to yield a wide range of sensitivities by altering model parameterizations.

If the volcanic aerosol forcing is underestimated, other model parameters will have to be adjusted to produce a higher sensitivity. It is likely that the massive overshoot in the model response of TLS is an indication of this situation. It would appear that a better estimation lies between these two extremes, possibly around 25 W/m2.

The present study examines the largest and most rapid changes in radiative forcing in the period for which detailed satellite observations are available. The aim is to estimate the aerosol forcing and the timing of the tropical climate response. It is thus not encumbered by trying to optimise estimations of a range of climate metrics over half a century by experimental adjustment of a multitude of "parameters". The result is in agreement with the earlier estimation of Lacis et al.

A clue to the continued excess over the pre-eruption conditions can be found in the temperature of the lower stratosphere, shown in figure 3. Here too, the initial disturbance seems to have stabilised by early 1995, but there is a definitive step change from pre-eruption conditions. Noting the complementary nature of the effects of impurities in the stratosphere on TLS and the lower climate system, this drop in TLS may be expected to be accompanied by an increase in the amount of incoming radiation penetrating into the troposphere. This is in agreement with the cumulative integral shown in figure 10 and the southern hemisphere sea temperatures shown in figure 12.
NASA's Earth Observatory reports that after Mt Pinatubo there was a 5% to 8% drop in stratospheric ozone. Presumably a similar removal happened after El Chichon in 1982, which saw an almost identical reduction in TLS. Whether this is, in fact, the cause, or whether other radiation-blocking aerosols were flushed out along with the volcanic emissions, the effect seems clear and consistent and quite specifically linked to the eruption event. This is witnessed in both the stratospheric and the tropical tropospheric data. Neither effect is attributable to the steadily increasing GHG forcing, which did not record a step change in September 1991. This raises yet another possibility for false attribution in multivariate regression studies and in attempts to arbitrarily manipulate GCM input parameters to engineer a similarity with the recent surface temperature records.

With the fitted scaling factor showing the change in tropical TOA net flux matches 85% of the tropical AOD forcing, the remaining 15% must be dispersed elsewhere within the climate system. That means either storage in deeper waters and/or changes in the horizontal energy budget, i.e. interaction with extra-tropical regions. Since the model fits the data very closely, the residual 15% will have the same time-dependency profile as the 85%, so these tropical/extra-tropical variations can also be seen as part of the climate response to the volcanic disturbance: the excess in horizontal net flux, occurring beyond 12 months after the event, is also supporting restoration of the energy deficit in extra-tropical regions by exporting heat energy. Since the major ocean gyres bring cooler temperate waters into the eastern parts of the tropics in both hemispheres and export warmer waters at their western extents, this is probably a major vector of this variation in heat transportation, as are changes in atmospheric processes like Hadley convection. Extra-tropical regions were previously found to be more sensitive to radiative imbalance than the tropics. Thus the remaining 15% may simply be the more stable tropical climate acting as a buffer and exerting a thermally stabilising influence on extra-tropical regions.

After the particulate matter and aerosols have dropped out, there is also a long-term depletion of stratospheric ozone ( 5 to 8% less after Pinatubo ). Thompson & Solomon (2008) examined how lower stratosphere temperature correlated with changes in ozone concentration and found that, in addition to the initial warming caused by volcanic aerosols, TLS showed a notable ozone-related cooling that persisted until 2003. They note that this is a correlation study and do not imply causation.

Figure 11. Part of fig. 2 from Thompson & Solomon 2008 showing the relationship of ozone and TLS.

A more recent paper by Solomon concluded that roughly equal, low-level aerosol forcing existed before Mt Pinatubo and again since 2000, similarly implying a small additional warming due to lower aerosols in the decade following the eruption:

Several independent data sets show that stratospheric aerosols have increased in abundance since 2000. Near-global satellite aerosol data imply a negative radiative forcing due to stratospheric aerosol changes over this period of about -0.1 watt per square meter, reducing the recent global warming that would otherwise have occurred. Observations from earlier periods are limited but suggest an additional negative radiative forcing of about -0.1 watt per square meter from 1960 to 1990.
The values for volcanic aerosol forcing derived here, being in agreement with the physics-based assessments of Lacis et al, imply that much stronger negative feedbacks must be in operation in the tropics than those resulting from the currently used model "parameterisations" and the much weaker AOD scaling factor. These two results indicate that secondary effects of volcanism may have actually contributed to the late 20th century warming. This, along with the absence of any major eruptions since Mt Pinatubo, could go a long way to explaining the discrepancy between climate models and the relative stability of observational temperature measurements since the turn of the century.

Once the nature of the signal has been recognised in the much less noisy stratospheric record, a similar variability can be found to exist in southern hemisphere sea surface temperatures, the slower rise in SST being accounted for by the much larger thermal inertia of the ocean mixed layer. Taking TLS as an indication of the end of the negative volcanic forcing and the beginning of the additional warming forcing, the apparent relaxation to a new equilibrium takes 3 to 4 years. Regarding this approximately as the 95% settling of three time-constant intervals ( e-folding time ) would be consistent with a time-constant of between 12 and 16 months for extra-tropical southern hemisphere oceans.

These figures are far shorter than the values cited in Santer 2014, ranging from 30 to 40 months, which are said to characterise the behaviour of high-sensitivity models and correspond to typical IPCC values of climate sensitivity. It is equally noted from the lag regression plots in S&B 2011 and Trenberth that climate models are far removed from observational data in terms of producing correct temporal relationships of radiative forcing and temperature.

Figure 12. Comparing SH sea surface temperatures to lower troposphere temperature.

Though the initial effects are dominated by the relaxation response to the aerosol forcing, both figures 9 and 12 show an additional climate reaction has a significant effect later. This appears to be linked to ozone concentration and/or a reduction in other atmospheric aerosols. While these changes are clearly triggered by the eruptions, they should not be considered part of the "feedback" in the sense of the relaxation response fitted here. These effects will act in the same sense as direct radiative feedback and shorten the time-constant by causing a faster recovery. However, the settling time of extra-tropical SST to the combined changes in forcing indicates a time-constant ( and hence climate sensitivity ) well short of the figures produced by analysing climate model behaviour reported in Santer et al 2014.
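The arithmetic behind reading a 3 to 4 year settling time as roughly three e-folding intervals can be checked directly; this is purely a worked illustration.

```python
import numpy as np

# Fraction of a step response completed after n time-constants: 1 - exp(-n)
for n in (1, 2, 3):
    print(n, round(1 - np.exp(-n), 3))   # 0.632, 0.865, 0.95

# A 36 to 48 month settling time read as ~3 time-constants implies:
print(36 / 3, "to", 48 / 3, "months")    # 12.0 to 16.0 months
```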
IPCC on clouds and aerosols, from the AR5 WG1 Full Report, Jan 2014, Chapter 7, Clouds and Aerosols:

No robust mechanisms contribute negative feedback. The responses of other cloud types, such as those associated with deep convection, are not well determined. Satellite remote sensing suggests that aerosol-related invigoration of deep convective clouds may generate more extensive anvils that radiate at cooler temperatures, are optically thinner, and generate a positive contribution to ERFaci (Koren et al., 2010b). The global influence on ERFaci is ...

WG1 are arguing from a position of self-declared ignorance on this critical aspect of how the climate system reacts to changes in radiative forcing. It is unclear how they can declare confidence levels of 95% based on such an admittedly poor level of understanding of the key physical processes.

Analysis of satellite radiation measurements allows an assessment of the system response to changes in radiative forcing. This provides an estimation of the aerosol forcing that is in agreement with the range of physics-based calculations presented by Lacis et al in 1992, and thus brings into question the much lower values currently used in GCM simulations. The considerably higher values of aerosol forcing found here and in Lacis et al imply the presence of notably stronger negative feedbacks in the tropical climate, and hence a much lower range of sensitivity to radiative forcing than those currently used in the models.

The significant lag and the ensuing post-eruption recovery period underline the inadequacy of simple linear regression and multivariate regression in assessing the magnitude of various climate 'forcings' and their respective climate sensitivities. Use of such methods will suffer from regression dilution and omitted variable bias, and can lead to seriously erroneous attributions.

Both the TLS cooling and the energy budget analysis presented here imply a lasting warming effect on surface temperatures triggered by the Mt Pinatubo event. Unless these secondary effects are recognised, and their mechanisms understood and correctly modelled, there is a strong likelihood of this warming being spuriously attributed to some other cause such as AGW.

When attempting to tune model parameters to reproduce the late 20th century climate record, an incorrectly small scaling of volcanic forcing, leading to a spuriously high sensitivity, will need to be counter-balanced by some other variable. This is commonly a spuriously high greenhouse effect, amplified by presumed positive feedbacks from other climate variables less well constrained by observation ( such as water vapour and cloud ). In the presence of the substantial internal variability, this can be made to roughly match the data while both forcings are present ( pre-2000 ). However, in the absence of significant volcanism there will be a steadily increasing divergence. The erroneous attribution problem, along with the absence of any major eruptions since Mt Pinatubo, could explain much of the discrepancy between climate models and the relative stability of observational temperature measurements since the turn of the century.
Self et al 1995: "The Atmospheric Impact of the 1991 Mount Pinatubo Eruption"
Lacis et al 1992: "Climate Forcing by Stratospheric Aerosols"
Hansen et al 1997: "Forcing and Chaos in interannual to decadal climate change"
Hansen et al 2002: "Climate forcings in Goddard Institute for Space Studies SI2000 simulations"
Spencer & Braswell 2011: "On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth's Radiant Energy Balance"
Lindzen & Choi 2011: "On the Observational Determination of Climate Sensitivity and Its Implications"
Trenberth et al 2010: "Relationships between tropical sea surface temperature and top-of-atmosphere radiation"
Santer et al 2014: "Volcanic contribution to decadal changes in tropospheric temperature"
[8b] Santer et al 2014: Supplementary Information
Dessler 2010b: "A Determination of the Cloud Feedback from Climate Variations over the Past Decade"
Dessler 2011: "Cloud variations and the Earth's energy budget"
Forster & Gregory 2006: "The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data"
Foster and Rahmstorf 2011: "Global temperature evolution 1979-2010"
NASA Earth Observatory
Thompson & Solomon 2008: "Understanding Recent Stratospheric Climate Change"
Susan Solomon 2011: "The Persistently Variable "Background" Stratospheric Aerosol Layer and Global Climate Change"
Trenberth 2002: "Changes in Tropical Clouds and Radiation"
Douglass & Knox 2005: "Climate forcing by the volcanic eruption of Mount Pinatubo"
Douglass & Knox 2005b: "Reply to comment by A. Robock on 'Climate forcing by the volcanic eruption of Mount Pinatubo'"
[DS1] AOD data: NASA GISS, derived from the SAGE ( Stratospheric Aerosol and Gas Experiment ) instrument. "The estimated uncertainty of the optical depth in our multiple wavelength retrievals [Lacis et al., 2000] using SAGE observations is typically several percent."
[DS2] ERBE TOA data: Earth Radiation Budget Experiment

[*] Explanatory notes:

Negative feedback is an engineering term that refers to a reaction that opposes its cause. It is not a value judgement about whether it is good or bad. In fact, negative feedbacks are essential in keeping a system stable. Positive feedbacks lead to instability and are thus generally "bad".

Convolution is a mathematical process that is used, amongst other things, to implement digital filters. A relaxation response can be implemented by convolution with an exponential function. This can be regarded as an asymmetric filter with a non-linear phase response. It produces a delay relative to the input and alters its shape.

Regression dilution refers to the reduction in the slope estimations produced by least squares regression when there is significant error or noise in the x-variable. Under negligible x-errors, OLS regression can be shown to produce the best unbiased linear estimation of the slope. However, this essential condition is often overlooked or ignored, leading to erroneously low estimations of the relationship between two quantities. This is explained in more detail here: Detailed method description.

The break of four months in the ERBE data at the end of 1993 was filled with the anomaly mean for the period to provide a continuous series. This truncates what would probably have been a small peak in the data, marginally lowering the local average, but since this was not the primary period of interest this short defect was considered acceptable.

The regression was performed for a range of time-constants from 1 to 24 months.
The period for the regression was from just before the eruption up to 1994.7, when AOD had subsided and the magnitude of the annual cycle was found to change, indicating the end of the initial climatic response ( as determined from the adaptive anomaly shown in figure 2 ). Once the scaling factor was obtained by linear regression for each value of tau, the values were checked for regression dilution by examining the correlation of the residual of the fit with the AOD regressor while varying the value of VF in the vicinity of the fitted value. This was found to give a regular curve with a minimum correlation very close to the fitted value. It was concluded that the least squares regression results were an accurate estimation of the presumed linear relationship.

Low values of time-constant resulted in very high values of VF ( e.g. 154 for tau = 1 month ) that were physically unrealistic and way outside the range of credible values. This effectively swamps the TOA anomaly term and is not meaningful. tau = 6 months gave VF = 54, and this was taken as the lower limit for the range of time-constants to be considered.

The two free parameters of the regression calculations will ensure an approximate fit of the two curves for each value of the time-constant. The latter could then be varied to find the response that best fitted the initial rise after the eruption, the peak, and the fall-off during 1993-1995. This presented a problem since, even with the adaptive anomaly approach, there remains a significant, roughly cyclic, six-month sub-annual variability that approaches in magnitude the climate response of interest. This may be, at least in part, a result of aliasing of the diurnal signal to a period close to six months in the monthly ERBE data, as pointed out by Trenberth. The relative magnitude of the two also varies depending on the time-constant used, making a simple correlation test unhelpful in determining the best correlation with respect to the inter-annual variability. For this reason it could not be included as a regression variable.

Also, the initial perturbation and the response rise very quickly from zero to maximum in about two months. This means that low-pass filtering to remove the 6-month signal will also attenuate the initial part of the effect under investigation. For this reason a light gaussian filter ( sigma = 2 months ) was used. This has 50% attenuation at 2.3 months, comparable to a 4-month running mean filter ( without the destructive distortions of the latter ). This allowed a direct comparison of the rise, the duration of the peak, and the rate of fall-off in the tail that is determined by the value of the time-constant parameter, and thus the value producing the best match to the climate response. Due to the relatively coarse granularity of monthly data, which limits the choice of time-constant values, it was possible to determine a single value that produced the best match between the two variables.

The whole process was repeated with and without subtraction of the oscillatory, pre-eruption variability to determine the effects of this adjustment. The uncertainty values given are those of the fitting process and reflect the variability of the data. They do not account for the experimental uncertainty in interpreting the satellite data.
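A sketch of this scan over candidate time-constants, reusing the exp_convolve and fit_vf_k helpers from the earlier sketches; the gaussian smoothing mirrors the sigma = 2 month filter described above, and the residual sum of squares is a stand-in for the visual comparison of rise, peak and tail described in the text. The window indices are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth(x):
    return gaussian_filter1d(x, sigma=2.0)      # sigma = 2 months

fit_window = slice(72, 116)    # just before the eruption up to ~1994.7
results = {}
for tau in range(6, 25):       # months; shorter values rejected as unphysical
    exp_aod = exp_convolve(tropical_aod, tau)
    vf, k = fit_vf_k(toa_anom, tropical_aod, exp_aod)
    model = vf * (k * exp_aod - tropical_aod)   # modelled TOA anomaly, eqn. 2
    resid = smooth(toa_anom - model)[fit_window]
    results[tau] = (vf, k, float(np.sum(resid ** 2)))
```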
Beyond the single slab model.

A more complex model, which includes a heat diffusion term to a large deep ocean sink, can be seen to reduce to the same analytical form with a slightly modified forcing and an increased "effective" feedback. Both these adjustments would be small and do not change the mathematical form of the equation and hence the validity of the current method.

A typical representation of the linear model used by many authors in published works is of the form:

C * d/dt(Ts) = F - λ*Ts

where Ts is the surface temperature anomaly, F is the radiative flux anomaly, and C is the heat capacity of the mixed ocean layer. λ is a constant which is the reciprocal of climate sensitivity. Eddy diffusion to the deep ocean heat sink, at a constant temperature Td, can be added using a diffusion constant κ. With the relatively small changes in surface temperature over the period, compared to the surface-to-thermocline temperature difference in the tropics, the square law of the diffusion equation can be approximated as linear. To within this approximation, a two-slab diffusion model has the same form as the single slab model. A simple rearrangement shows that this is equivalent to the original model but with a slightly modified flux and a slightly increased feedback parameter:

C * d/dt(Ts) + κ*(Ts - Td) = F - λ*Ts

C * d/dt(Ts) = F - λ*Ts - κ*(Ts - Td)

C * d/dt(Ts) = F + κ*Td - Ts*(λ + κ)

This does not have a notable effect on the scaling of the volcanic forcing herein derived, nor on the presence of the anomalous, post-eruption warming effect, which is seen to continue beyond the period of the AOD disruption and the tropical climate response.
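The rearrangement can be machine-checked with a symbolic algebra package; the following verifies only the algebra above, with symbol names mirroring the equations.

```python
import sympy as sp

C, kappa, lam, F, Ts, Td = sp.symbols('C kappa lambda F T_s T_d')

two_slab = (F - lam*Ts - kappa*(Ts - Td)) / C   # d/dt(Ts), two-slab model
F_eff = F + kappa*Td                            # slightly modified forcing
lam_eff = lam + kappa                           # increased effective feedback
single_slab = (F_eff - lam_eff*Ts) / C          # single-slab form

print(sp.simplify(two_slab - single_slab))      # -> 0: the forms coincide
```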
Large, old trees are in rapid decline and need to be protected, says a new study in the journal Science. Large tree species play an important role in the healthy functioning of ecosystems all over the globe, and losing them puts many other plant and animal species at risk, suggests the study.

Professor David Lindenmayer from the Australian National University, Canberra, looked at ecosystems from many parts of the world which are home to large and old tree species. He found that these species are particularly vulnerable to human activities such as logging and land clearance, leading to their decline in ecosystems at all latitudes. For example, over 95% of California's majestic coastal redwoods, which are among the tallest trees in the world, have been lost to logging and forest clearing.

"Just as large-bodied animals such as elephants, tigers, and cetaceans have declined drastically in many parts of the world, a growing body of evidence suggests that large old trees could be equally imperilled," says Professor Lindenmayer. "Targeted research is urgently needed to better understand the key threats to their existence and to devise strategies to counter them. Without such initiatives, these iconic organisms and the many species dependent on them could be greatly diminished or lost altogether."

Large, old tree species provide many unique and specialist functions in ecosystems that simply cannot be fulfilled by younger and smaller individuals. The Mountain Ash tree (Eucalyptus regnans) in mainland Australia has a unique role in forests as home to more than 40 species of animals.

Lindenmayer suggests that it is mainly human activities that are responsible for the loss of these species: "In agricultural landscapes, for example, chronic livestock over-grazing, excessive nutrients from fertilizers, and deliberate removal for firewood and land clearing combine to severely reduce large old trees," he says. "If we are to ensure the perpetual supply of large old trees, policies and management practices must be put in place that intentionally grow such trees and reduce their mortality rates."
Four major components are utilized by the physician to establish the patient's diagnosis: specific measurements of lung function, a detailed patient history, a physical examination, and allergy testing.

Spirometry and peak flow measurement are two ways of assessing lung function. Spirometry is a computerized measurement of lung function performed in the office, often taken before and after receiving medication; the percentage change in lung function is a factor in making the diagnosis of asthma. Peak flows are measured both in the office and, using a peak flow meter, in the home setting by the patient.

A detailed patient history is reviewed by the physician and an examination performed. Appropriate testing will be ordered and the results reviewed with the patient and family. Nursing staff will review a detailed written treatment plan that outlines proper use of medication and devices, avoidance techniques, and follow-up care. The treatment plan will also be forwarded to your primary care doctor.
Claudia Jones, intellectual genius and staunch activist against racist and gender oppression, founded two of black Britain's most important institutions: she launched the first black newspaper, the West Indian Gazette and Afro-Asian Times, and was a founding member of the Notting Hill Carnival. This book makes accessible and brings to wider attention the words of an often overlooked 20th-century political and cultural activist who tirelessly campaigned, wrote, spoke out, organized, edited and published autobiographical writings on human rights and peace struggles related to gender, race and class.

"Claudia Jones was an iconic figure who inspired a generation of black activists and deserves to be much more widely known. This important book is a fitting memorial." - Diane Abbott, MP, Westminster, London.
A Short Biographical Dictionary of English Literature: Vaughan, Henry

Vaughan, Henry (1622-1695). -- Poet, b. in the parish of Llansaintffraed, Brecknock, and as a native of the land of the ancient Silures, called himself "Silurist." He was at Jesus Coll., Oxf., studied law in London, but finally settled as a physician at Brecon and Newton-by-Usk. In his youth he was a decided Royalist and, along with his twin brother Thomas, was imprisoned. His first book was Poems, with the Tenth Satire of Juvenal Englished. It appeared in 1646. Olor Iscanus (the Swan of Usk), a collection of poems and translations, was surreptitiously pub. in 1651. About this time he had a serious illness which led to deep spiritual impressions, and thereafter his writings were almost entirely religious. Silex Scintillans (Sparks from the Flint), his best known work, consists of short poems full of deep religious feeling, fine fancy, and exquisite felicities of expression, mixed with a good deal that is quaint and artificial. It contains "The Retreat," a poem of about 30 lines which manifestly suggested to Wordsworth his Ode on the Intimations of Immortality, and "Beyond the Veil," one of the finest meditative poems in the language. Flores Solitudinis (Flowers of Solitude) and The Mount of Olives are devout meditations in prose. The two brothers were joint authors of Thalia Rediviva: the Pastimes and Diversions of a Country Muse (1678), a collection of translations and original poems.
English to Hindi: what is the meaning of "transl."? It is an abbreviation of "translation" or "translated." Examples of the word in use:

Florence, 1880); BALZANI, Le cronache italiane nel medio evo, also in English transl.

The presence of the relics of St. John hasn't translated into a tourist bonanza in any of these other resting places.

The surprise is that up until now an English-language translation of Grossman's lengthy article has never been published in its entirety.

Alexander von Humboldt – eine hebräische Lebensbeschreibung von Chaim Selig Slonimski (1810-1904), ed. by Kurt-Jürgen Maaß, transl. from the Hebrew by Orna Carmel, 53-54.

Jewish Language Research Website, 2003. www.jewish-languages.org/jewish-malayalam.html; Zacharia, Scaria & Ophira Gamliel, ed. and transl.
So is the main method the primary thread that springs into action when we type "java filename" at the command prompt? Is it a thread that does not need to be invoked in the conventional manner, as in:

Mythread t1 = new Mythread();

The thread that does not need to be elaborated on? Also, I was curious: what convention does one follow to specify the priority level of a thread in general? What exactly does 5 mean, "the most important thread" or "the least important one"?

When you run a Java class, the JVM looks for the main() method and creates a thread for it (called the main thread). It assigns the thread a call stack, and all subsequently created threads are children of this main thread.

Varun Goenka wrote: so the JVM "makes" a "main thread" out of the main method written by us?

Yes, we need someone to start, and that someone is the JVM, which creates a non-daemon main thread for you so that you can execute your code, by executing the main() method. Now, how the JVM does that is beyond my knowledge; I don't have the source code of the JVM executable.

So we can, like, use the sleep method pointlessly, just for fun, even though we aren't using any thread? Make things work slowly to "corroborate" the fact that Java is slow?

Look, Thread#sleep() and "why Java is slow" are two different questions; they are not related to each other. You can move a running thread to the sleeping state by using the sleep() method, and the same is true of the main thread. That doesn't make Java slow.
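A minimal, runnable sketch of the points discussed above, using only the standard java.lang API (the class and variable names are invented for illustration):

public class MainThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread main = Thread.currentThread();
        System.out.println(main.getName());     // prints "main": the thread the JVM created for main()
        System.out.println(main.getPriority()); // prints 5, i.e. Thread.NORM_PRIORITY

        // Priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10).
        // 5 is simply the default middle value, neither "most" nor "least" important.
        Thread worker = new Thread(() -> System.out.println("worker running"));
        worker.setPriority(Thread.MAX_PRIORITY); // a hint to the scheduler, not a guarantee
        worker.start();

        Thread.sleep(100); // puts the *main* thread to sleep; says nothing about Java being slow
        worker.join();     // wait for the child thread to finish
    }
}

So a priority of 5 (Thread.NORM_PRIORITY) is the default, sitting halfway between MIN_PRIORITY (1) and MAX_PRIORITY (10); higher numbers mean more important, but only as a scheduling hint to the JVM and operating system.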
Albinism refers to a group of disorders that are present at birth. It is characterized by a decrease or lack of color in the skin, hair, and eyes.

What is going on in the body?
Albinism refers to a group of genetic defects that cause decreased levels of the pigment melanin, which gives color to skin, hair, and eyes. Low levels of melanin cause very light skin tone and blond-white hair. The eyes might also be affected and have an iris that is dull-gray to blue or brown. Since melanin protects the skin from the sun's ultraviolet radiation, people with albinism are easily sunburned.

What are the causes and risks of the disease?
Albinism is an inherited disorder that occurs in at least four different types. Most of them are inherited in an autosomal recessive manner, which means that a person with the condition has received one abnormal gene from each of his or her parents. The parents of most children with albinism each have one normal and one abnormal gene, and thus have normal melanin production and no symptoms of albinism themselves.

What can be done to prevent the disease?
Albinism is an inherited disease and cannot be prevented.

How is the disease diagnosed?
Albinism is diagnosed using a medical history and complete physical that includes an eye examination.

What are the long-term effects of the disease?
People with albinism have a much higher risk of skin cancer because they lack a protective pigment in the skin.

What are the risks to others?
Albinism is not contagious and poses no risk to others. Because it is inherited, it can be passed from parents to their children at conception.

What are the treatments for the disease?
There is no treatment per se for albinism. People with albinism are advised to avoid excess sun exposure in order to minimize their risk of skin cancer. Large-print books, high-contrast materials, and computers with large letters can help people with visual impairments.

What are the side effects of the treatments?
Rarely, a person may have an allergic reaction to a certain sunscreen lotion.

What happens after treatment for the disease?
Albinism is a lifelong condition that cannot be cured.

How is the disease monitored?
Careful skin examination performed by a healthcare professional should be done periodically to check for skin cancer. Any new or worsening symptoms should be reported to the healthcare professional.
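As a worked illustration of the autosomal recessive pattern described above, the sketch below enumerates the four equally likely allele combinations when both parents are carriers. The gene symbols are generic placeholders for illustration, not the actual albinism genes:

public class PunnettSquare {
    public static void main(String[] args) {
        char[] mother = {'A', 'a'}; // 'A' = normal allele, 'a' = abnormal (recessive) allele
        char[] father = {'A', 'a'};
        int affected = 0, carriers = 0, nonCarriers = 0;
        for (char m : mother) {
            for (char f : father) {
                if (m == 'a' && f == 'a') affected++;      // aa: has albinism
                else if (m == 'a' || f == 'a') carriers++; // Aa: carrier, no symptoms
                else nonCarriers++;                        // AA: non-carrier
            }
        }
        // Prints: affected 1/4, carriers 2/4, non-carriers 1/4
        System.out.printf("affected %d/4, carriers %d/4, non-carriers %d/4%n",
                affected, carriers, nonCarriers);
    }
}

In other words, each child of two carrier parents has a 1-in-4 chance of being affected, which is why the parents themselves usually show no symptoms.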
In less than six months, a nation that criminally charges anyone who speaks about homosexuality will host the Winter Olympics. Sochi, Russia, the subtropical host city for the 2014 winter event, will welcome thousands of athletes from around the world. Stubbs says it is wrong to punish the athletes who have worked their entire lives to make it to the Olympics. Instead, LGBT athletes can hold the host nation accountable by performing. "As an Olympian, I lived my dream. But if you had asked me to make a choice between my sexuality – part of the core of my very being that goes to who I will love – and my love of my sport and the dream I had held since I was a child, I would ask, 'Why me?' 'Why take away my dream?'" Stubbs argues host nations should be held more accountable. Instead of athletes boycotting host nations, she argues the International Olympic Committee should not consider potential host nations that do not embrace equality. Would countries change their laws if they knew they had missed an opportunity because of them? The encouragement for LGBT athletes is reminiscent of the participation of black athletes in the 1936 Summer Olympics in Berlin, when Jesse Owens exposed the Third Reich's ignorance with each of his four gold medals. Some fear gay athletes could face criminal prosecution if they were to, say, kiss their partner after winning a medal. With the world watching, it would be a losing effort for Russia to persecute a foreign athlete. The last thing a host nation wants is a negative political story dominating the Games. With loud, popular support around the world protecting them, gay Olympic athletes will have an opportunity to change the way homosexuality is seen in sports. The world is starting to meet a few of its gay big-time athletes, but we are still waiting for a proud, popular, gay face to step forward. In a place so backward and at a time so eager to embrace, here's hoping gay Olympians listen to Stubbs' advice and compete with unyielding transparency.
All national parks, including Fort Matanzas, exist within the surrounding environment. Far from being islands of preserved land separate from external influence, parks are integrally linked to the overall environment. When the environment as a whole has been perturbed in some way, the effects can often be observed, and are sometimes most apparent, within a park setting. This is because parks are largely free from the modern development that permeates our world. A change in, say, pollution levels in the air or water, the amount of man-made noise, the presence of non-native pests in a forest, or even the number of stars visible in the night sky, is more noticeable in a park than in a more altered setting. It is this characteristic of parks, their relative naturalness compared to almost any other place, that makes them bellwethers for the state of the environment as a whole. They are natural laboratories in which the effects of man's impact on the environment can best be measured.
Officials Detail NCLB Test Flexibility for Students With Disabilities

States can start taking advantage of flexibility under the No Child Left Behind Act for some of their special education students this school year, but they will have to clear several hurdles to do so, the U.S. Department of Education announced May 10. In April, Secretary of Education Margaret Spellings announced that 2 percent of students in special education who have "persistent academic disabilities" could be tested using modified assessments. The result, for some states, is that more of their students who are in special education will be deemed proficient under NCLB standards. The Education Department already allows 1 percent of students with "severe cognitive disabilities" to be counted as proficient even if they take alternative assessments that are below grade level. The additional 2 percent is intended to allow for students who, even with the best instruction, still cannot meet grade-level standards, Secretary Spellings has said. "I believe that this is a smarter, better way to educate our special education students," Ms. Spellings said May 10 in a teleconference with reporters. The short-term option, to be used until the Education Department comes out with final rules in the fall, will allow states to adjust their adequate-yearly-progress, or AYP, goals for the 2005-06 school year. However, to receive the flexibility, states will have to meet several conditions: They must test at least 95 percent of their students with disabilities; put in place appropriate accommodations for students with disabilities; and make available alternative assessments in language arts and mathematics for students with disabilities who are unable to take the regular tests, even with accommodations. Also, the minimum number of students required to be measured for AYP purposes, or "N-size," must be the same for special education students as for students in other subgroups. In addition, states will have to provide details on their plans to improve achievement for students with disabilities. The Education Department plans to allocate $14 million in technical assistance to the states in the next few weeks so that they can start developing plans to create tests for such students, help teachers with instruction, and conduct research. Additional money will be released in the future. Final regulations for the policy are scheduled to be issued by the fall, said Troy R. Justesen, the acting deputy assistant secretary for the Education Department's office of special education and rehabilitative services.

Three Tiers of Accountability

Acting Deputy Secretary of Education Raymond J. Simon said states ultimately could have three tiers of accountability measures: tests for students with severe cognitive impairments, tests for students who with the best instruction still can't meet grade-level standards, and tests for the remaining pupils. Though the department says that research shows that 2 percent of the student population needs the alternative assessments, it makes sense that states will have to show what they're doing for pupils before they can take advantage of the flexibility, Mr. Simon said. "We have to make sure that they're treating their children with disabilities appropriately now," he said. "This groundwork is absolutely fundamental.
It’s something that every state should be doing anyway.” The department’s policy shift on students with disabilities comes in the wake of disagreements between the federal government and the states on how children should be evaluated under the No Child Left Behind law. Connecticut has announced it plans to sue the federal government over the law’s testing mandates. Texas has granted waivers to many of its districts that have not followed federal guidelines in testing special education students. The federal Education Department has fined the state about $444,000 as a result. “We heard from states and from parents and from teachers and principals that they believe there were a number of children in our schools whose needs were not being met under the current structure,” Mr. Simon said. “It was obvious it was time for something to be done. This was done to benefit children.” However, Texas is still an “outlier,” Secretary Spellings said. “We’re in discussions with them right now.”
Today is Remembrance Day, a memorial that is recognised across the globe as a mark of respect for members of the armed forces and the civilians who sacrificed their lives in the many wars that have taken place around the world. The date on which it is held, 11th November, commemorates the ending of the First World War, with hostilities finishing at 'the 11th hour of the 11th day of the 11th month'. The symbol of commemoration is the Poppy, which signifies the flowers that bloomed in the battlefields of Flanders after the First World War. This red flower signifies not only the blood that was spilled, but also the hope that sprang from those battlefields. The Royal British Legion are celebrating their 90th Poppy Appeal this year, with the aim of bettering last year's £35m raised. The Legion's Director of National Events and Fundraising, Russell Thompson, said: "Despite the current economic times, we trust the great British public will show their support for those who have sacrificed on behalf of their country. We call on the nation to give generously and to wear their poppies with pride." The Legion offers financial help to over 100,000 servicemen, servicewomen and their families who have served in the recent wars in Afghanistan and Iraq. Over £1.4 million a week is spent helping those who have served in the Armed Forces to cope and re-adjust to life after their time on the front line.
THE BUZZ: STUDY FINDS JAVA DRINKERS LIVE LONGER
Largest study on the subject finds decaf or regular OK

MILWAUKEE – One of life's simple pleasures just got a little sweeter. After years of waffling research on coffee and health, including some fear that java might raise the risk of heart disease, a huge study finds the opposite: coffee drinkers are a little more likely to live longer. Regular or decaf doesn't matter. The study of 400,000 people is the largest done on the issue, and the results should reassure coffee lovers who think it's a guilty pleasure that may do harm. "Our study suggests that it's really not the case," said lead researcher Neal Freedman of the National Cancer Institute. "There may be a modest benefit of coffee drinking." No one knows why. Coffee contains a thousand things that can affect health, from helpful antioxidants to tiny amounts of substances linked to cancer. The most widely studied ingredient – caffeine – didn't play a role in the study's results. It's not that earlier studies were wrong. There is evidence coffee can raise LDL, or bad cholesterol, and blood pressure, at least short-term, and those in turn can raise the risk of heart disease. Even in the new study, it first seemed that coffee drinkers were more likely to die at any given time. But they also tended to smoke, drink more alcohol, eat more red meat and exercise less than non-coffee-drinkers. Once researchers took those things into account, a pattern emerged: each cup of coffee per day nudged up the chances of living longer. The study was done by the National Institutes of Health and AARP. The results are published in today's New England Journal of Medicine. Careful, though – this doesn't prove coffee makes people live longer, only that the two seem related. Like most studies on health, this one was based on observing people's habits and resulting health. So it can't prove cause and effect. But with so many people, more than a decade of follow-up and enough deaths to compare, "this is probably the best evidence we have" and are likely to get, said Dr. Frank Hu of the Harvard School of Public Health. He had no role in this study.
The planet earth contains different sources of energy that can be used to make our lives simpler. The major drawback of many of these sources is that they are harmful to the environment. However, there are greener alternative energy sources with less environmental impact, and you will find out more about them in this article.

Discover all the different sources of energy in your community. Compare costs, and keep in mind that new legislation exists which will sometimes reward you for using renewable energy sources. You might find that you could save money by switching from an electric furnace to a natural gas one, for instance, or from using municipal water to your own well water.

If you live in a sunny area, you could generate your own energy. Invest in PV cells and have a professional install them on your roof. You should have your electricity needs assessed by a professional to make sure your solar installation will provide enough power for your home.

Set your computer so that it goes to sleep when you are not using it for more than 10-15 minutes at any given time. While most people believe that screensavers save energy, they do not, and they should not be used as an alternative to placing your computer in a sleeping state.

Replace regular light bulbs with Energy Star qualified bulbs. These bulbs last about ten times as long as a traditional incandescent bulb and use approximately 75 percent less energy, saving you about $30 in energy costs during the lifetime of the bulb. They also emit about 75 percent less heat, and are therefore much safer.

Natural sources of energy can be unpredictable, which is why you should always have a back-up plan. Find out more about net-metering plans: in most towns, you will be allowed to hook your system to the main power grid and use it when there is not enough sun or wind for your green energy solution to function properly.

Always have a backup power source for a wind generation system. Your system needs to be able to account for low-wind days. This backup could be another type of renewable source, such as a battery system powered by solar, or a diesel generator. Another option is to have the home plugged into the utility power grid.

Use rainwater to water outdoor plants and shrubs. This water can also be collected and used for kiddie pools and other outdoor water needs. Rain collection buckets are simple to install, and they reduce the amount of city or well water you use each year, saving you money and keeping your yard green.

One of the cheapest and easiest ways to make your home more energy efficient is to replace all of your standard light bulbs with green versions. Not only do such bulbs reduce your energy bill through lower wattage and higher efficiency, but they are also made to last longer, giving you a two-fold return on your investment.

One way to help reduce energy consumption is to use solar panels in your home. Solar energy harnesses power from the sun, which is then used for things like heating water, drying clothes and keeping your home warm during the winter. Solar energy is also pollution-free and helps to lower your carbon footprint and other greenhouse gas emissions.

Another way to reduce energy consumption is to develop an energy savings plan. You should compare your goals with your utility bills to ensure you are staying on track. You can reduce your energy use just by being aware of what you are spending. For instance, reducing your electricity or water usage will get you into the habit of turning off appliances and lights when they are not in use.

Buy Energy Star products. In the typical home, appliances make up about 20 percent of the electricity use. You can purchase products that carry the Energy Star seal and start saving money on your electric bill while using less of the world's power sources. In order to carry the Energy Star seal, an appliance has to run efficiently.

As stated before, many forms of energy can be found on earth that we use. Many are harmful to the environment, but there are some which are not. The information outlined here should have given you a clearer understanding of these green energy sources and how they can be utilized in place of other energy sources.
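To make the light-bulb arithmetic above concrete, here is a small sketch. The wattage, bulb life, and electricity price are assumed round numbers for illustration, not figures from this article, so the printed savings will differ from the $30 quoted above depending on the rate and lifetime you plug in:

public class BulbSavings {
    public static void main(String[] args) {
        double incandescentWatts = 60.0;
        double efficientWatts = incandescentWatts * 0.25; // "approximately 75 percent less energy"
        double bulbLifeHours = 8000;                      // assumed lifetime of the efficient bulb
        double pricePerKwh = 0.12;                        // assumed electricity rate, $/kWh

        double savedKwh = (incandescentWatts - efficientWatts) * bulbLifeHours / 1000.0;
        double savedDollars = savedKwh * pricePerKwh;
        // With these assumptions: 45 W saved * 8000 h = 360 kWh, about $43.
        System.out.printf("Saved over bulb lifetime: %.0f kWh = $%.2f%n", savedKwh, savedDollars);
    }
}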
Newest Navy Research Vessel Is Named Neil Armstrong
Ship will be operated by Woods Hole Oceanographic Institution

FOR IMMEDIATE RELEASE
Media Relations Office
September 25, 2012

Secretary of the Navy Ray Mabus announced that the nation's newest research vessel will be named the R/V Neil Armstrong, after the renowned astronaut and the first man to set foot on the moon. The ship will be operated by the Woods Hole Oceanographic Institution (WHOI). "We are honored," said WHOI President and Director Susan Avery. "Neil Armstrong is an American hero, whose 'small step' provided humanity with a new perspective on our planet. When he stood on the moon and looked back at the Earth, he saw mostly ocean – the last unexplored frontier on Earth. The R/V Neil Armstrong will carry on its namesake's legacy of exploration, enabling the next generation of oceanographic science and discovery." Armstrong was a Navy fighter pilot who flew 78 combat missions during the Korean War, before moving to NASA's predecessor agency as an engineer and test pilot, and later an astronaut and administrator. Despite all his accomplishments, in 2000 Armstrong described himself this way: "I am, and ever will be, a white-socks, pocket-protector, nerdy engineer." The R/V Neil Armstrong is the first oceanographic research vessel named for a space explorer, but the link between space exploration and ocean science is not new. Each of NASA's space shuttles was named for a famous oceanographic vessel, including the space shuttle Atlantis, whose namesake, the WHOI ketch Atlantis, was the first U.S. ship built for ocean research. In May 2010, the U.S. Navy's Office of Naval Research informed WHOI that it had been selected to operate AGOR-27 (Auxiliary General Oceanographic Research), one of two new research vessels, now known as the "Armstrong class," to be built by the U.S. Navy. Both are being constructed by the Dakota Creek Industries shipyard in Anacortes, Wash. "The 238-foot R/V Neil Armstrong will serve a pressing need for a new general-purpose research vessel based on the East Coast of the United States and will be deployed for a wide variety of oceanographic and ocean engineering missions. The R/V Neil Armstrong is also expected to support new initiatives in ocean observing in high latitudes, as well as new efforts to study North Atlantic ecosystems and their sustainability," said WHOI Vice President for Marine Operations Rob Munier. R/V Neil Armstrong will provide a number of enhanced capabilities for scientists working on board. These include advanced over-the-side handling systems and state-of-the-art hull-mounted bottom-mapping and acoustics transducers. These systems were designed to improve the safety of scientific operations on board the R/V Neil Armstrong and to enable the vessel to operate effectively in higher sea states than existing vessels of this size. Of the new ship's name, Bob Frosch, WHOI Life Trustee and Guest Investigator, and NASA's fifth Administrator, said: "From time to time crew and oceanographers can look up at the moon and wink at it in remembrance of their ship's wonderful namesake." The vessel is scheduled to launch in early 2014 and be ready for service in 2015. The Woods Hole Oceanographic Institution is a private, non-profit organization on Cape Cod, Mass., dedicated to marine research, engineering, and higher education.
Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the oceans and their interaction with Earth and humanity, and to communicate an understanding of the ocean’s role in the changing global environment. Originally published: September 25, 2012
A Florida man was placed under arrest after local biologists reported to the police that someone was poaching loggerhead sea turtle eggs from the beach. The officers increased patrols and began monitoring for any illegal activity. Within a few days, the police caught the man taking eggs from a female sea turtle. Loggerhead sea turtles reach sexual maturity at around 17 years of age and migrate thousands of miles to arrive at their breeding sites. They mate at sea, and the adult females return to land to lay eggs. The animals breed every two to four years. In order to find the most suitable place to lay the eggs, the female wanders the beach, usually during the night, and chooses a spot to dig the nest, which is generally up to 20 inches deep. After laying the eggs, the female turtle covers the hole with sand and uses vegetation to camouflage it. Because loggerhead turtles are so careful to hide their nests, the man had to follow the animal and take the eggs just after the unaware creature laid them. After being seized by the police officers, the turtle egg thief was charged with a third-degree felony. The man now faces up to five years in prison and a fine of $5,000. The police discovered no fewer than 107 eggs at the man's house. Of these, 15 were kept by the police as evidence, and the rest were handed over to the biologists. The remaining loggerhead sea turtle eggs will be reburied, and the biologists hope they will hatch later this year. The incubation period takes around 50 to 60 days, which means that the small turtles should emerge from their nest at the beginning of autumn. However, as an evolutionary protective measure, the hatchlings will come out only during nighttime in order to avoid predators. An interesting thing about sea turtle eggs is their thermal sensitivity: warm temperatures produce female hatchlings, and cold temperatures favor male hatchlings. Marine biologists and wildlife organization representatives recommend remaining at a safe distance from wildlife and not engaging in any activities that might disturb their habitats. The loggerhead sea turtle is considered a vulnerable species because beaches, its preferred breeding location, are increasingly invaded by human activity.
Why is the Ecuadorian government proposing to extract oil in an area frequently classified by ecologists as one of the most biodiverse rainforest regions left intact on earth? This documentary was filmed in Sani Isla and Ecuador's capital city, Quito. It gives voice to an indigenous community in the Ecuadorian Amazon. To break the bond with the forest that has sustained their people for generations would be the death of their culture and community. Their resolve is tested in the face of corruption, bribery and greed, as well as oil companies and the military threatening to take over the land by force. At first glance it might appear that the community is just another victim of big oil's need to feed 'our' collective habit. But a more complex story emerges: China taking over the role of the IMF and World Bank in funding overseas development in return for oil; well-meaning but under-resourced, and ultimately failing, local government and worldwide initiatives; the international community turning a blind eye; blatant denial of indigenous rights; as well as the desires of the community themselves to develop in line with modern expectations. Biologists classify this region as one of the most biodiverse on the planet. To extract what amounts to 8 days' worth of oil (at current rates of world consumption) from what we all know as 'the lungs of the earth' would bring this particular ecosystem to the brink of collapse. In a globalised world of mass consumption run on fossil fuels, could we all play a part in the destruction of this pristine rainforest? If so, 'Where do you draw the line?' In return for putting the film online for free, we ask viewers and supporters to share the film and help us promote it through word of mouth and by sharing links on social media. Please like/share and support.
June Chandler and Selena Skipper
Cullman County Schools, Fairview High School / Alabama Virtual Library

Using the History Reference Center database, students will look for Civil War pictures, charts, graphs, or maps to use in a classroom project.

Content Areas: Social Studies

Alabama Course of Study Alignments and/or Professional Development Standard Alignments:
[SS2010] US10 (10) 14: Describe how the Civil War influenced the United States, including the Anaconda Plan and the major battles of Bull Run, Antietam, Vicksburg, and Gettysburg and Sherman's March to the Sea. [A.1.a., A.1.b., A.1.c., A.1.d., A.1.e., A.1.i., A.1.k.]
Proper fueling procedures are important to keeping oil and gas out of our waterways. Oil and fuel in the water can impact bottom sediment, marine life and shore birds. You are responsible for any environmental damage caused by your fuel spill. So... preventing spills will be beneficial for you and the boating environment! Accidental or not, under Federal law (the Oil Pollution Act and the Clean Water Act), it is illegal to discharge any amount of fuel, oil or other petroleum product into the waters of the United States. By law, any oil or fuel spill that leaves a sheen on the water must be reported to the U.S. Coast Guard National Response Center by calling 1-800-424-8802. Many states require you to contact them as well in case of a spill, so make sure you know which agency to contact in your state. It is also against the law to use detergents, soaps, emulsifying agents or other chemicals to disperse a spill. These products cause the petroleum to sink, creating even greater environmental damage. While a spill may seem like only a small amount, it can permanently contaminate bottom sediments. Anyone who deliberately applies soap to disperse or hide a sheen is subject to criminal penalties and high fines. To increase awareness of the issue, boats 26' and longer are required to post an oil placard (available at marine supply stores) near the engine.

Why Fueling a Boat is Different:
While fueling a boat is a relatively common activity, it can be tricky. This video will help explain the differences between boat and car fuel systems and why extra care must be taken when fueling boats.
In half a dozen states, including California, it is illegal to use a handheld cell phone while driving but legal to talk on a hands-free device. The theory is that it's distracting to hold a phone and drive with one hand. But a large body of research shows that a hands-free phone poses no less danger than a handheld one – that the problem is not your hands but your brain. "It's not that your hands aren't on the wheel," said David Strayer, director of the Applied Cognition Laboratory at the University of Utah and a leading researcher on cell phone safety. "It's that your mind is not on the road." Now Strayer's research has gained a potent ally. Yesterday, the National Safety Council, the nonprofit advocacy group that has pushed for seat belt laws and drunken-driving awareness, called for an all-out ban on using cell phones while driving. "There is a huge misperception with the public that it's OK if they are using a hands-free phone," said Janet Froetscher, the council's president and chief executive. "It's the same challenge we had with seat belts and drunk driving – we've got to get people thinking the same way about cell phones." Lab experiments using simulators, real-world road studies and accident statistics tell the same story: drivers talking on a cell phone are four times as likely to have an accident as drivers who are not. That's the same level of risk posed by a driver who is drunk. Why cell phone use behind the wheel is so risky isn't entirely clear, but studies suggest several factors. No matter what the device, phone conversations appear to take a significant toll on attention and visual-processing skills. It may be that talking on the phone generates mental images that conflict with the spatial processing needed for safe driving. Eye-tracking studies show that while drivers look side to side, cell phone users tend to stare straight ahead. They may also be distracted to the point that their engaged brains no longer process much of the information that falls on their retinas, which leads to slower reaction times and other driving problems. At the University of Utah, Strayer and his colleagues use driving simulators to study the effects of cell phone conversations. A simulator's interior looks similar to that of a Ford Crown Victoria, and a computer allows researchers to control driving conditions. Study participants are asked to drive under a variety of conditions: while talking on a handheld phone or a hands-free one, while chatting with a friend in the next seat, and even after consuming enough alcohol to put them over the legal limit. While in the simulator, drivers are asked to complete simple tasks, such as driving for several miles and finding a particular exit, or navigating streets where they must brake for traffic lights, change lanes and watch for pedestrians. How fast they drive, how well they stay in their lane, and their eye movements are closely monitored. The researchers have also placed electrodes on participants' scalps to gauge how they process information. Similar studies, using brain imaging, have been done at Carnegie Mellon University.
Periods of drought can devastate crops. A lack of rainfall can leave subsistence farmers, who are reliant on a successful harvest, in desperate situations. Coupled with the uncertainty that climate change is now making a reality, this creates an urgent need to consider how to increase the resilience of subsistence farmers. A particular type of insurance - index-based insurance - has evolved to address the needs of smallholders in the regions most at risk from drought. By offering the ability to protect investments against disaster, farmers may be encouraged to invest more in agricultural inputs and new farming technologies. The availability of insurance may also result in lenders becoming more willing to provide finance to farmers, with the risk of crop failure now further detached from the risk of default. Index-based insurance may pave the way for subsistence farmers to access previously unavailable technologies, secure their development and escape the poverty traps which have so far hindered their growth. Under traditional indemnity insurance, pay-outs are based on a client's loss. With index-based insurance, pay-outs occur when an index falls below a predetermined threshold. By using rainfall as this index and setting an appropriate threshold, farmers can take out policies to insulate themselves against the effects of drought. As the pay-out under index-based insurance is determined by an objective index, the need to verify losses through individual farm visits is eliminated. The requirement for verification of loss has previously limited the feasibility of traditional indemnity insurance. The objective nature of the pay-out also means the policy is more resilient to moral hazard; with the pay-out no longer dependent on the crop, farmers remain incentivised to ensure its success in otherwise difficult conditions. As the policy is determined by climate data only, there is no field loss adjustment. In theory, this should result in prompt policy pay-outs, allowing farmers to reinvest the proceeds into establishing next year's crop. Prior to implementing a policy, the climate data needs to be analysed. A lack of reliable weather data in sub-Saharan regions may limit the ability of an insurer to set an appropriate threshold for an index-based policy. The issue of "basis risk" has also been identified as a particular limitation. This is the difference between the loss experienced by a policyholder and the insurance pay-out received. With an index-based policy, a policyholder may receive a pay-out even though their crops have been successful. Conversely, and more concerning, it is possible for a policyholder to experience crop failure despite the threshold for pay-out not being exceeded.

Example: Ethiopian insurance
Following pilot studies, the government of Ethiopia introduced index-based insurance with the aim of supporting the nation's smallholders. The policy was initially offered to around 200,000 farmers; however, this looks set to increase as awareness spreads. In October 2015, farmers in four regions of Ethiopia received their first pay-out under the policy, to cover the loss of the previous year's harvest caused by an El Nino event. Weather shocks can trap farmers in poverty, but the risk of these shocks also limits the willingness of farmers to invest in measures that might increase their productivity and improve their economic status.
The International Finance Corporation found that those insured under the Global Index Insurance Facility generated 16% more in earnings and invested 19% more in their farms when compared to uninsured neighbours. Studies are showing that index-based insurance is having a positive impact. Whilst not designed to protect against every peril, it is able to protect farmers where there is a well-defined environmental hazard.
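A minimal sketch of the threshold pay-out rule described above, using a linear scale between a 'trigger' and an 'exit' level, which is a common contract shape. The rainfall levels and sum insured below are invented for illustration; real contracts are calibrated to local climate data:

public class RainfallIndexPolicy {
    final double triggerMm;  // pay-outs begin once seasonal rainfall falls below this level
    final double exitMm;     // full pay-out at or below this level
    final double sumInsured; // maximum pay-out

    RainfallIndexPolicy(double triggerMm, double exitMm, double sumInsured) {
        this.triggerMm = triggerMm;
        this.exitMm = exitMm;
        this.sumInsured = sumInsured;
    }

    double payout(double observedRainfallMm) {
        if (observedRainfallMm >= triggerMm) return 0.0;      // index not breached: no pay-out
        if (observedRainfallMm <= exitMm) return sumInsured;  // severe drought: full pay-out
        double shortfall = (triggerMm - observedRainfallMm) / (triggerMm - exitMm);
        return shortfall * sumInsured;                        // linear in between
    }

    public static void main(String[] args) {
        RainfallIndexPolicy policy = new RainfallIndexPolicy(300, 100, 500);
        System.out.println(policy.payout(350)); // 0.0   - adequate rain
        System.out.println(policy.payout(200)); // 250.0 - partial drought
        System.out.println(policy.payout(80));  // 500.0 - drought below the exit level
    }
}

Note that the function never inspects the farmer's actual yield. That is exactly why pay-outs can be prompt and cheap to administer, and it is also where basis risk comes from: the pay-out can diverge from the real loss in either direction.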
Intrinsic and extrinsic factors are involved in the skin aging process. Intrinsic aging is a slow, permanent degeneration that affects most of the body, with distinguishable characteristics such as wrinkling of the skin, cherry hemangiomas and seborrheic keratoses. Photoaging or photodamage is the most noticeable effect of extrinsic skin aging, caused by long-term solar UV light exposure. Photodamaged skin is marked by coarse wrinkles, dyspigmentation and telangiectasia, and is associated with malignant tumors. Smoking is also an extrinsic factor in skin aging. The association of tobacco smoking with cardiovascular disease, lung cancer and chronic obstructive pulmonary disease is well documented, and several studies have documented the adverse effect of tobacco smoking on the integumentary system.1 In fact, multiple environmental factors are now associated with facial aging; evidence suggests that smoking 20 cigarettes per day is equivalent in effect to almost 10 years of chronological aging. Therefore, lifestyle recommendations to stop or delay facial skin aging are also very useful in promoting public health.

Epidemiology of Skin Aging
The sallow complexion and markedly wrinkled skin of smokers was first noticed in 1856 during a large series of British insurance examinations. One year later, skin differences between smoking and nonsmoking British Army officers stationed in India were described. In 1965, the skin of 224 female cigarette smokers, ages 35-84, was evaluated and described as pale and thick, with a grayish hue and without local variations in pigmentation. However, because similar skin changes were noticed in nonsmoking women over the age of 70, these alterations were not fully interpreted.2 In 1971, Daniell studied the severity of wrinkles in 1,104 smoking subjects. After adjusting for age and outdoor sun exposure, she noticed that premature wrinkling is an important sign of smoker's skin.3

This content is adapted from an article in GCI Magazine.
Welcome to PS 234. We are a vibrant, community-driven school focused on children working together with educators who are passionate about learning. We believe that all children can learn, and it is our job to provide an environment where children feel enriched, supported and inspired. We have a rigorous curriculum that provides a solid foundation for our students. At the heart of our work is an interdisciplinary study program which helps students use their skills to deepen their understanding of content. Through rich thematic units, students learn how to do inquiry-based research while developing their communication skills in reading, writing and oral expression. Enrichment programs in art, music, library and science often support the classroom thematic studies as well. Our literacy program includes components such as read aloud, shared reading, guided reading, independent reading, book clubs, writer's workshop and word study. All classrooms from Kindergarten through Fifth Grade use TERC Investigations in Number, Data and Space as their core mathematics program. Through inquiry, activities and games, and the use of manipulative materials, children construct mathematical ideas, explain their thinking and practice skills. Because of our community's generosity, we have been able to fund arts education for our school, including instrument instruction in grades 4 & 5, choral instruction in grades K-3 and dance instruction in grade 5. All children in grades 4 and 5 may choose to play a brass, woodwind or percussion instrument with our resident music instructor. In the 5th grade, some students are even introduced to music composition. We also have coaches at recess to teach students how to play cooperatively and manage conflict productively. Finally, PS 234 truly believes in the education of the whole child. In addition to fostering students' academic core, we believe in developing children's social and emotional skills. We use the Responsive Classroom approach, which "empowers educators to create safe, joyful, and engaging learning communities where all students have a sense of belonging and feel significant." All classrooms use a consistent protocol for developing routines, working with others and respecting the learning environment. We begin each school year by developing rules together with students at our annual "rules convention." Leadership opportunities like "5th grade K buddies" help our oldest community members help our youngest. It is so beautiful to see moments like the 5th grade buddies helping the Kindergarteners navigate the cafeteria at the beginning of the school year! We are proud to be an open and caring community where students continue to visit long after they graduate.
Why Do Fish Swim Upside Down?

Diseases can make fish swim as if they are drunk, or leave them unable to hold a normal upright position in the water. The most common such disease in goldfish affects the swim bladder. Goldfish varieties such as orandas, ranchus, ryukins, fantails and moors are among those that can succumb to this condition. Fish sometimes recover from these diseases and sometimes do not.

Fish can hold their position in the water because they are neither denser nor less dense than the water around them. A living fish is neutrally buoyant: it neither floats nor sinks, because its overall density matches that of the water. The density and pressure of water change as we go deeper. Fish possess an organ, called the swim bladder, which helps them swim and dive. The swim bladder takes oxygen from the gills and stores it to generate more buoyancy. It does this automatically; the process is not consciously controlled by the fish.

When a fish dies, however, it is often seen floating upside down, as if it were swimming that way. In a dead fish, the oxygen remains stored in the swim bladder, and because the body is no longer exchanging gases with the environment, the gases that accumulate in the gut during decomposition make the fish float upside down.

If a fish cannot keep its balance or hold its position in the water, the problem is likely the swim bladder. Swim bladder disease can leave a fish unable to stay upright or cause it to swim in a strange, awkward manner. An affected fish may also lie upside down at the bottom of the tank. The swim bladder, which sits below the vertebral column and above the abdominal cavity, maintains the fish's buoyancy and position in the water; its normal inflation and deflation allow the fish to swim properly.
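A toy illustration of the neutral-buoyancy idea above: a body sinks, hovers or rises depending on how its overall density compares with the water's. The density figures are assumed round numbers for illustration, not measured fish values:

public class Buoyancy {
    static String behavior(double bodyDensity, double waterDensity) {
        if (bodyDensity > waterDensity) return "sinks";
        if (bodyDensity < waterDensity) return "rises and floats";
        return "hovers (neutrally buoyant)";
    }

    public static void main(String[] args) {
        double water = 1000.0; // kg per cubic meter, fresh water
        System.out.println(behavior(1050.0, water)); // bladder under-inflated: fish sinks
        System.out.println(behavior(1000.0, water)); // bladder inflated just enough: fish hovers
        System.out.println(behavior(960.0, water));  // over-inflated, or gas from decomposition: floats
    }
}

Inflating the swim bladder increases the fish's volume without adding meaningful mass, lowering its overall density toward that of the water; deflating it does the opposite.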
A NASA rocket didn't get far past liftoff before it exploded and crashed back to earth early on Tuesday evening. See the NASA TV broadcast video via Reuters above. "Launching rockets is an incredibly difficult undertaking, and we learn from each success and each setback. Today's launch attempt will not deter us from our work to expand our already successful capability to launch cargo from American shores to the International Space Station," said William Gerstenmaier, Associate Administrator of NASA's Human Exploration and Operations Directorate, in a statement. Thankfully, the launch from NASA's Wallops Flight Facility in Virginia was unmanned. There have also been no reports of casualties on the ground. The rocket (the Antares rocket carrying the Cygnus cargo spacecraft) was on a resupply mission to the International Space Station. NASA says the crew of the International Space Station is not in danger of running out of food or other critical supplies.
Sneezing in the rain
11 April 2011 | News story
Ngwe Lwin - Myanmar

It may be more common these days to hear doom and gloom stories of biodiversity loss and environmental degradation, but exciting discoveries of new species do happen and give heart to conservationists the world over. While discoveries of new invertebrate or fish species may be relatively frequent, it's not often that a new species of primate is discovered. Ngwe Lwin, a vigilant young Burmese conservationist, was lucky enough to come across a new species of snub-nosed monkey in the Himalayan mountains of Myanmar whilst taking part in primate surveys in early 2010. Hunters reported seeing a monkey that had prominent lips and wide, upturned nostrils—features unlike those of any snub-nosed species previously described. Because of its upturned nose, this new Mae Hka snub-nosed monkey (Rhinopithecus strykeri) has the endearing trait of sneezing when it rains! From interviews with hunters, Ngwe believes that the species is limited to forests of the Maw River area, approximately 270 km2, with an estimated population of 260-330 individuals, low enough for it to be classified as Critically Endangered on IUCN's Red List of Threatened Species. The surveys were being carried out by local NGO the Myanmar Biodiversity and Conservation Association (BANCA), the in-country partner of IUCN Member Fauna & Flora International (FFI), and an international team of primatologists from FFI and the People, Resources and Conservation Foundation. Sadly, this latest addition to the snub-nosed monkey family is already threatened. Logging roads built by Chinese companies intersect the area, and a timber company is building two logging roads close to the species' habitat. The Mae Hka watershed is also subject to one of Asia's largest hydropower development schemes, implemented by the China Power Investment Corporation (CPI). While the snub-nosed monkey's range is not directly affected by flooding, the construction of roads will allow year-round access to the mountains. Early this year, Ngwe documented increased hunting because of the influx of Chinese construction workers and demand for wildlife products. He is now approaching the authorities in Myanmar and China to improve the enforcement of national wildlife protection laws and CITES—the Convention on International Trade in Endangered Species. Nevertheless, there is a potential win-win solution for conservation. Sedimentation caused by logging would reduce the lifespan of the dams and reduce economic revenues from hydroelectric power generation. According to Ngwe, the challenge is to convince the Chinese government to phase out logging and collaborate with CPI to protect the watershed, creating a new protected area through trans-boundary collaboration between China and Myanmar. Ngwe has reported the first success in establishing voluntary hunting restrictions. "After intensive conservation awareness work and meetings, hunters in eight villages agreed to stop shooting the snub-nosed monkey. Myanmar's people need to increase their knowledge of the environment and participate in conservation activities," he says. Ngwe has wanted to work in conservation since leaving university and has decided to dedicate his life to developing a community-managed conservation area and to phasing out logging. After receiving bird watching training and becoming a guide, Ngwe Lwin was recognised as an emerging conservationist and began work in the field of primate conservation.
He has much work to do, but his dreams are already becoming a reality. Ngwe Lwin can be contacted at [email protected]
As Monsanto prepares to unleash its latest genetically modified (GM) corn supercrop, the International Journal of Biological Sciences has revealed the true cost of these crops. The study focused on three GM corn crops -- Mon 863, Mon 810 and NK 603 -- and found that they caused statistically significant rates of kidney and liver malfunction, as well as some heart, adrenal, spleen and blood damage in rats. These crops have been approved for consumption in the U.S. and many countries in Europe without proper research into their effect on human health.

GM technology inserts non-food genes into the DNA of food, sometimes making the crop more resilient to herbicides and other times causing it to produce toxic proteins that act as pesticides themselves. This process changes the structure of the food drastically and presents humans with substances that have never been part of the human or animal diet.

Several countries in Europe, such as Germany and France, have already banned GM crops, including Mon 810. But the U.S. FDA has done us a potentially dangerous disservice by simply taking Monsanto's word that these genetically modified crops are safe and not doing any testing! This 90-day study was just the beginning, and these GM crops must be studied further instead of being immediately available for human consumption.

Tell the FDA to take these genetically modified corn varieties off the shelves until a peer-reviewed, two-year study can determine whether they are safe for human consumption!

A recent study published in the International Journal of Biological Sciences has shown that three genetically modified corn crops produced by Monsanto -- Mon 863, insecticide-producing Mon 810, and Roundup® herbicide-absorbing NK 603 -- caused kidney and liver damage as well as some heart, adrenal, spleen and blood damage in rats. It is extremely concerning that these crops have been put on the market without proper research on their effects on humans, and this study is enough to warrant the complete removal of these products from the market until a proper two-year, peer-reviewed study can be conducted on their effect on human health. Taking Monsanto's word is simply not good enough, and is frankly a great disservice to the American people.

Please read the study published in the International Journal of Biological Sciences, which can be found here: http://www.biolsci.org/v05p0706.htm. Then please act for the safety of the American people and insist on proper research on the safety of genetically modified vegetables.
Bringing up Kari

Answer the following questions.

1. The enclosure in which Kari lived had a thatched roof that lay on thick tree stumps. Examine the illustration of Kari's pavilion on page 8 and say why it was built that way.
Answer: Kari's pavilion was built with a thatched roof resting on thick tree stumps because it had to be very high and would not fall down when Kari bumped against the poles.

2. Did Kari enjoy his morning bath in the river? Give a reason for your answer.
Answer: Yes, Kari enjoyed his morning bath in the river. He lay down on the sand bank and let his friend rub his back, and he would lie in the river water for a long time. He squealed with pleasure when water was rubbed down his back.

3. Finding good twigs for Kari took a long time. Why?
Answer: Finding good twigs for Kari took a long time because his friend had to climb all kinds of trees to get the most delicate and tender twigs. If a twig was mutilated, an elephant would not touch it, so one had to cut the twigs with a very sharp hatchet, which itself took half an hour to sharpen. It was not an easy job.

4. Why did Kari push his friend into the stream?
Answer: Kari pushed his friend into the stream because a boy was lying flat on the bottom of the river. Kari wanted his friend to save the boy's life, so he pushed him into the stream.

5. Kari was like a baby. What are the main points of comparison?
Answer: Kari was like a baby because he had to be trained to be good, just like a baby. He had to be taught when to sit down, when to walk, when to go fast and when to go slow. When he was naughty, he needed to be scolded; if he was not, he would do more mischief.

6. Kari helped himself to all the bananas in the house without anyone noticing it. How did he do it?
Answer: Kari stole the bananas from the table near the window in the dining room. He put his trunk through the window, very much like a snake, and disappeared with all the bananas without anyone noticing it.

7. Kari learnt the commands to sit and to walk. What were the instructions for each command?
Answer: When his friend pulled his ear and said 'Dhat', Kari would sit down, and when he pulled his trunk forward and said 'Mali', Kari would walk.

8. What is "the master call"? Why is it the most important signal for an elephant to learn?
Answer: The master call is a strange hissing, howling sound, as if a snake and a tiger were fighting each other. It is the most important signal for an elephant to learn because whenever the master is in trouble, one master call will bring the elephant to him.
The reference to "salt losing its savor" recorded in Matthew and Luke occurs as part of the Sermon on the Mount (or Plain). Another reference to salt losing its quality is found in Mark 9:50 and was most likely given at another time. The rationale for seeing Mark's account as given at a different time relates to the nature of Jesus' teaching ministry: often the message given was the same but the places changed, as with the "Kingdom is at Hand" proclamation. Of course, many of the accounts recorded in the Synoptic Gospels are parallel, given from another perspective when they are not in exact agreement, but not all of the sayings of Jesus were given only once, since not all of the disciples were with Him on every occasion, and others (who would become part of the 500 who witnessed His resurrected person) needed to hear the same message in different towns. Newspapers and other media did not exist, so it should not be surprising that the same teachings were repeated at different times and places.

The Sermon on the Mount starts as a description of the character of Jesus' disciples (see Lk. 6.20a). Here I reproduce Mt. 5.1-12, since this section defines "the salt of the earth":

"Blessed are the poor in spirit, for the kingdom of heaven belongs to them.
"Blessed are those who mourn, for they will be comforted.
"Blessed are the meek, for they will inherit the earth.
"Blessed are those who hunger and thirst for righteousness, for they will be satisfied.
"Blessed are the merciful, for they will be shown mercy.
"Blessed are the pure in heart, for they will see God.
"Blessed are the peacemakers, for they will be called the children of God.
"Blessed are those who are persecuted for righteousness, for the kingdom of heaven belongs to them.
"Blessed are you when people insult you and persecute you and say all kinds of evil things about you falsely on account of me. Rejoice and be glad because your reward is great in heaven, for they persecuted the prophets before you in the same way.

Seeing these traits, it is easy to see exactly what "the salt of the earth" is, and, conversely, what losing its "flavor" (or savor, quality) means.

V. 13: "You are the salt of the earth. But if salt loses its flavor, how can it be made salty again? It is no longer good for anything except to be thrown out and trampled on by people."

A note about the word "moranthē" (from moraino), translated as "loses its flavor" in both Matthew's and Luke's accounts of the Sermon on the Mount: in this instance, as I regard the passage, it is a wrong translation. It should read "becomes foolish", for these reasons:

1. Translating the word as "loses its flavor" ("loses its saltiness") comes from Mark's account, which, as I have previously explained, was most likely given at a different time than the Matthew and Luke sections. The related content in Mark clearly shows this is the case.

2. In Matthew and Luke, Jesus is already using a figure of speech in terming His disciples "salt"; why would He use a term such as "moraino", which clearly means "to make foolish", as another figure of speech within a figure of speech? No, in this instance Jesus is clarifying what He means: that the disciples should not turn to folly and take on the opposite of the traits He had just described in verses 2-12 of Matthew chapter 5.

3. Jesus was speaking to His disciples, to whom He explained figures of speech when they asked. In Lk. 14.35 Jesus warns: "The one who has ears to hear had better listen!" He wanted the disciples to understand the message to them clearly, so He used "become foolish".

This is how both Matthew 5.13 and Luke 14.34 should be translated, with "become foolish":

"You are the salt of the earth. But if salt becomes foolish, how can it be made salty again? It is no longer good for anything except to be thrown out and trampled on by people." (v. 13)

I. Howard Marshall, in his commentary on Luke, notes no attestation of moraino other than "to make folly, become foolish", but still thinks Luke 14.34 should read "lose its saltiness" (I respectfully disagree, for the reasons cited above). No matter how one translates the word, one thing should be clear: the meaning of "losing its flavor" is "to become foolish."