As a parent, there is no matter of greater importance than the health and happiness of your children. Many of our calendars are filled with doctor's appointments for check-ups and vaccinations, especially in their younger years. However, in the rush to ensure that your child receives the best in formula and medical care, we often forget to schedule pediatric dental care. There is a common misconception that our children are too young for dental care, when the truth is that the first years of your child's life are among the most crucial to a healthy smile. When it comes to the best dental care for your child, trust the pediatric dental care experts at your local Garden Grove pedodontist, the Children's Dental Group, to help your children maintain healthy smiles for life.

Pediatric dental hygiene begins the moment your child is brought into this world. Dental care is about much more than teeth; it starts with the healthy gums your children's teeth will grow into. As a parent, you can begin caring for their gums simply by wiping the gum line clean after feeding with a wet piece of gauze or a clean, damp washcloth to remove any leftover food. Plaque build-up can begin only 20 minutes after eating, so it is important to do this between feedings to ensure the gum line remains healthy.

The moment your child grows their first tooth, normally at around six months of age, your son or daughter is ready for their first appointment with your trusted Garden Grove pedodontist for a pediatric dental examination. At this first exam, the doctor can identify any hereditary conditions and begin working with you and your child on a lifetime of dental health care, giving your little one a beautiful smile to grow into. At six months of age you can also begin brushing your child's teeth with an appropriately sized toothbrush and tap water. Most children are able to spit out fluids properly at age two, and at that age your child can begin brushing with a pea-sized amount of toothpaste under close parental supervision. Children also require fluoride treatments, but such care should always be given under the close instruction of a pediatric dentist, and it can easily be taken care of at our kid-friendly offices at the Children's Dental Group.

Our practice is uniquely geared to the comfort, care, and education of children, unlike any other you will find. At the Children's Dental Group, your child can receive professional care and instruction for a beautiful smile that lasts all their lives.
The Social Support Approach to Addictions Recovery: Recovery Support Groups

Social support groups (or simply "support groups") refer to groups of people who meet to share their common problems and experiences. Support groups are not the same as therapy groups. Trained professionals facilitate therapy groups with a specific therapeutic purpose. In contrast, support groups are led by non-professional volunteers. With respect to addictions recovery, support groups can be divided into two basic types:
1) Self-empowering support groups: These groups believe in the power of individuals to heal themselves. They are not as well known as the second type.
2) 12-step support groups: These groups typically end in the word "anonymous." The most well-known group is Alcoholics Anonymous (AA). These support groups believe in the powerlessness of individuals to heal themselves. Instead, they attribute this power to a supernatural being, presence, or force called a "higher power." This higher power is often referred to as God.
First, we will review several self-empowering support groups. In the following section, on the Spiritual Approaches to Addictions Recovery, we will review and discuss 12-step groups.

Self-empowering support groups and 12-step support groups are in many ways complete opposites. 12-step support groups emphasize individual powerlessness over addiction, while encouraging a belief in a power greater than oneself. This "higher power" is believed to be capable of "restoring sanity." Many, if not most, people interpret the term "higher power" to mean God. Conversely, self-empowering support groups emphasize the power of each individual to triumph over their difficulties, including addiction. These groups stress the importance of personal responsibility and ownership of both the problem and the solution. A belief in a God or higher power is neither encouraged nor discouraged.

These differences should not be minimized. Self-empowering groups promote self-reliance and self-empowerment as the solution to addiction problems. 12-step groups consider over-reliance on self to be the primary source of problems for addicted persons, which often manifests as an overly grandiose sense of self-importance. These differences form the basis for a great deal of unnecessary controversy. Each person's path to addiction is different. Therefore, it should come as no surprise that each person's road to recovery will be different as well. More information about the conflict between these two approaches can be found in the section entitled Conflict between 12-Step Anonymous Groups and Science.

Self-empowering support groups fill a crucial need for support groups that are non-religious. This has become particularly important given recent court decisions about the illegality of court-mandated 12-step attendance. Historically, judges throughout the United States have made court-mandated attendance at 12-step groups a condition of a person's reduced sentencing. Other officers of the United States courts, such as probation and parole officers, have similarly required attendance at 12-step meetings. The First Amendment of the United States Constitution reads, "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof." Beginning in 1996, five US Federal Circuit Courts of Appeals ruled that AA and other 12-step groups are religious enough that the government or its agents may not require someone to attend them.
The court can require attendance at a recovery-oriented support group; it just may not specify a religious one. Therefore, self-empowering recovery groups offer an important alternative for people who wish to avoid prison but who also wish to avoid participation in a religious group. It appears it will take some time before the legal system fully implements these rulings. The rulings were made in the 2nd, 3rd, 7th, 8th and 9th Circuits and apply in 25 states. By precedent, the rulings presumably apply in all states. Several state supreme courts have made similar rulings. The US Supreme Court declined to hear an appeal of the 2nd Circuit ruling. The court's inaction suggests that these rulings will stand and not be overturned later by a Supreme Court ruling. If the courts fully implement these rulings, it could prevent the purchase of AA publications with government funds.

A lack of knowledge of these rulings can have severe consequences. In a September 2007 ruling in the 9th Circuit, the court found that a parole officer could be held personally liable for the premature death of a Buddhist parolee who was sent back to prison for refusing to attend 12-step oriented treatment. The parolee died while serving a prison term that he should not have been serving.

12-step support groups are widely available and undoubtedly effective for some. Still, many people are dissatisfied or uncomfortable with these groups. The self-empowering support groups are less well known. Furthermore, they may not be available in all communities. Self-empowering groups may be just as effective as 12-step groups. As we have emphasized throughout, recovery is about matching the person to the recovery approach. This requires a diverse range of recovery choices, and self-empowering approaches have certainly contributed to this diversity. Although self-empowering support groups may not be available in many locations, most are quite accessible via the Internet. Some are not directly applicable to activity addictions, but people can modify them as they see fit.
Important Functions of Advertising Agencies

The objective of an advertising agency is to see that its client's advertisements lead to greater profits in the long run. Therefore, an advertising agency needs to perform several functions towards achieving this objective. The size of an advertising agency has a direct bearing on the variety of services that can be rendered to its clients. Generally, bigger agencies perform more varied services than medium and small-size agencies. The functions are listed and explained below.

1. Advertising Plan
An advertising agency either prepares or helps in preparing advertising plans and programmes for its clients. Preparing an advertising plan needs concerted effort and investigative information. In performing this function, the agency should have full information about the product. This may pertain to:
- the product's positive aspects,
- its past record,
- its position in the competitive market, and
- competitors' negative aspects, strengths and weaknesses.
The advertising agency should assess the present market conditions and the firm's distribution methods. A thorough knowledge of markets (consumers) is also very important. Information on what people buy, why they buy it, where they buy, how they buy, how frequently they buy, etc., is very important and useful. An advertising agency may be required to conduct research to obtain such information. Matching the advertising theme with the product positioning strategy is another important task. Since an advertising agency knows the character of each advertising medium, it can suggest a suitable media mix to its client. Knowledge of the target market and of its media habits and exposure is required for this purpose.

2. Creation and Execution
An advertising plan prepared by the advertising agency will be sent to the advertiser for approval. Once approved, its execution is normally assigned to the agency. The agency enters into contracts with the suitable media, and the stage is set for creating an effective advertisement to suit the advertising media. Copy is written, layouts are made, illustrations are drawn or photographed, commercials are produced, and advertising messages are prepared. Billing for the space used is also done.

3. Coordination
Coordination is another important function of an advertising agency. It has to ensure proper coordination between the client, the sales force and the distribution network to ensure the long-run success of the advertising programme. The goal of the advertising programme must be to assist the efforts of salespersons, distributors and retailers to maximize sales for the client.

4. Special Services
Many agencies also render special services in such areas as market research, publicity, and preparation of product literature. Research may enable them to make stronger presentations to their clients. It may also help the copy and art personnel to create better advertisements for their clients.

5. Mechanical Production
The function of this department is to transform copy, illustrations and layout into a satisfactory printed advertisement. Obviously, this department interacts closely with the copy and art directors.

6. Traffic
In an advertising agency, the term traffic refers to scheduling and control. This department sets up a work schedule and a routing sequence for each advertisement, and then supervises its progress through the various stages in the agency. Once an advertisement is prepared, it is forwarded to the media which will carry it.
It can happen only after the copy, illustration, mechanical production and client's approval are on schedule. Where there is no separate traffic department in an advertising agency, the duty is assigned to the production manager or the account executive.

7. Accounting
The common assignments of the accounting department of an agency include checking the appearance of advertisements in the media, checking media invoices against release orders, paying media bills, billing clients and collecting from them, and looking after such matters as records, book-keeping and other office routines.

8. Public Relations
The fundamental objective of this department is to build and maintain goodwill with all cross-sections of the public. The tools used in communicating with the public are corporate advertising and publicity. The main job of this department is to build stronger relations with clients and the various sections of the public — customers, employees, middlemen and shareholders.
All humans can be grouped into ABO and Rh+/- blood groups (at a minimum). Is there any advantage at all to one group or the other? This article hints that there are some pathogens that display a ...

We all know that human blood groups are of 4 types, with negative and positive variants of each type. The Wikipedia article also states so. But according to this forum thread there are some other types too: ...

How did the red blood cell in humans come to lose its nucleus (and other organelles)? Does the bone marrow just not put the nucleus in, or is it stripped out at some stage in the construction of the ...

I want to know whether the $+ve$ and $-ve$ blood groups of a couple could be a cause of miscarriage in pregnancy.

I do combat sports, and in these sports there are a lot of hits that can cause bruising. I've found that, over time, physical conditioning can reduce and/or eliminate the bruising to the point where ...

If blood loss necessitates immediate cell division to replace lost cells, does the increase in cell division correlate to shortening of telomeres? Does it further cause the Hayflick Limit to be ...
Mass Produced Electric Cars? Sooner Than You Think

WASHINGTON – The still unresolved issue that will determine if and when there will be real mass demand for electric vehicles, EVs, is how to design and manufacture cheaper, lighter batteries with a higher energy reservoir, and therefore capable of traveling longer distances on one electric charge. The optimists tell us that we are getting there. They cite significant technological innovations and dramatic cost reductions already achieved in the past few years. All true. Batteries are cheaper. EVs now can travel farther. And the optimists also tell us that new collaborative efforts now underway may help expedite additional progress in battery design and effectiveness.

Cheaper batteries, coming soon

Here is a good example. "Cheaper, more powerful electric car batteries are on the horizon." This headline appeared on ScienceDaily on 9 August 2016. The story is about a new joint effort linking the U.S. Department of Energy, several U.S. academic institutions and the private sector, under the leadership of a Binghamton University expert.

"The White House," ScienceDaily wrote, "recently announced the creation of the Battery500 Consortium, a multidisciplinary group led by the U.S. Department of Energy (DOE), Pacific Northwest National Laboratory (PNNL) working to reduce the cost of vehicle battery technologies. The Battery500 Consortium will receive an award of up to $10 million per year for five years to drive progress on DOE's goal of reducing the cost of vehicle battery technologies."

"[Assuming success, this effort] will result in a significantly smaller, lighter weight, less expensive battery pack (below $100/kWh) and more affordable electric vehicles. M. Stanley Whittingham, distinguished professor of chemistry at Binghamton University, will lead his Energy Storage team in the charge."

"We hope to extract as much energy as possible while, at the same time, producing a battery that is smaller and cheaper to produce," said Whittingham. "This consortium includes some of the brightest minds in the field, and I look forward to working with them to create lithium batteries that will power future electric vehicles more affordably."

According to the ScienceDaily story, other Battery500 Consortium members include:
• Pacific Northwest National Laboratory
• Brookhaven National Laboratory
• Idaho National Laboratory
• SLAC National Accelerator Laboratory
• Stanford University
• University of California, San Diego
• University of Texas at Austin
• University of Washington
• IBM (advisory board member)
• Tesla Motors, Inc. (advisory board member)

Well, is this an indication that we are on the verge of a major breakthrough when it comes to the most critical component of future generation EVs? Who knows, really. Still, if I were the CEO of a major oil company, I would feel very nervous. Never mind OPEC and its mixed signals regarding its willingness and ability to freeze or cut production in order to stabilize global oil prices. Never mind the ongoing tensions between political rivals Saudi Arabia and Iran and their potential impact on oil markets.

Oil will become obsolete

The real scary thought is that oil may soon become obsolete. Yes, you got it right: "Oil may soon become obsolete." Of course this will not happen suddenly. And of course there will still be a significant need for many oil-derived products other than gasoline for automobiles. (Think jet fuel, diesel for heavy trucks, oil for plastics and other petrochemical products, and a lot more.)
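Before turning to the oil side of the story, it is worth putting the consortium's sub-$100/kWh target in rough numbers. The sketch below is a back-of-the-envelope illustration only: the 60 kWh pack size and the per-kWh cost scenarios are assumptions chosen for the example, not figures reported in the ScienceDaily story.

```python
# Rough illustration of how battery cost per kWh drives EV pack price.
# The 60 kWh pack size and the $/kWh scenarios are assumptions for
# illustration only; they are not taken from the article.

PACK_KWH = 60  # assumed battery capacity of a mid-size EV

def pack_cost(cost_per_kwh: float, pack_kwh: float = PACK_KWH) -> float:
    """Return the battery pack cost in dollars at a given $/kWh."""
    return cost_per_kwh * pack_kwh

for cost_per_kwh in (300, 200, 100):  # assumed $/kWh scenarios
    print(f"${cost_per_kwh}/kWh -> ${pack_cost(cost_per_kwh):,.0f} per {PACK_KWH} kWh pack")

# Approximate output:
#   $300/kWh -> $18,000 per 60 kWh pack
#   $200/kWh -> $12,000 per 60 kWh pack
#   $100/kWh -> $6,000 per 60 kWh pack
```

On those assumed figures, reaching the $100/kWh goal would cut the cost of the single most expensive EV component by roughly two-thirds, which is the kind of drop that could move EVs from niche product to mass market.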
Still, even granting those other uses, the fact is that on a global scale crude is used mostly to produce the gigantic rivers of oil-derived gasoline that end up in the tanks of hundreds of millions of cars powered by internal combustion engines, tanks that need to be refilled very often with more and more gasoline.

End of the conventional car

If and when cheaper EVs powered by cost-effective new-generation batteries hit the road, there will be a fairly rapid revolution. This will be the end of the conventional car powered by an internal combustion engine. Indeed, an electric charge is much cheaper than filling your tank with gasoline. Much cheaper batteries, assuming some companies manage to manufacture them relatively soon, will lower the price of future electric vehicles while increasing the distance EVs can cover on one charge.

As soon as this happens, there will be a consumer-led revolution. Millions of drivers across the world will quickly switch to EVs because they will finally be affordable, dependable, and much cheaper to operate, not to mention far cleaner than their gasoline-powered counterparts. (By the way: not entirely clean. EVs run on electricity, a zero-emission fuel. However, a significant percentage of electricity in the U.S. and elsewhere is produced by burning coal and natural gas. Which is to say that, if you consider the source of their fuel, although emissions-free on the road, EVs are still not entirely "clean.")

That said, the big, open question for any oil executive is: "How much time do we have left before the whole oil sector collapses due to lack of demand?" It is very clear that this revolutionary transformation brought about by mass-produced EVs will happen. But nobody knows when: 5 years? 10 years? 15 years?

And here is the big problem for the oil industry. In order to properly run their businesses, oil executives must plan ahead. And these plans entail major capital investments now in order to reap significant gains several years down the road, in terms of new oil production coming on line. Indeed, for oil companies to stay profitable, mature wells close to exhaustion need to be replaced by fresh production. And this means investing now, sometimes on a massive scale, in order to secure continuity of future oil production. This is how the industry works. Except that now this traditional approach is no longer a sure bet. Given developments in EV battery technologies, today oil executives know that this cycle of investment, exploitation, new investment and future exploitation will no longer work indefinitely.

The end of oil companies as giant players

If and when EVs become dominant because of technological and cost breakthroughs in battery technology, this will signal the beginning of the end for major oil companies. In the not so distant future, many of them will run the risk of being caught with expensive new projects half completed that all of a sudden are no longer economically viable, on account of collapsing demand for their product, oil, once coveted and now out of fashion. Beyond these contingencies, because of EVs almost all oil companies will have to cut production, concentrating on the cheapest crude, in order to survive in a new energy era characterized by drastically diminished demand for oil and oil products. The weakest players will not be able to make it. They will go under, or they will be bought by bigger companies.

Oil will still be needed

Having said all this, will EVs amount to a final catastrophe for the oil sector? Not entirely.
Let's keep all this in perspective. Even assuming state-of-the-art, cost-effective EVs quickly replace an enormous global fleet of gasoline-powered vehicles, there will still be demand for oil. Heavy trucks and ships will continue to run on oil-derived diesel fuel for many, many years. Likewise, thousands upon thousands of civilian and military airplanes will still rely on jet fuel made from crude oil. Petrochemical and plastics industries across the globe will continue to need oil-derived products. All this is true. However, assuming a fairly rapid switch to EVs, the global demand for oil, now driven largely by demand for oil-derived gasoline, will collapse. All of a sudden, the global oil industry will face gigantic overcapacity: too much oil and too little demand. Only the ultra-lean, low-cost operators with a solid financial base will survive.

Goodbye, Exxon?

It is hard to think of a world in which Exxon Mobil is a mid-sized company confined to producing oil for jet fuel and diesel trucks only, since millions of cars will run on electricity and no longer on gasoline. But we are getting there. And it may happen sooner than we think. Call it the next "oil shock."
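As a footnote, the article's core economic claim, that charging is far cheaper per mile than gasoline, is easy to sanity-check with rough numbers. Every figure in the sketch below is an illustrative assumption (broadly mid-2010s U.S. ballpark values), not data from the piece.

```python
# Back-of-the-envelope per-mile energy cost: gasoline car vs. EV.
# All inputs are assumptions for illustration; none comes from the article.

GAS_PRICE_PER_GALLON = 2.50       # assumed, USD per gallon
GAS_CAR_MPG = 30.0                # assumed fuel economy, miles per gallon

ELECTRICITY_PRICE_PER_KWH = 0.12  # assumed residential rate, USD per kWh
EV_MILES_PER_KWH = 3.5            # assumed EV efficiency

gas_cost_per_mile = GAS_PRICE_PER_GALLON / GAS_CAR_MPG           # ~$0.083
ev_cost_per_mile = ELECTRICITY_PRICE_PER_KWH / EV_MILES_PER_KWH  # ~$0.034

print(f"Gasoline: ~${gas_cost_per_mile:.3f} per mile")
print(f"Electric: ~${ev_cost_per_mile:.3f} per mile")
print(f"Gasoline costs roughly {gas_cost_per_mile / ev_cost_per_mile:.1f}x more per mile")
```

On those assumptions the per-mile energy cost of an EV is well under half that of a comparable gasoline car, which is the gap the author expects to drive a consumer-led switch once purchase prices converge.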
The word “empathy” first appeared in English in 1909 when it was translated by Edward Bradford Titchener from the German Einfühlung, an old concept that had been gaining new meaning and increased relevance from the 1870s onward. While today we often treat “empathy” as a synonym for “sympathy,” if not—and more commonly—as an improvement on it, empathy at the turn of the century was used to describe a unique combination of cognitive effort and bodily feeling thought to characterize aesthetic experience. Such experience was not limited to contemplating works of art, however; for several of its earliest theorists, empathy named our aesthetic experiences of other people. It would seem to some that a radical break had been made between sympathy, seen as a primarily moral (and moralizing) activity, and a more scientific, physico-psychological process for which the human brain was hardwired. Yet the empathy of the later nineteenth and early twentieth centuries may also be seen as sharing key features with sympathy, particularly as the latter was conceptualized by eighteenth-century moral philosophy and Romantic and Victorian aesthetics. Empathy is a hot topic these days, on the lips of cognitive scientists, philosophers, literary critics, and U.S. Presidents alike. But what is it? Consider the opening paragraph of the opening essay of a 2011 collection entitled The Social Neuroscience of Empathy. In “These Things Called Empathy: Eight Related but Distinct Phenomena,” C. Daniel Batson writes: Students of empathy can seem a cantankerous lot. Although they typically agree that empathy is important, they often disagree about why it is important, about what effects it has, about where it comes from, and even about what it is. The term empathy is currently applied to more than a half-dozen phenomena. These phenomena are related to one another, but they are not elements, aspects, facets, or components of a single thing that is empathy, as one might say that an attitude has cognitive, affective, and behavioral components. Rather, each is a conceptually distinct, stand-alone psychological state. Further, each of those states has been called by names other than empathy. Opportunities for disagreement abound. (3, original emphasis) As Batson goes on to suggest, empathy can be difficult to define because it is often invoked “to provide an answer to two quite different questions” (3). The first is this one: how do we know what others think and feel? And the second: how can we explain the impulse to respond to the feelings of others? The first is principally a question of knowledge. It asks how we are able to infer the contents of other minds, or how and to what extent we project those contents from our own. The second, centered on motivation and behavior, is primarily ethical. It seeks to understand as well as “promote prosocial action” (4). The empathy of the moment has accrued a number of unique (and contradictory) meanings and associations, from benevolent, altruistic care to the biochemical and physiological responses of our minds and bodies as we mimic or mirror the feelings of others—as, for example, in the “primitive empathy” of motor mimicry, an automatic or reactive bodily response that (some say) is carried out unselfconsciously, without intention or will (Bavelas et. al.). We might well wonder, then, what relation empathy has to sympathy, a concept with which it is sometimes aligned, sometimes treated synonymously, and sometimes contrasted. 
It might seem intuitively correct to assume a wide distance between the two, if one takes as a starting point such recent titles as “The Empathetic Brain” and “Imitation, Empathy and Mirror Neurons.” Empathy is without a doubt the preferred term for describing brain phenomena that, it would appear, only the most cutting-edge technology has made visible. Yet empathy in the late-nineteenth-century, Darwinian era looks surprisingly like the empathy of today. There was, of course, no talk of mirror neurons then, but the material body and the interworking of its parts had become vital to those whose interest in loosening the grip of morality on our understanding of human experience had directed their attention to muscles and nerves. Indeed, we find family resemblances when we look back farther still. When, in 1987, the psychology professors Nancy Eisenberg and Paul A. Miller distinguished empathy from sympathy—defining empathy as a vicarious “emotional matching” occurring in the apprehension of another’s affective state and sympathy as involving “sorrow or concern for another’s welfare”—they opened their essay with references to David Hume and Immanuel Kant (92). Hume is among the first in a long line of thinkers who stress the emotional origins of moral behavior; Kant stands with those for whom cognition is key. If it is true that “psychologists generally have been less concerned than philosophers with delineating the ontological nature of morality,” the status of the emotions and the thinking mind, as well as the ostensible divide between them, has continued to occupy students of empathy and sympathy since the Scottish Enlightenment (Eisenberg and Miller 91). Empathy is the decidedly younger term, at least in English. Moreover, though “empathy” is favored over “sympathy” in modern discourse (for reasons we will explore), it rose to prominence and accrued new meaning in a field—psychological aesthetics—which for many of us no doubt remains unfamiliar.When the British psychologist Edward Bradford Titchener translated the German Einfühlung into the English “empathy” in 1909, he drew upon a number of recent writings dedicated in part to revising and refining the term’s aesthetic and moral implications. Einfühlung had been used by the German Romantics to describe aesthetic experience, particularly the experience of “feeling into” the natural world, but had become the object of serious philosophical scrutiny only in the later nineteenth century through the work of German philosophers like Robert Vischer and Theodor Lipps. The remainder of this essay considers why the introduction of that new term was important, what it meant, and how it differed from (or was continuous with) existing understandings of sympathy. It argues that while empathy and sympathy may sometimes be conceived differently, and necessarily so, a brief look at the history of these terms reveals that both share common merits. Finally, it suggests that empathy should be thought of less as an improvement over a sympathy understood as old-fashioned, pitying condescension than an innovation in theorizing a relation with which the first philosophers of sympathy were also concerned, that between the thinking mind, emotion, and aesthetic form. Such issues had preoccupied writers in the eighteenth century, when the fields of psychology and aesthetics were in their infancy. Empathy does not supplant a naïve, outmoded sympathy but seeks to answer different questions or to answer old ones in new ways. 
Indeed, empathy, or rather sympathy, had been understood in aesthetic terms at least since Adam Smith, Hume, and their eighteenth-century contemporaries began the rigorous study of moral feeling, into which aesthetic experience fell. The highly physicalized empathy of the early twentieth century in many ways represents a continuation of, rather than a radical break from, a sympathy that in Smith and Hume’s day was not without its own bodily discourse of taste and appetites, or its embroilment in debates concerning the nature of attraction and repulsion. Even the architectural, geometrical emphasis that, as we shall see, characterized empathy for writers like Vernon Lee was not without precedent. In Beauty & Ugliness (1912; first published in two-part essay form in 1897), a work attributed to both Lee and Clementina (“Kit”) Anstruther-Thomson, Lee contends that to articulate properly the aesthetic process one must connect an analytic vocabulary to an affective one: “visible qualities” like “red, blue, tall, long, triangular, [and] square,” she writes, “tell us of no aesthetic peculiarities”; “[f]or those we must go to the names of our moods: pleasant, unpleasant, harmonious, jarring, unified, etc.” (271, original emphasis). Lee, following Lipps’s lead, goes on to describe in brilliant detail how we empathize with triangles and squares. But in A Treatise of Human Nature (1739), Hume too had had a good deal to say about the relationship between shape and color, ideation, and emotional response, as when he describes how we “revolve in our mind the ideas of circles, squares, parallelograms, [and] triangles of different sizes and proportions,” or describes a man looking at a kind of color wheel missing a particular shade of blue. “Let all the different shades of that colour, except that single one, be placed before him, descending gradually from the deepest to the lightest,” Hume writes; “it is plain, that he will perceive a blank, where that shade is wanting. . . . Now I ask, whether it is possible for him, from his own imagination, to supply this deficiency, and raise up to himself the idea of that particular shade, though it had never been conveyed to him by his senses?” Hume answers in the affirmative, drawing a parallel between color perception and sympathy with others. Both require imagination for filling in the gaps, yet neither is simply “subjective.” Our sympathetic responses are modified and corrected in the same way as are our judgments of color, line, and size. Both become objective in accordance with general rules. Empathy’s significance to late-nineteenth- and early-twentieth-century theories of the visual and plastic arts may thus be seen as an extension of (if also distinctive from) that early connection between sympathetic responsiveness, judgment, and aesthetic form. In Empathy, Form, and Space: Problems in German Aesthetics, 1873-1893 (1994), Harry Francis Mallgrave and Eleftherios Ikonomou describe the crucial twenty-year period during which Einfühlung acquired that modern significance as an important background for understanding the aesthetic transformations that would epitomize early-twentieth-century art.
The formal experimentation of Futurism, Cubism, and Neoplasticism, they suggest, “was not without intellectual pedigree,” specifically in the shift away from “the erstwhile philosophical and physiological problem of how we perceive form and space” toward “the fledgling psychological problem of how we come to appreciate or take delight in the characteristics of form and space,” and the “analogous problem of how we might artistically exploit pure form and space as artistic entities in themselves” (1-2). The emphasis on pleasure is important. Though the intellectual tradition they trace leads them all the way back to Kant’s conception of “purposiveness” (Zweckmäßigkeit), which they define as “the sense of internal harmony that we presume to exist in the world” and thus “the heuristic rule or standard by which we relate to the forms of nature and art,” the final decades of the nineteenth century were marked by a profound attention to bodily response (Mallgrave and Ikonomou 6). Translating Einfühlung as “in-feeling,” Mallgrave and Ikonomou are keen to emphasize the physiological basis of, for instance, Robert Vischer’s “muscular” empathy, which links aesthetic appreciation to the rhythmic experiences of the body’s “self-motions,” or his conviction that certain “loud” colors “might actually provoke an auditory response” (23). As Vischer puts it, “[w]e move in and with the forms” (101). Whether carried far away in observing “fleeting clouds,” or mentally attempting to “scale [the] fir tree and reach up within it,” we “caress [form’s] spatial discontinuities”; by moving in this way “in the imagination,” we reproduce the kinetic motion of our internal organs and project it into other, even stationary, objects (101). This experience of a rhythmic continuity between self and other, outside and in, defines empathy in Vischer’s view. By objectifying the self in external, spatial forms, projecting it into and becoming analogous with them, subject merges with object. Self and world unite. Mallgrave and Ikonomou underscore how radical this proposal can seem once we recognize the pervasiveness of empathy in our unconscious, everyday activities. For Vischer’s conception of empathy involves “a pervasive attitude, an openness that we maintain with the world,” which in turn suggests (as Hume had done) that the self is a fiction, a form borne of imagination and maintained “only by force of habit” (Mallgrave and Ikonomou 25). One of empathy’s most pleasurable rewards, then, might be in the letting go (or loosening up) of that fiction. External objects lose their distinctiveness from the self when my feeling and my “mental representation” of a given object “become one” (25). This has important implications for the artist whose attempts at intensifying expressions of form do not attempt to copy nature but reveal the richly affective and energic processes “concealed therein”: art might strive “to objectify the human condition in a sensuous and harmoniously refined form,” translating “the instability of emotional life and the chaotic disorder of nature into a free, beautiful objectivity” (26). As Titchener explained in his Lectures on the Experimental Psychology of the Thought-Processes (1909), an author’s “choice and arrangement of words” could produce “attitudinal feels”: visceral pressures, muscular “tonicity,” and altered breathing and facial expressions felt by authors and readers alike (181). 
This collection of responses he calls “empathy,” yet it is not simply that aesthetic experience leads to physiological effects. For even ordinary, run-of-the-mill thinking and understanding involve similar exertions, in a kind of “motor empathy” in which one “act[s] the feeling out, though as a rule in imaginal and not sensational terms” (185). “Not only do I see” such abstract concepts as “gravity and modesty and pride and courtesy and stateliness,” Titchener explains; “I feel or act them in the mind’s muscles” (21). Such an empathy seems to have come a long way from sympathy as it is commonly understood, especially if by that word we mean something closer to the sentimental pitying often associated with Victorian morality (as we saw at the beginning of this essay, sympathy retains for many contemporary theorists an association with pity that empathy does not). Suzanne Keen reminds us in Empathy and the Novel (2007) that Victorian novelists like George Eliot were explicit in “articulat[ing] a project for the cultivation of the sympathetic imagination,” whereby novel readers “might learn, by extending themselves into the experiences, motives, and emotions of fictional characters, to sympathize with real others in their everyday lives” (38). Sympathy was urged as an urgent ethical response to the increasingly urban, disconnected, and morally uncertain world of full-blown capitalism. For some, though, that possibility came with messages fraught with danger, as from those who worried that novel reading corrupted readers (especially girls) by causing them to think and feel things they ought not, as well as from those who, from a different angle, considered sympathy a poor, overly personal substitute for large-scale social reform. Such concerns were as old as the novel itself, as was the sneaking suspicion that sympathy provided a cover for less than noble desires. As one Madame Riccoboni wryly remarked in 1769, “[o]ne would readily create unfortunates in order to taste the sweetness of feeling sorry for them” (qtd. in Boltanski 101). Yet Keen’s use of the terms “empathy” and “sympathy” more or less interchangeably marks an effort to rescue sympathy from the bad press it received not so much then as now, once empathy, having shed the moralistic overtones that had accrued to sympathy, was judged the more modern and better of the two. As I have written elsewhere, after 1909 (if not before it), sympathy seemed to belong to the Victorians, empathy to us (“Thinking” 418). How this came to pass over the course of the nineteenth century is a story too long and complex to cover fully here. Yet in some ways the shift is easy to explain. As Neil Vickers writes, “in eighteenth-century Britain, ‘proto-psychology’ and ‘proto-aesthetics’ laboured under a common burden”: that of having to “prove their worth in moral terms” (4). The first “psychologists” we might say—admittedly, by stretching the term—were the “moral doctors” described by Karl Philipp Moritz in 1782 as the physicians of heart and head (qtd. in Vickers 5). The shift from moral medicine to morality-free scientific rigor (if it ever, finally, took place) was neither quick nor unambivalent. As Vickers reminds us, Samuel Taylor Coleridge apologized for using the term “psychological” in series of lectures he gave on Shakespeare in 1811-12 (the “patient” was Hamlet). 
Yet serious examination of the mind—studied attention to the relay between perception, cognition, and affective response—was beginning to lead many thinkers toward more neutral, less patently moral explanations for human behavior. And as psychology (along with many other branches of science) became increasingly professionalized as the century progressed, so a new psychological aesthetics developed in tandem with other scientific developments, and, as Carolyn Burdett explains, “physiology and psychology converged” ( “Introduction” 1).By the latter half of the nineteenth century, Burdett argues, evolutionary theory offered new ways “to understand such psycho-physiological phenomena” as reflex actions and spontaneous emotional response, thus “linking even the most seemingly sovereign of human experiences, such as the feeling of love for another person or the love of God, to vestigial instinctive behaviors which had once conferred evolutionary advantage” (1). As emotion fell under the powerful sway of physiology, so too did aesthetic response. If love was in some fundamental way biochemical, perhaps a sensation borne of the nervous system, the same might be true for experiences of the beautiful, including the sympathetic facility—now becoming “empathetic”—to have aesthetic experiences with, and of, others. As Lipps argued in “Empathy, Inner Imitation and Sense-Feelings” (1903), when in empathy with another person I experience “a spatial extension of the ego,” I assume “the place of that figure. I am transported into it. As far as my consciousness is concerned, I am totally identical with it” (qtd. in Jahoda 155). That, he continues, “is aesthetic imitation, and it is at the same time aesthetic Einfühlung” (155). For Lipps, “aesthetic enjoyment is objectivated self-enjoyment” in that it enabled a formal experience of self (Pinotti 94). For Lee, following from Lipps while carving out her own theory of “anthropomorphic aesthetics,” empathy involved emotional memory: physical motions, grown abstract through continual repetition, are sensations that in empathy we feel, thrillingly, to have been revived (Beauty 1). For many such writers, empathy necessarily involved a cognitive component, though how strong, and how dominant, varied. As Burdett notes, Lee and Anstruther-Thomson’s Beauty & Ugliness offered “an empirically-based account of aesthetic experience” in which bodily sensation precedes and leads to emotion: “we fear because we tremble,” not the other way around (“Vernon Lee” 3). But Lee balked at entirely severing morality from aesthetic experience. Inspired, Burdett argues, by a proposition central to Darwin’s 1871 Descent of Man—that “female aesthetic choice” shapes and even dictates (hetero-)sexual selection—Lee was at once receptive to arguments for the primacy of sensation and resistant to any mechanistic understanding of aesthetic response (11). From Vischer she took the idea that we imaginatively experience our own bodies via those external objects into which our emotions are projected; from Lipps’s Spatial Aesthetics and Optical Illusion (1897), she took the notion that the seeming vitality of objects, including the erectness and “balance” of a Doric column, is among those “ideas of movement” and dynamism produced by empathy (Burdett “Vernon Lee” 20, original emphasis). 
Such ideas are “made out of the accumulated and synthesized, ‘abstracted’ memory store of our experiences of sensory motor activity, out of imagined similar movements, and out of an unconscious knowledge of primordial dynamism as such” (Burdett 20). They are, for Lee, “tantamount to life,” for beauty confirms us to ourselves: “forms and shapes,” she believed, “are how we keep feeling that we are alive” (Burdett 20). If by 1932 empathy had become a widely accepted term amongst psychologists, sympathy had in certain circles suffered a decline, at least in popular perception. It is worth remembering, however, that the moral and social theories of Smith and Hume are thick with references to the mental activities, perceptual and imaginative acuities, and ethical conundrums that, in the parlance of our own day, can seem to belong exclusively to empathy as well as the moment. Projection and mirroring, the role of inference, fiction, and the imagination in inhabiting other minds—these were central to the broad eighteenth-century attempt to reconceive how feeling fundamentally affected nearly every aspect of human life. Though their aims and conclusions sometimes differ, Smith’s insistence that our sympathies arise not simply by viewing another’s emotional expression firsthand, but from reflecting upon “the situation which excites it,” isn’t so far from the “in-feeling” that interested Lipps, Lee, and the rest (12). It too involves an experience of form, in what Smith calls the “going along with” others: imaginatively re-creating their mental movements, crafting narrative accounts so as to understand (and simulate) their attitudes, feelings, and thoughts (83). As with Lee’s “abstracted” memories, feelings in Smith’s sympathetic process must first be abstracted—turned into the stuff of story—to be imaginatively passed on and shared. His denial that feelings could pass from one body to the next in the contagious fashion Hume described, and his resistance to the idea that our sympathies arose in response to bodily feeling of any kind, turned sympathy into a mental process. It also made emotion thoroughly social. No man can “think of his own character,” Smith wrote in The Theory of Moral Sentiments (1759), without imagining how he appears to the minds of others; lacking that mental “mirror,” even the self is an “object which he cannot easily see, [and] which naturally he does not look at” (110). Smith illustrates the point by describing a man living in total isolation: on a good day, he might “view his own temper and character with that sort of satisfaction with which we consider a well-contrived machine”; on a bad day, he is but “a very awkward and clumsy contrivance” (192-93). Selfhood is so unthinkable on the social margins that only a minimal subjectivity is possible outside it. A man who cannot “suppose the idea of some other being, who is the natural judge of the person who feels,” is also incapable of having a self (193). Physiological accounts of empathy thus share with Smithian sympathy a crucial insight: that we project our feeling into other forms in order to experience ourselves. Indeed, it is also worth remembering in closing that empathy and sympathy were overlapping terms even for those who initially sought to clarify empathy’s aesthetic dimensions or to distinguish it from the moral sentiments associated with sympathy. 
In his commentary on Doric columns, for instance, Lipps had written that while the arrangement of materials constituted its “technical” creation, only a “combination of aesthetic relations for our imagination constitutes a work of art”: “the essential of the work of art,” he continues, “is an imaginary world unified and self-contained” (qtd. in Lee “Recent” 434). It may be difficult to see how any ordinary morality could function in this “self-contained,” “imaginary world.” Yet in “Recent Aesthetics,” published in 1904 in the Quarterly Review, Vernon Lee cites this same passage, declaring that

[t]his phenomenon of aesthetic “Einfühlung” is . . . analogous to that of moral sympathy. Just as when we “put ourselves in the place” or more vulgarly “in the skin” of a fellow creature, we are, in fact, attributing to him the feelings we should have in similar circumstances, so, in looking at the Doric column . . . we are attributing to the lines and surfaces, to the spatial forms, those dynamic experiences which we should have were we to put our bodies into similar conditions. (434)

As sympathy with “the grief of our neighbours” can constitute “a similar grief in our own experiences,” so too an “aesthetic attribution of our own dynamic modes to visible forms implies the realisation in our consciousness of the various conflicting strains and pressures, of the resistance and the yielding which constitute any given dynamic and volitional experiences of our own” (434). The Doric column’s valiant effort to defy gravity revives in us a sense of the human condition. It is “a little drama we have experienced millions of times” (434). Certainly, one must be wary of overstating the similarities between eighteenth- and nineteenth-century conceptions of sympathy and a fin de siècle empathy bearing the undeniable stamp of its post-Darwinian making. At present, though, the pendulum often swings too far in the opposite direction, making empathy seem inarguably superior to its old-fashioned cousin. In Upheavals of Thought: the Intelligence of Emotions (2003), Martha Nussbaum offers a corrective, asserting the continuing relevance of a sympathy that, unlike empathy, always entails an ethical stance: “a malevolent person who imagines the situation of the other and takes pleasure in her distress may be empathetic,” she writes, “but will surely not be judged sympathetic. Sympathy, by comparison, includes a judgment that the other person’s distress is bad” (302). There are no doubt plenty of good reasons why a recent study citing evidence for the altruistic behavior of rats toward their distressed fellows should refer to such behavior as empathetic rather than sympathetic (the moral question, surely, is number one). But Nussbaum’s distinction may nevertheless give us reason to pause before jettisoning sympathy altogether in explaining more human endeavors. We will do well, as historians and scholars of a long and lengthening nineteenth century, to continue scrutinizing the empathy/sympathy relation as well as its ostensible divide.

HOW TO CITE THIS BRANCH ENTRY (MLA format)

Greiner, Rae. “1909: The Introduction of the Word ‘Empathy’ into English.” BRANCH: Britain, Representation and Nineteenth-Century History. Ed. Dino Franco Felluga. Extension of Romanticism and Victorianism on the Net. Web. [Here, add your last date of access to BRANCH].

Bartal, Inbal Ben-Ami, Jean Decety, and Peggy Mason. “Empathy and Pro-Social Behavior in Rats.” Science 334.6061 (2011): 1427-30. Web. 12 Dec. 2011.

Batson, C. Daniel.
“These Things Called Empathy: Eight Related but Distinct Phenomena.” The Social Neuroscience of Empathy. Ed. Jean Decety and William Ickes. Cambridge: MIT P, 2011. 3-13. Print. Bavelas, Janet Beavin, Alex Black, Charles R. Lemery, and Jennifer Mullett. “Motor Mimicry as Primitive Empathy.” Empathy and its Development. Ed. Nancy Eisenberg and Janet Strayer. Cambridge: Cambridge UP, 1987. 317-38. Print. Boltanski, Luc. Distant Suffering: Morality, Media, and Politics. Cambridge: Cambridge UP, 1999. Print. Burdett, Carolyn. “Introduction: Psychology/Aesthetics in the Nineteenth Century.” 19: Interdisciplinary Studies in the Long Nineteenth Century 12 (2011): 1-6. Web. 7 Nov. 2011. —. “‘The Subjective Inside Us Can Turn into the Objective Outside’: Vernon Lee’s Psychological Aesthetics.” 19: Interdisciplinary Studies in the Long Nineteenth Century 12 (2011): 1-31. Web. 7 Nov. 2011. Eisenberg, Nancy, and Paul A. Miller. “The Relation of Empathy to Prosocial and Related Behaviors.” Psychological Bulletin 101.1 (1987): 91-119. Psycarticles. Web. 4 Jan. 2012. Greiner, Rae. Sympathetic Realism in Nineteenth-Century British Fiction. Baltimore: the Johns Hopkins UP, 2012. Print. —. “Thinking of Me Thinking of You: Sympathy v. Empathy in the Realist Novel.” Victorian Studies 53.3 (2011): 417-26. Print. Hume, David. A Treatise of Human Nature. Ed. David Fate Norton and Mary J. Norton. Oxford: Oxford UP, 2005. Print. Iacoboni, Marco. “Imitation, Empathy and Mirror Neurons.” Annual Review of Psychology 60 (2009): 653-70. Annual Reviews. Web. 15 Sept. 2011. Jahoda, Gustav. “Theodor Lipps and the Shift from ‘Sympathy’ to ‘Empathy.’” Journal of the History of the Behavioral Sciences 41.2 (2005): 151-63. Academic Search Premier. Web. 4 Nov. 2011. Keen, Suzanne. Empathy and the Novel. Oxford: Oxford UP, 2007. Print. Lee, Vernon. “Recent Aesthetics.” Quarterly Review 199.398 (1904): 420-43. Vol. 199 (London: John Murray, 1904). Google Book Search. Web. 3 Jan. 2012. Lee, Vernon, and Clementina Anstruther-Thomson. Beauty & Ugliness and Other Studies in Psychological Aesthetics. London: John Lane, 1912. Print. Mallgrave, Harry Francis, and Eleftherios Ikonomou, eds. Empathy, Form, and Space: Problems in German Aesthetics, 1873-1893. Santa Monica: Getty Center, 1994. Print. —. Introduction. Empathy, Form, and Space: Problems in German Aesthetics, 1873-1893. Mallgrave and Ikonomou 1-85. Nussbaum, Martha. Upheavals of Thought: the Intelligence of the Emotions. Cambridge: Cambridge UP, 2003. Print. Pinotti, Andrea. “Empathy.” Handbook of Phenomenological Aesthetics. Ed. Hans Rainer Sepp and Lester Embree. Heidelberg: Springer, 2010. 93-98. Google Book Search. Web. 8 Jan. 2012. Smith, Adam. The Theory of Moral Sentiments. Glasgow ed. Ed. D.D. Raphael and A.L. Macfie. Indianapolis: Liberty Fund, 1982. Print. Titchener, Edward Bradford. Lectures on the Experimental Psychology of the Thought-Processes. New York: Macmillan, 1909. Google Book Search. Web. 1 Dec. 2011. Vickers, Neil. “Coleridge on ‘Psychology’ and ‘Aesthetics.’”19: Interdisciplinary Studies in the Long Nineteenth Century 12 (2011): 1-14. Web. 7 Nov. 2011. de Vignemont, Frederique, and Tania Singer. “The Empathetic Brain: How, When and Why?” Trends in Cognitive Sciences 10.10 (2006): 435-41. Science Direct. Web. 15 Sept. 2011. Vischer, Robert. “On the Optical Sense of Form: a Contribution to Aesthetics.” Mallgrave and Ikonomou 89-123. 
For instance, see de Vignemont and Singer’s “The Empathetic Brain: How, When and Why?” and Iacoboni’s “Imitation, Empathy, and Mirror Neurons.” Though no one was talking of “mirror neurons” in the late nineteenth century, neurons had been named in 1891 by Heinrich Wilhelm Gottfried von Waldeyer-Hartz; three years later, Franz Nissl successfully stained them with dahlia violet. My thanks to the anonymous reader of this essay who clarified this point. I cover some of this ground in my forthcoming book on sympathy and realism, forthcoming from the Johns Hopkins University Press in 2012. For a more thorough account, Keen’s work is a good place to start. Vickers’s essay is part of a special issue of 19: Interdisciplinary Studies in the Long Nineteenth Century devoted to nineteenth-century psychology and aesthetics. Lee found Lipps’s account of empathy overly abstract: “[o]ne might almost believe that it is the dislike of admitting the participation of the body in the phenomenon of aesthetic Empathy which has impelled Lipps to make aesthetics more and more abstract, a priori, and metaphysical” (Beauty 60). From Lipps’s study “has come,” she writes, “if not the theory, at least the empirical and the logical demonstration of the process to which Professor Lipps has given the convenient but misleading name Einfühlung” (60). As Gardner Murphy wrote that year in An Historical Introduction to Modern Psychology, “the term Einfühlung (‘empathy’) has in fact come into general psychological use” (qtd. in Jahoda 162). According to Jahoda, Lipps treated empathy and sympathy interchangeably except in the case of “negative Einfühlung,” which Jahoda calls “rather an elusive concept” (158). See Bartal et.al., “Empathy and Pro-Social Behavior in Rats.”
Over the years, the dugong has inspired many a mythical tale, but now it’s in danger of becoming nothing but a story of the past. FOR THOUSANDS OF YEARS, sailors reported ocean sightings of beautiful fish-tailed women frolicking in the waves, singing sweet music, and luring men to a briny demise. Ideas about the creatures have been perpetuated through the centuries -- in everything from Homer’s Odyssey to Arabian Nights to a number of Disney films -- by great storytellers who’ve captivated audiences around the world with mermaid mythology. The spectacle of young women swimming in mermaid costumes remains a staple of Florida tourist shows. And a mermaid image even appears in the Starbucks logo. No scaly-tailed seductress actually exists, of course. Instead, historians believe that the ancient sailors were enthralled by a strange aquatic creature that, although 10 feet long and 800 pounds, apparently reminded them of a woman. Known as a dugong, the animal exists in isolated colonies scattered throughout the oceans. But it may not exist for long, considering that it’s listed variously as endangered, rare, depleted, and extinct. It’s one of the planet’s least-understood creatures. Fossils date the dugong’s origins back to 50 million years ago, but mankind’s knowledge of the animal is practically zero. Aside from the locations dugongs have been sighted at, where they live remains a mystery, as does the exact number of them that are left. Aerial surveys of dugong populations show that their numbers are on a steep downhill slide. Scientists are saying it’s time we start to pay more attention to dugongs, the original mermaids of the seas, before they’re all gone. DUGONGS BELONG TO the scientific order Sirenia, which is named for the sirens of ancient Greek mythology, who tried to lure sailors onto their island with love songs. Sirenia comprises a tiny group of four living species that includes the three varieties of manatees and the dugong, as well as the Steller’s sea cow, which was hunted to extinction in the 1700s. The dugong evolved separately from its cousins, developing a unique tail and a set of tusks, and it is a very picky eater, limiting its diet to specific sea grasses from the ocean bottom in warm, shallow coastal waters. A prehistoric freak of nature, the dugong is the world’s only plant-eating mammal specific to the ocean. It possesses a strange, bulbous snout; little eyes; and what looks like a peculiar smile. Its eyesight is poor, but its hearing is acute. It seems to communicate via odd little chirping sounds. And although dugongs can live to a maximum age of about 70, they reproduce very slowly and tend to swim alone except when feeding or nursing. Dugong means “sea cow” in Tagalog, but the animal is not specific to the Philippines. Dugongs are found in small numbers throughout the tropics and subtropics, with the larger populations living near Australia and the land masses in the Arabian Gulf. In many areas, such as Mauritius, Madagascar, and the Maldives, the sea creature is already considered extinct. “They’re bizarre,” admits Australian scientist Helene Marsh, PhD, professor of Tropical Environmental Studies & Geography at James Cook University, who has studied dugongs for 30 years and is considered to be one of the world’s leading experts on them. “They’re a bit like a manatee … [or] they might be considered similar to a walrus. But, yes, they are very weird. Their closest terrestrial relative is the elephant. 
If you look at their internal anatomy, particularly the way their reproductive system works, they’re very similar.”
<urn:uuid:cd3345a3-9572-4803-875f-aaa485a120d8>
{ "date": "2014-10-25T19:00:45", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119649133.8/warc/CC-MAIN-20141024030049-00258-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9476937055587769, "score": 3.15625, "token_count": 820, "url": "http://hub.aa.com/en/aw/helene-marsh-australia-the-philippines-madagascar" }
“This defines entrepreneur and entrepreneurship – the entrepreneur always searches for change, responds to it, and exploits it as an opportunity.” Peter F. Drucker, Innovation and Entrepreneurship: Practice and Principles

The more I read about Thales of Miletus, the more I believe that he was the ancient version of the Renaissance man. Philosophy is a thinking exercise that usually involves a considerable amount of time. Most of the population of any age or society is occupied with making a living and putting food on the table. Thales of Miletus, possessing an entrepreneurial spirit, cleverly dealt with the issue of time and money. It seems he made a fortune investing in oil-presses before a heavy olive harvest. All of which suggests that for a philosopher and scientist in seventh-century BCE Greece, business skills were a notable asset. Thales' significance as a philosopher centers on methodology. He was the first thinker who tried to find common, underlying principles to account for the natural world, rather than relying on the whims of anthropomorphic gods. He sought to give a naturalistic explanation of observable phenomena that still has relevance in modern scientific exploration. Thales believed that the mind of the world is god, and that god is intermingled in all things, a viewpoint that would shortly emerge simultaneously in a number of world religions. Thales lived in the past, yet his thought process made him universal. He would thrive in any age. “I cannot teach anybody anything. I can only make them think.”
<urn:uuid:101520a0-5317-419d-b058-146a4c1d0e7f>
{ "date": "2017-04-23T23:36:16", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00233-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9650396704673767, "score": 3.125, "token_count": 313, "url": "https://ladybudd.com/2013/04/23/the-entrepreneur-philosopher/" }
For others, it may mean giving up a favorite food for a period of time or not eating food at certain times of the day or year out of respect for various religious holidays. Strictly speaking, fasting is the voluntary abstention from food. While the idea of missing even one meal might put most of us in misery, fasting does have many benefits for the body.

Give your body a rest

We take vacations, we have weekends off from work, we rest our tired bodies through sleep, and we "take a break" to rejuvenate from stress. One thing, though, that we hardly ever do is take a break from food for longer lengths of time. Our digestive system is very busy and hard-working, which requires high amounts of energy; in fact, the digestive system can even drain energy needed for healing, repair and general maintenance of the body. Therefore, it makes sense to give it a vacation once in a while.

An ancient tradition

The art of fasting is an ancient tradition practiced for thousands of years for curing illness of all kinds, rejuvenation, clarity and decision making, cleansing and strengthening. Have you noticed that when you're sick, your appetite diminishes? (Similarly, when animals are ill, they lie down and often don't eat or drink.) Energy goes towards healing our bodies instead of digesting food. Fasting also allows the body's enzyme system to focus on detoxifying and breaking down toxins in the body quickly and efficiently without the job of heavy food digestion. During fasts, toxins are circulated in the body so that our organs can disarm them. Therefore, it's not always wise to detoxify quickly, because a flood of toxins released at once can cause serious distress to the body and do more harm than good.

Effective ways to fast

If you've never fasted before and would like to experience a fast, have no fear. Fasting should be gentle and nurturing and can range from one day to as long as a week. More rigorous fasts, such as a water-only fast, should only be undertaken by those experienced in fasting and detoxification. A gentle fast is a great way to start -- without even having to go hungry. Here are some ideas to get you started:
- Eating a raw food diet of fruits, vegetables, seeds and nuts
- Eating a "mono" diet of one food (for example, a fruit or rice gruel)
- Consuming mineral-rich bone and vegetable broths
- Drinking green smoothies
- Drinking only fresh-pressed vegetable/fruit juices
- Eating salads exclusively
- Eating kichadi (a traditional Indian rice/vegetable dish full of healing herbs and spices)
- Having an early dinner and refraining from food for a 16-hour period before eating breakfast.

Fasting may seem overwhelming or daunting, but if you simply choose one day per week and practice any of the above tips, you'll get used to this healing practice. When fasting, always remember to listen to your body, letting it decide when and how long fasting should last. For those who still have doubts, seeing a Naturopathic Doctor or Holistic Nutritionist may help ease your hesitation and motivate you to get started. Fasting is a message to your body that you're embarking on a new beginning, flushing out the old and bringing in the new. Fasting is the perfect way to introduce new healthy habits and foods into your life. It can give you that jump-start, boost clarity, and help shift things in a positive direction. Make a resolution to give your digestive system a break once in a while. What better way to start a new year?
Rachel Hynd is a Registered Holistic Nutritionist and certified Raw Food Instructor. NaturallySavvy.com is a website that educates people on the benefits of living a natural, organic and green lifestyle. For more information visit www.NaturallySavvy.com (c) 2011, NATURALLY SAVVY DISTRIBUTED BY TRIBUNE MEDIA SERVICES, INC.
<urn:uuid:2d71f307-4f95-4ba9-983c-40f346585348>
{ "date": "2014-08-02T03:25:14", "dump": "CC-MAIN-2014-23", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276250.57/warc/CC-MAIN-20140728011756-00444-ip-10-146-231-18.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9403071999549866, "score": 2.84375, "token_count": 843, "url": "http://www.courant.com/features/green-living/sns-green-effective-fasting-benfits,0,1066277.story" }
Sectors may be only a few degrees wide, marking an isolated obstruction, or they may be so wide as to extend from the direction of deep water to the beach. A narrow green sector may indicate a turning point or the best water across a shoal. The exact significance of each sector must be obtained from the chart. All sector bearings are true bearings in degrees, running clockwise around the light as a center. In figure 9-10, for instance, the bearings of the red sector from the light are 135° to 178°. This sector is defined in Light Lists in terms of bearings from the ship. These bearings are 315° to 358°, the reciprocals of the preceding bearings. The light shown in the diagram would be defined thus: Obscured from land to 315°, red thence to 358°, green thence to 050°, and white thence to land. On either side of the line of demarcation between colored and white sectors, there is always a small sector whose color is doubtful because the edges of the sector cannot be cut off sharply in color. Moreover, under some atmospheric conditions a white light itself may have a reddish appearance. Consequently, light sectors must not be relied upon entirely; position must be verified repeatedly by bearings taken on the light itself or on other fixed objects. When a light is cut off by adjoining land, the arc of visibility may vary with a ship's distance from the light. If the intervening land is sloping, for example, the light may be visible over a wider arc from a far-off ship than from one close inshore.

Figure 9-10. Light sectors.

Buoys are perhaps the most numerous aids to navigation, and they come in many shapes and sizes. These floating objects, heavily anchored to the bottom, are intended to convey information by their shapes or colors, or by the characteristics of a visible or audible signal, or by a combination of two or more of such features. Large automatic navigational buoys (LANBYs) are major aids to navigation. They provide light, sound signal, and radio beacon services, much the same as a lightship. Some LANBYs today are replacing lightships in U.S. waters. The LANBY is an all-steel, disc-shaped hull, 40 feet in diameter. The light, sound signal, and radio beacon are located on the mast. Although buoys are valuable aids to navigation, as was stated for sector lights, they must never be depended upon exclusively. Buoys frequently move during heavy weather, or they may be set adrift when run down by passing vessels. Whistles, bells, and gongs actuated by the sea's motion may fail to function in smooth water, and lights on lighted buoys may burn out.

MARITIME BUOYAGE SYSTEM

Until recently, there were numerous buoyage systems in use around the world. In 1982, most of the maritime nations signed an agreement sponsored by the International Association of Lighthouse Authorities (IALA). This agreement adopted a system known as the IALA Maritime Buoyage System. Two systems were developed because certain basic, long-established international differences precluded adoption of a single system worldwide. Both systems, designated region A and region B, use a combination of cardinal marks and lateral marks plus unique marks for isolated danger, safe-water areas, and special purposes. The cardinal and unique marks are the same in both systems; the lateral marks are the major difference between the two buoy regions. To convey the desired information to the navigator, the IALA system uses buoy shape, color, and, if lighted, the rhythm of the flashes.
Buoys also provide for a pattern of topmarks, small distinctive shapes above the basic aid, to facilitate its identification in the daytime from a distance, or under light conditions when the color might not be easily ascertained. Figure 9-11 shows the international buoyage regions A and B.
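A small, purely illustrative aside (not part of the original training text): the sector-bearing convention described above, with sectors charted as true bearings from the light but stated in the Light List as bearings of the light from the ship, amounts to taking the reciprocal of each bearing. A minimal sketch in Python:

```python
def reciprocal_bearing(bearing_deg: float) -> float:
    """Reciprocal of a true bearing: the same line of position seen from the other end."""
    return (bearing_deg + 180.0) % 360.0

# Red sector of the light in figure 9-10, charted from the light: 135 deg to 178 deg
sector_from_light = (135.0, 178.0)

# Converted to bearings of the light as observed from the ship
sector_from_ship = tuple(reciprocal_bearing(b) for b in sector_from_light)
print(sector_from_ship)  # (315.0, 358.0), matching the Light List description above
```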
<urn:uuid:8c034cb9-121f-44cd-8227-b8390956e344>
{ "date": "2017-08-23T06:17:44", "dump": "CC-MAIN-2017-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117874.26/warc/CC-MAIN-20170823055231-20170823075231-00576.warc.gz", "int_score": 4, "language": "en", "language_score": 0.906492292881012, "score": 3.53125, "token_count": 898, "url": "http://navyaviation.tpub.com/14244/css/Maritime-Buoyage-System-169.htm" }
The Mind and How to Build One August 12, 2010 by Ray Kurzweil At the Singularity Summit in San Francisco at 11:00 am on Saturday, August 14, Ray Kurzweil will present an overview of “arguably the most important project in the history of the human-machine civilization”: to model and reverse-engineer the brain, with the goal of creating intelligent machines to address the grand challenges of humanity. He prepared the following statement on his talk at the conference. What does it mean to understand the brain? Where are we on the roadmap to this goal? What are the effective routes to progress – detailed modeling, theoretical effort, improvement of imaging and computational technologies? What predictions can we make? What are the consequences of materialization of such predictions – social, ethical? I will address these questions and examine some of the most common criticisms of the exponential growth of information technology including criticisms from hardware (“Moore’s Law will not go on forever”), software (“software is stuck in the mud”), the brain (“the brain is too complicated to understand or replicate”), ontology (“software is not capable of thinking or of consciousness”), and promise versus peril (“biotechnology, nanotechnology, and artificial intelligence are too dangerous”). There is now a grand project comprising at least a hundred thousand scientists and engineers working in diverse ways to understand the best example we have of an intelligent process: the human brain. It is arguably the most important project in the history of the human-machine civilization. The goal of the project is to understand precisely how the human brain works, and then to use these revealed algorithms as a basis for creating even more intelligent machines. As we learn the algorithms underlying human intelligence, we will similarly be able to engineer it to vastly extend the powers of our intelligence. Indeed this process is already well under way. There are literally hundreds of tasks and activities that used to be the sole province of human intelligence that can now be conducted by computers usually with greater precision and vastly greater scale. Was it inevitable that a species would evolve that is capable of creating its own evolutionary process in the form of intelligent technology? I will argue that it was. According to my models we are only two decades from fully modeling and simulating the human brain. By the time we finish this reverse-engineering project, we will have computers that are millions of times more powerful than the human brain. These computers will be further amplified by being networked into a vast world wide cloud of computing. The algorithms of intelligence will begin to self-iterate towards ever smarter algorithms. This is how we will address the grand challenges of humanity such as maintaining a healthy environment, providing for the resources for a growing population including energy, food, and water, overcoming disease, vastly extending human longevity, and overcoming poverty. It is only by extending our intelligence with our intelligent technology that we can handle the scale of complexity to address these challenges.
<urn:uuid:1ed2c537-c051-4266-bb1c-100a6aae1f2c>
{ "date": "2015-04-01T01:10:45", "dump": "CC-MAIN-2015-14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131302428.75/warc/CC-MAIN-20150323172142-00270-ip-10-168-14-71.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9394358992576599, "score": 2.65625, "token_count": 616, "url": "http://www.kurzweilai.net/the-mind-and-how-to-build-one/comment-page-1" }
Academic Evaluation & Diagnosis offers a comprehensive approach to evaluating individuals who have specific learning disabilities or dyslexia or are gifted and talented. After the diagnostic evaluation, a prescriptive instructional plan is created and progress can be monitored. These comprehensive psychoeducational evaluations measure cognitive ability and academic achievement for students of all ages, from preschool through adulthood. While the specific diagnosis is important, emphasis is placed on helping parents, teachers, and service providers understand the specific needs of the student. Comprehensive instructional plans are developed with an emphasis on evidence-based practices. Progress monitoring is offered as a way to make decisions using current data. In addition, ongoing support and consultation are provided, as well as cooperative collaboration between parent, school, and specialist to achieve academic success.
<urn:uuid:f29a1cca-9a1d-4150-bff2-ae74963602e1>
{ "date": "2018-05-24T00:08:31", "dump": "CC-MAIN-2018-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865863.76/warc/CC-MAIN-20180523235059-20180524015059-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9393133521080017, "score": 2.53125, "token_count": 151, "url": "https://www.academicdiagnosis.com/index.html" }
Recently, researchers have been looking again into the ways different languages affect how we think. Benjamin Lee Whorf proposed the idea in 1956 in M.I.T.’s Technology Review, and the theory became quite trendy, until closer examination revealed that he had little research to back up his claims and some of his generalizations were just too broad to accept. For example, he said that if we were missing a word in our language, then we couldn’t grasp the concept. Although we don’t have the word Schadenfreude in English, we can easily understand the idea: delighting in others’ misfortunes. We get it, but perhaps we think less of this perverse delight than Germans do. In “Does Language Shape How You Think?,” an article in the New York Times Magazine, Guy Deutscher argues, “When your language routinely obliges you to specify certain types of information, it forces you to be attentive to certain details in the world and to certain aspects of experience that speakers of other languages may not be required to think about all the time. And since such habits of speech are cultivated from the earliest age, it is only natural that they can settle into habits of mind that go beyond language itself, affecting your experiences, perceptions, associations, feelings, memories and orientation in the world” (Deutscher 45).
<urn:uuid:d35d8390-aeef-4ed2-a76a-525f1c6356cd>
{ "date": "2018-08-21T04:24:35", "dump": "CC-MAIN-2018-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217951.76/warc/CC-MAIN-20180821034002-20180821054002-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9486362934112549, "score": 3.3125, "token_count": 283, "url": "http://ronaldbrichardson.com/tag/how-language-affects-thought/" }
Spanish-speaking countries infographic project this spanish-speaking country project in both english and spanish and can be completed digitally or handwritten. Country project for spanish-speaking countries or any country english and spanish templates can be completed digitally in ppt or printed and handwritten https://www. Spanish-speaking country project our class is being given the task of increasing interest in travel to some of the spanish-speaking countries your job is to make a. Spanish-speaking area of the world glogster introduction our school's foreign language department has won the lottery and i am offering you an all expenses paid 1. 6th grade spanish-speaking country project name: for this project, you are researching a spanish-speaking country your research will be on topics such as. Learn about families in different spanish speaking countries, their similarities and differences hispanic culture projects booklet (middle/high school. By:richard peters spanish speaking countries: puerto rico (rich port ) location puerto rico borders: the atlantic ocean, caribbean sea, cuba & jamaica. The purpose of the project is to create a facebook page on a spanish-speaking country. Irubric e5bx55: you and a classmate will be taking a trip to an assigned spanish speaking country like any good tourist, you must prepare yourself before you visit. Project resources: research project topic ideas research project topic ideas (spanish/hispanic) the musical traditions of 2 different spanish-speaking countries. We will be in the computer lab on november 26th and the 27th to begin our powerpoint projects for this chapter, we will be exploring spanish-speaking countries. Spanish speaking countries project 100 points my country is _____ due date_____ you have been assigned 1 of the 20 spanish speaking countries. Made by miranda and alexa, song is sugar we're going down by fall out boy this took us forever, but we did it made this for spanish class, please. Spanish-speaking world current event project students will be able to discuss a current event in the spanish-speaking world spanish-speaking countries in. Spanish speaking countries project - assignment and rubric - teacherspayteacherscom. Project on the spanish speaking countries you must pick a spanish speaking country and do a powerpoint presentation on that country the presentation must have a. This project is designed for a small group of students(2-4)to conduct an internet-based research on a spanish-speaking country you and your partner are to present. Spanish speaking world research project guidelines spanish speaking world research project guidelines what other countries in that part of the world. Find this pin and more on spanish projects teacher, teaching spanish, french classroom, spanish projects project for spanish-speaking countries or any. The student project includes spanish speaking country research project looking to teach about the countries and culture of the spanish-speaking or.
<urn:uuid:2a09506d-9e88-4b8a-86e8-8c1efafab938>
{ "date": "2018-06-21T02:30:20", "dump": "CC-MAIN-2018-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864019.29/warc/CC-MAIN-20180621020632-20180621040632-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9244325757026672, "score": 3.296875, "token_count": 612, "url": "http://zqhomeworkahjh.blinkingti.me/spanish-speaking-countries-project.html" }
Secrets Q & A - Written by John E. Johnson, Jr. - Published on 18 February 2010

Tests and Observations

For conductors, you have to deal with some basic electrical factors. The most important are Resistance, Capacitance, and Inductance. Resistance usually refers to the DC resistance, as if you connected a battery to the ends of the cable. Capacitance and Inductance are called "Reactive Impedance", and they come into play with alternating current, such as with music playing through the conductors. I was very curious to see how capacitance and inductance change when the conductors are different distances from one another. So, I took two conductors about 15 feet long and measured the capacitance and inductance with them wrapped together vs. lying on the carpet several feet apart. What I found was that when the conductors were wrapped together, the capacitance (in pico-Farads, or pF) was higher by a factor of about two hundred than when the conductors were several feet apart, i.e., 1,355 pF together vs 7 pF apart. However, the inductance was lower by a factor of about six with the conductors wrapped together, i.e., 0.002 mH (milli-Henrys) together vs. 0.012 mH apart. This is because even a single wire has self-inductance as AC signals pass through it, but when the + and - conductors of the cable are close together, the opposite magnetic fields in the two conductors cause some cancellation. What surprised me though was the huge - and I mean HUGE - difference in capacitance between the conductors wrapped together and far apart. These measurements were for 15 feet of cable. So, to get the impedance value per foot, which is what A/V cable companies specify, divide the numbers I gave by 15. It is generally felt that interconnects should have low capacitance and speaker cables should have low inductance. The reason is that the connection from a preamplifier output to a power amplifier input involves higher impedances than the connection from a power amplifier output to a speaker, and capacitance has more of an effect in those higher impedance connections between the preamp and power amp. So, for speaker cables, it is relatively straightforward: the conductors are wound around each other or braided together, and with the conductors tightly arranged, there is low inductance. For interconnects, though, it is a different story. If you place the + and - conductors farther apart, you can end up with significant hum, because the - conductor is usually configured to act as shielding. One way around this is to use XLR balanced cables and separate the conductors by an inch or so. Hum that is picked up in the + and - conductors (actually these are called the hot and cold, and are separate from the third conductor, which is the ground) is cancelled when the signal reaches the amplifier, because one of the legs is inverted and added to the other. This is called common mode rejection. The new Argento "Flow" XLR cables are designed this way, but I have not had the chance to test them. From what I can tell, many A/V cable designs are of the "Litz" configuration. The word Litz is derived from the German word that means "woven wire". Each conductor in the cable is insulated from the others, and the conductors are arranged in a spiral or braid, or both.
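Before getting into why the Litz construction helps, it is worth putting the measurements above into rough perspective. The sketch below is a back-of-the-envelope illustration of my own (only the four measured values are taken from the text): it converts the 15-foot totals to per-foot figures and estimates the reactance each represents at 20 kHz, the top of the audio band.

```python
import math

LENGTH_FT = 15.0      # length of the measured conductor pair described above
FREQ_HZ = 20_000.0    # top of the audio band
omega = 2.0 * math.pi * FREQ_HZ

# Measured totals for the 15-foot pair, as quoted in the text
measurements = {
    "wrapped together":   {"C_pF": 1355.0, "L_mH": 0.002},
    "several feet apart": {"C_pF": 7.0,    "L_mH": 0.012},
}

for label, m in measurements.items():
    c_farads = m["C_pF"] * 1e-12          # pF -> F
    l_henrys = m["L_mH"] * 1e-3           # mH -> H

    c_per_ft_pF = m["C_pF"] / LENGTH_FT           # per-foot value, as cable makers quote it
    l_per_ft_uH = m["L_mH"] * 1e3 / LENGTH_FT     # mH -> uH, then per foot

    x_c = 1.0 / (omega * c_farads)    # capacitive reactance of the whole run, ohms
    x_l = omega * l_henrys            # inductive reactance of the whole run, ohms

    print(f"{label}: {c_per_ft_pF:.1f} pF/ft, {l_per_ft_uH:.3f} uH/ft, "
          f"Xc ~ {x_c:,.0f} ohms, Xl ~ {x_l:.2f} ohms at 20 kHz")
```

Run as written, the inductive reactance comes out well under two ohms in either arrangement, which is small next to a typical loudspeaker load, while the capacitive reactance of the high-capacitance arrangement falls into the kilohm range, where it can begin to interact with a preamplifier's output impedance. That is consistent with the rule of thumb above: low inductance matters most for speaker cables, low capacitance for interconnects.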
Braiding many small, individually insulated strands in this way reduces the skin effect, which refers to higher frequencies traveling along the surface of the conductor (this is why each conductor is small in diameter and insulated from the others), and also reduces the proximity effect, which is the effect that electrical current flowing through a conductor has on adjacent conductors. What causes the skin effect is the delay in the collapse of the magnetic field when the current changes direction, which pushes the current toward the surface because it is now attempting to move against that residual magnetic field. This effect is larger at higher frequencies, and therefore causes a loss in power (frequency response) at those high frequencies. By braiding the conductors, each conductor spends as much of its length at the outer edge of the cable as it does in the center, where the proximity effect is the strongest. The whole idea is to have each individual conductor passing current in the same way as all the rest of the conductors in a single cable. Here is a photo of various Litz configurations that New England Wire can supply at about $1,500 for a 250 foot roll. Many audio cables are similar to these configurations. (Photos copyright New England Wire Technologies) You can easily make your own Litz design speaker cables, with a measured, very low inductance. Here is the link. If you wanted to upgrade the conductors to silver-plated copper with Teflon dielectric, Daburn sells model 2401 in 16 gauge for about $66/100 feet. Total cost of a pair of 8-12 foot cables, including connectors and wrapping, would be about $150. For the connectors, I would recommend the locking bananas or spades from DH Labs. Their spades are pure copper, plated with gold, while the bananas are gold-plated brass. Flat cables are also available, with conductors side by side. For example, Cable Organizer supplies these paper-thin flat cables. They are made to go on walls and then you paint over them, and I suspect that they would have very low skin effect, although that is not their purpose. The cost is about $2 per foot in 18 Gauge. (Photo copyright CableOrganizer dot com) Here is another type of flat cable that has been around for a long time. This particular one is made by Daburn cables. It has 20 conductors, 28 Gauge each (made up of several 36 Gauge strands, each of which is silver-plated copper). Teflon is used as the dielectric. You can choose the number of conductors, and some SCSI cables are made like this, but of course, one can also use them for speaker cables. The cost of this particular configuration is about $13/foot, and that is the cost to the consumer, not the manufacturing cost. (Photo copyright Daburn) But, the main point here is the continuing escalation of A/V cable prices, when they remain basically just a set of wires, mostly in common configurations with slight variations, and not extraordinarily expensive to manufacture. There simply is no justification for the product to be in the four and five figure range. Custom winders and custom extruders, yes, they have to be built, and they are expensive. But the custom CNC machines that build the parts for DVD players, amplifiers (PC board stuffers), and speakers are also expensive. So why should one pair of cables cost as much as a complete home theater system that also uses custom-built, expensive CNC machines, and a lot more of them in total number?
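A brief engineering aside before returning to the pricing argument (again, illustrative numbers of my own, not the article's): the skin effect described above can be put on a rough numerical footing with the classical skin-depth formula for copper, delta = sqrt(2*rho / (omega*mu)). The strands in Litz constructions are kept small precisely so that each one is comparable to, or thinner than, this depth at the frequencies of interest.

```python
import math

RHO_COPPER = 1.68e-8            # resistivity of copper at room temperature, ohm*m
MU_0 = 4.0 * math.pi * 1e-7     # permeability of free space, H/m (copper is essentially non-magnetic)

def skin_depth_mm(freq_hz: float) -> float:
    """Classical skin depth: depth at which current density falls to 1/e of its surface value."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * RHO_COPPER / (omega * MU_0)) * 1_000.0  # metres -> millimetres

for f in (1_000.0, 20_000.0, 100_000.0):
    print(f"{f/1000:>5.0f} kHz: skin depth ~ {skin_depth_mm(f):.2f} mm")
```

At 20 kHz this works out to roughly half a millimetre, so solid conductors much thicker than a millimetre or so begin to see some current crowding at the top of the audio band. Whether that is an audible effect, as opposed to a measurable one, is exactly the sort of question the article is asking cable makers to answer with data.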
When we spend thousands of dollars for speakers or other hi-fi components, there are graphs published by various magazines that show the performance of those components. There are DATA to justify the high cost. So, if a cable manufacturer wants to charge $5,000 for a pair of interconnects or speaker cables, shouldn't they be required to show us experimental test data to back it up? That's the kind of thing I want us to discuss here. Please post your comments at the end of the article in the comments window. Written by Walden , February 19, 2010 The objective measurements prove that cables (in audio) can not have a sonic impact, simple. Of course the subjective side does not believe in measurements but at the same time they refuse to even listen. Anyone no matter what the think they can hear always fails a double blind test. Remember the thread on avs where the user could not hear a difference from basic monster cable to $30,000 transparent cables. That was in his room with all of his equipment and he could still not hear a difference. Written by SHV , February 19, 2010 About five years ago I got some unpaired "surplus" boutique inter-connects from friend. IIRC there were about five different "brands. They were all in the "kilo" dollar retail price range. I took them apart. From my small sample, several were nothing special, one had sloppy solder connections and one had "cold" solder connections. On well known brand had a large plastic box incorporated into the interconnect. I took it apart with a hacksaw as it was totally sealed to prevent easy inspection. Inside was mostly empty space. There were several cheap appearing resistors wired parallel with the main conductor and several capacitors wired in a fashion that didn't appear to have any electrical effect. That finished me on the idea of high-end wire. I now make my own with Canare wire and Neutrik connectors. I suspect that they aren't any better than the ones that come with a Wal-Mart DVD player but can do custon length. Written by JEJ , February 19, 2010 Well, I do think that good cables can make a difference, but the differences are subtle, not dramatic, and this is the problem. But, some blind testing has yielded positive results: http://tech.yahoo.com/blog/null/65929 However, the cost of some cables is simply not justified. There is no issue for me in paying a few hundred dollars for a pair of speaker cables that are well built and have good solid gold-plated connectors. But $10,000, $20,000, $25,000? That is absurd. Different kind of study Written by Rick Schmidt , February 20, 2010 agree completely that the cost of many cables and power cords cannot be justified and that the practice of imposing these high prices undermines the hifi industry. BUT - double blind is not the optimal study for cables or hifi components. I think a 'Longitudinal Study' where subjects record their listening habits over time IE, 'how long did you listen this evening?' is better. John said that the differences are subtle, I think that is the case to the untrained ear. A double blind study with a group of uninitiated listeners sitting together, perhaps nervous about the whole thing, doesn't hold much water for me. Written by JEJ , February 20, 2010 Well, I have a trained ear, and the differences have always been subtle to me. But, on the other hand, I did hear a huge difference when I first started out getting good audio equipment. I had a long run of 13 gauge zip cord running to the rear speakers.
The cable was hanging from nails along the top of the side walls at the ceiling. Susan told me to get the nails out and put the cable out of sight. So, I bought some Nordost Flatline to put under the rug. All of a sudden, there were more highs in the sound, and it was very easy to hear that change. But, for all other cable tests I have done, or listened to other people making cable changes, if I heard any difference at all, it was subtle. In fact, so subtle, I wondered if it was just my imagination. This tendency for cable differences to be subtle is why there is so much controversy. If it always made a big difference, no one would be arguing about it. But remember now, this article isn't really about whether cables make a difference in the sound or not. It is about the total absurdity of pricing some of them in the tens of thousands of dollars. Written by ChrisHeinonen , February 20, 2010 I've been getting close to the point where I need to upgrade the cables in my system. Mostly, I've always felt that my money was always better spent other places, since instead of spending $500 on speaker cables, I could easily spend a little more and get a better subwoofer, which would make a bigger impact for sure. However, now I want to get cables outfitted on my system so that when I review equipment, I won't get complaints that whatever I have is being held back by inferior connections, which I've heard many times. However, I think there's a definite point of diminishing returns. I think if I outfit my RCA and power cords with cables from Emotiva, or Pangea, or someone else that makes high quality, affordable equipment, that no one will really be able to fault them. I might splurge a bit on some Kimber 8TC speaker cables, but I think speaker cables are more vulnerable to noise, and I've yet to read anything negative about the Kimber in my reading (the surrounds will make due with flat speaker cable I ran under the carpet last year). I do agree that the prices of cables does seem to have gotten out of hand, but when you can find speakers that cost 6-figures, I'm sure anyone that buys that doesn't want to feel that there's even the slight chance their cables are keeping them from getting the maximum enjoyment out of their purchase, which I can understand. Written by Paul AB , February 20, 2010 Great series. Though the critique of megabuck cables could apply to anything monstrously overpriced. Some of it's simply the fact that there's a market for lux goods, and thus people with deep pockets buy the stuff and the greedy companies are happy to shill it to them. I wish we had R&D, production and marketing costs for all expensive products because its the only way to make an informed choice as a consumer. (The ultimate point of this series). Absent that, you have to do your homework and trust your judgment. My entire system is cabled out in the $2500 retail range, and a lot of the wire I bought used. I can hear the difference and am happy with the sound. But more than that I am not willing to spend. Consider the opportunity costs. Show me the science Written by steveg , February 20, 2010 "the differences are subtle" Can the differences be measured? If they could the cables would sell themselves. Fuel additives, astrology, crystals, and audiophile speaker cables... all on the same level to me. Having said that, if spending $000s makes you happy, by all means do it. Written by Robert Learner , February 20, 2010 I agree totally w/the above post. Differences between good cable (and I include Belden, etc. 
in this) are just about nada IMO. A difference you have to really 'listen for', is by nature very subtle and more than likely imagination/justification. My "Flatline' experience was replacing some of the original heavy stranded Monster speaker cable with some simple, cheaper wire recommended by the speaker manufacturer (Epos). It wasn't subtle - a magnitude of slurring was gone and everything snapped into focus. Cables that really sound different are voiced by their designers. Cardas has a line that smooths/rolls off highs. More to the point, cables are a lousy value propositions. There are so many more cost effective ways to improve a system: room treatment, better amps, speakers, etc. -- that provide obvious improvement, not something you have to listen for. Written by JEJ , February 21, 2010 Taking into account the cost of materials and manufacturing, I don't believe a pair of speaker cables should go into the four figure range, let alone five. Written by Walden , February 21, 2010 Oh and if you "think" you can hear a difference there are lots of money challenges waiting out there for you to collect, including the $1,000,000 prize from james randi. As we see most users on here you think they hear a difference also bash and blind or double blind listening tests. Hence people think what they want to. Even if cables made a difference what about the basic wire and connectors inside the equipment? What about the crossover in the speaker and the huge amount of basic copper in the speakers drivers? We are fundamentally looking at cables incorrectly Written by Dominick , February 21, 2010 I have learned to look at cables on a more horizontal plane, rather than a vertical one. On a vertical plane, we have the tendency to assume that sonic characteristics improve as price goes up. Fundamentally, we all have different ears and we hear things very differently. This has been proven. On a horizontal plane, we view cables as simply different. One may be more pleasing to me than another, but my choices may be totally reversed for another person. On a horizontal plane, there is no "ultimate cable". We are all too different to categorically state one cable as being sonically superior to another. Another good example of this is that some people still prefer tubes. Written by ej , February 21, 2010 Other than making sure you have a sold connection, and wire of a quality where the information makes it to the other side intact, I refuse to believe the hype. I equate it to "just knowing" my car is running better after I get it washed. Let's remember that this is a hobby. Written by Phil Miller , February 22, 2010 There is no question that effect of cables - along with various other items of audio gear- is greatly exagerrated. Several items quickly come to mind - stones or, alternatively, coins that are placed on speakers to improve focus; tape that is placed on the driver surrounds to also improve focus and imagaing. Much of this stuff is snake oil and we all (should) know it. However the fact of the matter is that some of us continue to believe we hear improvement. This is a hobby and we follow it for the fun it provides - if that includes cables that are astronomical rip-offs, so be it. (Cars do run better after they have been washed.) Phil Written by Walden , February 22, 2010 "Fundamentally, we all have different ears and we hear things very differently. This has been proven." No, people can hear better then others but that is about it. 
Cables have been proven to not have a sonic impact in theory and in practice, what else is there to prove? I can understand that people still assume some high end audio companies sound better based on name and price but why are these same audiophools still hanging on to this notion that cables sound different? Written by JEJ , February 22, 2010 We cannot prove that something does not exist, only that it does exist. Negative findings do not prove that there are no differences in cable performance. And, you should not ignore the fact that there are blind studies in which differences were heard, such as http://tech.yahoo.com/blog/null/65929. Electrical conduction is affected by the conductor configuration. The problem is that there are other variables, such as the output impedance of the source and the input impedance of the component at the other end. Also, as the article in AudioXpress indicated, oxidation on the connectors changes soon after connectors are plugged in, and this is measurable. You really need to keep an open mind about the whole thing. I am a believer in having good cables, I just don't think that prices in the four and five figure range are justifiable. Written by Scott_R_K , February 22, 2010 If you want to get kicked out of a Tradeshow , start poking your nose into the insides of Loudspeakers . Start asking questions on wire gauge , soldered , crimped , shielded , etc . Then go ask the python-gauge Cable makers to explain why their product would work on such a poorly built speaker . Whoosh...out you go ! If the Source and Sink Devices aren't using the same quality of wire and termination techniques as the best Cable makers then you will lose all the benefit of even a single great cable . Actually, there is a difference! Written by Michael , February 22, 2010 I did quite a different test. Instead of measuring resistance, capacitance, etc., I went to measure full system response. For this purpose I used Room EQ Wizard (REW) - free acoustics analyzer application. I took MIT, XLO and some other 3rd brand. Calibrating SPL before each test as required by REW, I measured frequency response and group delay. Well, there was audible difference in high frequencies of few dBs, and the same was measured. I have the graphs, I can upload them somewhere so, guys, cables do have a difference! Written by some guy , February 22, 2010 You have to keep it all in perspective. Fundamentally, all that is needed with the vast majority of audio equipment are connectors and cable that follow the fairly basic rules of inductance, capacitance, and certain field effects that crop up with an AC signal (like music). This is not particularly hard to do, with some judicious choices and paying attention to a few principles. There is a tendency in audio (and now video or A/V) to rely too much on the experience and recommendations of others. Whether it's by reading reviews in magazines or online, or by ignoring those sources and asking people in forums, people take too little responsibility to determine what is right themselves. Certainly there is nothing wrong with being aware of what is available in the market, but I cringe when I see (and this is very common) questions like "what is the best receiver for x dollars?" topics posted in forums. That is a poor question to ask; all you get are other people's responses basically suggesting something they own. 
Similarly, reviewers, whether they like it or not, effectively make buying decisions for consumers, because consumers rely too much on a positive review (which the consumer reads as "endorsement") of a product. This creates demand and a certain cachet for certain products. With respect to cabling, it creates an anxiety and a desire, when the better result would be the consumer making his or her own judgements, by listening, by assessing their needs, their wants, and their budget realistically. So, when reviewers have boutique cabling that creates demand in consumers, and perhaps that demand is beyond actual need. Now, reviewers get stuff sent to them for review (obviously) and it's no surprise that these items are commercially for sale. Why would you not use a cable that "you like" if cable is essentially always available to you? It does not automatically follow, however, that consumers should build systems the same way as reviewers do ... there is an imbalance of opportunity and an imbalance of cost. Reviewers are always going to have cables about the house ... it's the nature of the job. Consumers have to buy them. When it's so difficult for some consumers to just go and choose products as simple as a receiver or disk player for themselves, it's no wonder that they rely on outsiders to choose what level of cabling is appropriate. And when reviewers have ample opportunity to sample cabling, it's not much of a stretch to expect they will use the stuff constantly thrust upon them in their systems, and say so in print from time to time what that product is. Few people have the opportunity to listen to a truly revealing system. There are levels of audio refinement that almost no-one except a very dedicated minority have access to. I would expect that reviewers of audio in some publications have systems that no-one in my entire city can compare to (200,000 people). I have heard some truly great audio, but I would have to admit that "truly great" probably does not describe what some reviewers whom I read in the usual magazines use every day. If you have such a system, encompassing the very best the world has to offer, I would not be surprised to learn that you can discern differences in cable rather evidently. I would also not be surprised to learn that, when you are at the pinnacle of the art, there may be nothing left to improve in the hardware. So, to that person, a cable of some stunning price tag may be worthwhile ... it's the only way to take it to the next level, and if you didn't want to take it to the next level, you are a fool for owning components that each cost in the five figures. I have to assume they would, then, see a very expensive cable as worthwhile. The problem is we ordinary folk think we have stunning systems, but we don't. We have "truly great" systems to one degree or another. We should not be aspiring to 5,000 or 10,000 cables any more than we should be aspiring to gold plated plumbing. But, some do have gold plated plumbing. A decent set of fundamentally good cabling is all we need. People have lost perspective to a huge degree. It's not that there is no such thing as a worthwhile five-figure cable, it's that if it were worthwhile, we are not in the league where it matters enough to go there ... our parents used to talk of "keeping up with the Joneses" and there was a lesson in there. Perhaps we've forgotten the lesson. For the vast, vast, vast majority of people, one you choose a fundamentally decent cable that you can afford, just stop right there. 
Better improvements can be made elsewhere at a better value. When you join the "lunatic fringe" ... well, you'll know it by more than just your hifi system. Till then ... don't sweat it. Worry about your own stuff first. There is one thing about the debate about the value of cabling that I think should be mentioned. Whenever there is any discussion about the subject, and this article is no different, talk comes down to the principles of electronic theory, perhaps a little metallurgy but only in terms of it's electrical properties ... this factor is important, and the right value should be this, this material has this conductivity that you can look up in the textbook, and so on. But all this theory approaches the subject with huge assumptions about the purity of metals, the ideal interface, an environment free of RF, "typical" input impedances, shielding assumed to be 100% coverage, and so on. But the real world is a bit more imperfect than that. I do believe cabling matters (and let's keep the budget reasonable ... affordability is the first, not last criteria you should be considering) and I believe that it does, mostly because in the real world, we wallow in imperfections that affect our hifi systems. Cabling can exaggerate or mitigate those imperfections, and if it does, we'll hear something. Whether it's "better" or "worse" is really our own opinion, but "different" I think is a given. Just not that different, most (not all ... some stuff is just plain bad) of the time. Written by Piero , February 22, 2010 Scott, you make a generalized and incorrect statement. No matter, you all seem off topic, JJ is merely saying at what point are these cables too expensive. Written by JEJ , February 23, 2010 By "too expensive", I am referring to what is justifiable in terms of R&D and manufacturing costs, and that is only my opinion. To Michael, "actually there is a difference", please post your graphs in the CAVE. Start a thread in the cables section and upload the graphs as gifs or jpgs. Written by Walden , February 23, 2010 Surprise: People who visited the booth and listened to both sets of equipment (not in view) preferred the expensively cabled audio equipment 61 percent of the time. Sorry but this is just the same as flipping a coin and does not prove anything. If it is not 100% then there is no difference. Written by Walden , February 23, 2010 The slight differences in measurements are irrelevant. Price does not equal value, and value is not a fixed entity. Written by some guy , February 23, 2010 " ... written by JEJ , February 23, 2010 By "too expensive", I am referring to what is justifiable in terms of R&D and manufacturing costs, and that is only my opinion. ..." I appreciate that it's just your opinion, and I thank you for saying it out loud. I don't have a magic portal into the workings of the purveyors of stratospherically priced cabling, but I have my guesses (which you could, rightly, retort that they are only my opinion). But I suspect that these companies do spend some time and money researching how cabling affects the signal between boxes. I also suspect that the pinnacle of the product line is priced in such a way as to recover some of those costs, perhaps well out of proportion to the manufacturing cost of that particular cable compared to the journeyman offerings in the product line; the stuff mere mortals can afford. That is fine, as far as I'm concerned. If it's true, then it lowers the cost of what I normally buy to a certain extent. 
I don't see how anyone could have a problem with that, but I suppose you could object solely on principle. But even if it isn't, I don't object to $25,000 cables anymore than I object to $400,000 Ferraris. The beauty of the free market is there is no "wasting" of capital when you buy a product ... money is re-spent on furniture and food eventually, regardless of how it actually changes hands in the initial transaction. It's not critical to the cabinet maker or the farmer that the initial transaction involved a product most would consider unfathomable, taken at it's face. Their kids get shoes either way. I can think of a thousand things I cannot afford that I believe are overpriced, or offer little value for the money, and I'm sure there are people out there who think my purchase of ... get this ... a $400 phono preamp (a purchase that was considered ludicrous before Mark Levinson and the likes of dB Systems introduced their groundbreaking products 35 years ago) is proof that I've lost all perspective about the value of a dollar, and amongst the less politically generous amongst them, that "something should be done" to curb my expression of "insanity". Others would consider my purchase either something of a bargain, or perhaps scraping the lower levels of what is possible, or prudent, or appropriate for the task at hand. The opinions on the merits of my purchase decision could easily run from "completely unnecessary" to "a pathetic compromise", and everywhere in between. You can substitute "interconnect" for "phono preamp" and arrive right smack in the middle of this discussion. I welcome some aspects of this series in *Secrets*, but the debate about whether there is such a thing as any value whatsoever in a cable no-one reading is likely to ever buy seems to me somewhat pointless, and I would be shocked to discover in the future that this article somehow ends up being the final word on the subject. I build my own cables, and don't break the bank going there, but certainly it costs some money. To do so is to accept that nothing I use will ever be recommended, or reviewed, and I'm OK with that. But if by some bizarre set of improbable circumstances someone would offer me $25,000 for a set of my interconnects, and declare them "the best in the world", I'd save my doubts until after I cashed the check. And then I'd buy shoes for the kids Written by JEJ , February 23, 2010 A $400,000 Ferrari is hand made, part by part. A cable is made by spinning wires together by a machine or forced (extruded) through a die. It comes out like pasta through a pasta-making machine. It is wound onto a roll and cut to desired lengths. The basics of electrical conduction through wires are already known. They just program the CNC to wind or extrude the conductors in various ways, according to what the company thinks might be a good configuration to try out. It is not a complex procedure, and once the presses are rolling, the cable just comes out foot by foot, meter by meter. There is no way a pair of 5 meter speaker cables should cost $25,000. And I never intended for this article to be the final word on the subject of high priced cables, or cables vs. sound quality for that matter. It is simply a technical discussion that we have not had before at Secrets, and I want to voice what I think about overpricing. Written by some guy , February 23, 2010 " ... A $400,000 Ferrari is hand made, part by part. A cable is made by spinning wires together by a machine or forced (extruded) through a die. 
It comes out like pasta through a pasta-making machine. It is wound onto a roll and cut to desired lengths. ..." With all due respect, Ferrari makes their cars the same way everyone else does, and that involves a lot of outsourced parts and a lot of robots. A great deal of the cost in such a car is low volume manufacturing, which forces a proportion of the design cost, back in the office, which Ferrari does do a lot of, to a disproportionate level on a per-car basis. And despite what the process of manufacturing cable involves, when you sell a 100 meters or less of it during the life of the product (unless the lunatic fringe is far more spendthrift and significantly more numerous than my experience suggests) no factory is going to give you much of a break on the tooling costs, and computers, business class software, salaries, and office space cost the same in New Jersey as they do in Modeno, which will result in ... wait for it ... a disproportionate level of the design cost on a per-cable basis. There are businesses on the planet that pay much more than $1000 a foot for signal cable ... go out at night and wait for a bright example of their handiwork to fly by. Written by twiceaday , February 23, 2010 I have a few questions for those arguing in favor of high-end cables (or for that matter, anything beyond well-made basic cables): Do you control the humidity in your listening room? What about the oxygen/nitrogen balance? Either of these would have measurable effects on the soundwave as it travels from the speakers to your ears. Do you hold absolutely still while listening to your audio system? Even a few millimeters of motion will substantially alter the perceived signal from your speakers. Do you have either a generator or a dedicated line from your power company? If not, substantial interference is introduced long before the electrical signal ever reaches your amplifier. Yes, expensive cables look pretty and basic ones look ugly, but in terms of sonic fidelity, the air in your room and the precise location of your ears have a much greater impact. Written by ChrisHeinonen , February 23, 2010 Though off topic, if you think that a Ferrari is built the same way as any other car, you haven't really watched a tour of the factory before: It's in Italian unfortunately (I've seen a tour on Discovery HD before), but while parts are machined like other car companies, they are then hand inspected, polished, and tuned, in addition to fully assembled by hand, the fabric is all stitched by hand, and there is a fantastic amount of work that goes into each of those cars. Yes, it's on a small scale, but it's also done in such an exacting way, you know where your money has gone. I can't talk about how cables are made, as I've got no idea about that, but watching that video will give you a better idea of how much work goes into a Ferrari. You'll also see some gardens in the plant, that was done to keep the humidity at a better level for working on them, and was friendlier than just adding humidifiers. Written by Piero , February 23, 2010 So do we think the expense is based on what the market will bear? Certainly these ultra-expensive cables wouldn’t be manufactured if nobody wanted them, however if the markup is so great, it’s not like these cables are made and sit around using up valuable capital? Ferraris are made to an extreme limit and therefore valuable and pre-ordered for a couple of years. They could almost ask what they want? Mass vs. 
Batch Manufacturing Written by Andrew Yang , February 23, 2010 By nature, for low unit volume runs the manufacturing process will differ from high unit volume runs. The economics of automation are not favorable until a minimum unit threshold is achieved, typically in excess of 100k units. All this to say, in addition to the supporting evidence from the video, that Ferraris (or Bugattis, Rolls Royces, Bentleys, et al) are not manufactured in the same manner as your average Ford. The unit cost correctly would include COGS, SG&A and Depreciation. At every point a Ferrari has a higher cost, which results in a price that is an order of magnitude higher than mass manufactured automobiles. Written by JEJ , February 23, 2010 Once the coils of wire and the extrusion die are set up, the cable is mass produced. Smoke and Mirrors Written by Vilip , February 24, 2010 I have a lot of expensive audio equipment. I've justified and rationalized my audio expenditures. I like what I have. Would I have desired to spend less - yes... But the market always balances out. Sellers always sell for the most they can get and buyers always pay as little as possible. And the reality is always somewhere in between. Mercedes is a good example. Mercedes is generally considered to have the best engineering in the automotive world - and perhaps their most expensive cars are the best engineered. Everything else in their product line benefits accordingly. What we perceive as good value varies greatly. The more personal the experience is, the easier it is to rationalize. Audio is very personal. And for each and every one of us who are bitten there will always be a certain disconnect about pricing. We spend on audio what we will and not as we should as with other things. We all agree that the included cable in the box sucks because you can't get something for nothing (even though the cost of the cable is buried in the price - so it isn't really free). Why should cable cost $1 per foot or $100 per foot or $10,000 per foot? The "real" cost is always a moving target, depending on the vagaries of the perception of value. What we choose to connect the gear is always ancillary (and some would argue the gear is also ancillary). We supplement because we can. We think it works better somehow. Start the rationalization. Beyond what works is simply smoke and mirrors. And we have all taken the blue pill... Written by some guy , February 26, 2010 " ... I have a few questions for those arguing in favor of high-end cables (or for that matter, anything beyond well-made basic cables ..." I am not arguing "for" high-end cables. I am arguing that they have a right to exist in the marketplace, and that if no-one bought them, they would soon go away, as the free market pretty much insures they would. As for all cables being made the same way, there are cables made of liquid metals (metals that are liquid at room temperature ... an easy to grasp example of such a metal is mercury) that, obviously, cannot be extruded by dies. Since the main premise of the article is about the price, not the material, I suggest they be included as well. Written by JEJ , March 01, 2010 Apparently, there are such things as liquid cables. Here is the link: But, it is not mercury in the cable. It is a mixture of gallium, indium, and tin. Gallium becomes a liquid at just above room temperature, and Indium/Tin Oxide is used in liquid crystal displays. 
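The batch-versus-mass-manufacturing point raised in the comments above is easy to put in rough numbers. The short Python sketch below is purely illustrative: the fixed-cost and per-unit figures are invented and do not reflect any particular company's actual economics; it only shows how a fixed design, tooling and marketing outlay amortized over a few hundred units, rather than a few hundred thousand, ends up dominating the per-unit cost.

```python
# Purely illustrative: how a fixed design/tooling/marketing outlay dominates
# per-unit cost at low production volumes. All figures are invented for the
# example and do not describe any real cable (or car) company.

def per_unit_cost(fixed_costs: float, variable_cost: float, units: int) -> float:
    """Cost per unit when the fixed costs are amortized over the whole run."""
    return fixed_costs / units + variable_cost

fixed = 250_000.0   # hypothetical design, tooling and marketing outlay
variable = 40.0     # hypothetical materials and labour per cable

for units in (100, 1_000, 100_000):
    print(f"{units:>7} units -> ${per_unit_cost(fixed, variable, units):,.2f} per cable")
# Roughly $2,540 per cable at 100 units, $290 at 1,000, and $42.50 at 100,000.
```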
Written by JEJ , March 01, 2010 See also a discussion in the CAVE (link is shown below), where one of our readers measured the room response with different cables, and found a difference that can be seen on graphs. Hello!!! Has anyone tried this before? Can the proof that cables make a difference be this simple? Criticisms (valid ones) anybody? Written by Tyler , March 02, 2010 I would suspect a very poor design with one of those cables if those measurements are legit. Anyone with any kind of engineering knowledge knows that you would not experience differences of that magnitude between properly designed cables. Written by Tyler , March 02, 2010 "But I suspect that these companies do spend some time and money researching how cabling affects the signal between boxes." NOT. If this were true, companies would be willing to actually publish scientific data to support their work...which they don't. Hell, Nordost is the only boutique company I know of that at least publishes specs for their wire, even though their products are ridiculously overpriced. I'm convinced that most of the cable companies come up with a layout that looks fancy and expensive, then have a marketing department create some BS that sounds scientific to ignorant buyers, and then proceed to price the product at 1000%+ of the manufacturing cost. Clean signals? - Real world experience from a different arena. Written by Dale DuVall , March 03, 2010 Put me in the mostly, but not completely, skeptics camp. I am educated (and I use that term loosely!) in physics, but have spent most of my career designing and manufacturing custom precision electronics. I concede that this does not indicate I know "squat" about audio. However, I do believe some of my experiences may be applicable. I am not attempting to address any one issue in detail, but wish to provide a few random musings (some obvious, some possibly wrong). Power cords: they should be adequate to handle the current (most are). They should have quality connectors (most do). If the power cables are going to run anywhere close to other wiring or equipment, twisted shielded pairs are preferable, perhaps necessary. Keep them as short as possible. Some of the claims I see for power cords are beyond my level of understanding. For instance, how can a power cord help deliver a huge demand for instantaneous current when the AC signal is crossing zero? I want some type of line surge protection for all my electronic equipment, including phones, computers, etc. Surge protection and high frequency filtering is a good thing, and possibly regenerating AC power if one suffers significant voltage distortion and/or variations. (My personal preference might be to use paramagnetic transformers, but they are expensive, heavy, and not particularly pretty.) When told about all the evil phenomena that can creep into my system without perfectly clean power, megajoules on demand, etc., I wonder what the hell the power supply in the equipment is supposed to do. Interconnect cables should be shielded. Balanced pairs should be twisted and shielded. Keeping the runs as short as possible is as important, if not more so, than any other thing. (I'd probably choose a 1/10" paper clip connection over a 10' $10K masterpiece.) Terminations of wires to connectors should use gas-tight crimps for reliability. Good quality, gold plated connector pins are recommended (I don't know enough about silver plating to comment). Do not make too sharp a bend in cables.
This can disrupt the shielding and even separate the insulation from the wire, which is disastrous in impedance controlled coaxial cable. I have auditioned cables, and must confess, I cannot tell much, if any, difference among many samples. Maybe I just don't have that Golden Ear (or vivid imagination?). I don't buy junk, but don't see the need for extremely expensive cables, either. (P.S. It's probably a good idea to clean connectors occasionally, but with good, tight fitting contacts, this should be rare.) I wonder if cable auditioners take great pains to keep all other parameters fixed (cable lengths, cable routing, etc., not to mention the glasses of wine...)? Speaker cables, especially if running near any other wiring, should probably be shielded. Maybe not so much for what they might pick up (this part of the system is low Z) but for what they may radiate. They should have a low enough impedance to handle large current demands. I'm not sure how you can match any cable perfectly to a speaker, since its Z wanders all over the place. The entire outside of any electrical chassis should be grounded. I would hope that manufacturers take great pains to make this so, but with so many panels anodized, painted, etc., I wonder if this is true. Unfortunately, this is difficult to check unless you scratch through the coatings or open the chassis and check from the inside! (Screwheads don't count!) If cables cross close to one another, try to keep them at right angles to one another. System grounding is a big deal, but often very little can be done about it. This is because each piece of equipment is designed independently from the rest of the system, and must meet agency approvals, etc. This is a subject unto itself, as I have found ground loops a very misunderstood topic. If possible, make sure your safety ground at the wall outlet really has a very low impedance path to earth ground. I have seen people run a separate earth ground from their equipment straight to a buried copper rod in the earth. While this may seem extreme, I would never say you're wasting your time doing it. In summary, I would suggest that proper shielding, grounding, and cable routing play a more significant role in clean sound than do most of the latest, greatest, double mojo, hexihelical wound, superconducting, triple-knotted wonder cables. Cables, has anyone tried making cable out of Daburn FEP ribbon cable? Written by JBK , March 05, 2010 Has anyone made speaker cables out of the Daburn FEP ribbon cable? How did you terminate them? What were the results? To put things in perspective, I don't believe in megabuck interconnects, speaker or power cables. My experience has shown there is a point of diminishing returns, and in some cases the more expensive cable degraded the sound of my system. So I like to experiment a little. Hence the question about using the Daburn for speaker cables. Written by Walden567 , March 05, 2010 "Speaker cables, especially if running near any other wiring, should probably be shielded." Wrong, obviously you need to do some very basic research. This is novice information that you should know about. Written by some guy , March 06, 2010 " ... Apparently, there are such things as liquid cables. Here is the link: But, it is not mercury in the cable. It is a mixture of gallium, indium, and tin. Gallium becomes a liquid at just above room temperature, and Indium/Tin Oxide is used in liquid crystal displays. ..." I did not say such cables used mercury.
I provided an example of a metal that is liquid at room temperature, because at first glance the concept is a bit alien to most people. " ... there are cables made of liquid metals (metals that are liquid at room temperature ... an easy to grasp example of such a metal is mercury) ..." Just like Orange Juice Written by Arjan , March 08, 2010 It's just like with orange juice: it all comes out of the same tanker-ship and only later they add some extra flavours. There are many cable companies but only a few cable manufacturers. The companies order their wire there, and they ask to put something extra in to make it look more fancy. In general you get the same copper at the DIY store. That wire comes from the same factory. That extra flavour can cause a 5% better sound experience in the long run. But $2000 buys you the best tweeter in the whole world, and that will make a 300% difference. Just the facts, ma'am Written by Norm , March 08, 2010 I believe you're really preaching to the choir on this one. But, good for you anyway. Back in 1991, The Audio Critic published some no nonsense technical articles (Numbers 16 & 17) about the reasons cables can "sound" different. Sounding different does not make a cable better though, and too many audiophiles treat them as tone controls. For those interested, here's the link: Thanks for the article! Written by David Gibbons , March 09, 2010 The average Joe or Jane goes down to Best Buy, and for those people to buy expensive interconnects hurts them. The high cost of the 'fancy' interconnects they get pressured into buying means that too much of their budget goes to interconnects, and so they buy an inferior TV, or Amp, or DVD/Blu-ray player, or particularly inferior speakers. The degradation of sound or picture from the inferior components will NEVER be fixed by the fancy cables... The high profit margins on the interconnects earn them a lot of attention from the stores who sell them. How do we educate the average HT buyer to choose interconnects that are appropriate to their purchase? I posted advice on my website - how about you folks? Written by JEJ , March 10, 2010 The concept of "how much you should spend on cables" has been beaten to death on every A/V website. The general advice is to spend 10% of your total A/V budget on cables. What the makers use. Written by James M , March 11, 2010 As one comment has already pointed out, have a look at what the speaker and component manufacturers use, especially the higher end ones. This will give you a good indication of what good quality and good sounding cable is like. My suggestion is to buy well made 12g speaker cable and short lengths of interconnects with tight fittings. By all means pay what you want, but you will get great sound and vision without going all high end. Spend the savings on music or movies and enjoy. What's the harm Written by David D , March 15, 2010 So some rich guy (mostly guys) wants to spend more money on his hobby. I doubt the rich guy is going into this with his eyes closed. And as long as the seller is not making fraudulent claims, why is everyone getting so upset? The harm, if there is any, is limited to someone who can afford it. It is a different story with mid-priced products like Monster sold to average guys for whom this is a one-off purchase. Caught up in the emotion of making a major purchase, many consumers place too much faith in the advice of salesmen. I think that $100 Monster HDMI cables represent a far bigger swindle than $10,000 Nordost Valhalla interconnects.
There might be some harm when Joe Moneybags buys Valhallas, but it is pretty subtle. From an economic standpoint (I am an academic economist), the potential harm is that resources that might have gone to some other greater good have been diverted to making the Valhallas. How much better off would we be if those greedy cable makers were hard at work designing, oh, Apple iPads or whatever? Probably not much. More Cable Effect with Bad Equipment? Written by John Meyer , March 21, 2010 I've been a cable skeptic from the first demo I witnessed 30 years ago. There were 2 foot sections of the huge expensive cable and something of the lampcord genre switched by a switch mounted on a portable table. The connections were underneath. The difference was quite large, but being a loudspeaker designer, I recognized that most of the difference was level, not sound quality. I asked to see under the table and examine the switch mechanism. Request denied several times. A 2 or 3 dB difference in level over 2 feet isn't possible, so I had just witnessed my first cable scam. I have heard the difference between a standard AC power cord and a high end unit. Subtle but distinct. However, I wonder how much of the effects some people claim to hear are the result of poor electronics reacting to a change in cable characteristics. We know tube amps are incredibly sensitive to speaker impedance. Is it not possible that gear with poor grounding, unstable power supplies and other design issues is sensitive to different loads of the magnitude JEJ's tests demonstrated, i.e. capacitance and impedance varying by a factor of 10+? If that is true, the better designed electronics would be less subject to cable differences than gear "with more character". Also, the differences are far greater in the analog domain than in the digital domain. Any comments JEJ? Written by JEJ , March 22, 2010 JM - Yes, I completely agree that impedance characteristics could play a major role in the sound of two components linked together by those cables. That is part of the selling point of cable manufacturers, namely, to have low inductance and capacitance. I have to say that the only time a sound improvement was obvious to me was when I changed from 13 gauge lamp cord to Nordost Flatline, which has flat conductors with Teflon dielectric. I could hear more highs, so I suspect the lamp cord was rolling off the high frequencies. But, beyond that experience, I really have never heard anything that I could be absolutely sure was not just my imagination. One exception was some Legenburg cable that seemed to give me more bass, and I don't have any explanation for it. Written by RAllen , April 06, 2010 JM - You heard the difference between an expensive and a regular power cord? How could a power cord possibly affect the sound of a component? Did it somehow tag the electrons so they would all line up in the capacitors and power supply so they'd come out of the transistor/tube in the correct order? Unless the "regular" cable so constrained current flow that it caused a serious voltage drop (which means it would be glowing), there just isn't any possible way it could change anything about the sound. Overpriced cables are silly and power cords are absurd. If it makes them happy... Written by Mike , April 07, 2010 The value of something is determined by what someone is willing to pay for it. Exclusivity and bragging rights are obviously worth a lot to some cable lovers.
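For readers who want a feel for why the capacitance and impedance differences discussed above tend not to matter into well-designed equipment, here is a rough, first-order sketch. The source impedance and per-metre capacitance figures are assumptions chosen only to bracket the discussion (a deliberately high, tube-like output impedance against a tenfold spread in cable capacitance); they are not measurements from the article or from any commenter's system.

```python
# Rough, first-order sketch of the low-pass corner formed by a source's output
# impedance and an interconnect's total capacitance: f_c = 1 / (2 * pi * R * C).
# The impedance and capacitance values are assumptions for illustration only.
import math

def corner_frequency_hz(source_impedance_ohms: float, cable_capacitance_farads: float) -> float:
    return 1.0 / (2.0 * math.pi * source_impedance_ohms * cable_capacitance_farads)

source_z = 1_000.0  # ohms; a deliberately high, tube-like output impedance
length_m = 2.0      # a 2 m interconnect
for pf_per_m, label in ((50e-12, "low-capacitance cable"), (500e-12, "10x higher capacitance")):
    fc = corner_frequency_hz(source_z, pf_per_m * length_m)
    print(f"{label}: corner at about {fc / 1000:.0f} kHz")
# With these assumptions the corners land near 1.6 MHz and 160 kHz -- both far
# above the audio band, which is one reason competently designed gear is not
# very sensitive to ordinary interconnect capacitance.
```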
This will never end Written by rhassle , September 18, 2010 I think that it takes enormous hubris to think that we can reduce the complex interactions between an audio signal traveling through cables connected to complex electronics to a few simple parameters. We don't know what we don't know about the physics of these interactions. Reductionists love to talk about this as though it is simple. I don't care what anyone on this or any other forum thinks they know about cables. I have changed cables throughout my system and better cables yield better sound. I have continued to go up the product lines in my system and I have heard improvements at every step, and they are not subtle. Some of you have spoken about the manufacturers of these cables as though they are holding guns to your heads and making you buy expensive cables. They are not. Others have talked about how they are ruining the audio industry. They are not. Get over all of this and move on to a topic that really does something besides raising the ire of people on both sides of the argument. help the little guy! Written by David Gibbons (Thinking about Home Theater) , February 04, 2011 I want to back up the thought that we need to help the little guy who is shopping at Best Buy. I concur with those who note that dollars spent on better speakers or displays will have a vastly greater effect on the listening/viewing experience than spending those dollars on high-end interconnects for such folks. Stores maximize their profits by selling the high-profit margin interconnects. That's good business. Still, it's not in the best interests of the customer, if the point is the best sound/picture for their hard-earned dollars. The real issue is how to communicate this to the general A/V consumer. The stores and the magazines largely cater to the interconnect makers, as there is money to be made. Consumer Reports has done a bit of work on this topic, and I salute them for their efforts, not least because ordinary consumers look to them for reasoned advice. For the stinkin' rich person who just wants to show off how much money the system cost - well, gold-plated toilets are on offer too. Expensive power cables are ludicrous Written by Stewart McKibben , November 20, 2011 So you go buy power cables for $100's with expensive plugs. Then stick them in a contractor-grade ($0.99) outlet wired through no telling how many other back-wired devices, sharing the circuit with unknown noise sources. Are you nuts? Take the bucks, hire an electrician and install BX (metal sheathed) 10ga wire to the main panel with its own 20A breaker, and set a hospital-grade isolated-ground outlet for the audio system. Now you have slain numerous dragons that can affect your audio quality. If the electrician is industrially proficient he probably has a Fluke analyzer that can show noise on the line. Noisy? Find the source and treat it directly, and/or use a powerline conditioner, or better yet find a big surplus isolation transformer. That is where the smart bucks are spent.
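Stewart's dedicated-circuit suggestion can be sanity-checked with a back-of-the-envelope calculation. In the sketch below, the copper resistances are approximate handbook values and the run length and current draw are hypothetical; the point is only that the resistive drop across tens of feet of in-wall wiring dwarfs anything a one-metre aftermarket power cord can change.

```python
# Back-of-the-envelope check on the dedicated heavy-gauge branch circuit advice.
# The resistances are approximate handbook values for copper wire; the run
# length and current draw are assumptions chosen only for illustration.

OHMS_PER_1000_FT = {"14 AWG": 2.5, "12 AWG": 1.6, "10 AWG": 1.0}  # approximate

def voltage_drop(gauge: str, one_way_feet: float, amps: float) -> float:
    round_trip_feet = 2.0 * one_way_feet            # out and back
    return OHMS_PER_1000_FT[gauge] / 1000.0 * round_trip_feet * amps

run_feet, load_amps = 50.0, 15.0                    # hypothetical run and peak draw
for gauge in OHMS_PER_1000_FT:
    drop = voltage_drop(gauge, run_feet, load_amps)
    print(f"{gauge}: about {drop:.2f} V dropped at {load_amps:.0f} A")
# Roughly 3.75 V for 14 AWG versus 1.50 V for 10 AWG over this example run --
# far more than the difference a 1 m aftermarket cord can make.
```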
flow analysis
The generic name for all analytical methods that are based on the introduction and processing of test samples in flowing media.
PAC, 1994, 66, 2493 (Classification and definition of analytical methods based on flowing media (IUPAC Recommendations 1994)) on page 2496
PAC, 1990, 62, 2167 (Glossary of atmospheric chemistry terms (Recommendations 1990)) on page 2189
IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). XML on-line corrected version: http://goldbook.iupac.org (2006-) created by M. Nic, J. Jirat, B. Kosata; updates compiled by A. Jenkins. ISBN 0-9678550-9-8. https://doi.org/10.1351/goldbook
Your diet in the light of the Quran We are told on an almost daily basis by doctors, nutritionists and health experts to eat healthily, but what does eating healthily mean in the context of the Qur'an and Sunnah, and why should we be concerned with this? Our bodies are entrusted to us as an amanah (trust) from our Lord. The Qur'an was sent to us as an instruction manual and the Prophet Muhammad ﷺ as our guide. As our creator, Allah Most High knows what is best for our bodies; you wouldn't ignore a car manufacturer's instructions not to use petrol in a diesel engine vehicle, yet we are surrounded by many who ignore Allah's instructions on the foods that are most beneficial to us. Allah says in the Qur'an: "Therefore eat of what Allah has given you, lawful and good (things), and give thanks for Allah's favour if Him do you serve." (16:114) Lawful, as we know, applies to those foods permitted in the Qur'an, or those that are halal. We tend to focus on these foods, but what about those that are good? There are several foods mentioned in the Qur'an, amongst them olives, herbs, fish, grapes, garlic, onion, ginger, pomegranates, dates, bananas, cucumbers, figs, honey and many more. Whilst these foods were mentioned by Allah over 1400 years ago, in recent years science has discovered many benefits to our health and wellbeing from these foods. Looking at these benefits in more detail, we are able to understand why we have been instructed to partake of them. Olives and olive oil are mentioned in several places in the Qur'an (95:1, 80:29, 23:20, 16:11). In Surah Al Mu'minoon (23:20) Allah says: "And [We brought forth] a tree issuing from Mount Sinai which produces oil and food for those who eat." So what benefits are to be found in olives? Recent studies looking at different varieties of olives, how they are processed and the changes that take place in their nutrients have shown that Greek-style black olives, Spanish-style green olives, Kalamata-style olives, and many different methods of olive preparation provide us with valuable amounts of many different antioxidant and anti-inflammatory nutrients. Hydroxytyrosol, a phytonutrient found in olives, has long been linked to cancer prevention, and it is now regarded as having the potential to help us prevent bone loss as well. Olives are also known to be a high-fat food; however, the fats they contain are monounsaturated fats, which are considered "good" fats. A diet high in monounsaturated fats but low in saturated fats can lead to a decrease in cholesterol levels and reduce the risk of heart disease. The many benefits of olives are too numerous to list and can be researched in further detail; however, we can see from this that eating olives and olive oil regularly can lead to overall health benefits. Of fish, Allah says in Surah An Nahl (16:14), "And it is He who subjected the sea for you to eat from it tender meat and to extract from it ornaments which you wear. And you see the ships ploughing through it, and [He subjected it] that you may seek of His bounty; and perhaps you will be grateful." Fish is one of the healthiest foods on the planet and is full of nutrients that are important for our health, such as protein and Vitamin D. It is also the best source of omega-3 fatty acids, which are incredibly important for your body and brain. Generally speaking, all types of fish are good for you. They are high in many nutrients that most people aren't getting enough of. This includes high-quality protein, iodine and various vitamins and minerals.
However, some fish are better than others, and the fatty types of fish such as salmon, trout, sardines, tuna and mackerel are considered the healthiest as they are higher in fat-based nutrients such as the fat-soluble vitamin D, a nutrient that most people are deficient in. It functions like a steroid hormone in the body. Fatty fish are also much higher in omega-3 fatty acids which are crucial for your body and brain to function optimally, and are strongly linked to reduced risk of many diseases. To meet your omega-3 requirements, eating fatty fish at least once or twice a week is recommended. Fish is largely considered to be among the best foods you can eat for a healthy heart. Researchers believe that the fatty types of fish are even more beneficial for heart health, because of their high amount of omega-3 fatty acids. Again the benefits of fish are considerable and are an important part of any healthy balanced diet. Lastly, honey is beneficial not only as a food but also for healing. Allah says in Surah An Nahl (16:69), “Then eat from all the fruits and follow the ways of your Lord laid down [for you]. From their bellies comes out a drink of various colours in which there is cure for people. Surely, in that there is a sign for a people who ponder.” Possible health benefits of consuming honey have been documented in early Greek, Roman, Vedic, as well as Islamic texts. Modern science is finding that many of the historical claims that honey can be used in medicine may indeed be true. Honey also possesses antiseptic and antibacterial properties. In modern science, useful applications of honey in chronic wound management have been found in honey. Honey has been shown to help prevent cancer and heart disease. It contains flavonoids that are antioxidants which help reduce the risk of some cancers and heart disease. It can also help reduce ulcers and other gastrointestinal disorders. Honey is known to be anti-bacterial and anti-fungal due to an enzyme added by bees that makes hydrogen peroxide. It is also known to reduce coughs and throat irritation, particularly buckwheat honey. Honey can also help to heal wounds and burns and is now being used as a component in medical dressings. There are many more benefits to health through the use of honey. Whilst we have only looked at a few foods mentioned in the Qur’an and their benefits, we can see that the good derived from these foods for our health is immense. It has been related in the Sunan of Imam Tirmidhi: The final messenger of God, Prophet Muhammad ﷺ mounted the pulpit, then wept and said, “Ask Allah for forgiveness and health, for after being granted certainty, one is given nothing better than health.” We owe it to ourselves and our Creator to ensure that we educate ourselves, take care of our diet, and maintain our health through the food that Allah has blessed us with.
The Brong Ahafo Region, formerly a part of the Ashanti Region, was created in April 1959. It covers an area of 39,557 square kilometres and shares boundaries with the Northern Region to the north, the Ashanti and Western Regions to the south, the Volta Region to the east, the Eastern Region to the southeast and Côte d'Ivoire to the west. It has 19 administrative districts, with Sunyani as the regional capital. The region lies in the forest zone and is a major cocoa and timber producing area. The northern part of the region lies in the savannah zone and is a major grain- and tuber-producing area. The region has a population of 1,815,408, indicating an intercensal growth rate of 2.5 per cent over the 1984 population figure. Enumeration covered all the 17,546 localities in the region. There are 19 districts headed by District Chief Executives who, in turn, are under the political and administrative jurisdiction of the Regional Minister.
Socio-demographic characteristics
The dependency ratio (i.e. the ratio of the non-economically active age groups of 0-14 and 65 and older to the active age group 15-64) for the region is 90.8 in 2000, a reduction from the 1984 figure of 100.8. While the Asunafo, Kintampo and Sene Districts each has a dependency ratio of more than 100, the Sunyani District has the lowest dependency ratio of 73.3. The sex distribution gives a sex ratio of 100.8 males to 100 females; that is, the number of males is almost the same as that of females. There are more females than males for children under five years, except in the Tano District, while males outnumber females in six districts: Tano, Sunyani, Dormaa, Jaman, Berekum and Techiman. The total fertility rate (TFR) for the region is 4.2, which is higher than the national level of 4.0. The Asunafo (5.3), Asutifi (5.1) and Sene (5.6) Districts have TFRs higher than 5.0, while Sunyani (2.8) is the only district with a TFR below 3.0. The mean number of children ever born for the region is 5.8, which is slightly above the figure captured (5.7) in the GDHS 1998. The urban population constitutes 37.4 per cent of the total population of the region. Sunyani, Techiman and Berekum are the only districts with more than 50.0 per cent of the population in urban settlements. The Sene District has the lowest urban population of 8.6 per cent. Out of the 342,808 households in the region, females head 34.3 per cent. The average household size is 5.3, slightly higher than the national average of 5.1. The Sene (6.0), Jaman (6.0) and Atebubu (6.0) Districts have the largest household sizes compared with the Sunyani District, which has the lowest (4.7) in the region. A little more than half (51.3%) of persons aged 12 years and older are in marital unions, and every two out of five have never married. More females than males in all the districts are in marital unions. The proportion married or in a consensual union is 57.6 per cent for the population aged 15 years and older. The Akan constitute the predominant ethnic group in the region and in all the districts, except Sene, where the Guan constitute the largest ethnic group. The Mole Dagbon constitute the second largest ethnic group in the region and in all districts, except Sene and Atebubu. Three out of every five Akans in the region are Brong. Non-Ghanaians constitute less than 3.0 per cent of the population. Christians (70.8%) outnumber all other religious groups in the region. Islam, mostly practised in the Kintampo and Atebubu Districts, has the second largest following.
The largest following of traditional religion, as well as those who profess no religion, are in the Sene District. More females (73.5%) than males (68.2%), profess the Christian faith but the opposite is true for Islam (17.0% males, 15.3% females), traditional (4.9% and 4.4%) and no religion (9.2% and 6.4%). The proportion of the population who have never been to school is 42.4 per cent (37.2% males and 47.7% females). The Sene (64.4%), Atebubu (60.8%) and Kintampo (56%) districts have the highest proportions of persons who have never been to school, while the Sunyani (72.2%), Dormaa (69.1%), and Berekum (68.3%), have the highest percentage of those who have ever attended school. The proportion of illiterates in the region is higher than the national average, by almost 6 percentage points. As with the educational attainment, the level of illiteracy is higher for females than males in all districts in the region. 819,190 persons, representing 79.2 per cent of the population, are economically active, two-thirds (66.4%) of whom are in Agriculture/Forestry/Hunting. With the exception of the Sunyani District, agriculture is the major source of income for households in all the districts. Majority of the economically active are self-employed with or without employees (74.7%), followed by employees (9.7%). Over four fifths (83.0%) of the economically active population work in the informal sector. The phenomenon of children under 15 years engaging in economic activity, is pronounced in the Kintampo, Atebubu and Sene Districts, with activity rates between 18.0 and 22.0 per cent for the age group 7-9 years, and between 34.0 and 45.0 per cent for the age group 10-14 years. Housing and community facilities The region has 9.9 per cent of the total housing stock of the country. The Jaman District has the highest population per house ratio and the Sene District has the lowest. Compound houses (48.1%) predominate in all districts, with the Jaman (63.6%) and Berekum (61.4%) districts having the highest proportions. Corrugated metal sheets for roofing and cement/concrete for walls and floors are the main construction materials used in the region. The Sene, Kintampo and Atebubu Districts, however, have more than half of the houses roofed with thatch, which is the second main roofing material in all the other districts. The river/stream (31.8%) is the main source of drinking water in most of the Districts, followed by the borehole (25.3%) and pipe-borne water (23.5%). The Sunyani District has the highest proportion (55.0%) of households using pipe-borne water, even though the supply is very erratic because of the gradual drying up of the Tano River, which feeds the reservoir of the treatment plant. Modern methods of liquid and solid waste disposal are not practised by majority of the households in the region. The pit latrine and public toilets are the commonest facilities in the region, and liquid waste is mostly thrown onto the street or anywhere outside the house. There are only 24 hospitals in the region, six of which are government-owned, with one quasi-public and 17 privately-owned. Sene is the only district that has no hospital. Other health facilities are health centres (35), rural clinics (106) and maternity homes (54). Traditional healers and healing facilities are wide spread throughout the region and are most accessible to the population than all the other facilities. Telecommunication facilities are poorly and inadequately distributed. The Sene District has no post office but has 2 postal agencies. 
In addition to the fixed telephone lines of Ghana Telecom, a few localities have access to mobile phone services. Junior secondary schools (769) are less than half the number of primary schools (1619). Senior secondary schools (60) are even fewer compared to junior secondary schools. Background of the region Creation of the region The Brong Ahafo Region was created on 4th April 1959 (by the Brong Ahafo Region Act No. 18 of 1959). The Act defined the area of the Brong Ahafo Region to consist of the northern and the western part of the then Ashanti Region and included the Prang and Yeji areas that before the enactment of the Act formed part of the Northern Region. Before the Ashanti Empire was conquered by the British in 1900, the Brong and Ahafo states to the north and northwest of Kumasi (the capital of Ashanti empire and the present Ashanti Region) were within the empire. Nana Akumfi Ameyaw III traces his ancestry to King Akumfi Ameyaw I (1328-63), under whose reign the Brong Kingdom with its capital at Bono Manso grew to become the most powerful kingdom of its time. Indeed oral tradition has it that nearly all the different groups of the Akans, including the Asante, trace their origins to Bono after migrating from the “north”. The first remembered King of the Bono Kingdom is King Asaman, who is credit with leading his Akan people from what may be present day Burkina Faso or even further north, to Bonoland (Buah, 1998). Later migrations led to the Asantes, Fantes, Denkyira and other Akans settling in their present locations. Nana Akumfi Ameyaw is credited with the creation of gold dust as a currency and gold weights as a measure, later developed and adopted by all the other Akan groups, particularly the Asante. Legend has it that he even supported his yam shoots with sticks made of pure gold. It was when King Opoku Ware of Asante defeated Bono in 1723 and destroyed Bono Manso that the capital moved to Techiman (Takyiman). Techiman and other Bono states therefore came under the Asante Empire until 1948 when Akumfi Ameyaw III led the secession of Bono from Asante, supported by other Bono states such as Dormaa. The most significant change the British administration in Ashanti brought to the people of the Brong and Ahafo states until 1935 was that it made them independent of Kumasi clan chiefs (Busia, 1951, pp. 165-166). The British administration worked out a strategy that severed the interference of the Kumasi clan chiefs with the internal affairs of the Brong and Ahafo states. When the Ashanti Confederacy was restored in 1935 by the British administration, however, most of the Brong and Ahafo states saw that their independence from Ashanti was being threatened, because by restoring the Ashanti Confederacy, they were to revert to their former overlords in Kumasi. Though the Brong states joined the Ashanti Confederacy, most of them were not happy with the re-union because they felt their long historical association with Ashanti had brought them nothing. The opportune time came when in 1948 Nana Akumfi Ameyaw III, the Omanhene (paramount chief) of Techiman led Techiman to secede from the Ashanti confederacy (Austin, 1964, p. 294). The secession of Techiman was supported by some of the Brong states and this led to the formation of the dynamic Brong political movement, Brong Kyempem Federation. 
The movement was formed in April 1951 at Dormaa Ahenkro under the auspices of the Dormaa State. The main objective of the movement was to struggle for a separate traditional council and a separate region for the Brong Ahafo states. The name of the movement was later changed to the Brong Kyempem Council. In March 1955, the Prime Minister informed the National Assembly that the government was considering "the possibility of setting up a Brong Kyempem Council" to fulfil the desire of the Brongs for the establishment of a development committee for their area and that the government would "examine the case for the establishment of two administrative regions for Ashanti". In March 1959, the Brong Ahafo Bill was passed under a certificate of urgency by Parliament. The Brong Ahafo Region Act was enacted after receiving the Governor General's assent, and Sunyani was made the capital of the new region.
Brong Ahafo, with a territorial size of 39,557 square kilometres, is the second largest region in the country (16.6%). The region shares boundaries with the Northern Region to the north, the Volta and Eastern Regions to the south-east, the Ashanti and Western Regions to the south, and Côte d'Ivoire to the west. The central point of the landmass of Ghana is in the region, at Kintampo. The region has a tropical climate, with high temperatures averaging 23.9°C (75°F) and a double maxima rainfall pattern. Rainfall ranges from an average of 1,000 millimetres in the northern parts to 1,400 millimetres in the southern parts. The region has two main vegetation types: the moist semi-deciduous forest, mostly in the southern and southeastern parts, and the guinea savannah woodland, which is predominant in the northern and northeastern parts of the region. The level of development and variations in economic activity are largely due to these two vegetation types. For example, the moist semi-deciduous forest zone is conducive to the production of cash crops, such as cocoa and cashew. Brong Ahafo is one of the three largest cocoa producing areas in the country, mainly in the Ahafo area, which shares a common border with western Ashanti. A lot of the cashew in Ghana is produced in Brong Ahafo, some of which is processed into brandy and cashew wine at Nsawkaw in Wenchi. Timber is also an important forest product, produced mainly in the Ahafo area around Mim, Goaso and Acherensua. Other cash crops grown in the forest area are coffee, rubber and tobacco. The main food crops are maize, cassava, plantain, yam, cocoyam, rice and tomatoes. Yam production is very high in the guinea savannah zone, around Techiman, Kintampo, Nkoranza, Yeji, Prang and Kwame Danso.
Tourist attraction sites
The ecology of the region has produced lots of tourist attractions. Some rivers create beautiful tourist sites as they flow over rocky landscapes. The Pumpum River falls 70 metres down some beautiful rocky steps to form the Kintampo Falls, as it continues its journey towards the Black Volta. The Fuller Falls, 7 kilometres west of Kintampo (the centre point of the country), also provide scenic beauty as the River Oyoko gently flows over a series of cascades towards the Black Volta. Another scenic site is the River Tano Pool, which houses sacred fish that are jealously protected by the local community who live along the river near Techiman. There is also a pool on the Atweredaa River, which runs through the Techiman market. Another type of tourist attraction is caves, sanctuaries and groves.
The Buabeng-Fiema Monkey sanctuary, located 22 kilometres north of Nkoranza, covers a forest area of 4.4 square kilometres. It serves as home for black and white colobus and mona monkeys. The forest also provides a natural habitat for different species of butterfly. Buoyem caves, which are hidden in a dry semi-deciduous forest, house a large colony of rosetta fruit bats. The Pinihini Amovi caves are also historic underground caves near Fiema The tourist attraction sites in the region cannot be complete without mention of the Tanoboase Sacred Grove. It is believed that the grove is the cradle of Brong civilization. The grove served as a hideout to the Brongs during the 18th century Brong-Ashanti wars. It is currently used for hiking and rock climbing. The Bui National Park, stretching from Atebubu through Banda to the proposed site of the Bui Dam, is home to many rare wildlife and vegetation. Part of the Volta Lake flows through the region and Yeji, Prang, and Kwame Danso are important towns along the banks of the lake, which can serve as growth poles for tourism development in the region. Political and administrative structure Brong Ahafo has 19 administrative districts, with District Chief Executives (DCEs) as the political heads. The DCEs are assisted by District Co-ordinating Directors (DCDs) who are responsible for the day to day running of the districts. The DCEs work under the Regional Minister (the political head of the region), while the DCDs are under the Regional Coordinating Director. Sunyani is the administrative headquarters of the region, where the Regional Minister resides. The legislative wing of the political and administrative structure is the District Assembly. One third of its membership is appointed by Government in consultation with local leaders, while the remaining are elected on non-party lines. The District Assembly elects its own Presiding Member. The District Assemblies are divided into Town and Area Councils, depending on the population and land area of the district. A compact settlement or town with a population of 5,000 or more qualifies to have a Town Council status. An Area Council is made up of 2 or more towns which when pulled together has a population of 5,000 or more. The region has 37 Town Councils and 106 Area Councils. Eight of the districts bear the name of the district capital, with the remaining five (Asunafo, Asutifi, Tano, Jaman and Sene) named after geographical land marks or historical events. Another aspect of the political and administrative structure relates to constituencies and areas for electoral purposes. The region is divided into 21 constituencies, which are further subdivided into 582 electoral areas or electoral units. These electoral areas consist of 2,292 basic units called polling stations. Each of eight districts has two constituencies with the remaining five having one constituency each. Wenchi, one of the districts with two constituencies has the highest number of electoral areas (54), electoral units (214) and polling stations (223). Seven districts have 48 electoral areas each. The Sene district has the least number of electoral areas (30) and polling stations (98). There has been the need for the creation of six new districts. Cultural and social structure Ghanaians by birth and parenthood constitute 94.0 per cent of the population of the region. This is higher than the national proportion of 92.2 per cent. 
Naturalized Ghanaians constitute an additional 3.4 per cent, while other ECOWAS nationals make up 1.9 per cent, with other Africans and non-Africans being 0.8 per cent. The sex composition of Ghanaians by birth indicates that there are more female Ghanaians by birth than males, while there are more male non-Ghanaians than females. Nearly 71 per cent of the population were born in the localities where they were enumerated, with a further 7.5 per cent born in another locality within the region. The rest of the population originate from outside the region, with most of them from the regions which share a border with the region. Favourable climatic conditions, an abundance of arable land and proximity may be factors that attract people from the north. The predominant ethnic group is the Akan (62.7%), followed by the Mole-Dagbon (15.4%) and Grusi (4.2%), as shown in Figure 1.1. Within the Akan group, the Brong (Bono, including Banda) are the largest subgroup (61.4%), followed by the Asante (13.3%) and Ahafo (9.5%). Among constituents of the Mole-Dagbon group, the Dagaaba are the largest (44%) subgroup. Christianity has the largest following (71.0%), followed by Islam (16.1%) and traditional religion (4.6%). A significant proportion (7.8%) reported affiliation with no religion. Catholics are the largest denomination of the Christian faith (22.6%), followed by Pentecostal/Charismatic (20.8%) and Protestant (17.0%). More females (73.5%) than males (68.2%) profess the Christian faith. The reverse is true for Islam, traditional religion and those with no religion. Education is an important determinant of the quality of manpower. As such, the educational level of the population, to some extent, reflects the level of social and economic development of a country or a community. It is also well known that education constitutes one of the most important factors influencing demographic behaviour and the level of fertility of a population. Statistics on literacy provide a measure of progress in educational development and are necessary in planning for the promotion of adult literacy. Literacy is defined as the ability to read and write in any language and relates to those aged 15 years and older. 48.5 per cent of the population of the region aged 15 years and older are not literate. This picture is better than only that of the three northern regions, where the illiteracy level is more than 70.0 per cent. Since much information is written and transmitted in English, the effective literacy level is based on those literate in English and a Ghanaian language. This means that the effective literacy level for the region is 49.0 per cent, which is lower than the national average of 54.5 per cent. Information flow in the form of posters, brochures and written adverts will be seriously hampered because of the low literacy level. There are significant differences between the sexes both in the proportion not literate and in the proportion literate in English and a Ghanaian language. Among males, 41.1 per cent are not literate, which is far lower than the corresponding figure for females (56.0%). A little over two fifths of the population (42.0%) aged six and older have never been to school, as shown in Fig 1.4. The proportion of the population that has attained primary (22.3%) and middle/JSS (23.3%) is almost the same; only 11.2 per cent have attained a level above middle/JSS.
The education attainment is the same for males and females at the pre-school level (1.2% each) and the primary school level, (22.5% males and 22.0% females). Above these two attainment levels, male attainment is higher than that of females at each subsequent level. This low attainment level for females has implication for the economic characteristics of the population as well as fertility behaviour. A higher percentage of females (68.5%) than males (63.9%) are currently in pre-school and primary school. The percentage of males (60.2%) is lower than that of females (64.3%) at the primary school level but the pattern changes to that of a higher percentage of males than females, at each subsequent higher level after the primary school level, (Figure 1.5). More than three fifths (62.1%) of those currently in school are in the primary school, followed by those in middle/JSS (22.4%). The proportion of the population currently at the post-secondary level (1.3%), (including training college, nursing, etc.), is the lowest. The total population of the region is 1,815,408, representing 9.6 per cent of the country’s population. The region is more populous than only four other regions though it is the second largest in terms of land area. The region’s population density of 45.9 persons per square kilometre is denser than that of only two regions, Upper West and Northern. It has a balanced sex ratio of 100.8 males to 100.0 females, with 37.4 per cent of the population living in urban areas. The population has a broad base (0-4) and thereafter decreases gradually with age; this is true for both males and females. From the cumulative frequencies, more than 50.0 per cent of the population of both sexes are less than 20 years, with less than 11.0 per cent being 50 years or older. The currently married and those in consensual union constitute the majority of the 1,033,609 persons who are 15 years or older in the region, followed by the never married. The once married but no more in a stable marital relation constitute 10.0 per cent of the marriageable population. The proportion of never married males (40.2%) is higher than that of females (24.4%). On the other hand, the proportion of married females (50.5%) or in a consensual union (10.5%) is higher than that for males (46.5% and 7.7%). Similarly, the proportion of females once married (14.6%) is higher than that for males (5.6%). The main occupation of the workforce of the region is Agriculture and related work (66.4%) for both sexes. The rural/urban occupational distribution also shows the dominance of Agriculture. Production and Transport Equipment work (11.3%), Administrative and Managerial work (0.2%), and Sales work (7.6%) are the other three occupations that stand out. Between the sexes, a significant difference in the occupational distribution is observed in the Sales work for females (10.8%) and males (4.4%), while Clerical and related work and Production, Transport and Equipment work are more common among males than females. The three major industrial activities in the region are: Agriculture/Forestry/Hunting (68.6%), Manufacturing (6.7%) and Wholesale/Retail trade (7.4%). Male predominance is observed in Construction, Financial Intermediation, Public Administration, and Education in all districts. On the other hand, a higher percentage of females than males are engaged in Wholesale/Retail trade, Hotels and Restaurants, Private Households and other Community, Personal and Social Service activities. 
Employment status and sector About three quarters of the population (74.6%) are self-employed with no employees, followed by employees (9.7%) and unpaid family workers, (6.4%) in that order. This picture is the same for both sexes. About 83.0 per cent of the working population is in the private informal sector, and the proportion in the public sector is low 5.1 per cent. Such an employment structure accounts for the tax net being narrow and poses a challenge to effective mobilisation of taxes. The self-employed without employees, are mainly very small one-person businesses with a small capital base. Such a situation does not promote rapid economic growth and expansion, as all such businesses are non-competitive and operate at subsistence level. The coverage of population statistics is quite comprehensive, detailing the number of persons in the population (size), the spatial distribution, the sex-age structure, growth or decline of the total population. These population parametres are further divided into much detailed categories to cover nationality, ethnicity, religion, marital status, place of birth, literacy, educational attainment and many others. Changes in the population brought about mainly by births, deaths, in-migration and out-migration are important in the study of the characteristics of the population. The mechanism of population change therefore constitutes an important aspect of demographic analysis. Population size, growth rates and density The population of the region is 1,815,408, accounting for 9.6 per cent of the country’s total population. The population of the region therefore experienced a decline in growth between 1984 and 2000. This is further buttressed by the fact that unlike previous inter-censal periods when its growth rate exceeded the national average, the rate for the 1984-2000 period was lower than the national. The region’s population density of 45.9 persons per square kilometre in 2000 is lower than the national figure (79.3 persons/km²) and higher than those for only Northern (25.9 persons/km²) and Upper West (31.2 persons/km2), which is similar to the situation in 1970 and 1984. There is therefore not much pressure on land, even though there has been a gradual increase in population density over the years, from 15/sq km (1960), 19/sq km. (1970) and 31/sq km (1984) to 45.9/sq km in 2000. The population densities and the inter-censal growth rates between the previous censuses cannot be calculated for the districts, because boundaries of administrative units within regions have changed between censuses. The 1984 census was conducted with 140 local councils as administrative units, which had different boundaries from the 110 districts in the 2000 Census. For example, two local councils, Goaso and Kukuom were combined to form the Asunafo District, Bechem and Duayaw Nkwanta local councils as Tano District while Kintampo which did not exist as a local council in 1984 was carved out from portions of Wenchi, Atebubu and Nkoranza local council areas. Age and sex structure The age structure of the population for the country indicates a broad base that gradually tapers off with increasing age. This national picture is reflected at both the regional and district levels. A large proportion (43.1%) of the region’s population is under 15 years, with a small proportion (4.5%) older than 64 years (Fig 2.1). 
The proportion under 15 years for the region is higher than that for the total country (41.3%) but shows a decline of 3.4 percentage points from the corresponding 1984 figure. The reverse is the case for the elderly population, with the proportion lower than the total country's (5.3%) but greater than the 1984 figure by 0.8 percentage points. The region has only slightly more males than females, with a sex ratio (males to 100 females) of 100.8. Indeed, the Region and the Northern Region are the only two Regions at the national level with an almost equal proportion of males and females. Eight of the districts have sex ratios of more than 100. There are, however, more females than males among children under five years, in the reproductive ages between 20 and 39 years, and among the elderly (over 70 years). The age group 25-29 years has the lowest sex ratio of 90.0, while the age group 45-49 years has the highest of 121. The low sex ratio for the 25-29 years age group could be due to out-migration of males of that age to seek employment elsewhere, particularly to the Western Region as tenant cocoa farmers. Indeed, the largest concentration of Brongs outside of Brong Ahafo is in the Western Region (59,520). Dependency ratios show the relative predominance of persons in the dependent ages (youth under 15 years and persons 65 years and older) relative to those in the productive ages (15 to 64 years). The dependency ratio for the region reduced from 100.8 in 1984 to 90.5 in 2000. Though the reduction is significant, the ratio is higher than the national figure (87.1). There is therefore an improvement in the dependency burden on the active population in the region, though the situation may still be unsatisfactory if compared with the other regions. Only three districts (Asutifi, Kintampo and Sene) have dependency ratios of more than 100, meaning each person in the productive ages had more than one person to support. Three districts, Techiman (81.3), Dormaa (85.5) and Berekum (86.6), however, have ratios below the regional average. In addition, Sunyani, the most urbanised district, has the lowest dependency ratio of 73.3.
2.4 Birthplace and migratory pattern
Migration is one of the three components of population dynamics. Inter-regional movements may be a crude method for measuring migration patterns, but they nevertheless provide crucial information on population movements. More than three quarters (78.2%) of Ghanaians born in the region were enumerated in the region. This indicates that the majority of the people in the region are usual residents. This proportion is higher than those of only two regions, Greater Accra (68.7%) and Western (70.7%). It is also higher for females (79.8%) than for males (76.6%) in the region. About a fifth (21.3%) of the region's enumerated population were born in other regions; more than half of these (12.0% of the total) were born in the three northern regions. The male population tends to be more migration-oriented than the female population in the region: more than three-quarters (76.6%) of males and about four-fifths (79.8%) of females born in the region were enumerated in the region. A greater proportion of males (13.2%) than females (10.9%) were born in the three northern regions.
About the same proportion of males (0.7%) and females (0.6%) were born in Ashanti, compared with 4.1 per cent of males and 4.0 per cent of females born in the Eastern Region. The remaining proportion of males (4.9%) and females (4.2%) were born in the remaining six regions. Movement from Upper West into Brong Ahafo was more pronounced than from the other two northern regions: more than two fifths (44.2%) of the persons of northern origin enumerated in the region were born in Upper West.

The region (37.4%) is the fourth most urbanised, coming after the Greater Accra (87.7%), Ashanti (51%) and Central (37.5%) Regions. Only four districts, Sunyani (73.8%), Techiman (55.7%), Berekum (54.7%) and Tano (43.2%), have levels of urbanisation above the regional average, with Sunyani, Berekum and Techiman having much higher proportions of urban than rural population; Sene has the highest proportion of rural population.

The level of urbanisation is influenced by the growth of some localities in the districts, especially their capital towns. The district capital’s share of the district population in the three most urbanised districts, Sunyani (34.6%), Berekum (42.5%) and Techiman (32.2%), for instance, accounts for more than half, or nearly half, of the urban population. Sunyani, being the regional capital, has a good infrastructure base, which attracts migrants; Techiman is a major market centre and a nodal town or entrepôt, where roads from the three northern regions converge. Trunk roads from Sunyani, Kumasi, Wa and Tamale all meet at Techiman, thus making it a bustling food crop market and commercial centre. Berekum’s urban nature can be attributed to good infrastructure and a concentration of wood-processing firms, as well as important educational and health institutions; the Berekum Secondary School, the Techiman Training College and the Holy Family Hospital all serve a large catchment area.

The four largest localities in the four most urbanised districts account for nearly half or more of the district population. The four largest localities in the three least urbanised districts, Sene (18.2%), Asutifi (24.8%) and Kintampo (29.7%), however, account for less than a third of the district’s population. The increase in urban population since 1984 has been due partly to some urban centres growing rapidly, while some rural towns have grown even more rapidly into new urban centres. For example, Kenten, which is now the second largest urban town in Techiman and the twentieth in the region with a population of 10,599, was a small village with a population of 265 in 1970, which grew to 828 in 1984. The growth may be largely due to population spillover from Techiman (56,187), which is close to Kenten, as a result of real estate development and expansion in trade. Similarly, Sampa, a border town in the Jaman District, has grown from a population of 3,906 in 1970 to 11,348 in 2000, mainly due to cross-border trade with Côte d’Ivoire.

Fertility and child survival

Fertility is one of the most important components of demographic change. It is the frequency of childbearing among the population, and fertility rates measure the relative frequency with which births occur within a given population. Four conventional measures of fertility, the crude birth rate (CBR), the general fertility rate (GFR), the total fertility rate (TFR) and the mean number of children ever born (MCEB), are discussed for the region and districts.
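As a rough illustrative sketch of how these conventional measures are computed, the following uses hypothetical figures only (they are not taken from this report); the exact reference periods and age ranges used in the census are described below.

```python
# Illustrative computation of the conventional fertility measures discussed in
# this section. All input figures are hypothetical, not census results.

births_last_12_months = 1_200        # births in the last 12 months to women aged 12-49
total_population = 30_000
women_15_44 = 7_000

# Hypothetical age-specific fertility rates (births per woman per year)
# for the 5-year age groups 15-19, 20-24, ..., 45-49.
asfr = [0.08, 0.18, 0.20, 0.16, 0.10, 0.05, 0.02]

cbr = births_last_12_months / total_population * 1_000   # crude birth rate
gfr = births_last_12_months / women_15_44 * 1_000        # general fertility rate
tfr = 5 * sum(asfr)                                      # total fertility rate

print(f"CBR = {cbr:.1f} births per 1,000 population")
print(f"GFR = {gfr:.1f} births per 1,000 women aged 15-44")
print(f"TFR = {tfr:.2f} children per woman")
```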
The CBR, GFR and TFR are based on births in the last 12 months to women aged 12-49 years and are computed, respectively, per 1,000 population, per 1,000 women aged 15-44 years, and per woman aged 15-49 years. The MCEB is the mean number of children ever born to women aged 15-49 years. The CBR measures the contribution of current fertility to the overall population, while the GFR relates current births to women in the reproductive ages. The TFR is the number of children a woman would have from age 15 to 49 (the childbearing ages) if she were to bear children at the prevailing age-specific fertility rates (ASFRs).

Current fertility

Fertility varies not only with age but also with other factors such as marriage, area of residence and educational attainment. Fertility differentials can therefore be studied in terms of economic and social characteristics. Sunyani and Berekum, the most urbanised districts, have the lowest fertility levels, while the more rural Sene and Asunafo have the highest levels. Other districts with relatively high rates are Asutifi and Atebubu. All three rounds of the Ghana Demographic and Health Survey (1988, 1993 and 1998) confirm that fertility indicators are lower for urban than for rural women, and also that the higher the educational level of the woman, the lower the fertility indicator. These factors may account for the relatively low fertility in urbanised districts such as Sunyani and Berekum, and the higher levels in rural districts such as Sene, Asunafo, Asutifi and Atebubu.

The total fertility rate is a summation of the age-specific fertility rates, so the latter help to examine the frequency of childbearing from one age group to another and to understand the current childbearing performance of women in the reproductive age groups. The data show that, for every age group, Sunyani and Berekum (the most urbanised districts) have lower fertility than all other districts. This underlines the fact that urban women generally tend to delay and space childbearing because of education, economic and other activities, which are incompatible with high fertility. In contrast, the more rural districts, Sene, Asunafo and Asutifi, generally have higher age-specific fertility levels.

The survival rate for the region is 82.3 per cent, implying that about 18 per cent of children born to women aged 12-49 years had died. Survival rates for the districts range from 79.8 per cent in Wenchi to 85.1 per cent in Asunafo, showing that child survival in the region is high, with little distinct differential. The reasons for this must be identified and the favourable factors sustained, while at the same time intensifying fertility-reducing programme activities.

The region recorded 342,808 households, constituting 9.3 per cent of the total number of households in the country. Females head more than a third (34.3%) of the households, the same proportion as for the total country. Household heads constitute 18.9 per cent of the region’s population. The average household size for the region is 5.3, higher than the national average of 5.1. Asunafo, Wenchi, Kintampo, Atebubu and Sene have less than one third of households headed by females, with Berekum having the highest proportion. Berekum also has the highest proportion of females who have been in a union before but are currently not in one; this might be the reason for its high proportion of female-headed households. While Dormaa and Sunyani have average household sizes of less than 5, Jaman, Atebubu and Sene have the largest average household size of 6, followed by Kintampo with 5.8. Under a fifth (18.9%) of household members are heads (including temporary heads).
This ranges at the district level from 16.6 per cent in Jaman and 16.8 per cent in Atebubu to 21.2 per cent in Sunyani. Children of heads of household form 40.0 per cent of household members, with corresponding proportions ranging from 36.6 per cent in Berekum to 44.9 per cent in Sene. Other relatives (20.5%), who constitute the next highest percentage of household members in each district, range from 17.9 per cent in Asunafo to 27.5 per cent in Berekum. The highest percentages of spouses of heads of household are in Sene (10.5%) and Atebubu (10.1%), followed by Kintampo (9.7%), Techiman (9.3%) and Nkoranza (9.2%); Dormaa (6.7%) and Berekum (6.4%) have the lowest percentages of spouses in households in the region. Grandchildren (7.4%) form the next highest proportion among the different categories of relatives of heads of household; they account for more than 7.0 per cent of household members in seven of the 13 districts, compared with Kintampo (4.8%), Atebubu (4.7%) and Sene (4.5%). There are at least 2.0 per cent of non-relatives in households in each of the 13 districts in the region. The household composition and structure in the region indicate that the traditional family structure still exists in the region.

More than half (57.6%) of the population aged 15 years and older in the region are in a marital union. Nearly a third have never married. The proportions of females who are married or in a loose union in the region are higher than the corresponding proportions for males. The proportion of never-married males is higher, by 15.8 percentage points, than that of females, while the proportion of females who have been in a union before (14.6%) is higher than the proportion of males (5.6%) in that category. The same picture can be observed in all districts in the region. The high proportion of females in this category (divorced, widowed, separated) may be due to several factors, including the fact that polygamous males who divorce one wife are still recorded as married and that males who are divorced or widowed are more likely than females to remarry.

Sunyani, Dormaa and Berekum, all major urbanised districts, have the lowest proportions of the married, with Kintampo, Atebubu and Sene having the highest proportions for both sexes. As explained earlier, urbanisation and educational level, which are linked to the mean age at first marriage, may be the reasons for the differences in marital status among the districts. More than two out of five persons in four districts, Sunyani, Jaman, Berekum and Techiman, have never married. The data further show that the marital status patterns for females and males in all the districts follow the same pattern as those for the region.

Nationality and Ethnicity

The composition of the population by nationality is summarised below. More than 97 per cent of persons in the region are Ghanaians, with 94 per cent being Ghanaian by birth. The proportion of Ghanaians by birth in the districts ranges from 91 to 97 per cent, with Sunyani having the highest (96.7%). Ghanaians by naturalisation constitute between 5 and 6 per cent of the total populations of Sene, Kintampo, Nkoranza, Jaman, Dormaa and Asutifi. Atebubu District has the highest proportion of other ECOWAS nationals (3.8%), while Berekum has the highest proportion of other African nationals (1.5%) and non-Africans (1.2%).
Foreign nationals are heavily involved in wood-processing activities, which may account for the small but significant proportion of non-Africans in Berekum, where wood processing is one of the main industrial activities. Berekum also has some religious organisations, mainly Catholic, and other foreign NGOs with significant expatriate personnel carrying out social work. The high proportion of ECOWAS nationals in Atebubu (3.8%) and Sene (2.8%) is difficult to explain, since the districts do not share a border with any of the neighbouring countries; it may be due to migration from Togo, Benin, Burkina Faso and Côte d’Ivoire.

The predominant ethnic group in the region and in all the districts is Akan, except in Sene where the Guans predominate. Apart from Sene and Atebubu, where the Ewes and Gurmas are the second predominant ethnic groups, the Mole-Dagbon ethnic group is the second largest in all the other districts. Three other groups of northern origin, Gurma, Grusi and Mande-Busanga, constitute one-tenth of the region’s population. Ethnic groups of northern origin therefore make up slightly more than a quarter of the region’s population. The large proportion of Ewes in Sene is due to the fishing activities along the region’s side of the Volta Lake. The presence of the Guans in large proportion in Atebubu and Sene may not be due entirely to migration; that part of the region was formerly part of the Northern Region, inhabited by the Gonjas, one of the Guan sub-groups, before it was made part of Brong Ahafo in 1959.

More than three-fifths of the Akans in the region are Brongs. Asantes and Ahafos are two other recognisable Akan groups in the region. Dagaabas constitute the highest proportion of the Mole-Dagbons; three other ethnic groups, Kusasi, Nabdom and Dagomba, constitute more than one third of the Mole-Dagbons. The remaining groups from the south, Guans, Ewes and Ga-Dangme, are less than one tenth of the region’s population.

The distribution of the population by the various religious denominations in the region is nearly the same as for the total country, except that traditional religion and no religion exchange places in the ranking. Christianity (70.8%) has the largest following, while Islam (16.1%) and no religion (7.8%) are the significant others. Another departure from the national pattern is that Catholics (22.6%) outnumber Pentecostals (20.8%). Brong Ahafo has a strong Catholic legacy, with many Catholic institutions, including 7 hospitals in 7 districts. It is therefore no surprise that the Church chose Fiapre in the Sunyani District for the establishment of the first Catholic University in the country.

Christianity has a large following in all districts. Over four-fifths of the population in Berekum (87.4%), Jaman (83.9%), Sunyani (80.9%) and Dormaa (80.3%) are Christians. The Protestant churches (28.6%) have the largest following in Berekum, followed by the Pentecostal churches (28.0%). Pentecostals outnumber Catholics in eight districts, most prominently in Sunyani where the difference is more than 10 percentage points. Jaman has the largest proportion of Catholics; nearly two out of every five people there are Catholics. Though more than half of the population in Atebubu (50.5%), Kintampo (51.4%) and Sene (56.6%) profess to be Christians, the proportion of Christians in these districts is low compared with the other districts. Islam is practised mainly in Kintampo (29.7%) and Atebubu (24.4%), where Moslems outnumber the two most professed Christian denominations, Catholics (21.4%) and Pentecostals (17.6%).
The Moslems are mainly Mole-Dagbon, who are quite a substantial group in these districts. Techiman (20.7%) and Wenchi (20.0%) also have sizeable numbers of Moslems, though Catholics outnumber them. Islam (6.1%) and traditional religion (10.6%) are least practised in Berekum. Traditional religion is most practised in Sene (18.8%), followed by Atebubu (15.7%) and Kintampo (10.0%). Sene also has the largest proportion professing no religion (13.6%); in that district, traditional religion ranks second after Pentecostal, while no religion ranks fourth after Catholic. Nkoranza also has more than one tenth (11.6%) of its population professing no religion.

The proportion of females professing the Christian faith (73.5%) is higher than that of males (68.2%) in the region, in all districts in the region and in the total country. Apart from Catholics in Sunyani and Berekum, where the proportion of males is higher than that of females, and Sene, where the proportion of male Pentecostals is higher than that of females, the proportion of females is larger than that of males in all three major Christian denominations in all districts. On the other hand, the proportion of males professing Islam, traditional religion and no religion is higher than that of females in all districts of the region.

Educational attainment and literacy

Statistics on educational attainment help in assessing the present educational levels of the adult population as well as the anticipated future requirements of educated manpower for various types of economic activity. Such data are useful for policy makers to plan the development and improvement of educational systems on the one hand, and to plan economic development programmes in the light of manpower requirements, on the other.

More than two fifths (42.0%) of the population aged 6 years and older have never been to school, a very discouraging picture. Disparity in educational attainment is pronounced among the districts in the region. The proportion of the population that has never been to school is high in all districts, but it is much higher in some districts than in others. Thus, more than three fifths of the population of Sene (63.9%) and Atebubu (60.3%), and a little more than half (57.6%) of the population of Kintampo, have never been to school. All the other districts have less than half of their population never having attended school, with Sunyani having the lowest proportion (27.8%). The starting age for the first level of formal education in Ghana is six years. Pre-school, which comprises nursery and kindergarten for ages below six years, is now gaining popularity in the country. The 2000 Census shows that 1.2 per cent of the population older than six years are in pre-school.

The disparity in educational attainment between the sexes is glaring. The proportion of males who have attained the primary through to the tertiary level is higher than the proportion of females in all districts. The proportion of females who have never been to school, or have not gone beyond pre-school, is larger than that of males. Among the female population who have ever been to school, the highest level attained by the largest proportion is the primary level (23.0%), followed closely by middle/JSS (21.1%). However, in six districts, Tano, Sunyani, Dormaa, Berekum, Wenchi and Techiman, middle/JSS is the highest level attained by the largest proportion, closely followed by the primary level. For males, the highest level attained by the largest proportion in all districts is middle/JSS, except in Kintampo, Atebubu and Sene, where it is the primary school level.
Current school attendance

The proportion of females attending primary school (64.2%) is higher than that of males (60.1%) at the regional level. However, at the middle/JSS, SSS and higher levels, the proportion of males exceeds that of females at every level. This is also true for all districts except Sunyani and Berekum, where the female proportions for middle/JSS (24.2% and 22.9%) are slightly higher than those for males (24.0% and 22.3%), respectively.

Most information is transmitted in written form, and the ability to read and write is therefore essential. The proportion of the population not literate (48.5%) in the region is higher than the national average (42.1%). The level of literacy for the region in all four language categories (English only, Ghanaian language only, both English and a Ghanaian language, and other languages) is also lower than the national level.

Literacy (15 years and older) by district

Sene has the highest proportion illiterate (71.4%) and Sunyani the lowest (32.0%). Sunyani also has the highest proportion of the population literate in both English and a Ghanaian language (51.7%). The level of illiteracy is higher for females than for males in all the districts. Apart from Ghanaian languages, the level of literacy in the other language categories is higher for males than for females. Four districts, Asutifi, Jaman, Kintampo and Atebubu, have a higher proportion of males than females literate in a Ghanaian language. Sunyani has the lowest level of illiteracy for males (26.0%), followed by Berekum (26.3%). Atebubu and Sene have more than three-fifths of the male population, and more than three-quarters of the female population, not literate. Sunyani (51.7%), Tano (46.8%) and Berekum (46.7%) have the highest proportions of the male population literate in both English and a Ghanaian language, while Atebubu (16.3%) and Sene (15.7%) have less than a fifth of the male population literate in both languages. Sene has only one tenth (10.2%) of the female population literate in both English and a Ghanaian language.

Economic goods and services are produced and supplied to the market through the earning activities of the population. Statistical data on economic activities and the economic characteristics of the population are therefore essential for social and economic development planning.

Type of activity

About 70.2 per cent are employed and 3.2 per cent had jobs but were not at work during the reference period. Only a small proportion, 5.8 per cent, are unemployed. The level of the working population (that is, the employed and those with jobs but not at work) ranges from a low of 65.7 per cent in Sunyani to a high of 83.0 per cent in Sene. Apart from Sunyani, three other districts, Asutifi (69.9%), Tano (69.9%) and Berekum (65.9%), have proportions below 70.0 per cent; all the others have proportions above 70.0 per cent. Among the districts there are significant variations in the proportions unemployed. About half of the 13 districts have proportions unemployed lower than the regional average of 5.8 per cent. Of the rest, Asutifi (9.4%), Berekum (9.2%) and Tano (8.3%) have relatively high levels of unemployment.

The data also show that students form a large proportion of those who are not economically active (8.8%). Higher proportions of students are found mainly in the Jaman (11.5%), Sunyani (11.4%) and Berekum (10.1%) districts, which have high proportions of the school-age population in school. As expected, Sene (5.4%) and Kintampo (5.8%) have low proportions of students.
The homemaker category constitutes only 5.5 per cent. This is fairly evenly distributed among the districts, with the exception of Sunyani, which has a relatively high proportion (8.1%), and Sene, with quite a low proportion (3.7%).

Age-specific activity rates present a clear picture of the proportion of the economically active population in each age group. Kintampo (53.8%), Atebubu (55.0%) and Sene (66.4%) have the highest activity rates for the two age groups below 15 years, while Sunyani (15.3%) has the lowest. The high activity rates for the youth in Kintampo, Atebubu and Sene reflect the fact that more than three-fifths of the population in the Sene (63.9%) and Atebubu (60.3%) Districts, and 57.5 per cent in the Kintampo District, have never been to school. Age groups between 30 and 60 years have activity rates over 90.0 per cent in all districts. The activity rate for the population 75 years and older is between 50.0 and 70.0 per cent, with the highest in the Kintampo District (69.9%) and the lowest in the Sunyani District (50.2%). With the lack of adequate welfare schemes for the aged in the country, apart from the social security scheme run by SSNIT, which is patronised by the formal sector and a small proportion of informal-sector employees, the aged are compelled to work if there is no support from children or family members. Old age as a cause of inactivity constitutes an average of 11.2 per cent, against a low proportion of 1.9 per cent for the retired. This means there may be many of the aged who are not adequately covered by a pension, probably because of their employment status during their working years, and who therefore work in their retirement years. The proportion of persons with disability was higher than that of the retired in all districts.

Agriculture and related work is the major occupation in all districts, accounting for 66.4 per cent of the region’s economically active population. It is the main occupation for about two-thirds of the economically active group in nine of the 13 districts. In the three most urbanised districts, Sunyani (45.9%), Berekum (50.9%) and Techiman (57.1%), Agriculture and related work accounts for between 45.0 and 60.0 per cent. Sene, the most rural district, in particular, has four out of five of its economically active population in this sector.

Significant proportions of the economically active are engaged as Production and Transport operators and Labourers (11.3%), Sales workers (7.6%), and Professional and related workers (5.8%). Nine of the 13 districts have proportions of Production and Transport operators and Labourers above 10.0 per cent; three of the nine, Sunyani (14.9%), Berekum (14.8%) and Kintampo (13.8%), have the highest proportions, while the other four districts have less than 10.0 per cent. At the regional level, Sales workers form only 7.6 per cent. However, at the district level, Techiman (13.7%), Sunyani (13.4%) and Berekum (11.2%) have relatively high proportions engaged in sales. This is expected, as Techiman is the largest market centre in the region, while Sunyani and Berekum are urbanised districts, where sales workers are usually predominant. Proportions of Professional, Technical and related workers are generally low in most districts, but Sunyani (9.0%) and Berekum (8.7%) have relatively high proportions. These same districts also have appreciable proportions of service workers, 8.6 and 7.0 per cent respectively.
Analysis of the sex composition by occupation shows that four districts, Techiman, Kintampo, Atebubu and Sene, recorded more males than females in Agriculture and related work, while all the other districts recorded more females than males, although the differences were small. Females outnumber males in Service and Sales work in all the districts, and also in Production, Transport and labourer work in all districts except Kintampo and Asutifi. On the other hand, males are predominant in Professional, Technical and related work in all districts, with only the Kintampo District recording the same proportion for both sexes.

Five working days is the predominant working period in eight districts, with six working days in the remaining five, Nkoranza, Techiman, Kintampo, Atebubu and Sene. These five districts are all predominantly agricultural, and a six-day working week is normal in them. About one-eighth of the active population worked all seven days in Berekum (12.1%) and Sunyani (14.3%), the most urbanised districts in the region.

Irrespective of sex and locality of residence, Agriculture and related work absorbs the highest proportion of the economically active. Apart from Agriculture and related work, the proportion of the urban workforce is higher than that of the rural workforce in the other occupations, and almost equal for administrative and managerial workers.

Changes in the structural composition of the economically active population often reflect the course of social and economic development; for instance, with the progress of industrialisation, the proportion of workers in Agriculture decreases while the proportions in Manufacturing, Wholesale and Retail trade, and Service activities increase, implying changes in the main source of livelihood. This further implies that the more urbanised a district is, the lower the proportion of workers in Agriculture, Hunting and Forestry. More than two thirds (68.6%) of the workforce in all districts are in Agriculture, Hunting and Forestry, except in Sunyani (the most urbanised district, 48.0%). Fishing is the second major industry in Sene (21.5%) and Atebubu (8.0%) because of the proximity of these districts to the Volta Lake; the remaining districts have 2.0 per cent or less of the workforce in Fishing.

The manufacturing sector also employs a significant proportion of the workforce in the region (6.7%). Several small-scale businesses engaged in the manufacture of garments and leather products, metal fabrication and spare parts, and carpentry and joinery are scattered throughout the region. The concentrations are in Sunyani (the regional capital), Berekum (which abounds in wood-processing establishments) and Kintampo (fabrication of farm implements, storage containers, donkey carts, etc.), where a little over 10 per cent of the workforce is in manufacturing. The wholesale and retail trade industry employs more than 10.0 per cent of the workforce in only the three most urbanised districts, Sunyani (13.8%), Berekum (11.0%) and Techiman (15.9%). The reason for Techiman having the highest proportion in the trade industry is that it hosts one of the major week-long markets in Ghana, with the main market days running from Tuesday to Friday; it attracts traders from the north and south of the country and even some from neighbouring countries. Sunyani (the regional capital) has the highest proportion of the workforce engaged in all the rest of the industries.
The proportions of females engaged in wholesale and retail trade, hotels and restaurants, private households with employed persons, and other community, personal and social service activities are higher than those of males in all districts. On the other hand, more males than females work in the construction (mostly considered a masculine job), financial intermediation, public administration and defence, and education industries, in all districts.

In the more industrialised countries or communities, the proportion of employees is high relative to the self-employed, but in agricultural countries, the proportions of the self-employed without employees (own-account workers) and unpaid family workers are usually higher. As such, the distribution of the workforce by employment status is often used as an indicator of progress in the modernisation of employment and the economy. It also measures the relative capacity of the various sectors of the economy to create jobs.

There are significant differences between the national and regional proportions of employees and of the self-employed without employees. At the national level, 15.2 per cent of the economically active are employees, compared with 9.7 per cent at the regional level. In contrast, the proportion self-employed without employees (74.6%) for the region is higher than the national proportion of 67.5 per cent. The majority of the economically active population are self-employed without employees. They are engaged in small-scale economic enterprises operated by individuals; many are also peasant farmers engaged in subsistence-level agriculture, the main occupation of the workforce. Many of the self-employed are not registered and have a very low capital base. This makes tax deduction at source, which is the easiest way of collecting tax, difficult if not impossible. It also poses a challenge to the effective disbursement and retrieval of loans and other financial assistance to these people for investment in and expansion of their businesses. With so many individuals engaged in such enterprises, there is a resultant loss of capacity to create employment.

Only four districts (Sunyani 20.3%, Asunafo 16.3%, Berekum 13.3% and Asutifi 10.3%) have more than 10.0 per cent of the workforce as employees. Some of the reasons for this are that timber logging and wood processing are operated on a large scale in Asutifi, Asunafo and Berekum, while Sunyani, the administrative capital of the region, has the largest number of public and private formal institutions, which are avenues for formal employment. The Kintampo, Atebubu and Sene Districts have proportions of employees lower than the regional average but higher proportions of unpaid family workers; these are mainly rural agrarian districts with large proportions of farmers, artisanal fishermen and fish processors. There are no significant differences in the inter-district proportions of the self-employed (with employees), apprentices (except in Berekum) and domestic employees.

The private informal sector provides employment to about four out of every five members of the workforce in the region, with seven districts having proportions exceeding the regional average (83.0%). The Sene District has the highest (90.9%), followed by the Atebubu District (90.3%), with the Sunyani District having the lowest (73.3%).
Sunyani, as the regional and district capital, with several government and quasi-government organisations, has the most significant proportion of public-sector employees (11.3%), while Sene has the least (2.2%). The private formal sector is relatively small, with its impact (in terms of employment) most recognised in Asutifi (15.8%), Berekum (15.8%) and Sunyani (14.2%), and least in Atebubu (6.1%). Semi-public/parastatal organisations, as a source of employment, are relatively insignificant in all districts. However, both Berekum and Asutifi have private wood-processing companies and NGOs that employ large numbers of people.

HOUSING AND COMMUNITY FACILITIES

Good housing is one of the basic requirements of man. An appropriate house provides protection from unfavourable natural conditions, such as inclement weather, and defence against hostile forces (e.g. robbery) and various nuisances (e.g. pests and rodents). A properly built house also provides privacy and comfort in an enclosed environment for the individual household. Housing condition therefore constitutes an important parameter for measuring welfare in a country or community.

The stock of housing units in the region has witnessed steady growth since 1960, increasing by 73.0 per cent in the 1960-1970 period, 43.2 per cent in 1970-1984 and 86.6 per cent in the 1984-2000 intercensal period. Brong Ahafo ranks among the regions with the fastest-growing housing stock in the country (Ghana Statistical Service, 1995, p. 229). With the rate of increase in housing stock (5.4% annually) higher than that of population (3.2% a year) and household formation (2.8%), both the population per house and the households per house reduced considerably between 1984 and 2000.

The region has a total of 216,275 residential houses, which include all types of shelter used as living quarters, such as flats, apartments, huts or a group of huts enclosed as a compound, kiosks, shipping containers and tents. This represents 9.9 per cent of the total stock of residential houses in the country. Being predominantly rural (63.4%), the region has 71.1 per cent of its residential houses in rural settlements. Sene has the smallest number of households per house, and Sunyani and Berekum (the most urbanised districts) the highest. There is a negative correlation between households per house and the proportion of rural settlements in the region; that is, districts with a large proportion of rural households have lower household-per-house ratios. Sene again has the lowest population per house, while the Jaman District has the highest.

Type of dwelling

Rooms in compound houses are the units predominantly occupied by households in most districts, except Kintampo (31.8%) and Sene (41.4%), where the separate house is the predominant dwelling unit. Jaman (62.1%) and Berekum (59.8%) have the highest proportions of households occupying rooms in compound houses, with four districts (Sunyani, Tano, Wenchi and Techiman) having between 50.0 and 60.0 per cent of households occupying such units. Flats and apartments are used more in Sunyani (4.6%) than in any other district; except for Berekum (3.4%) and Asunafo (2.1%), all other districts have less than 2.0 per cent of households occupying flats and apartments. The use of huts as dwelling units is most common in Sene (because of its large proportion of rural settlements), while Sunyani and Berekum (the most urbanised districts) have most of the improvised homes (kiosks/containers). Tents are the least used dwelling units.
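A small illustrative calculation (a sketch using only the rounded growth rates quoted earlier in this section, not the census counts) shows why these occupancy ratios fall when the housing stock grows faster than the population:

```python
# Sketch: if the housing stock grows faster than the population, persons per
# house must fall. Growth rates are the rounded figures quoted above; the
# result is only an approximation of the census-based ratios.
population_growth = 0.032   # about 3.2% per year
stock_growth = 0.054        # about 5.4% per year
years = 16                  # 1984-2000 intercensal period

change = (1 + population_growth) ** years / (1 + stock_growth) ** years - 1
print(f"Implied change in persons per house, 1984-2000: {change:.1%}")
# roughly -29%, i.e. persons per house fall by nearly 30 per cent over the period
```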
House ownership status

Many planners are interested in the tenure status of households occupying living space. A primary distinction between owner-occupied dwellings and others is particularly meaningful for housing programmes in general. In all districts, more occupied dwelling units are owned by household members than by any other category of owner. In fact, more than half of households in all districts, with the exception of Sunyani, own their dwelling units; in Sene, about four out of five households own their dwelling units. Sizeable proportions also live in dwellings owned by relatives who are not household members (17.0%) and by other private individuals (15.8%). Private employers also own a recognisable proportion (2.1%). Sunyani, being the regional capital, has 5.0 per cent of dwellings owned by public/government institutions.

The type of material used for constructing the various parts of a dwelling unit determines the quality and durability of the unit. The main roofing material for dwelling units is the corrugated metal sheet; on average, 70.1 per cent of dwelling units are roofed with this material. Berekum has the highest proportion (94.5%) of dwelling units roofed with corrugated metal, while Sene, Kintampo and Atebubu have less than 50.0 per cent of their dwelling units roofed with corrugated metal. In these three districts, thatch and palm leaf are the main roofing materials, ranging from 54.3 per cent in Atebubu to 71.1 per cent in Sene. These three districts aside, thatch and palm leaf rank second as the main roofing material in all other districts. Roofs made of thatch and palm or raffia leaves have a very short lifespan and require replacement almost every year; they are also susceptible to fire. Sunyani has the most significant, though relatively small, proportion (2.7%) of houses roofed with slate or asbestos; the use of this material is now almost non-existent because of its toxicity and carcinogenicity. Cement and roofing tiles, which are a new phenomenon in housing construction in Ghana, have not made any significant impact in the region. All other roofing materials are not widely used in the region.

Cement/concrete (64.2%) and earth/mud bricks (34.2%) are the two main materials used for floors in the region. Cement/concrete is used most in all districts, with the exception of Sene, where earth/mud bricks (51.6%) are the main flooring material. Cement/concrete is used for the floor in about four out of five houses in Berekum and Sunyani. Only a small proportion of dwellings use any of the other categories of floor material.

The use of readily available, inexpensive but non-durable building materials, especially in the rural areas, reduces the lifespan of houses, which either collapse easily during rainstorms or fire outbreaks or become death traps. The advantage, however, is that it gives rural dwellers a place of abode where other sources of housing are either not available or unaffordable. Unfortunately, several attempts over the years to produce relatively more affordable but durable materials, such as clay bricks, improved landcrete and pozzolana, have not been readily accepted by the population.

Household facilities and amenities

Information on household facilities and amenities gives a clear indication of how accessible certain basic facilities and necessities are to communities.

Room for occupancy

The average household size for the region is 5.3 persons.
A look at room occupancy per household gives the impression that there is congestion in rooms. One-room occupancy for a household is the predominant feature in all districts (except Sene), with Sunyani having more than half of households occupying single rooms. The situation in Sunyani may be attributed to the fact that 73.8 per cent of households live in urban areas, where rent charges are high; renting more rooms would therefore be out of reach for many households, and this compels some of them to live in kiosks and tents. Berekum (with a large urban population) also has a significant proportion of households, 46.4 per cent, occupying single rooms. Nine other districts have more than 30 per cent but below 40 per cent of households occupying single rooms. Sene (25.8%) and Atebubu (28.9%) are the only districts with less than 30 per cent of households in single rooms; these two are also the only districts with about 40 per cent of households occupying two to three rooms.

Main source of lighting

Information on the distribution of dwelling units, households and persons in living quarters by type of lighting is useful for planners as an indication of areas to be covered by the extension of community lighting systems in the future. This information, cross-classified with income levels, can go a long way to help provide the best and most affordable energy type for a community.

With the exception of Sunyani (63.6%), Techiman (52.6%) and Berekum (61.7%), where the main source of lighting is electricity, the kerosene lamp is the main source of lighting in the rest of the districts. More households use the kerosene lamp in Sene than in any other district. The type of housing may even be a hindrance to rural electrification. There is a correlation between urbanisation and the use of electricity: in all the districts with more than half of the population living in urban areas, electricity is the main source of lighting. Thus Sunyani, Berekum and Techiman, all highly urbanised districts, have high proportions of households using electricity. Solar energy is the least used source of lighting; it is used in only three districts, Berekum, Nkoranza and Tano, where just 0.1 per cent of households use it. Even though the initial capital outlay can be high, solar-powered lighting systems can, in the long run, become the most economical way of extending electricity for lighting and non-industrial use to the rural areas, especially for facilities like hospitals, clinics and schools.

At the regional level, the kerosene lamp (63.6%) and electricity (35.5%) are the main sources of lighting for households; each of the other sources of lighting is used by less than 1.0 per cent of households in each district. It is only in Tano, Sunyani, Berekum and Techiman that the proportion of households using electricity exceeds the regional average.

Main source of drinking water

Sources of water are of great concern to every nation because water is not only a necessity but also a source of many diseases (water-borne diseases). The supply of potable water (that is, treated water) is closely connected with the sanitary conditions of living quarters, and is particularly essential for the prevention of communicable diseases, as well as for cleanliness and the general comfort of residents.
At the regional level, nearly half of households have access to potable water (defined as pipe-borne water and borehole water), 15.6 per cent use the open well, and the remaining 35.6 per cent use other sources, such as rivers, streams, rainwater and dugouts. Provision of potable water at the district level follows, to some extent, the pattern of urbanisation of the districts. The percentage using potable water is higher than 60 per cent in four districts, Berekum (75.0%), Jaman (69.9%), Sunyani (69.3%) and Tano (60.5%), and higher than 50 per cent in three districts, Wenchi (57.4%), Dormaa (56.6%) and Techiman (53.1%). The high proportion of households that have access to potable water is directly related to the relatively high proportion of boreholes. In fact, the high proportion of boreholes in Jaman (62.3%), Sene (40.3%), Berekum (35.9%), Wenchi (33.0%), Nkoranza (29.2%), Tano (26.9%) and Asutifi (24.7%) accounts for the relatively high proportion of households in these districts having access to potable water.

The low level of use of potable water in some districts is compensated for by the use of the well, which is generally a safer source of water than natural sources such as the river, the stream and rainwater. Stagnant water from dugouts is considered the worst of the water sources, and about one-tenth of households in Sene use water from this source. Dugouts also provide water for livestock, which at times drink from and swim directly in them, posing serious health hazards if the water is not boiled before drinking. Reliance on streams, rivers and dugouts as major sources of water has serious implications for the health of households. For example, guinea worm cases are high in Atebubu, Kintampo and Sene; these three districts contributed 97.0 per cent in 2000 and 95.0 per cent in 2001 of the total guinea worm cases in all the 11 endemic districts in the region (Ghana Health Service, 2001). Cholera outbreaks are prevalent in Atebubu, Asunafo and Sene; these three districts had case-specific mortality rates (the number of deaths from specific diseases during a defined period) of 5.4 per cent in 2000 and 8.6 per cent in 2001 for cholera cases. Buruli ulcer cases are found mainly in communities along the Tano River.

Space for cooking is well provided for the 342,695 households in the region. At the regional level, three types of cooking facilities, a separate room for the exclusive use of the household (29.7%), open space in the compound (22.2%) and a separate room in the compound shared with other households (21.4%), account for 73.3 per cent of cooking facilities. These are distantly followed by the use of a roofed structure without walls (8.6%) and cooking on the veranda of a room (7.5%). A small proportion of households (1.9%), however, cook in the hall or the bedroom, while an additional 1.7 per cent use an enclosure without a roof for cooking; 6.0 per cent do no cooking.

The regional pattern of cooking facilities is reflected in the districts. The separate room for exclusive use accounts for over 30.0 per cent of cooking facilities in five districts, Asunafo (43.8%), Dormaa (43.6%), Asutifi (36.2%), Jaman (32.5%) and Nkoranza (31.5%). In seven of the remaining eight districts, the separate room for the exclusive use of the household accounts for between 21.3 and 28.6 per cent of cooking facilities, leaving Atebubu (18.0%) as the only district with less than 20.0 per cent of households using a separate cooking facility for exclusive use.
On the other hand, the open cooking space in the compound is the major type of cooking facility in Atebubu (41.7%), Sene (40.7%) and Kintampo (37.5%), followed by Techiman (28.2%), Nkoranza (28.0%) and Wenchi (26.0%). In the remaining seven districts, the open cooking space in the compound accounts for between 11.9 and 17.0 per cent in five districts and for below 10.0 per cent in Asunafo (9.8%) and Asutifi (9.6%).

The shared separate room for cooking is most used in Tano (33.5%), Jaman (32.7%), Berekum (31.6%) and Asutifi (30.9%), in which districts its use varies between 30.9 and 33.5 per cent of all cooking facilities. In addition, the shared separate cooking facility accounts for between 20.0 and 24.0 per cent in four other districts, Asunafo (21.7%), Dormaa (22.5%), Sunyani (22.9%) and Wenchi (23.4%). In four of the remaining five districts, the shared separate room for cooking accounts for more than 10.0 but less than 20.0 per cent, in Techiman (17.4%), Nkoranza (14.1%), Atebubu (11.9%) and Kintampo (10.6%); it is only in Sene that the proportion is very low (4.3%).

The roofed structure without walls (8.6%), as a cooking facility, is not common in the region. Households cooking in this type of facility exceed 10.0 per cent but not 18.0 per cent in five of the 13 districts, and are less than 10.0 per cent, varying between 5.2 and 9.8 per cent, in the remaining eight districts. Cooking on the veranda of the dwelling unit (7.5%) is equally uncommon in the region. It exceeds 10.0 per cent in only two districts, Techiman (13.5%) and Sunyani (16.0%); in the remaining districts it varies from a low of 2.8 per cent in Asutifi to 9.6 per cent in Kintampo. The use of an enclosure without a roof or any other makeshift structure for cooking exceeds 2.0 per cent in only three districts, Asunafo (2.3%), Nkoranza (2.6%) and Sene (5.5%). In view of the importance attached to home-cooked food in the region, it is to be expected that adequate provision be made specifically for a space for cooking meals.

Main source of fuel for cooking

In spite of the promotion of cooking gas, wood remains the main source of cooking fuel in all districts, with an average of 75.6 per cent of households in the region using wood. In Sene and Asutifi, about nine out of ten households use wood for cooking. Charcoal is the second major source of cooking fuel, used by 17.3 per cent of households in the region, with Techiman (34.2%) having the highest proportion of households using it; the same district is known to supply large quantities of charcoal to other parts of the country. The use of gas for cooking is significant only in Sunyani (7.0%) and Berekum (2.4%). The campaign by the government and non-governmental organisations to protect the forest will be difficult to achieve if affordable alternative cooking fuels are not promoted to minimise the use of wood and charcoal.

Bathing facility

Households in the region are well provided with bathing facilities. Over a third (37.7%) of households have a shared separate bathing facility and a fifth (20.6%) have a bathing facility for exclusive use, while others use a shared open cubicle (11.6%), a private open cubicle (8.8%) or a bathing facility in another house (7.2%).
Although bathing in a river, pond, lake, etc. is almost non-existent (0.5%) in the region, about one in eight households (12.7%) take their bath in an open space.

The shared separate bathroom is the commonest (37.7%) bathing facility in each district. It accounts for over a fifth (20.0%) of all bathing facilities in each district, varying from 51.4 per cent in Berekum to 22.6 per cent in Sene. The own bathroom for exclusive use, the second commonest bathing facility in the region, varies between 20.0 and 27.8 per cent in six districts, and from 15.8 to 19.8 per cent in the remaining seven districts; there is no district in the region with less than 15 per cent of households having a bathroom for their own exclusive use. There are only four districts, Sene (13.2%), Asunafo (13.1%), Asutifi (11.4%) and Kintampo (10.3%), where the private open cubicle accounts for more than 10.0 per cent, but not more than 14.0 per cent, of bathing facilities. In seven of the remaining nine districts, the private open bathing cubicle accounts for between 7.2 and 9.9 per cent, and for below 5.0 per cent in Jaman (4.7%) and Berekum (4.5%). The shared open bathing cubicle, like the private open bathing cubicle, is not common in the region. It varies within the narrow range of 10.9 to 15.4 per cent in nine of the 13 districts; in the remaining four districts, Berekum (9.8%), Asutifi (9.4%), Dormaa (8.9%) and Jaman (8.7%), this category of bathing facility accounts for less than 10.0 per cent of bathing facilities. Bathing in another house is equally uncommon in the region.

Bathing in an open space (12.7%), which involves about one out of every eight households, is rather commoner than bathing in another house (7.2%). It is only in one district, Atebubu (20.5%), that members of one out of every five households bathe in the open. Of the remaining 12 districts, seven have between 10.0 and 19.9 per cent of households whose members bathe in an open space. In the remaining five districts, Sunyani (9.5%), Asutifi (9.3%), Berekum (8.9%), Dormaa (7.7%) and Jaman (5.9%), less than 10.0 per cent of households bathe in an open space. Although households in the region are relatively well provided with bathing facilities, much more remains to be done to reduce the rather high proportion (12.7%) of households, particularly in Sene (19.9%) and Atebubu (20.5%), whose members bathe in the open.

Information on toilet facilities is also considered important for housing as well as for public health policy. The pit latrine inside the dwelling and the public toilet, which could be a WC, KVIP, pit or bucket type, are the most frequently used toilet facilities in all districts. Where one of these two facilities is predominant, the other comes next. A disturbing fact, however, is evident in Kintampo, Atebubu and Sene, where more than a third of households have no toilet facility (they use the bush or field). An average of 7.7 per cent of households use a KVIP in their homes. The water closet (WC) is not common among households in most districts, possibly because of the need for piped water for its use; Sunyani, where the use of pipe-borne water is significant, leads in the use of WCs.

Waste disposal facilities

Liquid waste disposal

Households in almost all the districts dispose of liquid waste on the street or outside the house. It is only in Atebubu and Sene that households dispose of liquid waste in the compound more than on the street or outside the house.
All districts have less than 10.0 per cent of their households disposing of liquid waste into the gutter, with the exception of Sunyani, where 17.0 per cent of households dispose of liquid waste through this medium. It is also in Sunyani that 2.7 per cent of households dispose of liquid waste through a proper sewerage system; all the other districts have less than 2.0 per cent of their households using the sewerage system to dispose of liquid waste.

The high proportion of persons disposing of liquid waste in gutters in Sunyani typifies an increasing but unacceptable phenomenon in virtually all urban towns and cities in the country as a whole. Open drains and gutters normally border roads constructed in these urban places. Instead of serving their intended purpose as storm drains, they have virtually all become receptacles for all types of waste, including solid and liquid waste. These in turn accumulate stagnant water and serve as breeding grounds for mosquitoes and other household pests. The municipal and metropolitan authorities need to draw up a comprehensive, long-term plan of building proper sewerage systems and connecting all dwelling units to them, to avoid a looming environmental disaster that may prove far more expensive to rectify.

Solid waste disposal

The bulk (92.9%) of the solid waste generated in the region is either disposed of in a public dump (70.3%) or dumped elsewhere (22.6%). Two-thirds or more of households in 10 districts dispose of their solid waste in public dumps, with proportions varying from 66.6 per cent in Asunafo to 87.8 per cent in Jaman. At least 40.0 per cent of households in each of the remaining three districts, Kintampo (55.5%), Atebubu (45.8%) and Sene (41.8%), also dispose of solid waste in a public dump. While almost half of the households in Sene (48.8%) dispose of solid waste elsewhere, other than in a public refuse dump, 45.7 per cent of households in Atebubu and 36.7 per cent in Kintampo also dispose of solid waste elsewhere. In addition, over a fifth (20.0%) of households in four districts and over a tenth (10.0%) in five other districts also dispose of solid waste elsewhere. It is only in Jaman (7.7%) that less than 10.0 per cent of households dispose of solid waste elsewhere.

Burning of solid waste (3.4%) is rather rare in the region, reaching 5.0 per cent or more only in Sunyani (6.4%), Asutifi (5.2%) and Sene (5.0%). Burying of solid waste (2.4%) is rarer still; the practice does not exceed 4.0 per cent of households in any district. Disposing of solid waste anywhere other than in a public refuse dump, or by burning or burying it, can create hazardous and unsanitary environmental conditions. The practice must be guarded against by District Assemblies ensuring that removable public refuse dumps are available at places convenient to households for the disposal of their solid waste.

Medical establishments in the region comprise hospitals, which provide both in-patient and out-patient care (including sanatoria and mental institutions), and clinics, which provide out-patient care exclusively (including dispensaries and health centres). The region can boast of 25 hospitals, 35 health centres, 106 rural clinics and 54 maternity homes. Government owns more than half of all the health facilities; it owns all the health centres and two-thirds of the rural clinics. Three-quarters of the hospitals and almost all the maternity homes, however, are privately owned.
Since the private sector is a major partner in the development of the country, the analysis of health facilities is based on type and distribution rather than on ownership. Some of the private hospitals, particularly mission hospitals, have government-paid or seconded personnel.

The Sunyani District has the highest number of health facilities, including a quarter of all the hospitals in the region. A new state-of-the-art hospital, one of only three recently built, is now in operation, and the old regional hospital has become a district hospital. The only district that has no hospital is Sene, while Jaman has the highest number of rural clinics and maternity homes.

Though it is not possible to have a health facility in every community, the available facilities in the region fall short of the recommended standards with regard to their spread. The Health Ministry recommends that a health facility be within eight kilometres of a locality. Tano and Techiman are the only districts where a hospital is located within 10 kilometres of about half of the localities. The remaining districts have less than 40.0 per cent of localities within 10 kilometres of a hospital, with Sene, which has no hospital, having 7.6 per cent of localities, and Dormaa 6.2 per cent of localities, within 10 kilometres of a hospital. For these two districts, hospitals are more than 30 kilometres away from more than half of the communities. Clinics are more accessible than hospitals in terms of distance, a reflection of the stock of these facilities in the region. With the exception of Kintampo, Atebubu and Sene, which have less than 40.0 per cent of localities within a 10-kilometre radius of a clinic, the remaining districts have more than 50.0 per cent of localities within a 10-kilometre radius of a clinic.

The services of traditional healers are available in many localities in the region. Over 90.0 per cent of localities in Kintampo, Atebubu and Sene have traditional healers. Berekum has the lowest proportion, with about 38.0 per cent of localities having traditional healers within the locality, while the rest have more than 50.0 per cent. In localities where there are no traditional healers, the nearest healer is within 10 kilometres for over 90.0 per cent of localities. The current health facilities and their spread cannot support an effective health insurance scheme. Traditional healers, who are more accessible in the localities, are not covered by the national health insurance scheme; on the other hand, hospitals, which are covered by the scheme, are so far away from many localities that they are not likely to be well patronised.

The data on health manpower comprise statistics on physicians, dentists and nurses, who provide the larger proportion of direct services, and on members of the allied health professions. In many instances, statistical data of this kind are obtained from administrative records regularly collected by health authorities, in addition to some data gathered from censuses and surveys. There is a shortfall in all categories of manpower requirement for the region. There is a serious shortage of personnel providing direct health services, with pharmacists being the worst affected (50.0%), followed by nurses (21.5%) and doctors (17.6%). Quality health services cannot be provided under these conditions; this will lead to a loss of confidence in orthodox health care, which will in turn affect the health insurance scheme.
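These shortfall percentages are simply the gap between required and available staff, expressed relative to the requirement. A minimal sketch of the calculation is given below, using hypothetical establishment figures chosen only to reproduce the percentages quoted above (the report does not give the underlying staff numbers here):

```python
# Hypothetical requirement and availability figures, chosen only to illustrate
# how the shortfall percentages quoted above can arise; they are not the
# region's actual staffing norms.
required = {"doctors": 85, "nurses": 1_200, "pharmacists": 30}
available = {"doctors": 70, "nurses": 942, "pharmacists": 15}

for cadre, need in required.items():
    shortfall = (need - available[cadre]) / need * 100
    print(f"{cadre}: shortfall of {shortfall:.1f} per cent")
# doctors: 17.6%, nurses: 21.5%, pharmacists: 50.0%
```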
Postal and telecommunication facilities

All districts have full postal offices with the exception of Sene. The highest number of full postal offices in a district is three, and this can be found in five districts. Two other districts have two postal offices each; the remaining five districts have one each. All districts have postal agencies, with Jaman having the highest number and Kintampo the lowest. Berekum, Kintampo and Sene have the least number of postal facilities. Accessibility to postal services, in terms of distance to post offices and postal agencies, is very poor. Not more than 2.0 per cent of localities in the region have postal facilities. Dormaa, Nkoranza, Kintampo, Atebubu, and Sene have less than 40.0 per cent of localities within 10 kilometres of a postal facility. In fact, postal services are more than 30 kilometres away from more than 50.0 per cent of localities in Sene. Berekum has the best spread of facilities, with no locality being more than 25 kilometres away from a postal facility.

Three districts (Sene, Jaman and Asutifi) have no direct telephone facilities. All the other district capitals are connected to Ghana Telecom lines. Two mobile phone services, Areeba and One Touch, are available in some towns in the region. Tele-density for the region (0.1) is far below the national figure of 0.7, and almost insignificant compared to that of Greater Accra Region (3.2). Telecommunication facilities are not easily accessible to many localities in the region; in fact, access is even worse than for postal services.

The principal mode of transportation in the region is by road. The region’s road network consists of highways, urban roads and feeder roads. The villages and small towns are connected to each other by feeder roads, while small towns and large towns are connected by highways. The Department of Urban Roads provides the road network within the urban centres. Sunyani, the administrative capital, is the focal point of most of the roads in the region. The region at present has 1,894.9 kilometres of major roads, which represent 13.1 per cent of the total network of major roads in the country, thus making it the region with the second widest network of major roads after Northern Region (Regional Coordinating Council, 2001). About a third (33.1%) of the region’s major roads are paved; this forms 11.1 per cent of the national paved or asphalted roads. These include the Kumasi-Dormaa Ahenkro road, the Yamfo road, the Sunyani-Techiman road, the Techiman-Nkoranza road, the Techiman-Wenchi road and the Kumasi-Yeji road. In addition to the major roads, the region has the longest network of feeder roads (3,463.0 kilometres). In terms of total road network, therefore, the region has the longest road network in the country, measuring 5,357.9 kilometres, followed by the Northern Region with 5,170.8 kilometres, the Ashanti Region with 4,782.2 kilometres and the Western Region with 4,452.4 kilometres. The land area of the region is the second largest after Northern Region. The length of the road networks in the two regions is therefore a reflection of their land areas and not necessarily the required road capacity of the regions, nor does it reflect the quality of the roads.

Travelling by boat is the principal mode of transport for communities along the Volta Lake. Yeji is the largest community on the Brong Ahafo side of the Volta Lake and has a port facility for cargo and passenger boats, in addition to being the southern terminus of the ferry crossing connecting to Makango and Salaga in the north.
There is an airport at Sunyani which connects the region by air to Kumasi, Accra and Takoradi, but does not play a major role in the transportation system. Indeed the airport has not operated commercially for a long time and only military aircraft currently use the facility. A distinction is often made between public schools, which are operated by a public authority, and private schools, which are maintained or administered by private bodies. The origin of financial resources is not always the main criterion, since private schools may have financial support from public authorities in many instances. Wenchi has the highest number of pre-schools, with Asunafo leading in the number of primary schools. Ideally, the number of primary and junior secondary schools should be nearly the same to absorb all pupils who complete the six-year primary school level. In reality, however, the number of JSSs is about half that of primary schools in all districts, except in Sunyani and Berekum where the difference is relatively small. The number of senior secondary schools is not encouraging. The region can boast of only 60 senior secondary schools as compared to 769 junior secondary schools. Sunyani has the highest number of secondary schools, (88 JSS and 8 SSS) with Sene (22 JSS and 2 SSS) having the least. There are three Teachers’ Training Colleges in the region, located in Atebubu, Berekum, and Bechem. There are also 24 Technical, Commercial and Vocational institutions, all privately owned, as well as three specialised schools and one Polytechnic. Kintampo has the highest proportion (30.6%) of localities with primary schools within the locality, followed by Sene (27.4%) and Atebubu (24.4%). On the other hand, these same districts have the highest proportion of localities more than 30 kilometres from the nearest primary school. Most of the localities (more than 50.0%) in the remaining districts are between one and five kilometres away from the nearest primary school. More localities are further away from junior secondary schools than primary schools in all districts. With around 50.0 per cent of primary schools not having a corresponding junior secondary school, many children who out of necessity have to change schools between primary and Junior secondary are sometimes forced to drop out of school because of the distances they have to travel to have access to a school. In the case of senior secondary schools, more than 70.0 per cent of the localities are over 10 kilometres away from the nearest facility, but since most of such schools have boarding facilities, distance is not so much a factor as affordability and quality in determining whether a child attends a senior secondary school and where. On the average, there are five teachers to a primary school in the region, falling short of one teacher from the ideal number of six teachers to a primary school, the standard set by the Ghana Education Service (GES). The only district that meets this standard is Tano. Asunafo, Berekum, Kintampo and Atebubu have a teacher/primary school ratio of 4, and Sene has a ratio of 3, the worst in the region. All the remaining districts have a ratio of 5. In the districts where the teacher/school ratio falls below the standard, effective teaching will be lacking since teachers have to leave one class to attend to others. Lack of teachers in Sene may be a reason for the low current school attendance, low school attainment and high illiteracy. 
In the JSS category, the regional average of teacher/school ratio is 6, which is slightly above the national standard of 5. This is however far from the ideal because in JSS, in addition to general subject teachers, each school is expected to have specialised teachers for subjects such as French, Ghanaian languages, Mathematics, Vocational Skill, Science and Technical Skills. Sunyani has the highest teacher/school ratio (26) for the SSS category, with Asunafo the lowest (11). For SSS, a teacher without a diploma in education is classified as untrained even if he/she graduated from the university or other tertiary level institution. The overall picture for the region shows that pre-schools have the largest proportion of untrained teachers (82.7%). Apart from Techiman (50.7%), Sunyani (39.6%) and Tano (22.0%), the remaining districts have less than 15.0 per cent of trained teachers in the pre-schools. Sene has the lowest proportion (1.6%) of trained pre-school teachers. The proportion of untrained teachers (30.8%) in primary schools in the region is far less than that of the pre-schools. Berekum, Tano, Techiman and Sunyani have more than 90.0 per cent trained primary teachers. The remaining districts, except Nkoranza (27.8%), have untrained primary teachers above the regional average, with Asunafo having the highest (55.2%). The JSS level has the lowest proportion of untrained teachers in the region. As with the primary, Berekum, Tano, Techiman, Sunyani and Atebubu have less than 10.0 per cent untrained teachers. Nkoranza and Asutifi have proportions of untrained JSS teachers between 15.0 and 20.0 per cent, with the remaining districts having proportions above 20.0 per cent. Exceptionally, all SSS teachers in Tano are trained. More than 30.0 per cent untrained teachers can be found in Atebubu, Sene, Asutifi and Kintampo. The population density of the region is lower than the national average. On the other hand, the proportion of rural population is higher than that of the national. The average household size is also higher than the national figure. Fertility, as measured by TFR, is higher for the region than it is for the national. The region falls below the national average in development indicators, such as the level of education, access to potable water and electricity, and availability of modern toilet facilities. The distribution of the economically active population is much concentrated in primary industry, which further emphasises the low level of development of the region compared to the national distribution. The self-employed with no employees and private informal sector workers predominate the employment landscape; but the proportions for the region are even higher than the national. This further shows the low quality of manpower in the region. In addition, rural housing is of poor quality, with the structures built with cheap non-durable materials. The housing conditions in the rural areas, especially, require qualitative improvement and provision of some basic amenities for healthy living. The distribution of resources among the districts in the region depicts an unbalanced development, with Sunyani the most developed district and Sene the least developed. Sunyani, Berekum, and Techiman are far ahead of Sene, Atebubu, and Kintampo in terms of development. 
The Ministry of Local Government and Rural Development would need to seriously tackle the unbalanced development among the districts by channelling more resources through the District Assemblies towards the provision of infrastructure and social amenities. This could help curb migration from the less endowed districts to the relatively well-endowed ones. The issue of major concern is the rate of growth of the population of the region rather than the number of people. Though the inter-censal growth rate of 2.5 per cent is lower than the national rate of 2.7 per cent, it is still high in relation to the resources of the region. A rapid growth rate of population disproportionate with the pace of social and economic development will intensify problems such as chronic underemployment and unemployment, especially in rural and urban informal sectors of economic activity. It will also exert greater pressure on social amenities such as education and health. Despite the Economic Recovery and Poverty Alleviation Programmes, the high population growth rate may offset any economic gains in real terms. The environmental implication of the high population growth rate is the increase in the demand for fuel wood (used by 75.6% of households in the region) and agricultural land, which, in turn, results in an increased rate of deforestation. Deforestation may also lead to increased soil erosion and loss of reliable water supply, already a problem in a number of districts. The ultimate result will be a decrease in agricultural productivity and a lowered standard of living. The region has a ‘young’ population characterized by a high proportion (43.1%) of persons under 15 years and a low percentage (4.2%) of persons at age 64 and older. Such a structure of the population implies a high proportion dependent population. In addition, the number of entrants into the work force in the near future may increase. These circumstances are likely to lead to unemployment (which presently stands at 8.2%) among younger workers. The mean number of children ever born in all districts is around 5 children, which is very high. Asutifi and Asunafo have a TFR of 5 births per woman, which is higher than the regional average of 4.2 and even higher than the national average. These same districts have the highest dependency ratios in the region and are likely to have serious reproductive and child health problems if nothing is done about the population issues identified. Education, health and access to safe water are variables often labelled “basic needs”, which can be used as complementary to consumption expenditure as indicators of poverty in a community. Education constitutes one of the most important factors determining the demographic behaviour of people and the level of fertility. Education also constitutes an important determinant of the quality of manpower. As such, the educational level of the population reflects roughly the level of social and economic development of a country or community. The level of socio-economic development of the region can, therefore, be linked directly to the level of education of the population. The proportion of those who have never been to school in the region (42.0%) is high; as a consequence, the illiteracy rate (48.5%) is also very high. Further examination reveals that, of those who have attended school, Primary school is the highest level attained by majority of females (41.7%), while middle/JSS is the highest for males (40.3%). 
This implies poor quality of manpower in the region, reflected in the occupational and industrial distribution of the workforce. This picture should also alert policy makers and planners that public education and information transmitted in writing or through the print medium will not be effective. More males are enrolled in schools than females, with the discrepancy widening as one climbs the educational ladder. The worst affected districts are Sene, Atebubu, and Kintampo. The low level of education in these three districts is further translated into the type of economic activity of the population. The proportion of the population under 15 years, who are economically active in these districts, is the highest in the region. In Sene, for instance, 21.7 per cent of the population aged 7-9 years, and as high as 44.7 per cent of the population aged 10-14 years, are economically active. The situation is not very different in Kintampo and Atebubu. The high proportion of child labour in the region especially in the fishing industry along the Volta Lake has given rise to media attention in recent times. The District Assemblies in these districts, in close collaboration with the Ministry of Women and Children’s Affairs, should intensify efforts to reduce and eventually put a stop to this practice. Majority of the economically active population are in the primary industry comprising Agriculture, Hunting and Forestry. The same can be observed of the occupational distribution. This is further translated into the type of economic sector and status, consisting mainly of the informal and self-employed without employees. All the four rounds of the Ghana Living Standards Survey (GLSS) have revealed that people in this sector of the economy are mostly poor. With such a large informal sector, it may be difficult to mobilize revenue and improve upon the economic well-being of the population. The Government’s efforts, therefore, should be geared towards improvement in activities in the primary industry. The proportion of homemakers, about a quarter (26.6%) of the not economically active population, of the region is high, with a proportion higher than the regional average in the Sene, Atebubu and Kintampo, Districts. Since homemakers may not be in a position to contribute much to household income, the burden of financial responsibility therefore falls on few household members, resulting in poverty. Cheap and non-durable materials are used for building, most of which are in the rural areas. The Sene, Atebubu and Kintampo, Districts have the largest stock of such buildings. The very high cost of building materials eliminates a greater proportion of prospective builders from acquiring decent houses, and compels them to use cheaper building materials. These buildings pose a threat to human life because they are not durable. For example, buildings roofed with thatch catch fire easily and also harbour pests. Room occupancy in the region shows crowding in relation to average household size of 5.3 persons. The rate of urbanisation has increased the need for housing beyond what urban areas can provide. This has led to the creation of shantytowns, slums and unwarranted extensions of existing buildings, resulting in overcrowding and unhealthy environmental conditions. The spread of communicable diseases is easy under such circumstances. Majority of households do not have any toilet facility in the Sene, Atebubu, and Kintampo, Districts and, as such, use the bush, field or drains. 
This can have serious implications for the environment. In these districts, rivers, streams, and dugouts constitute the main sources of water for households. Human waste, therefore, can easily pollute these water sources. A large majority of households in the region do not have access to potable water (pipe-borne and borehole). Water-borne diseases are likely to infect the population as a result.

Access to amenities and utilities is very poor in the region. The proportion of households connected to the national electricity grid is lower than 40.0 per cent. Small-scale enterprises that use electricity cannot operate in most rural areas. A documentary on the activities of the Renewable Energy Systems Project (RESPRO) revealed that areas where its services are being piloted have shown an increase in the working hours (especially at night) of the beneficiaries, leading to increased income. Post and telecommunication facilities are also woefully inadequate, as shown by the distances from the localities to the nearest facility. The Sene, Atebubu and Kintampo Districts are the worst affected, while the Sunyani, Techiman and Berekum Districts are relatively well endowed with these facilities. Districts with more households using electricity and postal and telephone services have the potential to develop faster than districts where these facilities are lacking. The availability of these facilities in certain areas will attract the population from the deficient areas, with the attendant problems.

The growing interest in improving the quality and efficiency of health services has led to an increasing demand from administrators for statistical data showing the types of services used by various segments of the population. With the severe shortfall in health personnel, especially doctors and nurses, more doctors are required to care for the rapidly increasing population. The increased health risks of childbearing for women aged 15-49 years, and of children aged 0-4 years who are susceptible to disease, put a strain on the few maternal and child health resources. The Sene District has the least number of health facilities. The district is the only one in the region which has no hospital. This is a major health concern, since all serious health cases have to be referred to hospitals in other districts. Increasing attention should also be paid to paramedical personnel, such as laboratory technicians, pharmacists and ward assistants, because they constitute the backbone of health institutions. The shift from medical to health personnel and the emphasis on the interdependence of medical and paramedical personnel need to be encouraged. In rural areas, even the licensed chemical seller becomes the first line of contact in minor and emergency health situations.

Several recent studies indicate that a reduced rate of population growth played a key role in the economic development of many Asian countries, such as South Korea, Taiwan, Thailand, Singapore, Indonesia and Malaysia. Specifically, these studies have found that:

• Fertility decline slowed the growth in the number of school-age children. By keeping educational expenditures high, these countries were able to increase the enrolment rates and the quality of education received by each child.

• Savings increased as household size declined. As dependency rates declined, families were able to save more of their income. These savings replaced foreign capital as the major source of domestic investment.
• Fertility decline eventually led to slower growth in the labour force. As a result, both wages and capital investment per worker rose. The above results mean that when the growth of the population is slowed down, many of the problems and their implications could be adequately addressed. Married couples should be encouraged to raise small families and practice family planning. The 1998 Ghana Demographic and Health Survey (GDHS) revealed that modern contraceptive use among married women in the region is 14.8 per cent, which is very low. Reducing fertility improves the chances of infant and child survival and has beneficial impact on population growth. Family planning helps women avoid births that are too early, too late, or too frequent. Family planning activities in the region should be stepped up to reduce the high total fertility rate, especially in the Sene and Asunafo, Districts. Long-term and permanent family planning methods should be encouraged. The Free Compulsory Universal Basic Education (fCUBE) programme (1996-2005), which is a mandate of the 1992 constitution of the 4th Republic of Ghana, was launched in October 1996 to address the low school enrolment and attainment levels. A girls’ education unit was established in February 1997 under the basic education division to be solely responsible for addressing equity in gender issues. Specifically this unit was to work toward achieving maximum enrolment and retention of girls in schools through community sensitisation and advocacy against negative religious and cultural beliefs and practices. The problems still exist and the Ministry of Education needs to double its efforts in identifying shortcomings in the educational reforms and rectify them. Functional literacy programmes, by which the ability in reading and writing could be extended to cover a greater proportion of the population to enable them to effectively engage in normal socio-economic and cultural activities, should be intensified. Efforts should be made to equip the workforce in the informal sector with financial and management skills and experience to improve their competitiveness by: (a) Developing systems to facilitate co-ordination and linkages between the formal and informal sectors of the economy; (b) Promoting technological proficiency and advancement of the labour force in the informal sector; and (c) Reforming and strengthening the traditional apprenticeship system. The rural environment can be transformed through agro-based industrialisation, effective decentralisation and private sector development. Access to potable water and good sanitation should be increased to achieve the health outcomes and sustainability of poverty reduction. The Community Water and Sanitation Agency (CWSA) should be well resourced to enhance the operation and maintenance of water facilities in rural areas. The timely disbursement of the District Assembly Common Fund will also go a long way to support the maintenance of water facilities in the rural areas. The disbursement of the common fund should further be decentralised to the area and town council levels for accelerated development of poor communities. To halt the rapid destruction of the forest through felling of trees for firewood, fast growing trees that can also be harvested for firewood should be made available for cultivation. Biogas plants should be built in the communities by the District Assemblies as a cheaper source of gas for cooking as well as for solving the inadequate toilet facility problem. 
The degraded and deforested areas, particularly along major trunk roads, should be reclaimed through afforestation programmes, including the cultivation of agro-based crops and cash crops such as cocoa or rubber. For effective and safe liquid and solid waste disposal, District Assemblies should institute critical measures, including rationalising and updating byelaws, to ensure safe management of liquid and human waste at the household level. They should also enforce laws on the provision of sanitation facilities by landlords. Simplified sewerage systems should be introduced for poor areas with high population density. The various District Assemblies should help the communities with KVIP toilet facilities and also educate them on keeping the environment clean. Sanitary inspectors should be given incentives to work effectively and efficiently. The problem of working children, especially in the Sene, Atebubu and Kintampo Districts, should be tackled with all seriousness. Some NGOs have initiated moves to reintegrate child slaves with their families. In districts where the problem of child work/child slavery is prevalent, the District Assemblies should provide the necessary support to families to enable them to sustain and retain children in the school system instead of the street or the workplace at such young and tender ages.
<urn:uuid:12dad9bc-a079-4c9e-a85b-53aee63e8353>
{ "date": "2015-07-06T22:30:16", "dump": "CC-MAIN-2015-27", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098849.37/warc/CC-MAIN-20150627031818-00016-ip-10-179-60-89.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9516952037811279, "score": 2.515625, "token_count": 25785, "url": "http://www.modernghana.com/GhanaHome/regions/brongahafo.asp?menu_id=6&menu_id2=14&gender=" }
For many young people with disabilities, the Supplemental Nutrition Assistance Program (SNAP) – aka food stamps – is a lifeline. More than 11 million people with disabilities receive vital nutrition assistance through SNAP, according to the Center on Budget and Policy Priorities. But SNAP eligibility turns on a range of factors. For some people seeking SNAP benefits, being part of their families’ household may increase their likelihood of being approved for SNAP, especially if their parents are low income. But for others, their parents’ incomes may push them above the eligibility limits.

SNAP eligibility depends on a household’s income. Generally speaking, people are SNAP eligible if their household’s gross income is below 130 percent of the federal poverty line and the household’s net income (gross income minus certain deductions) is below 100 percent of the federal poverty line. A household with a person receiving disability payments must only meet the net income test. (For eligibility details, click here.) SNAP also contains asset limits, though many states waive this requirement.

A household is defined as a group of people who live under the same roof and buy and prepare food for 11 or more meals a week. People under age 22 living with their parents are automatically included as part of their parents’ households. If the family income is too high, the child will not be eligible for SNAP. Individuals with disabilities over age 22 who are unable to purchase and prepare their own food may be eligible for SNAP benefits even if they live at home with their parents. This is the case as long as the majority of the food they consume is purchased with their income and prepared separately from the rest of the family, which can be burdensome for some families. There are separate rules for people living in certain institutional settings, such as group homes, provided that the facility has 16 or fewer residents and it prepares the individual’s meals.

Whether a person qualifies as having a disability under SNAP could also affect eligibility for certain deductions and might exempt recipients from work obligations, recertification requirements and other constraints well worth exploring in determining SNAP eligibility. Generally, a person is considered to be disabled for SNAP purposes if he or she receives federal disability or blindness payments under the Social Security Act, including Supplemental Security Income (SSI) or Social Security disability or blindness payments. For other paths to disability status under SNAP, click here.

In light of SNAP’s complex system of deductions for calculating net income, applicants denied benefits may greatly benefit from the assistance of special needs planners and other legal advocates when appealing benefit denials. Click here to read more from the United States Department of Agriculture about SNAP’s special rules for the elderly and people with disabilities. Click here to read a recent report from the Center on Budget and Policy Priorities about the numerous benefits of the SNAP program for people with disabilities.
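To make the two income screens described above concrete, here is a minimal sketch in Python. It is purely illustrative: the poverty-line figure, the household details, and the function name are hypothetical, asset tests and the detailed deduction rules are omitted, and actual eligibility is determined by state agencies using the current federal poverty guidelines for a household's size.

```python
# Illustrative sketch only -- not an official eligibility tool. The poverty-line
# figure used below is a hypothetical placeholder; real SNAP determinations depend
# on household size, current federal poverty guidelines, state options, and a
# detailed set of deductions that this example simplifies away.

def snap_income_screen(gross_monthly, net_monthly, poverty_line_monthly,
                       has_elderly_or_disabled_member=False):
    """Apply the two income screens described above.

    - Gross income must be at or below 130% of the poverty line.
    - Net income (gross minus allowable deductions) must be at or below 100%.
    - Households with an elderly or disabled member face only the net test.
    """
    gross_ok = gross_monthly <= 1.30 * poverty_line_monthly
    net_ok = net_monthly <= 1.00 * poverty_line_monthly

    if has_elderly_or_disabled_member:
        return net_ok
    return gross_ok and net_ok


if __name__ == "__main__":
    # Hypothetical household with a $1,830/month poverty line.
    print(snap_income_screen(gross_monthly=2500, net_monthly=1700,
                             poverty_line_monthly=1830,
                             has_elderly_or_disabled_member=True))   # True: only the net test applies
    print(snap_income_screen(gross_monthly=2500, net_monthly=1700,
                             poverty_line_monthly=1830))             # False: fails the gross test
```

As the example shows, the disability rule can change the outcome even when the dollar amounts are identical, which is why establishing disability status under SNAP is often worth the effort.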
<urn:uuid:8751343e-2be9-425e-aed1-90d741c0e19f>
{ "date": "2019-02-16T09:04:35", "dump": "CC-MAIN-2019-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480240.25/warc/CC-MAIN-20190216085312-20190216111312-00576.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9586836099624634, "score": 3.390625, "token_count": 579, "url": "https://www.beyerslaw.com/securing-food-stamps-as-a-young-person-with-disabilities/" }
Our program is based on the AADE 7 Self-Care Behaviors:

Healthy Eating

Making healthy food choices, understanding portion sizes, and learning the best times to eat are central to managing diabetes. Our registered dietitians/certified diabetes educators can help establish an eating plan that is right for you.

Being Physically Active

Regular physical activity is important for overall fitness, weight management and blood glucose control. With appropriate levels of exercise, those at risk for type 2 diabetes can reduce that risk, and those with diabetes can improve glycemic control, enhance weight loss, help control lipids and blood pressure and reduce stress. Being active most days of the week is key to improving your glucose control.

Monitoring

Daily self-monitoring of blood glucose provides people with diabetes the information they need to assess how food, physical activity, and medications affect their blood glucose levels. Monitoring also includes checking blood pressure, weight, and cholesterol levels.

Taking Medication

Diabetes is a progressive condition. Sometimes different medicines may be required to help improve blood glucose control, cholesterol, or blood pressure. It is important to understand how your medications work.

Reducing the Risk of Complications

Effective risk-reduction behaviors, such as smoking cessation and regular eye, foot and dental examinations, reduce diabetes complications and maximize health and quality of life. Our program helps you learn to understand, seek and regularly obtain an array of preventive services.

Problem Solving

A person with diabetes must keep their problem-solving skills sharp because on any given day, a high or low blood glucose episode or a sick day will require them to make rapid, informed decisions about food, activity and medications. We will review how to handle these unique situations that may arise.

Healthy Coping

Living with diabetes can be a challenge emotionally. Stress can affect your glucose control. We can help you with strategies to deal with stress and emotional issues related to life with diabetes.

We are here to help you in your journey to better glucose control. Contact us today!
<urn:uuid:87aa5aa7-9d46-4bca-abf6-15d2e66b1c6a>
{ "date": "2016-10-26T23:12:06", "dump": "CC-MAIN-2016-44", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721008.78/warc/CC-MAIN-20161020183841-00098-ip-10-171-6-4.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.916510820388794, "score": 2.703125, "token_count": 395, "url": "https://www.memorialmedical.com/services/diabetes-services/our-program" }
With Bach, the Baroque era went out with a bang. Though the seeds of classicism were very much sown during his lifetime, his sons being some of the prime shapers of that movement, Johann Sebastian Bach remained largely fixed in the Baroque traditions yet was able to fashion them as no-one else, either before him or since, into a towering peak of structural grace and formal perfection. It is this supreme craftsmanship, largely unrecognised at the time, which has earned Bach an enormous stature in later years among composers and musicians.

Though his contemporary Handel moved to England, Bach remained for most of his musical career in his native Germany. He had held a number of posts in various locations as musician or music director to a number of Dukes and Princes when his first wife died, leaving 7 children. In the early 1720s, Bach married his second wife, Anna Magdalena Wulcken (herself a musician), and took up the post in Leipzig where his duties included directing the musical requirements of the local church and associated school. While employed there, the couple extended the family by another 13 (though 7 children did not survive into adulthood) as well as fulfilling the demands of the employment. Unsurprisingly, the family were all musically gifted: Bach's eldest son, Wilhelm Friedemann Bach, was a great organist like his father; Carl Philipp Emanuel became a musician in the court of the future Frederick the Great; and Johann Christian was also an organist and moved to London in the employ of Queen Charlotte. Those latter two sons were very influential in the development of classical forms from their precursors in baroque forms such as the Suite. But while his sons were to help found the new school, it was the old-school training from the father which sowed this seed.

By all accounts, the Bachs became a nerve centre for all things musical in the area, with their extended family of relatives, friends and musicians both local and visiting. It may well be that some of the output from that time would not have survived if Anna Magdalena had not recorded many examples of smaller works in her two "notebook" collections, of which the following four pieces are from the 2nd Notebook:

There are many books with selections from the Anna Magdalena Bach notebooks which are a good place to start learning baroque keyboard music. Here are two selections from Sheet Music Plus in the US or The Music Room in the UK.

It was as a performer that Bach was perhaps best known in his day. He was a master of the keyboard instruments of his day, particularly the Organ, Harpsichord and Clavichord. When the "well-tempered" method of tuning was adopted for the early stringed keyboard instruments, Bach was inspired to compose his 48 preludes and fugues (now usually played on the modern piano). He was a prolific composer of keyboard works for these instruments, of suites and other works for orchestras, and of cantatas and other works for singers. Although he occasionally travelled to entertain and meet other musicians, much of his life was spent heading up a cottage industry creating works required for various occasions as demanded by his employment at the time.

As per the baroque style, much of his music is contrapuntal in nature, meaning that several independent voices are used to weave a tapestry of sound. The king of this polyphonic style is the fugue, where rules dictate a certain structure to the interaction of the voices, yet the skill is to exhibit creative invention within these confines.
In some ways this theme of freedom within an ordered world mirrors Bach's lifestyle, and he himself became the supreme master of the fugue. His final work, called the Art of Fugue, demonstrates how he could construct a wide variety of fugues with different numbers of voices from a single musical idea.

The six Brandenburg concertos are of a form known as "concerti grossi", something a little more unified than a suite, and a form that would later evolve into the symphony, concerto and other works based on sonata form. These concertos are fairly early works in Bach's career, yet they exhibit much invention in the use of different instrumental colours. Here is an interesting video made by the New York Philharmonic Orchestra which is an excellent introduction to these wonderful concertos.

Among many now famous works, there are his Mass in B minor, the St. John and St. Matthew Passions, the Christmas Oratorio, the Goldberg Variations, the Italian Concerto (for solo keyboard), Preludes and Fugues for Clavier, and Preludes and Fugues or Toccatas and Fugues for Organ.

Bach is often regarded as being self-taught to a large extent and relatively uneducated. While this may have some basis in truth, it is surely something of an exaggeration, as is the claim that Bach was an unrecognised talent in his own lifetime. Though Bach's works were known to composers such as Haydn, Mozart and Beethoven, it was not until Mendelssohn performed the St. Matthew Passion in 1829 that Bach's previously hidden talents as a composer began to gain more widespread recognition. Since then, Bach's music has frequently inspired musicians such as Chopin, and many composers have arranged and adapted his works. In terms of classical composers, Gounod used the first prelude from book 1 of the 48 as the basis for his "Ave Maria", Busoni created a supremely elegant piano version of Bach's Chaconne from a violin work, and Liszt and Rachmaninov both transcribed Bach works for piano. Many composers have studied his work in much detail, and Shostakovich has written his own set of 24 Preludes and Fugues. Even in today's world of popular music, you can still hear singles based on, say, his Toccata and Fugue in D minor for organ, or his "Air on a G string" taken from a suite. A superb tribute to Bach's undiminished ability to inspire today's musicians is the film Bach & Friends.

Most piano students of Bach will learn much from playing his Preludes and Fugues, and the 2-part and 3-part Inventions. We provide a small sample of these works here as a taster. The complete Book 1 and Book 2 are available from Sheet Music Plus, and the set of 15 2-part and 15 3-part inventions is available from Sheet Music Plus in the US or The Music Room in the UK.

Bach created an arrangement of a melody by an earlier German composer and organist called Hans Leo Hassler (1564-1612). The melody is now best known as the hymn tune O Sacred Head, Now Wounded, and Bach's arrangement appeared in his "St. Matthew Passion" and (with different words) twice in his "Christmas Oratorio". A version of Bach's arrangement is often included in hymnals as the "Passion Chorale", and the melody has been used by several other composers. Hassler's original melody was in fact a love song called "Mein G'mut ist mir verwirret" until it was borrowed as a hymn tune, initially in German and then in English.
<urn:uuid:5fd4806f-97e7-4375-8d7f-3c6c22b92507>
{ "date": "2014-11-26T12:31:52", "dump": "CC-MAIN-2014-49", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006855.76/warc/CC-MAIN-20141125155646-00228-ip-10-235-23-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9867348074913025, "score": 2.875, "token_count": 1502, "url": "http://www.mfiles.co.uk/composers/Johann-Sebastian-Bach.htm" }
Germs are not all bad. Even H. pylori, which can cause peptic ulcers, may be healthy for some.

It is starting to become generally accepted that germs are not all bad. Many people take pro-biotics to help restore healthy gut bacteria, fermented foods are all the rage, and there are even gastroenterologists who perform transplants of healthy gut bacteria. Research into our resident bacterial population or microbiome is suggesting a two-way relationship between many disease states and changes to the microbiome. For instance, changes to gut bacteria types and populations may be associated with depression, but it is just as likely that the depression changes the bacteria as that the bacteria lead to depression.

But it's taken a while to get here. Ever since Louis Pasteur found that the wrong bacteria could spoil beer, the focus of a lot of medicine has been to find a microbe responsible for a disease, and to destroy it or prevent it causing illness. This has been a brilliant strategy, and pasteurisation, good hygiene, vaccination, and antibiotics have saved millions of lives and increased life expectancy and quality. But an extrapolation of the knowledge that some microbes can cause disease into a belief that they all should be eradicated has given us such things as anti-bacterial liquid soaps containing parabens, compounds that are probably carcinogenic and could cause premature birth or lower birth rate. The focus on micro-organisms has sometimes led to them being identified as the cause of a disease when they may be only a correlative accompaniment to a disease state.

A particular case is Helicobacter pylori. Australians Barry Marshall and Robin Warren were awarded a Nobel Prize for isolating this bacterium and investigating its role in gastritis and peptic ulcer disease. (Oi Oi Oi.) This knowledge has changed peptic ulcer disease from a chronic, intractable condition to one that is easily treatable, and has improved the lives of many. But while the benefit of eradicating H. pylori to treat peptic ulcers is clear, the benefit of doing so in other circumstances is controversial and complicated. It is certainly not obvious that eradicating H. pylori always improves health outcomes. The outcome of eliminating H. pylori to treat gastritis seems to differ according to which part of the stomach is colonised, which may even change with nationality.[i] Infection with H. pylori may reduce the incidence of reflux oesophagitis, and again this could be affected by racial characteristics.[ii] It could even be that H. pylori infection prevents some diseases such as Barrett's oesophagus and oesophageal cancer, and that this could be a stronger effect in some race groups.[iii] Some have proposed that rather than being a cause of excessive stomach activity, H. pylori is used by the body to regulate stomach acidity, as the bacterium actually neutralises acid.[iv]

We diagnose and treat people by asking them to describe how they feel. So we treat the stomach pain, heartburn and reflux that someone experiences without testing for bacteria. When patients come in and say that they have been diagnosed with gastritis from H. pylori infection, we ask the patient to tell us how that feels. They describe their pain, or reflux, or other symptoms, and we treat to relieve these problems. We do target a cause of the problem, but it is a cause diagnosed in the Chinese medicine picture of the patient, not H. pylori.
Sometimes the way someone's condition is described leads to medicinals being chosen that have been found to have anti-microbial properties, but that is not why they are chosen. And we will judge the success of a treatment on the entirely unscientific and non-empirical outcome of the patient feeling better and not having the ailments that they describe, rather than by testing for a bacterium. Sometimes, without the bacterium being targeted, it is subsequently found to have been eradicated, and we could consider that in such people it may have been a part of the problem. Other people may recover from their ailment but still carry H. pylori, in which case we could consider that the bacterium wasn't the problem, and that it can go on preventing disease and regulating acidity.

We think that this flexible approach allows us to cope with the complexity of biological systems with many unknown and unmeasurable variables, where the presence of a particular bacterium may sometimes cause one disease, sometimes protect against another, and may be harmful in one person and beneficial in someone else.

In fairness, there's a good article here by a microbiologist about our complex symbiosis with microorganisms:

[i] Nemura N, Okamoto S, Yamamoto S, Matsumura N, Yamaguchi S, Mashiba H, Sasaki N, Taniyama K (2000). Changes in Helicobacter pylori-induced gastritis in the antrum and corpus during long-term acid-suppressive treatment in Japan. Aliment Pharmacol Ther. 2000 Oct;14(10):1345-52.

[ii] Hassan Ashktorab, Omid Entezari, Mehdi Nouraie, Ehsan Dowlati, Wayne Frederick, Alfreda Woods, Edward Lee, Hassan Brim, Duane T. Smoot, et al. (2012). Helicobacter pylori Protection Against Reflux Esophagitis. Digestive Diseases and Sciences, November 2012, Volume 57, Issue 11, pp 2924-2928.

[iii] Rubenstein JH, Inadomi JM, Scheiman J, Schoenfeld P, Appelman H, Zhang M, Metko V, Kao JY (2014). Association Between Helicobacter pylori and Barrett's Esophagus, Erosive Esophagitis, and Gastroesophageal Reflux Symptoms. Clin Gastroenterol Hepatol. 2014 Feb;12(2):239-45.

[iv] Keilberg D, Ottemann KM (2016). How Helicobacter pylori senses, targets and interacts with the gastric epithelium. Environ Microbiol. 2016 Mar;18(3):791-806.
<urn:uuid:592b0ebf-cac3-4d3c-8f6d-52f333b1a30b>
{ "date": "2018-09-23T16:49:27", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159561.37/warc/CC-MAIN-20180923153915-20180923174315-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9356926083564758, "score": 3.125, "token_count": 1297, "url": "http://straightupchinesemedicine.com/2017/05/germs-are-not-all-bad-even-h-pylori/" }
During the US-Canada Coast Guard Summit, held in Grand Haven, Michigan, the two countries signed a 2017 update of the Joint Marine Pollution Contingency Plan. The Joint Marine Pollution Contingency Plan serves as a coordinated system for planning, preparedness, and responding to harmful substance incidents in the contiguous waters along the shared maritime borders of the US and Canada. This plan supplements each country’s national response systems and coordinates the interface of these systems for boundary areas. In addition, during the Summit, senior representatives from each organization discussed issues specific to executing responsibilities to prepare for and respond to oil and hazardous substance events under the auspices of their bilateral Joint Marine Pollution Contingency Plan. The group also provided updates on joint initiatives specific to the Arctic, enhancing shipping safety and security, and enhancing cooperation with the critical indigenous populations of the U.S. and Canada. The Summit was an opportunity for the two countries to further boost their coordination, as the two Coast Guards already share a long history of cooperation in numerous missions across their shared maritime border.
<urn:uuid:c9ab74ca-3db6-475a-a777-1868ca063977>
{ "date": "2017-08-22T09:22:19", "dump": "CC-MAIN-2017-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110573.77/warc/CC-MAIN-20170822085147-20170822105147-00496.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9255944490432739, "score": 2.703125, "token_count": 218, "url": "https://www.green4sea.com/us-canada-extend-coordination-against-marine-pollution/" }
I'm going to research what effects social networking sites have on society. I will focus on one social networking site, Facebook, and the effects it has on business, schools, and relationships. I want to see if Facebook primarily has positive or negative effects on business, schools, and personal relationships within a society.

To further my research, I have found the current use of the social networking site Facebook, including how many people use it and the demographics of those people. Facebook is used by a variety of different people across a variety of different countries and a variety of different demographics. I am going to focus on the United States as my country of focus and see how many people currently use Facebook. I want to see what the main age group of users is and what people mainly use it for. Some businesses use social networking sites such as Facebook to promote sales and get their name out there. Some schools use social networking sites such as Facebook to connect teachers with their students. Relationships might be the most affected by Facebook because of the exposure of people's personal lives and the ability for anyone to view a person's profile or contact someone.

Security and account privacy settings that Facebook has to offer are very important, but how many of its users actually use them? Facebook updates its privacy settings and options quite frequently. As a user myself, I get updates at least monthly saying that the privacy settings have been modified. The creators of Facebook have increased security so that users are able to block certain people from accessing their profiles and allow only their designated friends to view their profile, including their pictures, posts, etc. This increasing security allows people to choose who can see anything they put on the website and who is not allowed access. There are, of course, some loopholes around these restrictions; that is why Facebook is always updating its privacy settings and creating more options for a safer, more secure social networking site.

Ethical and Social Implications

There are ethical principles that should be enforced on Facebook because of the negative outcomes that bad ethics have on relationships, business, and school. Facebook ethics includes, but is not limited to, netiquette, only posting appropriate posts and pictures, etc. Even though these ethical principles seem to be common sense to most people, others who are mean-spirited use Facebook as a way to bully others. Online bullying, or cyberbullying, has become a huge problem and has even led to people committing suicide.

Future Use

There seems to definitely be a future for Facebook, but also for other upcoming and newly popular social networking sites as well. There has been an emergence of other sites that serve different purposes instead of the broad social networking purpose that Facebook offers. These new sites include LinkedIn, Instagram, and Twitter. LinkedIn is a more professional, business-oriented social networking site that can connect people. Instagram is where the creative people come together to share and compare photos. Twitter is a more compact version of social networking where users are limited in how much they can include in a post or 'tweet'. These are other newly emerged social networking sites that many Facebook users are choosing to join as well. Facebook has also recently put itself on the stock market.
Will people continue to invest in Facebook, or will they start to invest more in these other upcoming sites?
<urn:uuid:69cd1630-2273-4359-850b-758f8f22eff7>
{ "date": "2015-02-27T11:34:41", "dump": "CC-MAIN-2015-11", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461216.38/warc/CC-MAIN-20150226074101-00059-ip-10-28-5-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9708032608032227, "score": 2.703125, "token_count": 670, "url": "http://socialnetworkingaffectsonsociety.blogspot.com/2013/02/the-affects-of-social-networking-on.html" }
I have written many articles and discussed many times on my podcast a number of tips to decrease injuries in youth sports. In baseball, recommendations to decrease amounts of pitching, avoid showcase events, and correct pitching mechanics have received tremendous attention. In soccer and other sports with large numbers of female athletes, ACL injury prevention programs have been emphasized. And generally, efforts to discuss the risks of single-sport specialization at a young age have spread in recent years. If there is a risk factor in any sport that can be shown to cause a significant number of injuries, then it seems worthwhile to identify it and try to correct it, right?

A study by Collins et al. published in the journal Injury Prevention highlights a risk that we rarely hear discussed when we try to promote youth sports safety measures.

Injuries related to fouls and other illegal activity in sports

The authors used the RIO (Reporting Information Online), an injury surveillance system, to collect data on injuries in high school sports in the United States. For the 2005–06 and 2006–07 academic years, they captured injuries in boys' football, soccer, basketball, wrestling, and baseball and girls' soccer, volleyball, basketball, and softball. They attempted to compare differences between sports (boys' and girls') for injury rates, and in particular, the proportions of those injuries related to illegal activities and fouls.

The study offers some surprising findings:

- The authors estimated that 98,066 injuries occurred nationwide during those years as the result of an action that was ruled illegal activity by a referee/official or disciplinary committee.
- They calculated an injury rate of 0.24 injuries related to illegal activity per 1000 athletic competition-exposures.
- Boys' and girls' soccer had the highest rates of injury related to illegal activity. Girls' volleyball, girls' softball, and boys' baseball had the lowest rates. Boys' and girls' sports overall had similar rates of injuries related to illegal activity.
- Of all injuries in these sports, 6.4% were related to illegal activity. The highest proportions of injuries related to illegal activity were found in girls' basketball, girls' soccer, and boys' soccer (in that order). The lowest proportions were found in girls' softball, boys' football, and girls' volleyball.
- The head and face, ankle, and knee were the body parts most often reported as injured during illegal activity. Almost 1/3 of injuries related to illegal activity affected the head and face areas. In fact, a much higher percentage of injuries related to illegal activity were to the head and face (32.3%) than injuries that occurred from non-foul activities (13.8%). Over 1/4 of the injuries related to fouls and similar activity were concussions.
- In terms of the severity of the injuries related to illegal activity, 5.7% required surgery, and 10.5% resulted in the player being held out for the rest of the season.

Take home points

Every sport has rules created to keep play and competition fair between athletes and teams. However, the rules also serve to prevent or decrease the chances of athletes getting hurt from unsafe plays, moves, and activities. This study shows that at the high-school level, injuries from fouls and illegal activity caused more than 10% of all injuries in four of the nine sports (boys' soccer, girls' soccer, boys' basketball, and girls' basketball).
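As a quick aside for readers who wonder what a figure like "0.24 injuries per 1000 athletic competition-exposures" actually means, the short sketch below shows the arithmetic behind a rate and a proportion of that kind. The counts in it are hypothetical and chosen only to illustrate the calculation; they are not the study's underlying data.

```python
# Illustrative arithmetic only. The injury and exposure counts below are made up;
# they are NOT taken from the study, and are used simply to show how an injury
# rate per 1000 athlete-exposures and a foul-related proportion are computed.

foul_related_injuries = 240        # hypothetical injuries from fouls/illegal activity
competition_exposures = 1_000_000  # hypothetical athlete competition-exposures
all_injuries = 3_750               # hypothetical total injuries from all causes

rate_per_1000 = foul_related_injuries / competition_exposures * 1000
share_of_all_injuries = foul_related_injuries / all_injuries * 100

print(f"Rate: {rate_per_1000:.2f} foul-related injuries per 1000 athlete-exposures")
print(f"Share: {share_of_all_injuries:.1f}% of all injuries were foul-related")
```

The same two formulas, applied to the surveillance data, are what produce the 0.24-per-1000 rate and the 6.4% share reported in the study.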
The authors argue – and I agree with them – that any risk factor that causes such a high percentage of injuries should be examined for ways to modify that risk. In that sense, better rule enforcement and punishment of players guilty of fouls and other illegal activity might actually decrease a sizable portion of youth sports injuries. Since over 5% of these injuries needed surgery and 10% were season-ending injuries, it seems to be an especially important effort. Furthermore, with an ever-increasing focus on concussions and their long-term effects, cutting down foul-related injuries is a no-brainer. The authors showed a huge discrepancy between concussions (and head and face injuries in general) caused by illegal activity and injuries that occurred in sports naturally. In theory, cutting down on injuries related to dirty play and fouls makes sense. It probably would be harder to actually achieve. But better rule enforcement and punishment by referees and education of athletes, parents, coaches, and referees by sports medicine healthcare providers might be a much needed first step. Do these findings surprise you? Do you have any suggestions for decreasing the injuries that occur from fouls? I would love for you to offer your thoughts below! Collins CL, Fields SK, Comstock RD. When the rules of the game are broken: what proportion of high school sports-related injuries are related to illegal activity? Injury Prevention. 2008;14:34-38.
<urn:uuid:6216ed9d-45b6-4937-a8a1-ecb168e3c470>
{ "date": "2014-10-01T22:18:49", "dump": "CC-MAIN-2014-41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663611.15/warc/CC-MAIN-20140930004103-00264-ip-10-234-18-248.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9683130383491516, "score": 2.890625, "token_count": 1026, "url": "http://www.drdavidgeier.com/would-eliminating-dirty-play-decrease-injuries-youth-sports/" }
The Natural History Museum’s Butterfly Pavilion is more than a place to see butterflies, it’s a place to learn what to grow so you can attract them to your garden. I spent some time there last week taking in the beauty of the butterfly, and words just can’t express it as well as a photograph. It’s my first time using the “gallery” feature on this blog post, so forgive any formatting glitches when you try to click on the photos to enlarge. Just click on the link to the next photo below the picture, or use you back button and all will be well. Inside the pavilion we found other plants like Dill and Lilac Verbena, as well as California Dutchman’s Pipe, all of which support the butterfly lifestyle. We saw Monarchs all over some California Asters. For a list of other plants that attract butterflies, visit http://www.almanac.com/content/plants-attract-butterflies. For more information on the Natural History Museum’s Butterfly Pavilion, visit http://www.nhm.org/site/explore-exhibits/special-exhibits/butterfly-pavilion
<urn:uuid:209a142e-2461-4347-b2e6-be7c461fac10>
{ "date": "2014-12-18T04:10:13", "dump": "CC-MAIN-2014-52", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765610.7/warc/CC-MAIN-20141217075245-00155-ip-10-231-17-201.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8707001209259033, "score": 2.625, "token_count": 256, "url": "http://gardenerd.com/field-trip-butterfly-pavilion/" }
Helping the Body to Unlock Nutrients from Food Why is enzyme supplementation necessary? First, today’s diets rely heavily on cooked and processed foods. Unfortunately, cooking and processing methods often kill the enzymes in foods. Even when foods are consumed in their raw form, they are rarely backyard-garden fresh. Modern lifestyles have created a virtually universal need for food enzymes. Second, digestion requires energy…lots of it. And the more energy it takes to digest food, the less that’s available for other physical and mental activities. Digestion of enzyme-deficient food is especially hard on the body, sapping its natural vitality and feelings of well-being. But there is a way to optimize the nutritional value received from food. New Earth’s Enzymes contain natural food enzymes to help the body break down fats, carbohydrates, protein, and fiber, and to help enhance the digestive process.* Each capsule is microblended with a blend of sixteen different natural, plant-based food enzymes, and a small amount of Wild Bluegreen Algae to give the enzymes specific vitamins and minerals that help the body break down and assimilate a complete range of nutrients from your food. These Enzymes have an improved formula so even though the bottle contains 1/2 as many capsules as the old formula, the serving size is also 1/2 — 1 enzyme instead of 2 per serving. A Word About Enzymes Even with the most sound diet, the body does not benefit until nutrients are unlocked and absorbed from food that is consumed. Unfortunately, in today’s highly processed foods many of the enzymes the body depends on for effective digestion have been destroyed. That’s why it is so important to provide the body with vital enzymes. Natural miracles of miniature engineering, enzymes are critical to the proper functioning of everything from breathing to thinking to circulating the blood. The metabolic enzymes present in every cell, tissue, and organ in the body are responsible for every chemical reaction associated with the metabolism of the body. Digestive enzymes are manufactured and secreted by the body. We also get enzymes from raw foods, which we refer to as food enzymes. Food enzymes work together with the body’s digestive enzymes–without an abundant supply of both, digestion can be severely impaired. - Provide more nourishment from foods with this dynamic combination of active enzymes - Contain amylase, cellulase, lipase, protease, and lactase for more efficient digestion to avoid the after-meal energy slump - Help break down fats, carbohydrates, protein, and fiber - Formulated with Wild Bluegreen Body to support the enzymes’ ability to function - 90 Capsules |*These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure or prevent any disease.|
<urn:uuid:31ad08e0-ffde-4c50-b8c5-64ba43b0bb8d>
{ "date": "2018-03-21T20:12:35", "dump": "CC-MAIN-2018-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647692.51/warc/CC-MAIN-20180321195830-20180321215830-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9401475787162781, "score": 2.890625, "token_count": 584, "url": "https://prosperity-abounds.com/product/enzymes/" }
Writing five paragraph essay worksheet Why Everyone Should Learn to Write A Five Paragraph Essay. The reason five paragraph essays are so. confident in their writing. Basically there are five. Five paragraph essay August 08 This is the last paragraph in your writing, it should therefore include. Summary of three main arguments given in the body element. Use the Compare/Contrast Essay Worksheet. CLRC Writing Center 2/09 Writing a Compare/Contrast Essay. Body Paragraph 1 Topic Sentence. The 5-paragraph essay is a model that instructors use to. The fifth paragraph of your five-paragraph essay will be. Writing a Five-Paragraph Essay. Five paragraph essay lesson plans and worksheets from thousands of teacher. Students fill in the blanks on a writing worksheet to create ideas for a five paragraph. How to Write A Five-Paragraph Essay The type of practice likely to prove most helpful to students facing high-stakes writing tests is the five-paragraph essay. Writing A Five Paragraph Essay Worksheet A Level Physics Research Coursework Ideas Essay On My Ambition In Life To Become A Teacher With Quotations Essay. In this educational animated movie about English learn about topic sentences, structure, intros, conclusions, thesis, and essays. The ESOL Essayist- Writing the Five-Paragraph Essay. ©2001 ©2001. HomePage » Samples Best Essay Writing provides you free sample for perusual. You may go through the following documents to have an idea about the way we write. Line-by-line color-coded organizer to familiarize students with the nuts and bolts of basic essay-writing and. students use it to create a five-paragraph essay. The five paragraph essay process is broken into 8 steps Paragraph and essay writing assignment for middle school Worksheet: Writing. About This Quiz & Worksheet. The five-paragraph essay structure is widely used by writers Go to Essay Writing 7 - Reading and Understanding. Help students write five-paragraph essays with a graphic organizer Five-Paragraph Essay Five. Worksheet. Fill-in Halloween Story:. Writing five paragraph essay worksheet Learn everything that is important about writing the five paragraph essay. Writing prompts are included for practice. Hook" which moves the reader to the first paragraph of the body of the essay custom essay writing. 5 paragraph essay topics are not. Writing Skills: The Paragraph - Duration:. Essay Writing Video With Three Full Examples. How to write a five paragraph essay? - Duration:. This paragraph writing worksheet directs the student to use the given template to write a five paragraph essay Five Paragraph Essay Writing Worksheets. FIVE PARAGRAPH ESSAY MADE SIMPLE: INTRODUCTORY GRABBER. Introductory writing lesson on how to use a grabber for writing a simple five paragraph essay. Provided by the Regent University Writing Center as one approach for writing a five-paragraph essay Microsoft Word - A Five-Paragraph Essay Worksheet.doc. Plan your lesson in Persuasive Writing and Writing with helpful tips from. Students read through a second five-paragraph essay on their own and label it as. However, when writing an essay, some of your paragraphs may have more or less than three supporting sentences.). Worksheet for Writing a 5-Paragraph Essay. Writing a five-paragraph theme is like riding a bicycle with training wheels; it’s a device that helps you learn persuasive essay—in fifty minutes or less. Home > Worksheets by Grade Level > Grade 5 > Language Arts > Writing > Paragraph Writing Writing Worksheets Worksheet Areas : Language Skills. 
Exercises & worksheets: eslflow webguide. attention getters for essay and paragraph writing Introductory Paragraph: Hook Strategies worksheet. Developing a 5 paragraph essay: preparation and writing. The Five Paragraph Essay. Introductory Paragraph. This worksheet introduces kids to the fascinating true story of Journey and guides them to write a five paragraph essay about. A Sporting Event Writing Worksheet. How to write an industry analysis how will technology affect the future professional resume writing services ottawa writing a five paragraph essay worksheet apes. Five-paragraph Essay Organizer PARAGRAPH ONE: INTRODUCTION Sentence one. Sentence five: PARAGRAPH FIVE: CONCLUSION Sentence one (Summarize first. These writing worksheets are great for working with writing Writing a Paragraph Worksheet Writing Five Paragraph Essay Worksheets. Writing Five Paragraph Essay. General 5- Paragraph Essay Outline This is a sample outline. Number of paragraphs and paragraph length will vary. I) Introduction A) Attention Statement. Five-paragraph essay is a special structural type of writing The topics of five-paragraph essays vary. Example of 5-paragraph essay written in the proper. WORKSHEET/OUTLINE FOR ANALYTICAL/ARGUMENT ESSAYS 1 THE FIVE-PART ESSAY:. Supporting Paragraph #1. Step 5 To help students with the brainstorming process for writing a paragraph on. the Sample Paragraph 2 worksheet Writing a Well-Structured Paragraph. 5 Paragraph Essay Outline Worksheet. A 5 paragraph essay outline worksheet is one of the great tools used in writing. It is not an easy task to write essay paper. Five Paragraph Essay Guided Writing Worksheet Example Essay Important Person Essay. Career Aspirations Essay Medicine.five paragraph essay guided writing worksheet. Worksheets for Essay Writing Paragraph Writing; Essay Writing; Narrative Writing; Informative Writing; Middle School. Basic Mechanics; Writing Enhancement. Writing A Five Paragraph Essay Worksheet Romeo And Juliet Essay Introduction Egg. Essay On Barriers To Oral Communication.writing a five paragraph essay worksheet. Paragraph structure PRACTICE WORKSHEET. Paragraph #1 My Dog Romeo is so much fun to play with. One reason he’s fun is because he loves to play catch. Introduction Paragraph Conclusion paragraph. Very brief review of the essay Five Paragraph Outline Worksheet Author. Writing Practice; Parts of Speech; Count. students must choose the best way to correct errors highlighted in the given paragraph Advanced Paragraph Correction. Writing 5 paragraph essay worksheet Examples of petrarchan sonnet examples of how to write a magazine article totalitarianism propaganda writing 5 paragraph essay.
<urn:uuid:9dc015c1-4bda-4772-8281-f0b5d69a393b>
{ "date": "2017-04-25T04:37:11", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00528-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8549318909645081, "score": 3.046875, "token_count": 1341, "url": "https://gfpaperjgbt.rguschoolhillcampus.com/writing-five-paragraph-essay-worksheet.html" }
From The Aquarium Wiki
Mulm is usually the unattractive dark brown or black material that settles on the substrate of a tank. It is formed from the waste material ejected by the aquarium animals, leftover food and decomposing plants. In a tank this mulm is slowly digested over months by the bacteria living there and broken apart into useful chemicals that plants and other life living in the gravel can absorb. While non-planted tank owners often remove this material to keep their tanks clean, it is rich in essential chemicals for plants, so planted tank owners rarely remove it.
<urn:uuid:80b744a0-3615-465a-839f-2951704d52a9>
{ "date": "2014-10-25T11:51:35", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119648148.32/warc/CC-MAIN-20141024030048-00165-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9474373459815979, "score": 2.71875, "token_count": 122, "url": "http://theaquariumwiki.com/Mulm" }
Ankylosing spondylitis treatment can be a frustrating and confusing path for most patients to navigate, since doctors do not generally even agree on what causes AS, let alone how to effectively treat it. The vast majority of indicated modalities are symptomatic in nature and offer no hope of actually curing or resolving the disorder. This is common with musculoskeletal conditions and is particularly true of most autoimmune diseases, such as AS and rheumatoid arthritis. Research clearly shows that most cases of spondylitis which eventually resolve or stabilize do so through natural means, not because of the positive effects of any treatment. Basically, patients can invest more hope in fate and circumstance than they can in medical science when it comes to actually curing AS. This dissertation examines the successes and failures of treating ankylosing spondylitis.

Here are the usual methods used to treat ankylosing spondylitis and the benefits they may provide to affected patients:

Physical therapy will help to maintain joint mobility and improve functionality. However, physical therapy can be very painful for some AS sufferers.

Pain management drugs are used in large quantities by many patients.

DMARDs are used, just as in other inflammatory musculoskeletal conditions. These drugs may or may not help reduce progression of the disease. Results are often not consistent from patient to patient.

Immune suppressors can be very effective for AS victims, since the symptomatic process is enacted by the patient's own immune system. Of course, the side effects of this type of therapy can be severe or even fatal, because a suppressed immune system makes the patient prone to disease and infection, which may be lethal in some cases. Of course, all the usual risks of pharmaceutical therapy apply to all of the above products.

Back surgery is used in some circumstances to improve movement and functionality or to prevent serious organ complications which can result from significant spinal deformity. Surgery generally offers very limited benefits for most patients, but carries many substantial risks to consider.

AS treatment using traditional medical methods is a risky endeavor, to be sure. The modalities are dangerous, often addictive and can enact a host of troublesome side effects and health issues. Worse still is the unpredictable nature of the effectiveness of said treatments. Some patients respond and others do not.

Psychoemotional therapies, such as knowledge therapy, can be highly effective for those who are open to the logical idea of a psychogenic causation or contribution. The best part of this path is the cost-free and risk-free nature of treatment and the lack of side effects. Even if this path can provide only partial relief, it is worth investigating for many sufferers.

I am an avid supporter of the various forms of knowledge therapy for all manner of autoimmune diseases. As a group, these conditions have stumped medical science, since doctors are ever burdened with the illogical prejudice of needing to find a structural cause. In conditions where a definitive cause does not exist, the patient is doomed to suffer under unenlightened and often ridiculous treatment modalities until the condition resolves on its own or the patient succumbs to the disease. Do not let this be your destiny. Instead, do your own research and realize that the path back to health may be the one least traveled in the AS treatment sector today.
If you do decide on pharmaceutical or surgical care, be sure to at least understand the risk/benefit ratio and hold your doctor to their prognosis for treatment.
<urn:uuid:a72bf932-d6c5-4571-bcee-e800061fbdba>
{ "date": "2017-03-25T17:23:21", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189031.88/warc/CC-MAIN-20170322212949-00446-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9384073615074158, "score": 2.59375, "token_count": 713, "url": "http://www.cure-back-pain.org/ankylosing-spondylitis-treatment.html" }
Cavalry Embarking at Blackwall (probably Perry's Dock), 24 April 1793, by William Anderson, 1793 (National Maritime Museum)

Anyone who walks along the river near Blackwall and the Virginia Settlers Monument could be forgiven for believing it is a bit of a backwater; however, for over 400 years this was a site of great importance for British naval history, for it was in this spot that hundreds of Merchant and Royal Navy ships were built that helped to forge an Empire.

Blackwall's location just before the bend of the Isle of Dogs, and its popularity as an anchorage from which travellers embarked and disembarked, made it important from as early as the fifteenth century. However, Blackwall also became known in the fifteenth and sixteenth centuries for ship repairs; a number of royal ships were repaired here, most famously the Mary Rose in 1514. Shipbuilding was rarely undertaken until 1614, when the East India Company decided to build a shipyard at Blackwall. The dockyard was built to cope with the demands of trade, in which the company quite often rented its ships out to rich merchants.

In 1652 the East India Company sold Blackwall Yard to the shipwright Henry Johnson, who extended the dockyard. Samuel Pepys, working for the Royal Navy, commissioned a number of ships from Blackwall in the late 17th century, by then one of the largest private shipyards in the country.

Launch of the 'Venerable', 74-guns, at Blackwall, Francis Holman 1784 (National Maritime Museum)

HMS 'Venerable' was launched in April 1784 at Perry's yard in Blackwall. In the 18th century, Blackwall was taken over by the Perry family, who continued to build and repair ships for the East India Company and for others. It was the Perry family that built the Brunswick Dock, which opened in 1790.

View of Mr Perry's Yard, Blackwall by William Dixon 1796 (National Maritime Museum)

In 1803 the East India Dock Company bought part of the site, including the Brunswick Dock, to turn into the East India Export Dock.

The Mast House and Brunswick Dock at Blackwall by William Daniell 1803 (National Maritime Museum)

Eventually Perry's was taken over by Wigram & Green, who in 1821 built their first steamship and went on to build the internationally famous Blackwall Frigates.

The Blackwall Frigate 'Maidstone' at Sea by H.J. Callow 1869 (National Maritime Museum)

Blackwall Frigate was the common name for a type of three-masted full-rigged ship built between the late 1830s and the mid-1870s. The first Blackwall Frigates were built by Wigram and Green at Blackwall to replace the East Indiaman ships that had been built on this site for centuries. Although not as quick as a 'clipper', they were still used on the long voyages between England and Australia. Wigram and Green eventually became simply Green's, which became famous for building naval vessels, including the pioneering iron-hulled warship HMS Warrior, launched in 1860.

Blackwall, London 1872 by Charles Napier Henry (Museum of London)

At the beginning of the 20th century the site became too small for the larger ships, and although shipbuilding and ship repairs were still carried out, they were on a much smaller scale than in the site's heyday. Nevertheless, the site remained active under different management until 1989, when most of the docks were filled in and buildings were built on the site.

Blackwall's illustrious past is generally forgotten; however, there is no doubt that Blackwall was for centuries one of the most important maritime sites in Britain.
<urn:uuid:97dbf316-2ce4-4988-bda3-5b2a6827c14d>
{ "date": "2017-10-22T17:09:31", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825399.73/warc/CC-MAIN-20171022165927-20171022185927-00556.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9765157699584961, "score": 3.203125, "token_count": 781, "url": "https://isleofdogslife.wordpress.com/2013/05/16/rule-britannia-when-blackwall-ruled-the-waves/" }
Upon visiting the official webpage of the Association of Zoos and Aquariums, one is introduced to this North American institutional network through a series of numbers: 230 accredited zoos and aquariums across forty-five states, accommodating approximately 800,000 animals, 6,000 species out of which about 1,000 are endangered, providing 208,000 jobs, and with an annual budget of $218 million for wildlife conservation.1 With over 196 million visits a year American zoos remain one the most popular leisure-oriented destinations. How did zoos become such powerful sites for encounters with wild nature for urban publics? Where did these exotic species on display come from? Who and why gets to work in the zoo? What is the role of these animal collections in wildlife conservation, especially in the face of alarming losses in biodiversity brought by the sixth mass extinction? Daniel E. Bender’s latest book not only answers all of these questions by setting the issues of animal trade, labor conditions in the zoo, as well as its changing mission and development against a vivid historical background, but also complicates well-trodden paths in zoo scholarship. Zoos have been researched from a variety of perspectives, including cultural, literary, philosophical, science and technology studies, and historical ones. 2 However, retelling the stories of these multi-dimensional sites, their inhabitants, workers, and visionaries without repeating official historiographies, ones geared towards “recuperative remembrance” of these highly contested public institutions, still poses a challenge for critical scholars. Bender is aware of these pitfalls and manages to bypass the politics of “closed doors” that many zoo archives practice, by carefully tracing other sources such as personal correspondence between zoo directors, animal collectors, and zookeepers; memoirs, fiction stories, scientific publications, and guides they authored; visitors’ notes, and paraphernalia (374–5). This rich material allows him to unravel the hidden histories of animal business with great depth and precision. Bender skillfully balances both human and animal biographies in his detailed account of the twentieth-century history of collecting, displaying, and caring for wild animals in American zoos.3 Apart from the usually recounted names of influential animal traders like Carl Hagenbeck, or visionary directors like William T. Hornaday, the author follows an array of unorthodox actors contributing to the changing mission of the zoo amidst the turmoil of two world wars, the Great Depression, and decolonization. From nonhuman and human celebrities, adventurers, and trappers, to clerks, zookeepers, and middlemen in the colonies, the book presents many underexplored aspects of wildlife conservation such as workers’ strikes and unionization, rivalry between zoos, circuses, adventure films, and wildlife television shows. The Animal Game offers a fresh insight into how exotic animals made their way to American zoos, homes, and the silver screen, and more importantly, into “how we have learned to look at faraway places, environments, and peoples through the lens of animals on display at zoos and for sale in the animal business” (4). The strongest side of The Animal Game is its finely crafted analysis of the interlocked workings of gender, race, and class made tangible through the eclectic stories that introduce each chapter. 
In Bender’s account, “zoos were born of elite dreams of natural order; their life was one of disorder because of what the animal trade could offer, what visitors desired, and how animals behaved” (20). This tension between the social order imagined through “tidy taxonomy” and the disorder of animal and human resistance is sustained throughout the book, and allows for exploring various power relations at play in the zoo, on the safari, and on the colonial market (24). Starting with class, Bender contrasts the glamorous life of the fin-de-siècle animal collectors browsing animal markets in Asian and African port cities, sipping cocktails in luxurious lounges, and navigating colonial bureaucracy, with the rough working conditions of the local trappers, hunters, and porters who “labored for fresh meat but were expected, ultimately, to keep animals alive,” all while the imperial rule restricted native hunting (73). This paradox along with the introduction of piecework pay adapted from American factories led to strikes and rebellions. The theme of labor rights comes back in chapters four and six, recounting respectively, the federal programs aimed at modernizing the zoos after the Great Depression (116), and the wave of strikes in the U.S. public sector after the New Deal era, including zookeepers demanding better work conditions (177). According to Bender, “the campaign to save the zoo became a mass social movement,” and more importantly, was modeled on union campaigns (120). This unique account of union activism in American zoos and the successive shift from blue-collar unskilled keepers to professionalized white-collar technicians is narrated along the transforming public mission of the zoo, putting more emphasis on care for endangered species. This shift was partially induced by the way “the union and the zoo jostled to claim the mantle of animal care” (193). The principle of motherly care re-animating zookeeping labor and institutional management is a gendered one. As the military-style zoo guards gave way to keepers encouraged to interact with visitors, more women applied for jobs in this previously male-dominated profession. The codes of masculinity are exemplified in the book by such figures like Frank Buck, the famous animal collector, who directed his career towards showmanship, “turning the animal business into a popular culture industry” (95). However, this manly hero of the jungle brawls, in his signature pith helmet and khaki suit, is contrasted with tender “zoo ladies.” By portraying two female figures, Genevieve Cuprys, the famous animal trader known as “Jungle Jenny,” and Belle Benchley, the first woman zoo director in the United States, Bender shows how “domesticity, love, and family reshaped animal business” (242). The author underscores the successive investment in family-oriented leisure and ideals of domesticity materialized in animal families on display as key for the postwar zoo. The spectacle of white women (often directors’ wives, or female employees) raising orphaned or rejected animal babies in their homes, not only domesticated conservation, but also marked the turn towards breeding captive populations after the introduction of international treaties curbing wildlife trade (240). However, the image of white women cuddling ape babies is teeming with racial tension, as “the family became a dominant metaphor for a new understanding of racial difference among peoples at a time when strict conceptions of racial hierarchy were tainted with Nazism” (247). 
In chapter five, Bender explores the unsettling likeness of apes to humans that easily slips into racism, especially in popular zoo shows featuring monkeys and chimpanzees performing "civilized" tasks. These performances "encouraged visitors to measure the differences of savagery and civilization separating white, black, and ape," and, according to the author, were another form of blackface and minstrelsy (145, 149). Racial relations are well examined throughout the book, from the white collectors who depended on African and Asian hunters and handlers while portraying them as loyal and docile assistants, to the casting of indigenous populations as exotic specimens or as poachers decimating the shrinking wildlife resources at the twilight of the colonial empires. Along the "shift from colonial business to postcolonial diplomacy," the zoo reinvented itself as the ambassador of wildlife conservation by transforming endangered species into a new currency (227). Bender convincingly presents the global dimension of the animal trade, largely shaped by the rise and fall of colonial empires. The Animal Game is a brilliantly written study that explores many neglected aspects of the modern zoo. This original book will be an engaging read for environmental historians, scholars interested in colonialism, decolonization, and the twentieth-century United States, as well as for wider audiences.

doi: 10.1093/dh/dhy023

Footnotes
1 "Association of Zoos & Aquariums," accessed November 30, 2017, https://www.aza.org/.
2 Lisa Uddin, Zoo Renewal: White Flight and the Animal Ghetto (Minneapolis, MN, 2015); Randy Malamud, Reading Zoos: Representations of Animals and Captivity (New York, 1998); Ralph R. Acampora, ed., Metamorphoses of the Zoo: Animal Encounter after Noah (New York, 2010); Keekok Lee, Zoos: A Philosophical Tour (New York, 2005); Carrie Friese, Cloning Wild Life: Zoos, Captivity, and the Future of Endangered Animals (New York, 2013); Harriet Ritvo, The Animal Estate: The English and Other Creatures in the Victorian Age (Cambridge, MA, 1987); Elizabeth Hanson, Animal Attractions: Nature on Display in American Zoos (Princeton, NJ, 2002); Eric Baratay and Elisabeth Hardouin-Fugier, Zoo: A History of Zoological Gardens in the West (London, 2004); Nigel Rothfels, Savages and Beasts: The Birth of the Modern Zoo (Baltimore, MD, 2008).
3 Eric Baratay, Biographies animales. Des vies retrouvées (Paris, 2017).

© The Author(s) 2018. Published by Oxford University Press on behalf of the Society for Historians of American Foreign Relations. All rights reserved. For permissions, please e-mail: [email protected]. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices)

Diplomatic History, Oxford University Press. Published: Apr 17, 2018.
<urn:uuid:e31eefdb-46d6-4aaa-9399-7b89ac885146>
{ "date": "2018-06-23T06:24:47", "dump": "CC-MAIN-2018-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864943.28/warc/CC-MAIN-20180623054721-20180623074721-00176.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9166791439056396, "score": 3.015625, "token_count": 2357, "url": "https://www.deepdyve.com/lp/ou_press/viewing-the-world-through-the-american-zoo-2NIA4wEM0K" }
MONDAY, Jan. 30, 2012 (HealthDay News) -- The need for a connection to other people is so powerful that being ignored by a stranger can make someone feel left out, according to a new study. People need to feel they are part of a group or connected to others in order to be happy, the researchers explained. This sense of belonging can come from joining a club, a friendly neighbor or -- as this study reveals -- even eye contact from a stranger. In conducting the study, researchers randomly chose people walking on the Purdue University campus in West Lafayette, Ind. A research assistant either looked them in the eye, looked them in the eye and smiled or looked in their general direction but not directly at them. Once they passed the research assistant, the study subjects were asked how connected they felt to others. The study, published in Psychological Science, found those who had gotten eye contact from the research assistant felt less disconnected than those who were ignored -- even when they didn't get a smile. "These are people that you don't know, just walking by you, but them looking at you or giving you the air gaze -- looking through you -- seemed to have at least momentary effect," said study co-author Eric Wesselmann of Purdue University in a school news release. "What we find so interesting about this is that now we can further speak to the power of human social connection. It seems to be a very strong phenomenon." The researchers noted previous studies have shown that being excluded by a group -- even one that they condemn -- can make people feel left out. The Stanford University Encyclopedia of Philosophy provides more information on friendship. SOURCE: Psychological Science, news release, Jan. 25, 2012 Copyright © 2012 HealthDay. All rights reserved.
<urn:uuid:a301fe1b-e28b-41ad-9b29-650156c4e491>
{ "date": "2014-08-20T04:51:10", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500800168.29/warc/CC-MAIN-20140820021320-00020-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.969052255153656, "score": 2.515625, "token_count": 363, "url": "http://consumer.healthday.com/mental-health-information-25/behavior-health-news-56/even-strangers-can-make-you-feel-left-out-661165.html" }
Schools To Get Free Access To Civil Rights Documentaries By Dr. Marciene Mattleman PHILADELPHIA (CBS) - “The Butler,” featuring a long time White House staffer and “42” about Jackie Robinson’s rise in national baseball are two recent films based on true stories dealing with discrimination, that have drawn wide audiences. According to Alyssa Morones in the ‘News In Brief‘ section of Education Week, a new initiative of the National Endowment for the Humanities is providing schools and communities free access to documentaries tracing the civil rights movement that will help us learn more. Among the selections are “Freedom Riders,” highlighting more than 400 black and white Americans who risked their lives riding on public transportation defying Jim Crow laws and “The Abolitionists” giving details of brave early efforts to outlaw slavery. Films cover material starting with the “seeds of change” in 1820 to the 1967 Supreme Court decision overturning the ban on interracial marriage. A website has been launched in the wake of the 150th anniversary of the Emancipation Proclamation and the 50th anniversary of the March on Washington.
<urn:uuid:48eb64a2-2e5b-4fd8-ab6b-9ec71ce4eeb0>
{ "date": "2014-08-21T18:50:21", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500820886.32/warc/CC-MAIN-20140820021340-00014-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9115105271339417, "score": 3.140625, "token_count": 254, "url": "http://philadelphia.cbslocal.com/2013/10/22/schools-to-get-free-access-to-civil-rights-documentaries/" }
Maps are a brilliant way of communicating information. For me, maps work on two levels. The first is that they provide a visual representation of a landmass — or figurative landmass, like the organisation of a company, the brain, or the Dewey Decimal System — some structure with which we are largely unfamiliar and need to be better acquainted. The world gets smaller when you can map it and contain it within a single image: by delineating the boundaries, you are effectively constraining what lies in the Here Be Dragons quadrant of known unknowns. Having a map of the terrain is useful for developing confidence: just as you wouldn't tackle a mountain without having checked out the map first, students find it reassuring when they know what you are going to cover in a lecture, even if they don't yet have a handle on the details.

The second reason maps are useful is to provide a familiar structure for new information. The most obvious recent example of this is Mark Newman's fantastic 2008 electoral maps of the US, in which voting results are draped over the familiar outline of states and counties, in some versions with areas rescaled in proportion to population, though by that point it almost starts looking like something out of Babylon 5. Because — it is assumed — we are sufficiently familiar with the underlying structure, we are free to explore the new data: how did a given state, county or timezone vote? What could potentially be a really complex information set if just dumped on us wholesale (for example, in the form of statistics) now becomes easily graspable, because it's framed by a known structure.

We could do this more in teaching: provide an early, basic road-map to students about the borders of the area under discussion, and progressively revisit and colour in the missing pieces. This is not always how we do things: a popular pedagogic M.O. seems to be to introduce Topic A and then fill in all the details before moving on to Topic B, etc. — but what we could do is show a map of Topics A through H first, and then revisit each topic once students have understood where the edges of the map are. Good teaching practice means being more explicit about maps.
<urn:uuid:70362b1b-1206-4116-bb47-05c3067b3d01>
{ "date": "2019-10-22T05:08:46", "dump": "CC-MAIN-2019-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987798619.84/warc/CC-MAIN-20191022030805-20191022054305-00256.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9422453045845032, "score": 3.15625, "token_count": 436, "url": "https://finiteattentionspan.wordpress.com/2008/11/19/why-cartography-is-such-a-good-communication-tool/" }
Types of Toothpaste
There are many different types of toothpastes on the market. The "all in one" toothpaste contains a combination of agents to reduce tartar formation, improve gum health and prevent dental caries. It is important to verify that the effectiveness of toothpastes advertising improved or new formulations has been "clinically proven" by seeking information from dental public health personnel with expertise in the field.
Fluoride toothpastes make up more than 95% of all toothpaste sales. It is well recognised that the decline in the prevalence of dental caries recorded in most industrialised countries over the past 30 years can be attributed mainly to the widespread use of toothpastes that contain fluoride. Investigations into the effectiveness of adding fluoride to toothpaste have been carried out since 1945 and cover a wide range of active ingredients in various abrasive formulations. Fluoride compounds and their combinations which have been tested for the control of dental decay include sodium fluoride, stannous fluoride, sodium monofluorophosphate and amine fluoride. The most widely used fluoride compounds in the Republic of Ireland are sodium fluoride and sodium monofluorophosphate.
Amount of fluoride in toothpaste
The amount of fluoride contained in fluoride toothpaste should be indicated on the toothpaste tube, although this information may sometimes be hard to locate. It may appear after the label "Active ingredient" or as a component under "Ingredients" on the toothpaste tube. Whereas previously fluoride content was given as a percent of volume (% w/v) or weight (% w/w), it is now accepted that the most efficient method of informing people of the amount of fluoride in a toothpaste is to give the "parts per million" fluoride (ppm F). Most manufacturers now give fluoride content in ppm F.
Under EU Directive 76/768/EEC, toothpastes are classified as cosmetic products. EU Directives governing cosmetic products prohibit the marketing of cosmetic products (including toothpastes) with over-the-counter levels of fluoride greater than 1,500 ppm F. At present, most toothpastes in Ireland contain 1,000-1,500 ppm F. Fluoride toothpastes are more effective at preventing tooth decay at higher fluoride concentrations [50]. If needed for therapeutic reasons, toothpastes containing more than 1,500 ppm F (e.g., 2,800 ppm F) are available but may be obtained only with a prescription.
Fluoride toothpaste for children
Because young infants and children under age 2 years can swallow most, if not all, of the toothpaste when brushing, there has been concern that the use of fluoride toothpaste containing 1,000-1,500 ppm F could give rise to enamel fluorosis of the front permanent incisors. Enamel fluorosis is a condition which can vary from minor white spots to unsightly yellow/brown discolouration of the enamel due to excessive intake of fluoride. In response to the concern over enamel fluorosis, some manufacturers now market low-fluoride "children's" or "paediatric" toothpastes containing less than 600 ppm fluoride. The effectiveness of these low-fluoride "children's" or "paediatric" toothpastes in preventing caries has not been established. What has been shown by a number of systematic reviews is that toothpastes with a low fluoride concentration of 250 ppm F are less effective than toothpastes with the standard 1,000-1,500 ppm F at preventing caries in permanent teeth.
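As a rough illustration of the unit conversion described above, the sketch below shows how a percent-by-weight figure on a tube can be translated into ppm F. The formulation percentages used here (0.24% sodium fluoride and 0.76% sodium monofluorophosphate) are common example values, not a statement about any particular brand, and the results are approximate.

```python
# Rough conversion from a compound's % w/w on the tube to ppm fluoride (ppm F).
# The example formulations below are illustrative only, not specific products.

ATOMIC_MASS = {"Na": 22.99, "F": 19.00, "P": 30.97, "O": 16.00}

def fluoride_fraction(formula_masses):
    """Fraction of the compound's formula mass that is fluoride."""
    total = sum(formula_masses.values())
    return formula_masses["F"] / total

# Sodium fluoride, NaF
naf = {"Na": ATOMIC_MASS["Na"], "F": ATOMIC_MASS["F"]}
# Sodium monofluorophosphate, Na2PO3F
smfp = {"Na": 2 * ATOMIC_MASS["Na"], "P": ATOMIC_MASS["P"],
        "O": 3 * ATOMIC_MASS["O"], "F": ATOMIC_MASS["F"]}

def ppm_f(percent_w_w, compound):
    # 1% w/w = 10,000 ppm of the compound; scale by the fluoride mass fraction
    return percent_w_w * 10_000 * fluoride_fraction(compound)

print(f"0.24% w/w NaF  is about {ppm_f(0.24, naf):.0f} ppm F")    # roughly 1,100 ppm F
print(f"0.76% w/w SMFP is about {ppm_f(0.76, smfp):.0f} ppm F")   # roughly 1,000 ppm F
```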
Recommendations on the use of fluoride toothpaste in children have been produced by the Expert Body on Fluorides and Health (http://www.fluoridesandhealth.ie/). These recommendations aim to minimise the risk of fluorosis from fluoride toothpaste while maximising its caries-preventive benefits. These recommendations can be found here. Click here for more information relating to your children's dental health under our Information & Support section
<urn:uuid:f5580fb7-6c72-4614-8add-dafe8a3d523b>
{ "date": "2013-12-04T16:11:58", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163035819/warc/CC-MAIN-20131204131715-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9218817353248596, "score": 3.3125, "token_count": 824, "url": "http://www.dentalhealth.ie/dentalhealth/teeth/fluoridetoothpastes.html" }
Our History in Gender Integration CARE's History in Gender Equality With over 20 years of experience prioritizing gender equality, CARE has developed an amazing range of tools and lessons about becoming a women’s empowerment organization. Explore more about how to embed gender in all aspects of an organization. How Did We Get Here? Take a look at the key moments in our history of prioritizing gender equality. Strategic Impact Inquiry 5 years, 350 staff, 400 projects, 24 countries. What do we really know about women’s empowerment? How do we improve our work? We prioritized gender equality in all aspects of the organization, including HR. We ask every recruit about gender equality while interviewing, and all staff members take GED training. Create a Movement CARE counts on all of its staff to champion gender equality, and to work gender issues into the programs they touch. Our very culture counts gender equality as core and central to our programming.
<urn:uuid:1d6c3915-a102-4fdd-a8a0-8d16293f2748>
{ "date": "2015-05-29T16:13:50", "dump": "CC-MAIN-2015-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930256.3/warc/CC-MAIN-20150521113210-00074-ip-10-180-206-219.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9266985654830933, "score": 2.515625, "token_count": 198, "url": "http://care.org/our-work/womens-empowerment/gender-integration/our-history-gender-integration" }
The standard K ration was commissioned by the U.S. Army in 1941, when they employed Dr. Ancel Keys, a University of Minnesota physiologist, to design a non-perishable emergency combat ration. The ration was intended to supply soldiers with three meals containing enough calories and nutrition to sustain combat operations for a short duration. Keys' initial menu consisted of hard biscuits, sausage, chocolate, hard candy, and a vitamin. The menu was dubbed "better than nothing" by initial test subjects, and a new menu was commissioned, eventually including more variety: Breakfast: a canned entree such as chopped ham and eggs or veal loaf; hard biscuits; dried fruit bar or cereal bar; water purification tablets; a 4-pack of cigarettes; chewing gum; instant coffee; and sugar Lunch: a canned entree such as processed ham and cheese; hard biscuits; malted milk tablets or caramels; sugar; salt; 4-pack of cigarettes; book of matches; chewing gum; powdered beverage packet Dinner: canned meat such as chicken paté or pork luncheon meat; carrot and apple; hard biscuits; two-ounce D ration emergency chocolate bar; commercial sweet chocolate bar; packet of toilet paper tissues; 4-pack of cigarettes; chewing gum; bouillon soup cube or powder packet Tests were conducted in Panama over rolling hills at a light march pace in 1942, and after three days, none of the soldiers in the test had lost significant weight. The complete daily intake of calories totaled between 2,800 and 3,000 and was recommended for only up to 15 days' use. However, the caloric needs of men on extended marches, digging trenches, or other combat-related activities well exceed that total, especially in very hot climates such as in the Pacific Theater. Malnutrition became a factor for these men, and for those in besieged European areas where K rations were eaten for months at a time. Men in Burma lost in excess of 35 pounds during their campaigns and became less resistant to tropical diseases. Some soldiers were able to supplement their meals with rations taken from other soldiers (German rations, in particular, were coveted because their cheese and sausage tasted better), but the pork loaf and acidic lemon power were considered completely unpalatable by many. They simply tossed the offending items, thus reducing the calories available to them. Jungle and mountain rations were also produced, but Army supply officers hated them because their production required additional local contacts to supply fresher foods, and the expense of supplying 4,000 daily calories per man was considered too high. Both the jungle and mountain rations were cut completely in 1943 in favor of the inadequate--yet cost effective!--K rations. But that same year, the Army declared that K rations should only be used for up to five days, but by then many units were already in the field and had little access to updated replacements. Other rations such as the C ration were deemed too unwieldy on long marches, although they were more nutritious, and the monotony of the menu caused morale problems. Accessory packs containing different flavors of water tablets, candy, and hard crackers helped, but even these improvements couldn't assuage the dullness of weeks and even months eating the same food every day.
<urn:uuid:946a0e83-c072-42ba-b06a-29c844fd0fca>
{ "date": "2018-01-19T01:17:41", "dump": "CC-MAIN-2018-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00656.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9785424470901489, "score": 3.5, "token_count": 675, "url": "https://unusualhistoricals.blogspot.com/2009/03/food-drink-wwii-rations.html" }
DNAPL History and Transport: A Summary This post was prompted by interest in a lecture I attended as part of the Princeton Groundwater Pollution and Hydrogeology course (recommended!). North America’s most prominent DNAPL expert, John Cherry, spoke for approximately 5 hours about the topics I have summarized here. Cherry’s notes are copyrighted, as is the whole course, so I have paraphrased the take-home messages. An annotated list of references will follow in a subsequent post (soon)! DNAPL is Dense Non-Aqueous Phase Liquid. The term refers to the group of groundwater contaminants that have a density greater than water (commonly cited as 1.0 g/cm3). These are a tricky group of compounds for contaminant hydrogeologists because the source tends to sink and can be difficult to find. DNAPLs include chlorinated solvents (tetrachloroethylene (PCE), trichloroethylene (TCE), trichloroethane (TCA), cis-1,2-dichloroethylene (DCE), vinyl chloride(VC)), creosote, coal tar, polychlorinated biphenyls (PCBs) and undiluted pesticides. The most widespread DNAPLs are chlorinated solvents because PCE is the main chemical used for dry cleaning, TCE is a heavily-used industrial solvent, and DCE and VC are the sequential daughter products. (VC degrades further into ethene and ethane, which are the desired end-products of in situ degradation of a chlorinated solvent.) Chlorinated solvents were reportedly developed in Germany in the late 19th Century; their use increased drastically during WWII. Until 1979, MSDSs for chlorinated solvents recommended disposal in dry soil because they were thought to volatilize. Dissolved plumes caused by DNAPLs were discovered in the 1970s but DNAPL (the free phase, not dissolved phase) was not discovered until the mid-1980s. This is partially because monitoring wells are a poor method to detect DNAPL; it is rarely found in wells. Discovery was precipitated by legislation introduced during the previous decade: Safe Drinking Water Act (1974), Resource Conservation and Recovery Act (RCRA, 1976) and the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA, commonly known as Superfund, 1980). This legislation required sampling of municipal wells specifically for chlorinated solvents, which were summarily discovered in some drinking water systems. Unlike some other contaminants, such as my favorite, methyl tert-butyl ether (MTBE), chlorinated solvents have high taste and odor thresholds, meaning that people don’t taste or smell the compounds in water until a relatively high concentration. Chlorinated solvents have taste thresholds around several hundred ug/L; MTBE is at least one if not two orders of magnitude less. Taste thresholds are highly dependent on the individual. Fate and Transport DNAPL transport has been studied in three categories, in order of decreasing scientific understanding: sand and gravel, fractured clay and fractured rock. Research on DNAPL fate and transport in fractured rock is nascent and appears to be dominated by Beth Parker and John Cherry at the University of Guelph (but of course I learned this at a course taught by Cherry). Sand and Gravel A series of field experiments were conducted in the fine sand aquifer of Canadian Forces Base Borden (Ontario), starting in 1989, to explore the transport of different chlorinated DNAPLs with different release scenarios. 
This work isn’t news anymore, so I won’t go into more than to say that the DNAPL traveled deeper than expected and along very thin, slightly coarser-grained beds in an fairly homogeneous sand, forming multiple layers of free phase product at discrete depths. The extent of the fractures in an aquitard are the most relevant bit of information: do fractures extend through the unit and connect to an underlying aquifer or not? The importance of aquitard fracture extent was discovered by accident during a controlled release experiment of PCE at Base Borden in 1991 (I can’t find a publication of this specific experiment, though I believe it became part of Parker’s PhD dissertation). A falling head hydraulic test indicated that the aquitard was a sufficient barrier to flow; however, the hydraulic test failed to indicate slight leakage, which DNAPL is happy to exploit. It’s easy to forget that aquitards are defined in terms of water supply, not contaminant transport, and methods to test the aquitard’s hydraulic properties are likely insufficient to determine contaminant transport properties. How to determine that an aquitard has continuous fractures? Carefully inspect cores. There is still very little literature on DNAPL in fractured rock. There are several references addressing fluid flow in fractured rock (conference proceedings, National Resource Council “Rock Fractures and Fluid Flow” and a guide to regional groundwater flow available as a PDF here). Modeling demonstrated, before any field experiments were conducted, that the “orderly” and interconnected fracture network in sedimentary rock would generate a dispersion-dominated plume. In the 1997, a major field effort was initiated at Santa Susana (Simi, California), to study the fate of large volumes of TCE, which had been used to clean rocket test components on top of a sizeable shale/sandstone hill. Cherry spent a lot of time on the methods (message: don’t open a hole that becomes a transport pathway) but the surprising discovery at this site was that despite the high water table beneath the source area, TCE was not appearing at discharge locations. Analysis of numerous rock cores indicated that transport of DNAPL was dominated by matrix diffusion, not advection, and diffusion had sufficiently retarded TCE such that it never appeared at discharge points in the valley below. After four decades, no DNAPL source zone remained. (Parker presented this at the Battelle Conference on Chlorinated and Recalcitrant Compounds, Monterey, CA, May 24-27, 2010.) If you made it this far, thank you! This is my first real blog post. I encourage you to leave comments or questions. Were you hoping that some nugget of DNAPL properties would be addressed here that wasn’t? Let me know and I’ll do my best. If you have suggestions on how I could write better, I would really appreciate those comments. If you don’t want to leave a comment in this public forum, feel free to email me or DM me on Twitter. Cheers.
<urn:uuid:229948fb-4057-4a9c-89f0-a6f37d321b63>
{ "date": "2014-04-18T08:19:25", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00059-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9501040577888489, "score": 3.078125, "token_count": 1406, "url": "http://perrykidposts.wordpress.com/2010/08/08/dnaplsummary/" }
Natural freedom gave place to civil freedom by a social contract. Since the conditions in the state of nature were intolerable and men longed for peace the people entered into a kind of contract to ensure for themselves security and certainty of life and property. Second, because people surrender themselves unconditionally, the individual has no rights that can stand in opposition to the state. Rousseau uses three pieces of evidence to support this argument. He knew neither right nor wrong and free from all notions of virtue and vice. Thomas Hobbes an English thinker was of the opinion that society came into being as a weapon for the protection against the consequences of their own nature. Thus in order to protect himself against the evil consequence of his own nature man organized himself in society in order to live in peace with all. Rationally further defined by putting checks on our impulses and desires, and therefore learn to live morally. According to this theory all men are born free and equal. Population increased and reason was dawned. Rousseau argued men were free from the influence of civilization, and sought their own happiness uncontrolled by social laws and social institutions. Sovereign generally defined as the ultimate authority with regard to a certain group of people. The essence of their argument is as follows. Summarizing his statements, Rousseau argues that not just freedom but rationality and morality are only attainable through civil society. But refuses the common belief of his time that an elite group or single monarch can act as sovereign. Third, because no one is set above anyone else, people do not lose their natural freedom by entering the social contract. He was independent, contented, self-sufficient, healthy, fearless and good. Rousseau is not the only philosopher to define real freedom as the ability to think rationally. These thinkers suggested that in exchange for protection and safety from the state of nature people would consent to be governed or ruled by an absolute monarch. Once Rousseau establishes his preference for civil society over state of nature he begins to reveal key elements within his ideal republic; sovereignty, general will, and common good. Difference between stronger and weaker, rich and poor, arise.General Will and Rousseau's Social Contract Essay - When Jean Jacques Rousseau wrote the Social Contract, the concepts of liberty and freedom were not new ideas. Many political theorists such as Thomas Hobbes and John Locke had already developed their own interpretations of liberty, and in fact Locke had already published his views on the. Essay on The Social Contract Throughout Rousseau’s work, The Social Contract, he reveals many theories and components of government. He continually brainstorms on the particular question of, “How freedom may be possible in civil society?”. The Social Contract study guide contains a biography of Jean-Jacques Rousseau, literature essays, quiz questions, major themes, characters, and a. By forcing its subjects to obey the social contract, the sovereign essentially forces its subjects to maintain the civil freedom that is part and parcel of this social contract. Some commentators have gone so far as to accuse Rousseau of. The social contract theory throws light on the origin of the society. According to this theory all men are born free and equal. Individual the classical representatives of this school of thought are Thomas Hobbes, John Locke and J.J. Rousseau. 
The book "On the Social Contract," published in 1762 by Jean-Jacques Rousseau, is one of his most important works, which points out the basis for a genuine political order and freedom.
<urn:uuid:0bec79e5-cb2e-4d60-b885-c2b7cfc597c8>
{ "date": "2018-12-12T09:43:10", "dump": "CC-MAIN-2018-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823817.62/warc/CC-MAIN-20181212091014-20181212112514-00296.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9626006484031677, "score": 3.359375, "token_count": 728, "url": "http://vabyjetypa.killarney10mile.com/essays-on-the-social-contract-rousseau-104333tel571.html" }
The US Census is a count of all residents of the country. The count will include people of all ages, races, ethnic groups, citizens and non-citizens. Census data results will guide critical decisions on federal, state, and local levels, which means that achieving a complete and accurate count is essential.
<urn:uuid:bae02c49-9afb-4af0-aa1d-cd8b908ffb7f>
{ "date": "2015-08-05T08:22:46", "dump": "CC-MAIN-2015-32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438043062723.96/warc/CC-MAIN-20150728002422-00128-ip-10-236-191-2.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9346354007720947, "score": 2.953125, "token_count": 65, "url": "http://www.burienwa.gov/index.aspx?NID=873" }
Culture » February 14, 2017 How Today’s White Middle Class Was Made Possible By Welfare Whites, angered at blacks and immigrants receiving “government handouts,” forget they were lifted out of poverty through racially exclusive welfare programs in the 30s. Today, the federal government’s role in building and subsidizing the homestead communities—and the larger government programs to subsidize construction of white suburbs across the nation—is all but erased from history. Between 2001 and 2010, Westmoreland County, Pa., lost at least 8,000 manufacturing jobs. That’s one explanation for why this once-blue region gave more votes to Donald Trump than did any other Pennsylvania county, helping swing the state in his favor and propelling him to a surprise victory. “We want our jobs back,” John Golomb, a retired steelworker in Westmoreland County and lifelong Democrat who voted for Trump, told the Wall Street Journal, adding that previous presidents from both parties “forgot us.” A form of historical amnesia also afflicts Westmoreland County. Largely absent from discussions of its decline are the ambitious social welfare programs that once helped its residents climb out of poverty. Two generations ago, this area of rural Pennsylvania was the site of a sweeping—and successful—federal housing program. The New Deal subsistence homestead program, launched in 1933 with $25 million, built modern homes for low-wage industrial workers and gave them plots of land for subsistence farming. In this corner of coal country devastated by dangerous labor practices and low wages, federal officials constructed a new community that gave poor white families a stepping-stone to home ownership and the middle class. The story of this housing program is told by historians Timothy Kelly, Margaret Power and Michael Cary in Hope in Hard Times: Norvelt and the Struggle for Community During the Great Depression. Norvelt, one of 34 communities in 18 states completed under the Roosevelt administration’s subsistence homestead program, remains today as a village of more than 1,000 residents in Westmoreland County. The median household income in Norvelt is more than $56,000, just above the state median. Fewer than three percent of residents live in poverty, a lower rate than any of the surrounding communities. It’s a monument to the potential for “an ambitious and innovative federal government” to “work positively in people’s lives,” the authors write. But it is also a reminder of the federal government’s inability—or refusal— to address the unyielding racial segregation in America’s housing markets. The authors can document just one African-American family living in Norvelt in the late 1930s, and the community is still largely white today. Most of the community’s first residents were the children or grandchildren of immigrants from southern or eastern Europe and had lived in “coal patch” communities owned by Henry Clay Frick. The move to Norvelt could not have been more stark. The men had worked 10-to 12-hour days in Frick’s coal mines and coke ovens while residing in “patch” communities, in houses of four to six rooms owned by the mining company. Into the 1930s, families shared outhouses, carried water from a communal pump and heated their homes with coal stoves. Coal dust and contagious diseases spread from house to house, making the lives of wives and mothers a constant war against dirt, malnutrition and the diseases that put child mortality rates among the highest in the nation. 
Norvelt’s houses, by contrast, had indoor bathrooms, kitchen sinks with running water, furnaces and electrical appliances, providing the miners and their families with middle-class living standards. They worked collectively in the hatchery and vegetable gardens, producing food for their households and subsidizing their low wages at the mines. By all accounts, Norvelt was a healthy, prosperous community. The town was named for Eleanor Roosevelt, a strong supporter of the federal program, who visited Westmoreland Homesteads on May 21, 1937. Roosevelt had taken the train from Washington to Greensburg, Pa., then insisted on driving herself the eight miles to the settlement, where she was greeted by a welcoming committee and given a three-hour tour. Following her visit, the residents of Westmoreland, grateful for the first lady’s support, renamed their community Norvelt. The homestead experiment ended after WWII, when residents took ownership of their houses, incorporated their towns, and turned the cooperative farms into individually owned yards. A generation or two after the federal government built Norvelt and other homestead communities, many children and grandchildren of the original beneficiaries became Nixon’s silent majority or Trump’s Rust Belt whites, angered by what they perceived as government handouts to African Americans and immigrants. It’s an ugly irony that the book’s authors do not explain. When most Americans think about public housing today, they picture the widely despised high-rise projects blamed for destroying black urban neighborhoods. This was federally funded housing for the urban working class, introduced under the 1937 Housing Act. Early public housing offered comfortable, modern apartments for both white and black families, but racial segregation was enforced. By the 1960s, high-rise public housing was underfunded, poorly maintained, and considered little more than a warehouse for the black urban poor. Today, the federal government’s role in building and subsidizing the homestead communities—and the larger government programs to subsidize construction of white suburbs across the nation—is all but erased from history. This allows contemporary white Americans to assume they came by home ownership, and the family wealth it produces, through individual hard work. It also sustains their refusal to recognize the ways white privilege—or what W.E.B. Du Bois called “the wages of whiteness”—propelled white workers into middle-class economic stability. Deindustrialization in the 1970s and 1980s—coupled with 40 years of stagnant wages and the 2008 housing crisis—have eroded the value of whiteness, though certainly not eliminated it. Eighty years after Norvelt, the right-wing elite plays on the anxiety of working-class whites who have lost some of the economic privileges that their grandparents took for granted. The story of Norvelt reinforces the ways race and class are intricately bound together in American policy. Government support for housing the laboring poor was among the New Deal’s most innovative programs, one so effective that government’s role in building a nation of homeowners is now largely forgotten. The success of New Deal homestead towns, in particular, reminds us that sweeping social welfare programs can benefit and win the support of the rural working class. But like so many New Deal programs, the homestead program was shot through with contradictions. 
It mixed idealism with opportunism, collective values with individualism, working-class uplift with racial exclusion. Ultimately, this helped to forge America’s massive racial wealth gap. It’s a piece of history worth recovering.

Margaret Garb is the author of Freedom’s Ballot: African American Political Struggles in Chicago from Abolition to the Great Migration. She is working on a history of poverty and work in the U.S. from the Civil War to the Reagan era.
<urn:uuid:5a9f8fe8-6f3b-4d4a-bb10-86c742ee299e>
{ "date": "2019-10-22T17:10:29", "dump": "CC-MAIN-2019-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.955230176448822, "score": 3.046875, "token_count": 1755, "url": "http://padlock.inthesetimes.com/article/19891/middle-class-white-welfare-new-deal" }
Current president of Nicaragua (1984 - 1990, 2007 - present), Daniel Ortega was a revolutionary leader in charge of the FSLN. As a member of the Sandinista Junta, he was looked upon as an icon for the poor against the Contras. He was captured in 1967, but later got out of prison at the end of the war. He is on his third presidential term, which caused a big controversy, but he still has massive support from the people of Nicaragua, especially the poor. The themes of history that fit Ortega best would be Change and Overthrow. Change, because while Nicaragua had been run by several military dynasties before, a revolution that began in the 1960s changed it to a presidential republic democracy. Overthrow, because the Sandinistas took power from the Somoza regime in 1979. Although an anti-Sandinista party later won an election, leading to the first female Nicaraguan president, Ortega came back into power.

"Daniel Ortega." Britannica School, 2017. Encyclopaedia Britannica Online School Edition. Accessed 20 Mar. 2017.
<urn:uuid:1892bffc-84cb-41e1-a4fa-1e70ee6233eb>
{ "date": "2017-08-21T06:25:09", "dump": "CC-MAIN-2017-34", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107720.63/warc/CC-MAIN-20170821060924-20170821080924-00576.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9450108408927917, "score": 2.84375, "token_count": 232, "url": "https://spark.adobe.com/page/P4GyfqbjPQZr3/" }
The orb-web spider Nephilengys malabarensis is into rough sex. So rough that, in order to avoid being eaten alive (as shown in the photo), the male will often voluntarily break off his whole sex organ, or palp, while it's still lodged in the female's abdomen (red box in photo), living out the rest of his life as a eunuch. Now a group of researchers think they know why evolution has allowed this dead-end dad to survive. They collected 25 pairs of spiders and introduced them to one another. After each pair had mated and the male's palp was left in the female, the researchers dissected the female and counted the sperm in her abdomen and the amount remaining in the embedded palp. That organ, they report online today in Biology Letters, continues to transfer sperm into the female long after the male has fled or been consumed. The longer it's embedded, the more sperm it transfers, and it's even more efficient when the male breaks it off himself to run away, rather than letting the female do it while eating him. So for the male, it's a sacrifice worth making.
<urn:uuid:3fd1963e-8079-47ed-aa6e-cef92b59a8dd>
{ "date": "2015-04-01T01:10:18", "dump": "CC-MAIN-2015-14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131302428.75/warc/CC-MAIN-20150323172142-00270-ip-10-168-14-71.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9742082953453064, "score": 3.078125, "token_count": 239, "url": "http://news.sciencemag.org/2012/01/scienceshot-cowardly-spider-can-inseminate-female-afar?rss=1" }
The Truth About Common Core: Why Your Anger Is Misdirected

If you search the Internet for “Common Core,” you will get hundreds of hits about people who are angry about the “poison” that is the Common Core. They are all up in arms about how difficult materials are or how everything is focused on testing now. A recurring complaint I hear is how Common Core math is so hard for kids (and parents!) to understand. It’s hard to ignore all the anger and frustration because it’s all over Facebook, Twitter, YouTube, and blogs. I totally understand this anger. I watched the mom from Arkansas tremble with rage as she addressed the school board. And I’m not saying her distress is without merit. I understand the math problem she used as an example, and I understand that the 100+ steps (I believe the 90 hash marks counted as “steps”) were there to show the process and to help students understand division rather than blindly perform it. While I get that, I can still see why she was mad. She was concerned about the time this was taking to do in class instead of moving forward with other things. She thinks -- and maybe correctly (but that is hard to base on just this one math problem example) -- that her kids are being short-changed, that her kids are not learning “the basics.” And she blames the Common Core for this tragedy. The problem? Her anger -- and that of most of America -- is misinformed and misdirected. The problem isn’t really with the Common Core; it’s with how the Common Core is being implemented in states/districts. I have said it a million times before: Standards and curriculum are not the same thing. A standard is a requirement or a level of quality. In school, it is the expectation that will be met. Common Core State Standards (CCSS) aside, there have always been standards in education -- certain levels of achievement that students are expected to reach by each grade level. The standard that Arkansas Mom is talking about for Fourth Grade math is most likely this one: CCSS.Math.Content.4.OA.A.2 Multiply or divide to solve word problems involving multiplicative comparison, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem, distinguishing multiplicative comparison from additive comparison. In my experience* unpacking the CCSS, I have noticed that they fall under two different types: concept knowledge and procedure performance. An example of concept knowledge includes things like the fourth grade standard that students will “know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec.” (CCSS.Math.Content.4.MD.A.1) A procedure performance would be like the one the Arkansas Mom was referring to: the ability to multiply or divide. One standard asks the students to have knowledge of something; the other asks students to be able to do some sort of procedure. (For a list of all the CCSS for all grades, go here)
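To make the division discussion above concrete, here is a minimal, purely illustrative sketch in Python. The original worksheet problem is not reproduced in this post, so the numbers and the repeated-subtraction method below are assumptions used only to contrast "showing the process" with the memorized shortcut.

```python
# Illustrative only: the actual worksheet problem and required method are not
# shown in the post, so 96 / 4 and repeated subtraction are assumed examples.

def divide_by_repeated_subtraction(dividend, divisor):
    """Model division as repeated subtraction, counting each visible step."""
    quotient, remainder, steps = 0, dividend, 0
    while remainder >= divisor:
        remainder -= divisor   # one "hash mark" of recorded work
        quotient += 1
        steps += 1
    return quotient, remainder, steps

print(divide_by_repeated_subtraction(96, 4))  # (24, 0, 24): 24 visible steps
print(divmod(96, 4))                          # (24, 0): the memorized shortcut
```

The point of the "long" method is the visible step count, not the answer; the shortcut hides the reasoning the standard is trying to surface.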
A curriculum is an all-encompassing entity that has the standards, materials needed, and processes for implementation included. Curriculum and Standards ARE NOT SYNONYMOUS. Standards are just PART of the curriculum -- the driving force -- but not the whole thing. Parents (and educators) complain that students are now doing more testing and that the processes students are to follow to solve problems are a mess. They complain about the curriculum and call it Common Core. This is where the misdirection of anger occurs. I would dare to bet that the Arkansas Mom is not angry that her child needs to “multiply and divide to solve word problems” in fourth grade; she is angry at the process the teacher/district/state has put in place to teach her child that standard. That is not the fault of the standard. The CCSS do not dictate how to implement the standards. Let me repeat that: THE CCSS DO NOT DICTATE HOW TO IMPLEMENT THE STANDARDS. Another argument against the CCSS is that students are not learning the basics anymore. This is, again, false. Instead of just memorizing rote multiplication tables (which, let's face it, only works for some people; memorizing was not my bag and I still don’t remember them all -- and I am a graduate-degree-holding professional educator), they are being taught to understand the concept of what multiplication actually is. This makes some parents angry because they simply don’t understand the concept themselves. Asking our kids to learn to think rather than memorize is not a bad thing. The CCSS are based on higher-level thinking -- more complex thinking -- based on ideas like Bloom’s Taxonomy (see below). The bottom of the pyramid contains the most basic thinking skills. The idea of the CCSS is to push students from rote memorization into the highest levels of analysis, evaluation, and creation. Colleges and careers need students to be ready to think beyond just memorized facts. They need students to be problem solvers, problem/solution analyzers, and creators. Before you go before your school board or your legislature, do your research. Read the standards for your child’s grade and decide with whom your gripe really lies. If you are angry about how your child is learning, demand information on how the curriculum was chosen. Volunteer to be on committees that help choose texts and curricula for your school. While doing my pre-writing for this post, I talked to our high school math department head and our district’s superintendent about our K-12 math curriculum. Our district uses elements from Scott Foresman and Singapore math along with a bunch of supplements because it’s been our elementary curriculum for years. We will soon re-evaluate our curriculum once we see the Smarter Balance Test (the one that aligns with the CCSS and will take the place of our current Michigan Merit Exam). Currently, ALL teachers at all levels (K-12) in my district are working hard to gear math (and other subjects) more toward process/project-based learning to align more easily to the type of thinking the CCSS asks of our students. And we are very proud of the results we are getting. It’s easy to look at our children’s homework and become frustrated and blame something like the CCSS -- which are the new element. It’s easy to rant and vent all over social media. But it’s important to be informed. Do your research. Read the standards. Then decide what it is you are really angry about.

*For those of you new to Sluiter Nation, I am a high school and college adjunct ENGLISH teacher. I am not a math teacher.
But I am a parent, and as a parent it is my duty to be informed about ALL of the CCSS.

Besides being a high school English teacher and college English Adjunct Professor, Katie Sluiter has also appeared as an elite blogger for US BabyHuddle, a featured writer on Borderless News and Views, and in syndication on BlogHer. She also currently works as a freelance journalist for iAquire. Her writing has appeared in the May 2013 issue of Baby Talk Magazine and in the book, Three Minus One (to be released in May 2014).
<urn:uuid:9b44c051-3e43-4bd2-a921-3913364c39f6>
{ "date": "2016-05-03T21:35:57", "dump": "CC-MAIN-2016-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121776.48/warc/CC-MAIN-20160428161521-00068-ip-10-239-7-51.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9559813737869263, "score": 2.734375, "token_count": 1679, "url": "http://www.blogher.com/misdirected-anger" }
California’s native fish face extinction UC DAVIS (US) — Salmon and other native freshwater fish in California will likely go extinct within the next century due to climate change if current trends continue. The study, published online in May in the journal PLOS ONE, assessed how vulnerable each freshwater species in California is to climate change and estimated the likelihood that those species would become extinct in 100 years. The researchers from the Center for Watershed Sciences at the University of California, Davis, found that, of 121 native fish species, 82 percent are likely to be driven to extinction or very low numbers as climate change speeds the decline of already depleted populations. In contrast, only 19 percent of the 50 non-native fish species in the state face a similar risk of extinction. Delta smelt are fifth on the list of native California fish most likely to become extinct in the state within 100 years due to climate change. (Credit: USFWS Pacific Southwest Region/Flickr) “If present trends continue, much of the unique California fish fauna will disappear and be replaced by alien fishes, such as carp, largemouth bass, fathead minnows, and green sunfish,” says Peter Moyle, a professor of fish biology who has been documenting the biology and status of California fish for the past 40 years. “Disappearing fish will include not only obscure species of minnows, suckers, and pupfishes, but also coho salmon, most runs of steelhead trout, and Chinook salmon, and Sacramento perch,” Moyle says. Fish requiring cold water, such as salmon and trout, are particularly likely to go extinct, the study says. However, non-native fish species are expected to thrive, although some will lose their aquatic habitats during severe droughts and low-flow summer months. The top 20 native California fish most likely to become extinct in California within 100 years as the result of climate change include (asterisks denote a species already listed as threatened or endangered): - Klamath Mountains Province summer steelhead - McCloud River redband trout - Unarmored threespine stickleback* - Shay Creek stickleback - Delta smelt* - Long Valley speckled dace - Central Valley late fall Chinook salmon - Kern River rainbow trout - Shoshone pupfish - Razorback sucker* - Upper Klamath-Trinity spring Chinook salmon - Southern steelhead* - Clear Lake hitch - Owens speckled dace - Northern California coast summer steelhead - Amargosa Canyon speckled dace - Central coast coho salmon* - Southern Oregon Northern California coast coho salmon* - Modoc sucker* - Pink salmon The species are listed in order of vulnerability to extinction, with number one being the most vulnerable. Climate change and human-caused degradation of aquatic habitats is causing worldwide declines in freshwater fishes, especially in regions with arid or Mediterranean climates, the study says. These declines pose a major conservation challenge. However, there has been little research in the scientific literature related to the status of most fish species, particularly native ones of little economic value. Moyle saw the need for a rapid and repeatable method to determine the climate change vulnerability of different species. He expects the method presented in the study to be useful for conservation planning. “These fish are part of the endemic flora and fauna that makes California such a special place,” says Moyle. 
“As we lose these fishes, we lose their environments and are much poorer for it.” The California Energy Resources Conservation and Development Commission Instream Flow Assessment Program funded the study. Source: UC Davis You are free to share this article under the Creative Commons Attribution-NoDerivs 3.0 Unported license.
<urn:uuid:6d206c1e-46d3-4bc2-a494-cf4178e11a09>
{ "date": "2014-11-23T19:48:23", "dump": "CC-MAIN-2014-49", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379916.51/warc/CC-MAIN-20141119123259-00224-ip-10-235-23-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9224474430084229, "score": 3.3125, "token_count": 802, "url": "http://www.futurity.org/californias-native-fish-face-extinction/" }
From George Herbert Mead, Mind, Self, and Society. Chicago: University of Chicago Press, 1934.

The self is not so much a substance as a process in which the conversation of gestures has been internalized within an organic form. This process does not exist for itself, but is simply a phase of the whole social organization of which the individual is a part. The organization of the social act has been imported into the organism and becomes then the mind of the individual. It still includes the attitudes of others, but now highly organized, so that they become what we call social attitudes rather than roles of separate individuals. This process of relating one's own organism to the others in the interactions that are going on, in so far as it is imported into the conduct of the individual with the conversation of the "I" and the "me," constitutes the self. The value of this importation of the conversation of gestures into the conduct of the individual lies in the superior co-ordination gained for society as a whole, and in the increased efficiency of the individual as a member of the group. It is the difference between the process which can take place in a group of rats or ants or bees, and that which can take place in a human community. The social process with its various implications is actually taken up into the experience of the individual so that that which is going on takes place more effectively, because in a certain sense it has been rehearsed in the individual. He not only plays his part better under those conditions but he also reacts back on the organization of which he is a part.

The very nature of this conversation of gestures requires that the attitude of the other is changed through the attitude of the individual to the other's stimulus. In the conversation of gestures of the lower forms the play back and forth is noticeable, since the individual not only adjusts himself to the attitude of others, but also changes the attitudes of the others. The reaction of the individual in this conversation of gestures is one that in some degree is continually modifying the social process itself. It is this modification of the process which is of greatest interest in the experience of the individual. He takes the attitude of the other toward his own stimulus, and in taking that he finds it modified in that his response becomes a different one, and leads in turn to further changes.

Fundamental attitudes are presumably those that are only changed gradually, and no one individual can reorganize the whole society; but one is continually affecting society by his own attitude because he does bring up the attitude of the group toward himself, responds to it, and through that response changes the attitude of the group. This is, of course, what we are constantly doing in our imagination, in our thought; we are utilizing our own attitude to bring about a different situation in the community of which we are a part; we are exerting ourselves, bringing forward our own opinion, criticizing the attitudes of others, and approving or disapproving. But we can do that only in so far as we can call out in ourselves the response of the community; we only have ideas in so far as we are able to take the attitude of the community and then respond to it.
I have been presenting the self and the mind in terms of a social process, as the importation of the conversation of gestures into the conduct of the individual organism, so that the individual organism takes these organized attitudes of the others called out by its own attitude, in the form of its gestures, and in reacting to that response calls out other organized attitudes in the others in the community to which the individual belongs. This process can be characterized in a certain sense in terms of the "I" and the "me," the "me" being that group of organized attitudes to which the individual responds as an "I."

What I want particularly to emphasize is the temporal and logical pre-existence of the social process to the self-conscious individual that arises in it. The conversation of gestures is a part of the social process which is going on. It is not something that the individual alone makes possible. What the development of language, especially the significant symbol, has rendered possible is just the taking over of this external social situation into the conduct of the individual himself. There follows from this the enormous development which belongs to human society, the possibility of the prevision of what is going to take place in the response of other individuals, and a preliminary adjustment to this by the individual. These, in turn, produce a different social situation which is again reflected in what I have termed the "me," so that the individual himself takes a different attitude.

Consider a politician or a statesman putting through some project in which he has the attitude of the community in himself. He knows how the community reacts to this proposal. He reacts to this expression of the community in his own experience--he feels with it. He has a set of organized attitudes which are those of the community. His own contribution, the "I" in this case, is a project of reorganization, a project which he brings forward to the community as it is reflected in himself. He himself changes, of course, in so far as he brings this project forward and makes it a political issue. There has now arisen a new social situation as a result of the project which he is presenting. The whole procedure takes place in his own experience as well as in the general experience of the community. He is successful to the degree that the final "me" reflects the attitude of all in the community. What I am pointing out is that what occurs takes place not simply in his own mind, but rather that his mind is the expression in his own conduct of this social situation, this great co-operative community process which is going on.

I want to avoid the implication that the individual is taking something that is objective and making it subjective. There is an actual process of living together on the part of all members of the community which takes place by means of gestures. The gestures are certain stages in the co-operative activities which mediate the whole process. Now, all that has taken place in the appearance of the mind is that this process has been in some degree taken over into the conduct of the particular individual. There is a certain symbol, such as the policeman uses when he directs traffic. That is something that is out there. It does not become subjective when the engineer, who is engaged by the city to examine its traffic regulations, takes the same attitude the policeman takes with reference to traffic, and takes the attitude also of the drivers of machines.
We do imply that he has the driver's organization; he knows that stopping means slowing down, putting on the brakes. There is a definite set of parts of his organism so trained that under certain circumstances he brings the machine to a stop. The raising of the policeman's hand is the gesture which calls out the various acts by means of which the machine is checked. Those various acts are in the expert's own organization; he can take the attitude of both the policeman and the driver. Only in this sense has the social process been made "subjective." If the expert just did it as a child does, it would be play; but if it is done for the actual regulation of traffic, then there is the operation of what we term mind. Mind is nothing but the importation of this external process into the conduct of the individual so as to meet the problems that arise.

This peculiar organization arises out of a social process that is logically its antecedent. A community within which the organism acts in such a co-operative fashion that the action of one is the stimulus to the other to respond, and so on, is the antecedent of the peculiar type of organization we term a mind, or a self. Take the simple family relation, where there is the male and the female and the child which has to be cared for. Here is a process which can only go on through interactions within this group. It cannot be said that the individuals come first and the community later, for the individuals arise in the very process itself, just as much as the human body or any multi-cellular form is one in which differentiated cells arise. There has to be a life-process going on in order to have the differentiated cells; in the same way there has to be a social process going on in order that there may be individuals. It is just as true in society as it is in the physiological situation that there could not be the individual if there was not the process of which he is a part. Given such a social process, there is the possibility of human intelligence when this social process, in terms of the conversation of gestures, is taken over into the conduct of the individual--and then there arises, of course, a different type of individual in terms of the responses now possible. There might conceivably be an individual who simply plays as the child does, without getting into a social game; but the human individual is possible because there is a social process in which it can function responsibly. The attitudes are parts of the social reaction; the cries would not maintain themselves as vocal gestures unless they did call out certain responses in the others; the attitude itself could only exist as such in this interplay of gestures.

The mind is simply the interplay of such gestures in the form of significant symbols. We must remember that the gesture is there only in its relationship to the response, to the attitude. One would not have words unless there were such responses. Language would never have arisen as a set of bare arbitrary terms which were attached to certain stimuli. Words have arisen out of a social interrelationship. One of Gulliver's tales was of a community in which a machine was created into which the letters of the alphabet could be mechanically fed in an endless number of combinations, and then the members of the community gathered around to see how the letters arranged after each rotation, on the theory that they might come in the form of an Iliad or one of Shakespeare's plays, or some other great work.
The assumption back of this would be that symbols are entirely independent of what we term their meaning. The assumption is baseless: there cannot be symbols unless there are responses. There would not be a call for assistance if there was not a tendency to respond to the cry of distress. It is such significant symbols, in the sense of a sub-set of social stimuli initiating a co-operative response, that do in a certain sense constitute our mind, provided that not only the symbol but also the responses are in our own nature. What the human being has succeeded in doing is in organizing the response to a certain symbol which is a part of the social act, so that he takes the attitude of the other person who co-operates with him. It is that which gives him a mind.

The sentinel of a herd is that member of the herd which is more sensitive to odor or sound than the others. At the approach of danger, he starts to run earlier than the others, who then follow along, in virtue of a herding tendency to run together. There is a social stimulus, a gesture, if you like, to which the other forms respond. The first form gets the odor earlier and starts to run, and its starting to run is a stimulus to the others to run also. It is all external; there is no mental process involved. The sentinel does not regard itself as the individual who is to give a signal; it just runs at a certain moment and so starts the others to run. But with a mind, the animal that gives the signal also takes the attitude of the others who respond to it. He knows what his signal means. A man who calls "fire" would be able to call out in himself the reaction he calls out in the other. In so far as the man can take the attitude of the other--his attitude of response to fire, his sense of terror--that response to his own cry is something that makes of his conduct a mental affair, as over against the conduct of the others.

But the only thing that has happened here is that what takes place externally in the herd has been imported into the conduct of the man. There is the same signal and the same tendency to respond, but the man not only can give the signal but also can arouse in himself the attitude of the terrified escape, and through calling that out he can come back upon his own tendency to call out and can check it. He can react upon himself in taking the organized attitude of the whole group in trying to escape from danger. There is nothing more subjective about it than that the response to his own stimulus can be found in his own conduct, and that he can utilize the conversation of gestures that takes place to determine his own conduct. If he can so act, he can set up a rational control, and thus make possible a far more highly organized society than otherwise. The process is one which does not utilize a man endowed with a consciousness where there was no consciousness before, but rather an individual who takes over the whole social process into his own conduct. That ability, of course, is dependent first of all on the symbol being one to which he can respond; and so far as we know, the vocal gesture has been the condition for the development of that type of symbol. Whether it can develop without the vocal gesture I cannot tell.

I want to be sure that we see that the content put into the mind is only a development and product of social interaction. It is a development which is of enormous importance, and which leads to complexities and complications of society which go almost beyond our power to trace, but originally it is nothing but the taking over of the attitude of the other.
To the extent that the animal can take the attitude of the other and utilize that attitude for the control of his own conduct, we have what is termed mind; and that is the only apparatus involved in the appearance of the mind.

I know of no way in which intelligence or mind could arise or could have arisen, other than through the internalization by the individual of social processes of experience and behavior, that is, through this internalization of the conversation of significant gestures, as made possible by the individual's taking the attitudes of other individuals toward himself and toward what is being thought about. And if mind or thought has arisen in this way, then there neither can be nor could have been any mind or thought without language; and the early stages of the development of language must have been prior to the development of mind or thought.

1. According to this view, conscious communication develops out of unconscious communication within the social process, conversation in terms of significant gestures out of conversation in terms of non-significant gestures; and the development in such fashion of conscious communication is coincident with the development of minds and selves within the social process.

2. The relation of mind and body is that lying between the organization of the self in its behavior as a member of a rational community and the bodily organism as a physical thing.

The rational attitude which characterizes the human being is then the relationship of the whole process in which the individual is engaged to himself as reflected in his assumption of the organized roles of the others in stimulating himself to his response. This self as distinguished from the others lies within the field of communication, and they lie also within this field. What may be indicated to others or one's self and does not respond to such gestures of indication is, in the field of perception, what we call a physical thing. The human body is, especially in its analysis, regarded as a physical thing.

The line of demarcation between the self and the body is found, then, first of all in the social organization of the act within which the self arises, in its contrast with the activity of the physiological organism (MS).

The legitimate basis of distinction between mind and body is between the social patterns and the patterns of the organism itself. Education must bring the two closely together. We have, as yet, no comprehending category. This does not mean to say that there is anything logically against it; it is merely a lack of our apparatus or knowledge (1927).

3. Language as made up of significant symbols is what we mean by mind. The content of our minds is (1) inner conversation, the importation of conversation from the social group to the individual (2) . . . . imagery. Imagery should be regarded in relation to the behavior in which it functions (1931).

Imagery plays just the part in the act that hunger does in the food process (1912).

1. Mead's major articles can be found in: Andrew J. Reck (ed.), Selected Writings: George Herbert Mead (Indianapolis: Bobbs-Merrill, 1964).

2. The volumes were: The Philosophy of the Present (1932); Mind, Self, and Society (1934); Movements of Thought in the Nineteenth Century (1936); and The Philosophy of the Act (1938). An excellent brief introduction to Mead's social psychology can be found in an edited abridgement of his works: Anselm Strauss (ed.), The Social Psychology of George Herbert Mead (Chicago: University of Chicago Press, 1956).
The major critical work dealing with Mead's position is: Maurice Natanson, The Social Dynamics of George H. Mead (Washington, D.C.: Public Affairs Press, 1956).

3. Several varieties of Symbolic Interactionism exist today; cf., Manford Kuhn, "Major Trends in Symbolic Interaction Theory," Sociological Quarterly, 5 (1964), 61-84; and Bernard Meltzer and John W. Petras, "The Chicago and Iowa Schools of Symbolic Interactionism," in T. Shibutani (ed.), Human Nature and Collective Behavior: Papers in Honor of Herbert Blumer (Englewood Cliffs, N.J.: Prentice-Hall, 1970). The best known variety of symbolic interactionism today is represented by the position of Mead's student Herbert Blumer; cf., Herbert Blumer, "Sociological Implications of the Thought of George Herbert Mead," American Journal of Sociology, 71 (1966), 534-544; and Herbert Blumer, Symbolic Interactionism: Perspective and Method (Englewood Cliffs, N.J.: Prentice-Hall, 1969). For a variety of studies done by members of this school, see: Arnold Rose (ed.), Human Behavior and Social Processes (Boston: Houghton Mifflin, 1962); J. G. Manis and B. N. Meltzer (eds.), Symbolic Interaction: A Reader in Social Psychology (Boston: Allyn and Bacon, 1967); and Gregory P. Stone (ed.), Social Psychology through Symbolic Interaction (Waltham, Mass.: Ginn-Blaisdell, 1970). Numerous modern theoretical approaches also owe a great debt to the work of Mead, for example, Walter Coutu, Emergent Human Nature: A New Social Psychology (New York: Knopf, 1949).
<urn:uuid:7c1097d8-f3a0-4e6b-902f-c186a49fd260>
{ "date": "2017-06-27T10:36:38", "dump": "CC-MAIN-2017-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321309.25/warc/CC-MAIN-20170627101436-20170627121436-00257.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9592152833938599, "score": 2.640625, "token_count": 3983, "url": "http://media.pfeiffer.edu/lridener/courses/MINDSELF.HTML" }
Long Island, New York, beaches are important breeding grounds for piping plovers, a species listed as federally threatened. The Atlantic Coast population consists of only about 800 breeding pairs and 200 of them nest in New York. I have been photographing the plovers on a Long Island Sound beach near my home for seven years. I have captured their entire breeding cycle from arrival in mid-March to mating, scrape building, brooding, hatching, early flight practice, feeding and departure in the late summer and early fall. One evening on the beach, a mother and her child approached me to get a better look at what I was photographing. I pointed out the plovers and their scrape (nest). They were taken aback and responded that they thought that birds nested in trees. It had never occurred to me that some people don’t realize why there are signs to stay outside the roped-off areas or keep their dogs off the beach. They just do not know they could be putting a species in harm’s way.
<urn:uuid:e75e28a8-de7a-4c4b-b516-838e4cdd13d5>
{ "date": "2020-01-21T23:19:55", "dump": "CC-MAIN-2020-05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9781395792961121, "score": 2.546875, "token_count": 215, "url": "http://www.nanpa.org/tag/grace-scalzo/" }
A lifetime’s propensity to some forms of nervous exhaustion, made much worse by mercury poisoning in 1851, resulted in Pugin’s decline into madness. He was admitted first to a private mental hospital, and then, when he showed but little improvement, to Bethlem Royal Hospital in London in June 1852. His wife took him home to Ramsgate on the Isle of Thanet in East Kent in early September, and he died there on September 14. He was buried in the church he had designed next to his house in 1844, Saint Augustine’s Abbey, in a tomb ornamented with sculptures of members of his family. But what of Pugin as an architect, not least of many Catholic churches, Saint Giles’ among them? Between 1832 and 1834 he made several trips abroad to study medieval buildings. Beginning in 1835 he collaborated with Sir Charles Barry (1795–1860) on designs for Barry’s best-known commission, the New Palace of Westminster (Houses of Parliament), where Pugin was responsible for nearly all the lavish interior decoration.1 Soon after, he wrote the first of many books, Gothic Furniture in the Style of the Fifteenth Century, which was published in 1835. Pugin subsequently wrote on subjects as varied as designs for gold and silver, and iron and brass. His book Contrasts, or A Parallel Between the Noble Edifices of the Middle Ages, which he self-published in 1836, made his reputation and heralded the revival of the Catholic style of the Middle Ages. With the authority and confidence of those born to wealth, the incorruptible Menil rejected the commercialism that has spread from the art market and infected once-sacrosanct institutions like a plague. She disdained the trendy and superficial, championed the arcane and challenging, considered philanthropy used for self-promotion to be not merely vulgar but immoral, and felt that museums that stooped to anything to attract ever-larger audiences betrayed a sacred trust. “Art is what lifts us above daily life,” she wrote, in the most succinct summary of her aesthetic philosophy. “It makes us more open, more human, more refined, and even more intelligent.”* If Menil had one contemporary rival as America’s premier postwar Maecenas, it was her fellow connoisseur-collector and museum builder Paul Mellon (1907–1999). Both these formidable figures possessed an uncanny “eye” and instinct for the best, though each had quite different tastes (his aristocratic and Anglophile, hers austere and global) and diametric temperaments (she spiritual and religious, he psychoanalytical and secular). Last year, the hundredth anniversary of Mellon’s birth was celebrated by commemorative exhibitions at the several museums and galleries he founded and enriched. There will be no such formal observances in honor of Menil, which is wholly appropriate because she seemed to exist outside normal notions of time. Those who knew her invariably describe her as otherworldly. On the few occasions Menil and I met, she was gracious and comme il faut to a fault, but gave the distinct impression of being somewhere else, as if listening to Gregorian chants only she could hear. Nonetheless, she was more acutely attuned to the creative forces of her times than almost anyone.
<urn:uuid:22f1d56d-0a81-4a2c-9654-a74c3450c668>
{ "date": "2015-07-29T22:02:55", "dump": "CC-MAIN-2015-32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00196-ip-10-236-191-2.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.977333664894104, "score": 2.578125, "token_count": 699, "url": "http://www.themagazineantiques.com/articles/the-real-menil/3/" }
Description from Flora of China Herbs perennial, glabrous, with fibrous roots. Stems simple or several branched. Leaves basal, or both basal and cauline, sometimes distal cauline ones palmately lobed, orbicular, reniform, or ovate, base cordate, margin dentate or entire; petioles sheathed at base. Flower solitary, terminal, or 2 or more in a simple or complex monochasium opening nearly flat. Sepals 5 or more, petaloid, yellow, rarely white or red, obovate or elliptic, caducous. Petals absent. Stamens numerous; anthers elliptic to oblong; filaments linear. Follicles 5--40, sessile, sometimes stipitate, with branching transverse veins, styles distinct or nearly absent; ovules several to many. Seeds several in a follicle, ellipsoid-globose, smooth. About 15 species: temperate and cold-temperate regions of N and S hemispheres; four species (one endemic) in China. (Authors: Li Liangqian; Michio Tamura)
<urn:uuid:b1c9df58-5526-415a-8b02-474befe01356>
{ "date": "2013-12-11T05:38:32", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164031957/warc/CC-MAIN-20131204133351-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8147138953208923, "score": 2.796875, "token_count": 245, "url": "http://www.efloras.org/florataxon.aspx?flora_id=601&taxon_id=105256" }
by William Alley (historian and certified archivist) Published in Southern Oregon Heritage Today, Nov 2002. 4:11 When Frank L. KNIGHT decided to expand his Portland-based pickle and vinegar packinghouse to include the manufacture of catsup, he spent two years studying the processes involved and seeking a suitable location. Ultimately Knight accepted the recommendations of the experts at Oregon Agricultural College in Corvallis and selected Medford as the location of his new catsup plant. "The Rogue River Valley," Knight was told, "produces a tomato that is particularly adapted to catsup manufacture." By locating his plant in Medford, Knight could turn the tomatoes into catsup within a few hours of their being picked; "That is a mighty important factor in making high grade catsup." The Knight Packing Company opened its Medford plant on the south end of Front Street in the summer of 1916. Initial production capacity was estimated at fifteen tons of tomatoes per day, with room to expand to thirty tons in the future. By 1925, the plant was processing thirty-five tons of produce per day, the equivalent of 2,750 gallons of catsup. After harvesting, Rogue Valley tomatoes were delivered to the plant where they were washed in large tanks. They were then scooped onto a conveyor belt, passing by employees who trimmed the tomatoes and removed any defective ones. The fruit was then washed again and steamed before being dropped into the chopper, which separated the seeds and skin; the remaining pulp was then sent to large kettles where it was cooked with onions, garlic, and spices. After cooking, vinegar, salt and sugar were added to make the finished product. The catsup was then packed into five-gallon cans and shipped to Knight's Portland facility where it was bottled in sixteen-ounce bottles. Knight's Rogue River Catsup, the only catsup manufactured in Oregon, was an immediate success. At the end of the first eight years, the company could boast a 75 percent share of the Portland catsup market, and distribution had expanded to include parts of Washington, Idaho, and California. The company even went so far as to copyright the name "Rogue River" in connection with any tomato-based product. The presence of the Knight catsup plant had an immediate impact locally. In addition to providing a significant payroll, the plant spurred an increase in the acreage devoted to commercial tomato cultivation. In 1924, the plant's production capacity was doubled to take advantage of the increased availability. The plant was again expanded in 1936 with the arrival of new, modernized equipment. No longer did the catsup need to be shipped in bulk to Portland for bottling. With the new equipment, the Knight Medford plant was now producing catsup at a rate of fifteen bottles per minute. Knight's Rogue River Catsup flourished in Medford for twenty-five years, but the end came suddenly in the early 1940s. By 1942, the Knight Packing Company had disappeared from the local directories. The October 1925 issue of The Volt, the newsletter of the California-Oregon Power Company, now preserved in the collections of the Southern Oregon Historical Society, gives us a brief snapshot of a now-forgotten local industry.
<urn:uuid:461e3e3d-e46f-4594-89c0-f74885c67204>
{ "date": "2014-07-25T23:02:47", "dump": "CC-MAIN-2014-23", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894865.50/warc/CC-MAIN-20140722025814-00048-ip-10-33-131-23.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.974003255367279, "score": 3.125, "token_count": 653, "url": "http://thefamilyorchard.blogspot.com/2010/12/rogue-river-catsup.html" }
Concentrated solar power is technology that utilizes reflectors to focus the sunlight hitting a large area onto a smaller area. Primarily this is done to drive heat engines, but concentrators also exist that focus sunlight on photovoltaic panels. Concentrated solar power (CSP) systems are primarily commercial power plant type installations, although some small scale systems have been created by hobbyists. One of the earliest examples of CSP is the story of Archimedes' mirror; this technology was supposedly used as a weapon to ignite ships, but the account is doubted by historians. The first confirmed use of CSP was in 1866 by Auguste Mouchout, who used a parabolic trough system to power a steam engine.
Solar Stirling Engine
Solar Stirling engines make use of a parabolic dish that reflects light onto a receiver mounted at the focal point of the dish. This receiver is typically a cylinder that contains a gaseous working fluid such as air, helium, or hydrogen. When the receiver is heated by the focused sunlight, the working fluid contained within is also heated, causing it to expand. Expansion of the working fluid drives a piston that simultaneously drives a flywheel and also presses the fluid away from the heated side of the receiver. Removed from the heat, the fluid begins to cool and contracts, driving the piston the other way and exposing the fluid to the heat again, where the cycle continues. Common configurations include dual-piston and single-piston setups, both of which drive a flywheel connected to a generator. Typically solar Stirling engines will have a tracking system that keeps the parabolic reflector properly positioned in relation to the Sun; such Sun-tracking mounts are known as heliostats.
Solar towers utilize an array of Sun-tracking reflectors to concentrate light onto a single receiver mounted on a tower. Within this receiver is a working fluid (commonly molten salt) that can reach as high as 1,800 degrees Fahrenheit. Typically this working fluid will be run through pipes in contact with another pipe containing water. Heat transfers from the working fluid to the water, causing it to vaporize and become steam, which is then used to power steam turbines generating electricity. Solar tower CSP systems are very efficient and offer better energy storage than other solar power technologies, sometimes being able to generate electricity 24 hours a day. Torresol’s 19.9 MW concentrating solar power plant became the first to accomplish this goal in July of 2011.
Concentrated photovoltaics (CPV) use reflectors or lenses to concentrate sunlight from an area larger than a photovoltaic cell onto the cell in order to increase electrical generation. This method allows for less expensive photovoltaic energy to be produced by increasing the efficiency of a given solar cell. Unfortunately, solar cell efficiency also decreases as temperature rises, so it becomes a challenge to control temperature to ensure a net gain in efficiency. Another problem is that the reflectors require heliostats to track the Sun in order to maintain focus on the solar cell, which can consume more power and increase the cost of the system, eliminating gains from concentration technology. In spite of these drawbacks CPV technology can be advantageous in some situations. Below is a video showcasing very small, high-efficiency solar cells in combination with reflectors to produce cheaper high-efficiency solar panels. Parabolic trough CSP technology can be thought of as a distributed version of solar power towers.
Instead of concentrating light onto a tower, linear parabolic reflectors (imagine a hollow half circle) focus light onto a pipe that is suspended along the focal point of the reflectors. Inside this pipe is a working fluid (commonly molten salt) that is heated by the concentrated sunlight to temperatures reaching 680 degrees Fahrenheit. Many rows of parabolic troughs are commonly used, and the working fluid travels through the pipes back to a central location. There it passes next to pipes containing water, which is heated and vaporizes, creating steam to power steam turbines for the generation of electricity. Fresnel reflectors operate similarly to parabolic troughs but use multiple rows of reflectors to concentrate light onto a single pipe system. Fresnel reflectors are cheaper to produce than parabolic troughs but commonly aren't as efficient. While this is a newer CSP technology, it is quickly becoming popular.
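The arithmetic common to all of these approaches is simple: a large mirror area funnels sunlight onto a small receiver, and a heat engine converts some fraction of that heat into electricity. The short Python sketch below is only a back-of-envelope illustration of that idea; the irradiance, areas and efficiencies in it are assumed round numbers, not figures from any particular plant.

    # Rough back-of-envelope model of a concentrating collector.
    # All numbers below are illustrative assumptions, not data from the article.

    DNI = 850.0               # direct normal irradiance, W per square meter (assumed)
    aperture_area = 90.0      # reflector area of one dish or trough segment, square meters (assumed)
    receiver_area = 0.25      # area of the receiver at the focal point, square meters (assumed)
    optical_efficiency = 0.85 # fraction of intercepted light delivered to the receiver (assumed)

    concentration_ratio = aperture_area / receiver_area
    thermal_power_w = DNI * aperture_area * optical_efficiency

    print(f"Geometric concentration ratio: {concentration_ratio:.0f}x")
    print(f"Heat delivered to receiver: {thermal_power_w / 1000:.1f} kW")

    # A heat engine (for example a Stirling engine or steam turbine) then converts
    # part of this heat to electricity.
    engine_efficiency = 0.30  # assumed net conversion efficiency
    print(f"Approximate electrical output: {thermal_power_w * engine_efficiency / 1000:.1f} kW")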
<urn:uuid:3f309fdc-d4df-44f1-9892-0c6cc7921b30>
{ "date": "2016-10-24T05:17:16", "dump": "CC-MAIN-2016-44", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719468.5/warc/CC-MAIN-20161020183839-00510-ip-10-171-6-4.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9415590167045593, "score": 3.875, "token_count": 845, "url": "http://renewableenergyindex.com/solar/concentrated-solar-power" }
People often react quickly, especially when they find themselves in an uncomfortable or dangerous situation. But do you know which sound we react to fastest? One study found that the sound that triggers the fastest reaction in most people is crying. Scientists from Oxford gave subjects the task of responding to various sounds by pressing a button. In 95 percent of respondents the fastest reaction was to the cries of infants, and the scholars give the reaction an evolutionary interpretation: a baby's cry warns that something is wrong, and every person carries within them the urge to protect their offspring. What the scientists find interesting is that the subjects were played the voices of other people's children, and they still cannot explain why those cries cause such an instant reaction. The cry of a child by far "beat out" adult calls for help and the whimpering of adult animals.
<urn:uuid:a7e5e699-da3c-47c0-b6e1-7d38aba2a416>
{ "date": "2018-11-18T21:50:04", "dump": "CC-MAIN-2018-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744649.77/warc/CC-MAIN-20181118201101-20181118223101-00096.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9726574420928955, "score": 3.140625, "token_count": 158, "url": "http://happyandhealthypregnancy.blogspot.com/2012/05/crying-accelerates-reaction.html" }
Romeo and juliet revenge essay The want for revenge leads many of the characters in romeo and juliet into murderous acts which eventually leads to severe punishments and a further need for revenge the everlasting revenge in romeo and juliet is first born from ancient grudge between the capulets and the montagues, which is ultimately settled with the tragic, abrupt. Tragedy in the play romeo and juliet english literature essay print romeo and juliet dead he still wants to seek revenge and the violence is never. Check out our top free essays on romeo and juliet revenge to help you write your own essay. Get free homework help on william shakespeare's romeo and juliet: play summary, scene summary and analysis and original text, quotes, essays, character analysis, and. Essay writing guide learn the art shakespeare's play 'romeo and juliet' is a good example of a revenge tragedy shakespeare's play 'romeo and juliet' is a. Romeo and juliet: expository essay romeo and juliet, by shakespeare, is a tybalt thinks that they crashed the capulets ball and know he wants revenge. Read romeo & juliet the movie vs william shakespeare’s play free essay and over 88,000 other research documents romeo & juliet the movie vs. We will write a custom essay sample on themes of romeo and juliet revenge romeo and juliet suggests that the desire for revenge is both a natural and a. Is revenge ever justified essay below is an essay on is revenge ever justified for instance in shakespeare’s romeo and juliet. Free romeo and juliet papers, essays, and research papers. The act three scene one of romeo and juliet, a play by william shakespeare pages 2 words 582 most helpful essay resource ever - chris stochs, student @ uc. Romeo and juliet study guide tybalt seeks out romeo and kills mercutio from a half-cooked desire for revenge over romeo's essays for romeo and juliet. An essay or paper on revenge in romeo and juliet in romeo and juliet, william shakespeare brings together the conflicting political and legal ideologies about the. Included: shakespeare essay content preview text: romeo and juliet, by shakespeare, is a play which shows how prejudice leads to escalating violence prejudice leads. Violence and conflict central to romeo and juliet the deaths of romeo and juliet this essay romeo's presence and his promise of revenge. And revenge throughout romeo and juliet william shakespeare's romeo and juliet essay more about theme of hatred and revenge in william shakespeare's romeo. The theme of revenge in romeo and juliet by william shakespeare pages 4 more essays like this: romeo and juliet, william sign up to view the rest of the essay.
<urn:uuid:203f0d10-afa3-4195-be4c-0385672c5d97>
{ "date": "2018-10-18T19:42:56", "dump": "CC-MAIN-2018-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512014.61/warc/CC-MAIN-20181018194005-20181018215505-00216.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8693497776985168, "score": 3.109375, "token_count": 668, "url": "http://ptassignmentgsxk.artsales.biz/romeo-and-juliet-revenge-essay.html" }
In most societies, newly married couples do not establish their own residence but instead become part of an existing household or compound occupied by relatives. Which relatives are favored is culturally prescribed. However, there are a few common patterns around the world including patrilocal , matrilocal , avunculocal , ambilocal , and neolocal residence. In order to understand the rationale for each of them, it is essential to know that the most important determining factor is the specific type of kinship system. Of secondary importance usually are economic concerns and personal factors. Patrilocal residence occurs when a newly married couple establishes their home near or in the groom's father's house. This makes sense in a society that follows patrilineal descent (that is, when descent is measured only from males to their offspring, as in the case of the red people in the diagram below). This is because it allows the groom to remain near his male relatives. Women do not remain in their natal household after marriage with this residence pattern. About 69% of the world's societies follow patrilocal residence, making it the most common. Matrilocal residence occurs when a newly married couple establishes their home near or in the bride's mother's house. This keeps women near their female relatives. Not surprisingly, this residence pattern is associated with matrilineal descent (that is, when descent is measured only from females to their offspring, as in the case of the green people below). Men leave their natal households when they marry. About 13% of the world's societies have matrilocal residence. Avunculocal residence occurs when a newly married couple establishes their home near or in the groom's maternal uncle's house. This is associated with matrilineal descent. It occurs when men obtain statuses, jobs, or prerogatives from their nearest elder matrilineal male relative. Having a woman's son live near her brother allows the older man to more easily teach his nephew what he needs to know in order to assume his matrilineally inherited role. About 4% of the world's societies have avunculocal residence. Ambilocal residence occurs when a newly married couple has the choice of living with or near the groom's or the bride's family. The couple may also live for a while with one set of parents and then move to live with the other. About 9% of the world's societies have ambilocal residence. Neolocal residence occurs when a newly married couple establishes their home independent of both sets of relatives. While only about 5% of the world's societies follow this pattern, it is popular and common in urban North America today largely because it suits the cultural emphasis on independence. However, economic hardship at times makes neolocal residence a difficult goal to achieve, especially for young newlyweds. Elsewhere, neolocal residence is found in societies in which kinship is minimized or economic considerations require moving residence periodically. Employment in large corporations or the military often calls for frequent relocations, making it nearly impossible for extended families to remain together. There are several other rare residence patterns found scattered around the world. These include virilocal , uxorilocal , and natolocal residence. For those who wish to understand them as well, the glossary of this tutorial provides brief explanations. 
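For quick reference, the patterns described above can be collected into a small lookup table. The Python snippet below simply restates the associations and approximate percentages given in the text; it is an illustrative summary, not data from any additional source.

    # Summary lookup of the post-marital residence patterns described above.
    # The associations and percentages are the ones given in the text.

    residence_patterns = {
        "patrilocal":  {"lives_near": "groom's father",         "descent": "patrilineal", "share": 0.69},
        "matrilocal":  {"lives_near": "bride's mother",         "descent": "matrilineal", "share": 0.13},
        "avunculocal": {"lives_near": "groom's maternal uncle", "descent": "matrilineal", "share": 0.04},
        "ambilocal":   {"lives_near": "either family (choice)", "descent": "varies",      "share": 0.09},
        "neolocal":    {"lives_near": "neither family",         "descent": "varies",      "share": 0.05},
    }

    for name, info in residence_patterns.items():
        print(f"{name:12s} couple settles near {info['lives_near']}; "
              f"about {info['share']:.0%} of the world's societies")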
Regardless of the culturally preferred post-marital residence rules, at times there are unique personal circumstances which result in a deviation. In many societies, it is possible also to create a fictive kinship status to allow what would otherwise be unacceptable marriage and residence patterns.

Resident Family Size

Residence rules have a major effect on the form of family that lives together. Neolocality leads to independent households consisting of single nuclear families--that is, a man and a woman with their children (shown in the diagram below). This is a relatively small, two-generation family. All other common residence rules potentially result in the formation of larger family groups. These larger groups are most often in one of three general forms: an extended family, a joint family, or a polygamous family.

Extended families consist of two or more nuclear families linked together by ties of descent (as shown below). They consist of living relatives from three or more generations.

Extended family in Samoa

Members of an extended family household usually share farming, animal herding, and domestic household tasks. Such families can be efficient collective work units. However, each generation, the number of family members tends to get larger, which inevitably puts a severe strain on resources. This results in personal conflicts which cause the extended family and its household to divide into two or more independent families. This dynamic segmentation process usually repeats every few generations.

Joint families consist of two or more relatives of the same generation living together with their respective spouses and children. Polygamous families potentially consist of all spouses and their children. This is difficult to diagram two-dimensionally, particularly when there are three or more wives in the case of polygynous families.

Residence rules and the size of family residential groups often change as the economy changes. In other words, family household type correlates with subsistence base. The following graph summarizes this relationship. Both modern large-scale societies and hunting and gathering societies in marginal environments have a high degree of geographic mobility that is mandated by their economies. In the former case, jobs often require periodic relocation to other parts of the country or the world. Among foragers in harsh environments such as deserts and arctic regions, there is usually a seasonal need to disperse the community when food sources become scarce. Both situations make it difficult for much more than nuclear families to stay together year round. In contrast, big families are economically advantageous among small-scale farmers and pastoralists because larger, permanent labor groups are needed to farm or tend herds of animals.

Despite cultural preferences and the type of subsistence base, there may not be a father in a home due to divorce, death, or his abandonment of the family. As a result, a matricentric, or matrifocal, family household may exist. Such a household consists of a woman, her children, and sometimes her grandchildren as well. Matricentric family households have become common in North America during the late 20th and early 21st centuries. Approximately 70% of African American children are now being raised in such families. There are a smaller but growing number of family households in North America that do not have a mother in residence. These could be referred to as patricentric or patrifocal family households.
This page was last updated on Friday, October 19, 2007. Copyright © 1997-2007 by Dennis O'Neil. All rights reserved.
<urn:uuid:603b872c-af08-4120-8941-ba3d2ad977f2>
{ "date": "2014-11-22T08:35:43", "dump": "CC-MAIN-2014-49", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400376728.38/warc/CC-MAIN-20141119123256-00240-ip-10-235-23-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9679684042930603, "score": 3.359375, "token_count": 1369, "url": "http://anthro.palomar.edu/marriage/marriage_5.htm" }
When exactly should you stop for a school bus? Eric Lai answers readers’ auto questions every week for Wheels. Q: School bus drivers often apply the flashing red lights while they are still in motion. This would cause me to stop, and they would then drive by. Is it the retractable stop sign on the bus that’s the deciding factor on when traffic must stop? Could I be charged if I pass the still-in-motion school bus with flashing red lights? A: Section 175(11,12) HTA states that drivers on a highway approaching, from the front or rear, a stopped school bus with its overhead red signal lights flashing, shall stop at least 20m before reaching the bus and shall not proceed until the bus moves or the overhead lights have stopped flashing. Drivers need not stop if they are on the opposite side of a median from the school bus. As you can see, it’s the overhead lights that are key as there is no mention of the retractable stop sign often used as additional safety equipment on school buses in the section above. While a school bus driver might begin to activate the overhead lights prior to coming to a full stop (likely as a warning to motorists that they are about to stop), the law specifies that traffic in both directions is required to halt for a “stopped” school bus with overhead red lights activated. If a school bus has its overhead lights activated as it slows down, other drivers should best slow down in response and be prepared to stop a safe distance away as the bus comes to a full stop. (Stopping isn’t necessary if a moving bus passes you by in the opposite direction.) Where a school bus activates the overhead lights only after coming to a full stop, other traffic should then stop. However, if you were truly too close to stop safely at the very instant the bus came to a halt with its lights activated – and an observing police officer would agree on this point – you may proceed. Don’t over-think it. It’s the safety of schoolchildren that’s at stake, so simply put, ALWAYS STOP, unless it’s genuinely not possible to do so safely. Information above is of a general nature only and should not be taken as legal advice or opinion.
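Reduced to its bare logic, the rule described above can be written as a short decision procedure. The sketch below is only an illustration of that reading of the section; it is not legal advice, and the function name and parameters are invented for the example.

    # Illustrative restatement of the stopping rule described above (not legal advice).
    def must_stop_for_school_bus(bus_is_stopped: bool,
                                  overhead_red_lights_flashing: bool,
                                  separated_by_median: bool) -> bool:
        """Return True if an approaching driver is required to stop."""
        if separated_by_median:
            # Traffic on the opposite side of a median need not stop.
            return False
        # The overhead lights on a *stopped* bus are the deciding factor,
        # not the retractable stop sign.
        return bus_is_stopped and overhead_red_lights_flashing

    # Example: bus stopped ahead with lights flashing, no median between you and it.
    print(must_stop_for_school_bus(True, True, False))   # True -> stop at least 20 m back
    print(must_stop_for_school_bus(False, True, False))  # False -> slow down, be ready to stop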
<urn:uuid:df27af20-c096-44aa-a87b-a582af142e9d>
{ "date": "2015-03-04T15:13:41", "dump": "CC-MAIN-2015-11", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463606.79/warc/CC-MAIN-20150226074103-00092-ip-10-28-5-156.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9574449062347412, "score": 2.78125, "token_count": 478, "url": "http://www.wheels.ca/news/when-exactly-should-i-stop-for-a-school-bus/" }
8 Lesson 3: Stopping the Bullies Bullying is a bigger problem than a lot of people realize. Bullying is physically and emotionally harmful, and students who are bullies often have problems with the law later in life. Those who are victims of bullies can carry emotional scars forever. On the Web site below you will learn more about bullying and what can be done to stop it. Link to explore: Bully Beware: http://www.bullybeware.com/moreinfo.html - Start at the Bully Beware Web site. - Read through the page on bullying. - Take notes as you read. - When you are done reading, answer the questions - Finally, using the information from the links, create a “Bully Bookmark” with tips for what to do if you are bullied on one side, and tips for what to do if you see someone else being bullied on the other. - What is bullying? - What are the three characteristics of bullying? - What are the four kinds of bullies? - What are the behavior and personality traits of people who bully? - What are three reasons to stop bullying? - Bullying is a series of repeated, intentionally cruel incidents, involving the same children, in the same bully and victim roles. - Bullying : - Occurs between two people who are not friends. - Occurs when there is a power difference between the bully and the victim. - Occurs when the intention of the bully is to put the victim in some kind of distress. - The four kinds of bullies are: - Action-oriented physical bullies. - Verbal-oriented bullies who use words to cause distress. - Relational bullies who try to convince their peers to exclude or reject a certain person. - Reactive bullies who go back and forth between bullies and victims. - People who bully: - Have greater than average aggressive behavior patterns. - Have the desire to dominate peers. - Have the need to feel in control, to win. - Have no sense of remorse for hurting another child. - Refuse to accept responsibility for their behavior. Additional Resources for Teachers - Three reasons for taking action to stop bullying are: - Many bullies become criminals. - Victims of repeated bullying sometimes see suicide as their only escape. - The emotional scars from bullying can last a lifetime. Below are some additional resources on bullying. Try having students brainstorm about ways they can implement an anti-bullying campaign at their school. - Don’t Suffer in Silence: - Cyber Bullying:
<urn:uuid:ee3eb52e-6de4-4dd7-bad8-bdfd52960f39>
{ "date": "2014-10-23T10:06:36", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413510834349.43/warc/CC-MAIN-20141017015354-00359-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9227555990219116, "score": 3.625, "token_count": 551, "url": "http://glencoe.mheducation.com/sites/dl/free/007869762x/373412/th3_swa_ch08_less03.html" }
Publications International, Ltd. Odds are that you know someone with diabetes mellitus, possibly even someone who has to take insulin each day to manage the disease. Diabetes is a growing health problem in the United States and has risen about six-fold since 1950, now affecting approximately 20.8 million Americans. About one-third of those 20.8 million do not know that they have the disease. Diabetes-related health care costs total nearly $100 billion per year and are increasing. Diabetes contributes to over 200,000 deaths each year. To understand diabetes, you first need to know about how your body uses a hormone called insulin to handle glucose, a simple sugar that is its main source of energy. In diabetes, something goes wrong in your body so that you do not produce insulin or are not sensitive to it. Therefore, your body produces high levels of blood glucose, which act on many organs to produce the symptoms of the disease. In this article, we will examine this serious disease. We will look at how your body handles glucose. We'll find out what insulin is and what it does, how the lack of insulin or insulin-insensitivity affects your body functions to produce the symptoms of diabetes, how the disease is currently treated and what future treatments are in store for diabetics.
<urn:uuid:fdfd785c-4ef0-4854-b95b-3a51b8d70daf>
{ "date": "2015-03-30T10:38:40", "dump": "CC-MAIN-2015-14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299261.59/warc/CC-MAIN-20150323172139-00278-ip-10-168-14-71.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9547987580299377, "score": 3.078125, "token_count": 264, "url": "http://health.howstuffworks.com/diseases-conditions/diabetes/diabetes.htm" }
Art has always been fundamentally intertwined with technology. New techniques and materials have constantly allowed artists to innovate and create new types of works. In this series we look at the impact of digital technologies on art and how artists are creating entirely novel forms of art using these modern tools. We've previously examined the fields of "datamoshing", ASCII art, BioArt, Minecraft Art and Internet Art. In this instalment we examine a fascinating world where scientists are teaching robots how to paint works of art. Artificial intelligence systems are currently excelling at producing elaborate digitally generated works of art. Every other week we seem to see a new neural network developed to mimic a famous artist's aesthetic or convert a photograph into a painterly image. But what about machines actually mimicking the process a human artist uses to paint on a canvas? That particularly human skill seems to be a lot harder for machines to replicate. In 2016, the RobotArt competition was founded by Stanford-educated mechanical engineer Andrew Conru. The competition was designed to stimulate robotic engineers to create new mechanical painting devices. In setting up the competition Conru noted that many of the initial entries were expected to be variations of a simple mechanism where a robotic arm mimics the movements of a human artist, but many teams took the challenge a step further. The competition saw a variety of different entries, from a team using an eye-tracking system to control a robot's movement, to a system that had users remotely control a robot via internet-directed brush stroke commands. All the weird and wonderful results reinforced the question of how truly creative a robotically generated work of art could really be. Below are the recently announced winners of the 2017 RobotArt competition. Be sure to click through to our gallery to get a broader look at each winner's work. Winner - PIX18 / Creative Machines lab From a mechanical engineering team at Columbia University we get the winner of RobotArt 2017, a bot by the name of PIX18. Apparently this is the third generation of a system developed with the goal of creating a robot capable of creating original artwork using the classic medium of oil on canvas. Judging comments applauded this robot's ability to produce "some lovely paintings from sources or scratch" and noted that the work had "brush strokes evocative of Van Gogh". 2nd Place - CMIT ReART The ReART system uses a haptic recording system to record artists painting a work. The system tracks the position of the brush, the force being exerted and a variety of other data points. A robot then "plays back" the recording, creating a perfectly mimicked ink brush drawing. The project is from the Department of Electrical Engineering at Kasetsart University in Thailand and looks to develop motion control robotics for a variety of industrial and creative uses. 3rd Place - CloudPainter CloudPainter is one of the most technically sophisticated projects in the RobotArt competition. Utilizing AI and deep learning systems, the project aims to get the machine to make as many individual creative decisions as possible. According to the creators, currently "the only decision made by a human is the decision to start a painting." More info on their process can be found on their website.
One of the judges said of the machine's work, "Spontaneous paint, "mosaicing" of adjacent tones, layering effects and the graphical interplay between paint strokes of varying textures, are all hand/eye, deeply neurally sophisticated aspects of oil painting..." 4th Place - e-David e-David is an evolving robotic painting system that uses a visual feedback loop to constantly record and re-process how the machine is interpreting its recreation of an input image. Using an ordinary industrial welding robot combined with cameras, sensors and a control computer, the system can correct errors as it paints, while also understanding what the makers call "human optimization processes". 5th Place - JACKbDU This is one of our favorite works from the competition. From a student at New York University Shanghai, this project is inspired by the aesthetic of American artist Chuck Close. The system starts with an input image that is converted to a low resolution and painted pixel by pixel using a mobile robot with omni wheels. Each oversized, low-res pixel that is cribbled by the robot is roughly the size of a human hand and each entire artwork is 176 X 176 cm (5.7 x 5.7 ft), or just about as tall as a human being. 6th Place - HEARTalion HEARTalion is a project from Halmstad University in Sweden that attempts to develop a system that can recognize and subsequently depict a person's emotional state. The system captures emotional signals using a Brain-Machine Interface (BCI) and a robot then attempts to convey the emotions visually based on a model that was developed with advice from two local painters in Halmstead, Peter Wahlbeck and Dan Koon. One of the impressed RobotArt judges remarked in reference to HEARTalion, "If this body of work was exhibited at a gallery and I was told that the artist aimed to capture emotion through color, composition, and textures — I would buy." 7th Place - Late Night Projects This independent entry from an electronic engineer who put in most of the work after his wife and kids had gone to bed uses a simple XYZ axis painter bot guided by two basic behavioral rules. All of this project's work is from reinterpretations of input images, but because the robot receives no feedback from sensors or cameras, the mixing of colors isn't faithful to the source. However, the novel strength of this project comes from its gorgeous use of watercolor paint. 8th Place - Wentworth Institute of Technology Using the precision of a robotic artist to its advantage, this project created a system that minutely controls the pressure and movement of single brush strokes to create stunning images that a human would struggle to accurately produce. The members of the team describe their process in greater detail here and have also publicly offered up their source code in the hope others will build upon their work. 9th Place - CARP CARP, or Custom Autonomous Robotic Painter, comes from a team at the Worcester Polytechnic Institute in Massachusetts. The system uses image decomposition techniques to dissemble input images, which are then reconstructed by a robot. Visual feedback systems are also incorporated into the process allowing for dynamic corrections to be applied to the work as it is being created. 10th Place - BABOT An experimental project from a team at MIT. This is an evolving robot arm that was saved from an existence as a decorative coat rack and has slowly been given more peripherals, such as an auto-brush cleaner and wireless control via a video game controller. 
Equipped with machine learning abilities, the robot can grow its skill set from project to project. Take a closer look through some more of the amazing and varied robot painted artworks in our gallery.
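The pixel-by-pixel approach described for JACKbDU starts from a familiar preprocessing step: the input image is reduced to a coarse grid of oversized "pixels" that the robot then paints one cell at a time. The Python sketch below shows one generic way such a step could look; it is purely illustrative, is not code from any of the competition teams, and the file name, grid size and library choice are assumptions.

    # Illustrative only: reduce an image to a coarse grid of "paintable" cells,
    # in the spirit of the pixel-by-pixel approach described above.
    from PIL import Image

    def image_to_paint_grid(path, cells=32):
        img = Image.open(path).convert("RGB")
        # Downsample to cells x cells; each remaining pixel becomes one brush target.
        small = img.resize((cells, cells), Image.LANCZOS)
        grid = []
        for y in range(cells):
            row = [small.getpixel((x, y)) for x in range(cells)]  # (R, G, B) per cell
            grid.append(row)
        return grid

    if __name__ == "__main__":
        # "portrait.jpg" is a placeholder file name for the example.
        grid = image_to_paint_grid("portrait.jpg", cells=32)
        print(f"{len(grid)} x {len(grid[0])} cells; top-left colour: {grid[0][0]}")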
<urn:uuid:5720ef38-20ea-4df0-bd22-a83501110fe1>
{ "date": "2017-05-23T12:49:12", "dump": "CC-MAIN-2017-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607636.66/warc/CC-MAIN-20170523122457-20170523142457-00027.warc.gz", "int_score": 3, "language": "en", "language_score": 0.957905113697052, "score": 3.21875, "token_count": 1466, "url": "http://newatlas.com/art-ones-and-zeros-robotart-painting/49538/" }
Of the 250 types of bee in the UK only the honeybee swarms. A swarm of honeybees contains many thousands of bees and are normally seen in a clump hanging from a tree or gatepost, just like in the picture. Individual honeybees are about the same size as a housefly, they have golden brown or dark brown bands and are slightly furry. Beekeepers work only with honeybees and our volunteer swarm collectors are only able to collect, and find a home for, a swarm of honeybees. They are not pest controllers so DO NOT collect or destroy nests of: wasps, bumblebees, solitary bees or hornets. We get many calls each year from people reporting “a swarm of honeybees” only to find a wasp nest or colony of bumblebees in a bird nest box. These are the smooth, yellow insects with black stripes that come after your picnic food. They have a round paper-like nest that may be found hanging from a tree, inside a shed or in your loft. Please contact your local council for advice on termination. Some types of solitary bees can be mistaken for honeybees but do not swarm. A small number (10 or more) may live close together. They are not aggressive and rarely sting. They will disappear by mid to late summer. These are bigger, rounder and fluffier than honeybees. They are generally not aggressive and rarely sting. Tree bumblebees often set up home in bird boxes and male bees waiting for virgin queens to emerge can be mistaken for a swarm. This is for a short time only, but if you get too close at this time they can be aggressive. They will disappear by mid to late summer. If this information has sparked an interest in honeybees you might like to join an open inspection session or even become a member of the association – if so please…
<urn:uuid:d98afbfc-94e2-4552-ae4a-1b821a02cc2e>
{ "date": "2019-12-12T21:56:51", "dump": "CC-MAIN-2019-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547165.98/warc/CC-MAIN-20191212205036-20191212233036-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9460475444793701, "score": 3.125, "token_count": 385, "url": "https://wdbka.org.uk/swarms/" }
Washington, August 9 (ANI): A team of scientists have released their first version of a 3D map of the universe. The team consisting of astronomers from Kyoto University, the University of Tokyo and the University of Oxford released the map from its FastSound project , which is surveying galaxies in the universe over nine billion light-years away. Using the Subaru Telescope's new Fiber Multi-Object Spectrograph (FMOS), the team's 3D map includes 1,100 galaxies and shows the large-scale structure of the universe nine billion years ago. The FastSound project, one of Subaru Telescope's Strategic Programs, began its observations in March 2012 and will continue them into the spring of 2014. Subaru Telescope's FMOS facilitates the project's goal of surveying a large portion of the sky. FMOS is a powerful wide-field spectroscopy system that enables near-infrared spectroscopy of over 100 objects at a time; the spectrograph's location at prime focus allows an exceptionally wide field of view when combined with the light collecting power of the 8.2 m primary mirror of the telescope. The current 3D map of 1,100 galaxies shows the large-scale structure of the universe nine billion years ago, spanning 600 million light-years along the angular direction and two billion light-years in the radial direction. The team will eventually survey a region totaling about 30 square degrees in the sky and then measure precise distances to about 5,000 galaxies that are more than ten billion light-years away. Although the clustering of galaxies is not as strong as that of the present-day universe, gravitational interaction will eventually result in clustering that grows to the current level. The final 3D map of the distant universe will serve a primary scientific goal of the project: to precisely measure the motion of galaxies and then measure the rate of growth of the large-scale structure as a test of Einstein's general theory of relativity. Although scientists know that the expansion of the universe is accelerating, they do not know why; it is one of the biggest questions in contemporary physics and astronomy. An unknown form of energy, so-called "dark energy," appears to uniformly fill the universe, accounting for about 70 percent of its mass-energy content and apparently causing its acceleration. (ANI)
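Phrases like "nine billion light-years away" come from converting a galaxy's measured redshift into a light-travel time and distance using a cosmological model. The Python sketch below illustrates that conversion for a flat Lambda-CDM model; the cosmological parameters and the example redshift are assumed round values, not the survey team's own numbers.

    # Rough illustration of how a redshift maps to "nine billion years ago".
    # Flat Lambda-CDM with assumed parameters; not the survey team's own pipeline.
    import numpy as np

    H0 = 70.0          # Hubble constant, km/s/Mpc (assumed)
    omega_m = 0.3      # matter density (assumed)
    omega_l = 1.0 - omega_m

    def lookback_time_gyr(z, steps=10000):
        """Light-travel (lookback) time to redshift z, in billions of years."""
        zs = np.linspace(0.0, z, steps)
        e_z = np.sqrt(omega_m * (1 + zs) ** 3 + omega_l)
        integrand = 1.0 / ((1 + zs) * e_z)
        dz = zs[1] - zs[0]
        integral = dz * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)
        hubble_time_gyr = 977.8 / H0   # 1/H0 expressed in Gyr when H0 is in km/s/Mpc
        return hubble_time_gyr * integral

    # FastSound targets galaxies at roughly z ~ 1.2 to 1.5; z = 1.4 is an example value.
    print(f"Lookback time at z = 1.4: {lookback_time_gyr(1.4):.1f} billion years")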
<urn:uuid:0d5b2cb6-8bc4-4a63-9527-64f40c540cd4>
{ "date": "2017-03-29T07:27:28", "dump": "CC-MAIN-2017-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00186-ip-10-233-31-227.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9274111390113831, "score": 3.296875, "token_count": 471, "url": "https://cricket.yahoo.com/news/1st-3-d-map-large-scale-structure-universe-102932998.html" }
By Gary Novak Physics corruption was largely abstract and academic, until it spread into the global warming issue, which is now resulting in energy systems being destroyed and economies being bankrupt based on scientific fraud. Climatologists do not have the concepts or procedures which would give them the slightest ability to theorize or measure what carbon dioxide did in the atmosphere before or after humans influenced the result. There are no mechanisms in science for correcting the problem of fraud. The fraud stems from total unaccountability to anyone outside science. Scientists who criticize are denied grants and the ability to publish (http://nov79.com/gbwm/firing.html). Four centuries of unaccountability has turned physics into a culture of criminality. There has not been an iota of physics published since Newton's time which has not been totally in error, usually with the intent to be in error due to incompetent persons forcing their way into science to monger power and attacking the rationality which they cannot handle. The biological science were correctable, which allowed truth to prevail, until recently, as power mongers have taken over all of science and are shoving out real scientists. In 1686, an erroneous definition of energy was formulated by Gottfried Leibniz through misrepresentation. I show mathematical proof of the error on my web site. In 1845, James Joule supposedly substantiated the Leibniz definition of energy by stirring water in a wooden bucket to determine the amount of heat produced. Joule did not have the slightest ability to conduct such a measurement, as I explain on my web site. Supposedly, later experiments show Joule to be only off by four parts per thousand; but since there is no such number, it shows that physicists are contriving it to this day. There is no explanation available to the public for the methodology used to measure the number (the mechanical equivalent of heat, or Joule's constant), and all imaginable procedures would have a very large error, like 10-50%, while the given number is 4.1868 Newton-meters per calorie, implying 0.0024% imprecision. Errors such as Planck's constant appear to be misinterpretations of the influence of light upon matter, except there are admitted contradictions in claiming light contains energy packets called photons. Packets (photons) must have length, width and height, while energy cannot. Frauds in physics abandon any pretense of rationality with relativity, where the starting point serves no purpose but to muddle the subject, while monumental results are synthesized out of nothing. The claimed E=mc² is nothing but a vague parallel to the misdefinition of kinetic energy with no relationship to anything in relativity. Since I show the definition of energy to be incorrect due to a squaring of velocity, a real parallel should not have the velocity of light squared; so it would be E=mc. When physicists apply Einstein's equation to determine the amount of energy in fusion reactions, they get a huge quantity due to the squaring of the velocity of light. When they conducted an experiment using lasers, they got no significant energy from the result, as would be expected with the incorrect definition of energy and the non-squaring of light. It means assumptions about the amount of energy in fusion reactions are misdirected by the misdefinition of energy being paralleled in supposed relativity. 
With such standards of fraud being engrained in physics, the concept of greenhouse gases creating global warming was a total contrivance with no relationship to valid science. The entire subject is based on modeling climate with infinite complexities being mentioned without specific descriptions of procedures used for evaluation or why the points are relevant. The absurdities show the intent of muddling the subject with irrelevancies and contriving the result out of the muddle. Global warming science was divided into two parts-a primary effect by carbon dioxide (or other greenhouse gases) absorbing energy and infinite secondary effects referred to as feedback which enhance the primary effect. The sometimes-stated analysis is that the primary effect was that humans increased the global average temperature by 0.2°C, while feedback increased it by a factor of three to 0.6°C. But a consistent logic does not exist. A determination of the primary effect cannot be located with a consistent or credible logic. Sometimes, such as Hansen et al, 1984 and 1988, "empirical observation" is said to be the source of the primary effect, which means a supposed temperature increase of 0.6° since the industrial revolution combined with 100 parts per million carbon dioxide increase sets the pattern for the future. But empirical observation includes secondary (feedback) effects, while it is used to define the primary effect. No real scientists would assume that all temperature increase of the recent past was due to greenhouse gases without some method of verification. Instead of verification, a fake hockey stick graph was constructed to indicate a totally flat temperature leading up to the industrial revolution and then an upward bend. The upward bend was used to convince the unwary that humans are destroying the planet. Over recent years, the hockey stickgraph was so discredited that it has largely been abandoned, while the primary effect of CO2 has no valid origins. A second method of contriving the primary effect was the use of "radiative transfer equations." Such equations will not yield anything resembling the result in question. Radiative transfer equations have the purpose of determining how radiation is depleted while a gas is increased in concentration. The rate of radiation depletion tells nothing of the amount of heat produced by the radiation. There is no description of methodology. Endless modeling of atmospheric influences is mixed with the claimed derivation of the primary effect through radiative transfer equations. There is no logic to applying atmospheric complexities to the primary effect. The descriptions serve no other purpose than muddling the subject and contriving the result with no accountable methodology. In the descriptions for the derivation of the primary effect, Ramanathan et all, 1979, page 4949, say warming is "due to the enhancement of the CO2 longwave opacity." Opacity (absorption of radiation) has infinite complexities which will not indicate a primary effect. The radiation moves from ten meters (http://nov79.com/gbwm/hnzh.html) at the center of the primary absorption peak to longer distances on the shoulders. Changing the distance is not increasing the heat. The source of emission is cooled, while the absorbing point is heated. Ramanathan et al, 1979, also state, "The net radiative flux at the tropopause and at the surface decreases due to increased CO2, and this decrease of course denotes a heating." 
They are saying that the planet is cooled by radiation which leaves from the tropopause and surface. This description omits radiation leaving throughout the troposphere, which is represented as ninety percent of the radiation which cools the planet on the NASA chart referred to as the "Earth's Energy Budget." Missing ninety percent of the radiation which cools the planet not only induces a one thousand percent error in calculations, but more importantly, it misses the equilibrium effect which nullifies the entire concept of greenhouse gases heating the atmosphere. The obvious logic is that the planet is cooled by radiation which goes around greenhouse gases, not through them. Radiation going around greenhouse gases creates an equilibrium effect, where radiation leaving the planet equals radiation entering from the sun. Equilibrium sets the temperature of the atmosphere uninfluenced by greenhouse gases. Ramanathan et al, 1979, state, "the enhancement of the CO2 longwave opacity occurs in the 12- to 18- µm, 9- to 10- µm and 7.6-µm spectral regions." Using a bandwidth for the primary absorption peak for CO2 at 12- to 18- µm is absurd, as it is normally shown to be 14-16 µm. This band cannot widen with increases in CO2, because the energy state of the molecules cannot change. In fact, the bandwidth significantly decreases with height in the atmosphere, because lower pressure reduces collisions which modify the energy state of the molecules. The extremely wide bandwidth given by Ramanathan et al, 1979, points to an erroneous concept, where spectrum analysis was done high in the atmosphere using word war II propeller aircraft, and all they got was engine noise with wide sine waves. Determining the primary effect through radiative transfer equations was supposedly refined by Myhre et al, 1998. Again, no methodology was described beyond endless blather on modeling atmospheric influences which have no conceivable relationship to the primary effect. The authors produced the reference for the primary effect which is now used throughout climatology and is assumed to be without question. It is stated as a three component fudge factor representing a curve, which is this: Heat increase = 5.35 ln C/C0 (http://nov79.com/gbwm/equations.html). Strangely, heat is given as watts per square meter, while the atmosphere has no surface. To not convert into units of mass, such as kilograms or cubic meters, shows a detachment from scientific standards which would only be possible with an attitude of deliberately subverting the process. Real science would create unmeetable demands on every point resulting no ability to realistically study the claims being made. Rather than meet those demands, climatologist chose convenience procedures aligned upon rationalism with no concern for legitimacy. If a primary effect did exist, it could not be determined by chasing heat through the atmosphere through modeling procedures, as publications pretend to be doing. Climatologists have no ability to unravel the complexities of the infinite and minute atmospheric effects which they refer to, and if they could, those influences would only relate to secondary effects. Yet such influences are applied to publications on the primary effect, with no clarifications as to why. Besides equilibrium, another reason why the primary effect does not exist is saturation, which means molecules absorb all radiation available to them, so more of such molecules cannot absorb more radiation. 
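For reference, the quoted expression is the simplified CO2 forcing formula attributed above to Myhre et al. (1998), usually written as a change of 5.35 ln(C/C0) watts per square metre. The short sketch below merely evaluates that published expression for a few round concentrations so that the numbers under discussion are explicit; the concentration values are illustrative, not measurements.

    # Evaluate the expression quoted above:
    #   delta_F = 5.35 * ln(C / C0)   [watts per square metre]
    # The concentrations below are round illustrative values, not measurements.
    import math

    def forcing_w_m2(c_ppm, c0_ppm=280.0):
        return 5.35 * math.log(c_ppm / c0_ppm)

    for c in (280, 380, 560):
        print(f"C = {c} ppm  ->  {forcing_w_m2(c):.2f} W/m^2")
    # A doubling of CO2 (280 -> 560 ppm) gives 5.35 * ln(2), about 3.7 W/m^2.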
Heinz Hug indicates that the center of the main absorption peak for carbon dioxide absorbs all radiation available to it in traveling ten meters from the point of origin. Farther down on the shoulders of the absorption peaks, where there are fewer carbon dioxide molecules, the distance increases in proportion to the number of molecules. Doubling the amount of carbon dioxide in the atmosphere reduces those distance to one half. Changing the distance is not increasing the heat. Nowhere in the climatology which promotes the greenhouse effect is the distance radiation travels mentioned. Any mention of distance would immediately prove the entire subject to be a fraud. Not only is a change in distance not a change in heat, but the increase in distance on the shoulders of the absorption peaks dilutes the heat proportionately, which lowers the temperature produced by the heat proportionately. There is a compounded effect which occurs like this: Where there are one hundredth as many carbon dioxide molecules with shoulder characteristics (perhaps absorbing at 14.3 µm instead of the 15 µm at the center of the absorption peak) the distance radiation must travel is one hundred times that which is absorbed in the center of the peak. This means the distance is one thousand meters instead of ten meters. This also means that one hundredth of the heat can be attributed to those molecules. Therefore, the temperature increase attributable to such shoulder molecules is one hundredth of the heat multiplied times one hundredth as much temperature increase for each unit of heat. One hundredth of one hundredth is one ten thousandths of the temperature increase. If the primary effect of CO2 before human influence is 1°C, as sometimes claimed, the shoulder effect mentioned here would be 0.0001°C. Yet this effect is said to be the global warming created by humans adding carbon dioxide to the atmosphere. Climatologists who promote the greenhouse effect have produced three responses to the concept of saturation. At first, they said the shoulders of the absorption peaks are not saturated, as explained in the above paragraph. This concept would not stand up to criticism; so they said the effect occurs high in the atmosphere, where saturation does not occur, and they generally claim this location is nine kilometers up. Even at that height, the distances only increase by a factor of three, since the atmospheric pressure is thirty percent of that at sea level. Increasing the distances by three does not escape the effects of saturation. But even worse, to get the heat back to the near surface requires radiating it downward. There has to be twenty four times as much temperature increase at nine kilometers as occurs near the surface, by simple calculations. Such temperature increase has never been found. But since oceans will absorb most of the back radiation, the actual amount would have to be thousands of times higher to get a 1°C temperature increase in the near earth atmosphere. The third rationalization is that satellites measure key radiation escaping from nine kilometers up, and this shows that saturation is not occurring. Firstly, saturation can easily be determined by measuring in a test tube in a laboratory. Such rock solid science cannot be contradicted by wishy washy interpretations of satellite absorption. Satellites cannot produce such information. They cannot determine the height from which narrow bands of radiation come from. 
Satellites are said to show the height from which total heat comes, but the height is determined by shift in wavelength. Shorter wavelengths do not travel as far through the atmosphere. But CO2 only absorbs very narrow bands of radiation, which means shift in wavelength will not indicate the height. In other words, satellites will pick up something from the top of the stratosphere regardless of saturation, and there is no indication of saturation in the result. In 2001, the IPCC (AR3) stated that saturation exists in these terms: "Carbon dioxide absorbs infrared radiation in the middle of its 15 µm band to the extent that radiation in the middle of this band cannot escape unimpeded: this absorption is saturated. This, however, is not the case for the band's wings. It is because of these effects of partial saturation..." In other words, the rationalizers cannot get around saturation, and it precludes all other effects. Saturation means the primary effect of increased carbon dioxide heating the atmosphere is zero. This result is only evident when distance is considered. Without distance, muddled effects attempt to mask the truth and contrive an effect where none exists. With zero primary effect, there are no secondary effects. All of the claimed studies of such an effect are total contrivances. Once contrivers get a dark pit constructed, there is never a flaw that comes out of it. Out in the open, they cannot produce rationality. That's why the dark pits without a description of methodology are a fraud in science.
<urn:uuid:55e7d546-acfd-49ae-9f6a-18f36ded12c6>
{ "date": "2018-11-15T02:12:58", "dump": "CC-MAIN-2018-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742338.13/warc/CC-MAIN-20181115013218-20181115035218-00176.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9480267763137817, "score": 2.765625, "token_count": 3106, "url": "http://www.pravdareport.com/science/earth/31-12-2013/126523-criminal_global_warming_fraud-0/" }
Keeping students safe is of primary importance. How to do this effectively and still provide opportunities and experiences that will enrich, motivate and instruct is a challenge that requires planning, development of content and activities and vigilant supervision. It is easiest to simply block anything that may possibly have some content that is not appropriate, with a firewall. This protects all students from anything potentially harmful and of course also protects a school system from potential legal action. The downside to this type of blocking is that A) sites may be erroneously judged B) blocking for an elementary school child is not necessarily the same as blocking for a high school child and C) we are not helping students to discern between useful and appropriate and not useful and inappropriate. The careful monitoring and releasing for access of websites or virtual worlds does require time, work, and judgment. Much of this falls on IT staff and not instructional staff. IT staff members are typically given parameters for blocking and they apply the firewall to adhere to these parameters. Instructors and other personnel may request for “unblocking” of specific sites and provide justification. Ultimately, that decision is made by someone other than the classroom teacher. For the most part this practice protects the student, the teacher and the school system. I cannot help but wonder about the learning that could take place, the guidance that could be provided as technology becomes more ubiquitous in our daily lives. Should we be using social networking and virtual worlds in our instruction? Do the benefits outweigh the risks? Can teachers supervise effectively enough to ensure the appropriate use of these technologies? Should our instruction include what is available in the world beyond the classroom walls? Can the technology provide access to some areas for some children? Are teachers prepared to discern between what is appropriate and what is not? Can the issue be reviewed holistically? Certainly everything carries risk – getting on the highway is one of the riskiest activities we have yet we put children in school buses and send them on their way, without seat-belts. Perhaps we need to develop “seat-belts” for the ride on the Internet rather than blocking all the ramps. Ultimately we need to prepare our children for a future we do not know, that future includes access to the Internet and the ability to determine value of what is found there.
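The "seat-belt" alternative suggested here, filtering that varies with a student's level instead of blocking every ramp, can be pictured as a small piece of policy logic. The Python sketch below is purely illustrative; the categories, grade bands and example sites are invented, and a real school content filter is of course far more involved.

    # Purely illustrative sketch of grade-aware filtering ("seat-belts, not blocked ramps").
    # Categories, grade bands and site names are invented examples.

    ALLOWED_CATEGORIES = {
        "elementary": {"reference", "education"},
        "middle":     {"reference", "education", "news"},
        "high":       {"reference", "education", "news", "social", "virtual_worlds"},
    }

    SITE_CATEGORIES = {
        "encyclopedia.example.org": "reference",
        "worldnews.example.com":    "news",
        "chatworld.example.net":    "social",
    }

    def is_accessible(url_host: str, grade_band: str) -> bool:
        category = SITE_CATEGORIES.get(url_host)
        if category is None:
            return False          # unknown sites go to a review / unblock-request queue
        return category in ALLOWED_CATEGORIES[grade_band]

    print(is_accessible("chatworld.example.net", "elementary"))  # False
    print(is_accessible("chatworld.example.net", "high"))        # True, with supervision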
<urn:uuid:bce4fcd8-4478-4033-9822-d3b74bbd6223>
{ "date": "2017-10-18T12:58:25", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822966.64/warc/CC-MAIN-20171018123747-20171018143747-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9614235758781433, "score": 3.265625, "token_count": 460, "url": "https://gridjumper.net/2010/06/12/safety-in-the-virtual-world/" }
because it pays those countries to avoid strict emission control policies. Future studies of the economics of climate change should look at offsets not just for their impact on compliance costs but also for their influence on strategic interaction between industrialized and developing countries. The analytical questions are not just whether offsets are “additional” but whether they produce positive (or even negative) leverage on the emissions from developing countries. Over the long term the largest leverage that the industrialized countries have on the global warming problem will come from their strategy to engage developing countries. Because of perverse incentives embedded in any offsets scheme, one element of that strategy must be a credible sunset for offsets. Second, the design of CDM rules has been highly political, which is hardly surprising since it is mobilizing and allocating large amounts of capital and the other benefits that flow alongside investment. At present, the CDM pipeline is probably worth several tens of billions of dollars. By 2020, the Copenhagen Accord envisions that the CDM and other offsets markets might annually channel $100 billion to developing countries, which would exceed total current annual spending on official development assistance from all sources for all purposes. Politically organized interest groups have favored some technologies (e.g., small hydropower) while abhorring others (e.g., nuclear power). Those forces are evident in the current and prospective flow of CDM credits. The most important effect of politics on the design of the CDM has been the strong political pressure to generate high volumes of offset credits at the expense of quality. Firms and governments in industrialized countries seek offset credits to assure that they will be able to comply with strict emission targets. Developing countries that host projects want to maximize the revenues that are linked to the flow of credits. By contrast, the interest groups that would press for higher quality and strict administration, which would lead to much lower and more uncertain flows of emission credits, are much less well organized and influential. A similar constellation of political forces is now mobilizing around U.S. policy on offsets. There are well-organized industrial forces that favor generous offsets rules. (Those forces are not wrong—indeed, if well administered, an unfettered offsets system would be a good policy.) But the crucial administrative questions have been left vague and are most deferred until the future. Interest groups that would favor strict administration are much less coherently organized. One remedy for these pressures is to set a credible safety valve on emission prices, which would remove the incentive for purchasers of offsets to seek high offset volumes as their only means of managing compliance costs. Third, many of the troubles in the CDM arise because it was designed by committee with very little attention to political economy. A much more strategic approach to the design of offsets is feasible and badly needed. In theory, most of the power in the creation of an offsets market originates with the largest purchasers of offset credits—today the EU and Japan (via the CDM) and eventually the United States, once a U.S. emissions policy is reliably in place. So far the EU and Japan have ceded much of their potential power to the Executive Board created under the Kyoto Protocol to manage the CDM. 
That Board is a cumbersome and largely ineffective system for administration. This is not news to the governments of the EU and Japan, but these countries have not pressed harder for such reforms nor created their own, better parallel system because they had no other alternative means of meeting the Kyoto targets. The United States has the luxury of starting over. The United States should use its market power more wisely by setting rules for price offsets according to quality, creating a system of buyer liability, and adopting other rules that will create stronger private incentives to identify and reward (with higher prices and better delivery terms) high quality projects. As such, U.S. rules could create a competition for quality rather than a race to the bottom. This is a hypothesis that merits some modeling effort since it suggests that the United States could have inordinate leverage on the quality of worldwide efforts to engage developing countries through the rules it sets in its home market. Fourth, the studies presented at this conference suggest that the offsets supply market will not be competitive. A few activities—forestry in Brazil and possibly Indonesia as well as the electric power sector in China—are likely to be the largest suppliers of offsets.81 All are dominated by government-owned corporations or government The actual supply of forestry credits will depend on the combination of available forestry projects as well as the systems for administering those projects. Brazil combines large potential supply of such projects with decent public administration and could be the dominant supplier in forestry. In energy-related offsets, see Blanford (2010) for a striking set of supply curves suggesting that perhaps half of the supply of offset credits from developing countries would come from the Chinese electric sector. See also Victor (2009) for an argument why most of the non-electric activities in developing countries are much more difficult to include in crediting schemes—because monitoring of emissions and government control over the electric sector is usually much more decisive than in most other segments of the economy.
<urn:uuid:6de2547c-aff9-4240-9075-44c56b90f057>
{ "date": "2013-12-04T16:15:55", "dump": "CC-MAIN-2013-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163035819/warc/CC-MAIN-20131204131715-00002-ip-10-33-133-15.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9643802642822266, "score": 2.734375, "token_count": 1036, "url": "http://www.nap.edu/openbook.php?record_id=13023&page=133" }
Your baby is developing and growing by leaps and bounds. This is a particularly exciting age as your baby develops skills that will enable him or her to explore the world. New discoveries are aided by ever-more complex mental development and through the increased use of hands (fine motor skills) and increased mobility (gross motor skills). Your infant now realizes that objects are permanent, and out of sight does not mean out of mind. Separation reactions occur now and are signs of healthy attachment. Parents can support mental growth at this age by allowing the child to experiment with simple everyday objects and toys in an environment that is stimulating, developmentally appropriate, and safe. Formula or breast milk should be continued until 1 year of age. By this time, solids should have been introduced and your infant probably has a large repertoire of foods. At this age, finger foods are important. Your child can now pick up and hold small objects and is interested in new tastes and textures of foods. Finger foods are also important as your baby strives to become more independent. Avoid peanuts, hot dog pieces, popcorn, frozen peas, beans, raw carrot sticks, pieces of raw apple, grapes, and raisins because they can cause choking. Good alternatives are soft cheeses, toast, soft-cooked carrots and other vegetables, wedges of soft banana, canned pears and peaches, cooked rice, mashed potatoes, and teething crackers. Because your infant is now more mobile, safety measures need to expand to anticipate new activities. The same safety concerns from previous visits remain important. - As children begin to pull themselves up, they might grab and pull down tablecloths on which heavy or hot objects have been placed. - Increased mobility might lead to falls. Use gates at stairwells and install safety devices on windows and screens if necessary. Avoid gates with diamond-shaped slats, which provide footholds for climbing toddlers. Instead use gates with straight, vertical slats and a swinging door. - Keep sharp objects (knives, scissors, tools, razor blades) and other hazardous items (coins, glass objects, beads, pins, medicines) in a secure place. - Secure electrical extension cords to baseboards and cover electrical outlets. - Do not store toxic substances in empty soda bottles, glasses, or jars. - All poisonous substances should be placed in a locked cabinet. In the event of an accidental poisoning, call the POISON CONTROL CENTER toll-free at 800.222.1222. - Upgrade to a toddler car seat when your child weighs 20 pounds. - The hot water tap should be set at less than 120 degrees Fahrenheit. Most burns occur in the bathroom. - Never drink hot liquids or smoke while holding your baby, especially now that your baby can reach out. - March of Dimes: Feeding Your Baby - March of Dimes: What is Normal Development? - March of Dimes: Babies (0-12 months) © 1995-2017 The Cleveland Clinic Foundation. All rights reserved. This information is provided by the Cleveland Clinic and is not intended to replace the medical advice of your doctor or health care provider. Please consult your health care provider for advice about a specific medical condition. This document was last reviewed on: 8/15/2012...#4747
<urn:uuid:a3f47843-dce1-4770-a2eb-a107f24f40b0>
{ "date": "2017-04-30T16:53:12", "dump": "CC-MAIN-2017-17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125719.13/warc/CC-MAIN-20170423031205-00532-ip-10-145-167-34.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9269604682922363, "score": 3.578125, "token_count": 679, "url": "https://my.clevelandclinic.org/health/articles/well-child-care-9-month-visit" }
This diet has a disadvantage: the number of calories is not the only thing that matters; the composition of the food should also be monitored.
Type 4: Diets low in carbohydrates
Low-carbohydrate diets involve a full or partial rejection of carbohydrates. Diets of this kind include the Kremlin diet, the Atkins diet, the Dukan diet, and the Kato diet, among others. The logic is that without carbohydrates, the body begins to burn fat. Although the theoretical basis of this type of diet is the most consistent, it has a number of disadvantages, and for three consecutive years the British Dietetic Association has named the Dukan diet the most dangerous of all and recommends avoiding it (1).
Type 5: Control of nutrients
Controlling not only calories but also the composition of meals (the amounts of fat, protein, and carbohydrates) gives us the most reasonable dietary method for changing our weight. Among the most popular diets, the closest to this type is the Zone diet. This type of diet has only one disadvantage: it requires constant and complete control of the amount of food consumed and its composition. Not everyone can do it, and it is precisely for this reason that people seek simpler methods and believe in "miracle diets".
Among the infinite variety of diets, only low-carbohydrate diets and diets with controlled intake of nutrients have a real theoretical foundation. Taking into account the fact that low-carbohydrate diets can be dangerous, the best option is to control the nutrients.
<urn:uuid:417e19ab-b791-401b-af93-1ea38c8e9f81>
{ "date": "2018-06-19T08:14:35", "dump": "CC-MAIN-2018-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861981.50/warc/CC-MAIN-20180619080121-20180619100121-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9345092177391052, "score": 2.625, "token_count": 313, "url": "https://generalhealthissuess.wordpress.com/2014/04/23/john-barban-diet-low-in-carbohydrates/" }
Continuing my series on how the same things can be done differently in SQL Server and MySQL, in this post we will look at temporary table support in SQL Server vs MySQL.
We may often need to create a temporary table while processing data, to provide a workspace for storing intermediate results. Both SQL Server and MySQL support temporary tables.
In SQL Server, all temporary tables should be prefixed by the # sign:
create table #test (id int, names varchar(20)) -- column types assumed; they did not survive in this copy of the post
insert into #test (id, names) values (1, 'Jack') -- sample row, for illustration
select * from #test
We can drop this table by using a DROP command
DROP table #test
In MySQL, we have to use the keyword 'temporary' when creating a temporary table. Consider the following code:
create temporary table if not exists test (id int, names varchar(20)); -- column types assumed
insert into test (id, names) values (1, 'Jack'); -- sample row, for illustration
select * from test;
The above creates a temporary table called test in the current session if it is not already available. To drop a temporary table in MySQL, we can use the following code:
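For example (standard MySQL syntax, shown here since the original snippet is missing):
drop temporary table if exists test; -- a plain DROP TABLE test also works
Note that a MySQL temporary table is visible only to the session that created it, so it is removed automatically when that connection closes even if it is never dropped explicitly.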
<urn:uuid:aceafe3f-84bd-4ea5-af53-1e2bf066fdc6>
{ "date": "2014-09-02T23:45:04", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535923940.4/warc/CC-MAIN-20140909031808-00123-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.736742377281189, "score": 2.609375, "token_count": 233, "url": "http://www.sqlservercurry.com/2012/04/temporary-tables-sql-server-vs-mysql.html" }
Diplopia (double vision) is often the first manifestation of many systemic disorders, especially muscular or neurologic processes. An accurate, clear description of the symptoms (eg, constant or intermittent; variable or unchanging; at near or at far; with one eye [monocular] or with both eyes [binocular]; horizontal, vertical, or oblique) is critical to appropriate diagnosis and management. Binocular diplopia (or true diplopia) is a breakdown in the fusional capacity of the binocular system. The normal neuromuscular coordination cannot maintain correspondence of the visual objects on the retinas of the 2 eyes. Double vision may be secondary to thyroid eye disease, myasthenia gravis, tumors of the orbit or brain, and cerebral aneurysms. Rarely, fusion cannot occur because of dissimilar image size, which can occur after changes in the optical function of the eye following refractive surgery (eg, LASIK) or after a cataract is replaced by an intraocular lens.
<urn:uuid:0c210156-cdb9-414c-82e7-1fa15e6c23d3>
{ "date": "2017-07-22T14:56:18", "dump": "CC-MAIN-2017-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424079.84/warc/CC-MAIN-20170722142728-20170722162728-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8586164712905884, "score": 3.078125, "token_count": 213, "url": "https://www.lvcenter4sight.com/double-vision/" }
‘The study of great books allows the past to speak for itself, combining history, creative writing, philosophy, politics, and ethics into a seamless whole. The goal…is a greater understanding of our own civilization, country, and place in time, stemming from an understanding of what has come before us…The goal of classical education is not an exhaustive exploration of great literature. The student with a well-trained mind continues to read, think, and analyze long after classes have ended.’ Susan Wise Bauer, pg. 473
We are in the process of wrapping up this year’s ancient history studies, and I have learned as much or more about this period of history as my teen. What have I learned?
- Reading great books is difficult, but not impossible. At minimum, it takes a commitment to gain something from what you’re reading, even if that commitment is not accompanied by genuine interest.
- Names like Plato and Homer shouldn’t intimidate you; learning about them before reading their books allows you to be more comfortable with what you are reading.
- Tools like Sparknotes and books like An Invitation to the Classics (an invaluable resource, giving brief but easily understandable information on authors and describing their books in context) can be marvelous helps, but they will never fully convey the emotion of the author.
- Living books don’t need accompanying textbooks to “fill in blanks;” by studying people in the context of their surroundings, your child can fill in any blanks regarding events, customs, and culture.
- You can lead a horse to water, but you can’t make it drink. And a mule won’t even allow itself to be led. Enough said.
One last thing I’ve learned. Audiobooks are my new BFF.
So, having almost completed this year’s work, I’ve begun to think about reading plans for next year for all three kids, but primarily for the oldest. I’m sure this is a function of what I’m least comfortable laying out. Last year, I spent most of the summer preparing a syllabus of sorts to help her get accustomed to reading through one. Though a number of my homeschooling friends have benefited from it, I can safely say that she would have been just as contented to figure it out as she goes. This is one of many differences in our personalities: I plan ahead, but my oldest gets a lot done on last-minute adrenalin. God is gracious enough that only a few of my hairs have turned grey (smile). So, in spite of a few horse and mule days, this is our proposed read-aloud/together list for high school, 2010-2011:
Virgil’s Aeneid (audiobook)
The City of God (audiobook)
How the Irish Saved Civilization
The Song of Roland
The Magna Carta (?)
Dante’s Inferno (audiobook)
Somewhere along the way, we will also spend some time with Japanese haiku, and cover Islamic beliefs via the Compact Book of World Religions.
Ambitious? You bet. I’m still determining what will make the final list, and of course, the list on paper may or may not match what we actually get done. As I embrace this particular passage of Ms. Bauer’s, I am comfortable that even if we don’t cover all the books in the curriculum, we will work to understand the period and how it relates to where we are.
We also have “free” reading. In our home, these are books that don’t have any follow-up assignments attached to them, nor is the reading graded in any way; the children read them to me. They are my selections for them, but they are intended to be both educational and entertaining.
Free reading also gives us the opportunity to add in books that are written from perspectives other than Western Hemisphere and European ones. Again, this is a first cut, and subject to change several times before it’s put into action (and a few times afterward!)
The Samurai by Shusaku Endo
Ashaki, African Princess by Patricia Weaver
The Life of Alexander the Great by Plutarch (audiobook)
Genghis Khan and the Making of the Modern World by Jack Weatherford
Dr. Jekyll and Mr. Hyde by Robert Louis Stevenson (audiobook)
My younger two will be much easier to plan for, thankfully. We’ve hit a sweet spot where we have the publishers that work for us, and all we have to do is pass down what we’ve bought already and/or make minimal purchases to complement something that’s been consumed. I’m pretty sure that our son will use selections from Sonlight’s History of God’s Kingdom. Interestingly enough, several of the books are in my possession already from the teen’s studies this year, so there’s my head start on purchases. Speaking of a head start, I shared previously that I’d probably go with Sonlight’s 2nd grade readers for the youngest. In comparing our bookshelves to the newest catalog, I found that we already own several of them. I was happy to not have to spend as much on books. In fact, from a cursory look at next year, it looks like I will only have to buy Apologia’s chemistry text, Horizons Math, and Teaching Textbooks Geometry! Now of course, these three resources will run me upwards of $200, and that’s the not-so-good news. Anyway, I do love that a plan is coming together.
<urn:uuid:389de394-2bf5-4f6e-9a4b-a76838debe58>
{ "date": "2017-09-24T05:01:59", "dump": "CC-MAIN-2017-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689874.50/warc/CC-MAIN-20170924044206-20170924064206-00696.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9489459991455078, "score": 2.828125, "token_count": 1199, "url": "https://theblessedheritage.wordpress.com/2010/04/03/2010-11-reading-plans-2/" }
Last year, one in four of the more than 607,000 bridges that dot the United States was deemed deficient and is either racked with structural issues or is so obsolete it isn't suitable for traffic. In a recent report, the Government Accountability Office found there has been some improvement to bridge safety — the number of deficient bridges has decreased in the past decade — but there are still funding issues. And the overall funding picture is murky at best. “The impact of the federal investment in bridges is difficult to measure,” GAO noted in its report. “For example, while Department of Transportation tracks a portion of bridge spending on a state-by-state basis, the data do not include state and local spending, thus making it difficult to determine the federal contribution to overall expenditures. Understanding the impact of federal investment in bridges is important in determining how to invest future federal resources.” GAO conducted the study in the aftermath of the Skagit I-5 bridge collapse, which has severed a major artery in Washington state. The office wanted to look into what is known about the current condition of bridges around the nation and to see what changes have been made in line with MAP-21’s goals. Ultimately, for GAO, the infrastructure recommendations included in MAP-21 were deemed sufficient and it's not making any new recommendations at this time. To read the full report, click here . - Jon Ross
<urn:uuid:3c360224-8b30-47a2-8673-5abc5214e562>
{ "date": "2015-04-01T10:44:37", "dump": "CC-MAIN-2015-14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131304444.86/warc/CC-MAIN-20150323172144-00226-ip-10-168-14-71.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9673367738723755, "score": 2.65625, "token_count": 294, "url": "http://americanshipper.com/Main/News/GAO_checks_in_on_nations_bridges_54186.aspx?taxonomy=Regulatory1" }
A post office since 1897. Located on the eastern bank of Big Niangua River in the northern part of Warren Township, seven miles south of Linn Creek. Named by Major Kellog when the town was laid out at the beautiful springs at this place. He claimed it was derived from the Indian words, "Iha-ha," to smile, and "tonka," meaning water. It means then "smiling waters." The etymology, as is often the case with such artificial, "made-up" Indian appellations, is a highly dubious one. It is true that there is in the Osage language a verb "i-ha-ha," but it means "to laugh" or "to ridicule" rather than to smile; and the word "tonka" (more correctly "tonga" in the Osage language) means "big," not "water." Major Kellog probably modeled his name on that of Lake Minnetonka in Minnesota, which means "big water;" but it is the first part of that name, "Minne," that means "water," and the second part, "tonka," means "big." The Indian name of these beautiful Missouri Springs means, therefore, if it means anything, rather "Big Laugh" than "Smiling Waters." Overlay, Fauna R. "Place Names Of Five South Central Counties Of Missouri." M.A. thesis., University of Missouri-Columbia, 1943.
<urn:uuid:353aa921-c6c6-49be-8431-826c51621f08>
{ "date": "2019-06-25T11:39:44", "dump": "CC-MAIN-2019-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999838.23/warc/CC-MAIN-20190625112522-20190625134522-00056.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9719544053077698, "score": 3.078125, "token_count": 299, "url": "http://www.ozarkdragon.com/2013/12/the-origin-of-name-hahatonka.html" }
Sometime in the Han Dynasty in China, between 206 BC and 220 AD, the game of Rock, Paper, Scissors was born. It’s a two-player game. The opponents face each other and at a signal like “1-2-3-Go!” each player vigorously extends a hand out as a gesture. The gestures are one of the following: Rock (as a fist), Scissors (as a V made with two or four fingers) or Paper (represented by your palm, fingers together). The winner is determined by the logic: Rock beats Scissors (it has the power to break them), Scissors beat Paper (they can cut it) and Paper beats Rock (it can wrap itself around and cover it). Matching gestures, such as Rock against Rock or Scissors against Scissors, constitute a draw. The game is hugely popular all over the world; its simplicity ensures that all age groups can play.
Between machine opponents truly operating at random, there is no advantage. But human players tend to be non-random, and there have been competitions using various algorithms and heuristics to win the game. Recently, a robot has been built that is unbeatable. It wins by cheating. Using a high-speed camera, the robot detects the beginning muscle shapes and movements of its human opponent’s gesture. Before the human can complete the gesture, the robot speedily comes up with the counter-response that wins.
Here we look at the game as it is played between humans, and at six macro, subliminal lessons in business behavior that can be learned from it.
1. Disruption overcomes the status quo
Rock, paper, scissors are each disruptive to the other – and the right attack wins. Just responding in like manner, say Rock vs. Rock, is ho-hum, just a prolonging of the game. To win, you need to come from somewhere else.
2. Time is of the essence
If you are not fast enough in your response, you are not playing the game. The opponent has to be matched. Fast analysis is key to responding to the competition.
3. Learn the pattern of your opponent’s behavior
Is your opponent prone to certain types of actions? Is there a bias? Are they fast or slow to adapt? The more competitive analysis you do, the better off you will be. (A simple sketch of this idea in code follows the list of lessons.)
4. Don’t get caught in a rut yourself
You, too, can fall into a trap of repeat behaviors, when in fact advantage is to those who stay nimble. Spot trends and learn to change your behavior to harvest opportunity. Becoming predictable yourself can mean decline.
5. Longevity is not an advantage
Your opponent may have been around for a long time – but what matters is how you play the game. The past does not matter – how you are playing today does. Newcomers can beat entrenched adversaries. (I see kids beat grownups all the time).
6. What appears to be a weakness could be a strength
Paper overcoming Rock – that is really out-of-the-box thinking. Paper has little intrinsic strength, can be easily torn. But it can cover Rock to overwhelm. Business has many examples of this where an ostensibly weak adversary overcomes a stronger opponent by using a new approach, often as an element of surprise.
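Lesson 3 can be made concrete with a short sketch. The code below is illustrative only (it is not from the article or from any published bot): it simply counts an opponent's past throws and plays whatever beats their most frequent one, which is enough to beat a human who has fallen into a rut.

    # Hypothetical example: a frequency-counting strategy for Rock, Paper, Scissors.
    import random
    from collections import Counter

    WINS_AGAINST = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

    def next_move(opponent_history):
        if not opponent_history:
            return random.choice(list(WINS_AGAINST))  # no data yet, so play at random
        favorite = Counter(opponent_history).most_common(1)[0][0]  # opponent's most frequent throw
        return WINS_AGAINST[favorite]  # play the move that beats it

    print(next_move(["rock", "rock", "paper", "rock"]))  # prints "paper"

A real competitive bot would look at longer patterns (pairs and triples of moves) rather than simple frequencies, but the principle is the same: the moment an opponent stops being random, their history becomes a weapon against them.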
Join the CIO Australia group on LinkedIn. The group is open to CIOs, IT Directors, COOs, CTOs and senior IT managers.
<urn:uuid:18372d9a-5c8d-4900-90df-46e5458ba041>
{ "date": "2019-07-21T13:22:04", "dump": "CC-MAIN-2019-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527000.10/warc/CC-MAIN-20190721123414-20190721145414-00536.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9556937217712402, "score": 3.609375, "token_count": 839, "url": "https://www.cio.com.au/article/587614/6-business-lessons-rock-paper-scissors/" }
On February 3, 1904 “Pretty Boy” Charles Floyd was born in Adairsville, Georgia. He was a poor farmer during the great depression. He was raised in a compact farming community of Oklahoma, Akins and the Cookson Hills which he would become the arm of the law. He moved to Oklahoma where his family owned a farm. They were extremely poor because of the fact that banks were taking over people’s farms. Floyd was best known for his constant crimes involving payroll robbery in the mid 1920’s, which he was arrested for. After his release, he continued robbing numerous banks.Floyd earned the nickname “Choc” because of his love of Choctaw beer. His best nickname was “Pretty Boy” he received his nickname from a girlfriend at Kansas City. Floyd hated his nickname so he went back to letting people call him “Chock”. Due to the “Dust Bowl” happening at the time, he used crime to escape the poverty he and his family were facing. Floyd was poor, and out of work which was hard since he had a young family. Even when he took normal jobs, it still did not help.He got married to Ruby Hardgraves at the age of 20. They gave birth to a boy named Charles Dempsey Floyd. Jack Floyd was born between 1922 and 1923. His father was serving a four year prison sentence for robbing a store in St. Louis, Missouri. Due to his actions of robbing a bank, his wife filed a divorce against him. Floyd was diagnosed with mental disorder by killing an accused man and killing his own father. Due to this reckless usage of a machine gun, he started robbing banks in Ohio River with a group. While on his crime spree, him and his group became popular by completing criminal acts such as destroying papers at the bank, stealing mortgages at banks and liberating debt-ridden citizens. Floyd completed many criminal acts against the government. To begin with, Floyd strangled William Brown’s wife, and brutalized her body. William Brown’s wife was pregnant,which was the police informed William Brown as being a double homicide. During that time Georgia Green and her daughter were alone when Floyd came in. He battered Georgia Green and her daughter to death. Both girls had red hair. The police understands that both victim had red hair which made the police infer that he attacked women with red hair. Floyd struck again in 1945. During that time the victims were Panta Lou Niles and the red haired women named Georgia Green.Before Floyd was caught and shot twice, he was attempting to escape from the police. He was in no position for negotiation thus leaving by himself. His behavior before his death was being frightened and worried if he was going to get caught. Floyd was unable to withstand trial because he was shot twice. He robbed numerous banks and was not able to go through trial because he was not caught at that timeFloyd was sentenced after admitting to the rape and murder of five women and the murder of an unborn child. Charles Floyd’s behavior was unfortunately bad. He was sentenced to life in a mental establishment.Before his death, Floyd had a bounty on his head for $23,000 which urged him to create an alias under his name. His alias was Mr. George Sanders and with that, he escaped and went into hideout with Richetti and two other women. After, in Wellsville, Chief J.H. Fultz was told that people were skulking outside of the police station. He caught Rich Richetti and Floyd making their escape. That day, authorities found Floyd in an East Liverpool cornfield and he began shooting at the cops. 
He was shot twice and his last words were, “I’m done for; you’ve hit me twice.” FBI agents attempted to get him an ambulance to save him, but he died 15 minutes after he was shot.
<urn:uuid:10321ba8-a44b-4710-9667-5f042912b3ae>
{ "date": "2019-12-12T03:31:54", "dump": "CC-MAIN-2019-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540536855.78/warc/CC-MAIN-20191212023648-20191212051648-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9961197972297668, "score": 3.296875, "token_count": 806, "url": "https://postamble.org/on-farmer-during-the-great-depression-he-was-raised/" }
THE WHOOPING CRANE REPORT: 16
A pair of whoopers takes turns incubating eggs in their pen at Patuxent. While the female incubates the egg, the male stands nearby, keeping watch and waiting his turn. Both parents care for the eggs and rear the chick. Photo by USFWS
We use a portable suitcase incubator when we collect crane eggs. The suitcase is small enough to be easily carried by a single person. We keep the suitcase in a horizontal position to keep the eggs from jostling. Hot water bottles in the bottom of the suitcase keep the eggs warm. The eggs are enclosed by a Styrofoam holder which cradles the eggs securely and keeps them warm. The entire suitcase is lined with shock-absorbing insulating foam. A long thermometer is inserted through the top of the incubator to make sure the eggs are not too hot or too cold. Photo by USGS
Eggs that are ready to go into the mechanical incubator in the propagation building are placed in this rack, cushioned with paper towels, after they've been weighed and candled. The position of the egg is always small end down, large end up.
A sandhill crane patiently incubates whooper eggs on her nest at the Patuxent. Photo by USFWS
Whooper eggs that are close to hatching may be placed in a mechanical incubator in the last 10 days. These eggs may be scheduled for costume-rearing, either for the WCEP migration study or the Florida release program.
Life of a Patuxent Whooper Egg
In nature, whooping cranes usually mate, establish territory, build a nest, then lay two eggs. If everything goes right, they will raise one chick. But life for the Patuxent cranes, and the people who care for them, is more complicated.
At Patuxent, whoopers usually mate, establish territory in their pen, build a nest, then lay two eggs, which is called a clutch. However, the birds don't get to keep these eggs. After they lay the second egg, we remove the clutch. Why? Because removing the clutch after it's laid causes the cranes to lay again in about 10 days. By taking the eggs away, we can increase production 3 or 4 times. So instead of only laying 2 eggs and raising at the most one chick, as they would in the wild, Patuxent's whoopers may lay 6 or 8 eggs each year. Depending on the year's production goals, Jane, the flock manager, decides when to end egg production for each pair. The whoopers are usually allowed to incubate their last egg, and raise the chick themselves. But what happens to all the other eggs?
Whooper eggs do best if incubated by cranes instead of mechanical incubators, at least in the early stages. Every time we remove an egg, we take it to the propagation building, carrying it in a rigid suitcase redesigned to be a portable incubator. The eggs are handled carefully, since jostling them can kill a fragile embryo, as can temperature extremes. At the propagation building, the eggs are weighed, measured, and examined to be sure they are not cracked and do not have weak shells. (Eggs with cracks or thin shells will have to receive special care if we hope to hatch a chick from them.) We give each egg an identification number based on the parent's pen location and the order in which the egg was laid. (For example, the 4th egg from the whooper pair who lives in the 12th pen in the Blue series will have as its ID number, B12 #4.) This number is written directly on the egg's shell with a lab marker. After this is done, we bring the egg back to the crane pens and place it under an incubating sandhill crane for the next 10 days.
Patuxent maintains a flock of Florida and greater sandhill cranes, both for incubating whooper eggs, and for providing non-endangered birds to use in studies. The Whooping Crane Eastern Partnership (WCEP) ultra light migration, for example, used sandhill cranes first, to work out the best techniques. Pairs of sandhill cranes are rated, based on previous years' breeding experience, on their incubation and parenting skills. Only the highest rated pairs are trusted with whooper eggs. Detailed charts are kept on each pair's breeding schedule -- when they laid their own eggs, and how long they've been incubating -- so the birds will be ready when we give them a whooper egg. The surrogate sandhills will incubate the whooper egg for 10 days. Both the male and the female will take turns caring for it. After 10 days, we'll remove it, take it back to the propagation building, and weigh and examine it again to see if it's fertile. Weighing it also tells us if the egg has lost too much weight. A fertile egg is a living thing. All fertile eggs lose weight as the chick inside grows and uses up the egg's material. However, excessive weight loss indicates the egg is dehydrating too quickly. We can often remedy problems like this, so the egg weight is critical. If the egg is fertile and healthy, Jane, the flock manager will check the charts to decide which pair of sandhills would be best to incubate the egg for the next ten days. After those twenty days, the whooper egg will be brought in again to make sure it is developing normally. At 20 days, it is safe to place the egg in a mechanical incubator for the last 10 days of incubation. Managing the care of whooper eggs means knowing what stage of incubation they're at, what condition they're in, and most importantly, where they are. At the height of the breeding season, there might be over 50 whooper eggs to keep track of, and over 100 sandhill eggs. Since all crane eggs look similar, proper identification of each individual egg and careful record keeping is critical. Even if we're in a hurry -- and in the breeding season, we're always in a hurry -- paper work must be done precisely and on time. WCEP news: All 5 whoopers are doing very well at the Chassahowitzka National Wildlife Refuge in Citrus County Florida. Read regular reports and see pictures of the birds at: Recent videos can be downloaded here: Patuxent Crane Videos --To see these videos, you will need to install the free Real Player application. Go to the Real Player link, above, and make sure you select Download Free Real Player. The .rm extension on the files indicates a RealVideo file. The rate with which you connect with our system can affect the quality of the video transmission. Low connectivity rates caused by noisy phone lines or heavy internet traffic may make the video hard to view. If that happens, try during a less busy time and the video may transmit better. Some systems may not have the appropriate hardware or internet connection to handle videos so we provide the still-photos on the left, that were taken directly from the videos. These photos show some of the scenes from the video, so users who cannot access the video can still experience the story. Whooping Crane Videos: See Report 10 for more info on pre-flight Click here to ask questions about Patuxent's whooping crane program.Whooping Crane Reports Hatch Day (Click on numbered links to view all other egg (negative numbers) and chick days).
<urn:uuid:6286b984-d762-4101-b251-17f887640972>
{ "date": "2014-03-08T20:16:52", "dump": "CC-MAIN-2014-10", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999662979/warc/CC-MAIN-20140305060742-00006-ip-10-183-142-35.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9435169100761414, "score": 3.3125, "token_count": 1564, "url": "https://www.pwrc.usgs.gov/whoopers/report16.htm" }
Here is a refresher on how to format your dissertation paper:
This has all the relevant information on it, such as the title of the dissertation, the writer’s name, the date that it is due, any names that are relevant, and what department it has come out of.
This is the part that says what you are covering in your dissertation. If you have a bunch of figures and tables that you are using, you would include that list to make them easier to find.
This is where you will list all the parts of your paper. This includes listing not only the parts of the paper, but also what pages those particular parts are on. For example, if you are writing a paper on education reform and mention the “No Child Left Behind Act,” then you would need to list that along with whatever page it is on.
This is where the writer acknowledges anyone that has helped him or her with the project, as well as thanking the committee for allowing them to present the topic they are covering.
For readers who want to read just a brief summary of what the dissertation covers, this is where they will find it.
You provide the background information in this section, explain why the topic is important, and answer the important question of why you set out to do this.
This is where a lot of sources will come into play. It takes a critical look at what you have chosen and goes deeper into it. Think of this section as the “I am knowledgeable” section.
This explains what method(s) you used in order to obtain the data for your research, why you chose those methods, and how they are credible.
Analysis: Just like in a research paper, this part takes the data that you collected and goes into a deeper explanation of what type of data it is.
Summary: You take all the data that you collected and turn it into something that people will be able to understand. That way people will understand why it is important and what they can do.
Conclusion: This part wraps everything up and asks the call-to-action questions and/or provides recommendations about why something should be looked into further, or why not.
References: This is the section where all of the sources that you used will be listed.
Glossary: If there is a term that someone might not understand, they will come to this section to find a definition.
<urn:uuid:19a8ef42-85fa-4867-9334-a4f25f553d60>
{ "date": "2017-02-26T16:52:27", "dump": "CC-MAIN-2017-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00504-ip-10-171-10-108.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9663812518119812, "score": 3.15625, "token_count": 562, "url": "http://www.forestresearchtools.com/prompts-on-effective-reviewing-and-refining-of-dissertation-format" }
European Exploration Timeline
Transcript of European Exploration Timeline
By: Sophia Antunes
1400-1700
1492: Christopher Columbus was the first European to reach North America, but believed he found a new sea route to Asia. He even called the Native Americans "Indians" because he thought he had reached India.
1522: Ferdinand Magellan's last remaining ship completed the journey of circumnavigating the world. Magellan was killed before the voyage was complete, and only 18 of the original crew members survived.
1534: Pizarro and his men had taken over the entire Inca Empire. The Spanish had an advantage over the Native Americans because of their guns, the diseases they brought with them unintentionally, and their steel swords. Horses also gave them an advantage because the Inca were intimidated by the horses, and the Spaniards were higher up on the horse to be able to kill enemies more easily.
1512: Juan Ponce de Leon discovered the coast of present-day Florida. He also searched for the Fountain of Youth, but failed to find it. Ponce de Leon was given permission to colonize Florida but failed to.
1541: Hernando de Soto was the first European to find the Mississippi River.
1638: The first Swedish settlement was established in North America. It was called Fort Christina and located along the Delaware River. However, in 1655, the governor of New Netherland conquered New Sweden, but allowed it to be called the "Swedish Nation."
1510: Spain made the selling of slaves legal in its colonies.
1588: The huge Spanish Armada was defeated by the small, faster fleet of English sea dogs. Spain was tired of English piracy, and the fact that the Queen of England was protestant.
1517: Martin Luther protested some of the practices of the Catholic Church. The church was becoming corrupt, so the Protestant Reformation began. The printing press helped spread word of the reformation throughout Europe, starting in Germany. Martin Luther posted the 95 Thesis, a document with 95 criticisms of the Church.
Early 1400's: Prince Henry the Navigator set up a school for navigation and an observatory. Although the Portuguese Prince never set sail himself, he helped improve methods of sailing and navigation. Caravel - A ship that had triangular sails and a large rudder to sail into the wind and turn more easily.
1520's (Through 1860's): About 12 Million Africans were shipped to the Americas as slaves between the 1520's and the 1860's, and more than 10 million survived the voyage. The journey across the Atlantic Ocean that slaves had to suffer through was called the Middle Passage. The spreading of slaves throughout the New World was known as the African Diaspora.
1609: The English sailor Henry Hudson was hired by the Dutch in search of the Northwest Passage.
Hudson never found the Northwest Passage, but he discovered present-day New York in 1609, and he discovered Hudson Bay.
1497 & 1498: John Cabot, an Italian sailor, made voyages to North America for England. Many people believe he traveled along the coast of Newfoundland, in present-day Canada. He had been in search of the Northwest Passage, or an all-water route through North America from the Atlantic Ocean to the Pacific, which many Europeans were in search of. However, he never found the Northwest Passage. Cabot's voyage.
<urn:uuid:847ecf41-0a70-409e-a03a-46491067ff75>
{ "date": "2017-02-20T23:58:30", "dump": "CC-MAIN-2017-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00276-ip-10-171-10-108.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9744804501533508, "score": 3.59375, "token_count": 841, "url": "https://prezi.com/bqfqh8s6bq9t/european-exploration-timeline/" }
In 2007, NASA began an agency wide initiative to replace its aging facilities with smaller, more efficient buildings. When agency leaders selected the Ames Research Center in Northern California for funding, officials at the center proposed a fairly traditional replacement facility. But when Ames Associate Director Steve Zornetzer saw the plans, he had a different vision: Make the building a showcase of NASA’s technological expertise and leadership in imagining the future. “It was inconceivable to me that in the 21st century, in the heart of Silicon Valley, NASA would be building a building that could have been built 25 years ago,” he said. “NASA had to build the highest-performing building in the federal government, embed NASA technology inside and make a statement to the public that NASA was giving back to the people of planet Earth what it had developed for advanced aerospace applications,” he said. But there was one catch: The redesigned building couldn’t cost more than the originally proposed project. The Sustainability Base, as the project became known, centered on four elements: - Make maximum use of the existing environment; - Employ advanced technologies to minimize energy consumption and maximize efficiency; - Install advanced monitoring and adaptive operational systems; - Create a living laboratory for research into advancing sustainability goals. Faced with a tight timeline and budgetary constraints, the architects and contractors chose design tools that allowed fast and effective communications among all involved. The design team relied on a Building Information Modeling process based on Autodesk Sustainability Solutions, which was integrated with other modeling tools. This facilitated communication across teams and aided in making design decisions quickly and accurately. The building's core design elements included a complex radial geometry, an innovative steel-frame exoskeleton, and numerous eco-friendly features, such as geothermal heat and cooling , natural ventilation, high-performance wastewater treatment, and photovoltaics on the roof. The resulting $26 million, 50,000-square-foot, two-story building houses 220 office workers, including scientists, managers, mission support personnel and financial specialists. The extensive floor-to-ceiling windows and open spaces fully embrace the natural daylight. With reduced demand for artificial light and the application of high-efficiency radiant heating/cooling systems, the building site produces more electricity than it uses and is on its way to reducing potable water consumption by up to 90 percent compared with a comparable traditional building. Read the full Sustainability Base case study here.
<urn:uuid:77df2d3d-43d4-443d-bdba-a2d218f2f558>
{ "date": "2015-07-29T22:41:28", "dump": "CC-MAIN-2015-32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00196-ip-10-236-191-2.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9479519724845886, "score": 3.09375, "token_count": 513, "url": "http://www.nextgov.com/emerging-tech/2013/04/nasa-establishes-sustainability-base-earth/62225/" }
Wind: 5 mph Contributed by Bob Brightbill With the millions and millions of cars that are sold in America every year, did you ever wonder who bought the very first one? And who made that car? No, it wasn't a Ford. It was a Winton and was bought by Robert Allison who lived in the small town of Port Carbon outside of Pottsville, PA. Alexander Winton owned a bicycle factory in Cleveland, Ohio. Robert Allison was 71 years old at the time and the car was purchased on April 1, 1898, for $1,000. He was a mechanical engineer who owned a company, Franklin Iron Works, where they made machinery for the mining companies right there in the anthracite coal region. Other cars were made at this time also. Haynes and Duryea made cars and though Haynes would claim to have sold a car at this time, there is no record of it. The car was a one-seater. There was no steering wheel but it was steered with a bar much like the tongue of a wagon. It was a one lunger, meaning it had one cylinder. The engine was in the rear. The wheels were like large bicycle wheels with 36 inches in the rear and 32 inches in the front. It had two gears – forward and reverse. It was called a horseless carriage. The brochure that came with the vehicle from Winton Motor Carriage Co. of Cleveland, Ohio, stated, among other things, that "this particular carriage has fully demonstrated its practicability and success in actual service over all kinds and conditions of roads – uphill and down – through mud, sand and snow at a speed of from 3 to 20 miles per hour." It arrived by rail freight in Pottsville. Many people came to see it. Several men tried to crank the car to get it started without success. Finally a large man gave it a try just as Mr. Allison's son found a switch under the seat and turned it on. The engine roared and the frightened man ran away. Mr. Allison drove it home the two miles to Port Carbon with only one incident. A team of horses bolted at the sight of it. Mr. Allison parked it in his stable. Sometime later he drove the car to Philadelphia which was about 90 miles away and the trip only took three days. He wanted to drive the car on a paved road where one could be found in a park there. Think about it. The roads were dirt and where did he get the gas? There were no gas stations. He bought gas in drug stores. When he arrived in Philadelphia, they would not let him drive in the park anyway. He had to coax several of his friends to go with him before one would. No, he didn't drive the car home. He had it shipped by rail. Mr. Allison purchased several cars after that and you might say that he was the first to trade in a car since Alexander Winton bought it back. It was in the Smithsonian Museum in Washington, DC, for many years and then loaned to an automobile museum in either Cleveland or Detroit. How do I know all of this? Robert Allison was my great-grandfather. Incidentally, my first car was a Model A Ford coupe which was older than I was and it had a rumble seat in the back. What's a rumble seat? Bob Brightbill lives in Waitsfield.
<urn:uuid:4f141050-7ae0-4835-b50e-00e7f7bc4e34>
{ "date": "2015-05-23T05:56:24", "dump": "CC-MAIN-2015-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927245.60/warc/CC-MAIN-20150521113207-00224-ip-10-180-206-219.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9924864768981934, "score": 2.8125, "token_count": 707, "url": "http://www.valleyreporter.com/index.php/en/news/news/8950-who-bought-the-first-car-sold-in-america" }
Actinopterygii (ray-finned fishes) > Perciformes (Perch-likes) > Epigonidae
Etymology: Epigonus: Greek, epi = over, in front + Greek, gonio = angle (Ref. 45335); ctenolepis: named for its ctenoid scales on the lateral line.
Environment: milieu / climate zone / depth range / distribution range
Marine; bathydemersal; depth range 100 - 1200 m (Ref. 559). Deep-water.
Distribution: Northwest Pacific: Kumanonada Sea, Japan.
Size / Weight / Age
Maturity: Lm ?  range ? - ? cm
Max length: 10.0 cm SL male/unsexed; (Ref. 559)
Morphology | Morphometrics
Soft rays: 9; Vertebrae: 25. Well-developed opercular spine present. Body is noticeably flattened laterally; its greatest width is 70-80 percent of greatest depth. Length of the second spine on the first dorsal fin is 23-26 percent of head length (Ref. 31632). Eighth rib below tenth vertebra absent. Ctenoid scales on lateral line; about 12-14 transverse scales below lateral line. Eyes large, oval; mouth oblique (Ref. 35777). A mesobenthic-pelagic species living mainly above the bottom (Ref. 31632). Found on or near continental slopes and seamounts.
Life cycle and mating behavior
Maturity | Reproduction | Spawning | Eggs | Fecundity | Larvae
Mochizuki, K. and K. Shirakihara, 1983. A new and a rare apogonid species of the genus Epigonus from Japan. Japan. J. Ichthyol. 30(3):199-207. (Ref. 35777)
IUCN Red List Status (Ref. 115185)
CITES (Ref. 115941)
Threat to humans
Estimates of some properties based on models
Phylogenetic diversity index (Ref. 82805) = 0.5000 [Uniqueness, from 0.5 = low to 2.0 = high].
Bayesian length-weight: a=0.00603 (0.00257 - 0.01414), b=3.12 (2.91 - 3.33), in cm Total Length, based on LWR estimates for this (Sub)family-body shape (Ref. 93245).
Trophic Level (Ref. 69278): 3.3 ±0.5 se; based on size and trophs of closest relatives.
Resilience (Ref. 69278): High, minimum population doubling time less than 15 months (Preliminary K or Fecundity).
Vulnerability (Ref. 59153): Low to moderate vulnerability (26 of 100).
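The Bayesian length-weight entry above follows the usual convention W = a * L^b, with L in cm total length and W in grams. As a quick, purely illustrative check (the 8 cm fish below is a made-up example, and the true estimate carries the wide credible intervals listed above):

    # Hypothetical worked example using the a and b values quoted in the record.
    a, b = 0.00603, 3.12
    length_cm = 8.0                   # assumed fish length, for illustration only
    weight_g = a * length_cm ** b     # W = a * L^b
    print(f"Estimated weight at {length_cm} cm: about {weight_g:.1f} g")  # roughly 4 g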
<urn:uuid:cd23de68-bb02-45f7-801f-50e31175d7d2>
{ "date": "2019-02-23T03:31:38", "dump": "CC-MAIN-2019-09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249434065.81/warc/CC-MAIN-20190223021219-20190223043219-00016.warc.gz", "int_score": 3, "language": "en", "language_score": 0.6790969371795654, "score": 3.25, "token_count": 675, "url": "http://fishbase.us/summary/SpeciesSummary.php?id=23460" }
As the name suggests, biology articles are written on biology. Biology is the branch of science which deals with living organisms such as human beings, animals, and plants. In biology we study different things, such as the nature and mechanisms of living things. Biology is traditionally divided into two branches: zoology, which covers the study of animals, and botany, which covers the study of plants. Biology is a very important subject because it provides knowledge of living things. It also accounts for living organisms that we cannot see with the naked eye. It also describes the beautiful creations of nature, such as colorful, blooming flowers and the different types of trees from which wood can be taken to make furniture.
As mentioned above, biology articles are written on biological subjects. These articles are very helpful for building expertise about the natural world. Some of them are written after thorough research, which is why they contain a wide range of information. Some biology articles are written on small organisms, some on wild animals, some on plants and human beings, and some on wildlife. These articles can be found on the internet, so it is much easier for a person to gather more and more information about living creatures using a personal computer or laptop without wasting time and money.
The topics on which biology articles are written include a wide variety: for example, photosynthesis (the process by which plants make their own food); the digestive, circulatory, and respiratory systems of human beings; different types of animals, such as herbivores (which eat plants), carnivores (which eat other animals), and omnivores (which eat both plants and animals); and histological topics, in which we study how tissues and organs are formed. Biology articles cover further topics such as disease-causing microorganisms and harmful and useful microorganisms. One of the main topics is genetics, in which we study how a baby is born, why and how it resembles its parents, the structure of DNA, and the evolution of animals.
Biology articles are also about new research done in various areas of biology, and they describe new technologies used in that research. These articles provide information about extinct species and also those species which are going extinct. They also tell us how to preserve those species in zoos, and how to cultivate those animals whose demand is increasing day by day.
In the end, it can be said that biology articles are the best source for a person to learn about biology. Some writers write these articles so well that readers can easily understand the material and the concepts discussed under the topic. They can provide better guidance to people.
<urn:uuid:5b916118-5afb-41f3-a288-be1412f573cd>
{ "date": "2018-11-18T22:41:45", "dump": "CC-MAIN-2018-47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744750.80/warc/CC-MAIN-20181118221818-20181119003818-00536.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9666212201118469, "score": 2.625, "token_count": 568, "url": "https://www.duplichecker.com/blog/biology-articles.php" }
Learn how to control your breathing. Hyperventilation brings on many sensations (such as lightheadedness and tightness of the chest) that occur during a panic attack. Deep breathing, on the other hand, can relieve the symptoms of panic. By learning to control your breathing, you can calm yourself down when you begin to feel anxious. And if you know how to control your breathing, you’re also less likely to create the very sensations that you’re afraid of. It's important to note that everyone feels anxiety to some degree regularly throughout their life - fear and anxiety are adaptive and helpful emotions that can function to help us notice danger or threat, keep us safe, and help us adapt to the environment. Anxiety disorders represent states when fear or anxiety becomes severe or extreme, to the extent that it causes an individual significant distress, or impairs their ability to function in important facets of life such as work, school, or relationships. It is also important that risk factors don't at all imply that anxiety is anyone's fault; anxiety disorders are a very common difficulty that people experience. In this section, we will review risk factors for anxiety disorders. There are many potential risk factors for anxiety disorders, and most people likely experience multiple different combinations of risk factors, such as neurobiological factors, genetic markers, environmental factors, and life experiences. However, we do not yet fully understand what causes some people to have anxiety disorders. “One day, without any warning or reason, a feeling of terrible anxiety came crashing down on me. I felt like I couldn’t get enough air, no matter how hard I breathed. My heart was pounding out of my chest, and I thought I might die. I was sweating and felt dizzy. I felt like I had no control over these feelings and like I was drowning and couldn’t think straight. For more information, please visit Mental Health Medications Health Topic webpage. Please note that any information on this website regarding medications is provided for educational purposes only and may be outdated. Diagnosis and treatment decisions should be made in consultation with your doctor. Information about medications changes frequently. Please visit the U.S. Food and Drug Administration website for the latest information on warnings, patient medication guides, or newly approved medications. I think I had an anxiety attack the other day, but I’m not sure. I was at the movies and felt scared, like something or someone was going to attack me. I drove home and felt like I was scared of the dark and was having trouble breathing and focusing on driving. After dropping off my bf and driving home, I started crying and hyperventilating, and felt detached from the world, like nothing mattered, and felt like I was going to die. It took me two hours to fall asleep and I had nightmares. The episode was over by morning, but I’m concerned that it will happen again. Guided imagery is another relaxation strategy that can help reduce or prevent overwhelming anxiety. Guided imagery involves directed mental visualization to evoke relaxation. This could involve imagining your favorite beach or a peaceful garden that can distract you from your anxious state and allow your mind and body to focus on the positive thoughts and sensations of the imagery exercise. Women are more than two times as likely as men to be diagnosed with an anxiety disorder. 
(6) It’s not clear why this is the case, but researchers have theorized that it may be due to a combination of social and biological factors. Scientists are still investigating the complex role that sex plays in brain chemistry, but some research suggests that in women, the amygdala, which is the part of the brain responsible for processing potential threats, may be more sensitive to negative stimuli and may hold on to the memory of it longer. (7)

Your brain focuses on some alleged threat, for instance, a very scary thought that was floating somewhere in your subconscious. Your thalamus – the part of the brain responsible for regulating consciousness, sleep and alertness – transfers that information to your amygdala – the part of the brain responsible for emotional reactions, decision-making and memory – which marks it as “danger” and sends a signal to your sympathetic nervous system, activating the fight-or-flight response.

Everyone here has issues, but what happens when you’re blue as hell and CANNOT figure out the source of the problem? There is no quote, no book, no video, no saying or phrase, no motto that is helping me right now. I feel like absolute total HELL. And I damned well know it’s not going to last, and that it’s probably a result of thinking too hard, too long, too deeply. Anyway, thank you all for sharing your pain with strangers. It shows that you’re way stronger than you think.

Guys, I am 23 and this might sound very stupid, but I recently broke up with my boyfriend of 7 months (yes, quite a short time to be experiencing anxiety issues, but yes...). One fine day he just comes over and says it’s done between us: “I have fallen out of love and that’s why I can’t pretend to be with you.” It happened on the 17th of this month, i.e. 17th July, and for over a week I couldn’t sleep or eat and I was nauseous, and I am still in a bad state. I am forcing myself to sleep and to not think about it, but my attacks start early in the morning and I get suffocated and want to just run out of the space. I get urges to call him, speak to him, tell him how much I love him and miss him, but it’s all like I am speaking to a wall. And I don’t trouble my parents with this problem. Should I visit a counsellor or should I give myself some time to heal?

We all experience anxiety. For example, speaking in front of a group can make us anxious, but that anxiety also motivates us to prepare and practice. Driving in heavy traffic is another common source of anxiety, but it helps keep us alert and cautious to avoid accidents. However, when feelings of intense fear and distress become overwhelming and prevent us from doing everyday activities, an anxiety disorder may be the cause.

Anxiety is a normal reaction to stress and can be beneficial in some situations. It can alert us to dangers and help us prepare and pay attention. Anxiety disorders differ from normal feelings of nervousness or anxiousness, and involve excessive fear or anxiety. Anxiety disorders are the most common of mental disorders and affect nearly 30 percent of adults at some point in their lives. But anxiety disorders are treatable and a number of effective treatments are available. Treatment helps most people lead normal productive lives.

We all tend to avoid certain things or situations that make us uncomfortable or even fearful. But for someone with a phobia, certain places, events or objects create powerful reactions of strong, irrational fear.
Most people with specific phobias have several things that can trigger those reactions; to avoid panic, they will work hard to avoid their triggers. Depending on the type and number of triggers, attempts to control fear can take over a person’s life. Relaxation strategies, such as deep diaphragmatic breathing, have been shown to lower blood pressure, slow heart rate, and reduce tension that is commonly associated with stress. Engaging in relaxation strategies regularly can equip you to reduce anxiety when it occurs, by allowing your body to switch from its anxious state to a more relaxed and calm state in response to stressors. So, if anxiety has so many negative effects, why is it relatively common? Many scientists who study anxiety disorders believe that many of the symptoms of anxiety (e.g., being easily startled, worrying about having enough resources) helped humans survive under harsh and dangerous conditions. For instance, being afraid of a snake and having a "fight or flight" response is most likely a good idea! It can keep you from being injured or even killed. When humans lived in hunter-gatherer societies and couldn't pick up their next meal at a grocery store or drive-through, it was useful to worry about where the next meal, or food for the winter, would come from. Similarly avoiding an area because you know there might be a bear would keep you alive —worry can serve to motivate behaviors that help you survive. But in modern society, these anxiety-related responses often occur in response to events or concerns that are not linked to survival. For example, seeing a bear in the zoo does not put you at any physical risk, and how well-liked you are at work does not impact your health or safety. In short, most experts believe that anxiety works by taking responses that are appropriate when there are real risks to your physical wellbeing (e.g., a predator or a gun), and then activating those responses when there is no imminent physical risk (e.g., when you are safe at home or work). A licensed mental health professional that has earned a Master’s degree from a variety of educational backgrounds (e.g. general counseling background, social work, marriage and family counseling). Once their formal education is completed, these clinicians are supervised in the field 1-2 years and pass a State exam to become fully licensed in the state in which they practice. These mental health professionals are licensed to diagnose emotional, mental health and behavioral health problems. They can provide mental health treatment in the form of counseling and psychotherapy, or work in other capacities as patient advocates or care managers. Licensed Master’s level clinicians work in many settings, including hospitals, community mental health clinics, private practice, school settings, nursing homes, and other social service agencies. Titles and licensing requirements may vary from state to state. Repeated and persistent thoughts ("obsessions") that typically cause distress and that an individual attempts to alleviate by repeatedly performing specific actions ("compulsions"). Examples of common obsessions include: fear that failing to do things in a particular way will result in harm to self or others, extreme anxiety about being dirty or contaminated by germs, concern about forgetting to do something important that may result in bad outcomes, or obsessions around exactness or symmetry. 
Examples of common compulsions include: checking (e.g., that the door is locked or for an error), counting or ordering (e.g., money or household items), and performing a mental action (e.g., praying). Although anxiety is often accompanied by physical symptoms, such as a racing heart or knots in your stomach, what differentiates a panic attack from other anxiety symptoms is the intensity and duration of the symptoms. Panic attacks typically reach their peak level of intensity in 10 minutes or less and then begin to subside. Due to the intensity of the symptoms and their tendency to mimic those of heart disease, thyroid problems, breathing disorders, and other illnesses, people with panic disorder often make many visits to emergency rooms or doctors' offices, convinced they have a life-threatening issue. People often fear the worst when they're having an anxiety attack. Most of the time, there’s no underlying physical problem, such as a real heart attack. But you should get the medical all clear if you have repeat anxiety attacks, just to be sure you don’t need additional treatment. Then find a cognitive behavioral therapist with experience treating anxiety to help you through. Simple Phobias and Agoraphobia: People with panic disorder often develop irrational fears of specific events or situations that they associate with the possibility of having a panic attack. Fear of heights and fear of crossing bridges are examples of simple phobias. As the frequency of panic attacks increases, the person often begins to avoid situations in which they fear another attack can occur or places where help would not be immediately available. This avoidance may eventually develop into agoraphobia, an inability to go beyond known and safe surroundings because of intense fear and anxiety. Generally, these fears can be resolved through repeated exposure to the dreaded situations, while practicing specific techniques to become less sensitive to them. Dr. Roxanne Dryden-Edwards is an adult, child, and adolescent psychiatrist. She is a former Chair of the Committee on Developmental Disabilities for the American Psychiatric Association, Assistant Professor of Psychiatry at Johns Hopkins Hospital in Baltimore, Maryland, and Medical Director of the National Center for Children and Families in Bethesda, Maryland. An evolutionary psychology explanation is that increased anxiety serves the purpose of increased vigilance regarding potential threats in the environment as well as increased tendency to take proactive actions regarding such possible threats. This may cause false positive reactions but an individual suffering from anxiety may also avoid real threats. This may explain why anxious people are less likely to die due to accidents. Panic attacks and panic disorder are not the same thing. Panic disorder involves recurrent panic attacks along with constant fears about having future attacks and, often, avoiding situations that may trigger or remind someone of previous attacks. Not all panic attacks are caused by panic disorder; other conditions may trigger a panic attack. They might include: SSRIs and SNRIs are commonly used to treat depression, but they are also helpful for the symptoms of panic disorder. They may take several weeks to start working. These medications may also cause side-effects, such as headaches, nausea, or difficulty sleeping. These side effects are usually not severe for most people, especially if the dose starts off low and is increased slowly over time. 
Talk to your doctor about any side effects that you have. A panic attack is a sudden rush of fear and anxiety that seems to come out of nowhere and causes both physical and psychological symptoms. The level of fear experienced is unrealistic and completely out of proportion to the events or circumstances that trigger a panic attack. Anyone can have a single panic attack, but frequent and ongoing episodes may be a sign of a panic or anxiety disorder that requires treatment. Cognitive behavioral therapy (CBT), is based on the idea that our thoughts cause our feelings and behaviors, not external things, like people, situations, and events. According to the National Association of Cognitive Behavioral Therapists the benefit of this therapy is that we can change the way we think to feel and act better even if the situation does not change. CBT focuses on determining the thought and behavior patterns responsible for sustaining or causing the panic attacks. CBT is a time-limited process (treatment goals—and the number of sessions expected to achieve them—are established at the start) that employs a variety of cognitive and behavioral techniques to affect change. While conducting research for this article, we encountered more than a dozen mental health professionals who mistakenly believed the terms “anxiety attack” and “panic attack” were synonymous. They were licensed professionals, but none of them had a specialty in anxiety. Because “anxiety attack” is not a clinical term, they assumed it was a synonym for “panic attack.” This caused them to use the terms interchangeably. Those who experience anxiety attack disorder are not alone. It’s estimated that 19 percent of the North American adult population (ages 18 to 54) experiences an anxiety disorder, and 3 percent of the North American adult population experiences anxiety attack disorder. We believe that number is much higher, since many conditions go undiagnosed and unreported. The above statements are two examples of what a panic attack might feel like. Panic attacks may be symptoms of an anxiety disorder. Historically, panic has been described in ancient civilizations, as with the reaction of the subjects of Ramses II to his death in 1213 BC in Egypt, and in Greek mythology as the reaction that people had to seeing Pan, the half man, half goat god of flocks and shepherds. In medieval then Renaissance Europe, severe anxiety was grouped with depression in descriptions of what was then called melancholia. During the 19th century, panic symptoms began to be described as neurosis, and eventually the word panic began being used in psychiatry. In fact, some studies have suggested that people with chronic anxiety disorders have an increased prevalence of CAD—that is, chronic anxiety may be a risk factor for CAD. So doctors should not be too quick to simply write the chest pain off as being “simply” due to anxiety. They should at least entertain the possibility that both disorders may be present and should do an appropriate evaluation. If you believe you are suffering from Generalized Anxiety Disorder, your doctor will perform a variety of physical exams as well as mental health checks. You might first go to your doctor complaining of constant headaches and trouble sleeping. After he or she rules out any underlying medical conditions that are causing your physical symptoms, s/he may refer you to a mental health specialist for further diagnosis. 
Your mental health specialist will ask you a series of psychological questions to get a better understanding of your condition. To be clinically diagnosed with Generalized Anxiety Disorder, your doctor and/or mental health provider will assess the length of time you have been suffering from excessive worry and anxiety, your difficulty in controlling your anxiety, how your anxiety interferes with your daily life, and if you are experiencing fatigue, restlessness, irritability, muscle tension, sleep problems, and difficulty concentrating. The typical course of panic disorder begins in adolescence and peaks in early to mid-twenties, with symptoms rarely present in children under the age of 14 or in older adults over the age of 64 (Kessler et al., 2012). Caregivers can look for symptoms of panic attacks in adolescents, followed by notable changes in their behavior (e.g., avoiding experiencing strong physical sensations), to help potentially identify the onset of panic disorder. Panic disorder is most likely to develop between the ages of 20-24 years and although females are more likely to have panic disorder, there are no significant sex differences in how the disorder presents (McLean et al., 2011). I felt pretty much like a anxiety attack today and I felt like nausea, puked literally green fluid. And then after a while felt relieved. Suddenly felt like nausea and was burping real bad and then I go to the toilet and then sat on the floor and thank god I had two of my besties at home to support me holding my hands and asked me to calm down. Since it clicked me that something is getting extra in my body I started breathing fast and then kept saying “I am strong” and came out to my bedroom and started working out jumping like crazy for almost 5 minutes and then all the shivering went away. Finally I vomited once again and then after reaching hospital and getting intravenous injection I felt relieved. Just to make sure nothing is really wrong I went to visit a general physician and he gave me meds and suggested looking at my fear for a sonography. Turns out I need to relax. Beta Blockers, also known as beta-adrenergic blocking agents, work by blocking the neurotransmitter epinephrine (adrenaline). Blocking adrenaline slows down and reduces the force of heart muscle contraction resulting in decreased blood pressure. Beta blockers also increase the diameter of blood vessels resulting in increased blood flow. Historically, beta blockers have been prescribed to treat the somatic symptoms of anxiety (heart rate and tremors) but they are not very effective at treating the generalized anxiety, panic attacks or phobias. Lopressor and Inderal are some of the brand names with which you might be familiar. Family Therapy is a type of group therapy that includes the patient's family to help them improve communication and develop better skills for solving conflicts. This therapy is useful if the family is contributing to the patient's anxiety. During this short-term therapy, the patient's family learns how not to make the anxiety symptoms worse and to better understand the patient. The length of treatment varies depending on the severity of symptoms. Foster the development of a strong peer network. It's probably no surprise to hear that peer relationships become a major source of support during adolescence. Encourage your child to engage in interests (like arts, music, and sports) that will help them develop and maintain friendships. 
If your child already has a very busy and structured schedule, try to carve out some time for more relaxed socializing. However, note that sometimes peers can be the source of anxiety, whether through peer pressure or bullying. Check in with your child about the nature of their relationships with others in their social circle (school or class). Please Note: In some cases, children, teenagers, and young adults under 25 may experience an increase in suicidal thoughts or behavior when taking antidepressant medications, especially in the first few weeks after starting or when the dose is changed. Because of this, patients of all ages taking antidepressants should be watched closely, especially during the first few weeks of treatment.
<urn:uuid:a9be478d-9726-4495-84bd-b0de43471cb3>
{ "date": "2019-03-20T19:07:46", "dump": "CC-MAIN-2019-13", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.86/warc/CC-MAIN-20190320190324-20190320212054-00061.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9626683592796326, "score": 2.59375, "token_count": 4240, "url": "http://panic-attackrelief.com/anxiety-without-depression-anxiety-likert-scale.html" }
Richard Louv's best-selling book Last Child in the Woods brought national attention to "nature deficit disorder." In brief, children are spending less time outdoors, and particularly less time in unstructured interaction with nature. At the Northern Virginia Regional Park Authority we have been working to engage children with nature through the following actions:

- NVRPA was one of the first park agencies to sign onto No Child Left Inside, a coalition that advocates for outdoor/environmental education.
- In 2009 we initiated a program that allows area youth to volunteer some of their time and effort in our parks in exchange for access to park facilities that have fees associated with them. This program was intended to reduce potential barriers that some youth might have to using facilities like waterparks, and hopefully to provide some insight into the fields of park management and maintenance.
- In 2009 we renovated the Nature Center at Potomac Overlook to enhance its appeal. It is now the only nature center we know of that is focused on energy: where it comes from, how it is used by people and the natural world, and what the impacts of its use are.
- For the last several years we have had a roving naturalist program during our peak months. This program brings nature education to thousands, whether at a waterpark, a campground, or a special event. In terms of reaching the largest number of people with environmental education, this is our most effective program.
- With a generous donation from a long-time park supporter, we are embarking on building a children’s garden at Meadowlark Botanical Gardens that will initially focus on Native Americans and early colonial settlers, mixing fun, imagination, and historical and environmental education.

In the end, the issue of children spending less time outdoors is less a child issue and more a parent issue. As parents we need to look for opportunities to get our children outdoors and engaged with nature. If parents would make a New Year's resolution to take their child for a walk (hike) in the woods this year, it would be a great start. Walking along surrounded by nature is a great time to bond and to have the kind of conversation about school and life in general that is hard to have during the hustle and bustle of daily life.
<urn:uuid:dcfaaed3-6b14-4abf-bb6b-c4d215c79294>
{ "date": "2018-12-11T20:58:30", "dump": "CC-MAIN-2018-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823702.46/warc/CC-MAIN-20181211194359-20181211215859-00136.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9674822688102722, "score": 2.953125, "token_count": 460, "url": "http://regionalparks.blogspot.com/2009/12/richard-louvs-best-selling-book-last.html" }
See also the BCCM Activity report 2009-2012

The BCCM patrimony

At the end of 2012, the total holdings of the BCCM consortium numbered about 173,000 different organisms. According to their status in the collection, these organisms can be categorised into four types of deposits of biological material:

- Public deposits: (micro)biological material that has been deposited in the collection and that may be catalogued and distributed;
- Safe deposits: (micro)biological material that is deposited in the collection for back-up safety reasons; it remains the exclusive property of the depositor, and it will not be catalogued or distributed;
- Patent deposits: (micro)biological material that is deposited under Belgian patent legislation or the international Budapest Treaty;
- Research deposits: (micro)biological material that is or has been the subject of scientific research projects and that is not publicly available, as well as (micro)biological material for which a provisional restriction regarding cataloguing and/or distribution is applicable.

Total number of organisms: 173,000

Figure 1: Relative importance for the consortium of the different deposit types.

The BCCM holdings consist of about 37% public deposits and 63% research deposits. Due to their specific nature, the safe and patent deposits represent only a small part of the holdings. The large share of research deposits illustrates the growth potential of the public collections, as these deposits often consist of organisms that are kept as research objects. The BCCM curators will gradually transfer some of these organisms to the public collection.

The BCCM public collections

The BCCM public collections contain about 64,000 different organisms, distributed over fungi, bacteria, yeasts, plasmids, diatoms and DNA libraries. These organisms are fully documented and readily available for distribution.

Total number of organisms in the public BCCM collections: 64,000

Figure 2: Relative importance of the different organism types in the public BCCM collections. DNA libraries (0.03%) are not visible in the graph.

Other scientific services

The BCCM consortium offers a range of services based on its scientific expertise in the fields of molecular biology and microbiology. The most important service is the identification of microorganisms, followed by testing and microbial count services.

Figure 7: Relative importance of the different services carried out by the BCCM consortium. The category "other services" contains about 15 other types of service, each of which contributes less than 2% to the total revenues of the scientific services.

In the past four years the BCCM collections have been involved as a partner in 26 research projects with external funding. These projects add value to the collections by expanding the holdings with new interesting material, further describing and characterising the biological material that is already available, contributing to taxonomic or phylogenetic studies, or developing new or optimised preservation methods. Since the BCCM consortium has no legal status yet, the contracts related to these projects are managed by the BCCM host institutes.

The BCCM budget

The BCCM consortium is funded by the Belgian Science Policy Office (Belspo) under an annual recurrent funding system. The income that the collections generate from their services and research projects constitutes an additional source of funding.
In the past four years, Belspo provided 78% of the total BCCM budget, while services and research projects generated, respectively, 10% and 12% of the total BCCM budget. Figure 8: Relative importance of sources of funding for the BCCM consortium
<urn:uuid:a4c89858-7480-40c1-8d8d-63245dcda3cd>
{ "date": "2017-06-25T05:23:18", "dump": "CC-MAIN-2017-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320438.53/warc/CC-MAIN-20170625050430-20170625070430-00577.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9164958596229553, "score": 2.625, "token_count": 716, "url": "http://bccm.belspo.be/about-us/facts-figures" }
This paper provides information about online safety for service providers and other professionals who work with families and children. It will help professionals to provide support to families and to discuss ways to keep children and young people safe online. Relevant resources are included to share with parents and carers. For specific information on cyberbullying, see Parental Involvement in Preventing and Responding to Cyberbullying. Information technology is now used in virtually every home in Australia. Ninety-seven per cent of households with children aged under 15 years have access to the internet, with an average number of seven devices per household. Ninety-nine per cent of young people aged 15–17 years are online, making this age group the highest users. They spend an average of 18 hours per week online (Australian Bureau of Statistics [ABS], 2016). Social networking, entertainment and educational activities are the most popular activities online for children and young people, and there can be many positive outcomes of this use. Young people are increasingly exposed to an open and collaborative online culture, which allows them to access information, maintain friendships and relationships with family, and create and share content (Collin, Rahilly, Richardson, & Third, 2011). However, children and young people are at a dynamic stage of development in which risk-taking behaviours and emerging decision-making can lead to negative outcomes (Viner, 2005). As a result, parents need to remain actively involved and vigilant regarding the nature of their children’s online activities, and to continue to communicate and negotiate with children and young people about their use of technology. Parental involvement in the safe use of technology should start from a child's first use, and parents continue to be a critical influence in children and young people being responsible digital citizens and engaging in online activities safely. What is online safety and why is it important? Online safety is often used interchangeably with terms such as internet safety, cybersafety, internet security, online security and cyber security, although these terms can relate to different aspects of online engagement. For example, the risk of using computers, mobile phones and other electronic devices to access the internet and social media is that breaches of privacy may lead to fraud, identity theft and unauthorised access to personal information. Other risks for children and young people include image-based abuse, cyberbullying, stalking and exposure to unreliable information or illicit materials. Criminal offenders are highly skilled at exploiting new modes of communication to gain access to children and young people, and children and young people can easily access adults-only material if there are no protective mechanisms in place (Queensland Police, 2014). These situations can place a child or young person's emotional and physical wellbeing at risk. This is particularly the case where little or no attention has been paid to monitoring use, communicating with children or young people about use or securing the device being used. In these cases, and for the purpose of this paper, online safety is a child protection issue. While online safety is important for protecting children and young people from dangerous and inappropriate websites and materials, this does not mean that parents should discourage their children from using digital technology. 
The challenge is to help children and young people enjoy the benefits of going online while having the skills and knowledge to identify and avoid the risks.

Office of the eSafety Commissioner

The Office of the eSafety Commissioner (the Office) is an independent statutory office created by the Enhancing Online Safety for Children Act 2015. The Office was established in 2015 to coordinate and lead online safety efforts across government, industry and the not-for-profit community. The Office operates a world-first reporting scheme to deal with serious cyberbullying that affects Australian children. There is also a reporting function for Australians who come across illegal content online, and the Office is taking the lead on tackling image-based abuse through an online portal and reporting tool.

The significance of being 13 years old

As part of their privacy policies, social networking sites such as Facebook, Twitter, Instagram and YouTube specify that users must be at least 13 years old. Parents may be unaware of this requirement. The minimum age stipulations are based on the requirements of the US Congress as set out in the Children's Online Privacy Protection Act. The act specifies that website operators must gain verifiable parental consent prior to collecting any personal information from a child younger than 13 years old (O'Keeffe et al., 2011). Many social networking sites avoid this requirement by setting a minimum age of use at 13 years old, but there is no onus on website operators to verify the age of users.

Practical tips for parents to help children and young people use the internet

The following tips will help parents provide support and guidance for children and young people as they engage in online activities.

Monitoring and supervision

Monitoring a young person's online activities includes checking that websites are appropriate for a child's use and keeping an eye on the screen. If parents are willing to provide children and young people with access to mobile phones and computers, then a responsibility to understand, model appropriate behaviour and communicate the basics of good digital citizenship should come with that access. Advice on monitoring often focuses on keeping the device in a shared family area, yet in the age of wireless connections and internet-enabled smartphones this is increasingly difficult. Similarly, young people may control their own online details, such as passwords and web browser histories. Parents can address these difficulties in the following ways:

- Develop a plan about internet use in partnership with family members. This can include:
  - details of appropriate online topics;
  - privacy setting checks;
  - physical locations for internet use and parental monitoring (looking over the shoulder or line-of-sight supervision);
  - limits on screen time;
  - limits on when wireless internet connections and/or mobile devices will be available; and
  - what may be identified as inappropriate posts on online profiles.
- An internet-use agreement may be useful to develop with older children. Many schools have internet-use agreements that can be replicated, and Queensland Police have produced an example.
- Take an active role in discussing the benefits of online activities with children and young people, and what strategies they may use to respond to cyberbullying, other negative online behaviours or unintentional access to adult content.
Discussions can include how these rules apply wherever they are online, including at home in their bedroom and when they are outside the home, for example at a friend's place. Parents can be encouraged to: - Find out whether their child's school has an internet policy and how online safety is maintained. Inquiries should focus on the strategies used to educate children and young people about online safety and cyberbullying, whether parents are involved in cyberbullying initiatives and developing cyberbullying policies. - Point out to children and young people that some websites on the internet are for adults only and are not intended for children or young people to see. Discuss what strategies a young person might adopt if they access this content. - Use a family-friendly internet service provider (ISP) that provides proven online safety protocols. Filtering tools should not be solely relied on as a solution. Open discussion and communication with young people about monitoring and supervision is needed. - Empower children and young people to use the internet safely by mutually exploring safe sites and explaining why they are safe. It's also important to educate children and young people on why it's not safe to give out any personal details online. Engagement and communication Parents can be encouraged to: - Discuss with their children how they may recognise the difference between online information that is helpful or unhelpful, true or false, useful or not useful. For example, government or education websites may contain more accurate information than opinions that are posted on an unfamiliar person's blog. - Increase their own knowledge and become more adept at engaging in online activities and exploring social networking sites that are being used by their children. Learning alongside children and young people can be an effective way to achieve this—parents can be encouraged to let their children be the experts and help them to understand the tools they are using online. - Focus on the positive aspects of the internet—spend time looking together at sites that are fun, interesting or educational. Find sites together that are age and stage appropriate for their children. - Encourage their child to question things on the internet. When looking at a new site, their child could ask questions such as, "Who is in charge of this site?", "Have I found information or is it just opinion?" or "Is this site trying to influence me or sell me something?" If you have found any material online that you believe is prohibited or inappropriate, you should contact the eSafety Hotline. For further information, go to the Office of the eSafety Commissioner where a range of resources is available for parents and caregivers. Sources: O'Keeffe, Clarke-Pearson, & Council on Communications and Media, 2011; Raising Children Network, 2011; Robinson, 2012. Resources and campaigns A number of education and awareness campaigns promoting online safety target children, young people and parents. Campaigns are most effective when they combine information with training and skill acquisition. Websites, leaflets and other information-only resources may have a limited effect when delivered in isolation. Information provided through interactive training programs, online quizzes, video games and formal curriculum assessment are more likely to translate to more secure conduct online (Connolly, Maurushat, Vaile, & van Dijk, 2011). 
For this reason, parents are encouraged to facilitate their children's engagement with age-appropriate interactive learning materials related to online safety. There are many online safety resources available. The following is a selection of these, including campaigns that provide targeted and interactive online learning opportunities for children, young people and parents. Be Deadly Online is an animation and poster campaign about online issues such as bullying, reputation and respect for others. It was developed with Indigenous writers and voice actors for Australians. There are resources for children and young people, as well as schools and communities. This resource is a hub of information about e-safety issues, including how to protect yourself and your personal information, where and how to report risky online behaviour, cyberbullying and how to stay safe online. This resource sheet provides information about safety and good practice when images of children and young people are displayed online. It contains information about legal issues and privacy laws, classifications of online images, good practices and emerging issues around images, and lodging a complaint about a website. It also has links to additional resources. Raising Children Network—Pre-teens/teenagers entertainment and technology The Raising Children Network provides information on common concerns such as cyberbullying, sexting and access to pornography, as well as practical advice for keeping pre-teens and teens safe online. This site provides information to help parents understand why young people use technology, the risks associated with being online, problems to look out for and ways to help their children use technology safely. A separate tab provides a series of practical tips on what parents can do to help young people manage technology use in a safe and balanced way. The following is a selection of Australian websites that focus on different aspects of online content and online safety that may also be useful. This guide assists Australian internet users to understand Australia's co-regulatory framework for online content and the legal obligations of internet service providers and internet content hosts. The Communications Alliance is a non-profit, private sector industry body that (among other things) develops best practice rules for the industry in Australia in conjunction with the Australian Communications and Media Authority. A collection of Australian Government sites with initiatives and resources focused on protecting Australian internet users. This paper from the Child Family Community Australia information exchange outlines definitions and statistics related to cyberbullying. It explores the differences between cyberbullying and offline bullying, and parents' roles and involvement in preventing and responding to cyberbullying incidents. The aim is to inform practitioners and professionals of ways to help parents clarify their roles, and to provide parents with the tools to help their teenaged children engage in responsible online behaviour. The School A to Z website provides practical help for parents about keeping kids safe online. It includes Ten Cybersafety Tips Every Parent Should Know, and information from experts about cybersafety. Useful information is also provided for parents of children who are bullied. This website provides advice for schools on cybersafety and the responsible use of digital technologies. 
It covers a range of topics including bullying, cybersafety strategies, and practical steps and actions relating to online incidents. This website is a one-stop shop for Australian internet users, providing information on the simple steps they can take to protect their personal and financial information online. The site has informative videos, quizzes and a free alert service that provides information on the latest threats and vulnerabilities. Tagged is short film for young people about a group of high-school friends who experience first hand the life consequences caused by cyberbullying, sexting and a negative digital reputation. Tagged has received acclaim for its realistic depiction of teenagers and the problems they can face in a digital world. Since its launch in September 2011, Tagged has become a popular resource for Australian teachers and parents and has attracted more than 645,000 views on YouTube. ThinkUKnow is an internet safety program delivering interactive training to Australian parents, carers and teachers. Created by the UK Child Exploitation and Online Protection (CEOP) Centre, ThinkUKnow Australia has been developed by the Australian Federal Police (AFP) and Microsoft Australia. Users will need to subscribe to the site to gain access to its tools and resources. Published by the Queensland Police Service's Task Force, Argos, this brochure provides information for parents on internet safety for children and young people. It discusses social networking, mobile phones, webcams and online gaming, and it provides information about the types of things to look out for that may indicate children could be at risk. Some of the more popular social networking sites provide information specifically tailored to help parents understand their child's use of the site. For example: - Facebook: Help Your Teens Play it Safe - Instagram: Tips for Parents - Snapchat: Safety Centre - Twitter: Safety on Twitter - YouTube: Policies, Safety and Reporting - Australian Bureau of Statistics (ABS). (2016). Household use of information technology, Australia, 2014–15 . Canberra: ABS. Retrieved from <www.abs.gov.au/ausstats/[email protected]/mf/8146.0>. - Collin, P., Rahilly, K., Richardson, I., & Third, A. (2011). The benefits of social networking services: A literature review. Melbourne: Cooperative Research Centre for Young People, Technology and Wellbeing . - Connolly, C., Maurushat, A., Vaile, D., & van Dijk, P. (2011). An overview of international cyber-security awareness raising and educational initiatives . Research report commissioned by the Australian Communications and Media Authority. Melbourne: Commonwealth of Australia/ACMA. - O'Keeffe, G. S., Clarke-Pearson, K., & Council on Communications and Media. (2011). Clinical report: The impact of social media on children, adolescents, and families. Pediatrics, 127 (4), 800–804. Retrieved from <pediatrics.aappublications.org/content/127/4/800.short>. - Queensland Police. (2014). Who's chatting to your kids? Brisbane: Queensland Police. Retrieved from <www.police.qld.gov.au/programs/cscp/personalSafety/children/childProtection/>. - Raising Children Network. (2011). Internet safety for children. Raising Children Network (Australia) Ltd. Retrieved from <raisingchildren.net.au/articles/internet_safety.html>. - Robinson, E. (2012). Parental involvement in preventing and responding to cyberbullying (CFCA Paper No. 4). Melbourne: Australian Institute of Family Studies. Retrieved from <www.aifs.gov.au/cfca/pubs/papers/a141868/index.html>. - Viner, R. 
(Ed.). (2005). The ABC of adolescence. Malden, MA: BMJ Books/Blackwell Publishing. Authors and Acknowledgements This paper was updated by Elly Robinson and Morwynne Carlow, Child Family Community Australia (CFCA) information exchange. Previous versions of this paper have been updated by Lucy Ockenden, Kathryn Goldsworthy, Rose Babic, Elly Robinson, and Shaun Lohoar. Outlines definitions of cyberbullying, differences between cyberbullying and offline bullying, and parents' roles in dealing with cyberbullying. An overview of the issues involved when displaying images of children and young people online, including privacy laws, consent and safety An overview of the innovative use of technology in service delivery for organisations working with families, children and young people. Outlines the role and duties of children's commissioners, and how they differ between Australian states and territories The Protecting Australia's Children: Research and Evaluation Register is a searchable database of 944 research and evaluation projects related to protecting children. A range of filtering options enable easy access to relevant Australian research conducted between 2011 and 2015.
<urn:uuid:e32c2934-d885-466c-8b3a-abbbf72fb233>
{ "date": "2019-01-23T16:10:08", "dump": "CC-MAIN-2019-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584334618.80/warc/CC-MAIN-20190123151455-20190123173455-00096.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9241353869438171, "score": 3.921875, "token_count": 3594, "url": "https://aifs.gov.au/cfca/publications/online-safety" }
Pencils by type

A huge deposit of solid graphite was discovered in Cumbria in the mid-1500s. This was in a particularly pure and solid form and it remains the only large deposit of solid graphite ever found. It was initially used by the locals for marking sheep. Later, it was discovered that graphite could be used in the process of casting cannonballs, and because of their strategic importance the graphite mines were taken over by the Crown and closely guarded. However, the graphite was smuggled out for use in pencils, with the soft graphite sticks being wrapped in string or sheepskin to give them strength. Graphite mines in other parts of the world could only produce graphite powder, as the graphite had to be crushed to remove the impurities. The first attempts to create graphite sticks from graphite powder were made in Germany in the mid-1600s. The method of mixing powdered graphite with clay was invented by the Austrian Joseph Hardtmuth in 1790, but it was the Frenchman Nicolas-Jacques Conté who developed the method in order to overcome the British embargo on the export of graphite to France. This clay/graphite mix is what is used today, with the different grades being achieved by varying the proportions.

Faber-Castell started making graphite pencils in 1761 and has been in continuous production ever since. The company has taken the traditional wood-cased pencil to a new level with the 'Perfect Pencil' range, which includes a combined extender and sharpener. Pencils remain popular: many people like to work in pencil, and the ability to easily correct mistakes or remove a temporary note can be a real advantage.

'Mechanical pencil' is a generic term which covers all types of pencil that have a mechanism to control a replaceable lead. Mechanical pencils are used by artists, draughtsmen and designers, as well as being a handy tool for note-making.

The qualities of the wooden pencil have been valued for 250 years. The simple, reliable, ubiquitous wooden pencil will no doubt be around for many more years to come.
<urn:uuid:87d7c76d-22f0-4200-8083-64637c7881b0>
{ "date": "2019-01-18T17:26:27", "dump": "CC-MAIN-2019-04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660258.36/warc/CC-MAIN-20190118172438-20190118194438-00576.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9752706289291382, "score": 3.1875, "token_count": 449, "url": "https://www.thepencompany.com/pencils-type/" }
Food preservation is the name for a number of processes that help to preserve food. This means that the food treated that way will go bad (spoil from bacteria) later than if it had not been treated that way. For thousands of years, humans have used methods of preserving food so that they can store food to eat later. The simplest methods of preserving food, such as drying strips of fish or meat in the hot sun, have been used for thousands of years, and they are still used in the 2000s by indigenous peoples.

How food goes bad

Food is spoiled because microorganisms change it. There are five basic techniques which make food last longer:

- Killing the microorganisms, or preventing them from multiplying
- Removing the contact of the microorganisms with the food, and making new contact more difficult or impossible
- Removing the basics that microorganisms need to survive, which kills them
- Highly concentrating one of the ingredients of the food, so microorganisms cannot use it
- Adding certain additives that prevent the growth of microorganisms, or slow it down

Usually several of these techniques are combined.

Methods of preserving food

Common ways of preserving food are:

- Heating the food or baking it (a hard corn-flour biscuit stays edible much longer than a bowl of fresh corn).
- Pasteurization: Louis Pasteur found that simply heating food kills most microorganisms and makes it last longer. Liquids such as milk are commonly pasteurized.
- Converting the food into a longer-lasting form (for example, fresh goat's milk can be converted into cheese or yogurt, which lasts much longer than fresh milk)
- Pickling: putting vegetables, meat, or fish in salty water (brine)
- Salting the food: covering it with dry salt
- Putting the food in a jar with alcohol (ethanol) or vinegar, also called pickling
- Putting large amounts of sugar into the food (for example, as with jam or fruit jarred in sugar and water)
- Drying in the sun or in an oven
- Smoking the food with the smoke from burning wood. Usually, this is done to food that was salted first.
- Keeping the food cold or frozen
- Adding sulfur dioxide (which acts as an antioxidant and preservative)
<urn:uuid:fac2ae5e-df8c-4694-9b30-3973f2d4f317>
{ "date": "2015-11-26T00:22:58", "dump": "CC-MAIN-2015-48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446248.56/warc/CC-MAIN-20151124205406-00058-ip-10-71-132-137.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9485665559768677, "score": 3.640625, "token_count": 603, "url": "https://simple.wikipedia.org/wiki/Food_preservation" }
Deadly Teflon chemical – Decades of cover-ups

(Natural News) It’s in your cookware, your clothing, furniture, carpets, popcorn bags and even in your food! It’s perfluorooctanoic acid (PFOA), and it remains indefinitely in the environment and even gets stuck in your body. PFOA is a toxicant and known carcinogen that has been detected in the blood of more than 98% of the US population. Exposure to this chemical has been associated with increased cholesterol and uric acid levels, preeclampsia, heart disease, liver damage, thyroid trouble, neurological disorders, chronic kidney disease and kidney cancer. High levels of exposure to the Teflon (a DuPont registered trademark) chemical PFOA cause the risk of testicular cancer to skyrocket by 170 percent.

DuPont’s plant on the Ohio River has used PFOA since the 1950s to make chemicals used in the production of nonstick products, oil-resistant paper packaging like hamburger wrappers, and stain-resistant textiles. PFOA is in pretty much anything wrinkle-free, heat-proof, stain-resistant, and more.

Children downstream from the DuPont chemical plant on the Ohio River carry PFOA in their blood, prompting one of the first studies of its effects on kids. Out of more than 10,000 kids ages 1 to 17, those with the highest levels were more likely to have thyroid disease. Of course, these results only support previous findings from studies with adults. Thyroid hormones play critical roles in metabolism, growth and brain development. These hormones are especially important during fetal development and early childhood; small changes in thyroid hormone levels during these developmental periods can affect IQ and motor skill development in children.

DuPont fined for Teflon cover-up

In June 2005 a $5 billion class-action lawsuit was filed against DuPont for failing to alert the public about over 20 years of known health problems associated with PFOA. The Environmental Protection Agency later announced it would slap the $25 billion Teflon maker with a mere $16.5 million fine for two decades’ worth of covering up studies that showed it was polluting drinking water and harming newborn babies with an indestructible chemical. The fine was the largest administrative fine the EPA had ever levied under a flimsy toxic chemical law, yet it was less than half of one percent of DuPont’s profits from Teflon at the time and a mere fraction of the $313 million the agency could have imposed.

The Environmental Working Group (EWG.org) said the penalty highlighted the federal government’s weak hand in dealing with industrial polluters. “What’s the appropriate fine for a $25 billion company that for decades hid vital health information about a toxic chemical that now contaminates every man, woman and child in the United States?” Group president and co-founder Ken Cook said. “We’re pretty sure it’s not $16 million, even if that is a record amount under a federal law that everyone acknowledges is extremely weak.”

Of course, DuPont acknowledges no liability for its failure to report its 1981 discovery that a compound used to make Teflon had contaminated the placenta and bloodstream of a West Virginia worker’s unborn child. Other complaints allege that DuPont withheld information for years about unexpected contamination in the blood of workers, and about pollution releases that eventually contaminated water supplies serving thousands in West Virginia and Ohio. DuPont’s official position is that it believes there are no human health effects associated with its Teflon product.
Cash is king

DuPont is one of the largest chemical companies in the world. Between 2008 and 2010, it reported over $2 billion in profits, paid no federal income taxes, increased its executive compensation by a whopping 188% and spent almost $14 million on lobbying for more corporate-friendly laws. DuPont is the nation’s only manufacturer of PFOA. So why is it so difficult to stop this madness? Because Americans still buy its products, and cash is king.

Sources for this article

About the author: Craig Stellpflug is a Cancer Nutrition Specialist, Lifestyle Coach and Neuro Development Consultant at Healing Pathways Medical Clinic, Scottsdale, AZ. http://www.healingpathwayscancerclinic.com/ With 17 years of clinical experience working with both brain disorders and cancer, Craig has seen first-hand the devastating effects of vaccines and pharmaceuticals on the human body and has come to the conclusion that a natural lifestyle and natural remedies are the true answers to health and vibrant living. You can find his daily health blog at www.blog.realhealthtalk.com and his articles and radio show archives at www.realhealthtalk.com
<urn:uuid:24765bbc-8f42-418d-a95a-05132219f0d7>
{ "date": "2018-07-17T15:04:51", "dump": "CC-MAIN-2018-30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589752.56/warc/CC-MAIN-20180717144908-20180717164908-00336.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9509578347206116, "score": 2.640625, "token_count": 999, "url": "https://www.federaljack.com/deadly-teflon-chemical-decades-of-cover-ups/" }
noun [mass noun] Biochemistry

An enzyme which catalyses the synthesis of a complementary RNA molecule using an RNA template.

More example sentences

- Q-beta replicase based gene amplification: This approach involves production of RNA in the amplification reaction using QB replicase as the enzyme and reaction at fixed temperature.
- The estimate is based on a large mutational target, the 804-base TMV MP gene that encodes the viral movement protein, which is a cognate sequence for the viral replicase.
- As a replicase, Pol is highly sensitive to template defects and misinserts nucleotides only extremely rarely.

Definition of replicase in:

- the US English dictionary
<urn:uuid:52889ddf-c832-4ea9-81b1-afcef6f03a29>
{ "date": "2014-10-22T14:28:49", "dump": "CC-MAIN-2014-42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447020.15/warc/CC-MAIN-20141017005727-00217-ip-10-16-133-185.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.7385972738265991, "score": 2.515625, "token_count": 170, "url": "http://www.oxforddictionaries.com/es/definicion/ingles/replicase" }
Taxing Air: Facts and Fallacies about Climate Change — by Bob Carter and John Spooner, with Bill Kininmonth, Martin Fell, Stewart Franks, and Bryan Leyland — could be the best textbook yet written on climate change for all ages and backgrounds. The one thing that is a little confusing about the book is the title, which derives from the fact that all the authors are from Down Under (Australia, New Zealand, and Tasmania), where the governments are indeed taxing air for its carbon dioxide content. Carter is a renowned marine geologist and environmental scientist, and Spooner is a lawyer turned cartoonist whose contribution makes the book both fun to read and insightful. Carter et.al. chose to write the book using the Socratic method of asking questions and providing answers. It is divided into eight chapters, all filled with every question you could imagine to ask followed by brief answers and then the supporting material to prove the answers. The book’s great opening question is: What is a climate scientist? The accurate answer is that with so many fields of knowledge required, no one can be an overall climate expert. The authors follow with a simple question that few can answer: How does the climate system work? The simple answer is that it works through atmospheric and oceanic circulations that continuously transfer excess solar energy from the tropics to the polar regions. The full, long answer is broken down into small, understandable bits throughout the book, including virtually all the everyday questions such as: Is the earth warming, has it ever been warmer, how much warming occurred in the 20th century, how much warming is due to atmospheric carbon dioxide, and how much of that has been provided by mankind? In Chapter Two the authors explain the source and production of climate alarmism in a manner that can quickly explain why such unsupportable falsehoods could attract such a massive and powerful following. Their description of the makeup of the Intergovernmental Panel on Climate Change (IPCC) is enlightening, especially the fact that the participants are not asked to determine the nature of climate change but instead to produce evidence of man’s impact on climate. Carter sums up the climate change dilemma as follows: “The past 24 years have seen thousands of scientists expend well over $100 billion in studying the influence that human-related emissions may be having on climate. Given these intensive efforts, the absence of a measurable or unequivocal human imprint in the recent temperature record and the absence of any global warming trend at all over the last 16 years both point to frailty in the dangerous Anthropogenic Global Warming hypothesis. A reasonable default conclusion is that any human influence on the global climate lies within the noise of natural variability.” Carter and friends do a great job of explaining how we know about ancient climates, and they describe the proxies scientists use, such as tree rings and chemical indicators, to determine temperatures before man began to record them. They do an even better job of explaining the Milankovitch cycles of 20,000, 41,000, and 100,000 years determined by the movement of planetary bodies in our solar system which create gravitational interactions. They also explode the myths of coral leaching and polar bear declines. The authors’ complete explanation of the greenhouse warming theory and the truth about greenhouse gases is really outstanding—and it shows that the matter is not as simple as the public is being told. 
The authors hit hard on a point I make in all my lectures: the more carbon dioxide there is in the atmosphere, the less effective it is at capturing outgoing radiation from the earth, because it works only in a narrow range of the electromagnetic spectrum. Thus there appears to be a natural limit to the greenhouse effect. Perhaps best of all is the authors' explanation of the mathematical models used to determine future earth temperatures. These so-called General Circulation Models are so reliant on guesses regarding the many variables that affect climate as to be of no real value in planning future policies. They explain it thus: "Confidence in the projections made by the current generation of deterministic climate models is low because their construction is based on only a short period of climate history, and because they have not been validated on independent data. For the moment, therefore, deterministic Global Circulation Models represent a highly constrained and simplified version of the Earth's complex and chaotic climate system." Their chapter on the gigantic impact of the ocean on climate and carbon dioxide content as compared to the less dense atmosphere is really outstanding. It is followed by a lengthy chapter on Australian climate politics, which will be of general interest to some but might be skimmed or skipped by others. In the final chapters the authors deal beautifully with the fallacies of renewable energy and explain how mankind can prepare for whatever climate changes nature may throw at us. In summary, this is the very best instructional book I have seen for those seeking a clear understanding of the realities of the earth's climate and a remedy for the amazing fallacies spread daily by those who wish to make political gains by plying the public with unscientific misinformation. [NOTE: Printed copies of Taxing Air (A$30 + p&p) can be ordered at TaxingAir.com, and a Kindle version ($7.99) is available from Amazon.]
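The diminishing-returns point can be illustrated with the standard logarithmic approximation for carbon dioxide radiative forcing. The sketch below is not taken from the book; the 5.35 W/m2 coefficient is the commonly cited Myhre et al. (1998) value and is used here only to show why each additional increment of CO2 captures less extra outgoing radiation than the increment before it. Whether that logarithmic response amounts to a "natural limit" is, of course, exactly the kind of question the book argues over.

    import math

    def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
        # Widely used logarithmic approximation: delta_F = 5.35 * ln(C / C0), in W/m^2
        return 5.35 * math.log(c_ppm / c0_ppm)

    # Each doubling of concentration adds roughly the same ~3.7 W/m^2, so going
    # from 280 to 560 ppm has about the same effect as going from 560 to 1120 ppm.
    print(round(co2_forcing_wm2(560.0), 2))   # ~3.71
    print(round(co2_forcing_wm2(1120.0), 2))  # ~7.42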
<urn:uuid:00589f12-d774-4d14-821a-c5e50099a9dc>
{ "date": "2014-04-21T00:09:40", "dump": "CC-MAIN-2014-15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00459-ip-10-147-4-33.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.939751148223877, "score": 3.09375, "token_count": 1079, "url": "http://blog.heartland.org/2013/08/book-review-taxing-air-facts-and-fallacies-about-climate-change/" }
Sensitivity Of Glaciation In The Arid Subtropical Andes To Changes In Climate Understanding the sensitivity of glaciers to changes in climate provides insight into the climatic drivers of past glaciations and helps us predict how glaciers and ice sheets may respond to future warming. The South American Andes are ~7000 km long, and the full length of the range is glaciated, or has been glaciated as recently as the late Pleistocene (~20,000 years ago). This means that glaciers in the Andes exist, or have existed, in a broad range of climatic settings. Between 18 and 27 °S, in the subtropical Andes, there are rock and sediment deposits from past glaciers but no modern glaciers, even at altitudes over 6000 m. These glacial deposits have been used by scientists to identify when the glaciers that used to exist in the area were at their maximum extent. Those scientists found that maximum glaciation occurred at different times across three distinct sub-regions: the northern Altiplano, the hyper-arid western cordillera, and the more humid eastern cordillera. While temperatures are similar across the three regions, the western cordillera is drier and receives more radiation than the other regions. The out-of-phase timing of glaciation across these three regions, likely due to those differences in climate, makes the subtropical Andes an especially interesting area to investigate the relationship between glaciers and climate. We use a numerical glacier model to calculate the changes in climate, compared with today, that would have been required for glaciers to have previously existed in the subtropical Andes. The model calculates the changes in the modern climate, including temperature, precipitation, and radiation, required for glaciers to exist in each of the three regions. The eastern cordillera likely had glaciers ~23,000 years ago during the Last Glacial Maximum, when temperatures were ~6°C colder than they are today. Our model shows that by simply decreasing temperatures by 6°C in the eastern cordillera, the climate would support glaciers. Maximum glaciation in the Altiplano and the hyper-arid western cordillera was likely closer to ~15,000 years ago, when temperatures were ~3.5°C colder than they are today. Our model shows that in these regions, simply decreasing temperatures is not enough to support glaciers and that further increases in precipitation, decreases in incoming radiation, or a combination of both, is necessary. In the Altiplano, an increase in precipitation of 10–60%, in conjunction with incoming radiation 7–12% lower and temperatures 3.5 °C lower, would have supported glaciers in the area. In the western cordillera, for the same lower radiation and temperature, precipitation needs to be 90–160% higher than modern for the region to have supported glaciers. We find that glaciation of the Altiplano and western cordillera is ~3 times more sensitive to changes in precipitation and ~2.5 times more sensitive to changes in shortwave radiation compared with the eastern cordillera. While all three regions are part of the arid subtropical Andes, the extremely high sensitivity of glaciation in the western cordillera is likely due to the especially low precipitation and high radiation in the area. These results suggest that future changes in precipitation and radiation can be as important as upcoming warming for future changes in subtropical glaciers.
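To make the kind of sensitivity experiment described above concrete, here is a deliberately simplified, toy mass-balance calculation. It is not the glacier model used in the study, and every coefficient in it is an invented placeholder; it only illustrates how cooling, wetter conditions, and reduced radiation trade off against one another at a single site.

    def annual_balance(dT, precip_scale, rad_scale, T0=4.0, P0=0.3, ddf=0.7):
        # Toy annual mass balance in metres of water equivalent at one elevation.
        # Accumulation scales with precipitation; melt scales with the
        # above-freezing temperature and with relative shortwave input.
        # T0, P0 and ddf are illustrative placeholders, not measured values.
        accumulation = P0 * precip_scale
        melt = ddf * max(T0 + dT, 0.0) * rad_scale
        return accumulation - melt

    print(annual_balance(-6.0, 1.0, 1.0))  # ~ +0.30: cooling alone sustains ice
    print(annual_balance(-3.5, 1.0, 1.0))  # ~ -0.05: cooling alone falls short
    print(annual_balance(-3.5, 1.9, 0.9))  # ~ +0.26: wetter and dimmer closes the gap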
These findings are described in the article entitled Sensitivity of glaciation in the arid subtropical Andes to changes in temperature, precipitation, and solar radiation, recently published in the journal Global and Planetary Change. This work was conducted by L.J. Vargo from the Victoria University of Wellington, J. Galewsky from the University of New Mexico, S. Rupper from the University of Utah, and D.J. Ward from the University of Cincinnati.
<urn:uuid:05d63d8d-25e0-4cc2-a8a7-246879b43d81>
{ "date": "2019-05-26T04:16:19", "dump": "CC-MAIN-2019-22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258621.77/warc/CC-MAIN-20190526025014-20190526051014-00136.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9418805837631226, "score": 3.96875, "token_count": 801, "url": "https://sciencetrends.com/sensitivity-of-glaciation-in-the-arid-subtropical-andes-to-changes-in-climate/" }
Ten Things You Might Not Know About Our World | The Herald | HeraldOnline.com | 12, 2012 — /PRNewswire-USNewswire/ -- These facts about our interconnected world were brought to you by Geography Awareness Week 2012. It is possible in many cities to identify zones with a particular type of land use - eg a residential zone. Often these zones have developed due to a combination of economic and social factors. In some cases planners may have tried to separate out some land uses, eg an airport is separated from a large housing estate. The concentric and sector models in one news article? The BBC is showing once again the possibilities available if only the United States taught more geography in the schools. For over 1400 years, Mecca has been one of the most important cities in the Arabian Peninsula. By the middle of the 6th century, there were three major settl... As the heart of Islam, Mecca brings in pilgrims from around the world. This documentary gives a great overview of the historical, spiritual and cultural reasons why this is sacred space to over one billion Muslims. Additionally, this documentary contains an analysis of the logistics that are a part of the Hajj. The Brazilian government's geographic department (Instituto Brasileiro de Geografia e Estatística, roughly equivalent to the U.S. Census Bureau) has compiled a fantastic interactive world factbook (available in English and Spanish as well as Portuguese). The ease of navigation allows the user to conduct a specific search or simply explore demographic, economic, environmental and development data on any country in the world. This interactive map documents where 443 million people around the world get their water (although the United States data is by far the most extensive). Most people can't answer this question. A recent poll by The Nature Conservancy discovered that 77% of Americans (not on private well water) don't know where their water comes from; they just drink it. This link has videos, infographics and suggestions to promote cleaner water. This is also a fabulous example of an embedded map using ArcGIS Online to share geospatial data with a wider audience. By Climate Central's Michael D. Lemonick: July 2012 was officially not only the warmest July on record, but also the warmest month ever recorded for the lower 48 states, according to a report released Wednesday by scientists at the National Oceanic... The drought footprint covers 63% of the contiguous states during the hottest month in American history. It's the hottest 12-month stretch (August 2011-July 2012) on record for the lower 48, making it the fourth consecutive month to set a new record (i.e. the old record was July 2011-June 2012). The biggest difference from other hot months is that nighttime temperatures have been exceptionally high. The most current drought monitor map can be found at the University of Nebraska website. "After growing by leaps and bounds for more than three decades, China's economic growth has come to a halt, falling from around 12 percent in the second quarter of 2006 to 7.6 percent in the second quarter of 2012. Export-dependent manufacturing sector has been hard hit. The June HSBC Flash Purchasing Managers Index hit a seven-month low of 48.1, down from a final reading of 48.4 in May, the eighth consecutive month that the index has been below 50—the contraction threshold. Is this just a temporary pause, caused by a prolonged slow-down in the world economy or something more serious?"
News analysis TEL AVIV, Israel - Israel has called up army reserves, the standing army is poised for a ground invasion of Gaza, and the air force and navy are attacking a list of specified targets, mostly Hamas fighters and weapons facilities. Americans tend to locate near other people who share similar political views, creating a large number of counties that tend to be either reliably Democratic or Republican during election season, writes Timothy Heleniak, director of the American... This map is a fantastic geovisualization that maps the spatial patterns of languages used on the social media platform Twitter. This map was in part inspired by a Twitter map of Europe. While most cities would be expected to be linguistically homogeneous, London's cosmopolitan nature and large pockets of immigrants influence the distribution greatly. Tags: social media, language, neighborhood, visualization, cartography. DB: The aesthetics of architecture within a society not only reveal the community's interpretation of what is considered beautiful or pleasing in appearance but also differentiate between what is considered sacred or important. The symbolic significance of aesthetics in colors, designs and a place of residence can be indicative of socioeconomic standing within society and of what the community values. Jodhpur, India, is well known for the beautiful wave of blue houses that dominate the landscape of a rather dry region. However, it is believed that these blue houses originally were the result of ancient caste traditions. Brahmins (who were at the very top of the caste system) housed themselves in these "Brahmin Blue" homes to distinguish themselves from the members of other castes. Now that the Indian government officially prohibits the caste system, the use of the color blue has become more widespread. Yet Jodhpur is one of the only cities in India that stands steadfast in its widespread aesthetic obsession with the color blue, which is making it increasingly unique, creating a new sense of communal solidarity among its residents. Questions to Consider: How has color influenced the cultural geography of this area? How are the aesthetics of this community symbolic of India's traditional past, present and possible future? Tags: South Asia, culture, housing, landscape, unit 3 culture. Births have plummeted since their 2007 peak, and the recession is a factor. There's worry that the birthrate will be affected for years. The graph for this article is an incredible visual that highlights how the economic conditions of a country can impact its demographics. Not surprisingly, Americans have fewer children during tough times. Questions to ponder: would this phenomenon be expected in all parts of the world? Why or why not? Demographically, what will the long-term impact of the recession be?
<urn:uuid:eae14294-0234-4da8-b994-c8e702accf4d>
{ "date": "2014-08-20T05:42:45", "dump": "CC-MAIN-2014-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500800168.29/warc/CC-MAIN-20140820021320-00020-ip-10-180-136-8.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9478570222854614, "score": 2.8125, "token_count": 1266, "url": "http://www.scoop.it/t/hum-geo/p/3060149631/2012/10/22/scratch-hip-hop-documentary" }
New research published in JAMA Internal Medicine, based on a recent study of 450,000 EU citizens, has revealed a link between drinking sugary soft drinks and a greater mortality risk. The study found a correlation between consuming such drinks and deaths from digestive conditions, and between diet soft drinks and deaths from circulatory diseases. Participants who had two or more such drinks a day were more at risk of dying from bowel disease, heart disease and strokes. "New study reveals link between sugary drinks and an increased risk of death" The CEO of the Oral Health Foundation, Dr. Nigel Carter, said: "In the UK, we have one of the highest rates of sugar consumption worldwide. This study is a frightening eye-opener and reminds us that excessive amounts of sugar can be really harmful to our health. Added sugar is the main culprit when it comes to several major chronic diseases including tooth decay, diabetes and heart disease….More must be done to drive down sugar consumption and incentivise healthier alternatives. Tooth brushing twice daily, with a fluoride toothpaste, is a crucial aspect of good oral health but it cannot prevent tooth decay caused by excessive sugar consumption….Plain still water is the best 'tooth-friendly' way of quenching thirst, without putting our health at risk. The sugar tax shows that government intervention is absolutely necessary for reducing the amount of sugar on supermarket shelves and in British homes. Tighter regulation, along with making healthier alternatives more financially affordable, are the next important steps in fixing the UK's unhealthy relationship with sugar."
<urn:uuid:f31692b1-b44c-4419-a389-dbd20b99cf77>
{ "date": "2019-12-06T16:51:18", "dump": "CC-MAIN-2019-51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00456.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9527943134307861, "score": 2.953125, "token_count": 316, "url": "https://www.zenopa.com/news/1643/new-study-reveals-link-between-sugary-drinks-and-an-increased-risk-of-death" }
The following information is a Web Extra from the pages of Farm Journal. It corresponds with the article "Knock Out Nematodes." You can find the article in Farm Journal's 2012 Seed Guide. Steps for Developing a Nematode Management Program: - Sample field(s) to determine if nematodes are present; if so, establish population density levels. If NO species are detected, the strategy is to make sure none are introduced. - If nematodes are present, the strategy is to keep them from spreading to non-infested fields and to reduce population densities. - Effective management practices require knowing which field(s) is/are infested, genera present, and population densities. - Lance nematodes should be identified by specific species. - Develop a Nematode Management Program well in advance of planting. (For more information, see Nematode Soil Sampling). - Review analysis of soil samples taken last fall to identify nematode species, their locations, and their densities. - Review nematode control options -- cultural and chemical practices -- that are available or practical in your situation. - Design and follow a strategy that suits your special situation. Consider factors such as history of cropping patterns, soil types, single/multiple nematode species present, and weather anticipated. SOURCE: COTTON, INC.
<urn:uuid:7401809d-dc37-4cd7-8044-ae5e67dad8b9>
{ "date": "2018-09-21T14:23:01", "dump": "CC-MAIN-2018-39", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157203.39/warc/CC-MAIN-20180921131727-20180921152127-00016.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8474797606468201, "score": 3.15625, "token_count": 289, "url": "https://www.agweb.com/article/how_to_manage_nematodes_in_cotton/" }
Radiant heat from propane jet fires Rights access: Open Access Sonic propane jet fire experiments were carried out in the absence of wind, with visible flame length ranging between 2.2 and 8.1 m. The thermal radiation intensity increased with the mass flow rate and the flame length. The net heat released was also computed and a correlation for the flame length as a function of Q is proposed. The surface emissive power and the fraction of heat irradiated were estimated by applying the solid flame model, assuming the flame to be a cylinder. The variation of the emissive power as a function of flame length was found to follow a linear equation. The fraction of heat irradiated η was obtained from the value of the total radiative power; its average value for sonic propane gas flames was 0.07. Citation: Gómez, M.; Muñoz, M.; Casal, J. Radiant heat from propane jet fires. "Experimental thermal and fluid science", 2010, vol. 34, no. 3, p. 323-329.
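For readers unfamiliar with the solid flame model mentioned in the abstract, the radiant heat flux received at a target is commonly written as the product of the flame surface emissive power, a geometric view factor, and the atmospheric transmissivity. The snippet below is a generic illustration of that relation with made-up numbers; it does not reproduce the correlations fitted in the paper.

    def received_heat_flux(emissive_power_kw_m2, view_factor, transmissivity=1.0):
        # Solid flame model: q'' = tau * F * E
        return transmissivity * view_factor * emissive_power_kw_m2

    # Illustrative values only: a cylindrical flame surface radiating at
    # 50 kW/m^2, seen through a view factor of 0.05 with 90% transmissivity.
    print(received_heat_flux(50.0, 0.05, 0.9))  # 2.25 kW/m^2 at the target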
<urn:uuid:2051acc6-796b-46ff-9a01-72072c571781>
{ "date": "2015-08-29T15:27:23", "dump": "CC-MAIN-2015-35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064503.53/warc/CC-MAIN-20150827025424-00349-ip-10-171-96-226.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9192568063735962, "score": 2.5625, "token_count": 223, "url": "http://upcommons.upc.edu/handle/2117/7144" }
- breeches (n.) - c. 1200, a double plural, from Old English brec "breeches," which already was plural of broc "garment for the legs and trunk," from Proto-Germanic *brokiz (cognates: Old Norse brok, Dutch broek, Danish brog, Old High German bruoh, German Bruch, obsolete since 18c. except in Swiss dialect), perhaps from PIE root *bhreg- (see break (v.)). The Proto-Germanic word is a parallel form to Celtic *bracca, source (via Gaulish) of Latin braca (source of French braies), and some propose that the Germanic word group is borrowed from Gallo-Latin, others that the Celtic was from Germanic. Expanded sense of "part of the body covered by breeches, posterior" led to senses in childbirthing (1670s) and gunnery ("the part of a firearm behind the bore," 1570s). As the popular word for "trousers" in English, displaced in U.S. c. 1840 by pants. The Breeches Bible (Geneva Bible of 1560) so called on account of rendition of Gen. iii:7 (already in Wyclif) "They sewed figge leaues together, and made themselues breeches."
<urn:uuid:c8a74955-2fb6-423a-9a93-a29aedaeef19>
{ "date": "2015-10-07T08:28:23", "dump": "CC-MAIN-2015-40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682947.6/warc/CC-MAIN-20151001215802-00164-ip-10-137-6-227.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9477116465568542, "score": 3.5, "token_count": 288, "url": "http://www.etymonline.com/index.php?term=breeches&allowed_in_frame=0" }
Critical Analysis Of The Poem By Alfred Tennyson: English Literature Essay. Grief is one of the most powerful emotions that a human being can experience. This is the predominant theme of the poem 'Break, Break, Break' by Alfred Tennyson, written around 1834, approximately a year after the death of his close friend Arthur Henry Hallam. 'Break, Break, Break' can be interpreted as a written example of the grief felt by Tennyson at the loss of his friend. This essay will examine the various techniques used by Tennyson to convey his emotion to the reader. The repetition of the word 'Break' in the opening line can be viewed on a number of levels; at its most basic it can be seen to be a literal description of the waves breaking upon 'thy cold gray stones', but it could, however, also be describing the heartbreak felt by the voice of the poem. When the repetition of the word 'break' is combined with the trimeter structure of the opening line, it forms a rhythmic beat, akin to that of a ticking clock, which symbolically can be perceived to represent not only the unrelenting breaking of the sea, but also the unrelenting march of time itself, which we all eventually submit to. On another level the breaking waves can be a metaphor for the waves of emotion breaking over the voice, drowning them in their grief. In the final two lines of the opening stanza, the voice reveals their desire to communicate 'The thoughts that arise' within them; this exhibits a high level of irony given that the whole of the poem itself is an expression of their 'thoughts'. The theme of communication follows on into the second stanza; the descriptions of a 'fisherman's boy' shouting with 'his sister at play!', and a 'sailor lad' singing in 'his boat on the bay!', both show examples of the world's ability to make noise, which is in direct contrast to the voice. They both also show that despite the voice of the poem feeling as if the world has ended, it has in fact carried on. The use of an exclamation mark at the end of both descriptions can be viewed to signify both the voice's irritation at these interruptions to his silent grief, and also their annoyance at the world's seeming indifference to their anguish. The third stanza shows an example of Tennyson's careful choice of words when describing the destination of the 'stately ships'; he chooses to use the word 'haven' instead of the more obvious harbour. This works because of the two different meanings of the chosen word: when read in context it refers to the port where the ships are heading, while its alternate meaning of a place of shelter and protection fits perfectly with the underlying theme of the poem, namely that shelter and protection from their grief is something that the voice is looking for. There is also a point of interest when noting the location of the 'haven' where the ships are heading: it is described as being 'under the hill:', which could be symbolic of being buried, and would tie in with the themes of death and grief that are present within the poem.
The final two lines of the third stanza reaffirm the yearning felt by the voice, this time for 'the touch of a vanish'd hand', and to once again hear the 'voice that is still!'; the notion of a mute voice is something that was originally seen in the opening stanza, but this time the 'still' voice is referring to the deceased. This is something which strengthens the link between the voice and the source of their grief; this link between voice and departed also strengthens the connection of the two to the reader, allowing the grief of the voice's loss to feel more authentic. The final stanza starts with the repetition of 'Break', seen in the opening line. This brings a sense of the poem coming full circle and allows the reader to conclude that the end is coming near. By using this repetition once again it is established that the voice's state of mind and indeed the theme of the poem remain firmly entrenched in grief; despite all that has gone before it, the reiteration of the repetition of 'Break' conveys that the voice's heart is still broken, indeed even that their mind, body and soul are broken too. This also forms a connection between the voice and the deceased: just as we have the literally dead person, we also have the voice themselves, who is experiencing a form of living death, isolated within their own grief, unable to share in the joy of the world exhibited by the 'fisherman's boy' and the 'sailor lad', but also unable to even communicate the immense sorrow that they are experiencing; on both ends of the spectrum of human emotion they are in isolation. In conclusion, upon reading 'Break, Break, Break', the reader is left in little doubt as to what the predominant theme of the poem is. Tennyson achieves this on two levels. Firstly, in a literal sense: upon an initial read-through of the poem we are presented with a description of a person that has suffered loss and is grieving as a result. Secondly, Tennyson reinforces this theme to the reader through clever use of techniques such as repetition, structure and choice of language and punctuation; these work at a level where the reader does not have to be consciously aware of them in order for them to succeed.
<urn:uuid:8c6b3065-d6eb-4581-ae37-83fcc6d41d1c>
{ "date": "2014-12-18T04:02:31", "dump": "CC-MAIN-2014-52", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765610.7/warc/CC-MAIN-20141217075245-00155-ip-10-231-17-201.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9613809585571289, "score": 3.328125, "token_count": 1135, "url": "http://www.ukessays.com/essays/english-literature/critical-analysis-of-the-poem-alfred-tennyson-english-literature-essay.php" }
PALGA Protocol Module The PALGA Foundation (Pathological Anatomical National Automated Archive) was set up in 1971 by pathologists to promote communication and information supply within and between pathology laboratories and to make the information obtained available for health care. All 56 pathology laboratories in the Netherlands take part in the PALGA network. ICT developed the PALGA Protocol Module (PPM) for PALGA. The laboratories use this module to draw up reports in a structured manner. The system supports pathologists in understanding the vital data, and this results in a complete report that is drawn up according to the Oncology guidelines. Using the PPM it is also possible to generate the conclusion, the report text, the pTNM and PALGA codes automatically. The entered observations are also recorded in a structured manner, thus raising the quality of the national PALGA database. Clever technology and knowledge sharing The PPM is based on LogicNets' Expert System. LogicNets is the American partner of ICT. The technology of this expert system is extremely well suited to providing technical support solutions in general and to protocol reporting in the medical world; PALGA's PPM is a good example. In addition to its technological expertise, ICT is known for its knowledge in the area of Medical Data Exchange. By means of protocol reporting using LogicNets technology, ICT ensures that the data to be exchanged are standardised and structured on input and, once stored in PALGA's national database, can be used as valuable information for scientific research. Together with the PALGA Foundation, ICT develops and manages the national protocols; there are currently 30 protocols on the list for development and some of these have already been realised. Agile Scrum methodology is used here, combining the domain knowledge of PALGA with the technology of ICT. Laboratories can also make protocols themselves (or have them made). ICT regularly gives a course on modelling protocols. ICT also provides consultancy customised for laboratories that develop protocols themselves with a little support from ICT. Determining the content is the first step in drawing up a protocol. Within the laboratory this gives rise to a considerable amount of work and discussion. The LogicNets technology accelerates the process from discussion to implementation through the graphical environment within which the protocols are modelled; programming is now a thing of the past.
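To give a feel for what structured protocol reporting means in practice, the sketch below shows one generic way a single protocol question could be represented as data. It is purely illustrative: the field names, answer options and routing are invented and do not reflect the actual LogicNets or PPM implementation.

    # Hypothetical representation of one protocol question; names are invented.
    protocol_step = {
        "id": "differentiation_grade",
        "question": "Grade of differentiation?",
        "options": ["well", "moderate", "poor"],
        "next": {"well": "margins", "moderate": "margins", "poor": "lymph_nodes"},
    }

    def next_step(step, answer):
        # Only predefined options are accepted, which is what keeps the entered
        # observations structured and the national database consistent.
        if answer not in step["options"]:
            raise ValueError("invalid answer: " + answer)
        return step["next"][answer]

    print(next_step(protocol_step, "poor"))  # -> lymph_nodes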
<urn:uuid:bfa46f78-259f-403c-a254-cac9abc72413>
{ "date": "2019-06-25T00:27:18", "dump": "CC-MAIN-2019-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999779.19/warc/CC-MAIN-20190624231501-20190625013501-00416.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9233171939849854, "score": 2.703125, "token_count": 490, "url": "https://ict.eu/case/palga-foundation/" }
Understanding Sacrificial Offerings or Understanding Sacrifices The chapters in the Torah which detail the practice of animal sacrifice in the Temple are some of the most difficult for a 21st-century individual to understand. As such practices have completely disappeared from civilized society, we tend to view them as cruel, primitive and superstitious. They seem incompatible with other humane and progressive commandments of the Torah, which were revolutionary when the Torah was first given and today form the basis for not only a vibrant Judaism but for the moral and ethical standards of most of the rest of the world as well. Writing in the early 16th century and incorporating the words of Maimonides, who preceded him by several hundred years, Abarbanel provides a perspective on sacrifices that we can appreciate today. The primary reason for the necessity of these rituals was to assist the nascent Jewish nation in believing in the existence and oneness of G-d and to draw closer to Him by following His directives. Human perfection can be more effectively realized by attaining knowledge and faith through prayer, enlightenment and adherence to the Torah's other precepts than by burning animals on an altar. However, the Jewish People were commanded to devote themselves to the worship of G-d, and the prevailing form of worship at that time was through animal sacrifice in specially-designated temples. G-d determined that the Jewish People would not be able to easily abandon such a well-established universal custom. By shifting the mode of worship from polytheistic paganism to the worship of one G-d, idolatry could be eliminated without radically interfering with practices already familiar to the people. In fact, the enormous amount of detail and the many differences between the various offerings symbolize many of the fundamental precepts of man's responsibilities to himself and his Creator. The first type of animal offering is the Oleh, or Elevation Offering, which is completely consumed on the altar. This represents the uniting of the soul with G-d. Just as the animal's body is united with the flames, so too is man's eternal soul united with G-d after death. This offering demonstrates that our sole purpose is to devote ourselves completely to the service of G-d. Since it symbolizes man's Divinely-created non-physical soul, material man has no share in it and cannot partake of it. The second type of offering is the Sin Offering. This offering functions as one aspect of the atonement process that is required of one who transgresses Torah commandments unintentionally. It encourages the transgressor to be more vigilant and to consider the consequences of his actions. It functions as a monetary fine as well, since the transgressor must provide the animal. Even if one is unsure whether he transgressed at all, he still must bring an offering. The procedures of the offering differ for unintentional transgressions committed by the High Court or the High Priest, as their positions involve greater responsibility. The third type of offering is the Peace Offering, which is brought by people who are thanking G-d for His numerous favors — for granting us the Land of Israel and for other acts of miraculous Divine intervention. It can represent gratitude for a past favor or act as a way of beseeching G-d to help us in the future. A festive meal is part of the offering.
The one who brings the animal and the priests who conduct the rituals are allowed to consume part of the offering as they all join in thanking G-d for His blessings. The internal organs are burned on the altar, as they are symbols of man's internal thoughts. It is as if the owner is saying that he is pouring out his inner soul before G-d. All of these offerings always consist of the most expensive animals: cattle, sheep and goats. They are also accompanied by the finest wheat flour, oils and wines. Here the Torah is emphasizing that the finest products of Israel depend on G-d's blessing. In summary, the Elevation Offering is ideological in nature. It symbolizes the immortality of the soul and its intimate connection with G-d. The Sin Offerings teach the importance of personal vigilance and accountability, the just reward for those who fear and worship G-d and the punishment for those who defy Him. At the same time, it is essential for that person to understand that his sins can be pardoned. Otherwise, there is the possibility that he will lapse even more. Finally, the Peace Offerings illustrate our faith in Divine Providence, in our recognition that G-d is the ultimate source of our material blessings.
<urn:uuid:fd5e6818-ad01-43cb-80fb-265b6bfa6df5>
{ "date": "2017-10-22T04:25:38", "dump": "CC-MAIN-2017-43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825141.95/warc/CC-MAIN-20171022041437-20171022061437-00856.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9649313688278198, "score": 2.71875, "token_count": 942, "url": "https://ohr.edu/this_week/ask_the_rabbi/5385" }
Lesson 20 - Going to a Place (Present Continuous) - Exercise 1 What are they doing? They're rushing to the bank. - Where are they going? To the bank. © The Marzio School and Real English L.L.C. Real English® is a Registered Trademark of The Marzio School. All rights Reserved.
<urn:uuid:8958a69c-46d5-4d75-8922-2e4ed73488ec>
{ "date": "2016-06-27T18:25:15", "dump": "CC-MAIN-2016-26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00178-ip-10-164-35-72.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9130159616470337, "score": 2.671875, "token_count": 74, "url": "http://www.real-english.com/reo/20/20-1.htm" }
Lake Tahoe's clarity decreased last year, slowing a trend of steady improvement that had seen the cobalt mountain waters reach their clearest level in a decade, researchers said Tuesday. The decline in clarity - about 9 percent from the previous year's average - is within normal ranges and should not be viewed as a reversal of significant gains made in recent years to protect the lake, scientists said in releasing the annual figures at an environmental symposium. The decline likely was caused by relatively high precipitation from thunderstorms in 2003, which led to increased runoff of soil and pollutants into the lake, the experts said. "I view this as neither good nor bad news," said John Reuter, a researcher with the University of California-Davis Tahoe Research Group. He said it reaffirms the need to continue regional planning and environmental programs as well as research to determine the most cost-effective restoration measures. Visible at depths of 102 feet as recently as 1968, a white plate called a "Secchi disk" could be seen at an average depth of 71 feet last year, the new figures show. In 2002, it was visible at depths of 78 feet, the clearest in 10 years. The five previous years were: - 2001, 73.6 feet - 2000, 67.3 feet - 1999, 69 feet - 1998, 66 feet - 1997, 64 feet "While the pattern may not be what we want to see or would have predicted, the decrease in Secchi depth is well within the average inter-annual variation in these measurements," said Larry Benoit, manager of the Tahoe Regional Planning Agency's Water Quality Program. Known for its cobalt blue and azure hues, the 193-square-mile lake that is 1,636 feet at its deepest point contains enough water to cover the state of California to a depth of 14.5 inches. But sedimentation and other pollution are spurring algae growth that threatens to turn Tahoe's waters green. John Singlaub, the regional planning agency's executive director, said the latest results "demonstrate the continued need for environmental improvements at Lake Tahoe." "It's important to look at these numbers in the context of the big picture of what's happening and our path for the future," he said. Research models show the most important factor affecting year-to-year changes in clarity is precipitation, including rain, snowfall and runoff. Increasing clarity in recent years was mostly due to relatively dry years from 1999 to 2002, Reuter said.
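As a rough check on the reported figures, the roughly 9 percent decline follows directly from the Secchi depths quoted above: (78 - 71) / 78 is approximately 0.09, or about a 9 percent loss of average clarity relative to the 2002 reading.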
<urn:uuid:ae9920e7-3970-4f7a-8473-f960c303e63b>
{ "date": "2016-04-28T19:54:00", "dump": "CC-MAIN-2016-18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860106452.21/warc/CC-MAIN-20160428161506-00056-ip-10-239-7-51.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9626069068908691, "score": 2.546875, "token_count": 522, "url": "http://www.kolotv.com/home/headlines/781032.html?site=mobile" }