Climate change has already pushed the nation's wildlife into crisis, according to a report released Wednesday from the National Wildlife Federation (NWF), and further catastrophe, including widespread extinction, can be averted only by swift action to cut the carbon pollution that has the planet sweltering. Entitled Wildlife in a Warming World: Confronting the Climate Crisis, the report looks at eight regions across the U.S. where "the underlying climatic conditions to which species have been accustomed for thousands of years," the report explains, have been upturned by human-caused climate change. "Some of America's most iconic species, from moose to sandhill cranes to sea turtles, are seeing their homes transformed by rapid climate change," stated Dr. Amanda Staudt, climate scientist at the National Wildlife Federation. Feb 15, 2013

Living on Earth: STARVING POLAR BEARS
Polar bears have long been the poster species for the problem of climate change. But a new paper in Conservation Letters argues that supplemental feeding may be necessary to prevent polar bear populations from going extinct. Polar bear expert Andrew Derocher from the University of Alberta joins host Steve Curwood to discuss how we can save the largest bear on the planet. http://www.loe.org/shows/segments.html?programID=13-P13-00007&segmentID=2
How deep is your love for this song? Go deeper.

Humble (and Racist) Beginnings
In 1859, Bryant’s Minstrels needed to spice up their act. Their material was growing stale; audiences were tired of the same old song and dance. So the troupe owners asked one of the performers, Daniel Decatur Emmett, to put together a new “walk-around.” Walk-arounds were audience favorites, high-energy finales in which the cast members took turns trying to out-sing or out-dance the performer before them. Emmett accepted the assignment, and legend has it that within a day he had written “Dixie.”

The song was like many other minstrel show songs of the time. It was narrated by a Southern slave who told a tale about “Ole Missus” and her husband Will. The specifics of the tale were not important, though. In fact, if you read the lyrics today, it’s hard to understand why audiences found them so hilarious. But that’s because the humor in minstrel show songs had little to do with the words sung. Instead, audiences were entertained by the manner in which the song and dance routines were performed. In minstrel shows, white actors put on blackface by covering their faces with burnt cork and then talked, sang, and danced in a manner believed typical of African slaves. These imitations were grotesque stereotypes, crude and racist.

And “Dixie” was typical of the formula. Emmett’s narrator sang in the broken English believed typical of slaves (“Old Missus marry Will-de-weaber / Willium was a gay deceaber”), and the words suggested that slaves were fat and happy in their lives (“Dar's buck-wheat cakes an 'Ingen' batter, makes you fat or a little fatter”). Most important, the song suggested that, contrary to all the talk of reformers and abolitionists, slaves were not interested in trading slavery for freedom. Far from it, according to the song: they wished they were “in Dixie, Hooray! 
Hooray!” Despite what you may think, though, Dan Emmett was no friend of slavery—his father worked on the Underground Railroad. But that did not prevent him from writing a song aimed at tickling the same racist funny bone. Nor did it prevent Northern audiences from enjoying the song. In fact, shortly after it debuted at Mechanics’ Hall in New York City, the song became a national hit. By 1860, people throughout the country were “whistlin’ Dixie.”

But almost as quickly as “Dixie” became a hit, it was surrounded by controversy. Southern secessionists, intent on withdrawing from the Union now that Abraham Lincoln had been elected president, embraced the song as an anthem. Most of the lyrics were unimportant, but one line in particular resonated with their cause: “In Dixie Land I'll take my stand to live and die in Dixie.” And so when South Carolinians met in a special convention to decide whether to withdraw from the Union, a band played “Dixie” every time a delegate voted in favor of secession. And two months later, when Jefferson Davis was inaugurated president of the Confederate States of America, the band also played “Dixie.” By the time the Civil War had commenced, “Dixie” was the Confederacy’s unofficial anthem. One Confederate officer, Albert Pike, even wrote a new set of lyrics, transforming the song into a battle cry:

Southrons, hear your Country call you
Up, lest worse than death befall you
To arms! To arms! To arms, in Dixie

It’s easy to see why Southerners believed it was their song. After all, the song’s setting was a Southern plantation. And “Dixie” was a common nickname for the South, although it’s not exactly clear why. Some believe the label may have come from ten-dollar bank notes circulated by a New Orleans bank. Referred to as “Dix”—French for ten—they were only accepted as payment for transactions in regions close to New Orleans. In other words, “Dixie Land” was that part of the Deep South that honored these notes as tender. 
Others argue, however, that Dixie became another name for the South after Charles Mason and Jeremiah Dixon completed their survey in 1767, establishing the border between Pennsylvania and Maryland. Since slavery was soon abolished in Pennsylvania, the Mason-Dixon Line became the border between free and slave states. Yet others have argued that “Dixie Land” was a paradise-like plantation owned by a generous Manhattan slave-owner named “Mr. Dix” (or in some places “Mr. Dixy” or “Mr. Dixie”) in the early part of the century, before slavery became illegal in New York in 1827. Allegedly, rumors circulated around that time among slaves that he was a master so kind that his own slaves refused to leave or run away. Hence “Dixie Land” grew to be known as a place of refuge and happiness for slaves somewhere in the North.

War of the Words
However it originated, folks came to associate Dixie with the South, and “Dixie Land” became known as a distinctive region that had built its economy on slave labor and that stretched from Virginia south to Florida and west to Texas. But that did not prevent Northerners from arguing that they had an equal claim on the song. After all, it had been written by a Northerner (Emmett was from Ohio) and debuted in a Northern city (New York was about as Union as it gets). Heck, even Abe Lincoln loved “Dixie”; he had used it regularly on whistle stops during his 1860 campaign. And so after Southerners adopted the song for their secession soundtrack, Northerner Francis J. Crosby answered with a set of pro-Union lyrics:

On! ye patriots to the battle
Hear Fort Moultrie's cannon rattle
Then away, then away, then away to the fight!
Go meet those Southern Traitors with iron will
And should your courage falter boys
Remember Bunker Hill
Hurrah! Hurrah! Hurrah!
The stars and stripes forever!
Hurrah! Hurrah!
Our Union shall not sever! 
In Crosby’s rendition, Northern soldiers were told that their battle against Southern rebellion was actually part of a larger war launched in 1775. Fort Moultrie was the Patriot fort outside Charleston, South Carolina, that had played such a dramatic role in the defense of the city against British invaders during the American Revolution. Now that South Carolina was the center of insurrection, Crosby urged Northern soldiers to remember these nation-founding battles—Fort Moultrie and Bunker Hill—and meet the Southern traitors with an iron will. They were fighting to preserve the Union that earlier Patriots had secured through revolution; if they adopted their forefathers’ courage, “our union shall not sever.”

In the end, the North won the musical battle as well as the military war. Shortly after announcing the surrender of Confederate general Robert E. Lee, Lincoln ordered the band to strike up “Dixie.” The song, he said, had been “fairly captured.” While some say Lincoln had the song played as a way to rub in his victory, and others say that he told the band to play “Dixie” because he missed hearing it himself, most historians agree that it was in fact part of a broader political plan. Once the war was over, Lincoln wanted nothing more than the successful reunion of two American peoples ravaged by war. Having the band play “Dixie” was symbolic of his desire to bring the broken pieces of their great nation back together.

Just a Little Bit of History Repeating
But all of the controversy surrounding the song did not end when Lee surrendered at Appomattox Court House. Roughly a century later, “Dixie” excited a new set of arguments. This second round of debates began in the 1960s when African American students at Southern universities objected to the playing of “Dixie” at school events. The song was implicitly racist, they argued, rooted in the minstrel tradition that grotesquely mocked slaves and their degraded lives. 
And as the anthem of the Confederacy, the song represented the South’s attempts to retain several million African Americans in perpetual bondage. Nonsense, answered the song’s defenders; “Dixie” was just a harmless expression of Southern heritage. However it might have originated or been heard 100 years before, it had become nothing more than a celebration of the South, a proud and distinctive part of America. Banning “Dixie,” they said, was “political correctness” run wild, an overly sensitive reaction to an important expression of the South’s culture and history. The argument was not restricted to college campuses. Several politicians joined the students in arguing that the song should be banned from public ceremonies, just as many countered that the song was a harmless piece of Americana. Even Supreme Court Chief Justice William Rehnquist included “Dixie” on the song list for the sing-along he hosted every year at a legal conference.

So which is it? Is the song a racist and painful expression of past sins? Or is it an important piece of American history and culture, an expression of the “Old South” that can be sung without endorsing the attitudes that may have originally lain beneath it? The debate continues to be waged today, and some of these questions are more easily answered than others. Despite 19th-century attempts to rewrite the song’s history, “Dixie” is not exactly a positive expression of the Old South. It was not written on some Southern porch, nor did it emerge from some ancient folk melody. It was written by a Northerner and first performed in a New York theater, and many historians argue that it was intended to be an ironic parody of Southern values, a joke at the plantation owners’ expense. But clearly the South had a different idea of what the song meant. It was not only set within Southern culture, it became an anthem as the Confederacy launched its war for separation from the Union (another reason it should no longer be celebrated, some say). 
There’s no denying the racist tone of the old minstrel song. With its crude portrait of slaves and cheery view of slave life, the song celebrates rather than mourns a tragic part of American history. But on the other hand, the part of the song most commonly sung today is the refrain, which, in isolation, makes a simpler statement: “I wish I was in Dixie, Hooray! Hooray!”

Old Song, New Views
On top of all the other controversy surrounding “Dixie,” recent research suggests that the song may be representative of a different legacy of racism: America’s failure for centuries to acknowledge all of the contributions to American life made by African Americans. According to some historians, Daniel Emmett did not actually write the song; they claim he learned “Dixie” from members of the Snowden Family Band, a group of African American performers who lived near his family farm in Ohio. There’s plenty of evidence to suggest that Emmett knew the Snowdens; in fact, later in his career he actually performed alongside them. And the Snowden family has long maintained that their ancestors wrote the famous minstrel hit, although no claims were made to this effect during Emmett’s lifetime. Not every music historian has embraced this theory. In fact, most argue that the evidence is more circumstantial than verifiable. Yet the possibility serves as a reminder that African Americans’ contributions to our national culture were buried for centuries. And if ultimately proven true, the Snowdens’ authorship would provide yet another, albeit ironic, example of the enormous impact African Americans made not only on American music, but also on Southern identity. It seems as though we may be in the same place as we were 150 years ago in terms of the song’s place in American society. Is “Dixie” another reminder of America’s painful past that should be mourned rather than celebrated? Or is it an important expression of Southern culture that deserves to be honored and performed? 
Ultimately, it’s for you to decide. In the meantime, you might begin by asking a different set of questions. What sort of historical artifact is the song? What does it tell us about American culture and the ways in which 19th-century Americans composed and used music? What does it say about American popular entertainment and both Northern and Southern audiences? What does the song say about its composer, a Northerner and son of an abolitionist? What does it say about Abe Lincoln? And what might it say about the Snowdens and other African American families like them? To play or not to play “Dixie”—somehow that is still the question.
Well, "In Memory of W.B. Yeats" is about as traditional as an elegiac title can be (and this is an elegy, a poem written in memory of a deceased person). It brings to mind another uber-famous (and uber-long) elegy called In Memoriam, A.H.H., written by Tennyson about his friend who passed away. If you've read what we have to say in our "Summary" section, you'll immediately see the irony in this move: Auden's title may be traditional, but his poem is anything but. He's shaking things up stylistically, which is perhaps why he chooses to ground us right away by directing our attention to the object of the poem, William Butler Yeats. Yeats was a poet, playwright, and important political figure in the late 19th and early 20th century. If you're interested, you can find out more about him by checking out our guides on his poetry. Here are a few:
Brora: the industrial capital of Sutherland
A small pocket of Jurassic rocks containing coal on the east coast of Sutherland at Brora gave rise to its place in history as the industrial capital of the Highlands. Coal was mined here for 400 years. The first reference to coal in Brora occurs in a Sutherland Charter of 1529. The last pit closed in 1974. Coal powered a range of local industries including brickworks, textile making, distilleries and of course salt panning. No trace now remains of the coal industry upon which the one-time prosperity of Brora so largely depended.

“The ancient glory of Brora...laid bare”
These words from the Inverness Advertiser in 1869 were reporting an event in which “the sand banks along the shore have been considerably encroached upon, and at Port Cheaniraidh (winter port), a mile to the west of the river, the action of the sea against the banks has laid bare a row of buildings which must have been for ages lain imbedded in the sea… Numbers of people flock to visit this long hidden relic of the ancient glory of Brora”.

Read more about the recent uncovering of long hidden buildings and new discoveries about Brora’s former salt pan sites in our Archaeology pages. Local participation and events are the foundation and the strength of the Brora Saltpans Project. Find out more, and find yourself, on our Gallery pages.
Paper recycling programme
Environmental responsibility is fundamental to Shred-it’s corporate mission and values. Once materials have been shredded on site at a customer’s facility, they are subsequently baled and recycled into a variety of useful paper products. This process ensures that our customers’ confidential information is always disposed of in the most secure way possible, whilst helping save the environment in the process. For every two security consoles filled with paper, one tree is saved through the recycling process. In 2008, Shred-it’s UK recycling programme saved over 630,000 trees through the responsible recycling of 35,000 tonnes of paper. So if recycling is part of your organisation’s environmental policy, we can help you fulfil your green commitments. At the end of each year we provide our regularly-scheduled shred clients with an Environmental Certificate indicating the number of trees saved by using Shred-it’s services.

Shred-it Trucks and Eco-Friendly Document Shredding
Shred-it uses environmentally friendly hydraulic fluids (ENVIRON MV 32) in all of our new shredding vehicles, which are:
- Inherently biodegradable
- Non-toxic, non-carcinogenic, low odour
- Free of the heavy metals that contaminate ground and waste waters
- Longer-lived than vegetable-oil-based fluids, which decreases consumption

Shred-it also uses “Idle Down” in our new technology as part of our eco-friendly paper shredding. All of our new technology meets EURO 5 Emission Standards and uses AdBlue® technology to provide cleaner emissions. All of our security consoles are constructed with 100% recycled-wood particle board. Shred-it is continually developing new technologies and processes to help further reduce our emissions, fuel consumption and carbon footprint.

Interested in a greener tomorrow? Want to see how paper shredding helps save the environment? For more information on our commitment to the environment, download our Environmental Fact Sheet. 
Contact Us today to find out how paper shredding can help save the environment – and stay secure in the process.
With parallel processing, high-quality images and animations can be rendered in reasonable times. This course reviewed the basic issues involved in rendering within a parallel or distributed computing environment, presented various methods for dividing a rendering problem into subtasks and distributing them efficiently to independent processors, then reviewed the strengths and weaknesses of multiprocessor machines and networked render farms for graphics rendering. Case studies demonstrated practical ways of dealing with the issues involved.

Prerequisites: Some knowledge of ray tracing, radiosity, and photon maps. No prior knowledge of parallel or distributed processing is required, although previous experience in the area would be advantageous.

Topics: Basic issues involved in rendering in a parallel environment (task subdivision, load balancing, task communication, task migration, and data management); parallel rendering systems for various task subdivision techniques in two hardware environments (a traditional multiprocessor machine and a render farm); and several successful applications of ray tracing, radiosity, and photon maps.

University of Bristol
Henrik Wann Jensen
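To give a flavor of the simplest subdivision technique the course covers, here is a toy sketch (not taken from the course materials) that splits an image into tiles and distributes them across worker processes; the `shade()` function is a placeholder standing in for real per-pixel work such as ray tracing:

```python
# Static task subdivision for parallel rendering: one task per tile,
# distributed across CPU cores by multiprocessing.Pool.
from multiprocessing import Pool

TILE = 16  # tile edge length in pixels


def shade(x, y):
    # Placeholder computation; a real renderer would trace a ray here.
    return (31 * x + 17 * y) % 256


def render_tile(origin):
    # Render one TILE x TILE block; returns (origin, pixel rows).
    ox, oy = origin
    return origin, [[shade(ox + dx, oy + dy) for dx in range(TILE)]
                    for dy in range(TILE)]


def render(width, height):
    # Enumerate tile origins, farm them out, then reassemble the image.
    # Pool.map handles the distribution across available processors.
    tiles = [(x, y) for y in range(0, height, TILE)
                    for x in range(0, width, TILE)]
    image = [[0] * width for _ in range(height)]
    with Pool() as pool:
        for (ox, oy), pixels in pool.map(render_tile, tiles):
            for dy, row in enumerate(pixels):
                image[oy + dy][ox:ox + TILE] = row
    return image


if __name__ == "__main__":
    img = render(64, 64)
    print(len(img), len(img[0]))
```

In a real system the tiles would be sized and scheduled to balance load, since scene complexity (and therefore cost per tile) varies across the image; that is exactly the load-balancing problem the course description mentions.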
Do it when you're 18!
Now that you are 18, make sure you have your say – use your right to vote. If you don't, you will lose your chance to influence the way things are run in the country.

You can vote:
- When you're 18 and have registered to vote.
- You also need to be a British, Irish, Commonwealth or European Union citizen.
Remember you can only vote at elections if your name appears in the Register of Electors.

How do you register?
An electoral registration form is sent to every household in September/October every year. Make sure your name is included on this form before it is returned to us. You should be included on this form if you are:
- 16 years of age or over (as the Register of Electors runs from 1st December to 30th November every year, we need the dates of birth of persons just under 18 to make sure they are on the Register in time for them to vote). Although your name may appear on the Register of Electors before you are 18, you will not be allowed to vote until you are 18.
- a British, Irish, Commonwealth or European Union citizen

The information we receive on the registration forms returned from each household is then transferred into a list called the Electoral Register. This Register is published every year on the 1st December. If you move house after the register has been compiled, you can apply to have your details changed by completing a Rolling Registration form.

There are two variations of this register: one is called the full register and the other is the edited register. The full register lists everyone who is registered to vote. It is used for elections and by certain organisations to check credit applications and to prevent crime. The edited register lists only some of the people who are registered to vote. You can choose whether you want to be on this register by ticking a box on the registration form that's sent to your home. Anyone can purchase this register and use it for any purpose. 
Some Points to Remember
- It is a legal requirement to register to vote.
- It is not compulsory to vote, but it is compulsory to register.
- You must register annually.
- If you apply for credit, credit reference agencies can use the register to check your details, and if you are not registered you may be refused credit.
Bombay (Mumbai) was originally established on seven islets off the coast of India, but the separating waterways have been filled in to connect these islands to each other and to the much larger Salsette Island. Now the site of the city is essentially a peninsula (although technically still an island). A bridge across Thana Creek brings the extensive development on the mainland into the metropolitan area of Bombay. What is now Mumbai was known to the Greek Ptolemy as Heptanesia, "seven islands." But in even more ancient times the area was known for the temple to the goddess Mumba, a consort of Shiva. The name Bombay stems from the 16th century, when the Portuguese acquired control of the area. Bombay is a corruption of Mumbai. The Portuguese monarchy transferred control of the area to the British monarchy in 1661 as part of the dowry of the sister of the Portuguese king when she married King Charles II of England. The monarchy transferred control in 1668 to the East India Company. Initially Bombay was far less important to the East India Company than its trading stations in Calcutta and Madras. But Bombay began to grow as a result of refugees from the region seeking the protection that the British could provide. Over the years Bombay grew to be the most important center of trade, commerce, manufacturing and finance for India. Bombay was particularly important in the cotton textile industry in the 19th century, but that industry is now less significant. Bombay has well-developed industries in vehicles, chemicals, electronics, paper making, publishing and food processing. Bombay is the home of the Reserve Bank of India, the central bank of the nation. Private banking is also concentrated in Bombay. In effect, Bombay is the New York and Chicago of India.

HOME PAGE OF Thayer Watkins
The AutoCAD Command System:
The first thing to understand about AutoCAD is how the command system operates. Everything in AutoCAD is achieved by issuing a command. Generally, commands are entered by typing the command name at the command line, clicking a toolbar button, or selecting an item from a pull-down menu. In some cases, a particular command can be entered in any one of those three ways depending on your personal preference. To complicate the matter further, some commands can be typed in with an abbreviated alias, which is often the quickest way to get the command started.

Once a command has been issued, AutoCAD displays a prompt (or instruction) on the command line indicating your options for proceeding with that command. To see how the command process operates, watch the command line as you click on any toolbar button. You will see the corresponding command printed at the command prompt and then, as AutoCAD responds to the command, it will display some kind of instruction or prompt (on the same line) and then wait for you to respond. Remember to press the Esc key if you wish to cancel the command (unless you want to carry it through to see what happens).

The Command Line Prompt always follows the same “pattern”. AutoCAD first tells you what it expects you to do (“enter a point”, “type in a value”, etc.). If there are other alternative actions you can take as part of that command sequence, those are listed next within square brackets, each separated by a slash character. You can select any of those alternative options by typing only the capitalized letters of the option. At the end of the prompt line, if appropriate, AutoCAD displays the default answer to the prompt within angle brackets. If you simply press the Enter key, AutoCAD will use that default value.

To develop your understanding of this process further, choose any of the drawing tools, click the button while watching the command prompt, and then pick points in the drawing area in response to the prompts. You should be able to draw things with very little further explanation!
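For example, here is how that pattern looks for a typical drawing command (the CIRCLE command's prompts, reproduced from memory, so the exact wording may vary between releases):

```
Command: CIRCLE
Specify center point for circle or [3P/2P/Ttr (tan tan radius)]:
Specify radius of circle or [Diameter] <10.0000>:
```

The first prompt states the expected action, the square brackets list the alternative options (type the capitalized letters, such as 3P or D, to choose one), and the angle brackets in the second prompt show the default radius that pressing Enter would accept.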
Linda Colley speaks about 'When the British Constitution becomes un-written' at the Festival of Ideas, Clip 2
Duration: 5 mins 5 secs
About this item
Linda Colley discusses the British Charter. Britain, it is often claimed, is the world's only democracy without a written constitution. Yet, historically, Britain has influenced the writing of constitutions in other countries more than anywhere else on the globe. Linda Colley examines this paradox, and discusses how, why and when the notion of Britain's own "unwritten constitution" evolved.
Collection: Festival of Ideas 2010
Publisher: University of Cambridge
Copyright: University of Cambridge
Keywords: law; festival of ideas; linda colley
When selecting a solar electric backup power system for your home or business, it is important to know approximately how much power you will need to have available to power emergency loads during a blackout. Unlike an off-grid solar system, which must replenish the power consumed each day from available sunlight, a backup power system need only supply power for the anticipated duration of a blackout, which in most cases is only a few hours. In fact, in most cases solar panels are not necessary in such a system, because the backup power system's batteries can be recharged in anticipation of the next power failure using utility power once it has returned to normal operation. The only advantage that solar panels would offer in such a system would be in the event of a prolonged outage (more than 24 hours).

In a typical backup power system, batteries store the energy that is needed to power the designated emergency loads for a pre-determined period of time. Just as a small UPS (Uninterruptible Power Supply) for your computer can supply power for 5 to 15 minutes, allowing you time to safely shut your computer off, a backup power system supplies power for hours or even days, allowing you to operate your home or business until the power has returned.

In order to select the appropriate system for your backup power needs, it is important to match your anticipated power consumption with your backup system's battery bank capacity. To correctly size a system for your home or business, you must first determine the wattage of each item that you wish to power during a power failure and also determine how long each item will run. For example, a 60 watt light bulb that is used for 5 hours will consume 300 watt hours. Watts multiplied by time equals watt hours. 
A microwave oven that consumes 800 watts and runs for 15 minutes consumes 200 watt-hours: 800 watts times 0.25 hours equals 200 watt-hours. So to correctly size a system, simply make a list of each item that you intend to run. Next to each item, write down its power consumption in watts, and next to that, write down the amount of time the item will run during the power failure. Then multiply the item's watts by the amount of time it will run and write that number down in the last column. After you have calculated the watt-hour consumption for each item, simply add each item's watt-hour rating together and you'll have your total consumption. Once we have this information, it's a simple matter to match the number of batteries that you will need in order to store enough power for what you need.

Choosing an inverter for your backup power system

DC-to-AC inverters are available as inverter units only, or may have additional circuits added that allow them to charge batteries when an external AC source is fed into the inverter. This type of configuration is known as an inverter/charger. In addition to the charger circuit, these units will typically include a device known as an AC transfer switch. The advantage of purchasing an inverter/charger with a transfer switch is that it can function as a highly reliable automatic power backup unit, or UPS. When the utility company is operating normally, the inverter/charger passes the utility company's power through its internal transfer switch to your appliances and maintains a charge on your battery bank. As soon as the utility power fails, the inverter automatically stops charging the battery bank and begins producing its own AC power, which is passed on to your appliances through its internal AC transfer switch. When the utility power returns, the inverter goes back to charging the batteries and again passes the utility power through the transfer switch to your appliances.
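As a rough illustration, the watt-hour worksheet and battery matching described above can be sketched in a few lines of Python. The appliance list, the 12 V / 100 Ah battery size, and the 50% depth-of-discharge limit are illustrative assumptions, not figures from this guide.

```python
import math

# Example loads: (name, watts, hours of use during the outage).
# These wattages and run times are illustrative assumptions.
loads = [
    ("60 W light bulb", 60, 5.0),
    ("Microwave oven", 800, 0.25),
    ("Refrigerator", 150, 8.0),
]

total_wh = 0.0
for name, watts, hours in loads:
    wh = watts * hours                  # watts x time = watt-hours
    total_wh += wh
    print(f"{name}: {watts} W x {hours} h = {wh:.0f} Wh")

print(f"Total consumption: {total_wh:.0f} Wh")  # 300 + 200 + 1200 = 1700 Wh

# Rough battery match, assuming 12 V, 100 Ah batteries discharged to
# no more than 50% of capacity (600 Wh usable per battery).
usable_wh_per_battery = 12 * 100 * 0.50
batteries_needed = math.ceil(total_wh / usable_wh_per_battery)
print(f"Batteries needed: {batteries_needed}")  # 1700 / 600 -> 3 batteries
```

In practice you would also allow headroom for inverter losses and battery aging, so treat a result like this as a minimum, not a final design.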
Most inverter/chargers switch from utility power to inverter power and back again so fast that most of your appliances will hardly miss a beat. Sizing the wattage rating of an inverter for your backup system is a simple matter of determining the total wattage of the appliances that you would typically be operating on a concurrent basis and adding a buffer of at least 500 watts. In other words, if there is a possibility that you would have your 600-watt microwave, a 200-watt coffee maker, and a 200-watt stereo running at the same time, you would be drawing 1,000 watts, so you should choose a 1,500-watt inverter. An inverter should never be run at its maximum rating for prolonged periods of time; doing so will shorten the life of the inverter. Another issue to consider is the amount of surge current that your appliances draw. Any appliance that uses a transformer, motor, or other magnetic device draws what is known as surge current at startup. These devices are otherwise known as inductors. Inductors oppose changes in the flow of electrical current. When an inductor is first energized, a great deal of inertia must be overcome for the magnetic field surrounding the inductor to reach its maximum strength. Just as a car at rest is difficult to push by hand at first and gets easier to push as it gets going, an inductor takes a great deal of current to get started but draws less current once it is running. Devices such as microwave ovens, refrigerator compressors, fan motors, and large transformer-based appliances can draw from 3 to 6 times their normal wattage in an initial surge of current. This initial surge typically lasts only milliseconds, but it is enough to shut down an inverter that is not sized properly. Thus it is important to choose an inverter that has enough surge capacity to start such appliances. For example, a meager 600-watt microwave oven will typically require a 2,000-watt inverter just to get it started.
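The inverter sizing rules above can be sketched the same way. This is a minimal check, not a design tool: the continuous rating adds the 500-watt buffer suggested above to the concurrent load, and the surge check uses the low end (3x) of the 3-to-6x startup range mentioned for inductive appliances.

```python
# Concurrent loads from the example above: microwave, coffee maker, stereo (W).
concurrent_watts = [600, 200, 200]

running_watts = sum(concurrent_watts)           # 1000 W drawn at once
buffer_watts = 500                              # at least 500 W of headroom
min_continuous_rating = running_watts + buffer_watts
print(f"Continuous rating needed: {min_continuous_rating} W")  # 1500 W

# Inductive appliances can surge 3-6x their running wattage at startup.
# 3x is an illustrative low-end assumption; the microwave is the largest
# inductive load in this example.
surge_factor = 3
min_surge_rating = max(concurrent_watts) * surge_factor
print(f"Surge capacity needed: {min_surge_rating} W")  # 600 W x 3 = 1800 W
```

An inverter chosen for this example would need a continuous rating of at least 1,500 watts and a surge rating of at least 1,800 watts, which is consistent with the 2,000-watt recommendation above for starting a 600-watt microwave.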
If all of this information seems a little overwhelming, don't worry: our friendly, knowledgeable staff are here to help you every step of the way. And finally, be cautious when purchasing a backup power system for your home or place of business on the Internet. Get to know who you're dealing with. Many of the backup power kits available on the Internet are actually home-made configurations. Many websites that would appear to be large, reputable companies are actually home-based affairs that operate from an impossible-to-trace PO box. Remember, you're about to give this individual your personal information and, more importantly, your credit card number. Is his company solvent? Does he have liability insurance? Does he really have the items that you're about to purchase in stock? Does he have any stock at all? With the advent of the energy crisis, dozens of home-based dealers with little or no formal training or experience have cropped up on the Internet. Even if you don't live nearby, ask the dealer for directions to his place of business so you can stop by and take a look at some products. If you can't get directions or a straight answer from him, then in our opinion, steer clear! It's important to remember that it takes only minutes to upload a website to the Internet and only seconds to take it down.
<urn:uuid:b26c9f8c-855d-4238-886a-cd60d6fe52ec>
{ "date": "2013-05-24T01:44:17", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9384167194366455, "score": 2.796875, "token_count": 1599, "url": "http://www.solarkits.com/gettingstartedbackup.htm" }
Open House: An informal, unstructured Public Meeting during which information stations with exhibits convey important project information and Department of Transportation and consultant personnel are available to answer the public’s questions. Ordinary High Water (OHW): The ordinary high water mark is the elevation at which US Army Corps of Engineers jurisdiction begins. The OHW mark is the line on the shore established by the fluctuations of water and indicated by physical characteristics such as an impressed natural line, shelving, a vegetation change or debris lines. Ozone: A colorless gas with a sweet odor. Ozone is not a direct emission from transportation sources but rather a secondary pollutant formed when hydrocarbons (HC) and nitrogen oxides (NOx) combine in the presence of sunlight. Ozone is associated with smog or haze conditions. Although ozone in the upper atmosphere protects the earth from harmful ultraviolet rays, ground level ozone produces an unhealthy environment in which to live.
<urn:uuid:a46b25b7-7b4c-4abe-ba29-012fccd2ace0>
{ "date": "2013-05-24T01:52:04", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9204553961753845, "score": 3.25, "token_count": 193, "url": "http://www.southcapitoleis.com/global/glossary/o.aspx" }
Problems of Philosophy

Chapter 5 - Knowledge by Acquaintance and Knowledge by Description

After distinguishing two types of knowledge, knowledge of things and knowledge of truths, Russell devotes this fifth chapter to an elucidation of knowledge of things. He further distinguishes two types of knowledge of things, knowledge by acquaintance and knowledge by description. We have knowledge by acquaintance when we are directly aware of a thing, without any inference. We are immediately conscious and acquainted with a color or hardness of a table before us, our sense-data. Since acquaintance with things is logically independent from any knowledge of truths, we can be acquainted with something immediately without knowing any truth about it. I can know the color of a table "perfectly and completely when I see it" and not know any truth about the color in itself. The other type of knowledge of things is called knowledge by description. When we say we have knowledge of the table itself, a physical object, we refer to a kind of knowledge other than immediate, direct knowledge. "The physical object which causes such-and-such sense-data" is a phrase that describes the table by way of sense-data. We only have a description of the table. Knowledge by description is predicated on something with which we are acquainted, sense-data, and some knowledge of truths, like knowing that "such-and-such sense-data are caused by the physical object." Thus, knowledge by description allows us to infer knowledge about the actual world via the things that can be known to us, things with which we have direct acquaintance (our subjective sense-data). According to this outline, knowledge by acquaintance forms the bedrock for all of our other knowledge. Sense-data is not the only instance of things with which we can be immediately acquainted. For how would we recall the past, Russell argues, if we could only know what was immediately present to our senses?
Beyond sense-data, we also have "acquaintance by memory." Remembering what we were immediately aware of makes it so that we are still immediately aware of that past, perceived thing. We may therefore access many past things with the same requisite immediacy. Beyond sense-data and memories, we possess "acquaintance by introspection." When we are aware of an awareness, like in the case of hunger, "my desiring food" becomes an object of acquaintance. Introspective acquaintance is a kind of acquaintance with our own minds that may be understood as self-consciousness. However, this self-consciousness is really more like a consciousness of a feeling or a particular thought; the awareness rarely includes the explicit use of "I," which would identify the Self as a subject. Russell abandons this strand of knowledge, knowledge of the Self, as a probable but unclear dimension of acquaintance. Russell summarizes our acquaintance with things as follows: "We have acquaintance in sensation with the data of the outer senses, and in introspection with the data of what may be called the inner sense—thoughts, feelings, desires, etc.; we have acquaintance in memory with things which have been data either of the outer senses or of the inner sense. Further, it is probable, though not certain, that we have acquaintance with Self, as that which is aware of things or has desires towards things." All these objects of acquaintance are particulars, concrete, existing things. Russell cautions that we can also have acquaintance with abstract, general ideas called universals. He addresses universals more fully later in chapter 9. Russell allocates the rest of the chapter to explaining how the complicated theory of knowledge by description actually works. The most conspicuous things that are known to us by description are physical objects and other people's minds. 
We approach a case of having knowledge by description when we know "that there is an object answering to a definite description, though we are not acquainted with any such object." Russell offers several illustrations in the service of understanding knowledge by description. He claims that it is important to understand this kind of knowledge because our language uses depends so heavily on it. When we say common words or proper names, we are really relying on the meanings implicit in descriptive knowledge. The thought connoted by the use of a proper name can only really be explicitly expressed through a description or proposition. Bismarck, or "the first Chancellor of the German Empire," is Russell's most cogent example. Imagine that there is a proposition, or statement, made about Bismarck. If Bismarck is the speaker, admitting that he has a kind of direct acquaintance with his own self, Bismarck might have voiced his name in order to make a self-referential judgment, of which his name is a constituent. In this simplest case, the "proper name has the direct use which it always wishes to have, as simply standing for a certain object, and not for a description of the object." If one of Bismarck's friends who knew him directly was the speaker of the statement, then we would say that the speaker had knowledge by description. The speaker is acquainted with sense-data which he infers corresponds with Bismarck's body. The body or physical object representing the mind is "only known as the body and the mind connected with these sense-data," which is the vital description. Since the sense-data corresponding to Bismarck change from moment to moment and with perspective, the speaker knows which various descriptions are valid. Still more removed from direct acquaintance, imagine that someone like you or I comes along and makes a statement about Bismarck that is a description based on a "more or less vague mass of historical knowledge." 
We say that Bismarck was the "first Chancellor of the German Empire." In order to make a valid description applicable to the physical object, Bismarck's body, we must find a relation between some particular with which we have acquaintance and the physical object, the particular with which we wish to have an indirect acquaintance. We must make such a reference in order to secure a meaningful description. To usefully distinguish particulars from universals, Russell posits the example of "the most long-lived of men," a description which wholly consists of universals. We assume that the description must apply to some man, but we have no way of inferring any judgment about him. Russell remarks, "all knowledge of truths, as we shall show, demands acquaintance with things which are of an essentially different character from sense-data, the things which are sometimes called 'abstract ideas', but which we shall call 'universals'." The description composed only of universals gives no knowledge by acquaintance with which we might anchor an inference about the longest-lived man. A further statement about Bismarck, like "The first Chancellor of the German Empire was an astute diplomatist," is a statement that contains particulars and asserts a judgment that we can only make in virtue of some acquaintance (like something heard or read). Statements about things known by description function in our language as statements about the "actual thing described;" that is, we intend to refer to that thing. We intend to say something with the direct authority that only Bismarck himself could have when he makes a statement about himself, something with which he has direct acquaintance. Yet, there is a spectrum of removal from acquaintance with the relevant particulars: from Bismarck himself, "there is Bismarck to people who knew him; Bismarck to those who only know of him through history" and at a far end of the spectrum "the longest lived of men." 
At the latter end, we can only make propositions that are logically deducible from universals, and at the former end, we come as close as possible to direct acquaintance and can make many propositions identifying the actual object. It is now clear how knowledge gained by description is reducible to knowledge by acquaintance. Russell calls this observation his fundamental principle in the study of "propositions containing descriptions": "Every proposition which we can understand must be composed wholly of constituents with which we are acquainted." Indirect knowledge of some particulars seems necessary if we are to expressively attach meanings to the words we commonly use. When we say something referring to Julius Caesar, we clearly have no direct acquaintance with the man. Rather, we are thinking of such descriptions as "the man who was assassinated on the Ides of March" or "the founder of the Roman Empire." Since we have no way of being directly acquainted with Julius Caesar, our knowledge by description allows us to gain knowledge of "things which we have never experienced." It allows us to overstep the boundaries of our private, immediate experiences and engage a public knowledge and public language. This knowledge by acquaintance and knowledge by description theory was a famous epistemological problem-solver for Russell. Its innovative character allowed him to shift to his moderate realism, a realism ruled by a more definite categorization of objects. It is a theory of knowledge that considers our practice of language to be meaningful and worthy of detailed analysis. Russell contemplates how we construct a sense of meaning about objects remote from our experience. The realm of acquaintance offers the most secure references for our understanding of the world. Knowledge by description allows us to draw inferences from our realm of acquaintance but leaves us in a more vulnerable position. 
Since knowledge by description also depends on truths, we are prone to error about our descriptive knowledge if we are somehow mistaken about a proposition that we have taken to be true. Critics of this theory have held that Russell's hypothesis of knowledge by description is confusing. His comments when defining sense-data, that the physical world is unknowable to us, contradict his theory of knowledge by descriptions. He implies that "knowledge by description" is not really a form of knowledge since we can only know those things with which we are acquainted and we cannot be acquainted with physical objects. Russell's theory amounts to the proposition that our acquaintance with mental objects appears related in a distant way to physical objects and renders us obliquely acquainted with the physical world. Sense-data are our subjective representations of the external world, and they negotiate this indirect contact. While innovative, Russell's theory of knowledge by description is not an attractive theory of knowledge. It is clearly unappealing because our impressions of the real world, on his view, are commensurate with muddy representations of reality. Though we have direct access to these representations, it seems impossible to have any kind of direct experience of reality. Reality, rather, consists in unconscious, inferential pieces of reasoning.
<urn:uuid:0abbeab1-fafa-4389-9f7e-7573eb693c9e>
{ "date": "2013-05-24T02:06:11", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9639775156974792, "score": 3.21875, "token_count": 2196, "url": "http://www.sparknotes.com/philosophy/problems/section5.rhtml" }
How to Become an Airline or Commercial Pilot

Students often use flight simulators to learn how to fly. Many pilots learn to fly in the military, but a growing number now earn an associate’s or bachelor’s degree from a civilian flying school. All pilots who are paid to transport passengers or cargo must have a commercial pilot's license and an instrument rating. To qualify for a commercial pilot’s license, applicants must be at least 18 years old and have at least 250 hours of flight experience.

Education and Training

Military veterans have always been an important source of experienced pilots because of the extensive training and flight time that the military provides. However, an increasing number of people are becoming pilots by attending flight school or taking lessons from a Federal Aviation Administration (FAA) certified instructor. The FAA certifies hundreds of civilian flight schools, including some colleges and universities that offer pilot training as part of an aviation degree. In addition, most airline companies require at least 2 years of college and prefer to hire college graduates. In fact, most pilots today have a bachelor’s degree. Because the number of college-educated applicants continues to increase, many employers are making a college degree an entry-level requirement. Preferred courses for airline pilots include English, math, physics, and aeronautical engineering. Because pilots must be able to make quick decisions and react appropriately under pressure, airline companies will often reject applicants who do not pass psychological and aptitude tests. Once hired by an airline, new pilots undergo additional company training that usually includes 6-8 weeks of ground school and 25 hours of additional flight time. After they finish this training, airline pilots must keep their certification by attending training once or twice a year.

Commercial pilot’s license.
All pilots who are paid to transport passengers or cargo must have a commercial pilot's license. To qualify for this license, applicants must be at least 18 years old and have at least 250 hours of flight experience. Applicants must also pass a strict physical exam to make sure that they are in good health, must have vision that is correctable to 20/20, and must have no physical handicaps that could impair their performance. In addition, they must pass a written test that includes questions about safety procedures, navigation techniques, and FAA regulations. Finally, they must demonstrate their flying ability to an FAA-designated examiner. Instrument rating. To fly during periods of low visibility, pilots must be rated to fly by instruments. They may qualify for this rating by having at least 40 hours of instrument flight experience. Pilots also must pass a written exam and show an examiner their ability to fly by instruments. Airline certifications. Currently, airline captains must have an airline transport pilot certificate. In 2013, new regulations will require first officers to have this certificate as well. Applicants must be at least 23 years old, have a minimum of 1,500 hours of flight time, and pass written and flight exams. Furthermore, airline pilots usually maintain one or more advanced ratings, depending on the requirements of their particular aircraft. All licenses are valid as long as a pilot can pass periodic physical, eye, and flight examinations. Many civilian pilots start as flight instructors, building up their flight hours while they earn money teaching. As they become more experienced, these instructors can move into jobs as commercial pilots. Commercial pilots may begin their careers flying charter planes, helicopters, or crop dusters. These positions typically require less experience than airline jobs require. Some commercial pilots may advance to flying corporate planes. 
In nonairline jobs, a first officer may advance to captain and, in large companies, to chief pilot or director of aviation. However, many pilots use their commercial experience as a steppingstone to becoming an airline pilot. Airline pilots may begin as flight engineers or first officers for regional airline companies. Newly hired pilots at regional airline companies typically have about 2,000 hours of flight experience. Over time, experience gained at these jobs may lead to higher paying jobs with major airline companies. Newly hired pilots at major airline companies typically have about 4,000 hours of flight experience. For airline pilots, advancement depends on a system of seniority outlined in union contracts. Typically, after 1 to 5 years, flight engineers may advance to first officer and, after 5 to 15 years, to captain. Communication skills. Pilots must speak clearly when conveying information to air traffic controllers. They must also listen carefully for instructions. Depth perception. Pilots must be able to see clearly and judge the distance between objects. Detail oriented. Pilots must watch many systems at the same time. Even small changes can have significant effects, so they must constantly pay close attention to many details. Monitoring skills. Pilots must regularly watch over gauges and dials to make sure that all systems are in working order. Problem-solving skills. Pilots must be able to identify complex problems and figure out appropriate solutions. When a plane encounters turbulence, for example, pilots assess the weather conditions, select a calmer airspace, and request a route change from air traffic control. Quick reaction time. Because warning signals can appear with no notice, pilots must be able to respond quickly to any impending danger. Teamwork. Pilots work closely with air traffic controllers and flight dispatchers. As a result, they need to be able to coordinate actions on the basis of the feedback they receive.
<urn:uuid:38a647c7-c5e9-4419-be4d-a4f6ecc93834>
{ "date": "2013-05-24T01:30:56", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9602981209754944, "score": 2.75, "token_count": 1115, "url": "http://www.stats.bls.gov/ooh/transportation-and-material-moving/print/airline-and-commercial-pilots.htm" }
- Introduction to Hubble
- The Current Science Instruments
- Mission Operations and Observations
- Previous Instruments
- Technical Overview

Introduction to Hubble

The Hubble Space Telescope (HST) is a cooperative program of the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) to operate a space-based observatory for the benefit of the international astronomical community. HST is an observatory first envisioned in the 1940s, designed and built in the 1970s and 80s, and operational since 1990. Since its preliminary inception, HST was designed to be a different type of mission for NASA -- a long-term, space-based observatory. To accomplish this goal and protect the spacecraft against instrument and equipment failures, NASA planned on regular servicing missions. Hubble has special grapple fixtures, 76 handholds, and is stabilized in all three axes. HST is a 2.4-meter reflecting telescope, which was deployed in low-Earth orbit (600 kilometers) by the crew of the space shuttle Discovery (STS-31) on 25 April 1990. Responsibility for conducting and coordinating the science operations of the Hubble Space Telescope rests with the Space Telescope Science Institute (STScI) on the Johns Hopkins University Homewood Campus in Baltimore, Maryland. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc. (AURA). HST's current complement of science instruments includes three cameras, two spectrographs, and fine guidance sensors (primarily used for accurate pointing, but also for astrometric observations). Because of HST's location above the Earth's atmosphere, these science instruments can produce high-resolution images of astronomical objects. Ground-based telescopes are limited in their resolution by the Earth’s atmosphere, which causes a variable distortion in the images.
Hubble can observe ultraviolet radiation, which is blocked by the atmosphere and therefore unavailable to ground-based telescopes. In the infrared portion of the spectrum, the Earth’s atmosphere adds a great deal of background, which is absent in Hubble observations. When originally planned in the early 1970s, the Large Space Telescope program called for return to Earth, refurbishment, and re-launch every 5 years, with on-orbit servicing every 2.5 years. Hardware lifetime and reliability requirements were based on that 2.5-year interval between servicing missions. In the late 70s, contamination and structural loading concerns associated with return to Earth aboard the shuttle eliminated the concept of ground return from the program. NASA decided that on-orbit servicing might be adequate to maintain HST for its 15-year design life. A three-year cycle of on-orbit servicing was adopted. HST servicing missions in December 1993, February 1997, December 1999, March 2002 and May 2009 were enormous successes and validated the concept of on-orbit servicing of Hubble. The years since the launch of HST in 1990 have been momentous, with the discovery of spherical aberration in its main mirror and the search for a practical solution. The STS-61 (Endeavour) mission of December 1993 corrected the effects of spherical aberration and fully restored the functionality of HST. Since then, servicing missions have regularly provided opportunities to repair aging and failed equipment as well as incorporate new technologies in the telescope, especially in the Science Instruments that are the heart of its operations. See OPO's Hubble Primer for more information about HST.

The Current Science Instruments

Space Telescope Imaging Spectrograph

A spectrograph spreads out the light gathered by a telescope so that it can be analyzed to determine such properties of celestial objects as chemical composition and abundances, temperature, radial velocity, rotational velocity, and magnetic fields.
The Space Telescope Imaging Spectrograph (STIS) can study these objects across a spectral range from the UV (115 nanometers) through the visible red and the near-IR (1000 nanometers). STIS uses three detectors: a cesium iodide photocathode Multi-Anode Microchannel Array (MAMA) for 115 to 170 nm, a cesium telluride MAMA for 165 to 310 nm, and a Charge Coupled Device (CCD) for 165 to 1000 nm. All three detectors have a 1024 X 1024 pixel format. The field of view for each MAMA is 25 X 25 arc-seconds, and the field of view of the CCD is 52 X 52 arc-seconds. The main advance in STIS is its capability for two-dimensional rather than one-dimensional spectroscopy. For example, it is possible to record the spectrum of many locations in a galaxy simultaneously, rather than observing one location at a time. STIS can also record a broader span of wavelengths in the spectrum of a star at one time. As a result, STIS is much more efficient at obtaining scientific data than the earlier HST spectrographs. A power supply in STIS failed in August 2004, rendering it inoperable. During the servicing mission in 2009, astronauts successfully repaired the STIS by removing the circuit card containing the failed power supply and replacing it with a new card. Since STIS was not designed for in-orbit repair of internal electronics, this task was a substantial challenge for the astronaut crew.

Near Infrared Camera and Multi-Object Spectrometer

The Near Infrared Camera and Multi-Object Spectrometer (NICMOS) is an HST instrument providing the capability for infrared imaging and spectroscopic observations of astronomical targets. NICMOS detects light with wavelengths between 0.8 and 2.5 microns - longer than the human-eye limit. The sensitive HgCdTe arrays that comprise the infrared detectors in NICMOS must operate at very cold temperatures.
After its deployment, NICMOS kept its detectors cold inside a cryogenic dewar (a thermally insulated container much like a thermos bottle) containing frozen nitrogen ice. NICMOS is HST's first cryogenic instrument. The frozen nitrogen ice cryogen in NICMOS was exhausted in early 1999, rendering the Instrument inoperable at that time. An alternate means of cooling the NICMOS was developed and installed in the March 2002 servicing mission. This device uses a mechanical cooler to cool the detectors to the low temperatures necessary for operations. The technology for this cooler was not available when the instrument was originally designed, but fortunately became available in time to support the reactivation of the instrument. Since late 2008, the NICMOS Cooling System (NCS) has experienced difficulties maintaining the instrument’s nominal scientific operating state, in which the detectors are maintained at ~ 77K. Repeated restart attempts have demonstrated that it is not possible to restart the NCS in a cold state immediately following safing events. The main culprit for the problems is believed to be water ice in the primary (circulator) loop of the NCS. An inefficient approach to this problem would be to put the NCS through a several-month warm-up/cooldown cycle and hope that there is an opportunity for science prior to the next payload safing event. The only feasible path towards satisfactory operation of NICMOS is to remove the putative water by venting the existing contaminated Ne coolant and replacing it with a fresh charge, which is available onboard but has never actually been used on-orbit. Based on the Cycle 18 proposal review results, STScI and Goddard HST Project, with the concurrence of NASA Headquarters, have decided that NICMOS will not be available for science in Cycle 18. A decision on the availability of NICMOS beyond Cycle 18 has not yet been made and awaits further discussion. 
Advanced Camera for Surveys The ACS is a camera designed to provide HST with a deep, wide-field survey capability from the visible to near-IR, imaging from the near-UV to the near-IR with the point-spread function critically sampled at 6300 Å, and solar blind far-UV imaging. The primary design goal of the ACS Wide-Field Channel is to achieve a factor of 10 improvement in discovery efficiency, compared to WFPC2, where discovery efficiency is defined as the product of imaging area and instrument throughput. These gains are a direct result of improved technology since the HST was launched in 1990. The Charge Coupled Devices (CCDs) used as detectors in the ACS are more sensitive than those of the late 80s and early 90s, and also have many more pixels, capturing more of the sky in each exposure. The wide field camera in the ACS is a 16 megapixel camera. The ACS was installed during the March 2002 servicing mission. As a result of the improved sensitivity it instantly became the most heavily used Hubble instrument. It has been used for surveys of varying breadths and depths, as well as for detailed studies of specific objects. The ACS worked well until January 2007, when a failure in the electronics for the CCDs occurred, preventing further use of those detectors. Engineers and astronauts then developed an approach to remove and replace the failed electronics, which was carried out during the 2009 servicing mission. As with the STIS repair, the ACS repair was challenging, since the instrument was not designed originally with this type of repair in mind. Fine Guidance Sensors The Fine Guidance Sensors (FGS), in addition to being an integral part of the HST Pointing Control System (PCS), provide HST observers with the capability of precision astrometry and milliarcsecond resolution over a wide range of magnitudes (3 < V < 16.8).
Its two observing modes - Position Mode and Transfer Mode - have been used to determine the parallax and proper motion of astrometric targets to a precision of 0.2 mas, and to detect duplicity or structure around targets as close as 8 mas (visual orbits can be determined for binaries as close as 12 mas). Cosmic Origins Spectrograph The Cosmic Origins Spectrograph (COS) is a fourth-generation instrument that was installed on the Hubble Space Telescope (HST) during the 2009 servicing mission. COS is designed to perform high-sensitivity, moderate- and low-resolution spectroscopy of astronomical objects in the 115-320 nm wavelength range. It significantly enhances the spectroscopic capabilities of HST at ultraviolet wavelengths, and provides observers with unparalleled opportunities for observing faint sources of ultraviolet light. The primary science objectives of the COS are the study of the origins of large scale structure in the Universe, the formation and evolution of galaxies, the origin of stellar and planetary systems, and the cold interstellar medium. The COS achieves its improved sensitivity through advanced detectors and optical fabrication techniques. At UV wavelengths even the best mirrors do not reflect all light incident upon them. Previous spectrographs have required multiple (5 or more) reflections in order to display the spectrum on the detector. A substantial portion of the COS improvement in sensitivity is due to an optical design that requires only a single reflection inside the instrument, reducing the losses due to imperfect reflectivity. This design is possible only with advanced techniques for fabrication, which were not available when earlier generations of HST spectrographs were designed. COS has far-UV and near-UV channels that use different detectors: two side-by-side 16384 x 1024 pixel Cross-Delay Line Microchannel Plates (MCPs) for the far-UV, 115 to 205 nm, and a 1024 x 1024 pixel cesium telluride MAMA for the near-UV, 170 to 320 nm.
The far-UV detector is similar to detectors flown on the FUSE spacecraft, and takes advantage of improved technology over the past decade. The near-UV detector is a spare STIS detector. Wide Field Camera 3 The Wide Field Camera 3 (WFC3) is also a fourth-generation instrument that was installed during the 2009 servicing mission. Equipped with state-of-the-art detectors and optics, WFC3 provides wide-field imaging with continuous spectral coverage from the ultraviolet into the infrared, dramatically increasing both the survey power and the panchromatic science capabilities of HST. The WFC3 has two camera channels: the UVIS channel that operates in the ultraviolet and visible bands (from about 200 to 1000 nm), and the IR channel that operates in the infrared (from 900 to 1700 nm). The performance of the two channels was designed to complement the performance of the ACS. The UVIS channel provides the largest field of view and best sensitivity of any ultraviolet camera HST has had. This is feasible as a result of continued improvement in the performance of Charge Coupled Devices designed for astronomical use. The IR channel on WFC3 represents a major improvement on the capabilities of the NICMOS, primarily as a result of the availability of much larger detectors, 1 megapixel in the WFC3/IR vs. 0.06 megapixels for the NICMOS. In addition, modern IR detectors like that in the WFC3 have benefited from improvements over the last decade in design and fabrication. Mission Operations and Observations: Although HST operates around the clock, not all of its time is spent observing. Each orbit lasts about 95 minutes, with time allocated for housekeeping functions and for observations. "Housekeeping" functions include turning the telescope to acquire a new target, switching communications antennas and data transmission modes, receiving command loads and downlinking data, calibrating the instruments, and similar activities.
On average, the telescope spends about 50% of the time observing astronomical targets. About 50% of the time the view to celestial targets is blocked by the Earth, and that time is used to carry out these support functions. Each year the STScI solicits ideas for scientific programs from the worldwide astronomical community. All astronomers are free to submit proposals for observations. Typically, 700-1200 proposals are submitted each year. A series of panels, involving roughly 100 astronomers from around the world, is convened to recommend which of the proposals to carry out over the next year. There is only sufficient time in a year to schedule about 1/5 of the proposals that are submitted, so the competition for Hubble observing time is stiff. After proposals are chosen, the observers submit detailed observation plans. The STScI uses these to develop a yearlong observing plan, spreading the observations evenly throughout the period and taking into account scientific reasons that may require some observations to be at a specific time. This long-range plan incorporates calibrations and engineering activities, as well as the scientific observations. This plan is then used as the basis for detailed scheduling of the telescope, which is done one week at a time. Each event is translated into a series of commands to be sent to the onboard computers. Computer loads are uplinked several times a day to keep the telescope operating efficiently. When possible, two scientific instruments are used simultaneously to observe adjacent target regions of the sky. For example, while a spectrograph is focused on a chosen star or nebula, a camera can image a sky region offset slightly from the main viewing target. During observations the Fine Guidance Sensors (FGS) track their respective guide stars to keep the telescope pointed steadily at the right target.
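The time accounting described above can be sketched numerically. This is only an illustrative back-of-envelope calculation built from the figures quoted in the text (a 95-minute orbit, roughly 50% of time spent observing, 700-1200 proposals per year with about 1/5 scheduled); it is not actual STScI scheduling code.

```python
# Back-of-envelope HST time accounting, using the figures quoted above.

ORBIT_MINUTES = 95            # length of one HST orbit
OBSERVING_FRACTION = 0.5      # remainder lost to Earth blockage / housekeeping
MINUTES_PER_YEAR = 365 * 24 * 60

orbits_per_year = MINUTES_PER_YEAR / ORBIT_MINUTES
observing_hours_per_year = MINUTES_PER_YEAR * OBSERVING_FRACTION / 60

# 700-1200 proposals submitted per year, with roughly 1 in 5 scheduled:
accepted_low, accepted_high = 700 // 5, 1200 // 5

print(f"~{orbits_per_year:.0f} orbits per year")
print(f"~{observing_hours_per_year:.0f} observing hours per year")
print(f"~{accepted_low}-{accepted_high} proposals scheduled per year")
```

The point of the sketch is simply that roughly half of the ~8,760 hours in a year is available for science, spread across thousands of short orbits, which is why scheduling is done as a detailed yearlong plan.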
Engineering and scientific data from HST, as well as uplinked operational commands, are transmitted through the Tracking Data Relay Satellite (TDRS) system and its companion ground station at White Sands, New Mexico. Up to 24 hours of commands can be stored in the onboard computers. Data can be broadcast from HST to the ground stations immediately or stored on a solid-state recorder and downlinked later. The observer on the ground can examine the "raw" images and other data within a few minutes for a quick-look analysis. Within 24 hours, GSFC formats the data for delivery to the STScI. STScI is responsible for calibrating the data and providing them to the astronomer who requested the observations. The astronomer has a year to analyze the data from the proposed program, draw conclusions, and publish the results. After one year the data become accessible to all astronomers. The STScI maintains an archive of all data taken by HST. This archive has become an important research tool in itself. Astronomers regularly check the archive to determine whether data in it can be used for a new problem they are working on. Frequently they find that there are HST data relevant for their research, and they can then download these data free of charge. Hubble has proven to be an enormously successful program, providing new insight into the mysteries of the Universe. Previously Flown Instruments:
- Wide Field Planetary Camera
- Wide Field Planetary Camera 2
- Faint Object Spectrograph
- Goddard High Resolution Spectrograph
- Corrective Optics Space Telescope Axial Replacement
- Faint Object Camera
- High Speed Photometer
Wide Field/Planetary Camera The Wide Field/Planetary Camera (WF/PC1) was used from April 1990 to November 1993 to obtain high resolution images of astronomical objects over a relatively wide field of view and a broad range of wavelengths (1150 to 11,000 Angstroms).
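Note that the instrument descriptions above switch units: the newer instruments (STIS, COS, WFC3) are specified in nanometers, while the original instruments (WF/PC1, FOS, HRS) are specified in Angstroms. Since 1 nm = 10 Å, a one-line conversion relates the two conventions; the function name here is just for illustration.

```python
# 1 nanometer = 10 Angstroms, so ranges quoted in nm and in Angstroms
# can be compared directly.

def nm_to_angstrom(nm: float) -> float:
    return 10.0 * nm

# STIS was quoted above as covering 115 to 1000 nm:
stis_range_angstroms = (nm_to_angstrom(115), nm_to_angstrom(1000))
print(stis_range_angstroms)  # (1150.0, 10000.0)
```

So STIS's 115-1000 nm range is 1150-10,000 Å, starting at the same UV cutoff as WF/PC1's quoted 1150-11,000 Å.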
Wide Field Planetary Camera 2 The original Wide Field/Planetary Camera (WF/PC1) was replaced by WFPC2 on the STS-61 shuttle mission in December 1993. WFPC2 was a spare instrument developed by the Jet Propulsion Laboratory in Pasadena, California, at the time of HST launch. It consisted of four cameras. The relay mirrors in WFPC2 were spherically aberrated in just the right way to correct for the spherically aberrated primary mirror of the observatory. (HST's primary mirror is 2 microns too flat at the edge, so the corrective optics within WFPC2 were too high by that same amount.) The "heart" of WFPC2 consisted of an L-shaped trio of wide-field sensors and a smaller, high-resolution ("planetary") camera tucked in the square's remaining corner. WFPC2 was removed in the May 2009 servicing mission and replaced by the Wide Field Camera 3 (WFC3). Faint Object Spectrograph A spectrograph spreads out the light gathered by a telescope so that it can be analyzed to determine such properties of celestial objects as chemical composition and abundances, temperature, radial velocity, rotational velocity, and magnetic fields. The Faint Object Spectrograph (FOS) was one of the original instruments on Hubble; it was replaced by NICMOS during the second servicing mission in 1997. The FOS examined fainter objects than the High Resolution Spectrograph (HRS), and could study these objects across a much wider spectral range -- from the UV (1150 Angstroms) through the visible red and the near-IR (8000 Angstroms). The FOS used two 512-element Digicon sensors (light intensifiers). The "blue" tube was sensitive from 1150 to 5500 Angstroms (UV to yellow). The "red" tube was sensitive from 1800 to 8000 Angstroms (longer UV through red). Light entered the FOS through any of 11 different apertures from 0.1 to about 1.0 arc-seconds in diameter.
There were also two occulting devices to block out light from the center of an object while allowing the light from just outside the center to pass on through. This could allow analysis of the shells of gas around red giant stars or of the faint galaxies around a quasar. The FOS had two modes of operation: low resolution and high resolution. At low resolution, it could reach 26th magnitude in one hour with a resolving power of 250. At high resolution, the FOS could reach only 22nd magnitude in an hour (before noise becomes a problem), but the resolving power was increased to 1300. Goddard High Resolution Spectrograph The Goddard High Resolution Spectrograph (GHRS) was one of the original instruments on Hubble; it failed in 1997, shortly before being replaced by STIS during the second servicing mission. As a spectrograph, the HRS also separated incoming light into its spectral components so that the composition, temperature, motion, and other chemical and physical properties of the objects could be analyzed. The HRS contrasted with the FOS in that it concentrated entirely on UV spectroscopy and traded the ability to observe extremely faint objects for the ability to analyze very fine spectral detail. Like the FOS, the HRS used two 512-channel Digicon electronic light detectors, but the detectors of the HRS were deliberately blind to visible light. One tube was sensitive from 1050 to 1700 Angstroms, while the other was sensitive from 1150 to 3200 Angstroms. The HRS also had three resolution modes: low, medium, and high. "Low resolution" for the HRS was 2000 -- higher than the best resolution available on the FOS. Examining a feature at 1200 Angstroms, the HRS could resolve detail of 0.6 Angstroms and could examine objects down to 19th magnitude. At the medium resolution of 20,000, that same spectral feature at 1200 Angstroms could be seen in detail down to 0.06 Angstroms, but the object would have to be brighter than 16th magnitude to be studied.
High resolution for the HRS was 100,000, allowing a spectral line at 1200 Angstroms to be resolved down to 0.012 Angstroms. However, "high resolution" could be applied only to objects of 14th magnitude or brighter. The HRS could also discriminate between variations in light from objects as rapid as 100 milliseconds apart. Corrective Optics Space Telescope Axial Replacement COSTAR was not a science instrument; it was a corrective optics package that displaced the High Speed Photometer during the first servicing mission to HST. COSTAR was designed to optically correct the effects of the primary mirror's aberration for the Faint Object Camera (FOC), the High Resolution Spectrograph (HRS), and the Faint Object Spectrograph (FOS). All the other instruments that have been installed since HST's initial deployment have been designed with their own corrective optics. When all of the first-generation instruments were replaced by other instruments, COSTAR was no longer needed and was removed from Hubble during the 2009 servicing mission. Faint Object Camera The Faint Object Camera (FOC) was built by the European Space Agency as one of the original science instruments on Hubble. It was replaced by ACS during the servicing mission in 2002. There were two complete detector systems for the FOC. Each used an image intensifier tube to produce an image on a phosphor screen that was 100,000 times brighter than the light received. This phosphor image was then scanned by a sensitive electron-bombarded silicon (EBS) television camera. This system was so sensitive that objects brighter than 21st magnitude had to be dimmed by the camera's filter systems to avoid saturating the detectors. Even with a broad-band filter, the brightest object that could be accurately measured was 20th magnitude. The FOC offered three different focal ratios: f/48, f/96, and f/288 on a standard television picture format.
The f/48 image measured 22 X 22 arc-seconds and yielded a resolution (pixel size) of 0.043 arc-seconds. The f/96 mode provided an image 11 X 11 arc-seconds on a side and a resolution of 0.022 arc-seconds. The f/288 field of view was 3.6 X 3.6 arc-seconds square, with resolution down to 0.0072 arc-seconds. High Speed Photometer The High Speed Photometer (HSP) was one of the four original axial instruments on the Hubble Space Telescope (HST). The HSP was designed to make very rapid photometric observations of astrophysical sources in a variety of filters and passbands from the near ultraviolet to the visible. The HSP was removed from HST during the first servicing mission in December 1993. For more complete technical information about HST and its instruments, see the HST Primer.
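The spectral-detail figures quoted for the HRS above all follow from the definition of resolving power, R = λ/Δλ: the finest resolvable feature at wavelength λ is λ/R. A quick sketch reproducing those numbers (the values are from the text; the helper function itself is illustrative, not instrument code):

```python
# Resolving power R = lambda / delta_lambda, so the smallest resolvable
# wavelength interval at wavelength lam is lam / R.

def resolvable_detail(lam_angstrom: float, R: float) -> float:
    """Smallest wavelength interval (Angstroms) resolvable at lam."""
    return lam_angstrom / R

# HRS modes, all evaluated at the 1200-Angstrom feature used in the text:
for mode, R in [("low", 2_000), ("medium", 20_000), ("high", 100_000)]:
    print(f"HRS {mode:>6} (R = {R:>7,}): "
          f"{resolvable_detail(1200, R):.3f} Angstroms")
# -> 0.600, 0.060, and 0.012 Angstroms, matching the values quoted above
```

The same relation explains why the FOS, with resolving powers of only 250 and 1300, could not match the HRS in spectral detail despite reaching fainter objects.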
Honour and cherish them... In our country, the father was the head of the family and the sole bread winner from ancient times. However, with more and more women going to work to supplement the income, this role of being the sole bread winner has changed in modern times. But, in our culture, he's still the head of the family and is highly respected. Even though the mother is the primary care-giver, fathers, too, traditionally play a key role in the upbringing of a child, especially in these modern times where both parents go to work. It is important to show love and gratitude to your father for all he does to provide for the family and protect you all. Today, which is Father's Day, is ideal for doing so, especially if you are too busy with your own life. Even though we don't need such special days, let's make the most of it and see how it originated. In the US and many other countries, Father's Day is celebrated on the third Sunday in June. So, it means today is Father's Day. According to this rule, the date changes every year. Next year it'll fall on June 20. There have been many theories about Father's Day and how it came into practice to celebrate our wonderful fathers. Some say the idea evolved from around 4,000 years ago, when a boy by the name of Elmusu wished his father longevity (long life) and good health by making a card out of mud clay and scribing his appreciation message on it. What happened to Elmusu and his father is not known, but the tradition of having a special day to honour our fathers has continued through the years internationally. The commercialisation of Father's Day is said to have emerged after an American (Spokane, Washington) woman named Sonora Smart Dodd thought of the Father's Day celebration whilst listening to a Mother's Day sermon in 1909. Having been raised by her widowed single father, Henry Jackson Smart, after Sonora's mother died, Sonora wanted her father to know how special he was to her.
She saw him as a selfless, courageous and loving man, and since he was born in June, she chose to celebrate and hold the first Father's Day on June 19, in Spokane, Washington. In 1924, President Calvin Coolidge proclaimed the third Sunday in June as Father's Day, which was then established as a permanent national observance by President Nixon in 1972. Roses are the Father's Day flowers: red to be worn by a living father, white if the father has passed away.

You are my best friend
You are my world
You are my heart
You are my life
You are my best friend.

I'm proud of you,
Because you're my father,
You bring me,
What I need,
Thanks, my dear father.

Yes, my loving father,
I am proud to be,
A child of yours,
I am glad,
You had to never punish me.

Yes my dear father,
On this March 5,
You turned 40 years,
But I didn't like it.

Anyway we can't,
Stop the time,
But I wish you,
All the best of life,
On this Father's Day,
And everyday of your life.
What is lead poisoning? Lead poisoning occurs when you absorb too much lead by breathing or swallowing a substance with lead in it, such as paint, dust, water, or food. Lead can damage almost every organ system. In children, too much lead in the body can cause lasting problems with growth and development. These can affect behavior, hearing, and learning and can slow the child's growth. In adults, lead poisoning can damage the brain and nervous system, the stomach, and the kidneys. It can also cause high blood pressure and other health problems. Although it isn't normal to have lead in your body, a small amount is present in most people. Environmental laws have reduced lead exposure in the United States, but it is still a health risk, especially for young children. What causes lead poisoning? Lead poisoning is usually caused by months or years of exposure to small amounts of lead at home, work, or day care. It can also happen very quickly with exposure to a large amount of lead. Many things can contain or be contaminated with lead: paint, air, water, soil, food, and manufactured goods. The most common source of lead exposure for children is lead-based paint and the dust and soil that are contaminated by it. This can be a problem in older homes and buildings. Adults are most often exposed to lead at work or while doing hobbies that involve lead. Who is at highest risk of lead poisoning? Lead poisoning can occur at any age, but children are most likely to be affected by high lead levels. Children at highest risk include those who:
- Live in or regularly visit homes or buildings built before 1978. These buildings may have lead-based paint. The risk is even higher in buildings built before 1950, when lead-based paint was more commonly used.
- Are immigrants, refugees, or adoptees from other countries. They may have been exposed to higher lead levels in these countries.
- Are 6 years old or younger. Young children are at higher risk because: - They often put their hands and objects in their mouths. - They sometimes swallow nonfood items. - Their bodies absorb lead at a higher rate. - Their brains are developing quickly. Others at risk for lead poisoning include people who: - Drink water that flows through pipes that were soldered with lead. - Work with lead either in their job or as a hobby (for example, metal smelters, pottery makers, and stained glass artists). - Eat food from cans made with lead solder. These types of cans aren't made in the United States. - Cook or store food in ceramic containers. Some ceramic glaze contains lead that may not have been properly fired or cured. - Eat or breathe traditional or folk remedies that contain lead, such as some herbs and vitamins from other countries. - Live in communities with a lot of industrial pollution. What are the symptoms? You may not notice any symptoms at first. The effects are easy to miss and may seem related to other conditions. The higher the amount of lead in the body, the more severe the symptoms are. In children, symptoms can include: - Slightly lower intelligence and smaller size compared to children of the same age. - Behavior problems, such as acting angry, moody, or hyperactive. - Learning problems. - Lack of energy, and not feeling hungry. In adults, lead poisoning can cause: - Changes in behavior, mood, personality, and sleep patterns. - Memory loss and trouble thinking clearly. - Weakness and muscle problems. Severe cases can cause seizures, paralysis, and coma. How is lead poisoning diagnosed? The doctor will ask questions and do a physical exam to look for signs of lead poisoning. If your doctor suspects lead poisoning, he or she will do a blood test to find out the amount of lead in the blood. Diagnosing lead poisoning is difficult, because the symptoms can be caused by many diseases. 
Most children with lead poisoning don't have symptoms until their blood lead levels are very high. In the United States, there are screening programs to check lead levels in children who are likely to be exposed to lead. Whether your child needs to be tested depends in part on where you live, how old your housing is, and other risk factors. Talk to your child's doctor about whether your child is at risk and should be screened. Adults usually aren't screened for lead poisoning unless they have a job that involves working with lead. For these workers, companies usually are required to provide testing. If you are pregnant or trying to get pregnant and have a family member who works with lead, you may want to ask your doctor about your risk for lead poisoning. But in general, experts don't recommend routine testing for lead in pregnant women who don't have symptoms. How is it treated? Treatment for lead poisoning includes removing the source of lead, getting good nutrition, and, in some cases, having chelation therapy. Removing the source of lead. Old paint chips and dirt are the most common sources of lead in the home. Lead-based paint, and the dirt and dust that come along with it, should be removed by professionals. In the workplace, removal usually means removing lead dust that's in the air and making sure that people don't bring contaminated dust or dirt on their clothing into their homes or other places. Good nutrition. Eating foods that have enough iron and other vitamins and minerals may be enough to reduce lead levels in the body. A person who eats a balanced, nutritious diet may absorb less lead than someone with a poor diet. Chelation therapy. If removing the lead source and getting good nutrition don't work, or if lead levels are very high, you may need to take chelating medicines. These medicines bind to lead in the body and help remove it.
If blood lead levels don't come down with treatment, home and work areas may need to be rechecked. Call your local health department to see what inspection services are offered in your area. The best way to avoid lead poisoning is to prevent it. Treatment cannot reverse any damage that has already occurred. But there are many ways to reduce your exposure—and your child's—before it causes symptoms.

By: Healthwise Staff | Last Revised: July 26, 2012
Medical Review: John Pope, MD - Pediatrics; R. Steven Tharratt, MD, MPVM, FACP, FCCP - Pulmonology, Critical Care Medicine, Medical Toxicology
In terms of their breadth, we can describe plans as strategic or tactical. Strategic plans are plans that are organization-wide (they apply to the entire organization). These plans establish overall objectives and position an organization in terms of its environment. Strategic plans drive the efforts of an organization to achieve its goals, and they serve as a basis for the tactical plans. Tactical plans are often referred to as operational plans. Tactical plans are plans that specify the details of how an organization's overall objectives should be achieved. Unlike strategic plans, which tend to cover long periods of time, tactical plans tend to cover short periods of time. Some examples of tactical plans are an organization's monthly, weekly, and daily plans.
Chocolate may be good for the heart, scientists have said, following a large study that found that those who eat more of it are less likely to suffer heart disease and strokes. Why chocolate lovers should be better off than those who shun it is not clear. It contains antioxidant flavonoids, known to be protective, but also sugar and — especially in the forms popular in the UK — milk powder, which are implicated in weight gain. Obesity is a well-established cause of serious heart problems. Dieticians suggested that eating chocolate might be helpful because people find it relaxing. The study was presented at the European Society of Cardiology meeting in Paris and was published online by the British Medical Journal. It was undertaken by Oscar Franco and colleagues from Cambridge University, who wanted to try to establish whether a long-speculated association between eating chocolate and reduced risk of heart disease was real. The scientists carried out a review of all the relevant and most convincing evidence they could find — seven studies involving more than 100,000 people. They compared the rates of heart disease in those who ate the most chocolate with those who ate the least. Five of the seven studies found chocolate — eaten in a variety of forms, from sweet bars to chocolate biscuits and drinking cocoa — to be protective. They concluded that the “highest levels of chocolate consumption were associated with a 37 percent reduction in stroke compared with lowest levels.” The studies did not differentiate between dark, milk and white chocolate. They also found no effect on heart failure. The authors are cautious about the results, warning that chocolate is high in calories — about 500 for every 100g — which can cause people to put on weight and lead to heart disease. However, they think the possible benefits should be further explored, including ways to reduce the fat and sugar content of chocolate. 
“This paper doesn’t really say eat chocolate to improve heart health. Nor do the authors conclude this either. What they seem to say is, those who don’t deny themselves a sweet treat of chocolate — white or brown — have better cardiovascular outcomes,” said Catherine Collins, a dietician at St George’s hospital in London. “I do feel that the perceived relaxing effect of chocolate ... [is] perhaps akin to modest alcohol consumption — a relaxing treat, perceived as a ‘de-stressor’ and a food whose cost base is so low it’s affordable by virtually all,” she said. In the UK, she said, any benefit must be almost entirely due to this relaxation effect, because the cocoa content in products sold in Britain is much lower than in continental chocolate and many people eat it in the shape of chocolate-covered sweet bars, which have very little flavonoid content.
Drought Emergency Planning Workshops During 2012, the Texas Commission on Environmental Quality (TCEQ) hosted drought emergency planning workshops throughout the state. The workshops provided local government officials, board members, and their water system operators information and tools to prevent and mitigate water outages. Workshop presentation topics included:
- the status and severity of the continuing drought in Texas,
- an explanation of the emergency process and the role of each agency in the process,
- a discussion on what the Drinking Water Task Force is and what it has done thus far,
- an explanation of available tools, including Financial, Managerial, and Technical (FMT) assistance; the EnviroMentor Program; funding; and the Texas Water Infrastructure Coordination Committee (TWICC),
- a discussion on creating an emergency plan:
- obtaining interconnections
- drilling emergency wells
- planning for conservation

Representatives from the Texas Commission on Environmental Quality, Texas Division of Emergency Management, Texas Water Development Board, the Texas Water Infrastructure Coordination Committee, local river authorities, and the local Ground Water Conservation Districts were available to provide additional information.

Drought and Emergency Management Resources for Public Water Systems
- Map of Current Drought Conditions – U.S.
Drought Monitor map of current drought conditions - Financial, Managerial, and Technical Assistance Program – Professional contractors who provide a water system, or the utility operating it, help that is tailored to their specific needs - EnviroMentor Program – Professional volunteers who can help small businesses and local governments comply with state environmental rules - Resources for Texas Water and Wastewater Utilities (RG-220) – Lists agencies and organizations that provide funding and assistance to utilities for water, wastewater, and waste disposal - Texas Water Infrastructure Coordination Committee (TWICC) – Committee to identify solutions to water and wastewater infrastructure compliance issues and to seek funding - Emergency Interconnection Procedure for Public Water Systems – TCEQ’s Procedure for public water system emergency interconnection - Emergency and Temporary Use of Wells for Public Water Supplies – Rules and procedures for converting wells to temporary use to supply the public when other sources are unavailable - List of Licensed Water Haulers – List of licensed water haulers in Texas - TCEQ Drought Contingency Plans – Resources for developing a contingency plan in the event of drought or similar water shortage - Water Conservation – Learn how you can conserve water - Texas Drought Information – Drought information, including surface and groundwater regulations, and emergency procedures.
Reading Photographs: Snow Day

How carefully do you look at photos? Often, photographs and other visual primary sources come with captions that say when they were created and what they portray. But what if a picture isn't captioned? How can you figure out its date? Examine the photo below carefully. (To see a larger version of the photo, click "Snow Day Photo" in the box on the left.) Then answer these questions about what you see.
Some of the founders and leading lights in the fields of artificial intelligence and cognitive science gave a harsh assessment last night of the lack of progress in AI over the last few decades. During a panel discussion—moderated by linguist and cognitive scientist Steven Pinker—that kicked off MIT’s Brains, Minds, and Machines symposium, panelists called for a return to the style of research that marked the early years of the field, one driven more by curiosity than by narrow applications.

“You might wonder why aren’t there any robots that you can send in to fix the Japanese reactors,” said Marvin Minsky, who pioneered neural networks in the 1950s and went on to make significant early advances in AI and robotics. “The answer is that there was a lot of progress in the 1960s and 1970s. Then something went wrong. [Today] you’ll find students excited over robots that play basketball or soccer or dance or make funny faces at you. [But] they’re not making them smarter.”

Patrick Winston, director of MIT’s Artificial Intelligence Laboratory from 1972 to 1997, echoed Minsky. “Many people would protest the view that there’s been no progress, but I don’t think anyone would protest that there could have been more progress in the past 20 years. What went wrong went wrong in the ’80s.”

Winston blamed the stagnation in part on the decline in funding after the end of the Cold War and on early attempts to commercialize AI. But the biggest culprit, he said, was the “mechanistic balkanization” of the field, with research focusing on ever-narrower specialties such as neural networks or genetic algorithms. “When you dedicate your conferences to mechanisms, there’s a tendency to not work on fundamental problems, but rather [just] those problems that the mechanisms can deal with,” said Winston. Winston said he believes researchers should instead focus on those things that make humans distinct from other primates, or even what made them distinct from Neanderthals.
Once researchers think they have identified the things that make humans unique, he said, they should develop computational models of these properties, implementing them in real systems so they can discover the gaps in their models, and refine them as needed. Winston speculated that the magic ingredient that makes humans unique is our ability to create and understand stories using the faculties that support language: “Once you have stories, you have the kind of creativity that makes the species different to any other.”
Pricing Carbon Emissions

A bill before Congress may prove a costly way to reduce greenhouse gases.
- Friday, June 5, 2009
- By Kevin Bullis

Experts are applauding a sweeping energy bill currently before the United States Congress, saying that it could lead to significant cuts in greenhouse-gas emissions and improve the likelihood of a comprehensive international agreement to cut greenhouse gases. "It's real climate-change legislation that's being taken seriously," says Gilbert Metcalf, a professor of economics at Tufts University. But many warn that the bill's market-based mechanisms and more conventional regulations could make these emissions reductions more expensive than they need to be.

The bill, officially called the American Clean Energy and Security Act of 2009, is also referred to as the Waxman-Markey Bill, after its sponsors, Henry Waxman (D-Ca.) and Edward Markey (D-Mass.). The legislation would establish a cap and trade system to reduce greenhouse gases, an approach favored by most economists over conventional regulatory approaches because it provides a great deal of flexibility in how emissions targets are met. But it also contains mandates that could significantly reduce the cost savings that the cap and trade approach is supposed to provide.

In a cap and trade system, the government sets a cap on total emissions of greenhouse gases from various industrial and utility sources, including power plants burning fossil fuels to generate electricity. It then issues allowances to polluters allowing them to emit carbon dioxide and other greenhouse gases; total emissions are meant to stay under the cap. Over a period of time, the government gradually reduces the cap and the number of allowances until it reaches its target. If companies' emissions exceed their allowances, they must buy more.
Economists like the system because companies can choose to either lower their emissions, such as by investing in new technology, or buy more allowances from the government or from companies that don't need them--whichever makes the best economic sense. It is meant to create a carbon market, putting a value on emissions. In the proposed energy bill, the government will set caps to reduce greenhouse-gas emissions by 17 percent by 2020 (compared with 2005 levels) and by 80 percent by 2050--targets chosen to prevent the worst effects of climate change. Setting caps will make electricity more expensive, as companies turn to cleaner technologies to meet ever lower caps or have to spend money to buy allowances from others with lower emissions. But the bill has some provisions for cushioning the blow, especially at first. For one thing, it gives away most of the allowances rather than charging for them, and it also requires that any profits gained from these free allowances be passed on to electricity customers. It also allows companies to buy "offsets" that permit them to pay to reduce emissions outside the United States. If the program is designed right, there are fewer allowances than the total emissions when the program starts. At first, when the caps are relatively easy to meet, the prices for allowances on the carbon market will be low. But eventually, they will get higher as the allowances become scarcer. In an ideal world, companies will predict what the price of the allowances will be, and plan accordingly.
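The abate-or-buy choice that drives a cap and trade market can be sketched in a few lines of code. This is an illustrative toy model only; the firm, its excess emissions, the abatement cost, and the allowance price below are invented numbers, not figures from the bill.

```python
# Toy sketch of the core decision in a cap and trade market: a firm whose
# emissions exceed its allowances either cuts emissions (abates) or buys
# allowances, whichever is cheaper. All numbers here are hypothetical.

def compliance_cost(excess_tons, abatement_cost_per_ton, allowance_price):
    """Return (strategy, cost) for the cheaper way to cover excess emissions."""
    abate = excess_tons * abatement_cost_per_ton
    buy = excess_tons * allowance_price
    return ("abate", abate) if abate <= buy else ("buy allowances", buy)

# A firm 10,000 tons over its cap, with allowances trading at $25/ton:
print(compliance_cost(10_000, 40.0, 25.0))  # cleaner technology costs more -> ('buy allowances', 250000.0)
print(compliance_cost(10_000, 15.0, 25.0))  # cleaner technology is cheaper -> ('abate', 150000.0)
```

As the cap tightens and allowances grow scarce, the allowance price rises and tips more firms toward abating; that price signal is the cost-saving flexibility economists point to.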
The A-Z of Programming Languages: Lua
- 11 September, 2008 20:29

This time we chat to Prof. Roberto Ierusalimschy about the design and development of Lua. Prof. Ierusalimschy is currently an Associate Professor in the Pontifical Catholic University of Rio de Janeiro's Informatics Department where he undertakes research on programming languages, with particular focus on scripting and domain specific languages. Prof. Ierusalimschy is currently supported by the Brazilian Council for the Development of Research and Technology as an independent researcher, and has a grant from Microsoft Research for the development of Lua.Net. He also has a grant from Finep for the development of libraries for Lua.

Please note that due to popular demand we are no longer following alphabetical order for this series. If you wish to submit any suggestions for programming languages or language authors you would like to see covered, please email [email protected].

What prompted the development of Lua? Was there a particular problem you were trying to solve?

In our paper for the Third ACM History of Programming Languages Conference we outline the whole story about the origins of Lua. To make a long story short, yes, we did develop Lua to solve a particular problem. Although we developed Lua in an academic institution, Lua was never an "academic language", that is, a language to write papers about. We needed an easy-to-use configuration language, and the only configuration language available at that time (1993) was Tcl. Our users did not consider Tcl an easy-to-use language. So we created our own configuration language.

How did the name Lua come about?

Before Lua I had created a language that I called SOL, which stood for "Simple Object Language" but also means "Sun" in Portuguese. That language was replaced by Lua (still nameless at that time). As we perceived Lua to be "smaller" than Sol, a friend suggested this name, which means "moon" in Portuguese.

Were there any particularly difficult problems you had to overcome in the development of the language?

No. The first implementation was really simple, and it solved the problems at hand. Since then, we have had the luxury of avoiding hard/annoying problems. That is, there have been many problems along the way, but we never had to overcome them; we have always had the option to postpone a solution. Some of them have waited several years before being solved. For instance, since Lua 2.2, released in 1995, we have wanted lexical scoping in Lua, but we didn’t know how to implement it efficiently within Lua's constraints. Nobody did. Only with Lua 5.0, released in 2003, did we solve the problem, with a novel algorithm.

What is the most interesting program that you've seen written with Lua and why?

I have seen many interesting programs written in Lua, in many different ways. I think it would be unfair to single one out. As a category, I particularly like table-driven programs, that is, programs that are more generic than the particular problem at hand and that are configured for that particular problem via tables.
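The table-driven style Ierusalimschy mentions, in which a generic engine is configured by a data table, can be sketched outside Lua as well. The example below is a rough illustration in Python, with a dictionary standing in for a Lua table; the commands and handlers are invented for the sketch.

```python
# A generic dispatcher whose behaviour is configured entirely by a table
# (a Python dict standing in for a Lua table). The commands are invented.

handlers = {
    "greet": lambda name: f"hello, {name}",
    "shout": lambda name: f"{name.upper()}!",
}

def run(command, arg):
    """Look the command up in the table instead of hard-coding each case."""
    handler = handlers.get(command)
    return handler(arg) if handler else f"unknown command: {command}"

print(run("greet", "lua"))  # hello, lua
print(run("shout", "lua"))  # LUA!
```

Because the engine never names the commands, the same `run` function serves any problem you can describe with a table, which is the generality Ierusalimschy is pointing at.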
Researchers calculated that cutting daily processed meat intake to 20g (just under an ounce) would reduce premature deaths by 3.3 per cent, mainly by cutting cancer and heart disease rates. About 100,000 people die prematurely in Britain every year, before the age of 65, suggesting the reduction could prevent about 3,000 early deaths a year. However, the study by Zurich University academics has been criticised by a dietitian.

The researchers, who followed the health of almost 450,000 people aged 35 to 69, found the more processed meat people ate, the more likely they were to die early from any cause. This was true even after attempting to account for the fact that those who eat more meat tend to be less active, drink more and smoke.

Professor Sabine Rohrmann, who led the analysis of the European Prospective Investigation into Cancer and Nutrition study, said: “Risks of dying earlier from cancer and cardiovascular disease also increased with the amount of processed meat eaten. “Overall, we estimate that three per cent of premature deaths each year could be prevented if people ate less than 20g processed meat per day." The researchers also found an indication that eating a lot of unprocessed red meat resulted in higher death rates, although this link was not strong enough for them to consider it statistically valid.

But Dr Carrie Ruxton, a dietitian from the Meat Advisory Panel, said: “This study should not put you off the odd bacon sandwich.” She argued that such studies could never truly account for lifestyle differences, and isolate the supposed role of meat intake in death rates. “If you’ve got someone who’s overweight, watching television for hours, munching a meat pie and smoking a fag, which one of those is relevant?” she asked. “You can’t say reducing processed meat intake will reduce mortality rates by three per cent."
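The headline figure is simple arithmetic on the numbers quoted above, with the product rounded down. A quick check, using the figures exactly as quoted in the article:

```python
# Sanity check of the article's arithmetic; both inputs are as quoted.
premature_deaths_per_year = 100_000  # UK deaths before age 65, per year
reduction = 0.033                    # 3.3 per cent fewer premature deaths

prevented = round(premature_deaths_per_year * reduction)
print(prevented)  # 3300, which the article rounds to "about 3,000"
```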
Wreck of the Bounty

The remains of the ship lie in just three metres of water below Bounty Bay, where it was burned by the mutineers in 1790. Tourists can dive onto it. Another wreck, the SS Cornwallis, can also be explored.

Adamstown

The only settlement. It contains a post office, church, courthouse, library, health centre, acupuncturist and hairdresser. Power is provided by three generators, which operate for five hours in the morning and five hours in the evening. Bells in the main square are used to make public announcements. A series of strikes in ones and twos is the call for prayer, three strikes signifies public work, four strikes is the signal for a share-out of food from a passing ship and five strikes announces the arrival of a ship.

Hill of Difficulty

The steep slope up which visitors must travel after arriving in Bounty Bay, following in the footsteps of the mutineers.

Opened in Adamstown in 2005, it contains the original bible from the Bounty. A four metre anchor from the ship is mounted in the square outside the courthouse. A Bounty cannon is also on display nearby.

Fletcher Christian's Cave

On a ridge west of Adamstown is a cave in which Fletcher Christian stayed during an early period of upheaval on the island. He is said to have later been killed by another islander.

The uninhabited islands of Ducie and Oeno, which also form part of the same Overseas Territory, have large central lagoons. Whirlpools in the Ducie lagoon are caused by tunnels that drain it to the sea. The lagoon is deep and noted for its poisonous fish and dangerous sharks. The islands are home to thousands of birds, including several rare species: the Henderson crake, Henderson fruit-dove, Henderson lorikeet, Henderson petrel, Henderson reed-warbler, Phoenix petrel and Pitcairn reed-warbler.

Pitcairn's waters are full of fish. Local boats are available for trips, or visitors can fish from the rocks. Humpbacks and pilot whales can be spotted from the shore as they breach in the waters just off the coast.

There is one café on Pitcairn, called Christian's Café. It opens every Friday. There is also a takeaway, open on Wednesdays, and two bakeries. All visitors will also need an alcohol licence before their arrival, if they wish to drink. These cost £40 and are valid for six months.

John Adams' Grave

Known as the Patriarch of Pitcairn, John Adams outlived the other Bounty mutineers and played a key role in restoring stability to the community after its early period of bloodshed.

Bang on iron

A place on the northeast coast road where, under an overhanging rock, the mutineers set up their forge. Another unusually-named spot is "Where Reynolds Cut The Firewood", a place where the captain of a ship visiting the island came ashore for firewood.

Bernice Christian Memorial Park

A sports area with facilities for tennis, volleyball, rounders, cricket and longball.

A steep cliff, at the bottom of which is a popular picnic area and Pitcairn's only beach.

Includes all of the island's 11 endemic plant species, as well as other rare flora and fauna.

Well-preserved remains of stone age settlements from the island's earlier inhabitants. There are also the remains of a prehistoric altar at Tedside, where human sacrifices are understood to have been made.

The 1,138 ft highest point on Pitcairn.

Little George Coc'nuts

A valley located in the south west of Pitcairn. It was a coconut grove owned by George Young, son of mutineer Ned Young.

No Guts Captain

The burial site of a captain from an early visiting ship who requested before death that he not be buried at sea. Pitcairn was the next landfall and he was buried there.

Pitcairn is surrounded by a treacherous – but stunning – coastline with locations whose names evoke the island's history.
Among the sites for tourists to visit are several where inhabitants have suffered accidents, including "Where Dan Fall", "Where Freddie Fall", "Where Minnie Off" and "Where Tom Off". Others include "Timiti's Crack", where a Tahitian fell to his death, and "Down the God", where heathen idols were found and cast into the sea. Rocks off the shore include Big George Rock, Bitey-Bitey and Bop Bop. An area of the southern coast is called Ugly Name Side. The origin of the name is unknown. Nearby is a point simply called "Oh Dear".
We hear so much about toxins in fish, you’re probably wondering – what exactly is safe to eat? According to Health magazine, experts say that most seafood is healthy to eat twice a week. It contains high-quality protein, heart-healthy omega-3s, and low levels of saturated fat. Some types also contain pollutants such as mercury, which could harm developing babies. It is important to avoid these. Luckily, it’s pretty easy to do. Here’s how.
- Think small. Tim Fitzgerald is a scientist and senior policy specialist with the Environmental Defense Fund. He says that the best way to reduce exposure to contaminants is to cut back on eating big fish. Pollutants from the atmosphere regularly settle into the ocean, and fish that grow large - like shark, marlin and Chilean sea bass - accumulate more contaminants in their bodies during their long lives.
- Also: Mix it up. You want to eat a variety of seafood to lower the risks of contaminants. For example, if you like tuna, eat it only once a week because it’s a bigger fish. Then choose something smaller, such as shrimp, for your next meal.

Whatever you do, don’t stop eating all seafood out of fear. Dr. Dariush Mozaffarian is a director with the Harvard School of Public Health. He says that eating fish is the single best dietary change you can make to reduce your risk of heart disease. Studies show that the abundant omega-3 fats in seafood help your heart by lowering blood-fat levels, slowing the buildup of plaque in your arteries, and lowering blood pressure. Also, the Environmental Defense Fund has come up with a new “Super Green” list of seafood that’s low in contaminants, a great source of omega-3s, and easy on the planet – meaning the fish aren’t caught by trawls and dredges, which damage the ocean floor. This list includes farmed mussels, oysters and trout, wild Alaskan salmon, pole-caught albacore tuna, and wild Pacific sardines. So go ahead and enjoy them.
City CBD Looking South

- Population: 90,466 (2006 census)
- Area: 116.5 km² (45.0 sq mi)
- Region: Darling Downs, South East Queensland

The City of Toowoomba was a Local Government Area located in the Darling Downs region of Queensland, Australia, encompassing the centre and inner suburbs of the regional city of Toowoomba. The city covered an area of 116.5 square kilometres (45.0 sq mi), and existed as a local government entity in various forms from 1860 until 2008, when it amalgamated with several other councils in the surrounding area to form the Toowoomba Region.

The Toowoomba Municipality was proclaimed on 19 November 1860 under the Municipalities Act 1858, a piece of New South Wales legislation inherited by Queensland when it became a separate colony in 1859. William Henry Groom, sometimes described as the "father of Toowoomba", was elected its first mayor. It achieved a measure of autonomy in 1878 with the enactment of the Local Government Act. With the passage of the Local Authorities Act 1902, Toowoomba became a town council on 31 March 1903. On 29 October 1904, Toowoomba was proclaimed a City.

Toowoomba absorbed parts of the Shire of Middle Ridge and Town of Newtown on 23 February 1917, and on 19 March 1949, following a major reorganisation of local government in South East Queensland, Toowoomba grew its area to include parts of the Highfields and Drayton Shires.

In 2005 Toowoomba City Councillor Lyle Shelton called for Toowoomba's boundaries to be expanded to encompass the area some refer to as "Greater Toowoomba", reflecting Toowoomba's suburban spread beyond the city boundaries.

In 2006 the Mayor proposed a controversial plan to recycle sewage into Cooby Dam which is used for drinking water. The federal government agreed to provide partial funding subject to a number of conditions including a requirement to hold a referendum on the issue. On 29 July 2006, Toowoomba voted against the recycled sewage project with the 'No' vote winning by 62% to 38%.
On 15 March 2008, under the Local Government (Reform Implementation) Act 2007 passed by the Parliament of Queensland on 10 August 2007, the City of Toowoomba merged with the Shires of Cambooya, Clifton, Crows Nest, Jondaryan, Millmerran, Pittsworth and Rosalie to form the Toowoomba Region. The former mayor of the Shire of Jondaryan won the mayoralty of the new entity.
Peerumade, Kottayam, Punalur, Tiruvalla and Alappuzha are those stations in the State which have witnessed significant reduction in rainfall during the last century. Y.E.A. Raj, Deputy Director-General, Regional Meteorological Centre, Chennai, revealed this during a special address at the Kerala Environment Congress 2012 here. The topic of his address was ‘Extent of climate change over India and its projected impact on Indian agriculture.’

Climate change in respect of individual stations manifests with mixed trends with positive and negative changes, he said. For instance, positive trends are available from stations such as Kochi (100.6 mm) and Kasaragode (153.5 mm) in the State. “It must be stated here that rainfall series for individual months/seasons in some of the series may have shown a significant trend. In some other cases, these trends would have manifested only recently. “A more detailed analysis of time series must be performed to detect and analyse such incidence,” Raj said.

The scenario of significant climate change, especially global warming, is now well documented and the evidence incontrovertible. However, in the Indian context, there appears to be no clear signal of such change at least in crucial parameters such as rainfall and occurrence of cyclonic storms. Projected climate change based on various models suggests steady increase in temperature and, at a later stage, slight increase in rainfall. The effect on agriculture is likely to be mixed, Raj said. The increase in carbon dioxide in the atmosphere initially favours agricultural production. But increase in temperature would have exactly the reverse effect. The situation is fluid and could even be seen to be contradictory at times. This calls for learned and measured responses based on scientific facts, free from transnational biases.
Most people know Guantanamo Bay as the US military prison located in Cuba, but few know the American presence in this bay goes back to 1898. Guantanamo Bay is a 45 square mile area located on the eastern end of Cuba. Christopher Columbus described the bay as "a broad bay with dark water, of unsuspected dimensions," during his second voyage to the new world. Spanish settlers later took control of the area from the native people, and the British would later seize control in 1741.

During the Spanish-American war in 1898, a US fleet took shelter in the Bay from summer hurricane weather. After the Spanish-American war, the US government signed a perpetual lease with the first president of Cuba in 1903. It left Cuba with sovereignty of the land, but gave the US "complete jurisdiction and control" of the area. This lease was reaffirmed in a 1934 treaty. The authenticity of the lease is still debated today.

The United States used the bay as a coal refueling station and a harbor for its military. During WWII it served as a strategic base for escorting cargo ships to the Panama Canal. Over the years the Bay went through many transformations and redesigns, including dry docks, airfields, and eventually the construction of today's military prisons. Guantanamo Bay is known today for "War on Terror" prisons, but has been an important military location for a great portion of US history.
Over 8,000 websites created by students around the world who have participated in a ThinkQuest Competition.

CSS H.L. Hunley

Travel with our crew as we explore the Confederate Submarine Hunley. After 137 years lying undetected under the water, the CSS Hunley is now back in port and being excavated. The CSS Hunley is a time capsule that has been opened, giving us access to her treasures and her many secrets.

Age category: 12 & under
Subject: History & Government > War & Conflicts
Some Games Industry Facts
- A typical computer game costs approximately £19.99-£49.99 to buy and anywhere from £300,000 to £5,000,000 to develop!
- A game, depending on its size, scope, content and platform, can take from 6 months to 3 years to create. Most games have a team size varying between 12 and 60 staff which will include programmers, artists, designers, scripters, audio engineers and producers. Games are also usually created as multi-platform (for example PC, Xbox 360 and Nintendo Wii).
- There are approximately 200 developers based in the UK with team sizes varying from 20 to 200 people. Across the world there are around 2000 developers, some of which employ over 1000 staff!
- The UK is recognised as having some of the best development studios in the world. UK game developers are also employed at studios across the world, as far afield as Australia, Canada, Japan and North America.

To many the creation of a game may seem quite easy, but today’s games take a great deal of time and resource. Most game development is broken down into a series of milestones, the key ones of which are as follows:
- The concept – typically drawings, words, models, code and basic prototypes that describe the nature, objectives and features of the game.
- Design document – a detailed plan of the game’s features and how it will play, generated from the concept and presented to potential publishers.
- Technology demo – a prototype that demonstrates the game’s technology, illustrating the appearance and movement of the characters or objects on the screen. This is usually produced for review purposes, to decide if the concept is marketable.
- Playable prototype or Vertical slice – normally a few levels of the game produced with as many of the game play features in and working. This allows reviewers to get a feel for the game, the level of complexity, the saleability of the title and technical risk involved in fully developing the game.
- ALPHA/BETA – the next major milestones that represent near completed games. These are subject to extensive game-play, compatibility and bug testing by both the developer and the publisher. - MASTER – a completed game approved by the platform holder and available for manufacturing in readiness for sales on the high street.
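The gap between the retail price and development cost quoted in these facts invites a rough break-even calculation. The sketch below is purely illustrative: the developer's revenue share per copy sold is an assumed figure, not one given in the text above.

```python
import math

def breakeven_units(dev_cost: float, retail_price: float,
                    revenue_share: float = 0.3) -> int:
    """Copies a developer must sell to recoup dev_cost.

    `revenue_share` is the fraction of the retail price the developer
    actually keeps after platform, publisher and retail cuts -- an
    illustrative assumption, not a figure from the facts above.
    """
    return math.ceil(dev_cost / (retail_price * revenue_share))

# Under this assumption, a £300,000 game sold at £19.99 needs on the
# order of 50,000 copies to break even; a £5,000,000 game at £49.99
# needs several hundred thousand.
print(breakeven_units(300_000, 19.99))
print(breakeven_units(5_000_000, 49.99))
```

Varying `revenue_share` shows how sensitive the break-even point is to distribution costs, which is one reason budgets and team sizes vary so widely across the industry.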
What should medical volunteers consider before taking part in a clinical trial?

It is important for medical volunteers to learn as much as possible about a clinical trial before deciding to take part, and to discuss any questions they have concerning the study with the research staff, in order to learn about the type of care that can be expected and the various procedures involved. Below is a list of questions that could be helpful for a medical volunteer to consider before taking part in a clinical trial. Answers to these questions are usually covered in the informed consent document that is provided to medical volunteers before taking part in the study.

- What is the aim of the study?
- What is the required medical volunteer population for the study?
- What are the reasons for researchers believing that the drug under investigation will be effective? Has the drug been researched before?
- What are the tests and procedures that will be carried out during the study?
- What are the likely risks, side effects and benefits for a medical volunteer taking part in the study?
- How will my usual routine be affected?
- What is the duration of the study?
- Will I be required to stay in the clinical unit overnight?
- Who is funding the clinical trial?
- Will I be compensated for my time?
- Are there any long-term commitments required for study participation?
- How can I tell if the drug under investigation is effective? Will the study results be disclosed to me?
- Who is the person responsible for my wellbeing during the course of the study?

What preparation should potential medical volunteers make before meeting with a research team to discuss clinical trial participation?

It is important for potential medical volunteers to plan ahead and prepare any questions they have for the initial meeting with the research team. It may also be beneficial for a friend or relative to come along to the screening visit to provide support and hear the responses to these questions.

Every clinical trial conducted in the UK must be approved and monitored by an Independent Research Ethics Committee (IREC) to ensure it is safe to conduct the research study in human volunteers (with the least possible risk) and that the study objectives are ethical and in line with Good Clinical Practice (GCP) guidelines. An IREC is an independent committee of research experts, physicians, statisticians and other community representatives that assesses whether the planned research study is ethical and ensures that the rights of medical volunteers are protected. All research organisations that support or conduct biomedical research involving human medical volunteers must, by UK legislation, be approved and continuously monitored by an IREC.

How is the safety of the medical volunteer protected?

The conduct of clinical trials is strictly governed by the legal and ethical guidelines for Good Clinical Practice (GCP). Furthermore, most clinical research studies are designed with built-in safeguards that aim to protect the medical volunteer participants. All clinical trials are carried out in line with a stringently controlled study protocol detailing the required tasks and procedures that are to be carried out by the clinical and research team without any deviation. During the course of the study, the team of researchers will be responsible for reporting the results to the study sponsor, various medical journals and government agencies. However, the medical volunteers' data will not be disclosed in these reports at any time and will remain strictly confidential.
Book of Hours. Tours, ca. 1470. (Ms. 7).
Albrecht Durer. Apocalypse. Nuremberg: A. Durer, 1511.
The song of songs which is Solomon's. Chelsea: Ashendene Press, 1902.

The tradition of the book in the Western World is well documented in the Watkinson:

- Medieval manuscripts and manuscript leaves, 12th-15th century
- 3,000-4,000 early printed books (15th-17th centuries)
- Trumbull-Prime collection of early illustrated books, especially strong in 16th-century German and Italian materials, including works of Albrecht Durer, emblem books and Florentine and Venetian book illustration
- ca. 10,000 18th-century titles
- 19th-century illustrated books on a wide range of subjects and from many lands, including an extensive Cruikshank collection
- Alphabet books (ca. 350 titles)
- Fine printing from the Private Press collection, including:
  - a nearly complete run of the Ashendene Press
  - classic titles from the Private Press movement
  - 100+ contemporary artists' books and examples of fine printing
- Examples of fine bookbinding (15th-20th centuries)
- Extensive secondary holdings on the history of printing and bookbinding

Book Arts and the History of the Book Guide
Today, where the grass is always greener. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them.

What sport uses the world's largest maintained playing field? That would be golf. A typical course covers more area than 100 football fields, though only about half of that is manicured grass. Golf courses provide pleasant green space, as housing developers are all too aware. But the voracious need for weed killers, pesticides, fertilizers, and water puts golf courses squarely in the sights of environmentalists.

Today's picture-perfect courses are a far cry from the grassy dunes of Scotland where the game originated. Early course maintenance was performed by grazing animals who, together with sea birds, also provided natural fertilizer. The demand for near flawless courses grew with changes in technology. By the end of the twentieth century, course design wasn't so much about respecting nature as it was about taming it. Course architects bulldozed first and asked questions later.

But that's changed. Today, the best course designs fit naturally into the landscape and respect the surrounding environment. The courses at Bandon Dunes, unpretentiously laid out on the windswept Oregon coast, have won accolades from architects, golfers, and environmentalists alike. They're among a growing number of courses achieving recognition as Certified Audubon Cooperative Sanctuaries.

Golf courses can be used to reclaim otherwise unusable land. Houston's Wildcat course is built on the site of a former garbage dump. One benefit for golfers is that the course has hills — a rarity in the gulf coastal region. Abandoned strip mines and quarries can be the setting for stunning layouts. The aptly named Quarry golf course in San Antonio falls into this category.

Many golf courses maintain their own lakes and ponds for watering the turf. That reduces demand on public sources of water. Some have experimented with the use of gray water — wastewater that doesn't include sewage. The highly rated Jimmie Austin course at the University of Oklahoma is an example.

Some of the greatest strides in environmentally sound golf can be found in improvements to grass. The U.S. Golf Association's Turfgrass and Environmental Research Program is constantly working to develop pest-resistant strains of grass that survive on less water. It turns out that watching grass grow can be a vibrant activity.

So why is the golf community so actively focused on environmental issues? Well, in part it's good for public relations. As the film Wall Street's protagonist Gordon Gekko might say, “Green is good.” But it goes much deeper. After all, greenskeepers are highly educated professionals responsible for some of the largest, most heavily used green spaces in the world. They're gardeners with really big gardens. And they share a love of nature just as they share a love of the game.

I'm Andy Boyd at the University of Houston, where we're interested in the way inventive minds work.

NOTES AND REFERENCES:

T. Cook. "Greener Links." From the Oregon State University's Oregon's Agricultural Progress website: http://oregonprogress.oregonstate.edu/spring-2006/greener-links. Accessed March 27, 2012.

P. Iacobelli. "Natural Golf Courses Redefine Green." From the Environment on MSNBC website: http://www.msnbc.msn.com/id/8418445/ns/us_news-environment/t/natural-golf-courses-redefine-green/#.T3NpXdmwX_h. Accessed March 27, 2012.

R. Maranon. "OU Golf Course Stays Green with Grey Water." From the Oklahoma Daily website: http://oudaily.com/news/2009/sep/18/ou-golf-course-stays-green-grey-water/. Accessed March 27, 2012.

All pictures are taken from the websites of the referenced golf courses.

This episode was first aired on March 29, 2012.

The Engines of Our Ingenuity is Copyright © 1988-2012 by John H. Lienhard.
University of Idaho Geologists Take Preventative Measures Before Potential Earthquake

Monday, July 11, 2011

IDAHO FALLS, Idaho – Grand Teton National Park is a spectacular site along the Wyoming-Idaho border. The park brings in nearly 4 million visitors a year and creates a scenic background for those who live there. While the beauty is stunning, it's tempered by the potential of danger from beneath the ground. The majestic mountain range sits on an active fault line that could one day lead to a severe earthquake.

The University of Idaho and the Idaho Bureau of Homeland Security are working together with local officials to identify the areas of Idaho's Teton County that would be most affected in the event of an earthquake. The results of the survey will allow county leaders and citizens the opportunity to better protect government buildings and private property before an earthquake hits.

“With eastern Idaho's risk from earthquakes, it is important to have the best information so that emergency managers can be prepared and make informed decisions,” said Brig. Gen. Bill Shawver, director of the Idaho Bureau of Homeland Security. “This project is a great cooperative effort between Teton County, the University of Idaho and BHS that will increase the ability of emergency managers to plan for earthquakes.”

Teton County's governmental seat is the city of Driggs, roughly 20 miles west of the Teton fault. While this fault has been seismically quiet in recorded historic time, geologists believe it could generate a magnitude 7.2 earthquake at some point in the future.

“Such an earthquake could produce heavy damage in Teton County to structures not built to seismic standards,” explained Bill Phillips, research geologist for the Idaho Geological Survey. “The amount of damage during earthquakes also is influenced by local soil and rock conditions. We are constructing a map of these conditions in Teton County so that emergency planners can be better prepared.”

During the week of July 18-22, geologists will be in the field using seismographs and geophone sensors in 25 places around Teton County to determine what type of soil and bedrock make up the area and how those areas would react during potential earthquake activity. Results from the survey will be given to the county's emergency services center.

The survey is funded by the Idaho Bureau of Homeland Security through the Earthquake Hazard Reduction grant program. For more information on the survey, contact Bill Phillips from the University of Idaho at (208) 301-8794, or Greg Adams from Teton County at (208) 354-2703.

# # #

About the University of Idaho

Founded in 1889, the University of Idaho is the state's land-grant institution and its principal graduate education and research university, bringing insight and innovation to the state, the nation and the world. University researchers attract nearly $100 million in research grants and contracts each year. The University of Idaho is classified by the prestigious Carnegie Foundation as high research activity. The student population of 12,000 includes first-generation college students and ethnically diverse scholars, who select from more than 130 degree options in the colleges of Agricultural and Life Sciences; Art and Architecture; Business and Economics; Education; Engineering; Law; Letters, Arts and Social Sciences; Natural Resources; and Science. The university also is charged with the statewide mission for medical education through the WWAMI program. The university combines the strength of a large university with the intimacy of small learning communities and focuses on helping students to succeed and become leaders. It is home to the Vandals, and competes in the Western Athletic Conference.

For more information, visit www.uidaho.edu.
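As an aside on the magnitude 7.2 figure quoted in the release, the standard Gutenberg-Richter energy relation (log10 E = 1.5M + 4.8, with E in joules) gives a rough sense of scale. This sketch is an illustrative aside, not part of the survey itself:

```python
def seismic_energy_joules(magnitude: float) -> float:
    """Approximate radiated seismic energy (joules) for a given magnitude,
    using the Gutenberg-Richter energy relation log10(E) = 1.5*M + 4.8."""
    return 10 ** (1.5 * magnitude + 4.8)

# Each whole step in magnitude corresponds to roughly 31.6x more energy,
# so a magnitude 7.2 event releases about 1,000 times the energy of a
# magnitude 5.2 event.
ratio = seismic_energy_joules(7.2) / seismic_energy_joules(5.2)
print(round(ratio))
```

Numbers like this are why geologists treat even a "quiet" fault capable of a 7.2 as a serious planning concern.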
College living can undoubtedly be exciting. For most students attending schools outside of their home town or state, it's their first opportunity to be independent. For many students, this is the first time they are away from their homes, families and friends for any significant period.

While college provides new and exciting opportunities, it also introduces a myriad of new safety hazards, especially to students living in dormitories, apartments and other community locations. Although a student may have been the safest person in their school, house or neighborhood, an impeccable safety record doesn't safeguard someone against the actions of other residents in shared college housing facilities. Therefore, it is extremely important to develop and practice an escape route should there be a fire.

Fire is the third leading cause of accidental deaths in the United States. A residential fire occurs every 82 seconds in this country, and once burning, the size of a fire doubles every 30 seconds. If a fire should occur in your building, evacuate as soon as possible. Do not try to act bravely or put the fire out. That is a fight too easily lost, and is just not worth it. If you have an escape plan, follow it at the first sign or smell of a fire. Never exit through a door that feels hot to the touch, as flames are likely on the other side. It is also a good idea to know where all the fire extinguishers are located in the building.

In community living facilities, everyone must do their part to make their dwelling a safer place. Here are a few easy steps you can take to help prevent fire through electrical hazards:

- Look for the UL Mark on all products. It means samples of the product have been tested for safety.
- Make sure outlets are not overloaded.
- Check electrical wires and cords on appliances, tools, lamps, etc., to make sure they are not worn or frayed.
- Never run electrical wires or extension cords under carpets or heavy items, and never bunch them up behind a hot appliance.
- Unplug appliances when not in use.
- Have building management install at least one smoke alarm on each level, and make sure they are maintained and tested regularly.

Fire is a chemical reaction involving fuel, oxygen and heat. Take away any of these three elements and a fire cannot last. There are four classifications of fires, depending on their fuels:

- Class A -- Ordinary materials like wood, paper, cloth, rubber and plastics. Most home fires fall into this category.
- Class B -- Combustible liquids such as gasoline, kerosene, alcohol, paint and propane. These tend to be more severe and dangerous than Class A fires because the liquid fuel is highly flammable and can propagate easily.
- Class C -- Electrical equipment like appliances, switches and power tools. These fires are extremely dangerous due to added shock hazards and because the source is energized. An energized fire source supplies a steady and constant ignition condition.
- Class D -- Combustible metals like magnesium, titanium, potassium and sodium. These fires burn at a very high temperature and can react violently with water or other chemicals.
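The growth rate cited earlier, a fire doubling in size every 30 seconds, compounds alarmingly fast, which is why immediate evacuation matters. A minimal sketch of that arithmetic:

```python
def fire_size(initial_size: float, seconds: float,
              doubling_time: float = 30.0) -> float:
    """Size of a fire after `seconds`, assuming it doubles every
    `doubling_time` seconds (the figure quoted in the article)."""
    return initial_size * 2 ** (seconds / doubling_time)

# A fire covering 1 square foot grows a thousand-fold in five minutes:
print(fire_size(1.0, 300))  # 2**10 = 1024.0
```

Five minutes is ten doubling periods, so even a small ignition becomes an unmanageable blaze well before most people could fight it. Get out first.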
Second World Assembly on Ageing
Madrid, Spain, 8-12 April 2002

No safety net for older migrants and refugees

"Older refugees have been invisible for too long."
- United Nations High Commissioner for Refugees Sadako Ogata (1999)

Older refugees represent some 11.5 per cent of refugee populations and, in some cases, they may represent as much as 30 per cent. The majority of older refugees are women. These are people who have lost more than just family or belongings, and in interviews conducted by the Office of the United Nations High Commissioner for Refugees (UNHCR), it is apparent that for many, there is no reason to live.

The figures tell little, however, of individual hardship and suffering. Typical is the case of an old man sitting alone, weeping, in a camp in the former Yugoslav Republic of Macedonia. Clutching his few belongings and refusing to move, he seemed to have lost the will to live. Or the 86-year-old Kosovo Serb woman, living by herself in Pristina, who had been brutally beaten by three teenagers. The media, for the most part, do not cover the particular situations of older persons in need, and such images and stories are rarely known. But they are real, and so is the painful situation of so many older refugees.

Older refugees commonly encounter three main problems: social disintegration, negative social selection and chronic dependency.

· Social disintegration occurs when, due to economic decline, the formal or informal social support systems erode; or when war, flight or insecurity cause families to become separated and dispersed. In either case, the number of elderly persons in need increases.

· Negative social selection occurs when refugee camps and collection centres empty over a period of time. Those who are young, healthy and able-bodied are the first to depart, leaving behind the weak and the vulnerable. The plight of the elderly is particularly wretched. Often they have nowhere to go and no one to care for them.

· Chronic dependency can occur when solitary older persons, unable to secure state benefits or family support, become dependent on UNHCR for long periods of time. In this situation, UNHCR faces a particular challenge. At the same time that UNHCR is working to ensure that the older person's experience of exile is not deepened by poverty and destitution, it must also discourage chronic dependency - by helping them to regularize their status and obtain access to all possible benefits, entitlements and rights.

In 2000, to address these problems, the Standing Committee of UNHCR approved its Policy on Older Refugees. Based on the 1991 United Nations Principles for Older Persons, the policy stresses that older refugees should not be seen solely as passive recipients of assistance; on the contrary, they should be seen as a valuable resource with much to offer. These are people with a wealth of accumulated experience and knowledge, and they are well able to participate in decisions and activities that affect their own lives and those of their families and communities.

Older refugees often serve as formal and informal leaders of communities. They provide guidance and advice, and they transmit traditions, skills and crafts to other generations, thus preserving the culture of the dispossessed and displaced. They make active contributions to the well-being of their family members, and only become totally dependent in the final stages of frailty, disability and illness. Older persons have taken the lead in returning to countries as far afield as Croatia and Liberia, and once back home they are often able to contribute to peace and reconciliation measures. Making full use of the capabilities and talents of older refugees and realizing their potential is an essential component of the programmes of UNHCR.

Although older refugees may have specific needs, UNHCR has found that they can best be assisted within overall protection and assistance programmes rather than through the establishment of separate services. For example, older refugees may need food that is easily digestible, but that need can better be met through appropriate planning within existing programmes. The needs of older refugees are also met most effectively within the context of family and the community. Therefore the capacity of families and communities to meet their own needs and incorporate older people within them should be strengthened.

"Migrants... tend to be paid low wages, receive few or no benefits, and work without even minimal safety and health protection. ... Clearly, we must work together to ensure that migrants live in dignity and safety."
- United Nations Secretary-General Kofi Annan

They migrated from their homes, usually in the countryside, when they were younger, in search of new jobs and opportunities. But after spending years working in low-paying positions, many older migrants find themselves living anonymously in crowded apartments in growing cities, with little support from either their families or the government.

According to available data, one in every 50 persons - some 150 million total - lives permanently or temporarily outside their country of origin. This number includes 80 to 97 million workers and their dependants, some 14 million recognized refugees, and permanent immigrants. According to estimates by the International Monetary Fund, migrant worker earnings sent back to home countries accounted for $77 billion in 1997, second only to world petroleum exports in international trade monetary flows.

Where the extended family network once helped older people in the rural community, older migrants find that these traditional social networks are non-existent in the cities, and there are few alternatives to replace them. The situation becomes critical, especially when the older migrant becomes ill or disabled.

The problems that older migrants face are generally the problems that most older poor people have, and efforts to assist the older poor will help migrants as well. These efforts include providing better access to social protection, designing measures to sustain economic and health security, establishing community centres for older persons, and helping families share living spaces with older family members who are in need.

For older migrants who have moved to another country, the situation is different, often depending on how well they have integrated into their new country. As legal international migrants from earlier decades grow older, governments can assist them, for example, by extending social protection and ensuring pension rights. They can help them become part of their new communities by breaking down language barriers and ensuring that they receive services.

The situation of ageing migrants who perform illegal work is different, since they fall outside the realm of social protection and have no access to pension schemes or adequate health services. The International Labour Organization (ILO) has identified the plight of these migrants as a significant cause for concern. The ILO is working to ensure that older migrants receive treatment equal to that of national workers, and that the rights that they have acquired are maintained after transfer of residence from one country to another.

This article was based on information provided by the Office of the United Nations High Commissioner for Refugees (UNHCR) and the International Labour Organization (ILO).

For further information, please contact:
UN Department of Public Information
Tel: (212) 963-0499

Published by the United Nations Department of Public Information
DPI/2264, March 2002
- Date submitted: 25 Oct 2011
- Stakeholder type: Member State
- Name: Canada

CANADA'S NATIONAL SUBMISSION

Twenty years after the Earth Summit, much has been done to address the environmental and development challenges identified in 1992. However, many of the challenges still exist, others have grown more acute and new issues have emerged. It is clear that progress needs to be more comprehensive and effective. The UN Conference on Sustainable Development (UNCSD) in June 2012 in Brazil is an opportunity to reinvigorate efforts towards sustainable development through an international renewal of political commitment that highlights the economic importance of the sustainable use of natural resources and raises awareness of the economic and social costs of environmental damage and its associated impact on human well-being.

The Government of Canada's approach to sustainable development emphasizes transparency and accountability in the integration of sustainability into government planning, reporting, programming and decision-making within the federal government. The cornerstone of this approach is Canada's Federal Sustainable Development Strategy (FSDS), which is an integrated, whole-of-government, results-based approach to achieving sustainability. A key component of the FSDS is the effective monitoring and reporting of goals and targets using indicators in order to track and report on progress. The Government of Canada considers sustainability issues through its Cabinet Committee structure. Canada has many national institutions that address various aspects of sustainable development and that are part of the overall supportive framework for sustainable development in Canada.

Over the last ten years, Canada has provided important contributions to the efforts of developing countries towards meeting the Millennium Development Goals (MDGs).
Canadian development assistance has been significantly increased, and Canada is working to make its assistance more effective, accountable and responsive to the needs and priorities of developing countries. Canada doubled its international assistance from 2001 to 2010, with assistance to Africa doubling from 2003/04 to 2008/09. Canada has been a first mover on announcing and disbursing against its 2009 G8 L'Aquila Summit commitments to support sustainable agricultural development. As of April 2011, Canada has fully disbursed its $1.18 billion L'Aquila commitment and is the first G8 country to do so. Canada has launched significant new initiatives to support maternal, newborn and child health (the 2010 G8 Muskoka Initiative), education and food security in developing countries, with a clear focus on sustainability of effort and impacts.

A critical component of Canada's Aid Effectiveness Agenda in support of the Paris Declaration is ensuring that aid is effective, accountable and responsive to the needs and priorities of its developing country partners. One means of achieving this is through the establishment of gender equality, governance and environmental sustainability as cross-cutting themes that are integrated into development assistance.

Canada believes that countries need to focus and strengthen efforts on the management of their natural resources in a sustainable and socially responsible manner. These efforts should include policies that improve natural resource management, environmental sustainability and corporate social responsibility. Particular attention should be given to assisting countries that face significant capacity challenges. Canada has taken a leadership role in corporate social responsibility (CSR) by launching in 2009 its CSR Strategy for the Canadian extractive sector operating abroad.
The Strategy includes support for host country resource governance capacity-building initiatives such as the Extractive Industries Transparency Initiative; endorsement and promotion of widely-recognized international CSR performance guidelines such as the Voluntary Principles on Security and Human Rights; and the creation of the Office of the Extractive Sector CSR Counsellor. Part I ? Conference Objective Securing renewed political commitment for sustainable development is in the interest of all countries. Sustainable development requires conscious effort, priority and planning at all levels in order to be effective. National priorities need to support and align with local action and actors in order to achieve tangible results. In Canada's view, this requires a focus on practical steps for implementation which avoid duplication and overlap, harness broad support internationally and make a difference in the lives of citizens. The Conference can help make the case that a transition to a green economy can be consistent with the environmental, economic and social objectives underpinning sustainable development and poverty eradication. Canada sees the Conference as an opportunity to identify (a) policy tools and best practices to facilitate the transition and (b) a balanced suite of voluntary indicators for measuring progress towards a green economy. The Conference could also propose practical strategies to improve the existing institutional framework for sustainable development, with the objective of enhancing coherence and co-ordination, while reducing inefficiency and duplication. With the current global economic situation, priority needs to be given to the effective use of existing resources for sustainable development, and improving the quality and effectiveness of programs, more than ever. Canada would like the outcome document to be strategic, concise and focused on highlighting progress on the two Conference themes. Part II ? 
A Green Economy in the Context of Sustainable Development and Poverty Eradication

Transitioning towards a green economy is a long-term process that requires the active engagement and leadership of many actors, including industry and civil society as well as all levels of government. The engagement of the private sector, especially small and medium enterprises, is critical. The Conference could examine the means by which industry, civil society and government can work together to leverage their potential to create jobs, support workers through training and skills development, access financing, advance innovative technologies, and influence the supply chain towards greener methods of production. The five thematic priorities of Canada's international development assistance are sustainable economic growth, security and stability, democracy, children and youth, and food security. Canada believes that the Conference should explore the opportunities associated with a green economy in the context of sustainable development and poverty eradication. Using both regulatory and non-regulatory instruments according to national circumstances can enable the wide range of sectors and actors whose participation is necessary to successfully achieve a green economy. Well-designed regulations provide predictability for business, thus supporting innovation and economic growth while meeting environmental objectives. In Canada, a number of policy tools and practices developed and implemented over the years have contributed to greening our economy. In that vein, a number of policy tools and best practices are included in Annex I for consideration as part of a green economy toolkit, which Canada believes would be a useful contribution of the Conference towards sustainable development efforts. Examples include Canadian initiatives in the fields of chemicals management, corporate social responsibility and green accounting.
In order to provide evidence of progress towards a green economy, indicators will be needed that can be applied flexibly to demonstrate the effectiveness of regulatory and non-regulatory actions, as well as provide the data needed for evidence-based decision-making. Canada believes that a balanced suite of indicators, adaptable to national circumstances, can assist governments to measure progress as they transition to a green economy, but their interpretation should consider differences in national circumstances to avoid inappropriate comparisons (see Annex II).

Part III - Institutional Framework for Sustainable Development

Transparent, democratic and accountable systems of governance at the local and national level are vital to achieving sustainability globally and directly influence the ability of states to achieve sustainable economic, environmental and social development and security for their citizens. There is significant scope for improvement with regard to the UN's ability to foster the integration of economic, social and environmental considerations into its support for sustainable development. Canada believes that the international framework for sustainable development could be enhanced through greater integration and coherence among economic, social and environmental objectives in the existing UN institutional system and the on-the-ground programs of member states. The existing UN structure includes many important bodies in which member states can discuss pressing sustainable development issues. These include the General Assembly, as well as the UN Economic and Social Council and its functional and regional Commissions. The programmes and funds, such as UNDP and UNEP, also have a vital role to play, as do the secretariats of the multilateral environmental agreements (MEAs).
Canada considers that the Conference can serve as an opportunity to improve the current institutional arrangements for sustainable development to bring more cohesion and effectiveness and avoid duplication. A number of ideas have been advanced for mainstreaming sustainable development within the UN. These have included enhancing UNEP, strengthening the integrative role of the UNDP, especially at the country-level, a new umbrella organization for sustainable development, a specialized agency such as a World Environment Organization, possible reforms to the Economic and Social Council (ECOSOC) and the Commission on Sustainable Development, and enhanced institutional reforms and streamlining of present structures. While remaining open to discussing all options that could promote better integration of sustainable development, in the current challenging global economic context, it is not possible in Canada's view to envisage the creation of new agencies such as a World Environment Organization. Canada would like to explore options related to improving the effectiveness, coherence and coordination of UNEP and UNDP, as well as whether a more focused agenda and streamlined format at the UN Commission on Sustainable Development is feasible so that the outcomes make a more effective contribution to achieving sustainable development and the implementation of Agenda 21. Canada is also open to a discussion on whether ECOSOC could play a more integrative role with regard to international sustainable development. In addition to these, another option that could also be considered as part of a "package" of institutional reforms is the elaboration of a framework that better enables the mainstreaming of sustainable development considerations (i.e., economic, social and environmental considerations) across the UN system, with a particular emphasis on the UN's programming activities at the country level. 
This approach would include efforts to support sustainable development considerations identified within national partners' development priorities as a core element of an enhanced sustainable development partnership. The framework could be focused on improving existing mechanisms and structures and program delivery at the country level rather than the creation of new institutions (for further details see Annex III). Canada looks forward to working constructively towards a successful Conference with concrete and practical results. Canada would like to see a concise outcome document that, on the green economy in the context of sustainable development and poverty eradication theme, fosters the sharing of best practices, encourages the exchange of information, improves the capacity to measure progress and provides support for the active engagement of the private sector; and, on the institutional framework for sustainable development theme, promotes better coherence and coordination of existing mechanisms and structures for addressing sustainable development issues in the United Nations system, in particular at the country level where the impact on people's lives is greatest.

Annex I - Policy Tools and Best Practices for a Green Economy Toolkit

The Government of Canada believes that the elaboration of a green economy toolkit would be a useful contribution of the Conference towards sustainable development efforts. Practical solutions would assist in securing renewed political commitment for sustainable development, and would help bring local actions in line with national priorities in a manner applicable to local circumstances. The sharing of policy tools and best practices reinforces Canada's interest to avoid duplication and overlap and to achieve concrete results in the pursuit of sustainable development. Canada has identified a number of policy tools and best practices that could be considered for the toolkit, which are outlined below.
Decision-making and Results

Green accounting (or integrated environmental and economic accounting) measures linkages between the environment and the economy. Statistics Canada has played a leadership role in the development of green accounting since the first Rio conference. This tool has garnered increasing attention worldwide as governments and international organizations recognize the need to reconcile social and economic development with environmental sustainability. Variables measured in the Canadian accounts include greenhouse gas emissions, energy use, water flows and stocks of natural resources, and research is underway to develop accounts on ecosystem goods and services. As much as possible, the accounts include both physical and monetary measures, are complementary with the national economic accounts, and are based on internationally agreed concepts and methods.

Strategic Environmental Assessment (SEA) is a systematic and comprehensive process of examining the environmental and socio-economic effects of policies, plans and programs in order to influence decision-making towards more sustainable paths. In Canada's experience, SEAs can alert decision makers to risks, incorporate community engagement and traditional knowledge, and facilitate cooperation across sectors and boundaries. The use of Integrated Strategic Environmental Assessment, which can incorporate other critical themes like gender equality and governance, can be a particularly good tool for furthering sustainable development and moving towards a green economy. As per a 2004 Cabinet directive, all Canadian federal departments must complete an SEA of a policy, plan or program proposal, including free trade agreements, when the proposal may result in important environmental impacts, either positive or negative. The results of the SEA are then integrated into the development of the proposal and inform ongoing decision making.
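The environment-economy linkage that green accounting enables can be made concrete with a short sketch. The example below is purely illustrative: the sector names and figures are hypothetical, not Statistics Canada data. It shows how pairing physical accounts (emissions) with monetary accounts (value added) makes an emissions-intensity comparison across sectors a one-line computation.

```python
# Illustrative sketch only: toy sectoral figures, not official statistics.
# Green (environmental-economic) accounting pairs physical measures
# (e.g. greenhouse gas emissions) with monetary measures (sectoral value
# added) so environment-economy linkages can be computed directly.

physical_accounts = {        # Mt CO2e, hypothetical
    "manufacturing": 90.0,
    "transport": 120.0,
    "agriculture": 60.0,
}
monetary_accounts = {        # $B value added, hypothetical
    "manufacturing": 180.0,
    "transport": 100.0,
    "agriculture": 40.0,
}

def emissions_intensity(sector: str) -> float:
    """Mt CO2e per $B of value added in the sector."""
    return physical_accounts[sector] / monetary_accounts[sector]

# Rank sectors from most to least emissions-intensive
ranked = sorted(physical_accounts, key=emissions_intensity, reverse=True)
print(ranked)  # ['agriculture', 'transport', 'manufacturing']
```

The point of the sketch is the design choice, not the numbers: because the physical and monetary accounts share a common sector classification, any intensity or decoupling measure falls out of a simple join rather than a bespoke study.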
The integration of Corporate Social Responsibility (CSR) practices and principles into business operations can help companies contribute to the realization of a green economy. Canada has funded the development of a CSR Implementation Guide, a CSR Tool Kit for business, and a small and medium enterprise (SME) Sustainability Road Map. These free on-line tools provide practical guidance on why and how to integrate sustainability-oriented practices into business operations. Themes include: governance, decision-making, human resources, purchasing and marketing, waste management, buildings, transportation, product design and development, and resource use. In March 2009, the Government of Canada announced Building the Canadian Advantage: a Corporate Social Responsibility Strategy for the Canadian International Extractive Sector. The four pillars of the Strategy are: 1) continuing support for host country capacity-building initiatives related to resource governance; 2) promotion of widely-recognized international corporate social responsibility performance guidelines; 3) the creation of the Office of the Extractive Sector Corporate Social Responsibility Counsellor to assist in the resolution of issues pertaining to the activities of Canadian companies abroad; and 4) support for the development of a Centre for Excellence in Corporate Social Responsibility to develop and disseminate high-quality CSR tools and training to stakeholders.

Well-designed regulations provide predictability for business, thus supporting innovation and economic growth while meeting environmental performance objectives. An example of Canadian environmental policy that supports these goals is Canada's approach to managing chemical substances through the Chemicals Management Plan (CMP), designed to protect the environment and human health by setting stringent standards, while also spurring innovation and investment in the economy by being flexible, predictable and cost effective.
Canada was the first country in the world to categorize the thousands of chemical substances in use before comprehensive environmental protection laws were created. This has facilitated priority setting for those substances suspected to have the most dangerous properties and those requiring further research. Canada's risk-based approach relies on sound science, assessment, and monitoring, combined with a variety of tools to manage the potential risks posed by chemicals. The goal is to safeguard human health and our environment while supporting economic growth.

The Air Quality Health Index (AQHI) is a public information tool that provides current conditions and daily forecasts about air quality levels. It is the first of its kind to communicate the short-term health risks posed by the mixture of air pollutants (ground-level ozone, particulate matter and nitrogen dioxide) known to harm human health through cardiovascular and respiratory effects. AQHI forecasts are currently available across Canada through Environment Canada's Airhealth.ca website and are disseminated by a private broadcaster. The success of the AQHI can be attributed to strong partnerships between Environment Canada, Health Canada, provincial governments and key stakeholder groups who share a common interest in ensuring that Canadians have access to information that can help them protect their health. It demonstrates environment-health linkages and as such can contribute to the green economy by influencing the behaviour of Canadians.

Sustainable Development Technology Canada (SDTC) was created by Canada to finance and support the development and demonstration of clean technologies which provide solutions to issues of climate change, clean air, water quality and soil, and which deliver economic, environmental and health benefits to Canadians.
SDTC, which UNEP's Sustainable Energy Finance Initiative (SEFI) has called "a carefully crafted hybrid between grant and venture capital", targets the gap in early-stage venture capital financing, as well as the risk of proving a technology worthy of private investment. SDTC operates two funds aimed at the development and demonstration of innovative technological solutions. The $590 million SD Tech Fund™ supports projects that address climate change, air quality, clean water, and clean soil. The $500 million NextGen Biofuels Fund™ supports the establishment of first-of-kind large demonstration-scale facilities for the production of next-generation renewable fuels. Since 2002, SDTC has completed seventeen funding rounds and allocated a total of $515 million to 210 projects. That amount has been leveraged with an additional $1.2 billion in funding from other project partners for a total project value of $1.8 billion. SDTC has been recognized as a model by the OECD, which has stated that "SDTC plays a very positive role in enhancing Canada's competitive position in the environmental field." SEFI has noted that "SDTC's strategy exemplifies how taking aspects of different financial mechanisms can be very effective."

To help support sustainable development initiatives and spur green innovation at the local level, the Government of Canada endowed the Federation of Canadian Municipalities (FCM) with $550 million to establish the Green Municipal Fund (GMF), which is co-managed by Natural Resources Canada and Environment Canada at arm's length (the FCM Board of Directors, the decision-making body for the GMF, is advised by a 15-member Council with five appointees from the federal government). The Fund supports municipal initiatives to improve local air, water and soil quality and promote renewable energy with grants and below-market loans. Through GMF, FCM provides funding to three types of initiatives: plans, studies and projects.
Grants are available for sustainable community plans, feasibility studies and field tests, while a combination of grants and loans are available for capital projects. Funding, for which all Canadian municipalities and their partners are eligible, is allocated in five sectors of municipal activity: brownfields, energy, transportation, waste and water. The Fund promotes partnerships between, and leverages funds from, the public and private sectors. Further, support for community investment in clean energy is also available from the GMF as well as the ecoENERGY for Aboriginal and Northern Communities program, which funds energy efficiency and renewable energy projects in First Nations, Inuit, Métis and northern communities.

Agriculture is vital to addressing climate change, food security, poverty reduction and sustainable development. The objective of the Global Research Alliance on Agricultural Greenhouse Gases is to increase international collaboration and investment in public and private research activities to improve knowledge sharing, access to and application by farmers of sustainable practices and technologies. The exchange of existing and new science-based knowledge and practices can provide an opportunity for farmers to contribute to addressing the global challenges of climate change and food security, while pursuing sustainable livelihoods. Canada has initiated a $25 million Agricultural Greenhouse Gases Program to increase the development and adoption of sustainable practices that mitigate agricultural greenhouse gases, which can be shared domestically and internationally.

Sustainable Resource Management

The International Model Forest Network (IMFN) was introduced by Canada at UNCED in 1992, with the aim of sharing best practices with the world. Model Forests are large, forest-based landscapes where a wide variety of stakeholders work together to address social, environmental and economic issues in a sustainable manner.
Model Forests provide a practical and flexible approach to sustainable forest management, with a focus on enabling local communities to address challenges specific to their landscapes for their benefit. Canada demonstrates its global commitment to issues such as biodiversity conservation, climate change and local economic development through support of the IMFN, a robust international network unique in bridging policy-making and on-the-ground delivery. With 58 Model Forests in nearly 30 countries, the IMFN provides the framework for exchanging innovative ideas between sites.

Canada's Green Mining Initiative (GMI), launched in 2009, brings together stakeholders to develop green technologies, processes and knowledge for sustainable mining. The GMI objectives are to improve the mining sector's environmental performance, promote mining innovation and position Canadian mining companies and suppliers as global leaders in green mining in an emerging market. Natural Resources Canada invests $8M annually in the GMI, with an additional $3M in direct industry funding. The Green Mining Initiative has spurred green mining innovation across Canada, leading to significant progress on a number of key R&D projects and to the launch of new projects. Examples include developing and testing, in collaboration with a Canadian equipment manufacturer, the world's first electric-diesel hybrid loader in a Canadian mine. Additional funding from Sustainable Development Technology Canada will result in the bringing to market of a new hybrid loader with over twice the production capacity of the first prototype. A green mining vehicle / green energy roadmap is being developed to provide a strategy for selecting clean alternatives to diesel. Work was also undertaken to meet strength requirements for a unique alternative binder process that could be used for mine backfill.
Patenting is underway for this technology, as well as for a technology developed to successfully recover gold without the use of cyanide. Results of a third year of monitoring on mine sites as part of the Green Mines Green Energy initiative continue to demonstrate that the growth of biomass crops on mine tailings is feasible. The GMI is a striking example of good governance in sustainable mining and of what can be achieved through collaboration and partnerships. The benefits of increasing collaboration at the international level could be significant.

Canada's Integrated Oceans Management Program (IOM) supports regional processes through which decisions are made for the sustainable use, development and protection of Canada's marine ecosystem and resources. The IOM program provides federal, provincial and territorial authorities, industry and Canadians with the science and risk-based tools and governance fora needed to collaboratively develop Integrated Management Plans for defined ocean spaces. These plans, which incorporate social, economic and environmental considerations in decision making, are informed by the identification of Ecologically and Biologically Significant Areas and of species and community properties; the mapping of human uses; and the assessment of potential interactions between uses and key functional and structural aspects of marine ecosystems. Outcomes of the IOM process also include the identification of conservation measures, including networks of marine protected areas, needed to support the sustainable development of ocean resources, contributing to Canada's continuing fulfillment of international ocean-related commitments.

Sustainable Consumption and Production

Significant Canadian advancement in green building and sustainable community planning has been accelerated, in part, by federal programs such as the EQuilibrium™ Sustainable Housing Demonstration Initiative.
Led by the Canada Mortgage and Housing Corporation (CMHC) and supported by Natural Resources Canada's CanmetENERGY expertise, this initiative will result in the design, construction and demonstration of 12 highly sustainable homes across the country which produce as much energy as they consume on an annual basis. This cooperation between the public and private sectors has informed, inspired and accelerated the adoption of net zero energy healthy housing concepts nationally. Building on this successful initiative, the $4.2M EQuilibrium™ Communities initiative supports research, monitoring and showcasing of selected high performance neighbourhood projects. EQuilibrium™ Communities aims to provide measurable improvements over current practices in energy and water consumption, environmental protection, financial viability, land use and transportation.

Canada has put in place a strategy for the environmentally sound and secure disposal of all of its surplus electronic and electrical equipment. The Federal E-waste Disposal Strategy emphasizes reuse prior to recycling, where possible. Reuse options include donation to Computers for Schools (CFS), interdepartmental transfer, charitable donation and sale to the public. The Strategy provides recycling options for equipment that cannot be reused, including disposal through provincial recycling programs and a standing offer for e-waste recycling services. The Strategy is contributing to the realization of the green economy in Canada by creating green jobs, diverting e-waste from landfill, supporting provincial recycling infrastructure and providing computer-based educational and learning opportunities.

Technology Roadmaps (TRMs) are effective tools to enhance the coordination and development of innovative industries and technologies, with benefits to sustainability and the green economy.
Since 2002, Canada has used TRMs as forecasting tools that aim to determine future market needs, promote collaboration, and advance promising technologies. TRM processes allow government, industry and academia to work together to predict needs. Many TRMs focus on advancing emerging renewable and clean energy industries, or on addressing sustainability issues of other industrial sectors. Industry Canada and Natural Resources Canada have completed TRMs for sustainable fuels and chemicals from biomass, hydrogen fuel cell commercialization, clean coal, and carbon capture and storage. Work continues on TRMs for marine energy and sustainable housing. Among Canada's newer initiatives are green patents, or patent applications related to environmental technologies. Accelerating such patent applications can foster investment and expedite commercialization of technologies that could help to resolve or mitigate environmental impacts or to conserve the natural environment and resources. In 2011, the Canadian Intellectual Property Office implemented a new regulation to expedite the examination of green patents, with no fee required.

Annex II - Proposed Green Growth / Green Economy Indicators

As part of Canada's commitment to further dialogue and identify a balanced approach to measuring progress towards green growth, Canada believes there would be value in an international discussion on a suite of green growth indicators that countries could voluntarily choose to compile. In support of a constructive discussion at Rio+20, Canada is in favour of a process for the creation of a voluntary set of indicators. In this spirit, Canada has identified an illustrative set of indicators for international consideration, building upon the OECD's ongoing work on green growth. Using the OECD's Towards Green Growth framework as a lens, Canada's draft indicators have been grouped by the themes identified by the OECD.
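To make concrete what intensity-type indicators in such a suite measure, the following minimal sketch computes a production-based CO2 intensity, a renewable energy share, and a simple check for relative decoupling of economic growth from emissions. All figures are hypothetical, chosen only to illustrate the arithmetic, and the decoupling test is one possible definition, not an agreed standard.

```python
# Illustrative sketch only: toy figures, not real national statistics.
# Computes two intensity-style green growth indicators for a
# hypothetical country over two reference years and checks for
# "relative decoupling" (GDP rising while CO2 intensity falls).

def co2_intensity(co2_mt: float, gdp_billion: float) -> float:
    """Production-based CO2 intensity: Mt CO2 per $billion of GDP."""
    return co2_mt / gdp_billion

def renewable_share(renewable_pj: float, total_pj: float) -> float:
    """Share of renewables in total primary energy supply, as a fraction."""
    return renewable_pj / total_pj

def is_decoupling(gdp_then: float, gdp_now: float,
                  co2_then: float, co2_now: float) -> bool:
    """One possible decoupling test: GDP grows while intensity declines."""
    return (gdp_now > gdp_then and
            co2_intensity(co2_now, gdp_now) < co2_intensity(co2_then, gdp_then))

# Hypothetical data for two reference years ($B GDP, Mt CO2)
gdp_2005, co2_2005 = 1400.0, 560.0
gdp_2010, co2_2010 = 1650.0, 545.0

print(round(co2_intensity(co2_2005, gdp_2005), 3))           # 0.4
print(round(co2_intensity(co2_2010, gdp_2010), 3))           # 0.33
print(is_decoupling(gdp_2005, gdp_2010, co2_2005, co2_2010))  # True
```

Indicators like these are deliberately simple ratios: the hard part, as the document notes, is data comparability across countries, not the computation itself.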
The policy rationale for the indicators selected in each theme is summarized below:

I. Environmental and resource productivity
Indicators in this theme reflect the need for natural capital to be used efficiently. Sustainable growth will require the decoupling of economic growth from environmental impacts.

II. Natural asset base
Indicators in this theme reflect that traditional markets and accounting frameworks do not always properly reflect the risks associated with declining natural capital stocks. Further, longer-term economic prosperity requires an adequate asset base.

III. Environmental quality of life
Indicators in this theme reflect that environmental considerations can be particularly important to people when they have a direct impact on their lives. Indicators in this theme can be an effective bridge across the economic, social, and environmental pillars of sustainable development.

IV. Economic opportunities and policy responses
Indicators in this theme reflect that policies can help shape and define new opportunities. Innovative markets and technologies can spur productivity and job growth while minimizing the environmental footprint of goods and services.

Work is required to ensure that the proposed indicators can overcome challenges in data, methodology, and comparability across countries. Canada looks forward to continuing to collaborate closely with the UN, the OECD, member countries, and other partners to further develop this important work.

Proposed green growth indicators

I. Environmental and resource efficiency and productivity
1. Production-based CO2 intensity
2. Demand-based CO2 intensity
3. Energy intensity
4. Share of renewable energy (by type) in total primary energy supply and in electricity production
5. Non-GHG emitting sources as a share of total primary energy supply and in electricity
6. Material intensity
7. Waste intensity
8. Nutrient flows and balances
9. Multi-factor productivity reflecting environmental services

II.
Natural asset base
Physical and monetary (where possible) measures of key natural capital stocks:
10. Energy resources
11. Freshwater resources
12. Forest resources
13. Fish resources
14. Mineral resources
15. Land resources (land use and quality)

III. Environmental quality of life
17. Environmentally induced health problems & related costs
18. Population-weighted exposures to air pollution

IV. Economic opportunities and policy responses
19. R&D expenditure of importance to green growth
20. Patents of importance to green growth
21. Environment-related innovation in all sectors
22. Value of environmental goods and services produced in the economy
23. Value-added in environmental goods and services production
24. Imports and exports of environmental goods and services
25. Employment in environmental goods and services production (direct and indirect)
26. Capital and operating expenditures on environmental protection (remediation and mitigation)

Annex III - Institutional Framework for Sustainable Development Proposal

Better Integration of Sustainable Development Considerations in the UN System: a Potential Outcome for Rio+20

This is a proposal for the elaboration of a framework that better enables the mainstreaming of sustainable development considerations (i.e., economic, social and environmental considerations) across the UN system, with a particular emphasis on the UN's programming activities at the country level. This approach would include efforts to support developing countries to integrate sustainable development considerations into their national development plans and priorities as a core element of an enhanced sustainable development partnership. This proposal is focused on improving existing mechanisms and structures and program delivery at the country level rather than the creation of new institutions and agencies.
It is meant to serve as a contribution to the overall effort to improve the institutional framework for sustainable development. This proposal is focused on establishing sustainable development as a mainstream priority within UN programming in developing countries. As such, it seeks to address a gap in the context of the institutional framework for sustainable development, which includes efforts at strengthening sustainable development at the local, national, regional and international levels. To achieve the integration of sustainable development considerations, a two-pronged approach could be adopted.

At the Country Level: The first element is inspired by the Delivering as One initiative as an example of how to achieve greater coherence and coordination among UN activities at the country level. UN programming within the Delivering as One framework is based on the UN Development Assistance Framework (UNDAF); it is the role of the Resident Coordinator to ensure its implementation. Following the principle of national ownership, whereby the UNDAF is built upon the national priorities set out by the host country under the principles elaborated by the Paris Declaration and the Accra Agenda for Action, program countries would need to identify sustainable development within their national plans or poverty reduction strategies. An opportunity for this is presented by the Rio+20 Conference, the objective of which is to secure renewed political commitment for sustainable development. One possible outcome of the Conference could be for member states to commit to identifying sustainable development as a priority within their national development plans, perhaps with the formal relabelling of the UNDAF as the UN Sustainable Development Assistance Framework, or something comparable.
Once identified as a national priority, the Resident Coordinator would coordinate the mainstreaming of sustainable development considerations and their implementation by the various UN agencies active within the country, while ensuring their coherence and consistency in accordance with national priorities. The Resident Coordinator would rely on the input of those agencies with mandates related to the three pillars of sustainable development: UNEP and UNDP, among others, with respect to the environmental pillar; the UN Regional Economic Commissions with respect to the economic pillar; and a combination of agencies including inter alia UNDP, UN Women, UNESCO and UNICEF with respect to the social pillar.

At the Headquarters Level: The second element could involve calling for all UN entities to mainstream sustainable development considerations across all areas of their work. While many of these entities already incorporate sustainable development considerations within their activities, there is a need to ensure both the priority and the coherence of the UN system's engagement and activities related to sustainable development on an ongoing basis. One possible method of achieving this could be through the inclusion of a call for the UN system and for each UN entity governing body to do so within the Rio+20 outcome document. The Rio+20 document could call on the UN Secretary-General to provide high-level guidance to UN entities to mainstream sustainable development considerations as a top priority for operational activities. Consideration could be given to updating the mandate of the United Nations Development Group (UNDG), the UN body supporting the Resident Coordinator and the UN country teams in the delivery of coherent programming and the attainment of internationally agreed development goals, so that its focus on sustainable development is made explicit.
Consideration could also be given to providing a formal, permanent role for UNEP within the UNDG Advisory Group, which is tasked with providing frequent guidance to the UNDG on operational dimensions of the Resident Coordinator system, including ensuring the coherence of country level development operations. Other options for headquarters governance could be considered. This proposal could also be complemented by measures to improve the effectiveness, coherence and coordination of other existing structures with mandates related to sustainable development, such as UNDP and UNEP.
<urn:uuid:4c0bd643-49dd-41b0-a955-ef94b8a329be>
{ "date": "2013-05-24T01:38:48", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9308223724365234, "score": 2.875, "token_count": 6720, "url": "http://www.uncsd2012.org/index.php?page=view&nr=33&type=510&menu=20&template=529&str=Millennium%20Development%20Goals%20(MDGs)" }
Disparities affecting children cloud economic good news story

Despite widespread economic upturn in South-Eastern Europe and the Commonwealth of Independent States (SEE/CIS) since the late '90s, one in four children under 15 is still living in extreme income poverty, according to a UNICEF report. The Innocenti Social Monitor 2006: Understanding Child Poverty in South-Eastern Europe and the Commonwealth of Independent States finds that while the number of children under 15 living in extreme poverty has decreased from 32 million to 18 million, stark disparities in child well-being and opportunities exist: the share of children now living in extreme poverty ranges from 5 per cent in some SEE countries to a startling 80 per cent in the poorest Central Asian countries. Analysis of data from rural and urban settings, and from households of different sizes and structures, reveals disparities within countries that particularly affect children in families with more than two children.

Official data for Bosnia and Herzegovina say that extreme poverty (persons with annual consumption below 772 BAM per capita) is not reported, but that it can be presumed, on statistical grounds, to be less than 5 per cent. Based on the absolute poverty line (persons with annual consumption below 2,223 BAM per capita), the poorest groups in BiH, as reported in the official government report from 2004, are families with three or more children (66% of households are poor), refugees and displaced persons (37%), households with two children (32%), families in which the head of the household is younger than 25 (29%), the unemployed, and families in which the head of the household has only primary education (25%).

Direct income support in the form of state transfers for households with children is widespread in the region, much of it occurring in the form of pensions.
However, income support targeted at children is often of too low a value to have a significant impact on poverty reduction. "Child poverty should be the number one concern of governments in the region," said Maria Calivis, UNICEF Regional Director for Central and Eastern Europe and the Commonwealth of Independent States.

The fact that an increase in family size can push a family into poverty in BiH is alarming. Additionally alarming is the fact that the risk of falling into poverty for families with two or more children has been increasing over the last several years.

In 2005/2006, the Economic Policy and Planning Unit (EPPU) of BiH, together with UNICEF and Save the Children UK, undertook research on the impact of socio-economic policy on child rights. The first step was to assess the impact of a possible increase in the price of electricity on the well-being of children and families. In the research, parents agreed that an increase in the price of electricity would predominantly affect the habits and living standards of children. "In our everyday life, everything affects children. They feel the irresponsibility and immaturity of adults who decide on their behalf." "The increase in the price of electricity affects young people the most, because parents, maybe not knowingly, begin economizing on clothing, schooling, even provisions." "Children suffer because of poverty... Children already do not have normal conditions for development (education, nutrition, hygiene, leisure time and similar)" – these are some statements of parents who participated in the research.

One of the boys who participated in the focus group discussions told a story about a schoolmate who had to leave school after the first year because he couldn't afford to buy books, shoes, transportation and other necessities. When he went to visit him in his village, he found him keeping sheep.
One of the fathers in Mostar, who has a child with special needs, said in the research that in the case of an increase in the price of electricity "he couldn't afford the hearing device for his child, which would affect him significantly. His schooling would be questionable, because he couldn't hear, and all progress so far would be diminished." Interviewed representatives of educational institutions believe that changes in electricity use caused by a price increase would affect the quality and duration of classes. In some elementary schools, representatives even mentioned the effects of economizing on electricity, for example through shortened classes.

UNICEF's Social Monitor 2006 report argues that the future of the region depends on a healthy and educated generation, which will require a better use of resources and more generous support from the international community. To address the challenges posed by the persistence of child poverty in the region, the report calls on governments to work towards:

• More and better public spending on social services (health, education and social infrastructure), and reforms of budget allocation principles to ensure that resources are adequately targeted at the regions and population groups most in need;

• Better targeting and higher levels of social transfers to families with young children in order to provide effective protection from poverty and discourage institutionalization;

• A policy shift away from the widespread practice of placing children in institutions in some countries of the region, as well as a firm statement of intent to devote more policy efforts and resources to providing social support for families in crisis.

UNICEF works with families and communities in the region to mitigate the fallout of poverty.
Policy and legislative reforms to protect all children and all their rights are the cornerstone of UNICEF’s programme with governments to support the implementation of the Convention on the Rights of the Child and achieve the Millennium Development Goals in each country.
<urn:uuid:538a0fb6-a9a7-4e91-961d-4db4dd867a71>
{ "date": "2013-05-24T01:39:22", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9595977663993835, "score": 3.171875, "token_count": 1104, "url": "http://www.unicef.org/bih/media_5356.html" }
There are more than 4.2 million teenagers in Mozambique. For many, poverty, HIV/AIDS and limited education opportunities have made adolescence a particularly challenging period. Yet, an increasing number of them are getting involved in finding solutions to their own problems and in creating new opportunities to voice their concerns through media programmes, youth groups or community theatre. However, access to secondary school is limited and remains the privilege of mostly urban children. Only eight per cent of children of secondary school age attend high school. There are not enough secondary schools in the country and most are located in towns. To cope with overcrowding, schools have introduced morning, afternoon and evening shifts in both secondary and primary schools. It is not uncommon to see students in class at 10 pm. Pressure to leave school, especially for girls, comes from different fronts. Girls often have to drop out to take care of younger siblings or sick family members. Many also drop out when they get married at an early age – around 18 per cent of 20 to 24 year-old women have been married before the age of 15. Adolescence also carries other risks. By the age of 14, a third of Mozambican children have become sexually active but knowledge of HIV prevention methods is low. Twelve per cent of young women and 27 per cent of young men aged 15–24 reported using condoms during their last sexual relation. Girls and young women are three times more likely to be HIV-positive than boys and young men.
<urn:uuid:90dd232f-a980-4c2c-a8a8-135cf81c074d>
{ "date": "2013-05-24T01:54:34", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9751547574996948, "score": 3.3125, "token_count": 303, "url": "http://www.unicef.org/mozambique/children_1592.html" }
At 54.6 million km away at its closest, the fastest travel to Mars from Earth using current technology (and no small bit of math) takes around 214 days — that's about 30 weeks, or 7 months. A robotic explorer like Curiosity may not have any issues with that, but it'd be a tough journey for a human crew. Developing a quicker, more efficient method of propulsion for interplanetary voyages is essential for future human exploration missions… and right now a research team at the University of Alabama in Huntsville is doing just that.

This summer, UAHuntsville researchers, partnered with NASA's Marshall Space Flight Center and Boeing, are laying the groundwork for a propulsion system that uses powerful pulses of nuclear fusion created within hollow 2-inch-wide "pucks" of lithium deuteride. And like hockey pucks, the plan is to "slapshot" them with plasma energy, fusing the lithium and hydrogen atoms inside and releasing enough force to ultimately propel a spacecraft — an effect known as "Z-pinch".

"If this works," said Dr. Jason Cassibry, an associate professor of engineering at UAH, "we could reach Mars in six to eight weeks instead of six to eight months."

The key component to the UAH research is the Decade Module 2 — a massive device used by the Department of Defense for weapons testing in the 90s. Delivered last month to UAH (some assembly required) the DM2 will allow the team to test Z-pinch creation and confinement methods, and then utilize the data to hopefully get to the next step: fusion of lithium-deuterium pellets to create propulsion controlled via an electromagnetic field "nozzle".
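For context on those transit times: the minimum-energy route between two roughly circular orbits is the classic Hohmann transfer, whose one-way coast time depends only on the two orbital radii. A quick back-of-envelope sketch (idealized coplanar, circular orbits; mean orbital radii):

```python
import math

MU_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2
R_EARTH = 1.496e11         # mean Earth orbital radius, m
R_MARS = 2.279e11          # mean Mars orbital radius, m

def hohmann_transfer_days(r1, r2, mu=MU_SUN):
    """One-way coast time: half the period of the elliptical transfer orbit."""
    a = (r1 + r2) / 2.0                      # semi-major axis of transfer ellipse
    t_seconds = math.pi * math.sqrt(a**3 / mu)
    return t_seconds / 86400.0

print(round(hohmann_transfer_days(R_EARTH, R_MARS)))  # ~259 days
```

The idealized minimum-energy path works out to roughly 259 days; the 214-day figure quoted above reflects a somewhat more energetic trajectory, and the whole point of a pulsed-fusion engine is to buy enough extra delta-v to cut that down to weeks.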
Although a rocket powered by Z-pinch fusion wouldn’t be used to actually leave Earth’s surface — it would run out of fuel within minutes — once in space it could be fired up to efficiently spiral out of orbit, coast at high speed and then slow down at the desired location, just like conventional rockets except… better. “It’s equivalent to 20 percent of the world’s power output in a tiny bolt of lightning no bigger than your finger. It’s a tremendous amount of energy in a tiny period of time, just a hundred billionths of a second.” – Dr. Jason Cassibry on the Z-pinch effect In fact, according to a UAHuntsville news release, a pulsed fusion engine is pretty much the same thing as a regular rocket engine: a “flying tea kettle.” Cold material goes in, gets energized and hot gas pushes out. The difference is how much and what kind of cold material is used, and how forceful the push out is. Everything else is just rocket science. Read more on the University of Huntsville news site here and on al.com. Also, Paul Gilster at Centauri Dreams has a nice write-up about the research as well as a little history of Z-pinch fusion technology… check it out. Top image: Mars imaged with Hubble’s Wide-Field Planetary Camera 2 in March 1995.
<urn:uuid:c8b6f20c-f68c-449a-a881-34c4b2fdc078>
{ "date": "2013-05-24T01:58:30", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.90874844789505, "score": 3.953125, "token_count": 694, "url": "http://www.universetoday.com/95991/new-flying-tea-kettle-could-get-us-to-mars-in-weeks-not-months/" }
The Aftenposten newspaper in Norway reports that early Wednesday morning, the country was struck by a meteor that had the explosive force of the atomic bomb that was dropped on Hiroshima in 1945. The meteor hit the side of a mountain. There are no reports of damage or casualties, but if the impact was as great as it appears, it is one of the most significant such events in decades. Aftenposten quoted Norwegian astronomer Knut Jørgen Røed Ødegaard as saying, "There were ground tremors, a house shook and a curtain was blown into the house. This is simply exceptional. I cannot imagine that we have had such a powerful meteorite impact in Norway in modern times. If the meteorite was as large as it seems to have been, we can compare it to the Hiroshima bomb. Of course the meteorite is not radioactive, but in explosive force we may be able to compare it to the (atomic) bomb." Ødegaard seemed surprised by the fact that no astronomer anywhere in the world was aware that this meteor was on its way.

Ødegaard said the meteorite was visible for over 100 miles. Despite the fact that in summer the midnight sky in Norway stays lit up by the sun, the meteor flash was witnessed by almost everyone in the country. Residents of the northern part of Norway, especially, reported seeing a "ball of fire" that took several seconds to streak across the sky.

Peter Bruvold, who caught the meteor on camera, says, "I saw a brilliant flash of light in the sky, and this became a light with a tail of smoke." He then heard an enormous crash. He says, "I heard the bang seven minutes later. It sounded like when you set off a solid charge of dynamite a kilometer (over half a mile) away."
<urn:uuid:32957bf0-bb6b-4ca9-822d-4a65b68ce8b8>
{ "date": "2013-05-24T01:45:46", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9667986035346985, "score": 3, "token_count": 492, "url": "http://www.unknowncountry.com/news/norway-meteor-impact" }
CHAPTER 9 - SUBMERGED ORIFICES 12. Orifice Check Structures Occasionally, at a given site, the canal water surface level should be checked up to a specified elevation while simultaneously measuring the rate of flow. The combined checking and measuring functions can be provided by orifice check structures which are built into the canals as in-line structures (figure 9-7). One or more orifice openings of the necessary size are constructed in the lower portion of a wall that extends across the canal at the upstream end of the check-type structure. These orifices are used to measure the discharge. A second wall with one or more gated openings is constructed at the downstream end of the structure. This downstream control is used to check the canal water surface to the desired elevation. Two stilling wells are located outside of the structure. One is connected to a piezometer in the canal upstream from the orifice wall, and the other is connected to a piezometer in the basin between the upstream and downstream walls. In small orifice check structures, staff gages are used in place of piezometer and stilling wells. In either case, the differential head acting across the orifice can be determined, and with knowledge of the orifice dimensions and characteristics, the rate of flow can be determined. The coefficients of discharge that should be used to compute the rate of flow are difficult to determine analytically because of different degrees of suppression at the bottom and sides and between the orifice openings. Computed discharge tables are ordinarily provided for each structure, but usually a statement is included that a field rating is necessary to ensure accurate results. In general, the recommended practice is that field ratings be made by current meter data and that discharge curves be prepared. 
For maximum potential accuracy, care must be exercised to avoid excessively small gate openings or small differential head readings, either of which causes large relative errors in the measured head or gate opening and hence in the computed discharge.
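The discharge tables for such structures rest on the standard submerged-orifice relation, Q = Cd · A · sqrt(2 · g · Δh), where Cd is the discharge coefficient, A the orifice area, and Δh the differential head measured between the two stilling wells. A minimal sketch (the coefficient value in the example is illustrative only; as noted above, a field rating is needed for accurate results):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def submerged_orifice_discharge(cd, area_m2, delta_h_m):
    """Discharge Q (m^3/s) through a submerged orifice.

    cd        -- discharge coefficient (dimensionless, from a field rating)
    area_m2   -- orifice area in m^2
    delta_h_m -- differential head across the orifice in m
    """
    if delta_h_m < 0:
        raise ValueError("differential head must be non-negative")
    return cd * area_m2 * math.sqrt(2 * G * delta_h_m)

# Example: a 0.5 m^2 orifice under 0.2 m of differential head,
# with an assumed coefficient of 0.61 (typical order of magnitude only)
q = submerged_orifice_discharge(0.61, 0.5, 0.2)
print(round(q, 3))  # about 0.604 m^3/s
```

Because suppression at the bottom, at the sides, and between multiple openings alters Cd in ways that are difficult to determine analytically, the coefficient should come from current-meter field data rather than a handbook value.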
<urn:uuid:073d1dc6-cc8f-43cf-8486-29c9ef0301cb>
{ "date": "2013-05-24T01:39:04", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9307858943939209, "score": 3.515625, "token_count": 397, "url": "http://www.usbr.gov/pmts/hydraulics_lab/pubs/wmm/chap09_12.html" }
The Two Witnesses. 1* a Then I was given a measuring rod like a staff and I was told, “Come and measure the temple of God and the altar, and count those who are worshiping in it. 2But exclude the outer court* of the temple; do not measure it, for it has been handed over to the Gentiles, who will trample the holy city for forty-two months. 3I will commission my two witnesses* to prophesy for those twelve hundred and sixty days, wearing sackcloth.” 4b These are the two olive trees and the two lampstands* that stand before the Lord of the earth. 5* If anyone wants to harm them, fire comes out of their mouths and devours their enemies. In this way, anyone wanting to harm them is sure to be slain. 6They have the power to close up the sky so that no rain can fall during the time of their prophesying. They also have power to turn water into blood and to afflict the earth with any plague as often as they wish.c 7When they have finished their testimony, the beast that comes up from the abyss* will wage war against them and conquer them and kill them.d 8Their corpses will lie in the main street of the great city,* which has the symbolic names “Sodom” and “Egypt,” where indeed their Lord was crucified. 9* Those from every people, tribe, tongue, and nation will gaze on their corpses for three and a half days, and they will not allow their corpses to be buried. 10The inhabitants of the earth will gloat over them and be glad and exchange gifts because these two prophets tormented the inhabitants of the earth. 11But after the three and a half days, a breath of life from God entered them. When they stood on their feet, great fear fell on those who saw them.e 12Then they heard a loud voice from heaven say to them, “Come up here.” So they went up to heaven in a cloud as their enemies looked on.f 13At that moment there was a great earthquake, and a tenth of the city fell in ruins.
Seven thousand people* were killed during the earthquake; the rest were terrified and gave glory to the God of heaven. The Seventh Trumpet.* 15Then the seventh angel blew his trumpet. There were loud voices in heaven, saying, “The kingdom of the world now belongs to our Lord and to his Anointed, and he will reign forever and ever.” 16The twenty-four elders who sat on their thrones before God prostrated themselves and worshiped God 17and said: “We give thanks to you, Lord God almighty, who are and who were. For you have assumed your great power and have established your reign. 18The nations raged, but your wrath has come, and the time for the dead to be judged, and to recompense your servants, the prophets, and the holy ones and those who fear your name, the small and the great alike, and to destroy those who destroy the earth.”g 19Then God’s temple in heaven was opened, and the ark of his covenant could be seen in the temple. There were flashes of lightning, rumblings, and peals of thunder, an earthquake, and a violent hailstorm. * [11:1] The temple and altar symbolize the new Israel; see note on Rev 7:4–9. The worshipers represent Christians. The measuring of the temple (cf. Ez 40:3–42:20; 47:1–12; Zec 2:5–6) suggests that God will preserve the faithful remnant (cf. Is 4:2–3) who remain true to Christ (Rev 14:1–5). * [11:2] The outer court: the Court of the Gentiles. Trample…forty-two months: the duration of the vicious persecution of the Jews by Antiochus IV Epiphanes (Dn 7:25; 12:7); this persecution of three and a half years (half of seven, counted as 1260 days in Rev 11:3; 12:6) became the prototype of periods of trial for God’s people; cf. Lk 4:25; Jas 5:17. The reference here is to the persecution by the Romans; cf. Introduction. * [11:3] The two witnesses, wearing sackcloth symbolizing lamentation and repentance, cannot readily be identified. Do they represent Moses and Elijah, or the Law and the Prophets, or Peter and Paul? 
Most probably they refer to the universal church, especially the Christian martyrs, fulfilling the office of witness (two because of Dt 19:15; cf. Mk 6:7; Jn 8:17). * [11:5–6] These details are derived from stories of Moses, who turned water into blood (Ex 7:17–20), and of Elijah, who called down fire from heaven (1 Kgs 18:36–40; 2 Kgs 1:10) and closed up the sky for three years (1 Kgs 17:1; cf. 18:1). * [11:8] The great city: this expression is used constantly in Revelation for Babylon, i.e., Rome; cf. Rev 14:8; 16:19; 17:18; 18:2, 10, 21. “Sodom” and “Egypt”: symbols of immorality (cf. Is 1:10) and oppression of God’s people (cf. Ex 1:11–14). Where indeed their Lord was crucified: not the geographical but the symbolic Jerusalem that rejects God and his witnesses, i.e., Rome, called Babylon in Rev 16–18; see note on Rev 17:9 and Introduction. * [11:9–12] Over the martyrdom (Rev 11:7) of the two witnesses, now called prophets, the ungodly rejoice for three and a half days, a symbolic period of time; see note on Rev 11:2. Afterwards they go in triumph to heaven, as did Elijah (2 Kgs 2:11). * [11:13] Seven thousand people: a symbolic sum to represent all social classes (seven) and large numbers (thousands); cf. Introduction.
<urn:uuid:063e86cc-07dd-407b-93dd-dd9f2e5f689e>
{ "date": "2013-05-24T01:31:43", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9610063433647156, "score": 2.53125, "token_count": 1441, "url": "http://www.usccb.org/bible/revelation/11" }
The story of Jonah has great theological import. It concerns a disobedient prophet who rejected his divine commission, was cast overboard in a storm and swallowed by a great fish, rescued in a marvelous manner, and returned to his starting point. Now he obeys and goes to Nineveh, the capital of Israel’s ancient enemy. The Ninevites listen to his message of doom and repent immediately. All, from king to lowliest subject, humble themselves in sackcloth and ashes. Seeing their repentance, God does not carry out the punishment planned for them. At this, Jonah complains, angry because the Lord spares them. This fascinating story caricatures a narrow mentality which would see God’s interest extending only to Israel, whereas God is presented as concerned with and merciful to even the inhabitants of Nineveh (4:11), the capital of the Assyrian empire which brought the Northern Kingdom of Israel to an end and devastated Jerusalem in 701. The Lord is free to “repent” and change his mind. Jonah seems to realize this possibility and wants no part in it (4:2; cf. Ex 34:6). But the story also conveys something of the ineluctable character of the prophetic calling. The book is replete with irony, wherein much of its humor lies. The name “Jonah” means “dove” in Hebrew, but Jonah’s character is anything but dove-like. Jonah is commanded to go east to Nineveh but flees toward the westernmost possible point (1:2–3), only to be swallowed by a great fish and dumped back at this starting point (2:1, 11). The sailors pray to their gods, but Jonah is asleep in the hold (1:5–6). The prophet’s preaching is a minimum message of destruction, while it is the king of Nineveh who calls for repentance and conversion (3:4–10); the instant conversion of the Ninevites is greeted by Jonah with anger and sulking (4:1).
He reproaches the Lord in words that echo Israel’s traditional praise of his mercy (4:2; cf. Ex 34:6–7). Jonah is concerned about the loss of the gourd but not about the possible destruction of 120,000 Ninevites (4:10–11). Unlike other prophetic books, this is not a collection of oracles but the story of a disobedient, narrow-minded prophet who is angry at the outcome of the sole message he delivers (3:4). It is difficult to date but almost certainly is postexilic and may reflect the somewhat narrow, nationalistic reforms of Ezra and Nehemiah. As to genre, it has been classified in various ways, such as parable or satire. The “sign” of Jonah is interpreted in two ways in the New Testament: His experience of three days and nights in the fish is a “type” of the experience of the Son of Man (Mt 12:39–40), and the Ninevites’ reaction to the preaching of Jonah is contrasted with the failure of Jesus’ generation to obey the preaching of one who is “greater than Jonah” (Mt 12:41–42; Lk 11:29–32). The Book of Jonah may be divided as follows:
<urn:uuid:713fe000-1e2a-4a59-93b7-15635266fec5>
{ "date": "2013-05-24T01:38:29", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9497002363204956, "score": 3.078125, "token_count": 793, "url": "http://www.usccb.org/bible/scripture.cfm?bk=Jonah&ch=" }
The USB to Serial Adapter - Generally

A USB to serial adapter, also referred to as a USB serial converter or USB RS232 adapter, is a small electronic device which converts a USB signal to serial RS232 data signals. RS232 is the interface found in many older PCs, where it is referred to as a serial COM port. A USB to serial adapter typically converts between USB and either RS232, RS485, RS422 or TCP signals; however, some designs offer other special conversion features. Even though RS232 is an older communication protocol, it is still used by many modern devices in both business and consumer markets, including many personal and office devices. Since most new computers no longer have a COM port, a USB serial adapter is often used to connect equipment such as printers, scanners, scales and GPS devices, as well as most other serial business and consumer equipment. An adapter is typically either a RS232, RS485 or RS422 type, although many new devices are designed with a combined RS232/RS485 (and sometimes RS422) interface, which is convenient because the same adapter can then be used with both RS232 and RS485 products. These serial protocols have been around for many years and are very reliable. RS232 adapters are usually used in industrial, office and business environments, and also in consumer products for connecting serial devices to personal computers, while RS485 adapters are most often used in industrial environments, labs and similar facilities, where the characteristics of the RS485 protocol are most useful.
The USB to serial adapter is made in a variety of models and types. The standard non-isolated adapter is most commonly used in business, office and laboratory environments, while the isolated adapter is often used in industrial environments due to its resistance to voltage spikes and ground loops.
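On the software side, an adapter simply shows up as an ordinary COM port (for example COM3 on Windows or /dev/ttyUSB0 on Linux), and the application configures the classic serial parameters: baud rate, data bits, parity and stop bits. A minimal sketch parsing the conventional "9600,8,N,1" settings string (the helper name is ours, not from any particular driver; the dict keys are chosen to mirror pySerial's parameter names):

```python
def parse_serial_settings(settings: str) -> dict:
    """Parse a classic 'baud,databits,parity,stopbits' string, e.g. '9600,8,N,1'.

    Parity is one of N (none), E (even), O (odd).
    """
    baud, databits, parity, stopbits = settings.split(",")
    parity = parity.upper()
    if parity not in ("N", "E", "O"):
        raise ValueError(f"unknown parity: {parity}")
    return {
        "baudrate": int(baud),
        "bytesize": int(databits),
        "parity": parity,
        "stopbits": float(stopbits),
    }

cfg = parse_serial_settings("9600,8,N,1")
print(cfg)  # {'baudrate': 9600, 'bytesize': 8, 'parity': 'N', 'stopbits': 1.0}
```

With a library such as pySerial, a dict like this can be passed straight to serial.Serial as keyword arguments once the port name is known; both ends of the link must of course agree on the same settings.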
<urn:uuid:d2151495-0bf0-4a76-8dac-a654714385c5>
{ "date": "2013-05-24T01:51:58", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9600739479064941, "score": 3.265625, "token_count": 449, "url": "http://www.usconverters.com/index.php?main_page=index&cPath=67" }
When the invader appears, honest citizens must choose sides. Forced at length to defend their own homes and firesides, Massachusetts and Connecticut now felt the recoil of unpatriotic behavior. Instead of trusting their governors with the local defense, as the administration had done with States which upheld the war, the President now insisted upon retaining the exclusive control of military movements. Because Massachusetts and Connecticut had refused to subject their militia to the orders of the War Department, Monroe declined to pay their expenses. The cry was raised by peace men in consequence that the National Government had abandoned New England to the common enemy. Upon this false assumption (for false, candor must pronounce it, inasmuch as the government was maturing all the while a consistent plan of local defense), the Massachusetts leaders made hasty proclamation that no choice was left between submitting to the enemy, which could not be thought of, and appropriating to the defense of the States the revenues derived from her people, which had hitherto been spent elsewhere. The Massachusetts Legislature appropriated $1,000,000 to support a State army of 10,000 men. And Otis, who inspired these measures, brought Massachusetts to the point of instituting a delegate convention of Eastern States, this convention to meet at Hartford. A Hartford convention was no new project to Otis's own mind. The day for assembling was fixt at December 15th. Twelve delegates were appointed by the Massachusetts Legislature, men of worth and respectability, chief of whom were Cabot and Otis. In Connecticut, whose Legislature was not slow to denounce Monroe's conscription plan as barbarous and unconstitutional, a congenial delegation of seven was made up, Goodrich and Hillhouse, hoary men of national renown, at the head. Rhode Island's Legislature added four more to the list.
So deep-rooted, however, was the national distrust of this movement that Vermont and New Hampshire shrank from giving the convention a public sanction. New Hampshire had a Republican council; while in Vermont the Plattsburg victory stirred the Union spirit; Chittenden himself having changed in official tone after the war became a defensive one. Violent county conventions representing fractions of towns chose, however, three delegates, two in New Hampshire and one in Vermont, whose credentials being accepted by the convention, the whole number of delegates assembled at Hartford was twenty-six. This Hartford Convention remains famous in American history only as a powerful menstruum in national politics. What its most earnest projectors had hoped for was left but half done; but that half work condemned to political infamy twenty-six gentlemen highly respectable. Lawyers, they were, of State eminence, for the most part, and all of high social character, but inclined, like men of ability more used to courts than conventions, to treat constituencies like clients, and spend great pains over phraseology. Perhaps, indeed, these had been selected purposely to play the lion's part, that moderate fellow-citizens, Unionists at heart, whose conversion was essential, might not quake at the roar of the convention. Quincy was not there, nor the stout-hearted Pickering, of whose readiness to become a rebel unless the Constitution could be altered, flagrante bello, to suit his views, there can be little doubt. Delegates like the present were prudent rather than earnest, better talkers than actors; men by no means calculated for bold measures. What bold measures were possible? one may ask. Pickering's Confederacy of 1804[2] would have embraced New York, perhaps Pennsylvania.
But these Eastern Federalists, with that clannishness at which Hamilton himself had marveled, were now circumscribed within the limits of New England, and of that section, moreover, but three States out of five had delegations at Hartford worthy of the name. The first effort to assemble a New England convention was, we have seen in 1808-9. The second, if John Quincy Adams may be believed, was in 1812, immediately after the declaration of war against Great Britain, and that project Dexter defeated by a speech in Faneuil Hall. The third, and present, tho partially successful, by bringing delegates into conference, was, like the Stamp Act Congress, or the Annapolis Conference of 1786, an instrument necessarily for later and riper designs. The American Confederacy, the American Union, are each the product of begetting conventions; nor without prudence were States now forbidden to enter into agreements or compacts with one another without the consent of Congress. The Hartford Convention may well have justified dire forebodings, for it did not dissolve finally, as a mass-meeting might have done, upon a full report, but contingently adjourned to Boston. Organized on the appointed day in Hartford, then a town of four thousand inhabitants, by the choice of George Cabot as president, and Theodore Dwight as secretary, the present convention remained in close session for three continuous weeks. Of irregular political assemblies the worst may be suspected when proceedings are conducted in secrecy; and never, certainly, were doors shut more closely upon a delegate, and professedly a popular convention, than upon this one; not even doorkeeper or messenger gaining access to the discussion. Inviolable secrecy was enjoined upon every member, including the secretary, at the first meeting, and once more before they dispersed, notwithstanding the acceptance of their final report. The injunction was never removed. 
Not before a single State legislature whose sanction of this report was desired, not to any body of those constituents whose votes were indispensable to the ultimate ends, if these ends were legally pursued, was that report elucidated. Four years afterward, when the Hartford Convention and its projectors bent under the full blast of popular displeasure, Cabot delivered to his native State the sealed journal of its proceedings, which had remained in his exclusive custody; but that when opened was found to be a meager sketch of formal proceedings, and no more; making no record of yeas and nays, stating none of the amendments offered to the various reports, attaching the name of no author to a single proposition, in fine, carefully suppressing all means of ascertaining the expression or belief of individual delegates. Casual letters of contemporaries are preserved sufficient to show that representative Federalists labored with these delegates to procure a separation of the States, but how many more of the same strain President Cabot may have torn up one can only conjecture. That twenty-six public men should have consented to leave no ampler means of vindicating to their own age, and to posterity, themselves and their motives, may evince a noble disinterestedness, sublime confidence in the rectitude of their own intentions, a comforting reliance upon "the Searcher of Hearts," but certainly an astonishing ignorance of human nature in this our inquisitive republic. Assembling amid rumors of treason and the execration of all the country west of the Hudson, its members watched by an army officer who had been conveniently stationed in the vicinity, the Hartford Convention, hardening into stone, preserves for all ages a sphinxlike mystery. The labors of this convention, whatever they were, ended with a report and resolutions, signed by the delegates present, and adopted on the day before final adjournment. 
Report and resolutions disappointed, doubtless, both citizens who had wished a new declaration of independence, and citizens who had feared it. Neither Virginia nor Kentucky could, with propriety, condemn the heresies of State sovereignty which supplied the false logic of this report, and an imperfect experience of this Federal Union may excuse in Otis and his associates theoretical errors which Jefferson and Madison while in the opposition had first inculcated. Constitutional amendments were here proposed which, not utterly objectionable under other circumstances, must have been deemed at this time an insult to those officially responsible for the national safety, and only admissible as a humiliation of the majority. It requires little imagination to read, in report and resolutions, a menace to the Union in its hour of tribulation, a demand for the purse and sword, to which only a craven Congress could have yielded, and a threat of local armies which, with the avowed purpose of mutual aid, might in some not remote contingency be turned against foes American not less than British.

1 From Schouler's "History of the United States." By permission of the author and of his publishers, Dodd, Mead & Company. Copyright, 1880-1891.

2 Timothy Pickering, Secretary of State in Adams's Cabinet, and afterward Senator from Massachusetts, is here referred to. He came into serious disagreement with Adams and was summarily removed. Out of this rupture and the bad feeling that ensued came what is known as Pickering's Confederacy.
<urn:uuid:c6b0a6a3-2a3d-44aa-8769-be78a5e07788>
{ "date": "2013-05-24T01:51:42", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9725725054740906, "score": 3.375, "token_count": 1779, "url": "http://www.usgennet.org/usa/topic/preservation/epochs/vol5/pg96.htm" }
Free US Law Dictionary (BETA)

A creditor is a party (e.g. a person, organization, company, or government) that has a claim to the services of a second party. The first party, in general, has provided some property or service to the second party under the assumption (usually enforced by contract) that the second party will return an equivalent property or service. The second party is frequently called a debtor or borrower. The term creditor is frequently used in the financial world, especially in reference to short-term loans, long-term bonds, and mortgages. In law, a person who has a money judgment entered in their favor by a court is called a judgment creditor. The term creditor derives from the notion of credit. In modern America, credit refers to a rating which indicates the likelihood a borrower will pay back his or her loan. In earlier times, credit also referred to reputation or trustworthiness.
<urn:uuid:ea397d0e-2098-4808-a5c9-9a4d9d238042>
{ "date": "2013-05-24T01:52:44", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9687473177909851, "score": 3.515625, "token_count": 184, "url": "http://www.uslaw.com/us_law_dictionary/c/Creditors" }
Previously just the worry of climate scientists, environmentalists, doomsday prognosticators, and gas-price watchers, climate change is starting to worry some others— public health specialists, who say that global warming could affect large swaths of the population. In a paper published in the journal PLoS Medicine Tuesday, a group of European public health experts write that climate change could alter "patterns of physical activity and food availability, and in some cases [bring] direct physical harm." Slight temperature increases could also change disease distribution in colder regions and make hotter regions less hospitable to humans. "Certain subgroups are at more risk—mainly the young, the old, and the poor," says Peter Byass, director of the Umea Centre for Global Health Research in Sweden. "The middle age and wealthy will be better off. It's a crude way of looking at it, but it's not so far off the mark." That means more prevalence of diseases that affect the poor, such as malaria and dengue fever, and heat stroke in drought-afflicted areas. For years, scientists have warned about more extreme hurricanes and weather patterns, but until recently, not much emphasis was put on less noticeable changes. "I don't think there's a big gang of global health experts saying [climate change] is unimportant," he says. "But I don't think people have been making the connections that need to be made between public health and climate change." Byass' paper isn't the first time health officials have pondered the human toll of climate change. In March, a group of doctors suggested that the incidence of asthma and other lung respiratory illnesses could increase, due to longer pollen seasons and increasing ranges of disease-causing molds and mosquitoes. "At this point, we might not be able to stop climate change, but we can be a bit prepared as to what the consequences might be," he says. It's something people in his field are increasingly worried about. 
At last year's "Durban Climate Meeting," a United Nations convention to discuss climate change, people focused on health issues had their say. The unpredictability of climate change—there are many models of what might happen over the next century—makes Byass' and his colleagues' jobs much harder, he says. "I think it's pretty clear that things won't stay the same, so we can talk about the what-ifs of different climate change [theories], but it's hard to say for sure what will happen," he says. The United Nations has been placing more of an emphasis on climate change, with many of its member countries asking the world's largest carbon producers—China, India, and the United States—to enter legally-binding agreements to reduce emissions. This year, government officials will meet in Doha to continue negotiating. Late last year, officials from around the world met in Durban, South Africa at what is now known as the "Durban Climate Meeting," in which officials from India, the United States, and China agreed to continue negotiating legally-binding carbon emission rules. "It's about behaving in a way that's responsible for the planet. One would hope the United Nations could help get everyone together," Byass says. Countries must be willing to take an economic hit in becoming more energy efficient. "Protecting the future of the planet has a price tag, there's no doubt about that."
<urn:uuid:1beaa212-0ebb-4946-8b56-78084b20bfbd>
{ "date": "2013-05-24T01:31:51", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9739727973937988, "score": 3.34375, "token_count": 703, "url": "http://www.usnews.com/news/articles/2012/06/05/expert-climate-change-will-increasingly-become-global-health-issue" }
Researching the History of Agriculture in Vermont You can get a good background on the general agricultural history of Vermont by reading A Short History of Vermont Barns [link] and looking at the Barn Census Powerpoint presentation (Click Here to open the presentation in a new window. If the presentation does not display correctly, Click Here to download the presentation as a PPT file. The file is large; 54 megabytes.) Barn design was influenced by local tradition, availability of materials, and the specific demands of different types of farming in different periods of history. With a little background you can decipher the clues that the building gives you. Depending on your time and interest, you can also consult the Long History of Vermont Agriculture [link] and the other publications listed in the Resources section. Doing a little reading before you look at your first barn will make it easier to understand what you are looking at, how it functioned, and when it was built. The Visual Glossary [link] and Vermont Agricultural Property Types [link] give more detailed information on the whole array of historic agricultural buildings in Vermont, from corn cribs to silos, to mink sheds and potato barns. Researching the History of Your Community Although the majority of your time will be spent out and about, exploring the different parts of your community while looking for barns and other agricultural structures, it is worthwhile to first spend some time researching your community’s history to become familiar with the influences that shaped its agricultural past. This isn’t as difficult as it sounds. Historical research is a great way to really get a feel for the way your community developed and may provide clues as to what barns (either general types or specific instances) may hold special significance in your community. Historical research can answer questions such as: What crops and livestock were common in the area? 
At one time were there especially large or significant farming operations nearby? If so, what were their names and locations? What ethnic groups settled and farmed the area? What influences did they have on the way different agricultural structures were designed? How did agriculture cause the community to grow and change? How did technologies (like railroads and electricity) or events (like wars or the Great Depression) change agriculture in your community? To conduct research, a good place to start is your town library or community historical society. You may want to look for county, town, or other local histories that have been published. Also interesting are historic photograph files that may have images of older farms and newspaper clipping folders that may contain historical agricultural news of the community. Two widely available 19th century map sources, the Wallings maps and Beers atlases, include symbols that indicate properties with buildings, including owner names. The Division for Historic Preservation has been conducting inventories of historic buildings since the early 1970s. The information for two counties – Addison and Rutland – has been published in book form. Offprints of single town sections are available for free upon request, while supplies last. Please call 828-1220 to request a copy of your town section (only available for Addison and Rutland county towns; for more information see the 'Related Information' section of the website). The Division's office in Montpelier has a Resource Room with files on over 40,000 historic buildings in Vermont, and the public is welcome to visit and use the records. Copies of those records are available on CD, and the Division is working on distributing them to libraries, starting with the larger libraries in each region. Town Clerks and local libraries often have a binder with paper copies of the records for their town.
Other sources of information can include town clerks, local historians, college and university libraries and history departments, and the Vermont Historical Society (www.vermonthistory.org). The Vermont Landscape Images Project [link] contains an on-line archive of historic photos, organized by town. Part of the fun of historical research is playing detective – finding out who might have the type of information you're looking for and talking with them to see how their insight applies to the questions you have. Don't be afraid to be creative in where you look for information. When researching your community's agricultural history, you can't go wrong if you always keep in mind the six basic questions: who, what, when, where, why, and how. Farmers who own or used to own the barn you are surveying will undoubtedly be the best source of specific information about the age and evolution of the structure. They may know dates of construction, how the barn was used and how it changed over time. They will often know the family history of the farm, including stories that bring the history of the barn alive.

In the Field – What You'll Need

Now that you have a better understanding of the agricultural history of your community, it's time to get out there and find some barns! A little organization at this step can make things run very smoothly, so here are some suggestions about supplies you may need and ways to conduct your piece of the census. First, get some good road maps of the area you'll be surveying. You can contact your town office or the Agency of Transportation to see what free maps are available. You can also download maps from the Vermont Center for Geographic Information website [link]. VCGI offers a variety of maps, including maps of town highways [link]. Plan a route that covers all the roads in the area you've chosen. Barn Census volunteers can work individually or in groups.
If you are working in a group that is going to split up to divide the work, each smaller group should be assigned their own sector on the map so that it's less likely that work will be duplicated. Next, you should gather the supplies that you'll want to bring with you. We recommend:
- Blank survey worksheets [link] (you'll need one sheet for every barn you survey, so make sure to print and bring extras!)
- A hard writing surface, like a clipboard or notebook
- A digital camera
- Letter of introduction [link] (Please open the file, fill in your name and print it out. You may want multiple copies)
- A dashboard sign [link] identifying your vehicle as participating in the Vermont Barn Census
- A copy of this manual

You may also want to bring:
- Cell phone
<urn:uuid:d990575f-00fb-4bee-81f7-fbae209923ac>
{ "date": "2013-05-24T01:59:49", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9427576065063477, "score": 3.125, "token_count": 1331, "url": "http://www.uvm.edu/~barn/get_ready.html" }
NASA's Curiosity landing blazes trail for humans on Mars

The most technologically advanced space robot ever built landed on Mars last night, beginning a mission eight years in the making to search for signs of past life on the Red Planet.

An image taken by rover Curiosity on August 6, 2012 of Mount Sharp on Mars. Image credit: NASA/JPL-Caltech.

US President Barack Obama praised NASA's successful landing of the car-sized rover Curiosity, which travelled 570 million km over eight months to reach our neighbour planet. "Tonight, on the planet Mars, the United States of America made history," Obama said.

Equipped with ten instruments, including a laser that can zap rocks from a distance and a mobile organic chemistry lab, Curiosity gives scientists the opportunity to learn more than they ever have about Mars. It also furthers the possibility of one day sending humans to Mars to investigate first-hand. NASA Administrator Charles Bolden noted that Obama wants to be able to send humans to Mars by the 2030s.

The Curiosity Mars Descent Imager (MARDI) captured the rover's descent to the surface of the Red Planet. The instrument shot 4 fps video from heatshield separation to the ground. Source: YouTube.

"Today, the wheels of Curiosity have begun to blaze the trail for human footprints on Mars," Bolden said at a press conference immediately following the release of the first few images from a new, and until now, unexplored part of Mars.

Secrets of a Martian mountain

Curiosity landed next to a strip of dunes in Gale Crater, a desirable destination given signs that water, a key requirement for life as we know it, once carved channels along the crater's wall. At the centre of the 154-km-wide crater rests Mount Sharp, a 5.8-kilometre mountain that rivals Mount Kilimanjaro in height.
Scientists believe the mountain's layers of sediment could hold clues to the planet's ancient history, including whether it held microbial life. Able to roll over obstacles 2 feet high and travel up to about 200 metres per day, the nuclear-powered mobile laboratory will eventually be digging, drilling and investigating the Martian landscape for at least the next two years in search of the building blocks of life: carbon, nitrogen, phosphorus, sulphur and oxygen. But before the exploration begins, scientists at NASA intend to perform a few weeks of health checks on the machine that just survived the most epic landing in the history of robotic space travel.

This view of Gale Crater is made up of a combination of data from three Mars orbiters. The circle in the top left corner indicates the area where scientists aimed Curiosity's landing. Image Credit: NASA/JPL-Caltech/ESA/DLR/FU Berlin/MSSS.

Surviving seven minutes of terror

Scientists at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California twitched and hunched forward nervously as they waited for confirmation that Curiosity had survived the technological challenge of landing on the surface of Mars. Scientists had dubbed the descent into the Martian atmosphere the "seven minutes of terror" due to the intricate and tightly choreographed maneuvers required for a safe landing, including slowing down from 20,921 km per hour to zero in just a few minutes. A heat shield protected the one-tonne Curiosity from the 1600-degree-Celsius blaze that engulfed it on impact with the Martian atmosphere. Because the Martian atmosphere is so thin, a supersonic parachute, weighing only 100 pounds but able to withstand 65,000 pounds of pressure, then needed to slow its descent, but even it could only do so much. To enable a safe landing, NASA equipped the rover with a "rocket-propelled backpack" to lower it down to the surface on cables.
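The entry figures quoted above invite a quick back-of-envelope check. The short Python sketch below assumes a uniform deceleration over a full seven minutes, which the real, staged descent (heat shield, parachute, rocket backpack) certainly was not; it is only an order-of-magnitude illustration of the numbers in the article.

```python
# Rough average deceleration implied by the article's figures:
# entry at 20,921 km/h, brought to zero over the "seven minutes of terror".
# Assumes a uniform slow-down, so this is only an order-of-magnitude check.
entry_speed_ms = 20_921 * 1000 / 3600   # km/h -> m/s, about 5,811 m/s
duration_s = 7 * 60                     # the "seven minutes of terror"
decel = entry_speed_ms / duration_s     # average deceleration, m/s^2
print(f"average deceleration: {decel:.1f} m/s^2 (~{decel / 9.81:.2f} g)")
```

Even this smoothed-out average comes to well over 1 g, one hint of why the whole sequence had to run on the spacecraft's own pre-loaded commands.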
Photo of parachute landing

Today NASA released a photo of the parachute landing, snapped by a spacecraft that has been orbiting Mars for six years. In a testament to advance planning, the commands to take the photo had to be uploaded 72 hours prior. "Guess you could consider us the closest thing to paparazzi on Mars," said Sarah Milkovich, High Resolution Imaging Science Experiment (HiRISE) investigation scientist at NASA's JPL. "We definitely caught NASA's newest celebrity in the act." Photo credit: NASA/JPL-Caltech/University of Arizona.

Astronomer, lecturer, and author Phil Plait, who blogs for the popular Discover Magazine blog Bad Astronomy, wrote about the "sheer amazingness" of capturing a photo of the 16-metre-wide parachute. "Here we have a picture taken by a camera on board a space probe that's been orbiting Mars for six years, reset and re-aimed by programmers hundreds of millions of kilometers away using math and science pioneered centuries ago, so that it could catch the fleeting view of another machine we humans flung across space, traveling hundreds of millions of kilometers to another world at mind-bending speeds, only to gently – and perfectly – touch down on the surface mere minutes later."

NASA erupts in cheers

The landing depended on the perfect execution of a computer already given its commands, while scientists could only wait for a delayed signal back on Earth on how it all went. It takes fourteen minutes for a signal to reach Earth from Mars.

Engineers at NASA's Jet Propulsion Laboratory in Pasadena, California celebrate the landing of NASA's rover Curiosity on Mars. Image Credit: NASA/JPL-Caltech.

When word of the safe landing reached Earth, scientists at NASA jumped out of their chairs and threw their hands up in the air in joy, erupting in cheers, hugs and tears. A few minutes later they received the first three photos taken by Curiosity, black-and-white images of the rover's wheel on the rocky surface of Mars.
“I can’t believe this. This is unbelievable,” said Allen Chen, the deputy head of the rover's descent and landing team. In Times Square, hundreds gathered to watch the NASA live stream of the team in California overseeing the landing. When NASA released the images online, its website crashed due to an unprecedented number of hits. Mission manager at NASA's JPL Michael Watkins said that he loves these first few images the most. "Here we are seeing a part of Mars that we've never seen before," he said in a news conference this morning. Better-resolution photos should be arriving in the next few days, along with black-and-white panoramas and the first colour images.

Working on Mars time

Hundreds of thousands of scientists and engineers contributed to this $2.5 billion mission to Mars. Altogether, seven countries collaborated, including Canada, Finland, Spain, Russia, France, Germany, and the United States. Scientists at the Canadian Space Agency (CSA) spent years working on a device aboard Curiosity. A new alpha particle X-ray spectrometer, designed by a team of researchers at Guelph University in Ontario, will measure chemicals in the rocks. Director-general of space exploration at CSA Gilles Leclerc told the Associated Press that workers celebrated the landing last night. "Well, we're Canadians, eh? So it was less enthusiastic, but I would say it was as emotional as it was in the U.S. But there were cheers indeed and it was again a great moment."

There are 300 or more engineers and 400 scientists working on Curiosity's mission on Mars. Watkins called it a kind of immersion training, as the team will not only be learning how to operate the vehicle but how to work with each other. They will be working on Mars time for the next three months, experiencing a kind of interplanetary jet lag, as a day on Mars is 40 minutes longer than a day on Earth.
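Two timing figures in this article, the fourteen-minute signal delay and the 40-minutes-longer Martian day, are easy to reproduce. The sketch below assumes an Earth-Mars separation of roughly 250 million km at the time of landing (an assumed value; the actual distance varies from about 55 to 400 million km) and uses the standard length of a Martian sol, 24 h 39 m 35 s.

```python
# One-way light time for an assumed Earth-Mars distance of 250 million km
# (the separation varies between roughly 55 and 400 million km).
C_KM_PER_S = 299_792.458          # speed of light, km/s
EARTH_MARS_KM = 250e6             # assumed distance at landing
delay_min = EARTH_MARS_KM / C_KM_PER_S / 60
print(f"one-way signal delay: {delay_min:.1f} min")

# How much longer a Martian sol runs than an Earth day.
SOL_S = 24 * 3600 + 39 * 60 + 35  # mean solar day on Mars, in seconds
EARTH_DAY_S = 24 * 3600
drift_min = (SOL_S - EARTH_DAY_S) / 60
print(f"a sol runs {drift_min:.1f} min longer than an Earth day")
```

The first figure lands near fourteen minutes and the second near forty, matching the article; that daily forty-minute slip is the "inter-planetary jet lag" the mission team lives with while working on Mars time.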
<urn:uuid:70ceb230-7269-4565-90e3-c6b5afc9cf27>
{ "date": "2013-05-24T01:45:39", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9282928109169006, "score": 3.59375, "token_count": 1682, "url": "http://www.vancouverobserver.com/world/nasas-curiosity-landing-blazes-trail-humans-mars" }
Ear Checks

Our hospital offers thorough Ear Exams as one of our many extensive Dermatology Services. Conditions of the ears can be extremely uncomfortable and even painful. Symptoms like head shaking, scratching and pawing at the ears, rubbing the ears on the floor or furniture, whining, and abnormal odors are common. Conditions of the ear occur frequently in dogs and cats and result from a variety of causes. Determining these causes is vital to the long-term resolution of the symptoms. As with any other health issue, gathering a detailed history and performing a full examination of the patient are critical. Careful examination of the ear with an otoscope, visualizing the full length of the ear canal and the eardrum, provides information important to the diagnosis. Pain is common in ear disease, and some patients will require sedation or anesthesia to be properly evaluated. Additional diagnostic tests are often warranted. Gathering samples to check for mites, inflammation, bacteria and yeast is a common first step. Cultures are sometimes needed. In uncomplicated ear disease, treatment may be straightforward when the proper information is gathered. However, most recurrent cases of ear disease involve multiple causes like allergy, infection, foreign bodies, tumors or ear mites. Treating only one cause will limit the response to therapy. Rechecking the ear after each stage of the treatment is critical to providing a long-term resolution to the problem.
<urn:uuid:f3427910-3abe-40c2-9139-c8c35c3805f6>
{ "date": "2013-05-24T01:59:13", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9319800734519958, "score": 2.6875, "token_count": 285, "url": "http://www.vcahospitals.com/arroyo/services/primary-care/ear-checks" }
In the book Cutting Through the Hype: The Essential Guide to School Reform, authors Jane L. David and Larry Cuban concisely describe various aspects of educational reform. They categorize educational reform into three strategies: reforming the system, reforming how schools are organized, and reforming teaching and learning. Their descriptors are factual, historical and informative, but conspicuously devoid of any information supporting partisan political positions. David and Cuban give the facts about 22 educational reform strategies and not the political spins from either side of the aisle either vociferously supporting or vehemently opposing any of these 22 educational reform strategies. Does the American voter see education reform as another partisan political fight as our political leaders have demonstrated? Apparently, American voters see education reform much like David and Cuban see education reform, desperately needed and devoid of political bias. According to the Center for the Next Generation Survey of American Voters Attitudes on Education and Global Competitiveness (http://www.tcng.org/files/Survey_of_American_Voters_Attitudes_on_Education_and_Global_Competitiveness.pdf) and as stated in a recent US Politics Today article (http://uspolitics.einnews.com/pr_news/111185232/more-than-three-in-four-u-s-voters-want-next-president-to-prioritize-education-new-survey-finds), 78 percent of American voters say restoring America's leadership in global innovation and increasing investments in education should be a top or high priority for the next President. The American voter, regardless of political affiliation, wants education reform but are they willing to pay for educational reform? The answer is a resounding "Yes!" The survey also revealed that by more than a ratio of 2-1, voters are very, or somewhat, willing to pay more in taxes if the funds are dedicated to K-12 education programs.
The willingness to pay more taxes if funds are dedicated to education improvements was represented by strong majorities of each major political affiliation (81 percent of Democrats surveyed, 59 percent of Independents surveyed and 57 percent of Republicans surveyed.) The American voter realizes the importance of a strong education system and how it relates to global competitiveness; however, the improvement of our education system is contingent upon employing effective education reforms as described by David and Cuban. Perhaps David and Cuban did not address the politics of educational reform because the urgency of educational reform has superseded political partisanship. The American voter, Democrat, Republican, and Independent, are seemingly cognizant of the need for educational reform and are willing to support reforms with resources. But most importantly, are the politicians listening to their electorate?
<urn:uuid:ab5e40f6-0f53-4f73-9dd7-633ec047e965>
{ "date": "2013-05-24T01:45:08", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9475685954093933, "score": 2.53125, "token_count": 551, "url": "http://www.vcuedd.com/?p=2371" }
Gulf War syndrome is a widely used term to refer to the unexplained illnesses occurring in Gulf War veterans. The following are the most common symptoms of Gulf War syndrome. However, each person experiences symptoms differently. Symptoms may include:

According to the American College of Occupational and Environmental Medicine, at least 12 percent of Gulf War veterans are currently receiving some form of disability compensation because of Gulf War syndrome. Possible causes include:

While there is no specific treatment for Gulf War syndrome, research suggests that an approach called cognitive-behavioral therapy may help patients with non-specific symptom syndromes lead more productive lives by actively managing their symptoms. The Department of Veterans Affairs is conducting a two-year, scientifically controlled study to determine the effectiveness of cognitive-behavioral therapy for veterans with these symptoms. Research into Gulf War syndrome, which remains controversial, is taking place in research centers around the country.
<urn:uuid:1fe8a406-347e-4acb-8630-15ab7a92a0d0>
{ "date": "2013-05-24T01:45:31", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.959554135799408, "score": 3.171875, "token_count": 184, "url": "http://www.vetshelpcenter.com/articles/gulf-war-syndrome/confirmation/what-is-gulf-war-syndrome.html" }
[Photographs by the author, shown here by kind permission of the Dean & Chapter. Click on the images to enlarge them: some details cannot be seen in the smaller versions.] St Chad's Roman Catholic Cathedral, Queensway (Inner Ring Road dual carriageway), Birmingham, by A. W. N. Pugin (1812-1852) with George Myers (1803-1875) as builder. 1837-41. The interior is seen here at the start of a particularly joyous Mass for French-speaking members of the community. Overlooking the congregation on the right is a sixteenth-century oak statue of St Chad holding a model of Lichfield Cathedral, where this "Apostle of the Midlands" established his see in the seventh century. In terms of its continued outreach and vibrancy, as well as of its historic significance in the Catholic Revival, St Chad's is one of Pugin's most important works. It also marks a major advance in the architect's own career. John Betjeman was less than complimentary about the exterior, describing it as "not much to look at." But the interior was quite another matter: "inside it fairly takes the breath away," he wrote: "it soars to the heavens; its long thin pillars are like being in a mighty forest. The roof is a bit flimsy-looking, but the flimsiness is redeemed by the brilliant colours. The stained glass glows like jewels; the great rood screen in front of the altar adds mystery to what would otherwise be rather an obvious place; the altars blaze with gilding and colour" (qtd. in Doolan 1). Not everyone would agree that the roof looks "flimsy": Andy Foster, for instance, admires the "daringly thin timbers," suggesting that "anything heavier would overwhelm the arcades" which he describes as "exceptionally tall and delicate" (49). But most would find the rest of Betjeman's comment perfectly accurate — except for the mention of the rood screen, which was removed in 1967 in the interests of accessibility and openness (see final discussion). Left: The chancel. Right: Looking towards the west entrance. 
Worth noting in the left-hand picture (when enlarged) is the statue just glimpsed on the extreme left. A fifteenth-century oak figure of the Virgin Mary from the Netherlands or perhaps Germany, which Pugin restored and gave to the cathedral in 1841, it was probably the very first such statue to have been "erected for public veneration in England after the Reformation" (Doolan 5). So, like the new cathedral itself, it marks a significant step forward for Catholicism in Britain. Foster describes the Virgin as looking "memorably enduring and serene" (51). The statue stands under its canopy by the Lady Chapel, against the beautifully stencilled and scrolled pillars shown in close-up here. The archbishop's throne to its right, also of oak, was designed by Pugin, its canopy matching that over the altar, and rising to nearly 30' (Dent 467). Looking west from the chancel gives some idea of the spatial dimensions of the cathedral. The original organ was housed in a west gallery, which also accommodated the choir; but it was moved to the sacristy in 1854 (see Dent 464). The present organ was installed here only in the late twentieth century. Its splendid case works well with other elements in the cathedral. Interior of Chancel Left to right: (a) Stained glass window to the left of centre: here Pugin shows St John the Evangelist and St Peter, above St Michael and St Edmund. (b) Close-up of the altar of 1841, decorated at the top by Joseph Aloysius Pippet, Hardman's chief decorator. (c) Central east window, showing Mary holding the infant Jesus, beside St Chad with his mitre and red chasuble. All three east windows were the gift of the Earl of Shrewsbury: note the shields between the saints of the left-hand window, sporting the rampant heraldic lions of his family, cf. the lions on the west door of St Giles, Cheadle. The windows were executed for Pugin by William Warrington (Shepherd 356). 
The altar is described by Foster as an "important, early re-creation of medieval fittings, comparable with Pugin's thrones in the House of Lords" (51). The finely-wrought casket on the altar was also designed by Pugin. It is a reliquary — a container for the relics of St Chad found under the altar of the chapel of Aston Hall, on the outskirts of Birmingham. Pugin was guided here by Bede's description of the original shrine for the relics at Lichfield Cathedral (see Fisher 29-30). Even the curtains on either side of the altar were designed by Pugin (Doolan 6). Left: The Lady Chapel, with its screen, altar, elaborate reredos, cylindrical tabernacle and window all designed by Pugin. Right: Close-up of altar carving, showing the Presentation of Jesus in the Temple, the Nativity, and the Adoration of the Magi. The hounds in the nativity scene may be another reference to the Earl of Shrewsbury's family shield. Left: The reredos shows the Visitation, with Mary and her cousin Elizabeth, the Virgin Mary with the infant Jesus, and the Annunciation, with the Angel Gabriel. Below are four female saints: Mary Magdalene, St Barbara, St Cecilia and St Catherine. Right: The stained glass window at the back, partly obscured here, shows Mary with St Cuthbert and St. Chad. Pugin particularly mentions the tabernacle in the Lady Chapel when writing to A. N. Didron, editor of Annales Archéologiques, in January 1842, describing it as being "en forme de tour ornée de pierreries et de quatre évangélistes, en émail" (9). The Lady Chapel's screen too is elaborate, beautifully carved with a central quatrefoil, trefoils and pinnacled arches. This makes up a little for the loss of the cathedral's original rood screen. The walls are stencilled in patterns of blue and gold, with the initial "M" and fleur-de-lys patterning close to the altar. All in all, as Pugin himself remarks to Didron, the chapel is "extrêmement riche" (9). 
Monument to Bishop Walsh Left: The monument to Bishop Walsh in the north aisle, in Bath stone. Right: Closer view of the bishop's recumbent effigy. Bishop Walsh (1776-1849), shown here in "pontificals, with crozier and mitre" (Dent 465), was responsible for commissioning not only this cathedral but St Mary's College in nearby Oscott. He had taken a personal and paternal interest in the young architect, sometimes calling him "Bishop Pugin," and forbidding him to fast during Lent to preserve his health. He had once written to Shrewsbury, "I consider him to be an extraordinary genius raised up for these times" (qtd. in Hill 210). The design of the monument is widely attributed to Pugin, and its execution to George Myers, in 1850. The central roundel of its diapered backing sports a little figure of the bishop bearing a model of the cathedral. The monument was displayed in the Medieval Court at the Great Exhibition, where it attracted praise for being so finely carved and authentically medieval: "The effigy has a striking resemblance to those venerable and dignified effigies still remaining in our ancient churches" (The Crystal Palace, 215). Note that an early account credits the design of the monument to Pugin's eldest son, Edward Welby Pugin (see Dent 465). This seems likely in view of the latter's similar monuments to his own father at St Augustine's, Ramsgate (1853), and to Canon Richard North at Our Ladye Star of the Sea, Greenwich (1860). But it seems even more likely that Pugin himself would have been involved in paying this tribute to his old friend. The octagonal font, designed by Pugin, not elaborate but with four carvings representing the four Evangelists. The casket for the holy oils, behind it, was also designed by Pugin, as were the candle-holders and tiles, the latter made for Pugin by Minton & Co. (see Doolan 3). 
The candlestand by the font is similar to one of the large ones illustrated in the Crystal Palace catalogue (218), and is for the paschal candle at Easter. St Chad's makes its impact by bringing together many different elements. As at St Giles in Cheadle, Pugin has turned his hand to every aspect of the church and its fitments and furnishings, drawing on his scholarly research into medieval craftsmanship, and blending the subsequent designs into a new harmony. As well as recreating the past in his architecture and art, he has brought in the fruits of another related interest — collecting ecclesiastical antiquities. The statue of Mary by the Lady Chapel is just one example. Another important example is the fifteenth-century Flemish figure of Christ which Pugin affixed to the great crucifix on his rood screen, now suspended in front of the chancel. At last the time came, in June 1841, for the consecration. Partly because of the installation of the relics of St Chad, this grand affair, with Pugin as master of ceremonies, and Bishop Walsh officiating, lasted five days. The architect/designer had high hopes for his new cathedral: "see Birmingham is cramed [sic] full every sunday eving [sic] standing room & all," he wrote (qtd. in O'Donnell 21). And, as noted at the end of my discussion of the exterior, these hopes were largely fulfilled. The cathedral before the rood screen was removed (from Dent 458): sublimity sacrificed to accessibility? The cathedral is not as Pugin had left it. Removing the rood screen has clearly made a huge difference. Ironically, this feature so beloved of Pugin is now at Holy Trinity, Reading — an Anglican church — a similar fate to that which befell the tabernacle of St Augustine's, which ended up in the Harvard chapel at the Anglican Southwark Cathedral. Surely Betjeman was right to have felt that it added "mystery" to what is now, instead, "rather an obvious place." Other items have also been lost. 
Indeed, the listing text says bleakly: "The furnishings now mostly gone." On the other hand, like older cathedrals, St Chad's has acquired new elements over the years, such as additional stained glass windows by John Hardman and his firm, and of course the whole of St Edward's Chapel, designed by Pugin's grandson, Sebastian Pugin Powell (1866-1949). Still, St Chad's continues to serve as a lasting testimony to the inspiration and versatile talent of one "extraordinary genius." - St Chad's exterior - The Rood Screen originally at St Chad's - The Bishop's House (that once stood opposite; sadly, dem. 1960) - Some of Pugin's other stained glass windows in the Cathedral (coming shortly) - St Thomas à Becket window, by John Hardman & Co. (1865) (coming shortly) - World War I Memorial window, by John Hardman & Co. (dedicated 1921) (coming shortly) Web Resources of Special Interest - [Offsite] "The Building of St Chad's Cathedral." Catholic Herald Archive. - [Offsite] Navigate to a virtual tour of the crypt (the cathedral's own website). "Cathedral Church of St Chad, Birmingham." British Listed Buildings. Web. 20 January 2013. The Crystal Palace, and Its Contents. London: W. M. Clark, 1851. Internet Archive. Web. 20 January 2013. Dent, Robert Kirkup. Old and New Birmingham: A History of the Town and Its People. Birmingham: Houghton & Hammond, 1880. Internet Archive. Web. 20 January 2013. Doolan, Father Brian. The Metropolitan Cathedral and Basilica of St Chad Birmingham. 5th revised ed. Birmingham: St Chad's Publications, 2006. Available at the cathedral. Print. Fisher, Michael. "Gothic For Ever": A. W. N. Pugin, Lord Shrewsbury, and the Rebuilding of Catholic England. Reading: Spire, 2012. Print. Foster, Andy. Birmingham. Pevsner Architectural Guides. New Haven and London: Yale University Press, 2005. Print. Hill, Rosemary. God's Architect: Pugin and the Building of Romantic Britain. London: Penguin, 2007. Print. O'Donnell, Roderick. The Pugins and the Catholic Midlands. 
Leominster: Gracewing, and the Archdiocese of Birmingham, 2002. Print. Pugin, A. W. N. The Collected Letters of A. W. N. Pugin, Vol. 1. 1830-42. Ed. Margaret Belcher. Oxford: Oxford University Press, 2001. Print. Shepherd, Stanley A. The Stained Glass of A. W. N. Pugin. Reading: Spire Books, 2009. Print. Last modified 20 January 2013
<urn:uuid:c329c1b7-d3a1-4ab0-8fe1-ae7f32e58773>
{ "date": "2013-05-24T01:51:32", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9565255045890808, "score": 2.609375, "token_count": 2867, "url": "http://www.victorianweb.org/art/architecture/pugin/41.html" }
Washington, D.C.: President Bush's vague plan for coping with a serious outbreak of bird flu is based largely on fear and greed. There is no secret about this. He seeks to get people's attention by scaring the citizenry with visions of millions of people dying from a pandemic so bad it leads to martial law, mass quarantines, restrictions on travel, and so on. He wants to encourage private business to meet the crisis by producing more of existing drugs such as Tamiflu to combat a flu plague and entice the drug companies to work harder and faster to make a vaccine by ensuring its profitability. The answer to a bird flu pandemic is not a passive first-world population riveted to the TV, watching one person after another drop dead across the world as sickened birds fly closer and closer and finally land in our midst. The answer lies in effective communication at all levels among different nations, through their medical establishments, scientists, and spotters, so that as soon as sick or dead birds are found, the birds in surrounding areas can be culled. This is a job for the World Health Organization, which is part of the United Nations, the organization Bush and his ambassador, John Bolton, are determined at all costs to wreck. While developed countries race to lay in supplies of antiviral drugs, there is little interest in the animals themselves and in animal-human interaction where flu can begin and spread. The WHO and Food and Agriculture Organization have only 40 veterinarians between them. "Reducing human exposure requires education about handling poultry and a fundamental change in cultural attitudes towards human-animal interactions and husbandry in many parts of the world," writes The Lancet, the British medical journal. "In some African countries, people sleep in the same places as poultry. 
In southeast Asia, 'wet markets,' where live poultry are traded and slaughtered on the spot, pose a risk of human transmission. And in Central Asia and Eastern Europe, hunting of wild birds may have played a major part in the spread of avian influenza." Changing the interplay of animals and humans may meet considerable resistance among small poultry farmers in poor countries, who face the loss of whole flocks in a mass culling. If farmers are offered too little to cull their birds, they won't do it. And if too much money is proffered, "the money will be an incentive to deliberately infect their flocks," Milan Brahmbhatt, the World Bank's lead economist for East Asia and the Pacific, told The Lancet. The overall effect of a pandemic in Asia will be to drive small poultry farmers out of business and open the way for U.S.-style industrial chicken farming, with ownership concentrated in the hands of a few. Among the major exporters are China and Thailand (Southeast Asia now accounts for about a quarter of the world poultry business). Most of their chickens go to Japan. Many countries are banning imports from these two nations, and that is running up the price of chickens worldwide and promising to up exports from such places as the U.S., Brazil, and the EU. A serious effort to stave off a pandemic also means stopping the pharmaceutical companies from scaring people to make more money. It is by now well-known that the drug companies provide huge sums of cash to politicians$133 million to federal candidates since 1998, according to the Center for Public Integrity, with upwards of $1.5 million going to Bush, the top recipient. The industry operates an elaborate lobby in Washington that in 2004 spent $123 million and employed an army of 1,291 lobbyists, more than half of whom were former federal officials. 
The industry's sales machine aims to bypass doctors with TV and other advertising aimed directly at the patient, appealing to his or her judgment over that of a physician. In the case of making and marketing drugs to combat flu, the results are disastrous. The industry claims it can't make flu vaccines because there is no money in it. When asked about last year's flu vaccine shortage by CBS's Bob Schieffer, Bush said the industry was fearful of damage suits concocted by ambulance-chasing lawyers. He explained the shortage this way: "Bob, we relied upon a company out of England to provide about half of the flu vaccines for the United States citizen, and it turned out that the vaccine they were producing was contaminated. And so we took the right action and didn't allow contaminated medicine into our country." This was not true. The American inspectors had approved flu vaccine shipments from a U.S. producer's British company. It was British inspectors who blocked shipment of the questionable vaccine from the American firm. With no vaccine in sight, the U.S. government, along with others, is belatedly stocking up on Tamiflu, a drug that supposedly offers some defense against bird flu. But last week Japanese newspapers told how children who were administered Tamiflu went mad and tried to kill themselves by jumping out of windows. In a cautionary statement the FDA noted 12 deaths among children, and said there are reports of psychiatric disturbances, including hallucinations, along with heart and lung disorders. Roche, the manufacturer, is quoted by the BBC as stating that the rate of deaths and psychiatric problems is no higher among those taking its medication than among those with flu. The company is increasing Tamiflu production to 300 million doses a year to meet demand. 
<urn:uuid:11d5b9e6-b97c-4053-9615-3205974750a9>
{ "date": "2013-05-24T01:31:37", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9667551517486572, "score": 2.78125, "token_count": 1196, "url": "http://www.villagevoice.com/2005-11-15/news/capitalizing-on-the-flu/" }
Written by Andrew Forgotch
Last updated on January 07, 2013 @ 8:25PM
Created on January 07, 2013 @ 6:47PM

Health departments all across the state have reported high numbers of flu cases. The latest totals showed that there were more than 10,000 confirmed cases of the flu across the state. What's shocking is that this might be a low estimate. That's because health officials rely on reports from doctors across the state of the number of cases they treated. The problem is that not everyone who is sick goes to the doctor's office. As of mid-December there had been more than 10,000 cases of the flu in West Virginia. Last year there were a little fewer than 6,000 cases for the entire month of December. Health officials are concerned because the flu season typically hits in late January or early February. The good news in all of this is that the strain that's going around is H3N2, and that's what's included in the flu vaccine. Ted Krafczyk, an epidemiologist with the Monongalia County Health Department, said it's not too late to get a flu shot. Krafczyk also said that to prevent the flu, it's a good idea to wash your hands on a regular basis.
<urn:uuid:65978163-eda0-41bd-a043-b550426a6706>
{ "date": "2013-05-24T01:38:17", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9774330258369446, "score": 2.765625, "token_count": 302, "url": "http://www.wdtv.com/wdtv.cfm?func=view&section=5-News&item=Flu-Season-Hits-West-Virginia-Early7489" }
Meningitis Rate Is Dropping in U.S.
CDC Researchers Say Pneumococcal Vaccine Is Helping to Lower Meningitis Rate

May 25, 2011 -- Cases of bacterial meningitis continue to decline in the U.S., with incidence falling by almost a third over the last decade, the CDC says. The latest drop is being attributed in part to the introduction of the pneumococcal conjugate vaccine, which protects children from a leading cause of bacterial meningitis, Streptococcus pneumoniae. It follows an even bigger decline in cases over the previous decade, which saw the introduction of a vaccine targeting Haemophilus influenzae type b (Hib). Between the mid-1980s and mid-1990s, bacterial meningitis cases in the U.S. dropped by 55%. The CDC report appears in tomorrow's New England Journal of Medicine. "The good news is that this very serious infection is now a lot less common than it was," CDC chief of bacterial and respiratory diseases Cynthia Whitney, MD, MPH, tells WebMD. "But we want people to know that this disease does still occur. There are about 4,000 cases of bacterial meningitis each year in the U.S., so physicians still need to be aware of the signs and treat patients quickly and aggressively." Meningitis Can Be Fatal Quickly While there has been great progress in preventing bacterial meningitis, far less progress has been made in treating the disease once people get it, Whitney says. If not treated quickly, bacterial meningitis can sometimes progress from first symptoms to death in less than a day. In April, a 21-year-old college student in New Hampshire with a rare form of bacterial meningitis died just 12 hours after seeking medical treatment for a severe headache and rash, according to news reports. And last spring, two students at an elementary school in Oologah, Okla., died and four others were hospitalized with bacterial meningitis within days of first complaining of symptoms. 
High fever, headache, and neck stiffness are the most common symptoms of bacterial meningitis in adults and children over the age of 2. "When people get bacterial meningitis, it still tends to be very serious," Whitney says.
<urn:uuid:dd9796cb-7abd-4e1b-8c28-047c47cf30c6>
{ "date": "2013-05-24T01:30:55", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9455770254135132, "score": 2.625, "token_count": 472, "url": "http://www.webmd.com/brain/news/20110525/meningitis-rate-is-dropping-in-us?src=rsf_full-news_pub_none_rltd" }
In programming, classification of a particular type of information. It is easy for humans to distinguish between different types of data. We can usually tell at a glance whether a number is a percentage, a time, or an amount of money. We do this through special symbols -- %, :, and $ -- that indicate the data's type. Similarly, a computer uses special internal codes to keep track of the different types of data it processes. Most programming languages require the programmer to declare the data type of every data object, and most database systems require the user to specify the type of each data field. The available data types vary from one programming language to another, and from one database application to another, but the following usually exist in one form or another:
<urn:uuid:47437832-f7d4-4366-94ba-2caef5456fb8>
{ "date": "2013-05-24T01:36:53", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.8495874404907227, "score": 3.609375, "token_count": 152, "url": "http://www.webopedia.com/TERM/D/data_type.html" }
Primary Health Care in Action Madagascar in numbers1 - Life expectancy (both sexes, 2006): 59 years - Gross National Product per capita (PPP in international $, 2006): 870 - Per capita total expenditure on health (PPP in international $, 2005): 33 - Number of physicians (per 10 000 population, 2005): 3 MADAGASCAR PRIMARY HEALTH DRIVE ACHIEVES MIXED RESULTS2 - Life expectancy has increased since 1990, polio has been eradicated and infant mortality is declining - Islanders have never been more motivated to look after their health - Only 60–70% of the population has ready access to primary health care services - Health centres in a poor state of disrepair When the first batch of 1500 young health aides was dispatched in 1980 to Madagascar’s villages, it was thought to herald a new era in health care for the island nation off the south-east coast of Africa. The project was the centrepiece of the country’s primary health care programme, launched in 1978 with high hopes of meeting the Alma-Ata goal of providing health for all by 2000. The ‘health for all’ idea was not to eradicate every disease, but to attain an acceptable level of health, equitably distributed throughout the world. The results in Madagascar, however, have been mixed, with strong advances in some areas and little progress in others. On the plus side, islanders today have never been more motivated to look after their health, says Professor Dieudonné Randrianarimanana, cabinet director of the Madagascar Ministry of Health, Family Planning and Social Protection. HEALTH HAS IMPROVED Currently, average life expectancy is 59 years which represents an increase of about 6 years from its 1990 level. Poliomyelitis has been eradicated. Officials there say the prevalence of leprosy is less than 1 per 10 000; and infant mortality is decreasing (in 2006 the probability of dying in the first year of life was down to 72 deaths per 1000 live births compared with 84 in 2000 and 103 in 1990). 
But 30 years on, only 60–70% of the population has ready access to primary health care, officials say. Many people still have to walk 10 kilometres or more to receive treatment, though mobile health centres have been introduced in remote and sparsely populated areas. Like Randrianarimanana, nurse Florentine Odette Razanandrianina has experienced the ups and downs of primary health care. She arrived in the village of Ambohimiarintsoa, 200 kilometres from the capital Antananarivo, in October 2006 to run the health clinic. She provides twice-weekly prenatal and postnatal check-ups. She also offers child immunization and vaccination, family planning services and disease treatment. But five of the centre’s seven small rooms are in a poor state of repair and lack sufficient equipment, Razanandrianina says. “We have five mattresses for only one bed. Consequently, we are often obliged to let patients sleep on the mattresses placed directly on the soil.” HEALTH WORKERS RECEIVE MIXED RECEPTION There are many other health centres in a similarly poor state of disrepair across Madagascar, officials admit. Also, frictions can arise when modern practices are perceived as counter to traditional customs. Since moving to the village, Razanandrianina’s efforts to teach people about the need for personal hygiene have not always been welcome. Despite these setbacks, Razanandrianina has not curtailed her efforts. For example, when people from villages further away have chosen not to attend vaccination clinics, she has gone to them. “Every time we visit the remotest villages, people wait for us in a group. They really appreciate our visits,” Razanandrianina says. This is an abridged version of an article published in the Bulletin of the World Health Organization in June 2008. 
1World Health Statistics 2008, Online version: http://www.who.int/whosis/data/Search.jsp (accessed on 26/09/2008) 2Primary health care: back to basics in Madagascar, WHO Bulletin, Vol 86 (6), http://www.who.int/bulletin/volumes/86/6/08-010608/en/index.html
<urn:uuid:7ae46773-f648-4faf-9ea3-bc4e79c7ba58>
{ "date": "2013-05-24T01:38:52", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9371748566627502, "score": 3, "token_count": 921, "url": "http://www.who.int/whr/2008/media_centre/country_profiles/en/index7.html" }
The ungulates and their relatives are a puzzling group, including animals as diverse as whales and hippos, elephants and hyraxes, horses and tapirs, giraffes and sheep. What they have in common is that many of them walk around on their toenails. The phylogeny (family relationships) of the ungulates is a constantly shifting terrain, but genetic analysis is beginning to help sort out this enigmatic group of animals. At WhoZoo, elephants, hippos and rhinos have been grouped for convenience as "large herbivores," but this is an artifical grouping; these animals actually represent three different branches of hooved animals. The simplified family tree below is redrawn from the cladograms and other information at the University of California Museum of Paleontology Web Site and from an article on mammalian phylogeny in Nature (1 February 2001). The three main branches of the tree below are the Cetartiodactyla, the Perissodactyla and a part of the recently defined Superorder Afrotheria, which includes the Orders Proboscidea (elephants), Sirenia (manatees and sea cows) and Hyracoidea (hyraxes). In the Perissodactyla, the major axis of the leg lines up with the middle toe (the third digit), while in the hooved members of the Cetartiodactyla, this axis falls between two toes (the third and fourth digits). Because of molecular evidence that indicates similarities between hippos and whales, the Order Cetartiodactyla combines two former orders: the Cetacea (whales and dolphins) and the Artiodactyla. The Ruminants are a large and successful subgroup of the Cetartiodactyla with complex stomachs and the habit of chewing a cud -- a chunk of food that has been swallowed once and then brought back up into the mouth for additional processing. Of the ruminants, the Bovids constitute the largest and most diverse group, including cattle, antelope, sheep and goats. Pointers indicate taxa with representatives at the Fort Worth Zoo. 
For groups including animals with an informational page or a picture at WhoZoo, there is a link from the group name to one representative of the group.

References:
- University of California Museum of Paleontology.
- Murphy, et al. "Molecular phylogenetics and the origins of placental mammals." Nature 409 (1 February 2001): 614-618.

For more information on specific ungulates, see The Ultimate Ungulate Page.
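The grouping described above can also be sketched as a nested data structure. This is purely an illustration of the branching named in the text: the clade names and example animals come from the article, while the lowercase keys "non-ruminants" and "other ruminants" are placeholder labels of mine, not formal taxa.

```python
# A sketch of the simplified ungulate family tree described in the text,
# modeled as nested dicts (clades) with lists of example animals at the tips.
ungulate_tree = {
    "Cetartiodactyla": {
        "Cetacea": ["whales", "dolphins"],
        "Artiodactyla": {
            "non-ruminants": ["hippos"],          # placeholder label, not a taxon
            "Ruminants": {
                "Bovids": ["cattle", "antelope", "sheep", "goats"],
                "other ruminants": ["giraffes"],  # placeholder label, not a taxon
            },
        },
    },
    "Perissodactyla": ["horses", "tapirs", "rhinos"],
    "Afrotheria (part)": {
        "Proboscidea": ["elephants"],
        "Sirenia": ["manatees", "sea cows"],
        "Hyracoidea": ["hyraxes"],
    },
}

def leaves(node):
    """Collect the example animals at the tips of the tree."""
    if isinstance(node, list):
        return list(node)
    out = []
    for child in node.values():
        out.extend(leaves(child))
    return out

print(sorted(leaves(ungulate_tree)))
```

Walking the structure this way mirrors how a cladogram is read: every animal in a subtree shares the common ancestor that the subtree's root represents.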
<urn:uuid:28b01616-244f-42a0-95a3-76a9e2d19feb>
{ "date": "2013-05-24T01:51:08", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9025325775146484, "score": 3.375, "token_count": 536, "url": "http://www.whozoo.org/mammals/Hoof/hoofphylo.html" }
I posted a couple of weeks ago about our progress with determining importance and main idea/details in informational text. If you're interested, you can read about that here. While we continue to work on this, we have layered in work on note taking and text structures. I will eventually share some of that work, but today I want to share some of the main idea stuff. What I found when we started, was that the kids struggled with one main idea, let alone two. So, I backtracked a bit to get that solid, before pushing for that 5th grade standard of finding more than one (it will be coming!). Since I'm a pretty linear thinker, I approached it from the sequence of modeling, doing as a whole class, trying alone, then getting in small groups to do it together. They did a really nice job. Here are the kids as they began working alone in short text, rocking those post-its: They're so diligent, and I swear the rest of the class was doing the same! They are the greatest class ever. They had a choice of three short articles that we had already worked with in other ways, so they were somewhat familiar. As they worked, I conferred with them to get a sense of who might be finding it tricky and who was on the road to getting it. This helped me to group them for the small group portion of this work. Once I had a sense of how to group them, their task was to get together, share their work with one another, and collaboratively generate a main idea that they felt was most accurate. This was great to observe--I'm so proud of their ability to listen to one another and speak kindly to one another when they disagree. Once they did this, the worked together to determine the most important/key details to support this main idea. They then put this together on a large chart paper to then present to the group. Here, you can see a main idea that one of the groups came up with. This group read an article called "Alaska: State of Extremes". 
This main idea is pretty accurate: "One main idea is that Alaska is extreme because it stands out from the other 49 states." True. However, we talked subsequently about paraphrasing to show understanding. (foreshadowing our note taking work) For example, here, using something other than extreme (extraordinary/exceptional/unusual characteristics, etc.), would be closer to extreme than stands out. In any event, they're on the right track. Here are the key ideas/supports that fit nicely with their main idea: And here is one that is a bit awkward: In this one, they were looking at a section of the text that was about how the capital is tricky to even get to, because there are no roads and it's very isolated, the point being that in most states, the capital is accessible and not isolated. This idea was not as explicit in the text as the other three, so they needed to do a bit more work here. They were able to hint that they understood this in conversation, but had trouble articulating it in a way that was clear. They were pulling this together with the idea of Alaska not being part of the contiguous US as well. That's ok! We will get there, little people. By the way, these adorable people decided to stay in for recess to fancy up the poster with all of the patterns and such, since I was pretty clear that the sketching and doodling done in workshop should be in service of the work itself (sometimes it is, right?). In this case, they just wanted to make it look nice, so we had lunch together while they decorated it. I love these kids--they are kids I am willing to give up my lunch to hang out with, which, if you know me, says something.
We have a wonderful staff developer from Teachers College coming in mid-December to help us with the final phase, which is to do with research. She has a great blog if you are interested: indent. There's a nice post about annotating that I found very helpful. And now I will leave you with a picture of another main idea on this chilly Saturday morning. I sure needed this by the end of conference week: Need I say more? Our jobs are awesome in so many ways :)
<urn:uuid:eed6da8d-2578-459c-8f9c-05392c2cdbbe>
{ "date": "2013-05-24T01:45:04", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9838148355484009, "score": 3.0625, "token_count": 968, "url": "http://www.wildrumpusblog.com/2012/12/main-ideas-follow-up-2.html" }
A spectacular cathedral whose architecture was inspired by the northern lights has opened in a town 500 kilometres north of the Arctic Circle in Norway. Designed by Danish architectural firm Schmidt Hammer Lassen in collaboration with local company Link Arkitekter, the cathedral opened this week, more than 10 years after its design was chosen by the municipality of Alta.

In 2001, Alta held a competition calling for a design for an architectural landmark that would highlight Alta's role as a tourism hotspot. The town was keen to make sure that people could also see the legendary aurora borealis.

The 1,917-square-metre cathedral, which cost €16.2 million to build over a four-year period, rises 47 metres into the air in a spiralling shape. The outside of the building has been clad in titanium, which reflects the northern lights. Inside, there is a functioning church which can accommodate 350 people. There are also offices, classrooms and exhibition areas.

The cathedral was opened by the Crown Princess of Norway, Mette-Marit. Check out the gallery of images embedded in this article.
<urn:uuid:7548f15c-930a-4a11-bd02-f472511e0101>
{ "date": "2013-05-24T01:30:36", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9579178690910339, "score": 2.5625, "token_count": 252, "url": "http://www.wired.co.uk/news/archive/2013-02/12/cathedral-of-northern-lights" }
Odd Wisconsin Archive
The Search for Wisconsin's First Priest
That's the question that greeted Rev. A.A.A. Schmirler as he paddled the rivers of northern Wisconsin during the summer of 1959. The historian-priest was not fishing, however, but retracing the route of the first missionary to visit Wisconsin almost exactly 300 years before. Father Schmirler was trying to discover the exact location where Fr. Rene Menard died while trying to reach refugee Indians on the headwaters of the Black River in the summer of 1661.

When Menard died, Iroquois warriors hundreds of miles to the east had driven rival Indian nations who were sympathetic to the French from New York, Ohio, Quebec, and Michigan. The fleeing tribes, who included the Sauk, Fox, Potawatomi, Kickapoo, Ottawa, Miami, Huron, and others, took refuge in Wisconsin during the 1650s. Some who had lived close to the French settlements had become Christian, and Fr. Menard joined a flotilla of fur traders to rejoin his congregation in the far western wilderness.

So in the fall of 1660, the first missionary to Wisconsin travelled the Great Lakes with fur traders. He made it as far as Lake Superior when his birchbark canoe was irretrievably damaged just as winter set in. He survived at modern L'Anse, Michigan, with the help of voyageurs and local Indians. While wintering on Keweenaw Bay, he wrote two letters about conditions among the refugees, and with the spring thaw set out for a village of exiled Hurons near the headwaters of the Black River. He hiked overland across the Upper Peninsula, entered the Wisconsin River near Lac Vieux Desert, and proceeded down it to the vicinity of Wausau. Guided by a Frenchman named L'Esperance who had already made the trip, he threaded the streams northwest of modern Wausau. When they were within one day of their destination, Fr. Menard left the canoe to make a short portage while L'Esperance shot the rapids. Fr. Menard was never seen again. Three hundred years later, Fr.
Schmirler followed him downriver from Lac Vieux Desert in a 14-foot kayak. He had first examined all the contemporary textual and cartographic evidence, as well as reviewing all the previous theories about where Fr. Menard had died. Allowing for modern changes in the river (such as dams) and considering the practices of 17th-century voyageurs, he ultimately concluded that the rapids where Fr. Menard vanished were on the Rib River where it crosses the Taylor/Lincoln county line, just east of modern Goodrich (topographic map). You can read Fr. Schmirler's account of his unique on-the-ground investigations in the online version of the Wisconsin Magazine of History. And you can see how Fr. Menard's contemporaries reported his death at Turning Points in Wisconsin History. :: Posted in Strange Deaths on June 17, 2007
<urn:uuid:43471d40-d5f9-4eb1-867f-c25aab2ed8a7>
{ "date": "2013-05-24T01:31:00", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9779901504516602, "score": 2.875, "token_count": 638, "url": "http://www.wisconsinhistory.org/odd/archives/002947.asp" }
Ma Ingalls describes family life in 1861
Letters to Charles and Martha Carpenter, 1861-1919 and 1975-1977 (selection).
Caroline Quiner Ingalls (mother of Laura Ingalls Wilder) wrote this letter to her sister, Martha Quiner Carpenter, on Oct. 6, 1861. Both sisters were in their early twenties and recently married. Caroline was still living near their parents at Concord, in Jefferson Co., Wis., but Martha had moved with her new husband to Stockholm, in Pepin Co. A few years later, Caroline would join her sister up north when the Ingalls family moved to the "little house in the big woods" in nearby Pepin.

Caroline's letter was written before she had any children. It describes life on the farm of Laura's grandparents, including an epidemic of scarlet fever which sickened Laura's cousins and nearly killed her grandmother. It also describes Laura's parents' early married life, their health, and their farm work and crops.

1860 census records suggest that the people mentioned in the letter are:
"mother": Charlotte Mary (Tucker) (Quiner) Holbrook, Laura's maternal grandmother.
"Charlotte": Charlotte Holbrook, Caroline's half-sister and Laura's aunt, born 1854.
"Eliza": Eliza Quiner, Caroline's sister, born 1842.
"Louisa": daughter of Caroline's brother Henry Quiner, born 1860.
"Lafayette": three-year-old son of Charles Ingalls's sister Lydia.
"Thomas": Thomas Quiner, Caroline's brother, born 1844.
"Nancy": Nina Quiner, wife of Caroline's brother Joseph.
"Father Ingalls": Lansford Ingalls, Charles' father and Laura's grandfather.
"Peter": Peter Ingalls, Charles' brother and Laura's uncle.

To see the original handwritten letter, click "View the Document" below. Use the right-hand frame of the document viewer to navigate. To see a typed transcript, open the drop-down box above the navigation pane reading, "View Image & Text." This letter is part of a small collection of letters to Charles and Martha Carpenter preserved by the family.
Civil War letters by Laura's uncles and one aunt are included elsewhere in Turning Points in Wisconsin History. A lesson plan based on this document is available.

Immigration and Settlement | Wisconsin in the Civil War Era | The Civil War Home Front | Farming and Rural Life

Creator: Ingalls, Caroline Quiner
Pub Data: Letters to Charles and Martha Carpenter, 1861-1919 and 1975-1977 (selection), Wisconsin Historical Society manuscript collection Stout SC 142.
Citation: Ingalls, Caroline Quiner. Letter to Martha Quiner Carpenter, Oct. 6, 1861, in: Letters to Charles and Martha Carpenter, 1861-1919 and 1975-1977, Wisconsin Historical Society manuscript collection Stout SC 142. Online facsimile at: Visited on: 5/23/2013
<urn:uuid:d513356b-e344-4460-b01e-4ea8531e8977>
{ "date": "2013-05-24T01:45:32", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9190760254859924, "score": 2.78125, "token_count": 627, "url": "http://www.wisconsinhistory.org/turningpoints/search.asp?id=1711" }
Many living things undergo a process known as development, in which a single cell replicates and divides to form a multicellular organism with various structures and functions that the original cell did not have. You are one such organism, growing from a fertilized egg cell in your mother's womb to the full-sized adult you are today. Along the way, your cells changed from having features like the original fertilized egg to having those of the developed cells that make up human tissues like nerve and skin. This process is known as differentiation.

Plants are also multicellular organisms, and they too undergo this differentiation process. There are many similarities between the way plants and animals like yourself differentiate, and therefore they make excellent creatures for studying the cycle of growth and development. You will be studying this cycle in a lab in the next few days, and to be ready for it, there are some ideas with which you need to be familiar. The following questions will help you do so.

1. What function does DNA play in all organisms?
2. How does fertilization occur in flowering plants and what does it produce?
3. Why is there identical DNA in all cells of the same plant (provided there are no mutations)?
4. What occurs during cellular differentiation?
5. Does a developing plant embryo undergo differentiation? Explain your answer.
6. What is germination?
7. Describe the physical appearance and function of the radicle and hypocotyl in germinating and sprouting seeds.
8. How can cells create tissues that have different morphology and physiology in spite of the fact that the DNA in all the cells in any organism is identical?
9. Make a labeled diagram of a young radicle and identify the regions where you think the most growth is taking place.
10. If respiration rate (consumption of O2/min/mg tissue) is directly related to rate of growth in a plant, what part of the radicle would have the highest respiration rate? Explain your answer.
11. Hormones play a critical role in the growth and development of plants as well as other organisms; what are the specific hormones that influence radicle development and what effect do they have?
12. Pick two embryonic plant structures and decide on an interesting question you could ask about their respective developmental rates; then write a hypothesis that addresses your question (explaining why it does) and write a brief summary of a procedure you could use to test it.
<urn:uuid:33df6621-3db8-417f-8cb0-1980bfa99e7f>
{ "date": "2013-05-24T01:31:05", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9550429582595825, "score": 3.6875, "token_count": 516, "url": "http://www.woodrow.org/teachers/bi/1997/plantdev/bi97pp4.htm" }
I'm young - do I really need to know the signs of a stroke?

Q. I hear a lot about knowing the signs of stroke. I'm young - is it really important to me?

A. A stroke can happen to anyone at any time, regardless of race, gender or age, so we all should be aware of the risks, symptoms and prevention. Some risks for stroke may be out of your control. Risks that you cannot change include being over age 55, male, African-American, having a family history, or a medical condition such as diabetes.

Lifestyle risk factors include:
• Inactivity (lack of exercise)
• Poor diet
• Alcohol consumption

However, there are risks for stroke that we can control. Medical conditions that contribute to an increased risk for stroke and that can be managed or controlled include:
• Previous stroke episode
• High cholesterol
• High blood pressure
• Heart disease
• Atrial fibrillation

It is believed that 80% of strokes are preventable. Listed below are stroke prevention guidelines as provided by the National Stroke Association:

1. Know your blood pressure (hypertension).
2. If you take blood pressure medicine, take it as prescribed.
3. Know if you have atrial fibrillation (Afib) - Afib is an abnormal heartbeat that can increase stroke risk by 500 percent.
4. Stop smoking and using tobacco products. Smoking doubles the risk of stroke.
5. Limit alcohol use - alcohol use has been linked to stroke in many studies.
6. Know your cholesterol levels - cholesterol is a fatty substance in blood that is made by the body and also comes from food. High cholesterol levels can clog arteries and cause a stroke.
7. Control diabetes - your doctor can prescribe a nutrition program, lifestyle changes and medicine. Know your blood sugar level and your hemoglobin A1C if you have diabetes.
8. Manage exercise and diet - excess weight strains the circulatory system. Maintain a diet low in calories, salt, saturated and trans fats and cholesterol. Eat at least five servings of fruit and vegetables daily. Get cardiovascular exercise for 30 minutes per day, 5 days a week.
9. Treat circulation problems - fatty deposits can block arteries.

To learn more about stroke, visit www.stroke.com
<urn:uuid:bd971e3d-44c9-4781-b731-f45d53240170>
{ "date": "2013-05-24T01:59:27", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9171201586723328, "score": 2.796875, "token_count": 486, "url": "http://www.woosterhospital.org/article/risk-stroke/im-young-do-i-really-need-know-signs-stroke" }
Sorry, no definitions found.

“To say nothing of Bill, staggering around those second-tier campaign-stops, still wagging that effete index-finger in our faces.”

“The nails should be neither longer nor shorter than the points of the fingers; and the surgeon should practice with the extremities of the fingers, the index-finger being usually turned to the thumb; when using the entire hand, it should be prone; when both hands, they should be opposed to one another.”

“That analysis used the measurements of thumb width, index-finger width, and index-finger length for the program.”

“Among the channels XM is adding: a classic rock venue called Big Tracks; U.S. Country, which will offer--you guessed it--country & western music; a southern gospel station, enLighten; and thanks to the magic of diversity and freedom of choice, the former will be joined by a channel devoted to--prepare those pinky and index-finger Mephistopheles horns--heavy metal rock.”

“Geraldine turns away from the bench and sends an index-finger signal to Clarence Wexler.”

“Then when he spoke he was likely to fling back his great, white mane, his eyes half closed yet showing a gleam of fire between the lids, his clenched fist lifted, or his index-finger pointing, to give force and meaning to his words.”

“You might wear out your index-finger running up and down the columns of dictionaries, and never find the word,”

“She clenched her hands, looked at her index-finger nail again.”

“Sweeney used the traditional black "stroke" or "frailing" or "claw-hammer" style of striking down across the strings with thumb and the back of the index-finger nail (I would demonstrate).”

““The new Act makes inheritance on intestacy very much simpler,” said Mr. Murbles, setting his knife and fork together, placing both elbows on the table and laying the index-finger of his right hand against his left thumb in a gesture of tabulation.”
<urn:uuid:64b504b5-ea7f-4025-ba92-4fedf64955eb>
{ "date": "2013-05-24T01:59:45", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9358335733413696, "score": 2.65625, "token_count": 578, "url": "http://www.wordnik.com/words/index-finger" }
American Heritage® Dictionary of the English Language, Fourth Edition - n. A state of open, armed, often prolonged conflict carried on between nations, states, or parties. - n. The period of such conflict. - n. The techniques and procedures of war; military science. - n. A condition of active antagonism or contention: a war of words; a price war. - n. A concerted effort or campaign to combat or put an end to something considered injurious: the war against acid rain. - v. To wage or carry on warfare. - v. To be in a state of hostility or rivalry; contend. - idiom. at war In an active state of conflict or contention. Century Dictionary and Cyclopedia - n. A contest between nations or states (international war), or between parties in the same state (civil war), carried on by force of arms. International or public war is always understood to be authorized by the sovereign powers of the nations engaged in it; when it is carried into the territories of the antagonist it is called an aggressive or offensive war, and when carried on to resist such aggression it is called defensive. Certain usages or rights of war have come to be generally recognized and defined under the name of the Laws of War, which in general (but subject to some humane restrictions which in recent times have been greatly increased) permit the destruction or capture of armed enemies, the destruction of property likely to be serviceable to them, the stoppage of all their channels of traffic, and the appropriation of everything in an enemy's country necessary for the support and subsistence of the invading army.
On the other hand, though an enemy may be starved into surrender, wounding, except in battle, mutilation, and all cruel and wanton devastation are contrary to the usages of war, as are also bombarding an unprotected town, the use of poison in any way, and torture to extort information from an enemy: but it is admitted that an enemy may be put to death for certain acts which are in themselves not criminal, and it may be even highly patriotic and praiseworthy, but are injurious to the invaders, such as firing on the invaders although not regularly enrolled in an organized military force, or seeking to impair the invaders' lines of communication. - n. A state of active opposition, hostility, or contest: as, to be at war (that is, engaged in active hostilities). - n. Any kind of contest or conflict; contention; strife: as, a wordy war. - n. The profession of arms; the art of war. - n. Forces; army. Compare battle. - n. Warlike outfit. - n. Specifically— In Roman history, the war between Sulla and Marius (commencing 88 b. c.) or that between Pompey and Cæsar (commencing 49 b. c.) - n. In English history, the war of the great rebellion. See rebellion. - n. In United States history, the war of secession. See secession. - n. of 1828–9, ending in the defeat of Turkey; - n. of 1853–6 (see Crimean); - n. of 1877–8, between Russia and its allies (Rumania, etc.) and Turkey, resulting in the defeat of Turkey and the reconstruction of southeastern Europe. - n. 343–341 b. c. - n. 326–304 b. c. - n. 298–290 b. c., ending in the triumph of Rome. - To make or carry on war; carry on hostilities; fight. - To contend; strive violently; be in a state of opposition. - To make war upon; oppose, as in war; contend against. - To carry on, as a contest. - Same as worse. - To defeat; worst. - A Middle English form of ware. - A Middle English form of were. - n. 
uncountable Organized, large-scale, armed conflict between countries or between national, ethnic, or other sizeable groups, usually involving the engagement of military forces. - n. countable A particular conflict of this kind. - n. countable By extension, any conflict, or anything resembling a conflict. - n. uncountable A particular card game for two players, notable for having its outcome predetermined by how the cards are dealt. - v. intransitive To engage in conflict (may be followed by "with" to specify the foe). - v. To carry on, as a contest; to wage. GNU Webster's 1913 - adj. obsolete Ware; aware. - n. A contest between nations or states, carried on by force, whether for defence, for revenging insults and redressing wrongs, for the extension of commerce, for the acquisition of territory, for obtaining and establishing the superiority and dominion of one over the other, or for any other purpose; armed conflict of sovereign powers; declared and open hostilities. - n. (Law) A condition of belligerency to be maintained by physical force. In this sense, levying war against the sovereign authority is treason. - n. Poetic Instruments of war. - n. Poetic Forces; army. - n. The profession of arms; the art of war. - n. a state of opposition or contest; an act of opposition; an inimical contest, act, or action; enmity; hostility. - v. To make war; to invade or attack a state or nation with force of arms; to carry on hostilities; to be in a state by violence. - v. To contend; to strive violently; to fight. - v. rare To make war upon; to fight. - v. rare To carry on, as a contest; to wage. - n. the waging of armed conflict against an enemy - v. make or wage war - n. an active struggle between competing entities - n. a concerted campaign to end something that is injurious - n. 
a legal state created by a declaration of war and ended by official declaration during which the international rules of war apply

- From Middle English werre, from Late Old English werre, wyrre "armed conflict", from Old Northern French werre (compare Old French guerre, whence modern French guerre), from Frankish *werra (“riot, disturbance, quarrel”), from Proto-Germanic *werrō (“mixture, mix-up, confusion”), from Proto-Indo-European *wers- (“to mix up, confuse, beat, thresh”). Akin to Old High German werra ("confusion, strife, quarrel") (German verwirren (“to confuse”)), Old Saxon werran ("to confuse, perplex"), Dutch war ("confusion, disarray"), Old English wyrsa, wiersa ("worse"), Old Norse verri ("worse") (originally "confounded, mixed up"). Compare Latin versus ("against, turned"), past participle of vertere ("turn, change, overthrow, destroy"). More at worse, wurst. (Wiktionary)
- Middle English warre, from Old North French werre, of Germanic origin. (American Heritage® Dictionary of the English Language, Fourth Edition)

“American plans to loot Iraqi oil and other Bush war crimes: Though Bush has given every other lie and cover story to justify the US war of aggression against Iraq, the real reasons for the 'war' are now openly admitted.”

“President Bush regrets his legacy as man who wanted war: President Bush has admitted to The Times that his gun-slinging rhetoric made the world believe that he was a guy really anxious for war in Iraq.”

“Edwards: "End 'preventive war' doctrine": John Edwards talks about ending Bush's "preventative war doctrine" and how to diplomatically engage with Iran.”

“Chomsky: 'There Is No War On Terror': The acclaimed critic of U.S. foreign policy analyzes Bush's current political troubles, the war on Iraq, and what's really behind the global 'war on terror.'”

“If Iraq is key to Bush's 'terror war' ... we're losing: If Democrats are going to continue to acknowledge Bush's 'terror war', they should oblige him and aggressively tie it to the quagmire in Iraq and his regime's wallowing failures elsewhere in the world.”

“Bush's insistence that he treated war with Iraq as a last resort and that Saddam Hussein was the one who chose war by refusing to let”

“That's funny Lynn Cheney is a war monger of the AEI enterprise$$ for$$ war$$ think tank.”

“While the phrase The war to end war is often associated with Woodrow Wilson, its authorship was claimed by Wells in an article in Liberty, December 29, 1934, p. 4.”

“At the same time, if we have the choice of continued war or a cowardly peace -- _we vote for war_.”

“_It is the war which kills slavery, and not the man who leads the war_.”
I will teach you in my verse <... ...And all that heavy metal. A list of English words that are three letters long. ABM Agreement, accession to a co..., accession to a tr..., accession to an a..., achievement of peace, ACP-EC Convention, advanced technolo..., aerospace industry, African organisation, aggression, agreement, agricultural coop... and 851 more... Here be a trove of words and phrases associated (fore or aft) with picarooning / pickarooning, scavenged from Google Books citations. The Prince Edward Island folksong Mick Riley inspi... English words of Norman-French origin. Read the top word on the list and add a word that you associate with it. The association may be semantic, etymological, structural, literary, personal, etc. 1. In t... Looking for tweets for war.
<urn:uuid:9f5da247-70d1-4901-80f2-d257de8f427a>
{ "date": "2013-05-24T01:47:46", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8895094990730286, "score": 3.3125, "token_count": 2497, "url": "http://www.wordnik.com/words/war" }
The challenges we face due to the effects of climate change are unprecedented. Its reach is widespread, taking into account damage to ecosystems - which in turn affects entire nations' infrastructures - the economy and people's health. Already we are seeing extreme weather conditions leading to widespread hunger and disease. Reports find that unless action is taken now, the economic costs of climate change could amount to US$20 trillion annually by 2100 - that's 6-8% of global economic output. Since 1990, annual losses of around $60 billion due to climate change have been recorded - with 2005 costing a record $200 billion. In the US, Hurricane Katrina cost $125 billion in economic losses. The European heat wave in 2003 cost $15 billion in damages. Flood damage costs in Europe are anticipated to rise from $10 billion to $120-$150 billion in the years ahead. Massive amounts of revenue will be lost from the collapse of the tourist industry as places of natural beauty and tranquillity are irreparably damaged. An example of this is Australia's Great Barrier Reef. Currently attracting millions of visitors each year, it is already showing signs of dying. If ocean temperatures continue to increase, it is predicted that 95% of the Barrier Reef's living coral will soon be lost. There is not one part of the world or any one individual living on the planet that will go unaffected by the serious results of inaction on climate change. Predictions of the timescale in which we may expect to experience the most serious effects of climate change have been seen to be conservative, with many events, such as the complete melting of Arctic ice, being revised to occur sooner as the science is better understood. The cost of not putting climate change mitigation at the top of the agenda now could be immeasurable. Further delay will increase the costs of reducing emissions and will risk us reaching the point of no return.
As the cost of action is far less than the cost of inaction, reports show that policies put forward with the goal of mitigating global warming in the short term will not have a major impact on the economy.
<urn:uuid:06552abe-5837-422c-9e37-af1728e6814d>
{ "date": "2013-05-24T01:38:09", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9492629766464233, "score": 3.546875, "token_count": 435, "url": "http://www.worldpreservationfoundation.org/topic.php?cat=economicCosts" }
The Feasibility of Micronutrient (Iron) Food Fortification in Pacific Island Countries (A report of Mr Robert Hughes prepared for WHO Western Pacific Regional Office)

The main aim of the consultancy was to obtain information on food production and distribution, availability and consumption in the Pacific to determine the best vehicles for food fortification. Specific objectives were:

1. Determination of the total volume of possible vehicles for fortification available for consumption in Pacific Island Countries (PIC) by source of origin.
2. Description of distribution and marketing structure (in bulk or labeled), as well as agreements among countries regarding importations and food control.
3. Investigate the availability of information from food consumption studies that can help determine the distribution of the above foods (potential vehicles for fortification) in each country in order to estimate nutritional implications for urban vs rural population, different age groups, males/females and socio-economic classes.
4. Draw conclusions on the most suitable vehicles for a fortification program for the Pacific countries and propose the steps needed to implement this.

The prevalence of anaemia is high in the region and the most likely causes are diets insufficient in iron and/or parasite infections. The prevalence of anaemia in women and children in PICs is high enough to warrant a public health intervention. Fortifying the food supply with iron would be an effective way of increasing population iron intakes. The most effective programs to reduce iron deficiency anaemia would involve the elimination of helminth and parasite infections and increasing dietary iron intakes of Pacific populations. Food import volumes were determined for 10 PICs. The principal sources of origin of flour and rice for most PICs were Fiji, Australia and the USA, and imported rice and flour from these countries now provide the main staple foods.
Literature searches found only three recent Pacific food consumption studies. Low proportions of rural populations consume flour and rice (14.7%) in Vanuatu and possibly most of rural/remote Melanesia. On the basis of food availability data and limited consumption studies, flour and rice appear to be the most suitable vehicles for fortification. Results of 3 food consumption studies may not be sufficient on which to base a food fortification program. Imported flour and rice is already enriched in many PICs. Food production in Australia, NZ and the USA shows a general trend towards enriching foods for domestic consumption with additional nutrients, including iron. The issue of fortifying multiple food vehicles becomes an alternative to fortification of a single food. Food availability data collected and analysed by FAO remains the best source of food availability in the Pacific. Unfortunately, these data only provide information for 8 PICs and, at best, are only rough estimates. In many cases country import data were either unreliable, inappropriate, or not available in a form that could be analysed, raising more issues than solely determining the nutrient quality of the food supply.

From the results of this consultancy, it is recommended that:

1. Wheat flour and rice are the most suitable vehicles for iron fortification in PICs. Issues such as levels and safety of iron fortificants, policing of mandatory fortification, quality control, contamination and producer compliance are beyond the scope of this consultancy.
2. Fortification and helminth elimination programs be undertaken in unison. There is enough evidence to suggest that a food fortification program should not be undertaken in isolation. Iron deficiency anaemia is an outcome of a range of influencing factors that include an iron deficient diet and helminth/parasite infections.
Since fortified foods seem to be imported into many PICs, the question of whether a fortification program is necessary arises. More data should be collected to determine the exact proportions of fortified foods already entering PICs. Pacific governments should be alerted to this in order to make informed decisions about the development of national and/or regional food fortification programs. This also enables governments and regional bodies to determine whether a single or multiple food vehicle program will be the most effective. 3. Regular low-cost food and nutrition surveys be undertaken. The searches undertaken during this consultancy showed that little is known about the dietary habits and food consumption patterns of Pacific populations. Very little is known about food distribution within countries and what proportion of anaemia prevalence is due to parasitic infection. Accurate information on food consumption is necessary for governments to make decisions about a range of food and health issues in order to develop policy and programs. 4. A uniform regional approach to food import and availability data collection and analysis be taken. This consultancy found it difficult to access individual PIC food availability data and of data received, many were incomplete, difficult to interpret and/or inconsistent. This raises issues of national food security, disaster preparedness and emergency relief. Every country government should have easy access to up-to-date per capita food availability of nutrient rich foods in order to determine quantities of foods available to feed populations in times of emergency.
<urn:uuid:eec00eb0-dae3-4a10-bdd7-ffb9b3bcd5db>
{ "date": "2013-05-24T01:58:55", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9237424731254578, "score": 2.875, "token_count": 1024, "url": "http://www.wpro.who.int/nutrition/documents/Hughes_FINAL_REPORT/en/index.html" }
Achieving Quality: the Final Piece of the Puzzle

Properly calibrated testing equipment ensures quality

In order for a business to generate a high-quality product or service, it is essential to obtain a quality measurement system that will be used to study the integrity of its finished product. In the certification industry, testing equipment is essential to measure the different variables that could potentially alter the quality of a raw material, finished product or final status on a certification report. The quality of a product or service is compromised if the test equipment used to measure the final quality is not reading accurate results. This is why a flawless calibration system is the final puzzle piece to achieving a high-quality product or service.

What is Calibration?

In this article, calibration will be defined as the comparison between measurements. During the calibration of a test instrument, a device with a known magnitude or assigned correctness, known as a standard, will be used to check the measuring accuracy of a test instrument. Calibration ensures that a measuring instrument is providing results for a sample that fall within an acceptably accurate range. Accurate testing results allow manufacturers or certification agencies to eliminate or minimize factors that could cause inaccurate measurements during production or testing. Calibration procedures naturally vary depending on the instrument being calibrated. Generally, the test instrument is used to test calibrators, which are one or more test samples that have known values. The results are then used to establish a relationship between the measurement instrument and the known values. The calibration process eliminates or "zeroes out" the current instrument error at the specified calibration points. This process basically "teaches" the instrument to produce more accurate results.
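As a hypothetical illustration of the relationship-fitting step described above, the sketch below fits a straight line between an instrument's readings of known-value standards and their true values, then uses that line to correct later readings. The data and function names are illustrative assumptions, not the procedure of any particular instrument or standard.

```python
# Sketch of a calibration step: fit known standards ("calibrators")
# against raw instrument readings, then correct future readings.

def fit_calibration(known, measured):
    """Least-squares straight line mapping raw readings to true values."""
    n = len(known)
    mean_x = sum(measured) / n
    mean_y = sum(known) / n
    sxx = sum((x - mean_x) ** 2 for x in measured)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(measured, known))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def correct(reading, slope, intercept):
    """Apply the fitted calibration line to a raw instrument reading."""
    return slope * reading + intercept

# Three standards with known values; this instrument reads slightly high.
known = [10.0, 50.0, 100.0]
measured = [10.4, 51.0, 101.5]
slope, intercept = fit_calibration(known, measured)
print(round(correct(51.0, slope, intercept), 1))  # → 50.1
```

A raw reading of 51.0 is mapped back close to the true value 50.0, which is the sense in which calibration "teaches" the instrument to produce more accurate results.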
After a test instrument is calibrated, it will provide more accurate results for unknown values tested during its everyday normal usage. To keep a successful calibration system, calibrations must be done consistently and on a systematic schedule.

When is Calibration Needed?

During the manufacturing or certification process of any product, there may be many different types of test instruments used to determine the quality of a product or service. The question of which test instruments need to be calibrated and which do not is answered by whether or not the test performed and the test instrument used affect the final quality of the product or service. There are situations in which a test instrument does not need to be calibrated. If the readings of the test instrument are for reference only, and the accuracy of the test results has little or no impact on the quality of the product or service being provided, then you do not need to calibrate the test instrument. It is important to be aware that non-calibrated instruments can appear to be working properly while not providing reliable results. Sometimes cost is the main reason that people choose not to calibrate a test instrument. It is important to know that there can be huge hidden costs associated with not calibrating a test instrument that should be calibrated. Calibrating test instruments may decrease the number of final products rejected because they do not fall within acceptable tolerances. Besides saving money in some situations, there are health, safety, legal and regulatory concerns that should be considered.

Who Should Perform Calibrations?

Once it is determined which test instruments need to be calibrated, the next step is to determine who will perform these calibrations. To be sure that the calibration results are accurate, they must be traceable back to standards held at a national measurement institute.
In order to maintain formal traceability of measurements, the calibrations should be done by a national metrology institute or a United Kingdom Accreditation Service-accredited (or equivalent) laboratory that has independent third-party accreditation. National Physical Laboratory is an example of one of these national measurement institutes. It is also essential that the appropriate equipment and procedures are used in the calibration process and that trained, authorized personnel are performing the calibrations. You can choose a non-accredited source to calibrate your equipment, and you can also choose to calibrate your equipment yourself, but keep in mind the confidence that can be placed in the results will be much greater if the calibration source is third-party accredited.

Frequency of Calibration

After you decide who will perform your calibrations, the next question is how frequently an instrument should be calibrated. Just like refueling your car, you should calibrate your test instrument when needed. Daily or periodic standard checks can provide a good indication of how the test instrument is performing. If these checks show that the instrument performance is stable, then the instrument does not need to be recalibrated. If the history of standard checks shows that the instrument is showing a significant short-term shift, then the test instrument should be recalibrated. Some laboratory standard operating procedures or regulatory requirements may require that the test instrument be recalibrated on a set schedule even when the standard check results do not indicate that a recalibration is needed. These requirements should take precedence when there is uncertainty as to whether to recalibrate the test instrument. New devices should be calibrated more frequently to establish their metrological stability.
<urn:uuid:6f31546c-b0ec-4f52-9416-e3bd9ec13159>
{ "date": "2013-05-24T01:52:55", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9242271184921265, "score": 2.90625, "token_count": 1033, "url": "http://www.wqpmag.com/achieving-quality-final-piece-puzzle" }
There is a huge diversity of creatures that share the common name 'worms', such as flat worms, hook worms, ribbon worms, horsehair worms, and segmented worms. The common earthworm is a member of the class Oligochaeta, within the phylum Annelida (segmented worms). Only about one-third of the earthworms in North America are native; in fact, some exotic species in the northeast are having detrimental effects on forests. The Xerces Society has profiled the status of a rare and unusual native earthworm – the Oregon giant earthworm. There have been few sightings of this animal in recent years. It can reach up to 60 centimeters long and it reportedly has spit that smells of lilies.
<urn:uuid:79181330-630e-4bdc-84fb-d2879321e5df>
{ "date": "2013-05-24T01:30:12", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9474878907203674, "score": 3.546875, "token_count": 155, "url": "http://www.xerces.org/worms/" }
xmlsh is derived from a similar syntax as the unix shells (see Philosophy). If you are familiar with any of these shell languages (sh, bash, ksh, zsh) you should be right at home. An attempt was made to stay very close to the sh syntax where reasonable, but not all subtleties or features of the unix shells are implemented. In order to accommodate native XML types and pipelines, some deviations and extensions were necessary. Lastly, as an implementation issue, xmlsh is implemented in Java using the javacc compiler for parsing. This made support for some of the syntax and features of the C-based shells difficult or impossible. Future work may try to tighten up these issues.

xmlsh can run in two modes, interactive and batch. In interactive mode, a prompt ("$ ") is displayed; in batch mode there is no prompt. Otherwise they are identical. Running xmlsh with no arguments starts an interactive shell. Running with an argument runs in batch mode and invokes the given script. You can run an xmlsh script by passing it as the first argument, followed by any script arguments:

xmlsh myscript.xsh arg1 arg2

For details on xmlsh invocation and parameters see the xmlsh command.

Commands share the standard process environment:

- Current directory
- Environment variables
- Standard ports (input/output/error)

The shell itself maintains additional environment which is passed to all subshells, but not to external (sub-process) commands:

- Namespaces, including the default namespace (see Namespaces)
- Declared functions (see SyntaxFunction)
- Imported modules and packages (see Modules)
- Shell variables (environment variables and internal shell variables) (see BuiltinVariables)
- Positional parameters ($1 ... $n)
- Shell options (-v, -x ...)

On startup, xmlsh reads the standard input (interactive mode) or the script file (batch mode), parses one command at a time and executes it. The following steps are performed:

- Parse statement. Statements are parsed using the Core Syntax.
- Expand variables. Variable expansion is performed. See Variables and CoreSyntax.
- Variable assignment. Prefix variable assignment is performed. See Variables and CoreSyntax.
- IO redirection. IO redirection (input, output, here documents) is performed. See CommandRedirect and CoreSyntax.
- Command execution. Commands are executed. See CommandExecution.
- Exceptions raised can be handled with a try/catch block.

After the command is executed, the process repeats.
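As an illustrative sketch of several of these features together (positional parameters, variable expansion, IO redirection, and try/catch), a batch script might look like the following. The `xcat` command name and the exact try/catch form shown here are assumptions for illustration, not taken from this page:

```
# myscript.xsh -- run as: xmlsh myscript.xsh input.xml
infile=$1                      # positional parameter, as in sh
try {
    xcat $infile > copy.xml    # standard output redirected to a file
    echo "copied $infile"
} catch err {
    echo "could not read $infile"
}
```

Because the script name is passed as the first argument, xmlsh runs this in batch mode with no prompt, executing each statement through the parse/expand/redirect/execute cycle described above.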
<urn:uuid:693b762b-6579-4096-8873-426b157e93c6>
{ "date": "2013-05-24T01:31:30", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.853335976600647, "score": 2.78125, "token_count": 532, "url": "http://www.xmlsh.org/BasicSyntax" }
Is the U.S. Ready for Human Rights? :: Discussion Guide The articles mentioned below are available on our website: see our Focus index to our Human Rights issue. You are welcome to download and photocopy the articles free of charge. If you would like to purchase multiple copies of YES! or subscriptions for your class or group, please phone 800/937-4451. The United States has been proud of its leadership in human rights. Many times throughout history, we have amended our constitution to answer demands for increased tolerance and equal treatment of all citizens. Yet now we find uncharged captives held indefinitely in Guantánamo Bay, reports of torture in Abu Ghraib, limits on prisoners' access to habeas corpus, and other violations. In order for any great nation to progress, it must reflect upon itself, celebrating its accomplishments, addressing areas of concern, and working towards improvement. One of the key goals of this issue of YES! is to present such a “look in the mirror” for the United States. This discussion guide will focus on the following articles: - Sometimes a Great Nation by Eric Foner - Check Your Rights at the Border by Justin Akers Chacon - Who's Afraid of Economic Human Rights? by Carol Estes - Mere Justice by Jesse Wegman - Yes. We're Ready. by Larry Cox & Dorothy Thomas - The Universal Declaration of Human Rights with footnotes by the YES! team Sometimes a Great Nation SEE ARTICLE ONLINE :: Sometimes a Great Nation by Eric Foner The United States has sometimes been the world leader in human rights. Yet as Thomas Jefferson warns, “the price of liberty is eternal vigilance.” Has our record of courage and decency led us to assume that we are still the gold standard of human rights enforcement? Foner urges us to examine our true history—one made up of both justice and injustice, of honor and cowardice. Can we reclaim our right to call ourselves a truly great nation in terms of human rights? Foner says yes. 
- In your opinion, what are some of the U.S.'s greatest achievements in human rights? What are some of the times the U.S. has fallen short of its ideals? - Consider a time when someone discriminated against you based on your gender or the color of your skin, or something more abstract, like your faith or level of education. Discuss this experience. Were you treated with less dignity than you deserve? How did it make you feel? - Throughout history, various groups have been denied equal opportunities for reasons that seem justified at the time. In examining past differences that were once intolerable, but are now accepted, perhaps we can learn something. Do you see any commonalities among groups of the past and present that have been excluded and oppressed? - Sometimes, we are a great nation. With concentrated effort, we can reclaim a national identity associated with upholding human rights. What steps can you as an individual take to advance this change? Check Your Rights at the Border SEE ARTICLE ONLINE :: Check Your Rights at the Border by Justin Akers Chacon U.S. trade policy is one of the factors contributing to the immigration of people from Mexico and Central America seeking work in the United States. Free trade agreements have lowered tariffs and undermined traditional economies. Chacon says that human rights are among the many things immigrants leave behind while making the journey north.
What effect would that have in your own life, neighborhood, and job? - What distinguishes “beneficial” immigration from “harmful” immigration? Who's Afraid of Economic Human Rights? SEE ARTICLE ONLINE :: Who's Afraid of Economic Human Rights? by Carol Estes Evidently, the U.S. government is. Even though 155 of the world's nations have ratified the Covenant on Economic, Social and Cultural Rights, which recognizes economic rights for all, the United States has not. Carol Estes argues that people who live in the richest country in the world should be entitled to housing, food, and medical care. - “If you're homeless, you must have done something wrong to end up there, and you alone are responsible for getting yourself out of that situation.” Have you seen or heard messages like this? If so, where have they come from? Do you agree or disagree with them? - The UDHR recognizes that every human being has economic rights. Do you think that in the United States we treat basic economic security as a right, or something to be earned? - Does having a population of permanently poor people serve to sustain, or even benefit, an economic system? - How would your life be different if our government provided the basic economic security outlined in the UDHR to all its citizens? What might you lose? What might you gain? SEE ARTICLE ONLINE :: Mere Justice by Jesse Wegman In his article, Wegman discusses the obstacles faced by prisoners who seek review of their cases using habeas corpus. He explains the origin of AEDPA, the Anti-Terrorism and Effective Death Penalty Act, which dramatically hinders access to this right. The Act speeds up the process by which death-row inmates are pushed to executions, and simultaneously restricts all other prisoners' ability to appeal for justice. - Do you think prisoners have too much access to court review of their cases, or too little? 
- Wegman mentions that many politicians crack down on prisoners' rights because they are afraid of being perceived as “soft” or “sympathetic” towards criminals. How do you feel about this? What does this tell us about our society? Yes. We're Ready. SEE ARTICLE ONLINE :: Yes. We're Ready. by Larry Cox & Dorothy Thomas Larry Cox, the executive director of Amnesty International, and Dorothy Q. Thomas, senior program advisor to the U.S. Human Rights Fund, say that human rights are a powerful unifying force for activists, and many groups are drawing on human rights theory to make change. - Think of a cause about which you feel strongly. Does this cause share values with those promoting human rights? Could you imagine forming a meaningful connection with someone working towards a cause different from your own, based on human rights? - How might you expand the human rights network within your own community? The Universal Declaration of Human Rights SEE ARTICLE ONLINE :: The Universal Declaration of Human Rights with footnotes by the YES! editorial team In 1948, the UDHR was birthed into the world through the efforts of Eleanor Roosevelt and the newly formed United Nations. The document was the first of its kind. Its 30 articles define the basic rights we all own, simply by virtue of being human. The UDHR has been translated into hundreds of languages, yet its contents are unfamiliar to many. Examine your rights. Then consider the YES! footnotes, showing a U.S. position on each article. - Had you seen the UDHR before? Do any of the 30 articles surprise you? If so, which ones, and why? - The YES! team debated long and hard over whether to include the footnotes about the U.S.'s position on each article. How did you feel about these interjections? Did they make the document more meaningful, or take away from its inherent beauty? - How do you feel about this document? Is it just a case of one group imposing norms on others, or are there such things as “universal human rights”? 
If such rights exist, does this document capture them? What rights do you feel are missing from the document, if any? What are you doing? How are you using this discussion guide? How could we improve it? Please share your stories and suggestions with us at editors @ yesmagazine.org, with “Discussion Guide” as the subject. YES! is published by the Positive Futures Network, an independent, nonprofit organization whose mission is to support people's active engagement in creating a more just, sustainable, and compassionate world. That means, we rely on support from our readers. Independent. Nonprofit. Subscriber-supported.
<urn:uuid:962d29cd-6022-40da-af97-5793ae9f5f1b>
{ "date": "2013-05-24T01:44:14", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9504677057266235, "score": 2.640625, "token_count": 1804, "url": "http://www.yesmagazine.org/issues/is-the-u.s.-ready-for-human-rights/is-the-u.s.-ready-for-human-rights-discussion-guide" }
Wastes and waste management Finnish waste legislation covers all types of waste except certain special wastes such as radioactive wastes, which are controlled by separate laws. Finnish waste legislation is largely based on EU legislation, but in some cases includes stricter standards and limits than those applied in the EU as a whole. Finland also has legislation on some issues related to wastes that have not yet been covered by EU legislation. For more information on wastes and their environmental impacts see the web pages on the state of the environment. Statistics on the generation and management of wastes in Finland are compiled by Statistics Finland. The Ministry of the Environment supervises and controls the way Finnish waste legislation is put into practice. The Finnish Environment Institute conducts research and training, publicises new ideas and methods, and monitors all developments related to waste issues, while also participating in drawing up new legislation and guidelines related to waste. The Institute also monitors international waste shipments. Centres for economic development, transport and the environment guide, encourage and monitor the implementation of the Waste Act in their own regions. They also provide training and advice for firms and the public, and issue waste permits to larger firms and operations. The national authority responsible for producer registration and other related issues is the Centre for economic development, transport and the environment for Pirkanmaa. Local authorities organise the collection, recovery and disposal of household refuse and other similar waste, and supervise waste management in general in their own area. They also set local regulations on waste management, ensure that advice on waste matters is freely available, and issue waste permits to smaller firms and operations.
<urn:uuid:70b2b9ea-7812-48dc-889d-67e7ee5449b0>
{ "date": "2013-05-24T01:45:40", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9456444978713989, "score": 3.28125, "token_count": 323, "url": "http://www.ymparisto.fi/default.asp?contentid=62020&lan=en" }
Best Evidence in Brief: Free, fortnightly round-up of education research news Better: Evidence-based Education: Magazine exploring effective practice A Book for Governors: Must-read book for school governors Best Evidence Encyclopaedia: Which educational programmes have good evidence? The IEE is working to help schools with the problems they face - trying to make education research more accessible, directing them to solutions that have been proven to work, and connecting them with colleagues who have faced the same issues. You can guide our work by telling us the specific areas where you need proven programmes and practices for improving teaching and learning. We would be very grateful if you could answer a few questions about this. We are currently working with schools in the following ways: We have enlisted the help of schools across the country to take part in a range of research projects. This includes schools that are piloting new programmes and practices, and also ‘control’ schools against which we compare the results. You can find out more about our current research here. School case study: St Anne’s Catholic Primary School, Keighley The IEE is currently working with a small group of schools in our region, helping them with how they might select and use evidence-based programmes. The project, which is called YIPI, has included putting together a directory of recommended programmes and practices. If successful, a larger pilot study will take place. YIPI case study: Pannal Primary School, Harrogate E4F is aimed at practitioners who are currently using, or wish to use, research evidence to inform their work. By participating in the scheme they will be able to access high-quality, independent research expertise and resources, with the help of an intermediary 'broker'. People involved in E4F can also be put in touch with each other, so that they can share experiences and problems.
E4F is a project of the Coalition for Evidence-based Education (CEBE), and it is hoped that it will get research evidence more widely used in decision-making in schools and colleges.
<urn:uuid:4e0016d0-35f9-4792-8993-008116d3b637>
{ "date": "2013-05-24T01:52:25", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.967598021030426, "score": 2.78125, "token_count": 434, "url": "http://www.york.ac.uk/iee/working_with_schools/index.htm" }
Now The Rats Are Sinking The Leaking Ship

While New York City's massive human population reels from Sandy, a more populous and even more caustic population is struggling with the aftermath: rats! As Forbes notes, the NYC subway is notorious for its rat population, and with all five subway tubes now submerged, one can only imagine where these cute, cuddly, rabies-wielding devil rodents will make their new homes. "Rats are incredibly good swimmers and they can climb" is hardly the reassuring news lower Manhattan homeowners were looking for, and as the Daily Mail notes, this could bring infectious diseases such as leptospirosis, hantavirus, typhus, salmonella, and even the plague into human contact. On the bright side (well, not really), a rat doesn't need to bite a human to transmit its gross payload; rodent feces and urine can spread conditions like hantavirus just as easily - get long hand-sanitizer stocks! If they can do this - then what happens when they are forced above ground? Cue gratuitous "biggest rat in the world" scary video clip (real or not - this is crazy):

- Rats can climb brick walls, trees, and telephone poles, and walk across telephone lines.
- Rats can fall from a height of 50 feet without getting hurt.
- Rats can jump three feet in the air from a flat surface and leap more than four feet horizontally.
- Rats can scamper through openings as small as a quarter. General rule: if a rat's head fits into the hole, then the body will follow.

Everything you did not want to know about rats and really did not want to ask:
The real goal of nutrition is the management of cellular inflammation. Increased cellular inflammation makes us fat, sick, and dumb (how about overweight, ill, and less intelligent). Strictly speaking, diets are defined by their macronutrient balance. This is because that balance determines the resulting hormonal responses. This doesn’t mean you can ignore the impact of various food ingredients on the generation of cellular inflammation. This is why I categorize food ingredients into three major classes depending on when they were introduced into the human diet. The more ancient the food ingredients, the less damaging inflammatory impact they will have on turning genes off and on (i.e. gene expression). This is because the greater the period of time our genes have co-evolved with a given food ingredient, the more our body knows how to handle them. Unfortunately, human genes change slowly, but changes in our food supply can happen very rapidly. With that as a background, let me describe the three major categories of food ingredients, especially in terms of their introduction to the human diet. This category includes food ingredients that were available more than 10,000 years ago. Our best evidence is that humans first appeared as a new species in Southern Africa about 200,000 years ago (1). For the next 190,000 years, food ingredients of the human diet consisted of animal protein (grass-fed only), fish, animal and fish fats, fruits, vegetables, and nuts. I call these Paleolithic ingredients. This means for the first 95 percent of our existence as a species, these were the only food ingredients that genes were exposed to. As a result of 190,000 years of co-existence with our genes, these food ingredients have the least inflammatory potential on our genes. 
Our best estimate of the macronutrient composition of the typical Paleolithic diet some 10-15,000 years ago was 25-28 percent protein, 40 percent carbohydrate, and 32-35 percent fat, with a very high intake of EPA and DHA (about 6 grams per day) and a 1:1 ratio of omega-6 to omega-3 fats (2). This is basically the composition of the anti-inflammatory diet (3-5). If you use only Paleolithic ingredients, then you are almost forced to follow an anti-inflammatory diet. The food ingredients are more restrictive, but the increased anti-inflammatory benefits are well worth it.

The second group of food ingredients represents those food choices that were available 2,000 years ago. We started playing Russian roulette with our genes 10,000 years ago as we started to introduce a wide variety of new food ingredients into the human diet. First and foremost was the introduction of grains, but not all at the same time. Wheat and barley were introduced about 10,000 years ago, with rice and corn coming about 3,000 years later. Relative latecomers to the grain game were rye (about 5,000 years ago) and oats (about 3,000 years ago). At almost the same time came the first real use of biotechnology. This was the discovery that if you fermented grains, you could produce alcohol. Alcohol is definitely not a food ingredient that our genes were prepared for (and frankly our genes still aren't). I think it only took mankind about 10 years to learn how to produce alcohol, which probably puts the first appearance of beer some 9,990 years ago. Wine was a relatively late arrival, appearing about 4,000 years ago. With the domestication of animals (some 8,000 years ago) came the production of milk and dairy products. About 5,000 years ago, legumes (beans) were also introduced. Legumes tend to be rich in many anti-nutrients (such as lectins) that must be inactivated by fermentation or boiling. Needless to say, these anti-nutrients are not the best food ingredients to be exposed to.
I call this second group of food ingredients Mediterranean ingredients, since they are the hallmark of what is commonly referred to as a “Mediterranean diet” (even though the diets, as determined by macronutrient balance, in different parts of the Mediterranean region are dramatically different). Those cultures in the Mediterranean region have had the time to genetically adapt to many of these new ingredients, since all of these ingredients existed about 2,000 years ago. Unfortunately, many others on the planet aren't quite as fortunate. That's why we have lactose intolerance, alcohol-related pathologies, celiac disease, and many adverse reactions to legumes that have not been properly detoxified. As a result, these Mediterranean ingredients would have greater potential to induce increased levels of cellular inflammation than Paleolithic ingredients. However, at least they were better than the most recent group, which I term the “Do-You-Feel-Genetically-Lucky” group. Unfortunately, these are the food ingredients that are currently playing havoc with our genes, especially our inflammatory genes. You eat these ingredients only at your own genetic risk.

The first of these was refined sugar. Although first made 1,400 years ago, it didn't experience widespread introduction until about 300 years ago. With the advent of the Industrial Revolution came the production of refined grains. Products made from refined grains had a much longer shelf life, were easier to eat (especially important if you had poor teeth), and could be mass-produced (like breakfast cereals). However, in my opinion the most dangerous food ingredient introduced in the past 200,000 years has been refined vegetable oils rich in omega-6 fatty acids. These are now the cheapest source of calories in the world. They have become ubiquitous in the American diet and are spreading worldwide like a virus.
The reason for my concern is that omega-6 fatty acids are the building blocks for powerful inflammatory hormones known as eicosanoids. When increasing levels of omega-6 fatty acids in the diet were combined with the increased insulin generated by sugar and other refined carbohydrates, it spawned a massive increase in cellular inflammation worldwide in the past 40 years, starting first in America (6). It is this Perfect Nutritional Storm that is rapidly destroying the fabric of the American health-care system.

The bottom line is that the macronutrient balance of the diet will generate incredibly powerful hormonal responses that can be your greatest ally or enemy in controlling cellular inflammation. Unless you feel incredibly lucky, try to stick with the food ingredients that give your genes the best chance to express themselves.

- Wells S. “The Journey of Man: A Genetic Odyssey.” Random House. New York, NY (2004)
- Kuipers RS, Luxwolda MF, Dijck-Brouwer DA, Eaton SB, Crawford MA, Cordain L, and Muskiet FA. “Estimated macronutrient and fatty acid intakes from an East African Paleolithic diet.” Br J Nutr 104: 1666-1687 (2010)
- Sears, B. “The Zone.” Regan Books. New York, NY (1995)
- Sears, B. “The OmegaRx Zone.” Regan Books. New York, NY (2002)
- Sears, B. “The Anti-Inflammation Zone.” Regan Books. New York, NY (2005)
- Sears B. “Toxic Fat.” Thomas Nelson. Nashville, TN (2008)

Nothing contained in this blog is intended to be instructional for medical diagnosis or treatment. If you have a medical concern or issue, please consult your personal physician immediately.
by Jonah Lehrer

Booklist
Creativity: a flash in the pan or 99-percent perspiration? A-list journalist Lehrer (How We Decide, 2009) tackles the question in broad strokes, covering topics as diverse as office layouts, urban planning, drug use, and brain chemistry. It turns out that the question isn't easy to answer, for it seems that a method used by one creative person doesn't translate for another. Lehrer describes the creative activities of such luminaries as David Byrne and the CEO of Pixar, then dissects why each approach works for that individual or group. Some examples are a bit of a stretch. The section on Shakespeare, for instance, is eye-rollingly speculative. But, just as Lehrer points out that explicit instruction is anathema to creative play and discovery, he seems to say in each section, "Isn't this neat?" and leave the bulk of the work to the reader's imagination. In that sense, Imagine is a great introduction for anyone curious about the nature and dynamics of creativity. --Hunter, Sarah. Copyright 2010 Booklist. From Booklist, Copyright © American Library Association. Used with permission.

Choice
For those acquainted with Lehrer's two previous books, Proust Was a Neuroscientist (2007) and How We Decide (CH, Aug'09, 46-6789), the format of the present volume will be quite familiar. The subject, in this case the creative process, is broken down into two subcategories--"Alone" and "Together"--each illumined by anecdote, case study, and scientific findings from the field and laboratory. In the course of looking at art, invention, and improvisation, the author has focused on creative works and products ranging from West Side Story, Bob Dylan's "Like a Rolling Stone," and Shakespeare's Henry VI to the personal computer, Post-It notes, and Nike's "Just Do It" slogan. He explores the creative work of individuals--including Steve Jobs, Paul Erdos, Jack Kerouac, and Yo-Yo Ma--and innovative institutions such as 3-M, Google, Second City, Pixar, and Eli Lilly.
Lehrer examines both standard approaches to the study of creativity and recent developments in psychology and neuroscience, for example, right-brain functioning, neuronal learning, recursive loops, semantic priming, conceptual blending, and informational entropy. This is a fitting companion to the author's earlier work and an informative introduction to one of the most elusive of human capacities, the creative imagination. Summing Up: Highly recommended. Lower- and upper-division undergraduates; graduate students; professionals; general readers. --R. M. Davis, emeritus, Albion College. Copyright American Library Association, used with permission.

Library Journal
In his new book on creativity, Lehrer (How We Decide) presents captivating case studies of innovative minds, companies, and cities while tying in the latest in scientific research. He recounts the sometimes surprising origins of hugely successful inventions, brands, and ideas (e.g., the Swiffer mop, Barbie doll, Pixar animation) and reveals unexpected commonalities in the creative experiences (e.g., the color blue, distractedness, living abroad). The book combines individual case studies with broader psychology to provide new insights into creativity, much like Sheena Iyengar's The Art of Choosing. Many of Lehrer's insights are based on emerging scientific practices and are thus fresh and especially applicable to modern life. He emphasizes innovative companies and experimental approaches to education and includes historical factoids that reveal the backstories of everyday items. VERDICT Lehrer's findings can be used to inform the design of innovative programs or to structure a productive work environment at home or at the office. This book will appeal to educators, business administrators, and readers interested in applied psychology. [See Prepub Alert, 10/15/11.] --Ryan Nayler, Univ. of Toronto Lib., Ont. (c) Copyright 2012. Library Journals LLC, a wholly owned subsidiary of Media Source, Inc.
No redistribution permitted.
April 1, 2009

The American Recovery and Reinvestment Act of 2009 (ARRA) appropriates significant new funding for programs under Parts B and C of the Individuals with Disabilities Education Act (IDEA). Part B of the IDEA provides funds to state educational agencies (SEAs) and local educational agencies (LEAs) to help them ensure that children with disabilities, including children aged three through five, have access to a free appropriate public education to meet each child's unique needs and prepare him or her for further education, employment, and independent living. Part C of the IDEA provides funds to each state lead agency designated by the Governor to implement statewide systems of coordinated, comprehensive, multidisciplinary interagency programs and make early intervention services available to infants and toddlers with disabilities and their families.

The IDEA funds under ARRA will provide an unprecedented opportunity for states, LEAs, and early intervention service providers to implement innovative strategies to improve outcomes for infants, toddlers, children, and youths with disabilities while stimulating the economy. Under the ARRA, the IDEA funds are provided under three authorities: $11.3 billion is available under Part B Grants to States; $400 million is available under Part B Preschool Grants; and $500 million is available under Part C Grants for Infants and Families.

Preliminary information about each state's allocation is available at: http://www.ed.gov/about/overview/budget/statetables/index.html. This Web site also provides information about the State Fiscal Stabilization Fund (SFSF) under the ARRA, which is separate from the IDEA ARRA funds described in this fact sheet. This document focuses on Part B; additional information on Part C is available at http://www.ed.gov/policy/gen/leg/recovery/factsheet/idea.html.
IDEA, Part B ARRA funds are a key element of the ARRA principles as described below.

Overview of ARRA Principles: The overall goals of the ARRA are to stimulate the economy in the short term and invest in education and other essential public services to ensure the long-term economic health of our nation. The success of the education part of the ARRA will depend on the shared commitment and responsibility of students, parents, teachers, principals, superintendents, education boards, college presidents, state school chiefs, governors, local officials, and federal officials. Collectively, we must advance ARRA's short-term economic goals by investing quickly, and we must support ARRA's long-term economic goals by investing wisely, using these funds to strengthen education, drive reforms, and improve results for students from early learning through college.

Four principles guide the distribution and use of ARRA funds:

Spend funds quickly to save and create jobs. ARRA funds will be distributed quickly to states and other entities in order to avert layoffs and create jobs. States in turn are urged to move rapidly to develop plans for using funds, consistent with ARRA's reporting and accountability requirements, and to promptly begin spending funds to help drive the nation's economic recovery.

Improve student achievement through school improvement and reform. ARRA funds should be used to improve student achievement and help close the achievement gap.
In addition, the SFSF requires progress on four reforms previously authorized under the bipartisan Elementary and Secondary Education Act of 1965, as amended, and the America Competes Act of 2007:

- Making progress toward rigorous college- and career-ready standards and high-quality assessments that are valid and reliable for all students, including English language learners and students with disabilities;
- Establishing pre-K to college and career data systems that track progress and foster continuous improvement;
- Making improvements in teacher effectiveness and in the equitable distribution of qualified teachers for all students, particularly students who are most in need;
- Providing intensive support and effective interventions for the lowest-performing schools.

Ensure transparency, reporting and accountability. To prevent fraud and abuse, support the most effective uses of ARRA funds, and accurately measure and track results, recipients must publicly report on how funds are used. Due to the unprecedented scope and importance of this investment, ARRA funds are subject to additional and more rigorous reporting requirements than normally apply to grant recipients.

Invest one-time ARRA funds thoughtfully to minimize the "funding cliff." ARRA represents a historic infusion of funds that is expected to be temporary. Depending on the program, these funds are available for only two to three years. These funds should be invested in ways that do not result in unsustainable continuing commitments after the funding expires.

Awarding IDEA Part B Grants to States and Preschool Grants ARRA Funds

The Department of Education awarded 50 percent of the IDEA, Part B Grants to States and Preschool Grants ARRA funds to SEAs on April 1, 2009. The other 50 percent will be awarded by September 30, 2009.
These awards will be in addition to the regular Fiscal Year (FY) 2009 Part B Grants to States and Preschool Grants awards that will be made on July 1, 2009 (Grants to States and Preschool Grants) and October 1, 2009 (Grants to States only). Together, these grant awards will constitute a state's total FY 2009 Part B Grants to States and Preschool Grants allocations.

- A state did not need to submit a new application to receive the first 50 percent of the Part B Grants to States and Preschool Grants ARRA funds because these funds were made available to each state based on the state's eligibility established for FY 2008 Part B funds and the provision of the certification required by section 1607 of the ARRA. The assurances in the state's FY 2008 application, as well as the requirements of the ARRA, apply to these ARRA funds. In order to receive the remaining 50 percent of IDEA, Part B ARRA funds, a state must submit, for review and approval by the Department, additional information that addresses how the state will meet the accountability and reporting requirements in section 1512 of the ARRA.
- The additional IDEA funds provided under the ARRA do not increase the amount a state would otherwise be able to reserve for state administration or other state-level activities under its regular grants to states FY 2009 award.
- LEA eligibility for the first 50 percent of the IDEA ARRA funds is based on eligibility established by the LEA for FY 2008 funds.
- In accordance with the goals of the ARRA, a state should obligate IDEA ARRA funds to LEAs expeditiously, but prudently. A state should make the Part B Grants to States and Preschool Grants ARRA funds that it receives in March available to LEAs by the end of April 2009.
- Similarly, an LEA should use the IDEA ARRA funds expeditiously, but prudently. An LEA should obligate the majority of these funds during school years 2008-09 and 2009-10 and the remainder during school year 2010-11.
States may begin obligating IDEA, Part B ARRA funds immediately upon the effective date of the grant. All IDEA ARRA funds must be obligated by September 30, 2011.

Uses of IDEA, Part B ARRA Funds

All IDEA ARRA funds must be used consistent with the current IDEA, Part B statutory and regulatory requirements and applicable requirements in the General Education Provisions Act (GEPA) and the Education Department General Administrative Regulations (EDGAR). An LEA must use IDEA ARRA funds only for the excess costs of providing special education and related services to children with disabilities, except where IDEA specifically provides otherwise.

The IDEA ARRA funds constitute a large one-time increment in IDEA, Part B funding that offers states and LEAs a unique opportunity to improve teaching and learning and results for children with disabilities. Generally, funds should be used for short-term investments that have the potential for long-term benefits, rather than for expenditures the LEAs may not be able to sustain once the ARRA funds are expended. Some possible uses of these limited-term IDEA ARRA funds that are allowable under IDEA and aligned with the core reform goals for which states must provide assurances under SFSF include:

- Obtain state-of-the-art assistive technology devices and provide training in their use to enhance access to the general curriculum for students with disabilities.
- Provide intensive district-wide professional development for special education and regular education teachers that focuses on scaling up, through replication, proven and innovative evidence-based school-wide strategies in reading, math, writing and science, and positive behavioral supports to improve outcomes for students with disabilities.
- Develop or expand the capacity to collect and use data to improve teaching and learning.
- Expand the availability and range of inclusive placement options for preschoolers with disabilities by developing the capacity of public and private preschool programs to serve these children.
- Hire transition coordinators to work with employers in the community to develop job placements for youths with disabilities.

Invitation for Waivers

The Secretary intends to issue regulations to allow reasonable adjustments to the limitation on state administration expenditures to help states defray the costs of ARRA data collection requirements.

IDEA, Part B Fiscal Issues

- An LEA may be able to reduce the level of state and local expenditures otherwise required by the IDEA LEA maintenance of effort (MOE) requirements. Generally, under section 613(a)(2)(C), in any fiscal year that an LEA's IDEA allocation exceeds the amount the LEA received in the previous year, under certain circumstances, the LEA may reduce the level of state and local expenditures by up to 50 percent of the amount of the increase, as long as the LEA uses those freed-up local funds for activities that could be supported under the ESEA, such as services for children at risk of school failure without additional support. If an LEA takes advantage of this provision, the required MOE for future years is reduced consistent with the reduction it took, unless the LEA increases the amount of its state and local expenditures on its own. SEAs should encourage LEAs that can and do take advantage of this flexibility to focus the freed-up local funds on one-time expenditures that will help the state make progress on the goals in the SFSF program, such as improving the equitable distribution of effective teachers and the quality and use of assessments to enhance instruction for students most in need.
- Alternatively, an LEA may (or in some cases must) use up to 15 percent of its total IDEA, Part B Grants to States and Preschool Grants for early intervening services for children in grades K through 12 who are not currently identified as children with disabilities, but who need additional academic and behavioral support to succeed in a general education environment. However, an LEA may use only up to 15 percent of its allocation minus any amount (on a dollar-for-dollar basis) by which the LEA reduced its required state and local expenditures under section 613(a)(2)(C).
- State-level MOE may be waived under Part B of the IDEA by the Secretary of Education on a state-by-state basis, for a single year at a time, for exceptional or uncontrollable circumstances, such as a natural disaster or a precipitous and unforeseen decline in the financial resources of a state. LEA-level MOE may not be waived.
- With prior approval from the Secretary of Education, a state or LEA may count SFSF (but not IDEA ARRA funds) under the ARRA that are used for special education and related services as non-federal funds for purposes of determining whether the state or LEA has met the IDEA, Part B MOE requirements. (See separate fact sheet on SFSF for more information.)

As with all federal funds, states and LEAs are responsible for ensuring that the IDEA, Part B ARRA funds are used prudently and in accordance with the law.

- ARRA requires that recipients of funds made available under that act separately account for, and report on, how those funds are spent.
- The President and the Secretary are committed to ensuring that ARRA dollars are spent with an unprecedented level of transparency and accountability. The administration will post reports on ARRA expenditures on the Recovery.gov Web site.
- The Department will provide updates as additional information becomes available regarding the details of the IDEA ARRA funds.
- The Department will also provide further information on the government-wide data collection and reporting requirements as this information becomes available.
- If you have any questions or concerns, please email them to [email protected].
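The maintenance-of-effort (MOE) reduction and the early intervening services ceiling discussed above are simple arithmetic. The sketch below illustrates how the two interact, using hypothetical dollar amounts; the function name and figures are illustrative and not part of the fact sheet, and real determinations depend on statutory details not modeled here.

```python
def part_b_fiscal_limits(prior_year_idea, current_year_idea, moe_reduction_taken=0):
    """Illustrative sketch of the section 613(a)(2)(C) limits described above.

    All dollar amounts are hypothetical; integer dollars keep the arithmetic exact.
    """
    increase = max(0, current_year_idea - prior_year_idea)
    # An LEA may reduce required state/local expenditures by up to 50% of the increase.
    max_moe_reduction = increase // 2
    # Up to 15% of the total allocation may fund early intervening services,
    # minus any MOE reduction actually taken, on a dollar-for-dollar basis.
    max_early_intervening = max(0, current_year_idea * 15 // 100 - moe_reduction_taken)
    return max_moe_reduction, max_early_intervening

# Hypothetical LEA: $1.0M allocation last year, $1.6M this year (regular + ARRA).
print(part_b_fiscal_limits(1_000_000, 1_600_000))
# -> (300000, 240000): may reduce MOE by up to $300k; up to $240k for early intervening services

# If the LEA actually takes a $200,000 MOE reduction, the 15% ceiling drops dollar for dollar.
print(part_b_fiscal_limits(1_000_000, 1_600_000, moe_reduction_taken=200_000))
# -> (300000, 40000)
```

Treat this purely as a reading aid for the two bullets above, not as guidance on an LEA's actual obligations.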
Shallow caves in central Arizona protect masonry dwellings built in the early 14th Century. These and additional sites in the surrounding areas were home to the Salado people. They left no written record of their existence, no chronology of events that shaped their society. The most vivid signs of life are in their pottery, in remnants of fabric, in smoke stains from their cook fires, and in handprints on pueblo walls. Most of what we know - or think we know - about the Salado has been reconstructed from what remains of their material culture - their personal and community belongings. In addition, plants and animals that made up their natural environment still thrive here. Like pieces of a puzzle, each element contributes to the larger picture of Salado culture. Take a virtual tour of Tonto National Monument to learn more about these people and their environment. Select a topic below to beginning exploring.
Fractions may have numerators and denominators that are composite numbers (a composite number has more factors than just 1 and itself).

How to simplify a fraction:
- Find a common factor of the numerator and denominator. A common factor is a number that will divide into both numbers evenly. Two is a common factor of 4 and 14.
- Divide both the numerator and denominator by the common factor.
- Repeat this process until there are no more common factors.
- The fraction is simplified when no more common factors exist.

Another method to simplify a fraction:
- Find the Greatest Common Factor (GCF) of the numerator and denominator.
- Divide the numerator and the denominator by the GCF.
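Both methods above can be sketched in a few lines of code. This is an illustrative sketch (the helper names are ours, not part of the lesson):

```python
from math import gcd  # Greatest Common Factor of two integers

def simplify_stepwise(numerator, denominator):
    """Method 1: repeatedly divide out common factors until none remain."""
    factor = 2
    while factor <= min(numerator, denominator):
        if numerator % factor == 0 and denominator % factor == 0:
            numerator //= factor    # divide both parts by the common factor
            denominator //= factor
        else:
            factor += 1             # try the next candidate factor
    return numerator, denominator

def simplify_gcf(numerator, denominator):
    """Method 2: divide numerator and denominator by their GCF in one step."""
    common = gcd(numerator, denominator)  # e.g. gcd(4, 14) == 2
    return numerator // common, denominator // common

print(simplify_stepwise(4, 14))  # -> (2, 7)
print(simplify_gcf(12, 18))      # -> (2, 3)
```

Both functions return the same lowest-terms fraction; the GCF method simply reaches it in a single division.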
Units of Work

The Units of Work are delivered as downloadable WORD documents so that teachers can customise them for use in their classroom. They include references to electronic resources (software, websites and items from The Learning Federation collection of online resources), print resources (many include worksheets for students to use) and other physical resources. There is a brief outline of each unit below, including suggested year level(s). Teachers from around the country were involved in this work, and AAMT members would like to thank them for their efforts.

Melanie Bezear, Calwell Primary School, ACT
Jane McAlpine, Chapman Primary School, ACT
Anne Pillman, Marryatville Primary School, SA
Stephanie Watts, Trinity Catholic Primary School, NSW
Thomas Psomas, All Saints Grammar School, NSW
Gayle Cann, Parap Primary School, NT
Mark Darrell, Hallett Cove School, SA
Nicole Heyder, Atwell College, WA
Lyn Pierrehumbert, Durack School, NT
Shelley Jenkinson, Deanmore Primary School, WA
Bernie O’Sullivan, St Luke's Anglican School, Qld
Wendy Fletcher, Centre for Extended Learning Opportunities
Roxanne Steenbergen, Claremont Primary School, Tas
Michael Macrae, Duncraig Senior High School Education Support Centre, WA
Terry Jacka, St Hilda's School, Qld
Ed Cuthbertson, Lanyon High School, ACT

[Year 1] Counting on, counting back

The beginning of this unit focuses on ensuring that students have basic foundation skills, and an understanding of both what the number line means and the forward and backward number sequence. They then progress to developing conceptual understandings of place value, specifically tens and ones. Once these foundation skills have been mastered, students are introduced to the strategies of counting on and then counting back.
uw_003_counting_on_counting_back.doc 582.50 kB

[Years 2-4] Going places

In this unit of work students visualise and plan routes, understand and use the language of position and give and interpret directions using a variety of formats and resources.

uw_005_going_places.doc 665.00 kB

[Year 3] Area and perimeter

This unit allows students to explore the beginning concepts of perimeter and area, including the formal units of cm, m, cm sq, and m sq, and to learn the differences between these measurements.

uw_006_area_perimeter.doc 140.50 kB

[Year 4] Multiplying and dividing with arrays

This unit explores the concepts of multiplication and division and offers students strategies to perform these operations.

uw_003_uniting_and_dividing.doc 282.50 kB

[Years 4-5] Cities taking shape

Students develop their knowledge of 2D and 3D shapes, and the relationships between them. They learn about how a 3D shape can look different when viewed from different positions. Students use knowledge and skills gained through the unit to design and construct a ‘model’ city or town block.

uw_003_cities_taking_shape.doc 948.00 kB

[Years 4-10] Telling the time

This unit introduces the formal measurement of time using the terms ‘o’clock’, ‘half past’, ‘quarter past’ and ‘quarter to’ and their representation on an analogue clock. It is specifically for students with autism spectrum disorders (ASD), so the learning experiences offered are structured, methodical and sequential and require one-to-one instruction.

uw_006_telling_the_time.doc 639.00 kB

[Years 5-6] Graphs and data

This unit of work explores why we need data, and how to collate, present and analyse it to extract the information it offers.

uw_004_graphs_and_data.doc 247.00 kB

[Years 5-6] Chances are

This unit explores the mathematics of chance. Discover the language of chance and how it affects our decisions. Explore the notion of probability, and how we can influence this.

uw_005_chances_are.doc 699.50 kB

[Year 6] You say data, I say data

This unit introduces the students to a variety of graphs. They will examine examples of various types of graphs and then conduct their own surveys to collect and present the data in a specified form.

uw_003_you_say_data.doc 495.00 kB

[Years 6-7] Places for polygons

This unit of work investigates the geometric properties of buildings to develop students’ understanding of polygons.

uw_003_places_for_polygons.doc 412.00 kB

[Years 6-8] Take a chance

This unit provides students with a real-life context to investigate the language of chance and how events may be manipulated to alter the chance of something happening. It also introduces the use of fractions, decimals and percentages when looking at probability.

uw_004_take_a_chance.doc 276.50 kB

[Years 7-8] Fraction action

In this unit of work students move from working with tenths, hundredths and thousandths to relating common and decimal fractions and percentages.

uw_007_fraction_action.doc 350.50 kB

[Year 8] Investigating us

This unit is designed to enable students to design and conduct simple surveys, collate the data into appropriate tables, identify the types of graphs that are suited to display the data sets depending on the number and types of variables, select appropriate display formats to represent the data and interpret data from the graphs and tables. It utilises students’ natural interest regarding themselves and where they ‘fit in’ in relation to their peers.

uw_004_investigating_us.doc 582.00 kB

[Years 8-10] Turn up the volume

In this unit of work students explore and explain the connections between the surface area and volume of different shapes and how each attribute is measured.

uw_008_turn_up_the_volume.doc 881.00 kB

[Years 11-12] Periodic functions

This unit focuses on periodic functions and is part of a university preparation course for those aiming to study mathematics and science courses such as engineering.
uw_005_periodic_functions.doc 831.00 kB
<urn:uuid:c7e4f163-184a-441c-9dd2-7f4145fd8883>
{ "date": "2013-05-26T09:35:53", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.8896293044090271, "score": 3.59375, "token_count": 1404, "url": "http://aamt.edu.au/Activities-and-projects/The-Learning-Federation/Units-of-Work" }
Newly discovered bacterial alchemists could help save billions of plastic bottles from landfills. The Pseudomonas strains can convert the low-grade PET plastic used in drinks bottles into a more valuable and biodegradable plastic called PHA. PHA is already used in medical applications, from artery-supporting tubes called stents to wound dressings. The plastic can be processed to have a range of physical properties. However, one of the barriers to PHA reaching wider use is the absence of a way to make it in large quantities. The new bacteria-driven process – termed upcycling – could address that, and make recycling PET bottles more economically attractive. PET bugs Although billions of plastic bottles are made each year, few are ultimately recycled. Just 23.5% of US bottles were recycled in 2006. This is because the recycling process simply converts the low value PET bottles into more PET, says Kevin O'Connor at University College Dublin, Ireland. "We wanted to see if we could turn the plastic into something of higher value in an environmentally friendly way," he says. O'Connor and colleagues knew that heating PET in the absence of oxygen – a process called pyrolysis – breaks it down into terephthalic acid (TA) and a small amount of oil and gas. They also knew that some bacteria can grow and thrive on TA, and that other bacteria produce a high-value plastic PHA when stressed. So they wondered whether any bacteria could both feed on TA and convert it into PHA. Bacteria hunt "It was a long shot to be honest," says O'Connor. His team studied cultures from around the world known to grow on TA, but none produced PHA. So they decided to look for undiscovered strains, in environments that naturally contain TA. Analysing soil bacteria from a PET bottle processing plant, which are likely to be exposed to small quantities of TA, yielded 32 colonies that could survive in the lab using TA as their only energy source. After 48 hours they screened each culture for PHA. 
Three cultures, all similar to known strains of Pseudomonas, accumulated detectable quantities of the valuable plastic. The next step is to improve the efficiency of the process, says O'Connor. "A quarter to a third of each cell is filled with plastic – we want to increase that to 50 to 60%." Less landfill Sudesh Kumar, a microbiologist at the University of Science, Malaysia, in Penang, is impressed with the study. "There are many other systems that are economically more viable to produce PHA with better material properties," he says. "But Kevin's work offers an interesting novel approach to solve the problem of PET accumulation in landfill dumps." But it is still unlikely that using the new approach alone will appeal to industry, O'Connor says. "Working with this kind of environmental technology in isolation, the chances of success are reduced," he says. The best approach, he continues, would be to use the new bacteria as just one part of a bio-refinery capable of upcycling an array of waste products in an environmentally friendly way.
<urn:uuid:811ceaf5-cb13-42d4-a80b-1b11c2060170>
{ "date": "2013-05-27T02:57:40", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9618173241615295, "score": 3.65625, "token_count": 651, "url": "http://abcnews.go.com/Technology/SmartHome/story?id=5844268&page=1" }
Simple observational proof of the greenhouse effect of carbon dioxide

Posted by Ari Jokimäki on April 19, 2010

Recently, I showed briefly a simple observational proof that the greenhouse effect exists, using a paper by Ellingson & Wiscombe (1996). Now I will present a similar paper that deepens the proof and shows more clearly how different greenhouse gases really are greenhouse gases. I'll highlight the carbon dioxide related issues in their paper. Walden et al. (1998) studied the downward longwave radiation spectrum in Antarctica. Their study covers only a single year, so it is not about how an increase in greenhouse gases affects the spectrum over time. They measured the downward longwave radiation spectrum coming from the atmosphere to the surface during the year (usually every 12 hours) and then selected three measurements from clear-sky days for comparison with the results of a line-by-line radiative transfer model. First they described why Antarctica is a good place for this kind of study:

Since the atmosphere is so cold and dry (<1 mm of precipitable water), the overlap of the emission spectrum of water vapor with that of other gases is greatly reduced. Therefore the spectral signatures of other important infrared emitters, namely, CO2, O3, CH4, and N2O, are quite distinct. In addition, the low atmospheric temperatures provide an extreme test case for testing models

Spectral overlapping is a consideration here because they are using a moderate resolution (about 1 cm-1) in their spectral analysis. They went on to describe their measurements, the equipment used and its calibration. They also discussed the uncertainties in the measurements thoroughly. They then presented the measured spectra in a similar style to that of Ellingson & Wiscombe (1996). They proceeded to produce their model results. The models were controlled with actual measurements of atmospheric constituents (water vapour, carbon dioxide, etc.).
The model is used here because it represents our theories, which are based on numerous experiments in laboratories and in the atmosphere. They then performed the comparison between the model results and the measurements. Figure 1 shows their Figure 11, where the total spectral radiance from their model is compared to the measured spectral radiance. The upper panel of Figure 1 shows the spectral radiance and the lower panel shows the difference between the measured and modelled spectra. The overall match is excellent, and there's no way you could get this match by chance, so this already shows that different greenhouse gases really are producing a greenhouse effect just as our theories predict. Walden et al. didn't stop there. Next they showed in detail how the measured spectral bands of different greenhouse gases compare with the model results. The comparison for carbon dioxide is shown here in Figure 2 (which is the upper panel of their Figure 13). The match between the modelled and measured carbon dioxide spectral band is also excellent; even the minor details track each other well, except for a couple of places of slight difference. If there were no greenhouse effect from carbon dioxide, or if water vapour were masking its effect, this match would have to be accidental. I see no chance of that, so this seems to be a simple observational proof that carbon dioxide produces a greenhouse effect just as our theories predict.

Walden, V. P., S. G. Warren, and F. J. Murcray (1998), Measurements of the downward longwave radiation spectrum over the Antarctic Plateau and comparisons with a line-by-line radiative transfer model for clear skies, J. Geophys. Res., 103(D4), 3825–3846, doi:10.1029/97JD02433. [abstract]
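To get a feel for the radiance magnitudes being compared, note that a line-by-line model's building block is the Planck function evaluated at atmospheric temperatures. The sketch below is not the paper's model (which computes full line-by-line absorption and emission); it simply evaluates the Planck spectral radiance near the centre of the CO2 band around 667 cm-1 for two temperatures chosen only as plausible Antarctic values:

```python
import math

# CODATA physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance_wavenumber(wn_cm, temp_k):
    """Planck spectral radiance B(wavenumber, T).

    wn_cm  : wavenumber in cm^-1
    temp_k : temperature in K
    Returns radiance in mW / (m^2 sr cm^-1), the unit commonly used
    when plotting downward longwave spectra.
    """
    wn = wn_cm * 100.0  # cm^-1 -> m^-1
    # B in W m^-2 sr^-1 per (m^-1)
    b = 2.0 * H * C**2 * wn**3 / math.expm1(H * C * wn / (KB * temp_k))
    # per (m^-1) -> per (cm^-1) is a factor 100; W -> mW is a factor 1000
    return b * 100.0 * 1000.0

# Radiance at the CO2 band centre for two illustrative temperatures:
for t in (200.0, 240.0):
    print(t, "K:", round(planck_radiance_wavenumber(667.0, t), 1), "mW/(m^2 sr cm^-1)")
```

The point of the exercise: at these cold temperatures the band radiance changes strongly with temperature, which is part of why the Antarctic spectra make such a sensitive test of the model.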
<urn:uuid:7ca379c3-faf0-4aab-83e1-0999a130f017>
{ "date": "2013-05-26T09:42:47", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9338589310646057, "score": 2.78125, "token_count": 746, "url": "http://agwobserver.wordpress.com/2010/04/19/simple-observational-proof-of-the-greenhouse-effect-of-carbon-dioxide/" }
Growing Native Plants Pelargonium rodneyanum, commonly known as the Magenta Storksbill, is a member of the family Geraniaceae. Naturally occurring in fragmented populations within heathland, rocky outcrops, sclerophyll forest and woodland areas of South Australia, New South Wales and Victoria. P. rodneyanum is commercially cultivated for use as a colourful potted, rockery or bedding display plant. Preferring well-drained, slightly acidic soil, P. rodneyanum adapts well to most soil types and likes a full sun to semi-shaded position. It withstands frost in colder climates, is semi-drought tolerant and is useful as a spreading ground cover in the garden - easy removal prevents this attractive plant from becoming invasive. Gardeners also appreciate its lengthy flowering period, producing blooms when other plants have stopped. An herbaceous perennial, P. rodneyanum reaches 45 cm in height, flowers during November through to May and forms vertical tubers as part of its root system. The light to dark green leaves are ovate to narrow ovate with crenate, shallow lobes and a 3-10 cm long petiole. The umbel inflorescence usually consists of 7 flowers on slender pedicels 13-22 mm in length, which rise from a whorl of 6 bracts on a 5-12 cm long peduncle. It has five petals that are deep pink in colour and irregular in shape and size. The two larger petals are marked with deep magenta streaks and are positioned slightly separate from the lower petals. Of the ten stamens produced, 7-8 are fertile, slightly longer and bear anthers. The fruit forms on pilose mericarps which, when ripe, each contain a 1.5 mm long, dark grey seed. P. rodneyanum can be propagated by tuber division (end of winter early spring), soft/semi hardwood cuttings (spring through summer), clump division (all year), meristem culture (all year) and by seed (spring through summer). 
Using wind as a natural dispersal method, it readily self-seeds, although new seedlings tend not to flower during the first season of growth. P. rodneyanum benefits from a hard pruning and reduced water intake during the winter months; tubers may rot if left in water for long periods. During early spring, P. rodneyanum may be susceptible to white fly, which can be treated with pyrethrum or diluted dishwashing liquid sprays. Other pests include larger animals (e.g. kangaroos and wombats), which during times of drought use the tubers as a food source.

Derivation of the name:
Pelargonium - from the Greek word pelargos (a stork), with reference to its storksbill-like fruit.
rodneyanum - thought to be named after Admiral George Rodney (1718-1792), who led victorious English naval battles against Dutch, French and Spanish forces.

Text by Jacqui McKinnon (2004 Student Botanical Intern)

Elliot, W.R. & D.L. Jones (1997) Encyclopedia of Australian plants: suitable for cultivation, Lothian.
<urn:uuid:8a4d5008-55ab-4e5a-a448-32931dcb434d>
{ "date": "2013-05-26T09:34:46", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9267109632492065, "score": 3.21875, "token_count": 683, "url": "http://anbg.gov.au/gnp/interns-2004/pelargonium-rodneyanum.html" }
The fish family that includes herrings, shads, sardines, and menhadens is a large, global family with 216 species. Most species are tropical and almost all species are found in oceans, although some are found in freshwater as well. Fish in this family are small to medium-sized, from 2 to 75 cm long. They generally have torpedo-shaped bodies that are laterally compressed. Fish in this family are strong, fast swimmers, generally travel in large schools, and they typically feed on plankton. They are some of the most important commercially fished species in the world.

Tanya Dewey (author), Animal Diversity Web.

having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
uses smells or other chemicals to communicate
having the capacity to move from one place to another
specialized for swimming
uses touch to communicate
<urn:uuid:46837a76-440f-4330-bdb5-3e64616954c2>
{ "date": "2013-05-26T09:37:10", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9631431698799133, "score": 3.59375, "token_count": 214, "url": "http://animaldiversity.ummz.umich.edu/accounts/Clupeidae/" }
WASHINGTON -- Splitting 5-4, the Supreme Court yesterday rejected the Clinton administration's plan to use statistical sampling to make up for individuals who get overlooked in the 2000 census. The court ruled that the allocation of House of Representatives seats after the next census must be based on a head count. The decision was a major setback for cities, for the Democratic Party and for a key part of its following: minorities, who are more often missed in counts by census-takers. The "undercount" in the 1990 census left out more than 4 million people, mainly urban minorities and children, according to the Census Bureau. The justices left open the possibility that sampling could be used to adjust population figures for two purposes -- dividing up $180 billion in federal aid to the states and redrawing election districts at other levels of government. It ruled narrowly that the census law does not permit sampling during the count every decade that is used to reapportion the 435 House seats among the states. It did not rule on the broader question of whether sampling is unconstitutional. However, four of the justices in the majority said "a strong case can be made" that sampling is unconstitutional. It would have taken five votes to establish that proposition. The challenge to the Clinton administration's plan for the first-time use of sampling in the decennial census was made by the House, by four counties and by individual voters in 13 states. The court did not rule on whether the House could challenge the plan but found that the other challengers had made their case. The court majority decided an actual count -- in person, or by mail with follow-up, in-person contacts -- is the only method allowed by federal census law for House apportionment. 
"From the very first census, the census of 1790," Justice Sandra Day O'Connor wrote for the majority, "Congress has prohibited the use of statistical sampling in calculating the population for purposes of [House] apportionment."

To Baltimore's dismay

Baltimore, one of the cities in which minority population is thought to be undercounted in every census, could lose some access to federal money if sampling is not allowed for federal funding purposes after the 2000 census. The city's relative power in the General Assembly also could be adversely affected. News of the decision frustrated officials in Baltimore, where one of every four dollars spent in the city's $1.8 billion annual budget comes from the federal government. City leaders, including Mayor Kurt L. Schmoke, supported census sampling, noting that the majority of Baltimore's population -- 60 percent -- is black. In the 1990 census, an estimated 4.4 percent of the nation's African-American population was not counted, the biggest segment being males ages 18 to 34. Baltimore officials say minorities tend to be undercounted because, if they are poor, they do not own property and are less likely to respond to census surveys. "It's a shame," said Gloria Griffin, a city planner helping to organize a group to ensure a more accurate Baltimore census. "Those poor souls who really need [the federal aid] are not going to get it." If sampling is done next year for purposes other than House apportionment, it could result in two versions of the nation's 2000 population: one for allocation of seats in the House, a second for everything else. But sampling can occur next year only if Congress approves the necessary money. Republicans strongly oppose sampling, because it appears to favor Democrats.
Yesterday's decision, because of its limited scope, reignited that partisan controversy as it bears upon the Census Bureau's legal authority to use sampling techniques to calculate national, state and local populations for these key purposes: Dividing $180 billion in federal funding for social programs -- an allocation keyed to population; Calculating where to draw the lines for House election districts, once the seats have been apportioned among the states, and districts for state legislatures and local governing bodies -- keyed to population within states. House Speaker Dennis Hastert repeated the GOP opposition after the court's ruling, saying: "The [Clinton] administration should abandon its illegal and risky polling scheme and start preparing for a true head count." President Clinton appears to have no intention of abandoning support for sampling within the limits that the Supreme Court ruling may permit. He reiterated his support for sampling in his State of the Union message last week, and the White House noted pointedly yesterday that the high court had not ruled sampling to be unconstitutional. House Minority Leader Richard A. Gephardt, a Missouri Democrat, interpreted the ruling to mean that the Census Bureau is required to do sampling for purposes of redistricting and distribution of federal funding.
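The "statistical sampling" at issue is, at its core, dual-system (capture-recapture) estimation: an independent follow-up survey of sample areas is matched against the original head count, and the match rate is used to estimate how many people both efforts missed. The Census Bureau's actual methodology is far more elaborate; the sketch below shows only the basic Lincoln-Petersen estimator, with made-up numbers:

```python
def lincoln_petersen(counted, surveyed, matched):
    """Dual-system population estimate for one sample area.

    counted : people found by the original enumeration
    surveyed: people found by the independent follow-up survey
    matched : people appearing in both lists
    """
    if matched == 0:
        raise ValueError("no matches: estimate undefined")
    return counted * surveyed / matched

# Hypothetical block: the census finds 900 people, the follow-up
# survey finds 500, and 450 of those appear in both lists.
estimate = lincoln_petersen(900, 500, 450)
print(estimate)  # 1000.0

# Implied undercount rate for this block:
print(round(1 - 900 / estimate, 2))  # 0.1, i.e. a 10% undercount
```

The intuition: if the follow-up survey catches 90% of the people the census already counted (450 of 500), then the census itself presumably caught about 90% of everyone, so the head count is scaled up accordingly.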
<urn:uuid:829e8e3a-11c6-44d1-8a06-5414b99d0f67>
{ "date": "2013-05-26T09:36:58", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9545261859893799, "score": 2.6875, "token_count": 963, "url": "http://articles.baltimoresun.com/1999-01-26/news/9901260090_1_census-sampling-census-law-1990-census" }
Effects of agriculture, urbanization, and climate on water quality in the northern Great Plains Limnol. Oceanogr., 44(3_part_2), 1999, 739-756 | DOI: 10.4319/lo.1999.44.3_part_2.0739 ABSTRACT: The Qu'Appelle Valley drainage system provides water to a third of the population of the Canadian Great Plains, yet is plagued by poor water quality, excess plant growth, and periodic fish kills. Fossil algae (diatoms, pigments) and invertebrates (chironomids) in Pasqua Lake were analyzed by variance partitioning analysis (VPA) to determine the relative importance of climate, resource use, and urbanization as controls of aquatic community composition 1920-1994. From fossil analyses, we identified three distinct biological assemblages in Pasqua Lake. Prior to agriculture (ca. 1776-1890), the lake was naturally eutrophic with abundant cyanobacterial carotenoids (myxoxanthophyll, aphanizophyll), eutrophic diatoms (Stephanodiscus niagarae, Aulacoseira granulata, Fragilaria capucina/bidens), and anoxia-tolerant chironomids (Chironomus). Principal components (PCA) and dissimilarity analyses demonstrated that diatom and chironomid communities did not vary significantly (P > 0.05) before European settlement. Communities changed rapidly during early land settlement (ca. 1890-1930) before forming a distinct assemblage ca. 1930-1960 characterized by elevated algal biomass (inferred as beta-carotene), nuisance cyanobacteria, eutrophic Stephanodiscus hantzschii, and low abundance of deep-water zoobenthos. Recent fossil assemblages (1977-1994) were variable and indicated water quality had not improved despite 3-fold reduction in phosphorus from sewage. Comparison of fossil community change and continuous annual records of 83 environmental variables (1890-1994) using VPA captured 71-97% of variance in fossil composition using only 10-14 significant factors.
Resource use (cropland area, livestock biomass) and urbanization (nitrogen in sewage) were stronger determinants of algal and chironomid community change than were climatic factors (temperature, evaporation, river discharge). Landscape analysis of inferred changes in past algal abundance (as beta-carotene; ca. 1780-1994) indicated that urban impacts declined with distance from point sources and suggested that management strategies will vary with lake position within the catchment.
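Variance partitioning analysis of the kind used in this study decomposes the variance in community composition explained by competing predictor sets (climate vs. land use, say) into unique and shared fractions. The paper's version is ordination-based; the sketch below illustrates only the bookkeeping, using plain OLS regression and entirely made-up series (all data and variable names are hypothetical):

```python
def r2_ols(y, xs):
    """R^2 of an ordinary least squares fit of y on the predictor
    columns in xs, via normal equations (pure Python; fine for a sketch)."""
    n = len(y)
    X = [[1.0] + [col[i] for col in xs] for i in range(n)]  # add intercept
    p = len(X[0])
    # Normal equations A b = c
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    # Gaussian elimination (no pivoting; adequate for well-behaved toy data)
    for j in range(p):
        for k in range(j + 1, p):
            f = A[k][j] / A[j][j]
            for m in range(j, p):
                A[k][m] -= f * A[j][m]
            c[k] -= f * c[j]
    b = [0.0] * p
    for j in reversed(range(p)):
        b[j] = (c[j] - sum(A[j][k] * b[k] for k in range(j + 1, p))) / A[j][j]
    yhat = [sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
    ybar = sum(y) / n
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - sse / sst

# Hypothetical annual series: a climate proxy, a land-use proxy,
# and a fossil-community score (e.g. a PCA axis of diatom counts).
climate = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
landuse = [0.0, 0.2, 1.1, 1.0, 2.3, 2.9, 3.2, 4.1]
y = [0.1, 0.5, 2.4, 2.3, 4.9, 6.1, 6.8, 8.6]

r_clim = r2_ols(y, [climate])
r_land = r2_ols(y, [landuse])
r_both = r2_ols(y, [climate, landuse])

# Partition: unique fractions for each set, plus their shared fraction
unique_land = r_both - r_clim
unique_clim = r_both - r_land
shared = r_clim + r_land - r_both
print(round(unique_land, 3), round(unique_clim, 3), round(shared, 3))
```

Because climate and land use trend together, much of the explained variance ends up in the shared fraction, which is exactly the ambiguity VPA is designed to expose.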
<urn:uuid:a1a2c89e-04ce-4111-9061-590b9e2b279d>
{ "date": "2013-05-26T09:42:02", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8817854523658752, "score": 2.828125, "token_count": 557, "url": "http://aslo.org/lo/toc/vol_44/issue_3_part_2/0739.html" }
Spark plugs are instrumental in making a gasoline engine run. They shoot out a spark of electricity into the compressed mixture of air and gasoline within an engine's cylinders. This ignites the mixture and forces the cylinder's piston down. The motion of the piston is what creates power. Spark plugs also have another job -- they pull heat away from the combustion chamber. That means spark plugs can get very hot. In general, cars have a spark plug for every cylinder in the engine. For instance, a four-cylinder engine will have four spark plugs. However, there are exceptions to the rule -- a vehicle with a HEMI engine will have two spark plugs per cylinder. Spark plugs wear out over time. As they get older, they may not spark properly. This affects your engine's performance and results in a loss of power. But replacing your spark plugs isn't like other car repair projects -- it's much more straightforward and only requires a few tools. You don't need to be a skilled auto mechanic to change your car's spark plugs. In most cases, you can switch out an old set for new plugs in about an hour. Before changing your spark plugs, you should consult your vehicle's owner's manual. You're looking for two things: how often you should change your spark plugs and where the spark plugs are located on your engine. For most vehicles, the rule of thumb is to replace your spark plugs every 30,000 miles (48,280 kilometers). You should also make sure the engine is cold before you start -- spark plugs can get very hot! Even after other parts of your engine have cooled down, the spark plugs may still be too warm to touch. Let your engine cool down for a couple of hours before you begin.
<urn:uuid:d6c4ea78-0410-45a1-91fc-0d2db828d70a>
{ "date": "2013-05-26T09:42:36", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9568192362785339, "score": 2.953125, "token_count": 359, "url": "http://auto.howstuffworks.com/under-the-hood/vehicle-maintenance/change-auto-spark-plugs.htm" }
Approximately 7000 seeds per gram (seed-counts are only a guide, not to be used for accurate calculations).
B and T World Seeds' reference number:
USDA average, annual, minimum temperature Zone: 4
Type of plant: perennial
Flower: dk. PURPLE, YELLOW throat, infl. 1-6
Fruit: ........ (prov. Sutherland, Caithness, Orkney only)
Foliage: -5cm., ell.-spath., dense farin. ben.
Height in meters: 0.1
Common names for Primula scotica:
Primula scotica is included in the following B and T World Seeds flowering plant categories:
9: Alpine and Rock Garden Seed List (Hardy and Tender)
12: British Native Wild Flowers Shrubs and Trees
43: Herbaceous Border Plant Seed List
161: Edible Flowers
185: Plant Species whose germination is improved by Smoke

Primula scotica seeds will usually germinate in 20-40 days; even under good conditions germination may be erratic. They normally will only germinate with light, so surface sow. Sow Primula scotica seeds on the surface of a peaty seed sowing mix at about 15°C.

Stratification (cold treatment or vernalization)
Some seeds need to be overwintered before they will germinate. Some seeds need just a couple of weeks, others 3 months. Seeds can be stratified in dampened peat or sand, in a plastic box or bag at 4°C or 5°C in a refrigerator. The seeds should not be frozen or in a wet medium. Very small seeds can be sown on the surface of their growing medium, in pots sealed in plastic bags, and kept in the 'fridge. Many vernalized seeds need light to germinate when they are sown in the "Spring". P. sinensis germinates in the dark. Cold stratify 3 weeks.
<urn:uuid:dd56aadf-e340-4f6e-babf-6e76c0c71330>
{ "date": "2013-05-26T09:43:15", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8396143913269043, "score": 3.03125, "token_count": 419, "url": "http://b-and-t-world-seeds.com/cart_print.asp?species=Primula%20scotica&sref=432623" }
By Julie Steenhuysen CHICAGO (Reuters) - Using cervical fluid collected from routine Pap smears, U.S. researchers were able to spot genetic changes caused by both ovarian and endometrial cancers, offering promise for a new kind of screening test for these deadly cancers. Experts say that although the test has tremendous potential, it is still years from widespread use. But if proven effective with more testing, it would fill a significant void. Currently, there are no tests that can reliably detect either ovarian or endometrial cancer, which affects the uterine lining. Research teams have been trying for several years to find a screening test that could identify these cancers early, when there is a better chance of a cure. "Pap smears have had a tremendous impact in reducing the rate of cervical cancer in the United States," said Dr. Andrea Myers of Dana-Farber Cancer Institute, a co-author of the commentary on the study published in Science Translational Medicine. "The lack of an equally effective screening test for women at high risk for endometrial or ovarian cancer has created a great deal of interest in developing tests that could identify these cancers by their genetic ‘signature' - the collection of specific mutations within them," she said. "This new study is an important step in that direction." The new approach, developed by a team at Johns Hopkins Kimmel Cancer Center in Baltimore, piggybacks on routine Papanicolaou or Pap testing, which is already done routinely to detect cervical cancer. The idea is to take fluid collected from the cervix for Pap tests and use gene sequencing technology to look for genetic changes that would only be found in endometrial and ovarian tumors. Since Pap tests occasionally contain cells shed from the ovaries or the lining of the uterus, cancer cells from these organs could be present in the fluid as well. The team tested for mutations in 24 endometrial and 22 ovarian cancers. 
'EXCITING FIRST STEP' "We could detect 100 percent of endometrial cancers and 40 percent of ovarian cancers, even at the earliest stages of their disease, and we can do it without any false positives," said Dr. Luis Diaz, associate professor of oncology at Johns Hopkins, who worked on the study published on Wednesday in Science Translational Medicine. Diaz called the study "an exciting first step." "We're seeing high sensitivity in endometrial cancer. We're seeing moderate sensitivity in ovarian cancer, and we're seeing no false positives," he said. That offered enough rationale to start tests on 100 ovarian cancers of different stages and 100 endometrial cancers, as well as a large number of samples from healthy women. The team hopes to complete that testing by the end of the year. Dr. Shannon Westin, an expert in gynecologic cancers at the University of Texas MD Anderson Cancer Center, said the need for a screening test for these two cancers is great. In the United States, the two cancers combined are diagnosed in 70,000 women each year, and about 23,500 women will die from these cancers. Westin, who co-wrote a commentary on the study, said it was "very compelling and very interesting" that you could find evidence of these cancers in a screening test using fluid from Pap tests. But the test must still be validated and shown to be effective in large populations of women, a process that could take 10 to 15 years. "It's a great first step. It is a proof of principle that this can be done. Patients are used to getting the Pap smear. They understand it," she said. That might mean women would ultimately be comfortable getting this type of test. Dr. David Chelmow, a professor of obstetrics and gynecology at Virginia Commonwealth University Medical Center, who was not involved with the research, said it would be "fantastic" to have a test that would reliably detect cancers. "It's an innovative idea. It's neat.
But the question is really going to be what happens when this gets more thoroughly tested," he said. Diaz said currently there are no tests to screen for these cancers early. The experimental test would cost about $100, but with the falling cost of sequencing technology, he estimates it will be half or even a tenth of that cost within the next year. (Reporting by Julie Steenhuysen; Editing by Jilian Mincer and Eric Walsh)
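The detection rates quoted in the article are sensitivities — true positives over all cancers tested — and "no false positives" is a claim about specificity. A quick sketch of the arithmetic behind the reported figures (the 24 endometrial and 22 ovarian sample sizes are from the study; the ovarian detection count of 9 is inferred from the stated 40%, not reported directly):

```python
def sensitivity(true_positives, total_cases):
    """Fraction of actual cancers that the test flags."""
    return true_positives / total_cases

# 24 endometrial cancers, all detected:
print(sensitivity(24, 24))  # 1.0 -> the reported 100%

# 22 ovarian cancers; detecting 9 of them gives roughly the reported 40%:
print(round(sensitivity(9, 22), 2))  # 0.41
```

Numbers this small also explain why the planned follow-up on 100 cancers of each type matters: a sensitivity estimated from 22 cases carries a wide uncertainty interval.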
<urn:uuid:9be1c2b7-462c-40c6-9be2-df4821656782>
{ "date": "2013-05-26T09:36:26", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9639390707015991, "score": 2.78125, "token_count": 927, "url": "http://b93radio.com/news/articles/2013/jan/09/fluid-from-pap-test-used-to-detect-ovarian-endometrial-cancers/" }
E. Cobham Brewer 1810–1897. Dictionary of Phrase and Fable. 1898.

Girondists (g soft). French, Girondins, moderate republicans in the first French Revolution. So called from the department of Gironde, which chose for the Legislative Assembly five men who greatly distinguished themselves for their oratory, and formed a political party. They were subsequently joined by Brissot, Condorcet, and the adherents of Roland. The party is called The Gironde. (1791–93.)

"The new assembly, called the Legislative Assembly, met October 1, 1791. Its more moderate members formed the party called the Girondists." –C. M. Yonge: France, chap. ix. p. 168.
<urn:uuid:f36be907-e573-4e39-a5c0-ffe13d2b3c32>
{ "date": "2013-05-26T09:35:20", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9578762650489807, "score": 3.359375, "token_count": 156, "url": "http://bartleby.com/81/7213.html" }
Find and Replace Basics

Bluefish offers a wide range of find and replace methods in the Edit menu, also available through the contextual menu within a document. Here we will explore the most basic ones. For advanced find and replace methods, see Section 5, “Find and Replace”.

10.1. Searching for a word within a whole document

Choose the Edit → Find... (Ctrl+F) menu item. A Find dialog will be displayed. Enter the word (or string) to search for in the Search for: field. Then click OK. If the search is successful, the document window scrolls to the first occurrence of the search string in the document and highlights it. Below is an example of a search applied to a mediawiki document.

To find a subsequent occurrence of the string, use the Edit → Find again (Ctrl+G) menu item. If no further occurrence is found, a dialog will be displayed notifying you that no match was found.

10.2. Setting limits to the search scope

You may want to search for a string from the cursor location to the end of the document. Here is an example searching for all occurrences of name == within a python script from a given location.

Procedure V.2. Searching from selection
- Put the cursor where you want to start the search from in the document window
- Open the Find... dialog
- Enter your search string in the Search for: field
- Choose Current position till end from the Starts at: pop up menu
- Click OK.

Here is the result: Notice that the search does not take into account the occurrence of the same string at line 50, since it is outside the search scope.

You can also limit the search scope to a selection range. In that case, highlight the selection before the search, and choose Beginning of selection till end of selection from the Starts at: pop up menu in the Find dialog.

10.3. Case sensitive search

By default, the search process is case insensitive. If you want to make it case sensitive, just check the Match case box in the Find dialog.
Here is the result applied to a ruby script: Notice again that the result does not catch the XML string at line 45, since the search string was xml and case sensitive search was requested.

10.4. Overlapping searches

It may occur that the document contains some kind of palindrome you want to search for. The "normal" find process does not retrieve all occurrences of that kind of string. In this case, you have to check the Overlap searches box in the Find dialog to retrieve all occurrences of the string. The search (with Ctrl+F, then Ctrl+G) will give the following results:

10.5. Retrieving previous search strings

Notice that the pop up menu to the right of the Search for field in the Find dialog allows you to retrieve previous search strings. They are listed in reverse order by search history, providing quicker access to the most recent searches.

10.6. More on find

For an explanation of the Bookmark results box of the Find dialog, see Section 4.1, “Generating several bookmarks at once”. You will find details on Find Again and Find from Selection in Section 5, “Find and Replace”. For a quick way of switching from HTML entities to other types of encoding and changing letter cases, see Section 5.1, “Special find and replace features”.

10.7. Replacing features

The Edit → Replace... (Ctrl+H) menu item works the same way and has all the features the Edit → Find... (Ctrl+F) menu item offers. The Replace dialog is also accessible through the contextual menu within a document. For the features common to the Find dialog, see 10.1, “Searching for a word within a whole document”. Here we will explain the features unique to the Replace dialog.

10.8. Retrieving previous replace strings

As with the Search for field's pop up menu, the Replace with field's pop up menu allows you to retrieve previous strings used for replace, the most recent ones being at the top of the list.

10.9. Changing letter case when replacing

If you want to change letter case when replacing, use the Replace type pop up menu. The default choice is Normal, that is, the case is not changed. With the Uppercase replace type, the search string will be replaced with its uppercase translation. Likewise, with the Lowercase replace type, the search string will be replaced with its lowercase translation. Notice that in this case, the Replace with field is deactivated, thus not taken into account even if you have entered some string in it.

10.10. Choosing strings to replace

It may occur that you do not want to replace all search strings retrieved by the search process, but only some of them. In this case, check the Prompt before replace box. A Confirm replace dialog will appear for each retrieved string, where you can choose to Skip the string (leave it as it is), Replace it, Replace all strings within the search scope, or Close the dialog (cancel the process). If you want to replace only the first occurrence of a search string, check the Replace once box instead.

10.11. More on replace

For further explanation of replace features within Bluefish, see Section 5, “Find and Replace”.
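Bluefish exposes overlapping matching as a checkbox (section 10.4), but the same distinction exists in most search tools and regex engines. As a side note unrelated to Bluefish itself, here is a short Python sketch of why a plain left-to-right scan misses overlapping occurrences of a palindrome-like string, and how a zero-width lookahead recovers all of them:

```python
import re

text = "abababa"

# A plain scan resumes after each match, so it finds
# non-overlapping occurrences only (positions 0 and 4):
plain = re.findall("aba", text)
print(plain)   # ['aba', 'aba']

# Wrapping the pattern in a zero-width lookahead makes each
# match consume no characters, so every start position is found:
starts = [m.start() for m in re.finditer("(?=aba)", text)]
print(starts)  # [0, 2, 4]
```

The lookahead trick is the generic equivalent of Bluefish's Overlap searches box: both restart the scan one character after the previous match instead of after its end.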
<urn:uuid:f678da6c-9a52-4a12-93ec-be98d88fd7e8>
{ "date": "2013-05-26T09:35:38", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.8394289612770081, "score": 2.53125, "token_count": 1158, "url": "http://bfwiki.tellefsen.net/index.php/Find_and_Replace_Basics" }
What's the Latest Development? While the U.S. sorely lacks a national space agenda, China has recognized the development of space-based solar energy as essential to the betterment of its place in the world. According to the China Academy of Space Technology (C.A.S.T.), "the state has decided that power from outside the earth, such as solar power and the development of other space energy resources is to be China's future direction." In space-based solar power, China sees a sustainable energy source capable of supplying its blossoming economic industries. What's the Big Idea? Beyond feeding its economy, China sees the development of space-based energy technologies as important for "social development, disaster prevention and mitigation, and cultivating innovative talents through an increased space effort the likes of which haven't been seen since the Apollo program." In the list of technologies C.A.S.T. plans to develop, many can be used to benefit other kinds of space ambitions, suggesting that energy is but one of China's missions for the development of space.
<urn:uuid:752109af-49fe-4e58-b6f9-c0f5039cbd60>
{ "date": "2013-05-26T09:42:26", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9492543339729309, "score": 3.0625, "token_count": 217, "url": "http://bigthink.com/ideafeed/chinas-space-based-solar-power-strategy" }
Over range of ADHD behavior, genes major force on reading achievement, environment on math
First study of its kind reveals complex interaction

Humans are not born as blank slates for nature to write on. Neither are they behaving on genes alone. Research by Lee A. Thompson, chair of Case Western Reserve University’s Psychological Sciences Department, and colleagues found that the link between Attention-Deficit/Hyperactivity Disorder (ADHD) and academic performance involves a complex interaction of genes and environment. Genetic influence was found to be greater on reading than on math, while shared environment (e.g., the home and/or school environment the twins shared) influenced math more so than reading. The researchers don’t know why.

Their study of twins, published in Psychological Science, Vol. 21, was the first to look simultaneously at the genetic and environmental influences on reading ability, mathematics ability, and the continuum of ADHD behavior.

“The majority of the twins used in the study don’t have ADHD,” Thompson said. “We are looking at the continuum of the behavioral symptoms of ADHD - looking at individual differences - not a disorder with an arbitrary cutoff.” This type of continuum is a normal distribution or bell curve, with scores symmetrically distributed about the average and getting much less frequent the farther away a score is from the average. Disability is usually classified as the lower extreme on the normal distribution.
For what we refer to as gifted or disabled, Thompson points out, “There is no difference in cause, just different expression of achievement.”

Thompson collaborated with Sara Hart, a graduate student at the Florida Center for Reading Research, and Stephen Petrill, a professor at the Ohio State University, in analyzing 271 pairs of ten-year-old identical and fraternal twins. The twins were selected from the Western Reserve Reading and Mathematics Project, a study that began in 2002 with kindergarten and first grade-age twins and has collected data yearly about their math and reading ability.

The study focused on two ADHD symptoms: inattention and hyperactivity, which are viewed as extremes of their respective attention and activity continuums. As part of the study, the mother of the twins rated each child on 18 items such as the child’s ability to listen when spoken to, play quietly, and sit still, to assess attention and activity levels. A researcher testing each twin’s mathematics and reading ability also rated the twins each year on their attention to tasks and level of hyperactivity. The researchers assessed reading ability by evaluating the twins’ recognition and pronunciation of words and passage comprehension. They measured the twins’ capacity for mathematics by focusing on the twins’ ability to solve problems, understanding of concepts, computational skills, and the number of computations completed in 3 minutes.

Researchers analyzed the data from three perspectives: one looked at the overall ADHD behavior, one at the level of attention, and one at the activity level. They then determined the similarities in genetic and environmental influence between ADHD symptoms and reading and between the symptoms and mathematics. To do so, researchers looked at the variance and covariance of ADHD symptoms and academic ability. Variance measures the individual differences on a given trait within a population and covariance is a measure of how much two traits are related.
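The study fit formal quantitative genetic models to these variances and covariances; as a rough illustration of the underlying idea, Falconer's classic approximation turns identical-twin (MZ) and fraternal-twin (DZ) correlations into additive-genetic (A), shared-environment (C), and non-shared-environment (E) components. This is a textbook sketch, not the authors' actual method, and the correlations below are hypothetical:

```python
def falconer_ace(r_mz, r_dz):
    """Approximate ACE variance components from twin correlations.

    MZ twins share ~100% of genes, DZ twins ~50%, so the gap between
    the two correlations is attributed to additive genetic effects.
    """
    a = 2 * (r_mz - r_dz)   # additive genetic (heritability)
    c = 2 * r_dz - r_mz     # shared environment
    e = 1 - r_mz            # non-shared environment (+ measurement error)
    return a, c, e

# Hypothetical correlations: identical twins 0.8, fraternal twins 0.5
a, c, e = falconer_ace(0.8, 0.5)
print(round(a, 2), round(c, 2), round(e, 2))  # 0.6 0.2 0.2
```

The three components sum to 1, i.e. they partition all the variance in the trait, which is what the article means by breaking the measures "into identified components."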
These measures were broken down into identified components: additive genetic effects, shared environment and non-shared environment. Using quantitative analysis of the components, the researchers found that there are some general genes that influence the symptoms of ADHD simultaneously with reading and mathematics ability and some genes that influence each specifically. This study also found that both inattention and hyperactivity were related to academics.

“If we have this much overlap between genes that affect behaviors of ADHD and academic achievement,” Thompson said, “it gives validity to the relation of ADHD behaviors and poor academics.”

But genes are not everything, Thompson adds. There are different approaches for interventions that can be taken based on the extent of environmental influence on ADHD behavior, reading ability, and mathematics ability across the entire continuum of expression. Future research, the study notes, should focus on the underlying connection between ADHD symptoms and poor academic achievement in order to identify the influences that may alter these often co-occurring outcomes.

Additional authors include Erik Willcutt from the University of Colorado, Boulder; Christopher Schatschneider from Florida Center for Reading Research, Florida State University; Kirby Deater-Deckard from Virginia Polytechnic Institute and State University; and Laurie E. Cutting from Vanderbilt University.

Funding for the study was provided by the National Institute of Child Health and Human Development and by the Department of Education. Sara Hart was additionally supported by the Lucile and Roland Kennedy Scholarship Fund in Human Ecology from the Ohio State University and the P.E.O. Scholar Award.

Reference: S Hart et al. Exploring How Symptoms of Attention-Deficit/Hyperactivity Disorder Are Related to Reading and Mathematics Performance: General Genes, General Environments. Psychological Science. DOI:10.1177/0956797610386617 (2010).

Contact: Lee A. Thompson, Case Western Reserve University, [email protected]

Release prepared by Sarah Gavac, Case Western Reserve University
<urn:uuid:8fd51d86-04a7-4f13-b3af-8413a3754571>
{ "date": "2013-05-26T09:35:37", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 3, "language": "en", "language_score": 0.9436678886413574, "score": 3.109375, "token_count": 1135, "url": "http://blog.case.edu/think/2011/04/26/over_range_of_adhd_behavior_genes_major_force_on_reading_achievement_environment_on_math" }
How Much Does the Ocean Weigh?

Water does weigh something: about 8.3 pounds per gallon. In research published this week, scientists from the National Oceanography Centre and Newcastle University have proposed an idea that would assess the mass of the world ocean by weighing it at a single point. But there is a catch.

Global sea level is currently rising at about 3 mm per year, but predictions of rise over the century vary from 30 cm to over a meter. There are two ways global sea level can increase. The water in the oceans can warm and expand, leading to the same weight of water taking up more space; in other words, water density can vary, which must be taken into account. Alternatively, more water added to the ocean from melting of land ice will increase the ocean's weight.

The National Oceanography Centre's Prof Christopher Hughes said: "We have shown that making accurate measurements of the changing pressure at a single point in the Pacific Ocean will indicate the mass of the world ocean. And we know where to place such an instrument — the central tropical Pacific where the deep ocean is at its quietest. This pressure gauge needs to be located away from land and oceanic variability. The principle is rather like watching your bath fill: you don't look near the taps, where all you can see is splashing and swirling, you look at the other end where the rise is slow and steady."

By a lucky chance, pressure measurements have been made in the Pacific Ocean since 2001, as part of the U.S. National Tsunami Hazard Mitigation Program, which focuses on detecting the small pressure fluctuations produced by the deep ocean waves that become tsunamis at the coast.
From these measurements, the team, including Dr Rory Bingham, based in the School of Civil Engineering and Geosciences at Newcastle University, has been able to show that a net 6 trillion tonnes of water enters the ocean between late March and late September each year, enough to raise sea level by 1.7 cm, and leaves the ocean in the following six months.

Prof Hughes: "Of course, what we are most interested in is how much water accumulates in the ocean each year, and this is where we currently have a problem. While present instruments are able to measure pressure variations very accurately, they have a problem with long term trends, producing false outcomes."

Knowing the ocean's weight would give an estimate of how much water the ocean is gaining each year, which in turn is related to how much global warming is occurring.

"This is a challenging goal. The pressure changes are smaller than the background pressure by a factor of about 10 million, and the deep ocean is a hostile environment for mechanical components, with erosion and high pressures. However, there are many other measurement systems with this kind of accuracy and there is no reason, in principle, why someone with a new idea and a fresh approach could not achieve this."

Article appearing courtesy Environmental News Network.
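The 1.7 cm figure can be sanity-checked with simple arithmetic. One tonne of water occupies roughly one cubic meter, and the global ocean surface area is commonly taken as about 3.6 × 10^14 m² (that area is an outside assumption, not a number from the article):

```python
OCEAN_AREA_M2 = 3.61e14     # assumed global ocean surface area

mass_kg = 6e12 * 1000       # 6 trillion tonnes of water, in kg
density_kg_m3 = 1000        # fresh-water approximation
volume_m3 = mass_kg / density_kg_m3          # ~6e12 m^3

rise_cm = volume_m3 / OCEAN_AREA_M2 * 100    # spread over the ocean surface
print(round(rise_cm, 2))    # 1.66
```

The result, about 1.66 cm, agrees with the 1.7 cm quoted in the article to within rounding and the approximations used.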
<urn:uuid:7c5d0829-26bf-4315-9573-7861ac4c1901>
{ "date": "2013-05-26T09:40:58", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.9557080268859863, "score": 3.65625, "token_count": 630, "url": "http://blog.cleantechies.com/2012/09/10/how-much-does-the-ocean-weigh/" }
Throughout the history of the United States, equality for all people has been fought for and won time and time again. Thomas Jefferson wrote in the Declaration of Independence "that all men are created equal," and over time equal rights have been gradually extended to different groups of people. However, equality has never been achieved without heated debate, despite our country's founding principle that all people are created equal in the first place.

The language used to seek equality has remained familiar over time. Posters demanding equal rights (pictured) contain messages we have all seen or heard. One of my theories is that since the human life span is finite, the message of equality has to be relearned by each generation as it comes to realize that more work needs to be done. If humans lived longer, would full equality across racial and gender lines have been acquired by now? Ask yourself: Would women suffragists from the 1920s, who so vehemently demanded the right to vote, think it was fine for African Americans to be denied this same right? It depends. My theory also includes the caveat that empathy for others does not always translate into citizens banding together for the greater good.

Then again, the social evolution of the United States is progressing. This progression is the reason the language and message of equality remains relevant. Equality is a shared goal that not everyone enjoys. Racial intolerance for one group is no different than bigotry for another. Denying equality for a particular group plays into the kind of discriminatory trap that makes no sense if one applies the very same principles of equality indiscriminately. All people are created equal, period.

The Declaration of Independence was written with the hope of possibility. Think about it—the signers of this document were declaring a new and independent country! Jefferson's words made a statement about human rights that became the foundation for a country unlike any other in the world.
The signers never anticipated that their vision would eventually embrace so many different kinds of people, but that is the beauty of it. The Declaration was groundbreaking because it provided a foundation of principles and moral standards that have endured to modern times and that accommodate human evolution and its capacity for acceptance. Stepping back and viewing all these posters as a whole, one could come to two conclusions. First: the human race does not learn from history. Second: humans repeat the same mistakes over and over. However, I believe that the preservation and repurposing of the messages of protest in all their different forms are evidence that we do learn from history, and that we apply these tactics when the moment calls for them. Similar to my previous posts on Race-Based Comedy and Race in Advertising, this post is a small glimpse into a bigger topic that welcomes further discussion. These subjects would be commonplace in a college syllabus, but is there any reason why we shouldn’t introduce dialogue about such issues into our daily lives? At the dinner table, instead of asking your kids how their day was at school and receiving a one-word answer, try bringing up issues that are important to you. If you care about some form of injustice and you voice your opinion honestly, your kids may sense the gravity of the conversation and weigh in with something just as meaningful.
<urn:uuid:3c86c75d-667b-4a1e-8c02-c28489d487b3>
{ "date": "2013-05-27T02:54:37", "dump": "CC-MAIN-2013-20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00000-ip-10-60-113-184.ec2.internal.warc.gz", "int_score": 4, "language": "en", "language_score": 0.974034309387207, "score": 3.96875, "token_count": 658, "url": "http://blog.leeandlow.com/2012/06/01/life-liberty-and-the-pursuit-of-equality-for-all/" }