Michigan - and the entire region - should benefit from new EPA requirements for purifying ballast water dumped from ships on the Great Lakes.
Most large vessels use ballast tanks filled with water as a balancing mechanism. Researchers think these tanks also provide convenient hiding places for exotic species hitching a ride from foreign waters.
A 2002 state law required all oceangoing vessels to self-police their practices for dumping ballast water. In 2008, the law was toughened, requiring all ships incoming from the Atlantic and docking in Michigan to show a permit proving treatments were used on discharged water or face fines.
The law was tough, but it didn't solve what was essentially a regional problem - no other states followed Michigan's lead, so many Great Lakes ports were unprotected. Things have finally changed. Under new rules released last week by the EPA, vessels longer than 79 feet - perhaps 60,000 ships in total - must treat ballast water with technology such as ultraviolet light or chemicals to kill organisms before discharge.
The new guidelines don't apply to vessels sailing only within the Great Lakes, a provision criticized by environmentalists because it may allow ships to transport invasive species quickly around the lakes. But the move should, at a minimum, close the window of opportunity for potential future invaders.
The EPA rules impose international cleanliness standards that the Coast Guard already adopted last year, bringing some harmony to a formerly dissonant regulatory environment. The entire Great Lakes basin will be covered by one set of rules.
Studies by the EPA's science advisory board and the National Research Council backed the new standards, which limit the number of living organisms allowed in a given volume of ballast water.
Under the new EPA and the Coast Guard rules, ships built after Dec. 1 will have to comply with the treatment standards immediately. The requirements will be phased in for existing vessels, with treatment technology being installed gradually as ships undergo routine maintenance.
Over the years, we've advocated for tough ballast rules to protect the Great Lakes as an economic as well as an ecological imperative. Previous efforts to pass a national standard for ballast water have bogged down over how strict the rules should be. A balance has to be found between slowing invasive aquatic species and supporting the region's shipping industry, and we think these rules may be the answer.
Some will certainly argue that the rules should be stronger or implemented faster - that the EPA's solution is not ideal. But at least some rules are now on the books. Perhaps they aren't the best rules, but they are, finally, being universally applied.
---
The role of teachers in education extends beyond the responsibility of passing along information. As the name suggests, the primary function of the teacher includes teaching a variety of facts and skills to students. Additionally, the teacher’s role incorporates a multifaceted sense of purpose intended to encourage a child’s social development. The age and grade level of the students directly affect the type of role the teacher plays in all aspects of education. The important role of the teacher in education delivers many benefits to children and parents, including students with special needs.
According to the 2010-11 Occupational Outlook Handbook offered by the Bureau of Labor Statistics, teachers play a vital role in the formation of a student’s potential. Kindergarten through secondary school teachers often provide the sole source of a child’s learning experience, therefore taking on the important responsibility of cultivating knowledge in the students.
A classroom teacher guide published by the ERIC Clearinghouse on Elementary and Early Childhood Education describes at length the purpose of the teacher beyond the educator role. The teacher’s deeper sense of purpose includes maintaining an attentive attitude towards conflicts between children. The classroom environment fosters numerous interactions between multiple children on a daily basis, creating the largest social component of a child’s life. Shaping a child’s social development therefore falls on the teacher, who takes on the role of mediator and coach.
The role of teachers in education changes according to the grade of the students. For example, the Bureau of Labor Statistics notes that the role of the teacher during a child’s early years includes developing the primary skills necessary for advancement. As the student progresses to middle school, the teacher’s role expands to passing on information specific to a particular subject area. The responsibility of shaping the student’s social development also changes as children age and become capable of making informed choices.
Teachers take on the role of a third parent to many students, both because of the extensive amount of time spent together and because of the needs of the child. A clear benefit of this role is that it reinforces the parents’ wishes and methods through communication.
Teachers also help children who lack a solid family structure. They portray an image of accomplishment, especially for students in secondary school, and fulfill a child’s need for a positive role model when the family fails to provide one.
A crucial role of the teacher in education consists of evaluating, assessing and providing for children with special circumstances. A child suspected of having a disability or suffering from abuse receives proper attention due to the responsibility of the teacher to intervene. In addition to evaluating suspected special needs children, the teacher works with parents and a team of specialists to provide the required assistance for students who already have a diagnosis.
- U.S. Department of Labor, Bureau of Labor Statistics: Occupational Outlook Handbook, 2010-11 Edition: Teachers - Kindergarten, Elementary, Middle and Secondary
- Education Resources Information Center: The Teacher's Role in the Social Development of Young Children: II. General Teaching Strategies: Optimum Teacher Intervention
---
The struggle of the Russian people for sea access between the 13th and 17th centuries: Introduction
After the Tatar invasion, Russia was left with access to only three seas: the Baltic, the Barents and the White. In the Baltic Sea, the Russians controlled only a short section of the coastline in the Gulf of Finland. This coastal tract, however, had great significance, since on the Gulf of Finland is the mouth of the River Neva, which flows from Lake Ladoga, into which, in turn, flow a multitude of other rivers. The eastern part of the Gulf of Finland, which connects the Baltic Sea with the river systems of northeastern regions of Europe, had long been attracting the attention of Russia’s westerly neighbors, above all Sweden. Swedish invaders sought to occupy the shores of the Neva and the Gulf of Finland, in order to cut off Novgorod from the coastline and deprive the Russians of their only sea route to the nations of Western Europe.
The struggle for the eastern coast of the Gulf of Finland turned especially fierce during the 13th century, when Russian lands lay ravaged by Tatar incursions, allowing the Catholic Church to undertake a large-scale crusade against a prostrate Russia. The crusade was initiated by the Roman papacy and executed by the Germanic knights of Livonia and by Swedish and Danish feudal lords.
In the summer of 1240, a Swedish army commanded by their marshal, Birger, landed at the confluence of the Izhora River with the Neva. From there, the Swedish generals intended to begin the invasion of Novgorodian lands and occupy Lake Ladoga, but these plans were thwarted by a well-organized defense of the Russian shores. A permanent naval patrol was stationed at the Neva estuary, guarding the two inlet channels into branches of the river. The purpose of the naval guard was two-fold: it monitored the movements of the enemy fleet, and provided pilots for merchant ships. The naval sentry stationed at the Neva estuary, commanded by Pelgusiy (the “patriarch” of the Ingrian lands), notified Novgorod of the Swedes’ landing. Upon receiving this news, the Prince of Novgorod, Alexander Yaroslavich (Nevsky), immediately marched out with his retinue and the Novgorodian militia, not sparing the time to wait for reinforcements from his father in the Suzdal lands. On July 15, 1240, Alexander Yaroslavich’s warriors took the Swedish encampment by surprise and utterly routed the Swedes. Not contenting themselves with the destruction of the Swedish camp, the Novgorodians attacked the enemy vessels moored at the shore, destroying many of them. One Novgorodian, Misha, alone destroyed three Swedish ships. Another Novgorodian, Gavrila Oleksich, forced his way on horseback along the gangways (“along the deck”) to an enemy ship while pursuing a Swedish noble, was thrown off the gangways, but survived. The defeat of the Swedish landing force was total, the corpses of the Swedish nobles alone filling three ships, which were sunk by the Swedes themselves. In the night the enemy fleet sailed out of the Neva and the Gulf of Finland in disgrace, abandoning any further attempt to establish themselves on the shores of the Neva. The battle on the Neva had a great impact on the subsequent organization of the Russian struggle against the German knights and Danish feudal lords. The Swedes could no longer participate in subsequent military operations, which simplified Alexander Nevsky's efforts against the German knights. The Russian campaign against the German knights immediately acquired a pan-Russian character: Alexander and the Novgorodian regiments were joined by reinforcement armies from the Suzdal lands, under the command of Alexander's brother. The decisive battle against the combined German and Danish armies took place on April 5, 1242, on the ice of Chud Lake (Lake Peipus). The German and Danish lords were utterly destroyed and attempted to flee, but the springtime ice collapsed, and the lake's icy waters buried many of the German knights, "like dogs."
New attempts to firmly establish themselves on the shores of the Neva were made by Swedish feudal lords at the end of the 13th century. They were aligned with Swedish ambitions of bringing Karelia under their rule. In 1284, the Swedish fleet invaded Lake Ladoga. The goal of this expedition was the subordination of Karelians: Swedish lords “wanted to put the squeeze on Karelia." Novgorodians and Ladogans, headed by the Posadnik Semyon, awaited the return of the sea intruders to the Neva delta (“standing on the Neva estuary”), attacked them and destroyed a large part of the Swedish ships. This battle on the Neva affords grounds to claim that Novgorodians possessed sea and river vessels intended for military operations, for otherwise they could not have destroyed the Swedes, who invaded Ladoga “on ‘loyva’ and ‘shneka’ boats.” The presence of the naval guard at the Neva estuary is noted this time as well, as it sent word of the approach of the enemy flotilla.
The Swedish expedition of 1284 was primarily a reconnaissance mission. Events developed in quite a different manner in 1300, when Swedish feudal lords made another attempt to firmly establish themselves on the shores of the Neva, constructing a naval stronghold there. The Swedish fleet stopped at the confluence of the Okhta River with the Neva, where ships could approach the bank directly. The Swedes brought architects for constructing the castle, among them a “significant” (distinguished) master “from great Rome and the Pope.” The undertaking was conceived grandly. The new citadel was named Landskrona (in Russian chronicles, “Crown of the Earth”). The expedition was headed by Torkel Knutsson, ruler of Sweden, called by a Russian chronicler the King’s Viceroy.
The struggle against the Swedes quickly acquired an all-Russian character, as the Novgorodians turned for help to the Crown Prince Andrey Alexandrovich. Prince Andrey arrived with Suzdal troops, and together with the Novgorodians approached Landskrona. The siege of the fortress was conducted according to all the art-of-war rules of the times. In an effort to destroy the Swedish fleet, the Russians released burning log rafts into the river's current. But the Swedes stretched iron chains across the river in anticipation, foiling the Russian plot. Fighting continued day and night under the walls of the fortress. Finally, on May 18, 1301, the Novgorodians breached the stronghold, slaughtered the garrison, and burned and pillaged the fortifications, taking 300 prisoners. “Ruthlessness paid them for their arrogance,” a Russian chronicler noted didactically of the Swedes. The failure of this last Swedish attempt to deprive the Russian people of access to the Baltic Sea was cause for rejoicing in Novgorod, and the chronicler gratefully commemorates those nameless heroes “who laid their lives for that city.”
Not being content with just defending their shores, Novgorodians oftentimes moved into active operations against the Swedes. In these they showed themselves to be experienced seamen and created a military fleet suitable for remote expeditions.
In 1310, the Novgorodians undertook a campaign to reestablish a town on the River Uzyerva, which flows into Lake Ladoga. The new city of Korela (Kexholm) was built on the site of old fortifications, becoming the base of Novgorodian operations in the region. In 1311, the Novgorodians undertook a marine campaign into the heart of Finland. This bold expedition is evidence of their determination to defend their lands by aggressive operations in territories captured by the Swedish lords.
Yet, while the Neva remained undefended, the threat of enemy invasion into Lake Ladoga also persisted, especially since the Swedes had a great base on the shores of the Gulf of Finland – the city of Vyborg. Therefore, after an unsuccessful siege of Vyborg, the Novgorodians set about fortifying the defenses of their shores. In 1323, they established the city of Orekhovets (“Nut”), or Oreshek (“Nutlet”), on Orekhovets Island. Subsequently, Peter I renamed it Schlisselburg (“Key-city”), rightly recognizing its importance as the key to control of the Neva, which provides access to the sea. In the same year, Novgorod negotiated the Orekhovets Treaty (Treaty of Nöteborg) with Sweden, under which the whole of the Neva remained in Russian possession, on the condition that both nations refrain from establishing any more cities in Karelia. This treaty became the basis for all subsequent treaties between Russia and Sweden right up to the beginning of the 17th century.
The last Swedish attempt to seize the Neva basin likewise ended in disgraceful failure. This attempt was carried out by the Swedish King Magnus, who had earned the derogatory sobriquet “the Weak.” He hoped to improve his image with a successful campaign in the Russian lands. In 1348, the Swedish fleet again found itself in the Neva delta. The assault party was disembarked on Beryozovy Island, where Magnus came to a stop "with his entire force.” In August of that year, he captured the city of Orekhovets. However, Orekhovets wasn’t held by the invaders for long, as within a year it was liberated by the Russian army. After this the struggle for access to the Baltic Sea abated for a long time. Russians firmly held on to the sea passage.
The struggle for the Baltic shores revived with new vigor in the 16th century. The interests of the economic and political development of the centralized Russian state, which formed in the 15th and 16th centuries, urgently demanded the resolution of the “Baltic question.” Ivan III had already paid special attention to fortifying the part of the coast of the Gulf of Finland belonging to Russia, building the strongholds Jama and Koporye there. In 1492, on the border with Estonia, directly across from Narva, the new city of Ivangorod (named in Ivan III's honor) was founded, becoming an important trade and strategic point on the northwestern border of Russia. It was in fact a first-rate castle for its time, constructed out of stone. From that time, Russian trade in the Baltic Sea was mainly conducted through the new port, which at the same time represented the state of the art in Russian fortification on the Baltic coast.
The strengthening of the Russian nation in the 16th century was cause for great alarm in Sweden, Livonia, Poland and Germany. Leaders of these nations did everything they could to impede the establishment of trade relations between Russia and Western Europe. At the same time, Sweden and Denmark also harbored ambitions of dominion in the Baltics. Ivan IV (“the Terrible”), having paid close attention to the situation in the Baltics and to military preparations against Russia, anticipated the enemy, and started the war with the Livonian Order in 1558, before the hostile nations could join forces. The initial years of the war were marked by spectacular successes for the Russians, who captured Yuryev (Tartu) and Narva.
Russia’s objective in the Livonian War was to gain access to the Baltic Sea; with such access secured, Ivan IV hoped in the future to create his own fleet on the Baltic.
In the course of the Livonian War, a special emphasis was placed on the question of defending Russian trade on the Baltic Sea. Intending to forcibly paralyze Russian sea trade with the West, Poland, and subsequently Sweden, resorted to the common means of sea trade disruption at the time – piracy. Seas and oceans in those days swarmed with pirates, who gladly went into the service of various governments. Entering into such service, corsairs were granted a special “letter of marque” (or license) which gave them the right to legally exist. Ivan the Terrible also acquired a corsair fleet to defend the shores of the Baltic Sea, commanded by the head pirate Karsten Rode. The emergence in the Baltic Sea of a corsair fleet in the name of Ivan the Terrible’s administration was cause for great consternation in Sweden, Germany and other Baltic nations, although this fleet was very short-lived.
Besides creating a corsair fleet, Ivan IV demonstrated his intention to seriously establish himself on the shores of the Baltic Sea by his attempt to seize Reval, an important trade port and naval stronghold. Control of this city, captured by Sweden after the breakup of the Livonian Order, meant not only the expulsion of the adversary from the southern shores of the Gulf of Finland, but also the acquisition of a fortified base for the corsair fleet. But the seven-month-long siege of Reval from land did not yield the intended results. The defenses of Reval proved too strong, while reinforcements and everything the city needed were constantly supplied by sea.
The long-lasting Livonian War was a strain on all the resources of the Russian nation. From 1578, military operations took an unfavorable turn for the Russian army, although the advance of the Polish-Lithuanian army was halted at the walls of the heroically defended Pskov. The long war, lasting a quarter of a century, ended with armistices with Poland (in 1582) and Sweden (in 1583) on terms highly unfavorable to Russia, which lost not only all the lands acquired in Livonia, but also the southern coast of the Gulf of Finland together with the Russian cities of Jama (presently Kingisepp), Koporye and Ivangorod. Russia was cut off from the Baltic Sea.
The Russian nation could not come to terms with the loss of access to the Baltic Sea. Thus the Russian government answered Sweden's offer of signing a peace treaty in place of the armistice with a demand for the return of Russian cities on the Baltic coast, and began seeking the return of the lost lands by armed force. The new Russo-Swedish War ended with the Treaty of Tyavzino in 1595, which forced Sweden to return the coast of the Gulf of Finland and Korela (Kexholm) to Russia.
Yet Sweden, despite signing “eternal peace,” continued to prepare for the capture of the Russian coast of the Gulf of Finland. At the beginning of the 17th century, when Russia was weakened by Polish encroachment, Sweden began open aggressive actions, even occupying Novgorod.
The Stolbovo Treaty of 1617 again gave Sweden the Russian coast of the Gulf of Finland. The Russian loss of access to the Baltic Sea was a source of jubilation in Sweden. In an official appearance, King Gustavus Adolphus said, “The Russians are dangerous neighbors: their lands stretch to the North, Caspian and Black Seas; they have a powerful noble class, a numerous peasant class, populous cities; they can mobilize large armies; and now this adversary cannot release a single vessel into the Baltic Sea without our permission.”
The lack of access to the Baltic came to be felt especially keenly in Russia in connection with the formation, in the 17th century, of an internal “all-Russian market” and the development of economic and political relations with the nations of Western Europe. A staunch supporter of the struggle against Sweden for access to the Baltic Sea was the distinguished Russian diplomat of the 17th century, A. L. Ordin-Naschokin. In a special note submitted to the Tsar Alexei Mihailovich, he insisted on making peace and allying with the Rzeczpospolita, in order to combine forces against Sweden.
Preparing for war against Sweden, the Muscovite government developed a broad plan of military operations, which provided for a simultaneous advance of Russian forces on several fronts. The main force, headed by the Tsar himself, was to take boats down the Western Dvina to Riga. This front was considered the most crucial, as the capture of Riga would open access to the Baltic Sea.
In August of 1656, Russian troops seized Dinaburg and Kokenhausen (Kukenois). Construction of battleships began on the Western Dvina. Yet the attempt to capture Riga was unsuccessful.
Another detachment of the Russian army, led by the Voivode Potemkin, was to clear the Izhora of Swedes and seize the Neva estuary, after which Potemkin’s mission was to march on Stockholm. For this purpose he was given vessels, and over 500 Cossacks, experienced seamen, were sent from the Don. In the spring of 1656, Potemkin approached the Neva and captured the city of Nyenschantz (Kantsy), which had been constructed at its mouth. Having taken Nyenschantz, Potemkin approached Nöteborg (Oreshek), but was not able to seize it, even though he received reinforcements from the Ladoga in the form of a multitude of smaller vessels. In July of that year, having sailed down the Neva into the Gulf of Finland, Potemkin undertook an attack on Kotlin Island, where he met a detachment of Swedish vessels and captured a “galley” and prisoners in battle. A landing force disembarked on Kotlin and burned down the settlements established there.
The international situation, which had become more complicated, impeded Russia’s recovery of the lost lands on the shores of the Gulf of Finland. The main goals sought by the Russians in the Ingrian lands were not achieved, and Nöteborg remained in Swedish hands. Nevertheless, the talks that began in 1658 between Russia and Sweden strongly emphasized the question of harbors for Russian vessels.
But the Swedes were in fact most wary of the appearance of Russians on the shores of the Baltic Sea. According to the armistice agreement signed in the village of Valiesar (in 1658), Russia was left with only the several towns that it occupied in Livonia. But even these acquisitions were lost in the treaty with Sweden signed in 1661 in Kardis. Russia and Sweden were left with the borders delineated in the grossly unfavorable Treaty of Stolbovo. Russia persistently sought a harbor on the Baltic Sea, but this crucial historical objective would not be reached until Peter I.
Besides the short section of coastline along the Gulf of Finland, Russia had long possessed hugely expansive stretches of coastline of the northern seas – the White and the Barents. The Barents Sea was known to the Russians under the distinctive sobriquet “breathing,” that is, non-freezing, sea, having tides all year round.
Settlements of the Novgorodians had long ago begun appearing on the Kola Peninsula and along the shores of the White Sea. Seal hunting and fishing were the long-time economy of the coast-dwellers, who undertook expeditions on their vessels deep into the Barents Sea. Intrepid Novgorodians made their way far east and north, to the shores of Novaya Zemlya. In the 14th century, three Novgorodian vessels (“yumas”) spent a long time wandering the northern seas: one of them perished, while two moored at a tall mountain range. The sailors were led by Moislav Novgorodets and his son Yakov, who described seeing an "aural light" which was more brilliant than the sun, that is, the Northern Lights. It is surmised that Moislav and his companions reached the mountainous shores of Vaygach and Novaya Zemlya.
The desolate shores of the White Sea were frequently a theater of vicious battles between Russians and Norwegians ("Murmans"), who pillaged the coastal lands. This is described in some detail by chroniclers of the 15th century. In 1419, Norwegians appeared at the mouth of the Northern Dvina with a brigade of 500 men, "in ‘busa’ and ‘shneka’ boats," and plundered Nenoksa and several other villages. The coast-dwellers attacked the marauders and destroyed two ‘shneka’ boats, after which the surviving Norwegian ships sailed out to sea. In 1445, Norwegians again appeared at the Dvina estuary, wreaking havoc on the locals. This invasion was perpetrated, apparently, to avenge a campaign by the Novgorod-allied Karelians within the borders of Norway (possibly this refers to the northern parts of Finland and Norway). The Karelians had caused great damage, having "beat them and warred and captured." As before, the Norwegian expedition was a total failure. In a surprise attack, the Dvinians slaughtered a great number of Norwegians, killed three of their leaders, and took prisoners, sending them to Novgorod. The remaining Norwegians "darted to the ships fleeing."
In light of the lack of a consistent link to Western Europe through the Baltic Sea, communication across the northern seas acquired a great economic and political significance for Russia. The route to Europe through the White and Barents Seas had long been known to Russian coastal dwellers, rather than having been discovered by English sailors, as is asserted in many English sources. This route to Europe was taken by Istoma Grigoryev, together with Danish ambassadors, at the end of the 15th century. The travelers boarded four vessels at the mouth of the Northern Dvina, and sailed along the coast of the Kola Peninsula and Scandinavia, thus reaching Bergen. Istoma's journey was not an exceptional occurrence. The same route was taken by the Russian ambassador on his way to Spain, and by several other Russians. Most noteworthy is the fact that Russian travelers characterized this route to northern Europe as “longer, but also safer.”
Thus, the arrival of an English ship, commanded by Chancellor, at the Northern Dvina estuary was just the beginning of more or less regular English trade relations with Russia. Following the English visit were Dutch ships. The small settlement at the Northern Dvina delta quickly grew and became the city of Arkhangelsk (in 1584), the largest Russian port of the 17th century.
In the course of the Livonian War, seafaring in the White Sea underwent expansive growth. It was during this time that Sweden made attempts to establish itself in the White Sea. In 1571, Swedish military vessels appeared near the Solovetsky Islands. The Swedes were apparently conducting reconnaissance, preparing for the capture of the Solovetsky Islands, which would guarantee them supremacy on the White Sea. To defend against enemy attacks, a wooden stockade was built around the Solovetsky Monastery, and archers and Cossacks recruited. This proved to be a timely measure, for during the Russo-Swedish War of 1590-1595, the Swedes attacked the western coast of the White Sea.
In August of 1591, military operations unfolded on a rather large scale in the North. A Swedish detachment of 1200 men “in small vessels” made its way to the Kola stockade. The adversary approached two towers of the wooden stockade, intending to set fire to them, but was repulsed. The attack was repeated in September. This time, 400 Swedes made their way in small vessels along the River Kem, and unexpectedly appeared at the Sumsky stockade. The Swedes attempted to set fire to the stockade for eight hours, but they lifted the siege the same day (September 23) and turned back, plundering several villages along the way. The Swedes suffered great losses in men killed, wounded and taken prisoner under the walls of the wooden stockade, which was defended by 200 Russians, of which only 30 were archers and cannoneers. The Swedish military commander was killed.
In response to the Swedish attack, Russian troops crossed into Swedish territory in the winter of the same year, 1591. The Russian detachment numbered 3000 men – archers, Cossacks and recruits from Ustyug, Kholmogor, Zaonezhye, and monasterial servants from the Kirillo-Belozersky and Solovetsky Monasteries. The voivodes were the Princes Andrey and Grigoriy Volkonsky. The march set out from the Sumsky stockade, with the goal of reaching the Kayan lands in the north of Finland, where the Russian troops waged war for six weeks.
Thus, Sweden’s attempt at the end of the 16th century to force the Russians off the Kola Peninsula, with the aim of obstructing their merchant marine in the White Sea, was not successful. The northern route became especially significant at the end of the 16th century, after the conquest of Siberia. The sea route along the coast of the Arctic Ocean led to Mangazeya, located on the River Taz in Siberia, which was the main center of the fur trade at the end of the 16th and beginning of the 17th centuries.
Russian vessels (“kochs”), leaving from the mouth of the Northern Dvina, would sail along the eastern shore of the White Sea and round the Kanin Peninsula, though sometimes they would traverse it by using the river system, taking advantage of the fact that even during the driest time of year the “portage,” that is, the dry section between the rivers feeding into Mezen Bay and Cheshskaya Bay, was insignificant. Experienced seafarers would sail “the great sea-ocean through the tract of the Yugorsky Strait,” then enter the Kara Sea. The entire journey to Mangazeya was fraught with extreme hardship, but this didn’t stop Russian tradesmen. In 1610, 16 kochs with 150 people arrived at Mangazeya. A later chronicle states that "many people came by sea" to Mangazeya.
Word of the existence of a route to Mangazeya percolated through Western-European trade circles. Already during the talks preceding the Treaty of Stolbovo, Swedish delegates interrogated Russian ambassadors, "how long is it from Muscovy to Siberia?” The English and Dutch dreamed of opening a northern route from Europe to China, Japan and India, in place of the longer route through the Atlantic and Indian Oceans to the southern and eastern shores of Asia. Theoretically the northern route to the East was shorter and, consequently, more profitable, but practically, this route, mastered only in modern times, was inaccessible to the merchant vessels of Western Europe.
All geographic discoveries in Siberia were made by intrepid Russian seafarers. Already in 1610, Russian merchants in Mangazeya had made an important discovery: the Dvinian Kondratiy Kurochkin, together with merchants from the Northern Dvina, undertook a sailing expedition from the Turukhansk winter outpost on the Yenisey estuary, "and as the river and sea cleared... and they sailed from the Yenisey into open sea." Thus it was shown that the Yenisey feeds the "Glacial" sea, that there is access to the mouth of the Yenisey, that "large ships can enter the Yenisey from the sea."
Polar journeys were exceptionally dangerous and frequently resulted in the deaths of courageous Russian seafarers. The story of such unknown explorers is told by a remarkable discovery made on the eastern coast of the Taymyr Peninsula by Soviet sailors in 1940. They found there the remains of items belonging to Russian winterers who survived a shipwreck in Simsa Bay. That these explorers “went by sea and not by land, is undeniably demonstrated not only by the debris of a wrecked vessel and an iron sail rig pulley, but the remnants of at least six special sea navigation instruments.”
From the middle of the 17th century, Russian vessels began to appear in the eastern part of the Arctic Ocean. From the mouth of the Lena they would sail west and within “a day of sailing” reach the River Olenyok. Sailing further, the Russian vessels would reach the mouth of the Yana within three to five days. The biggest obstacles en route for the gallant sailors were the hunks of ice, which the kochs had to navigate while being blown against the coast by marine winds.
After the construction of three fortified winter outposts on the Kolyma River, eastward expeditions along the coast of the Arctic Ocean became more frequent. In 1648, an expedition of six kochs set out from the Kolyma estuary. Three vessels reached the great Chukchi cape, now known as Cape Dezhnev in honor of Semyon Dezhnev, leader of one of the kochs that discovered the channel between Asia and America. The expedition rounded the easternmost end of Asia and reached the River Anadyr. In this way, the passage from the Arctic Ocean to the Pacific Ocean was shown to exist. The contours of the great Northern Sea Route around the Asian shores were thus outlined by courageous Russian seafarers already in the 17th century.
In the 16th and 17th centuries, Russian and Ukrainian settlements extended almost to the very shores of the Black and Azov Seas. The Zaporozhian Cossacks settled on one of the Dnieper islands (Khortytsia), in close proximity to Turkish fortresses located in the Dnieper and Bug estuaries. Don Cossack villages were established, already at the end of the 16th century, on the lower Don, also in close proximity to the Turkish fortress of Azov.
The struggle against the khanate of Crimea and against Turkey for the northern Black Sea coastline was, for Russia, historically inevitable. It was mainly a result of the necessity of defending the perimeter of Russian lands from Turkish and Tatar attacks.
The Don and Zaporozhian Cossacks played a crucial role in this struggle of the Russian and Ukrainian people. Oppression of the lower classes caused mass migration of peasants into the lower reaches of the Dnieper and the Don, where the Zaporozhian and Don Cossacks established themselves. The Cossacks waged a persistent struggle against the Crimean Tatars and Turks, which was not limited to defensive actions but included retaliatory attacks on Crimea and the Turkish coast of the Black Sea. In these excursions they proved themselves to be experienced seamen.
In the 16th century the Black Sea became the setting for frequent maritime battles between small Cossack vessels and the large ships of the Turkish fleet. Cossack sorties to the Turkish shores fundamentally undermined Turkey’s military prowess, decisively shattering the myth of invincibility it possessed at the time.
The Cossacks’ maritime heroics are striking for the bravery displayed, and their campaigns for the thoroughness of the preparations. For these sea sorties the Cossacks built special vessels ("chaikas"), up to 20 m in length, 3 to 4 m wide, with a draft of 50-60 cm. These vessels were equipped with two rudders – one each on the stern and the bow. Each of them had a mast, and in fair weather and favorable wind, a sail would be raised; normally, however, the "chaikas" were propelled by oars, for which 10 to 15 rowers were seated on each side. Bundles of reed were tied to the sides, which would keep the Cossack “chaikas” afloat even in the event that they took on water. Supplies were kept in barrels. Eighty to one hundred Cossack "chaikas" would be assembled for long-distance sorties. Each vessel was equipped with four or six small-caliber cannons (falconets), and had a crew of 50-70 people; each Cossack had two rifles and a saber. Such a squadron constituted a formidable force, especially since the Cossacks usually staged surprise attacks, not allowing the enemy the opportunity to assemble their forces.
The Cossack fleet would sail down the Dnieper to the mouth of the river. At the front would be the Ataman’s vessel with a flag on its mast, followed by the other "chaikas." Knowing that Turkish galleys closely guarded the Dnieper estuary, the Cossacks would hide their vessels in the cattail thickets of river channels, awaiting nightfall. Often the Cossack fleet’s penetration would not go unnoticed by the Turks, who would notify Constantinople in time of the impending danger. The alarm would spread immediately along the coast of the Black Sea, but the Cossacks would suddenly appear where they were not expected. Battles unfolded between the Cossacks and the Turkish flotillas.
The successful sorties of the Zaporozhian and Don Cossacks demonstrated the effectiveness of using river routes for attacks on Tatar and Turkish cities. Moreover, this was the only way to deal a blow to the Crimean Tatars on their own territory, since Crimea was securely defended from the north by nearly insuperable coastal steppes and marine firths.
The Crimean campaign of 1556 ranks among the remarkable military feats of the 16th century. The main Russian forces marched from Putivl toward the Dnieper under the command of a representative of the Muscovite government, the Dyak Rzhevsky. On the Dnieper, Rzhevsky’s Cossacks were joined by 300 Ukrainian Cossacks from Kaniv. The vessels for the campaign were built on a tributary of the Dnieper – the River Psyol. The farthest outpost of the Crimean Tatars on the Dnieper – Islam-Kermen – turned out to have been abandoned by the Tatars. Having occupied it, the Russian force moved further, to Ochakiv, which guarded the exit from the Dnieper and the Bug into the Black Sea. Rzhevsky secured a great victory here, destroying a detachment of Tatars and Turks and seizing the outskirts of Ochakiv (a “stockade”). Rzhevsky’s campaign demonstrated the fragility of the Turko-Tatar defenses on the Dnieper, and thus the vulnerability of the Black Sea coastline of Crimea and Turkey.
Rzhevsky’s brave feats were carried further in 1559. This time, the Dnieper detachment was commanded by the Okolnichiy Daniil Adashev. Gathering a force of 8000 men, he sailed his boats down the Dnieper to Ochakiv, near which he seized two Turkish ships. Disembarking on the northern coast of Crimea, 15 kilometers from Perekop, the Russian troops plundered Tatar settlements and successfully returned. The Khan pursued them with a small force, "as not many people rushed to join him."
The significance of Adashev's campaign as the first successful Russian sea attack on Crimea was characterized thus by a chronicler of these events: “heretofore from the beginning, since the Yurt became Crimean, and since that Korsun Isle (i.e., Crimea) was overrun by the impious Busorman, the Russian saber has not shed crimson blood in those impious dwellings until now.” For the first time, the war was taken to the territory of the Crimean horde itself, which had been marauding across Russian and Ukrainian lands unpunished.
This marked the beginning of Cossack excursions into the Black Sea. In 1589, Cossack “chaikas” under the command of Ataman Kaluga sailed down the Dnieper and headed for the shores of Crimea. The Cossacks captured a Turkish ship at sea, and attacked the city of Kozlov (now Eupatoria) at night. In 1606, Zaporozhian Cossacks captured 10 Turkish galleys at sea, with all their supplies, and attacked Varna. In the fall of 1608, Cossacks took Perekop, and the next year 16 “chaikas” appeared in branches of the Danube.
The Cossack campaigns became even more threatening for the Turks after the Zaporozhians began combining forces with the Don Cossacks. It became clear that the mighty Turkish fleet was in no condition to defend the shores of Crimea and Asia Minor. The major Black Sea ports – Kefe (Feodosiya), Trabzon and Sinop – became targets of Cossack attacks; Cossack “chaikas” even started appearing under the walls of Constantinople.
The most important event in the 17th-century history of Cossack campaigns was the siege of Sinop, one of the wealthiest Turkish cities on the Asia-Minor coast of the Black Sea. The way to Sinop was shown to the Cossacks by their own kin, “Poturnaks,” i.e., captive Cossacks who, unable to bear the torture and forced heavy labor on the Turkish galleys, had succumbed and converted to Islam. Having been converted by force, the Poturnaks detested their oppressors and gladly served as guides to the Cossacks. The Cossacks staged a surprise attack on the city, plundered the castle and its arsenal, destroyed the sail- and rowboats in the harbor, and liberated the Christian captives. The capture of Sinop (1616) had a great impact in Turkey, and was a cause of the ousting of the Grand Vizier.
In 1615, Don Cossacks attacked Azov and destroyed many Turkish ships, after which they sailed 70 boats to Kefe, captured it and liberated many captives. From the shores of Crimea, they headed to the south shore of the Black Sea and captured Trabzon. Here, too, the Don and Zaporozhian Cossacks acted jointly and simultaneously. Contemporaries credited the leader of the Cossack fleet, the Hetman Peter Sagaydanovich, with, “during his hetmanship, taking from Turkey the town of Kefe; that even the Turkish Caesar was in a great fear.” Soon after the capture of Sinop and Trabzon, the Cossack fleet appeared under the walls of Istanbul (Constantinople), the capital of the Turkish Empire. The expedition comprised Don and Zaporozhian Cossacks under the command of the Ataman Shil.
The amphibious expeditions of the Zaporozhian Cossacks were, indubitably, a remarkable military undertaking, playing an important role in the defense of Russian and Ukrainian lands from the predatory forays of Crimean Tatars.
The Zaporozhian and Don Cossacks participated equally in the naval campaigns on the Black Sea. Yet the campaigns of the Zaporozhian Cossacks are celebrated by Russian historians, while the Don campaigns remain relatively unknown, even though the Don naval expeditions not only held their own in scope, as compared to the Zaporozhian ones, but frequently outdid them.
The Cossack naval sorties were, of course, a source of constant diplomatic tension between Russia and Turkey, the latter demanding the cessation of Cossack raids on the Black Sea coastline. This explains the fact that the Muscovite government, in its missives, forbade the Cossacks from pillaging the Crimean and Turkish shores. However, this injunction was just a formality, since the Muscovite government had a vested interest in the existence of a permanent Cossack fleet on the Don, to counterbalance the Turkish naval force that held absolute dominion over the Black and Azov Seas. As a result, the Cossack flotillas not only didn't disband, but in fact were reinforced with new boats built in Voronezh at the expense of the Tsar’s treasury. Thus, the Tsar’s missive of 1627, which reprimanded the Don Cossacks for raids on Crimean and Turkish lands, at the same time allowed the Cossacks to retain 14 boats on the Don for escorting Turkish and Russian ambassadors.
The greatest moment in the history of Don Cossack naval campaigns was the capture of Azov. The Azov fortress stood on the left bank of the Don, close to its mouth. Consequently, such a fortress could only be besieged by river or light sea vessels. The Cossacks’ siege of Azov began on April 21, and continued for two months. The Cossacks took Azov on June 18, 1637, and “killed many people.” During the siege, the Cossacks pounded the city walls with cannons, surrounded the fortress with trenches, and attempted to sap the towers. All this was made possible only by the fact that the Cossacks possessed adequate artillery and ammunition, provided by Moscow. The arrival in Azov of the nobleman Stepan Chirikov and the Ataman Ivan Katorzhnop from Moscow, bearing bread, gunpowder and money was considered by a writer of a historical novel about Azov to be the turning point of the siege of that stronghold. That day "the Great Don host” shelled the castle walls with rifles and cannons.
Azov quickly became the Don Cossack capital and the destination of a steady flow of Zaporozhian Cossack migrants from Ukraine, whose numbers were estimated by the Don migrants themselves to be, "in Azov and on the Don," 10,000 strong. Soon Azov established trade relations with Kerch and Taman, from which two ships came with Turkish merchants and goods. Azov gained an even greater significance as a naval base, from which light Cossack boats could set out to sea.
In 1638, the Cossack fleet encountered 44 Turkish galleys. The confrontation occurred during a storm that destroyed six of the Turkish galleys. The galley fleet, according to Cossack intelligence, was to serve as a barrier for Cossack vessels in the Kerch Strait. The Cossack flotilla, according to a Russian chronicler, comprised 40 boats manned by 2000 crew (on average 50 men per vessel). The battle continued all day, and by night both sides retreated: the galleys went out to sea, the Cossacks – to the shores of the Azov Sea. The next day, the sea battle recommenced: “there was a big battle made, and a great smoke rose up.” The Cossack fleet of 53 vessels and 1700 crew, after an unsuccessful attack on Kefe, escaped into branches of the Kuban. The Turkish fleet blocked off the mouth of the Kuban and pursued the Cossacks in smaller vessels.
This was not the end of the Cossack operations on the Black Sea. Receiving orders from Moscow, the Cossacks sent 37 large boats into the Black Sea. Here the Cossack flotilla encountered Turkish galleys and engaged them. The Cossacks had to battle a mighty Turkish squadron of 80 large and 100 small battleships. Nevertheless, they captured 5 galleys and sank them together with their cannons. The Cossacks' military operations on the sea continued for three weeks. The Cossack boats, damaged by cannon fire from the Turkish vessels, were moored by the shore, and the Cossacks returned to Azov by land.
The combat activity on the sea was renewed soon after the Cossacks abandoned Azov in 1642. The Don "water route" allowed the possibility of dealing severe blows on Crimea and Turkey, a fact duly noted in Russian administrative circles. In 1646, the nobleman Zhdan Kondyryov came to the Don as a representative of the government, and was to accompany the Don Cossacks to the Crimean shores. The leader of the Don host suspected that Kondyryov’s mission was an attempt by the Muscovite government to establish control over Cossack naval campaigns. The Cossacks diplomatically explained their unwillingness to obey Kondyryov with the claim that he would hardly withstand the Cossacks’ sea and land travels, being a “delicate man.” In like manner, they also raised the issue of the quality of sea vessels that could be used for a sea sortie to the Crimean shores. Agreeing to set out on 30 boats, the Cossacks pointed out that expeditions to the Turkish and Crimean shores require 300-400 boats.
Around this time, the Muscovite government made attempts to establish a fleet on the Don. To build the Don fleet, 100 boats were to be collected in cities on the Volga, “made from a single wood, suitable for sea travel.” In the case that such a quantity of boats could not be found, the remainder was to be immediately constructed in Kazan. All collected and constructed boats were to be sailed down the Volga to Tsaritsyn, loaded with rye flour. From Tsaritsyn, the boats had to be portaged by land to the Don, with the use of iron-clad rollers. The experts appointed to determine “which kind of boats are needed” were Don Cossacks.
The extent to which a Cossack flotilla consisting of such vessels posed a serious threat to the Turks is evidenced by the fact that in June of the same year, 1646, the Cossacks seized two Turkish ships, together with cannons and ammunition, without any resistance. The Turkish crews abandoned the ships upon hearing shots fired from Azov. Moreover, the Cossacks burned three more Turkish ships near Azov.
Thus, the construction of the fleet on the Don under Peter I was not an absolute novelty. Peter profited from the experience of earlier achievements of the 17th century. In the mid-17th century, the Don fleet, apparently, was not completed, as Russia stood on the brink of war with the Rzeczpospolita over Ukraine and Belarus, but the naval campaigns of the Cossacks were nevertheless a remarkable phenomenon in the history of the fleet, demonstrating the high level of martial skill possessed by the Russian and Ukrainian people.
The Caspian Sea also became a theater for campaigns of the Don and Yaik (Ural) Cossacks. The most courageous of the Cossack expeditions in the Caspian Sea is the well-known Persian campaign of Stepan Razin, which was begun in March of 1668. The Cossack fleet comprised 24 boats. The Cossacks moved along the western shore of the Caspian Sea to the Terek delta, where Razin was joined by Ataman Sergey Krivoy. From there, the Cossacks headed to Derbent, Baku and further south. Passing the winter on the Miyan-Kala Peninsula, they marauded the eastern shores of the Caspian, afterwards retreating to the isle of Suina, near the mouth of the Kura, where they destroyed a joint fleet of Persians and Kumyks, comprised of 70 vessels, and captured 33 cannons.
Due to the development of trade relations with nations of the East, the Muscovite government undertook a series of measures for the defense of the Volga route. At the end of the 16th century, the cities of Samara, Saratov and Tsaritsyn were built on the Volga, and a stone citadel erected in Astrakhan. Yet complete safety of travel on the Caspian Sea could only be guaranteed with the creation of naval vessels. A fleet on the Volga was to secure Russia’s position in the Caspian basin and provide assistance to the Cossack flotillas on the Don.
The first attempt in this direction was made in the 17th century, beginning with the construction of a ship by 50 Russian carpenters in Nizhny Novgorod. The ship was built “of pine boards,” with a flat bottom. It had a length of about 38 m and a width of 12.5 m. The ship was propelled by sails, and in the absence of wind, by oars (it had 12 pairs of oars, with two rowers per oar). Its armament consisted of several cannons.
This ship set sail on July 30, 1636, when the Volga had already begun to grow shallow. The trip was mostly uneventful, other than delays in the shallows. On September 15, a month and a half after leaving Nizhny Novgorod, the ship arrived at Astrakhan. From Astrakhan, it headed further only on October 10. The sailing on the Caspian Sea was difficult and ended in catastrophe: on November 14, 1636, the ship was thrown ashore by a storm south of Derbent. The failure of this first military ship was due, primarily, to the expedition’s poor preparation for sailing in the stormy Caspian waters.
The necessity of creating a naval flotilla on the Caspian Sea was clearly recognized by the Muscovite government. Also not insignificant was the Tsarist government’s endeavor to keep the “Volga outlaws” in fear by creating a naval fleet on the Volga and in the Caspian Sea, at a time when the peasant uprising led by Stepan Razin was already beginning. A. L. Ordin-Naschokin, at the peak of his fame during this time, was the initiator of the construction of a new military vessel “for sorties out of Astrakhan and into the Khvalyn Sea.” Naschokin clearly saw the tremendous possibilities for marine trade on the Caspian Sea that would open with the creation of a naval flotilla there. The order for the construction of the ship was given on June 19, 1667. The location selected for the construction was the crown village of Dedinovo, on the banks of the Oka, where river vessels – mainly flat-bottomed boats – had long been built. Among the workers sent to Dedinovo for the ship’s construction were Russian masters and 30 carpenters “that had previously been building ‘busa’ and ‘strug’ boats.” Thus, the construction of the ship in Dedinovo was carried out by the hands of Russian masters. Materials for the ship were likewise of Russian manufacture: the iron, for instance, was delivered from the Tula and Kashira factories. The ship, later given the name Oryol (Eagle), was begun in November of 1667. It had a length of 24.5 m, a width of 6.5 m, and a draft of 1.5 m. Simultaneously, a small yacht was also being built, for courier service, with an armament of six small cannons, a boat and two other small craft.
The logging was carried out in the Kolomna region, and the iron, “the fittest for shipbuilding,” was supplied by the Tula and Kashira factories. Overseeing the shipbuilding was entrusted to Yakov Poluyektov. In January of 1668, the state of the ship’s construction was described thus: “the ship’s bottom and sides are formed, and the bent wood is all nailed down, and timber for the top is being ground.” By March of 1668, the Oryol’s hull was in a sufficient state of readiness to send for a painter and an engraver to perform finishing and decorative work. In May of 1668, the ship was lowered onto the water, but the decorative work was lagging behind, and the Oryol passed the winter in Dedinovo. An inspection of the ship established its complete fitness for sailing in the Caspian Sea. Furthermore, Astrakhanians indicated that the vessels sailing the Caspian Sea followed “the same design.”
In April of 1669, the ship was named the Oryol (Eagle), and a depiction of the eagle as the Russian national coat of arms was sewn onto the ship’s flags. On May 7, the new vessel raised its sails and left port. The entire trip from Dedinovo to Astrakhan took three and a half months.
The commissioning of the first military ship necessitated the organization of service aboard it. A short marine charter project, in the form of a "letter of a ship’s regime" (i.e., organization), was introduced in the Ministry of Foreign Affairs. This “letter” consisted of an introduction and 34 regulatory articles, which outlined the basic rules of ship service, the responsibilities and interrelationship of the captain with other servicemen aboard the vessel, as well as short instructions for crew actions while moored, while under way, during battle and under other circumstances. These articles, having received the Tsar’s approval, were evidence that the construction of the Oryol was not a random phenomenon in the history of the Russian nation, but the true beginning of the creation of a regular naval fleet.
The Oryol arrived in Astrakhan in troubled times. The entire Volga was in the grips of an uprising against the Tsarist administration, at the head of which stood Stepan Timofeyevich Razin. Soon after the capture of Astrakhan by the rebels, the Oryol was burned: its operation and sail equipment were too difficult to master, while the ship posed a danger to the rebels if captured by Tsarist troops.
The construction of a military vessel in Dedinovo, as well as the boat construction on the Don, did not pass into history in vain for the Russian fleet. Perhaps even the famous “Brandt’s boat,” discovered by Peter I in a barn, was a remnant of the construction that took place on the Oka in 1667-1668. Peter I’s work on the construction of a Russian fleet had precedents and relied on their broad experience. Russia already possessed experienced ship carpenters who had taken part in the building of small military and merchant vessels. It was their experience that Peter I used in the construction of ships on the Baltic Sea, and earlier on the Don, during the Azov campaigns. Russia also possessed experienced sailors, familiar with sailing conditions on the White and the Barents, and on the Black and Caspian Seas. Peter’s shipbuilding efforts would have been severely stunted had Russia not already possessed experienced manpower in the form of ship carpenters, captains and sailors, and experience in building military ships.
Greening your garden is not just about having the lushest lawn or the most vibrant rosebushes in the neighborhood. Instead, it's about creating a garden you can enjoy today while keeping an eye on sustainability. It's using water efficiently, protecting the quality of air and water supplies, and replacing harsh chemicals with natural, healthy alternatives.
So what's the big deal with going green anyway, and what does your lawn have to do with protecting the environment? To understand the benefits of going green, it helps to understand how the greenhouse effect works. Nearly every item we use throughout the day is manufactured and transported using energy produced by fossil fuels. As these fuels are burned, they release carbon dioxide and other pollutants. These pollutants form a thick blanket across the Earth's atmosphere that traps heat. This trapped heat contributes to global warming, which may one day dramatically impact wildlife, plants, shorelines and human life. By choosing recycled or local products, you can minimize your contribution to this problem and help protect the future of our ecosystem.
In addition to global warming, green proponents are concerned with preserving limited resources, such as the world's water supply. According to the United Nations, at least two-thirds of the world's population is expected to experience water shortages by 2025 if current consumption patterns continue [source: United Nations]. By relying on more efficient methods of watering your plants, you can help reduce shortages and improve access to water in the future.
But a green garden does more than protect future sustainability. Simple greening techniques can offer benefits in real time, providing cleaner air and water supplies to you and your family.
The best part about greening your garden is how easy it is. Sure, you could search garden centers for the perfect chemical fertilizer, or switch to composting and get your fertilizer at home for free. You could fill your garden with exotic flowers and fight an endless battle to keep them alive, or you could switch to native plants and spend your time enjoying your garden instead of struggling against it. Going green is not only the healthy choice, but more often than not, it's the easiest and most affordable choice. Truly sustainable gardening is about finding simple, natural solutions, not about investing in the latest trends.
Did you know that you can recycle tea bags and grass clippings to help your garden grow? Read on to learn how composting can provide eco-friendly fertilizer.
What is the most important scientific idea of all time?
Obviously, the answer to so rich a question is up for grabs, but let me opt for uniformitarianism, the idea that the present is the key to the past and future.
Uniformitarianism assumes that the same laws and processes we see operating in the world today were at work also in the past. If we want to understand the history of the universe, the Earth, or of life, we create our explanations using the world as we find it today. No unique causalities. No miracles. No changing laws of nature.
You run the tape backwards from the present. Or, if you wish, into the future. You follow the tape wherever it leads, even, perhaps, to the big bang itself.
Maybe I shouldn't call this a scientific idea, but rather a philosophical principle. It is not something we can prove, not having direct access to the past or future. But the application of the uniformitarian idea was fundamental to modern geology, evolutionary biology and cosmology. It was the door that opened onto the grand vistas of modern science. The proof of the pudding, as they say, is in the eating.
The idea has a long pedigree, but it is usually credited to James Hutton, the 18th-century Scot, sometime gentleman farmer and sometime Edinburgh bon vivant. Fiercely resisted at the time, for both scientific and religious reasons, it triumphed -- in Lyell's geology, Darwin's biology, and Hubble's cosmology -- by the sheer grandeur of its explanatory power.
It is not an exaggeration to say that we have yet to encounter reliable evidence of any past event that cannot be explained -- at least in principle-- by natural processes acting in the world today.
- The Government
- United Kingdom
For centuries, the UK has used the first-past-the-post system to elect its MPs. This system is out-dated and promotes inequality.
The two main parties (Labour and Conservative) are over-represented and the rest are under-represented. Most importantly, there is no proportionality.
The Labour government is currently running our country with the support of a mere 35.2% of the voters. The Green party can get 15% and not win a single seat. Is this fair?
I propose a system where, if a party gets 10% of the votes, it will also get 10% of the seats in Parliament - i.e., a FAIR system.
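As an illustrative sketch only (not part of the petition), the largest-remainder method with a Hare quota is one common way to make seat shares track vote shares; the party names and vote counts below are hypothetical.

```python
# Largest-remainder (Hare quota) apportionment: each party's seats
# track its vote share as closely as whole seats allow.
def allocate_seats(votes, seats):
    total = sum(votes.values())
    quota = total / seats              # votes "worth" one seat
    # Give each party the whole-number part of its entitlement...
    alloc = {p: int(v / quota) for p, v in votes.items()}
    leftover = seats - sum(alloc.values())
    # ...then hand the remaining seats to the largest fractional remainders.
    by_remainder = sorted(votes, key=lambda p: votes[p] / quota - alloc[p], reverse=True)
    for p in by_remainder[:leftover]:
        alloc[p] += 1
    return alloc

if __name__ == "__main__":
    # Hypothetical vote totals (in thousands) for a 100-seat chamber.
    votes = {"Labour": 352, "Conservative": 310, "LibDem": 180, "Green": 150, "Other": 8}
    print(allocate_seats(votes, 100))
```

With these numbers, a party polling 15% of the vote ends up with 15 of the 100 seats - exactly the proportionality the petition asks for.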
8 Class Physics Working Models
Working class (or Lower class, Labouring class) is a term used in the social sciences and in ordinary conversation to describe those employed in lower-tier jobs (as measured by skill, education and income), often extending to those in unemployment or otherwise possessing below-average incomes. Working classes are mainly found in industrialized economies and in urban areas of non-industrialized economies.
As with many terms describing social class, working class is defined and used in many different ways. When used non-academically, it typically refers to a section of society dependent on physical labor, especially when compensated with an hourly wage. Its use in academic discourse is contentious, especially following the decline of manual labor in postindustrial societies. Some academics question the usefulness of the concept of a working class.
The term is usually contrasted with the Upper class and Middle class, in general terms of access to economic resources, education and cultural interests. The cut-off between Working class and Middle class is more specifically where a population spends money primarily as a lifestyle rather than for sustenance (for example, on fashion versus merely nutrition and shelter).
Its usage can alternately be derogatory, or can express a sense of pride in those who self-identify as Working class.
Definitions of social classes reflect a number of sociological perspectives, informed by anthropology, economics, psychology and sociology. The major perspectives historically have been Marxism and Functionalism. The parameters which define working class depend on the scheme used to define social class. For example, a simple stratum model of class might divide society into a simple hierarchy of lower class, middle class and upper class, with working class not specifically designated. Due to the political interest in the working class, there has been debate over the nature of the working class since the early 19th century. Two broad schools of definitions emerge: those aligned with 20th-century sociological stratum models of class society, and those aligned with the 19th-century historical materialism economic models of the Marxists and anarchists. Key points of commonality amongst various ideas include the idea that there is one working class, even though it may be internally divided. The idea of one single working class should be contrasted with 18th-century conceptions of many laboring classes. Sociologists Dennis Gilbert, James Henslin, William Thompson, Joseph Hickey and Thomas Ayling have brought forth class models in which the working class constitutes roughly one third of the population, with the majority of the population being either working or lower class.
Karl Marx defined the working class or proletariat as individuals who sell their labor power for wages and who do not own the means of production. He argued that they were responsible for creating the wealth of a society. He asserted that the working class physically build bridges, craft furniture, grow food, and nurse children, but do not own land, or factories. A sub-section of the proletariat, the lumpenproletariat (rag-proletariat), are the extremely poor and unemployed, such as day laborers and homeless people.
In The Communist Manifesto, Marx argued that it was the destiny of the working class to displace the capitalist system, with the dictatorship of the proletariat, abolishing the social relationships underpinning the class system and then developing into a future communist society in which "the free development of each is the condition for the free development of all." In Capital, Marx dissected the ways in which capital can forestall such a revolutionary extension of the Enlightenment. Marxist arguments about working-class membership have raised a number of issues.
In some ways we would not have computers today were it not for physics. Furthermore, the needs of physics have stimulated computer development at every step. This all started due to one man's desire to eliminate needless work by transferring it to a machine. Charles Babbage (1791–1871) was a well-to-do Englishman attending Cambridge University in the early 1800s. One day he was nodding off over a book containing tables of astronomical phenomena. He fancied that he would become an astronomical mathematician. The motion of heavenly bodies was, of course, governed by the laws of physics. For a moment, he thought of having the tables calculated automatically. This idea came up several times in succeeding years until he finally designed a calculator, the Difference Engine, that could figure the numbers and print the tables. A version of the Difference Engine made by someone else found its way to the Dudley Observatory in Albany, New York, where it merrily cranked out numbers until the 1920s. Babbage followed this machine with a programmable version, the Analytical Engine, which was never built. The Analytical Engine, planned as a more robust successor to the Difference Engine, is considered by many to be the first example of a modern computer.

In the late 1800s, mathematician and scientist Lord Kelvin (William Thomson) (1824–1907) tried to understand wave phenomena by building a mechanical analog computer that modeled the waves on beaches in England. This was a continuation of the thread of mechanical computation applied to understand physical phenomena in the 1800s. In the 1920s, physicist Vannevar Bush (1890–1974) of the Massachusetts Institute of Technology built a Differential Analyzer that used a combination of mechanical and electrical parts to create an analog computer useful for many problems. The Differential Analyzer was especially suited for physics calculations, as its output was a smooth curve showing the results of mathematical modeling. This curve was very accurate, more so than the slide rules that were the ubiquitous calculators in physics and engineering in the first seven decades of the twentieth century.

Beginning during World War II and finishing just after the war ended, the Moore School of the University of Pennsylvania built an electronic digital computer for the U.S. Army. One of the first problems run on it was a model of a nuclear explosion. The advent of digital computers opened up whole new realms of research for physicists. Physicists like digital computers because they are fast. Thus, big problems can be figured out, and calculations that are boring and repetitious by hand can be transferred to computers. Some of the first subroutines, blocks of computer code executed many times during the run of a program, were inspired by the needs of physics.

Even though digital computers were fast with repetitious tasks, the use of approximation and visualization has had the largest effect on physicists using electronic computers. Analog machines, both mechanical and electronic, have output that models real-world curves and other shapes representing certain kinds of mathematics. To calculate the mathematical solution of physical problems on digital computers meant the use of approximation. For example, the area under a curve (the integral) is approximated by dividing the space below the curve into rectangles, figuring out their area, and adding the small areas to find the one big area.
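As a minimal modern sketch of that rectangle method (the function, interval, and rectangle counts here are illustrative choices, not taken from any historical machine):

```python
# Approximate the area under a curve by summing thin rectangles;
# using more, narrower rectangles improves the estimate.
import math

def riemann_area(f, a, b, n):
    """Midpoint-rule approximation of the integral of f from a to b."""
    width = (b - a) / n
    # Evaluate f at the center of each rectangle and total the areas.
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

if __name__ == "__main__":
    exact = 2.0  # the integral of sin(x) from 0 to pi
    for n in (4, 64, 1024):
        approx = riemann_area(math.sin, 0.0, math.pi, n)
        print(f"n={n:5d}  area ~= {approx:.6f}  error = {abs(approx - exact):.2e}")
```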
As computers got faster, such approximations were made up of an ever-increasing number of smaller rectangles. Visualization is probably the physicist's task most aided by computers. The outputs of Lord Kelvin's machine and the Differential Analyzer were drawn by pens connected to the computational components of the machine. The early digital computers could print rough curves, supplemented by cleaner curves done on a larger scale by big plotters. Interestingly, the plotters drew what appeared to be smooth lines by drawing numerous tiny straight lines, just as a newspaper photograph is really a large number of gray points with different shades. Even these primitive drawing tools were a significant advance. They permitted physicists to see much more than could be calculated by hand.

In the 1960s, physicists took millions of photographs of sub-atomic particle collisions. These were then processed with human intervention. A "scanner" (usually a student) using a special machine would have the photographs of the collisions brought up one by one. The scanner would use a trackball to place a cursor over a sub-atomic particle track. At each point the scanner would press a button, which then allowed the machine to punch the coordinates on a card. These thousands upon thousands of cards were processed to calculate the mass and velocity of the various known and newly discovered particles. These were such big jobs that they were often run on a computer overnight. Physicists could use the printed output of batch-type computer systems to visualize mentally what was really happening. This is one of the first examples of truly large-scale computing. In fact, most of the big calculations done over the first decades of electronic digital computing had some relationship to physics, including atomic bomb models, satellite orbits, and cyclotron experiments.

The advent of powerful workstations and desktop systems with color displays ended the roughness and guessing of early forms of visualization. Now, many invisible phenomena, such as fields, waves, and quantum mechanics, can be modeled accurately in full color. This is helping to eliminate erroneous ideas inspired by the poor visualizations of years past. Also, these computer game–quality images can be used to train the next generation of physics students and their counterparts in chemistry and biology classes, making tangible what was invisible before.

Finally, the latest and perhaps most pervasive of physics-inspired computer developments is the World Wide Web. It was first developed as a way of easily sharing data, including graphics, among researchers in the European cyclotron community and also for those outside of it with appropriate interests. So whenever a browser is launched, 200 years of physics-driven computer development is commemorated.

see also Astronomy; Data Visualization; Mathematics; Navigation.

James E. Tomayko

Bibliography: Merrill, John R. Using Computers in Physics. Boston: Houghton Mifflin Company, 1976.
physics, branch of science traditionally defined as the study of matter, energy, and the relation between them; it was called natural philosophy until the late 19th cent. and is still known by this name at a few universities. Physics is in some senses the oldest and most basic pure science; its discoveries find applications throughout the natural sciences, since matter and energy are the basic constituents of the natural world. The other sciences are generally more limited in their scope and may be considered branches that have split off from physics to become sciences in their own right. Physics today may be divided loosely into classical physics and modern physics.

Classical Physics

Classical physics includes the traditional branches and topics that were recognized and fairly well developed before the beginning of the 20th cent.—mechanics, sound, light, heat, and electricity and magnetism. Mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies at rest), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics, the latter including such branches as hydrostatics, hydrodynamics, aerodynamics, and pneumatics. Acoustics, the study of sound, is often considered a branch of mechanics because sound is due to the motions of the particles of air or other medium through which sound waves can travel and thus can be explained in terms of the laws of mechanics. Among the important modern branches of acoustics is ultrasonics, the study of sound waves of very high frequency, beyond the range of human hearing. Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion (see spectrum), and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th cent.; an electric current gives rise to a magnetic field and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.

Modern Physics

Most of classical physics is concerned with matter and energy on the normal scale of observation; by contrast, much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on the very large or very small scale. For example, atomic and nuclear physics studies matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale, being concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in large particle accelerators. On this scale, ordinary, commonsense notions of space, time, matter, and energy are no longer valid.
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. The quantum theory is concerned with the discrete, rather than continuous, nature of many phenomena at the atomic and subatomic level, and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with relative uniform motion in a straight line and the general theory of relativity with accelerated motion and its connection with gravitation. Both the quantum theory and the theory of relativity find applications in all areas of modern physics.

Evolution of Physics

Greek Contributions

The earliest history of physics is interrelated with that of the other sciences. A number of contributions were made during the period of Greek civilization, dating from Thales and the early Ionian natural philosophers in the Greek colonies of Asia Minor (6th and 5th cent. BC). Democritus (c.460-370 BC) proposed an atomic theory of matter and extended it to other phenomena as well, but the dominant theories of matter held that it was formed of a few basic elements, usually earth, air, fire, and water. In the school founded by Pythagoras of Samos the principal concept was that of number; it was applied to all aspects of the universe, from planetary orbits to the lengths of strings used to sound musical notes. The most important philosophy of the Greek period was produced by two men at Athens, Plato (427-347 BC) and his student Aristotle (384-322 BC); Aristotle in particular had a critical influence on the development of science in general and physics in particular. The Greek approach to physics was largely geometrical and reached its peak with Archimedes (287-212 BC), who studied a wide range of problems and anticipated the methods of the calculus. Another important scientist of the early Hellenistic period, centered in Alexandria, Egypt, was the astronomer Aristarchus (c.310-220 BC), who proposed a heliocentric, or sun-centered, system of the universe. However, just as the earlier atomic theory had not become generally accepted, so too the astronomical system that eventually prevailed was the geocentric system proposed by Hipparchus (190-120 BC) and developed in detail by Ptolemy (AD 85-AD 165).

Preservation of Learning

With the passing of the Greek civilization and the Roman civilization that followed it, Greek learning passed into the hands of the Muslim world that spread its influence from the E Mediterranean eastward into Asia, where it picked up contributions from the Chinese (papermaking, gunpowder) and the Hindus (the place-value decimal number system with a zero), and westward as far as Spain, where Islamic culture flourished in Córdoba, Toledo, and other cities. Little specific advance was made in physics during this period, but the preservation and study of Greek science by the Muslim world made possible the revival of learning in the West beginning in the 12th and 13th cent.

The Scientific Revolution

The first areas of physics to receive close attention were mechanics and the study of planetary motions. Modern mechanics dates from the work of Galileo and Simon Stevin in the late 16th and early 17th cent.
The great breakthrough in astronomy was made by Nicolaus Copernicus, who proposed (1543) the heliocentric model of the solar system that was later modified by Johannes Kepler (using observations by Tycho Brahe) into the description of planetary motions that is still accepted today. Galileo gave his support to this new system and applied his discoveries in mechanics to its explanation. The full explanation of both celestial and terrestrial motions was not given until 1687, when Isaac Newton published his Principia [Mathematical Principles of Natural Philosophy]. This work, the most important document of the Scientific Revolution of the 16th and 17th cent., contained Newton's famous three laws of motion and showed how the principle of universal gravitation could be used to explain the behavior not only of falling bodies on the earth but also planets and other celestial bodies in the heavens. To arrive at his results, Newton invented one form of an entirely new branch of mathematics, the calculus (also invented independently by G. W. Leibniz), which was to become an essential tool in much of the later development in most branches of physics. Other branches of physics also received attention during this period. William Gilbert, court physician to Queen Elizabeth I, published (1600) an
From Yahoo Answers
Answers: Take a cardboard and stick a paper on it. On the left side write (a+b)^2, to be solved by putting in values; on the right side write a^2 + b^2 + 2ab, and use matchsticks to represent the values. Hope you understand.
Answers: Just do a pendulum. Attach a weight to a string and let it swing.
Answers: Can you get your hands on pulleys? Pulley systems are relatively easy to set up and make an awesome demonstration of a lever-type system. If set up right (on a firm or mounted ring stand) they can lift quite interesting amounts of weight with little force. Perfect for "Work and Energy" in Class IX physics. The web site might give some help if you are in India.
Electronic cigarettes pose a threat to adolescents and should not be sold to minors, the World Health Organisation (WHO) says, in a long-awaited report that calls for strict regulation of the devices.
In the 13-page report, which will be debated by member states at a meeting in October in Moscow, the United Nations health agency also voiced concern at the concentration of the $3 billion market in the hands of transnational tobacco companies.
The WHO declared war on "Big Tobacco" a decade ago, clinching the WHO Framework Convention on Tobacco Control (FCTC), the world's first public health treaty that has been ratified by 179 states since entering into force in 2005.
The treaty recommends price and tax measures to curb demand as well as bans on tobacco advertising and illicit trade in tobacco products.
Prior to Tuesday's report the WHO had indicated it would favour applying similar restrictions to all nicotine-containing products, including smokeless ones.
The WHO urged a range of "regulatory options", including prohibiting e-cigarette makers from making health claims — such as that they help people quit smoking — until they provide "convincing supporting scientific evidence and obtain regulatory approval".
E-cigarettes should be regulated to "minimise content and emissions of toxicants", and those solutions with fruit, candy-like and alcohol-drinks flavours should be banned, it said. Vending machines should be removed in almost all locations.
The use of e-cigarettes poses a threat to adolescents and the foetuses of pregnant women, the report said.
E-cigarettes also increase the exposure of bystanders and non-smokers to nicotine and other toxicants, it said of the devices, which it calls electronic nicotine delivery systems (ENDS).
"In summary, existing evidence shows that ENDS aerosol is not merely 'water vapour' as is often claimed in the marketing or these products," the WHO said in the report.
Scientists are divided on the risks and potential benefits of e-cigarettes, which are widely considered to be a lot less harmful than conventional cigarettes.
One group of researchers warned the WHO in May not to classify them as tobacco products, arguing that doing so would jeopardise a major opportunity to slash disease and deaths caused by smoking.
But opposing experts argued a month later that the WHO should hold firm to its plan for strict regulations.
A total of 178 countries are parties to the FCTC and are obliged to implement its measures, with the United States the one notable non-signatory.
Major tobacco companies including Imperial Tobacco, Altria Group, Philip Morris International and British American Tobacco are increasingly launching their own e-cigarette brands as sales of conventional products stall in Western markets.
A Wells Fargo analyst report in July projected that US sales of e-cigarettes would outpace conventional ones by 2020.
Uptake of electronic cigarettes, which use battery-powered cartridges to produce a nicotine-laced inhalable vapour, has rocketed in the past two years and analysts estimate the industry had worldwide sales of some $3 billion in 2013.
But the devices are controversial. Because they are so new there is a lack of long-term scientific evidence to support their safety and some fear they could be "gateway" products to nicotine addiction and tobacco smoking.
Adolescents are increasingly experimenting with e-cigarettes, with their use in this age group doubling between 2008 and 2012, the WHO said.
A study by US researchers published on Monday found they may be more tempting to non-smoking youths than conventional cigarettes.
LONDON, England (CNN) -- Up to half of the world's magnolia species are in danger of extinction, according to a new study by conservationists.
While popular ornamental species continue to bloom in gardens, the flowering plants face a more precarious future in the wild as their native forest habitats are increasingly threatened by human activities, the authors warn.
The Red List of the Magnoliaceae, produced jointly by Botanic Gardens Conservation International (BGCI) and Fauna & Flora International (FFI) following a global mapping project by researchers at the UK's Bournemouth University, identifies 131 endangered species from a worldwide total of 245.
Some two-thirds of magnolias are found in Asia, but the subtropical plant also thrives in parts of the U.S. and South America, BGCI Secretary General Sara Oldfield told CNN.
Oldfield said that widespread deforestation posed the biggest risk to magnolias in the wild: "General forest loss is the main threat, so it varies according to where the species is.
"For example, some of the species in Colombia are threatened by the development of coffee plantations or banana plantations. And on top of that there's exploitation of certain species. Some of them are used medically and that places an extra strain on the species. Others are used for timber and some of them are edible as well."
Many of the most critically affected species are found in China, including Magnolia phanerophlebia, of which only around 200 trees are estimated to exist in the wild and Magnolia sinica, believed to have a single population of fewer than 10 mature trees. Both species grow exclusively in Yunnan province -- now the focus of an extensive re-planting campaign organized by FFI.
"We hope to be able to extend this work to take action for other species, both in China and in other parts of the world." said Georgina Magin, Global Trees Campaign Coordinator at FFI.
China is also hosting the Global Botanic Gardens Congress in Wuhan this month, where the BGCI will launch a survey of garden collections of threatened species.
Oldfield said botanical gardens had a role to play in alleviating the risk to magnolias but warned it would be a "tragedy" to allow magnolia numbers in the wild to continue falling.
"I think botanical gardens can provide a good insurance policy, by bringing them into cultivation as a way of making sure they don't go completely extinct, but we want them to be secure in the wild as well," she said. "We've got to protect their habitats as well."
As well as helping to safeguard the future of a plant that has been cultivated by humans for centuries -- some specimens growing in Chinese temples are estimated to be up to 800 years old -- magnolias are a useful subject of study for conservationists because, as one of the oldest species of flowering plants, they provide a good indicator to the overall health of the wider forest.
"They are an ancient family," said Oldfield. "They've survived all sorts of geological and climactic upheavals in the past -- so now we know their status and we've mapped them we can use them to monitor what happens to the forests in the future."
Oldfield told CNN that magnolias' plight was merely one example of how environmental degradation was threatening plant life.
"We just happen to have up to date information on magnolias, but it's symptomatic of what's happening to plants in the wild in general," Oldfield said.
"Plants are taken for granted; it's a little green blindness. There's always the emphasis on saving animals, but actually if you don't save the plants then everything will go extinct."
We recently wrote about how the spending bill signed in December favors solar power with better and longer renewable energy tax credits than it gives to wind power. However, solar also did better in the “extenders” bill than the one technology responsible for generating the most energy from renewables in the US today: biomass.
The tax extenders package benefits biomass power with an extension of the Section 45 production tax credit (PTC). The PTC for technologies other than solar or wind has been extended for two years, through Dec. 31, 2016. The incentive amount for wind, geothermal, and “closed-loop” biomass — the kind that does not create carbon dioxide — is $0.023 per kilowatt-hour. For other eligible technologies such as fuel biomass, municipal solid waste, landfill gas and others, the credit is $0.012 per kilowatt-hour.
In contrast, the legislation allows solar power companies to keep claiming federal tax credits at 30% of the price of a solar array. The credits, which apply to home solar kits as well as big commercial installations, will be good through 2019. After that, though, the credit will begin to drop, declining to 10% in 2022. Credits through 2022 vs. $0.012 per kilowatt-hour for one year? Even wind did better than biomass, with its $0.023 per kilowatt-hour and an extension of those credits through 2019.
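For a rough sense of the gap, the sketch below compares one year of the biomass PTC against the one-time solar credit; the plant size, capacity factor, and project cost are hypothetical assumptions, not figures from the bill.

```python
# Back-of-the-envelope comparison of the two incentives discussed above.
def ptc_value(capacity_mw, capacity_factor, rate_per_kwh):
    """One year of production tax credit for a generating plant."""
    hours_per_year = 8760
    kwh = capacity_mw * 1000 * capacity_factor * hours_per_year
    return kwh * rate_per_kwh

def itc_value(install_cost, credit_fraction=0.30):
    """One-time investment tax credit against installed cost."""
    return install_cost * credit_fraction

if __name__ == "__main__":
    # Hypothetical 10 MW biomass plant running at an 85% capacity factor.
    print(f"Biomass PTC, one year: ${ptc_value(10, 0.85, 0.012):,.0f}")
    # Hypothetical $40M solar project claiming the 30% credit up front.
    print(f"Solar credit, up front: ${itc_value(40_000_000):,.0f}")
```

The PTC accrues per kilowatt-hour actually generated, year by year, while the solar credit is claimed once against the project's installed cost.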
What is Biomass?
Biomass is biological material derived from living, or recently living, organisms. In the context of biomass for electrical power generation, this often means plant-based material, but biomass can equally apply to both animal- and vegetable-derived material. Wood-burning stoves are a primitive form of biomass heating. Ethanol for cars also falls under the biomass category as a "biofuel," but it's not used for electrical power generation.
Maple tapping means spring is on its way
Before the April showers and the May flowers, the sap starts running in the trees. It may not be as visible as tulips sprouting or puddles forming on the ground, but it's a sure sign that warmer weather is on its way.
"It's a harbinger of spring. When you get the sap running, it's springtime," Red Wing Environmental Learning Center field instructor Brad Nagel said.
For Burnside second-graders, the running sap also means one of their first chances to get out of the classroom.
"After a long winter, it's a great opportunity to get outside," second-grade teacher Jody Sjoblom said.
For the past three years, ELC staff have been helping Burnside students tap box elder trees - a variety of maple tree - to make syrup.
"(The box elder) is a great tree to tap, and they produce a lot of sap," Nagel said.
Two weeks ago, the kids, along with Nagel and Jason Jech, ELC's executive director, suited up in their winter gear and tromped over the still snow-covered ground to a grove of trees just behind Burnside Elementary School.
With the help of Nagel and Jech, the students took turns using a hand drill to bore holes in box elder trunks. Then they pounded in a tap and attached a plastic hose, allowing the sap to collect in five-gallon buckets.
Wednesday, the students boiled that sap on a wood-burning stove to make their final product - maple syrup. Each tree can produce about 5 gallons of sap, Nagel said. While that may seem like a lot, once boiled down, it takes about 40 gallons of sap to make just 1 gallon of maple syrup.
The second-graders enjoyed their precious product over ice cream, Nagel said.
But a sweet treat isn't the only thing the students get out of the project. Maple tapping has fit in well to what has been going on inside the classroom, Sjoblom said.
Second-grade curriculum requires that students know the life cycles and purposes of plants, Sjoblom said.
"We investigate the different plants and talk about the root system, photosynthesis, chlorophyll," she said. Before the students head out to the trees, Nagel said they get a lesson in tree structure and how the sap moves from the trunk to the buds and leaves.
"Some kids have to have that hands-on testimonial," Sjoblom said.
The tree-tapping project also gives students the opportunity to look beyond plants. "It's more than just tapping maple trees," Nagel said.
Once outside, Sjoblom's class went on a nature walk, talking about animal habitats, how animals spread seeds and how each eco-system relies on one another.
"It allows us to see how everything molds to each other ... how everything is interdependent," Sjoblom said.
Wednesday, while they boiled the sap, Nagel said he used the steam to explain the water cycle to the students. "We try to give them a broader view than just standing in the mud," he said.
"There's so much outside that you can teach kids," Sjoblom agreed. "Its nice to ... bring them outside and just let them explore and observe
The sentence in question is "His first two clients were the last two persons hanged in the Maycomb County jail." This leads us to think that he did not succeed in defending them, and that he was not a particularly good attorney. However, we learn he really didn't have much of a chance to defend them because they committed their crime in "the presence of three witnesses" and then were foolish enough to plead "not guilty." Atticus "was present at their departure," meaning he saw them hanged, which is why he now has a "profound distaste for the practice of criminal law." We learn in the story, however, that Atticus is a very accomplished lawyer, and that no one but he could have made the jury deliberate about Tom as long as they did.
Like many of Percy Bysshe Shelley’s poems, “Ode to the West Wind” was inspired by a natural phenomenon, an autumn storm that prompted the poet to contemplate the links between the outer world of nature and the realm of the intellect. In five stanzas directly addressed to the powerful wind that Shelley paradoxically calls both “destroyer” and “preserver” (line 14), the poet explores the impact of the regenerative process that he sees occurring in the world around him and compares it to the impact of his own poetry, which he believes can have similar influence in regenerating mankind.
In each stanza, Shelley speaks to the West Wind as if it is an animate power. The first three stanzas form a logical unit; in them the poet looks at how the wind influences the natural terrain over which it moves. The opening lines describe the way the wind sweeps away the autumn leaves and carries off seeds of vegetation, which will lie dormant through winter until the spring comes to give them new life as plants. In the second stanza, the poet describes the clouds that whisk across the autumn sky, driven by the same fierce wind and twisted into shapes that remind him of Maenads, Greek maidens known for their wild behavior. Shelley calls the wind the harbinger of the dying year, a visible sign that a cycle of nature’s life is coming to a close. The poet uses the third stanza to describe the impact of the wind on the Mediterranean coast line and the Atlantic ocean; the wind, Shelley says, moves the waters and the undersea vegetation in much the same way it shifts the landscape.
In the final two stanzas, the speaker muses about the possibilities that his transformation by the wind would have on his ability as a poet. If he could be a leaf, a cloud, or a wave, he would be able to participate directly in the regenerative process he sees taking place in the natural world. His words—that is, his poetry—would become like these natural objects, which are scattered about the world and which serve as elements to help bring about new life. He wishes that, much like the seeds he has seen scattered about, his “leaves” (line 58), his “dead thoughts” (line 63)—his poems—could be carried across the world by the West Wind so that they could “quicken to a new birth” (line 64) at a later time, when others might take heed of their message. The final question with which the poet ends this poem is actually a note of hope: The “death” that occurs in winter is habitually followed by a “new life” every spring. The cycle of the seasons that he sees occurring around him gives Shelley hope that his works might share the fate of other objects in nature; they may be unheeded for a time, but one day they will have great impact on humankind.
JANUARY FENCING TIP OF THE MONTH
Sponsored Content by Gallagher Animal Management Systems
It is no secret that a constant supply of clean, fresh water will greatly improve livestock production. Studies have shown higher rates of average daily gain and improved overall animal health when a clean and reliable water source is available.
Sometimes a picture says a thousand words. These photos were taken early one January morning in South Dakota. The outside temperature was -12°F and the water temperature was 53°F!
When purchasing a watering system, make sure to choose one that is appropriate for the type of livestock you have, will accommodate the number of head, and is suitable for your climate.
Many systems today are energy-free and made from rugged, high-impact-resistant polyurethane.
Find a system that will provide consistent water temperature throughout the year and is energy efficient, with an enclosed baffle design that protects the valve from freezing.
Written By: Dwain Christophersen
Photos Courtesy of Rural Manufacturing
Madrid on quest to find the remains of ‘Don Quixote’ author
He has been hailed as the father of the novel, a writer who wielded the Spanish language so forcefully that it is nicknamed for him: la lengua de Cervantes (“the language of Cervantes”).
But when Miguel de Cervantes died nearly four hundred years ago, he was penniless, and his burial took place, like much of his life, in obscurity.
Although a small plaque on the side of the Convent of Trinitarians in Madrid marks the “Don Quixote” author’s final resting place, the precise location of his gravesite is unknown. The coffin containing his remains was lost during construction work years ago.
This is set to change on Monday, when a team of historians and anthropologists will use ground-penetrating radar to map the subsurface of the church's floor, hoping to find, and appropriately commemorate, the Spanish writer's remains by the end of next year.
Francisco Etxeberria, a forensic anthropologist involved in the search, does not think identifying the remains will be difficult. Cervantes was distinctive looking, a self-described toothless hunchback who lost use of his left arm while serving as a soldier during a war against the Ottoman Turks.
The convent is located in a neighborhood known for its literary prominence, leaving some to question why no effort has been expended to find Cervantes until now.
Alfonso de Ceballos-Escalera, a publisher and historian, cited a culturally Catholic idea that “what is important after a burial is the spirit and not the body and the physical remains.”
But Fernando de Prado, a historian who has been lobbying the Spanish government to fund this search for at least four years, said he thinks there is no “better moment than now” to commence the project. 2015 and 2016 will mark the fourth centenary of the publication of “Don Quixote” and Cervantes’ death, respectively.
The project is estimated to cost 100,000 euros ($138,000).
Fifteen years ago, 5 ceremonial censers were found in community plots at Tlahuac, a Mexico City delegation. Residents are celebrating the return of one of them, dedicated to Chicomecoatl, the Mexica maize goddess; a replica will be kept at the Cuitlahuac Regional Community Museum from September 4th, 2010.
It was on August 3rd, 1995, that Jesus Galindo Ortega discovered the terracotta censers covered with stucco; their dimensions range from 106 to 120 centimeters, and they present great ornamental richness as well as a good state of conservation.
These high-quality pieces represent priests dressed as deities participating in a ceremony dedicated to maize and fertility, as page 36 of the Borbonic Codex illustrates, where several lords at the Titl ceremony carry the same iconographic elements as the censers.
The censers represent Xilonen, goddess of fertility; Chicomecoatl, goddess of mature maize; Tlaloc, deity of rain; Neppatecuhtli, priest of Tlaloc, and Chalchiuhtlicue, goddess of water.
Considered an important archaeological discovery, the find was reported to the National Institute of Anthropology and History (INAH), and archaeologist Pedro Ortega, from the Direction of Archaeological Salvage, went to the site of the discovery, where he confirmed that the Mexica pieces, dating to the Late Postclassic period (1500-1520 AD), were beautiful.
After the archaeological salvage, the censers underwent restoration. In 2001 they became part of the permanent exhibition at the Mexica Hall, where the beauty of 4 of them is enjoyed daily by hundreds of visitors at the National Museum of Anthropology (MNA). The Chalchiuhtlicue representation was found incomplete and is not at display.
These censers have been part of international exhibitions, such as Aztecs, at the Royal Academy of Arts, in London; Aztechi, at Palazzo Ruspoli, in Rome; The Aztec Empire in New York and Bilbao Guggenheim museums; and The Aztec Pantheon and the Art of Empire at the Paul Getty Museum of Los Angeles.
An exact replica of the Chicomecoatl censer, created at the INAH Reproductions Workshop, will be delivered by the Institute to the Tlahuac community to celebrate the 15th anniversary of the find.
Archaeologist Pedro Ortega mentioned that the censer presents a rich polychromy in red, white, black and blue. The back of the piece is a recipient where copal was burned in Prehispanic times to thank deities for a good rain cycle and prosperous harvests.
A feast is being organized for the occasion on September 4th, 2010, at 11:00: dances and music will take place on the Tlahuac Delegation esplanade, from where a procession will carry the Chicomecoatl representation to its definitive home, the Regional Community Museum of Tlahuac.
Jesus Galindo Ortega, who presides the Tecpancalco, Atenchincalca and Tepancalco Tizic Neighborhoods Alliance, commented that INAH will deliver the other 4 replicas of the censers found in Tlahuac to be exhibited in the local museum.
According to the offer made by the Institute to the community, this will happen once it has constituted itself as a civil organization and a coadjuvant body in the preservation of the historical and archaeological heritage.
This is how Cuitlahuac Community Museum was founded, where 80 archaeological pieces are exhibited.
This work has created awareness in the community of the need to preserve Tlahuac's archaeological heritage; the area was a very important Mexica ceremonial center, as the discovery of many pieces attests. Since the censers were found, people have taken more interest in heritage conservation and have donated pieces they have found to the museum, remarked Jesus Galindo.
Cuitlahuac Regional Community Museum is located at 63 Tlahuac-Chalco Ave. in Tlahuac Center, Mexico City.
NAMED ORGANIC REACTIONS
* The disproportionation reaction of aldehydes without α-hydrogens in presence of a strong base to furnish an alcohol and a carboxylic acid is called Cannizzaro reaction. One molecule of aldehyde is reduced to the corresponding alcohol, while a second one is oxidized to the carboxylic acid.
* The applicability of Cannizzaro reaction in organic synthesis is limited as the yield is not more than 50% for either acid or alcohol formed.
* In case of aldehydes that do have α-hydrogens, the aldol condensation reaction takes place preferentially.
* The α,α,α-Trihalo aldehydes undergo the haloform reaction in strongly alkaline medium. E.g. Chloral will give chloroform in the presence of an alkali.
* The Cannizzaro reaction is initiated by the nucleophilic attack of a hydroxide ion on the carbonyl carbon of an aldehyde molecule, giving a hydrate anion. In a strongly alkaline medium, this hydrate anion can be deprotonated to give a dianion. In this second step, the hydroxide behaves as a base.
* Now a hydride ion (H−) is transferred from either the monoanionic or the dianionic species onto the carbonyl carbon of another aldehyde molecule. The strong electron-donating effect of the O− groups facilitates the hydride transfer and drives the reaction further. This is the rate-determining step of the reaction.
* Thus one molecule is oxidized to carboxylic acid and the other one is reduced to an alcohol.
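As a summary sketch (written here for benzaldehyde in sodium hydroxide), the net disproportionation is:

```latex
2\,\mathrm{C_6H_5CHO} + \mathrm{NaOH} \longrightarrow \mathrm{C_6H_5COONa} + \mathrm{C_6H_5CH_2OH}
```

One aldehyde molecule ends up as the carboxylate salt (the free acid after workup) and the other as benzyl alcohol.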
* When the reaction is carried out with D2O as solvent, the resulting alcohol does not show carbon bonded deuterium. It indicates the hydrogen is transferred from the second aldehyde molecule, and not from the solvent.
* The overall order of the reaction is usually 3 or 4.
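Consistent with an overall order of 3 or 4, the empirical rate law is commonly written as two parallel terms, one for the monoanion pathway and one for the dianion pathway; the rate constants below are simply conventional labels for those two channels:

```latex
\text{rate} = k_{3}\,[\mathrm{RCHO}]^{2}[\mathrm{OH^{-}}] + k_{4}\,[\mathrm{RCHO}]^{2}[\mathrm{OH^{-}}]^{2}
```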
* The Cannizzaro reaction takes place very slowly when electron-donating groups are present. But the reaction occurs at faster rates when electron withdrawing groups are present.
1) Formaldehyde is disproportionated to formic acid and methyl alcohol in strong alkali.
2) Benzaldehyde can be converted to benzoic acid and benzyl alcohol.
3) Furfural gives furoic acid and furfuryl alcohol in presence of strong alkali.
4) Crossed Cannizzaro reaction: When a mixture of formaldehyde and a non-enolizable aldehyde is treated with a strong base, the latter is preferentially reduced to the alcohol while formaldehyde is oxidized to formic acid. This variant is known as the crossed Cannizzaro reaction.
E.g. Benzyl alcohol and formic acid are obtained when a mixture of benzaldehyde and formaldehyde is treated with alkali.
The reason may be: the initial nucleophilic addition of hydroxide anion is faster on formaldehyde as there are no electron donating groups on it.
The preferential oxidation of formaldehyde in crossed Cannizzaro reactions may be utilized in the quantitative reduction of some aldehydes.
5) α-keto aldehydes can be converted to α-hydroxy carboxylic acids by an intramolecular Cannizzaro reaction.
E.g. Phenylglyoxal undergoes an intramolecular Cannizzaro reaction to give mandelic acid (α-hydroxyphenylacetic acid or 2-hydroxy-2-phenylethanoic acid).
6) Phthalaldehyde can undergo intramolecular Cannizzaro reaction by giving (o-hydroxymethyl) benzoic acid.
The Crusades 1095-1272

The Crusades (from the Latin word for cross) were a series of wars between European Christians and Muslims fighting for the Holy Land (Palestine, especially the city of Jerusalem).

Los cruzados ante Jerusalén / The Crusaders Before Jerusalem. Oil on canvas by Eugenio Lucas. Museo Lázaro Galdiano, Madrid.
The Crusades in a Nutshell

Regaining control over the Holy Land was not the only objective. Christians were also concerned by the speedy Islamic expansion in general. In the gift shops of both camps, two centuries of massacre would be sold as ... In the end, the Crusades failed to free the Holy Land from Muslim control.
The eight Crusades took place between the years 1095 and 1272. Here is a brief timeline with main events:

In AD 637, Jerusalem was taken by the Muslims and, slowly but surely, it became obvious that Christians, pilgrims and otherwise, weren't welcome anymore. Parallel to this, the power and the influence of the papacy in Europe weakened. How could the Church get back to power?

War unites, or so figured Pope Urban II, who delivered a fiery speech at Clermont, France, in 1095. Everyone was energized and merrily hiked in the direction of the Holy Land, most of them with one-way tickets.

Here are the Crusades in the stream of time. And here are the maps:
Map of the Era of the Crusades 1095-1272 AD
Map of the First Crusade
1095-1099 First Crusade
1097-1099 The Levant
Map of the States of the Crusaders 1140 AD
Map of the Second and Third Crusade
Map of the Fourth, Sixth, Seventh, and
|
<urn:uuid:3f39dee6-f2f5-486f-8439-651d896a3616>
|
CC-MAIN-2016-26
|
http://www.emersonkent.com/wars_and_battles_in_history/history_of_the_crusades.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00166-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.857766 | 497 | 3.203125 | 3 |
Definition of Intercity

2. Noun. Something that runs between cities, such as a railroad. ¹

¹ Source: wiktionary.com
Literary usage of Intercity
Below you will find example usage of this term as found in modern and/or classical literature:
1. Defense Conversion: Redirecting R&D by DIANE Publishing Company (1994)
"... Efficient Transportation: Public Systems HIGH-SPEED INTERCITY GROUND TRANSPORTATION. High-speed ground transportation (HSGT)—trains that operate at speeds ..."
2. Foreign Exchange: Theory and Practice by Thomas York (1920)
"intercity Loans; Other Types.—Loans negotiated between New York and London have heretofore been considered solely with reference to one particular type, ..."
3. A New and General Biographical Dictionary: Containing an Historical and by William Tooke, William Beloe, Robert Nares (1798)
"... and active concern for the intercity and accommodation of the inferior clergy. ... intercity ..."
4. Amtrak Management: Systemic Problems Require Actions to Improve Efficiency by JayEtta Z. Hecker (2006)
"On the basis of data obtained from the Federal Railroad Administration (FRA), intercity passenger rail accounted for a relatively substantial portion (15 ..."
Constitutional change, seemingly so orderly, formal, and refined, has in fact been a revolutionary process from the first, as Bruce Ackerman makes clear in We the People: Transformations. The Founding Fathers, hardly the genteel conservatives of myth, set America on a remarkable course of revolutionary disruption and constitutional creativity that endures to this day. After the bloody sacrifices of the Civil War, Abraham Lincoln and the Republican Party revolutionized the traditional system of constitutional amendment as they put principles of liberty and equality into higher law. Another wrenching transformation occurred during the Great Depression, when Franklin Roosevelt and his New Dealers vindicated a new vision of activist government against an assault by the Supreme Court. These are the crucial episodes in American constitutional history that Ackerman takes up in this second volume of a trilogy hailed as "one of the most important contributions to American constitutional thought in the last half-century" (Cass Sunstein, New Republic). In each case he shows how the American people--whether led by the Founding Federalists or the Lincoln Republicans or the Roosevelt Democrats--have confronted the Constitution in its moments of great crisis with dramatic acts of upheaval, always in the name of popular sovereignty. A thoroughly new way of understanding constitutional development, We the People: Transformations reveals how America's "dualist democracy" provides for these populist upheavals that amend the Constitution, often without formalities. The book also sets contemporary events, such as the Reagan Revolution and Roe v. Wade, in deeper constitutional perspective. In this context Ackerman exposes basic constitutional problems inherited from the New Deal Revolution and exacerbated by the Reagan Revolution, then considers the fundamental reforms that might resolve them. A bold challenge to formalist and fundamentalist views, this volume demonstrates that ongoing struggle over America's national identity, rather than consensus, marks its constitutional history.
Your affectionate Brother.
COURSE OF READING.
1. Sacred and Ecclesiastical History.—Josephus’ Works; Millar’s History of the Church; Jahn’s Hebrew Commonwealth, Mosheim’s Ecclesiastical History; Milner’s Church History; Scott’s Continuation of Milner; Life of Knox; Gilpin’s Lives of the Reformers; Fuller’s and Warner’s Ecclesiastical History of England; Millar’s Propagation of Christianity; Gillies’ Historical Collections; Jones’ Church History; Mather’s Magnalia; Neale’s History of the Puritans; Wisner’s History of the Old South Church, Boston; Bogue and Bennett’s History of the Dissenters; Benedict’s History of the Baptists; Life of Wesley; History of Methodism; Life of Whitefield; Millar’s Life of Dr. Rodgers; Crantz’s Ancient and Modern History of the Church of the United Brethren; Crantz’s History of the Mission in Greenland; Loskiel’s History of the North American Indian Missions; Oldendorp’s History of the Danish Missions of the United Brethren; Choules’ Origin and History of Missions. Those who have not sufficient time for so extensive a course, may find the most interesting and important events in the progress of the church during the first sixteen centuries of the Christian era, in the author’s Sabbath-school Church History.
2. Secular and Profane History.—Rollin’s Ancient History; Russel’s Egypt; Russel’s Palestine; Plutarch’s Lives, to be kept on hand, and consulted as the names appear in history; Wharton’s Histories; Beloe’s Herodotus; Travels of Anacharsis; Mitford’s Greece; Ferguson’s History of the Roman Republic; Baker’s Livy; Middleton’s Life of Cicero; Murphy’s Tacitus; Sismondi’s Decline of the Roman Empire; Muller’s Universal History; Hallam’s History of the Middle Ages; James’ Life of Charlemagne; Mills’ History of the Crusades and of Chivalry; Turner’s History of England; Burnett’s History of his own Times; Robertson’s History of Scotland; Robertson’s Charles V.; Vertot’s Revolutions of Sweden; Vertot’s Revolutions of Portugal; Sismondi’s History of the Italian Republics, (abridged in Lardner’s Cabinet of History;) Roscoe’s Lorenzo de Medici and Leo X.; Sketches from Venetian History; Malcolm’s History of Persia; Irving’s Life of Columbus; Prescott’s Ferdinand and Isabella; Robertson’s History of America; Bancroft’s History of America; Winthrop’s Journal; Ramsay’s American Revolution; Marshall’s Life of Washington; with the Biographies of Penn, Jay, Hamilton, Henry, Greene, Otis, Quincy, Morris, the Signers of the Declaration of Independence, Sparks’ American Biography, with the Lives of any other distinguished Americans; Scott’s Life of Napoleon.
The memristor was predicted by Prof. L. Chua in 1971, and the first prototype was reported by a team of HP researchers. The memristor obeys an interesting relation between magnetic flux and charge. Numerous application areas have emerged within the memristor framework in the last few years; applications in memory technology, neuromorphic hardware, and soft computing are a few of them. Memristive behavior has gone unrecognized in many instances in the biomedical field, but recently reported literature reveals that the memristor is a universal part of medical diagnosis. With this bird's-eye view of the scenario, this paper presents an elementary note on skin hydration measurement using the memristor.
Keywords: Fourth Circuit Element; Memristor; Hydration Measurement
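As a hedged illustration of the flux–charge behaviour sketched in the abstract, the short simulation below implements the canonical linear dopant-drift model of the HP device (Strukov et al. 2008 and Biolek et al. 2009, both cited in the reference list below). It is not the skin-hydration model itself, and all parameter values are illustrative assumptions rather than measured data:

```python
import numpy as np

# Linear dopant-drift memristor model. The state x = w/D is the
# normalized width of the doped, low-resistance region of the film.
R_ON, R_OFF = 100.0, 16e3   # ohms (assumed, not measured)
D = 10e-9                   # film thickness in metres (assumed)
MU_V = 1e-14                # dopant mobility, m^2 V^-1 s^-1 (assumed)

def simulate(voltage, t, x0=0.1):
    """Forward-Euler integration of dx/dt = (MU_V * R_ON / D**2) * i(t)."""
    x = np.empty_like(t)
    i = np.empty_like(t)
    x[0] = x0
    for k in range(len(t) - 1):
        m = R_ON * x[k] + R_OFF * (1.0 - x[k])    # memristance M(x)
        i[k] = voltage(t[k]) / m                  # instantaneous Ohm's law
        dx = MU_V * R_ON / D**2 * i[k] * (t[k + 1] - t[k])
        x[k + 1] = min(max(x[k] + dx, 0.0), 1.0)  # keep the state in [0, 1]
    i[-1] = voltage(t[-1]) / (R_ON * x[-1] + R_OFF * (1.0 - x[-1]))
    return x, i

t = np.linspace(0.0, 2.0, 20000)                  # two periods of a 1 Hz sine
x, i = simulate(lambda s: np.sin(2.0 * np.pi * s), t)
# Plotting the drive voltage against i traces the pinched hysteresis
# loop that is the standard fingerprint of memristive behaviour.
```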
1. Chua, L. O. Memristor – the missing circuit element. IEEE Trans. Circuit Theory, vol. 18, 1971, pp. 507–519.
2. Strukov, D. B., Snider, G. S., Stewart, D. R., & Williams, R. S. The missing memristor found. Nature, vol. 453, 2008, pp. 80–83.
3. Johnsen, G. K. An introduction to the memristor – a valuable circuit element in bioelectricity and bioimpedance. J Electr Bioimp, vol. 3, 2012, pp. 20–28.
4. Joglekar, Y. N., & Wolf, S. J. The elusive memristor: properties of basic electrical circuits. European Journal of Physics, vol. 30, 2009, pp. 661–675.
5. Chua, L., & Kang, S. M. Memristive devices and systems. Proceedings of the IEEE, vol. 64, no. 2, 1976, pp. 209–223.
6. Biolek, Z., Biolek, D., & Biolková, V. SPICE model of memristor with nonlinear dopant drift. Radioengineering, vol. 18, no. 2, 2009, pp. 210–214.
7. Pino, R. E., & Campbell, K. A. Compact method for modeling and simulation of memristor devices. Proceedings of the International Symposium on Nanoscale Architectures, 2010, pp. 1–4.
8. Dongale, T. D. An overview of the fourth fundamental circuit element – 'The Memristor'. Supporting docs, NanoHUB.org. Available at: https://nanohub.org/resources/16590
9. Cole, K. S. Rectification and inductance in the squid giant axon. J Gen Physiol, vol. 25, 1941, pp. 29–51.
10. Cole, K. S. Membranes, Ions, and Impulses. Berkeley: University of California Press, 1972.
11. Mauro, A. Anomalous impedance, a phenomenological property of time-variant resistance – an analytic review. Biophys J, vol. 1, 1961, pp. 353–372.
12. Johnsen, G. K., Lütken, C. A., Martinsen, Ø. G., & Grimnes, S. Memristive model of electro-osmosis in skin. Phys Rev E, vol. 83, 031916, 2011.
13. Tiny organisms remember the way to food. Available at: http://www.newscientist.com/article/dn11394-tiny-organisms-remember-the-way-to-food.html. Retrieved: 28 December 2012.
14. Licht, T. S., Stern, M., & Shwachman, H. Measurement of the electrical conductivity of sweat. Clin Chem, vol. 3, 1957, pp. 37–48.
15. Tronstad, C., Johnsen, G. K., Grimnes, S., & Martinsen, Ø. G. A study on electrode gels for skin conductance measurements. Physiol Meas, vol. 31, 2010, pp. 1395–1410.
16. Martinsen, Ø. G., Grimnes, S., Lütken, C. A., & Johnsen, G. K. Memristance in human skin. Journal of Physics: Conference Series, vol. 224, no. 1, 012071, 2010.
17. Kosta, S. P., Kosta, Y. P., Bhatele, M., Dubey, Y. M., Gaur, A., Kosta, S., Gupta, J., Patel, A., & Patel, B. Human blood liquid memristor. Int. J. Medical Engineering and Informatics, vol. 3, no. 1, 2011.
18. Grimnes, S. Psychogalvanic reflex and changes in electrical parameters of dry skin. Med Biol Eng Comput, vol. 20, 1982, pp. 734–740.
19. Grimnes, S. Skin impedance and electro-osmosis in the human epidermis. Med Biol Eng Comput, vol. 21, 1983, pp. 739–749.
20. Gunter, R. L., Delinger, W. D., Porter, T. L., Stewart, R., & Reed, J. Human hydration level monitoring using embedded piezoresistive microcantilever sensors. Medical Engineering & Physics, vol. 27, no. 3, pp. 215–220.
21. Zhang, S. L., Meyers, C. L., Subramanyan, K., & Hancewicz, T. M. Near infrared imaging for measuring and visualizing skin hydration: a comparison with visual assessment and electrical methods. Journal of Biomedical Optics, vol. 10, no. 3, 031107, 2005.
22. Memristor in human skin. Available at: http://www.newscientist.com/article/mg20928024.500-sweat-ducts-make-skin-a-memristor.html. Retrieved: 28 December 2012.
A Germanic people who originally came from the Balkans, the Visigoths invaded Italy under the leadership of Alaric I (reigned 395–410) before settling in southern Gaul in 412. In 418, the Visigoths—under the leadership of Theodoric I, who ruled from 418 to 451—were settled by the Roman emperor Constantius III in Aquitaine. The Visigoths quickly extended their kingdom, which stretched from the Loire to the Pyrenees with Toulouse and Bordeaux as their capitals. The arrival of the Franks in Gaul forced the Visigoths into Spain. They lost all their possessions in Gaul apart from Septimania—now Languedoc-Roussillon—, settled in Old Castile under the leadership of Euric (reigned 466–484), and made Toledo (which is right in the heart of the Iberian Peninsula) their capital. Athanagild I lost Andalusia to the Byzantines in around 556.
Integrated into a Western Europe that was still dominated by the Romans, the Visigoths were influenced by the culture of classical antiquity. There are very few archaeological remains from the three centuries of occupation of the Iberian Peninsula, but what mark did the Visigoths leave? What were their relations with the Mediterranean?
Initially, the Visigoths reused and transformed the existing Roman civil monuments. After a long period in which there was a search for points of reference and unity, king Leovigild (reigned 567–586) re-established royal authority in the Peninsula, and established his legitimacy by building Reccopolis in 568, of which some traces remain today.
King Reccared I officially ended any religious disputes in 589 by recognizing the notion of consubstantiality. The Visigoth kingdom remained faithful to the Church until 672. In the period that followed there was an attempt to reconcile old Arian traditions and the new ‘Creed’.
The most striking examples of Visigoth buildings are the rural churches. The architecture is simple and canonical, and the horseshoe arch – also known as the 'Byzantine arch', and in fact used later by the Muslims – was characteristic. The builders were no doubt influenced by the Byzantine buildings in the south-west of the Peninsula.
The Visigoth churches contain other treasures that provide some enlightenment about how artists were influenced by foreign and Classical works. The most impressive example of architectural sculpture is to be found in the church of San Pedro de la Nave (in the province of Zamora), which dates from the seventh century. This consists of hieratic bas-reliefs sculpted with a drill, bevelled, and stylized, in accordance with the new styles of representation, which make consistent use of a Classical vocabulary—rows of palmettes, foliage, vine-leaves, and birds pecking at grapes—to depict scenes from the Old Testament, such as the Sacrifice of Abraham and Daniel in the Lion's Den.
The ritual of clothed burials—a custom that may have been adopted by the Visigoths and Ostrogoths, who lived in the Peninsula between 472 and 474—tells us about the metalworking techniques. The custom of placing objects in tombs was common in the rural necropolises, which provides information about the material culture of the peasantry, and especially the clothing and goldsmithing. Cloisonné techniques and gem encrustation, highly prized by the barbarian kingdoms, resulted from contact with Germans from the East and were introduced in the West in the fifth and sixth centuries. There was no funeral furniture in the Trinitarian ritual, but the few pieces found in the tombs after the conversion of Reccared I indicate that vegetal, animal, and Christian motifs from the Mediterranean and Eastern traditions were used.
Information about goldsmithing in the royal courts only became available with the discovery of the treasure of Guarrazar in 1858, a gift from the Visigoth kings to the Cathedral of Toledo, whose most noteworthy object is the votive crown of king Recceswinth (reigned 653–672). Currently kept in the National Archaeological Museum of Spain (Madrid), this crown in gold repoussé openwork shows a great mastery of these techniques. The Byzantines first introduced votive crowns into churches. This practice was also adopted by the Muslims in their princely architecture: in the court in the palace complex of Khirbat al-Mafjar (Syria) there is an exedra in which is suspended a huge crown, under which sat the king.
Germanic in origin, the Visigoth culture comprises omnipresent elements of Roman Antiquity, to which is added Christian symbolism and strong Byzantine influences. The Byzantine trading posts in the peninsula were probably the only links with the Mediterranean. The Visigoths were primarily a land-based people.
The Visigoth kingdom ended in 711 when Muslims from North Africa invaded Spain, then under the leadership of king Roderic (reigned 709–711). He died in the battle of Jerez de la Frontera under the onslaught of the armies of Tāriq ibn Ziyād.
E. D. –P.
It’s called Kristallnacht, or the Night of Broken Glass. Lore Jacobs heard the sound of Nazi boots pounding up the stairs to the fourth-floor apartment in Frankfurt where she lived with her mother and father.
Lore – pronounced Lori – was 14 then. “I remember every minute,” she says.
Now she is 89, living in west Hamilton. On Thursday evening, Nov. 7, she will remember some more. She will attend a commemoration of the 75th anniversary of Kristallnacht, sponsored by the Hamilton Jewish Federation. (It's free and open to all, 7:30 p.m. at Adas Israel Synagogue, 125 Cline South.)
The guest speaker is David Halton, acclaimed reporter with CBC for decades. His work took him around the world, but in Hamilton he will talk about what his father Matthew saw. He too was a journalist and went to Germany twice in 1933, which resulted in a series of 27 articles for the Toronto Star.
Hitler had become chancellor that year, and his campaign against Jews was building. They were being blamed for Germany’s loss in World War I and hyperinflation and the Depression.
Matthew Halton witnessed this and wrote his stories. From one, on Mar. 30, 1933: “I saw a parade of hundreds of children, between the ages of seven and sixteen, carrying the swastika and shouting at intervals, ‘The Jews must be destroyed.’”
Son David, writing a book about his father, believes the media then did not do a good job of exposing the Jews’ plight in Germany. There were not enough of the stories his father told.
Father sold hats
But Lore was living them.
Her family name was Gotthelf. Her father was a wholesaler of hats. She loved to try them on, especially the ones with feathers and lots of decorations.
She went to a public school, Germans and Jews. But Hitler’s campaign was relentless. First, the Nazis passed laws that restricted the practices of Jewish lawyers and doctors. Then Jews were banned from public schools.
"Jews Not Allowed" signs went up at libraries, restaurants, swimming pools, theatres.
Citizens were encouraged to fly a red Nazi banner from their apartments. Lore walked down the street and saw all the swastika flags, except where Jewish families lived. “There was red everywhere,” she says.
Nazis got their excuse
And in November of 1938, a German diplomat was killed in Paris by a 17-year-old Jewish boy. The Nazi paramilitary had been looking for an excuse to pillage, and that was it.
They say Kristallnacht was the beginning of the Final Solution, the Holocaust that took six million lives. On Nov. 9 and through the following days, Nazi gangs looted, smashed, torched thousands of Jewish homes, businesses, synagogues. Nearly a hundred were killed in the attacks.
Three men stomped into Lore’s home. They upended furniture, emptied drawers, stole the silver, looted the safe box concealed behind a painting.
“Then they started to push my father around,” Lore says. “I screamed at them to leave him alone.” They took her father away and Lore and her mother huddled in the corner of their apartment for days. Father was returned a few weeks later in bad health. Lore says he was never the same.
Others had managed to already leave Germany. The United States seemed a good refuge, but it had strict quotas. Canada did too.
The UK rescued children
Then Lore’s parents learned of something called the Kindertransport, an effort in the UK by Jews, Quakers and other groups to rescue Jewish children, up to age 17. Lore was accepted.
The refugees using Kindertransport were allowed one sealed suitcase. Somehow, Lore’s parents managed to equip her with two large trunks. Clothes, tablecloths, soap, atlas, dictionary and, most precious of all, dozens and dozens of photos.
“I’ve never talked about that,” Lore says. “To tell you the truth, I’m a little embarrassed.” (But her cargo became important. The photos – and her trunks – are now part of the collection of the United States Holocaust Memorial Museum in Washington.)
Lore was able to say only a quick goodbye to her parents at the train station in Frankfurt, July 7, 1939. She crossed the channel to England and moved in with a couple who had a fine home in Northampton.
But Lore spoke almost no English. “I couldn’t explain myself. I cried a lot.” And in September of 1939, with war declared, Lore became an enemy alien.
Everyone was terrified
She was no longer just a Jewish refugee in England, she was a German. That troubled the woman of the house where Lore was staying. “She wanted me out. Everyone was terrified. I can’t blame them.”
Lore moved to Birmingham, got work, survived. “I always accepted everything, even when I left my parents,” she says. “I never complained.”
At synagogue socials in Birmingham, she danced and fell in love with Erwin Jacobs. He had made it out of Berlin. They married in 1944. She was 20, he was 24.
The next year, the war over, Lore learned through the Red Cross that the Nazis had sent her parents to the Lodz Ghetto. They had perished there in the spring of 1942.
In the mid ‘50s, Lore and Erwin came to Hamilton and he worked at Westinghouse. They raised two children, Peter and Gale. When Erwin died 17 years ago, the family set up a Holocaust Education fund in his name.
For many years, Lore did not tell her story. That changed and she has now visited many schools. “Some don’t want to hear about the past, but children do,” she says. “When people ask me to talk, I have to. I can’t say no.”
Introduction to POGIL
David M. Hanson, Stony Brook University and Richard S. Moog, Franklin and Marshall College
Process-oriented guided-inquiry learning (POGIL, rhymes with "mogul") is both a philosophy and a strategy for teaching and learning. It is a philosophy because it encompasses specific ideas about the nature of the learning process and the expected outcomes. It is a strategy because it provides a student-centered methodology and structure that are consistent with the way people learn and achieve these outcomes. The goal of POGIL is to help students simultaneously master discipline content and develop essential learning skills. This module explains the relationship between three primary components of POGIL: cooperative learning, guided inquiry, and metacognition. It also offers advice on implementing POGIL in the classroom and provides evidence that POGIL instruction produces better understanding and higher grades compared with traditional lecture-style methods.
Studies reveal that traditional teaching methods in higher education are no longer meeting students' educational needs. This has led to several reform initiatives. Some of these initiatives focus on changing the curriculum and course content; others seek to utilize computer-based multimedia technology for instruction; and some promote more student involvement in class in order to engage students in learning.
Several key ideas about learning have emerged from current research in the cognitive sciences (Bransford, Brown and Cocking). This research documents that people learn by actively constructing their own understanding, building new knowledge on prior knowledge and experience.
POGIL is built on this research base, sharing the key premise that most students learn best when they are actively engaged in analyzing data, models, or examples and when they are discussing ideas; when they are working together in self-managed teams to understand concepts and solve problems; when they are reflecting on what they have learned and thinking about how to improve performance; and when they are interacting with an instructor who serves as a guide or facilitator of learning rather than as a source of information. To support this research-based learning environment, POGIL utilizes self-managed learning teams, guided-inquiry materials based on the learning cycle, and metacognition (Hanson).
Role of Cooperative Learning
Learning environments can be competitive, individualized, or cooperative. Research has documented that relative to the other situations, students learn more, understand more, and remember more when they work together. They feel better about themselves and their classmates, and they have more positive attitudes regarding the subject area, course, and instructors. Students working in a team environment are also more likely to acquire essential process skills such as critical and analytical thinking, problem solving, teamwork, and communication. (Johnson, Johnson and Smith).
It should not be surprising that group learning environments are successful; individuals working alone in competitive or individualized instructional modes do not have the opportunity for intellectual challenge found in a learning team. As a learning team becomes involved in a lesson, the differences in various team members' information, perceptions, opinions, reasoning processes, theories, and conclusions will inevitably lead to disagreement. When managed constructively using appropriate interpersonal, social, and collaborative skills, such controversy promotes questioning, an active search for more information, and finally a restructuring of knowledge. This process results in greater mastery and retention of material and more frequent use of critical thinking and higher-level reasoning compared with the outcomes gained through learning in competitive and individualized modes (Johnson, Johnson and Smith; Cooper; Hanson; Millis and Cottell).
Role of Guided-Inquiry
Much research documents that, in order to achieve real understanding and learning, learners must actively restructure the information they absorb. To restructure new knowledge, learners must integrate it with previous knowledge and beliefs, identify and resolve contradictions, generalize, make inferences, and pose and solve problems. Thus, knowledge is personal and is constructed in the mind of the learner (Johnson, Johnson and Smith; Herron; Cracolice; Bransford, Brown and Cocking; Johnstone; Bodner). A POGIL learning activity engages students and prompts them to restructure information and knowledge; guided-inquiry activities help students develop understanding by employing the learning cycle. This learning cycle consists of three stages or phases: exploration, concept invention or formation, and application (Abraham).
In the "exploration" phase of the learning cycle, students develop their understanding of a concept by responding to a series of questions that guide them through the process of exploring a model or executing a task. Almost any type of information can be processed in this way: a diagram, a graph, a table of data, one or more equations, a methodology, some prose, a computer simulation, a demonstration, or any combination of these things. In this exploration phase, students attempt to explain or understand the material that is presented by proposing, questioning, and testing hypotheses.
The second phase may involve either "concept invention" or "concept formation." When the second phase involves concept invention, the exploration phase does not present the concept explicitly. Learners are effectively guided and encouraged to explore, then to draw conclusions and make predictions. Once learners have engaged in this phase, additional information and the name of the concept can be introduced. Instructors may be the ones to introduce the concept name (to ensure that standard language is used), but it is the students themselves who discover the patterns. Other activities are designed with a second phase that involves concept formation. In these activities, some representation of the concept is presented explicitly at the beginning. Students work through questions which lead them to explore the representation, develop an understanding of it, and identify its relevance and significance.
Once the concept is identified and understood, it is reinforced and extended in the "application" phase. In the application phase, learners use the new knowledge in exercises, problems, and even research situations. "Exercises" give learners the opportunity to build confidence in simple situations and familiar contexts. "Problems" require learners to analyze complex situations, to transfer the new knowledge to unfamiliar contexts, to synthesize it with other knowledge, and to use it in new and different ways. "Research" questions identify opportunities for learners to extend learning by raising new issues, questions, or hypotheses.
The Role of Metacognition

"Metacognition" literally means "thinking about thinking." It includes self-management and self-regulation, reflection on learning, and assessment of one's own performance. POGIL requires students to use metacognition to help them realize that they are in charge of their own learning and that they need to monitor it (self-management and self-regulation), that they need to reflect on what they have learned and what they don't yet understand (reflection on learning), and that they need to think about their performance and how it can be improved (self-assessment) (Bransford, Brown and Cocking).

Metacognition produces an environment for continual improvement. Students can be asked to assess their own work and that of each other. Instructors monitor the teams and, when appropriate, provide feedback to individuals, teams, and the class in order to improve students' skills and to help them identify needed improvements. It is possible to establish an atmosphere in which such assessments are safe, positive, and valued by all by making a distinction between assessment and evaluation. "Assessment" is the process of measuring a performance, work product, or skill and giving feedback to document strengths and growth and to provide directives for improving future performance. "Evaluation" is the process of making a judgment or determination concerning the quality of a performance, work product, or use of skills against a set of standards (Distinctions between Assessment and Evaluation).
Metacognition has been shown to be especially effective in improving problem-solving skills. When students were trained in a five-step self-explanation self-regulation methodology, they were deemed to be more successful at solving problems. After encountering new material students were asked to identify the important concepts, to elaborate on and identify connections between the concepts, to examine a sample problem and to identify the steps needed to solve it, to identify the reason for and meaning of each step, and to relate the concepts presented in the initial material to the steps in the sample problem (Bielaczyc, Pirolli and Brown). This methodology helps students construct the large mental structures that are essential for success in problem solving: those linking conceptual and procedural knowledge (Bransford, Brown and Cocking).
There are a variety of ways to implement POGIL to suit the instructor, the class size, the classroom structure, and the local culture. In some successful implementations, all lectures have been replaced with POGIL sessions (Farrell, Moog and Spencer). In some, one lecture per week has been replaced with a POGIL session (Lewis and Lewis). At a large university, standard recitation sessions have been converted to POGIL sessions (Hanson and Wolfskill). And at several institutions with 100 to 500 students in a lecture hall, POGIL activities are being used with electronic student response systems for all or part of each session. All of these implementations typically employ the learning cycle: students work together in small groups on activities that have been carefully designed to guide them in constructing understanding and in applying this understanding to solve problems. In the POGIL classroom the instructor is not the expert provider of knowledge; he or she is a coach or facilitator who guides students in the process of learning, helping them to develop process skills and conceptual understanding, and to apply this understanding in solving problems. In this context, the instructor has four roles to play: leader, monitor/assessor, facilitator, and evaluator.
As a leader, the instructor creates the learning environment: he or she develops and explains the lesson and defines the objectives (both content objectives and process skill objectives), criteria for success, and expected behaviors. He or she also establishes the structure of the environment (i.e. the goal/reward structure, the team structure, the class structure, the room structure, and the time structure) (Overview of Creating a Quality Learning Environment).
As a monitor/assessor, the instructor circulates through the class monitoring and assessing individual and team performance and acquiring information on student understanding, misconceptions, and difficulties in collaboration. The instructor uses this information as a facilitator to improve performance.
As a facilitator, the instructor intervenes when appropriate and asks timely critical-thinking questions to help teams understand why they may be having difficulty and to think about what they need to do to improve and make progress. Facilitators should intervene on process issues, not content issues, and they should provide the kind of input that encourages deeper thought. Questions posed by the facilitator should help the team identify why they are having difficulty. The first questions should be open-ended and general; further questions should be more directed and specific as needed. At the end of the intervention, the team should be asked to reflect on the process: What was the source of the difficulty? How did you resolve it? How might you avoid this difficulty in similar situations in the future? What generalizations can you make to help you in new situations?
As an evaluator, the instructor provides closure to the lesson by asking team members to report answers, to summarize the major points, and to explain the strategies, actions, and results of the team's work. Individuals and teams are evaluated on their performance, achievement, and effectiveness; general issues are shared with the class.
POGIL has been evaluated for its effectiveness in various courses over a wide range of institutions. In addition to formal published studies (Farrell, Moog and Spencer; Hanson and Wolfskill; Lewis and Lewis), a number of more informal and unpublished evaluations also have been conducted. In general, similar results are obtained regardless of the type of institution, the course, and the size of course. Student attrition from POGIL courses is lower than that for courses using traditional methods ("attrition" in this case is defined as earning a grade of D or F or withdrawing from the course). Student mastery of content is at least as high or higher than that gained through traditional instruction. Students also generally prefer the POGIL approach over traditional methods, they have more positive attitudes about the course and their instructors, and their learning skills appear to improve over the semester.
For example, one study of a full POGIL implementation in general chemistry at Franklin and Marshall College compares the performance of over 400 students taught using the POGIL approach over a four-year period to a similar number taught in previous years using a traditional approach by the same instructors (Farrell, Moog and Spencer). The attrition rate decreased from 22 % (traditional) to 10% (POGIL). The percentage of students earning an A or B rose from 52% to 64%.
Similar results have been obtained when POGIL has been used as a component of large lecture classes. In general chemistry classes at Stony Brook University, graduate teaching assistants used a POGIL approach to facilitate the recitation sessions. Students performed better on examinations.
These gains were exhibited uniformly in the performance of low through high-achieving students (Hanson and Wolfskill). Another study conducted at a large urban university examined the effect of replacing one of three general chemistry lectures each week with a peer-led team learning session using POGIL materials (Lewis and Lewis). They found that the students who had attended the group-learning sessions generally performed better on common examinations.
When they first hear about POGIL, many instructors are intrigued by the approach and can see its advantages, but they are concerned that the pace at which the material is covered will be significantly slower in a POGIL course than for a lecture-based course. Our experience is that this is not a problem. One way to measure this is to compare the standardized exam performance of students who learned using POGIL instruction against the average outcomes of students from the same institution who experienced a traditional approach. Such comparisons show that students experiencing POGIL instruction scored higher on these examinations than students in traditional classes in both general chemistry and organic chemistry. It is inspiring that in evaluating POGIL at Stony Brook, instructors said, "This is the way to teach!" and many students responded, "More time for workshops and less time for lectures!"
Abraham, M. R. (2005). Inquiry and the learning cycle approach. In N. J. Pienta, M. M. Cooper, & T. J. Greenbowe (Eds.), Chemists' guide to effective teaching (pp. 41-52). Upper Saddle River, NJ: Pearson Prentice Hall.
Bielaczyc, K., Pirolli, P. L., & Brown, A. L. (1995). Training in self-explanation and self-regulation strategies: Investigating the effects of knowledge acquisition activities on problem solving. Cognition and Instruction, 13 (2), 221-52.
Bodner, G. M. (1986). Constructivism: A theory of knowledge. Journal of Chemical Education, 63, 873.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.) (2000) How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Cooper, M. M. (2005). An introduction to small-group learning. In N. J. Pienta, M. M. Cooper, & T. J. Greenbowe (Eds.), Chemists' guide to effective teaching (pp. 117-128). Upper Saddle River, NJ: Pearson Prentice Hall.

Cracolice, M. S. (2005). How students learn: Knowledge construction in college chemistry courses. In N. J. Pienta, M. M. Cooper, & T. J. Greenbowe (Eds.), Chemists' guide to effective teaching (pp. 12-27). Upper Saddle River, NJ: Pearson Prentice Hall.
Farrell, J. J., Moog, R. S., & Spencer, J. N. (1999). A guided-inquiry general chemistry course. Journal of Chemical Education, (76) 4, 570-74.
Hanson, D. M. (2006). Instructor's guide to process-oriented guided-inquiry learning. Lisle, IL: Pacific Crest.
Hanson, D. M. & Wolfskill, T. (2000). Process workshops: A new model for instruction. Journal of Chemical Education, 77, 120.
Herron, J. D. (1996). The chemistry classroom: Formulas for successful teaching. Washington, DC: American Chemical Society.
Johnson, D. W., Johnson, R. T., & Smith, K. A. (1991). Active learning: cooperation in the college classroom. Edina, MN: Interaction.
Johnstone, A. H. (1997). Chemistry teaching: science or alchemy? Journal of Chemical Education, 74, 262-68.
Lewis, S. E., & Lewis, J. E.(2005). Departing from lectures: An evaluation of a peer-led guided inquiry alternative. Journal of Chemical Education, 82 (1), 135-39.
Millis, B. J., & Cottell, P. G. (1998). Cooperative Learning for Higher Education Faculty. Phoenix: American Council on Education, Onyx Press.
THE race of the Cymry have not always dwelt in the Isle of Britain. In the dim past they inhabited the Summer Country called Deffrobani. While they sojourned there a great benefactor arose among them, to whom the name of Hu Gadarn, Hu the Mighty, was given. He invented the plough, and taught them to cultivate the ground. He divided them into communities, and gave them laws, whereby fighting and contention were lessened. Under his guidance they left the Summer Country, and crossing the Mor Tawch in coracles came to the Isle of Britain, and took possession of it under the protection of God and His peace. Before that time no one lived therein, but it was full of bears, wolves, beavers, and bannog oxen; no one, therefore, has a right to the Isle of Britain but the Cymry, for they first settled in it. They gave to it the name of the Honey Island, on account of the great quantity of honey they found (Britain is a later name). Hu ruled them with justice, establishing wise regulations and religious rites, and those who through God's grace had received poetic genius were made teachers of wisdom. Through their songs, history and truth were preserved throughout the ages until the art of writing was discovered.
Some time after they came to the Honey Island, the Cymry were much troubled by a monster called an afanc, which broke the banks of Llyn Llion, in which it dwelt, and flooded their lands. No spear, dart, or arrow made any impression upon its hide, so Hu Gadarn resolved to drag it from its abode and to place it where it could do no harm. A girl enticed it from its watery haunt, and while it slept with its head on her knees it was bound with long iron chains. When it woke and perceived what had been done, it got up, and, tearing off its sweetheart's breast in revenge, hurried to its old refuge. But the chains were fastened to Hu Gadarn's team of bannog oxen, which pulled it out of the lake and dragged it through the mountains to Llyn y Ffynnon Las, the Lake of the Green Well, in Cwm Dyli, in Snowdonia. A pass through which they laboured has ever since been called Bwlch Rhiw'r Ychen, the Pass of the Slope of the Oxen. One of the oxen dropped one of its eyes through its exertions in this defile, and the place is styled Gwaun Llygad Ych, the Moor of the Ox's Eye. A pool was formed where the eye fell, which is known as Pwll Llygad Ych, the Pool of the Ox's Eye; this pool is never dry, though no water rises in it or flows into it except when rain falls, and no water flows out of it, but it is always of the same depth, reaching just to the knee-joint.
The afanc could not burst the banks of the Lake of the Green Well, but it is still dangerous to go near it. If a sheep falls into the lake it is at once dragged down to the bottom, and it is not safe even for a bird to fly across it.
Yoga is derived from the Sanskrit word yug, which means "to yoke." This is a term we're familiar with from the Bible (Phil. 4:3; Matt. 11:29–30). A yoke is a crossbar that joins two draft animals at the neck so they can work together; the term, therefore, is applied metaphorically to people being joined together or united in a cause. In Hinduism, as in many religions, union is desired with nothing less than God or the Absolute, and yoga is the system that Hindus have developed to achieve that end.
The historic purpose behind yoga, therefore, is to achieve union with the Hindu concept of God. This is the purpose behind virtually all of the Eastern varieties of yoga, including those we encounter in the West. This does not mean it is the purpose of every practitioner of yoga, for many people clearly are not practicing it for spiritual reasons but merely to enhance their physical appearance, ability, or health. The thesis I will be arguing in this three-part series, however, is that when someone participates in a practice that was developed with a specific purpose in mind by someone else, it is possible and even probable that on subtle levels the participant who does not have the original purpose in mind nonetheless will be moved along in the direction of fulfilling that purpose.
Compost Tea for Anyone
So what exactly is compost tea? Well, as the name suggests, it is made by soaking or brewing compost in water and then straining off the liquid – the tea! The water becomes infused with the nutrients and beneficial bacteria from the garden compost, and the resulting compost tea is a perfect liquid fertilizer and conditioner. It can be applied as a foliar spray or as a drench for the soil, depending on what you need it for.
It is a great addition to using regular compost alone. A foliar feed can give an almost instant boost to plants and is believed to enhance the flavor of vegetables. Used regularly in an organic garden, it can also help keep the garden healthy and fend off many unwanted pests and diseases, since compost tea contains many beneficial bacteria that help keep plants healthy.
How to Make Compost Tea
Compost tea can be made in a number of ways and using different ingredients. Experienced growers will all have their own compost tea recipes that they will swear by and may contain many unusual ingredients. The basic choice you have is to buy one of the ready to go compost tea systems or to go the DIY method and set everything up yourself.
The kits are great especially for larger scale production but can be a little expensive to get started, although they will soon pay for themselves with the saving in fertilizer and pesticide costs. Various sizes and prices are available to suit most people’s needs.
The DIY compost tea maker will need to work a little harder to get set up however the results can be just as good and you will save on some of the set up costs. It is best suited to small scale compost tea production but if you are inventive it can be scaled up.
In its most basic form you will have a container to which you add water, compost and molasses. This mixture is then oxygenated using an aquarium pump for 2-3 days to brew. The liquid is then strained off and is ready to use. There are a few important points to keep in mind.
Oxygen is the key to making great tea. Make sure you have a constant stream of strong bubbles going through the mixture, and give it a good stir from time to time during the brewing process.
The water should be rainwater if possible. If you need to use tap water, run the pump in the water for a couple of hours before any other ingredients are added to remove some of the chlorine. The chlorine in the tap water may harm the bacteria.
Molasses is added to help provide food for the bacteria and to get them working more quickly. Some recipes will recommend other ingredients.
The compost you use to brew the tea should be good quality and matured. Compost made from mostly green waste will be higher in bacteria and so best suited for making compost tea, but any good organic compost will do. Worm composting is a great source as this vermicompost is very rich and contains lots of good bacteria.
When making the tea, and before using it, check the smell. In all composting, bad smells usually indicate a problem. Good compost should smell earthy and quite sweet; if it smells bad, it is a sign that there is not enough oxygen. For the tea, increase the bubbles; for regular compost, mix it more frequently. Do not use compost tea that smells bad – it could do more harm than good to your garden.
Using Compost Tea
Once your tea has been strained it should be used as soon as possible. The good bacteria begin to die quite quickly without a good oxygen supply, so use the tea within a day or two at most. As a foliar spray it is a great way to give plants an instant feed; as a soil drench it delivers its benefits slowly and steadily.
The frequency of feeding depends on the plants, but an average routine would be to feed once per month during the growing season. For heavy feeders, up to once a week may be appropriate; the best schedule will come from a little testing to see how the plants respond.
To enjoy the full benefits of compost tea you should stop using all chemicals in the garden. Chemicals such as pesticides can kill the good bacteria and so reduce the benefits of compost tea. Of course this may not suit everyone’s methods.
So who is going to put the kettle on for some lovely Compost Tea?
This Week in the Civil War - 761
The long-feared march through Mississippi by Union forces under General William Sherman began in early February 1864. As Sherman's forces advanced from Vicksburg through the old battlefields of 1863, Confederate forces under General Leonidas Polk gave ground before the superior Union force.
By Friday, February 5, 1864 Union forces entered Jackson, the state capital; destroyed by Sherman’s forces in May 1863, Jackson was no longer militarily important and was once again abandoned after skirmishing by Confederate cavalry.
With 26,000 infantry and an additional force of approximately 7,600 Union cavalry, Sherman could not be stopped. The vastness of the Southern Confederacy left her vulnerable to Union assaults, and William Tecumseh Sherman would become the most effective, and infamous, of those Union generals invading the heartland of the South.
Frequently Asked Questions
What are the suggested R-values for the various components of a home?
The best, most current information suggests the range of R-values in the table below.
| Component | Suggested R-value |
| --- | --- |
| Ceilings | R-30 to R-40 |
| Walls | R-13 to R-24 |
| Floors over unheated spaces | R-20 to R-24 |
| Basement walls | R-9 to R-15 |
| Crawl-space walls | R-10 to R-16 |
The value you choose within each range depends on where in the state the house is located. Lower R-values are more appropriate in southeastern Kansas, while homeowners in northwest Kansas should consider higher R-values. Those living in the central part of the state should aim for a value near the middle of the range. An R-value is a measure of a material's resistance to heat flow; the higher the R-value, the better it insulates. Select a building system that will provide R-values within or above these ranges, and see that materials are installed so as to create a well-sealed structure.
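To see what these numbers mean in practice, the sketch below applies the steady-state conduction relation Q = A·ΔT/R, in the customary US units in which R-values are quoted. The ceiling area, temperatures, and R-values here are illustrative assumptions, not recommendations:

```python
# Conduction heat loss through an assembly: Q = A * dT / R, with
# A in ft^2, dT in deg F, and R in ft^2*degF*hr/Btu, giving Q in Btu/hr.
def heat_loss_btu_per_hr(area_sqft, delta_t_degf, r_value):
    return area_sqft * delta_t_degf / r_value

ceiling_area = 1200.0   # ft^2 (assumed)
delta_t = 45.0          # e.g. 70 F indoors vs. 25 F outdoors (assumed)

for r in (11, 19, 30, 38):
    q = heat_loss_btu_per_hr(ceiling_area, delta_t, r)
    print(f"R-{r:>2}: {q:7,.0f} Btu/hr")

# R-11 gives ~4,909 Btu/hr and R-38 ~1,421 Btu/hr: raising the ceiling
# from R-11 to R-38 cuts conduction loss by roughly 71 percent.
```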
Which areas in a home would benefit the most from insulation?
Most heat lost in uninsulated homes is through the roof.
Because the attic is usually accessible, it is an area that is easy to insulate. If the attic has not been insulated, first install a vapor barrier directly above the ceiling, then place insulation up to an R-38.
Some types of insulation have a vapor barrier attached directly to them. This insulation should be installed so that the vapor barrier is toward the warm side of the house in winter.
If there is already insulation in the attic, don't install another vapor barrier over the old insulation. It is acceptable to mix types of insulation, such as adding cellulose over fiberglass batts.
Of equal priority to insulating an attic is to seal and insulate any exposed ductwork that runs through unheated areas, such as crawl spaces and attics. These ducts should be insulated with a minimum of an R-11.
If the ducts are used during the summer for central air conditioning, the insulation should have a good vapor barrier on the outside of the insulation. This will prevent condensation from forming on the cold duct due to the humid summer air.
The next priority is to insulate unheated crawl spaces either directly beneath the floor or on the foundation walls. If insulating below the floor, install the vapor barrier on the warm side in winter, or facing up.
Be sure that any plumbing in the crawl space is on the warm side of the insulation to keep the pipes from freezing.
Insulating crawl-space walls is appropriate only in unventilated crawl spaces. Insulation on these walls should run from the band joist down the foundation wall and extend at least 2 feet across the floor of the crawl space. The band joist is the area between the foundation wall and the floor of the room above the crawl space.
The dirt floor of the crawl space should be covered with a polyethylene film.
Insulating basement walls is the next priority and is just as important as crawl-space insulation. It is possible to add furring to the wall, insulate between the furring, and add a finished surface, such as wood paneling. Or, attach rigid foam directly to the basement wall and cover it with a noncombustible material, such as gypsum board.
Although not generally considered a do-it-yourself project, installing wall insulation can be very cost effective. This requires drilling through the siding or removing some of the siding and drilling through the sheathing under the siding. Knowledge of building construction is helpful to make sure that all wall cavities are filled with insulation. Wall insulation installed at the proper density and with no voids will not only significantly reduce conduction heat loss through the walls, but can reduce air leakage as much as 30 percent.
How does one achieve an R-38 in the attic near the edge of the roof?
The problem arises at the joint where the roof, wall, and ceiling come together.
Full-depth insulation may cut off continuous ventilation.
It is necessary to maintain one inch to one and one-half inch of air space over the insulation from the soffit area into the attic. Full-depth insulation to the outside face of the wall is desirable.
If this insulation is not firmly fixed or protected, it may be moved by winds and air pressure moving through the soffit vents. This may lead to moisture problems on the interior sheetrock finish.
The best solution to both problems is to use raised-heel roof trusses with sufficient depth over the wall for the necessary insulation. Regardless of the roof construction, the edge of the insulation over the wall should be protected by baffles, which are flush with the exterior face. The baffles should turn up and follow along the truss, maintaining a vent space under the sheathing.
This should prevent wind-driven movement of insulation and reduce the possibility of moisture problems at the ceiling perimeter.
Which is better for insulating attics, fiberglass or cellulose?
Both products are excellent insulating materials. Either can be used for insulating an attic, but, generally, cellulose is easier to install and is usually less expensive. Cellulose also has a slightly higher R-value per inch thickness and is more effective in reducing air leakage.
Some studies have shown that cellulose insulation retains its insulating value at lower temperatures when compared to fiberglass. Based on these points, cellulose is the preferred insulation for most attic arrangements.
However, be sure to seal all holes in the attic floor before beginning to insulate, regardless of which material is used.
Should I use a radiant barrier in the attic instead of conventional insulation?
No. In Kansas's winter climate, conventional types of insulation are necessary to cut heat loss from the interior of the house through the ceiling. Installing insulation properly and careful attention to air sealing will reduce air leakage through the ceiling.
A radiant barrier provides the greatest savings in the summer by reducing radiant heat transfer from a hot roof to the attic floor. However, radiant barriers generally have not proven to be cost-effective in Kansas's climate.
How can I insulate my floored attic?
One of the simplest methods is to drill holes in the flooring and then blow cellulose, mineral wool, or fiberglass into the opening. This method is like blowing insulation into walls. It is possible to use holes as small as 1 inch in diameter, but larger holes provide better coverage. For each joist cavity, drill at least three holes. Holes should be located at both ends of a joist cavity and in the middle.
Another approach involves opening the center section of the floor and then using an insulation blowing tube. This tube is inserted through the floor opening between the ceiling joists (attic floor joists). The tube should be long enough to reach the far end of the joist cavity.
Next, blow insulation through the tube to fill the far end of the cavity. When insulation stops flowing, withdraw the tube about 18 inches. The flow will resume as the tube is withdrawn. Continue the process until the entire cavity is filled. The blowing tube is typically a 2-inch diameter, clear vinyl tube that is attached to the insulation blower's regular tube.
How do I seal my attic access panel?
Many people place a single piece of sheetrock or a quarter-inch-thick plywood piece over the panel. But this is not an effective way to reduce heat loss or form a tight seal with the frame.
A better solution is to add insulation to the top (attic side) of the panel. The insulation can be either fiberglass batt or rigid foam, and it should be thick enough to equal the R-value of the attic insulation. If there is loose-fill insulation in the attic, some of it may spill into the home when the access panel is opened. The easiest way to avoid this is to build up a frame around the opening.
The frame can be made with plywood, lumber, or even heavy cardboard. Apply weather-stripping to this frame to reduce air leakage. The drop-in panel should be heavy enough to form a tight seal with an adhesive foam strip.
Finally, caulk the ceiling trim around the opening to further stop leakage.
How should I insulate a slab-on-grade floor?
A concrete slab floor should be insulated first at the edge of the slab where it is exposed to the outdoor air and then down the face to the frost line or below. A foam board type of insulation is most suitable, usually extruded polystyrene with enough thickness to achieve an R-value of at least 12.5.
Insulating beneath the floor depends on a number of factors. If the slab is to be covered with carpet or other insulating materials, insulation is not needed underneath. Definitely insulate under the slab if there are any buried or in-slab heating systems, and do so in consultation with the manufacturer's and installer's recommendations. If the slab area is small or exposed on two or more sides, insulating the sides and underneath will tend to keep the slab warmer.
If the slab is to be used for direct-gain passive solar storage, insulation will reduce the heat loss to the earth below and keep the floor more comfortable. Insulate wherever the sun will strike the floor and where desired for comfort.
In larger slabs, a 4-foot-wide band near the edge may be sufficient.
If the slab rests on damp, wet soil, it will tend to lose heat more rapidly and insulation will help retard this loss. In general, 1-inch thickness of polystyrene should be adequate for most installations.
How can I seal and insulate the opening for a whole-house fan?
The metal louvers under a whole-house fan offer little protection against heat and air loss to the attic in winter. Consider attaching an insulated panel directly below the louvers. To make this an easy, seasonal task, build a frame using 1 by 2-inch lumber to hold the insulated panel.
Cut the panel from five-eighths-inch rigid insulation board, and mount it in the frame with fabric or other decorative material. Four wing nuts mounted in the frame will hold the panel in place. Taping a sheet of plastic under the louvers will help stop air leakage but will provide little insulation.
Windows and Doors
What are some common window types and their characteristics?
U-value is a measure of a material's ability to transfer heat. A window with a low U-value is better than a window with a high U-value.
Most single-pane windows in a home probably have a U-value of about one. Adding another pane of glass (referred to as double-glazing) will lower the U-value to about 0.5. The technique of double-glazing creates an air space between the panes of glass. This air space reduces conductive heat loss through the window.
By adding yet another pane of glass (triple glazing), the U-value decreases to about 0.31.
The U-value quoted for window units is often the heat flow at the center of the glass, which is generally lower than the overall U-value of the window. The overall U-value of a window includes the glass or glazing, the frame and the sash.
One common method of reducing heat gain or loss through windows is by coating the glass with an invisible, heat-reflective material. This type of glass is called low-emissivity, or low-e, glass.
A double-pane window with a low-e coating has a U-value of about 0.36, which translates to 35 percent less heat gain or loss than conventional double-pane windows. Triple-pane, low-e window units are also available and have a U-value of approximately 0.25.
Another type of window that's available is one that is gas filled, usually with argon or krypton. These gases are more viscous, slower moving and less conductive than air, which reduces convective currents in the air space and lowers the heat transfer between inside and outside.
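To put these U-values in perspective, heat loss through a window can be estimated as U-value times area times the indoor-outdoor temperature difference. The short Python sketch below compares three of the glazings discussed above; the window size, temperature difference, and heating-season length are illustrative assumptions, not figures from this guide.

    # Seasonal heat loss through a window: Q = U * A * dT (Btu per hour).
    # All inputs below are assumed, for illustration only.
    area_sqft = 15.0      # a typical 3-by-5-foot window
    delta_t = 40.0        # average indoor-outdoor difference, degrees F
    season_hours = 4000   # approximate length of the heating season

    for label, u_value in [("single pane", 1.0),
                           ("double pane", 0.5),
                           ("double pane, low-e", 0.36)]:
        btu_per_hour = u_value * area_sqft * delta_t
        therms = btu_per_hour * season_hours / 100000
        print(f"{label}: {btu_per_hour:.0f} Btu/hr, {therms:.0f} therms per season")

As the output shows, moving from single-pane glass to a low-e double-pane unit cuts the conduction loss by nearly two-thirds.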
How can I choose a replacement window that will give good performance at a reasonable price?
The lowest U-value available within the budget is recommended.
U-values are a measure of how much heat is lost through the window and frame.
Windows are manufactured with different frame materials (wood, plastic, metal); have one, two or three glass layers; use special light-reflecting films; and use air or special gas fillings between glass panes. The possible combinations number into the hundreds.
Fortunately for consumers, the National Fenestration Rating Council now publishes its Certified Products Directory that lists U-values for windows.
The catalog is available online at http://www.nfrc.org/.
The Certified Products Directory allows comparisons of specific models from several manufacturers.
Detailed rating information is also attached to new windows on a temporary label. This label is designed to provide consumers, builders and code officials with energy performance information in a comparable, easy-to-read format. The temporary label is accompanied by a permanent label or marking somewhere on the product, usually in an area that is unseen when the window is closed.
Pricing information is available from suppliers.
What are low-emissivity windows, and what are their advantages?
Low-emissivity windows have a special coating on the glass that reduces radiant heat transfer, thereby increasing the window's insulating value.
Emissivity refers to a surface's ability to radiate energy and is expressed as a value between zero and one. The emissivity of clear glass is about 0.85. A low-emissivity coating can reduce that to about 0.15, reducing the U-value of a double-glazed window from 0.5 to almost 0.3.
This has the same U-value as triple glazing, but without the increase in weight or size and at much less expense. Low-emissivity coatings also reduce solar transmission. This is an advantage in summer, but a disadvantage for south-facing windows in winter.
The year-round benefits of low-emissivity windows outweigh any loss of winter solar heat gain, and these windows are appropriate for any orientation.
What is the advantage, and expected life, of purchasing gas-filled double-pane windows?
The advantage of having argon gas between the panes of glass is that argon transfers less heat than air does. Argon is denser and less conductive than air, which reduces heat transfer within the air space and gives the window a lower U-value.
Argon-filled glass windows have U-values ranging from 0.40 to 0.31, while air-filled windows have U-values of about 0.5. For homes with a significant amount of window area, about 25 to 40 percent of the house's square footage, this U-value difference can cut energy costs significantly.
Over time, argon gas may leak out of the space between the panes of glass. The amount lost depends on how well the window was manufactured and the quality of materials used. Argon leaks are usually caused by failure of the seals between the glass and the edge spacer. Also, some gas is lost because it diffuses through the seals. Even if the argon gas does leak, the window's thermal performance isn't affected much as long as there is no noticeable failure of the seal.
Tests have shown that if an argon-filled window leaks five percent of its gas each year, it will lose only 12 percent of its R-value after 20 years.
Does condensate on a double-pane window mean the seal has failed?
The location of the moisture indicates whether or not the seal has failed.
On a sealed double-pane window, the space between the panes is filled with a dry gas and may contain a desiccant, a material that absorbs moisture.
If the moisture is between the two glass layers, yes, the seal has failed. Contact the window supplier for a remedy.
If the moisture can be wiped from the room-side surface of the inner pane, the moisture is condensing from the room. On a double-pane window, this simply indicates high humidity - not a failed seal.
To avoid this condensation on windows, remove moisture from inside the home. This can be accomplished by using exhaust fans in the kitchen and bathroom.
What is movable insulation?
Movable insulation is a versatile window covering that allows beneficial heat gain during winter, and minimizes unwanted heat gain in summer.
Insulating windows can make a significant difference in energy bills, since windows are to blame for much of summer heat gain and winter heat loss. This is due to the low R-value of the glass pane.
R-value measures resistance to heat gain or loss.
A typical insulated wall has an R-value anywhere from 12 to 19, while a double-pane window has an R-value of about 2. By using movable insulation within the window frame, the R-value nearly doubles. This will help reduce a home's overall heating and cooling load.
Movable insulation is divided into two types: interior and exterior. Examples of interior movable insulation are thermal curtains, shades, shutters, and window quilts.
Shades and shutters keep out (or retain) the most heat, but also cost more than curtains and window quilts. Shades are most effective if they are properly sealed along the edges of the window. Interior shutters are usually made of polystyrene or a foam sheathing encased in wood or metal, and can triple the R-value of a window.
The most common type of exterior movable insulation is the shutter. Most people who use movable insulation place it inside their home. The advantages of interior insulation are protection from the weather and simplicity of operation.
Exterior movable insulation has advantages as well.
Exterior shutters provide additional security to a home and can reflect more sunlight into a home during winter months. They also do a better job of reducing solar load in the summer. However, shutters generally cost more than interior insulation, and are subject to constant weathering.
What is the best way to shade a window to keep out summer sun?
An exterior shading device is best because it stops the sun's heat outside the home.
Perhaps the ideal choice is natural vegetation. Properly positioned trees and shrubs can provide the most effective shading to match cooling season demand and will enhance the local climate of the building.
Adjustable horizontal or vertical louvers, installed on the outside of the window, provide the most complete shading but cost more than most other sun control devices. Awnings, generally the most widely used exterior sun control device, provide good shade while permitting full ventilation. Awnings should be opaque and vented at the top to prevent heat buildup underneath.
Reflective solar screens stop between 30 and 70 percent of the light and heat outside a window without stopping ventilation. Solar screens have the advantage of being removable in the winter to allow the sun's heat into the home.
Window films and aluminum foil taped to windows are inexpensive interior treatments but less effective than exterior devices. White or light-colored roller shades and drapes help reduce incoming sunlight and heat.
Dark shades or drapes and venetian blinds are the least effective sun control devices.
What types of doors are the most energy efficient?
The most energy efficient doors are those that seal tightly when closed.
This requires a quality weather-stripping system and a door that resists warping. The insulating value of the door is also important.
Metal and fiberglass doors are available with urethane foam cores that provide R-values up to 4.4, compared with R-2.1 for a solid-wood door. A metal door has the added advantage of using magnetic gasket weatherstripping that works much like the seal on a refrigerator door.
It's important to keep door-related energy costs in perspective.
In Kansas, a typical solid-wood door with average-fitting weather-stripping contributes only about $9 a year to heating costs.
What plants are best for shading west windows?
Plants are useful because they can provide shade during the time of day and year when overhangs are losing their effectiveness.
Some that have been suggested include Virginia creeper, a number of ivies, and euonymus. The local county extension horticulturist or a local nursery will know exactly which plants do best in different areas.
Fruit trees also can be trained to grow along a trellis. Some of the most useful are trellises made of wood framing and weather-resistant cord or wire. The wood should be cedar, redwood or pine that has been thoroughly sealed and painted. The trellis can be fan-shaped or rectangular.
Avoid using black wire for the cross supports because this can acquire so much heat from the sun that it can burn young vines.
Also, keep the trellis more than 1 foot from the wall being shaded, or heat reflected from the house may injure the plants.
Which is the better method for insulating basement walls: exterior or interior insulation?
Both methods can be used effectively to reduce heat loss, and each has advantages and disadvantages.
The preferred method, from a thermal standpoint, is exterior rigid foam board insulation. It allows the concrete to interact thermally with the interior and helps reduce temperature fluctuations.
Exterior insulation for a basement wall must be protected from the sun and physical damage.
A major disadvantage to exterior insulation is that it provides a hidden entry path for termites. For this reason, exterior insulation should only be used in areas where the threat of termites is low.
Interior basement wall insulation is less costly, easier to install and provides a finished living space with room in the walls for utilities. Also, most builders are familiar with the techniques.
How deep should foundation insulation extend below grade?
Insulation should extend all the way to the footing. A heated basement will always lose heat through its walls, no matter how deep they are.
Although heat loss to the soil near the bottom of the wall is not great, heat is conducted up the wall to colder soil near the surface. Insulating the entire wall reduces this bypass heat loss.
Also, keep in mind that the cost of the additional insulation is relatively small compared to the cost of framing and finishing the wall.
What is the R-value of soil?
The resistance of soil to heat flow (R-value) varies a great deal, depending on the type of soil and the moisture content. In general, soil is not a good insulator.
For a fine-grained soil with 20 percent moisture content, the R-value is about 1 per foot, roughly the same as concrete.
Because of this low R-value, it is important to insulate foundations, including slabs-on-grade, crawl space walls and full basements. Insulating the first few feet below grade is the most critical area, but we recommend full-depth insulation.
Where are the most critical air leaks in a home?
These are likely to be found in the attic, as holes around plumbing and electrical lines, and other gaps in framing. If this is an existing home, move the insulation out of the way to find many of these. Using a foam sealant, regular caulk and small pieces of foam board, seal all the penetrations possible. Look for other openings in both exterior and interior walls, including plumbing openings behind bath and kitchen cabinets.
Residents should be sure to replace the insulation and avoid leaving gaps between fiberglass batts. In homes with a basement, owners should look for, and seal, the same kind of holes in the ceiling and floor framing that open into the interior cavities of the house.
After sealing is done, consider adding insulation to the attic. An attic should be insulated to an R-38, or about 12 inches of fiberglass or cellulose. When adding attic insulation, cellulose can be blown directly on top of either fiberglass or cellulose. Many lumberyards will loan the equipment when the insulation is purchased from them.
For additional help, a do-it-yourself energy audit is available online at the Energy Extension Web site at http://www.oznet.ksu.edu/dp_nrgy/ees/. Some may also choose to hire a certified Kansas home energy rater. Call Energy Extension at 1-800-KSU-8898 for a list of professionals.
What is an air barrier, or house wrap?
These products are primarily designed for use in new construction as a method of reducing air infiltration. They are rolled sheet goods usually installed with staples or tape over the exterior sheathing.
Some brand names are Tyvek, Rufco-wrap, Barricade and Airtight-wrap. Their intent is to minimize the passage of air, while still allowing water vapor through the exterior skin of the building.
Three basic types currently are available. Tyvek is a spun-bonded polyethylene. This is a mat of polyethylene fibers spun-bonded in a patented process. The second type is perforated polyethylene film. The third type is spun-bonded polypropylene, a different type of plastic.
Each of these can be effective air barriers if installed according to its manufacturer's recommendations.
How effective is covering windows with plastic at sealing a window?
Properly applied, a plastic covering can make a window almost airtight. This is one of the most effective ways to seal a leaky window.
A storm window is designed more for convenience and appearance than air tightness. Even the highest quality storm windows allow air to leak around the edges of the sashes. Storm windows typically reduce air leakage through primary windows by about half.
Window plastic can be installed on the inside window surface or on the outside. It will be more difficult to maintain window plastic applied to the outside. Cold temperatures make the plastic brittle, and winds whip the plastic in and out, reducing the seal's effectiveness and sometimes even tearing the plastic.
Newer plastics are very clear when stretched tight, so there is no need to worry about window coverings detracting from a home's appearance. Special shrink-film plastic can be heated with a blow dryer to shrink the window film and eliminate all wrinkles, making the plastic almost invisible.
For maximum leak reduction, it is important to adhere the plastic to the frame surrounding the window rather than to the window sash.
Can I close some of my attic vents during the winter?
Yes. However, assuming that an attic is properly insulated, there isn't much advantage in closing the vents in winter.
Because insulation is typically in the floor of the attic, the attic temperature will be close to that of the outdoor surroundings. Closing some vents won't significantly change this temperature.
An attic requires a certain amount of ventilation during the winter for moisture removal. This ventilation area is about half that required during the summer.
An attic will require more ventilation if significant moisture sources exist, such as kitchen or bathroom vents.
How important is crawl space venting?
Crawl space ventilation is required by code in many areas. However, a growing body of research indicates that it often is not effective in reducing moisture levels in crawl spaces.
The two most important methods to deter moisture accumulation in crawl spaces are adequate drainage away from the foundation, and a moisture barrier over the soil in the crawl space.
Ideally, grade the soil away from the foundation walls with a minimum 5 percent slope for a distance of 15 feet from the foundation. Dense ground covers like healthy turf grass also help surface run-off drain away from the foundation.
However, even soil that feels dry inside a crawl space can be a significant source of moisture. Keep this moisture in the soil and out of the crawl space by covering the ground with a six-mil plastic vapor barrier. Overlap seams in the plastic a minimum of six inches and extend the plastic up the foundation walls six to 12 inches. Use soil, sand or rocks to weight the plastic down around the perimeter and over seams.
Is there a simple rule for sizing a kitchen or bathroom exhaust fan?
Yes, but the rule for the kitchen is different than for the bathroom.
Exhaust fans are rated by their air-moving capacity in cubic feet per minute, or CFM. The rules of thumb relate the required CFM to the volume of the space to be ventilated.
To size an exhaust fan for a kitchen, multiply the volume in a kitchen (length by width by ceiling height) by 0.20. A 12 by 12 foot kitchen with an 8-foot ceiling would require an exhaust fan rated at 230 CFM.
Kitchen exhaust fans should move at least 200 CFM as a practical minimum. A bathroom exhaust fan should move 0.13 CFM per cubic foot of space, with a minimum of 50 CFM.
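The two sizing rules above reduce to a short calculation. This Python sketch applies the 0.20 factor for kitchens and the 0.13 factor for bathrooms, along with the 200 and 50 CFM practical minimums; the room dimensions are sample inputs only.

    # Size an exhaust fan from room volume using the rules of thumb above.
    def fan_cfm(length_ft, width_ft, height_ft, room="kitchen"):
        volume = length_ft * width_ft * height_ft
        factor, minimum = (0.20, 200) if room == "kitchen" else (0.13, 50)
        return max(volume * factor, minimum)

    print(fan_cfm(12, 12, 8, "kitchen"))   # 230.4 CFM, matching the example above
    print(fan_cfm(8, 5, 8, "bathroom"))    # small bath: the 50 CFM minimum applies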
Exhaust fans must be vented to the outside, not into an attic or crawl space. The venting duct should be as short as possible and have few right-angle bends. If flexible ducting is used, pull it tight between the exhaust fan and the vent terminal while avoiding sharp bends, then cut off any excess material. Improper ducting can reduce exhaust fan air flow by 50 percent or more.
A backdraft damper is recommended to prevent cold air from entering through the exhaust fan when it is shut off.
How do I select a quiet exhaust fan?
A sone is a subjective unit of loudness. Sone ratings for exhaust fans typically range from a low of one to a high of seven. The smaller the number, the quieter the fan.
However, the quietest fans move the least amount of air.
Don't sacrifice adequate air-moving capacity for quietness. Choose a fan that can do the job.
Once the needed fan capacity has been determined, compare sone ratings on fans of equal capacity and choose the quietest one.
Can I vent my bathroom and kitchen exhausts into the attic?
Although the practice is quite common, direct venting to the outside is the recommended method. A well-ventilated attic can easily handle the moisture diffused through the ceiling, but it may be overwhelmed by the moisture from a steamy bathroom or busy kitchen.
The greatest danger is that moisture will condense and freeze on the cold underside of the roof deck near the exhaust outlet. If frost accumulates, it can result in enough water to drip down onto the insulation and ceiling.
Through-the-roof vent kits are available, and they are relatively easy to install in composition shingle roofs. Carefully installed, they are not likely to leak.
Will a wall with a five-eighths-inch thick, foil-faced sheathing on the outside and a 6-mil plastic vapor barrier on the inside have moisture problems?
There is the potential for a moisture problem, but the likelihood of this depends on the quality of the installation.
If warm, moisture-laden air from inside a home gets into the wall, and the inside face of the foil is cool enough, condensation could result. If the inside vapor barrier is carefully installed and sealed to prevent air leaks, this potential is significantly reduced.
The other factor affecting the potential for moisture problems is the temperature of the inner foil face. Because the sheathing has a high R-value, there's less chance the foil face will be cold enough to cause condensation.
What is a vapor barrier?
A vapor barrier is an impermeable material, typically plastic or asphalt paper, attached to insulation.
The purpose of a vapor barrier is to prevent moisture from passing through the insulation and condensing on the cold outer surfaces. A vapor barrier has two main functions: keeping moisture inside a home, and preventing it from condensing in the insulation.
In new construction, a sheet of polyethylene film is applied to the studs before installing the drywall. Always apply the vapor barrier on the warm side of the wall, ceiling or floor.
If insulation is to be blown into an attic, lay down the sheet of polyethylene film first, or attach it before the sheetrock is added.
Everyday household tasks such as washing, cooking and bathing release moisture inside the home. A vapor barrier slows the movement of this moisture from the home's interior to the outside, raising indoor humidity levels and preventing condensation in the wall or attic.
Will installing a vapor barrier make the walls sweat?
No, but it's easy to confuse the installation of vapor barriers with moisture problems because vapor barriers do affect indoor relative humidity.
The purpose of a continuous vapor barrier is to prevent moisture from entering wall cavities and attics, where it can condense on cold surfaces and cause structural damage.
The vapor barrier also reduces air leakage. Moisture produced by household activities accumulates quicker because of the reduced airflow, resulting in a higher relative humidity. If the humidity gets high enough, windows and other cold surfaces begin to sweat, or condense moisture.
Condensation problems can be more serious during a new home's first winter. This is due to extra moisture stored in drywall from joint compound and paint. Use of exhaust fans during periods of peak moisture production, such as while showering, bathing, cooking and wet cleaning can prevent or control moisture problems. Construction-related moisture problems will diminish with time as finish coatings cure. However, additional ventilation may be necessary during a new home's first winter.
Lighting and Appliances
Are $10 compact fluorescent lamps cost-effective?
Yes, cost-conscious consumers know these lamps can save energy and money, and they last a long time.
A standard 60-watt lamp lasts only about 1,000 hours. A 15-watt compact fluorescent lamp with the same light output will last more than 10,000 hours and use much less electricity. To get 10,000 hours of use from standard incandescent lamps, it would take 10 bulbs at about 50 cents each, and they would consume more than $40 in electricity, for a total cost of more than $45. By using a compact fluorescent, the lamp cost might be $10, but it would use only about $10 worth of energy, for a total cost of $20.
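The comparison above is easy to reproduce. In the Python sketch below, the electricity price is an assumption chosen to be consistent with the dollar figures quoted; actual rates vary.

    # Cost of 10,000 hours of light: incandescent vs. compact fluorescent.
    HOURS = 10000
    PRICE_PER_KWH = 0.07   # assumed electricity price, dollars per kWh

    def lifetime_cost(watts, lamp_life_hours, lamp_price):
        lamps_needed = HOURS / lamp_life_hours
        energy_cost = watts / 1000 * HOURS * PRICE_PER_KWH
        return lamps_needed * lamp_price + energy_cost

    print(f"incandescent: ${lifetime_cost(60, 1000, 0.50):.2f}")           # about $47
    print(f"compact fluorescent: ${lifetime_cost(15, 10000, 10.00):.2f}")  # about $20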
These lamps are best used in fixtures that get used a lot or where the lamps are difficult to change.
What is the best exterior lighting source?
The best type of lighting depends on the desired use.
For example, low-pressure sodium lamps have the highest lumen per watt output (amount of light produced per watt of energy consumed) out of all light sources. However, the distinct yellow color of low-pressure sodium lamps limits their use to area lighting, such as parking lots and security lighting.
High-pressure sodium lamps have improved color. They are not as efficient as low-pressure sodium lamps but are still effective light sources and are well suited for general-purpose lighting, parking, or as street lamps.
Metal halide lamps are the preferred light source for outdoor sports activities. The light produced by these lamps has good color and looks more natural than the yellow light of sodium lamps. The output and efficiency of metal halides is lower than either of the sodium lamps but much improved compared to the less expensive mercury vapor lamps.
Are mercury yard lights efficient?
Mercury vapor lamps are more efficient than incandescent lamps, but to substantially improve the efficiency of outdoor lighting, use high-pressure sodium lamps.
Lighting efficiency is a measure of the amount of light from a lamp, in lumens, divided by the power to the lamp, in watts. A 100-watt mercury lamp has an efficiency of 38 lumens per watt. The efficiency of an incandescent lamp is about 16 lumens per watt.
Sodium lamps producing about the same light as a 100-watt mercury vapor lamp have an efficiency of 70 lumens per watt, more than four times more efficient than incandescent lamps, and twice as efficient as mercury vapor.
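Efficacy comparisons like these come down to lumens divided by watts. A minimal sketch, using lumen outputs consistent with the efficiencies quoted above (the exact lumen figures are illustrative):

    # Compare light sources by efficacy (lumens per watt).
    lamps = {
        "incandescent": (1600, 100),          # (lumens, watts), illustrative
        "mercury vapor": (3800, 100),
        "high-pressure sodium": (3800, 54),   # same light output as the mercury lamp
    }

    for name, (lumens, watts) in lamps.items():
        print(f"{name}: {lumens / watts:.0f} lumens per watt")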
The smallest sodium lamp is a 35-watt lamp. It will produce more light than a 100-watt incandescent. It takes about five minutes for a sodium lamp to brighten, so they shouldn't be used where they will be turned on and off frequently.
Can a photocell be installed on my outside lamp?
Yes, a photocell can be installed. The switch is about $20. It should be mounted near the lamp but in a location where the light won't shine on the sensor.
Does it cost more to turn a light on and off rather than just leaving it on?
Turning lights off when they are not needed will always save energy. The momentary power surge caused by turning a light on is so small and so brief, it won't even register on an electric meter.
However, frequent switching of fluorescent lamps will shorten their life, eating into the savings of turning them off. Even so, fluorescent lamps need only be off a short period of time for the energy savings to exceed the cost of reduced lamp life. Thus, if planning to be out of a room for more than about 15 minutes, shut fluorescent lamps off.
Practically speaking, incandescent lamps are not affected by frequency of switching. Shut them off whenever they are not needed, no matter how short the time period.
What are the advantages of halogen lamps compared to regular incandescent lamps?
Halogen lamps have a longer life, better color and the light output does not depreciate with lamp age.
Traditional incandescent lamps darken with age. Halogen lamps employ a special gas mixture, higher temperatures, and special glass to improve lamp life and eliminate lamp darkening.
In addition to longer life, halogen lamps offer very clean, bright white light, especially useful for retail display. The lamps are also used in reading lamps or other applications where light quality is important. However, halogen lamps do not have a second glass envelope that limits bulb surface temperature. Therefore they should be used with extreme caution. Bulb surface temperatures of up to 1,100 degrees Fahrenheit are possible.
Some halogen lamps are slightly more energy efficient than regular incandescent lamps. However, if lower operating costs are the motive, consider using compact fluorescent lamps. Several manufacturers have announced plans for an energy-efficient torchiere. Contact EPA Energy Star at (202) 233-9841 for a list.
What should I look for when buying a new water heater?
In general, about 20 percent of the energy consumed by an average home is for water heating. Water heaters have improved significantly in the last 12 years and are much more energy efficient, primarily due to more efficient combustion for gas models and added insulation.
Because the average life expectancy of a water heater is about 13 years, it is important to purchase an energy-efficient model, since reduced energy consumption means lower energy costs over that lifetime.
Most water heaters and other home appliances come with a large yellow sticker called the ENERGYGUIDE. This sticker compares average yearly energy operating costs for different models, telling consumers which ones are expected to cost the least during their lifetimes.
Also, most water heaters come with an Energy Factor (EF) value, which is listed on a separate tag beside the ENERGYGUIDE. The EF is a decimal value between 0.4 and 1.0 and is the amount of energy supplied to the heated water, divided by the water heater's total energy consumption. Gas water heaters have EF values between 0.5 and 0.7, while electric ones range from 0.75 to 0.95. Minimum EF values range from 0.51 to 0.56 for gas units, depending upon the size of storage tank, to an average of 0.89 for electric ones. Recommended EF values are 0.61 for gas units and 0.92 for electric water heaters.
Water heaters with higher EF values generally cost more initially, but the yearly energy savings will more than make up for the higher purchase price over the lifetime of the unit.
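Because the EF is simply delivered energy divided by consumed energy, it can be used to compare annual operating costs directly. In the Python sketch below, the yearly hot-water load and the fuel prices are assumptions for illustration only.

    # Annual water-heating cost from an Energy Factor (EF) rating.
    def annual_cost(ef, price_per_million_btu, load_btu=16_000_000):
        # load_btu: assumed yearly energy delivered to the hot water
        consumed_btu = load_btu / ef
        return consumed_btu / 1_000_000 * price_per_million_btu

    # Assumed fuel prices: $8 per million Btu for gas, $22 for electricity.
    print(f"gas, EF 0.54: ${annual_cost(0.54, 8):.0f}")       # minimum-efficiency unit
    print(f"gas, EF 0.61: ${annual_cost(0.61, 8):.0f}")       # recommended EF
    print(f"electric, EF 0.92: ${annual_cost(0.92, 22):.0f}") # recommended EF

Comparing the first two lines shows how a higher-EF gas unit can recover its extra purchase price over a 13-year service life.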
How do you select the proper size for a water heater?
The size or capacity of water heater needed is based on the maximum amount of hot water consumed during any one-hour period. This is called the peak-hour demand.
To determine the peak usage hour for a family, list all the water consuming activities during that period.
Typical hot water consumption in gallons per usage for various activities is as follows: shower, 20; bath, 20; shaving, 2; hands and face washing, 4; hair shampooing, 4; hand dishwashing, 4; automatic dishwasher, 14; food preparation, 5; automatic clothes washer, 32.
The peak for one family might occur in the morning and consist of three showers (20 gallons each, 60 gallons total), hands and face washing (4 gallons), shaving (2 gallons), and food preparation (5 gallons), for a total of 71 gallons.
A water heater can provide more than its storage capacity during the first hour of operation, because it can also heat the water during this period. This capacity, the total gallons of hot water the heater provides during this first hour, is referred to as the first-hour rating.
In the sample above, a water heater with a first-hour rating of at least 71 gallons would be required.
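A worked version of this calculation, using the per-use figures listed above, might look like the following sketch. The morning routine is the one from the example.

    # Estimate peak-hour hot water demand to size a water heater.
    GALLONS_PER_USE = {
        "shower": 20, "bath": 20, "shaving": 2,
        "hands and face washing": 4, "hair shampooing": 4,
        "hand dishwashing": 4, "automatic dishwasher": 14,
        "food preparation": 5, "automatic clothes washer": 32,
    }

    morning = [("shower", 3), ("hands and face washing", 1),
               ("shaving", 1), ("food preparation", 1)]

    peak = sum(GALLONS_PER_USE[activity] * count for activity, count in morning)
    print(f"peak-hour demand: {peak} gallons")  # match this to a first-hour rating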
Residential water heaters are most commonly available in 20, 30, 40, and 50-gallon capacities with first hour-ratings ranging from 22 to 100 gallons. Gas and propane water heaters typically have higher first-hour ratings than electric heaters of the same storage capacity.
How can I reduce my water heating costs?
Several simple things can be done to decrease the amount of energy used to heat water in a home. Water heaters consume about 20 percent of the energy an average home uses, with more than one-third used in showering and 25 percent to wash clothes.
Implementing certain energy-efficient measures, even small ones, can make a noticeable difference in the heating bill.
For example, water heater temperatures should be set to about 120 degrees and definitely no more than 130 degrees. In general, a 10-degree reduction in water temperature has been shown to provide an eight percent water-heating energy savings.
Another important and effective energy-saving measure is to wrap the water storage tank with an R-12 insulation blanket, especially if the water heater is an older model. Consult the manufacturer's equipment guide to make sure an insulation wrap is recommended; it may not be on some newer models. Also, insulate all exposed hot water pipes with either foam or fiberglass wrap.
Installing low-flow showerheads saves money through reduced water usage and saves energy as well.
Finally, when the time comes to purchase a new clothes washer, selecting one that is energy efficient will also save on water heating costs.
These energy-saving tips cost very little and can noticeably reduce both the energy used to heat water and the monthly utility bill.
What can you tell me about tankless water heaters?
Tankless, or demand, water heaters don't have storage tanks, so they heat water as it is used, on a demand basis.
Because there isn't a storage tank, this type of water heater can save from 10 to 20 percent on the cost of heating water.
A family of four uses about 100 gallons of hot water a day. During the course of a year, the cost to heat this amount of water will vary from $90 to $700, depending on the price of energy. If fuel prices are high, the savings gained from a tankless water heater will be significant. Tankless heaters are available in either point-of-use or central styles.
Point-of-use heaters are installed near each area that requires hot water. This minimizes plumbing for new construction. The other type, a central tankless heater, supplies water for the entire house. Tankless water heaters generally cost $200 to $500 more than conventional water heaters. While this may seem like a large premium to pay, the fuel savings may justify the additional cost during the course of just a few years.
What should I do to keep my water heater operating at maximum efficiency?
As with any heating or cooling device, regular maintenance of water heaters goes hand-in-hand with efficiency and safety. Follow these three steps to assure the water heater is giving maximum efficiency for minimum dollars.
1. Every two months, connect a hose to the bottom drain. Open the valve all the way, letting the water flush through. Be careful: this is hot water! This removes sediment, which reduces heating efficiency.
2. Place a bucket under the temperature and pressure (T-P) relief valve discharge, located on the top or side of the heater. Carefully lift the lever -- again, the water surging out will be hot. The T-P valve is a safety valve designed to prevent the tank from exceeding safe temperature and pressure levels. This test assures that sediment is not blocking the T-P valve.
3. If the unit is gas-fired, annually inspect the heater's burner area, checking for dirt or water. If the area is dirty, shut off the pilot and clean the burner with a shop vacuum. Remember to relight the pilot. If there are signs of leaks, the water heater will probably have to be replaced soon.
If the water heater is several years old and the bottom drain and T-P valve have never been checked, they may not seal properly once opened. Replace either valve if it does not seal tightly after operation.
Should I use a water heater insulation blanket with my new water heater?
Residential water heaters must all meet minimum efficiency standards. For example, a 40-gallon, gas water heater must have an energy factor (EF) of at least 0.54, while an electric water heater must have an EF of at least 0.89.
While this is a considerable improvement compared to heaters marketed just a few years ago, there are water heaters on the market with EF ratings in the mid-0.60s or higher for gas and the mid-0.90s for electric.
If the existing water heater is on the low end of the efficiency rating, then it is still possible to reduce fuel cost effectively by adding an insulation blanket. However, if the water heater is on the high end of the efficiency range, then additional insulation will probably not be of much benefit.
Can you vent a water heater to an old masonry chimney?
The National Fire Code does not specifically prohibit the use of masonry chimneys with modern gas appliances. However, it requires the chimney to be lined with an approved material.
Many old masonry chimneys are not lined. Venting gas appliances into unlined chimneys could cause drafting problems for the appliance, as well as deterioration of the masonry.
It is recommended that gas appliances be vented through a properly sized and designed chimney. Check with local building code officials for their specific requirements.
Will a ceiling fan help save energy?
A ceiling fan saves energy primarily by enhancing comfort in the summer. The amount saved will depend on how much less an air conditioner is used. A fan creates air movement that can help the room feel cooler at higher air temperatures.
Research has shown that moving air can compensate for a four-degree increase in air temperature with no perceived loss of comfort. This means someone can be as comfortable at 82 degrees with a fan moving air as at 78 degrees with no air movement.
For each degree increased on the thermostat setting in the summer, expect to save three to four percent on the cooling bill. So, if someone operates a ceiling fan and raises the thermostat setting four degrees, 12 to 16 percent will be saved.
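Put as arithmetic, the savings scale directly with the thermostat change. A minimal sketch, assuming the midpoint of the three-to-four percent figure above and a sample cooling bill:

    # Cooling-bill savings from raising the thermostat while running a fan.
    def cooling_savings(degrees_raised, annual_cooling_cost=300,
                        fraction_per_degree=0.035):
        # annual_cooling_cost is an assumed example bill, in dollars
        return annual_cooling_cost * fraction_per_degree * degrees_raised

    print(f"${cooling_savings(4):.0f} per year on a $300 cooling bill")  # about $42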
Keep in mind the thermostat must be kept at the higher setting to achieve the expected savings. And, in order to be comfortable, a fan may be needed in each room of the house. Install a ceiling fan in the most frequently occupied room, such as the family room, and use several portable fans to move air between rooms. Any type of fan can enhance comfort in summer, not just a ceiling fan.
In the winter, ceiling fans recirculate warm air from the ceiling to the floor, but the energy savings is not significant, especially in homes with forced-air heat or ceilings lower than 12 feet. When a ceiling fan operates in winter, the air movement, even when directed upward, often causes discomfort, so the thermostat may need to be turned up.
Therefore, don't operate the ceiling fan in winter.
Can you tell me the difference between an attic fan and a whole-house fan?
An attic fan ventilates only the attic by drawing in air through the attic vents. It is installed in the roof or gable and turns on whenever the attic reaches a preset temperature.
Research shows that any savings in air-conditioning costs because of an attic fan generally are offset by the cost to operate the fan.
A whole-house fan ventilates the house and uses the attic vents only for discharging the air. The fan is located in the ceiling between the occupied space and the attic. It cools the house by pulling in cool outside air through open windows.
A whole-house fan can save a significant amount of energy by reducing the need for air conditioning when outside temperatures and relative humidities are in the comfort range.
What should I consider before purchasing a whole-house fan?
Several factors must be considered before making such a purchase. First, make sure a whole-house fan is appropriate for the lifestyle and climatic location.
Cooling with ventilation works best in climates with large day-night temperature differences and relatively low humidity. This type of climate is more characteristic of western Kansas than eastern Kansas. Also, be willing to use the fan on a regular basis in lieu of air conditioning to achieve a significant savings.
If a whole-house fan still makes sense, determine the size of fan the house needs. For a whole-house fan to ventilate effectively, it should make 40 air changes an hour. This means it must move two-thirds of the house volume in one minute.
Determine the volume of the house by multiplying floor area by ceiling height. Then, select a fan that has a CFM (cubic feet per minute) rating of two-thirds the house's volume. If the house is large with several floors, consider sizing the fan for just one floor.
Second, determine where to install the fan. Whole-house fans are usually mounted horizontally in the ceiling between the attic and the top floor. If the model chosen discharges through the attic, allow enough vent area for the air to escape without building up pressure. It is recommended to have one square foot of open vent area for every 750 CFM of the fan's rated air-moving capacity.
For example, a fan rated at 4,500 CFM needs six square feet of open vent area. Remember that most attic vents have insect screening, which cuts the effective area by about 50 percent.
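These sizing rules combine naturally into one calculation. The Python sketch below sizes the fan at two-thirds of the house volume per minute and then figures the open vent area, doubling it when the vents are screened; the house dimensions are sample inputs.

    # Size a whole-house fan and its attic vent area.
    floor_area_sqft = 1500
    ceiling_height_ft = 8

    volume = floor_area_sqft * ceiling_height_ft
    fan_cfm = volume * 2 / 3                 # two-thirds of the volume per minute
    vent_area = fan_cfm / 750                # 1 square foot per 750 CFM
    screened_area = vent_area * 2            # insect screening halves effectiveness

    print(f"fan size: {fan_cfm:.0f} CFM")
    print(f"open vent area: {vent_area:.1f} sq ft "
          f"({screened_area:.1f} sq ft if vents are screened)")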
Whole-house fans are available in direct-drive and belt-drive models. Direct-drive models have the fan mounted directly on the motor shaft. They are usually quieter and require less maintenance than belt-drive models.
Belt-drive models often use less energy per CFM of capacity than do direct-drive because the fan motor is matched more closely to the optimum fan speed. Also, belt-drive models are usually available in larger sizes.
When installing a whole-house fan, consider a variable-speed controller and a timer. The variable speed controller allows operation of the fan at different speeds, depending on outdoor temperature. The timer allows someone to turn on the fan in the evening, and then set it to automatically shut off after a certain time.
Finally, seal off the fan during the winter months to eliminate the significant amount of heated air that can be lost through the fan louvers. The simplest method is to install a whole-house fan weatherization kit available at many hardware and discount stores. The kit provides a heavy clear plastic cover and self-adhesive plastic channels to hold the plastic. The channels and plastic are applied to the house side of the fan, making seasonal installation and removal convenient.
What can you tell me about the new front-loading washing machines?
Front-loaders have been in laundromats and in Europe for years. Their new appeal here in the United States is a result of their reduced use of water and energy.
A study done by the Department of Energy (DOE) in Bern, Kan., showed water consumption fell from 41.5 to 25.8 gallons per load with use of a front-load machine.
The study was done in Bern partly because it had a chronic water shortage, and the DOE wanted to determine if switching to a new style of washer would help alleviate the problem. The town's water usage dropped 50,000 gallons per month.
In addition, energy used to heat the water is also reduced. If water is heated with electricity, the savings would be in the range of $15 to $25 per year.
Front-loading washers are more expensive. The three U.S.-manufactured washers start at about $700, about $200 more than the better top-loading washers.
What can the ENERGYGUIDE tell me about purchasing a new refrigerator?
The ENERGYGUIDE label for refrigerators shows the estimated annual cost of energy to operate the appliance.
The figure is based on the national average rate for electricity, or about 8.5 cents per kilowatt-hour.
If the electric rate is higher than this, the cost to operate the refrigerator will be more than the price on the label; if the cost of electricity is lower, then the operating costs will be lower as well.
When purchasing a new refrigerator, it may be advantageous to buy one that has a higher initial cost but a lower annual operating cost; the energy savings over the life of the appliance can more than offset the difference in purchase price.
Once purchased, keep the refrigerator operating in top condition by vacuuming the coils on the backside or bottom once a year. The buildup of dirt reduces heat transfer and lowers efficiency.
What is the cost of operating a home computer system?
At the average Kansas electric rate of 7.9 cents per kilowatt hour (kwh), a personal home computer system consisting of a processor, video display monitor, and printer will cost about 1.2 cents per hour of operation.
The energy use of each of the components per hour is processor, 30 watts; video monitor, 45 watts; and printer, 75 watts.
Actual energy use will vary with the make and model of computer. A home computer system used eight hours a day, five days a week, would cost $2.11 a month to operate at 7.9 cents per kwh.
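The same per-hour arithmetic generalizes to any set of components. A short sketch, using the wattages and electric rate quoted above; the 22 weekdays per month is an approximation:

    # Hourly and monthly cost of running a home computer system.
    watts = {"processor": 30, "video monitor": 45, "printer": 75}
    rate = 0.079                  # dollars per kWh, the average Kansas rate cited

    kw = sum(watts.values()) / 1000
    cost_per_hour = kw * rate
    hours_per_month = 8 * 22      # eight hours a day, about 22 weekdays a month

    print(f"{cost_per_hour * 100:.1f} cents per hour")
    print(f"${cost_per_hour * hours_per_month:.2f} per month")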
When buying a new computer system or component, look for the Energy Star logo, which indicates that energy-saving features have been incorporated into the design of the system or component.
What should I do with my humidifier during the summer?
Clean humidifiers and store them dry during the summer months.
If it is a room humidifier, simply drain the water.
Clean out scale with a mild detergent and inspect the media element and clean or replace it if necessary. The media element is the surface, such as a foam pad or metal grid, which is kept wet to allow for evaporation.
Always unplug the humidifier before cleaning. If storing a central humidifier, shut off the power and water to the unit. Wash any scale or debris from internal parts. Again, clean or replace the media element before using the humidifier next winter.
Space Heating and Cooling
How much will I save on my heating costs if I replace my furnace with a new, high-efficiency furnace?
A new high-efficiency furnace will have an efficiency of about 95 percent.
The amount saved will depend on how efficient the existing furnace is and how much is spent on heating. If the unit is more than 10 years old, it is difficult to estimate its efficiency without an on-site inspection.
However, there are some clues to the efficiency.
First, look at the flue. If it is made of plastic (PVC) pipe, it is already a high-efficiency furnace. Low flue-gas temperatures in high-efficiency furnaces (also known as condensing furnaces) allow for the use of PVC flues. If the flue is metal, and the unit is more than 20 years old, it probably is about 65 percent efficient or less and it is possible to save about 30 percent with an upgrade to a high-efficiency furnace. If the existing furnace is between 10 and 20 years old, its efficiency is around 75 percent and a high-efficiency furnace will save about 20 percent. If the unit was built after 1990, it will have a minimum efficiency of 78 percent and savings of about 18 percent.
To estimate the savings in heating costs, total the gas bill for a year. Subtract 12 times the July gas bill to remove the amount spent on water heating. What is left is the amount spent on heating. Multiply the existing heating costs by the percentage savings possible from above to estimate the savings.
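That estimating procedure is easy to express as a calculation. The sketch below follows the steps in the answer; the bill amounts are sample inputs, and the savings fraction comes from the age-based estimates above.

    # Estimate annual savings from a high-efficiency furnace upgrade.
    def heating_savings(annual_gas_bill, july_gas_bill, savings_fraction):
        # Subtracting 12 times the July bill removes water heating and other
        # non-heating gas use, leaving the amount spent on space heating.
        heating_cost = annual_gas_bill - 12 * july_gas_bill
        return heating_cost * savings_fraction

    # Example: $900 total for the year, $20 July bill, furnace over 20
    # years old (about 30 percent savings).
    print(f"${heating_savings(900, 20, 0.30):.0f} per year")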
What does it mean if a furnace has sealed combustion?
Sealed combustion is a feature found on an increasing number of high-efficiency furnaces. Air for combustion is drawn from the outdoors through a small plastic pipe connected to the furnace. A furnace without sealed combustion has to draw air from surrounding rooms.
The combustion air is replaced by air leaking around doors, windows, and other openings. This can cause uncomfortable drafts and waste energy.
In very tight homes, the natural ventilation rate may not be able to support combustion, which results in inefficient burning and backdrafting.
Sealed combustion also supplies clean air that can be critical for condensing furnaces. Indoor air may contain chlorine gas from city water and laundry products, which could cause the condensate produced by the furnace to be more corrosive than the furnace is designed to handle.
Several new furnaces I've looked at have power venting. What is this, and why is it needed?
High-efficiency furnaces cannot rely on natural draft to exhaust flue gases. This is because too much heat is removed from the gases.
Since the gases are less buoyant, energy-efficient furnaces must use a power-driven fan to force the gases out of the furnace and flue.
Power-vented furnaces require less air for combustion than natural draft furnaces because the fan guarantees the air-flow rate. This, in turn, improves combustion efficiency.
How do modern, high-efficiency furnaces vent without a chimney?
Today, most high-efficiency furnaces use a small fan to exhaust flue gases to the outside.
The fan eliminates the need for a conventional chimney. The higher efficiency of the furnace reduces the temperature of the flue gases, lowering the surface temperature of the flue pipe.
Typically, furnaces that are 78 to 82 percent efficient are vented through a steel vent pipe that is run to the outside. High-efficiency furnaces, those more than 90 percent efficient, often are vented through plastic pipe.
Is the condensate from a high-efficiency furnace harmful to a septic system?
It's unlikely that a healthy septic system will be affected by the water condensed from the flue gases of a high-efficiency furnace.
A 60,000-Btu furnace operating 50 percent of the time will produce about seven gallons of condensate a day. The condensate has a pH level of about four, which is about the same as a carbonated soft drink. However, furnace condensate is not safe to drink because of trace toxic chemicals it contains.
Will a programmable, setback thermostat save enough energy to pay for itself? If so, what features should I look for?
A programmable thermostat saves energy by automatically controlling the furnace to provide heat only when needed.
The amount of energy saved will depend on how often the furnace can be set back and the amount of the setback. An automatic thermostat also can control the air conditioner in the summer.
In general, expect to save about 10 percent with a nighttime setback of 10 degrees, and an additional five percent savings if the thermostat is also set back during the day.
If a thermostat with both a night and a day setback is desired, choose one that can change the temperature at least four times each day because four changes are required for two setback periods.
Some models simply set the thermostat back by a certain number of degrees (selectable by the operator) from the normal temperature.
Other models allow the operator to select the actual temperature desired during different periods of the day. These models give more flexibility, for example, by allowing a deeper setback during the day, when no one is home, than at night.
If a daytime setback is desired, but the feature isn't needed on weekends, purchase a model that allows for a separate weekend schedule. Some models allow a different schedule each day of the week.
If a heat pump is in place, a special setback thermostat designed for heat pumps is needed. This prevents unnecessary operation of the electric heating elements during the recovery period. Some studies have shown there may be little or no savings with winter heat pump setback, but automatic operation may be desirable during the cooling season. Other features are available that may add convenience but not necessarily energy savings.
Some will remind owners when to change the furnace filter or tell how many hours a furnace has operated during a particular period. Battery backup is a helpful feature that prevents the programmed schedule from being lost during a power outage.
Above all, select a model that is simple, easy to program, and use.
Condensed instructions should be printed somewhere on the thermostat, or the operation should be easily understood from the controls themselves. Thermostats that require consulting an operator's manual to change the temperature or override the schedule can cause a great deal of frustration and often end up not being used.
What services should be included in a furnace tune up?
A thorough furnace tune up should include checking of the burners, blower and motor, controls, and chimney by a trained professional.
In the burner assembly, the heat exchanger should be inspected visually for soot, corrosion, and cracks. If there is any concern about a cracked heat exchanger, additional tests should be performed to verify that it is safe.
The burners should be removed and cleaned and the air/fuel mixture adjusted if necessary. The temperature rise through the furnace should be measured to make sure it is within acceptable limits. Excessive heat rise indicates insufficient air flow, which wastes energy and may result in poor distribution of heated air.
The blower motor should be lubricated if it is designed for lubrication. The blower should be removed and cleaned by brushing. If the blower is belt driven, the belt should be checked for proper tension and replaced if it is cracked.
Inspect the furnace's filter and replace it if necessary. The fan switch should be checked for proper on-and-off temperatures, and the high-limit switch should be checked to make sure it will shut off the gas valve should the furnace overheat.
Mercury thermostats should be checked for level installation. The anticipator should be checked and adjusted if necessary for proper burner run time.
The flue should be inspected for proper draft, corrosion, or leaks.
After inspecting, cleaning, and reassembling the furnace, the technician should run an entire cycle to verify proper operation.
I can feel air blowing out of the ductwork joints when the furnace is running. Is there something I should do about this?
Leakage from ducts to unconditioned spaces reduces the heat going to the conditioned space, thereby reducing overall efficiency.
To eliminate this problem, inspect supply and return ductwork. Tape any cracks or openings with foil tape, or seal with caulking or mastic.
To detect leaks, use smoke from stick incense or a smoke pencil. With the furnace fan running, hold the smoke near suspected leak areas. If there is a leak, there will be an obvious disturbance in the smoke. A leak in a supply duct will blow the smoke away. If the leak is in a return duct, the smoke will be sucked into the duct.
Sealing the supply and return ductwork in unconditioned areas such as crawl spaces and attics is also important.
Due to normal home construction practices, the return duct is more prone to leakage than the supply is. Sheet metal is nailed over the cavity between wall studs and floor joists. Gaps between the metal and wood, plus holes for electrical wiring and plumbing, draw air into the system.
Is it a good idea to turn off the pilot light on my furnace during the summer?
Modern furnaces do not have pilot lights, but if the unit still has a pilot light, then turning the pilot light off during the summer will save energy. If the home is air conditioned, there will be more savings on the electric bill than on the gas bill.
A typical residential gas pilot light consumes about 750 cubic feet of gas per month. This heat energy is simply wasted when the furnace is not operating during the summer months. However, if the house is air conditioned during the summer, the pilot light contributes heat to the house that must be removed by the air conditioner.
In dollars and cents, keeping the pilot burning during the summer months costs about $6.00 per month for the gas and about $4.80 per month for the 60 kilowatt-hours the air conditioner will consume getting rid of the heat.
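Those monthly figures can be reproduced with a short calculation. The gas and electric prices below are assumptions chosen to match the quoted $6.00 and $4.80; substitute local rates:

    PILOT_GAS_CF_PER_MONTH = 750    # typical pilot consumption, cubic feet
    GAS_PRICE_PER_1000_CF = 8.00    # assumed price implied by the $6.00 figure
    AC_KWH_PER_MONTH = 60           # electricity to remove the pilot's heat
    ELECTRIC_PRICE_PER_KWH = 0.08   # assumed price implied by the $4.80 figure

    gas_cost = PILOT_GAS_CF_PER_MONTH / 1000 * GAS_PRICE_PER_1000_CF
    ac_cost = AC_KWH_PER_MONTH * ELECTRIC_PRICE_PER_KWH
    print(f"Summer pilot cost: ${gas_cost + ac_cost:.2f} per month")  # $10.80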
The pilot also creates a draft in the chimney that causes increased air infiltration through windows and around doors, further increasing the air-conditioning load.
Some people believe that turning out the pilot light in the summer will decrease the life of the furnace. Recent tests have indicated that the possibility for damaging the furnace is minimal or nonexistent.
To extinguish the pilot light, simply follow the printed instructions on the furnace. If the directions are unclear or missing, consult a service technician or the gas utility.
If I close heating registers in some unused rooms, can I close too many?
Yes. Closing too many may cause the furnace to overheat or cause other problems. Furnaces rely on the cooling action of air flowing through them to keep from overheating. Closing off too many registers will restrict the air flow and reduce this cooling action.
Furnaces are equipped with a safety device that closes the main gas valve when the furnace overheats, but it is not a good idea to use the safety switch as a controller. No more than two out of 10 registers should be closed at one time.
After closing a couple registers, let the furnace go through a long heating period. Turn the thermostat up and check for anything unusual such as the gas valve cycling off and on. If things don't seem right, open the registers.
In extremely cold weather my furnace seems to run all the time, even though I have the setting on automatic. Will this continuous operation hurt my furnace?
No. A furnace is designed to run as long as necessary to satisfy a home's heating load. In fact, the longer it runs during each cycle, the more closely it operates to its designed efficiency. Frequent cycling caused by partial loads during mild weather or by an over-sized furnace reduces the overall efficiency of the furnace.
The colder it is outside, the longer the furnace must run to provide the heat needed to maintain a home's comfort. A properly sized furnace in Kansas will maintain an indoor temperature of 70 degrees when it's 0 degrees outside.
Therefore, it is not uncommon for the furnace to operate continuously when the temperature is below zero, but this does not harm or stress the furnace.
It is critical to keep the furnace in good operating condition during cold weather. Keep filters clean, service motors annually, and check belts for proper tightness. The furnace will not provide its maximum heating potential if it is not in optimum condition.
What does a SEER rating on air conditioners mean, and how do I compare ratings between units?
SEER stands for seasonal energy-efficiency ratio. This rating measures how well an air conditioner uses energy throughout the cooling season.
The SEER is equal to the Btus of cooling supplied during the season divided by the watt-hours of electricity consumed over the same period. The higher the SEER rating, the more efficient the air conditioner will be.
For example, a unit that delivers 24,000 Btu of cooling while consuming 2,400 watt-hours of electricity would have a SEER of 24,000/2,400, or 10. Units with high SEERs will cost more initially, but the energy savings throughout their lifetime will more than make up for the cost difference.
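In code, the definition is a single division; this sketch simply restates the example above:

    def seer(cooling_btu, electricity_watt_hours):
        # Seasonal energy-efficiency ratio: Btu delivered per watt-hour used.
        return cooling_btu / electricity_watt_hours

    print(seer(24_000, 2_400))  # 10.0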
When comparing SEER ratings of different air conditioners, compare only those with similar capacities (Btu).
Is it cost-effective to buy high-efficiency, air-conditioning units?
Yes, if the unit serves a home or business that air conditions throughout the summer rather than on an intermittent schedule. The additional cost of the higher efficiency units can be justified from the energy savings.
The minimum seasonal energy-efficiency ratio (SEER) is 10, but the Department of Energy is considering increasing the minimum to a SEER of 12.
Homeowners and business operators can justify the purchase of air conditioners with a SEER of 13 or 14 in applications where energy costs are high or the cooling season is long. In buildings used less frequently, such as churches and meeting rooms, energy savings usually won't offset the cost of the highest efficiency units.
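One way to see the payback is to compare seasonal operating costs at different SEERs. The unit size, run hours, and electric rate below are hypothetical values for illustration:

    def seasonal_cooling_cost(capacity_btu_per_hr, run_hours, seer, price_per_kwh):
        kwh = capacity_btu_per_hr * run_hours / (seer * 1000)  # watt-hours -> kWh
        return kwh * price_per_kwh

    # A 3-ton (36,000 Btu/hr) unit running 800 hours at 7 cents per kWh:
    print(seasonal_cooling_cost(36_000, 800, 10, 0.07))  # about $202 per season
    print(seasonal_cooling_cost(36_000, 800, 14, 0.07))  # about $144 per season

The roughly $58-per-season difference in this example is what must offset the higher purchase price of the more efficient unit.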
When buying a new central air conditioner, what should I look for to ensure a high-efficiency unit that will last?
One of the best guides to the efficiency of an air-conditioning unit is the seasonal energy-efficiency ratio (SEER).
The higher the SEER, the more efficient the unit will be. Federal legislation dictates a minimum SEER rating of 10 for central air conditioners sold in the residential marketplace. Air conditioning units are now available with SEER ratings as high as 16.
Long life and ease of service are two other important considerations when purchasing an air conditioner. One recent development in compressor design, the scroll compressor, offers a long, trouble-free life and low noise level. Scroll compressors are also more efficient than conventional compressors. Scroll compressors are often used on units with a SEER of 12 or greater. When receiving bids, be sure to ask if the unit uses a scroll compressor.
Contact at least three air-conditioning service companies in the area to obtain bids for comparison of features, warranties, and efficiency. Be sure to carefully evaluate the proposed size of the units. Purchasing a properly sized unit is critical to achieving good performance.
Why is it important to properly size an air conditioner?
Today, it is recognized that accurately sized, or even slightly undersized, air-conditioning equipment will result in greater operating economy and improved comfort because the air conditioner cycles on and off less often. This reduces wear and tear on the compressor, increases efficiency, and improves humidity control.
Determining the proper size for a residential air-conditioning system calls for a cooling load analysis. This procedure takes into account the size of the home, insulation levels, roof color, orientation of windows, shading of windows, tightness of construction, and number of occupants.
However, on extremely hot days, usually less than three percent of a normal cooling season, the indoor temperature may rise or swing upward a few degrees Fahrenheit during the hottest part of the day.
This is a small price to pay for improved performance and comfort during the balance of the cooling season.
Furthermore, comfort can be easily maintained during a designed temperature swing by using a fan to create air movement and delaying activities, such as cooking, that produce internal heat gain until the air conditioner has recovered.
A cooling load analysis of a home can be performed by most heating and air-conditioning contractors or by an independent energy auditor.
What is the status of the refrigerant used in my home air conditioner? Is it being phased out like the refrigerant in my car?
Unless a central home air conditioner is relatively new, the refrigerant used is R22. It is chemically different from the refrigerant used in an auto and has only one-twentieth the impact on stratospheric ozone. Because it is not as harmful to the ozone layer, it is not scheduled for phase-out until 2020.
Some air-conditioner manufacturers are offering equipment filled with refrigerants that pose no harm to the atmosphere. The operating efficiency of these air conditioners is no higher than that of units filled with R22.
These products may carry a higher price, but the refrigerants will be available after the scheduled 2020 phase-out of R22.
When adding central air conditioning to an older home, what do I need to watch out for?
There are several issues to consider when adding central air conditioning to an existing heating system.
If a home has an older heating system with no provisions for central air conditioning, the ductwork may be smaller than what is required for air conditioning. The fan speed can be increased to compensate for the undersized ductwork, but a larger motor often is required to achieve this higher flow rate.
In extreme cases, it may be necessary to replace the supply ductwork.
The location of the return-air registers also plays a role in comfort. In older homes, there were often no return-air registers installed on the second floor of a two-story home. It is difficult to cool the second story if this is the case. It may be necessary to install return-air ductwork.
Another consideration is the requirement for a floor drain below the furnace level. Air conditioners produce condensate when they operate. This condensate is the consequence of removing moisture from the air.
If a floor drain is not available below the level of the furnace, it is possible to purchase a small condensate pump set. For approximately $60, this set will pump the condensate to a convenient disposal site.
A final consideration is the arrangement of the ductwork at the furnace outlet. The ductwork around the furnace must leave sufficient room for the installation of the cooling coil. When installing central air, it is an excellent time to check the supply and return air ducts for leaks. Inadequate air flow across the cooling coil is the No. 1 cause for poor air-conditioning system performance.
What can I do to reduce summer air-conditioning costs? (Part I)
First, inspect the envelope of the home. The envelope is composed of the roof, ceilings, walls, floors, windows, and doors. Various opportunities exist for improving energy efficiency, such as insulation, radiant barriers, and weatherstripping. Insulation levels as high as R-38 in the attic are appropriate. It is permissible to mix insulation types, such as covering fiberglass with cellulose. Any exposed ductwork in the attic also should be sealed and insulated.
Weatherstripping and caulking reduce both heating and cooling costs. Inspect existing weatherstripping for wear and possible replacement.
In addition to caulking window and door frames, inspect for hidden cracks such as those that exist along foundations, or where exterior wiring or air-conditioning lines may penetrate the wall.
South-facing windows can be a real benefit during the heating season but can add significantly to the cooling load.
It is preferable to block the sunlight before it penetrates the window. Although a drape will delay the instantaneous solar gain, it's more effective to stop the sunlight completely by using exterior shading or reflective blinds.
Deciduous trees provide an excellent means for natural shading in the summer, yet allow exposure of the window in the winter. Removable exterior awnings can provide a similar advantage.
Unventilated attics can reach high temperatures during the summer, contributing considerably to the cooling load in the home.
Attics should be properly ventilated by having sufficient openings along the low side of the attic, such as in soffits as well as openings along the high side of the roof for exhaust.
For ventilation, have at least one square inch of free opening for every square foot of attic space. Openings should be distributed equally between the low and high sides of the attic. Remember that screens and louvers block up to 50 percent of the ventilation area.
Move air in and through the home without relying on an air conditioner. When the outdoor air is cool, yet the home is warm, a whole-house fan, which draws air through open windows and discharges into the attic, may provide all the cooling necessary.
Additional attic ventilation is necessary when using a whole-house fan.
Have one square foot of free opening for every 750 cubic feet per minute (cfm) of air moved by the whole-house fan.
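Both sizing rules are straightforward to apply. This sketch assumes a hypothetical 1,200-square-foot attic and a 4,500-cfm fan:

    def attic_vent_free_area_sq_in(attic_sq_ft):
        # At least 1 sq in of free opening per sq ft of attic, split evenly
        # between low (soffit) and high (ridge) openings; screens and louvers
        # can block up to half the area.
        return attic_sq_ft

    def fan_exhaust_free_area_sq_ft(fan_cfm):
        # At least 1 sq ft of free opening per 750 cfm of whole-house fan.
        return fan_cfm / 750

    print(attic_vent_free_area_sq_in(1200))   # 1200 sq in total free opening
    print(fan_exhaust_free_area_sq_ft(4500))  # 6.0 sq ft for the fan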
Within the home, portable fans or ceiling fans can provide some cooling relief.
What can I do to reduce summer air-conditioning costs? (Part II)
Household appliances can add considerably to the cooling load in a home. Refrigerator condenser coils should be cleaned at least twice annually.
Inspect the gasket around the refrigerator door to assure that it has not worn and needs to be replaced. The cooking range and clothes dryer should be vented to the outdoors, as should exhaust fans in bathrooms.
Heat loss from a water heater adds both to water-heating costs as well as air-conditioning costs. A water heater that is warm to the touch should be insulated with a water-heater insulating jacket.
Thermostats on water heaters should be turned down to provide hot water at the tap no greater than 140 degrees.
Prepare a furnace for summer by replacing or cleaning the air filter, and lubricating, where possible, any bearings on the blower or motor. Consider extinguishing the pilot light if the furnace is equipped with a pilot. Many new furnaces use an electronic device for igniting the flame whenever the thermostat calls for heat rather than using a standing pilot light.
This will probably not result in a significant reduction in gas costs; however, the pilot does contribute a small amount of heat to the home that then must be removed by the air conditioner. Contrary to some earlier information, extinguishing the pilot light will not shorten the life of the furnace. If the furnace is equipped with a central humidifier, be sure it is turned off, drained, and cleaned.
An air conditioner needs adequate air flow through the condenser for the unit to operate at maximum efficiency. Plantings and fencing should be no closer than three feet to the condensing unit.
The condensing unit should be cleaned annually by carefully removing any debris from the fins of the condenser.
Consider hiring an air-conditioning service contractor to clean the condenser thoroughly, particularly if it has not received maintenance in the last two or three years.
Service contractors will use a variety of cleaning solutions to remove any buildup on the condenser fins as well as straighten any fins which may have been damaged, lubricate any exposed bearings, and check for appropriate refrigerant levels in the air conditioner.
Taking advantage of these and other opportunities should help to reduce cooling costs this summer.
My home has a whole-house fan and central air conditioner. How can I use them both for the most economical cooling?
A whole-house fan provides cooling by moving a large volume of air into and through the house, typically exhausting it through the attic.
Use the fan when the outdoor air temperature is at or below the desired indoor temperature, usually during the late evening and night hours. Use the fan at night to help create more comfortable sleeping conditions without air conditioning.
Central air conditioning cools by lowering the temperature and humidity of indoor air. When it is hot and humid outside, the house is closed up and the indoor air is conditioned. Central air conditioning and a whole-house fan should never operate at the same time.
The best strategy may be to use the whole-house fan extensively during the late spring and early fall when the demand for cooling is rarely large and extended heat waves are unlikely. Also, use the fan in the summer when temperatures and humidity are at or below normal.
When outdoor temperatures and humidity rise to uncomfortable levels, close up the house and switch to air conditioning.
In deciding what will be most comfortable, follow the weather patterns and use one system or the other for a few days. In general, avoid using both systems every day, especially during very humid weather.
The whole-house fan may significantly raise the moisture level inside a house by bringing in outdoor air. Switching to air conditioning the next day may drop the temperature, but the house might be less comfortable until that extra moisture is removed.
The most economical approach to minimize the effects of the sun's heat on a home is to use fans and the whole-house fan as a first choice, and switch to air conditioning when the heat and humidity become oppressive.
Can ceiling fans effectively reduce air-conditioning costs?
Any type of fan can be effective in reducing air-conditioning costs if the air movement helps occupants feel comfortable and results in increasing the thermostat temperature setting. If the air conditioning thermostat setting is not increased, there are no savings.
The cooling effect of moving air can compensate for as much as a four-degree rise in temperature.
Keep in mind that during the heating season, the air movement caused by the fan will still have the same cooling effect.
How can I keep my home cooler in the summer without air conditioning?
The simplest, least expensive method to keep a home cool is shading walls, windows, and the roof.
Interior shades are inexpensive and easy to install. Use pull-down or Venetian blinds in addition to regular window coverings. Window coverings should be light colored (white or beige).
There are several ways to keep a home cool without overusing the air conditioner. Of these options, install shades first. Compare utility bills before and after the installation of shades. If satisfied with the savings, stop there, but if savings are not significant, look into other options. One option to consider is exterior awnings. They are more expensive than interior shades, but would be a great way to shade south windows.
Natural shading is another way to block heat gain in summer. For example, plant broad-leafed trees on the south and west sides of the home. They shade a home in summer months and will let in sunlight during winter months when they have shed their leaves.
Certain steps will help keep a home warm in winter and will help cool it during the summer. Insulated walls and roof reduce heat gain, just as they lower heat loss in winter. As a general rule, ceiling insulation should have an R-value of 35 to 45, and walls from 19 to 27. A light-colored roof also decreases heat gain.
Use the above suggestions, coupled with circulating fans inside the home, and utility bills will be less than if air conditioning was the only cooling source.
Is it better to leave the fan running continuously with the air conditioner or to place it in the automatic position?
It is more efficient to leave the fan switch in the automatic position.
The fan consumes only one-tenth the energy of the compressor, but when it runs continuously, the fan can cost up to $30 a month.
This amount can be reduced by cycling the fan only when it's needed.
Additionally, the air conditioner will dehumidify the air only when the compressor is running. However, if the fan remains on after the compressor cycles off, some moisture on the coil will re-evaporate. This moisture must be removed during the next compressor cycle, which increases the energy consumption.
If air distribution is poor within the home or business and hot spots or very cold areas result, the fan can be run to even out the temperatures.
However, the fan should be set to the auto position when the building is unoccupied. Even better, shut the air conditioner off or raise the thermostat setting when leaving the building.
Will I save energy by turning off my air conditioner when I leave home, or am I better off just letting it run?
If gone for four hours or more, more energy will be saved by turning off the air conditioner or turning up the thermostat.
During the day, keep windows shut and close curtains or blinds on any windows that will be exposed to sunlight.
The thermal mass of the house will probably keep the indoor temperature well below the outdoor temperature, and the house should cool quickly when the air conditioner is restarted. Use a programmable thermostat or timer to turn on the air conditioner 30 to 45 minutes before the expected arrival home. If the home is still warm upon arrival, turn on a fan to create air movement.
Moving air can make the air feel about four degrees cooler than it really is.
Can I plant bushes to hide the outside of my air conditioner?
When landscaping around an outside condensing unit, remember that the air conditioner must reject all the heat from a home.
Although it is possible to plant bushes near the condenser, leave room for adequate air circulation. Without good air circulation, the temperature near the condensing unit will rise. The higher temperature will reduce the capacity of the air conditioner, causing it to work harder and provide less cooling. This could also kill the shrubbery.
If the shrubs will not form a continuous wall around the unit, plant them so that, when they mature, there will be three feet of clearance. If the shrubs will be continuous, then allow five feet of clearance.
Are there any simple checks I can perform to see if my air conditioner is operating properly?
Check a few items that should indicate if the air conditioner has problems.
First, check the two lines connected to the outside of the air conditioner. The larger one -- the suction line -- should be cool to the touch. It should not be so cold, however, that frost develops.
The smaller line -- the high pressure line -- should be warm, but not hot. It should be 20 to 30 degrees warmer than the outside temperature. In extreme cases, it will be hot to the touch, so be cautious. If this is the case, call a service technician.
Some air conditioners are equipped with a sight glass in the high-pressure line (the small line). The glass should be clear, with no bubbles visible, while the system is running. Cloudy liquid in the sight glass may indicate contamination of the system.
One final check is to measure the temperature of the air as it leaves the register. It should be 15-20 degrees cooler than the room temperature.
If the building is warm, humid, or if the ductwork is not insulated, then there may be smaller temperature differences.
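The register check can be written as a simple comparison; the 15-to-20-degree window comes straight from the guideline above, and readings may run smaller under the humid or uninsulated conditions just noted:

    def register_temp_ok(room_temp_f, register_temp_f):
        # Supply air should run roughly 15 to 20 degrees F below room temperature.
        drop = room_temp_f - register_temp_f
        return 15 <= drop <= 20

    print(register_temp_ok(78, 60))  # True: an 18-degree drop is in range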
These guides are not intended to eliminate the need for an annual check by a qualified service person. If problems are suspected, call for help from someone familiar with air conditioners.
Would I be better off using several window air conditioners?
Using multiple window air conditioners has both advantages and disadvantages. A distinct advantage of window units is that they can operate individually. This flexibility allows cooling only the occupied room rather than the entire house.
A central system is more convenient to operate when cooling the entire home continually, and possibly at a lower cost of operation.
If sound level is a consideration in the home, keep in mind that window units are typically noisier than central air conditioning.
In terms of efficiency, top-of-the-line central units are generally more efficient than window units.
Look at the seasonal energy-efficiency ratio (SEER) when selecting units -- the higher the SEER, the higher the efficiency under similar conditions.
Since there are positives and negatives about window units and central systems, consider personal needs and preferences before choosing a system.
What is a ton of air conditioning?
A ton is the measure of the cooling capacity of an air-conditioning unit. It is an indication of the rate that the unit removes heat from a building.
One ton of air conditioning removes 12,000 British thermal units (Btu) of heat an hour. The term was derived from the time when ice was used for refrigeration. One ton of air-conditioner cooling capacity removes the same amount of heat required to melt 2,000 pounds, or one ton, of ice in 24 hours.
Typically, residential central air conditioners will range in capacity from one and one-half to four tons. Window units are often rated in Btu per hour. For example, a 6,000 Btu/hr. window unit would have the capacity of one-half ton.
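The conversion is a straight division by 12,000 Btu per hour per ton, as this sketch shows:

    BTU_PER_HR_PER_TON = 12_000

    def tons(btu_per_hr):
        return btu_per_hr / BTU_PER_HR_PER_TON

    print(tons(6_000))   # 0.5 -- a half-ton window unit
    print(tons(36_000))  # 3.0 -- a typical central unit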
Other Types of Heating
Will a heat pump cost less than a gas furnace to provide the same amount of heat?
The two units are receiving their energy in different forms.
Heat pumps operate on electricity, and gas furnaces consume natural gas. Differences in fuel prices and differing efficiencies both affect the cost of delivering heat.
At current natural gas prices of about $9 per 1,000 cubic feet, a dollar's worth of natural gas contains roughly 115,000 British thermal units (Btu) -- that is, if all the available energy could be extracted. A furnace operating at 80 percent efficiency therefore delivers about 92,000 Btu for each dollar spent on fuel, and a high-efficiency furnace might deliver about 109,000 Btu per dollar.
A typical heat pump delivers about twice as much energy as it consumes. Average residential electric prices are about 7 cents per kilowatt-hour, although electric utilities often offer lower rates for all-electric homes. At a rate of 4 cents per kilowatt-hour, a dollar buys 25 kilowatt-hours, which a heat pump converts into about 170,000 Btu of delivered heat. At 7 cents per kilowatt-hour, the heat delivered drops to about 97,500 Btu per dollar spent.
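The comparison reduces to Btu delivered per dollar. This sketch assumes typical conversion factors of about 1,030 Btu per cubic foot of natural gas and 3,412 Btu per kilowatt-hour, with the prices and efficiencies quoted above:

    BTU_PER_CF_GAS = 1030   # typical heating value of natural gas
    BTU_PER_KWH = 3412

    def furnace_btu_per_dollar(price_per_1000_cf, efficiency):
        return 1000 / price_per_1000_cf * BTU_PER_CF_GAS * efficiency

    def heat_pump_btu_per_dollar(price_per_kwh, cop):
        # cop: coefficient of performance, about 2 for a typical heat pump
        return BTU_PER_KWH * cop / price_per_kwh

    print(furnace_btu_per_dollar(9.00, 0.80))   # about 92,000 Btu per dollar
    print(heat_pump_btu_per_dollar(0.07, 2.0))  # about 97,500 Btu per dollar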
Both energy prices and equipment performance together determine the cost of delivering heat to the home.
Are heat pumps, when operating as an air conditioner, more efficient than conventional air conditioners?
The efficiency of a heat pump during the cooling season is not necessarily greater than the efficiency of an air conditioner.
Both air conditioners and heat pumps are rated according to their seasonal energy-efficiency ratio (SEER). This rating represents the seasonal cooling efficiency rather than a peak efficiency. According to the Air Conditioning and Refrigeration Institute (ARI) directory, air conditioners are available with higher SEER ratings than heat pumps.
The cost of purchasing a heat pump is usually higher than an air conditioner of the same size. The additional cost of the heat pump may be better spent by purchasing a higher efficiency air conditioner, if the primary purpose of the heat pump will be for cooling.
What is a ground-source heat pump?
A ground-source heat pump is a heating system that uses the earth as a heat source in the wintertime, and as a heat sink to eject the heat in the summertime.
Ground-source heat pumps may be either open-loop or closed-loop.
A closed-loop system circulates the same water through the loop for the heat source and heat rejection process.
The closed-loop heat source or sink may be a vertical hole or horizontal trench.
The advantage of the closed-loop system is that the water in the loop, because it recirculates, can be treated, and the system can be used in areas where the local water may be contaminated or hard.
An open-loop system would be used where the water quality is good and the water is soft. The advantages of the open-loop system are that the initial cost is usually lower and the efficiency is usually higher.
What would cause the radiators on the first floor to get hot while the second floor radiators stay cool on my hydronic (hot water) heating system?
Hydronic heating systems can get air trapped at the highest locations in the system. Water systems may have automatic air-bleed valves at the highest point of the distribution system to purge the system of air. If there are no bleed valves, the velocity of the water is designed to remove the air.
In some cases, radiators located higher than the distribution system may have bleed valves. Check these valves. If they are operated manually, they may need to be bled as problems arise.
Another problem may be the loss of water in the system. This is typically caused by an automatic water-makeup valve, or pressure-reducing valve, stuck in the closed position. This will decrease the pressure of the system, and it could cause air leakage into the system. Air leakage will increase the potential for air locking.
Unless a person has technical training in these systems, contact heating service personnel to determine the source of the problem.
Advocates Library shelfmarks
In 1925 the Advocates Library transferred many of its early printed books to the newly formed National Library of Scotland.
The Library used a large number of shelfmarks to record the location of books. Standard forms of shelfmarking identified a book's 'press' (bookcase), shelf, and its exact position on the shelf.
These examples show the evolution of Faculty of Advocates shelfmarks from the 1690s to the 1920s.
Date: Before 1694
Notes: The first shelfmarks used by the Advocates Library began with letters running from 'a' to 'x' followed by the shelf number and then the number of the book on each shelf.
Source: Sacro Bosco. 'In sphaeram ... '. Leiden, 1602.
f.7.17 and f.5.38
Notes: Presses 'd', 'e' and 'f' were reserved for the Lord George Douglas collection. Of these, 'e' was for books bound in vellum. This book has been moved twice within 'f' and then to Hall.
Source: Lacy. 'De podagra'. Venice, 1692.
Notes: Some astronomical symbols, signs of the zodiac, and letters from the Greek alphabet were introduced in the early 18th century. Jupiter, featured here, is no longer in use. However other symbols, such as Mars and Taurus, are still used.
Source: Silius Italicus. 'De Bello Punico Secundo libri XVII'. Leipzig, 1695.
Notes: The use of double lower case letters was introduced in the early 18th century. This form of shelfmark still exists today although the book this example is taken from was moved to a sequence of shelfmarks used to shelve works relating to ancient Greece and Rome.
Source: Ovid. 'Heroidum epistolae ...'. Florence, 1528.
Ab.5.14 and [Ai].3.10
Notes: Further shelfmarks could be created by enclosing the initial letters in a box.
Source: Baldwin. 'A survey of the British customs'. London, 1770.
Notes: In use by the 1770s, shelfmarks composed of the names of seven early Scots kings were shelved together in what was known as the Regal Room. This room no longer exists and this particular example has been re-shelfmarked. However Regal Room shelfmarks are all still in use today.
Source: 'A general history of inland navigation'. London, 1792.
Notes: Another shelfmark style from the later 18th century involved the use of Roman emperors: from the dictator Julius Caesar to Didius Julianus. These too had their own room, known as the Imperial Room. None of these shelfmarks are in use today.
Source: Fulton. 'A treatise on the improvement of canal navigation'. London, 1796.
Date: 19th / 20th century
Notes: This shelfmark refers to a room in which Advocates, who had formed a militia in 1859, would practice their drill. The shelfmark still exists today as 'Hall'.
Source: Racine. 'Theatre complet'. London, .
Date: 19th / 20th century
Notes: By the late 19th century, the Advocates were able to store books in the redundant cells under the Law Courts. Despite their secure sounding name, the Vaults were used to store books to which the Advocates gave a low priority such as contemporary novels received via legal deposit.
Source: Bindloss. 'Ainslie's Ju-Ju'. London, 1900.
The latest outbreak of bird flu, which has spread across the country from Gochang, North Jeolla Province since mid-January, will be completely eradicated in late May, authorities promise.
In a National Assembly committee on Monday, Minister of Agriculture, Food and Rural Affairs Lee Dong-phil said, "It'll be possible to declare eradication of bird flu around late May unless there's further outbreak."
A ministry official said it would be safe to declare the country free from bird flu 40 days after all preventive culling of poultry is complete, provided there are no further suspected cases.
Only one report of a suspected case of bird flu came in from Gochang this month.
The latest outbreak of bird flu will go down in history as the most devastating so far. As of Monday, a total of 12.36 million chickens and ducks had been culled and buried.
The outbreak also spread more widely than any other, affecting some 70 municipalities compared to the earlier record of 25 in 2010.
The longest epidemic lasted for 139 days in 2010.
The ministry told the committee that the latest outbreak was probably carried by migratory birds from China.
An official with the Animal and Plant Quarantine Agency said, "We believe that the H5N8 virus came from China, given that it was first discovered in Zhejiang Province."
Debussy surely influenced the piano playing of trumpeter Bix Beiderbecke.
It is also said that bebop harmony was inspired by Western music, by composers like Debussy and Schoenberg.
Kubik, Gerhard. "Bebop: A Case in Point. The African Matrix in Jazz Harmonic Practices." Black Music Research Journal, 22 March 2005:
While for an outside observer, the harmonic innovations in bebop would
appear to be inspired by experiences in Western "serious" music, from
Claude Debussy to Arnold Schoenberg, such a scheme cannot be sustained
by the evidence from a cognitive approach. Claude Debussy did have
some influence on jazz, for example, on Bix Beiderbecke's piano
playing. And it is also true that Duke Ellington adopted and
reinterpreted some harmonic devices in European contemporary music.
West Coast jazz would run into such debts as would several forms of
cool jazz. But bebop has hardly any such debts in the sense of direct
borrowings. On the contrary, ideologically, bebop was a strong
statement of rejection of any kind of eclecticism, propelled by a
desire to activate something deeply buried in self. Bebop then revived
tonal-harmonic ideas transmitted through the blues and reconstructed
and expanded others in a basically non-Western harmonic approach. The
ultimate significance of all this is that the experiments in jazz
during the 1940s brought back to African-American music several
structural principles and techniques rooted in African traditions.
Also, some scales that Debussy used were later used in bebop (and jazz), like the whole-tone scale.
(The Birth of Bebop: A Social and Musical History)
This entry is part 2 of 5 in the series Microenvironment
In the first of this series we explained how the ‘neighbourhood’, or microenvironment, around a cancer affects how it grows and spreads.
In this next post we’re taking a look at how blood vessels grow into, and feed, a tumour.
As we’ve said before, a tumour can be thought of as a ‘rogue organ’ in the body – not one that is useful to us, but one that has the same requirements as any other. This includes a network of blood vessels (vasculature), supplying the cancer cells with oxygen and nutrients, and removing waste products. And, in the case of cancer, enabling it to survive, grow, and spread around the body.
But while the blood supply feeding our healthy tissues grows as we develop in the womb, a tumour has to ‘plumb in’ its own blood supply from nearby blood vessels – a process known as angiogenesis.
And because angiogenesis is so fundamental to how cancers grow and spread, it’s an exciting focus for cancer researchers all over the world.
Getting to the root of the problem
Cancers are a bit like weeds in the garden – they look like their neighbours but take up space and out-compete other plants, and have the potential to run riot over the entire garden if left uncontrolled.
As all good gardeners know, the best way to get rid of weeds for good is to destroy their roots. Fail to do this, and they’ll just start growing again.
In a similar way, blood vessels are the ‘roots’ of a tumour, feeding it and allowing it to grow bigger. Targeting these roots and cutting off the blood supply should therefore be a good approach for treating cancer. And that’s exactly what many researchers in the field of tumour angiogenesis are trying to do.
Targeting tumour blood vessels
The idea of targeting blood vessels to treat cancer is based on the discovery that most blood vessels in adults are quiescent – in other words, they’ve done all the growing they need to and have then stopped.
But there are a couple of exceptions. Every month, new blood vessels grow in a woman’s uterus during her menstrual cycle. And every time a cut heals, new vessels grow back during that process. But (in theory at least) treatments targeting new blood vessel growth should be relatively free of side-effects, because they’re designed to target the growing blood vessels in tumours and not the established quiescent vessels.
Also, the components of blood vessels within tumours aren’t actually cancerous themselves – they’re healthy cells that have been hijacked by a cancer to do things they usually wouldn’t. This means they should be less likely to develop resistance to treatments, because they’re less able to mutate and evolve in the same way as cancer cells. So – at least in theory – this seems like another plus point.
Some drugs that target tumour blood vessels have already been developed, including “first generation” therapies such as bevacizumab (Avastin), which blocks a molecule called VEGF that is produced in large amounts by tumours to provoke angiogenesis.
Unfortunately, bevacizumab didn’t show the impressive results in cancer patients that might have been expected from early lab studies (although it fared better in combination with other chemotherapy drugs). And these types of drugs haven’t had as few side effects as researchers had hoped.
In the 30 years since VEGF was discovered, many Cancer Research UK scientists have contributed to our growing understanding of how it – along with a multitude of other molecules – is important in angiogenesis. As a result, rather than focusing on VEGF alone, other molecular messengers can be targeted at the same time to try to avoid resistance and increase the drugs’ effectiveness. “Second/third generation” anti-angiogenic therapies such as sunitinib (Sutent) and sorafenib (Nexavar) have made it to the clinic, but researchers are still working out how best to use them.
So while the idea of blocking blood vessel growth once seemed straightforward, the reality turned out not to be quite so simple. But why?
What’s so special about tumour blood vessels?
Researchers now think that the key to targeting blood vessels in tumours lies in understanding what makes them different from healthy ones. While the cells that make up tumour blood vessels are themselves quite normal (in that their genetic information isn’t damaged like it is in cancer cells) the blood vessels as a whole are very messed up.
There are two main types of cells that make up the tiny blood vessels (called capillaries or microvessels) found in tumours: endothelial cells that line the walls of vessel tubes, and pericytes, which support them around the outside.
In healthy capillaries, these cell types are quite well-organised. The endothelial cells fit together like the shields of a Roman phalanx and the pericytes support them at key points, helping to stabilise the structure.
But inside tumours, there are big gaps in the walls of the capillaries. Endothelial cells come and go as they please, sometimes the pericytes don’t show up to help out, and sometimes even cancer cells get involved and pretend to be endothelial cells. The tubes have irregular sizes and are chaotically organised, twisting tortuously about instead of lining up neatly like healthy capillaries.
This makes a tumour’s blood vessels very leaky and inefficient, causing them to release signals that drive even more blood vessel growth to feed the growing tumour in a vicious cycle.
To try and understand the disappointing results of anti-angiogenic drugs, scientists took a closer look at what was happening to blood vessels inside tumours in response to the treatment. What they found was unexpected (although our researchers Alan Le Serve and Kurt Hellmann had actually predicted this might happen back in the 1970s). Instead of destroying tumour blood vessels, anti-angiogenic drugs seem to make the strange and disordered capillaries become more normal.
At first, people thought this spelled disaster for the whole concept of anti-angiogenic therapy – surely if the treatment makes the tumour blood vessels better at their job, the cancer will just grow and spread faster. This is the opposite of what doctors and their patients want!
But on closer inspection, this ‘normalisation effect’ actually looks like it might be a positive thing – if we can catch it at just the right time. Here’s why:
- Making tumour blood vessels better at delivering nutrients and oxygen to the tumour can have positive effects on some cancer treatments. For example, if chemotherapy is given together with anti-angiogenics, the more efficient blood flow means more of the chemo drug can get to more of the cancer cells to kill them. This explains why drugs like bevacizumab seem to work better when given alongside chemo.
- Because of their disorganised blood supply, many tumours have relatively low oxygen levels – a phenomenon known as hypoxia – which seems to protect cancer cells from being destroyed by radiotherapy. Stabilising blood vessels means that more oxygen gets into the tumour, raising oxygen levels inside it. This could help to make radiotherapy more effective.
- As tumour blood vessels become more normal, they seem to attract more supporting pericytes, which help to secure capillaries against wandering cells. Some researchers have shown that this could reduce the risk of cancer spreading (metastasis), which happens when cancer cells enter the bloodstream and travel to another site in the body. If entering blood vessels becomes more difficult for cancer cells, this could be a good way to protect against cancer spread.
Combining all these things together, it seems that while anti-angiogenics might not be useful in the way we originally thought (by killing blood vessels and starving tumours), they might instead make the other kinds of treatments even more effective.
Researchers all over the world – including those funded by Cancer Research UK – are now applying these new insights in the hunt for life-saving cancer treatments. Here are just a few examples of our pioneering work in this area:
- Professor Kairbaan Hodivala-Dilke at the Barts Cancer Institute in London is determined to bring cancer therapies based on angiogenesis to the clinic. Work in her lab looking at Down's syndrome – a phenomenon apparently unrelated to cancer – has helped us understand more about tumours and blood vessel growth.
- Professor Adrian Harris heads a team at Oxford University. Their cutting-edge research aims to uncover more about how tumours attract a blood supply and the characteristics of low-oxygen tumour environments, turning this knowledge into improved cancer therapies. Professor Harris’ work has contributed to our current understanding of the famous blood vessel growth-stimulator VEGF, and another molecular messenger called delta-like 4 (DLL4). Their research has also picked apart other key features of tumours such as hypoxia and prompted the development of new cancer treatments.
- Professor David Tuveson, who until recently was based at the Cancer Research UK Cambridge Research Institute, made a big step forward in understanding the role of blood vessels in pancreatic cancer – a deadly disease for which new treatments are urgently needed.
In pancreatic cancer, the tumour cell environment is very dense. The leakiness of blood vessels leads to a very high fluid pressure within the tumour that collapses capillaries and makes blood flow almost non-existent. This means that chemotherapy drugs (which are carried in the bloodstream) simply can’t get into the tumour.
Professor Tuveson’s team found that the solution to this problem may lie in using a combination of drugs, including one that breaks down the dense packing within the tumour. This helps to open up the tumour blood vessels, allowing chemotherapy drugs to get through.
Hope for the future
Researching anti-angiogenic therapy has been somewhat of a rollercoaster of hope, disappointment and renewed optimism.
At first it seemed like a hugely promising target for all solid tumours, then the results from the clinic didn’t live up to expectations. Now it appears they could be really effective after all, but maybe not in the ways we expected. Only further research can tell us exactly how these potentially powerful therapies can be put to work to beat cancer.
But blood vessel growth isn’t the only area we’re seeing interesting developments in: there’s also the immune system, and cancer spread, so watch this space for more posts on the tumour microenvironment.
- Marianne Baker did her PhD at Barts Cancer Institute, funded by Cancer Research UK
Figure caption: Odd little pot with four spiked handles. Nash Neck Banded jar, Late Caddo, ca. A.D. 1400-1650. TARL collections.
Figure caption: This olla has a short neck with a flaring rim and a small mouth. These features suggest that it served as a water jar or dry storage jar that could be sealed by tying a skin cover over the mouth. Hodges Engraved olla, Late Caddo, ca. A.D. 1400-1600.
Figure caption: Example of the use of white pigment (probably kaolin) to fill the engraved lines, thus heightening the contrast with the bright red bowl. Ripley Engraved bowl, Late Caddo, ca. 1400-1650. TARL collections.
Figure caption: Small engraved bottle with highly unusual "spiked gaping mouth." Taylor Engraved bottle, Late Caddo, ca. A.D. 1400-1650. TARL collections.
Figure caption: Looking down into small triangular engraved bowl. Untyped, Late Caddo, ca. A.D. 1400-1650. TARL collections.
Figure caption: Hodges Engraved bottle with unusual oblong form and pairs of nodes at both ends. Late Caddo, ca. A.D. 1400-1650. TARL collections.
Figure caption: Miniature pottery probably made for children. Untyped, Historic Caddo, after A.D. 1650.
Figure caption: These decorated jars are believed to have been made at the Brazos Reserve in the 1850s. These two and another similar pot are in the Brooklyn Museum and were collected by a medical doctor. The vessel form and decorative designs are immediately recognizable as Caddo in origin and probably derived from one of the Kadohadacho groups. They show that the fine ware tradition survived into the mid-1850s. Drawn by Nancy Reese. From Perttula, 2001.
Why Study Caddo Pottery?
Why do the archeologists who study the ancient
Caddo spend such an inordinate amount of time and effort excavating,
reconstructing, and studying Caddo pottery? For archeologists,
Caddo pottery is the prime evidence used to identify and date
ancient traces of the Caddos' past. Lacking potsherds, we
could scarcely identify the vast majority of Caddo archeological
sites as being Caddo. While there are many other distinctive
kinds of archeological evidence of Caddo life, such as house
patterns, pottery remains indispensable for understanding
the past for three main reasons.
First, the ancient and early historic Caddo
were superb potters and made and used lots of pottery. Sites
representing small farmsteads where a single family once lived
for short durations will have hundreds of potsherds. Villages
and ceremonial centers often have tens or hundreds of thousands
of potsherds and, in graves, many whole or almost whole pots.
Secondly, pottery is relatively durable and can often be identified
by style and form even when broken into small fragments. Thirdly,
Caddo pottery is tremendously varied: different forms
or shapes, different decorative designs, different colors,
different finishes, different sizes, and so on. Further, pottery
styles and preferences changed through time and varied from
place to place within the Caddo Homeland. Given the right
sherd, an expert often can tell approximately where the pottery
was made and how old it is, give or take a few centuries (or
sometimes a few decades). This is because we know what whole
Caddo pottery vessels look like.
The Caddo pottery tradition was tied to the
Caddo funerary tradition of placing whole pottery vessels
in the graves of departed loved ones. The vessels may have
contained food and drink to accompany the deceased in the
afterlife or they may have been prized personal possessions
(or both). Some burial pottery is obviously worn from use,
but other vessels show no wear and look like they were interred
in a fresh, newly made condition, perhaps representing gifts
from loved ones. Whatever the case, the ancient Caddo must
have considered pottery important because they included pottery
vessels as grave offerings more frequently than any other
non-perishable material. Clothing, mats, baskets, and objects
made of wood may have been more common, but these things usually
decay quickly. (The typically acidic soil in the Caddo Homeland
destroys virtually all organic materials, including human
bones, over time.)
Whole pots are also found in other contexts
besides graves, especially on the floors of houses. For instance,
over 30 vessels of various sizes and forms were recently found
on the floor of a house at the Tom Jones site in the Little
River Valley in Arkansas. Most of these were broken by the
collapse and burning of the house. (Many pots included as
grave offerings are also broken.) For the archeologist, a
reconstructed pot is every bit as informative as a never-broken one.
The ancient Caddo tradition of including offerings
of pottery in graves has led to the excavation of thousands
of Caddo graves, some by archeologists and many more by looters
("pothunters") seeking pottery for personal collections
and, increasingly, to sell for profit. No one really knows
how many, but tens of thousands of vessels have been removed
from Caddo graves. Many are traded or sold on the antiquities
market in the United States, Europe, and Asia. Some spectacular
Caddo vessels are rumored to have sold for over $20,000. Even
ordinary Caddo pots can bring hundreds of dollars on the market.
The desecration of Caddo cemeteries has long
been a source of anguish to Caddo people (and Caddo archeologists).
As explained in the "Graves
of Caddo Ancestors" section , the Native American
Graves Protection and Repatriation Act (NAGPRA) of 1990 has
put the fate of most of the Caddo pottery vessels excavated
from graves by archeologists in the hands of the Caddo Nation
of Oklahoma. (NAGPRA applies to federal agencies, federally
funded or permitted excavations, on federal and tribal land,
as well as to all museums and institutions that have received
federal funding. While this effectively covers most grave
goods excavated by professional archeologists, the law does
not pertain to graves dug up on private land or grave goods
in private hands.)
Caddo people are conflicted: they want to honor
honor their ancestors, but they are not sure that reburying
all grave goods and bones in mass or separate graves hundreds
of miles from their original resting places, as some tribes
have chosen to do, is the right thing to do. Another possibility
being considered by the Caddo is to expand their own tribal
museum so that pottery vessels and other grave goods can be
treated properly and preserved for future generations as sources
of pride and knowledge about the past.
Regardless of what happens in the future, Caddo
pottery was important to the ancient Caddo, it is important
to the Caddo Nation today, and it is important to anyone who
wants to understand ancient Caddo history.
Origin and Development of the Caddo Pottery Tradition
When we say that the Caddo pottery tradition
began about A.D. 800, we do not mean to imply that earlier
ancestors of the people known today as the Caddo weren't already
making pottery. Clearly they were. But we do not know exactly
how, when, or even where, the Caddo pottery tradition was
first established. Partly this is because it is often impossible
to recognize the origin or beginning of any complex phenomenon
in the ancient past. And partly it is because we have so few
well-excavated and well-dated Late Woodland and early Caddo sites.
In part, the Caddo pottery tradition grew out
of the Fourche Maline pottery tradition that developed during
the Middle and Late Woodland periods. Like early Caddo pottery,
Fourche Maline pottery was usually grog or bone tempered and
it was sometimes burnished. But Fourche Maline pottery was
rarely decorated and it is very thick-walled in comparison
to the Caddo fine wares. Vessel forms are also very different
between the two traditions. Some of the favorite Caddo decorative
techniques, incising and punctating, are found on Fourche
Maline pots, but most of the designs are very simple.
The inspiration for these decorative techniques
almost certainly lies to the southeast in the Woodland cultures
of the lower Mississippi Valley (LMV). Beginning with Tchefuncte
pottery (800-200 B.C.) and continuing on into the Middle Woodland
period (200 B.C. to A.D. 500) with Marksville pottery, incised,
stamped, and punctated designs were common. Trade pieces of
Tchefuncte and Marksville pottery are found in the Caddo area.
By Late Woodland times (ca. A.D. 500-800/900), Fourche Maline potters began to copy the designs of Coles Creek pottery from the lower Mississippi Valley.
The origin of the technique of filling the engraved
patterns with pigments and the origin of the distinctive early
Caddo vessel forms (long-necked bottles and carinated bowls) are not known. We do not see clear precedents in
the Woodland-period pottery of either the Caddo Homeland or
the Lower Mississippi valley, or the central Mississippi valley,
or the Arkansas Basin. Therefore, we suspect that one of two
things happened: ancestral Caddo potters invented these techniques
for themselves or they borrowed the ideas from distant cultures.
Archeologists have struggled with explaining
the origin of highly specific behaviors for decades: are
these "independent inventions" or the result of
the "diffusion" (spread) of ideas or of things like
domesticated plants? In the 1940s, Alex Krieger and Clarence
Webb, like many of their contemporaries, favored the diffusion
explanation. These Caddo scholars and other prominent American
archeologists of the day pointed to seemingly close parallels
between Caddo pottery and the pottery of certain Mesoamerican
cultures in what is today Mexico and Guatemala. They could
not explain how the contact between these very different and
widely separated (in space and time) cultures took place.
Nor could they point to positive evidence of direct contact,
such as the finding of a pot made in Mesoamerica at a Caddo
site (or vice versa).
Caddo archeologists today reject the notion
of a Mesoamerican origin and see the Caddo pottery tradition
as an independent development influenced only by neighboring
peoples living mainly to the east along the Mississippi River
and along the Gulf coast. The diverse Caddo pottery tradition
bears witness to the obvious inventiveness of Caddo potters
and their willingness to experiment. It is worth pointing
out that there are a great many cases across the world of
the obviously independent invention of specific forms of pottery
making and decoration. Carinated pottery, long-necked earthenware
bottles, and engraved designs with pigment all occur in many
places in the world that are separated by thousands of miles
or thousands of years (or both). For instance, carinated pottery
forms similar to those of the Caddo tradition are also found
in Mesoamerica, South America, Africa, Europe, and Asia.
Thus it seems likely that about 1200 years ago,
ancestral Caddo potters began to develop their own distinctive
pottery tradition by combining the established ways of making
pottery (the Fourche Maline tradition and probably that of
the Mill Creek and Mossy Grove traditions) with inspirations
from neighboring peoples, and creative new ideas cooked up,
so to speak, in Caddo villages by Caddo potters. By A.D. 1000,
the Caddo pottery tradition was firmly established and distinct
from all others.
While some variation is apparent across the
region, Early Caddo pottery seems to vary much less from place
to place than would be the case a few hundred years later.
Compared to the Caddo potters in later times (after A.D. 1400),
early Caddo potters used fewer decorative techniques, applied
decoration to larger areas of the surface of their fine wares,
and left most of their utilitarian wares undecorated. They
also favored bowl forms, especially carinated bowls, and bottles,
although they made jars, plates, effigy vessels, and compound
bowls, among other forms. Decorative designs were typically
curvilinear, rectilinear, and horizontal. The relative homogeneity
of early Caddo pottery is thought to be the result of broad
and extensive social interaction among Caddo groups.
After A.D. 1400, Caddo pottery became more diverse
in form and, especially, in decorative technique and style.
Caddo potters developed (or borrowed) new decorative techniques
including appliqué, trailing (wide incisions, often
curved), brushing, and a great many combinations. Intricate
scroll designs with ticked lines, incised circles, negative
ovals and circles, triangles, and ladder designs are all common
in late Caddo pottery. Jar forms seem to have become more
important and bottles somewhat less so. New specialized vessel
forms such as rattle bowls and "tail-rider" effigy
bowls appear, the latter closely resembling vessel forms in
northeastern Arkansas. Very rare examples of Caddo pots made
in the style of Mississippian head pots are also known.
More than anything, the Late Caddo period was
the time during which many local styles were created. In part
this probably represents higher population levels (more people
making pottery), but it also seems to reflect the existence
of more social groups, each with its own local pottery tradition
handed down and elaborated on from generation to generation.
It is likely that the local styles were quite intentionally
made different from one another as an expression of the identities
of each Caddo community. Alice Cussens, daughter of Mary Inkinish,
told a WPA interviewer in 1937 or 1938: "each clan had
its own shape to make its pottery. You could tell who made
the pottery by the shape." [From David La Vere, 1998,
Life Among the Texas Indians, where her name is given
as Mrs. Frank Cussins. She was born in about 1885, by which
time neither Caddo pottery making nor Caddo clans survived
intact. Hence her words must reflect what she learned from her elders.]
The invasion of European peoples and the attendant
catastrophic impacts on the Caddo (population loss, forced
moves, changing economy, etc.) brought about a relatively
quick end to the Caddo pottery tradition. For a time in the
late 17th and early 18th centuries Caddo women were able to
keep making beautiful and distinctively Caddo pottery, but
by the close of the 19th century, only vestiges of the tradition
survived. The last Caddo pottery of the original tradition
was apparently made in the late 1800s after the move to Oklahoma.
Today, as can be seen in other sections of this
exhibit, there is hope that the Caddo pottery tradition will
be revived, at least as an art form. Of course the tradition
will never be the same without the existence of the societies
that kept it going. Modern Caddo people use store-bought pots
and pans, just like everybody else in the developed world.
Finely crafted Holly Fine Engraved
bowl, Early Caddo, ca. A.D. 900-1200. TARL collections.
Looted Caddo cemetery in northeast
Texas. Photo courtesy Texas Historical Commission.
Late Caddo bottle with poorly smoothed
neck bands and faint ladder-like design on main body.
Hume Engraved bottle, ca. 1400-1650. TARL collections.
A rare Late Caddo "head pot"
from southwestern Arkansas. The Caddo master potter
who made this extraordinary piece obviously copied a
typical Mississippian head pot, but decorated it with
Caddo style engraving rather than painting. The engraved
designs may mimic facial tattooing. Courtesy Picture
of Records, original in the Henderson State University
Collection, Arkadelphia, Arkansas.
These peculiar little vessels are
rattle bowls. The protruding nodes are hollow and contain
small pebbles or rounded pieces of clay that rattle
when the bowl is shaken. Late Caddo, ca. A.D. 1400-1650.
TARL collections.
Large tear-drop or gourd-shaped Sanders
Engraved "seed pot," so-called because of
the small restricted mouths. In fact, there is no definitive
evidence that such vessels were used to store seeds.
This one is much too thin to have been a water jar and
it does have small holes near the rim that were probably
used to secure a lid, lending support to the seed pot
notion. Middle Caddo, ca. A.D. 1200-1400. TARL collections.
Typical Fourche Maline jar with thick
walls and a shape resembling a flower pot. This Williams
Plain pot is from the Crenshaw site, Miller County,
Arkansas. Photo by Frank Schambach.
Detail of artist's depiction of daily life in an Early
Caddo village. The woman on the far left is engraving
a bowl. Courtesy artist George Nelson and the Institute
of Texan Cultures.
by Elizabeth McCracken
Our families, I think, are the first novels we know. That is: a complicated collection of people and anecdotes that add up to more than the sum of their parts. Every story about an uncle in his youth is precious, because it’s what made him that particular uncle: a sad teenage love story about a cheerful old codger means something different than the exact same story, only about a man who grows up to be bitter and disappointed. It’s that kind of pressure between event and emotion that fiction needs, and it’s our early interest in that pressure that made a lot of us writers. Still, sometimes the family stories get plonked into short stories and novels and never become fiction: divorced from their people, they become only detail.
This is an assignment that I sometimes give to writers who are just trying their hands at fiction, when they say they don’t exactly understand what makes a story a story, and not a sketch.
Choose a family story, an anecdote that you have no first hand experience of. You can choose, for example, the story of how your parents met, the death of your great-grandfather, the disappointing love affair of your uncle’s youth. Some people have many stories handed down like heirlooms: you only need one. It can be a significant story or a trivial one.
Choose two of the actors in this story and write down as many pieces of information as you know about them, in list form. Feel free to make up the details. You’re just piling up details which may or may not come into play in the story. If you’re very close to the people in this story, you may want to start fictionalizing them instantly. Don’t worry about the prose; you can do it in list form or in paragraphs, whatever helps you get the most on the page quickest.
Look at your anecdote. If there’s a clear and sensible setting, again, pile up the details. If there isn’t a setting, choose one. You may make the details up.
Put your characters in the setting on the day of the anecdote. Write a list that alternates a named action with an emotional response, one causing the other, and then write another action.
Ruth and Edna rushed ahead of Louis, eager to open the door to the museum by themselves.
- WHICH MADE HIM FEEL: He was irritated by their slowness.
- WHICH MADE HIM DO: He struggled to open the other side of the door by himself.
- WHICH MADE RUTH FEEL: She was irritated by his bossiness.
- WHICH MADE HER DO: She grabbed Edna and they rushed into the museum past Louis, nearly knocking him over.
- WHICH MADE HIM FEEL: He decided that if that’s what they wanted, he wasn’t going to look after them even though he was the oldest and he was supposed to.
- WHICH MADE HIM DO: He stuck his hands in his pockets and went whistling away in the other direction.
- WHICH MADE HIM FEEL: Like a successful vaudevillian.
You don’t need to alternate characters; you can have a character feel something, act on it, and then feel something again; or you can describe how what one character does makes another feel. The goal is to see the effect action has on emotion, and vice-versa.
You should be able to see the first glimmers of a story: you have characters you know a lot about, in a well-described physical world, and at least the start of a plot-line.
Now: write the story based on your family anecdote. You should either start with the punchline of your family story, or end on it. You don’t have to follow your list of actions and emotions step by step, or at all, really. Just keep in mind what you’ve learned from it and from your lists of details.
Diabetes is a lifelong condition that causes a person's blood sugar (glucose) level to become too high.
The hormone insulin – produced by the pancreas – is responsible for controlling the amount of glucose in the blood.
There are two main types of diabetes:
- Type 1 – where the pancreas doesn't produce any insulin
- Type 2 – where the pancreas doesn't produce enough insulin or the body’s cells don't react to insulin
This topic is about type 1 diabetes. Read more about type 2 diabetes.
Another type of diabetes, known as gestational diabetes, occurs in some pregnant women and tends to disappear following birth.
It's very important for diabetes to be diagnosed as soon as possible, because it will get progressively worse if left untreated.
You should therefore visit your GP if you have symptoms, which include feeling thirsty, passing urine more often than usual and feeling tired all the time (see the list below for more diabetes symptoms).
Type 1 and type 2 diabetes
Type 1 diabetes can develop at any age, but usually appears before the age of 40, particularly in childhood. Around 10% of all diabetes cases are type 1, but it's the most common type of childhood diabetes. This is why it's sometimes called juvenile diabetes or early-onset diabetes.
In type 1 diabetes, the pancreas (a small gland behind the stomach) doesn't produce any insulin – the hormone that regulates blood glucose levels. This is why it's also sometimes called insulin-dependent diabetes.
If the amount of glucose in the blood is too high, it can, over time, seriously damage the body's organs.
In type 2 diabetes, the body either doesn't produce enough insulin to function properly, or the body's cells don't react to insulin. Around 90% of adults with diabetes have type 2, and it tends to develop later in life than type 1.
The symptoms of diabetes occur because the lack of insulin means that glucose stays in the blood and isn’t used as fuel for energy.
Your body tries to reduce blood glucose levels by getting rid of the excess glucose in your urine.
Typical symptoms include:
- feeling very thirsty
- passing urine more often than usual, particularly at night
- feeling very tired
- weight loss and loss of muscle bulk
The symptoms of type 1 diabetes usually develop very quickly in young people (over a few days or weeks). In adults, the symptoms often take longer to develop (a few months).
Read more about the symptoms of type 1 diabetes.
Causes of type 1 diabetes
Type 1 diabetes occurs as a result of the body being unable to produce insulin, which moves glucose out of the blood and into your cells to be used for energy.
Without insulin, your body will break down its own fat and muscle, resulting in weight loss. This can lead to a serious short-term condition called diabetic ketoacidosis, where the bloodstream becomes acidic and you develop dangerous levels of dehydration.
Type 1 diabetes is an autoimmune condition, where the immune system (the body's natural defence against infection and illness) mistakes the cells in your pancreas as harmful and attacks them.
Read more about the causes of type 1 diabetes.
Treating type 1 diabetes
It's important that diabetes is diagnosed as early as possible, so that treatment can be started.
Diabetes can't be cured, but treatment aims to keep your blood glucose levels as normal as possible and control your symptoms, to prevent health problems developing later in life.
If you're diagnosed with diabetes, you'll be referred to a diabetes care team for specialist treatment and monitoring.
As your body can't produce insulin, you'll need regular insulin injections to keep your glucose levels normal. You'll be taught how to do this and how to match the insulin you inject to the food you eat, taking into account your blood glucose level and how much exercise you do.
Insulin injections come in several different forms, with each working slightly differently. Some last up to a whole day (long-acting), some last up to eight hours (short-acting) and some work quickly but don't last very long (rapid-acting). You'll most likely need a combination of different insulin preparations.
There are alternatives to insulin injections, but they're only suitable for a small number of patients. They are:
- insulin pump therapy – where a small device constantly pumps insulin (at a rate you control) into your bloodstream through a needle that's inserted under the skin
- islet cell transplantation – where healthy insulin-producing cells from the pancreas of a deceased donor are implanted into the pancreas of someone with type 1 diabetes (read about the criteria for having an islet transplant)
- a complete pancreas transplant
Read more about diagnosing diabetes and treating type 1 diabetes.
If diabetes is left untreated, it can cause a number of different health problems. Large amounts of glucose can damage blood vessels, nerves and organs.
Even a mildly raised glucose level that doesn't cause any symptoms can have damaging effects in the long term.
Read more about the complications of type 1 diabetes.
Living with diabetes
If you have type 1 diabetes, you'll need to look after your health very carefully. Caring for your health will also make treating your diabetes easier and minimise your risk of developing complications.
For example, eating a healthy, balanced diet and exercising regularly will lower your blood glucose level. Stopping smoking (if you smoke) will also reduce your risk of developing cardiovascular disease.
If you have diabetes, your eyes are at risk from diabetic retinopathy, a condition that can lead to sight loss if it's not treated. Everyone with diabetes aged 12 or over should be invited to have their eyes screened once a year.
Read more about living with diabetes.
How common is diabetes?
Diabetes is very common, with an increasing number of people being affected by the condition every year.
In 2011, it was estimated that around 366 million people have diabetes worldwide, with this number predicted to grow to 552 million by 2030.
In the UK, more than 1 in 20 people are thought to have either diagnosed or undiagnosed diabetes. About 90% of those affected have type 2 diabetes, with the remaining 10% having type 1 diabetes.
Monday, June 29, 2009
Snakeskin fruit: a tropical fruit with bite
Although the world of food is continuously getting smaller — name an exotic cooking ingredient and I can probably find it in Bay Area markets — when it comes to fruit, the world is still quite large. Many fruits just can't travel more than a few hundred miles without a severe degradation in quality. Others, like most of the scores of edible banana varieties, are too fragile for economical shipment. And others, like the mangosteen, can harbor pests that prevent their import (into the United States, at least until recently, and then often requiring irradiation, as a 2006 article by David Karp in the NY Times explains).
So, when traveling, local fruit is one of the culinary highlights.
On this most recent trip, which took me to South Korea, Singapore and Indonesia, I sampled some old favorites (perfectly ripe mangoes), got a new perspective on some others (like bananas, which I never liked as a kid), and tried some new fruits. The photo above from a roadside stand in Bali shows a small sample of what we saw. In the bottom row, from left to right, there are mangosteens, oranges, tamarillos (which actually grow in Bay Area backyards), and more oranges (an interesting fact about oranges: cool nights are required to turn their skin orange, so many tropically-raised oranges are partially or fully green). The top row, from left to right, has green mangoes, tamarillos, a fruit that I can't identify, snakeskin fruit (the eventual subject of this post), and bananas.
Snakeskin fruit (salak in Bahasa Indonesia and Malay, also called snake fruit) are the fruit of small palm trees. Grown in many countries of Southeast Asia, they are available most of the year.
A close-up of the fruit reveals how it got its name: the skin is scaly like a snake's. They are roughly the size of a small pear, about 15 cm long and 10-15 cm in diameter.
The peel is just a millimeter or two thick, and it can be dangerous: a careless fruit peeler (like me) can easily cut a finger on the sharp scales, each one of them like a knife-tip.
Underneath the peel you'll find a few hard white orbs that each contain a sturdy pit. The fruit tasted somewhat like a combination of apple, pear, and lychee, with a bit of astringency and a surprisingly dry texture. Overall, an interesting fruit to look at, but not so interesting to eat.
The Global Ed Yellow Pages
» Technology (N-Z)
Odyssey Online
In Odyssey Online you’ll find museum objects from the Michael C. Carlos Museum at Emory University in Atlanta, GA, the Memorial Art Gallery of the University of Rochester in Rochester, NY and the Dallas Museum of Art of Texas. The site also houses a teacher resource section to help educators learn more about teaching with museum objects. Users of the site can choose museum items from the Near East, Egypt, Greece, Rome and Africa.
The Odyssey: World Trek for Service and Education
The World Trek takes students on an otherwise impossible field trip – a two-year trek around the world. More than 1300 classes in over 80 countries are traveling the world via the Odyssey website, their connection to a team of five educators doing a real two-year World Trek. The team visits ten major non-western countries to document their histories and cultures: Guatemala, Peru, Zimbabwe, Mali, Egypt, Israel, Turkey, Iran, India and China.
Schoolwires, Inc.
Schoolwires, Inc. provides technology products and related services to more than 1,300 educational entities, including k-12 school districts and schools in the U.S. and China.
Skype in the Classroom
Skype in the Classroom is a free and easy way for teachers to open up their classrooms. Meet new people, talk to experts, share ideas and create learning experiences with teachers around the world.
Taking It Global
Taking It Global (TIG) aims to empower teachers around the globe to utilize technology to facilitate transformative international learning experiences that build 21st Century skills. TIG programs, resources, and online tools engage students in collaborative education that builds leadership skills, environmental stewardship, and global citizenship.
Teacher’s Guide to International Collaboration on the Internet
The Teacher’s Guide to International Collaboration is a project of the U.S. Department of Education. It was developed to help teachers use the internet to “reach out” globally. Materials on the website were prepared as part of the department’s International Education Initiative and include a variety of project examples in a number of subject areas, tutorial guides, tips for online collaboration and much more of interest to global educators.
ThinkQuest
ThinkQuest is an international website-building competition, sponsored by the Oracle Education Foundation. Teams of students between nine and nineteen and their teachers are challenged to build websites on educational topics. These sites are published in the ThinkQuest Library and top-scoring teams win valuable prizes. Three prizes are awarded in each of the three age divisions (12 and under, 15 and under, 19 and under) along with a Best of Topic prize for each subject area. Rules for the contest can be found on the website.
The Vermont Institutes
The Vermont Institutes (VI) supports standards-based curriculum, instruction and assessment in the areas of math, science, and technology. VI provides professional development, technical support, leadership development and coaching, systems design, program evaluation, technology applications and research services. It partners with schools, school districts, businesses, higher education and foundations to improve student performance. VI produces SimSchool, an interactive simulation for training pre-service and in-service teachers in differentiating instruction, instructional decision-making and use of student performance data. It also produces ETIPs, a series of case-based, interactive studies for the integration of technology in classrooms.
Voices of Youth
Voices of Youth is a site created by the United Nations Children’s Fund (UNICEF) for young people who want to know more, do more and say more about the world. It’s about linking children and adolescents in different countries to explore, speak out and take action on global issues that are important to them and create a world fit for children. In addition to complete interactive modules on a range of child rights issues, the site also has state-of-the-art discussion forums, and a Take Action area with ideas for making a difference.
YouthActionNet
YouthActionNet is a website created by and for young people. It spotlights the vital role that youth play in leading positive change around the world. Launched in 2001 by the International Youth Foundation (IYF) and Nokia, YouthActionNet serves as a virtual gathering place for young people looking to connect with each other -- and with ideas about how to make a difference in their communities. The site contains instructions for building a website, an action tool kit, descriptions of youth projects and other information.
September 2009 - The 'sixth war' between Government forces and Shia al-Houthi tribal groups broke out on August 12, 2009, after the collapse of the most recent year-long truce in Yemen. The United Nations reports that at least 150,000 people have been displaced by the new wave of fighting in Sa'ada province, and are being housed in Internally Displaced Persons (IDP) camps and with host families.
CIDA's contribution is helping the International Committee of the Red Cross (ICRC). Activities undertaken by the ICRC include IDP camp management, the provision of basic health care, shelter, and essential relief items, improvement of access to safe water and sanitation services, protection services, and the promotion and monitoring of International Humanitarian Law. The ICRC's mission is to protect the lives and dignity of victims of war and internal violence and to provide them with assistance.
This is a new feature, part of CIDA's efforts towards increasing transparency. Information will only be available for projects approved after October 15, 2011. For other projects, information on expected results is usually included in the description.
International Committee of the Red Cross (ICRC) Appeals via the Canadian Red Cross Society (CRCS) | 2009-09-22 | Grant
The Tenosynovitis website provides information on the many aspects of the tenosynovitis condition, ranging from what it is, what the different types of tenosynovitis are and what causes them, what the symptoms of tenosynovitis are, how the condition can be prevented and treated, and what professions are particularly at risk of their employees developing tenosynovitis.
Tenosynovitis is a condition that affects the tendons, specifically the sheath (synovium) that surrounds the tendons. Often it can be classified as a repetitive strain injury because the repetitive motions of a physical activity often bring about or exacerbate the condition; rarely, however, tenosynovitis can develop through infection of a cut or wound, whereby the bacteria travel to the nearest tendon and cause inflammation.
There are variations of tenosynovitis, such as De Quervain’s tenosynovitis, which affects the thumb, and stenosing tenosynovitis (sometimes called Trigger Finger), which usually affects the middle finger, fourth finger or the thumb. Symptoms are similar across the different types of tenosynovitis in that the sufferer will usually feel pain, stiffness, aching, swelling and a dysfunction of the area, which includes the inability to straighten the affected area or a loss of grip or strength.
Tenosynovitis is a common condition, particularly amongst middle aged people with the vast majority of sufferers being women. There are a wide range of treatments available and there is a good chance of complete recovery from tenosynovitis if the condition is caught early enough and treatment is followed.
Excerpt from U.S. Department of Labor
"People tend to eye-minded, and the impacts visual aids bring to a
presentation are, indeed, significant. The studies, below, reveal
interesting statistics that support these findings:
- In many studies, experimental psychologists and educators have found
that retention of information three days after a meeting or other
event is six times greater when information is presented by visual and
oral means than when the information is presented by the spoken word alone.
- Studies by educational researchers suggest that approximately 83% of
human learning occurs visually, and the remaining 17% through the
other senses - 11% through hearing, 3.5% through smell, 1% through
taste, and 1.5% through touch.
- The studies suggest that three days after an event, people retain
10% of what they heard from an oral presentation, 35% from a visual
presentation, and 65% from a visual and oral presentation.
"Presenting Effective Presentations with Visual Aids" May 1996
OSHA Occupational Safety & Health Administration, U.S. Department of Labor
To communicate information that people need to recognize, pictures are
extremely effective. In one study (Shepard, 1967), people looked at
600 pictures, sentences, or words. On an immediate test, recognition
accuracy was 98% for pictures, 90% for sentences, and 88% for words.
Another study (Nickerson, 1968) found that people had 63% recognition
accuracy for a group of 200 black and white photographs one year after
initial viewing. Other researchers (Standing, Conezio, & Haber, 1970)
showed people 2,560 photographs for 10 seconds each. After three days,
the study participants recorded recognition accuracy of over 90%. Read
and Barnsley (1977) showed adults pictures and text from the
elementary school books they used 20 to 30 years ago. Recognition
accuracy rates for pictures and text were better than chance, with
pictures alone being recognized more accurately than text alone.
Finally, Stoneman & Brody (1983) found that children in visual or
audiovisual conditions recognized more products in commercials than
children in an auditory only condition. Pictures seem to allow very
rich cognitive encoding that allows surprisingly high recognition
rates, even years after the initial encoding took place.
Illustrations are superior to text when learning spatial information.
For example, Bartram (1980) arranged for college students to learn how
to get from a starting point to a destination using a minimum number
of buses. The researcher presented the bus route information via maps
or lists and asked the students to provide as quickly as possible the
correct list of bus numbers in the correct order. Bartram measured the
time it took to correctly complete each bus route task. The study
found that the students learned the bus route information more quickly
when they used a map than when they used lists. Bartram believed that
the students performed a spatial task, and the maps were superior to
lists because the map presentation of information is consistent with
people's preferred internal representation of spatial information.
In an exploratory study, Bell and Johnson (1992) allowed four people
to select pictures or text for communicating instructions for loading
a battery into a camera. Qualitative results showed a strong
preference for pictures rather than text. The researchers believed
that the information to be communicated was spatial, and that the
results supported the hypothesis that spatial information should be
presented pictorially. "
"Multimedia Information and Learning" by Lawrence J. Najjar, School of
Psychology, Georgia Institute of Technology, 1996 Journal of
Educational Multimedia and Hypermedia, 5, 129-150.
One of the basic ways that illustrations aid retention relates to the
well-researched (but not undebated) dual-coding theory of memory
(Paivio, 1971). This theory proposes that information is stored in
long-term memory both as verbal propositions and as mental images. It
suggests that when information is presented verbally and visually it
has a better chance of being remembered. Corroborating research shows
that concrete words are remembered better than abstract words, and
that pictures alone are remembered better than words alone (Fleming &
Levie, 1978). From the dual-coding perspective, an explanation is that
concrete words help us generate associated mental images, and that
pictures alone help us to generate associated words, in addition to
detailed mental images. The combination of verbal proposition and
mental image establishes multiple pathways by which the information
can be retrieved from memory...
Retention in Working Memory
Illustrations can also be seen as assisting the short-term or working
memory by making more information readily available. Illustrations can
present simultaneously all the information needed to explain a topic
or perform a task. Where a linear string of words must use a series of
semantic cues to its organization over the course of its passage, an
illustration can use lines, boxes, arrows, space, color, typefaces,
and the relative distance between elements to communicate information
about the relationships of those elements. Because the reader can see
this information at a glance or with minimal study, graphical
presentation can be more efficient than words alone (Winn, 1987). For
example, charts with multiple columns and rows can reveal the complex
relationships between large amounts of information. Such information
would be difficult to present and even more difficult to comprehend in
words alone. When students read prose or hear exposition, they have to
hold information in working memory long enough to relate it to
information presented later, a difficult task in a long passage.
Simultaneous presentation can reduce the processing load on the
working memory and thus help students better see relationships within
"The Instructional Role of Illustrations" Cooperative Program for
Operational Meteorology, Education and Training
"Mayer and Anderson's (1992) contiguity principle asserts that
multimedia instruction is more effective when words and pictures are
presented contiguously in time or space. Studies involving multimedia
instruction have shown that learners perform better on problem solving
and recall tasks when related text or narration are close to an
illustration or animation sequence rather than when they are far away.
In a series of studies reported by Mayer and his colleagues (Moreno &
Mayer, 1999; Mayer, 1997) students read a text passage or listened to
a narration describing a cause and effect system (e.g., how a bicycle
tire pump works) and either studied a diagram or an animated sequence
illustrating the process that was described verbally. In each study,
students receiving text contiguously in space (text physically close
to the diagram or animation) or time (narration chronologically close
to the animated sequence) performed better on recall and problem
solving tasks than students under less contiguous conditions. The
current research was designed to determine whether the contiguity
principle applies to learning from geographic maps. Comparing rollover
and hyperlink features to a separate narrative allows us to study this question.
It was hypothesized that learners who study a map with animated
features would more successfully encode both map feature and map
structure information than learners who studied a static map. Few
research studies have been reported on the role of animation in
learning from geographic maps. However, research integrating animation
with simulations (Rieber, 1996), graphic organizers (Blankenship &
Dansereau, 2000) and problem solving tasks (Ok-choon Park & Gittelman,
1992) has shown positive effects for animated over static displays."
"Effects of Fact Location and Animation on Learning from Online Maps"
Jul 31, 2001 by Steven M. Crooks, Michael P Verdi, David White Texas
"IS THERE A DIFFERENCE IN THE LEARNING PROCESS WHEN MULTIMEDIA IS
INVOLVED? At the University of Maribor in Slovenia,
electroencephalography (EEG) was used to measure brain activity when
exposed to different media... The results show that students find it
difficult to form mental models from text alone. Multimedia
presentations trigger visualization strategies such as mental imagery,
which is crucial to many kinds of problem solving."
"The Affect of Multimedia on the Learning Process" Encyclopedia of
Search terms used:
effective presentations information compared visual elements design
researchers OR research recall visual information effective retention
figures OR statistics
I hope that helps; if you need any clarification, just ask.
On 2D Inverse Problems/Harmonic functions
Harmonic functions can be defined as solutions of the differential and difference Laplace equations, as follows.
A function/vector u defined on the vertices of a graph w/boundary is harmonic if its value at every interior vertex p is the average of its values at neighboring vertices. That is,

$$u(p) = \frac{1}{\deg(p)} \sum_{q \sim p} u(q),$$

where the sum runs over the vertices q adjacent to p.
Or, alternatively, u satisfies Kirchhoff's law for potential at every interior vertex p:

$$\sum_{q \sim p} \gamma(pq)\,\bigl(u(p) - u(q)\bigr) = 0,$$

where γ(pq) denotes the conductivity of the edge pq (γ ≡ 1 for the unweighted graph above).
A harmonic function on a manifold M is a twice continuously differentiable function u : M → R that satisfies the Laplace equation

$$\Delta u = 0.$$
A harmonic function defined on an open subset of the plane satisfies the following differential equation:

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0.$$
Harmonic functions satisfy the following properties:
- mean-value property
The value of a harmonic function at a vertex is a weighted average of its values at the neighboring vertices.
- maximum principle
Corollary: the maximum (and the minimum) of a harmonic function occurs on the boundary of the graph or the manifold.
- harmonic conjugate
One can use the system of Cauchy-Riemann equations

$$u_x = v_y, \qquad u_y = -v_x$$

to define the harmonic conjugate v of a given harmonic function u.
Analytic/harmonic continuation is an extension of the domain of a given harmonic function.
Harmonic functions minimize the energy integral

$$E(u) = \int_{\Omega} |\nabla u|^2 \, dA$$

or the sum

$$E(u) = \sum_{pq} \gamma(pq)\,\bigl(u(p) - u(q)\bigr)^2$$

if the values of the functions are fixed at the boundary of the domain or the network in the continuous and discrete models, respectively. The minimizing function/vector is the solution of the Dirichlet problem with the prescribed boundary data.
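To make the discrete Dirichlet problem concrete, here is a minimal Python sketch; the grid size, boundary data, tolerance, and function names are all invented for the example. Each sweep replaces every interior value with the average of its four grid neighbors (Jacobi iteration on the grid graph with unit conductivities), which converges to the discrete harmonic function agreeing with the prescribed boundary values.

```python
import numpy as np

def solve_dirichlet(values, interior, tol=1e-10, max_iter=100_000):
    """Discrete Dirichlet problem on a rectangular grid graph.

    `values` holds the prescribed data; entries where `interior` is
    True are the unknowns. Each sweep replaces every interior value
    with the average of its four grid neighbors, which converges to
    the unique discrete harmonic function agreeing with the boundary
    data. (In this sketch the interior must not touch the array
    border, since np.roll wraps around the edges.)
    """
    u = values.astype(float)
    for _ in range(max_iter):
        # Average of the four neighbors at every grid point.
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u_next = np.where(interior, avg, u)  # boundary values stay fixed
        if np.max(np.abs(u_next - u)) < tol:
            return u_next
        u = u_next
    return u

# Example: a 5x5 grid with u = 1 on the top edge and u = 0 on the
# rest of the boundary.
n = 5
values = np.zeros((n, n))
values[0, :] = 1.0
interior = np.zeros((n, n), dtype=bool)
interior[1:-1, 1:-1] = True

u = solve_dirichlet(values, interior)
print(np.round(u, 3))
# Consistent with the maximum principle, every interior entry lies
# strictly between the boundary extremes 0 and 1.
```

Jacobi averaging is the simplest scheme that exhibits the mean-value property directly; for large networks, or for general conductivities γ(pq), one would instead assemble the graph Laplacian and solve the resulting sparse linear system.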
Philip Neri was born in Florence, Italy, in 1515 into a poor family. As a young man, he received word in a vision that he had a special mission in Rome, so he cut himself off from his family and friends and left.
While in Rome, he studied philosophy and theology, and tutored young boys. Eventually Philip grew tired of study, so he sold all of his books, gave the money he received from them to the poor, and visited the sick.
Later, he co-founded the Confraternity of the Most Holy Trinity and began to preach, and many people converted thanks to Philip's preaching and example. During this time, he was a lay person and lived as a hermit; however, a good friend eventually convinced him to enter the priesthood, and he was ordained in 1551.
Many people came to him for confession. He also began to work with youth. Pope Gregory XIV wanted to make Philip a cardinal, but the priest declined.
He then founded the Congregation of the Oratory, also known as the Oratorians, dedicated to preaching and teaching; the congregation still exists today.
He died May 27, 1595, and was canonized by Pope Gregory XV in 1622. He is the patron of Rome and the U.S. Army Special Forces.
The latest Arizona Vegetable Integrated Pest Management Update from the University of Arizona (UA) Cooperative Extension in Yuma.
Best management practices for pest control in vegetables
By John Palumbo, UA Research Scientist and Extension Specialist
In 2008 a group of research and Extension vegetable entomologists and crop consultants from vegetable-producing states met to discuss pest management issues, plus challenges and opportunities confronting the fresh vegetable industry.
From that workshop, best management practice (BMP) recommendations were developed. A continuation of that meeting was held in 2009 and focused on refining the BMPs for vegetable insect control, and in general terms, defined a number of best practices for successful insect management in vegetable crops.
In addition, based on the cumulative experiences of the participating entomologists, the strengths and weaknesses of a number of new pesticide technologies (registered products and compounds under development) were identified. These included Radiant, Pyrifluquinazon, Oberon, Movento, Rimon, Coragen, Synapse, and Cyazypyr.
A number of important issues, challenges, and opportunities in insect control in vegetable crops were discussed based on regional perspectives. Among the topics discussed were shifting pest spectrums, trends toward selective pesticide technologies, resistance management, maximum residue limits (MRLs), and other production issues.
A copy of the 2009 BMPs generated from those discussions is available online.
Contact Palumbo: (928) 782-3836 or [email protected].
Lettuce drop: aerial infection
By Mike Matheron, UA Extension Plant Pathologist
A widespread outbreak of aerial infections caused by the lettuce drop pathogen Sclerotinia sclerotiorum was reported in several locations in Yuma during the last week of December.
A review of the biology of the two lettuce drop pathogens and the environmental conditions required for production of airborne spores may help explain this occurrence.
Lettuce drop is caused by two fungal pathogens, Sclerotinia minor and Sclerotinia sclerotiorum. The pathogens produce structures called sclerotia which allow the organisms to survive in the soil between the plantings of host crops.
In desert plantings, infection of lettuce by S. minor and usually by S. sclerotiorum results from direct germination of sclerotia in the soil followed by the colonization of the base of the plants.
However, when soil moisture and temperature conditions are favorable, sclerotia of S. sclerotiorum an inch or less below the soil surface can create fruiting bodies that in turn produce vast numbers of spores dispersed by wind throughout the field and to other fields. The spores germinate and cause aerial infections when deposited on lettuce leaf tissue.
The optimal conditions that stimulate airborne spore production include exposure of sclerotia to nearly saturated soil for at least a two-week period and soil temperatures ranging from approximately 52 to 60 degrees F. Soil in vegetable production fields is normally very wet and soil temperatures from Nov. 26 until the present have been in the favorable temperature range.
The airborne spores require free moisture from rainfall, dew, or sprinkler irrigation on senescent or damaged leaf tissue for optimal infection to occur. The weather record shows freezing temperatures throughout the area on Nov. 26 and 27 resulting in damaged lettuce leaf tissue, and rainfall on Dec. 21 and 22. The favorable conditions for airborne spore production and infection were present in the area.
In other agricultural regions where airborne infection of crops by S. sclerotiorum is common, foliar application of fungicides including Endura, Rovral, or Switch can provide significant disease protection.
For lettuce, the initial application of fungicides during the rosette stage, about 30 to 40 days before harvest, has been shown to significantly reduce the incidence of lettuce drop caused by airborne infections of Sclerotinia sclerotiorum.
Contact Matheron: (928) 726-6856 or [email protected].
Weed seeds and pre-emergent herbicides
By Barry Tickes, UA Area Agriculture Agent
Most pre-emergent herbicides do not kill dormant weed seeds. In most cases, the seeds must first germinate and contact the herbicide before they are killed. Some pre-emergent herbicides are absorbed only by the roots, some by shoots only (at the hypocotyl in broadleaves and at the coleoptile in grasses), and some by roots and shoots.
Weed seedlings sometimes emerge and grow for a while before dying or becoming uncompetitive with the crop.
Some fumigants kill weed seeds, including metam sodium (Vapam), chloropicrin, 1,3-dichloropropene (Telone), dazomet (Basamid), methyl bromide, methyl iodide, and calcium cyanamide.
Flooding and solarization also can kill weed seeds. Fumigants, flooding, and solarization are often used primarily to control diseases and have the added benefit of controlling some weeds.
Contact Tickes: (928) 580-9902 or [email protected].
This month, the world was shocked by a natural disaster in South Asia. The outpouring of humanitarian assistance is a testament to what can be accomplished by the international community when it finds the will to act. We will soon learn if the will exists to confront a man-made disaster in Western Sudan.
Last May, I wrote in these pages about 30,000 Sudanese dead in Darfur, the victims of ethnic cleansing. Since then, 50,000 more have died and some 2 million have been displaced, most of them now struggling to survive on the brink of starvation. On Tuesday, the United Nations Commission of Inquiry will present its report on violations of international humanitarian and human rights law in Darfur, on allegations of genocide, and on the identities of those responsible. After the presentation, the UN Security Council will have an opportunity to refer Darfur to the International Criminal Court. Without reservation or delay, it should do so.
Almost a year has passed since the world began to learn of the ethnic cleansing taking place in the Darfur region of Sudan. The violence has been perpetrated by Janjaweed militias, which are armed and directed by Sudanese authorities. Government troops have also assisted with the murder and displacement of thousands of civilians. Despite three UN resolutions condemning the violence and a report from the U.S. State Department that found evidence of genocide in Darfur, little action has been taken to stop the killing.
Meanwhile, the Sudanese government and the militias it directs carry on their deadly work largely unencumbered. Some African Union forces have arrived in Darfur, but their numbers are too small to be consequential: Even the full contingent of 3,500 will be barely enough to guard the refugee camps, let alone protect the civilians still living in the region or return the displaced to their homes.
The credibility of the international community--especially the UN, but including the United States--is at stake in Darfur. Voices have been raised, diplomatic pressure has been applied, genocide has been declared by the secretary of state. But since nothing has changed on the ground, all of this talk must be followed by action.
Short of military intervention, there are still a number of steps that would bring meaningful pressure on the government in Khartoum. On Tuesday, the Security Council will have an opportunity to take them.
For instance, the number of African Union troops should be increased and given a clear mandate to protect Sudanese civilians from harm. An international arms embargo should be imposed on the Sudanese government to stem the flow of weapons to the Janjaweed militias. And equally important, the UN Security Council should refer Darfur for investigation by and possible prosecution at the International Criminal Court. The court has a mandate to try cases of genocide, war crimes and crimes against humanity where national courts are unwilling or unable to do so. It was designed precisely for situations like Darfur.
Identifying the architects, indicting them and beginning the process of determining their guilt or innocence is a powerful tool at the disposal of the Security Council. It should be used in Sudan to marginalize the planners and disrupt their operations.
The ICC would be by far the most efficient and effective way to pursue those who are orchestrating war crimes in Darfur. As a standing institution that has already opened investigations into violations of humanitarian law in Uganda and Congo, it has the infrastructure in place to take up the cases in Western Sudan: investigators, prosecutors, judges. An investigation by the ICC could help protect the civilians who remain in Darfur, keep the Janjaweed from consolidating its territorial gains, and assist humanitarian groups to work unmolested in devastated areas.
The U.S. has led the world in criticizing Sudan's government, but it may be a major obstacle to referral by the Security Council. The U.S. has opposed the ICC as an institution because it fears that a strong court might be used some day to stage political show-trials against American soldiers and citizens. These worries are misplaced: the court only has jurisdiction when other legal forums are not available, and, in any case, there are no Americans implicated in Darfur. U.S. opposition to the ICC should not be allowed to deny the victims of Darfur access to the only court that can hear their claims, provide them justice, and deter further crimes.
The ICC passes judgment on acts that have already been committed, and the ethnic cleansing in Darfur is ongoing. But a referral to the court could help curtail the violence by presenting a very real threat of imprisonment, frozen assets, and international isolation to Sudan's leadership. With more than 2 million people currently at risk of attack or starvation, we cannot wait until the killing stops to bring justice to Darfur. By then it will be far too late.
What is a key-value pair?
A key-value pair (KVP) is a set of two linked data items: a key, which is a unique identifier for some item of data, and the value, which is either the data that is identified or a pointer to the location of that data. Key-value pairs are frequently used in lookup tables, hash tables and configuration files.
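As a minimal illustration, the Python sketch below uses a dictionary, one common realization of a lookup table; the keys and values here are invented for the example. Each unique key maps either to the identified data itself or to a pointer-like reference such as a file path.

```python
# A lookup table of key-value pairs: each key is a unique identifier.
config = {
    "hostname": "db01.example.com",   # value is the identified data
    "port": 5432,
    "log_path": "/var/log/db01.log",  # value points to where data lives
}

# Retrieve a value by its key.
print(config["hostname"])             # -> db01.example.com

# Keys are unique, so assigning to an existing key overwrites its value.
config["port"] = 5433

# Safe lookup with a default when a key is absent, as in a config file.
timeout = config.get("timeout", 30)
print(timeout)                        # -> 30
```

Hash tables and configuration-file parsers expose essentially the same operations: insert, look up, and overwrite by key.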
Inside MakerKids, a workshop space in Toronto’s west end, children are presented with what’s called the Possibility Wall. The shelves on the wall are filled with bins of just about anything a child might think to create with – motors, gears, crayons, glitter, electrical tape, scissors and even power drills. There’s also a 3-D printer and soldering guns in the space, along with several tables and computers.
From this, kids are free to create whatever their imaginations come up with.
“The light turns on when they realize it’s anything that they want to do,” says Andy Forest, a Web developer who co-founded MakerKids with his wife, a planetary scientist.
Since the space opened in April, 2012, it has seen a steady stream of kids age three and up keen to attend “open shop” nights where kids are free to work on anything they want, and attend a range of classes that include inventing, programming and robotics. Children also regularly attend the workshop’s open space events, where toy hacking – taking a toy and redesigning it into something entirely new – is especially popular, as is making things on the 3-D printer and working with Arduinos, programmable circuit boards.
The maker movement is on the cusp of mainstream recognition. Christened in 2006, makers represent a do-it-yourself culture informed by a hacker ethos that often has a strong tech element. It includes everything from robotics, 3-D printing and electronics to traditional arts and crafts. Maker Faires that attract thousands of visitors are cropping up all over the world, including several in Canada. And maker spaces are also sprouting up in Canada, where kids can develop a passion for science and technology.
Open nights at MakerKids are offered on a pay-what-you-can basis, with a suggested donation of $20, plus materials. Other maker spaces for kids offer classes that stretch over months and cost $150.
These classes are perhaps the latest example of how the way children play is changing. While kids have long tinkered in the garage with a parent, maker spaces provide kids with the chance to explore science, technology and engineering in a more formalized way.
“For kids, you can get all sorts of other activities, like arts, dance, music, sports, starting at a very early age. But it’s odd that there’s no supplemental fun things to do in science and engineering,” says Henry Houh, a self-described “geek engineering-type dad” who launched a workshop space in Burlington, Mass., in December, 2012.
The space is open to kids as young as pre-school age, who can learn to use a 3-D printer or learn 3-D computer-aided design.
Sandy Beaman, manager of Victoria Island Technology Park, the new home of the Victoria Makerspace, says that developing an interest in science and engineering will help inspire kids to pursue careers in these fields, not to mention give them a leg up on peers when they finally begin taking classes.
The Edmonton Public Library launched a maker space last month. It features a green screen and computers loaded with things such as game-creation software and 3-D modelling software. It also has two 3-D printers and a machine that allows you to make a fully formed book.
“Our maker space is very focused on the digital,” says Pam Ryan, the library’s director of collections and technology. “Anything around learning to be a digital citizen, to participate in the digital environment, is really key right now.”
The Ottawa Public Library has plans to open a maker space next March. “Learning is changing,” says Danielle McDonald, CEO of the Ottawa Public Library. “It’s how young people want to learn. They don’t want to sit in a classroom and have it taught to them. They want to create.”
Jason Nolan, an associate professor at the school of early-childhood education at Toronto’s Ryerson University, says that, “In terms of maker culture, laser cutters, 3-D scanners and 3-D printers, Arduino circuits and DIY robotics are great ways of extending a child’s interest in STEM [science, technology, engineering and math] beyond what they’ve done on their own at home. However, there must always be an intrinsic interest on which to build.”
“It is a way to shape, if not create, the culture we live in,” says Dale Dougherty, founder of Maker Media, a company based in Sebastopol, Calif. He coined the term maker in 2006; it appealed because it was broad enough to include a wide array of pursuits.
At Toronto’s MakerKids, recent projects have included an underwater robot and one seven-year-old girl’s remote-controlled teddy bear, which she learned to use a soldering gun to make.
Monica Peschmann drives her nine-year-old son Patrick Burns across town so he can attend MakerKids. On a recent Friday evening, he was busy working on a security system to keep his sister out of his room. It would be made from a television remote taken from home – with mom’s permission, of course – and taken apart.
“I wanted the infrared LED,” Patrick explained. The system would also include a pressure plate for outside his door, which would trigger an alarm if his sister stood on it.
The space’s ambition, says Jennifer Turliuk, MakerKids’s co-executive director, is to show kids a way of thinking about themselves and the way they relate to the world.
“You think of the world a different way when you know you can fix it,” she says.
Roadway improvements have been shown to reduce crashes.
Pedestrians comprise the second largest category of motor vehicle crash deaths after vehicle occupants, accounting for 11 percent of fatalities. The rates of pedestrian deaths in motor vehicle crashes per 100,000 people are also higher for older people.
Pedestrian deaths occur primarily in urban areas. Many pedestrians are killed on crosswalks, sidewalks, median strips, and traffic islands. Physical separations such as overpasses, underpasses, and barriers can reduce the problem. Increased illumination and improved signal timing at intersections also can be effective. Because traffic speeds affect the risk and severity of pedestrian crashes, reducing speeds can reduce pedestrian deaths.
Retting, R.A.; Ferguson, S.A.; and McCartt, A.T. 2003. A review of evidence-based traffic engineering measures to reduce pedestrian-motor vehicle crashes. American Journal of Public Health 93:1456-63.
Vehicle factors count, too, because the most serious injuries often result from pedestrians being thrown onto the hoods, windshields, or tops of vehicles. Serious head, pelvis, and leg injuries are common, and the severity of such injuries could be mitigated by improving vehicle designs and materials.
The following facts are based on analysis of data from the U.S. Department of Transportation's Fatality Analysis Reporting System (FARS).
A total of 4,881 pedestrian deaths occurred in 2005, up 4 percent from 2004. Since 1975 pedestrian deaths have declined from 17 percent of all motor vehicle crash deaths to 11 percent in 2005.
[Chart: Pedestrian deaths and other motor vehicle crash deaths, 1975-2005]
Nineteen percent of pedestrian deaths in 2005 occurred in hit-and-run crashes.
The rate of pedestrian deaths per 100,000 people decreased 53 percent between 1975 and 2005 (from 3.5 to 1.6 per 100,000). The pedestrian death rate for children ages 0-12 decreased 85 percent. Children this age had the third highest pedestrian death rate in 1975 but in 2005 had the lowest.
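The arithmetic behind these figures is straightforward. A quick sketch (Python; the numbers are the rounded rates quoted above, so this is illustrative only):

def rate_per_100k(deaths, population):
    # Population-normalized death rate, as used in FARS summaries
    return deaths / population * 100_000

rate_1975, rate_2005 = 3.5, 1.6  # all-ages rates quoted above
change = (rate_2005 - rate_1975) / rate_1975
print(f"{change:.0%}")  # -54% from the rounded rates; the cited 53% uses unrounded figures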
[Chart: Pedestrian deaths per 100,000 people by age, 1975-2005]
The rate of pedestrian deaths per 100,000 people in 2005 was approximately twice as high for people 70 and older than for those younger than 70. Since 1975 the rate of pedestrian deaths per 100,000 people has decreased 50 percent for people younger than 70 and 69 percent for those age 70 and older.
Seventy percent of pedestrians killed in 2005 were males, a proportion that has varied little since 1975.
Fifty-three percent of pedestrians 16 and older killed in nighttime (9pm–6am) motor vehicle crashes in 2005 had blood alcohol concentrations (BACs) at or above 0.08 percent.
Seventy-two percent of pedestrian deaths in 2005 occurred in urban areas, up from 59 percent in 1975.
Thirty-five percent of pedestrian deaths among people 70 and older in 2005 occurred at intersections, compared with 21 percent for those younger than 70.
Seventy-one percent of pedestrian deaths in 2005 occurred on major roads, including interstates and freeways.
In urban areas 56 percent of pedestrian deaths in 2005 occurred on roads with speed limits of 40 mph or less; in rural areas 22 percent of deaths occurred on such roads.
Forty-five percent of fatal pedestrian motor vehicle collisions in 2005 occurred between 6pm and midnight.
A greater proportion of pedestrian deaths in 2005 occurred on Friday and Saturday than on other days of the week.
Today, we see an unobstructed view of the cosmos in all directions. But there was a time, not long after the Big Bang, when the space between galaxies was an opaque fog in which nothing could be seen. And according to two University of Michigan researchers, rare Green Pea galaxies, discovered in 2007, could offer clues to reionization, a pivotal step in the Universe's evolution when space became transparent.
Reionization occurred a few hundred million years after the Big Bang, as the first stars and galaxies were beginning to blaze forth. Astronomers believe these massive stars blasted the early universe with high-energy ultraviolet light. The UV light interacted with the neutral hydrogen gas it met, stripping off electrons and leaving behind a plasma of free electrons and positively charged hydrogen ions.
“We think this is what happened but when we looked at galaxies nearby, the high-energy radiation doesn’t appear to make it out. There’s been a push to find some galaxies that show signs of radiation escaping,” Anne Jaskot, a doctoral student in astronomy, says in a press release.
In findings released in the current edition of the Astrophysical Journal, Jaskot and Sally Oey, an associate professor of astronomy, focused on six of the most intensely star-forming Green Pea galaxies, located between one billion and five billion light-years from Earth. The galaxies are compact and closely resemble early galaxies. They are thought to be a type of Luminous Blue Compact Galaxy, a kind of starburst galaxy in which stars form at prodigious rates. They were discovered in 2007 by volunteers with the citizen science project Galaxy Zoo. Named "peas" because of their fuzzy green appearance, the galaxies are very small: scientists estimate that they are no larger than about 16,000 light-years across, making them about the size of the Large Magellanic Cloud, an irregular galaxy near our Milky Way Galaxy.
Using data from the Sloan Digital Sky Survey, Jaskot and Oey studied the emission lines from the galaxies to determine how much light was absorbed. Emission lines tell astronomers not only what elements are present in the stars but also much about the intervening space. By studying this interaction, the researchers determined that the galaxies produced more radiation than observed, meaning some must have escaped.
“An analogy might be if you have a tablecloth and you spill something on it. If you see the cloth has been stained all the way to the edges, there’s a good chance it also spilled onto the floor,” Jaskot said. “We’re looking at the gas like the tablecloth and seeing how much light it has absorbed. It has absorbed a lot of light. We’re seeing that the galaxy is saturated with it and there’s probably some extra that spilled off the edges.”
|
<urn:uuid:59086dc4-905a-4afc-a176-508db6f85373>
|
CC-MAIN-2016-26
|
http://www.universetoday.com/101269/green-peas-offer-tiny-clues-to-early-universe/
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00073-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.945729 | 590 | 3.96875 | 4 |
Yes, that's right: Clock confusion is upon us yet again.
Not all states, however, will observe the time change. Residents of Arizona, Hawaii and U.S. territories like Puerto Rico, Guam and the Virgin Islands will remain on their normal schedules.
According to TimeandDate.com, about 75 countries and territories have at least one location that observed Daylight Saving Time this year and will be implementing the changeover this fall. However, the website notes that "countries, territories and states sometimes make adjustments that are announced just days or weeks ahead of the change."
In the U.S., the upcoming time shift is part of a longstanding tradition in which most residents set their clocks ahead an hour in the spring ("spring forward") and turn them back an hour as winter approaches ("fall back").
This means that come Sunday morning on Nov. 4, many U.S. residents will have had an extra hour of shut-eye.
Why do we have Daylight Saving Time?
The idea behind Daylight Saving Time, wrote MSNBC in 2011, is to use the "extended daylight hours during the warmest part of the year to best advantage."
The time shift is said to reduce the need for lighting during the evening, which is why the changeover is considered an energy-saver.
However, experts are divided as to whether or not this is true.
According to National Geographic, several studies conducted in recent years have suggested that Daylight Saving Time "doesn't actually save energy and may even result in a net loss."
The magazine wrote in March:
Environmental economist Hendrik Wolff of the University of Washington co-authored a paper that studied Australian power-use data when parts of the country extended daylight saving time for the 2000 Sydney Olympics and others did not. The researchers found that the practice reduced lighting and electricity consumption in the evening but increased energy use in the now-dark mornings -- wiping out the evening gains.
Other studies, however, have shown energy gains.
In an October 2008 report to Congress, for example, the U.S. Department of Energy asserted that the changeover in the spring does save energy.
According to the Scientific American, senior analyst Jeff Dowd and his colleagues at the U.S. Department of Energy investigated what effect extending Daylight Saving Time would have on national energy consumption by looking at 67 electric utilities across the country.
"They [concluded that a] four-week extension of daylight time saved about 0.5 percent of the nation’s electricity per day, or 1.3 trillion watt-hours in total. That amount could power 100,000 households for a year," the science magazine wrote in 2009.
Benjamin Franklin has been credited with the idea of Daylight Saving Time, but Britain and Germany began using the concept in World War I to conserve energy, the Washington Post observes. The U.S. used Daylight Saving Time for a brief time during the war, but it didn't become widely accepted in the States until after World War II.
In 1966, the Uniform Time Act outlined that clocks should be set forward on the last Sunday in April and set back the last Sunday in October.
That law was amended in 1986 to start Daylight Saving Time on the first Sunday in April, but the new system wasn't implemented until 1987. The end date was not changed, however, and remained the last Sunday in October until 2006.
Nowadays, Daylight Saving Time begins on the second Sunday in March and ends on the first Sunday in November.
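That rule is mechanical enough to compute directly. A minimal sketch (Python, standard library only) finds the changeover dates for any year:

import calendar

def nth_sunday(year, month, n):
    # Date of the nth Sunday of a given month (Sunday has weekday() == 6)
    cal = calendar.Calendar()
    sundays = [d for d in cal.itermonthdates(year, month)
               if d.month == month and d.weekday() == 6]
    return sundays[n - 1]

def us_dst_span(year):
    # Current U.S. rule: second Sunday in March to first Sunday in November
    return nth_sunday(year, 3, 2), nth_sunday(year, 11, 1)

print(*us_dst_span(2012))  # 2012-03-11 2012-11-04, matching the Nov. 4 changeover above
print(*us_dst_span(2013))  # 2013-03-10 2013-11-03, matching the date below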
In 2013, Mar. 10 marks the beginning of Daylight Saving Time. Until then, enjoy standard time to the fullest.
In mathematics, a prism is a polyhedron constructed from two congruent n-sided polygons and n parallelograms. The word 'prism' comes from the Greek prisma, which relates to cutting or sawing. A prism is a semiregular polyhedron if all of its faces are regular polygons. If the lateral surfaces of the prism are perpendicular to the base, it is said to be a right prism; otherwise, it is known as an oblique prism.
If the base of a prism is a regular n-gon, it is said to be a regular n-sided prism. The distance between the base and the upper surface is called the altitude. The volume of a prism with base area B and altitude h is V = B·h.
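A short sketch (Python; the regular-polygon area formula is standard, the function name is mine) applying V = B·h to a regular n-sided prism:

import math

def regular_prism_volume(n, side, height):
    # Base area of a regular n-gon with side length s: B = n*s^2 / (4*tan(pi/n))
    base_area = n * side**2 / (4 * math.tan(math.pi / n))
    # Prism volume: V = B * h
    return base_area * height

# A regular hexagonal prism with 2 cm sides and a 5 cm altitude:
print(round(regular_prism_volume(6, 2.0, 5.0), 2))  # 51.96 (cm^3)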
A prism whose base is a parallelogram is called a parallelepiped; if it is also a right prism, it is a right parallelepiped. A right parallelepiped with a square base whose altitude is equal to the base edge is a cube. Prisms with equal base areas and equal altitudes have equal volumes.
A prismoid resembles a prism but has bases that are similar rather than congruent, and sides that are trapezoids rather than parallelograms. An example of a prismoid is the frustum of a pyramid. A prismatoid is a polyhedron with all its vertices lying in two parallel planes.
In physics, prismatic pieces of transparent materials are much used in optical instruments. In spectroscopes and devices for producing monochromatic light, prisms are used to produce dispersion effects, just as Newton first used a triangular prism to reveal that sunlight could be split up to give a spectrum of colors. In binoculars and single-lens reflex cameras, reflecting prisms (employing total internal reflection) are used in preference to ordinary mirrors. The Nicol prism is used to produce polarized light.
Figure: When light hits a prism it is refracted at the two surfaces it crosses (1, 2). White light splits into the spectrum (3) because each color of the spectrum has a different wavelength. For example, the short wavelengths of blue and indigo are refracted more than colors further down the spectrum with longer wavelengths, such as orange and red.
A new study shows the rates of prescription poisonings in children are increasing.
Researchers found a link between a surge in adults taking prescription medications and an increase in drug poisonings among children.
Children under age 6 face the greatest risk, followed by teenagers.
Despite safeguards like child-resistant packaging, more than 70,000 children a year are evaluated for accidental drug poisoning.
Poisonings from cholesterol and high blood pressure medicines led to the most ER visits. Poisonings from prescription painkillers and diabetes drugs led to the most serious injuries and hospitalizations.
Doctors say if a child accidentally ingests prescription drugs, consult a medical professional or call Poison Control at 1-800-222-1222.
The Food and Drug Administration is ordering all drugmakers to add the government's strongest safety alert to all antidepressants.
The FDA says the drugs must carry a "black box" warning linking them to increased suicidal thoughts and behavior among children and teens taking them. Also, since the warnings are mainly seen by doctors, the agency is creating an information guide for patients and their parents advising them of the risk.
In a statement, the FDA says it recognizes the health risks of untreated depression in young patients and "advises close monitoring of patients" on antidepressants.
Independent experts working with Columbia University found that two to three percent of children taking antidepressants have increased suicidal thoughts.
The drug labels include details of studies that have so far pointed to Prozac as the safest antidepressant for children.
Guest post by Claire Douglass, campaign director at Oceana, the largest international advocacy group working solely to protect the world’s oceans.
Imagine you navigate your world through sound. You maneuver through your surroundings, locate food and communicate with others using your hearing alone. Now imagine trying to go about your daily life with dynamite-like blasts, louder than standing near a jet plane, going off in your living room every 10 seconds, 24 hours a day, for days to weeks on end. Now imagine standing near something 100,000 times louder than a jet plane.
This will soon be a reality for millions of marine animals that inhabit the waters of the Atlantic off the East Coast of the United States. Last month, the federal government released a final proposal to allow the use of seismic airguns in the Atlantic Ocean. These airguns send incredibly intense blasts of compressed air (one of the loudest humans have produced) into the seabed to find oil and gas deposits deep below the ocean floor.
The area planned to be blasted stretches from Delaware to Florida, encompassing a swath of ocean twice the size of California, and it is home to a diverse array of marine mammals, including the critically endangered North Atlantic right whale. After being hunted to the brink of extinction by 18th- and 19th-century whalers, these whales are still slowly recovering their numbers, with only approximately 500 currently left in the world. They migrate between the warm waters of north Florida and Georgia and the cooler areas of the Northeast, a route directly in the path of the planned seismic blasting.
In fact, the government itself estimates that more than 138,000 marine mammals will be injured, and possibly killed, by these blasts. These numbers don't even include the millions of other animals likely to be disturbed, including both migratory and resident fish species. More than 100 scientists have called on the government to apply the best available science to protect marine mammals, such as acoustic guidelines. These guidelines, 15 years in the making, aim to provide a better understanding of how marine mammals are affected by manmade sound and to demonstrate the measures that are needed to protect them.
If seismic airgun blasting is allowed in the Atlantic, it will not only jeopardize wildlife, but commercial and recreational fisheries, tourism and coastal recreation as well. More than 730,000 jobs are at risk in the blast zone alone. In the government’s rush to finalize this proposal, the Obama administration is disregarding the combined impacts that these repeated dynamite-like blasts will have on critical behaviors like mating, feeding, breathing, communicating and navigating for thousands of marine animals.
Seismic airgun blasting is the first step towards offshore oil drilling in the Atlantic Ocean. We all saw the horrific effects of the BP oil disaster, and we could see similar tolls before the drilling even begins.
The government needs to move away from dirty and dangerous offshore drilling and instead invest in cleaner, renewable energy sources. Offshore wind in the Atlantic has the potential to provide three times as many jobs and generate 30 percent more electricity than oil and gas in the same area.
Simply put, turning the Atlantic into a blast zone is not the way to fulfill our energy needs.
To learn more about the threats of seismic airgun blasting, visit Oceana.
Coping in an Evil World -- Inquisitors and Conquistadors
Steven Dutch, Natural and Applied Sciences, University of Wisconsin - Green Bay
Purpose: to examine the moral options available to people who find themselves within a morally untenable "system."
Possible ways of coping with an immoral system
- Active endorsement (system may be seen as good by some).
- Accept evils as unavoidable price to obtain a good result.
- Reluctant or coerced cooperation.
- Active cooperation for personal gain.
- Keep low profile.
- Passive or non-violent resistance.
- Obstruction, sabotage, diversion.
- Overt acts of rebellion.
Medieval Inquisition
- Established ca. 1200.
- Concerned only with Christian heresy, not Jews, Moslems or non-Christians.
- Many heretics were highly antisocial and would have been treated harshly by civil authorities in any case.
- Penalties included penance, imprisonment, confiscation; the death penalty was administered by civil authorities.
- Cruel and unjust by modern standards, but no more so than civil courts of the time.
- Few legal safeguards for accused.
- Inquisition very unpopular, some officials assassinated.
- Strongest in Spain and Italy, weak in N. Europe.
- Declined in late Middle Ages.
Spanish Inquisition
- Established in the late 1400's to foster religious and political unity in Spain.
- Directed against Jews, Moslems, Protestants, and later against Native American tribal religions after the discovery of America.
- Directed against Catholic reformers and mystics: Ignatius Loyola (founder of the Jesuits), Teresa of Avila.
- Essentially a secret police arm of the Spanish crown rather than the Church.
- Bitterly hated. When Latin American nations gained independence ca. 1820, mobs often attacked Inquisition offices.
- Abolished in Spain 1834.
Roman Inquisition
- Established 1542 to combat Protestantism.
- Generally (with some exceptions) moderate.
- Still exists. Renamed the Holy Office in 1908 and the Congregation for the Doctrine of the Faith in 1965.
The two most famous cases of the Roman Inquisition
Giordano Bruno
- Bruno was a mystic, not a scientist in any sense of the word.
- Used Copernican theory as a metaphor for some of his mystical views.
- Returned to Italy from then-Protestant N. Europe to convert the Pope to his views; his grasp of political realities left something to be desired.
- Burned at the stake, 1600.
Backdrop of the Galileo Affair
- Bruno affair cast pall of suspicion on Copernican theory.
- Counter-Reformation (including the founding of the Roman Inquisition) had made Church highly sensitive to challenges to its authority.
- Galileo's early career was illustrious but inspired jealousy.
- Galileo made clerical enemies by gratuitous attacks on the Jesuits Scheiner (1613) and Grassi (1623). Both were competent astronomers and, in fact, Copernicans.
- Galileo summoned to Rome and warned but not charged, 1615.
- Rumors of formal charges persisted.
- Galileo obtained a letter from Cardinal Bellarmine stating that no charges had been filed or sentence passed.
- It appears Galileo's enemies planted a contrary document in Vatican files. It surfaced later during his trial.
Dialogue on the Two Great World Systems, 1632.
- Takes place in a garden over four days (common argumentative format of the time).
- Three characters:
- Salviati - staunch Copernican
- Sagredo - open-minded Renaissance man
- Simplicio - hidebound Aristotelean, incapable of original thought
Results of Book
- Immediate best-seller
- Infuriated Galileo's enemies
- Pope needed political support of Jesuits, yielded to pressure
- Pope eventually persuaded that Simplicio was a caricature of him.
The Trial, 1633
- The Inquisitor, Vincenzo Maculano, supported Galileo.
- Pressure from above made it impossible to prevent a trial.
- Maculano advised Galileo to plea-bargain.
- Maculano was able to blunt or turn aside the deeper probes.
- Conflict between Galileo's and the Vatican's documents of 1615.
- Galileo censured, sentenced to imprisonment.
- Three of the ten judges refused to sign the sentence.
Deflating some Galileo myths
- Galileo created most of his enemies by his own rash attacks.
- Galileo was treated quite leniently.
- Support from Inquisitor
- Imprisonment was a loose house arrest
- Lively Black Market in his books
- Allowed to keep a Papal pension
- Galileo corresponded freely
- He received visitors from Protestant N. Europe (Thomas Hobbes, among others).
- Church took no action when a late book of Galileo was smuggled to Holland and printed.
- Galileo could at any time have found sanctuary in Venice or a Protestant country. Why did he not do so?
The Spanish Conquest of the Americas
Memoirs of some Spanish participants express genuine admiration for Aztec civilization (e.g., Bernal Diaz).
Bernardino de Sahagun (1499-1590)
- Born 1499, became Franciscan, sailed to Mexico 1529.
- Found Aztec culture still well preserved despite conquest.
- Became fluent in Nahuatl (the Aztec language) and developed admiration for Aztec culture.
- Believed that effective conversion depended on thorough knowledge of native culture.
- Began systematic scientific and ethnographic work ca. 1545.
- Used detailed questionnaires.
- Aztec eyewitness accounts of conquest.
- Written in Nahuatl (12 vol.)
- Work even today meets stringent ethnographic standards.
- Considered first modern ethnographic work.
- Aroused opposition and claims that he was sympathetic to native religion.
- Work confiscated by royal decree 1578. Sahagun, 80 years old, lost the results of 50 years of labor.
- Works re-discovered in 19th century, form principal source for our knowledge of Aztec culture in 1500's.
The Maya
- Classical Maya culture had declined centuries before the Spanish arrived.
- Most destructive aspect of Spanish conquest was the near-total destruction of Maya written records.
- Motivated primarily by desire to stamp out native religions.
- Destruction condemned by several Spanish writers.
- May have been possible for opponents of destruction to save more documents since destruction took place over a long period, but there was no concerted effort.
Diego de Landa
An illustration of how people in history can wear black hats and white hats at the same time
- De Landa was responsible for destruction of innumerable Maya books
- He criticized harsh colonial treatment of the Maya
- He was himself accused of cruelty toward the Maya; the charges may have been trumped up to silence him.
- His writings are among the most complete sources of information on the Maya at the time of the Conquest, and provided clues that aided the eventual decipherment of Maya writing.
The Inca
- Spanish conquest of the capital (Cuzco) was a coup, but complete conquest took some years.
- Many Spaniards expressed high regard for Inca culture (e.g., Garcilaso de la Vega).
- Some Spaniards expressed hope of preserving the best aspects of Inca culture.
- Overall impact was destructive:
- Plunder of resources
- Destruction of centralized Inca State
Conclusions
It is possible to maintain integrity in the midst of repressive institutions (Galileo, Maculano, Sahagun), but it involves risk. Not for the faint of heart.
Moral issues seem much clearer in retrospect than they were at the time:
- Church (whether Catholic or Protestant) got its authority from God. "Legitimate dissent" was a contradiction in terms.
- Suppression of belief viewed as a positive good by some, as legitimate by almost every thinker.
- Conquest seen as regrettable but not intrinsically evil in itself (at time of Spanish Conquest, Europe was fending off conquest by Turks).
- Undesirable effects may be unforeseen until too late. Few Spaniards had any idea they were destroying great civilizations.
- Some practices, like human sacrifice among the Maya and Aztecs, were abhorrent. Compare feminists' response to female genital mutilation in Africa and the Middle East today to get some idea of how people then responded to these customs.
Tar sands - July 8
Oil sands no quick fix as Big Oil leaves Venezuela
Jeffrey Jones and Scott Haggett, Reuters
For Exxon Mobil Corp. and ConocoPhillips it may appear simple: shift efforts, people and resources to Canada's oil sands now that the oil majors have retreated from Venezuela.
In reality, it's no simple matter.
The oil sands have their own set of risks: surging costs due to a squeezed labor force, technical complexity and a shrinking pool of attractive available properties.
Exxon Mobil and ConocoPhillips -- who fled Venezuela last week after refusing to agree to President Hugo Chavez's more nationalistic terms -- are already among the biggest oil sands players. They know well that new projects take years to build as the rush to exploit the unconventional resource fattens costs and schedules.
(4 July 2007)
Black gold's tarnish seen in Canada
Tim Reiterman, Los Angeles Times
.. Almost half of Canada's oil production comes from the oil sands — and the energy industry estimates that enough oil can be economically extracted to fill the country's needs for three centuries.
The vast majority of Canadian oil exports goes to the United States, and the Bush administration sees the remaining resources as America's best hope for reducing dependence on Middle Eastern oil. ..
The benefits may be great, but the toll on other natural resources is also enormous.
Separating petroleum from sand burns so much natural gas that the enterprise is becoming the largest source of greenhouse gas emissions growth in Canada. The oil sands lie within a major intact ecosystem, the boreal forest covering almost a third of Canada's land mass.
The forest is one of the world's biggest freshwater storehouses and absorbs a vast amount of carbon dioxide. It also provides habitat for hundreds of species of birds and is home to caribous, wolves and bears. Expansion of the oil sands operations could tear huge holes in a forest already rent by logging, oil and gas exploration and other industries. ..
Statistics recently compiled by the local nursing station show an increase in mortality and of cancer-related deaths in the last decade. Twenty-one residents died last year, eight of cancer.
After the doctor expressed his concerns in a radio report last year, federal health authorities filed a complaint this year alleging that he was unduly alarming the public. Alberta's medical licensing body is investigating the complaint.
A year ago, the Alberta Health & Wellness ministry had conducted a study that found more cases of certain cancers than expected in Fort Chipewyan, but only one case of cholangiocarcinoma. It concluded that overall cancer levels were not significantly different from elsewhere in the province.
But local residents and colleagues of O'Connor questioned the thoroughness of the study and accused the government of trying to shut up the doctor to protect the oil industry.
"The message for anyone who blows a whistle is you will be clobbered," said Dr. Michel Sauve, the regional chief of medicine. ..
(8 July 2007)
Global warming threatens alternative-oil projects
Daniel B. Wood, The Christian Science Monitor
Development of oil-sand, oil-shale, and coal-to-oil projects could be slowed by a new California law.
Oil-sand, oil-shale, and coal-to-oil projects - alternative fuel sources that could enhance US energy security - have always faced one hurdle. They look good only when oil prices are high. Now, they have another challenge: global warming.
California has enacted new climate-change policies that make energy companies responsible for the carbon emissions not just of their refineries but all phases of oil production, including extraction and transportation. If that notion catches on - at least two Canadian provinces have already signed on to California's plan - then the futures of oil-sand, shale, and coal-to-oil projects may look less attractive.
The reason: Extracting these alternative sources of oil requires so much energy that their "carbon footprint" may outweigh their benefits.
The issue has gained fresh currency because of the new state legislation and predictions that Congress will call for mandatory carbon controls in the next two years.
"As the US and the world move toward more controls on carbon to solve the problem of global warming, it is clear that the development of high-polluting fuels will incur a penalty and the support of and investment in such fuels will be a more and more risky business," says Roland Hwang, a senior policy analyst at the Natural Resources Defense Council (NRDC).
California's move came in January, when Gov. Arnold Schwarzenegger (R) signed a state executive order creating a new "low carbon fuel standard." The standard gives petroleum refiners 13 years to cut the carbon content of their passenger vehicle fuels by 10 percent. In May, Governor Schwarzenegger signed agreements committing Ontario and British Columbia to adhere to California's standard.
(6 July 2007)
An interesting set of priorities that sees oil-sands, etc. as being threatened. An alternative framing would be that human populations and ecosystems are threatened by these alternative-oil sources. -BA
It's Time For Albertans To Draw A Line In The (Tar) Sand
Bill Moore-Kilgannon, Vue Weekly
There once was a thin red line on a map. That may sound like the start of a fairy tale, but in fact it is the real beginning of a critical debate about Alberta's energy future. ..
This line is called the Keystone Pipeline, and it is part of a proposal from TransCanada Pipelines to ship 435,000 barrels of bitumen per day from Alberta's tar sands to be processed into oil and other petrochemical products in the United States.
This line is important to a lot of powerful people. In particular, this line is seen by many of the largest oil companies as a critical foundation in their plan to keep rapidly expanding Alberta's oil sands to feed the United States' "addiction to oil" while keeping declining US processing plants functioning. ..
This would explain why the pipeline and oil companies had close to 40 corporate lawyers and senior staff out to defend their pipeline at the National Energy Board (NEB) hearings last week in Calgary.
Normally, the NEB is a mere speed bump on an oil company's path to developing their plans, but this time the NEB hearings have become the scene for serious interventions from the Alberta Federation of Labour (AFL), the Communications Energy and Paperworkers Union (CEP) and the Parkland Institute.
The three groups have used the only venue that is available to highlight some facts and pose some critical questions about whose interest will be served by this pipeline. ..
Bill Moore-Kilgannon is the executive director of Public Interest Alberta, an Edmonton-based, non-partisan, province-wide organization focused on education and advocacy on public interest issues.
This site was created for Spanish learners; all the material provided here is free.
We hope you'll have fun learning Spanish!
A short introduction
Where to begin
Get acquainted with the basics of the language, including elementary vocabulary for various topics (like colors, numbers, jobs, clothes or animals) and useful everyday expressions from greetings to introducing yourself to shopping and asking for directions.
All this vocabulary comes with pronunciation included - you can listen to each word and expression!
In the grammar section, you'll find some explanations, rules and charts about masculine/feminine nouns, forming plurals, the quite complicated verb conjugation of the language (with irregular -ar, -er, -ir ending verbs), or the use of adjectives.
Practice your words
Your knowledge of basic vocabulary can be reinforced with the easy-to-use interactive word practice module.
Listen and speak
Lots of audio files created by the University of California can be found here with full transcript so you can listen to native Spanish speakers and improve your pronunciation.
Tests for different levels
The interactive online tests provided on the site will give you instant feedback about your progress.
More advanced exercises
Create a Spanish-speaking environment in your own room by "tuning in" to an online radio station from Spain or Latin America. We have collected more than 600 such radios for you to choose from.
As the web is full of free stuff that can assist your studies, a helpful guide to explore other educational content (audio, video, quizzes and textbooks) is also provided, as well as a list of online Spanish-English dictionaries.
(Not necessarily of interest to English-speakers, but the site also has a Spanish-Hungarian dictionary.)
In order to make it easier to use materials away from the computer, our pages are optimized to be printable without design and navigation elements.
We aim to offer an ever-expanding and up-to-date aggregate resource for anyone who decides to learn Spanish.
In order to accomplish this we welcome your suggestions either about additional online resources to list, relevant information to publish, or new ideas to implement.
Petrified wood is known for its exquisite color and detail
Imagine standing in a lush semi-tropical forest with a 200-foot canopy of conifers and tropical flora. Slow-moving streams, populated with fish, clams, fallen logs and reptiles, wound like blue ribbons through the swamps and drained into an inland sea. A range of volcanic mountains called the Mogollon Highland filled the southern skyline, the source of the streams and rivers.
It is a scene that is hard to imagine 225 million years later when the land we see today is an arid desert scattered with wood that has since turned to stone. Petrified Wood is real wood that has turned into rock composed of quartz crystals.
One of the greatest concentrations of petrified wood in the world is found in the Petrified Forest National Park in north-east Arizona. Logs as long as 200 feet and 10 feet diameter have been found in the park.
What turned the wood to stone?
Petrified wood has been preserved for millions of years by the process of petrification . This process turns the wood into quartz crystal which is very brittle and shatters. Even though petrified wood is fragile, it is also harder than steel.
Petrified wood is known for it's exquisite color and detail. Some pieces of petrified wood have retained the original cellular structure of the wood and the grain can easily be seen.
Petrified wood can be found throughout the desert regions. It is easy to find and identify. It is used often in jewelry making and for other types of decorative artwork.
What is petrification?
The process of petrification begins with three raw ingredients: wood, water and mud. Petrification of the wood found in the Petrified Forest began during the Triassic Period, when the primitive conifers fell to the ground and into the waterways on a journey through time. The logs were swept and tumbled downstream with sediment and other debris. The streams traveled through a plain of lakes and swamps where wood, sediment and debris were deposited along the way.
In fact, 400 feet of sediments were deposited in the plain by the rivers that originated from the volcanic mountain range. The layer of sediments is known today as the Chinle Formation. As the logs were deposited in the plain they were buried with mud, water and debris. This is when the petrification process began.
The mud that covered the logs contained volcanic ash, which was a key ingredient in the petrification process. When the volcanic ash began to decompose, it released chemicals into the water and mud. As the water seeped into the wood, the chemicals from the volcanic ash reacted with the wood and formed quartz crystals. As the crystals grew, the wood became encased in them and, over millions of years, turned into stone.
How did the tropical forest become a desert?
The petrified logs were buried in the sediment for millions of years, protected from the elements of decay. During this time the plain was covered by an ocean and another layer of sediments on top of the wood-rich Chinle Formation.
It wasn't until 60 million years ago that the ocean moved away and the erosion process began. More than 2600 feet of sediment have eroded to expose the top 100 feet of the Chinle Formation.
What makes petrified wood colorful?
It is not the wood that makes petrified wood colorful, but the chemistry of the petrifying groundwater. Minerals such as manganese, iron, and copper were present in the water and mud during the petrification process, and they give petrified wood its variety of color ranges. Quartz crystals are colorless, but when iron enters the process the crystals become stained with a yellow or red tint.
Following is a list of minerals and related color hues:
Copper - green/blue
Cobalt - green/blue
Chromium - green/blue
Manganese - pink
Carbon - black
Iron Oxides - red, brown, yellow
Manganese Oxides - black
Silica - white, grey
HONG KONG (MarketWatch) — To tackle Beijing’s notorious air pollution, characterized by frequent thick smog hanging over the city, local authorities have come up with a possible solution: funneling wind through the streets to blow away the dirty air.
The national and municipal governments’ respective weather bureaus are studying the feasibility of creating an “urban wind passage,” a Beijing News report Wednesday quoted a senior Beijing city environmental researcher, Liu Chunlan, as saying.
The wind corridor would allow air from the suburbs to blow through the urban center and, hopefully, remove the air pollutants, Liu said.
She said the city government is currently revising its urban planning to include specific details on the wind passage, which could be ready by the end of the year.
Specifically, the planning department would control the density and height of buildings to channel air pollutants and urban heat and create room for them to disperse.
Beijing may not be the only Chinese city considering this method to attack China’s nationwide air-quality problem. A number of major Chinese cities — including Shanghai, Hangzhou and Nanjing — have thought about the possibility of building such wind passages, Beijing News said.
Arguably the most influential pop songwriters in recent history, the Beatles satisfy listeners all across the world with their messages of free love and free imagination… I dare say they are Super Hippies. Their songwriting contains a beautiful mix of characters and some very original ideas. One of my personal favorites is “Julia” from The White Album. It showcases two of their major compositional strengths: voice leading and advanced harmony. I’m going to go ahead and jump into an analysis of the tune. I did an arrangement for my brass band a few months ago and I really fell in love with some of the little tricks in this composition. Check out last week’s blog on Modal Interchange if you’d like also. Learning a little about that topic will help to explain its use in this song.
The chord progression is a very sophisticated reorganization of typical sounds. It maintains the key of D Major throughout and highlights use of the Tonic (I), Mediant (iii), Submediant (vi), and Dominant (V) chords for its most common progression. The intro and several interludes use these chords alone:
DMaj (I) Bmin (vi) F#min (iii) F#min (iii)
In the interludes and end of the chorus the second F#min is replaced with A7 (V).
These sounds serve as the harmonic backdrop for the mood and style of the piece. As the song develops into the chorus there are several very interesting harmonic choices the group makes. Here is the progression for the chorus:
DMaj (I) Bmin (vi) Amin (v) Amin9 (v)
B7 (VI) B7 (VI) G7 (IV) Gmin (iv)
DMaj (I) Bmin (vi) F#min (iii) A7 (V)
The beginning and end of the chorus use the harmonic motif from the intro. First off, in the chorus I think the use of A minor is interesting. In the key of D, A minor would be the minor (v) chord. This is already a reasonably advanced choice, and it shows that the Beatles were aware of the concept of modal interchange. Although the main key of the song is D Major (Ionian), they are borrowing this A minor chord from D Dorian. Dorian is the second mode of the major scale, and by using chords from both the Dorian and the original Ionian mode in the chorus they are already entering an advanced harmonic landscape. Next in the chorus is a very beautiful voice-leading passage, also made possible by modal interchange.
From the A minor we arrive next at B7 (VI). This is a sound that is typical in blues and pop writing when the progression is headed back to the II chord, so in relation to the key of D this sound is not strange. We hang here on B7 long enough to set up harmonic expectations, but then another modal interchange event is used to prolong the resolution. G7 is borrowed from the same D Dorian mode as before and is a slight shock. Then the harmony shifts abruptly to G minor. This is a very interesting choice and has a very dramatic effect on the tonal color. This G minor chord comes from the D Aeolian mode (allowing for the Bb). The G minor resolves to D Major, and after our brief departure we're back to the home base of this composition. Already, in a short 12-bar chorus, the composition has borrowed chords from 3 different modes (Ionian, Dorian, and Aeolian), all with their tonic on D.
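One way to see where these borrowed chords live is to test them against the parallel modes on D. A small sketch (Python; the interval patterns are the standard mode formulas, the rest of the naming is mine):

MODES = {  # semitone patterns measured from the tonic, D
    "Ionian":  {0, 2, 4, 5, 7, 9, 11},
    "Dorian":  {0, 2, 3, 5, 7, 9, 10},
    "Aeolian": {0, 2, 3, 5, 7, 8, 10},
}
PITCH = {"D": 0, "E": 2, "F": 3, "F#": 4, "G": 5, "A": 7,
         "Bb": 8, "B": 9, "C": 10, "C#": 11}  # pitch classes relative to D

CHORDS = {"Amin": ["A", "C", "E"],
          "G7":   ["G", "B", "D", "F"],
          "Gmin": ["G", "Bb", "D"]}

for name, tones in CHORDS.items():
    fits = [m for m, scale in MODES.items()
            if all(PITCH[t] in scale for t in tones)]
    print(name, "fits D", fits)

# Output: Amin fits both Dorian and Aeolian, G7 fits only Dorian,
# and Gmin fits only Aeolian -- consistent with the analysis above.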
The voice leading made possible in this chorus is very tight knit. Here is an example from the first guitar line.
Notice that the D# in the B7 chord resolves downward into the G7 (becoming D). The B both resolves downward into an A (the 9th) on the G7 as well as jumps down an octave to become the Major 3rd of the G7. Then when the G minor hits B becomes Bb, the third of G minor, and finally resolves downward one last step to A, the 5th of the D Major chord. This is commonly referred to as a “Line Cliché.” This is when an ascending or descending line is taken stepwise through a chord progression. The Beatles were masters of this device. Nearly all instrumentalists use line clichés in their writing in some capacity. The Beatles add them into their pieces in very creative and surprising ways.
The next important harmonic event in “Julia” is the brief interlude section that comes after the second repeat of the chorus. It starts with the lyric: “Her hair of floating sky is shimmering.”
This is a direct departure from the harmonies we heard previously in the song. Here are the chords:
C#min (vii) C#min (vii) DMaj7#11 (I) DMaj7#11 (I)
Bmin7 (vi) Bmin6 (vi) F#min7 (iii) F#min6 (iii)
F#min b6 (iii) F#min (iii)
We have yet another example of Modal Interchange right away in this passage. The C# minor is borrowed from the Lydian mode of D Major (allowing for the G#). This is so far the most dramatic harmonic shift of the song. The melody here also has a much different shape then in the other areas of the composition. It starts low and climbs up the C# minor scale, dips down again, and then reaches the top note A and very powerfully states the D Lydian sound with the G# on the second beat of the D major chord.
Quickly the phrase is stated again under a different harmonic backdrop (B minor). Here is our next example of a line cliché in the piece. Starting with the A in the melody (which is doubled in the guitar line), the line cliché begins. We have a strong A over the Bmin7, then a G# for the Bmin6. Then another line cliché is used as a transition back to D major. We have an E natural over the F#min7, then a D# (Eb) on the F#min6, followed by a D natural on the F#min b6, and finally the end of the line cliché with a C# on the last F#min chord.
Here is an example of the inner voice leading created by this chord progression.
Notice the descending resolution from the A to the G#. Then also notice the extended line cliché that descends from the E on the F# minor 7 all the way down to the C# on the last F# minor chord. In this passage the line cliché is highlighted by the fact that the vocal melody sustains on a single note. This draws the listener’s attention away from the vocal and focuses it on the guitar line.
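To make that stepwise motion explicit, here is a tiny sketch (Python; the note-to-MIDI helper and octave choices are mine) confirming that the extended cliché falls one semitone per chord:

SEMITONE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
            "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def midi(note, octave):
    # MIDI note number, with C4 = 60
    return 12 * (octave + 1) + SEMITONE[note]

# The moving voice over F#min7 -> F#min6 -> F#min b6 -> F#min:
line = [("E", 4), ("D#", 4), ("D", 4), ("C#", 4)]
pitches = [midi(n, o) for n, o in line]
print(pitches)                                        # [64, 63, 62, 61]
print([b - a for a, b in zip(pitches, pitches[1:])])  # [-1, -1, -1]: a chromatic descent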
Besides the depth of harmonic variety and nuance in this piece it is also interesting rhythmically. Without any drums present, the bass and guitars supply rhythm. The bass chunks away half-notes in a traditional shape. The second rhythm guitar plays consistent quarter-notes which with the bass create a very solid texture. What adds the forward motion to the song is the first guitar part. It plays a syncopated rhythm that fits in the cracks of the other two figures. Here is an example from the intro:
Notice the simplicity in the arrangement. The rhythmic counterpoint stays exactly like this for the entire piece. There is no deviation and it produces the groove that you feel when listening. As it turns out, it’s the beautiful curves in the harmony and the organization of the melody and lyrics that create the different textures in the song.
Such simplicity and drive are present in the work of many of the world's great musicians. In passages from "Julia" you can obtain a view into the dynamic structure of the Beatles' music. It's surprising, innovative, and solid as a rock.
Streptococcus agalactiae
Streptococcus agalactiae, also known as Group B streptococci, are Gram-positive cocci distinguished from other streptococci by the presence of the group B antigen. These bacteria range in size from 0.6 to 1.2 μm and arrange themselves in chains, forming shorter chains in clinical specimens and longer chains in culture specimens.
S. agalactiae colonize the lower gastrointestinal tract and the genitourinary tract in a commensal relationship that is often asymptomatic, but they can cause bacterial sepsis, neonatal sepsis, pneumonia, meningitis, postpartum infection and other infections in susceptible hosts.
S. agalactiae inhabit a human host, colonizing the lower gastrointestinal tract and the genitourinary tract. Colonization is frequent in pregnant women, 15% to 45% of whom carry the bacteria. In pregnant women, colonization can be transferred in utero to the fetus, or transferred from the birth canal during delivery.
Cell structure and metabolism
S. agalactiae are facultative anaerobes that are mostly β-hemolytic (1%-2% are nonhemolytic); hemolysis is the breakdown of red blood cells and is used to identify certain bacterial strains on culture plates.
Different strains of S. agalactiae have been identified based on serologic markers: the group B antigen (a group-specific cell wall polysaccharide antigen), a surface (C) protein, and type-specific capsular polysaccharides. The type-specific capsular polysaccharides have been labeled Ia, Ia/c, Ib/c, II, IIc, III, IV, V, VI, VII and VIII, and are used as epidemiologic markers.
The cell structure of S. agalactiae contributes to the organism's virulence in several ways. S. agalactiae has a thick peptidoglycan cell wall layer that prevents desiccation and allows the organism to survive on dry surfaces. The capsular polysaccharides of types Ia, III and V further contribute to virulence by blocking complement-mediated phagocytosis. In addition, virulence is heightened by the presence of hydrolytic enzymes that aid the spread of the bacteria and destroy host tissue.
As there are several isolates of S. agalactiae, the genome sequences of different isolates have been determined and comparatively analyzed. One such study, performed by Tettelin et al. (2002), sequenced the genome of the S. agalactiae type V isolate 2603 and compared it to the genomes of other known S. agalactiae serotypes and other streptococcal strains.
The study found that the circular S. agalactiae type V genome consists of 2,160,267 base pairs, with a G + C content of 35.7%, 80 tRNAs, 7 rRNAs and 3 sRNAs. The project predicted that the genome encodes 2,175 proteins, 61% of which (1,333) were assigned a function; the remaining predicted proteins are of unknown function.
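For a sense of the arithmetic behind those figures, a minimal sketch (Python; the helper is hypothetical, and a real analysis would run on the assembled sequence itself):

def gc_content(sequence):
    # Fraction of G and C bases in a DNA string
    s = sequence.upper()
    return (s.count("G") + s.count("C")) / len(s)

genome_bp = 2_160_267   # reported genome size, base pairs
predicted = 2_175       # predicted protein-coding genes
identified = 1_333      # genes with an assigned function
print(f"{identified / predicted:.1%}")  # 61.3%, the "61%" cited above
# gc_content() applied to the full assembled sequence should return ~0.357,
# i.e. the reported 35.7% G + C content.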
In addition, the genome sequencing identified various genes that act as possible virulence factors. Several genes, such as Sip (SAG0032), CAMP factor (SAG2043), R5 protein (SAG1331), streptococcal enolase (SAG0628), hyaluronidase (SAG1197) and hemolysin/cytolysin (cylE, SAG0669), have been identified as coding for surface or secretory proteins that contribute to the organism's virulence or aid its evasion of host defenses.
The genome project highlighted the distinctive membrane structure of S. agalactiae: it identified the genomic sequences that code for the B antigen present on the surface of all S. agalactiae strains and for the capsular polysaccharide specific to each strain. The project also recognized nine distinct capsular polysaccharide types, each containing sialic acid structures. These units are part of a repeating structure that prevents activation of the host's alternative complement pathway and thereby contributes to the organism's virulence.
The sequencing project also undertook comparative genomics, comparing the genome of S. agalactiae with those of two related streptococci, S. pneumoniae and S. pyogenes. The analysis discovered 1,060 genes homologous across the three genomes and identified 683 genes specific to S. agalactiae. These findings are in line with the relationships between the species, which share gene factors because they all cause invasive disease, but cannot have identical genomes because they colonize and invade different sites and cause different diseases. For example, while S. agalactiae codes for the synthesis of arginine, aspartate and citrulline, it is missing the genes that S. pneumoniae and S. pyogenes use to synthesize fucose, lactose, mannitol, raffinose, lysine, and threonine. The differing genes most probably reflect the differing organs that play host to each of the bacteria.
While there are various serotypes of S. agalactiae, the genomic variation between and within serotypes is not well characterized. It has been hypothesized, however, that this variation is largely confined to genes unique to S. agalactiae: while 260 (38%) of S. agalactiae's 683 unique genes vary among serotypes, only 47 (4%) of the genes found in all three streptococcal species do.
S. agalactiae colonization can result in infection and serious disease in pregnant women, infants, men and non-pregnant women. Virulence and pathology vary among the serotypes of the bacteria, with types Ia, II, III, and V found to be the most virulent.
The bacteria are well known for the serious infections and complications they cause in pregnant women and neonates. While carriage rates among pregnant women are very high (10%-30%), infection rates are much lower. In fact, the majority of pregnant women colonized with the bacteria are asymptomatic; only 2%-4% are diagnosed with a urinary tract infection, the most common infection associated with the bacteria in pregnancy. These infections occur during and after pregnancy and generally clear up quite quickly. In very few cases, more serious complications such as endocarditis, meningitis, and osteomyelitis can occur.
S. agalactiae colonization complicates childbirth, as the rate of passing colonization to the newborn is extremely high: over half (approximately 60%) of colonized mothers pass the bacteria to their newborns. Various risk factors, including heavy bacterial colonization, premature delivery, prolonged membrane rupture, and fever during labor (>100.4°F), increase the probability of transmission. Colonization of the baby can occur while the baby is developing in utero, during birth, or in the first few months of life. In utero colonization can have serious effects on neonates, as fetal aspiration of the bacteria can lead to stillbirth, neonatal pneumonia or neonatal sepsis.
Colonization during childbirth can also seriously endanger an infant's health, although only a very small percentage of transmitted colonization results in infection (1%-2%). Early-onset disease, which presents within an infant's first week, is usually caused by in utero or birth colonization and usually presents as bacteremia, pneumonia or meningitis; in fact, S. agalactiae is considered the main cause of these diseases in infants. Medical advances have led to more efficient diagnosis and care of newborns colonized with S. agalactiae and have reduced the mortality rate to less than 5%. However, while the mortality rate is low, many infected newborns do not completely recover from meningitis and develop neurologic sequelae, lasting complications that often include mental retardation, blindness and deafness.
S. agalactiae colonization can also occur in older infants, resulting in late-onset disease that appears from one week after birth to 3 months of age. Such infections usually present as sepsis, pneumonia, meningitis, osteomyelitis or septic arthritis. While the mortality rate is low for infants with late-onset disease, developmental complications from meningitis are common.
S. agalactiae colonization occurs throughout the population, including in non-pregnant women and men. In such individuals, colonization in conjunction with compromised immunity can result in diseases such as bacteremia, pneumonia, bone and joint infections, and skin and soft-tissue infections. Mortality is higher for these patients, falling between 15% and 32%.
In search of a way to prevent the dangerous infections caused by S. agalactiae, much research has gone into how to treat colonization in pregnant patients. The Centers for Disease Control and Prevention (CDC) has issued guidelines for detecting and treating S. agalactiae colonization in pregnant patients. These guidelines, last updated in 2002, have been largely responsible for the lowered infant mortality rate. The CDC promotes two methods of detecting colonization. The risk-based method analyzes a particular patient's risk of carrying the bacteria by identifying risk factors correlated with S. agalactiae infection; the presence of these factors, which include premature delivery (before 37 weeks), a temperature during labor and delivery greater than 100.4 degrees Fahrenheit, or premature rupture of the amniotic membranes, indicates a high probability of S. agalactiae colonization. The screening-based method cultures swabs taken from pregnant women between 35 and 37 weeks of gestation to test for vaginal and rectal S. agalactiae colonization. Under both methods, an infected patient should receive antibiotics during labor to reduce the risk of passing colonization to the infant. While penicillin is generally regarded as the first choice for intrapartum antibiotic prophylaxis, ampicillin can be used for patients with penicillin allergies.
As S. agalactiae can cause many different infections and complications, a variety of medical treatments are used to treat patients sickened by the bacterium. Current research focuses on the development of a vaccine to create immunity against the bacteria, and many in the medical community are working on ways to better diagnose and treat non-pregnant patients with colonization.
S. agalactiae affects three main groups within the population: pregnant women, infants, and non-pregnant adults. Following the 2002 release of the CDC's treatment guidelines, a major epidemiological study (Phares et al.) examined trends of S. agalactiae disease within the population. The study, conducted from 1999 to 2005 across 10 U.S. states, found 14,573 cases of S. agalactiae disease, of which 1,348 (9.25%) resulted in death.
The incidence of early-onset disease (which the study defined as from birth until 6 days old) decreased during the study: from 1999-2001 the rate was 0.47 per 1,000 live births, and by 2003-2005 it had decreased to 0.34 per 1,000 live births. This decrease coincided with the revised CDC treatment guidelines, indicating that the guidelines were effective in reducing disease. The incidence of late-onset disease remained relatively stable throughout the study at 0.34 per 1,000 live births.
The study reported 409 invasive S. agalactiae infections in pregnant women, a rate of 0.12 per 1,000 live births. Half of these cases (203/409) were urinary, placental or amniotic-sac infections that caused fetal death. A large majority of these patients, 81% (330/409), did not present with previous medical conditions (such as asthma, diabetes, obesity, or alcohol and drug abuse). Pregnancy outcomes were known for 368 of the 409 women: 61% of these pregnancies never came to term, ending in miscarriage or stillbirth; 4% ended in induced abortion; 5% produced babies with infections; and 30% produced healthy babies.
The study reported 233 cases of S. agalactiae infection in children aged 90 days to 14 years, a rate of 0.56 per 100,000 that, like the rate of late-onset neonatal disease, was relatively stable throughout the study. The majority of these cases (61%, 143/233) occurred in children younger than a year, while the remaining infections were spread evenly among older children. The study also reported 6,087 cases of S. agalactiae infection in adults (ages 15 through 64) and 5,576 cases in senior citizens (ages 65 and older). The 2005 rate for the younger group was 5.0 per 100,000, a 48% increase from the 1999 rate of 3.4 per 100,000. A similar increase was seen in the senior group, which rose 20% from a 1999 rate of 21.5 per 100,000 to 26.0 per 100,000.
While this study shows the success of the CDC regulations in decreasing the incidence of early-onset disease and holding late-onset disease steady, the increase in adult disease and the continuing incidence of pregnancy and neonatal disease highlight the importance of research toward a vaccine.
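As a quick check, the reported percentage increases can be recomputed from the stated rates. A minimal Python sketch (the rates are the study's; the function is illustrative):

def pct_change(old_rate, new_rate):
    # Percent change between two incidence rates (per 100,000)
    return (new_rate - old_rate) / old_rate * 100

print(round(pct_change(3.4, 5.0)))    # 47, close to the reported 48% (ages 15-64)
print(round(pct_change(21.5, 26.0)))  # 21, close to the reported 20% (65 and older)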
A more recent study, conducted by Drs. Karen M. Puopolo, Lawrence C. Madoff and Eric C. Eichenwald, attempted to understand the continuing incidence of S. agalactiae infection despite updated CDC regulations and treatment plans to combat and eliminate the infections. The study, conducted from 1997 to 2003, reviewed all cases of early-onset S. agalactiae infection at Brigham and Women's Hospital in Boston, Massachusetts. By studying these cases, the researchers hoped to determine whether clinical, procedural or microbiological factors were responsible for the continued occurrence of infections despite updated regulations and treatment protocols.
Although most of the study period preceded the release of the CDC's updated protocol in 2002, Brigham and Women's Hospital already required pregnant women to be tested for S. agalactiae colonization by rectovaginal swab, a practice consistent with the CDC's 2002 guidelines. The study was thus effectively examining the screening-based method.
During the study, 67,260 babies were born in the hospital, of whom 25 developed S. agalactiae infections; 17 were term infants and 8 were preterm. Only 21 of the 25 mothers had been screened for colonization, and 16 (64%) had tested negative. Of the 17 term babies, 12 were asymptomatic or mildly ill, while 5 had more invasive infections, one of whom died. The 8 preterm babies were all more critically ill, and 3 died.
The study observed that while 19 (76%) of the mothers presented with identifiable risk factors for S. agalactiae colonization (including positive signs of colonization, delivery earlier than 37 weeks, intrapartum fever greater than 100.4°F, or clinical chorioamnionitis), only 4 received intrapartum antibiotic prophylaxis. In fact, two term mothers who tested positive for colonization did not receive antibiotics. While one of these cases was the result of hospital error (the other was due to a quick and unexpected delivery), the study concluded that hospital error was not responsible for most of the infections, accounting for only 3 cases in total. Antibiotic resistance was likewise not responsible, occurring in only 1 of the 25 cases. Rather, the study pointed to the high incidence of S. agalactiae infection among patients with negative cultures and suggested that in the presence of a negative culture, other risk factors are often ignored. The result is "a false sense of reassurance" that prevents those factors from being recognized and acted upon, and increases the infection rate. The study identified a 4% "false-negative" colonization rate, a figure with serious implications when screening is the only method used and other risk factors are ignored. The study therefore highlights the need for a risk-based method to accompany the screening method in order to decrease S. agalactiae infections.
While CDC regulations have helped to decrease invasive S. agalactiae disease in mothers and neonates, the continuing incidence of disease, as well as the increase in adult infections, has spurred research into an S. agalactiae vaccine. A 2006 study by Buccato et al. investigated a vaccine intended to provide immunity to S. agalactiae infection. The study focused on the bacterial pili that extend from the S. agalactiae cell surface, acting as adhesive factors that assist host colonization and virulence. Research has shown that there are 3 types of pili in S. agalactiae, and genome sequencing of 8 serotypes has shown that at least one pilus type is present in each strain of the bacteria. These pili are encoded on genomic islands, once-mobile parts of the genome that can be excised and transferred to a new genome. As a pathogenic factor found in all strains of S. agalactiae, the pili present a possible vaccine component.
A vaccine was made by inserting an S. agalactiae pilus 1 operon into Lactococcus lactis. To test the vaccine's ability to confer immunity and pass antibodies to neonates, mice were immunized with the recombinant microorganism and their offspring were later exposed to S. agalactiae. The vaccine proved to pass immunity to the offspring: after parental immunization with 10^7 cfu of the recombinant bacteria, more than 70% of the offspring survived an exposure to S. agalactiae calibrated to kill 90% of offspring. Mucosal immunization was also tested by giving the vaccine intranasally; like the subcutaneous injection, it successfully passed immunity to the offspring, with statistically significant protection (P < .0002).
While the experiment successfully immunized mice against S. agalactiae infection, the mice were not immune to all infections, as different serotypes of S. agalactiae carry different pili. The researchers therefore attempted a hybrid vaccine made up of genes for two or more of the bacterial pili. Such a sequence, comprising genes that encode pili on island 1 and island 2, was made and inserted into L. lactis. This vaccine was tested in the same way as the pilus 1 vaccine, and the offspring of immunized mice were found to be immune to infection by each of the individual pilus types in the hybrid. Thus, the L. lactis pilus vaccine not only conferred immunity on mice and their offspring, but also serves as a platform for recombining different pili to broaden protection against infection. In creating such a hybrid, the experiment laid important groundwork toward a vaccine against S. agalactiae infection.
- Murray, R., Rosenthal, S., and Pfaller, A. "Streptococcus." Medical Microbiology, Fifth Edition. Elsevier Mosby: United States, 2005. 247-250.
- Tettelin, H., Masignani, V., Cieslewicz, M., Eisen, J., Peterson, S., Wessels, M., et al. (2002) Complete genome sequence and comparative genomic analysis of an emerging human pathogen, serotype V Streptococcus agalactiae. PNAS 99(19), 12391-12396.
- Woods, Christian J., and Charles S. Levy. "Streptococcus Group B Infections." Emedicine, March 2009. <http://emedicine.medscape.com/article/229091-overview>
- Medline Plus. <http://www.nlm.nih.gov/medlineplus/ency/article/002372.htm>
- Glaser, P., Rusniok, C., Buchrieser, C., Chevalier, F., Frangeul, L., Msadek, T., et al. (2002) Genome sequence of Streptococcus agalactiae, a pathogen causing invasive neonatal disease. Molecular Microbiology 45(6), 1499-1513.
- Schrag, S., Gorwitz, R., Fultz-Butts, K., and Schuchat, A. (2002) Prevention of Perinatal Group B Streptococcal Disease, Revised Guidelines from CDC. Centers for Disease Control and Prevention, Division of Bacterial and Mycotic Diseases, National Center for Infectious Diseases.
- Phares, C.R., Lynfield, R., Farley, M., et al. (2008) Epidemiology of Invasive Group B Streptococcal Disease in the United States, 1999-2005. JAMA 299(17), 2056-2065.
- Puopolo, K., Madoff, L., and Eichenwald, E.C. (2005) Early-Onset Group B Streptococcal Disease in the Era of Maternal Screening. Pediatrics 115(5), 1240-1246.
- Buccato, S., Maione, D., Rinaudo, C., Volpini, G., Taddei, A., and Rosini, R. (2006) Use of Lactococcus lactis Expressing Pili from Group B Streptococcus as a Broad-Coverage Vaccine against Streptococcal Disease. The Journal of Infectious Diseases 194, 331-340.
- Definition of Genomic Island. Everything Bio. <http://www.everythingbio.com/glos/definition.php?word=genomic+island>
Measuring a patient's ratio of white blood cell types may help physicians accurately distinguish between the similar conditions infectious mononucleosis and bacterial tonsillitis, potentially guiding treatment decisions, according to an article in the January issue of Archives of Otolaryngology-Head & Neck Surgery, one of the JAMA/Archives journals.
Acute tonsillitis (inflammation of the tonsils) and infectious mononucleosis (caused by the Epstein-Barr virus) are both common ear, nose and throat conditions with similar symptoms, according to background information in the article. These symptoms include sore throat, fever, painful swallowing, white plaque on the tonsils and redness of the throat and tonsils. "The importance in differentiating patients with tonsillitis from those with glandular fever [mononucleosis] is the prevention of spontaneous rupture of the spleen and acute intra-abdominal hemorrhage," potential complications of mononucleosis, the authors write. Currently, distinguishing between them requires an expensive mononucleosis spot test.
Dennis M. Wolf, B.Sc., D.O.-H.N.S., M.R.C.S., and colleagues at St. George's Hospital, London, retrospectively analyzed laboratory tests from 120 patients with infectious mononucleosis and 100 patients with bacterial tonsillitis treated at their facility. All patients were given the spot test for mononucleosis and additional blood tests were performed to determine the number of lymphocytes (a particular type of white blood cell involved in the body's immune response) and overall white blood cell count.
Total white blood cell count was significantly higher in the tonsillitis group than in the mononucleosis group (16,560 cells per microliter vs. 11,400 cells per microliter), but the lymphocyte count was higher in the mononucleosis group (6,490 cells per microliter vs. 1,590 cells per microliter). The lymphocyte/white blood cell count ratio averaged .54 in the mononucleosis group and .10 in the tonsillitis group.
Based on this data, the researchers determined that a ratio higher than .35 would have a sensitivity of 90 percent and a specificity of 100 percent for the detection of mononucleosis, meaning that an individual with a ratio this high would be correctly diagnosed with mononucleosis 90 percent of the time and an individual with a ratio of .35 or lower would be correctly diagnosed as not having mononucleosis 100 percent of the time. "The specificity and sensitivity of this test seem to be better than the mononucleosis spot test itself," the authors write.
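A minimal sketch of that decision rule in Python (the 0.35 cutoff and the group means come from the study; the function name and packaging are illustrative):

def suggests_mononucleosis(lymphocytes_per_ul, wbc_per_ul):
    # Ratio above 0.35: suggests mononucleosis (reported sensitivity 90%,
    # specificity 100%); at or below 0.35: points toward tonsillitis.
    return (lymphocytes_per_ul / wbc_per_ul) > 0.35

print(suggests_mononucleosis(6490, 11400))  # True  (mononucleosis group means)
print(suggests_mononucleosis(1590, 16560))  # False (tonsillitis group means)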
"In conclusion, we recommend that the lymphocyte-white blood cell count ratio should be used as an indicator to decide whether mononucleosis spot tests are required," they continue. "Results from our retrospective pilot study suggest that the lymphocyte-white blood cell count ratio could be a quickly available alternative test for the detection of glandular fever [mononucleosis]."
(Arch Otolaryngol Head Neck Surg. 2007;133:61-64. Available pre-embargo to the media at www.jamamedia.org.)
Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.
Given a circular pizza with radius z and thickness a, return the pizza's volume. (z is the first input argument.)
Non-scored bonus question: Why is the function interesting?
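A minimal sketch of the computation, in Python rather than MATLAB (the function name mirrors the problem). As for the bonus question, the usual answer is that the formula spells the dish when read out: pi * z * z * a.

import math

def pizza(z, a):
    # Volume of a cylinder with radius z and thickness a
    return math.pi * z * z * a  # pi * z * z * a -- "pizza"

print(pizza(7.0, 0.5))  # ~76.97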
A character who is invisible places their hands on the sides of the head, thumbs touching the head and pinkies pointed outward in an "antlers" gesture. So long as the hands remain on the head, the character is invisible and must be role-played as such. Once the hands are removed (such as to cast a spell or swing a weapon), the invisibility ends.
Invisible characters may not do anything with their hands, including holding weapons, opening doors, eating food, etc. (although they may attempt to perform these activities with other parts of the body). The only ability they may actively use is Awareness; other abilities may only be used passively (for example, defensive Will). The hand gesture represents intense concentration, which makes many activities extremely difficult or even impossible. Thus, unconscious people, inanimate objects, and mundane animals are incapable of maintaining invisibility.
Invisible characters are not perfectly silent and may still be heard, smelled, etc.
Characters who are under the protection of a Safe spell cannot be harmed by physical or magical means. The caster of a Safe spell holds a yellow or gold piece of rope or cloth overhead, or stands in a circle of yellow rope on the ground. Targets of a Safe spell must be touching the caster or inside the circle, and if they stop touching the caster or leave the circle, they are no longer safe and may not return to the safety.
Safe Circle and Safe Journey spells can be dispelled; Safe Retreat spells cannot.
Characters protected by a Safe spell cannot attack or cast spells on anyone outside the safety. This also applies to physically aggressive behavior: characters protected by the spells may not push, shove, force their way past, tie up, grapple with, or brawl others. Similarly, no one may do those things to someone protected by the spells. Think of the spells as a passive defense, not as a way to play rugby without being hurt.
However, those in a Safe Journey spell may block the path of (and have their path blocked by) others. As a guideline, those in the spell should avoid approaching within arm's length of any enemy, and vice versa. (This is for a purely physical blockade; those protected by Safe Retreat should never be this close to an enemy if they can help it.)
It is permissible to block the retreat of those in a Safe Retreat spell. Persons under the spell should stay safely outside melee weapon range of an enemy, even if they have no other means of escape. If an enemy comes within range of them, they must either remain still or redirect their path of flight away from the new enemy. If they can't flee without moving into the range of another enemy, then they cannot move at all (other than to equalize the distance between enemies). However, if there are large undefended gaps between their enemies, they can slip through and continue their retreat.
When fatigued, a character cannot fight, cast spells, stand, perform any strenuous activity, or even walk unassisted. The character may crawl, but their stomach must be touching the ground.
This fatigue applies to any type of fatigue that occurs in Quest, such as "spell fatigue" from up-casting, Knit Torso spells, etc. The Revive spell does not affect fatigue; however, Restore Health will cure its target of fatigue.
A character who is dazed is knocked down to the ground for a time. During that time, you may not get up or attack (which includes dealing blows and casting Combat spells), but may defend yourself (which includes blocking blows and casting Noncombat spells).
Certain spells may cause you to become dazed; they will state so in their descriptions. Poison (below) will also cause you to become dazed.
A character who has been rendered unconscious (such as by losing a Brawling contest) cannot be awakened by any means other than a Revive spell. Shaking, pain, water splashed in the face, etc., will not work.
If your character is unconscious, you should role-play it—lie still, and (unless you're in mid-combat and in danger of being stepped on) don't look around at the situation.
A character may become poisoned by being hit by a poisoned weapon, by ingesting poison, or by being the target of a Poison or Poisoned Grasp spell. After being poisoned, you will collapse and be dazed for two minutes due to pain. During this time, you may not get up or attack (including dealing blows and casting Combat spells), but may still defend yourself (including blocking blows and casting Noncombat spells). After two minutes have elapsed, you will pass out, and you will die two minutes after that unless the poison is cured.
Poison is painful and should be role-played as such.
When characters die, they become "spirits." Players place a white sheet of cheesecloth (called a "spirit veil") over the head to represent this. Spirits are immaterial; they cannot carry anything, cannot cast spells, cannot swing weapons (even if they could carry them), and cannot speak or touch anything. However, they also cannot be wounded and are immune to all spells, save the ones that specifically affect spirits (e.g., Speak with Spirit). Spirits resist TOW spells with the base Will they had when alive. Spells in place at the time of the character's death, magic items in the character's possession, and any physical effects (such as drunkenness) do not apply. Basically, all spirits can do is gesture and walk.
As a spirit, you are free to wander about for up to 30 minutes. If this time is up and you have not been raised, or you simply choose to depart, you should go out-of-game and locate a GM (unless otherwise instructed). At the GM's preference, you may be allowed to play another player character, join the event staff, or have something game-specific happen to you.
If you are resurrected after these 30 minutes have expired (or after a Spirit Speed), the return to the mortal realm will inflict severe psychological trauma on the character. For the remainder of the game (and beyond, if applicable), you will suffer the effects of Resurrection Trauma, as described in the Continuing Game rules.
Resurrection and Restore Life
When characters are resurrected, raised, or magically called from beyond, they will not remember the last ten minutes of their lives. Effectively, you will not remember who or what killed you. You can, of course, be told by someone else—but they could be lying. The shock of returning to life also removes any memories of life as a spirit.
The Belgians were soon won over by their charming and caring young new queen. Despite their differences she came to love and adore King Leopold, and although he did not return her devotion strongly enough to remain an always faithful husband, he felt great affection for his wife and even greater respect for her talents and intelligence. Coming from France, she was quick to judge her new country, and the Belgians themselves, wherever she felt they fell short, and her easy honesty at times got her into trouble with those who thought she was being entirely too critical. However, she had winning ways and proved invaluable to her husband by acting as a go-between in the recurring feuds between the liberals and the more conservative Catholics in the new Kingdom of Belgium. Queen Louise had a great gift for appealing to both sides. She was also helpful in foreign relations; with the Kingdom of France this goes without saying, but she also won over Britain's Queen Victoria, to whom she often sent gifts in the form of the latest fashions.
If anything, Queen Louise was too kind-hearted for her own good. Her concern for everyone around her caused her to worry quite a bit, which may have had a harmful effect on her health. Her greatest stress and worry came with the Revolutions of 1848 and the downfall of the "July Monarchy" in France, as for some time she had no idea whether her parents were even alive. As the years went by she became more religious; she worried about the soul of her Protestant husband, and about how her son, Leopold II, would reign when the throne came to him, due to his withdrawn nature and, shall we say, 'inability to play well with others'. Weighted down by such worries, all too soon her health began to fail and she became increasingly frail and delicate. Ultimately, she contracted tuberculosis and died in Ostend on October 11, 1850. The Kingdom of Belgium went into deep mourning at her death, and King Leopold I was first in this, showing how deeply he had cared for his wife, saying she had died in as saintly a way as she had always lived her life, and directing all the sympathy felt for her toward her husband and children. She was a great and lovely lady and queen, a dutiful wife, caring mother and compassionate sovereign who set the standard for royal charity in Belgium.
December 2007, Vol. 19, No.12
Wastewater Microbiology, Third Edition
Gabriel Bitton (2005). John Wiley & Sons Inc., 111 River St., Hoboken, N.J. 07030, 768 pp., $99.95, hardcover, ISBN 0-471-65071-4.
This book covers the microbiological principles and role of microorganisms in water and wastewater treatment. It begins with the fundamentals of microbiology, followed by a discussion of public health issues and challenges. The core of the book addresses the microbiology of wastewater and drinking water treatments. The author has thoroughly examined and discussed the microbiological principles and applications as related to wastewater and drinking water treatments. The application of biotechnology in wastewater treatment is described in great detail. The remainder of the book covers toxicity testing in wastewater treatment plants and microbiological and public health aspects of wastewater effluents and biosolids disposal and reuse.
This edition touches on some important current topics. Molecular and other state-of-art detection techniques have been added. A new chapter is included on bioterrorism and drinking water safety. The author provides the latest developments in biofilm microbial ecology and its impact on drinking water quality. The discussion of toxicity testing and studies of endocrine disruptors has been expanded and updated.
The book is easy to read, and its chapters are organized and formatted clearly. The illustrations, problem sets, and list of Internet resources make it a great reference book.
It will be a valuable tool for researchers, civil and environmental engineers, public health officials, and administrators. It also could serve as a college textbook for microbiological principles and application courses.
Rasheed Ahmad is a consulting engineer in Alpharetta, Ga.
Everybody has a right to access to clean, safe, affordable and reliable drinking water and sanitation services. But how to translate a human right into practice? The handbook on the human rights to water and sanitation provides guidance.
The Handbook serves as a practical guide: it translates the often complicated legal language into practical information for officials and professionals in civil society organisations. It explains the meaning of, and the legal obligations that stem from, the human rights to safe drinking water and sanitation, and provides clarifications, recommendations, examples of good (and bad) practices, as well as checklists so users can analyse how they are complying with the rights. The handbook is available in English, Arabic, French, Spanish and Portuguese.
This book was published by the first UN Special Rapporteur on the Human Right to Water and Sanitation (Catarina de Albuquerque, who held her term between 2008 and 2014) and was prepared with the support of many individuals and organisations: a group of experts, researchers and scholars led by Virginia Roaf; a task group consisting of WaterAid, WASH United, End Water Poverty, Sustainable Futures Initiative, and UNICEF, along with Kerstin Danert from the Rural Water Supply Network; and an Advisory Committee (composed of Helena Alegre, Ger Bergkamp, Maria Virginia Brás Gomes, Clarissa Brocklehurst, Victor Dankwa, Ursula Eid, Ashfaq Khalfan, Alejo Molinari, Tom Palakudiyil, Frederico Properzi, Paul Reiter, Cecilia Scharp and Michael Windfuhr). Several regional consultations (in Africa, Latin America, Asia and Europe) were organised to give the author first-hand information on good practices, challenges and ways to overcome them in different parts of the world, and two online consultations (one organised by the Rural Water Supply Network and another by HuriTALK) were also held.
De Albuquerque: "International human rights law obliges States to work towards achieving universal access to water and sanitation while prioritizing those most in need. Water and sanitation facilities should not only be available and accessible to all, they should also be affordable for the poorest while ensuring quality and safety to the health of users. All these dimensions are captured in the human rights legal framework. Sustainability is a fundamental human rights principle. It is essential to the realization of the human rights to water and sanitation. (...) Once services and facilities have been improved, the positive change must be maintained and slippages and retrogression must be avoided."
Patrick Moriarty, CEO of IRC, an international think-and-do-tank that works to find sustainable solutions to the worldwide water and sanitation crisis, congratulated the UN team's initiative with this publication. "The recognition of the Human Right to Water and Sanitation was a crucial step in the long-term vision of universal and sustainable access. It is particularly important in that it unambiguously puts the onus where it belongs: with government - national and local. Yet, as we all know, human rights are all too often disrespected. Working out, on the ground, how to put the rights into practice is therefore a critical and defining role for anyone - and especially NGOs - working in our sector. These booklets are therefore an immensely valuable resource - a tool to help make the leap from good intentions to measurable and impactful actions. In this, civil society and NGOs like IRC have a crucial role in both supporting governments, and holding them to account, for delivering the human right: ensuring that sufficient resources are provided and necessary capacities developed".
Putting it into practice
How to bring these rights to water and sanitation into real practice? Remi Kempers, Programme Manager Water at Both ENDS, believes that: "first people should be made aware of their rights, and understand what their national laws on water, sanitation and hygiene are. For example, in Bangladesh we use the Right to Information Act to ask the government to provide information on access to water and sanitation services. Then we use the information in workshops, rallies, articles, media. With the available data people can address the government (mostly via the local government) and ask for more provisions of good quality. Secondly, you should analyse the current laws and see how they relate to standards and principles as set in the international Right to Water and Sanitation, and identify the gaps. Often these gaps occur through translation of the human rights framework into national laws, or in the implementation of laws that are of good quality, as often a lot is dependent on the budgets which are available at local, district and national levels. You need NGOs who are supporting people and people's organisations, like women's self-help groups, to wake up the local population and make them aware of their rights. Sometimes pressure from higher and often better educated government officials is needed to set actions by local government into motion".
What can NGOs do to support this? Kempers: "NGOs can deliver and organise programmes that empower local communities to better understand their rights, and hold governments accountable, so they will start providing the services needed".
Sjef Ernes, Managing Director of Aqua for All, believes in the role of government. Ernes: "Acknowledgement of access to safe water and sanitation as a human right addresses the responsibility of (local) government to give priority to this public good. This opportunistic thinking means that politicians have an opportunity to go for it, to score with providing tailor-made solutions which are fundable and feasible, and create services that comply with the human right to water and sanitation. The cost-benefit ratio of sound access to safe water and sanitation is 1:8 up to 1:35. It is by far the best investment governments, private sector and consumers can make," he concludes.
The Handbook is presented in nine booklets, each of which addresses a particular area of activity:
- Booklet 1: Introduction
- Booklet 2: Frameworks (Legislative, regulatory and policy frameworks)
- Booklet 3: Financing (Financing, budgeting and budget-tracking)
- Booklet 4: Services (Planning processes, service providers, service levels and settlements)
- Booklet 5: Monitoring
- Booklet 6: Justice (Access to justice)
- Booklet 7: Principles
- Booklet 8: Checklists
- Booklet 9: Sources
August 13, 2007
Education is the second largest U.S. industry, and female employees outnumber male employees by more than three to one. Since there are more career opportunities today than ever before, ensuring the teaching profession attracts talented women is an important public policy concern. However, since 1983, when A Nation at Risk, a landmark assessment of U.S. education, concluded that the "professional working life of teachers is on the whole unacceptable," little has changed despite numerous state and national efforts. A fundamental shortcoming of those efforts is that they treat teachers as objects of change, not agents of change. In fact, educators are driving emerging reforms by starting schools where teachers want to work and parents want their children to learn.
Until recently, private schools were the only alternative to the traditional public-school system. Overall, private-school teachers are nearly twice as satisfied as public-school teachers with their working conditions. In the 1970s and 1980s, educators began advocating "chartered schools" or public schools that would abide by the same accountability and admissions requirements as district schools but would be run by teachers, have distinct educational missions, and serve general or targeted student populations. The first charter school opened in 1991, and today some 4,000 charter schools are educating nearly 1.2 million students. Although charter schools represent about three percent of all American schools, charter schools create an instructive microcosm of a diversified educational system and show how that system might benefit teachers and students.
At 82 percent, overall satisfaction rates among charter-school teachers are twice as high as their private counterparts and more than three times as high as their district counterparts. Two-thirds of charter-school teachers report high levels of satisfaction with the influence they have over curricula, student discipline, and professional development, as well as school safety, collaboration with colleagues, and their schools' learning environments. On those same measures, slightly more than half of private-school teachers and slightly more than one-third of public-school teachers report high levels of satisfaction. These results suggest the teachers' and students' ability to choose their schools positively affects teachers' and students' experience at school. In contrast to our current system, which is dominated by assigned, government-run public schools, a more diversified system would offer teachers the same wide range of employment options other professionals now enjoy. To attract quality teachers, schools would have to offer competitive salaries, flexible schedules, and a professional working environment in which teachers have autonomy to innovate and are rewarded for their successes.
In short, if their top concern were truly the well-being of teachers, organizations purporting to represent them, such as the National Education Association and the American Federation of Teachers, would make diversifying the education marketplace (through charter schools, voucher programs, and other initiatives that increase parental choice) their top priority.
Cookies are an important part of the web, and PHP supports HTTP cookies. Using cookies, we can store data in a remote browser. We can set cookies using either the setcookie() or setrawcookie() function. Because cookies belong to the HTTP header, setcookie() must be called before any output is sent to the browser, just like the header() function.
Any cookie sent from the client side is automatically included in the $_COOKIE superglobal array. When more than one value is stored, $_COOKIE acts as an associative array of cookie names to values.
<?php
$var = "This is a test of cookie";
setcookie("roseindia", $var); // must run before any output; the name matches the output below
?>
<form action="cookie-2.php" method="post">
Name<input type="text" name="name"/>
</form>
Enter an input and hit the return key, and the output will be as follows:
roseindia: This is a test of cookie
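The sample output implies that cookie-2.php loops over $_COOKIE and echoes each name/value pair. The original receiving page is not shown, so the following is a reconstruction:

<?php
// cookie-2.php: cookies set by the previous response arrive in $_COOKIE
foreach ($_COOKIE as $name => $value) {
    echo htmlspecialchars($name) . ": " . htmlspecialchars($value) . "<br/>";
}
?>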
Posted on: March 31, 2010
Vitamins, Minerals and Dietary Supplements
What is inosine?
Inosine is a nucleoside, one of the basic substances of which cells are composed. It is a precursor to adenosine, an important molecule that plays a role in energy production and metabolism. It is also a precursor to uric acid, a naturally occurring substance that is believed to neutralize some free radicals and may help prevent the development of multiple sclerosis.
Inosine is believed to play a supportive role in many bodily functions, including the release of insulin, protein synthesis, and oxygen metabolism. Studies conducted in Europe suggest that inosine may enhance oxygen delivery to the muscles, which can result in increased endurance and may be of benefit to athletes. Inosine may also work in conjunction with other chemicals to remove a buildup of lactic acid in the blood, improving energy production and exercise performance.
How much inosine should I take?
The amount of inosine to be taken depends on the condition being treated. Generally, some practitioners will recommend 500-2,000 milligrams of inosine in supplement form, taken 30 minutes before exercising. Some studies have used doses ranging up to 6 grams per day, taken for several weeks.
What forms of inosine are available?
Inosine is found in brewer's yeast and various animal organ meats. It is also available as a supplement, usually in capsule or tablet form.
What can happen if I take too much inosine? Are there any interactions I should be aware of? What precautions should I take?
Inosine appears to be well-tolerated in individuals taking relatively large doses (5-6 grams per day) for prolonged periods of time (>26 weeks). While no side-effects have been reported with the use of inosine, unused inosine can be converted by the body into uric acid, which may present problems for people at risk of developing gout. High amounts of uric acid may lead to conditions such as arthritic joints and toes.
As of this writing, there are no well-known drug interactions associated with inosine. As always, make sure to consult with a licensed health care provider before taking inosine or any other herbal remedy or dietary supplement.
Bonsai care includes fertilizer, which supplies plants with the essential nutrients necessary for their growth while in a container.
Bonsai are living, containerized plants and they need feeding to stay healthy.
This can be done with tablets, pills, powders, granules etc.
There are both synthetic and organic types. One of the most popular is water soluble.
Some growers create their own fertilizer cakes.
A common bonsai myth is to use fertilizer half strength, because it is a little tree.
Not true ... and do not give your bonsai extra fertilizer either.
Unless you have more specific instructions for your tree, use a balanced (even-numbered) common houseplant formula such as 20-20-20 or 14-14-14, and follow the instructions on the container.
These numbers are more important than the brand name.
Most brands have several different formulas. When someone tells you to use 'X' brand, that is not enough information.
(Soil should be moist before using liquid products).
ANOTHER MYTH: Just as there is no one plant called bonsai, there is no one type of "bonsai tree fertilizer". See more bonsai myths.
Kingsville Boxwood Bonsai
Because container plants are watered frequently and soluble fertilizers leach quickly, many growers use slow-release (often called time-release) pellets or prills.
These can be mixed in the growing medium to prevent them from bouncing off the top of the soil during rain or watering.
Another good reason for mixing these “little balls” in the soil is, a crust often forms as they dissolve if placed on top. This crust can disturb the effectiveness of your watering.
If you choose to sprinkle this type of fertilizer on top of the potting medium, be sure to disturb the top of the soil mix from time to time with a chop stick.
Fertilizers have three numbers printed on the package. These numbers refer to the three nutrients that plants need in the greatest quantity.
These elements are among the macronutrients: nitrogen (N), phosphorus (P) and potassium (K). Numbers such as 20-20-20 or 7-9-5 indicate the total percentages of N-P-K (in that order).
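A quick arithmetic sketch of what those label percentages mean for an actual dose (the numbers below are illustrative; strictly speaking, the P and K figures on labels refer to the oxide forms P2O5 and K2O, but the proportional arithmetic is the same):

def nutrient_grams(dose_grams, n, p, k):
    # n, p, k are the label percentages, e.g. 20-20-20
    return {"N": dose_grams * n / 100,
            "P": dose_grams * p / 100,
            "K": dose_grams * k / 100}

print(nutrient_grams(50, 20, 20, 20))  # {'N': 10.0, 'P': 10.0, 'K': 10.0}, i.e. 10 g of each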
Changing the fertilizer formula you use, can make an amazing difference in the response of your bonsai!
Other minerals are also provided in fertilizer. They are important, but they are available in much smaller amounts; they include the remaining macronutrients and the micronutrients.
Whichever type of fertilizer you use, "feeding" is an important part of bonsai care.
If you suspect your tree is sick, fertilizer may not fix it.
Look for the cause of the problem.
See the Plant Pests and Bonsai page.
Once your bonsai is "finished," it enters a maintenance period. The numbers on your fertilizer should be more even and lower overall.
A special thanks to Jeff Wasielewski at Fairchild Tropical Botanic Garden for reviewing the bonsai tree fertilizer information on this page.
That legacy is alive in what Emil Jacoby left behind -- powerful, haunting images of the brutal horrors the Nazis unleashed on six million innocent victims, based on his own searing memories as a concentration-camp survivor.
"My father never spoke about what happened until he was 60 years old," Mrs. Borenstein said. "That year was a catharsis. He painted non-stop, and could not stop talking about the Holocaust -- everything had to be told."
Jacoby's media included charcoal, pen and ink, watercolor, and oil.
Small black-and-white drawings represent some of his most dramatic work: A naked woman, awaiting execution, overlooking a mass grave; a group of uniformed soldiers watching the shooting of a kneeling man; a long line of men on a forced march, under armed guard.
"He was a gentle soul and his love for my mother Betty was truly legendary," Mrs. Borenstein said. "He was also a loving and devoted father, and wanted me, my sister Nechama Cohen, my half-brother Jirka Kuchar, and our seven children to know what happened.
"His message was that you can never forget what happened during the Holocaust. Now, more than 10 years after his death [in 1998], I feel an obligation to carry out his wishes, and share his art and legacy," she said. Her sister, who lives in Israel, and her half-brother, who lives in the Czech Republic, feel the same way, she said.
Jacoby painted happier images as well, including still lifes and portraits. One impressive oil painting, in a golden-yellow and maroon palette, depicts men and children praying at the Wailing Wall in Jerusalem.
Emil Jacoby was born in 1923 in Bustina, Czechoslovakia, now part of Ukraine, where his father came from a line of distinguished rabbis who founded the town in 1817, Mrs. Borenstein said.
Jacoby left home in 1939 to work in Budapest, Hungary, at the same time that his half-sister Hanna moved to Belgium.
"My father always dreamed of coming to America, the land of freedom, and when World War II broke out, he mentioned it to his father constantly, but to no avail," Mrs. Borenstein said. "My grandfather, like many others of his generation, never believed the Germans were capable of exterminating the Jews."
In 1941, the young Jacoby received a postcard from his mother, Rachel, informing him that the family had been rounded up, and put on a train to a concentration camp. En route, his father, Mendel Jacobovitz, was shot and killed in front of his wife and two sons, Bezalel, 15, and Nissim, 10, Mrs. Borenstein recounted.
"My grandmother's postcard warned my father not to return home," she said.
Emil Jacoby never went back, and learned later that his mother and two younger brothers were executed in 1941 at Kamenets Podolsk in western Ukraine, according to information that he provided in 1989 to the Central Database of Shoah Victims' Names maintained by Yad Vashem, the Holocaust Martyrs' and Heroes' Remembrance Authority in Jerusalem.
In March 1944, at age 21, Jacoby was conscripted into a Nazi labor brigade, and laid bricks for a military camp west of Budapest, and dug ditches for communications cables. In October, he was transported to Mauthausen concentration camp on the banks of the Danube River in Upper Austria, where a grueling routine of forced labor kept him alive until the camp was liberated in May 1945.
He was one of almost 200,000 people that the Nazis brought to Mauthausen between August 1938 and May 1945, including non-Jewish civilians from France, Italy, Poland, Russia, Spain, Yugoslavia, and Czechoslovakia, as well as 10,000 Russian prisoners of war. "At least 95,000 died there," and more than 14,000 of them were Jewish, according to the U.S. Holocaust Memorial Museum.
One infamous feature of Mauthausen was the so-called Infirmary Camp where, beginning in 1943, ill or weak prisoners "received little or no treatment and most eventually died," according to Holocaust Museum records.
The camp's gas chamber operated until the closing days of the war. Almost 3,000 prisoners from the infirmary perished there, after a "selection" on April 20, 1945. The last killings came eight days later, when 33 anti-regime political activists from Austria -- Social Democrats and Communists -- were sent to their deaths.
FROM THE ASHES
After surviving Mauthausen, the young, multi-lingual Jacoby -- fluent in Czech, English, German, Hungarian, Romanian, Russian, Yiddish, and Hebrew -- found work at a factory in Nachod, Czechoslovakia. He emigrated to Israel in 1949, and lived there until 1969, the year he fulfilled his dream and moved to the U.S. with his wife and two daughters.
"Some of my earliest childhood memories are of my father singing songs and reading books to me in English, a language he learned from his father, who lived in Chicago from 1917 to 1919 and owned a dairy franchise called Yore Dairy," Mrs. Borenstein said.
The family lived first in Long Beach, L.I., and then Brooklyn. They moved to Staten Island in 1985, where Jacoby painted in the first-floor studio of his Grasmere home.
Beyond sharing her father's art, Rachel Borenstein is also motivated to publicize his "fifth question," posed in "The Holocaust Haggadah," his handwritten book in English and Hebrew that he completed in 1993, with 11 full-page drawings. He challenges rabbinical authorities to add a fifth question to the traditional four-question Passover seder ritual:
"All year round we remember the Holocaust, tonight we are asking -- How could it happen?" he wrote.
For Rachel Borenstein, this was her father's final intellectual query "as he faced his own mortality," she said. Jacoby received the Nova Original Teleplay Award in 1998 for his movie, "The Fifth Question," and his autobiography is in the Holocaust Memorial Museum in Washington, D.C., she added proudly.
The hippocampus is a seahorse-shaped structure within the temporal lobe, the region above your ears. It is part of the limbic system (involved in emotion, learning, and memory). Recently, a division of the hippocampus has been proposed that separates it into two functional areas: the dorsal (upper) hippocampus is believed to be a factor in spatial learning and memory, and the ventral (lower) hippocampus is believed to play a role in regulation of emotion (Sahay and Hen, 2007).
The olfactory bulb is a forebrain structure at the end of the olfactory nerve that receives smell input from the nose. Odorant molecules from the environment bind to smell receptors in the nose, and these receptors send signals, via the olfactory nerve, to the olfactory bulb.
Bear, Connors, and Paradiso: Neuroscience: Exploring the Brain (Third Edition)
Carlson: Physiology of Behavior (Tenth Edition)
By Matt McGranaghan and Mike Hadley
Companies need to use an options-pricing model in order to "expense" the fair value of their employee stock options (ESOs). Here we show how companies produce these estimates under the rules in effect as of April 2004.
An Option Has a Minimum Value
When granted, a typical ESO has time value but no intrinsic value. But the option is worth more than nothing. Minimum value is the minimum price someone would be willing to pay for the option. It is the value advocated by two proposed pieces of legislation (the Enzi-Reid and Baker-Eshoo congressional bills). It is also the value that private companies can use to value their grants.
If you use zero as the volatility input into the Black-Scholes model, you get the minimum value. Private companies can use the minimum value because they lack a trading history, which makes it difficult to measure volatility. Legislators like the minimum value because it removes volatility - a source of great controversy - from the equation. The high-tech community in particular tries to undermine the Black-Scholes by arguing that volatility is unreliable. Unfortunately, removing volatility creates unfair comparisons because it removes all risk. For example, a $50 option on Wal-Mart stock has the same minimum value as a $50 option on a high-tech stock.
Minimum value assumes that the stock must grow by at least the risk-less rate (for example, the five- or 10-year Treasury yield). Consider, for example, a $30 option with a 10-year term and a 5% risk-less rate (and no dividends). The minimum-value model does three things: (1) grows the stock at the risk-free rate for the full term, (2) assumes an exercise and (3) discounts the future gain to the present value with the same risk-free rate.
Calculating the Minimum Value
If we expect a stock to achieve at least a risk-less return under the minimum-value method, dividends reduce the value of the option (as the options holder forgoes dividends). Put another way, if we assume a risk-less rate for the total return, but some of the return "leaks" to dividends, the expected price appreciation will be lower. The model reflects this lower appreciation by reducing the stock price.
The minimum-value formula follows directly from those three steps. For a non-dividend-paying stock, growing the stock at the risk-less rate, assuming exercise, and discounting the gain back at the same rate simplifies to:

minimum value = s - k * e^(-r*t)

For a dividend-paying stock, the stock price is first reduced for the dividends the option holder forgoes:

minimum value = s * e^(-d*t) - k * e^(-r*t)

where:

s = stock price
e = Euler's constant (2.718…)
d = dividend yield
t = option term
k = exercise (strike) price
r = risk-less rate
Don't worry about the constant e (2.718…); it is just a way to compound and discount continuously instead of compounding at annual intervals.
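To make the arithmetic concrete, here is a minimal Python sketch of the calculation just described (the function name and the at-the-money strike are our own illustrative assumptions, not the article's):

```python
from math import exp

def minimum_value(s, k, t, r, d=0.0):
    """Grow the stock at the risk-less rate, assume exercise at the end of
    the term, and discount the gain back at the same rate; a dividend
    yield d first reduces the effective stock price."""
    return s * exp(-d * t) - k * exp(-r * t)

# The article's example: a $30 at-the-money option, 10-year term, 5% rate.
print(minimum_value(s=30, k=30, t=10, r=0.05))            # about 11.80
# A $100 at-the-money option with a 1% dividend yield (used again below).
print(minimum_value(s=100, k=100, t=10, r=0.05, d=0.01))  # about 29.83
```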
Black-Scholes = Minimum Value + Volatility
We can understand the Black-Scholes as being equal to the option's minimum value plus additional value for the option's volatility: the greater the volatility, the greater the additional value. Graphically, we can see minimum value as an upward-sloping function of the option term. Volatility is a "plus-up" on the minimum value line.
Those who are mathematically inclined may prefer to understand the Black-Scholes as taking the minimum-value formula we have already reviewed and adding two volatility factors, N(d1) and N(d2). Together, these increase the value depending on the degree of volatility.
Black-Scholes Must Be Adjusted for ESOs
Black-Scholes estimates the fair value of an option. It is a theoretical model that makes several assumptions, including the full trade-ability of the option (that is, the extent to which the option can be exercised or sold at the options holder's will) and a constant volatility throughout the option's life. If the assumptions are correct, the model is a mathematical proof and its price output must be correct.
But strictly speaking, the assumptions are probably not correct. For example, it requires stock prices to move in a path called the Brownian motion - a fascinating random walk that is actually observed in microscopic particles. Many studies dispute that stocks move only this way. Others think Brownian motion gets close enough, and consider the Black-Scholes an imprecise but usable estimate. For short-term traded options, the Black-Scholes has been extremely successful in many empirical tests that compare its price output to observed market prices.
There are three key differences between ESOs and short-term traded options, and technically each violates a Black-Scholes assumption, a fact contemplated by the accounting rules in FAS 123. First, ESOs cannot be sold or transferred; second, they can be forfeited if the employee leaves before vesting; and third, volatility cannot be assumed to hold constant over the unusually long life of an ESO. FAS 123 prescribed adjustments, or "fixes," to the model's natural output for the first two differences, but the third was not addressed. These fixes are still in effect as of March 2004.
The most significant fix under current rules is that companies can use "expected life" in the model instead of the actual full term. It is typical for a company to use an expected life of four to six years to value options with 10-year terms. This is an awkward fix - a band-aid, really - since Black-Scholes requires the actual term. But FASB was looking for a quasi-objective way to reduce the ESO's value since it is not traded (that is, to discount the ESO's value for its lack of liquidity).
Conclusion - Practical Effects
The Black-Scholes is sensitive to several variables, but if we assume a 10-year option on a 1% dividend-paying stock and a risk-less rate of 5%, the minimum value (assumes no volatility) gives us 30% of the stock price. If we add expected volatility of, say, 50%, the option value roughly doubles to almost 60% of stock price.
So, for this particular option, Black-Scholes gives us 60% of stock price. But when applied to an ESO, a company can reduce the actual 10-year term input to a shorter expected life. For the example above, reducing the 10-year term to a five-year expected life brings the value down to about 45% of face value (and a reduction of at least 10-20% is typical when reducing the term to the expected life). Finally, the company gets to take a haircut reduction in anticipation of forfeitures due to employee turnover. In this regard, a further haircut of 5-15% would be common. So, in our example, the 45% would be further reduced to an expense charge of about 30-40% of stock price. After adding volatility and then subtracting for a reduced expected-life term and expected forfeitures, we are almost back to the minimum value!
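As a rough check on those figures, here is a short sketch, under the stated assumptions (an at-the-money grant, 5% risk-less rate, 1% dividend yield, 50% volatility), of the Black-Scholes value for the full 10-year term versus a five-year expected life. The function is a standard textbook implementation, not code prescribed by FAS 123:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution function

def black_scholes_call(s, k, t, r, sigma, d=0.0):
    d1 = (log(s / k) + (r - d + sigma ** 2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * exp(-d * t) * N(d1) - k * exp(-r * t) * N(d2)

s = k = 100.0  # at the money, so the value reads directly as a percent of stock price
full_term     = black_scholes_call(s, k, t=10, r=0.05, sigma=0.5, d=0.01)
expected_life = black_scholes_call(s, k, t=5,  r=0.05, sigma=0.5, d=0.01)

print(f"10-year term:          {full_term:.1f}% of stock price")      # ~59%
print(f"5-year expected life:  {expected_life:.1f}% of stock price")  # ~46%
print(f"with a 10% forfeiture haircut: {0.9 * expected_life:.1f}%")   # ~41%
```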
When the International Astronomical Union (IAU) voted to downgrade tiny Pluto to a “dwarf” planet in 2006 at a meeting in Prague, small school children screamed to high heaven and some of their parents did, too.
But kids get over such things quickly. And many grown-ups who initially thought the idea bizarre, including the occasional astronomer who voted to keep Pluto in the fold as a full-fledged member of our extended solar family, appear to have moved on as well.
Only don’t expect astrologers to go quietly into this dark night.
For IAU astronomers, Pluto’s claims for equal status began to unravel with the discovery of similar icy worlds in the Kuiper Belt, a region of space that extends out beyond the orbit of Neptune. First discovered in 1992, the Kuiper Belt is now known to be home to more than 1,000 icy bodies, some large and round enough to fit the new IAU definition for a “dwarf” planet.
“Most astronomers believe Pluto should take its place alongside other Kuiper Belt objects rather than consort with the ‘real’ planets. Astrologers have a different idea,” says Gisele Terry, president of the International Society for Astrological Research (ISAR).
Pluto’s demotion doesn’t square with evidence the astrological community has been collecting for decades, she maintains.
By all accounts, it wasn't easy for astrologers of the 18th and 19th centuries to adjust to the idea that two massive planets, Uranus and Neptune, were circling the sun at distances well beyond the orbit of Saturn. And then, in 1930, tiny Pluto exploded into public awareness after being spotted glowing dimly on photographic plates exposed at the Lowell Observatory in Arizona.
Cartoonist Walt Disney was so impressed he named one of his most lovable and endearing animated cartoon characters, Pluto the pup, after the distant wanderer. But astrologers noticed that Pluto’s namesake was the mythological ruler of the underworld and thought it might prove fruitful to check out the planet’s darker side, Terry said.
After years of observing the planets in action, western astrologers have determined that Uranus is the impulsive, rebellious, liberating archetypal force involved in sudden, unexpected changes of all kinds. Dreamy, idealistic, imaginative Neptune is a complex archetypal force most typically identified with spiritual transcendence or with qualities of an elusive or illusory nature.
But tiny Pluto has emerged as a solar system powerhouse on every level. Although three times smaller than the Earth’s moon and five times lighter, astrologers say the planet influences events that are titanic, massive, psychologically profound and compelling.
Archetypal Pluto is linked not only to death and regeneration but to the fundamental principal of power itself. As New York astrologer John Marchesella puts it, “Pluto is not one of the sweet little dwarves who whistled while they worked with Snow White.”
Marchesella is Chairman of the National Council for Geocosmic Research (NCGR) and describes Pluto as “a warlord, the God of transformation. Pluto is war but not the honorable kind but rather guerilla warfare,” he said.
Astrologer Robert Gover notes that Pluto currently is transiting through the Saturn-ruled astrological sign of Capricorn, an event that occurs every 248 years as Pluto circles the sun in its wide elliptical orbit. Dating to Roman times, every transit of Pluto through this sign has been accompanied by major cultural restructurings or revolutions, he said.
In the current issue of Archai, The Journal of Archetypal Cosmology, research scholar Rod O’Neal historically chronicles Pluto’s role in the major events unfolding in the Puritanical religious movement from its inception in England through its journey to New England and the New World — and into the current era. His research shows Pluto to be especially powerful when dynamically aligned with Saturn, the planet identified with caution, rigidity, contraction, established boundaries and rules of the game.
Pluto was locked in tight, stressful alignments with Saturn during every major or climactic turning point in the movement's long, tension-riven history, he noted.
“The Pluto archetype represents shadow, taboo and feared elements, including the underworld, hell, Satan and sin. But it is also the strength and regeneration that comes from successfully encountering what is feared,” he said.
Thanks to the orbiting Hubble Space Telescope, astronomers tracking Pluto today see a great deal more than the faint images on photographic plates at the Lowell Observatory. The Pluto they see is a small but self-contained world with four moons and a wildly elliptical orbit that reaches the Kuiper Belt at one extreme and, at the other, swings inside the orbit of Neptune, closer to the sun.
At a recent astrological conference, a “Bill of Rights” for astrologers was circulated. Atop the list was the right to continue calling Pluto a planet.
Peach (Prunus persica (L.) Batsch) is one of the most important model fruits in the Rosaceae family. Native to western China, where peach has been domesticated for more than 4,000 years, its cultivation spread from China to Persia, Mediterranean countries, and to America. Chinese peach has had a major impact on international peach breeding programs due to its high genetic diversity. In this research, we used 48 highly polymorphic SSRs, distributed over the peach genome, to investigate differences in genetic diversity and linkage disequilibrium (LD) among Chinese, North American, and European cultivars, and the evolution of current peach cultivars.
This article was published in the journal BMC Genetics.
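To make the LD statistic concrete, here is a minimal sketch of the standard two-locus calculation from phased haplotype counts. The counts below are hypothetical, and real SSR markers such as those in this study are multi-allelic, so the analysis uses multi-allelic extensions of this biallelic case:

```python
# Haplotype counts for two biallelic loci (alleles A/a and B/b) -- made-up numbers.
n_AB, n_Ab, n_aB, n_ab = 60, 15, 10, 35
n = n_AB + n_Ab + n_aB + n_ab

p_A  = (n_AB + n_Ab) / n   # frequency of allele A
p_B  = (n_AB + n_aB) / n   # frequency of allele B
p_AB = n_AB / n            # frequency of the A-B haplotype

D  = p_AB - p_A * p_B                               # disequilibrium coefficient
r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))   # squared allelic correlation

print(f"D = {D:.4f}, r^2 = {r2:.4f}")  # D = 0.1354, r^2 = 0.3219
```

When alleles at the two loci associate at random, D is zero; the closer r^2 is to one, the stronger the disequilibrium.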
The discipline studying genetic composition of populations and effects of factors such as GENETIC SELECTION, population size, MUTATION, migration, and GENETIC DRIFT on the frequencies of various GENOTYPES and PHENOTYPES using a variety of GENETIC TECHNIQUES.
Nonrandom association of linked genes. This is the tendency of the alleles of two separate but already linked loci to be found together more frequently than would be expected by chance alone.
The change in gene frequency in a population due to migration of gametes or individuals (ANIMAL MIGRATION) across population barriers. In contrast, in GENETIC DRIFT the cause of gene frequency changes are not a result of population or gamete movement.
Human histocompatibility (HLA) surface antigen encoded by the A locus on chromosome 6. Individuals bearing this allele are more susceptible to Hodgkin's disease. HLA-A1 is in linkage disequilibrium with HLA-B8 and HLA-DR3.
A phenomenon that is observed when a small subgroup of a larger POPULATION establishes itself as a separate and isolated entity. The subgroup's GENE POOL carries only a fraction of the genetic diversity of the parental population resulting in an increased frequency of certain diseases in the subgroup, especially those diseases known to be autosomal recessive.
There are numerous possible applications for MEMS and Nanotechnology. As a breakthrough technology, allowing unparalleled synergy between previously unrelated fields such as biology and microelectronics, many new MEMS and Nanotechnology applications will emerge, expanding beyond that which is currently identified or known. Here are a few applications of current interest:
MEMS and Nanotechnology are enabling new discoveries in science and engineering such as the Polymerase Chain Reaction (PCR) microsystems for DNA amplification and identification, enzyme-linked immunosorbent assay (ELISA), capillary electrophoresis, electroporation, micromachined Scanning Tunneling Microscopes (STMs), biochips for detection of hazardous chemical and biological agents, and microsystems for high-throughput drug screening and selection.
There are a wide variety of applications for MEMS in medicine. The first and by far the most successful application of MEMS in medicine (at least in terms of number of devices and market size) is MEMS pressure sensors, which have been in use for several decades. The market for these pressure sensors is extremely diverse and highly fragmented, with a few high-volume markets and many lower-volume ones. MEMS pressure sensors serve a wide range of medical applications.
The contribution to patient care for all of these applications has been enormous. More recently, MEMS pressure sensors have been developed and are being marketed that have wireless interrogation capability. These sensors can be implanted into a human body and the pressure can be measured using a remotely scanned wand. Another application are MEMS inertial sensors, specifically accelerometers and rate sensors which are being used as activity sensors. Perhaps the foremost application of inertial sensors in medicine is in cardiac pacemakers wherein they are used to help determine the optimum pacing rate for the patient based on their activity level. MEMS devices are also starting to be employed in drug delivery devices, for both ambulatory and implantable applications. MEMS electrodes are also being used in neuro-signal detection and neuro-stimulation applications. A variety of biological and chemical MEMS sensors for invasive and non-invasive uses are beginning to be marketed. Lab-on-a-chip and miniaturized biochemical analytical instruments are being marketed as well.
High frequency circuits are benefiting considerably from the advent of RF-MEMS technology. Electrical components such as inductors and tunable capacitors can be improved significantly compared to their integrated counterparts if they are made using MEMS and Nanotechnology. With the integration of such components, the performance of communication circuits will improve, while the total circuit area, power consumption and cost will be reduced. In addition, the mechanical switch, as developed by several research groups, is a key component with huge potential in various RF and microwave circuits. The demonstrated samples of mechanical switches have quality factors much higher than anything previously available. Another successful application of RF-MEMS is in resonators as mechanical filters for communication circuits.
MEMS inertial sensors, specifically accelerometers and gyroscopes, are quickly gaining market acceptance. For example, MEMS accelerometers have displaced conventional accelerometers for crash air-bag deployment systems in automobiles. The previous technology approach used several bulky accelerometers made of discrete components mounted in the front of the car with separate electronics near the air-bag and cost more than $50 per device. MEMS technology has made it possible to integrate the accelerometer and electronics onto a single silicon chip at a cost of only a few dollars. These MEMS accelerometers are much smaller, more functional, lighter, more reliable, and are produced for a fraction of the cost of the conventional macroscale accelerometer elements. More recently, MEMS gyroscopes (i.e., rate sensors) have been developed for both automobile and consumer electronics applications. MEMS inertial sensors are now being used in every car sold as well as notable customer electronic handhelds such as Apple iPhones and the Nintendo Wii.
The MNX has expertise about every application of MEMS and Nanotechnology and can help you with your development effort. Contact us at [email protected] or at 703-262-5368.
Chapter I: The Formation of the "Black Ghetto"

By the early 1950s, the black, inner-city ghetto was already well formed. African Americans already lived in highly segregated, densely concentrated urban areas. These ghettos, however, differed significantly from their modern counterparts. Their levels of "social organization" were intact; that is to say, informal social networks kept neighbors in touch with one another, formal social networks (churches, fraternal organizations, and volunteer organizations) brought people together, and institutions such as businesses and schools made community viable. Most people worked. Single-parent families were a distinct minority (about 17%). Levels of violence were low. Neighborhoods were "integrated vertically," meaning that affluent, middle-class, working-class, and poor people all lived in relatively close proximity. This is not to overlook the often severe poverty (with its related conditions) that existed, but the social organization still present made these ghettos vastly different from their modern counterparts. They were societies unto themselves that mirrored the larger society.

The seeds of the black ghetto's current problems had, however, already been planted. Most importantly, these areas were highly segregated. It had not always been so. Prior to the late 1800s, urban rich and poor, white and black, lived in relatively close proximity (whether they wanted to or not). The poor were often servants (or, previously, slaves) of the rich and so lived close by, and the still primitive modes of transportation made living close to the centers of business and commerce necessary for everyone. With the coming of large manufacturing factories to northern cities during the industrialization of the late 1800s, however, workers were needed and wages were unprecedented, so immigrant workers flocked to the United States from Europe, Asia, and every other area of the world. At the same time, efficient modes of transportation were coming into use, so the affluent were able to avoid this onslaught of "undesirables" by moving from the central cities. It was, in some ways, the beginnings of American suburbanization.

Most immigrants could not afford to move away from the places where they worked, so they lived close to the factories and tended to live together in the same neighborhoods, choosing to live in a culture familiar to them. These were the first American urban ghettos. But foreigners were not the only "immigrants." African-American agricultural workers from the South also poured into the northern industrialized cities. They were not only pulled into the North by the lure of decent wages but also pushed out of the South because of the joblessness due to the mechanization of southern agriculture. Like other immigrant groups, they settled in ghettos near their jobs. Unlike other immigrant groups, however, they stayed there. As workers from the white ethnic ghettos became more affluent over the course of one or two generations, they gradually moved out from their ghettos and dispersed into the general population. We don't speak of "Finnish ghettos" or "German ghettos" anymore. Segregation, of course, did not allow black people into white areas. Even those African Americans who became affluent were confined to black ghettos.

The second great migration of African Americans from the South into northern cities occurred in the 1940s and '50s.
Once again, they were pushed out of the South by increasing agricultural mechanization (especially the introduction of the mechanical cotton picker), and they were pulled north by decently paying jobs in the manufacturing centers of the cities. Because of continuing segregation, however, the geographical area of the black ghetto could expand only slowly, and these new immigrants had few options. Population density increased constantly. By the 1950s, black people in the least segregated cities of America were more segregated than any other ethnic or racial group had ever been in any city in the United States.

The second factor that would play an increasingly prominent role in the formation of the modern black ghetto was the relative poverty of African Americans compared to European Americans. Discrimination in education, employment, and housing was, of course, legal, but there were other, less well-known causes of the relative poverty of black Americans. Poverty had been widespread among all ethnic groups during the Great Depression, but many Federal programs had helped to alleviate that poverty. Unfortunately, African Americans were often left out of those efforts. Two of the most important elements of social insurance introduced during the Depression, for instance, were Social Security and mandatory unemployment insurance, but they specifically excluded domestics and agricultural workers. Since two-thirds of employed blacks were, at that time, either domestics or agricultural workers, most black people were not eligible for benefits. While the rest of the country was receiving significant Federal help in moving out of poverty, African Americans were left out.

The Federal Housing Administration (FHA) was another important anti-poverty program developed during the Depression, created to guarantee mortgages for the purchase of homes. This not only allowed families to become homeowners (and thus accumulate wealth) but also created jobs and provided investment in the community. Citing concerns that the poorer black neighborhoods were not good financial risks, however, the FHA "redlined" almost all black areas, refusing to guarantee mortgages there. Private lenders followed suit. These FHA policies lasted well into the 1960s, and redlining by private institutions is still in unofficial practice today.

Finally, cities had frequently used zoning requirements (first initiated in the United States in the early 1900s) to zone poor neighborhoods as "industrial," prohibiting not only new residential construction but also, frequently, the improvement of old residential buildings. The quality of life in these areas was already lower because of neighboring industry, and the housing stock tended to deteriorate easily. Other poor people could move out to other areas, but the reality of segregation forced African Americans to stay in these increasingly industrialized areas of the cities.

Despite the segregation, crowding, and poverty, however, the black ghettos of the early 1950s were viable neighborhoods, primarily because of the intact social organization. A series of events over the next three decades, however, was to change that situation markedly. The first event was the wholesale destruction of black neighborhoods by the Federal Urban Renewal and Federal Interstate Highway programs. Urban renewal was an attempt to improve decaying center cities by transforming them into new, architecturally pleasing areas.
Because of the minimal political power held by African Americans at that time, black ghettos (and other poor areas) were usually chosen as sites for urban renewal. Large, inner-city black ghettos were razed. Some of the poorer people from the "renewed" areas were moved into public housing, but these were usually large apartment buildings reserved only for the poor. The rest simply squeezed into the remaining ghetto areas. The same phenomenon occurred when the Interstate Highway Program started during the Eisenhower administration. When these superhighways went through cities, poor black areas were usually the ones disrupted. Either the area was simply razed and the former inhabitants moved into public housing, or the highway was placed so as to create a physical boundary between the black ghetto and other areas of the city, effectively isolating the inhabitants.

A second event facilitating the disintegration of the ghettos was the gradual loss of jobs that paid a living wage. The major structural changes in the American economy over the last four decades have almost all been detrimental to poor people. By the middle of the 20th century, the United States had become the overwhelming leader in worldwide manufacturing, and many of these factories were located in the large cities of the North. They offered good employment for workers who entered the job market with little education and few skills. High levels of unionization meant that the jobs were secure, wages were relatively high, and the chances for advancement were good if one stayed with the company. Blue-collar jobs were a primary way out of poverty for many African Americans.

But soon Europe and Japan had rebuilt themselves after the destruction of World War II, and their manufacturing competed, often quite successfully, with American companies. Later on, less developed countries, such as Korea and Taiwan, expanded their manufacturing, too. More recently, the globalization of the economy and the development of large, multinational companies have led to the loss of manufacturing in the United States as plants have moved to the Third World, where salaries are lower, environmental regulations are few, and expensive regulations for worker protection almost non-existent. With the increasing computerization and mechanization of manufacturing worldwide, moreover, the well-paying jobs that remained went to those whom William Julius Wilson calls the "symbol manipulators": those who analyzed data, wrote computer programs, managed people, administered organizations, or performed other tasks for which higher degrees of formal education were required. Increasingly, the only jobs remaining for poorly trained or educated people were in the service sector (as domestics, janitors, clerks, salespeople, nursing aides, and so on), where wages had historically been low and benefits poor. To make matters worse, wages in the service sector were declining even further relative to other sectors of the economy, so even full-time workers were finding it difficult to stay out of poverty. Segregation, of course, made it difficult to find well-paying jobs outside of black areas.

The third event was integration itself. With the coming of integration, affluent and middle-class African Americans could now find housing outside the crowding of the black ghetto. Only those who could not afford to move out, that is, the poorest, were left, often crowded together in high-rise public housing.
What had been poor but vertically integrated neighborhoods, where most people worked, social networks were intact, and institutions functioned, were now extremely poor areas where only poor people lived, with few or no social networks, no institutions of support, no jobs, and large numbers of people who did not work. Under such conditions, the results are predictable. The "surround of force" that people experience leads to despair, inertia, and increasing anti-social behavior.

By the 1960s, the wider society had begun to notice the changes occurring in the inner city. As always, there were analysts eager to blame the poor themselves for their poverty, but the political tenor of the times (as a society, we believed much more strongly then in structural causes of poverty than we do now) made it unfashionable to criticize poor people, and the structuralist view dominated. Due in part to the publication of Michael Harrington's The Other America, the country was rediscovering poverty and wanted to do something about it.

In 1964, Daniel Patrick Moynihan, then a young advisor to President Johnson, wrote what was supposed to be a confidential memo to the President. Although the report, The Negro Family: The Case for National Action, stressed male unemployment as the primary cause of black poverty, Moynihan also documented dramatic increases in single-parenthood among black families [4] and expressed concern about its impact on black poverty. The report was leaked, circulated widely, and the issue of single-parenthood was sensationalized by the press, causing a firestorm among liberals [5]. Black activists (their influence nearing its apex in the liberal community) interpreted the report as humiliating to blacks at a time when they were trying to support black strength and identity. More radical Black Power advocates condemned the report as another racist attempt to discredit black people. What right did this white man have even to write such a report about black people? Other (white) liberals didn't like it either, since it seemed to blame black people for their plight. The condemnation of the Moynihan Report was so severe that liberals, sociologists, and researchers responded by
- avoiding even mentioning race when discussing behavioral problems among the poor,
- emphasizing racism as the cause of any such behavior,
- denying that the behavior (e.g., decreased labor-force attachment, increase in single-parenthood, even the increase in drug use) was inappropriate, or even
- denying that the behavior existed.
Anyone who dared talk about a "culture of poverty" [6] was so viciously attacked that research simply stopped.

Lyndon B. Johnson declared a "War on Poverty" early in his presidency, significantly increasing public spending on poverty, availability of services, and growth in benefits available to the poor, especially to the elderly poor. In today's political climate, the War on Poverty is vilified as an utter failure, but many of its programs (Headstart, food stamps, Medicaid, Medicare, higher social security benefits, increases in disability benefits, Legal Aid, the Job Corps, and others) were much more successful than is commonly realized. Between 1959 and 1979, the poverty rate among fully employed blacks fell from 43% to 16%. The War on Poverty was especially successful among the elderly, as their poverty rate was cut by two-thirds. But the War on Poverty was stunted and ultimately cut short by the War in Vietnam. Few poverty programs were fully implemented, and funding was curtailed in almost all of them. Despite the success with the elderly, overall poverty increased during the next two decades, primarily due to the major economic changes occurring worldwide.

During the seventies, the forces within the now-fully-formed black ghetto intensified. Since there were no jobs, the illicit drug industry found a fertile field in which to grow new employees. And with the drugs came the violence, especially with the rise of gun sales in the 1980s. Liberals were in denial, refusing even to notice this new phenomenon in the cities for fear of criticizing African Americans. Middle America, of course, was watching television and reading the newspapers, and the behavior changes in the black ghetto were not only obvious but also frightening. Since liberals wouldn't acknowledge those changes, they were marginalized in the debate, and the only voices average Americans heard were those who blamed the poor for their poverty. So the conservative view (which focused almost exclusively on individual characteristics as a cause of poverty) was essentially unopposed until the black sociologist William Julius Wilson began writing in the mid-80s. The conservatives (most importantly, Charles Murray in Losing Ground [7] in 1983) also added the new argument that the liberal welfare policies of the Great Society programs had worsened poverty. Given that one couldn't do much about cultural traditions, family structures, or individual character, their arguments strongly bolstered the conservative attack on social spending in the 1980s, which has continued in the 90s as "welfare reform." The mood of the country hardened against the ghetto. Poverty was increasing, and the War on Poverty was declared a failure, forgetting that it had been more a skirmish than a war. By the 1980s, government programs for the poor were being drastically curtailed, and society was moving toward controlling the ghetto rather than helping it. The "black ghetto" that we know today had been formed.

Footnotes

4. In absolute numbers, there had been an explosion in black single-parenthood. The ratio of black single-parent households to white single-parent households, however, has remained the same since 1950. In 1950, for instance, 17.2% of black households and 5.3% of white households were headed by women. The "black multiple" was 3.2. In 1993, the figures were 58.4% and 18.7% respectively, so the black multiple was essentially the same, 3.1.

5. The words "liberal" and "conservative" have been so misused as to become almost meaningless today.
"Liberal," for instance, seems to describe anybody in favor of big government. In this book, I will use the term "liberal" to refer to those who emphasize the structural causes of poverty and see them as prior to and more important than behavioral causes. I will use the word "conservative" to refer to those who see individual agency as more important. 6 The "culture of poverty" was a term introduced by sociologist Oscar Lewis in the late 1950s implying that certain groups had culturally induced behaviors that precipitated their poverty. 7 Murray, Charles, Losing Ground.
Editor’s note: second in a mastitis series
Almost all dairies deal with mastitis problems from one degree to another, but the problems and the pathogens that cause them are not all alike. Depending on the particular pathogen(s) that are present and the dynamics of your dairy client’s environment and management, controlling and/or preventing mastitis can take several different avenues.
“One has to think about what organisms you are looking for before you even start talking about bulk tank vs. clinical samples,” says Page Dinsmore, DVM, Colorado State University. “In the bulk tank, we look for three basic mastitis pathogens, Streptococcus agalactiae, Staphylococcus aureus and Mycoplasma bovis. It’s crucial to identify them because depending on what you find, there are very different ways of managing these organisms.”
“It’s most important to identify organisms when there’s a herd outbreak,” adds Steve Nickerson, PhD, Louisiana State University. “Many of the outbreaks are caused by Streptococcus agalactiae. That organism can be successfully eradicated, so it’s important to know if that’s what is causing the outbreak.”
Dinsmore adds that mastitis microbiology is fairly simple, with a few exceptions, such as Mycoplasma which takes special procedures. “Every producer, regardless of size or geography, should know the pattern of organisms on the dairy in order to proceed with management of clinical cases and to be able to identify when an outbreak is occurring.”
“It’s important to identify pathogens because different pathogens behave differently,” says Karen Jacobsen, DVM, MS, University of Georgia. “I like labs that will quantitate the different bacteria and I wish more diagnostic labs would adopt those procedures. Many will identify which bacteria are present, but not count them.” Jacobsen says at the end of the year she takes those numbers from the lab and puts them on an Excel spreadsheet, plotting each organism on a graph to show clients how the organisms increase or decrease in the herd over time.
Nickerson agrees but says unfortunately many laboratories don’t have the capabilities or knowledge to thoroughly identify mastitis organisms. “If a veterinarian is going to the trouble of looking at bacteria and somatic cell counts, I think it would be worth the extra effort or money to get a species identification where possible.”
Indicators of change
Bulk tank tests can be great indicators of change in management procedures, milking equipment operation and environmental changes. Jacobsen notes that one organism she tracks with routine monitoring is Corynebacterium bovis. “It’s kind of a sentinel organism to let you know whether or not you’re doing a good job because C. bovis does not colonize the udder, per se, it just colonizes the streak canal. If you have a high count it’s an indicator that something has gone wrong with your post-dipping program, such as a change in teat dips or improper dipping.”
Nickerson says one of his clients was suddenly having a lot of clinicals in the fresh cows from Staph. organisms, and looking back at the dry cows they found Staph. there, too. “The producer quit using dry cow therapy to save money, and it caught up with him. At least he kept good records so we could look back at what had changed.”
Another of his clients cultured a lot of Klebsiella, an environmental organism, from cows with mastitis. Sampling a load of new green sawdust that was full of Klebsiella indicated where it came from. “At that point you know it’s not Staph. or Strep., it’s an environmental pathogen that’s causing the problem. In those cases it’s necessary to know what the bug is.”
Each of these organisms has a source and there is generally a common way of preventing their spread, but you have to look at what kind of pre-dip and post-dip is being used, bedding and other sources of contamination. “You can’t fight a battle if you don’t know who you’re fighting,” says Jim Brett, DVM, Macon County Animal Hospital, Montezuma, Ga. “By looking at the type of bugs you get, you can look at sources, then at prevention. Is it a human problem, a cow problem, an equipment problem or a combination? If you find you have an equipment problem, quit running around culturing individual cows. Break the system down and see where the problem is.”
When faced with a mastitis problem on a client’s dairy, the first step is to assess whether it’s an acute problem that needs to be investigated, an outbreak of clinical cases or a high somatic cell count. “Unfortunately, many times you are called out because the client is in trouble with high somatic cell counts that are close to the legal limit,” says Brett. “The first thing I do is take multiple bulk tank samples and look at the farm size. If it’s a large farm, maybe only half or two-thirds of the herd is in the bulk tank, so you have to make sure you get everyone cultured by culturing other tanks as well. That’ll help determine if it’s a cow or an equipment problem.”
Starting a culturing program on a herd can involve weekly bulk tank testing until a pattern of organisms is identified. Jacobsen says for an initial problem in a herd, culturing on a weekly basis or culturing bulk tanks for different groups of animals within the herd can help you identify those patterns.
Brett says when a client’s herd has a mastitis problem he will take daily samples for three to seven days to make sure he has all the information he needs. He then sends the samples to a diagnostic lab and within four to five days he has organism identification and counts. “If I have a herd problem and identify that it’s Staph. aureus or contagious, then I have to back up and look at the individual cow,” he says. He does this by culturing different strings of cows to identify certain sections of the barns that have problems, or on smaller dairies he uses the California Mastitis Test paddles to identify individual cows.
When Brett starts a monitoring program, he likes to culture weekly for four to six weeks, then according to what he finds and the size of the farm, every two to four weeks after that.
“I also encourage clients to sample any clinical cases they have,” says Brett. “The results we get are not for that cow; it’s for us to monitor and look at preventatives and historical data.” Some of Brett’s clients take a sample from every clinical cow, freeze it, then send in a group of samples to the lab once a week. “I get a group sample that I can look at and compare to the data. Then we can identify the contagious cattle and if the client has a low level of Mycoplasma or Staph. aureus, we can check the individual to see if she’s worth keeping, or we might decide to cull her and get her out of the system.”
Probably the most valuable tool you can use in the fight against mastitis is good, routine monitoring of the herd. But to get accurate, consistent results, either the herd veterinarian or a designated employee or employees on the dairy should be the ones who take the samples. “It’s extremely important to have one person who is designated, trained and responsible for taking samples,” says Nickerson. “People may think it’s not a significant job, but you wouldn’t believe the contamination we can get in samples. Many times the problem is when the designated person is gone or on vacation and someone else takes the samples who isn’t trained. It’s very important to have a certain individual responsible for taking sanitary samples.”
Brett has found that identifying a person to collect samples can be rewarding for both him and the employee. “If you can find someone who likes a little bit of a challenge and can be your culture person, you give them two vials for samples, mark them and use one for HyMast and put one in the freezer for culture. That employee usually enjoys this job and will write it down in the records, keep a list of positive and negatives and have it ready to show you when you come out to the farm.”
But all of the routine monitoring in the world won’t tell you anything if you don’t keep good records and use those records to assess your situation. “Records are quite important,” says Dinsmore. “If you don’t have a way to retrieve the information, you won’t have a way of knowing where you’ve come and what you’ve accomplished, or what trouble you’ve gotten into. You have to be able to track things and it’s just as important to keep track of culture results, patterns of pathogens and bulk tank samples as it is to keep track of milk production. You can’t manage something if you can’t count it.”
“I used to think you didn’t need bulk tank tests every month in a closed herd,” says Jacobsen. “But one thing I have found is most of our herds are not closed, and bulk tank tests are a good way to let you know if you’ve brought in a cow with Mycoplasma or Strep. ag if you didn’t have it before.”
Nickerson agrees. "Dairy farmers should culture any new animals that are brought into the herd, whether it's replacement cows or replacement heifers. They can be buying or adding significant problems if a Strep. or Staph. cow is put in the herd."
One tool that has been helpful in mastitis identification and treatment has been the HyMast test by Pharmacia & Upjohn. Dinsmore says the test can indicate whether you have a Gram– or a Gram+ organism in a milk sample. “You should treat the Gram– coliform group differently than the Gram+ Strep. and Staph. group,” he says. Once you have the problem segregated into those basic groups you can use the information to make different treatment recommendations. “But if you usually treat all mastitis the same, whether it’s Gram– or Gram+, and your client uses a commercial mastitis tube on all clinical cases, you’re wasting your money.”
Brett agrees. “I’ve been amazed at the savings to a producer by not tubing or giving antibiotics to every cow. If you see that she’s Gram–, you know antibiotics aren’t going to help anyway.”
Dinsmore says with the E. coli mastitis vaccines, what used to be severe coliform cases are now quite a bit milder and may be harder to distinguish from the Gram+ types, so the HyMast test may be more useful in those vaccinated herds to differentiate mastitis organisms.
Keeping clients on a routine monitoring program can be difficult if all seems to be going well. Some of Brett’s clients frequently monitor even though they may not need to. Others get complacent and want to stop. “You have to tell these clients, ‘no,’” he says. “They’re still bringing heifers in, purchasing cows and have some biosecurity problems. They must keep checking when they’re doing business like that.”
Brett summarizes his bulk tank analyses in a one-page report for clients. He keeps it short and gives them an idea of how things are now compared to the last report or last year, better or worse and recommendations for improvement.
“Fifteen years ago when the legal limit was a million SCCs, the average was probably around 600,000. Now the average in my county is around 300,000 or less and I have herds consistently below 200,000. By keeping track of what’s going on, my clients do a good job and have learned how to tweak the system when it needs it.”
Types of mastitis-causing pathogens
Karen Jacobsen, DVM, MS, offers the following chart on mastitis-causing pathogens, their source and means of spreading infection.
TYPE: Staphylococcus (coagulase +): aureus, hyicus
SOURCE: Infected udders, teat lesions, udder skin, etc.
MEANS OF SPREAD: Cow to cow by contaminated udder wash rags, teat cups, hands, etc.

TYPE: Staphylococcus spp. (coagulase -): epidermidis, micrococcus, etc.
SOURCE: Normal inhabitant of udder skin
MEANS OF SPREAD: Poor udder preparation, milking wet udders and teats

TYPE: Streptococcus agalactiae (causes high somatic cell counts)
SOURCE: Infected udders
MEANS OF SPREAD: Cow to cow by contaminated udder wash rags, teat cups, hands, etc.

TYPE: Streptococcus non-ags: uberis, faecalis, dysgalactiae
SOURCE: Numerous locations on the cow: hair, lips, vagina, feces, bedding, muddy lots, etc.
MEANS OF SPREAD: Environment to cow by wet, dirty lots and bedding, milking wet teats, poor udder preparation

TYPE: Corynebacterium bovis
SOURCE: Normal inhabitant of the teat canal
MEANS OF SPREAD: Appears in tank milk when cows are not pre-stripped

TYPE: Coliforms: Escherichia coli, Klebsiella, etc.
SOURCE: Manure, bedding, especially sawdust
MEANS OF SPREAD: Same as Strep. non-ags

TYPE: Arcanobacterium pyogenes
SOURCE: Moist environment, cracked liners, water hoses, refrigerators, drug contamination
MEANS OF SPREAD: Common sequel to lacerated teat sphincter; carried by flies

TYPE: Mycoplasma bovis, californicum
SOURCE: Infected cows, udder shedders, secretions in calves and older heifers
MEANS OF SPREAD: Infusion procedures in dry and lactating cows; cow-to-cow spread via milking machine and hands

SOURCE: Soil initially, secondarily contaminated treatment materials, hands and sponges
MEANS OF SPREAD: Contamination of syringes and cannulas in infusion; secondarily by hands

SOURCE: Water, feces, flies, rotted materials, contaminated drugs
MEANS OF SPREAD: Originates in environment and initially spread by contamination in infusion; becomes contagious cow-to-cow as it gains momentum

TYPE: Miscellaneous: Bacillus, Pseudomonas, etc.
SOURCE: Hoses, dirty water, milk manure, bedding, etc.
MEANS OF SPREAD: Same as Strep. non-ags
What are the lab tests?
There’s a variety of lab tests you can have done to interpret mastitis problems from bulk tank samples. Jim Brett, DVM, offers these definitions:
Who should be tested?
Brett offers parameters for those farms that can benefit from bulk tank testing. He notes that goals are set by the individual dairy farmer with veterinary assistance, and that no two farms are alike, but if the farm is above these general parameters, it is a good candidate for testing (a simple screening check is sketched after the list).
Standard plate count: greater than 10,000
Lab pasteurization count: greater than 500
Coli count: greater than 100
SCC: greater than 300,000
Presence of Staph. aureus, Strep. ag or Mycoplasma in herd
P.I. count: greater than 50,000
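As one way to read those numbers, the following minimal sketch flags a farm that exceeds any of Brett's parameters. The threshold values are copied from the list above; the dictionary keys and function names are made up for the example.

```python
# Screening sketch based on Brett's bulk tank parameters above.
# Threshold values come from the list; field names are illustrative.
THRESHOLDS = {
    "standard_plate_count": 10_000,
    "lab_pasteurization_count": 500,
    "coli_count": 100,
    "somatic_cell_count": 300_000,
    "preliminary_incubation_count": 50_000,
}

def is_testing_candidate(results: dict, contagious_pathogens: bool) -> bool:
    """True if any value exceeds its parameter, or if Staph. aureus,
    Strep. ag or Mycoplasma is already known to be in the herd."""
    exceeded = any(results.get(k, 0) > limit for k, limit in THRESHOLDS.items())
    return exceeded or contagious_pathogens

farm = {"standard_plate_count": 8_000, "lab_pasteurization_count": 900,
        "coli_count": 40, "somatic_cell_count": 250_000,
        "preliminary_incubation_count": 30_000}
print(is_testing_candidate(farm, contagious_pathogens=False))  # True: LPC over 500
```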
Standard plate count (SPC)
Bacteria making up the sum total called the SPC are divided into two major groups for veterinary purposes: those capable of causing herd mastitis problems and those not usually associated with mastitis. Bacteria capable of causing common herd mastitis problems, Streptococcus agalactiae and hemolytic Staphylococcus, come almost exclusively from within the udder of infected cows. Their presence in bulk milk means infected cows were milked into the bulk tank. Other types of bacteria more frequently represent contamination from external sources into the milk.
Excessive SPCs resulting from mastitis-causing organisms warrant appropriate mastitis treatment and prevention programs. Excessive SPCs from sources outside the udder do not warrant cow treatments, but indicate the need for improved hygiene of cows or milking equipment. It should be recognized that contamination substantial enough to show up as a major portion of an excessive SPC, especially from external surfaces that contact teats, will in most cases also contribute to an excessive incidence of mastitis.
Lab pasteurized counts (LPC)
The LPC is the bacteria count of milk after it has been heated to 145° F for 30 minutes in the lab. The procedure kills all of the usual mastitis-causing bacteria, so organisms that survive it would have originated as external contamination of the milk. Generally speaking, high LPCs reflect an inadequate milking equipment wash-up procedure. Poorly cleaned inflations and claws or inadequate pre-milking hygiene are situations that could result in both increased LPCs and increased incidence of mastitis.
Coliform count (Coli)
The coliform count is a bacteria count conducted with selective media that allows only Gram-negative bacteria of the coliform group to grow and be counted. High coliform counts indicate fecal contamination of milk or milking equipment. Cows with mastitis seldom, if ever, contribute to coliform counts. On the other hand, milking wet, dirty udders or using dirty milking equipment causes elevated coli counts and is conducive to coliform mastitis. Coliform counts in the hundreds are consistent with poor milking hygiene. Coliform counts in the thousands suggest that incubation on milking equipment is occurring.
Somatic cell counts (SCC)
Somatic cells are normally found in the milk in low numbers. When found in high numbers, they are most likely in response to bacterial infections. Their numbers are approximated by such tests as the California Mastitis Test (CMT) and the Wisconsin Mastitis Test (WMT) or counted more precisely by the Coulter Counters or Fossmatic electronic counters.
The SCC of bulk tank milk is a very useful monitor of herd udder health if used on a regular basis. Upward or downward trends in SCC can be correlated with progress or lack of progress in controlling mastitis in a herd. Absolute numbers of somatic cells in bulk milk are useful in assessing whether mastitis is a problem in a herd. The combination of SCC and SPC by species enables the most informed appraisal of herd udder health.
Preliminary incubation (P.I.) count
For the P.I. count the raw milk sample is incubated at 55° F for 18 hours. Bacterial results below 100,000 per ml are acceptable, but the goal should be 50,000 or less per ml. Many results will be less than 10,000, just like the SPC, if sanitation is good.
Causes of high P.I. counts include dirty cows, poor udder washing practices, slow cooling or temperatures above 40° F, failure to thoroughly clean equipment twice each day, and neglecting to sanitize equipment before use.
Palawa (Aboriginal) languages
Humans first reached Tasmania c40,000 BCE. They arrived over a sand dune desert known as the Bassian Plains during an interglacial which lasted from c60,000 BCE to c30,000 BCE. By far the greater part of the Plains now forms the sea bed of Bass Strait.
The immigrants spoke a language which in semantic content, phonology and morphology was similar to, and ancestral to, the languages spoken at the beginning of the nineteenth century in central and eastern Victoria, coastal New South Wales and central and eastern Tasmania. In common with the languages of south-eastern Australia, the Palawa languages did not contrast the stops, i.e. [p] with [b], [t] with [d], nor [k] with [g]; [s], [z] and [f] were not articulated; diphthongs were not fully fused; and consonant clusters were rare: inflections and other affixes took the place of pronouns and prepositions, indicated the dual and the plural, and defined the role of nouns in sentences. Archaic features included verbs in the present tense only, the preservation of replicated word elements, and the formal structuring of words in two parts, viz a classifier which placed the word in a general category, followed by an item which provided more specific information. Very few Palawa sentences and songs were recorded, which means that very little is known with respect to grammatical structure.
The Palawa languages of central and eastern Tasmania will be collectively referred to as 'Mara speech'. During the early Holocene the Mara speakers were confined to a region probably comprising the northern and southern Midlands, the Fingal Valley, the north-eastern highlands, and the eastern ranges. A language referred to as 'North Eastern speech' was spoken by the clans who in the nineteenth century occupied the Fingal Valley and north-eastern highlands, the northern Midlands, the Tamar Valley west to the Liffey River and Port Sorell, and east of the Tamar Valley to include the Cape Portland peninsula. 'Eastern speech' (sometimes referred to as the 'Oyster Bay' language) was spoken over the remainder of eastern and south-eastern Tasmania through to the eastern shores of the Derwent Estuary, the southern Midlands, and the catchment of the Ouse River up to the Central Plateau. Apart from an input principally to 'North Eastern speech' by Aborigines from Victoria at the end of the Pleistocene, Mara speech was a direct descendant of the language spoken by the first Tasmanians. Its connection with the Mainland Australian languages is evidenced by most of the 700 recorded Palawa place names, by place names in south-eastern Australia, and by similarities in the Palawa and Mainland lexicons.
'(South) Eastern speech' refers to the dialects spoken by clans which occupied the western shores of the Derwent Estuary from somewhere near Bridgewater south to Recherche Bay, and which included Bruny Island and the valley of the Huon River upstream almost as far as Lake Pedder. It was a fused language formed as a result of the merger of a Mara speech dialect with Nara speech dialects.
The last ice age lasted from c30,000 BCE to c11,000 BCE. It peaked c18,000 BCE, and retreated rapidly thereafter. For most of the period the Bassian Plains again became the barrier they had been earlier. From c10,500 BCE Bass Strait itself became a permanent barrier. The waning of the ice age provided an increased rainfall, and for one or two millennia before 14,000 BCE this permitted Aborigines from Victoria to follow rivers downstream to a large lake known as the Bassian Lake, and thence up the Tasmanian rivers. Until they were displaced and/or absorbed by others early in the Holocene, they occupied much of northern Tasmania and its eastern coastline. The Bassian Lake spawned a river which flowed west, north of King Island, into the Indian Ocean. The river and the improving climate enabled a population from the Mt Gambier-Warrnambool regions to penetrate the western end of the Plains. Their language was an amalgam of the Pleistocene languages of south-eastern Australia, and a language from northern Australia which arrived between 30,000 BCE and 15,000 BCE.
Sea levels rose rapidly. By c14,000 BCE a marine gulf had cut off both the mainland populations from their cousins. By 9,000 BCE the western population had been forced onto the Tasmanian land mass where it merged with local Pleistocene populations. The Nara speech languages spoken in the western third of Tasmania at the beginning of the nineteenth century resulted. At one time it was spoken around the whole Tasmanian coastline, throughout north-western and northern Tasmania, and in the valleys of the Derwent and Huon Rivers. Nara speech is better understood as a continuum of dialects rather than as a group of different languages.
Further reading: John A Taylor, ‘The Aboriginal Discovery and Settlement of Tasmania', THRAPP 50/4; John A Taylor, ‘A Description of the Palawa Languages', unpublished thesis, University of Tasmania, 2004.
John A Taylor
In this section we will discuss downloading Tomcat.
This section describes Apache Tomcat: what Tomcat is, its components, how to download it, its releases, and how to install it.
What is Apache Tomcat ?
Apache Tomcat is a web server which implements the Java Servlet and JavaServer Pages technologies. Apache Tomcat is developed and maintained by the Apache Software Foundation. Tomcat is open-source software, released under the Apache License Version 2, and provides an environment for running Java code based on HTTP requests and responses.
Tomcat has various components. Some were introduced in earlier versions, such as 4.x, while others were added in newer versions, such as Tomcat 7.
How To Download Tomcat
Current releases of Tomcat can be downloaded from the official web site of Apache Tomcat i.e. http://tomcat.apache.org/.
The Apache Software Foundation releases new versions of Tomcat from time to time. The current version at the time of writing this tutorial is 7.0.39; prior major releases include the 6.0, 5.5 and 4.1 branches.
How To Install Tomcat
To install Tomcat, first download it, then click here to learn how to install it.
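As a rough illustration of the download step, here is a minimal Python sketch that fetches a Tomcat archive and unpacks it. The version number and archive URL pattern are assumptions for the example, so verify the current release and mirror on tomcat.apache.org before using them.

```python
# Hedged sketch: download and unpack an Apache Tomcat release.
# The version and URL pattern below are assumptions for illustration;
# check http://tomcat.apache.org/ for the current release and mirror.
import tarfile
import urllib.request

VERSION = "7.0.39"  # current at the time the article was written
URL = (
    "https://archive.apache.org/dist/tomcat/tomcat-7/"
    f"v{VERSION}/bin/apache-tomcat-{VERSION}.tar.gz"
)

archive, _ = urllib.request.urlretrieve(URL, f"apache-tomcat-{VERSION}.tar.gz")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("tomcat")  # unpacked server lives under ./tomcat
print("Unpacked to ./tomcat")
```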
Posted on: May 9, 2013
Go Inside the White House
About “The White House”
The White House is the official home and principal workplace of the President of the United States of America. The house is built of white-painted Aquia sandstone in the late Georgian style. It is located at 1600 Pennsylvania Avenue in Washington, D.C. As the office of the U.S. President, the term “White House” is used as a metonym for a U.S. president’s administration. The property is owned by the National Park Service and is part of “President’s Park.”
George Washington not only served as the namesake for the capital city of the United States, he also chose its location, perhaps envisioning the transportation possibilities that the Potomac River flowing past the site would provide. The city has seen its share of conflict; in the War of 1812, British forces invaded and burned several public buildings. The Civil War marked the beginning of the city's transformation from a provincial town to a world center of culture, history and political energy during the 20th century.
(Photo: taken as the International Space Station passed over the western border of Maryland and West Virginia on May 2, 2006.)
Construction began when the first cornerstone was laid in October of 1792. Although President Washington oversaw the construction of the house, he never lived in it. It was not until 1800, when the White House was nearly completed, that its first residents, President John Adams and his wife, Abigail, moved in. Since that time, each President has made his own changes and additions. The White House is, after all, the President’s private home. It is also the only private residence of a head of state that is open to the public, free of charge.
- There are 132 rooms, 35 bathrooms, and 6 levels in the Residence. There are also 412 doors, 147 windows, 28 fireplaces, 8 staircases, and 3 elevators.
- At various times in history, the White House has been known as the “President’s Palace,” the “President’s House,” and the “Executive Mansion.” President Theodore Roosevelt officially gave the White House its current name in 1901.
- Presidential Firsts while in office… President James Polk (1845-49) was the first President to have his photograph taken… President Theodore Roosevelt (1901-09) was not only the first President to ride in an automobile, but also the first President to travel outside the country when he visited Panama… President Franklin Roosevelt (1933-45) was the first President to ride in an airplane.
- With five full-time chefs, the White House kitchen is able to serve dinner to as many as 140 guests and hors d’oeuvres to more than 1,000.
- The White House requires 570 gallons of paint to cover its outside surface.
- For recreation, the White House has a variety of facilities available to its residents, including a tennis court, jogging track, swimming pool, movie theater, and bowling lane.
An example of a committee is a group of people assembled for the purpose of fundraising.
- a group of people chosen, as from the members of a legislature or club, to consider, investigate, and report or act on some matter or on matters of a certain kind
- a group of people organized to support some cause
- Archaic someone into whose charge someone or something is committed
Origin of committee: Middle English committe, a representative; from Anglo-French commité, past participle (for French commis) of commettre, to commit; from Classical Latin committere: see commit
- A group of people officially delegated to perform a function, such as investigating, considering, reporting, or acting on a matter. See Usage Note at collective noun.
- Archaic A person to whom a trust or charge is committed.
Origin of committee: From Middle English committe, trustee, from Anglo-Norman comité, past participle of cometre, to commit, from Latin committere; see commit.
committee - Legal Definition
- A person or group of people who are members of a larger body or organization and are appointed or elected by the body or organization to consider, investigate, or make recommendations concerning a particular subject or to carry out some other duty delegated to it by the body or organization on an ad hoc or permanent basis.
- A person who has been civilly committed.
- The guardian of a civilly committed person or the individual into whose care an incompetent person has been placed. See also conservator.
Green Lessons From Local Schools
Back in the day, school curricula were all about the three R's. Now, many learning institutions are shifting some of the focus to the three E's: ecology, environmental awareness and energy efficiency.
Schools in the D.C. area are no exception. Each day in classrooms and courtyards, students perform countless good deeds for the environment, from growing underwater grasses for the Chesapeake Bay to planning community-wide conservation campaigns.
We visited several local schools to find out what they could teach the rest of us about going green. Here are a few lessons we learned:
Every little bit counts. During the 2003-04 school year, students at Poolesville High School in Montgomery County were measuring energy usage and discovered that turning off computers when not in use could save the school nearly $5,000 per year. Their research led to new rules for turning on computers throughout Montgomery County public schools. (A quick arithmetic check of that figure appears after these lessons.)
Anyone can garden just about anywhere. Most of the hands at Watkins Elementary School are tiny (it runs through fourth grade), and the campus sits amid asphalt and concrete in Southeast Washington. Yet with the help of a few dedicated adults, its students have been growing vibrant gardens there for more than a decade. Today, more than 20 themed plots in the Living Schoolyard provide classroom snacks and reduce storm-water runoff, among other uses.
Wetlands work for us. The Sidwell Friends School's environmentally friendly middle school building in Northwest arcs around a constructed wetland, which is both educational and functional. It filters the building's wastewater, which is then reused in the toilets and cooling tower. As a result, the building uses 93 percent less water than one of comparable size.
Our streams need our help. Students, staff and friends of Daniels Run Elementary School in Fairfax helped turn the stream at the edge of the school's property from troubled to vibrant by clearing invasive species, stabilizing the banks and building a special vegetated area called a riparian buffer several years ago.
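The Poolesville figure is easy to sanity-check with back-of-the-envelope arithmetic. Every input below, machine count, idle wattage, idle hours and electricity rate, is an illustrative assumption rather than the students' actual data.

```python
# Back-of-the-envelope check on the "turn off idle computers" savings.
# All inputs are illustrative assumptions, not the Poolesville data.
computers = 150               # machines routinely left on when not in use
idle_watts = 80               # average draw per idle machine, in watts
idle_hours_per_year = 4_600   # nights, weekends and holidays
rate_per_kwh = 0.09           # electricity price in dollars per kWh

kwh_wasted = computers * idle_watts * idle_hours_per_year / 1000
print(f"~${kwh_wasted * rate_per_kwh:,.0f} per year")  # ~$4,968, near the reported $5,000
```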
-- Jenny Mayo
The Wall Street Journal recently highlighted another Washington regulation that is holding back the economy. But this one can’t be blamed on President Obama, because it was enacted over 90 years ago.
The protectionist Jones Act requires shippers transporting goods between two points in the United States to use vessels built in the U.S., owned by U.S. companies, and manned by U.S.-based crews—even if there are more affordable transportation options available.
The article cited one expert who said foreign-flagged ships could transport oil for less than one-third the cost of U.S.-flagged ships if not for the Jones Act. This would reduce prices and increase the availability of energy produced in states such as Texas and North Dakota that is destined for consumers in the northeast.
The Jones Act is especially harmful to consumers in places such as Hawaii and Puerto Rico, where rail or truck transportation is not an option. For example, the Puerto Rico Electric Power Authority pays as much as 30 percent more for liquefied natural gas because of restrictions on the use of foreign-flagged ships.
The Jones Act is an example of crony capitalism, where one group benefits from special treatment by the government at the expense of everyone else. If politicians are serious about affordable energy, they should remove government barriers to competition in the shipping industry.
2001 Excavation at Safonfok
In 1999 the Kosrae Historic Preservation Office, along with then consulting archaeologist Dr. Felicia Beardsley, undertook archaeological test excavations at the prehistoric site of Safonfok in Walung on the southwestern coast of Kosrae. At the time, it was believed that Safonfok had been highly disturbed, that its integrity and the context of its archaeological record had been compromised by many years of local building-materials scavenging, compounded by damage from pigs and crabs. The test excavations showed that nothing could be further from the truth!
In spite of the visible disturbance across the surface of the site, the disassembly of architectural features and bioturbation of near-surface depths, the initial archaeological excavation demonstrated the site to be unique in the history of Kosrae and indeed the entire Pacific. The excavations opened a mere 5 ½ square meters (less than 1/10 of 1% of the site area), yet the wealth and integrity of the buried archaeological record revealed a technological industry never before seen anywhere in the Pacific: a local production system of coral fishhooks. A find of this nature is rare in the world of archaeology, and requires a more complete and thorough documentation, along with a more intensive search of the surrounding area to establish an overall context for the site and its location in the landscape.
For two months in 2001, Dr. Beardsley returns to Kosrae to work once again with the Kosrae Historic Preservation Office. Together they will conduct intensive archaeological investigations at Safonfok. Larger excavations will be opened in the area of the initial fishhook find as well as an adjacent area where an unusual diamond shaped beveled bead was recovered along with unique coral tools. An archaeological survey will also be conducted in the terrain surrounding the Safonfok compound, to establish the general pattern of sites in the area as well as to search for the prospective canoe landing that should be associated with Safonfok.
This year's project will also serve as the training ground for a locally selected crew, as well as archaeological staff members from many of the Historic Preservation Offices across Micronesia.
This invaluable project is being funded this year through a generous grant from the U.S. National Park Service and the Kosrae Historic Preservation Office, with equipment donated by Surveyors Supply Company. Of course this project could not go forward without the generous permission of the landowner, Mr. Stoney Taulung, who also provided housing for the archaeology staff during their stay in Walung.
Artifact Photographs taken by Paul Kulessa
These artifacts are on display at the Kosrae Museum, under the care of the Kosrae State Department of Historic Preservation.
A new supercomputer facility, known as ‘BlueCrystal’ that will revolutionise research in areas such as climate change, drug design and aerospace engineering has been opened at the University of Bristol today [Thursday 1 May] by the Vice-Chancellor, Professor Eric Thomas.
BlueCrystal is one of the fastest and largest computers of its kind in the UK, able to carry out more than 37 trillion calculations a second. The state-of-the-art system, provided as a result of collaboration between various companies including ClusterVision, IBM and ClearSpeed, enables researchers from a wide range of disciplines to undertake research requiring either very large amounts of data to be processed or lengthy computations to be carried out.
Dr Ian Stewart, Director of the University’s Advanced Computing Research Centre, said: "Serious research in many disciplines can no longer be undertaken without High Performance Computing (HPC) and the University has recognised this through its investment in BlueCrystal. HPC-based research contributes significantly to University research income and will play an increasingly important role in teaching."
The HPC facility is housed in a unique state-of-the-art machine room and is designed to be energy-efficient. The room makes use of advanced remote management equipment and is fitted with a leading-edge air-conditioning solution, which uses energy-efficient, water-cooled racks.
The event took place at the University and Literary Club. Guests included IBM’s WW Vice-President of Deep Computing, Dave Turek; Chief Executive of Bristol City Council, Jan Ormondroyd; the Lord-Lieutenant, Mary Prior; and the Lord Mayor of Bristol, Councillor Royston Griffey.
Dave Turek, WW Vice-President of Supercomputing at IBM, said: "The new supercomputer facility at the University of Bristol is an exciting development and we are delighted that the University has chosen to work with IBM to create this leading-edge infrastructure. Bristol is a world-class facility with researchers leading work in some of the most significant areas of modern research. We look forward to collaborating with the University."
Special education teachers also provide instruction in resource and self-contained classrooms within the public schools. In a resource room model, students with disabilities leave the general education class for a designated time period to visit the resource room and receive specialized instruction in areas such as language, reading, and math. For example, Kathi is a sixth-grader who has been classified as having learning disabilities. Kathi is functioning intellectually within the average ability range, but she has reading, spelling, and written language skills on an upper third-grade level. The multidisciplinary team recommended that Kathi receive specialized instruction in reading, written communication, and spelling with a special education teacher 1.5 hours per day in her school’s resource room. This means that Kathi would be receiving services on Level 4 of the continuum of services model.
Originally called the Education for All Handicapped Children's Act (PL 94-142), the Individuals with Disabilities Education Act (IDEA) provided that all children between the ages of 3 and 21, regardless of disability, are entitled to a free, appropriate public education.
Most of her school day would be in the least-restrictive environment of her general education class with Mrs. Gomez. Mrs. Gomez will be responsible for Kathi’s instruction for the entire time that she is in the general education class. This might even include making some adaptations in instructional procedures and assignments to accommodate Kathi’s special learning needs in the general education sixth-grade classroom. For example, during content area classes, Mrs. Gomez will need to provide adapted reading and study materials appropriate to Kathi’s skill levels. During her 1.5 hours in the resource room, Kathi will receive instruction with Mr. Halleran, the special education teacher in the same school. This resource room arrangement represents the least-restrictive environment to meet Kathi’s special needs in reading, written communication, and mathematics, while maintaining her placement in her general education class for the majority of the school day.
The resource model is often referred to as a “pull-out” model, indicating that students with disabilities are pulled out of the general education class for special education instruction. In a self-contained model of instruction (Level 5 of the continuum of services model), students with disabilities receive all or most of their classroom instruction from special education teachers. Even in these models, however, students with disabilities usually have opportunities to interact with their non-disabled peers during such activities as art, music, physical education, recess, lunch, and assemblies.
Special educators working in resource rooms often provide individualized or small-group instruction for some students with disabilities.
© ______ 2007, Merrill, an imprint of Pearson Education Inc. Used by permission. All rights reserved. The reproduction, duplication, or distribution of this material by any means including but not limited to email and blogs is strictly prohibited without the explicit permission of the publisher.
ARCHIVED - USQUE AD MARE
A History of the Canadian Coast Guard and Marine Services
by Thomas E. Appleton
The history of Canadian shipping, particularly the fishing schooners of the Atlantic coast, is punctuated by tragedy. In the days of sail there was little which could be done to aid a vessel in distress, unless by chance another arrived to make the attempt, and the heavy casualties of the period can be seen in the fishermen's chapel at Lunenburg where the names of drowned men and lost ships are recorded in a simple and moving memorial.
Seamen were then cynical about safety procedures which are nowadays considered appropriate and very few of them could swim; a man washed off the bowsprit of a schooner while shortening sail, clad as he was in stiff oilskins and heavy boots, was likely to see his ship driven off to leeward in a flurry of foaming crests with little chance of his recovery. In situations such as this, men felt that death was merciful. If the ship herself went down, hope lay in the working dories which, in the hands of skilled fisher-men, might yet save all. Other than this there was nothing. So it was in other walks of Canadian life where the farmer and the miner and the trapper, confronted with comparable emergencies, faced them with whatever resources lay to hand. Today all this has changed and, except in the most remote places, people living ashore benefit from communications and emergency help. In the word of shipping, as in other walks of life, it is no longer accepted as inevitable that men should lose their lives; search and rescue services are now regarded as a necessary and normal support for an industry which is always exposed to the dangers of nature.
The demise of the salt banker and the dory has greatly reduced marine casualties but, even under modern conditions, ships continue to be overwhelmed in some circumstances and fire, collision, ice and stranding remain as hazards which all must face. The advent of a new approach to these hazards demanded more than the coverage provided by the inshore lifesaving stations and, in 1963, five 95-foot search and rescue cutters were built in Canada and put into service throughout the country. These "R" Class cutters are based on a United States Coast Guard design which was adapted to meet Canadian conditions. They carry a complement of 12 and have a range of 1500 miles at a sea speed of 17 knots. Three of them are stationed on the Atlantic coast in winter, one being transferred to the Great Lakes in summer; two are permanently based on the Pacific coast. A smaller type of rescue cutter, the "S" Class, augments the 95-foot cutter on the Lakes in summer, and is 69 feet in length with a crew of four.
Although all the foregoing cutters take part in offshore work according to needs and capability, they are too small for extended search and rescue in the open Atlantic and a much larger deep-sea cutter is now (1967) under construction at Lauzon. Intended for long range operation under the worst conditions, the new vessel is 220 feet long and has diesel electric machinery giving a speed of 17 knots. Fitted with a flight deck and hangar, this vessel will carry a ship borne helicopter to extend the range of search.
In addition to the special purpose search and rescue cutters, other Coast Guard ships, such as icebreakers and buoy tenders, are often dispatched to take a leading part when casualties occur and all ships, regardless of nationality or type, may be called on for assistance when they are in a position to aid a vessel in distress. Three rescue centres are maintained, at Halifax, Trenton and Vancouver, where Coast Guard rescue officers keep constant watch in the operations room of the Canadian Armed Forces. These three rescue centres are arranged to cover the entire country and to co-ordinate and plan the most effective use of whatever shipping may be available. This organization, with a superb system of communication, is a far cry from the old lifesaving stations and, in many cases, receipt of a telephone call from some remote locality has enabled the centre to dispatch help whose presence would not have been known in the area of the trouble. In the case of offshore rescue, the centre is in a position to send the nearest available ship to start the search; typical of this type of operation is the case of the French ship Douala.
In the week before Christmas 1963, shipping on the Atlantic coast was operating under difficulties and a number of vessels were in distress. Marine search and rescue was hampered because ships could make little headway in heavy seas and, to make matters worse, radio communication was below normal efficiency owing to the density of traffic occasioned by the various emergencies and by the heavy coat of ice which had accumulated on ships antennae. At 7:40 p.m. on December 20, the 2300 ton motor ship Douala of Marseilles sent a message indicating that she was shipping water continuously through a damaged hatch and that she would shortly be in danger of sinking. Her stated position placed her some thirty miles south of the island of Ramea off the south coast of Newfoundland where she was barely making headway in appalling weather. With winds gusting to hurricane force, the normal visibility of three or four miles was reduced to less than half a mile in snow flurries, the temperature was zero, and the sea and swell were logged officially as "phenomenal". Under these conditions the atmosphere is filled with frozen spray which renders human activity almost impossible in exposed situations. Conditions were as bad as they could possibly be in the face of the almost certain fact that the Douala would founder and that her crew would soon have to take to the boats.
Based on all the available information, it was decided by the rescue organization that the CCGS Sir Humphrey Gilbert was in the best position to assist the Douala and, at 8:20 p.m., she was instructed to commence searching for the stricken ship. The Sir Humphrey Gilbert, then under the command of Captain G. S. Burdock, is a modern diesel electric icebreaker and lighthouse supply vessel based on St. Johns, Newfoundland. The Gilbert had left St. Johns three days earlier to lay buoys, which were secured on deck as was a steel barge. Despite the weather she had laid buoys at Breton Harbour and was making for Bay d'Espoir when she was diverted to go to the aid of a fishing trawler in distress farther out in the Atlantic. The Gilbert had been plowing along on this mission for some eight hours when instructed to proceed to the French ship, which she commenced to do at best possible speed.
CCGS Sir Humphrey Gilbert, Captain G. S. Burdock, rescuing survivors of the French ship Douala, December 1963.
At ten to six in the morning, while the Gilbert was bucking to windward against the gale, the barge on the foredeck broke loose, stripping the tarpaulins off the main hatch as it slid across to crash into the port bulwarks. Both barge and ship were damaged and, with the vessel heavily iced up, it was four hours until the barge was again secured and the Gilbert was able to continue with the search.
Up till this time, the Douala had not declared a state of distress, her message of the previous night being in the nature of an alert which, because of its serious content and the prevailing conditions, was correctly assessed by the rescue centre as a portent of alarming events about to occur. At 7:50 a.m. on the 21st, a message from the French ship announced that she was in critical condition and required immediate help. Several other ships answered the distress call but, as the stricken vessel was uncertain of her position owing to a damaged antenna, neither the Gilbert, which was presumed to be near at hand, nor any of the other searchers was able to make contact. At 11:52 the Douala's final message was transmitted: "abandoning ship".
While all this was going on, aircraft from Torbay, Argentia and Prince Edward Island searched the area at intervals as permitted by weather and an increased air search was arranged for the 22nd. At three that afternoon, a U.S. Coast Guard aircraft from Argentia sighted a lifeboat which the Sir Humphrey Gilbert was able to pick up half an hour later. Sixteen survivors were taken aboard under great difficulties but the Gilbert was unable to recover three bodies owing to the high winds and heavy seas, and one of the survivors died on passage to Newfoundland. At 6 p.m. a second lifeboat was sighted by an RCAF Argus and the fishing vessel Rodrique was able to take on board three of the Douala's crew and to recover two of the bodies from the first lifeboat; one of the surviving crew died on passage to St. Pierre. Seven missing men were drowned on abandoning ship and the master went down with the Douala.
The Gilbert landed her rescued men at Port aux Basques at 4:30 p.m.; later that evening she sailed for St. Jacques Island, where she arrived on the morning of the 23rd, to find both light keepers drowned, the light unattended, and the store demolished by force of weather. Relief keepers were landed with stores and radio and the light was re-established.
The case of the Douala illustrates how teamwork lies behind a workable search and rescue organization, the elements being ships, aircraft, communications and, above all, determined and resourceful personnel. In this particular incident, the Douala was unable to transmit on low or medium frequencies owing to icing problems, and no searching ship or aircraft was able to contact her directly, the messages being relayed by stations as far afield as New York and Puerto Rico. An efficient control and monitoring system was able to coordinate the efforts of all so that aircraft sightings were followed by ship recoveries. In addition to the immediate participants involved, several other ships answered the call of distress and took part in the search, some not recorded, but all of them prepared to seek to the limit of their endurance. Truly international in response, search and rescue evokes the best traditions of sea, air and radio.
Composting 101 – How to Make Compost
Welcome to COMPOSTING 101, Planet Natural’s go-to guide for turning what unsuspecting folks call yard waste into garden magic. Here you’ll find all you need to know about the best ingredients, containers, techniques, time-honored wisdom and common mistakes that will let you build the healthiest soil your plants will ever see.
3 Essential Elements for Perfect Compost
It’s time to let you in on a little secret: this type of soil building is the perfect lazy person’s gardening project. Unlike weeding or double-digging, which take lots of time and physical effort, a compost pile pretty much takes care of itself. Build it right, and it will transform your growing expectations.
1. Start with a container. We’re dealing with decomposing organic material, folks, so the structure doesn’t need to be fancy. You just need some sort of way to hold all of the ingredients together so the beneficial bacteria that break down the plant matter can heat up and work effectively.
A compost bin can be as simple as a cage made from wire fencing or as sophisticated as a drum tumbler, complete with a specially-designed frame, venting system and handle for turning the contents. Select one based on how much plant matter (grass, leaves, weeds, stalks and stems from last year’s garden) you have at your disposal, how large your yard is, and how quickly you need to use the finished product.
Locate the pile in a sunny location so the pile has as much heat as possible. If it’s in the shade all day, decomposition will still happen, but it will be much slower, especially when freezing temps arrive in the fall.
2. Get the ingredient mix right. A low-maintenance pile has a combination of brown and green plant matter, plus some moisture to keep the good bacteria humming. Shredded newspaper, wood chips and dry leaves are ideal for the brown elements; kitchen waste and grass clippings are perfect for the green add-ins. (A rough blending calculation is sketched after this list.)
Skip meat, fish and dairy for outdoor bins because they tend to attract pests like mice, raccoons and dogs. If you can’t bear the thought of sending your leftovers to the landfill, there are clever systems that turn them into superfood for your plants.
If you're using a simple container, it's best to start heaping the ingredients right on the ground, starting with chunky material like small branches or woody stems on the bottom for good airflow. Every time you add green material, add some brown as well to keep a good moisture balance and create air pockets.
If you need to jump-start your pile to get the process started, there are several great activators that are ready to go right out of the box.
3. Remember a few simple chores. Taking care of a compost pile is extremely basic, but a wee bit of care makes a huge difference. Add material regularly to give the happy bacteria some fresh food to consume and enough insulation to keep the process warm.
Turn the pile with a shovel or pitchfork every week or two to make sure that all of the materials are blended in and working together. After you’ve mixed things up, grab a handful to see if it’s slightly damp. Too little moisture will slow the decomposition process and too much will leave you with a slimy mess.
In a few months, your finished product should be a dark, crumbly soil that smells like fresh earth.
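For readers who like numbers, here is a minimal sketch of the brown-to-green balancing act as a carbon-to-nitrogen calculation. The C:N values are common book figures, the 30:1 target is a rule of thumb, and the simple weighted average is only a backyard approximation; treat all of them as illustrative assumptions.

```python
# Rough carbon-to-nitrogen estimate for a compost mix. The C:N figures
# below are typical book values, not measurements, and a weighted average
# is only a backyard approximation of the true blended ratio.
CN = {
    "dry leaves": 60,
    "wood chips": 400,
    "shredded newspaper": 170,
    "grass clippings": 20,
    "kitchen scraps": 15,
}

def blend_cn(parts: dict) -> float:
    """Weighted-average C:N for a mix given {ingredient: parts by weight}."""
    total = sum(parts.values())
    return sum(CN[name] * amount for name, amount in parts.items()) / total

mix = {"dry leaves": 2, "grass clippings": 1, "kitchen scraps": 1}
print(f"C:N is roughly {blend_cn(mix):.0f}:1 (a common target is about 30:1)")
```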
Avoid Common Mistakes
It’s hard to mess up compost, but we’re happy to offer a little direction so you get off to the best start.
• Don’t start too small. The breakdown process needs a critical mass in order to do its job. However, certain bins work well for small amounts of material, so choose a product for your specific needs.
• Keep things moist. It’s easy to walk away and forget that there’s an active process going on, so check the pile regularly, especially during hot, dry weather.
• Don’t depend on one material. A combination of different textures and nutrients created by the disintegration of many different plants will give your plants a gourmet diet that helps create disease and pest resistance. Think about it – a huge clump of grass clippings just sticks together in a huge mat that hangs around for years. Add some leaves, stir, and natural forces like water, air and heat go to work quickly!
• Don’t get overwhelmed. This isn’t rocket science, so jump in and try, even if you don’t have a clue. You’ll soon see what works and what doesn’t.
In Montana, where I live, the Holy Grail of gardeners is a homegrown tomato. The optimistic folks who try to outsmart the over-in-a-flash growing season, chilly summer nights, skimpy rainfall and marauding gophers or deer are courageous, indeed. I know a woman who tried every trick in the book to grow tomatoes she could brag about. She started them early, protected them from wind and cold, and staked them up oh-so carefully.
No luck. They always turned out puny, mealy and tasteless.
Last year, she decided to focus on the soil instead. After reading up on the nutrients that plants need to thrive, she decided to mix organic compost into her garden and see what happened.
The experiment was a complete success! The tomatoes were so luscious and tempting that someone actually stole the crop out of the woman’s backyard. She was so miffed she actually filed a police report about it!
Compost is no guarantee that your vegetables (and flowers!) will inspire jealousy in your neighborhood, but it’s the fastest ticket to healthy, productive plants that reward your hard work with beautiful blooms and bountiful harvests. Taking the time to get smart about using your kitchen scraps, grass clippings and other plant material practically guarantees garden success!
You can read many answers to this very question when it's been posted on Jiskha in the past.
These organizations have done very little on reducing tribal poverty and encouraging prosperity. As noted in our text: Today’s Native Americans are the “most undernourished, most short-lived, least educated, least healthy.” The BIA was organized to be in charge of the accounts of landowners. However, they have failed in every way. Most Indian Americans only receive about $20 a month for their land. A lot of this land has been leased out to the government and they are drilling oil on this land which is very rich in minerals, one barrel of this oil is sold for $21, and the government is only giving the Indian Americans $20 a month?
The National Congress of American Indians, also known as NCAI, was founded in 1944 in Denver, registered itself as a lobby in Washington, D.C., hoping to make the Native American perspective heard in the aftermath of the Reorganization Act. They were concerned about white people meddling in their business.
Casinos have helped some tribes but only about one-third of the recognized Indian tribes have gambling ventures. As cited in our text: There are two important factors that need to be considered. First, the impact of this revenue is limited. The tribes that make substantial revenue from gambling are a small fraction of all Native American people. Second, even on reservations that benefit from gambling enterprises, the levels of unemployment are substantially higher and the family income significantly lower than for the nation as a whole.
Indian Americans are making tremendous gains but the rest of the world is not standing still. As Native American income rises, so does White income. As Native American children stay in school longer, so do White children. American Indian health care improves, but so does White health care. Advances have been made, but the gap between the two stay the same.
The associate professor of astronomy at Vanderbilt and his graduate student are taking a critical look at T Tauri stars. These are stellar adolescents, less than 10 million years old, which are destined to become stars similar to the Sun as they age.
Classical T Tauri stars - those less than 3 million years old - are invariably accompanied by a thick disk of dust and gas, which is often called a protoplanetary disk because it is a breeding ground for planet formation. Most older T Tauri stars show no signs of encircling disks. Because they are not old enough for planets to form, astronomers have concluded that most of these stars must loose their disk material before planetary systems can develop.
Weintraub and Bary are pursuing an alternative theory. They propose that most older T Tauri stars haven't lost their disks at all: the disk material has simply changed into a form that is nearly invisible to Earth-based telescopes. They published a key observation supporting their hypothesis in the September 1 issue of the Astrophysical Journal Letters, and the article was highlighted by the editors of Science magazine as particularly noteworthy. The two researchers currently are preparing to publish additional evidence in support of their hypothesis.
The dense disks of dust and gas surrounding classical T Tauri stars are easily visible because dust glows brightly in the infrared region of the spectrum. Although infrared light is invisible to the naked eye, it is readily detectable with specially equipped telescopes. The second group of T Tauri stars, which are somewhat older - between three and six million years - and show no evidence of disks, have been labeled as "naked" or "weak line" T Tauri stars.
Because there is no visible evidence that naked T Tauri stars possess protoplanetary disks, astronomers have concluded that the material must have been absorbed by the star, blown out into interplanetary space, or pulled away by the gravitational attraction of a nearby star in the first few million years. According to current theories, it takes about 10 million years to form a Jupiter-type planet and even longer to form a planet like Earth. If the models are correct and if most Sun-like stars lose their protoplanetary disks in the T Tauri stage, then very few stars like the Sun are likely to possess planetary systems.
This picture doesn't sit well with Weintraub, however. "Approaching it from a planetary evolution point of view, I have not been comfortable with some of the underlying assumptions," he says.
Current models do not take the evolution of protoplanetary disks into account. Over time, the disk material should begin agglomerating into solid objects called planetesimals. As the planetesimals grow, an increasing amount of the mass in the disk becomes trapped inside these solid objects where it cannot emit light directly into space. The constituents of the disk that astronomers knew how to detect - small grains of dust and carbon monoxide molecules - should quickly disappear during the first steps of planet building. "Rather than the disk material dissipating," says Bary, "It may simply become invisible to our instruments."
So Weintraub and Bary began searching for ways to determine if such "invisible protoplanetary disks" actually exist.
They decided that their best bet was to search for evidence of molecular hydrogen, the main constituent of the protoplanetary disk, which should persist much longer than the dust grains and carbon monoxide. Unfortunately, molecular hydrogen is notoriously difficult to stimulate into emitting light: It must be heated to a fairly high temperature before it will give off infrared light.
The fact that T Tauri stars are also strong X-ray sources gave them an idea. Perhaps the X-rays coming from the star could act as an energy source capable of stimulating the molecular hydrogen. To produce enough light to be seen from Earth, however, the molecular hydrogen could not be mixed with dust and had to be at an adequate density. Studying various theories of planet formation, they determined that the proper conditions should hold in a "flare region" near the outer edge of the protoplanetary disk.
The next step was to get observation time on a big telescope to put their out-of-the-mainstream theory to the test. After repeated rejections, they were finally allocated viewing time on the four-meter telescope at the National Optical Astronomy Observatory on Kitt Peak, Arizona. When they finally took control of the telescope and pointed it toward one of their prime targets - a naked, apparently diskless T Tauri star named DoAr21 - they found the faint signal for which they were searching.
"We found evidence for hydrogen molecules where no hydrogen molecules were thought to exist," says Weintraub.
When Bary calculated the amount of hydrogen involved in producing this signal, however, he came up with about a billionth of the mass of the Sun, not even enough to make the Moon. As they argued in their Astrophysical Journal Letters article, they believe that they have detected only the proverbial tip of the iceberg, since most of the hydrogen gas will not radiate in the infrared. But the calculation raises the question of whether the molecular hydrogen that they detected is part of a complete protoplanetary disk or just its shadowy remains. Although they do not completely answer the question, additional observations that the two are readying for publication provide additional support for their contention that DoAr21 contains a sizeable but invisible disk.
The new observations are the detection of the same molecular hydrogen emission line around three classical T Tauri stars with visible protoplanetary disks. The strength of the hydrogen emission lines in the three is comparable to that measured at DoAr21. In addition, they have calculated the ratio between the mass of hydrogen molecules that are producing the infrared emissions and the mass of the entire disk in the three systems. For all three they calculate that the ratio is about one in 100 million.
"If the ratio between the amount of hydrogen emitting in the infrared and the total amount of hydrogen in the disk is about the same in the two types of T Tauri stars, which is not an unreasonable assumption, this suggests the naked T Tauri star has a sizable but hard-to-detect disk," says Bary.
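That inference is one line of arithmetic: scaling the infrared-emitting hydrogen mass by the emitting-to-total ratio measured in the classical systems. The sketch below just walks through the numbers quoted in this article; the solar and lunar masses are standard constants.

```python
# Scaling implied by the numbers quoted above: the mass of infrared-emitting
# H2 around DoAr21 divided by the emitting-to-total ratio from the three
# classical T Tauri disks gives a rough total disk mass.
M_SUN_KG = 1.989e30   # mass of the Sun
M_MOON_KG = 7.35e22   # mass of the Moon

emitting_h2_kg = 1e-9 * M_SUN_KG   # "about a billionth of the mass of the Sun"
emitting_to_total = 1e-8           # "about one in 100 million"

disk_mass_kg = emitting_h2_kg / emitting_to_total
print(f"Emitting gas alone: {emitting_h2_kg / M_MOON_KG:.2f} lunar masses")  # ~0.03
print(f"Implied disk mass: {disk_mass_kg / M_SUN_KG:.1f} solar masses")      # ~0.1
```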
Weintraub and Bary admit that they have more work to do to in order to convince their colleagues to adopt their theory. They have been allocated time on a larger telescope, the eight-meter Gemini South in Chile and plan to survey 50 more naked T Tauri stars to see how many of them produce the same molecular hydrogen emissions. If a large number of them do, it will indicate that they have discovered a general mechanism involved in the planetary formation process. They also intend to search for a second, fainter hydrogen emission line. If they find it, it will provide additional insights into the excitation process.
Currently, the number of naked T Tauri stars that have been discovered is much greater than the number of known classical T Tauri stars. If Weintraub and Bary are proven right, however, and a significant percentage of the naked T Tauri stars develop planetary systems, it will mean that solar systems similar to our own are common in the universe.
Almost 14 million people have been affected by the torrential rains in Pakistan, making it a more serious humanitarian disaster than the South Asian tsunami and recent earthquakes in Kashmir and Haiti combined.
The disaster was driven by a ‘supercharged jet stream’ that has also caused floods in China and a prolonged heatwave in Russia.
It comes after flash floods in France and Eastern Europe killed more than 30 people over the summer.
Experts from the United Nations (UN) and universities around the world said the recent “extreme weather events” prove global warming is already happening.
Jean-Pascal van Ypersele, vice-president of the body set up by the UN to monitor global warming, the Intergovernmental Panel on Climate Change (IPCC), said the ‘dramatic’ weather patterns are consistent with changes in the climate caused by mankind.
“These are events which reproduce and intensify in a climate disturbed by greenhouse gas pollution," he said.
"Extreme events are one of the ways in which climatic changes become dramatically visible."
The UN has rated the floods in Pakistan as the greatest humanitarian crisis in recent history, with 13.8 million people affected and 1,600 dead.
Flooding in China has killed more than 1,100 people this year and caused tens of billions of dollars in damage across 28 provinces and regions.
In Russia the morgues are overflowing in Moscow and wildfires are raging in the countryside after the worst heatwave in 130 years.
Dr Peter Stott, head of climate monitoring and attribution at the Met Office, said it was impossible to attribute any one of these particular weather events to global warming alone.
But he said there is “clear evidence” of an increase in the frequency of extreme weather events because of climate change.
"The odds of such extreme events are rapidly shortening and could become considered the norm by the middle of this century," he warned.
Dr Stott also said global warming is likely to make extreme events worse. For example, a warmer atmosphere holds more water, so floods in places like Pakistan become heavier.
“If we have these type of extreme weather patterns then climate change has loaded the dice so there is more risk of bad things happening,” he said.
Professor Andrew Watson, a climatologist at the University of East Anglia, which was at the centre of last year's 'climategate' scandal, said the extreme events are "fairly consistent with the IPCC reports and what 99 per cent of the scientists believe to be happening".
"I'm quite sure that the increased frequency of these kind of summers over the last few decades is linked to climate change," he said.
You may have heard many vegetable gardeners talk about how awesome composting is, because it’s so easy and you can throw just about anything on the compost pile.
While this may be partly true, there are some things you should never compost. These items may be biodegradable but could contain bacteria and pathogens that are harmful to humans.
There are some materials that could attract unwelcome wildlife, and even cause your compost pile to smell something awful.
To avoid these issues, here are seven things you should never compost.
Meats and Bones
Any type of meat should always be avoided in the compost pile.
The rotting meat will stink to high heaven, and will almost certainly attract critters like rats and raccoons.
It can also attract flies, maggots, and other annoying and disgusting insects.
Dispose of meats in the city or local trash pickup. The same goes for animal bones.
Dairy Products
Dairy products such as milk, cheese, and butter should never be added to compost piles. The rotting milk will give off a stench as bad as the meat.
Rotting milk can also attract roaches and other nasty varmints you really don’t want in your yard, or around your home.
Manures from Meat-Eating Animals
While most manures from herbivores (plant-eating animals) are good for speeding up the decomposition process, manures from carnivores (meat-eating animals) should be strictly prohibited from composting.
Manure from animals such as dogs, cats, and humans can carry harmful pathogens, bacteria, and even parasites.
Although some say these manures can be used on lawns and in ornamental gardens, I recommend staying completely away from them for composting, no matter what the intended purpose.
It’s better to be safe than sorry, in my opinion.
Some localities sell a compost that is derived from the local sanitation and water treatment facilities. Different locations may have different names for the compost (in my local area it is called Nutri-Green®), but it is all pretty much the same thing – composted sludge that is left over after the sewage and water treatment process.
I strongly recommend avoiding this stuff!
There’s no telling what is in it, and is not safe for edible gardening, in my opinion. Remember, this is the left over sludge from human waste and storm water treatment. Not something you really would want in your vegetable garden and lawn!
Cooking Oils and Grease
Other materials to avoid when composting are cooking oils and grease.
These items do not break down that easily and can contain fat and other by-products that will also attract unwanted animals.
Cooking oils, grease, and other liquids containing oils and fats should be disposed of properly according to local and state laws.
Plants Treated with Pesticides or Herbicides
Never put plants, grass clippings, or leaves that have been treated with pesticides, herbicides, insecticides, or other chemicals into a compost pile.
These chemicals will not break down and will be incorporated into the compost. Once you spread the contaminated compost in the vegetable garden, the soil is then contaminated with those chemicals.
The chemicals are also very dangerous to the beneficial microbes that are responsible for creating the compost.
It’s much better to bag up the chemically treated plant materials and dispose of them properly.
If you question whether a certain material has been treated or not, err on the side of caution, and leave them out of your compost.
Synthetic Materials
Items such as plastic, rubber, polyester, and other synthetic materials should not go into the compost.
These items will not break down and could leach unwanted chemicals into the compost.
Also, leave baby diapers, cat litter, and charcoal ashes out of the mix.
Check with your local sanitation department about their recycling program for plastics and other recyclable materials.
Plants With Diseases
Do not ever put diseased plants into your compost. An almost guaranteed way to spread the diseases throughout your lawn and vegetable garden is to place a diseased plant in the compost.
The disease will just harbor in the compost, and then be quickly spread when you use the compost in the vegetable garden.
Place diseased plants into a clear plastic bag and leave it out in the sun for a few days, which will help to kill the disease.
Toss the bag, plant and all, into the garbage. Even if you suspect a plant may have a disease, pull it up and dispose of it properly.
Also, remember to thoroughly wash your hands after handling a diseased plant to decrease the risk of spreading it to other plants.
Keeping Your Compost Healthy and Safe
The first step is adding materials that are useful and avoiding the things you should never compost.
I'm working on extra-curricular Latin, and it is not graded. I have reached the enclitics section and I'm having trouble... For learning purposes, I am to compose two sentences, each using two enclitics. So I need to translate "When did Rufus or his friend see the man and the woman?" and “Have you seen my brother or sister?” into Latin using two enclitics. I think I have the second one down, but I am unsure. Here is what I have so far:
Vidistisne mea fratris sororisve?
Thanks in Advance,
OPENDISK(3) BSD Programmer's Manual OPENDISK(3)
NAME
     opendisk - open a disk's "raw" partition
SYNOPSIS
     #include <sys/types.h>
     #include <util.h>

     int
     opendisk(const char *path, int flags, char *buf, size_t buflen,
         int iscooked);
DESCRIPTION
     opendisk() opens path, for reading and/or writing as specified by the
     argument flags using open(2), and the file descriptor is returned to
     the caller.  buf is used to store the resultant filename.  buflen is
     the size, in bytes, of the array referenced by buf (usually MAXPATHLEN
     bytes).  If iscooked is non-zero, the "cooked" partition (block
     device) is opened, rather than the "raw" partition (character device).

     opendisk() attempts to open the following variations of path, in
     order:

     path          The pathname as given.

     pathX         path with a suffix of 'X', where 'X' represents the raw
                   partition of the device, as determined by
                   getrawpartition(3), usually "c".

     If iscooked is zero, then the following two variations are attempted:

     /dev/rpath    path with a prefix of "/dev/r".

     /dev/rpathX   path with a prefix of "/dev/r" and a suffix of 'X'
                   (q.v.).

     Otherwise (i.e., iscooked is non-zero), the following variations are
     attempted:

     /dev/path     path with a prefix of "/dev/".

     /dev/pathX    path with a prefix of "/dev/" and a suffix of 'X'
                   (q.v.).
RETURN VALUES
     An open file descriptor, or -1 if the open(2) call failed.
ERRORS
     opendisk() may set errno to one of the following values:

     [EINVAL]      O_CREAT was set in flags, or getrawpartition(3) didn't
                   return a valid partition.

     [EFAULT]      buf was the NULL pointer.

     The opendisk() function may also set errno to any value specified by
     the open(2) function.
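EXAMPLES
     The following is a minimal usage sketch; the disk name "wd0" and the
     read-only open are illustrative assumptions, not requirements of the
     interface:

           #include <sys/types.h>
           #include <sys/param.h>  /* for MAXPATHLEN */
           #include <fcntl.h>      /* for O_RDONLY */
           #include <stdio.h>
           #include <util.h>

           int
           main(void)
           {
                   char buf[MAXPATHLEN];
                   int fd;

                   /*
                    * Open the raw partition of disk "wd0" read-only.
                    * buf receives the pathname actually opened, for
                    * example "/dev/rwd0c".
                    */
                   fd = opendisk("wd0", O_RDONLY, buf, sizeof(buf), 0);
                   if (fd == -1) {
                           perror("opendisk");
                           return 1;
                   }
                   printf("opened %s\n", buf);

                   /* ... read disk data via fd here ... */
                   return 0;
           }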
HISTORY
     The opendisk() function first appeared in NetBSD 1.3.

MirOS BSD #10-current          September 22, 1997
Though the H1N1 vaccine is still not widely available, some states are doing a better job than others at keeping their public informed about where the limited supply can be found.
The U.S. Centers for Disease Control and Prevention has admitted that getting enough vaccine to all the states will be a "bumpy road," but a state-by-state comparison of flu Web sites reveals that some states, like New Jersey, Wisconsin and Kansas, are helping this process run a little smoother by providing vaccine locating tools, lists of local doctors who will provide the vaccine, and even phone numbers for hotlines devoted to helping the public locate a H1N1 vaccine clinic or doctor nearby.
Meanwhile, other state health department Web sites keep their citizens in the dark. For example, Alabama and Mississippi have virtually no specific information about where flu shots can be found, and at best, they suggest that you "contact your health provider" or promise that information is "coming soon."
As of Friday, 16.1 million doses of H1N1 vaccine were available for shipping to health providers nationwide, and millions more become available every week. Now that vaccine supply is increasing, it's up to state and local health departments to let the public know when vaccine will be coming to their area and where those eligible can go to get it.
Using the information on New Jersey's site, one can easily find counties that have available clinics, the location of the clinics, and the times they will administer vaccine -- though you must be a resident of that county to attend such a clinic. A quick statewide search turns up a few counties that are currently providing clinics; Randolph Township, for example, will hold a nasal spray clinic this Thursday for residents who are in the CDC's priority group.
For those not eligible to receive the nasal spray -- pregnant women, for example -- local health departments are taking names and contact information for a priority list. When injectable vaccine becomes available in the county, those on this list will get a call.
This is the way Washington County, Kan., is handling the situation as well. Anyone in the CDC's priority group can be put on a list and will be called in to receive their vaccination when it arrives at their local health department.
Many other states provide this level of information. Georgia's Web site offers a list of doctors who will provide the vaccine -- to current patients and to new ones -- once it's delivered to them. Their site also connects users with local county health department clinics. While not all counties have clinics at this point, those that do, such as Jefferson County, have all the necessary information right there: dates, location, time of day and a phone number for fielding questions.
North Carolina's Web site has a "flu clinic finder" that currently connects users to local seasonal flu clinics, but will transition seamlessly into an "H1N1 clinic finder" once enough vaccine is available.
California's Web site, like many states', links users out to their local county health department's site, which can be hit or miss at finding vaccine information. If your county happens to be one of the good ones, with lots of vaccine information, then finding a clinic can take a matter of seconds.
With vaccine production chugging along, it is essential that states have means of communicating with the public so that when vaccine supply does pick up, residents will know how to get vaccinated. For example, even though Wisconsin does not yet provide H1N1 vaccine clinics on a broad basis, they already have a 2-1-1 number in place -- a statewide hotline that links the public with information on a nearby flu clinic.
|
<urn:uuid:d9add9b1-46fa-4747-ab8a-5c6b05691ab0>
|
CC-MAIN-2016-26
|
http://abcnews.go.com/Health/SwineFluNews/best-worst-states-h1n1-vaccine-info/story?id=8921708
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.0/warc/CC-MAIN-20160624154951-00003-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.94575 | 759 | 2.796875 | 3 |
Abdominal Tenderness Overview
What is abdominal tenderness?
Abdominal tenderness is an important finding on physical examination in a person who complains of abdominal pain. A person with abdominal tenderness has abdominal pain when pressure is applied to the abdomen (with the hand). Tenderness that is located in one area of the abdomen can provide an important clue to the underlying cause. Many conditions cause abdominal pain, but few conditions also cause abdominal tenderness.
What are the symptoms of abdominal tenderness?
Abdominal tenderness can range from mild to severe. Symptoms commonly associated with abdominal tenderness include abdominal pain, abdominal swelling, nausea, vomiting, diarrhea, fever, anorexia, and excessive sweating.
How does the doctor treat abdominal tenderness?
Treatment of abdominal tenderness depends on the underlying cause. Severe abdominal tenderness is often treated with antibiotics and surgery.
Each new month tends to bring a different focus for raising awareness, and September's is National Preparedness Month.
The American Red Cross is working to remind people how to be prepared for any type of natural disaster.
That includes tornadoes, hurricanes and earthquakes.
A disaster can happen at any second, and most people aren't prepared, so when one does hit there's no plan to put into action.
That's especially true for the natural disasters that are common here in the Mid-Ohio Valley.
"For this area it would be flooding and winter storms are the two biggest ones. Recently wind storms have started to pop up in peoples' minds. Basically just be prepared when the electricity goes off," says Todd Wines, with the Mid-Ohio Valley Chapter of the American Red Cross.
The Red Cross recommends having preparedness kits that include food, water, cash and battery powered cell phone chargers.
They also suggest having a disaster plan and practicing it with your family.
|
<urn:uuid:d63059d8-afbc-4a33-b2de-405806aba290>
|
CC-MAIN-2016-26
|
http://www.thenewscenter.tv/home/headlines/National-Preparedness-Month-273628251.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00168-ip-10-164-35-72.ec2.internal.warc.gz
|
en
| 0.969884 | 205 | 3.21875 | 3 |